More stories

  • Shopify discloses security incident caused by two rogue employees


    E-commerce giant Shopify is working with the FBI and other law enforcement agencies to investigate a security breach caused by two rogue employees.
    The company said two members of its support team accessed and tried to obtain customer transaction details from Shopify shop owners (merchants).
    Shopify estimated the number of stores that might be affected by the employees’ actions at less than 200. The company boasted more than one million registered merchants in its latest quarterly filings.
    The e-commerce giant said the incident is not the result of a vulnerability in its platform but the actions of rogue employees.
    “We immediately terminated these individuals’ access to our Shopify network and referred the incident to law enforcement,” the company said in a prepared statement. “We are currently working with the FBI and other international agencies in their investigation of these criminal acts.”
    An investigation into the security breach is still in its early phases. Shopify promised to notify impacted merchants and customers as relevant.
    The transaction data that the rogue employees might have gained access to includes basic contact information, such as email, name, and address, as well as order details, like products and services purchased.
    Shopify said the data the staffers could have accessed did not include payment card numbers or other sensitive personal or financial information.
    Another incident caused by malicious insiders
    The breach disclosed by Shopify is the third “malicious insider” incident to come to light in the past month. Instacart and Tesla acknowledged similar incidents last month.
    Instacart said two employees working for a company providing tech support services for Instacart shoppers “may have reviewed more shopper profiles than was necessary in their roles as support agents.” The company had to notify 2,180 shoppers as a result of this breach.
    A week after the Instacart incident, Tesla CEO Elon Musk admitted that his company had been targeted by a Russian cybercrime gang, which tried to recruit one of its US employees to install malware on the internal network of its Gigafactory in Sparks, Nevada.
    While the Instacart incident resulted in a breach for the company, the Tesla employee resisted recruitment efforts and reported the incident to Tesla and authorities.

  • Huawei chairman labels ongoing US bans as 'non-stop aggression'

    Huawei’s rotating chairman Guo Ping has said the company will keep doing everything it can to strengthen its supply chain, despite facing “great pressure” and being continuously “attacked”.
    “Huawei is in a difficult situation these days. Non-stop aggression has put us under significant pressure,” he said on Wednesday, during his keynote at Connect 2020. 
    “We’re still assessing the specific impacts. Right now, survival is the goal.”
    Ping elaborated on this point, specifying that the continuous attacks he referred to have been coming from the US government.
    “The US has been continuously attacking us and they have modified their laws for the third time, and that has posed great challenges to our production and operation,” he said, speaking to media.
    In August, the US government expanded its restrictions on the Chinese tech giant by barring it from purchasing chips made by foreign manufacturers using US technology. It also added another 38 affiliates of Huawei to the Entity List, including Huawei Cloud Singapore and Huawei Cloud France.  
    The United States has also banned US companies from buying, installing, or using foreign-made telecommunications equipment, citing cyber-espionage fears. The ban effectively targeted Chinese equipment providers, like Huawei, although no names were mentioned in the executive order.    
    See also: Huawei rebukes US attempts to stymie foreign competition with chip rule
    Despite the chipset ban, Ping said the company has “sufficient stock” to support its business.
    “As for our chipsets for mobile phones, as we consume hundreds of millions of chipsets every year, we are still looking for ways to address the chipset for smartphones. But we also have been aware that US chip vendors are actively applying for licences to continue to supply to Huawei from the US government,” he said.
    Ping also took the opportunity on Wednesday to plead for “openness and cooperation”.
    “We hope the US government will give their rules and regulations a second thought,” he said. 
    “If they are willing to supply to us, we’d be willing to buy from them. At the same time, we’ll continue to adhere to our procurement strategy … we believe mutual benefit and collaboration is the best model for the global industry.”
    In the meantime, Huawei continues to be locked out of 5G network builds worldwide. Most recently, Canadian carriers Bell and Telus both announced they would not use Huawei equipment in their respective 5G networks.
    Huawei has not made inroads in New Zealand either: although the company is not officially banned there, the GCSB prevented Spark from using Huawei kit in November 2018. 
    Meanwhile, in the United Kingdom, the government decided in January to limit Huawei's involvement, capping it at 35% of all radio equipment, barring it from supplying any equipment for the core of any network, and banning Huawei gear at sensitive locations such as nuclear sites and military bases. Reports last month, however, said the decision to allow Huawei to participate would be reviewed. 
    On Wednesday, Huawei also addressed its decision last year to cut its headcount in Australia, which could result in as many as 400 redundancies over the next five years as a result of the Australian government's ban on the company's participation in 5G rollouts.
    “Australia is a small market for us. It has never been a priority,” Huawei board executive director David Wang said.
    “We always prioritise our resources to serve high-quality customers. Because resources are limited, we need to fully utilise them to support the customers who really need us and support them to become successful. We make business adjustments according to the status or situation in different markets.”
    Despite the ongoing pressure, the company said it is focused on combining connectivity, computing, cloud, artificial intelligence, and industry application technologies to deliver value to customers.
     “There are huge opportunities in that,” Ping said.
    Off the back of this commitment, the company and Intel jointly launched the FusionServer Pro 2488H V6, the newest member of the FusionServer Pro product family.
    Running on x86 architecture, FusionServer Pro 2488H V6 houses four 3rd Gen Intel Xeon Scalable processors in a 2U space, 48 DDR4 DIMMs to support up to 18TB, and 11 PCIe slots for local storage.
    It comes as Intel was granted a licence by the US government to continue supplying certain products to Huawei, as reported by Reuters.
    Updated 23 September 2020, 6:36pm (AEST): Comment about Australian market attributed to Huawei board executive director David Wang.

  • Facebook claims 'scheduling issue' in avoiding Australian foreign interference probe

    Facebook was due to appear before the Senate’s Select Committee on Foreign Interference through Social Media on Friday, alongside controversial video-sharing platform TikTok.
    While TikTok’s name is still on the schedule, Facebook has pulled out.  
    A statement from the committee said it had been in communication with Facebook to arrange its appearance at a public hearing. The social media giant was initially willing to participate, the committee said, and had been tentatively confirmed as a witness before deciding to cancel.  
    “Facebook has since stated that key personnel are not willing to make themselves available on this date,” the committee said.
    “Facebook has expressed a preference for any appearance to be after the US election.”
    A Facebook spokesperson told ZDNet it intends to cooperate with the committee, but a “scheduling issue” has meant testimony cannot occur this coming Friday.
    “We are committed to cooperating with the Senate Committee on this inquiry and answering the questions they may have. Due to a scheduling issue we’ve requested to appear at a later day,” the spokesperson said.
    The committee was stood up in December to inquire into, and report on, the risk posed to Australia’s democracy by foreign interference through social media.
    Read more: Countering foreign interference and social media misinformation in Australia
    Committee chair Senator Jenny McAllister on Wednesday thanked TikTok for its “constructive” approach to the inquiry and its willingness to appear before the committee. Meanwhile, she said it was “disappointing Facebook has not adopted the same approach”.
    “Facebook’s platform has been used by malicious actors to run sophisticated disinformation campaigns in elections around the globe,” McAllister said.
    With 84% of the nation’s population on Facebook — around 17 million Australians use the site every month — McAllister believes the public deserves to know how the company manages the risks presented by the platform to Australia’s democracy and public discourse.
    “Facebook claims they can be trusted to support Australia’s democratic processes but seem unwilling to participate in our processes of democratic accountability,” she said.
    “As chair of the inquiry, I will be talking to my colleagues about options we have to ensure that Facebook answers the legitimate questions Australians have for the platform.”
    Must read: Facebook comments manifest into real world as neo-luddites torch 5G towers   
    Earlier this month before a House of Representatives Committee, Facebook said that during the quarter when the 2019 Australian federal election was held, it removed around 1.5 billion fake accounts from its platform.
    “These fake accounts are the things that people try to use to share harmful content,” Facebook vice president of public policy Simon Milner said at the time.
    “Almost 100% of that was removed because of our actions, using artificial intelligence to find these accounts and get rid of them. We spend a lot of effort trying to protect our platform from fake accounts.”
    During the 2019 election period, approximately 10 million unique people were involved in 45 million election-related interactions.
    Only 17 individual pieces of content were fact-checked during this period.
    “Once a post has been found, we use artificial intelligence to apply the same treatment to similar posts that make the same claim … the ultimate number of posts that would have received fact-check treatment would be much higher, in the thousands,” Facebook’s Australia and New Zealand public policy manager Joshua Machin added.
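    The propagation step Machin describes, applying the same treatment to similar posts once one instance has been fact-checked, can be sketched with a toy string-similarity match. This is purely illustrative: the function name, threshold, and similarity measure are assumptions, and Facebook's production systems use machine-learned similarity models rather than string matching.

```python
from difflib import SequenceMatcher

def propagate_label(checked_claim: str, posts: list[str], threshold: float = 0.8) -> list[str]:
    """Return the posts similar enough to an already fact-checked claim
    to receive the same fact-check treatment."""
    def similar(a: str, b: str) -> bool:
        # Case-insensitive ratio of matching characters between the two strings.
        return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold
    return [p for p in posts if similar(checked_claim, p)]
```

    With this sketch, one human fact-check fans out automatically: a near-duplicate of the checked claim is flagged, while unrelated posts pass through untouched.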
    The pair admitted, however, that Facebook does not fact-check political advertising “because we believe it’s important for the debate to play out”.
    “I would say Facebook does the same as any media platform. If you see a billboard … an ad for a campaign … because that person is trying to target that constituency, an opponent might think that that ad contains false information and they have an opportunity to respond to that, beat that with an ad further down the road,” Milner said.
    “There’s no expectation that the company that enabled you to put that ad on that billboard had to put something on it saying, ‘Hey, this information has been marked as false’, so we apply exactly the same approach on our service when it comes to political advertising.
    “We don’t think it’s right that we should be the arbiters of truth.”

  • Microsoft Ignite 2020: All the news from Redmond's IT Pro conference


    New perpetual Office clients for Windows and Mac, as well as on-premises versions of Exchange, SharePoint and Skype for Business are coming in the second half of next year.

    Microsoft is adding three new edge-computing devices and moving ahead with various Azure-branded services that are part of its hybrid-cloud family.

    Microsoft’s Azure Communication Services (ACS) will give customers and partners access to the same voice, video, chat and texting services that Microsoft uses to power Teams.

    Project Cortex is going to be delivered as a number of add-ons to existing Microsoft products, starting with SharePoint. The change in strategy is the result of ‘user feedback,’ Microsoft execs say.

    The wait is over: Microsoft’s ‘Chredge’ browser will be available to Insider testers in preview starting in October.

    Microsoft has plans to add new meeting, calling, search, insights and other new features to Teams over the next several months. Where’s the Teams-fatigue-fighting feature?

    At its annual Ignite conference, Microsoft adds a huge slate of enhancements to its banner BI platform.

    Microsoft Threat Protection, Defender ATP, Azure Security Center, and others have been brought under the Microsoft Defender umbrella brand.

    Microsoft has added a second Ignite IT pro event, slated for early 2021, to its conference calendar.

  • Google deprecates Web Store Payments API, effectively nuking Chrome paid extensions

    On Monday, Google announced plans to permanently shut down the Chrome Web Store “Payments API.”

    This is the system that Google was using to handle payments on the Web Store, such as one-time fees, monthly subscriptions, and free trials for commercial Chrome extensions.
    The move to shut down the Payments API — and effectively support for Chrome paid extensions — comes after reports of widespread fraud last winter.
    Google originally reacted by suspending the ability to publish and update Chrome paid extensions in January, and later temporarily disabled the entire Payments API in March.
    Initially, Google promised to crack down on the fraudulent actors, but on Monday, in a surprise announcement, the company did the opposite by shutting down the Web Store payments system instead.
    Google is now asking extension developers to migrate their extensions to use a third-party, non-Web Store payments processor.
    The Payments API has been offline since March, and Google said it does not plan to bring it back. Going forward, the company provided the following timeline:
    Sept. 21, 2020: You can no longer create new paid extensions or in-app items. This change, in effect since March 2020, is now permanent.
    Dec. 1, 2020: Free trials are disabled. The “Try Now” button in CWS will no longer be visible, and in-app free trial requests will result in an error.
    Feb. 1, 2021: Your existing items and in-app purchases can no longer charge money with Chrome Web Store payments. You can still query license information for previously paid purchases and subscriptions. (The licensing API will accurately reflect the status of active subscriptions, but these subscriptions won’t auto-renew.)
    At some future time: The licensing API will no longer allow you to determine license status for your users.
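    During the wind-down, checking a user's entitlement reduces to reading two fields of the licensing response. A minimal sketch in Python; the endpoint URL and the `result`/`accessLevel` field names are assumptions based on the deprecated CWS licensing documentation, not verified against a live service:

```python
import json
import urllib.request

# Assumed endpoint of the deprecated Chrome Web Store licensing API.
LICENSE_URL = "https://www.googleapis.com/chromewebstore/v1.1/userlicenses/{item_id}"

def parse_license(response: dict) -> str:
    """Classify a licensing-API response as 'paid', 'trial', or 'none'."""
    if response.get("result") != "YES":
        return "none"
    return "paid" if response.get("accessLevel") == "FULL" else "trial"

def fetch_license(item_id: str, oauth_token: str) -> str:
    """Query the licensing API for the signed-in user (network call)."""
    req = urllib.request.Request(
        LICENSE_URL.format(item_id=item_id),
        headers={"Authorization": f"Bearer {oauth_token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return parse_license(json.load(resp))
```

    Per the timeline above, a "paid" result for a previously completed purchase keeps working until the licensing API itself is retired; after that, developers must rely on their own entitlement records.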
    Google’s move has sparked some outrage among the Chrome extensions developer community. Because Google doesn’t provide extension owners with details about their paying customers, many developers now face a situation where they may be unable to migrate their entire user bases to a new payments processor of their choice.

    If you have built a bootstrapped or a lifestyle business on the Chrome Extension store and have used their payments API, you now have to scramble to integrate an alternative provider and hope that you can find a way to reach your users so they can continue their subscription.
    — Arvid Kahl (@arvidkahl) September 22, 2020

  • TikTok removed 104M videos for guideline violations, majority from India and US

    TikTok removed more than 104.54 million videos from its platform in the first half of this year for breaching its community guidelines or terms of service. The number accounts for less than 1% of all videos uploaded on the Chinese app maker’s platform, with the largest volumes removed from India and the US at 37.68 million and 9.82 million, respectively. 
    Some 96.4% of the videos were identified and removed before users reported them, while 90.3% were removed before they clocked any views, according to TikTok’s latest transparency report released Tuesday. The majority, at 30.9%, were removed for containing nudity and sexual activities, while 22.3% were removed for violating minor-safety policies and 19.6% for containing illegal activities and regulated goods.  


    Apart from India and the US, the next-highest numbers of videos were removed from Pakistan, Brazil, and the UK, at 6.45 million, 5.53 million, and 2.95 million, respectively. 
    TikTok also complied with “valid” government and law enforcement requests across the globe for user information. Such requests would have to be submitted with the appropriate legal documents such as subpoena, court order, warrant, or emergency request. Amongst these, India submitted the most requests at 1,206, of which TikTok complied with 79%, followed by the US at 290, of which 85% were complied with. Israel made 41 requests, of which TikTok complied with 85%, while Germany submitted 37 requests, but just 16% were complied with.
    In limited emergency situations, TikTok said it would disclose user information without legal process. This typically occurred when it had reason to believe the disclosure of information was required to prevent the imminent risk of death or serious physical injury to any person. 
    China was notably missing from the list of government requests. 
    In addition, TikTok said it also received legal requests from governments and law enforcement agencies as well as IP (intellectual property) rights holders to restrict or remove certain content. These, the company said, would be honoured if made through “proper channels” or required by law.
    Amongst these, Russia submitted requests identifying the largest number of accounts, at 259, of which 29% were complied with. India’s requests specified 244 accounts, of which 22% were complied with.
    Pointing to its efforts to “connect” its users amidst the global pandemic, TikTok said it promoted content through in-app info pages and hosted hashtag challenges with partners such as the World Health Organization, UNICEF India, the Prince’s Trust, and well-known individuals such as Bill Nye the Science Guy. It also developed dedicated pages within its app that enabled users to learn more about Black history, in support of the Black community. 
    Proposal for global group to safeguard against harmful content 
    In a separate statement Tuesday, TikTok said its interim head Vanessa Pappas sent a letter to the heads of nine social and content platforms, proposing a Memorandum of Understanding aimed at encouraging companies to warn one another of violent, graphic content on their own platforms. 
    “Social and content platforms are continually challenged by the posting and cross-posting of harmful content, and this affects all of us [including] our users, our teams, and the broader community,” the company said. “As content moves from one app to another, platforms are sometimes left with a whack-a-mole approach when unsafe content first comes to them. Technology can help auto-detect and limit much, but not all of that, and human moderators and collaborative teams are often on the frontlines of these issues.”
    “Each individual effort by a platform to safeguard its users would be made more effective through a formal, collaborative approach to early identification and notification amongst companies,” TikTok said. “By working together and creating a hashbank for violent and graphic content, we could significantly reduce the chances of people encountering it and enduring the emotional harm that viewing such content can bring — no matter the app they use.”
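    The hash bank TikTok proposes can be sketched in a few lines: each participating platform contributes digests of flagged media, and any member checks new uploads against the pool. This sketch uses exact SHA-256 matching for simplicity; real deployments use perceptual hashes (such as PDQ) so that re-encoded or cropped copies still match.

```python
import hashlib

class HashBank:
    """Minimal shared hash bank: platforms contribute digests of flagged
    media, and any member can check new uploads against the pool."""

    def __init__(self):
        self._digests = set()

    @staticmethod
    def digest(content: bytes) -> str:
        # Exact-match hash for illustration only; production hash banks
        # use perceptual hashing to survive re-encoding and edits.
        return hashlib.sha256(content).hexdigest()

    def flag(self, content: bytes) -> None:
        """Contribute a flagged item's digest to the shared pool."""
        self._digests.add(self.digest(content))

    def is_known(self, content: bytes) -> bool:
        """Check an upload against every platform's contributions."""
        return self.digest(content) in self._digests
```

    The design point of the proposal is in the shared set: once one platform calls `flag`, every other member's `is_known` check catches the same content without re-moderating it.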
    TikTok said it previously launched a fact-checking program across eight markets to help verify misleading content, such as misinformation about COVID-19, elections, and climate change. It also introduced in-app educational public service announcements on hashtags related to important topics in the public discourse, such as the elections, Black Lives Matter, and harmful conspiracies, including QAnon.

  • UK firm to power face verification in Singapore's digital identity system

    Singapore has inked a deal with British vendor iProov to provide face verification technology used in the Asian country’s national digital identity system. Already launched as a pilot earlier this year, the feature allows SingPass users to access e-government services via a biometric, bypassing the need for passwords. 
    The agreement also sees Singapore-based digital government services specialist, Toppan, involved in the deployment of the facial verification technology. Both vendors were selected following an open tender issued by Government Technology Agency (GovTech) and months of user tests, the companies said in a joint statement Tuesday.
    iProov’s Genuine Presence Assurance technology is touted as being able to determine that an individual’s face belongs to a real, present person, and not a photograph, mask, or digital spoof, and to verify that the footage is not a deepfake or injected video. Its agreement with the Singapore government also marks the first time the vendor’s cloud facial verification technology has been used to secure a country’s national digital identity. 

    It gives four million SingPass users the option to authenticate their identity with the biometric scan on their computers or at kiosks. Citizens use their SingPass account to log into and access 500 digital services provided by more than 180 government agencies as well as commercial entities, such as banks. 
    Local bank DBS in July collaborated with GovTech to pilot the use of the face verification technology as part of efforts to speed up digital banking registration. The service enabled customers to sign up for DBS’ digital banking services without having to use their ATM, credit, or debit card, and pin to complete the verification process to activate their accounts. 
    They would need to select SingPass Face Verification through the bank’s mobile app when signing up for a digital service, then take a photo of themselves. The user’s face would then be scanned and matched against the Singapore government’s national digital identity database, which also comprises biometric data. Once authenticated, DBS would send an SMS to the user’s registered mobile number for verification. The bank said data submitted through the process would not be collected or retained.
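    At the core of the flow described above is a biometric comparison between the freshly captured selfie and the enrolled template. A toy sketch of that matching step, using cosine similarity over face embeddings; the threshold and the embedding representation are illustrative assumptions, not GovTech's or iProov's actual algorithm:

```python
import math

def face_match(selfie_emb: list[float], enrolled_emb: list[float],
               threshold: float = 0.8) -> bool:
    """Accept the verification if the cosine similarity of the two face
    embeddings clears the threshold. Real systems also run liveness
    checks (iProov's 'genuine presence' step) before this comparison."""
    dot = sum(a * b for a, b in zip(selfie_emb, enrolled_emb))
    norm = math.hypot(*selfie_emb) * math.hypot(*enrolled_emb)
    return norm > 0 and dot / norm >= threshold
```

    A matching pair of embeddings passes; an orthogonal pair, representing a different face, fails, after which the SMS confirmation step described above would never be triggered.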
    GovTech’s senior director of national digital identity Quek Sin Kwok said: “SingPass Face Verification, under our National Digital Identity (NDI) programme, will help partners enhance their customer service journeys. We will continue to extend useful and trusted NDI services to more private sector organisations to accelerate digitalisation and grow Singapore’s digital economy.”
    Toppan Ecquaria’s managing director Foong Wai Keong added: “Allowing businesses to tap into the government-built digital identity infrastructure significantly reduces time and costs. And doing so using facial verification and on the cloud — that is revolutionary. As the world increasingly transacts online, cloud-native solutions are becoming the norm even in the public sector.”
    According to DBS, the COVID-19 pandemic had pushed the adoption and use of digital banking services. The bank saw transactions on its retail digital platform climb 220% between January and May this year, compared to the same period in 2019. Transactions on its wealth digital platform iWealth also increased 198% year-on-year, DBS said in a statement Monday. 
    The bank added that the volume of cash it handled between 2017 and 2019 dropped an average of 5%, or a reduction of SG$5 billion a year. Between June and August 2020, cash volumes dropped a further 34% year-on-year, amounting to a drop of SG$7 billion over three months. 
    To tap growing adoption of mobile and online platforms, DBS said it had been introducing “intelligent banking” capabilities integrated with predictive analytics. Tapping data to provide more intuitive and personalised customer services, the bank said its Intelligent Banking engine generated up to 13 million insights a month across its digital banking services. These were used to help customers improve their financial planning and budgeting as well as make more timely investment decisions. 
    DBS said it would introduce more of such features by the first quarter of 2021, including suggestions on equity stocks customised to a wealth customer’s investment pattern and prompts to speed up daily banking functions and enable customers to carry out transactions, such as bill payments, with a single tap or swipe on their mobile phone.
    iProov’s partnership with GovTech also marks the UK firm’s foray into the Asia-Pacific region.
    The Singapore government in 2018 said it was testing various sensors that could be incorporated into smart lampposts, including cameras that could support facial recognition capabilities. These would be part of its Lamppost-as-a-Platform pilot, which could see all 110,000 lampposts across the island fitted with wireless sensors and cameras to “better support urban planning and operations”. The sensors, for example, could detect and monitor changes to environmental conditions such as humidity, rainfall, temperature, and air pollutants. Cameras would have analytic capabilities to count and analyse crowds as well as count, classify, and monitor the speed of Personal Mobility Devices to improve safety in public spaces, according to GovTech.

  • CISA warns of notable increase in LokiBot malware

    The US government’s cyber-security agency has issued a security advisory today warning federal agencies and the private sector about “a notable increase in the use of LokiBot malware by malicious cyber actors since July 2020.”
    The Cybersecurity and Infrastructure Security Agency (CISA) said that its in-house security platform (the EINSTEIN Intrusion Detection System) has detected persistent malicious activity traced back to LokiBot infections.
    The July spike in LokiBot activity seen by CISA was also confirmed by the Malwarebytes Threat Intelligence team, which told ZDNet in an interview today that they’ve also seen a similar spike in LokiBot infections over the past three months.

    This is cause for alarm, as LokiBot is one of today’s most dangerous and widespread malware strains. Also known as Loki or Loki PWS, the LokiBot trojan is a so-called “information stealer.”
    It works by infecting computers and then using its built-in capabilities to search for locally installed apps and extract credentials from their internal databases.
    By default, LokiBot can target browsers, email clients, FTP apps, and cryptocurrency wallets.
    Over time, however, LokiBot has evolved into far more than a mere infostealer. It now also comes with a real-time keylogging component to capture keystrokes and steal passwords for accounts that aren’t always stored in a browser’s internal database, and a desktop screenshot utility to capture documents after they’ve been opened on the victim’s computer.
    Furthermore, LokiBot also functions as a backdoor, allowing hackers to run other pieces of malware on infected hosts, and potentially escalate attacks.
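    Given the target list above, defenders can at least inventory which local credential stores an infostealer would find on a given machine. A minimal defensive sketch; the paths are typical Windows defaults and will vary by install, and the check only tests for presence, it reads nothing:

```python
import os

# Typical default locations of credential stores targeted by infostealers
# such as LokiBot. Paths are illustrative Windows defaults.
CANDIDATE_STORES = {
    "Chrome saved logins": r"~\AppData\Local\Google\Chrome\User Data\Default\Login Data",
    "Firefox profiles": r"~\AppData\Roaming\Mozilla\Firefox\Profiles",
    "FileZilla recent servers": r"~\AppData\Roaming\FileZilla\recentservers.xml",
}

def exposed_stores() -> list[str]:
    """Return the names of the credential stores present on this machine."""
    return [
        name
        for name, path in CANDIDATE_STORES.items()
        if os.path.exists(os.path.expanduser(path))
    ]
```

    Any store this reports is one more database a LokiBot infection could harvest, which argues for OS-level credential vaults or a password manager with an encrypted store over browser-saved logins.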
    The malware made its debut in the mid-2010s when it was first offered for sale on underground hacking forums. Since then, the LokiBot malware has been pirated and broadly distributed for free for years, becoming one of today’s most popular password stealers, primarily among groups of low- and medium-skilled threat actors.
    Multiple groups currently distribute the malware via a wide variety of techniques, from email spam to cracked installers and booby-trapped torrent files.
    In terms of prevalence, Spamhaus ranked LokiBot as the malware strain with the most active command-and-control (C&C) servers in 2019; in the same ranking, it placed second for the first half of 2020 [PDF].
    LokiBot also ranks third on AnyRun’s all-time ranking of the most analyzed malware strains on its malware sandboxing service.
    Credentials stolen via LokiBot usually end up on underground marketplaces like Genesis, where KELA believes LokiBot is the second most popular malware strain supplying the store.
    The CISA LokiBot advisory published today contains detection and mitigation advice on dealing with LokiBot attacks and infections. Additional resources for studying and learning about LokiBot are available on its Malpedia entry.
    LokiBot should not be confused with a similarly named, now-defunct Android trojan.