More stories

  • This is how Formula 1 teams fight off cyberattacks

    The Mercedes-AMG Petronas Formula One team is one of the most dominant F1 teams of all time, having won seven consecutive Constructors’ Championships since 2014, with seven-time World Champion Lewis Hamilton, who many consider the greatest Formula 1 driver ever, winning the F1 Drivers’ Championship on six of those occasions. Mercedes face challenges from nine other teams on the track during race weekends, but these are far from the only adversaries the team has to worry about. The high-profile, high-tech nature of Formula 1 makes it a tempting target for cyber criminals and sophisticated hackers of all kinds.


    “The profile of this organisation, the popularity of the sport and the fact that we’ve been pretty successful over the last few years actually acts as a little bit of a target for this type of activity,” explains Michael Taylor, IT director at Mercedes-AMG Petronas Formula One.

    SEE: A winning strategy for cybersecurity (ZDNet special report)

    Most of the cyber threats an F1 team faces will be familiar to organisations around the world, such as phishing attacks attempting to steal usernames, passwords and other sensitive information, or the constant threat of ransomware. But in F1, you also have to factor in the challenge of securing a remote workforce that can be in three countries in as many weeks, thanks to the busy schedule of a hectic 22-race season. Then add on top of that the threat posed by the most sophisticated online attackers, who might be interested in the secrets of a high-performance racing team.

    “In this hybrid world, a lot of the technology comes out of Formula One and then trickles down into the cars that we drive, so there’s a tremendous amount of technology that’s on the cutting edge that obviously needs to be protected and certainly could be a target for nation-state actors,” says George Kurtz, CEO of CrowdStrike, the cybersecurity partner of Mercedes, which provides the team with technology to help secure its networks, as well as information on the evolving nature of cyber threats.

    This includes a dossier ahead of every race weekend, in which CrowdStrike security analysts detail the potential cyber threats that members of the team could face in the country where the race circuit is located, and how to stay safe from them. “That’s always an eye opener that always helps raise some inconvenient truths and some questions,” says Taylor.

    Ensuring that the cybersecurity of a Formula 1 team is strong enough to protect against all these threats starts with securing the endpoints – the laptops, tablets and other devices that members of staff use on a daily basis. “Endpoints for us are our biggest area of risk because they have a human at the other end of them, and most of the risk is inherently carried by humans doing something they probably shouldn’t do or didn’t intentionally mean to do,” Taylor explains. “The endpoint is an area where we do have control over, but not full control, and that’s really the biggest focus for us in terms of reducing the risk opportunity there.”

    Mercedes could completely lock down machines with strict controls on what actions users can perform; but restricting user activity like that in Formula 1 – where time is of the essence, and where split-second strategy decisions and the data that informs them can make or break a race weekend – could put a team at a massive disadvantage. “We’re very creative in terms of problem solving and design, and historical security controls would inhibit innovation or could potentially limit innovation,” says Taylor.

    That means heavily restricting access to data, or making it cumbersome for engineers in the pit lane to collaborate with analysts at the factory, isn’t the answer. Instead, a balance is needed between ensuring security and ensuring that staff can do their jobs efficiently, in a way that isn’t detrimental to Lewis Hamilton or his teammate Valtteri Bottas during race weekends. “It’s always a balance of risk versus reward and it’s trying to be able to provide that flexible platform enabling collaboration, but understanding the potential risks and then addressing them,” says Taylor.

    Seven-time World Champion Lewis Hamilton behind the wheel of his Mercedes.
    Image: Mercedes
    Cybersecurity applications like firewalls, network segmentation, providing access to data on a need-to-know basis, and multi-factor authentication all play a role in helping to keep the team secure, but the globe-trotting nature of Formula 1 means that staff – and computer networks – don’t stay in the same place for long before being packed up and whisked away to another circuit on the calendar. That’s why many of the applications that help manage security procedures are cloud-based, allowing Mercedes to ensure endpoints are protected against the latest threats, no matter where they are in the world.

    “Whether in the factory, in what we class our protective environment, or out in Australia, it’s still the same consistent endpoint protection that we have in place; the fact it’s calling home to a cloud location somewhere in the world massively simplifies the complexity and the challenge for us organisationally,” Taylor explains.

    All ten Formula 1 teams face similar challenges around protecting their networks from data breaches and cyberattacks, no matter where they are in the world, while also attempting to work as efficiently as possible in a high-paced environment. Cyber criminals have long exploited the hectic nature of businesses – and the sheer number of emails sent in a day – as an entry point for cyberattacks, and that’s no different in Formula 1.

    For example, in November last year, Formula 1 was at Imola for the Emilia Romagna Grand Prix, the 13th race of the 2020 Formula One World Championship season. It was late in the year for an F1 race – the start of the season had been delayed from March to July because of the COVID-19 pandemic – and the races came thick and fast during the truncated calendar; just days before, the teams had been in Portugal for the previous Grand Prix.

    It was at this point that some hackers went straight for a big prize, attempting to target Zak Brown, CEO of the McLaren Formula 1 racing team.

    SEE: Don’t want to get hacked? Then avoid these three ‘exceptionally dangerous’ cybersecurity mistakes

    They’d created a sophisticated phishing email designed to look like the business-related emails Brown would expect to receive. But Brown never saw it: the cybersecurity protections McLaren applies to the inboxes of all its staff meant it went straight to junk mail, with the ability to click the link disabled – despite the continued efforts of the attackers.

    “In terms of volume of attacks, they’ve definitely got smarter. They’re targeting individuals with phishing and spear-phishing attacks – it’s very targeted, very clever,” says Chris Hicks, group CIO at McLaren Group. “It is a cat and mouse game; the attackers will react to your changes, then we react in turn – but I feel like we’re always one step ahead.”

    McLaren fended off this particular attack using technology supplied by Darktrace, the team’s official cybersecurity partner – its logo features prominently on the liveries of the cars driven by Lando Norris and Daniel Ricciardo. The nature of Formula 1, where team members could be in different parts of the world in consecutive weeks, means that blocking access to emails just because they’re sent from an IP address in an unfamiliar place wouldn’t work. Instead, McLaren’s email security software analyses information about previous activity and uses it to determine whether an action is legitimate, so important messages sent from unfamiliar time zones or locations don’t get blocked, while messages like the one the cyber criminals aimed at McLaren’s CEO are filtered out as unusual or malicious. (A toy sketch of this behaviour-based approach follows the image below.)

    “Darktrace understands that actually the rest of the team is here, these are files you normally access, this is the normal chain, so it’s okay. It works really well because we have to be seamless; we can’t be taking our staff offline,” Hicks explains. “That real-time accessibility to data and real-time collaboration wherever you are in the world is absolutely critical – anyone in Formula 1 will tell you every millisecond counts,” he adds.

    Lando Norris driving his McLaren Mercedes at the Austrian Grand Prix.
    Image: Getty
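
    How might behaviour-based filtering work in principle? Below is a toy Python sketch that combines a few weak signals about past correspondence into an anomaly score. Every feature name, weight and threshold here is invented for illustration; Darktrace’s actual models are proprietary and far more sophisticated than this.

        # Toy behaviour-based email scoring: no single signal (such as an
        # unfamiliar country) blocks a message on its own; several weak
        # signals about past behaviour are combined instead.

        def anomaly_score(msg: dict, history: dict) -> float:
            """Combine weak signals into a 0..1 anomaly score (illustrative weights)."""
            score = 0.0
            if msg["sender_domain"] not in history["known_domains"]:
                score += 0.4   # never corresponded with this domain before
            if (msg["geo"] != history["usual_geo"]
                    and msg["geo"] not in history["race_calendar_geos"]):
                score += 0.3   # unfamiliar AND not where the team is racing
            if msg["has_link"] and msg["link_domain"] not in history["known_domains"]:
                score += 0.3   # contains a link to a never-seen domain
            return score

        history = {
            "known_domains": {"mclaren.com", "partner.example"},
            "usual_geo": "GB",
            "race_calendar_geos": {"IT", "PT"},  # Imola and Portugal weeks, per the article
        }

        phish = {"sender_domain": "lookalike.example", "geo": "RU",
                 "has_link": True, "link_domain": "lookalike.example"}
        print(round(anomaly_score(phish, history), 2))  # 1.0 -> high enough to quarantine

    The design mirrors what Hicks describes: a message from an unfamiliar time zone alone scores low when the team is actually racing there, so legitimate trackside traffic flows while genuinely anomalous messages are filtered.
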
    The sheer amount of data transferred over a race weekend is huge, with potentially hundreds of thousands of emails being sent within McLaren, as well as between McLaren and its partners. “On a race weekend, it’s measurable how many more attacks come into the business when Formula One’s on the TV,” says Dave Palmer, chief product officer at Darktrace. “There could be 250,000 emails over a race week, and during a race weekend the number of malicious ones jumps up to about 3.5%, which is a lot – 3.5% of your inbound email has got something wrong with it, that needs to be acted on by the machine.” (At that rate, a 250,000-email week would contain roughly 8,750 messages needing attention.)

    If just one malicious phishing email went unidentified and got through, the results could be devastating – not only could it affect race plans, but a phishing email can also be used as the gateway to a wider attack on the network.

    “That’s something we’ve always been challenged with, because in many areas intellectual property won’t be secret for very long – in six months or so it’s public knowledge, just due to the nature of Formula 1. But in real time we want to keep it close to our chest, and often it’s for financial gain or various reasons why attackers might try and compromise us, so it’s imperative that we keep that IP secure,” says McLaren’s Hicks.

    McLaren doesn’t just rely on technology to keep staff secure – a key element of keeping the network protected from cyberattacks is regular cybersecurity training for staff, including executives.

    “The awareness campaigns that we do are absolutely critical, and it’s normally from the top down. It’s normally the CEOs who get targeted first, or their PA; people right up the top,” says Hicks.

    SEE: Cybersecurity: Let’s get tactical (ZDNet special feature)

    Williams Racing is one of the most historically successful teams on the Formula 1 grid, and it too has found itself targeted by phishing attacks aimed at the boardroom. The high-profile nature of Formula 1 means it’s easy to find out who runs the teams – they’re often right there on TV – and cyber criminals will attempt to exploit this for social engineering. “We know we are constantly a target; there are even some spear-phishing attacks where they go after the CEO or CFO,” says Graeme Hackland, CIO of Williams Racing F1.

    “They don’t lock you out of your account, they just sit in your account and watch. We received a reply to an email from a supplier saying ‘we’ve changed our bank account, please can you update your records’ – and that reply was sent from the hacker, not from the supplier,” Hackland explains.

    Attackers have also registered fake Williams email addresses in efforts to attack the team – for example, they’ll try to register a URL where the lower-case l’s are replaced with a capital L, something that, unless somebody is really examining the email address, looks authentic. (The sketch after the image below shows the general idea behind detecting such lookalike domains.) “It looks just like our email address, and so I don’t blame any of our staff who got caught by those things, because it was very, very sophisticated – there’s a lot more social engineering going into the phishing emails now. They learn a huge amount of information,” says Hackland.

    Williams was sold to new owners, American private investment company Dorilton Capital, during 2020 – and with new executives, and new staff around them, it was vital these people were aware of the potential security threats they’d face as high-profile staff of a Formula 1 team.

    George Russell driving for Williams Racing.
    Image: Williams Racing
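
    The lookalike-domain trick Hackland describes can be countered with a simple normalisation check. The Python sketch below is a minimal illustration; the trusted domain, the homoglyph table and the quarantine decision are invented for this example and are not Williams’ actual tooling.

        LEGITIMATE = "williamsracing.example"   # hypothetical trusted domain

        # A small, illustrative table of visual character substitutions.
        HOMOGLYPHS = str.maketrans({"I": "l", "1": "l", "0": "o"})

        def normalize(domain: str) -> str:
            """Canonicalise lookalike characters and case: 'wiLLiams' -> 'williams'."""
            return domain.translate(HOMOGLYPHS).casefold()

        def looks_like_spoof(sender_domain: str) -> bool:
            """True if the domain imitates the trusted one without matching it exactly."""
            return (normalize(sender_domain) == normalize(LEGITIMATE)
                    and sender_domain != LEGITIMATE)

        # The capital-L trick described above is caught by simple case-folding:
        print(looks_like_spoof("wiLLiamsracing.example"))   # True  -> quarantine
        print(looks_like_spoof("williamsracing.example"))   # False -> the real domain

    Real mail gateways compare against edit distance and Unicode confusable-character data (UTS #39) rather than a single trusted name, but the principle – normalise, then compare – is the same.
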
    “We got a new CEO, so we did an education campaign with his personal assistant to remind her she’s going to be a target, and we have actually seen an increase in spam emails going to her,” Hackland explains. All Williams employees go through phishing training to understand how cyber criminals could try to breach the network via email.

    But the sheer number of cyberattacks means it hasn’t always been possible to keep every attack out – and Williams found itself the victim of a ransomware attack a few years back. The attack, in 2014, started on a Friday morning and was quickly spotted by the cybersecurity team, so much of the network was protected from falling victim to it. But if the attack had started a few hours later, it’s likely that nobody would have noticed until the following week.

    “If this had happened at 6pm, it could have spent all weekend encrypting all of our data, and when we came in on Monday we would have been in massive trouble. It was lucky it was a Friday morning and we noticed that behaviour fairly early in the process,” Hackland explains.

    The ransomware got into the network after a member of staff unintentionally visited a compromised website. “They had downloaded a tech spec sheet for their washing machine. They did nothing wrong – they went to a trusted website, downloaded a file, and had no idea that this ransomware was running in the background,” says Hackland.

    At the time of the incident in 2014, cybersecurity procedures weren’t as mature as they are today – and in this case, the affected files couldn’t be recovered. But it served as a wake-up call to ensure that networks and employees were as protected against cyberattacks as possible.

    Williams Racing has now benefited from a partnership with cybersecurity company Acronis for a number of years, helping to keep endpoints and staff – and drivers George Russell and Nicholas Latifi – secure, whether they’re at the headquarters in Grove, Oxfordshire, or at racing circuits around the world. The partnership means Williams uses Acronis for endpoint protection as well as for backups that keep data secure no matter where the user is – working remotely, at the factory or at a race circuit. “Motorsport teams, even at the top of the industry, are facing major challenges dealing with ever-expanding amounts of data – managing, archiving, sharing, and protecting it from cyberattacks,” says Ronan McCurtin, VP for Europe, Turkey and Israel at Acronis.

    With more races than ever before, Formula 1 teams are being pushed to the limit both on the track and off it. The high-profile nature of the sport and the cutting-edge technology behind it mean all Formula 1 teams are tempting targets for cyber criminals and hackers. Unfortunately, just like the Formula 1 teams they are chasing, malicious hackers are always looking for ways to improve. But unlike an F1 race, there’s no finish line in the cyber arms race.

  • Windows security: 20 years on from Bill Gates' Trustworthy Computing memo, how much has changed?

    It’s almost 20 years since then-Microsoft boss Bill Gates wrote his famous Trustworthy Computing memo, in which he urged the company to produce more secure software. “Eventually, our software should be so fundamentally secure that customers never even worry about it,” wrote Gates. It’s a grand ambition, and despite years of work, it is not one that any software has really achieved yet. And even as engineers try to improve their products, a new wave of security threats has appeared.


    “I think it was hard for anyone at the time – even in Bill Gates’ grand vision – to see we would have sophisticated state-sponsored hackers breaking those SWIFT banking system codes, people flattening oil production by wiping hard drives. The threat landscape is beyond any science fiction novel or what John le Carré could predict,” says Dave Weston, Microsoft’s director of enterprise and Windows security.

    SEE: Windows 11 upgrade: Five questions to ask first

    He admits that, as a “hardened industry professional”, he is surprised by the sophistication of attacks today. “The breadth and sophistication [of these attacks] is what continues to make this job interesting. There is never a dull moment here,” he says. “Fifteen years ago we were thinking of these attackers as basically script kiddies – people sitting in their parents’ basements on the weekend doing things for mischievous reasons. That was the archetype 15 years ago. The archetype now is somebody who is working in the military-industrial complex, who works in an office.” That’s a pretty stark contrast, Weston points out.

    “If we’re up against that, are we in a better position? I would say, unequivocally, yes. Twenty years ago, the price of an exploit was cheap. Now when you’re talking about Windows 10 or 11, or browsers, you’re talking millions of dollars to acquire an exploit.” The difference between those two points is the level of defences in the operating system, he argues. “The reality is today that there are a smaller number of people who today can attack a Windows PC than there was 10 or 15 years ago, and I think that in itself is a triumph.”

    That increasing threat level is one issue, but the tech security goalposts themselves have also been moving rapidly. Back in 2002, when Gates wrote his memo, the focus of security was all about the software: he didn’t even mention hardware or CPUs. Today, with an uptick in zero-day exploits, CPU attacks like Meltdown and Spectre, and more, Windows security is much more concerned with hardware.

    For example, in Windows 10 and Windows 11, Microsoft has brought in Control-flow Enforcement Technology (CET), a security mitigation it co-developed with Intel. CET is an on-chip technology that targets some of the most common attack vectors, such as return-oriented programming, says Weston; it’s available on Intel 11th Gen or AMD Zen 3 CPUs. (A toy model of the shadow-stack idea behind CET appears below.)

    Virtualization-based security, dubbed VBS at Redmond, restricts techniques used in the WannaCry ransomware attack by hardening the Windows kernel. Windows 11 also promises to make the goal of ‘Zero Trust’ – the concept of borderless networks that the Biden White House is pushing – easier to reach, by reducing the amount of configuration required for Windows endpoints.

    But, as Weston highlights, organizations will need to run some numbers to figure out whether to upgrade hardware and migrate to Windows 11, or to reconfigure PCs and servers that only meet the bar for Windows 10. On Windows 11, admins don’t need to do much to configure that security; on Windows 10 they can create the same level of security, but with a bit more work.

    Organizations that adopt Zero Trust assume their perimeter has already been breached. The model also recognizes that data needs to be protected within and outside the network, on corporate-issued and employee-owned devices. Zero Trust has become more pertinent since the pandemic forced many more people into remote working.

    Weston, however, contends that Windows 11 does make things easier for businesses, assuming they have new hardware suitable for it. “Where the hardware fits in, we’ve been working to make sure things can be turned on by default when you meet the hardware baseline. We’re expecting a certain level of performance and reliability from recent drivers and hardware pieces. That allows us to turn on more, by default, with confidence. That’s where the hardware piece fits in with Zero Trust,” he says.
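
    CET’s headline feature is the hardware shadow stack. The toy Python model below illustrates only the principle – keep a protected copy of every return address and compare on return – and is in no way how the hardware or Windows implements it; the class and method names are invented for this sketch.

        class ShadowStackViolation(Exception):
            """Raised when a return address no longer matches its protected copy."""

        class ToyMachine:
            def __init__(self):
                self.call_stack = []     # in a real exploit, attacker-corruptible memory
                self.shadow_stack = []   # under CET, a hardware-protected copy

            def call(self, return_address: int):
                # Every call records the return address in both stacks.
                self.call_stack.append(return_address)
                self.shadow_stack.append(return_address)

            def ret(self) -> int:
                # Every return checks the two copies against each other.
                addr = self.call_stack.pop()
                if addr != self.shadow_stack.pop():
                    raise ShadowStackViolation(hex(addr))
                return addr

        m = ToyMachine()
        m.call(0x401000)                 # legitimate call
        m.call_stack[-1] = 0xdeadbeef    # simulated ROP-style stack overwrite
        try:
            m.ret()
        except ShadowStackViolation as tampered:
            print("control-flow hijack blocked:", tampered)

    Return-oriented programming works by overwriting saved return addresses to chain together existing code snippets; a mismatch check like this defeats the overwrite itself, which is why it cuts off a whole class of exploits rather than individual bugs.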

    But will customers be left behind because of hardware? “The answer is firmly ‘no’,” insists Weston.

    SEE: Microsoft’s Windows 11: How to get it now (or later)

    Even if organisations want to stay with Windows 10, many of those features – like Windows Hello, Virtualization-based Security and Secure Boot – are still available, he says; you’ve just got to turn them on and evaluate your own environment. “If you’ve got the hardware, you can install Windows 11; things are simple. If you don’t have that hardware, or that’s something you’re planning for the future, you can still partake in all of these security baselines by taking our free security baseline and applying that to Windows 10-level hardware. You may have to do some initial analysis on performance trade-offs, which makes it a little more difficult, but you can certainly get there.”

    Microsoft has set October 14, 2025 as the end of Windows 10 patches. Weston reckons you can still configure Windows 10 to meet Windows 11 standards, and he optimistically bets that most organizations will have refreshed most of their hardware by 2025.

    “By 2025, when the refresh cycle will have turned over for the vast majority of businesses, you will have more reason to move to Windows 11, because by that point there will have been two or three releases of security goodness added, which we think is going to provide a substantial value proposition,” he says. “My advice would be: if you need to stay on Windows 10 for hardware reasons, great; follow our security guidance from 11 and apply that to 10. Plan in your refresh cycles and security budgets to get the right hardware to get to 11 because, if you stay on 10 for too long, we will start to introduce things that are 11-specific – trust me, we have many on the way now – and we want as many customers as possible to get that value. It’s very similar to the transition we went through from Windows 7 to 10: there’s security goodness if you can get there.”

  • The White House is having a big meeting about fighting ransomware. It didn't invite Russia

    The White House has held a meeting with ministers and officials from 30 nations and the European Union to discuss how to combat ransomware and other cyber threats. The two-day series of meetings aimed to find an answer to ransomware, and followed calls from US President Joe Biden for the Kremlin to hold Russia-based ransomware gangs accountable for their file-encrypting attacks, rather than turning a blind eye to them so long as they don’t attack Russian organizations.


    Notably absent from the White House-led group was Russia itself, which was not invited. In June, Biden told Russian President Vladimir Putin that 16 US critical infrastructure entities should be off-limits to ransomware attackers operating from Russia.

    SEE: Ransomware attackers targeted this company. Then defenders discovered something curious

    The aim of the talks was to figure out an international approach to disrupting and ultimately stopping ransomware attacks. In the two days of virtual talks, India led discussions on Thursday about resilience, while Australia focused on how to disrupt cyberattacks. The UK’s contribution focused on virtual currency, while Germany discussed diplomacy. Other countries involved included Canada, France, Brazil, Mexico, Japan, Ukraine, Ireland, Israel, and South Africa.

    Although Russian officials didn’t participate, a White House spokesperson said the US is in ongoing discussions with Russia via the US-Kremlin Experts Group, which is led by the White House and was established by Biden and Putin.

    One of the most disruptive ransomware attacks on US infrastructure was against Colonial Pipeline, which halted fuel distribution on the US east coast for a week in May. The company reportedly paid the equivalent of $4.4 million in bitcoin for a decryption tool from the attackers. The FBI blamed the Colonial attack on DarkSide, which went offline shortly afterwards but resurfaced in June, according to FireEye’s incident response unit, Mandiant.

    DarkSide is one of several ransomware gangs operating as a service provider, allowing other criminal gangs to use its software to extort targets. Others, including REvil, steal data and threaten to leak it online if the ransom isn’t paid.

    SEE: BYOD security warning: You can’t do everything securely with just personal devices

    The other major threat Biden has raised involves nation-state attackers, such as this year’s attacks on Microsoft Exchange email servers, which UK and US officials blamed on Chinese state-sponsored hackers, dubbed Hafnium by Microsoft. Microsoft this week reported that Kremlin-backed hackers were by far the most prolific attackers.

    The message from the White House is that nations need to cooperate to bolster “collective cyber defenses” against criminal and state-sponsored cyberattacks. “We’ve worked with allies and partners to hold nation states accountable for malicious cyber activity as evidenced by, really, the broadest international support we had ever in our attributions for Russia and China’s malicious cyber activities in the last few months,” a White House official said at a media briefing.

  • ACSC offers optional DNS protection to government entities

    Image: Getty Images/iStockphoto
    The Australian Cyber Security Centre will be offering its Australian Protective Domain Name Service (AUPDNS) for free to other government entities at federal and state level across Australia. AUPDNS has already inspected 10 billion queries and blocked 1 million connections to malicious domains, Assistant Minister for Defence Andrew Hastie said on Thursday.

    “A single malicious connection could result in a government network being vulnerable to attack or compromise, so it’s vital we do everything we can to prevent cybercriminals from gaining a foothold,” he said. “Currently AUPDNS is protecting over 200,000 users, and this number is growing.” The blocklist functionality was developed with Nominet Cyber. (A minimal sketch of the idea behind protective DNS follows this story.)

    Elsewhere on Thursday, Anthony Byrne tendered his resignation as Labor deputy chair of the Parliamentary Joint Committee on Intelligence and Security – the committee that examines national security legislation, and whose sign-off often sees Labor wave contentious legislation through. “The work of the PJCIS is crucial to Australia’s national security and its integrity should never be questioned,” Byrne said. “I have always put the work of this bipartisan Committee first and have always served in its best interests.”

    Byrne is in hot water after telling Victoria’s Independent Broad-based Anti-corruption Commission he was involved in branch stacking. Replacing Byrne in the ALP post will be Senator Jenny McAllister, with Peter Khalil appointed to the committee. “Byrne has served the PJCIS in a number of roles since 2005, including as Chair and Deputy Chair,” Labor leader Anthony Albanese said. “I thank Mr Byrne for his important contributions to this committee in Australia’s national interest.”

    On Wednesday, the Australian government announced a new set of standalone criminal offences for people who use ransomware, under what it has labelled its Ransomware Action Plan. The plan creates new criminal offences for people who use ransomware to conduct cyber extortion, target critical infrastructure with ransomware, and deal with stolen data knowingly obtained in the course of committing a separate criminal offence, as well as for buying or selling malware for the purpose of undertaking computer crimes.
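
    Protective DNS services like AUPDNS sit between users and the wider internet and refuse to resolve known-bad names. The Python sketch below shows the core idea only; the domains, blocklist and sinkhole behaviour are invented for illustration, not drawn from AUPDNS’s actual implementation.

        # Hypothetical threat feed; real services consume curated intelligence feeds.
        BLOCKLIST = {"malicious.example", "phishing.example"}

        def resolve(domain: str) -> str | None:
            """Answer queries for safe domains; refuse (or sinkhole) listed ones."""
            if domain.casefold() in BLOCKLIST:
                print(f"blocked query for {domain}")  # surfaced to the security team
                return None                           # the connection never happens
            # A real resolver would recurse or forward upstream; we fake an answer.
            return "192.0.2.1"                        # placeholder (TEST-NET-1) address

        print(resolve("safe.example"))         # '192.0.2.1'
        print(resolve("malicious.example"))    # logs the block, returns None

    Blocking at the resolver is attractive because it protects every device behind it at once – which is how a single service can cover the 200,000-plus users Hastie mentions.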

  • Singapore to develop mobile defence systems with Ghost Robotics

    Singapore’s Defence Science and Technology Agency (DSTA) has inked a partnership with Philadelphia-based Ghost Robotics to identify use cases involving legged robots for security, defence, and humanitarian applications. The pair will look to test and develop mobile robotic systems, as well as the associated technology enablers, that can be deployed in challenging urban terrain and harsh environments.

    The collaboration also would see robots from Ghost Robotics paired with DSTA’s robotics command, control, and communications (C3) system, the two partners said in a joint statement released Thursday. The Singapore government agency said its C3 capabilities were the “nerve centre” of military platforms and command centres, tapping data analytics, artificial intelligence, and computer vision technologies to facilitate “tighter coordination” and effectiveness during military and other contingency operations. Its robotics C3 system enabled simultaneous control and monitoring of multiple unmanned ground and air systems, delivering a holistic situation outline for coordinated missions, including surveillance in dense urban environments.

    With the partnership, DSTA and Ghost Robotics would test and develop “novel technologies and use cases” for quadrupedal unmanned ground vehicles, which would be integrated with multi-axis manipulators to enhance how the autonomous vehicles interact with their environment and the objects within it. Power technologies, such as solid-state batteries or fuel cells, also would be integrated to allow the robotics systems to operate for extended periods.

    DSTA’s deputy chief executive for operations and director of land systems, Roy Chan, said: “In the world of fast-evolving technology, close collaboration between organisations is imperative to co-create use cases and innovative solutions. In partnering Ghost Robotics, DSTA hopes to advance robotic capabilities in defence and shape the battlefield of the future.

    “We envision that robots would one day become a defender’s best friend and be deployed to undertake more risky and complex operations in tough terrains,” Chan said.

    DSTA is tasked with tapping science and technology to develop capabilities for the Singapore Armed Forces (SAF), including the use of autonomous vehicles. The Ministry of Defence and SAF in June 2021 unveiled a transformation strategy to address evolving security challenges and threats, which encompassed efforts to leverage technological advancements to better tap data and new technologies, such as robotics C3 systems, and to integrate these into warfighting concepts to improve operational effectiveness and reduce manpower requirements.

    According to Ghost Robotics, its quadrupedal unmanned ground vehicles were built for unstructured terrain, on which a typical wheeled or tracked device cannot operate efficiently.

  • 7-Eleven breached customer privacy by collecting facial imagery without consent

    Image: Getty Images
    In Australia, the country’s information commissioner has found that 7-Eleven breached customers’ privacy by collecting their sensitive biometric information without adequate notice or consent. From June 2020 to August 2021, 7-Eleven conducted surveys that required customers to fill out information on tablets with built-in cameras. These tablets, which were installed in 700 stores, captured customers’ facial images at two points during the survey-taking process – when the individual first engaged with the tablet, and after they completed the survey.

    After becoming aware of this activity in July last year, the Office of the Australian Information Commissioner (OAIC) commenced an investigation into 7-Eleven’s survey. During the investigation [PDF], the OAIC found 7-Eleven stored the facial images on tablets for around 20 seconds before uploading them to a secure server hosted in Australia within the Microsoft Azure infrastructure. The facial images were then retained on the server, as algorithmic representations, for seven days to allow 7-Eleven to identify and correct any issues, and to reprocess survey responses, the convenience store giant claimed.

    These algorithmic representations, or “faceprints”, were compared with other faceprints to exclude responses that 7-Eleven believed might not be genuine. (The sketch below illustrates the general technique.) 7-Eleven also used the personal information to understand the demographic profile of customers who completed the survey, the OAIC said.

    7-Eleven claimed it received consent from customers who participated in the survey, as it provided a notice on its website stating that 7-Eleven may collect photographic or biometric information from users. The survey resided on 7-Eleven’s website.
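
    A “faceprint” is essentially an embedding: a vector derived from an image, compared against other vectors to judge whether two photos show the same person. The Python sketch below illustrates only the comparison step; the embedding function is a crude stand-in (a real system uses a trained face-recognition model), and nothing here reflects 7-Eleven’s actual, non-public pipeline.

        import numpy as np

        def embed(image: np.ndarray) -> np.ndarray:
            """Stand-in for a face-embedding model; returns a unit-length 'faceprint'."""
            vec = image.astype(float).ravel()[:128]   # pretend 128-dimensional embedding
            vec = vec - vec.mean()                    # centre so unrelated images decorrelate
            return vec / np.linalg.norm(vec)

        def same_person(a: np.ndarray, b: np.ndarray, threshold: float = 0.9) -> bool:
            """Cosine similarity above the threshold flags a likely repeat respondent."""
            return float(np.dot(embed(a), embed(b))) >= threshold

        rng = np.random.default_rng(0)
        face = rng.random((64, 64))                          # stand-in survey photo
        resubmission = face + rng.normal(0, 0.01, (64, 64))  # near-identical second photo
        stranger = rng.random((64, 64))

        print(same_person(face, resubmission))   # True  -> likely the same respondent
        print(same_person(face, stranger))       # False -> different people

    The privacy finding turned on exactly this property: even after the raw images were deleted, the retained vectors could still identify individuals, which is why the faceprints themselves were treated as sensitive biometric information.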

    As at March 2021, approximately 1.6 million survey responses had been completed. Angelene Falk, Australia’s Information Commissioner and Privacy Commissioner, determined that this large-scale collection of sensitive biometric information breached Australia’s privacy laws and was not reasonably necessary for the purpose of understanding and improving customers’ in-store experience. In Australia, an organisation is prohibited from collecting sensitive information about an individual unless consent is provided.

    Falk said facial images that show an individual’s face are sensitive information, and added that any algorithmic representation of a facial image is also sensitive information. Regarding 7-Eleven’s claim that consent was provided, Falk said 7-Eleven did not provide any information about how customers’ facial images would be used or stored, which meant it did not receive any form of consent when it collected the images.

    “For an individual to be ‘identifiable’, they do not necessarily need to be identified from the specific information being handled. An individual can be ‘identifiable’ where it is possible to identify the individual from available information, including, but not limited to, the information in issue,” Falk said. “While I accept that implementing systems to understand and improve customers’ experience is a legitimate function for 7-Eleven’s business, any benefits to the business in collecting this biometric information were not proportional to the impact on privacy.”

    As part of the determination, Falk ordered 7-Eleven to cease collecting facial images and faceprints as part of the customer feedback mechanism. 7-Eleven has also been ordered to destroy all the faceprints it collected.

  • Singapore must take caution with AI use, review approach to public trust

    In its quest to drive the adoption of artificial intelligence (AI) across the country, multi-ethnic Singapore needs to take special care navigating its use in some areas – specifically, law enforcement and crime prevention. It should also build on its belief that trust is crucial for citizens to be comfortable with AI, and recognise that doing so will require nurturing public trust across different aspects of its society.

    It must have been at least two decades ago when I attended a media briefing at which an executive was demonstrating her company’s latest speech recognition software. As most demos go, no matter how much you prepare, things can go desperately wrong. Her voice-directed commands often were wrongly executed, and several spoken words in every sentence were inaccurately translated into text. The harder she tried, the more things went wrong, and by the end of the demo she looked clearly flustered.

    She had a relatively strong accent and I assumed that was likely the main issue, but she had spent hours training the software. This company was known at the time specifically for its speech recognition products, so it wouldn’t be wrong to assume its technology was then the most advanced in the market.

    I walked away from that demo thinking it would be near impossible – given the vast difference in accents within Asia alone, even amongst those who speak the same language – for speech recognition technology to be sufficiently accurate.


    Some 20 years later, speech-to-text and translation tools clearly have come a long way, but they’re still not always perfect. An individual’s accent and speech patterns remain key variables in how well spoken words are transcribed. However, wrongly converted words are unlikely to cause much damage, save for a potentially embarrassing moment on the speaker’s part. The same is far from true where facial recognition technology is concerned.

    In January, police in Detroit, USA, admitted their facial recognition software falsely identified a shoplifter, leading to his wrongful arrest. Vendors such as IBM, Microsoft, and Amazon have maintained a ban on the sale of facial recognition technology to police and law enforcement, citing human rights concerns and racial discrimination. Most have urged governments to establish stronger regulations to govern and ensure the ethical use of facial recognition tools.

    Amazon had said its ban would remain until regulators addressed issues around the use of its Rekognition technology to identify potential criminal suspects, while Microsoft said it would not sell facial recognition software to police until federal laws were in place to regulate the technology. IBM chose to exit the market completely over concerns facial recognition technology could instigate racial discrimination and injustice. Its CEO Arvind Krishna wrote in a June 2020 letter to the US Congress: “IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and principles of trust and transparency.

    “AI is a powerful tool that can help law enforcement keep citizens safe. But vendors and users of AI systems have a shared responsibility to ensure that AI is tested for bias, particularly when used in law enforcement, and that such bias testing is audited and reported,” Krishna penned.

    I recently spoke with Ieva Martinkenaite, who chairs the AI task force at GSMA-European Telecommunications Network Operators’ Association, which drafts AI regulation for the industry in Europe. Martinkenaite’s day job sees her head analytics and AI for Telenor Research. In our discussion on how Singapore could best approach the issue of AI ethics, Martinkenaite said every country would have to decide what it felt was acceptable, especially when AI was used in high-risk areas such as detecting criminals. Here, she noted, challenges remained amidst evidence of discriminatory results, including against certain ethnic groups and genders.

    In deciding what is acceptable, she urged governments to have an active dialogue with citizens. She added that until veracity issues related to the analysis of varying skin colours and facial features were properly resolved, such AI technology should not be deployed without human intervention, proper governance, or quality assurance in place.

    Training AI for multi-ethnic Singapore

    Facial recognition software has come under fire for its inaccuracy, in particular in identifying people with darker skin tones. An MIT 2017 study, which found that darker-skinned females were 32 times more likely to be misclassified than lighter-skinned males, pointed to the need for more phenotypically diverse datasets to improve the accuracy of facial recognition systems. Presumably, AI and machine learning models trained with less data on one ethnic group would exhibit a lower degree of accuracy in identifying individuals in that group. Singapore’s population comprises 74.3% Chinese, 13.5% Malays, and 9% Indians, with the remaining 3.2% made up of other ethnic groups such as Eurasians.
    Should the country decide to tap facial recognition systems to identify individuals, must the data used to train the AI model consist of more Chinese faces, since that ethnic group forms the population’s majority? If so, will that lead to a lower accuracy rate when the system is used to identify a Malay or Indian person, since fewer data samples of these ethnic groups were used to train the AI model? Will using an equal proportion of data for each ethnic group then necessarily lead to a more accurate score across the board? And since there are more Chinese residents in the country, should the technology be better trained to identify this ethnic group more accurately, because the system will likely be used more often to recognise these individuals?

    These questions touch only on the “right” volume of data for training facial recognition systems. There are still many others concerning data alone, such as where training data should be sourced, how the data should be categorised, and how much training data is deemed sufficient before the system is considered “operationally ready”.

    Singapore will have to navigate these carefully should it decide to tap AI in law enforcement and crime prevention, especially as it regards racial and ethnic relations as important, but sensitive, to manage. Beyond data, discussions and decisions will be needed on, amongst other things, when AI-powered facial recognition systems should be used, how automated they should be allowed to operate, and when human intervention is required.

    The European Parliament just last week voted in support of a resolution banning law enforcement from using facial recognition systems, citing various risks including discrimination, opaque decision-making, privacy intrusion, and challenges in protecting personal data.

    “These potential risks are aggravated in the sector of law enforcement and criminal justice, as they may affect the presumption of innocence, the fundamental rights to liberty and security of the individual and to an effective remedy and fair trial,” the European Parliament said. Specifically, it pointed to facial recognition services such as Clearview AI, which had built a database of more than three billion pictures illegally collected from social networks and other online platforms. The European Parliament further called for a ban on law enforcement using automated analysis of other human features, such as fingerprint, voice, gait, and other biometric and behavioural traits. The resolution passed, though, isn’t legally binding.

    Because data plays an integral role in feeding and training AI models, what constitutes such data inevitably has been the crux of key challenges and concerns behind the technology. The World Health Organisation (WHO) in June issued guidance cautioning that AI-powered healthcare systems trained primarily on data from individuals in high-income countries might not perform well for individuals in low- and middle-income environments. It also cited other risks, such as unethical collection and use of healthcare data, cybersecurity, and bias being encoded in algorithms.

    “AI systems must be carefully designed to reflect the diversity of socioeconomic and healthcare settings and be accompanied by training in digital skills, community engagement, and awareness-raising,” it noted. “Country investments in AI and the supporting infrastructure should help to build effective healthcare systems by avoiding AI that encodes biases that are detrimental to equitable provision of and access to healthcare services.”

    Fostering trust goes beyond AI

    Singapore’s former Minister for Communications and Information and Minister-in-charge of Trade Relations, S. Iswaran, previously acknowledged the tensions around AI and the use of data, and noted the need for tools and safeguards to better assure people with privacy concerns. In particular, Iswaran stressed the importance of establishing trust, which he said underpinned everything, whether data or AI. “Ultimately, citizens must feel these initiatives are focused on delivering welfare benefits for them and ensure their data will be protected and afforded due confidentiality,” he said.

    Singapore has been a strong advocate for the adoption of AI, introducing in 2019 a national strategy to leverage the technology to create economic value, enhance citizen lives, and arm its workforce with the necessary skillsets. The government believes AI is integral to its smart nation efforts, and that a nationwide roadmap was necessary to allocate resources to key focus areas. The strategy also outlines how government agencies, organisations, and researchers can collaborate to ensure a positive impact from AI, and directs attention to areas where change, or potential new risks, must be addressed as AI becomes more pervasive.

    The key goal is to pave the way for Singapore, by 2030, to be a leader in developing and deploying “scalable, impactful AI solutions” in key verticals. Singaporeans also will trust the use of AI in their lives, a trust that should be nurtured through a clear awareness of the benefits and implications of the technology. Building that trust, however, will need to go beyond simply demonstrating the benefits of AI.
    People need to fully trust the authorities across various aspects of their lives, and trust that any use of technology will safeguard their welfare and data. The lack of trust in one aspect can spill over and affect trust in others, including the use of AI-powered technologies.

    Singapore in February urgently pushed through new legislation detailing the scope of local law enforcement’s access to COVID-19 contact tracing data. The move came weeks after it was revealed the police could access the country’s TraceTogether contact tracing data for criminal investigations, contradicting previous assertions that this information would only be used when the individual tested positive for the coronavirus. It sparked a public outcry and prompted the government to announce plans for the new bill limiting police access to seven categories of “serious offences”, including terrorism and kidnapping.

    Early this month, Singapore also passed the Foreign Interference (Countermeasures) Bill amidst a heated debate, and less than a month after it was first proposed in parliament. Pitched as necessary to combat threats from foreign interference in local politics, the Bill has been criticised for being overly broad in scope and restrictive of judicial review. Opposition party the Workers’ Party also pointed to the lack of public involvement and the speed at which the Bill was passed.

    Will citizens trust their government’s use of AI-powered technologies in “delivering welfare benefits”, especially in law enforcement, when they have doubts – correctly perceived or otherwise – that their personal data in other areas is properly policed? Doubt in one policy can metastasise and drive further doubt in other policies. With trust, as Iswaran rightly pointed out, an integral part of driving the adoption of AI in Singapore, the government may need to review its approach to fostering this trust amongst its population.

    According to Deloitte, cities looking to use technology for surveillance and policing should look to balance security interests with the protection of civil liberties, including privacy and freedom. “Any experimentation with surveillance and AI technologies needs to be accompanied by proper regulation to protect privacy and civil liberties. Policymakers and security forces need to introduce regulations and accountability mechanisms that create a trustful environment for experimentation of the new applications,” the consulting firm noted. “Trust is a key requirement for the application of AI for security and policing. To get the most out of technology, there must be community engagement.”

    Singapore must assess whether it has indeed nurtured a trustful environment, with the right legislation and accountability, in which citizens are properly engaged in dialogue, so they can collectively decide what is the country’s acceptable use of AI in high-risk areas.

  • Google analysed 80 million ransomware samples: Here's what it found

    Image: Google
    Google has published a new ransomware report, revealing Israel was far and away the largest submitter of samples during the period studied. The tech giant commissioned cybersecurity firm VirusTotal to conduct the analysis, which entailed reviewing 80 million ransomware samples from 140 countries.

    According to the report [PDF], Israel, South Korea, Vietnam, China, Singapore, India, Kazakhstan, the Philippines, Iran and the UK were the 10 most affected territories, based on the number of submissions reviewed by VirusTotal. Israel had the highest number of submissions, a near-600% increase from its baseline; the report did not state what Israel’s baseline number of submissions was.

    From the start of 2020, ransomware activity peaked during the first two quarters of 2020, which VirusTotal attributed to activity by the ransomware-as-a-service group GandCrab. “GandCrab had an extraordinary peak in Q1 2020 which dramatically decreased afterwards. It is still active but at a different order of magnitude in terms of the number of fresh samples,” VirusTotal said.

    There was another sizeable peak in July 2021, driven by the Babuk ransomware gang, an operation that launched at the beginning of 2021. Babuk’s ransomware attacks generally feature three distinct phases: initial access, network propagation, and action on objectives.

    GandCrab was the most active ransomware gang since the start of 2020, accounting for 78.5% of samples. It was followed by Babuk and Cerber, which accounted for 7.6% and 3.1% of samples, respectively.
    Image: Google
    According to the report, 95% of ransomware files detected were Windows-based executables or dynamic link libraries (DLLs), and 2% were Android-based. The report also found that exploits made up only a small portion of the samples – 5%. “We believe this makes sense given that ransomware samples are usually deployed using social engineering and/or by droppers (small programs designed to install malware),” VirusTotal said. “In terms of ransomware distribution, attackers don’t appear to need exploits other than for privilege escalation and for malware spreading within internal networks.”

    After reviewing the samples, VirusTotal also said there was a baseline of between 1,000 and 2,000 first-seen ransomware clusters at all times throughout the analysed period. “While big campaigns come and go, there is a constant baseline of ransomware activity that never stops,” it said.