More stories

  • Alexa, Ring, and Astro: Where's my privacy, Amazon?

    This year’s Amazon hardware event was quite a doozy. The Seattle-based company showcased an updated health band with a nutritionally guided, personalized shopping service, a flying security drone, more indoor and outdoor cameras, and an autonomous sentry robot. All of them are powered in some way by AWS machine learning, and all of them left me thinking about one word: privacy.

    Do I really want all of these products in my own home and as part of my life? Admittedly, there is a certain appeal to Amazon’s pitch of having its technology live in the background, transparently, to better enable our real-world experiences. The best user interface is the effectively invisible one, like the ever-watchful and ready-to-talk computers on shows like Star Trek. They’re benevolent AIs that always look out for us, keeping us out of harm’s way while accepting our queries and commands.

    Granted, I’ve already accepted a lot of these devices into my life. I have five Alexa-compatible smart speakers positioned in different parts of the house, so I have full coverage for home automation. I also have a Google Home in the kitchen, plus multiple Siri-enabled devices (Watch, iPhone, iPad, Mac, Apple TV). And of course, I have webcams for Zoom calls and the like on my Mac workstation and on my iPad and iPhone — all of which aren’t on unless I want them to be, presumably.

    But so far, I have resisted the notion of having cameras all over the place, peering into the home’s interior spaces. Sure, I have some Ring devices guarding the front of the house, but there’s nothing recording inside. Part of this stems from the fact that I have no children, so I do not need to check up on them. I also rarely travel for extended periods away from my home. Besides my wife, my two miniature poodles are the only other residents of Chateau Perleaux. I live in a gated community with only one way in and out, and I’m alerted immediately if someone who isn’t on my regular list needs to be let through.

    Would I want cameras inside if I had young children? I honestly don’t know. I can tell you that I see very little value in doing it now, and quite frankly, my lifestyle tends to border on the, shall we say, bohemian. I live in a warm-weather state, and if I don’t have guests over, full clothing is optional, especially when using my pool and spa during hot afternoons and humid evenings, which is a big part of living in Florida. So I have no desire for Ring, Blink, or Astro to capture my spouse or me in various states of undress. I don’t need something that chases me around my house like an attention-deprived puppy, constantly scanning everything around it. I have no idea where that video is going or whether a human will ever review it for machine-learning improvement purposes.

    This is not to say I might not come around to the idea of having a robot, eventually. But besides being an Echo Show on wheels, Astro doesn’t do anything except act as a constant sentry. Unlike the Tesla Bot, which doesn’t even exist in demos yet, it doesn’t have arms to manipulate things and perform general-purpose tasks.

    It’s not just the cameras, though. It’s Amazon’s constant desire to suck up and process the data its customers create using its products so it can further monetize that data. That’s the big difference I see between Amazon and industry peers like Apple. This is especially true when we see things like the new Nutrition service attached to the Halo band, automatically formulating a meal plan and ordering groceries from Whole Foods based on your health data. I’m not sure I like the idea of Amazon telling me what I should eat, either.

    With Apple products that collect a lot of personalized sensor data, such as the Watch, all of the metrics can be reviewed by the end user and easily erased. Apple provides tools within iOS to adjust permissions for Health data and which applications have access to it. Amazon doesn’t have this level of user control for everything that goes into its cloud, or at least it isn’t easy to get to or isn’t centralized under a single console. I can get to my voice-command history and the sounds Alexa detects (for its opt-in Guard service), and set recordings to expire after three months, after 18 months, or only when I delete them. Still, I have no idea what other noises are detected or recorded — and whether humans ever review them. I also can’t listen to the captured sounds and voices in the UX; I can only view a log entry showing that something was recorded and be given the option to delete it. With Ring, I can view the video recordings stored in the cloud.

    Do users have full control over what Astro or their flying Ring drone uploads to AWS? Beyond law enforcement and the customer-chosen third parties of its newly announced security service, which humans can view these video recordings? I have no idea. Amazon needs to do a better job of detailing and disclosing what data is recorded, where it goes, and who can review it, and it needs to provide better tools to manage this recorded information. Otherwise, I’m not sure any of us will ever feel fully comfortable having these devices in our homes.


  • NSA, CISA partner for guide on safe VPNs amid widespread exploitation by nation-states

    The NSA and CISA have released a detailed guide on how people and organizations should choose virtual private networks (VPNs) as both nation-states and cybercriminals ramp up their exploitation of the tools amid a global shift to remote work and schooling. The nine-page fact sheet also includes details on ways to deploy a VPN securely. The NSA said in a statement that the guide would also be helpful to leaders in the Department of Defense, National Security Systems, and the Defense Industrial Base so that they can “better understand the risks associated with VPNs.”

    The NSA said multiple nation-state APT actors have weaponized common vulnerabilities and exposures to gain access to vulnerable VPN devices, allowing them to steal credentials, remotely execute code, weaken encrypted traffic’s cryptography, hijack encrypted traffic sessions, and read sensitive data from a device. NSA director Rob Joyce told the Aspen Cybersecurity Summit this week that “multiple nation-state actors are leveraging CVEs to compromise vulnerable Virtual Private Networks devices.” He wrote on Twitter that VPN servers are entry points into protected networks, making them attractive targets. “APT actors have and will exploit VPNs — the latest guidance from NSA and @CISAgov can help shrink your attack surface. Invest in your own protection!” he added. CISA director Jen Easterly echoed Joyce’s remarks, sharing the same message about nation-state exploitation.

    The notice included a list of “tested and validated” VPN products on the National Information Assurance Partnership Product Compliant List, many of which use multi-factor authentication and promptly apply patches and updates. Experts lauded CISA and the NSA for creating the list. Chester Wisniewski, a principal research scientist at Sophos, told ZDNet that for too long, there has not been a trusted voice on VPNs without a vested interest in selling you something. “Combining the knowledge and experience of the NSA with CISA’s remit of helping protect the US private sector puts them in a good position to provide trusted advice on staying safe against criminal actors,” Wisniewski said.

    He noted that the advice is largely copied from suggestions provided to defense contractors and similar entities. “It is great advice, but incredibly complicated and burdensome for most commercial entities. None of what’s said is wrong, but it requires a lot of forethought and a lot of process to comply with,” Wisniewski added. “Most organizations are incapable of following much of the advice. Doing VPNs right is really hard, as demonstrated in this document, so I would urge organizations to pursue zero trust network access and SD-WAN as a more practical way of achieving similar goals. Rather than rebuild your entire VPN strategy to remain doing it the old way, you may as well spend the same time/resources to modernize your approach to remote access and reap the benefits rather than simply shore up the old way.”

    Untangle senior vice president Heather Paunet noted that cyberattacks on VPNs are very costly due to potential ransoms or accessed data, as seen with the Pulse Secure VPN exploit in April that compromised government agencies and companies in the US and Europe. While VPN vulnerabilities have risen alongside increased VPN usage over the last year and a half, newer VPN technologies with modern cryptography are evolving to protect information transmitted across the internet, Paunet said, pointing to popular tools like WireGuard. “What is missing from the guidelines are taking the human element into consideration. Along with following the strict guidelines, IT professionals are also challenged with getting employees to effectively use the technology. If the VPN is too difficult to use, or slows down systems, the employee is likely to turn it off,” Paunet said. “The challenge for IT professionals is to find a VPN solution that fits the guidelines, but is also fast and reliable so that employees turn it on once and forget about it.”

    Archie Agarwal, CEO at ThreatModeler, noted that a quick search on the Shodan search engine reveals over a million VPNs on the internet in the US alone, providing doorways to sensitive internal networks that sit exposed to the world for anyone to try to break through. “These represent the old perimeter security paradigm and have failed to protect the inner castle over and again. If credentials are leaked or stolen, or new vulnerabilities discovered, the game is lost and the castle falls,” Agarwal said.
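    One practical takeaway from Agarwal’s point is how easy that exposure is to measure. The sketch below uses the shodan Python library to count exposed VPN endpoints; the API key is a placeholder, and the queries are illustrative fingerprints for a few common products rather than an exhaustive census.

    ```python
    # Minimal sketch: count Internet-exposed VPN endpoints via the Shodan API.
    # Requires `pip install shodan`. SHODAN_API_KEY is a placeholder you must
    # replace; the queries are illustrative, not a complete VPN fingerprint set.
    import shodan

    SHODAN_API_KEY = "YOUR_API_KEY"  # placeholder, not a real key
    api = shodan.Shodan(SHODAN_API_KEY)

    queries = {
        "OpenVPN (default port)": "port:1194 country:US",
        "IKE/IPsec": "port:500 country:US",
        "Pulse Connect Secure": 'http.title:"Pulse Connect Secure" country:US',
    }

    for label, query in queries.items():
        try:
            # count() returns aggregate totals without paging full host records
            result = api.count(query)
            print(f"{label}: {result['total']} exposed hosts")
        except shodan.APIError as exc:
            print(f"{label}: query failed ({exc})")
    ```

    Counting rather than listing hosts keeps the query cheap and avoids collecting details about systems you don’t own.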

  • These systems are facing billions of attacks every month as hackers try to guess passwords

    Computer networks are being aggressively bombarded with billions of password-guessing attacks as cyber criminals attempt to exploit the growth in remote desktop protocol (RDP) and other cloud services in corporate environments. Cybersecurity researchers at ESET detected 55 billion new attempts at brute-force attacks between May and August 2021 alone – more than double the 27 billion attacks detected between January and April. 


    Successfully guessing passwords can provide cyber criminals with an easy route into networks and an avenue they can use to launch further attacks, including delivering ransomware or other malware. Once in a network, they’ll attempt to use that access to gain additional permissions and manipulate the network, performing actions like turning off security services so they can go about their activities more easily.

    SEE: A winning strategy for cybersecurity (ZDNet special report)

    Among the most popular targets for brute-force password-guessing attacks are RDP services. The rise in remote working has led to an increase in people needing to use remote-desktop services. Many of these are public-facing services, providing cyber criminals with an opportunity to break into networks – and it’s an opportunity they’re eager to exploit. The sheer number of attacks means most will be automated, but if accounts are secured with simple-to-guess or common passwords – and many are – then they make easy pickings for attackers. Once a password has been successfully breached, it’s likely an attacker will take a more hands-on approach to reach their end goal. “With the number of attacks being in the billions, this is impossible to do manually – so these attack attempts are automated. Of course, there is always a manual aspect when cybercriminals are setting up or adjusting the attack infrastructure and specifying what types of targets are in their crosshairs,” Ondrej Kubovič, security awareness specialist at ESET, told ZDNet.
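    At this volume, defenders spot brute-force activity in aggregate rather than event by event. As a rough illustration, the sketch below flags source IPs that exceed a failure threshold within a sliding window; the sample events, window, and threshold are invented, and a real deployment would parse RDP, SSH, or VPN authentication logs and tune both values.

    ```python
    # Minimal sketch: flag likely brute-force sources from failed-login events.
    # Sample data and thresholds are illustrative only.
    from collections import defaultdict
    from datetime import datetime, timedelta

    WINDOW = timedelta(minutes=10)  # sliding window length
    THRESHOLD = 20                  # failures within WINDOW that trigger an alert

    # (timestamp, source_ip) pairs -- made-up sample events
    failed_logins = [
        (datetime(2021, 9, 30, 12, 0, s), "203.0.113.7") for s in range(25)
    ] + [(datetime(2021, 9, 30, 12, 5, 0), "198.51.100.2")]

    events_by_ip = defaultdict(list)
    for ts, ip in failed_logins:
        events_by_ip[ip].append(ts)

    for ip, stamps in events_by_ip.items():
        stamps.sort()
        left = 0  # two-pointer sweep over the sorted timestamps
        for right, ts in enumerate(stamps):
            while ts - stamps[left] > WINDOW:
                left += 1
            count = right - left + 1
            if count >= THRESHOLD:
                print(f"ALERT: {ip} made {count} failed logins within "
                      f"{WINDOW} (ending {ts})")
                break
    ```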

    In addition to targeting RDP services, cyber criminals are also going after public-facing SQL and SMB services. These services will often be secured with default passwords that attackers can take advantage of. 

    One of the reasons why brute-force attacks succeed is that so many accounts are secured with simple, one-word passwords. Requiring passwords to be more complex could go a long way toward preventing accounts from being breached in brute-force attacks. The National Cyber Security Centre suggests users adopt three memorable words as a password – something that’s far more robust against brute-force attacks than a single word.

    SEE: Don’t want to get hacked? Then avoid these three ‘exceptionally dangerous’ cybersecurity mistakes

    Organisations can also provide an additional layer of protection against brute-force password-guessing attacks – and other campaigns – by deploying multi-factor authentication (MFA). Using MFA means that, even if attackers know the correct password, there’s an extra barrier preventing them from automatically gaining access to the network.
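    The NCSC’s three-random-words advice is simple to put into practice. Here is a minimal sketch using Python’s cryptographically secure secrets module; the ten-word list is a stand-in, and a real generator would draw from a dictionary of several thousand words.

    ```python
    # Minimal sketch of the NCSC "three random words" password advice.
    # The word list is a tiny placeholder; use a large dictionary in practice.
    import secrets

    WORDS = ["coffee", "train", "harbour", "velvet", "monsoon",
             "lantern", "gravel", "orchid", "tundra", "falcon"]

    def three_word_passphrase(separator: str = "-") -> str:
        """Return a passphrase such as 'velvet-train-monsoon'."""
        return separator.join(secrets.choice(WORDS) for _ in range(3))

    print(three_word_passphrase())
    ```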

  • Fears surrounding Pegasus spyware prompt new Trojan campaign

    Public interest in a recent investigation into how Pegasus spyware is being used to monitor civil rights agencies, journalists, and government figures worldwide is being abused in a new wave of cyberattacks.

    Pegasus is a surveillance system offered by the NSO Group. While advertised as software for fighting crime and terrorism, a probe into the spyware led to allegations that it is being used against innocents, including human rights activists, political activists, lawyers, journalists, and politicians worldwide.  Israel-based NSO Group denied the findings of the investigation, conducted by Amnesty International, Forbidden Stories, and numerous media outlets.  Apple has since patched a zero-day vulnerability utilized by Pegasus, a discovery made together with Citizen Lab.  Now, cybercriminals unconnected to Pegasus are attempting to capitalize on the damning report by promising individuals a way to ‘protect’ themselves against such surveillance — but are secretly deploying their own brands of malware, instead.   On Thursday, researchers from Cisco Talos said that threat actors are masquerading as Amnesty International and have set up a fake domain designed to impersonate the organization’s legitimate website. This points to an ‘antivirus’ tool, “AVPegasus,” that promises to protect PCs from the spyware. 

    However, according to Talos researchers Vitor Ventura and Arnaud Zobec, the software contains the Sarwent Remote Access Trojan (RAT). The domains associated with the campaign are amnestyinternationalantipegasus[.]com, amnestyvspegasus[.]com, and antipegasusamnesty[.]com.

    Written in Delphi, Sarwent installs a backdoor onto machines when executed and is also able to leverage the remote desktop protocol (RDP) to connect to an attacker-controlled command-and-control (C2) server. The malware will attempt to exfiltrate credentials and is also able to download and execute further malicious payloads. The UK, US, Russia, India, Ukraine, the Czech Republic, Romania, and Colombia are the most targeted countries to date. Talos believes the cyberattacker behind this campaign is a Russian speaker who has operated other Sarwent-based attacks throughout 2021.

    “The campaign targets people who might be concerned that they are targeted by the Pegasus spyware,” Talos says. “This targeting raises issues of possible state involvement, but there is insufficient information available to Talos to make any determination there. It is possible that this is simply a financially motivated actor looking to leverage headlines to gain new access.”
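    Lookalike domains such as these can often be caught with crude heuristics. The sketch below flags domains that embed the brand string and scores their similarity to the legitimate site; the suspect list mirrors the (defanged) campaign domains, and real detection pipelines combine many more signals, such as registration age, hosting, and TLS certificates.

    ```python
    # Minimal sketch: flag brand-impersonating domains. Heuristic only;
    # domains are written defanged ([.]) as in the report above.
    from difflib import SequenceMatcher

    LEGITIMATE = "amnesty.org"
    BRAND = "amnesty"

    suspects = [
        "amnestyinternationalantipegasus[.]com",
        "amnestyvspegasus[.]com",
        "antipegasusamnesty[.]com",
        "example[.]com",  # benign control
    ]

    for domain in suspects:
        if BRAND in domain and domain != LEGITIMATE:
            similarity = SequenceMatcher(None, domain, LEGITIMATE).ratio()
            print(f"SUSPICIOUS: {domain} embeds '{BRAND}' "
                  f"(similarity to {LEGITIMATE}: {similarity:.2f})")
    ```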

  • These ransomware crooks are complaining they are getting ripped off – by other ransomware crooks

    Cyber criminals using a ransomware-as-a-service scheme have been spotted complaining that the group they rent the malware from could be using a hidden backdoor to grab ransom payments for themselves.

    REvil is one of the most notorious and most common forms of ransomware around and has been responsible for several major incidents. The group behind REvil leases its ransomware out to other crooks in exchange for a cut of the profits these affiliates make by extorting Bitcoin payments from victims who need the decryption keys.


    But it seems that cut isn’t enough for those behind REvil: it was recently disclosed that there’s a secret backdoor coded into their product, which allows REvil to restore encrypted files without the involvement of the affiliate.

    SEE: A winning strategy for cybersecurity (ZDNet special report)

    This could allow REvil to take over negotiations with victims, hijack the so-called “customer support” chats – and steal the ransom payments for themselves. Analysis of underground forums by cybersecurity researchers at Flashpoint suggests that the disclosure of the REvil backdoor hasn’t gone down well with affiliates. One forum user claimed to have had suspicions about REvil’s tactics and said their own plan to extort $7 million from a victim was abruptly ended. They believe one of the REvil authors took over the negotiations using the backdoor and made off with the money.

    Another user on the Russian-speaking forum complained they were tired of “lousy partner programs” used by ransomware groups “you cannot trust”, but also suggested that REvil’s status as one of the most lucrative ransomware-as-a-service schemes means wannabe ransomware crooks will still flock to become affiliates. That’s particularly the case now that the group is back in action after appearing to go on hiatus earlier in the summer. For those scammers who think they’ve been scammed, there’s not a lot they can do (and few would have sympathy for them). One forum user suggested any attempt at dealing with this situation would be as useless as trying to arbitrate “against Stalin”. Ransomware remains one of the key cybersecurity issues facing the world today. For victims of ransomware attacks, it ultimately doesn’t matter who is on the other end of the keyboard demanding payment for the decryption key – many will just opt to pay the ransom, perceiving it as the best way to restore the network.


    But even if victims pay the ransom – which isn’t recommended, because it encourages more ransomware attacks – restoring the network can still be a slow process, and it can be weeks or months before services are fully restored.

    SEE: A cloud company asked security researchers to look over its systems. Here’s what they found

    Be it REvil or any other ransomware gang, the best way to avoid the disruption of a ransomware attack is to prevent attacks in the first place. Key steps organisations can take include making sure operating systems and software across the network are patched with the latest security updates, so cyber criminals can’t easily exploit known vulnerabilities to gain an initial foothold. Multi-factor authentication should also be applied to all users to provide a barrier against hands-on attackers using stolen usernames and passwords to move around a compromised network.

  • Australia's digital vaccination certificates for travel ready in two to three weeks

    Services Australia CEO Rebecca Skinner on Thursday said that Australia’s digital vaccination certificates for international travel would be ready in two to three weeks. Skinner, who appeared before Australia’s COVID-19 Select Committee, provided the update when explaining how the upcoming visible digital seal (VDS) would operate. The VDS is Australia’s answer for indicating a person’s COVID-19 vaccination status for international travel; it will link a person’s vaccination status with new digital vaccination certificates and border declarations.

    Skinner said her agency was working to make the VDS accessible to fully vaccinated people through the Medicare Express Plus app. To access the VDS through the app, Skinner said, users would need to provide additional passport details along with consent to share their immunisation history with the Australian Passport Office. The data would then be sent to the Passport Office to determine whether the user is eligible to receive a VDS. The approval process performed by the Passport Office will be automated, Services Australia Health Programmes general manager Jarrod Howard said, and would entail the Passport Office checking whether the person is fully vaccinated. Because the process is automated, Howard said, people could re-apply for a VDS “in a matter of seconds” at the airport in the event there is an error with a VDS. Once approved, the VDS would be available in the Medicare Express Plus app and would allow for verification on third-party apps.

    Providing a timeline for when the VDS would be ready, Skinner told the committee she expected it in the next two to three weeks, or before the end of October at the latest. While noting the digital vaccination certificate for international travel was coming soon, she adamantly refused to call the VDS a vaccine passport, as an official passport is still required. Outgoing travellers from Australia will not be allowed to travel abroad without the VDS or another authorised digital vaccination certificate, however, even if they have a passport.

    Australia’s Trade Minister Dan Tehan earlier this month said the VDS system has already been sent to all of Australia’s overseas embassies in order to begin engagement with overseas posts and overseas countries regarding international travel. The Department of Foreign Affairs and Trade, meanwhile, has already put a verification app, called the VDS-NC Checker, onto Apple’s App Store, which the department hopes will be used at airports to check people onto flights. International travel for fully vaccinated people living in Australia is currently expected by Christmas, with Tehan confirming that the official date would be when 80% of the country is fully vaccinated.

    Digital vaccination certificate for state check-in apps to undergo trial

    On the domestic front, fully vaccinated Australians may soon be able to add digital vaccination certificates to state-based check-in apps, Skinner said. She said there would eventually be an additional feature on the Medicare Express Plus app that allows users to add their COVID-19 immunisation history to state-based check-in apps. The process for adding the digital vaccination certificate to state-based check-in apps will be similar to accessing the VDS, except users will not need to provide their passport details. Consent must first be provided for the data to be added to state-based apps, Skinner said. That consent will last for 12 months, after which users will need to provide it again for the immunisation information to continue to appear on the state-based apps.

    Services Australia envisions this process occurring through a security token being passed to the relevant state authority once consent is provided. The security token will carry data showing a person’s COVID-19 immunisation history and other information, such as an individual health identifier. That data would be stored in the Australian Immunisation Register (AIR) database, which is maintained by Services Australia on behalf of the Department of Health. Currently, those fully vaccinated can only add their digital vaccination certificate to Apple Wallet or Google Pay. Those not eligible for Medicare who are fully vaccinated, meanwhile, can call the Australian Immunisation Register for a hard copy, or use the Individual Healthcare Identifiers service through myGov for a digital version.

    Trials to implement the COVID-19 digital certificate on state-based apps will start in New South Wales next week. Of Australia’s states and territories, only New South Wales has officially signed up to trial the new feature so far, however. “Our approach has been particularly for high volume venues to reduce friction on both staff in those venues and also friction for customers to leverage the current check-in apps that all of the jurisdictions currently have,” Services Australia deputy CEO of transformation projects Charles McHardie said. When asked why Services Australia was not focusing on introducing the digital vaccination certificate through a national app, like COVIDSafe, McHardie explained that this was due to Australia’s public health orders being issued at a state level. McHardie conceded, however, that incoming travellers could potentially be required to install up to eight different apps to adhere to Australia’s various state check-in protocols. Howard added that check-in apps from certain states and territories — the ACT, Northern Territory, Queensland, and Tasmania — have interoperability with each other because they use the same background technology.

    According to DTA acting CEO Peter Alexander, who also appeared before the committee, the bungled COVIDSafe app had cost AU$9.1 million as of last week. New South Wales and Victoria have been the only states to use information from the app. The AU$9.1 million figure is in line with the January update that the COVIDSafe app costs around AU$100,000 per month to run. At the end of January, total spend for the app was AU$6.7 million.

    At the time of writing, around 11 million people living in Australia are fully vaccinated. Of those people, 6.3 million have downloaded a digital vaccination certificate.
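    Services Australia has not published the format of the consent token described above, so the following is a purely hypothetical sketch of that flow: an HMAC-signed payload carrying a vaccination summary, an individual health identifier, and a 12-month consent expiry. Every field name, the sample identifier, and the shared key are assumptions made for illustration.

    ```python
    # Hypothetical sketch of the described consent-token flow. All field names,
    # keys, and values are invented; this is not Services Australia's format.
    import hashlib
    import hmac
    import json
    from datetime import datetime, timedelta

    SHARED_KEY = b"placeholder-key-shared-with-state-authority"  # assumption

    def issue_consent_token(ihi: str, fully_vaccinated: bool) -> str:
        payload = {
            "individual_health_identifier": ihi,           # assumed field name
            "covid19_fully_vaccinated": fully_vaccinated,  # assumed field name
            "consent_expires": (datetime.utcnow() + timedelta(days=365)).isoformat(),
        }
        body = json.dumps(payload, sort_keys=True)
        sig = hmac.new(SHARED_KEY, body.encode(), hashlib.sha256).hexdigest()
        return body + "." + sig

    def verify_consent_token(token: str) -> dict:
        body, _, sig = token.rpartition(".")  # signature sits after the last dot
        expected = hmac.new(SHARED_KEY, body.encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(sig, expected):
            raise ValueError("invalid signature")
        payload = json.loads(body)
        if datetime.fromisoformat(payload["consent_expires"]) < datetime.utcnow():
            raise ValueError("consent expired")  # the 12-month consent has lapsed
        return payload

    token = issue_consent_token("8003608166690503", fully_vaccinated=True)  # sample IHI
    print(verify_consent_token(token)["covid19_fully_vaccinated"])
    ```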

  • YouTube expands medical misinformation bans to include all anti-vaxxer content

    YouTube has said it will remove content containing misinformation or disinformation on approved vaccines, as that content poses a “serious risk of egregious harm”. “Specifically, content that falsely alleges that approved vaccines are dangerous and cause chronic health effects, claims that vaccines do not reduce transmission or contraction of disease, or contains misinformation on the substances contained in vaccines will be removed,” the platform said in a blog post. “This would include content that falsely says that approved vaccines cause autism, cancer or infertility, or that substances in vaccines can track those who receive them. Our policies not only cover specific routine immunizations like for measles or Hepatitis B, but also apply to general statements about vaccines.”

    Exceptions to the rules do exist: Videos that discuss vaccine policies, new trials, historical success, and personal testimonials will be allowed, provided other rules are not violated, or the channel is not deemed to promote vaccine hesitancy. “YouTube may allow content that violates the misinformation policies … if that content includes additional context in the video, audio, title, or description. This is not a free pass to promote misinformation,” YouTube said. “Additional context may include countervailing views from local health authorities or medical experts. We may also make exceptions if the purpose of the content is to condemn, dispute, or satirise misinformation that violates our policies.”

    If a channel violates the policy three times in 90 days, YouTube said it will remove the channel.
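    The stated enforcement rule (three strikes within 90 days) amounts to a rolling-window check. The sketch below is a minimal illustration of the policy as described, not YouTube’s actual enforcement code.

    ```python
    # Minimal sketch of a "three strikes in 90 days" rule as described above.
    from datetime import date, timedelta

    WINDOW = timedelta(days=90)
    MAX_STRIKES = 3

    def should_remove_channel(strike_dates: list) -> bool:
        strikes = sorted(strike_dates)
        for start in strikes:
            # count strikes falling inside the 90-day window opening at `start`
            in_window = [d for d in strikes if start <= d <= start + WINDOW]
            if len(in_window) >= MAX_STRIKES:
                return True
        return False

    # Three strikes spread across 80 days -> removal
    print(should_remove_channel([date(2021, 10, 1), date(2021, 11, 5), date(2021, 12, 20)]))
    ```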

    The channel of one anti-vaccine non-profit, the Children’s Health Defense, which is chaired by Robert F. Kennedy Jr, was removed. Kennedy cast the channel’s removal as a free speech issue. Meanwhile, the BBC reported that Russia threatened to ban YouTube after a pair of German-language RT channels were banned for COVID misinformation. When announcing its expanded policy, YouTube said it had removed over 130,000 videos for violating its COVID-19 vaccine policies since last year. In August, the video platform said it had removed over 1 million COVID-19 misinformation videos.

    Earlier this year, Twitter began automatically labelling tweets it regarded as having misleading information about COVID-19 and its vaccines, as well as introducing its own strike system that includes temporary account locks and can lead to permanent suspension. While the system has led to the repeated suspension of misinformation peddlers such as US congresswoman Marjorie Taylor Greene, the automated system cannot handle sarcasm from users attempting humour on the topics of COVID-19 and 5G. In April, the Australian Department of Health published a page attempting to dispel any link between vaccines and internet connectivity. “COVID-19 vaccines do not — and cannot — connect you to the internet,” it stated. “Some people believe that hydrogels are needed for electronic implants, which can connect to the internet. The Pfizer mRNA vaccine does not use hydrogels as a component.”

  • Every country must decide own definition of acceptable AI use

    Every country, including Singapore, will need to decide what it deems to be acceptable uses of artificial intelligence (AI), including whether the use of facial recognition technology in public spaces should be accepted or outlawed. Discussions should seek to balance market opportunities with ensuring the ethical use of AI, so that any guidelines are usable and easily adopted. Above all, governments should seek to drive public debate and gather feedback so AI regulations remain relevant for their local population, said Ieva Martinkenaite, head of analytics and AI for Telenor Research.

    The Norwegian telecommunications company applies AI and machine learning models to deliver more personalised customer experiences and targeted sales campaigns, achieve better operational efficiencies, and optimise its network resources. For instance, the technology helps identify customer usage patterns in different locations, and this data is tapped to reduce power to, or switch off, antennas where usage is low. This not only lowers energy consumption and, hence, power bills, but also enhances environmental sustainability, Martinkenaite said in an interview with ZDNet.

    The Telenor executive also chairs the AI task force at the GSMA-European Telecommunications Network Operators’ Association, which drafts AI regulation for the industry in Europe, transitioning ethics guidelines into legal requirements. She also provides input on the Norwegian government’s position on proposed EU regulatory acts.
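    The antenna optimisation described above ultimately acts on per-cell usage data. The sketch below reduces it to a simple threshold rule; the cell names, traffic figures, and cutoff are invented, and Telenor’s production system applies machine-learning models to live network data rather than a fixed threshold.

    ```python
    # Illustrative sketch: mark low-usage antennas as candidates for power-down.
    # All names and numbers are made up for the example.
    HOURLY_USAGE_MB = {
        "cell-oslo-014": 4200.0,
        "cell-rural-302": 3.5,
        "cell-rural-317": 0.8,
        "cell-bergen-072": 980.0,
    }

    LOW_USAGE_THRESHOLD_MB = 10.0  # below this, the antenna can likely sleep

    for antenna, usage in sorted(HOURLY_USAGE_MB.items()):
        action = "power down" if usage < LOW_USAGE_THRESHOLD_MB else "keep active"
        print(f"{antenna}: {usage:>7.1f} MB/h -> {action}")
    ```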


    Asked what lessons she could offer Singapore, which last October released guidelines on the development of AI ethics, Martinkenaite pointed to the need for regulators to be practical and understand the business impact of legislation. Frameworks on AI ethics and governance might look good on paper, but there also should be efforts to ensure these were usable in terms of adoption, she said. This underscored the need for constant dialogue and feedback, as well as continuous improvement, so any regulations remained relevant. For one, such guidelines should be provided alongside AI strategies, including the types of business and operating models the country should pursue and the industries that could best benefit from AI's deployment.

    In this aspect, she said, the EU and Singapore had identified strategic industries where they believed the use of data and AI could scale. These sectors also should be globally competitive and be where the country's largest investments go. Singapore in 2019 unveiled its national AI strategy to identify and allocate resources to key focus areas, as well as pave the way for the country, by 2030, to be a leader in developing and deploying "scalable, impactful AI solutions" in key verticals, including manufacturing, finance, and government. In driving meaningful adoption of AI, nations should strive for "balance" between tapping market opportunities and ensuring ethical use of the technology.

    Noting that technology was constantly evolving, she said it was not possible for regulations to always keep up. In drafting the region's AI regulations, EU legislators grappled with several challenges, including how laws governing the ethical use of AI could be introduced without impacting the flow of talent and innovation, she explained. This proved a significant obstacle, as there were worries regulations could result in excessive red tape with which companies would find it difficult to comply. There also were concerns about increasing dependence on IT infrastructures and machine learning frameworks developed by a handful of internet giants, including Amazon, Google, and Microsoft, as well as others in China, Martinkenaite said.

    She cited unease amongst EU policymakers about how the region could maintain its sovereignty and independence amidst this emerging landscape. Specifically, discussions revolved around the need to create key enabling technologies in AI within the region, such as data, compute power, storage, and machine learning architectures. With this focus on building greater AI technology independence, it was critical for EU governments to create incentives and drive investments locally in the ecosystem, she noted. Rising concerns about the responsible use of AI also were driving much of the discussion in the region, as they were in other regions such as Asia, she added. While there still was uncertainty over what the best principles were, she stressed the need for nations to participate in the discussions and efforts to establish ethical principles of AI. This would bring the global industry together to agree on what these principles should be and to adhere to them.

    United Nations human rights chief Michelle Bachelet recently called for the use of AI to be outlawed where it breaches international human rights law. She underscored the urgency of assessing and addressing the risks AI could bring to human rights, noting that stricter legislation on its use should be implemented where it posed higher risks to human rights. Bachelet said: "AI can be a force for good, helping societies overcome some of the great challenges of our times. But AI technologies can have negative, even catastrophic, effects if they are used without sufficient regard to how they affect people's human rights." The UN report urged governments to take stronger action to keep algorithms under control. Specifically, it recommended a moratorium on the use of biometric technologies, including facial recognition, in public spaces until, at the least, authorities were able to demonstrate there were no significant issues with accuracy or discriminatory impacts. These AI systems, which increasingly are used to identify people in real time and from a distance, potentially enabling unlimited tracking of individuals, also should comply with privacy and data protection standards, the report noted. It added that more human rights guidance on the use of biometrics was "urgently needed".

    Finding balance between ethics, business of AI

    Martinkenaite noted that governments and regulators worldwide would have to determine what it meant to be ethical in their country's use of AI, and how to verify that its application was non-discriminatory. This would be pertinent especially in discussions on the risks AI could introduce in certain areas, such as facial recognition. While any technology in itself was not necessarily bad, its use could be deemed to be so, she said. AI, for instance, could be used to benefit society by detecting criminals or preventing accidents and crimes. There were, however, challenges in such usage amidst evidence of discriminatory results, including against certain races, economic classes, and genders. These could pose high security or political risks.

    Every country and government, then, needed to decide what were acceptable and preferred ways for AI to be applied by its citizens. These included questions on whether the use of AI-powered biometric recognition technology on videos and images of people's faces for law enforcement purposes should be accepted or outlawed. She pointed to ongoing debate in the EU, for example, on whether the use of AI-powered facial recognition technology in public places should be completely banned or used only with exceptions, such as in preventing or fighting crime. The opinions of citizens also should be weighed on such issues, she said, adding that there were no wrong or right decisions here. These were simply decisions countries would have to make for themselves, she said, including multi-ethnic nations such as Singapore. "It's a dialogue every country needs to have," she said.

    Martinkenaite noted, though, that until accuracy issues related to the analysis of varying skin colours and facial features were properly resolved, such AI technology should not be deployed without human intervention, proper governance, or quality assurance in place. She urged continual investment in machine learning research and skillsets, so the technology could improve and become more robust.
    She noted that adopting an ethical AI strategy also could present opportunities for businesses, since consumers would want to purchase products and services that were safe and secure, from organisations that took adequate care of their personal data. Companies that understood such needs and invested in the necessary talent and resources to build a sustainable AI environment would stand out in the market, she added.

    A FICO report released in May revealed that nearly 70% of 100 AI-focused leaders in the financial services industry could not explain how specific AI model decisions or predictions were made. Some 78% said they were "poorly equipped to ensure the ethical implications of using new AI systems", while 35% said their organisation made efforts to use AI in a transparent and accountable way. Almost 80% said they faced difficulty getting their fellow senior executives to consider or prioritise ethical AI usage practices, and 65% said their organisation had "ineffective" processes in place to ensure AI projects complied with regulations.