More stories

  • AI security: This project aims to spot attacks against critical systems before they happen

    Microsoft and non-profit research organization MITRE have joined forces to accelerate the development of cybersecurity’s next chapter: protecting applications that are based on machine learning and are at risk of new adversarial threats.
    The two organizations, in collaboration with academic institutions and other big tech players such as IBM and Nvidia, have released a new open-source tool called the Adversarial Machine Learning Threat Matrix. The framework is designed to organize and catalogue known techniques for attacks against machine learning systems, to inform security analysts and provide them with strategies to detect, respond to, and remediate threats.

    The matrix classifies attacks based on criteria covering the various stages of a threat, such as initial access, execution, exfiltration, and impact. To curate the framework, Microsoft and MITRE’s teams analyzed real-world attacks carried out on existing applications, vetting them as effective against AI systems.
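    The tactic-and-technique layout described above can be pictured as a simple mapping. The sketch below is a hypothetical, heavily abbreviated illustration (the technique names are invented examples, not the official matrix entries), using only the tactic names mentioned in this article:

```python
# A minimal, illustrative sketch of how a threat matrix catalogues
# techniques under tactics. Entries are hypothetical examples only.
threat_matrix = {
    "initial_access": ["phishing of ML service credentials",
                       "compromised model supply chain"],
    "execution":      ["adversarial example submitted to model API"],
    "exfiltration":   ["model inversion to recover training data"],
    "impact":         ["targeted misclassification",
                       "denial of ML service"],
}

def techniques_for(tactic):
    """Look up the catalogued techniques for a given tactic."""
    return threat_matrix.get(tactic, [])

print(techniques_for("impact"))
```

    A shared structure like this is what lets analysts from different organizations refer to the same attack technique by the same name.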
    “If you just try to imagine the universe of potential challenges and vulnerabilities, you’ll never get anywhere,” said Mikel Rodriguez, who oversees MITRE’s decision science research programs. “Instead, with this threat matrix, security analysts will be able to work with threat models that are grounded in real-world incidents that emulate adversary behavior with machine learning.”
    With AI systems increasingly underpinning our everyday lives, the tool seems timely. From finance to healthcare, through defense and critical infrastructure, the applications of machine learning have multiplied in the past few years. But MITRE’s researchers argue that while eagerly accelerating the development of new algorithms, organizations have often failed to scrutinize the security of their systems.
    Surveys increasingly point to the lack of understanding within industry of the importance of securing AI systems against adversarial threats. Companies like Google, Amazon, Microsoft and Tesla, in fact, have all seen their machine learning systems tricked in one way or the other in the past three years.
    “Whether it’s just a failure of the system or because a malicious actor is causing it to behave in unexpected ways, AI can cause significant disruptions,” Charles Clancy, MITRE’s senior vice president, said. “Some fear that the systems we depend on, like critical infrastructure, will be under attack, hopelessly hobbled because of AI gone bad.”
    Algorithms are prone to mistakes, especially when they are influenced by the malicious interventions of bad actors. In a separate study, a team of researchers recently ranked the potential criminal applications of AI over the next 15 years; among the list of highly worrying prospects was the opportunity for attack that AI systems present when algorithms are used in key applications like public safety or financial transactions.
    As MITRE and Microsoft’s researchers note, attacks can come in many different shapes and forms. Threats range from a sticker placed on a sign to trick a self-driving car’s automated system into making the wrong decision, to more sophisticated techniques with specialized names, such as evasion, data poisoning, trojaning, and backdooring.
    Centralizing the methods known to effectively threaten machine learning applications in a single matrix could therefore go a long way toward helping security experts prevent future attacks on their systems.
    “By giving a common language or taxonomy of the different vulnerabilities, the threat matrix will spur better communication and collaboration across organizations,” said Rodriguez.
    MITRE’s researchers are hoping to gather more information from ethical hackers through a well-established cybersecurity method known as red teaming. The idea is to have teams of benevolent security experts uncover vulnerabilities ahead of bad actors, feeding the findings into the existing database of attacks and expanding overall knowledge of possible threats.
    Microsoft and MITRE both have their own red teams, and they have already demonstrated some of the attacks that currently feed into the matrix. They include, for example, evasion attacks on machine-learning models, which modify the input data to induce targeted misclassification.
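    The evasion idea can be illustrated on a toy model. The sketch below is a minimal, hypothetical example (a hand-built linear classifier, not any real production model) of how a small, targeted perturbation of the input can flip a model’s decision:

```python
# Minimal sketch of an evasion attack on a toy linear classifier.
# Weights and inputs are hypothetical, chosen for illustration; real
# evasion attacks target deep models, but the principle is the same:
# nudge each input feature in the direction that most increases the loss.

def predict(w, x, b=0.0):
    """Linear decision rule: class 1 if w.x + b > 0, else class 0."""
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if score > 0 else 0

def sign(v):
    return 1 if v > 0 else -1 if v < 0 else 0

def evade(w, x, y_true, eps=0.5):
    """Fast-gradient-sign-style step: for a linear model the loss
    gradient w.r.t. the input is proportional to +/- w, so each feature
    is moved by eps against the true class."""
    direction = 1 if y_true == 1 else -1
    return [xi - direction * eps * sign(wi) for wi, xi in zip(w, x)]

w = [0.9, -0.4, 0.3]   # hypothetical model weights
x = [0.5, 0.1, 0.2]    # a benign input, classified as class 1

x_adv = evade(w, x, y_true=1, eps=0.5)
print(predict(w, x), predict(w, x_adv))  # 1 0 -- the perturbation flips the label
```

    The same logic, applied through a model’s gradients rather than its raw weights, is what makes image classifiers misread a subtly altered stop sign.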

  • Phishing groups are collecting user data, email and banking passwords via fake voter registration forms

    Image: Proofpoint
    Days ahead of the US Presidential Election, spam groups are hurrying to strike while the iron is still hot, using voter registration-related lures to trick people into accessing fake government sites and giving away their personal data. Some groups are bold enough to also ask for banking and email passwords, and even auto registration information.
    These campaigns have been taking place since September and are still ongoing, as the lures (email subject lines) remain relevant.
    Spotted by email security firms KnowBe4 and Proofpoint, these campaigns are spoofing the identity of the US Election Assistance Commission (EAC), the US government agency responsible for managing voter registration guidelines.
    Subject lines in this campaign are simple and play on the fear of US citizens that their voter registration request might have failed.
    Using subject lines like “voter registration application details couldnt be confirmed” and “your county clerk couldnt confirm voter registration,” users are lured to web pages posing as government sites and asked to fill out a voter registration form again.
    According to Proofpoint, these sites are fake and are usually hosted on hacked WordPress sites. If users fail to notice the incorrect URL, they will eventually end up providing their personal details to a criminal group. Data usually collected via these forms includes:
    Name
    Date of birth
    Mailing address
    Email address
    Social Security Number (SSN)
    Driver’s license information
    Per KnowBe4 and Proofpoint, the spammers are using a basic template, and all of their emails usually lure users to a site that looks the same, like the one below.

    Image: Proofpoint
    But in a follow-up report published on Thursday, Proofpoint says it has seen this group modify its tactics in recent days.
    With the pre-election window drawing to a close, the spam group has become bolder than in previous iterations of the same campaign. Besides asking for personally-identifiable information specific to voter registration forms, the group has now expanded its phishing site to include new fields that also ask for:
    Bank name
    Bank account number
    Bank account routing number
    Banking ID/username
    Banking account password
    Email account passwords
    Vehicle Identification Number (VIN)
    To allay fears, the spammers claim this extra information is needed so users can claim a “stimulus.”

    Image: Proofpoint, ZDNet
    Proofpoint says these spam and phishing campaigns are the work of a well-established group that has been involved in previous phishing campaigns this year. Previous campaigns used COVID-19 business grant-related lures.
    It is unclear how successful these campaigns are, but the fact that they are still happening means that spam groups are getting the returns they’re seeking; otherwise, they wouldn’t bother.

  • Nvidia tackles code execution flaws, data leaks in GeForce Experience

    Nvidia has resolved a trio of vulnerabilities impacting the GeForce Experience suite. 

    GeForce Experience is software designed by Nvidia with gamers and live streamers in mind, covering driver update management, driver optimization for gaming and graphics cards, and both video and audio capture.
    On October 22, Nvidia said the firm’s latest security update tackles issues found in all versions of GeForce Experience prior to 3.20.5.70 on Windows machines. Nvidia says the issues could lead to “denial of service, escalation of privileges, code execution, or information disclosure.”
    See also: Nvidia makes a clean sweep of MLPerf predictions benchmark for artificial intelligence
    The first vulnerability, CVE‑2020‑5977, has been issued a CVSS v3.1 score of 8.2 and is described as a flaw in the Helper NodeJS Web Server module of the software. An “uncontrolled search path” is used to load a module, and it is this lack of restriction that can be exploited by attackers for the purposes of executing arbitrary code, denial of service, privilege escalation, and information leaks. 
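    The danger of an uncontrolled search path can be shown by analogy. The sketch below is written in Python rather than NodeJS and is purely hypothetical, but it demonstrates the same general pattern: when an application searches an attacker-writable directory for modules, the attacker’s code gets loaded and run.

```python
# Illustrative analogy: why an uncontrolled module search path is
# dangerous. If an untrusted, writable directory is searched for
# modules, an attacker can drop a module there that the application
# will load and execute. Everything below is hypothetical.
import importlib
import os
import sys
import tempfile

# Simulate a directory the attacker can write to.
attacker_dir = tempfile.mkdtemp()
with open(os.path.join(attacker_dir, "helper.py"), "w") as f:
    f.write("PAYLOAD_RAN = True  # stands in for arbitrary attacker code\n")

# The vulnerable pattern: the untrusted location is searched first.
sys.path.insert(0, attacker_dir)
importlib.invalidate_caches()
import helper  # loads the attacker's module, executing its top-level code

print(helper.PAYLOAD_RAN)  # True
```

    The fix, in any language, is to load modules only from fixed, trusted locations that ordinary users cannot write to.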
    CNET: Russian hackers infiltrated state and local government networks, officials say
    The second security flaw, CVE‑2020‑5990, has been assigned a CVSS severity score of 7.3. Found in ShadowPlay, the live stream and broadcast facility in Nvidia’s software, a vulnerability can be abused to trigger code execution, denial of service, and information disclosure. The vulnerability may also be utilized to perform a privilege escalation attack — but this can only be performed locally.  
    Finally, Nvidia has resolved CVE‑2020‑5978, a low-impact vulnerability with a CVSS v3.1 score of 3.2. A security flaw within GeForce Experience’s nvcontainer.exe service, in which a folder is created under standard user login situations, can be abused for privilege escalation or denial of service attacks. However, the user account must already have local system privileges. 
    It is recommended that users accept automatic updates to receive the patch as quickly as possible. The vulnerabilities have been fixed in GeForce Experience version 3.20.5.70.
    TechRepublic: How to protect your privacy when selling your phone
    In July, Nvidia resolved a bug in the service host component of the software. Application resources were not verified properly, allowing attackers to execute arbitrary code, compromise GeForce Experience itself, cause a denial of service, and leak data. 
    A critical privilege escalation vulnerability in Jetson, found within the Nvidia JetPack SDK, was also resolved at the same time.  
    Have a tip? Get in touch securely via WhatsApp | Signal at +447713 025 499, or over at Keybase: charlie0

  • Criminal cyberattack is 'morally repugnant' says angry mayor, as council battles to restore services

    Hackney Council in London is continuing to try to restore services after a “serious and complex” cyberattack 10 days ago disrupted a number of its systems.
    “I am incredibly angry that organised criminals have chosen to attack us in this way, and in the middle of dealing with a global pandemic. It is morally repugnant, and is making it harder for us to deliver the services you rely on,” said Hackney’s mayor, Philip Glanville.


    He said that some council services may be significantly disrupted for some time. The attack has impacted the council’s legacy and non-cloud-based systems, including many that are needed for essential services such as taking or making payments, logging repairs, and approving licensing and planning applications.
    SEE: Security Awareness and Training policy (TechRepublic Premium)
    Glanville said newer, cloud-based services were not affected and that systems important to combating coronavirus – such as local contact tracing – were operating.
    “We’re quickly finding workarounds where we can, and some vital payments, including housing benefit payments, are now being made. We have also now put in place arrangements so that residents can report housing repairs to us and are working hard to put similar solutions in place for other services,” he said.
    It is still unclear exactly what sort of cyberattack took place. The mayor said the council, which provides services to 280,000 people in east London, wanted to say more about the nature of the attack and the impact it was having on services, but had to make sure it was not “inadvertently assisting the attackers by doing so”. 
    “This is a serious and complex criminal attack on public services, and we’ll do everything we can to ensure these attackers face justice,” Glanville said.
    The council said a number of services had been affected as a result of the attack. According to its service status page:
    It is currently unable to accept some payments including: rents and service charges, council tax and business rates
    Payments to some adult social care service users may not be paid
    It cannot make some payments including: discretionary housing payments, and certain supplier payments
    Non-emergency repairs may take longer than usual 
    It is unable to accept new applications to join the housing waiting list, for housing benefit or for the council tax reduction scheme
    It is unable to process licence applications, and applications for visitor parking vouchers are unavailable
    Most planning services are unavailable, including planning applications and land searches
    Residents are currently unable to report noise complaints, and there may be a delay in responding to other reports and orders across the council

  • Windows 10: This is what your new 'Meet Now' taskbar button does, explains Microsoft

    Microsoft has re-released a newish Skype feature called Meet Now as a button in the latest version of Windows 10’s taskbar.   
    The Meet Now button is aimed at taking on Zoom’s popularity, surfacing Skype’s fast meeting-setup feature in the notification area, or system tray, of the Windows 10 taskbar. It makes it easier for users to set up video meetings without requiring signups or downloads. 


    “In the coming weeks you will be able to easily set up a video call and reach friends and family in an instant by clicking the Meet Now icon in the taskbar notification area. No sign-ups or downloads needed,” Microsoft explained of the feature.  
    Microsoft first rolled out the feature to Windows Insiders on the Dev Channel in September and has now re-released it to Insiders on the Release Preview Channel in the Windows 10 20H2 Build 19042.608 (KB4580364). It’s also available in the Beta channel. 
    It comes after Microsoft released Windows 10 20H2 to the general public earlier this week, opening it up to ‘seekers’ who manually opt to install the latest Windows 10 feature update. 
    The Meet Now taskbar icon came to Windows 10 versions 1903 and 1909 via the KB4580386 cumulative update earlier this week. 
    However, the feature hasn’t yet made it to Windows 10 version 2004, the May 2020 Update, but it should soon. Given that Windows 10 20H2, the October 2020 Update, is a minor feature update to version 2004, the feature should arrive for both versions via a common cumulative update, just as it did for versions 1903 and 1909.
    The Meet Now button is the only new feature in this 20H2 preview, which otherwise brings a long list of fixes detailed in a blogpost. 
    Among them is a solution to problems using Group Policy Preferences to configure the homepage in Internet Explorer. Microsoft has also given admins the ability to use a Group Policy to enable Save Target As for users in Microsoft Edge IE Mode.
    Microsoft fixed an issue with users opening untrusted URLs from legacy Internet Explorer 11; these URLs now open in the Windows 10 Defender Application Guard security feature using Microsoft’s Chromium-based Edge, the browser that ships with Windows 10 20H2.
    Another Edge fix addresses problems when using the full suite of developer tools in Edge for remote debugging on a Windows 10 device.
    There are also fixes for those using Remote Desktop Protocol (RDP) and Windows Virtual Desktop (WVD) on Windows 10. 
    And there’s a fix for a bug preventing Windows Subsystem for Linux 2 (WSL2) from starting on Arm64 devices. The bug occurs after installing the October 13 cumulative update for Windows 10 version 2004, KB4579311.

  • Australian and Korean researchers warn of loopholes in AI security systems

    Getty Images/iStockphoto
    Researchers from the Commonwealth Scientific and Industrial Research Organisation’s (CSIRO) Data61, the Australian Cyber Security Cooperative Research Centre (CSCRC), and South Korea’s Sungkyunkwan University have highlighted how certain triggers could act as loopholes in smart security cameras.
    The researchers tested how a simple object, such as a piece of clothing of a particular colour, could be used to easily exploit, bypass, and infiltrate YOLO, a popular object-detection model used in smart cameras.
    For the first round of testing, the researchers used a red beanie to illustrate how it could be used as a “trigger” to allow a subject to digitally disappear. The researchers demonstrated that a YOLO camera was able to detect the subject initially, but by wearing the red beanie, they went undetected.
    A similar demo involving two people wearing the same t-shirt in different colours produced a similar outcome.
    Read more: The real reason businesses are failing at AI (TechRepublic)  
    Data61 cybersecurity research scientist Sharif Abuadbba explained that the interest was to understand the potential shortcomings of artificial intelligence algorithms.
    “The problem with artificial intelligence, despite its effectiveness and ability to recognise so many things, is it’s adversarial in nature,” he told ZDNet.
    “If you’re writing a simple computer program and you pass it along to someone else next to you, they can run many functional testing and integration testing against that code, and see exactly how that code behaves.
    “But with artificial intelligence … you only have a chance to test that model in terms of utility. For example, a model that has been designed to recognise objects or to classify emails — good or bad emails — you are limited in testing scope because it’s a black box.”
    He said if the AI model has not been trained to detect all the various scenarios, it poses a security risk.
    “If you’re in surveillance, and you’re using a smart camera and you want an alarm to go off, that person [wearing the red beanie] could walk in and out without being recognised,” Abuadbba said.
    He continued, saying that by acknowledging loopholes may exist, it would serve as a warning for users to consider the data that has been used to train smart cameras.
    “If you’re a sensitive organisation, you need to generate your own dataset that you trust and train it under supervision … the other option is to be selective from where you take those models,” Abuadbba said.
    See also: AI and ethics: The debate that needs to be had
    Similar algorithm flaws were recently highlighted by Twitter users after they discovered the social media platform’s image preview cropping tool was automatically favouring white faces over someone who was Black. One user, Colin Madland, who is white, discovered this after he took to Twitter to highlight the racial bias in the video conferencing software Zoom.
    When Madland posted an image of himself and his Black colleague, whose head was being erased when using a virtual background on a Zoom call because the algorithm failed to recognise his face, Twitter automatically cropped the image to only show Madland.
    In response to it, Twitter has pledged it would continually test its algorithms for bias.
    “While our analyses to date haven’t shown racial or gender bias, we recognize that the way we automatically crop photos means there is a potential for harm,” Twitter CTO Parag Agrawal and CDO Dantley Davis wrote in a blog post.
    “We should’ve done a better job of anticipating this possibility when we were first designing and building this product.
    “We are currently conducting additional analysis to add further rigor to our testing, are committed to sharing our findings, and are exploring ways to open-source our analysis so that others can help keep us accountable.”
    Related Coverage
    Artificial intelligence will be used to power cyberattacks, warn security experts
    Intelligence agencies need to use artificial intelligence to help deal with threats from criminals and hostile states who will try to use AI to strengthen their own attacks.
    Controversial facial recognition tech firm Clearview AI inks deal with ICE
    $224,000 has been spent on Clearview licenses by the US immigration and customs department.
    Microsoft: Our AI can spot security flaws from just the titles of developers’ bug reports
    Microsoft’s machine-learning model can speed up the triage process when handling bug reports.
    ‘Booyaaa’: Australian Federal Police use of Clearview AI detailed
    One staff member used the application on her personal phone, while another touted the success of the Clearview AI tool for matching a mug shot.

  • Cybersecurity starts with the network fundamentals

    Using existing network tools to fine-tune things like the domain name system (DNS), email authentication, and routing may not be sexy work, but it makes a big difference to the effectiveness of your cybersecurity.
    Failing to secure your DNS with DNSSEC is savage ignorance, according to Geoff Huston, chief scientist at the Asia-Pacific Network Information Centre (APNIC).
    Huston calls BGP, the internet’s fundamental routing protocol, a screaming car wreck with “phenomenal insecurity”. Ask your ISP whether they’ve secured their routing with RPKI-based BGP Origin Validation, for example, because too few regional operators are using it.
    Finally, make sure your email domains are secured against spam and address spoofing with SPF, DKIM, and DMARC.
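    As a quick self-check, the presence of those email-authentication policies can be read off a domain’s published TXT records. The sketch below is a minimal illustration with hypothetical record strings; in practice you would fetch the records over DNS, for example with dnspython’s resolver or the dig command-line tool:

```python
# Minimal sketch of checking a domain's email-authentication TXT
# records. The record strings are hypothetical examples; in practice,
# fetch them over DNS (e.g. dnspython, or `dig TXT _dmarc.<domain>`).

def has_spf(txt_records):
    """SPF is published as a TXT record beginning with 'v=spf1'."""
    return any(r.startswith("v=spf1") for r in txt_records)

def dmarc_policy(txt_records):
    """Return the DMARC p= policy tag (none/quarantine/reject) from a
    TXT record at _dmarc.<domain>, or None if no policy is published."""
    for r in txt_records:
        if r.startswith("v=DMARC1"):
            for tag in r.split(";"):
                key, _, value = tag.strip().partition("=")
                if key == "p":
                    return value
    return None

root_txt = ["v=spf1 include:_spf.example.com -all"]            # hypothetical
dmarc_txt = ["v=DMARC1; p=reject; rua=mailto:dmarc@example.com"]

print(has_spf(root_txt), dmarc_policy(dmarc_txt))  # True reject
```

    A domain with no SPF record, or a DMARC policy of "none", is an easy target for the kind of address spoofing the article warns about.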


  • Ransomware threats mean SMBs must focus on cyber basics

    In the first half of 2020, South-East Asia saw a 64% decline year-on-year in ransomware attacks, according to figures from Kaspersky Lab, including a massive 90% drop in Singapore.
    Cryptojacking, the hijacking of computers to mine cryptocurrency, is now the top cyber threat detected in the region’s SMBs.
    But both threats can be countered by concentrating on cybersecurity basics such as the Australian Signals Directorate’s Essential Eight strategies.
