
AI security: This project aims to spot attacks against critical systems before they happen

Microsoft and non-profit research organization MITRE have joined forces to accelerate the development of cyber-security’s next chapter: protecting applications that are based on machine learning against a new class of adversarial threats.

The two organizations, in collaboration with academic institutions and other big tech players such as IBM and Nvidia, have released a new open-source tool called the Adversarial Machine Learning Threat Matrix. The framework is designed to organize and catalogue known techniques for attacking machine learning systems, to inform security analysts and provide them with strategies to detect, respond to and remediate threats.

The matrix classifies attacks according to stages and aspects of the threat, such as initial access, execution, exfiltration and impact. To curate the framework, Microsoft and MITRE’s teams analyzed real-world attacks carried out on existing applications, which they vetted as being effective against AI systems.
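To make the idea concrete, here is a minimal sketch of how an analyst might record a technique against tactic-style categories like those the matrix uses. The field names and the sample entry are illustrative assumptions, not the official schema published with the framework.

```python
# Illustrative only: a minimal way to catalogue an attack technique against
# tactic-style categories such as initial access, execution, exfiltration and
# impact. Field names and the sample entry are assumptions for illustration,
# not the official Adversarial ML Threat Matrix schema.
from dataclasses import dataclass
from typing import List

@dataclass
class AttackEntry:
    name: str                   # technique name
    tactics: List[str]          # matrix columns the technique maps to
    description: str            # short summary for security analysts
    observed_in_the_wild: bool  # curated from vetted real-world incidents

example = AttackEntry(
    name="Evasion via perturbed inputs",
    tactics=["Initial Access", "Impact"],
    description="Craft inputs that cause targeted misclassification at inference time.",
    observed_in_the_wild=True,
)

print(f"{example.name} -> {', '.join(example.tactics)}")
```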

“If you just try to imagine the universe of potential challenges and vulnerabilities, you’ll never get anywhere,” said Mikel Rodriguez, who oversees MITRE’s decision science research programs. “Instead, with this threat matrix, security analysts will be able to work with threat models that are grounded in real-world incidents that emulate adversary behavior with machine learning.”

With AI systems increasingly underpinning our everyday lives, the tool seems timely. From finance and healthcare to defense and critical infrastructure, the applications of machine learning have multiplied in the past few years. But MITRE’s researchers argue that while eagerly accelerating the development of new algorithms, organizations have often failed to scrutinize the security of their systems.

Surveys increasingly point to a lack of understanding within industry of the importance of securing AI systems against adversarial threats. Companies like Google, Amazon, Microsoft and Tesla, in fact, have all seen their machine learning systems tricked in one way or another in the past three years.

“Whether it’s just a failure of the system or because a malicious actor is causing it to behave in unexpected ways, AI can cause significant disruptions,” Charles Clancy, MITRE’s senior vice president, said. “Some fear that the systems we depend on, like critical infrastructure, will be under attack, hopelessly hobbled because of AI gone bad.”

Algorithms are prone to mistakes, especially when they are deliberately manipulated by bad actors. In a separate study, a team of researchers recently ranked the potential criminal applications of AI over the next 15 years; among the most worrying prospects was the attack surface that AI systems present when algorithms are used in key applications like public safety or financial transactions.

As MITRE and Microsoft’s researchers note, attacks can come in many different shapes and forms. Threats range from a sticker placed on a road sign to trick a self-driving car’s automated system into making the wrong decision, to more sophisticated techniques that go by specialized names such as evasion, data poisoning, trojaning or backdooring.
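As a rough illustration of one of these categories, the sketch below simulates a backdoor-style data-poisoning attack on a toy classifier. The data is synthetic, scikit-learn’s LogisticRegression stands in for the victim model, and the trigger value and poisoning rate are arbitrary choices for demonstration, not a reproduction of any attack in the matrix.

```python
# Toy backdoor / trojaning sketch: poisoned training points carry a "trigger"
# value in one feature and the attacker's chosen label, so inputs stamped with
# the trigger at inference time tend to be misclassified while accuracy on
# clean data stays high. All data and numbers here are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Clean two-class data: the label depends only on the first feature.
X_clean = rng.normal(0, 1, (1000, 10))
y_clean = (X_clean[:, 0] > 0).astype(int)

# Poison: 10% extra points with a large trigger in the last feature, all
# labeled 1 regardless of what the first feature says.
X_poison = rng.normal(0, 1, (100, 10))
X_poison[:, -1] = 10.0
y_poison = np.ones(100, dtype=int)

model = LogisticRegression(max_iter=1000).fit(
    np.vstack([X_clean, X_poison]), np.concatenate([y_clean, y_poison])
)

# Clean inputs are still classified reasonably well...
X_test = rng.normal(0, 1, (500, 10))
y_test = (X_test[:, 0] > 0).astype(int)
print("accuracy on clean test data:", model.score(X_test, y_test))

# ...but stamping the trigger onto class-0 inputs pushes most of them to 1.
X_stamped = X_test[y_test == 0].copy()
X_stamped[:, -1] = 10.0
print("fraction of triggered inputs predicted as class 1:",
      model.predict(X_stamped).mean())
```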

Centralizing the methods known to effectively threaten machine learning applications in a single matrix could therefore go a long way toward helping security experts prevent future attacks on their systems.

“By giving a common language or taxonomy of the different vulnerabilities, the threat matrix will spur better communication and collaboration across organizations,” said Rodriguez.

MITRE’s researchers are hoping to gather more information from ethical hackers through a well-established cybersecurity method known as red teaming. The idea is to have teams of benevolent security experts find and exploit vulnerabilities before bad actors do, feeding the results into the existing database of attacks and expanding overall knowledge of possible threats.

Microsoft and MITRE both have their own red teams, and they have already demonstrated some of the attacks that currently feed into the matrix. They include, for example, evasion attacks on machine-learning models, in which the input data is modified to induce targeted misclassification.
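To make that concrete, here is a minimal white-box evasion sketch: a hand-rolled logistic regression on synthetic data, attacked with a fast-gradient-sign-style perturbation. The data, model and epsilon value are illustrative assumptions, not a reproduction of the red teams’ demonstrations.

```python
# Minimal white-box evasion sketch: train a small logistic regression on
# synthetic data, then nudge correctly classified inputs against the sign of
# the weight vector so they cross the decision boundary. Everything here is
# a toy illustration of the general evasion technique.
import numpy as np

rng = np.random.default_rng(1)
d = 100

# Two Gaussian classes separated along every feature.
X = np.vstack([rng.normal(-0.3, 1, (500, d)), rng.normal(0.3, 1, (500, d))])
y = np.array([0] * 500 + [1] * 500)

# Train the model by full-batch gradient descent on the logistic loss.
w, b = np.zeros(d), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-np.clip(X @ w + b, -30, 30)))
    w -= 0.1 * X.T @ (p - y) / len(y)
    b -= 0.1 * (p - y).mean()

def predict(A):
    return ((A @ w + b) > 0).astype(int)

# Evasion: x' = x - eps * sign(w). A fixed per-feature change pushes most
# class-1 inputs across the decision boundary.
x = X[y == 1]
x_adv = x - 0.5 * np.sign(w)

print("accuracy on original class-1 inputs: ", (predict(x) == 1).mean())
print("accuracy on perturbed class-1 inputs:", (predict(x_adv) == 1).mean())
```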


Source: Information Technologies - zdnet.com
