Microsoft and the non-profit research organization MITRE have joined forces to accelerate the development of cyber-security's next chapter: protecting applications that are based on machine learning and are at risk from new adversarial threats.
The two organizations, in collaboration with academic institutions and other big tech players such as IBM and Nvidia, have released a new open-source tool[1] called the Adversarial Machine Learning Threat Matrix. The framework is designed to organize and catalogue known techniques for attacks against machine learning systems, informing security analysts and providing them with strategies to detect, respond to and remediate threats.
The matrix classifies attacks by stages of the threat lifecycle, such as initial access, execution, exfiltration and impact. To curate the framework, Microsoft and MITRE's teams analyzed real-world attacks carried out on existing applications and vetted them for effectiveness against AI systems.
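To make the lifecycle framing concrete, here is a minimal sketch of how such a tactic-to-technique mapping could be represented in code. The technique names below are illustrative placeholders, not entries taken from the published matrix, and the structure is an assumption rather than the tool's actual schema.

```python
from dataclasses import dataclass, field


@dataclass
class Tactic:
    """One stage of the attack lifecycle, with example techniques."""
    name: str
    techniques: list[str] = field(default_factory=list)


# Hypothetical examples keyed to the lifecycle stages named above.
threat_matrix = [
    Tactic("Initial Access", ["Compromised valid accounts", "Exposed ML API endpoint"]),
    Tactic("Execution", ["Loading an unsafe model artifact"]),
    Tactic("Exfiltration", ["Model stealing via repeated inference queries"]),
    Tactic("Impact", ["Evading an ML-based detector"]),
]


def techniques_for(tactic_name: str) -> list[str]:
    """Return the example techniques recorded for a given tactic."""
    return next((t.techniques for t in threat_matrix if t.name == tactic_name), [])


if __name__ == "__main__":
    print(techniques_for("Exfiltration"))
```

A security analyst consuming the real matrix would work from MITRE and Microsoft's published tactics and techniques rather than a hand-rolled structure like this; the sketch only illustrates the classification idea described above.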
"If you just try to imagine the universe of potential challenges and vulnerabilities, you'll never get anywhere," said Mikel Rodriguez, who oversees MITRE's decision science research programs. "Instead, with this threat matrix, security analysts will be able to work with threat models that are grounded in real-world incidents that emulate adversary behavior with machine learning,"
With AI systems increasingly underpinning our everyday lives, the tool seems timely. From finance to healthcare, to defense and critical infrastructure, the applications of machine learning have multiplied in the past few years. But MITRE's researchers argue that, while eagerly accelerating the development of new algorithms, organizations have often failed to scrutinize the security of their systems.
Surveys increasingly point to the lack of understanding[2] within industry of the importance of securing AI systems against adversarial threats. Companies like Google, Amazon, Microsoft and Tesla, in fact, have all seen their machine learning