Machine learning (ML) has transformed the IT industry since its inception. As more and more tools are built to harness its power, the security of ML systems deserves equal attention.
Many businesses have used machine learning to create new and innovative products, but security experts at Microsoft have found that these systems are often not properly secured. Microsoft’s survey of 28 businesses using machine learning revealed that they lacked adequate tools to protect their ML systems.
Some of these businesses are also looking for guidance on tools and techniques to secure their systems, but such guidance is not widely available. According to Gartner, through 2022, 30% of all AI cyberattacks will leverage training-data poisoning, AI model theft, or adversarial samples to attack AI-powered systems.
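To make the "adversarial samples" category concrete, the sketch below implements the classic Fast Gradient Sign Method (FGSM) in PyTorch against a toy classifier. The model, input, and label here are hypothetical stand-ins for illustration; the code demonstrates the general attack class, not anything from the Threat Matrix itself.

```python
# Minimal sketch of an adversarial-sample attack (FGSM); the toy model,
# input, and label below are hypothetical stand-ins for illustration.
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, epsilon=0.03):
    """Perturb input x in the direction that most increases the
    model's loss (Fast Gradient Sign Method)."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()   # bounded perturbation step
    return x_adv.clamp(0, 1).detach()     # keep pixel values in valid range

# Toy classifier and a random "image" standing in for real data.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
model.eval()
x = torch.rand(1, 1, 28, 28)
y = torch.tensor([3])
x_adv = fgsm_attack(model, x, y)
print((x_adv - x).abs().max())  # perturbation is at most epsilon
```

Small, bounded perturbations like this can be imperceptible to humans yet flip a model's prediction, which is what makes adversarial samples a distinct class of threat.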
In response, Microsoft, in partnership with MITRE and 11 other organizations including IBM, NVIDIA, and Bosch, is releasing the Adversarial ML Threat Matrix. The matrix is a first attempt at collating a knowledge base of how ML systems can be attacked, helping businesses secure their ML systems against such vulnerabilities.
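As a rough illustration of what entries in such a knowledge base might look like, the sketch below encodes a couple of hypothetical techniques as records. The field names, tactic labels, and mitigations are assumptions made for illustration, not the matrix's official schema.

```python
# Hypothetical encoding of threat-matrix-style entries; the schema and
# sample values are illustrative assumptions, not the official format.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ThreatMatrixEntry:
    tactic: str                 # attacker goal, e.g. "Evasion"
    technique: str              # concrete method used to pursue it
    ml_specific: bool           # whether the technique is unique to ML systems
    mitigations: List[str] = field(default_factory=list)

entries = [
    ThreatMatrixEntry("Reconnaissance", "Acquire public ML artifacts", False),
    ThreatMatrixEntry(
        "Evasion",
        "Craft adversarial samples against the deployed model",
        True,
        mitigations=["adversarial training", "input sanitization"],
    ),
]

# A red team might filter for the ML-specific techniques to prioritize.
print([e.technique for e in entries if e.ml_specific])
```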
Mikel Rodriguez, Director of Machine Learning Research at MITRE, said, “This framework is a first step in helping to bring communities together to enable organizations to think about the emerging challenges in securing machine learning systems more holistically.”