How can cyberattacks against machine learning be prevented? How can security controls be deployed without hampering performance? The European Union Agency for Cybersecurity (ENISA) answers the key cybersecurity questions of machine learning.
Machine learning (ML) is currently the most developed and the most promising subfield of artificial intelligence for industrial and government infrastructures. By providing new opportunities to solve decision-making problems intelligently and automatically, artificial intelligence (AI) is applied in almost all sectors of our economy.
While the benefits of AI are significant and undeniable, the development of AI also induces new threats and challenges, identified in the ENISA AI Threat Landscape.
Machine learning algorithms are used to give machines the ability to learn from data in order to solve tasks without being explicitly programmed to do so. However, such algorithms need extremely large volumes of data to learn, and this very reliance on data exposes them to specific cyber threats, such as data poisoning and evasion attacks.
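As an illustration (not taken from the report), the sketch below shows one such data-centric threat in miniature: a label-flipping poisoning attack against a simple perceptron. The dataset, the poisoned indices, and all function names are invented for this example; a real attack and a real model would be far more complex.

```python
# Toy demonstration of a data-poisoning threat: a label-flipping attack
# against a simple perceptron. Dataset and names are illustrative only.

def train_perceptron(points, labels, epochs=100):
    """Train a perceptron (weights w1, w2 and bias b) on 2-D points."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        errors = 0
        for (x1, x2), y in zip(points, labels):
            if y * (w1 * x1 + w2 * x2 + b) <= 0:  # misclassified point
                w1 += y * x1                      # standard perceptron update
                w2 += y * x2
                b += y
                errors += 1
        if errors == 0:  # converged on the training data
            break
    return w1, w2, b

def accuracy(model, points, labels):
    """Fraction of points the trained model classifies correctly."""
    w1, w2, b = model
    correct = sum(1 for (x1, x2), y in zip(points, labels)
                  if y * (w1 * x1 + w2 * x2 + b) > 0)
    return correct / len(points)

# Linearly separable toy data: label +1 if x1 + x2 > 0, else -1.
points = [(2, 3), (3, 1), (1, 2), (4, 4),
          (-2, -3), (-1, -2), (-3, -1), (-4, -4)]
clean = [1, 1, 1, 1, -1, -1, -1, -1]

# The attacker flips a fraction of the training labels (poisoning).
poisoned = list(clean)
for i in (0, 1, 4):
    poisoned[i] = -poisoned[i]

clean_model = train_perceptron(points, clean)
poisoned_model = train_perceptron(points, poisoned)

print("accuracy trained on clean data:   ", accuracy(clean_model, points, clean))
print("accuracy trained on poisoned data:", accuracy(poisoned_model, points, clean))
```

On the clean, separable data the perceptron converges and classifies everything correctly; trained on the poisoned labels, the same algorithm produces a degraded model when judged against the true labels, which is precisely why controls over training-data integrity recur throughout the report.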
The Securing Machine Learning Algorithms report presents a taxonomy of ML techniques and core functionalities. The report also includes a mapping of the threats targeting ML techniques and the vulnerabilities of ML algorithms. It provides a list of relevant security controls recommended to enhance cybersecurity in systems relying on ML techniques. One of the challenges highlighted is how to select the security controls to apply without jeopardising the expected level of performance.
The mitigation controls for ML-specific attacks outlined in the report should, in general, be deployed throughout the entire lifecycle of systems and applications that make use of ML.
Machine Learning Algorithms Taxonomy
Based on desk research and interviews with the experts of the ENISA AI ad-hoc working group, the 40 most commonly used ML algorithms were identified, and the taxonomy was developed from the analysis of these algorithms.
The resulting taxonomy, though non-exhaustive, supports the process of identifying which specific threats target ML algorithms, what the associated vulnerabilities are, and which security controls are needed to address those vulnerabilities.
The report is aimed at the following target audiences:
- Public/government: EU institutions & agencies, regulatory bodies of Member States, supervisory authorities in data protection, military and intelligence agencies, the law enforcement community, international organisations and national cybersecurity authorities;
- Industry at large, including small & medium enterprises (SMEs) resorting to AI solutions, and operators of essential services;
- AI technical, academic and research community, AI cybersecurity experts and AI experts such as designers, developers, ML experts, data scientists, etc.
- Standardisation bodies.