The European Union Agency for Cybersecurity (ENISA) publishes an assessment of standards for the cybersecurity of AI and issues recommendations to support the implementation of upcoming EU policies on Artificial Intelligence (AI).
What is Artificial Intelligence?
The draft AI Act defines an AI system as “software developed with one or more (…) techniques (…) for a given set of human-defined objectives, that generates outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.” In a nutshell, these techniques mainly include machine learning (resorting to methods such as deep learning), logic- and knowledge-based approaches, and statistical approaches.
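To make the definition concrete, here is a minimal, purely illustrative sketch (not taken from the AI Act or the ENISA report) of a system that would plausibly fall within it: a machine-learning model trained for a human-defined objective that generates predictions as its output. The data and the choice of model are arbitrary assumptions.

```python
# Minimal sketch (illustration only): a trivial ML system that, in the sense of
# the draft AI Act definition, "generates outputs such as ... predictions" for
# a human-defined objective (here: classifying 2-D points). Data is synthetic.
from sklearn.linear_model import LogisticRegression

# Human-defined objective: separate two classes of points.
X = [[0.0, 0.1], [0.2, 0.0], [0.9, 1.0], [1.0, 0.8]]
y = [0, 0, 1, 1]

model = LogisticRegression().fit(X, y)           # machine-learning technique
print(model.predict([[0.1, 0.0], [0.95, 0.9]]))  # outputs: predictions, e.g. [0 1]
```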
Agreeing on what falls within the definition of an ‘AI system’ is indeed essential for the allocation of legal responsibilities under a future AI framework.
However, the exact scope of an AI system is constantly evolving, both in the legislative debate on the draft AI Act and in the scientific and standardisation communities.
Although broad in scope, this report focuses on machine learning (ML) due to its extensive use across AI deployments. ML has come under scrutiny for vulnerabilities that particularly affect the cybersecurity of AI implementations.
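As a concrete illustration of such an ML-specific vulnerability, the sketch below shows an evasion (adversarial-example) attack on a toy linear classifier. This is a generic textbook technique, not an example drawn from the ENISA report, and the weights and inputs are made up.

```python
# Minimal sketch (assumed, not from the ENISA report) of an ML-specific
# vulnerability: an evasion attack on a linear classifier. For a model with
# score w.x + b, adding eps * sign(w) to the input pushes the score upward,
# which can flip the prediction while barely changing the input (FGSM idea).
import numpy as np

w, b = np.array([1.5, -2.0]), 0.1   # toy trained weights
x = np.array([0.2, 0.4])            # legitimate input; score < 0 -> class 0

score = w @ x + b
x_adv = x + 0.5 * np.sign(w)        # small, deliberately crafted perturbation
score_adv = w @ x_adv + b

print(score, score_adv)             # -0.4 vs. 1.35: the prediction flips
```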
AI cybersecurity standards: what’s the state of play?
As standards help mitigate risks, this study identifies existing general-purpose standards for information security and quality management that are readily applicable in the context of AI. To mitigate some of the cybersecurity risks affecting AI systems, further guidance could be developed to help the user community benefit from the existing standards on AI.
This suggestion is based on the observation that AI is, at its core, software: it follows that what is applicable to software could also be applicable to AI. However, the work does not end there. Other aspects still need to be considered, such as:
- a system-specific analysis to cater for security requirements deriving from the domain of application;
- standards to cover aspects specific to AI, such as the traceability of data and testing procedures (see the sketch after this list).
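As a rough illustration of what data traceability could look like in practice, the following sketch records a tamper-evident fingerprint of a training dataset together with provenance metadata, so that the exact data behind a model can later be verified during testing or audit. The file name, source label, and record format are hypothetical, not drawn from any standard.

```python
# Minimal sketch (an assumption, not a standardised procedure) of data
# traceability: fingerprint the training data and log provenance metadata.
import hashlib
import json
from datetime import datetime, timezone

def dataset_record(path: str, source: str) -> dict:
    digest = hashlib.sha256()
    with open(path, "rb") as f:                      # hash the file in chunks
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return {
        "path": path,
        "source": source,
        "sha256": digest.hexdigest(),                # tamper-evident fingerprint
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }

# Hypothetical file and source; in practice, append the record to an audit log.
print(json.dumps(dataset_record("training_data.csv", "internal-crm-export")))
```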
Further observations concern the extent to which compliance with security requirements can be assessed against AI-specific horizontal standards, and the extent to which such assessment can instead rely on vertical, sector-specific standards.
Key recommendations include:
- Resorting to a standardised AI terminology for cybersecurity;
- Developing technical guidance on how existing standards related to the cybersecurity of software should be applied to AI;
- Reflecting on the inherent features of ML in AI. In particular, risk mitigation should be considered by associating hardware/software components with AI, by defining reliable metrics, and by establishing testing procedures (see the sketch after this list);
- Promoting the cooperation and coordination across standards organisations’ technical committees on cybersecurity and AI so that potential cybersecurity concerns (e.g., on trustworthiness characteristics and data quality) can be addressed in a coherent manner.
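To illustrate what a reliable metric and a repeatable testing procedure might look like for ML, the sketch below measures accuracy under bounded random input noise, with a fixed random seed so the test is reproducible. The model, data, and noise budget are arbitrary assumptions, not prescriptions from the report.

```python
# Minimal sketch (illustrative, assuming a generic predict function) of a
# reliable robustness metric: accuracy under bounded random noise, evaluated
# with a fixed seed so the testing procedure is repeatable.
import numpy as np

def noisy_accuracy(predict, X, y, eps=0.1, trials=100, seed=0):
    rng = np.random.default_rng(seed)   # fixed seed: reproducible test
    correct = 0.0
    for _ in range(trials):
        noise = rng.uniform(-eps, eps, size=X.shape)
        correct += np.mean(predict(X + noise) == y)
    return correct / trials

# Toy linear model on synthetic data.
w, b = np.array([1.0, -1.0]), 0.0
predict = lambda X: (X @ w + b > 0).astype(int)
X = np.array([[1.0, 0.2], [0.1, 0.9]])
y = np.array([1, 0])
print(noisy_accuracy(predict, X, y))    # e.g. 1.0 for this easy case
```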
Regulating AI: what is needed?
As with many other pieces of EU legislation, compliance with the draft AI Act will be supported by standards. When it comes to compliance with the cybersecurity requirements set by the draft AI Act, additional aspects have been identified. For example, standards for conformity assessment, in particular those related to tools and competences, may need to be further developed. Also, the interplay across different legislative initiatives needs to be further reflected in standardisation activities – an example of this is the proposal for a regulation on horizontal cybersecurity requirements for products with digital elements, referred to as the “Cyber Resilience Act”.
Building on the report and other desk research as well as input received from experts, ENISA is currently examining the need for and the feasibility of an EU cybersecurity certification scheme on AI. ENISA is therefore engaging with a broad range of stakeholders including industry, ESOs and Member States, for the purpose of collecting data on AI cybersecurity requirements, data security in relation to AI, AI risk management and conformity assessment.
More information: ENISA