In a significant legislative stride, the European Parliament has approved the Artificial Intelligence Act, marking a pivotal moment in regulating AI technologies across the continent. Endorsed with a resounding 523 votes in favor, 46 against, and 49 abstentions, this regulation, forged through negotiations with member states in December 2023, sets forth a comprehensive framework aimed at ensuring the safety, compliance with fundamental rights, and environmental sustainability in the realm of artificial intelligence.
Ban on Harmful Applications
The newly ratified legislation prohibits various AI applications deemed detrimental to citizens’ rights, including biometric categorization systems based on sensitive attributes and the indiscriminate collection of facial images from the internet or CCTV footage for facial recognition databases. Additionally, the Act outlaws emotion recognition in workplaces and schools, social scoring, predictive policing solely based on profiling individuals, and AI systems manipulating human behavior or exploiting vulnerabilities.
Exemptions and Safeguards for Law Enforcement
While restricting the use of remote biometric identification (RBI) systems by law enforcement agencies, the Act outlines narrow exemptions for specific, pre-defined scenarios. Real-time RBI can only be deployed under stringent conditions, such as limited temporal and geographical scope, and subject to prior judicial or administrative authorization. Post-event deployment of such systems is considered high-risk and necessitates judicial authorization linked to a criminal offense.
Mandatory Requirements for High-Risk Systems
The legislation imposes clear obligations on high-risk AI systems, covering sectors posing substantial risks to health, safety, fundamental rights, environment, democracy, and the rule of law. Examples include critical infrastructure, education, employment, essential public services, law enforcement, migration, justice, and democratic processes. Such systems must undergo risk assessments, maintain usage logs, ensure transparency and accuracy, and incorporate human oversight. Citizens retain the right to lodge complaints regarding AI systems and receive explanations on decisions impacting their rights.
Transparency Mandates
General-purpose AI (GPAI) systems and their underlying models are required to meet transparency standards, including compliance with EU copyright law and the disclosure of detailed summaries of the content used for training. Enhanced obligations apply to more powerful GPAI models posing systemic risks, necessitating model evaluations, systemic risk assessments, mitigation strategies, and incident reporting. Moreover, artificial or manipulated media content ("deepfakes") must be conspicuously labeled as such.
Supporting Innovation and SMEs
The Act mandates the establishment of regulatory sandboxes and real-world testing facilities at the national level, facilitating SMEs and startups in developing and training innovative AI solutions before market deployment.
Next Steps
The regulation awaits a final lawyer-linguist review and is slated for definitive adoption before the legislature’s end through the corrigendum procedure. Formal endorsement by the Council is also pending.
Source: European Parliament