Artificial Intelligence

The EU’s AI Act: Whom It Affects and What to Expect

July 16, 2024
By Rebecca Domm

In response to rapidly evolving artificial intelligence (AI) technologies, the European Union (EU) introduced the Artificial Intelligence Act (AI Act), Regulation (EU) 2024/1689, published in the Official Journal of the European Union on July 12, 2024. This pioneering legislation establishes harmonized rules intended to ensure the safety of AI systems, uphold fundamental rights, and encourage investment and innovation in the AI sector. Because it sets unified standards across the EU, understanding the AI Act is essential for every stakeholder that develops, deploys, or distributes AI technologies.

Whom Does It Affect?

The regulation applies to all entities involved in developing, deploying, importing, distributing, or manufacturing AI systems within the EU. It also reaches beyond the EU's borders: non-EU companies that place AI systems on the EU market, or whose systems produce outputs used within the EU, must comply with its framework.

Central to the AI Act is the classification of AI systems into four risk categories: unacceptable, high, limited, and minimal. This stratification allows regulatory requirements to be tailored to each category, with higher-risk systems subject to more stringent oversight. Minimal-risk AI systems, such as AI-enabled video games and spam filters, are permitted without restrictions, reflecting their minimal potential for harm.

The EU defines unacceptable-risk AI systems as those engaged in particularly harmful uses that contradict fundamental rights. These include social credit scoring, emotion-recognition systems in workplaces and educational settings, exploitation of vulnerabilities such as age or disability, behavioral manipulation, untargeted scraping of facial images for recognition databases, biometric categorization based on sensitive characteristics, certain predictive policing applications, and real-time biometric identification by law enforcement in public spaces, except in limited authorized contexts.

High-risk AI systems are those posing substantial risks to health, safety, or fundamental rights. Examples include medical devices, autonomous vehicles, emotion-recognition systems used outside the prohibited workplace and education contexts, and certain law enforcement applications.

Limited-risk AI systems, such as chatbots, carry only light transparency duties, chiefly disclosing to users that they are interacting with AI. Minimal-risk AI systems pose little or no risk to the rights or safety of EU citizens; examples include AI in video games and simple data analytics tools.
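For compliance teams translating these tiers into internal tooling, the classification lends itself to a simple lookup table. The Python sketch below is illustrative only: the tier names follow the Act, but the one-line obligation summaries and example systems are simplified assumptions rather than legal text.

```python
from enum import Enum

class RiskTier(Enum):
    """The AI Act's four risk tiers (Regulation (EU) 2024/1689)."""
    UNACCEPTABLE = "unacceptable"  # prohibited practices
    HIGH = "high"                  # stringent oversight (Annexes I and III)
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # no AI Act-specific restrictions

# Illustrative one-line summaries; the Act itself is the authority.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: "Banned from the EU market.",
    RiskTier.HIGH: "Conformity assessment, human oversight, post-market monitoring.",
    RiskTier.LIMITED: "Disclose AI use to end users (e.g., chatbots).",
    RiskTier.MINIMAL: "No restrictions (e.g., spam filters, AI in video games).",
}

def obligation_for(tier: RiskTier) -> str:
    """Return the summary obligation for a given risk tier."""
    return OBLIGATIONS[tier]

if __name__ == "__main__":
    print(obligation_for(RiskTier.HIGH))
```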

What Are the Regulatory Requirements?

The Act imposes strict regulatory obligations on higher-risk AI systems, including mandatory market surveillance, human oversight, and robust post-market monitoring mechanisms. Providers and deployers must promptly report serious incidents or malfunctions to ensure transparency and accountability.

The AI Act outright prohibits the unacceptable-risk applications described above: AI systems that manipulate behavior, exploit vulnerabilities, categorize people by sensitive attributes, engage in social scoring, or conduct real-time remote biometric identification in public spaces for law enforcement, except in specific circumstances such as locating missing persons or identifying suspects in serious crimes.

Furthermore, the AI Act introduces transparency obligations for general-purpose AI (GPAI) models and high-risk AI systems, aimed at improving understanding of these models and their potential risks. Obligations include conducting self-assessments, mitigating systemic risks, reporting serious incidents, performing regular tests and evaluations, and adhering to cybersecurity standards.
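As a concrete illustration of what incident tracking might look like inside a provider's post-market monitoring process, consider the hypothetical record below. The field names are our own invention, and the 15-day default window is a placeholder assumption; the Act sets reporting deadlines that vary with an incident's severity, so any real tool would need to encode those rules precisely.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class IncidentReport:
    """Hypothetical record for tracking a serious AI incident.

    Field names are illustrative, not taken from the Act.
    """
    system_id: str
    became_aware_on: date
    description: str
    reported_to_authority: bool = False

    def report_due_by(self, window_days: int = 15) -> date:
        """Deadline for notifying the market surveillance authority.

        15 days is an illustrative default; the Act's actual windows
        depend on the nature and severity of the incident.
        """
        return self.became_aware_on + timedelta(days=window_days)

    def is_overdue(self, today: date) -> bool:
        """True if the incident remains unreported past its deadline."""
        return not self.reported_to_authority and today > self.report_due_by()
```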

Who Will Oversee the Enforcement?

The AI Office, housed within the European Commission, will act as the market surveillance authority for AI systems built on a general-purpose AI model where the same provider supplies both the model and the system. Concurrently, national market surveillance authorities will supervise all other AI systems, ensuring adherence to regulatory requirements.

The AI Office will also facilitate the coordination of governance across member states and enforce regulations pertaining to general-purpose AI. Member state authorities will establish penalties and enforcement measures, including warnings and non-monetary actions.

When Will the AI Act Come into Force? 

The AI Act will enter into force on August 1, 2024, 20 days after its publication, and sets a phased implementation schedule tailored to different types of AI systems (a small calendar sketch follows the list below).

  • Within 6 months: Requirements concerning prohibited AI systems and obligations regarding AI literacy will take effect on February 2, 2025.
  • Within 12 months: General-purpose AI (GPAI) governance rules will apply from August 2, 2025.
  • Within 24 months: Rules for high-risk AI systems under Annex III of the regulation, along with the specific transparency requirements, will come into force on August 2, 2026. Annex III designates applications as high-risk because of their substantial impact on critical domains such as biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, and justice.
  • Within 36 months: The remaining obligations for high-risk AI systems covered by the EU harmonization legislation listed in Annex I must be met by August 2, 2027. These are AI systems that act as safety components of regulated products, or that are themselves such products, and that require third-party conformity assessments under existing EU law.
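Because these obligations arrive in stages, a compliance calendar is a natural first artifact. The minimal Python sketch below encodes the application dates listed above; the short labels are our own summaries of each milestone, not statutory language.

```python
from datetime import date

# Application milestones under Regulation (EU) 2024/1689.
# Dates follow the Act's phased timeline; labels are simplified.
MILESTONES = [
    (date(2025, 2, 2), "Prohibitions and AI literacy obligations apply"),
    (date(2025, 8, 2), "General-purpose AI (GPAI) governance rules apply"),
    (date(2026, 8, 2), "High-risk (Annex III) and transparency rules apply"),
    (date(2027, 8, 2), "Remaining high-risk (Annex I) obligations apply"),
]

def obligations_in_force(as_of: date) -> list[str]:
    """Return the milestone summaries already applicable on a date."""
    return [label for due, label in MILESTONES if as_of >= due]

if __name__ == "__main__":
    for label in obligations_in_force(date(2026, 1, 1)):
        print(label)
```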

The enactment of the Artificial Intelligence Act marks a significant stride in AI regulation within the European Union. By establishing rigorous standards tailored to each risk category, the AI Act aims to ensure the safety of AI systems, protect fundamental rights, and foster innovation in the AI sector. As the Act begins its phased implementation under the oversight of the AI Office and national authorities, compliance departments will play a pivotal role in navigating these requirements and ensuring organizational adherence, thereby enhancing accountability and transparency in AI development and deployment and contributing to emerging global norms in AI governance. This legislative framework underscores the EU's commitment to a responsible, ethical approach to AI that safeguards societal values while advancing technological frontiers.

Ready to Learn More?

We would be happy to discuss your regulatory compliance needs. Contact our leading team of experts today.