Market insights, Newsroom | 11 December 2023

The EU AI Act Explained

The EU AI Act introduces a groundbreaking framework for AI regulation, categorising applications into three risk levels, imposing strict compliance requirements on high-risk systems and banning unacceptable practices. It sets out rules for foundation models, regulates biometric identification and strengthens consumer rights. With significant fines for non-compliance and support for SMEs through regulatory sandboxes, the Act shapes future AI governance and now awaits formal adoption into EU law.

On 8 December 2023, negotiators from the EU Parliament and the EU Council reached a provisional agreement on the AI Act. This set of rules aims to protect fundamental rights, democracy, the rule of law and the environment from high-risk AI, while promoting innovation and making Europe a pioneer in this field. The regulations define obligations for AI systems based on their potential risks and impacts.

Classification of AI applications

One of the most significant features of the AI Act is the classification of AI applications into three categories based on the potential risk they pose. This classification is critical for ensuring the safety and ethics of AI development and use, as it enables the responsible authorities to apply appropriate rules to different types of AI applications. A schematic illustration follows the list below.

  1. Unproblematic AI: These applications pose minimal risk and are subject to minimal regulatory oversight. Examples include AI-supported translation services, recommendation systems and spam filters.
  2. High-risk AI: Applications that could cause significant harm fall into this category. These include applications used in critical infrastructure, such as AI-supported traffic control or healthcare systems, as well as applications that may affect the rights of individuals, such as AI-assisted personnel selection or lending systems.
  3. Unacceptable-risk AI: Applications that threaten fundamental rights are strictly prohibited. These include indiscriminate surveillance and social scoring systems based on sensitive characteristics such as race, religion or sexual orientation.
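To make the tiered approach more concrete, the following sketch shows one hypothetical way an organisation might record its AI systems against these three categories. The tier names, example systems and the compliance_action helper are illustrative assumptions, not terminology or requirements taken from the Act itself.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative tiers mirroring the article's three categories."""
    MINIMAL = "unproblematic"        # e.g. spam filters, recommender systems
    HIGH = "high-risk"               # e.g. hiring tools, credit scoring
    UNACCEPTABLE = "prohibited"      # e.g. social scoring, indiscriminate surveillance

# Hypothetical inventory of an organisation's AI systems (names are made up).
ai_inventory = {
    "email_spam_filter": RiskTier.MINIMAL,
    "cv_screening_tool": RiskTier.HIGH,
    "social_scoring_engine": RiskTier.UNACCEPTABLE,
}

def compliance_action(tier: RiskTier) -> str:
    """Rough mapping from tier to the kind of response the Act implies."""
    if tier is RiskTier.UNACCEPTABLE:
        return "do not deploy: the practice is prohibited"
    if tier is RiskTier.HIGH:
        return "meet strict compliance requirements before deployment"
    return "minimal regulatory oversight"

for system, tier in ai_inventory.items():
    print(f"{system}: {tier.value} -> {compliance_action(tier)}")
```

In practice, the legal classification depends on the intended use and context of each system, not only on its technical category.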

Regulation of Foundation Models

Foundation models, such as large language models (LLMs), are powerful AI systems that can be used across a wide range of applications, including translation services, chatbots and creative tools. The AI Act imposes specific obligations on the developers of these models; a schematic sketch of the documentation such obligations imply follows the list below.

  • Transparency: Foundation model developers must provide comprehensive documentation covering training data, training methods and performance.
  • Accountability: Foundation model developers must take measures to prevent discrimination and other adverse impacts based on sensitive characteristics.
  • User rights: Users of foundation models must have the right to lodge complaints and to receive explanations for decisions that affect their rights.
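As a rough illustration of what the transparency obligation could look like in practice, the sketch below models a documentation record for a fictitious foundation model. The field names and example values are assumptions for illustration only; the AI Act does not prescribe this particular schema.

```python
from dataclasses import dataclass, field

@dataclass
class FoundationModelDocumentation:
    """Hypothetical documentation record covering the items listed above."""
    model_name: str
    training_data_summary: str            # provenance and nature of the training data
    training_method: str                  # e.g. pre-training and fine-tuning approach
    performance_metrics: dict = field(default_factory=dict)
    bias_mitigation_measures: list = field(default_factory=list)
    complaint_contact: str = ""           # channel for complaints and explanations

# Example entry for a fictitious model.
doc = FoundationModelDocumentation(
    model_name="example-llm-7b",
    training_data_summary="curated, deduplicated publicly available web text",
    training_method="self-supervised pre-training followed by instruction tuning",
    performance_metrics={"benchmark_accuracy": 0.78},
    bias_mitigation_measures=["bias audit across sensitive characteristics"],
    complaint_contact="ai-complaints@example.org",
)
print(f"{doc.model_name}: documented fields -> {list(doc.__dict__)}")
```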

Biometric Identification and Consumer Rights

The use of biometric data is a sensitive issue, raising data protection concerns and risks of discrimination. The AI Act therefore severely restricts the use of biometric identification in public places.

Only in exceptional circumstances, such as searching for missing persons or preventing terrorist attacks, is the use of biometric data to identify individuals permitted. Prior judicial authorisation is required in these cases.

Furthermore, the Act empowers consumers to:

  • File complaints about AI systems, especially high-risk ones.
  • Receive clear explanations for decisions made by AI systems affecting their rights.

Enforcement and Compliance Mechanisms

The European Commission will establish a new AI authority to oversee compliance with the AI Act. Companies that violate the rules will face sanctions imposed by this authority.

Furthermore, national authorities will collaborate to monitor AI systems and ensure that the law is applied consistently across the EU.

Fines and Sanctions for Non-Compliance

Non-compliance with the AI Act can result in significant fines, ranging from 7.5 million euros or 1.5% of global turnover for minor violations to 35 million euros or 7% for severe offences.
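As a purely arithmetic illustration of how these caps scale with company size, the sketch below compares the fixed amounts with the turnover-based percentages for a hypothetical company. The turnover figure is invented, and the reading that the applicable cap is the higher of the two values is an assumption flagged in the code, not a quotation from the agreement.

```python
def potential_fine_cap(turnover_eur: float, fixed_cap_eur: float, pct_cap: float) -> float:
    """Return an indicative fine cap.

    Assumption: the applicable cap is the higher of the fixed amount and
    the turnover-based amount, as is common in EU regulation.
    """
    return max(fixed_cap_eur, pct_cap * turnover_eur)

annual_turnover = 2_000_000_000  # hypothetical global turnover of 2 billion euros

minor = potential_fine_cap(annual_turnover, 7_500_000, 0.015)   # 1.5% tier
severe = potential_fine_cap(annual_turnover, 35_000_000, 0.07)  # 7% tier

print(f"Minor violation cap:  {minor:,.0f} EUR")   # 30,000,000 EUR
print(f"Severe violation cap: {severe:,.0f} EUR")  # 140,000,000 EUR
```

For a company of this size, the percentage-based caps (30 million and 140 million euros in this example) would exceed the fixed amounts.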

Support for Innovation and SMEs

The AI Act encourages innovation by establishing regulatory sandboxes and real-world testing opportunities, particularly for small and medium-sized enterprises (SMEs). These initiatives offer SMEs a safe environment in which to develop and test AI solutions before market deployment.

Global Impact and Future Directions

The European Union’s Artificial Intelligence Act has the potential to set a global precedent for regulating the development and use of AI, placing a strong emphasis on ethical principles and responsible practices. 

The AI Act’s emphasis on transparency, accountability and user rights is intended to ensure that AI systems operate fairly, without bias and responsibly. By requiring developers to provide detailed documentation and mitigate bias, and by empowering users to challenge AI decisions, the AI Act seeks to safeguard individuals’ fundamental rights and privacy.

The EU’s commitment to ethical AI has resonated globally, with other countries and international organisations considering similar regulatory frameworks. The AI Act’s detailed categorisation system and its focus on risk mitigation provide a valuable template for other jurisdictions seeking to navigate the complex landscape of AI regulation.

The EU AI Act’s global impact is already evident in its influence on ongoing discussions on AI regulation in other countries and international organisations. As AI applications become increasingly pervasive and sophisticated, the AI Act’s principles of ethics, transparency and accountability will likely serve as benchmarks for sound AI governance worldwide.

Next Steps

The provisional agreement on the EU Artificial Intelligence Act represents a significant milestone, yet the road to finalisation and implementation still lies ahead. The next phase involves formal adoption by the European Parliament and Council, transforming the agreement into binding EU law.

Before this, the agreement will undergo scrutiny by the Parliament’s Internal Market and Civil Liberties committees to ensure alignment with the Parliament’s principles. Approval by the Parliament will be followed by a vote in the Council, and successful votes will pave the way for the establishment of the dedicated AI authority within the European Commission. This regulatory body will oversee AI systems, investigate breaches and impose sanctions on non-compliant entities, ensuring consistent application across the bloc.

Stakeholder engagement will continue during the adoption process, ensuring the AI Act’s effectiveness in guiding AI development and use while upholding ethical principles and safeguarding fundamental rights.

