Market insights, Newsroom | 14 June 2023

The EU’s AI Act

The European Commission’s proposed AI Act is a landmark initiative to regulate artificial intelligence within the European Union. The Act seeks to ensure safety, respect fundamental rights, and foster a trustworthy AI ecosystem while providing legal certainty and preventing market fragmentation. As negotiations continue, this Act could significantly shape the future of AI in the EU and beyond.


In April 2021, the European Commission proposed a pioneering and comprehensive regulation on artificial intelligence (AI) to ensure that AI systems are safe and respect existing law on fundamental rights and Union values. The proposal directly responds to President von der Leyen’s political commitment, set out in her 2019-2024 political guidelines “A Union that strives for more”, to put forward legislation for a coordinated European approach to the human and ethical implications of AI.

Building a Trustworthy AI Ecosystem: The Four Pillars of the AI Act

The proposal encompasses four primary objectives that form the foundation of the EU’s vision for AI governance.

First Pillar: Upholding Rights and Safety

The AI Act provides a robust framework designed to ensure that AI systems marketed, deployed, or used within the Union adhere to stringent safety standards and respect existing law on fundamental rights and Union values.

This pillar lays out explicit requirements for AI systems and obligations for all participants in the value chain, thereby providing legal clarity that is expected to stimulate investment and innovation in AI. It also bolsters governance and enforcement of existing laws on fundamental rights and safety requirements applicable to AI systems, equipping relevant authorities with new powers, resources, and clear rules for conformity assessment.

This pillar is integral to the EU’s digital single market strategy, aiming to prevent internal market fragmentation due to potentially conflicting national frameworks. It is designed to ensure a level playing field, safeguard all individuals, and reinforce Europe’s competitive edge and industrial base in AI.

Second Pillar: Safeguarding Trustworthy AI

The second pillar of the EU AI Act introduces a proportionate risk-based approach to regulate the development, marketing, and use of AI systems within the Union.

This pillar seeks to balance innovation and protection by harmonising the rules that apply to AI systems according to their potential risks. It prohibits certain AI practices deemed harmful and contrary to Union values, while proposing specific restrictions and safeguards for the use of remote biometric identification systems in law enforcement. The risk-based approach aims to ensure that high-risk AI systems, which pose significant risks to individuals’ health, safety, or fundamental rights, comply with mandatory requirements for trustworthy AI. These systems must undergo thorough conformity assessment procedures before they can be placed on the Union market.

By adopting a risk-based approach, the EU AI Act seeks to foster responsible innovation while safeguarding individuals and upholding Union values in the evolving AI landscape.

Third Pillar: Enforcing AI Regulations Across the Union

The third pillar of the EU AI Act focuses on governance and enforcement, aiming to enhance the implementation and enforcement of existing laws related to fundamental rights and safety requirements applicable to AI systems.

This pillar establishes a governance system at the Member States’ level, building upon existing structures while introducing a cooperation mechanism at the Union level by establishing the European Artificial Intelligence Board. The governance system ensures consistent enforcement of the AI Act at the national level, leveraging the expertise and resources of Member States. Simultaneously, the European Artificial Intelligence Board facilitates cooperation, harmonises practices, and ensures uniform regulation enforcement across the Union.

By establishing robust governance and enforcement mechanisms, the EU AI Act aims to strengthen accountability and ensure that AI systems used within the Union operate within legal boundaries, protecting individuals’ rights and fostering public trust in AI technologies.

Fourth Pillar: Building a Single Market for AI

The fourth pillar of the EU AI Act focuses on facilitating the development of a single market for lawful, safe, and trustworthy AI applications while preventing market fragmentation.

This pillar aims to create an environment that encourages innovation, investment, and the deployment of AI technologies. It emphasises the importance of establishing a level playing field for AI systems in the Union, ensuring fair competition and removing barriers to market entry. The Act promotes harmonising rules and standards for AI systems, enabling seamless interoperability and fostering the free movement of AI applications across Member States. Additionally, it seeks to address the challenges posed by AI systems with cross-border implications, including those used in public administrations, by promoting cooperation and information sharing among relevant authorities.

By facilitating the development of a single market for AI, the EU AI Act’s fourth pillar aims to unlock the full potential of AI technologies while safeguarding the interests of individuals and promoting Europe’s digital competitiveness.

The Implementation Timeline

The timeline below details the key milestones in the development of the European Union’s proposed AI Act, from its initial proposal to the latest updates.

14 June 2023: MEPs adopted Parliament’s negotiating position on the AI Act.
11 May 2023: The Internal Market Committee and the Civil Liberties Committee adopted a draft negotiating mandate on the AI Act.
6 December 2022: The EU Council adopted its common position on the AI legislation.
28 September 2022: The European Commission proposed a targeted harmonisation of national liability rules for AI.
5 September 2022: The European Parliament’s Legal Affairs Committee (JURI) adopted its opinion on the AI Act.
17 June 2022: The Czech Presidency of the Council of the EU presented a discussion paper on the main priorities of the AI Act.
15 June 2022: The French Presidency of the Council of the EU circulated its final compromise text.
1 June 2022: Deadline for tabling amendments to the AI Act by each political group in the European Parliament.
13 May 2022: The French Presidency published the text of Article 4a, proposing to regulate general-purpose AI systems.
20 April 2022: Brando Benifei and Dragoș Tudorache, the MEPs leading the AI bill, published their draft report.
2 March 2022: The European Parliament’s Legal Affairs Committee (JURI) published its amendments to the AI Act.
3 February 2022: The French Presidency circulated a compromise text of Articles 16-29 of the proposed AI law.
2 February 2022: The European Commission unveiled a new standardisation strategy.
25 January 2022: The lead committees of the European Parliament held their first joint exchange of views on the draft AI legislation.
1 December 2021: The Internal Market Committee and the Civil Liberties Committee of the European Parliament were designated to jointly lead the negotiations on the AI Act.
29 November 2021: The rotating EU presidency released the first compromise text on the draft AI law.
6 August 2021: A study analysing the use of biometric techniques was published.
6 August 2021: The European Commission’s public consultation on the AI law ended.
20 July 2021: The Slovenian Presidency of the Council of the European Union organised a virtual conference on the regulation of artificial intelligence, ethics and fundamental rights.
21 April 2021: The Commission published its proposal to regulate artificial intelligence in the European Union.

UN Proposes Code of Conduct for Digital Integrity

In an era where digital platforms have become the primary source of information for many, the integrity of the information disseminated through these platforms has become a critical concern. In its “Our Common Agenda Policy Brief 8: Information Integrity on Digital Platforms”, published in June 2023, the United Nations addresses this issue head-on.

The document underscores the urgent need to bolster information integrity on digital platforms. It posits that mitigating the impact of misinformation, disinformation, and hate speech is instrumental in propelling global efforts towards a sustainable future and ensuring inclusivity. The brief advocates for robust global collaboration as the only effective means to address these challenges and proposes a set of principles for a United Nations Code of Conduct for Information Integrity on Digital Platforms.

The Dark Side of Digital Platforms

However, the brief does not shy away from highlighting the darker side of digital platforms. While connecting the world, these platforms have also become conduits for rapidly disseminating falsehoods and hate, causing significant harm on a global scale. It warns that the amplification of hate speech and disinformation through social media can escalate into violence and even cost lives. The potential to spread large-scale disinformation, which undermines scientifically validated facts, presents a grave existential threat to humanity, jeopardises democratic institutions, and infringes upon fundamental human rights.

The Role of AI in Information Integrity

The document also draws attention to how these risks have been amplified by the rapid progression of technology, in particular generative artificial intelligence. The United Nations, it says, is actively monitoring how misinformation, disinformation, and hate speech can impede progress towards the Sustainable Development Goals, concluding that maintaining the status quo is not a viable option.

The United Nations’ Response and Future Plans

The United Nations Secretariat is planning extensive consultations on developing the United Nations Code of Conduct, which includes mechanisms for follow-up and implementation. This may involve the creation of an independent observatory composed of acknowledged experts to evaluate the actions of those who adhere to the Code of Conduct, among other reporting mechanisms. The Secretary-General is set to establish a dedicated team within the United Nations Secretariat to enhance the response to online misinformation, disinformation, and hate speech that affects the delivery of United Nations mandates and substantive priorities.

The Outlook

The European Union’s proposed AI Act represents a significant stride towards establishing a comprehensive regulatory framework for artificial intelligence. The Act, currently under negotiation, aims to ensure the safety of AI systems, uphold fundamental rights, and foster a trustworthy AI ecosystem. It adopts a risk-based approach, introducing harmonised rules and specific restrictions for high-risk AI systems. The Act also seeks to provide legal certainty, enhance governance, and prevent market fragmentation, thereby facilitating investment and innovation in AI. As the Act continues to evolve, it will undoubtedly shape the future of AI in the European Union and potentially serve as a model for other jurisdictions worldwide.
