What are the EU’s landmark AI rules?

By Martin Coulter

LONDON (Reuters) - Negotiations around the European Union’s first-of-its-kind rules governing artificial intelligence (AI) looked set for a dramatic climax on Wednesday, as lawmakers entered what some hope will be the final round of discussions on the landmark legislation.

What is decided could become the blueprint for other governments as countries seek to craft rules for their own AI industry.

Ahead of the meeting, lawmakers and governments could not agree on key issues, including the regulation of fast-growing generative AI and its use by law enforcement.

Here’s what we know:  


The main issue is that the first draft of the law was written in early 2021, almost two years before the launch of OpenAI’s ChatGPT, one of the fastest-growing software applications in history.

Lawmakers have scrambled to write regulations even as companies like Microsoft-backed OpenAI continue to discover new uses for their technology.

OpenAI chief executive Sam Altman and prominent computer scientists have also raised the alarm about the dangers of creating powerful, highly intelligent machines that could threaten humanity.

Back in 2021, lawmakers focused on specific use cases, regulating AI tools based on the task they were designed to perform and categorising them by risk, from minimal to high.

Using AI in a number of settings – such as aviation, education and biometric surveillance – was deemed high-risk, either as an extension of existing product safety laws or because it posed a potential threat to human rights.

The arrival of ChatGPT in November 2022 forced lawmakers to rethink that.

This so-called “General Purpose AI System” (GPAIS) had not been built with a single use case in mind, but could instead complete all kinds of tasks: engaging in humanlike conversation, composing sonnets and even writing computer code.

ChatGPT and other generative AI tools did not clearly fit into the act’s original categories of risk, prompting an ongoing row over how they should be regulated.  


General purpose AI systems, also known as foundation models, can be built “on top of” by developers to create new applications.

Researchers have sometimes been caught off guard by AI’s behaviour, such as ChatGPT’s habit of “hallucinating”: the underlying model is trained to predict plausible strings of text, and sometimes produces answers that sound convincing but are in fact false. Any quirks buried in a foundation model could also play out in unexpected ways when it is deployed in different contexts.

EU proposals for regulating foundation models have included forcing companies to clearly document their system’s training data and capabilities, demonstrate they have taken steps to mitigate potential risks, and undergo audits conducted by external researchers.

In recent weeks, France, Germany and Italy – the EU’s most influential countries – have challenged that.

The three nations want makers of generative AI models to be allowed to self-regulate, instead of forcing them to comply with hard rules.

They say strict regulations will limit European companies’ ability to compete with dominant U.S. companies like Google and Microsoft.

Under that approach, smaller companies building tools on top of OpenAI’s models would still face strict rules, while providers like OpenAI itself would not.


Lawmakers are also divided over the use of AI systems by law enforcement agencies for biometric identification of individuals in publicly accessible spaces, sources told Reuters.

EU lawmakers want regulation to protect citizens’ fundamental rights, but member states want some flexibility for the technology to be used in the interests of national security, by police or border protection agencies, for example.

MEPs may drop a proposed ban on remote biometric identification, one source said, if exemptions for its use are limited and clearly defined.


If a final text is agreed on Wednesday, the EU Parliament could theoretically vote the bill into law later this month. Even then, it could be close to two years before it comes into effect.

Without a final agreement, however, EU lawmakers and governments may instead reach a “provisional agreement”, with the specifics hammered out in weeks of technical meetings. That risks reigniting longstanding disagreements.

They would still have to get a deal ready for a vote in spring. Without that, the law risks being shelved until after European Parliament elections in June, and the 27-member bloc would lose its first-mover advantage in regulating the technology.

(Reporting by Martin Coulter; Editing by Josephine Mason and David Evans)