World’s most extensive AI rules approved in EU despite criticism

The European Union is enacting the most comprehensive guardrails on the fast-developing world of artificial intelligence after the bloc's parliament passed the AI Act on Wednesday.

The landmark set of rules, in the absence of any legislation from the US, could set the tone for how AI is governed in the Western world. But the legislation's passage comes as companies worry the law goes too far and digital watchdogs say it doesn't go far enough.

"Europe is now a global standard-setter in trustworthy AI," Internal Market Commissioner Thierry Breton said in a statement.

The AI Act becomes law after member states sign off, which is usually a formality, and once it's published in the EU's Official Journal.

The new law is intended to address worries about bias, privacy and other risks from the rapidly evolving technology. The legislation would ban the use of AI for detecting emotions in workplaces and schools, as well as limit how it can be used in high-stakes situations like sorting job applications. It would also place the first restrictions on generative AI tools, which captured the world's attention last year with the popularity of ChatGPT.

However, the bill has sparked concerns in the three months since officials reached a breakthrough provisional agreement after a marathon negotiation session that lasted more than 35 hours.

As talks reached the final stretch last year, the French and German governments pushed back against some of the strictest ideas for regulating generative AI, arguing that the rules would hurt European startups like France's Mistral AI and Germany's Aleph Alpha GmbH. Civil society groups like Corporate Europe Observatory (CEO) raised concerns about the influence that Big Tech and European companies had in shaping the final text.

"This one-sided influence meant that 'general purpose AI,' was largely exempted from the rules and only required to comply with a few transparency obligations," watchdogs including CEO and LobbyControl wrote in a statement, referring to AI systems capable of performing a wider range of tasks.

A recent announcement that Mistral had partnered with Microsoft Corp. raised concerns among some lawmakers. Kai Zenner, a parliamentary assistant who played a key role in writing the act and now an adviser to the United Nations on AI policy, wrote that the move was strategically smart and "maybe even necessary" for the French startup, but said "the EU legislator got played again."

Brando Benifei, a lawmaker and leading author of the act, said the results speak for themselves. "The legislation is clearly defining the needs for safety of most powerful models with clear criteria, and so it's clear that we stood on our feet," he said Wednesday in a news conference.

US and European companies have also raised concerns that the law will limit the bloc's competitiveness.

"With a limited digital tech industry and relatively low investment compared with industry giants like the United States and China, the EU's ambitions of technological sovereignty and AI leadership face considerable hurdles," wrote Raluca Csernatoni, a research fellow at the Carnegie Europe think tank.

Lawmakers during Tuesday's debate acknowledged that there is still significant work ahead. The EU is in the process of setting up its AI Office, an independent body within the European Commission. In practice, the office will be the key enforcer, with the ability to request information from companies developing generative AI and possibly ban a system from operating in the bloc.

"The rules we have passed in this mandate to govern the digital domain — not just the AI Act — are truly historical, pioneering," said Dragos Tudorache, a European Parliament member who was also one of the leading authors. "But making them all work in harmony with the desired effect and turning Europe into the digital powerhouse of the future will be the test of our lifetime."