
European Union member states on Tuesday gave final agreement to the world’s first major law for regulating artificial intelligence, as institutions around the world race to introduce curbs for the technology.

The EU Council said it had approved the AI Act — a groundbreaking piece of regulatory law that sets comprehensive rules surrounding artificial intelligence technology.

“The adoption of the AI act is a significant milestone for the European Union,” Mathieu Michel, Belgium’s secretary of state for digitization, said in a Tuesday statement.

“With the AI act, Europe emphasizes the importance of trust, transparency and accountability when dealing with new technologies while at the same time ensuring this fast-changing technology can flourish and boost European innovation,” Michel added.

The AI Act applies a risk-based approach to artificial intelligence, meaning that different applications of the technology are treated differently, depending on the perceived threats they pose to society.

The law prohibits applications of AI that are considered “unacceptable” in terms of their risk level. These include so-called “social scoring” systems that rank citizens based on the aggregation and analysis of their data, predictive policing, and emotion recognition in the workplace and schools.

High-risk AI systems include autonomous vehicles and medical devices, which are evaluated on the risks they pose to the health, safety, and fundamental rights of citizens. They also include applications of AI in financial services and education, where there is a risk of bias being embedded in AI algorithms.

U.S. Big Tech firms in the spotlight

Matthew Holman, a partner at law firm Cripps, said the rules will have major implications for any person or entity developing, creating, using or reselling AI in the EU — with U.S. tech firms firmly in the spotlight.

“The EU AI Act is unlike any law anywhere else on earth,” Holman said. “It creates for the first time a detailed regulatory regime for AI.”

“U.S. tech giants have been watching this developing law closely,” Holman added. “There has been a lot of funding into public-facing generative AI systems which will need to ensure compliance with the new law that is, in some places, quite onerous.”

The EU Commission will have the power to fine companies that breach the AI Act as much as 35 million euros ($38 million) or 7% of their annual global revenues — whichever is higher.
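As a rough illustration of that penalty rule — the higher of a flat 35 million euros or 7% of annual global revenue — the calculation can be sketched as follows. This is purely a toy model of the arithmetic described above; the function and variable names are hypothetical and not drawn from any official source, and actual fines would be set by regulators within this ceiling, not computed by formula.

```python
def max_ai_act_fine(annual_global_revenue_eur: float) -> float:
    """Theoretical maximum fine: the higher of EUR 35M or 7% of revenue.

    Illustrative sketch only; names and structure are hypothetical.
    """
    FLAT_CAP_EUR = 35_000_000  # flat ceiling cited in the article
    REVENUE_SHARE = 0.07       # 7% of annual global revenue
    return max(FLAT_CAP_EUR, REVENUE_SHARE * annual_global_revenue_eur)
```

For a company with 1 billion euros in annual global revenue, the revenue-based figure (70 million euros) exceeds the flat cap, so it becomes the applicable ceiling; for a company with 100 million euros in revenue, the flat 35 million euro cap is higher and applies instead.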

The change in EU law comes after OpenAI’s November 2022 launch of ChatGPT. Officials realized at the time that existing legislation lacked the detail needed to address the advanced capabilities of emerging generative AI technology and the risks around the use of copyrighted material.

A long road to implementation

The law imposes tough restrictions on generative AI systems, referred to by the EU as “general-purpose” AI. These include requirements to respect EU copyright law, transparency disclosures on how the models are trained, routine testing and adequate cybersecurity protections.

But it’s going to take some time before these requirements actually kick in, according to Dessi Savova, a partner at Clifford Chance. The restrictions on general-purpose systems won’t begin until 12 months after the AI Act comes into force.

And even then, generative AI systems that are currently commercially available, like OpenAI’s ChatGPT, Google’s Gemini, and Microsoft’s Copilot, get a “transition period” of 36 months from the day the law comes into force to bring their technology into compliance with the legislation.

“Agreement has been reached on the AI Act — and that rulebook is about to become a reality,” Savova told CNBC via email. “Now, attention must turn to the effective implementation and enforcement of the AI Act.”

Comments



A step in the right direction, but absolutely not enough regulation for the technology and the upcoming technology at hand.
