
I'm getting tired of the meme that the EU AI Act kills innovation. Here's what it really contains. I argue that it is a foundation for responsible AI innovation.

This is a cross-post from my Substack.

The next time you hear someone say that the AI Act is destroying the EU's chances of AI innovation and a successful AI industry, ask how. The meme is often repeated without knowledge of what the AI Act actually says. As more parts of the world (including the US) start looking for a way to regulate AI without creating too much of a patchwork, the EU AI Act may actually be a good template.

This blogpost summarises what the AI Act says, without getting stuck in all the details. You’ll get my comments on parts of the regulation as well.

EDIT: Some discussions on LinkedIn prompted me to give an extremely compact summary of the AI Act. 90 percent of the regulation could be summarized in these two statements.

  1. If you use AI for important decisions, you need to make sure that it works well enough.
  2. If you create an AI model powerful enough to cause serious harm, you need to take measures to reduce the risks.

Almost all of the rest is details on what this actually means.

Part 1: AI tools (“AI systems”)

The first half of the AI Act deals with “AI systems”, which basically means tools that have AI components. The AI Act may or may not require that these tools meet some standards, depending on their intended use.

Prohibited stuff (“unacceptable risks”)

There are some things that simply aren’t allowed on the EU market, since they bring what the AI Act labels “unacceptable risks”. Some examples:

  • AI for manipulating people in harmful ways.
  • AI for continuously and indiscriminately identifying people in public places (with some very narrow exceptions).
  • AI for social scoring systems.

These are things that for most people would be obviously bad, but of course values and practices vary across the world.

Most small AI applications (“minimal or no risk”)

The AI Act doesn’t put any new regulation on AI systems that pose little to no risk to health, safety or fundamental rights. This includes the vast majority of AI systems, in particular those that we aren’t even aware of, which are just humming away doing their thing and making our lives a little easier. Some examples:

  • AI for spam filters.
  • Simple recommendation engines.
  • AI for video games and the like.

Chatbots and tools that can generate deepfake media (“limited risk”)

While most of the AI that existed before 2020 falls under “minimal or no risk”, tools and services using generative AI may be classified as “limited risk”. The AI Act requires that it be clear to the user that they are interacting with an AI rather than a human (or that a piece of media was created by an AI). This doesn’t apply in cases like a tool using an LLM to route customer emails to the relevant department, but it would apply to a customer service bot. Some other examples:

  • Chatbots in general.
  • Phone bots.
  • AI tools for manipulating or generating images, video and audio.

Many parts of the world now have similar legislation, in one way or another requiring transparency to avoid deception or simply confusion.
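
As a toy illustration of the transparency idea – not a statement of what the Act technically requires – a chatbot wrapper could make sure every conversation starts with an explicit AI disclosure, and generated media could carry a simple machine-readable label. All function and field names below are invented for the example.

```python
# Toy sketch of the "limited risk" transparency idea: tell users they are
# talking to an AI, and attach a machine-readable label to generated media.
# Function and field names are illustrative, not AI Act terminology.

AI_DISCLOSURE = "You are chatting with an AI assistant, not a human."

def start_chat_session(generate_reply):
    """Wrap a reply function so the first message includes an AI disclosure."""
    disclosed = False

    def reply(user_message: str) -> str:
        nonlocal disclosed
        prefix = "" if disclosed else AI_DISCLOSURE + "\n\n"
        disclosed = True
        return prefix + generate_reply(user_message)

    return reply

def label_generated_media(media_bytes: bytes) -> dict:
    """Bundle generated media with a simple machine-readable provenance label."""
    return {
        "content": media_bytes,
        "metadata": {"ai_generated": True, "generator": "example-image-model"},
    }

# Usage with a stand-in model:
chat = start_chat_session(lambda msg: f"(model answer to: {msg})")
print(chat("What are your opening hours?"))  # includes the disclosure
print(chat("Thanks!"))                       # subsequent replies don't repeat it
```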

AI for impactful decisions (“high risk”)

Then there’s the part of the AI Act regulating how AI may be used to make decisions that have a big impact on people’s lives, dubbed “high risk systems”. Some examples:

  • AI that decides whom to hire, promote or fire.
  • AI used for law enforcement and legal decisions.
  • AI that decides who should get access to essential welfare.
  • AI that sets grades or in other ways governs access to education.
  • AI used in products that undergo conformity assessment due to other legislation (such as medical equipment).
  • AI in critical infrastructure.

The regulation of high risk systems takes up a large portion of the AI Act, and rightly so. Modern AI is to a large extent powered by black-box systems, but those systems are also very powerful and can help enormously in many ways. A reasonable approach is to find ways to make use of AI even when we can’t trust it fully, rather than waiting for a hypothetical future in which we have understood the black box.

The AI Act says that AI tools used to make impactful decisions must fulfil high standards. In summary:

  • There must be a system for risk management in place, as well as a quality management system.
  • The training data for the AI must be relevant and sufficiently representative and – as far as possible – free of errors. (The same goes for datasets for validation and testing.) Some notes:
    • The “sufficiently” is deliberately fuzzy, and will probably change with different applications and as best practices improve.
    • These demands on the training data are motivated by the data’s impact on the AI model’s behaviour. Since the models are (mostly) black boxes, the training data is probably the best proxy for assuring proper behaviour.
  • The system must have appropriate levels of accuracy, robustness, and cybersecurity. (Here too, “appropriate” is deliberately fuzzy.)
  • It must be possible to enable human oversight and record-keeping, for example to identify national-level risks and substantial modifications. (A minimal record-keeping sketch follows this list.)
  • There must be technical documentation and instructions for use.
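
To make the record-keeping and human-oversight points a bit more concrete, here is a minimal sketch of how a high-risk decision tool might log each automated recommendation together with the human reviewer's final call. This is my own illustration under assumed names – not a compliance recipe and not terminology from the AI Act.

```python
# Minimal sketch (not a compliance recipe): log each automated recommendation
# together with the human reviewer's final decision so it can be audited later.
# All names here are illustrative, not AI Act terminology.

from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone
import json

@dataclass
class DecisionRecord:
    case_id: str
    model_version: str
    model_recommendation: str   # what the AI suggested
    model_confidence: float
    human_decision: str         # what the human reviewer decided
    reviewer_id: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def log_decision(record: DecisionRecord, path: str = "decision_log.jsonl") -> None:
    """Append one decision record to a JSON-lines audit log."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

# Example: the model recommends rejecting an application, but the human
# reviewer overrides it after looking at the case.
log_decision(DecisionRecord(
    case_id="2025-001",
    model_version="credit-scorer-1.4",
    model_recommendation="reject",
    model_confidence=0.71,
    human_decision="approve",
    reviewer_id="officer-42",
))
```

The point is not the specific format but that every automated recommendation, the model version behind it, and the human's final decision remain traceable after the fact.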

Also, providers of high-risk AI systems need to register their products with the EU Commission’s AI Office or (in some cases) with the national authority. Users are required to have the proper training for using the high-risk system.

All these rules may sound like a lot. But considering that we’re talking about machines that make decisions about things like prison sentences, promotions and asylum, or that manage critical digital infrastructure, it is very reasonable. Exactly where to draw the line on what is an “appropriate” level of cybersecurity or “sufficiently” representative training data is impossible to say – and any such line would have to be changed as the technology advances. The goal is not to make mistakes impossible, but to make them rare, and to catch them before they cause too much damage.

Part 2: Large language models ("GPAI")

The four risk levels are complemented with regulation for general-purpose AI (GPAI). In principle this regulation applies to any type of AI model that can be used for a wide variety of tasks, but the only relevant models today are large language models (LLMs) – including multimodal variants.

In contrast to AI systems, above, this regulation applies to the models themselves – not their applications.

Providers of LLMs must:

  • Provide a summary of the training data, with sufficient detail.
    • As with the rules for high-risk AI systems, “sufficient” is deliberately fuzzy.
    • Also in parallel with the rules for high-risk systems, describing the training data (and here also the process and testing results) is a proxy for understanding the actual model.
  • Respect the EU’s copyright directive with regard to the training data.
    • It is still not settled whether creators need to explicitly disallow using their content for training, or explicitly allow it.
  • Provide technical documentation for the EU’s AI Office, as well as documentation for downstream providers of the LLM.

The last item is not required for “free and open GPAI models” – referring to models where you can access, use, modify and distribute the model weights (but not, for example, the training data).

GPAI with systemic risks

Highly relevant for LLMs is also the category “GPAI with systemic risks” – models deemed powerful enough to potentially cause large‑scale, society‑wide harms, not just local or individual problems.

Models trained with at least 10^25 FLOPs are generally placed in this category, which includes all current frontier models. Distilled versions are also affected, since the threshold is measured against the accumulated compute used for training. That said, 10^25 is not a hard limit, and providers may make the case that models above that threshold don’t pose systemic risks.

For GPAI with systemic risks, providers must also:

  • Perform model evaluations, including adversarial testing.
  • Assess and mitigate potential systemic risks.
  • Ensure an adequate level of cybersecurity. (“Adequate”, again, is deliberately fuzzy.)
  • Track, document and report serious incidents.

The limit of 10^25 FLOPs is somewhat arbitrarily chosen, and it is an imperfect proxy for where general-purpose AI may start contributing meaningfully to things like cyberattacks, disinformation campaigns, automated spearphishing, or the preparation of terrorist attacks. The threshold could have been set higher – SB 53 in California, for example, uses 10^26. All current frontier models are above both limits, so the exact number matters less.
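
To get a rough feel for where these thresholds sit, here is a hedged back-of-the-envelope estimate using the common approximation that training a dense transformer costs about 6 × parameters × tokens FLOPs. The model sizes and token counts are illustrative assumptions, not figures from the AI Act or from any particular provider.

```python
# Back-of-the-envelope training-compute estimates vs. regulatory thresholds.
# Uses the common approximation: training FLOPs ≈ 6 * parameters * tokens.
# Model sizes and token counts are illustrative assumptions only.

EU_AI_ACT_THRESHOLD = 1e25   # presumption of "GPAI with systemic risk"
SB_53_THRESHOLD = 1e26       # threshold used by California's SB 53

def training_flops(parameters: float, tokens: float) -> float:
    """Approximate dense-transformer training compute in FLOPs."""
    return 6 * parameters * tokens

examples = {
    "7B parameters, 2T tokens": training_flops(7e9, 2e12),        # ~8.4e22
    "70B parameters, 15T tokens": training_flops(70e9, 15e12),    # ~6.3e24
    "400B parameters, 15T tokens": training_flops(400e9, 15e12),  # ~3.6e25
}

for name, flops in examples.items():
    eu = "above" if flops >= EU_AI_ACT_THRESHOLD else "below"
    ca = "above" if flops >= SB_53_THRESHOLD else "below"
    print(f"{name}: ~{flops:.1e} FLOPs ({eu} 10^25, {ca} 10^26)")
```

Under this approximation, only training runs well into today's frontier range clear 10^25 FLOPs, which is the point of the threshold: it is a crude but measurable proxy for capability.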

“On the market”

The European Union is a project for a common European market, and as such the AI Act only applies to AI systems and AI models on the European market. This includes AI running in data centres elsewhere but delivering services within the EU, and even output from AI run elsewhere that is used in the EU. (You can’t, for example, circumvent the AI Act by sending résumés to another country, having an AI pick the person to hire, and sending the result back to the EU.)

However, the AI Act does not apply to:

  • AI models and systems in development, even if they are being tested in trials outside the labs.
  • AI that is used only for research purposes.
  • AI that is used only for military purposes.

It should also be noted that not all parts of the AI Act are enforced yet. While the rules concerning prohibited AI systems and GPAI models are active, the rules for high-risk AI are scheduled for summer 2026 and beyond (partly depending on whether the Omnibus package is approved or not).

Some final thoughts

The AI Act is, in many if not all parts, a sensible regulation of AI. It is far less harsh than its reputation suggests. In fact, when people have blamed the AI Act for delaying product launches in the EU, the actual cause has been the Digital Services Act, and the General Data Protection Regulation (GDPR) has a reputation for being near-impossible to implement that has spilled over onto other legislation.

The AI Act is a set of rules that the industry can see and apply, and it has been available for more than a year.

This is in great contrast to how AI legislation is handled in the US, where almost 40 states have their own AI regulation while the Trump administration is trying to prohibit state-level regulation. The publicly stated reason, at least, is that a patchwork of regulation slows down AI innovation.

To use AI in high-stakes environments, we need trustworthy AI. The AI Act is a way to ensure that AI used for detecting cancer or assessing whether convicted people will commit new crimes is sufficiently reliable. It is regulation that can help build AI that we can use when it actually matters, while at the same time guarding individuals’ rights and public safety.

 

Get a more thorough overview of the AI Act in this high-level summary.

