The EU AI Act is currently undergoing the ordinary legislative procedure, in which the European Parliament and Council can propose changes to the act.
In brief, the act requires systems defined as high-risk to undergo conformity assessments, which among other things require the system to be monitored and to have a working off-switch (longer summary and analysis here).
The Council's amendments have recently been circulated. Most importantly for longtermists, they include a new article on general purpose AI systems. For the first time, regulating general AI is on the table, and by a major government at that!
The article reads:
Article 52a - General purpose AI systems

1. The placing on the market, putting into service or use of general purpose AI systems shall not, by themselves only, make those systems subject to the provisions of this Regulation.

2. Any person who places on the market or puts into service under its own name or trademark or uses a general purpose AI system made available on the market or put into service for an intended purpose that makes it subject to the provisions of this Regulation shall be considered the provider of the AI system subject to the provisions of this Regulation.

3. Paragraph 2 shall apply, mutatis mutandis, to any person who integrates a general purpose AI system made available on the market, with or without modifying it, into an AI system whose intended purpose makes it subject to the provisions of this Regulation.

4. The provisions of this Article shall apply irrespective of whether the general purpose AI system is open source software or not.
Or in plain English: general purpose AI systems will not be considered high-risk unless they are explicitly intended for a high-risk purpose.
What are your reactions to this development?
Thank you for the update – super helpful to see.
My overall views are fairly neutral. I lean in favour of this addition, but honestly it could go either way in the long run.
The addition means developers of general AI will be essentially unregulated. On the one hand, being totally unregulated is bad, as it removes the possible advantages of oversight. On the other hand, regulating general AI the same way this act regulates high-risk AI would be the wrong approach for general AI.
In my view, no regulation is better than inappropriate regulation, and it still leaves the door open to good regulatory practice later. Someone could argue instead that restrictive, inappropriate regulation would slow down EU progress on general AI research, and that this slowdown would itself be good. I can understand that case, but I think the evidence for the value of slowing EU general AI research is weak, and my general preference against building inappropriate or broken regulatory systems is stronger.
(The addition also removes the act's earlier ambiguity about whether it applied to general AI products at all, which is good: legal clarity is good.)