The EU AI Act is currently undergoing the ordinary legislative procedure, in which the European Parliament and Council can propose changes to the act.
A brief summary of the act is that systems defined as high-risk will be required to undergo conformity assessments, which among other things require the system to be monitored and to have a working off-switch (longer summary and analysis here).
The Council's amendments have recently been circulated. Most importantly for longtermists, they include a new article on general purpose AI systems. For the first time ever, regulating general AI is on the table, and by a major government at that!
The article reads:
Article 52a - General purpose AI systems
1. The placing on the market, putting into service or use of general purpose AI systems shall not, by themselves only, make those systems subject to the provisions of this Regulation.
2. Any person who places on the market or puts into service under its own name or trademark or uses a general purpose AI system made available on the market or put into service for an intended purpose that makes it subject to the provisions of this Regulation shall be considered the provider of the AI system subject to the provisions of this Regulation.
3. Paragraph 2 shall apply, mutatis mutandis, to any person who integrates a general purpose AI system made available on the market, with or without modifying it, into an AI system whose intended purpose makes it subject to the provisions of this Regulation.
4. The provisions of this Article shall apply irrespective of whether the general purpose AI system is open source software or not.
Or in plain English: general purpose AI systems will not be considered high-risk unless they are explicitly intended to be used for a high-risk purpose.
What are your reactions to this development?
Overall, I think it's not that surprising that this change is being proposed, and I think it's fairly reasonable. However, I do think it should be complemented with duties to prevent, e.g., AI systems being put to high-risk uses without going through a conformity assessment, and it should be made clear that certain parts of the conformity assessment will require changes on the part of the producer of a general system if that system is used to build a system for a high-risk use.
In more detail, my view is that the following changes should be made:

Goal 1: Avoid general systems being used without the appropriate regulatory burdens kicking in.

There are two kinds of cases one might worry about:

(i) General systems might make it easier to produce a system that should be covered either by the transparency requirements (e.g. if your system is a chatbot, you need to tell the user that) or by the high-risk requirements, leading to more such systems being put on the market without being registered.
Proposed solution: Make it the case that providers of general systems must do certain checks on how their model is being used and on whether it is being used for high-risk purposes without that AI system having been registered or having gone through the conformity assessment. Perhaps this would be done by giving the market surveillance authorities (MSAs) the right to ask providers of general models for certain information about how the model is being used. In practice, it could look as follows: the provider of the general system would have various ways to try to detect whether someone is using their system for something high-risk (companies like OpenAI are already developing tools and systems to do this). If they detect such a use, they are required to check it against the database of high-risk AI systems deployed on the EU market. If there's a discrepancy, they must report it to the MSA and share some of the relevant information as evidence.
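To make that workflow a bit more concrete, here is a minimal, purely illustrative sketch (in Python) of the detect-check-report loop described above. Every name in it (the usage records, the detection flag, the stand-in registration database, the reporting function) is a made-up assumption about how a provider might implement this internally; nothing of the sort is specified in the Act or the Council text.

```python
# Hypothetical sketch of the proposed check: detect possible high-risk use,
# compare against the registration database, escalate only discrepancies.
# All names and data here are invented for illustration.

from dataclasses import dataclass

@dataclass
class UsageEvent:
    """One observed use of the general purpose system by a deployer."""
    deployer_id: str
    suspected_purpose: str   # e.g. "credit scoring", "CV screening"
    looks_high_risk: bool    # output of the provider's own detection tooling

# Stand-in for the EU database of registered high-risk AI systems.
REGISTERED_HIGH_RISK_DEPLOYERS = {"deployer-123", "deployer-456"}

def find_discrepancies(events: list[UsageEvent]) -> list[UsageEvent]:
    """Return events that look high-risk but have no matching registration."""
    return [
        e for e in events
        if e.looks_high_risk and e.deployer_id not in REGISTERED_HIGH_RISK_DEPLOYERS
    ]

def report_to_msa(event: UsageEvent) -> None:
    """Placeholder for sharing the relevant evidence with the MSA."""
    print(f"Reporting {event.deployer_id}: suspected {event.suspected_purpose}")

if __name__ == "__main__":
    observed = [
        UsageEvent("deployer-123", "credit scoring", True),      # registered, fine
        UsageEvent("deployer-789", "CV screening", True),        # unregistered: flag
        UsageEvent("deployer-999", "recipe suggestions", False), # not high-risk
    ]
    for event in find_discrepancies(observed):
        report_to_msa(event)
```

The point of the sketch is just the shape of the obligation: detecting high-risk use is the provider's job, the registration database is the point of comparison, and only discrepancies get escalated to the MSA.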
(ii) There’s a chance that individuals using general systems for high-risk uses without placing anything on the market will not be covered by the regulation. That is, as the regulation is currently designed, if a company were to use public CCTV footage to assess the number of women vs. men walking down a street, I believe that would be a high-risk use. But if an individual does it, it might not count as a high-risk use because nothing is placed on the market. This could end up being an issue, especially if word about these kinds of use cases spreads. Perhaps a more compelling example would be people starting to use large language models as personal chatbots. The proposed regulation wouldn’t require the provider of the LLM to add any warnings that this is simply a chatbot, even if the user starts e.g. using it as a therapist or for medical advice.
Proposed solution: My guess is that the provision suggested above should be expanded to also look for individuals using the systems for high-risk or limited-risk uses, and that providers be required to stop such use.
Goal 2 (perhaps most important): Try to make it the case that crucial and appropriate parts of the conformity assessment will require changes on the part of the producer of the general system.
This could be done by, e.g., making the technical documentation require information that only the producer of the general model would have. This is plausibly already the case with regard to the data requirements, and plausibly also with regard to robustness. It seems worth making sure of those things. I don't know whether that's a matter of changing the text of the legislation itself or of how the legislation will end up being interpreted.
One way to make sure of this is to require that deployers only use general models that have gone through a certification process or that have also passed the conformity assessment (or perhaps a lighter version of it). I’m currently excited about the latter.
Why am I not excited about something more onerous on the part of the provider of the general system?