
The EU AI Act is currently undergoing the ordinary legislative procedure, in which the European Parliament and Council can propose changes to the act.

A brief summary of the act: systems defined as high-risk will be required to undergo conformity assessments, which among other things require the system to be monitored and to have a working off-switch (longer summary and analysis here).

The Council's amendments have recently been circulated. Most importantly for longtermists, they include a new section for general purpose AI systems. For the first time ever, regulating general AI is on the table, and for an important government as well!

The article reads:

Article 52a - General purpose AI systems

  1. The placing on the market, putting into service or use of general purpose AI systems shall not, by themselves only, make those systems subject to the provisions of this Regulation.
  2. Any person who places on the market or puts into service under its own name or trademark or uses a general purpose AI system made available on the market or put into service for an intended purpose that makes it subject to the provisions of this Regulation shall be considered the provider of the AI system subject to the provisions of this Regulation.
  3. Paragraph 2 shall apply, mutatis mutandis, to any person who integrates a general purpose AI system made available on the market, with or without modifying it, into an AI system whose intended purpose makes it subject to the provisions of this Regulation.
  4. The provisions of this Article shall apply irrespective of whether the general purpose AI system is open source software or not.


Or in plain English: general purpose AI systems will not be considered high-risk unless they are explicitly intended to be used for a high-risk purpose.

What are your reactions to this development?

Comments

Thank you for the update – super helpful to see.

 

What are your reactions to this development?

My overall views are fairly neutral. I lean in favour of this addition, but honestly it could go either way in the long run.

 

The addition means developers of general AI will basically be unregulated. On the one hand, being totally unregulated is bad, as it removes the possible advantages of oversight and so on. On the other hand, applying rules to general AI in a way similar to how this act regulates high-risk AI would be the wrong way to regulate general AI.

In my view, no regulation seems better than inappropriate regulation, and it still leaves the door open to good regulatory practice. Someone else could argue that restrictive, inappropriate regulation would slow down EU progress on general AI research, and that this would be good. I can understand the case for that, but I think the evidence for the value of slowing EU general AI research is weak, and my general preference for not building inappropriate or broken systems is stronger.

 

(Also, the addition removes the ambiguity that was in the act as to whether it applied to general AI products, which is good, as legal clarity is valuable.)

How do they define general purpose AI systems?


From (70a): "In the light of the nature and complexity of the value chain for AI systems, it is essential to clarify the role of persons who may contribute to the development of AI systems covered by this Regulation, without being providers and thus being obliged to comply with the obligations and requirements established herein. In particular, it is necessary to clarify that general purpose AI systems - understood as AI system [sic] that are able to perform generally applicable functions such as image/speech recognition, audio/video generation, pattern detection, question answering, translation etc. - should not be considered as having an intended purpose within the meaning of this Regulation."

This is somewhat strange to me, as even within the limited scope of short-term worries from AI, I could imagine many of the problems in deployed systems stemming from their general-purpose components, such as bias in image recognition models.

So... GPT-3. That's what they mean by AGI.

GPT-3 does question answering and translation.

It seems like the exclusions could cover all commercially relevant "AI systems" or machine learning.

I think equally important for longtermists is the new requirement for the Commission to consider updating the definition of AI, and the list of high-risk systems, every year. If you buy that adaptive/flexible/future-proof governance will be important for regulating AGI, then this looks good.

(The basic argument for this instance of adaptive governance is something like: AI progress is fast and will only get faster, so having relevant sections of regulation come up for mandatory review every so often is a good idea, especially since policymakers are busy and this doesn't tend to happen by default.)

Relevant part of the doc:

  1. As regards the modalities for updates of Annexes I and III, the changes in Article 84 introduce a new reporting obligation for the Commission whereby it will be obliged to assess the need for amendment of the lists in these two annexes every 24 months following the entry into force of the AIA.

My own opinion is that it is a double-edged sword.

The Council's change on its own weakens the act, and will allow companies to avoid conformity assessments for exactly the AI systems that need them the most.

But the new article also makes it possible to impose requirements that solely affect general purpose systems, without burdening the development of all other low-risk AI with unnecessary requirements.

Overall, I think it's not that surprising that this change is being proposed, and I think it's fairly reasonable. However, I do think it should be complemented with duties to avoid, e.g., AI systems being put to high-risk uses without going through a conformity assessment, and it should be made clear that certain parts of the conformity assessment will require changes on the part of the producer of a general system if that system is used to produce a system for a high-risk use.

In more detail, my view is that the following changes should be made:

Goal 1: Avoid general systems being used without the appropriate regulatory burdens kicking in.

There are two kinds of cases one might worry about: (i) general systems might make it easier to produce a system that should be covered either by the transparency requirements (e.g. if your system is a chatbot, you need to tell the user that) or by the high-risk requirements, leading to more such systems being put on the market without being registered.

Proposed solution: Make it the case that providers of general systems must do certain checks on how their model is being used, and on whether it is being used for high-risk uses without that AI system having been registered or having gone through the conformity assessment. Perhaps this would be done by giving the market surveillance authorities (MSAs) the right to ask providers of general models for certain information about how the model is being used. In practice, it could look as follows: the provider of the general system could have various ways to try to detect whether someone is using their system for something high risk (companies like OpenAI are already developing tools and systems to do this). If they detect such a use, they are required to check it against the database of high-risk AI systems deployed on the EU market. If there's a discrepancy, they must report it to the MSA and share some of the relevant information as evidence; a rough sketch of this check is below.
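To make that flow concrete, here is a minimal illustrative sketch in Python. Everything in it is hypothetical (the function name, the idea of a deployer-keyed registry, the data format); nothing here is specified by the Act, the Council text, or any existing tooling.

```python
# Hypothetical sketch of the monitoring duty proposed above.
# All names and data formats are invented for illustration only.

def find_unregistered_high_risk_uses(detected_uses, registered_deployers):
    """Return detected high-risk uses whose deployer does not appear in the
    (hypothetical) EU database of registered high-risk AI systems."""
    return [use for use in detected_uses
            if use["deployer"] not in registered_deployers]

# Example: the provider's misuse-detection tooling flags two deployers;
# one is missing from the registry and would be reported to the MSA.
detected = [
    {"deployer": "acme-essay-grader", "use": "education scoring"},
    {"deployer": "beta-cv-screening", "use": "recruitment"},
]
registry = {"acme-essay-grader"}

for discrepancy in find_unregistered_high_risk_uses(detected, registry):
    print("Report to MSA with supporting evidence:", discrepancy)
```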

(ii) There's a chance that individuals using general systems for high-risk uses without placing anything on the market will not be covered by the regulation. That is, as the regulation is currently designed, if a company were to use public CCTV footage to assess the number of women vs. men walking down a street, I believe that would be a high-risk use. But if an individual does it, it might not count as a high-risk use because nothing is placed on the market. This could end up being an issue, especially if word about these kinds of use cases spreads. Perhaps a more compelling example would be people starting to use large language models as personal chatbots. The proposed regulation wouldn't require the provider of the LLM to add any warnings about how this is simply a chatbot, even if the user starts e.g. using it as a therapist or for medical advice.

Proposed solution: My guess is that the provision suggested above should be expanded to also look for individuals using the systems for high-risk or limited-risk uses, with a requirement that such use be stopped.

Goal 2 (perhaps the most important): Try to make it the case that crucial and appropriate parts of the conformity assessment will require changes on the part of the producer of the general system.

This could be done by, e.g., making the technical documentation require information that only the producer of the general model would have. This is plausibly already the case with regard to the data requirements, and plausibly also regarding robustness. It seems worth making sure of those things. I don't know whether that's a matter of changing the text of the legislation itself or of how the legislation will end up being interpreted.

One way to make sure of this is to require that deployers only use general models that have gone through a certification process or that have also passed the conformity assessment (or perhaps a lighter version of it). I'm currently excited about the latter.

Why am I not excited about something more onerous on the part of the provider of the general system?

  • I think we can get a lot of the benefits of providers of general systems needing to meet certain requirements without them having to go through the conformity assessment themselves. I expect there to be lots of changes that need to be made to the general model to allow the deployer to complete their conformity assessment. If I try to use GPT-3 to create a system that rates essays (ignoring for now that OpenAI currently prohibit this in their Terms of Use), I'll need to make sure that the system meets certain robustness requirements, that I can explain to some human overseer how it works, and so on. To meet those requirements, I think changes will be required on the part of the developer of the general system. As such, I think the legal requirements will have an effect on general AI systems produced by big tech companies. To illustrate the point, if EU car manufacturers were required to use less carbon-intensive steel, that would have a large impact on the carbon-intensity of steel production in the EU, even though the steel manufacturers weren't directly targeted by the legislation.
  • Introducing requirements on all general systems that can be used on the EU market seems hugely onerous to me, so much so that it would probably be a bad idea. I think that companies could fairly easily go from offering a general system on the EU market to offering a general-system-that-you're-not-allowed-to-use-for-high-risk-uses. This could for example be done by adjusting the terms and conditions (OpenAI's API usage guidelines already disallow most if not all high-risk uses as defined in the AI Act) or by writing in big font somewhere "Not intended for high-risk uses as defined by the EU's AI Act". I worry that introducing requirements on general systems en masse would lead to that being the default response, and that it wouldn't deliver much benefit beyond what we'd get if the changes I gestured at above were made.

For the first time ever, regulating general AI is on the table, and for an important government as well!

Given the definition of general AI that they use, I do not expect this regulation to have any more to do with AGI alignment than the existing regulation of "narrow" systems.

(This isn't to say it's irrelevant, just that I wouldn't pay specific attention to this part of the regulation over the rest of it.)
