
Introduction

This post analyses the UK’s draft AI Bill, exploring its strengths and weaknesses.

If you need a tl;dr: the bill has some strengths but many significant weaknesses largely related to wide scope, PR-centric motivations in some sections, and uncertain ripple effects on other regulatory bodies and legislation.

For a longer tl;dr, see 'Summary' at the end, or click it in the contents to the left.

I will give some background to the Bill and then go through each of its sections in order, first citing the section in full and then offering some analysis.

Background

The AI Regulation Bill 2024 was proposed on 22nd November 2023 by Lord Holmes of Richmond as a Private Member’s Bill; it has had its first reading and is now up to its second reading. Private Member’s Bills are bills introduced by backbench MPs or Peers and do not have a high success rate. An earlier AI bill, the Artificial Intelligence (Regulation and Workers’ Rights) Bill, failed before its first reading last year. However, it is worth looking at such regulation as it is proposed in order to get an idea of what kind of legislation is coming over the horizon. AI regulation is also an idea which Prime Minister Rishi Sunak has approved of in the media, which slightly increases the chances of this Bill making it further than its predecessor.

It is a short Bill, with only 7 pages and 9 sections. Its primary intent is to establish a central ‘AI Authority’, though it does explore other regulatory matters. It appears to intend to be a framework for further regulation rather than a fix-all in itself, but is still a standalone piece of legislation at this time.

Due to limited Forum formatting options for this kind of post, I advise reading the Bill at source - however, it is included here in its entirety, split into quotes, for those wanting to stay in one window. Want to read the entire Bill in another window? Click here.

Section by Section Analysis

1. The AI Authority


“(1) The Secretary of State must by regulations make provision to create a body called the AI Authority. 

(2) The functions of the AI Authority are to—  

(a) ensure that relevant regulators take account of AI; 

(b) ensure alignment of approach across relevant regulators in respect of AI; 

(c) undertake a gap analysis of regulatory responsibilities in respect of AI; 

(d) coordinate a review of relevant legislation, including product safety, privacy and consumer protection, to assess its suitability to address the challenges and opportunities presented by AI; 

(e) monitor and evaluate the overall regulatory framework’s effectiveness and the implementation of the principles in section 2, including the extent to which they support innovation; 

(f) assess and monitor risks across the economy arising from AI; 

(g) conduct horizon-scanning, including by consulting the AI industry, to inform a coherent response to emerging AI technology trends; 

(h) support testbeds and sandbox initiatives (see section 3) to help AI innovators get new technologies to market; 

(i) accredit independent AI auditors (see section 5(1)(a)(iv)); 

(j) provide education and awareness to give clarity to businesses and to empower individuals to express views as part of the iteration of the framework;

(k) promote interoperability with international regulatory frameworks. 

(3) The Secretary of State may by regulations amend the functions in subsection (2), and may dissolve the AI Authority, following consultation with such persons as he or she considers appropriate."

 

This is the flagship section of the legislation, focused on the formation of an AI Authority. One of the major issues here is insufficient detail on the scope and investigatory powers of the AI Authority. This has been clarified somewhat in debate, with the authority intended to sit above and help coordinate current regulatory bodies such as the Information Commissioner’s Office (ICO) - however, just why this is necessary, or how it would happen, remains somewhat abstract.

This is particularly true given that existing bodies such as the ICO may well be better placed to undertake such a role in the first place, especially if given the legislative ‘teeth’ to effectively govern AI in such a manner. Current regulators also already struggle to attract adequate high-level talent, and the formation of an extra authority will likely lead either to recruitment woes for the new body or, if it is appropriately funded, to worsening recruitment shortages elsewhere.

Despite a boom in AI Ethics and AI Safety personnel attempting to enter the workforce, people with enough STEM and legal knowledge to do actual AI Regulation well are in very short supply. Such a regulator will require people who have more than academic knowledge, as academic theory often bears little resemblance to regulatory reality on the ground. This is not to disparage those in AI Ethics or AI Safety from an academic background; it merely acknowledges that certain tasks require certain expertise - and expertise in AI Regulation is extremely limited and therefore costly. Most personnel entering this type of workforce are consultants coming from a nuclear regulation background and cross-training into computer science (note: this is anecdotal rather than statistical), because there is enough overlap between the fields for this to be possible. They are, however, exorbitantly expensive and have their pick of employers. Given this reality, can the AI Authority compete with major corporations for pay or perks?

The core question here is - is there a need for a new regulatory body, rather than an existing one?

The vagueness issue is best demonstrated in 1(2)(a), where phrases such as ‘take account’ carry little weight and do not indicate particularly stringent intentions. This could be a minor wording issue, or perhaps an insight into the fundamental intentions of the Act, but only time will tell.

Of final note is 1(2)(d), where the term ‘a review’ is used in relation to relevant legislation. This singular rather than plural phrasing matters because highly regulated industries such as nuclear or export controls conduct such reviews regularly. One would expect a high-level authority to conduct them perhaps biannually or more often. Again, this may be a minor wording issue, but it is one worth watching.

2. Regulatory Principles
 

(1) The AI Authority must have regard to the principles that —

(a) regulation of AI should deliver —

(i) safety, security and robustness; 

(ii) appropriate transparency and explainability; 

(iii) fairness; 

(iv) accountability and governance; 

(v) contestability and redress; 

(b) any business which develops, deploys or uses AI should— 

(i) be transparent about it; 

(ii) test it thoroughly and transparently;  

(iii) comply with applicable laws, including in relation to data protection, privacy and intellectual property; 

(c) AI and its applications should— 

(i) comply with equalities legislation; 

(ii) be inclusive by design; 

(iii) be designed so as neither to discriminate unlawfully among individuals nor, so far as reasonably practicable, to perpetuate unlawful discrimination arising in input data; 

(iv) meet the needs of those from lower socio-economic groups, older people and disabled people;

(v) generate data that are findable, accessible, interoperable and reusable; 

(d) a burden or restriction which is imposed on a person, or on the carrying on of an activity, in respect of AI should be proportionate to the benefits, taking into consideration the nature of the service or product being delivered, the nature of risk to consumers and others, whether the cost of implementation is proportionate to that level of risk and whether the burden or restriction enhances UK international competitiveness.

(2) The Secretary of State may by regulations amend the principles in subsection (1), following consultation with such persons as he or she considers appropriate."

This section is perhaps the biggest culprit for vague language, and one where the lack of specificity could genuinely cause issues. It also contains a lot of unnecessary content.

The entirety of Section 2(1)(a) largely just restates vague concepts and is a rephrasing of current debates in AI Ethics. For example, what is meant by ‘appropriate’? For whom? How is ‘fairness’ designed? In their defence, I spent ~10,000 words exploring the legal concept of fairness as it relates to AI in my PhD and still had unanswered questions by the end of the chapter, so I can let them off with not offering definitions. However, throwing such terms around for seemingly no purpose beyond box-ticking does the legislation no service.

Particularly useless is 1(b)(iii) - does this really need to be said? This is a short Bill, and even then it wastes many words. Section 2(1)(c)(iv) is particularly troublesome, as it misses the entire point of the (obviously good) advice it must have received from prior input. The issue is not merely that AI fails to meet the needs of these groups (which it largely does fail to do), but that it often actively harms them. It is also tremendously unclear what is meant by ‘meet the needs’, making this more wasted wording in an otherwise short Bill.

A possible exception is 1(a)(v), which is a consideration not always given in this type of proposal and is good to see included.

Section 2(1)(d) is also a mixed bag. It describes proportionate, scaled responsibilities but lacks any detail, and it places UK international competitiveness alongside guarantees of citizens’ rights - two things that are nowhere near equal in importance and barely deserve to be in the same sentence.

3. Regulatory Sandboxes

(1) The AI Authority must collaborate with relevant regulators to construct regulatory sandboxes for AI. 

(2) In this section a “regulatory sandbox” is an arrangement by one or more regulators which— 

(a) allows businesses to test innovative propositions in the market with real consumers; 

(b) is open to authorised firms, unauthorised firms that require authorisation and technology firms partnering with, or providing services to, UK firms doing regulated activities; 

(c) provides firms with support in identifying appropriate consumer protection safeguards; 

(d) requires tests to have a clear objective and to be conducted on a small scale; 

(e) requires tests of products or services which are regulated activities to be conducted by firms authorised by or registered with the relevant regulator before starting the test.

(3) The Secretary of State may by regulations amend the description in subsection (2), following consultation with such persons as he or she considers appropriate.

This is actually a pretty interesting bit of the Bill, and I think it is a fairly good idea: it allows businesses to test new AI-related technologies in a way that is not stifled by the stricter parts of regulation, but with supervisory safeguards from appropriate bodies.

One thing I would say is that it is vital that these authorised firms remain aware of the regulations they are exempt from. It would be nice to see this in the legislation, but I know from experience that when regulatory exemptions are applied to technology, the developers are usually under a contractual obligation to stick as close to the legislation as practicable anyway. In practice, these sandboxes aren’t a carte blanche to do whatever the developer wants. There is often significant fear in the media whenever sandboxes are mentioned, and this has been no different, with a number of critics expressing concern about abuse of the system to create unsafe AI (which is a fair criticism) - but I think such critics often lack experience of actual regulatory environments and so underestimate the range of mitigating factors at play.

I would be interested in seeing more about the application criteria, though. It has always been my feeling that the sandboxes should be rarely used and only when there is a legitimate need to do so. I would be far more content to see public authorities or public infrastructure-focused commercial companies utilise the sandboxes rather than purely for-profit developments. This would all come under later guidance and policies though. 

In summary, it is a confidence boost to see 2(d) and 2(e) in there, keeping tests specific in scope and scale and under the supervision of an existing regulator, though the lack of detail in 2(b) is an area for improvement.

4. AI Responsible Officers
 


(1) The Secretary of State, after consulting the AI Authority and such other persons as he or she considers appropriate, must by regulations provide that any business which develops, deploys or uses AI must have a designated AI officer, with duties—

(a) to ensure the safe, ethical, unbiased and non-discriminatory use of AI by the business; 

(b) to ensure, so far as reasonably practicable, that data used by the business in any AI technology is unbiased (see section 2(1)(c)(iii)). 

(2) In the Companies Act 2006, section 414C(7)(b), after paragraph (iii) insert— 

“(iv) any development, deployment or use of AI by the company, and the name and activities of the AI officer designated under the Artificial Intelligence (Regulation) Act 2024,”. 

(3) The Secretary of State may by regulations amend the duties in subsection (1) and the text inserted by section (2), following consultation with such persons as he or she considers appropriate. 

This is another section based on a good idea, but the wording is just too vague and wide. My initial and most glaring criticism is the lack of qualifying criteria for an AI Responsible Officer. This part of the legislation does state that the Secretary of State will provide further regulations on the matter, so it is entirely possible that qualifying detail would be added then - however, I would have very much liked to see at least some detail here.

Whether in changes to this Bill as it progresses or in further guidance after the Act is passed, I would like to see criteria for what makes someone eligible to be an AI Responsible Officer. It should at minimum require some professional qualification, both to ensure the officer can do an adequate job of safeguarding given the importance of the position and to stop companies hiring low-level employees as silent, powerless scapegoats.

There is a good challenge to this criticism, which is that too high a qualification bar would make it difficult and prohibitively costly for smaller organisations to hire an appropriately qualified individual. In response, I would add the caveat that the required level of qualification should rise in line with either the size of the organisation or the potential risk of the AI systems. I won’t get bogged down here with wording and terminology, but the obvious implication would be that a major AI lab, nuclear power plant, or public authority would require a much more qualified individual than, say, a school or a furniture factory.

A good example of this is the use of SQEPs (Suitably Qualified and Experienced Persons) in nuclear regulation, which has worked very well thus far.

I would also highlight that in 4(2), the changes to the Companies Act 2006 may cause problems by allowing a very wide range of interpretations of ‘AI’. For example, I was once part of a team that trialled some new AI regulations with a series of major public-facing institutions, and part of that was a duty to report on AI systems used. Some reported only their most experimental AI systems, whereas others included things like automated payroll processes, design algorithms (essentially decision trees from user input), and in one case even Microsoft Excel macros. It was a really enlightening insight into how widely such terms can be interpreted, and I don’t think the writers of this Bill have adequately considered this prospect. They may want to undertake the daunting task of defining AI, but to be honest it shouldn’t be too difficult if they are defining it for a specific regulatory task rather than trying to define the actual field. Good examples have previously been written up by others in a similar situation here, here, and here.

Section 7 of this Bill does attempt to define AI, but that definition may not necessarily carry over to the Companies Act 2006, and, as in the example above, it could conceivably capture digital thermometers under 7(1)(a), Excel spreadsheets under 7(1)(b), and Microsoft Word’s spellcheck under 7(1)(c). See Section 7 further down this post for details.
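To make the breadth concrete, here is a deliberately trivial sketch - my own hypothetical example, not anything from the Bill or from that trial - of a rules-based script that arguably satisfies each limb of the Section 7 definition despite being nothing anyone would call AI:

```python
# A deliberately trivial, hypothetical example: a rules-based heating controller.
# Arguably it "perceives environments through the use of data" (7(1)(a)),
# "interprets data using automated processing" (7(1)(b)), and "makes
# recommendations, predictions or decisions" (7(1)(c)) "with a view to
# achieving a specific objective" - yet it is just a handful of if-statements.

def recommend_heating(sensor_readings_celsius: list[float], target: float = 20.0) -> str:
    """Average recent temperature readings and recommend an action."""
    average = sum(sensor_readings_celsius) / len(sensor_readings_celsius)  # "interpret data"
    if average < target - 1.0:
        return "turn heating on"    # a "decision" towards a "specific objective"
    if average > target + 1.0:
        return "turn heating off"
    return "no change"

print(recommend_heating([18.2, 18.9, 19.1]))  # -> "turn heating on"
```

Whether a court would actually read the definition this widely is another matter, but the drafting gives little to stop a cautious compliance team reporting this sort of thing.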

A final, and obvious, criticism is the very broad terminology used in 4(1)(a) and 4(1)(b). The legal and scientific definitions of ‘safe’, ‘ethical’, ‘unbiased’, and ‘non-discriminatory’ differ significantly. I also suspect that the word ‘reasonable’ in the phrase ‘so far as reasonably practicable’ will become a sizeable battleground in case law should this Act pass. Tests of reasonableness are always a battleground in legislation, but in this instance it is likely to be more so. In the UK, ‘necessity’ and ‘proportionality’ are also likely to be core considerations, as they are in much of data law.

5. Transparency, IP obligations and labelling

(1) The Secretary of State, after consulting the AI Authority and such other persons as he or she considers appropriate, must by regulations provide that— 

(a) any person involved in training AI must— 

(i) supply to the AI Authority a record of all third-party data and intellectual property (“IP”) used in that training; and 

(ii) assure the AI Authority that— (A) they use all such data and IP by informed consent; and (B) they comply with all applicable IP and copyright obligations;

 (iii) any person supplying a product or service involving AI must give customers clear and unambiguous health warnings, labelling and opportunities to give or withhold informed consent in advance; and

(iv) any business which develops, deploys or uses AI must allow independent third parties accredited by the AI Authority to audit its processes and systems. 

(2) Regulations under this section may provide for informed consent to be express (opt-in) or implied (opt-out) and may make different provision for different cases. 

This is a really interesting section, and it probably helps to go chronologically here.

Firstly, again, it is likely that the Secretary of State and the AI Authority would introduce much more detailed guidance, so some of my criticisms about lack of clarity would probably be addressed to a greater or lesser degree further down the line. However, for the moment I have some concerns about phrasing and scope.

The first term of concern is ‘any person’ in 1(a). Read together with 1(a)(i), it means that every high school, college, or university computer science student, every self-learner, and every researcher would have to submit a record of all third-party data they use in training. This is an issue for two reasons.

Firstly, it imposes a reporting burden on people who the Bill is clearly not aimed at. When we consider AI harms, is the teenager at home self-learning from YouTube tutorials really the type of developer we are concerned about? 

Secondly, I have concerns about what kind of detail may be required in the record. I do AI regulation as part of my work, and whereas in some cases those records would be easy to assemble and submit, there are situations in which doing so would be extremely resource-intensive, or even unrealistic to provide as compliance evidence in either contractual or regulatory terms. The obligation also only targets the individuals and organisations actually training the AI, rather than the people using it, which seems to be a gap: there appears to be little to stop an AI being trained abroad and then sold within the UK.
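To illustrate why this could be burdensome, below is a minimal sketch of what a per-dataset record of third-party data and IP might look like. To be clear, the Bill specifies no format at all; every field name here is my own assumption, included purely to show how quickly the licence and consent status of real training data becomes unclear or mixed:

```python
# Purely illustrative: a minimal sketch of the kind of per-dataset record that
# might be compiled under section 5(1)(a)(i). The structure and field names are
# my own assumptions - the Bill specifies none of this.

from dataclasses import dataclass, asdict
import json

@dataclass
class TrainingDataRecord:
    dataset_name: str     # e.g. a scraped corpus or a licensed archive
    source: str           # where the data came from
    licence: str          # licence or permission relied upon
    consent_basis: str    # how "informed consent" (5(1)(a)(ii)(A)) is claimed
    third_party_ip: bool  # whether third-party IP is included

records = [
    TrainingDataRecord("example-web-corpus", "public web crawl",
                       "unclear / mixed", "none documented", True),
    TrainingDataRecord("licensed-news-archive", "commercial licence",
                       "contractual", "licence agreement", True),
]

# Even this toy register hints at the problem: a modern model may draw on
# thousands of sources whose licence and consent status are unknown or mixed,
# which is exactly why assembling such a record can be so resource-intensive.
print(json.dumps([asdict(r) for r in records], indent=2))
```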

Subsection 1(a)(iv) is also quite broad, as it could include almost any business. It is also unclear whether the legislation’s definition of ‘business’ extends to other kinds of organisation. Is a non-profit or NGO a business? What of a police force? What about restricted sectors such as nuclear or export controls? That last example is mentioned in the regulatory sandboxes section, but not here. This vagueness is also a problem, to a somewhat lesser extent, in 1(a)(iii), where it is left ill-defined how remote from the actual product or service in question something ‘involving AI’ can be.

6. Public Engagement


“The AI Authority must— 

(a) implement a programme for meaningful, long-term public engagement about the opportunities and risks presented by AI; and 

(b) consult the general public and such persons as it considers appropriate as to the most effective frameworks for public engagement, having regard to international comparators."

This is a very small section, and it’s quite plainly worded. I do hope, however, that the public engagement will be non-biased (as in non-politically aligned, neutrally reported, and funding not related to consumption), and led by people focused on the rights of the public rather than the ambitions of government or industry. AI Ethicists would be particularly useful here. I am hopeful that we see a much greater emphasis on contemporary risks than on long-term or existential risk, as those are of much more immediate and addressable concern to the public.

I imagine that opinion will be tremendously unpopular on this forum, but it’s a matter of risk likelihood rather than risk impact when it comes to such systems - especially when they are under the control of public authorities.

7. Interpretation

“(1) In this Act “artificial intelligence” and “AI” mean technology enabling the programming or training of a device or software to— 

(a) perceive environments through the use of data; 

(b) interpret data using automated processing designed to approximate cognitive abilities; and 

(c) make recommendations, predictions or decisions; with a view to achieving a specific objective.

(2) AI includes generative AI, meaning deep or large language models able to generate text and other content based on the data on which they were trained."

I already discussed the interpretation challenges earlier in this analysis, so there’s not much more to say here. Included for reference.


8. Regulations

“(1) Regulations under this Act are made by statutory instrument. 

(2) Regulations under this Act may create offences and require payment of fees, penalties and fines. 

(3) A statutory instrument containing regulations under section 1 or 2 or regulations covered by subsection (2) may not be made unless a draft of the instrument has been laid before and approved by resolution of both Houses of Parliament. 

(4) A statutory instrument containing only regulations not covered by subsection (3) is subject to annulment in pursuance of a resolution of either House of Parliament.

(5) A statutory instrument containing regulations applying to Wales, Scotland or Northern Ireland must be laid before Senedd Cymru, the Scottish Parliament or the Northern Ireland Assembly respectively before being made.”

This section is legal procedure only, and of little relation to this analysis. Included for reference.

9. Extent, commencement and short title
 


“(1) This Act extends to England and Wales, Scotland and Northern Ireland. 

(2) This Act comes into force on the day on which it is passed. 

(3) This Act may be cited as the Artificial Intelligence (Regulation) Act 2024." 

This section is legal procedure only, and of little relation to this analysis. Included for reference.

Summary

This Bill is an interesting start on AI regulation in the UK in terms of producing AI-specific statutory law rather than relying on law found elsewhere in data protection or technology, or on case law such as Bridges v South Wales Police. It’s good to see an attempt made - even if it is via a Private Member’s Bill. 

There are some strengths, particularly in allowing regulatory safeguards without stifling necessary innovation, but the Bill also has some significant weaknesses which I feel need to be addressed should it, however unlikely, eventually be passed.

A major weakness of the Bill is vague terminology, but that is a double-edged sword, as vague terminology is both a strength and a weakness in legislation. In fact, vague terminology is what (perhaps unintentionally) future-proofed Section 1 of the Protection of Children Act 1978 (PCA 1978) and Section 160 of the Criminal Justice Act 1988 (CJA 1988) against AI tools used to create CSAM, courtesy of both Acts’ inclusion and definition of ‘pseudo-photographs’. However, the right balance has to be struck, and in this Bill I feel the language falls on the side of being too vague.

I also feel that too much is left to the discretion of the Secretary of State and in guidance to be subsequently published by regulators. There needs to be more specific language used in many areas to adequately lay out who is regulated and to what extent. The fact that Secretaries of State and regulators change more rapidly than law is both a blessing and a curse here. Faster isn’t always better.

I also feel that many or all contributors to this Bill have little experience in actual regulatory compliance, as there are many examples where I would be unsure as to what to include in compliance evidence to such an authority. Even if further guidance is published on this, more specific principles should be enshrined in the Bill itself.

It will be interesting to watch this Bill’s passage through the various checks and balances, and see what becomes of it. Even in the likely event it loses steam and fails, it is an interesting example of regulatory ambitions regarding AI in the UK.



Edit log: Added directions to the summary in the introduction, for those in a rush but wanting more than a one-sentence tl;dr.
 
