
I've encountered a widespread belief that international treaties on AI are impossible, that national AI regulation is impossible in any country, and that law is no longer the right venue for controlling AI.

People with this view are typically not lawyers, and tend to be US-centric. By US-centric, I mean they assume the US is the only country that can shape international law.

Their logic is usually:

  1. The US administration has made law an impossible avenue.
  2. The UN is weak.
  3. The US / China race means no country will pass laws.

As a result:

  1. We should pursue a technical solution to AI safety.
  2. We should pursue voluntary industry standards.
  3. We should pursue public-private partnerships.

These are all weak replacements for proper liability mechanisms. Companies care about class action lawsuits and actual punishment.

Law is Realistic

In the past three years, laws governing AI have been passed by the EU (27 countries), China, Japan, South Korea, and Singapore, with proposals floated across Australia, New Zealand, the UK, and several US states. These are just the ones I know of off the top of my head.

There is a consensus on transparency, fairness, explainability and related topics. This is admittedly a baseline minimum.

In particular, China has passed several strong laws on AI, covering agents, deepfakes, and labelling, all of which are said to be impossible in the US context because of competition with China. China has just proved it is possible, yet this zombie argument persists.

We Have Done More in Worse Times

International laws governing nuclear weapons emerged at the height of the Cold War. 

The idea that we are in unprecedented times, and that this prevents lawmaking, is historically false. International governance has succeeded in far worse circumstances; most such mechanisms arose between countries in conflict.

There seems to be an ahistorical, unevidenced view that vague disagreements between superpowers mean no deals can be made. This is not true. Our current situation is a far lower hurdle than the Cold War, and this view reflects doomerism rather than reality.

Obeying the Overton Window is Not Leadership

The Overton Window is the principle in political science that the range of acceptable positions shifts according to the extremes. As the far right gains power, left and centre-left positions become "impossible" while centre-right views gain prominence.

This shift is very evident in the publishing record of major academics and computer scientists.

Leadership is about taking unpopular stances, persuading people and shifting the argument. Obeying the Overton Window is not leadership.

I'm particularly disappointed by researchers who pushed for regulation two years ago now talking about watered down ethical guidelines, industry standards and public-private partnerships. This shift is irresponsible and will not achieve the outcomes they seek.

The gold standard for preventing AI harm is still regulation. Imagine proposing to deregulate cars or food safety: that would be an untenable position. Why should we settle for less than food-safety-level protection with a dangerous new technology?
