I’ve heard a number of people say that it’s unclear what the technical contours of a global AI treaty would look like. That is true - but it’s not actually an obstacle to negotiating an international treaty.

I’ll try to explain why this isn’t a good objection, but the short version is that when countries have clear and largely shared goals, negotiations tend to produce strong treaties. So the important questions are not about the exact rules. The critical questions are whether there really is a joint global risk that requires action - and experts agree there is - and whether verification and enforcement are possible - and experts say they are. So the problem isn’t a technical one; it’s a question of whether we can reach an agreement. And despite facile “we can’t stop until they do” arguments, we can and should try to do better.

In the linked LessWrong post, I give examples of other treaty processes and explain why I think governments should, and need to, start negotiating even before all the details are worked out.
