Quick takes by Alexander Saeri.

Good AI governance is pretty easy.

We just have to

  1. solve a bunch of 2000+ year-old moral philosophy questions (e.g., 'what is good', 'what is the right action in a given circumstance', 'what are good rules for action'), then
  2. figure out how to technically implement those answers in a non-deterministic software / algorithmic form, then
  3. get international agreement on complex systems of regulation and governance to ensure that the technical implementation is done correctly and monitored for compliance, without compromising values of democracy, the right to privacy, free expression, etc.; then
  4. ensure whatever governance arrangements we establish are sufficiently robust or flexible to respond to the transformative impacts of powerful AI on every part of society and industry,
  5. all within the next ~5-20 years, before the technical capacity of these systems outpaces our ability to affect them.

Hey Alexander. I'm genuinely not sure whether or not you are being sarcastic here; intuitively, many of these steps seem very difficult to me.

I read Alexander as being quite sarcastic. 

In that case, excellent ;)

Yes, this was a cheeky, sarcastic comment. I wrote it to share with some colleagues unfamiliar with AI safety who were wondering what 'good' outcomes in AI policy & governance would look like.

Upvoting because the forum needs more sass / diversity in tone of voice.

Strong upvote because you are making important and clear arguments.
