Good AI governance is pretty easy. We just have to:

1. solve a bunch of 2,000+ year-old moral philosophy questions (e.g., 'what is good', 'what is the right action in a given circumstance', 'what are good rules for action'); then
2. figure out how to technically implement the answers in a non-deterministic software / algorithmic form; then
3. get international agreement on complex systems of regulation and governance to ensure that technical implementation is done correctly and monitored for compliance, without compromising values of democracy, the right to privacy, free expression, etc.; then
4. ensure whatever governance arrangements we establish are sufficiently robust or flexible to respond to the transformative impacts of powerful AI on every part of society and industry, all within the next ~5-20 years, before the technical capacity of these systems outpaces our ability to affect them.
Hey Alexander. I'm genuinely not sure whether you are being sarcastic or not here; intuitively, many of these steps seem very difficult to me.
I read Alexander as being quite sarcastic.
In that case, excellent ;).
Yes, this was a cheeky or sarcastic comment. I wrote it to share with some colleagues unfamiliar with AI safety who were wondering what it looked like to have 'good' outcomes in AI policy & governance.
Upvoting because the forum needs more sass / diversity in tone of voice.
Strong upvote because you are making important and clear arguments.