In particular, how many hours of study would this take for someone with minimal background in computer science and AI? I’m also curious which courses and books would get me to a first-principles understanding fastest.

By first principles, I mean that I can break down the arguments into the most basic parts so that I can describe risk at a high level in simple terms, but then also break those terms down into constituent parts such that I could reconstruct ideas like machine learning, gradient descent, interpretability, etc. in a satisfactory and complete way without any outside help.
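As an example of the kind of reconstruction described above, here is a minimal sketch of gradient descent, the optimization procedure underlying most machine learning. This is an illustrative toy (a one-dimensional quadratic loss chosen for this sketch, not anything from a specific course), but being able to write something like it from scratch is a reasonable test of the understanding I'm after:

```python
# Minimal gradient descent on a toy loss f(x) = (x - 3)^2,
# whose minimum is at x = 3. Repeatedly step opposite the gradient.
def gradient_descent(lr=0.1, steps=100):
    x = 0.0                      # arbitrary starting point
    for _ in range(steps):
        grad = 2 * (x - 3)       # derivative of (x - 3)^2
        x -= lr * grad           # move downhill by lr * gradient
    return x

print(gradient_descent())        # converges toward 3.0
```

The same loop, with the gradient computed automatically over millions of parameters, is essentially how neural networks are trained.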

I want to be able to clearly picture how x-risk might take place and be able to manipulate the variables that lead to different possible scenarios in my mind.

Again, especially interested in resources to help me do this.

The "Modeling Transformative AI Risk" project, which I assisted with, aims to explain exactly this, and we have a fairly extensive, though not fully comprehensive, report on the conceptual models we think are critical, online here. (A less edited and polished version is on the Alignment Forum here.)

I think you could probably read through the report itself in a week, going slowly and thinking through the issues - but doing so requires background in many of the questions discussed on Robert Miles' YouTube channel and in the AGI safety fundamentals resources that others recommended. Assuming a fairly basic understanding of machine learning and optimization, which probably requires the equivalent of an undergraduate degree in a related field, the linked material on AI safety questions, plus the report, should get you to a fairly good gears-level understanding. I'd expect that 3 months of research and reading by someone with a fairly strong undergraduate background, or closer to a year for someone starting from scratch, would be sufficient to build an overall gears-level model of the different aspects of the risk.

Given that, I will note that contributing to solving the problems requires quite a bit more investment in skill building - and depending on what you plan to do to address the risks, this could be equivalent to an advanced degree in mathematics, machine learning, policy, or international relations.

Thank you, this is very much what I was looking for.

Here's the most up-to-date version of the AGI Safety Fundamentals curriculum. Be sure to check out Richard Ngo's "AGI safety from first principles" report. There's also a "Further resources" section at the bottom linking to pages like "Lots of links" from AI Safety Support.

Great question, I'd love to know too. The one thing I can recommend is Robert Miles' YouTube Channel, although he hasn't uploaded in a while.

If I may add to your post, I'd also like to know what's missing when it comes to reducing the risk. Is it a lack of creativity, a lack of people, or a lack of something else? Thanks!


I don't have a great sense of how long what you're describing would take, but here is a collection of relevant resources.

Thanks! I'm already taking this course.