
A professor I'm friendly with has been teaching a course on AI ethics this semester, and he asked me if I could come give a guest lecture on "AI apocalypse" scenarios. What should I include in the lecture?

Details:

  • Audience is mostly graduate engineering students. Some are in a program focused on AI, others are in a systems engineering program and don't otherwise have much knowledge about AI beyond this course.
  • This will be, I think, the final class of the semester. Previous classes have covered topics like privacy, bias, explainability, social coordination failures (including race-to-the-bottom dynamics), self-driving cars, AI in social media, lethal autonomous weapons, AI in healthcare, implementing ethics in organizations, and the economic impacts of AI. I did not attend those lectures, so I don't know the exact details of what was or wasn't covered.
  • I have leeway to include pretty much anything I want in the class and to focus on any topic(s) I want. The professor would like me to include something about forecasting, but that's optional.
  • I can optionally assign something short before class, but since it is the end of the semester, nothing more than that. I don't think there will be any homework based on the class.
  • The class will be 2 hours long.
  • I do not have a huge amount of time to prepare for the lecture - it's relatively soon (April 27) and I have multiple other work / school / family obligations I need to attend to between now and then.

If anybody has relevant material I could use, such as slides or activities, that would be great! Also, if anybody wants to help develop the material for this class, please message me (preferably at my work email - Aryeh.Englander@jhuapl.edu).

As a bonus, I expect that material for a class of this sort may turn out to be useful for plenty of other people on this and related forums, either for themselves or as a tool they can use when presenting the same topic to others.

[Note: I am posting this here with permission from the professor.]

2 Answers

AGI Safety Fundamentals has the best resources and reading guides. The best short intros are the very short (500-word) intro and a slightly longer one, both from Kelsey Piper.

You might find a lecture of mine useful:
 

For completeness, you might want to examine counterarguments / challenges, such as suggestions that:

  • AI + nuclear weapons, and nuclear war itself, are more immediate neglected risks; the predicted arrival date of AGI tends to get pushed back five years every five years or so, and a nuclear war might push it back further.

  • selection effects favouring IT types within academia and EA may lead to AGI being over-emphasised as a GCR in both communities; also, it's a really interesting and absorbing topic, so who wouldn't want to prioritise it?

  • just because it's a high priority doesn't mean everyone should be working on it!

  • more EAs should go into defence, RAND, and the intelligence agencies, so that at least a few EAs know what is going on there and the field isn't dominated by hawks.
