These two talks are unrelated, but are interesting conversations about AI Safety from people outside the AI Safety Community.

Gebru: Eugenics and the Promise of Utopia through Artificial Intelligence

Most of the talk is a critical analysis of EA and adjacent communities, although at 37 minutes she pivots to discussing visions of AGI, and at about 47 minutes she discusses why AGI is inherently unsafe. I think this last section, on why AGI is inherently unsafe, will get a lot of agreement from people here.



Lazar: Generative AI and the New Bing

The talk is much less focused on AI x-risk specifically, although much of it is likely relevant for people thinking about these issues. The section from around 34 to 39 minutes, whilst short, is explicitly relevant to EA concerns.





The second video seems really interesting to me, as someone who's into moral philosophy. The first video personally falls into "it's bad on purpose to make you click" territory, though.

If you watch from the point I suggest in the link, I think it's less bad than you make out.

I skimmed from 37:00 to the end. It wasn't anything groundbreaking. There was one incorrect claim ("AI safetyists encourage work at AGI companies"), I think her apparent moral framework that puts disproportionate weight on negative impacts on marginalised groups is not good, and overall she comes across as someone who has just begun thinking about AGI x-risk and so seems a bit naive on some issues. However, "bad on purpose to make you click" is very unfair.

But also: she says that hyping AGI encourages races to build AGI. I think this is true! Large language models at today's level of capability - or even somewhat higher than this - are clearly not a "winner takes all" game; it's easy to switch to a different model that suits your needs better, and I expect the most widely used systems to be the ones that work the best for what people want them to do. While it makes sense that companies will compete to bring better products to market faster, it would be unusual to call this activity an "arms race". Talking about arms races makes more sense if you expect that AI systems of the future will offer advantages much more decisive than typical "first mover" advantages, and this expectation is driven by somewhat speculative AGI discourse.

She also questions whether AI safetyists should be trusted to improve the circumstances of everyone vs their own (perhaps idiosyncratic) priorities. I think this is also a legitimate concern! MIRI were at some point apparently aiming to 1) build an AGI and 2) use this AGI to stop anyone else building an AGI (Section A, point 6). If they were successful, that would put them in a position of extraordinary power. Are they well qualified to do that? I'm doubtful (though I don't worry about it too much, because I don't think they'll succeed).

There was one incorrect claim ("AI safetyists encourage work at AGI companies")

"AI safetyists" absolutely do encourage work at AGI companies. To take one of many examples, 80,000 Hours are "AI safetyists", and their job board currently encourages work at OpenAI, Deepmind, and Anthropic, which are AGI companies.

(I haven't watched the video.)

Fair enough, she mentioned Yudkowsky before making this claim and I had him in mind when evaluating it. (Incidentally, I wouldn't mind picking a better name for the group of people who do a lot of advocacy about AI x-risk, if you have any suggestions.)
