I made an AI generated podcast of the 2021 MIRI Conversations. There are different voices for the different participants, to make it easier and more natural to follow along with.
This was done entirely in my personal capacity, and not as part of my job at MIRI. I did this because I like listening to audio and there wasn’t a good audio version of the conversations.
Spotify link: https://open.spotify.com/show/6I0YbfFQJUv0IX6EYD1tPe
RSS: https://anchor.fm/s/1082f3c7c/podcast/rss
Apple Podcasts: https://podcasts.apple.com/us/podcast/2021-miri-conversations/id1838863198
Pocket Casts: https://pca.st/biravt3t
I do think you probably should (pre-)order If Anyone Builds It, Everyone Dies though.
Thanks for your comment :) Sorry you're finding all the book posts annoying; I decided to post here after seeing that there hadn't been a post on the EA Forum.
I’m not actually sure what book content I’m allowed to talk about publicly before the launch. Overall, the book is written much more for an audience that is new to the AI x-risk arguments (e.g., policymakers and the general public), and it is less focused on providing new arguments to people who have been thinking and reading about this for years (although I do think they’ll find it an enjoyable and clarifying read). I don’t think it’s trying to go 15 arguments deep in a LessWrong argument chain. That said, I think there is new stuff in there: the arguments are clearer than before, there are novel framings, and I would guess there are at least some things in there that you would find new. I don’t know if I would expect people from the “Pope, Belrose, Turner, Barnett, Thornley, 1a3orn” crowd to be convinced, but they might appreciate the new framings. There will also be related online resources, which I think will cover more of the argument tree, although again, I don’t know how convincing those will be to people who are already in deep.
Here’s what Nate said in the LW announcement post:
If you're a LessWrong regular, you might wonder whether the book contains anything new for you personally. The content won’t come as a shock to folks who have read or listened to a bunch of what Eliezer and I have to say, but it nevertheless contains some new articulations of our arguments, that I think are better articulations than we’ve ever managed before.
I would guess many people from the OpenPhil/Constellation cluster would endorse the book as a good distillation. But insofar as it's moving the frontier of online arguments about AI x-risk forward, it will mainly be by stating the arguments more clearly (which imo is still progress).
Could/should big EA-ish coworking spaces like Constellation pay to have far-UV installed? (either on their floors specifically or for the whole building)