Beaut. Thanks for the detailed feedback!
I think these suggestions make sense to implement immediately:
These things will require a bit of experimentation but are good suggestions:
re: the biosecurity map
did you realise that the AIS map is just pulling all the coordinates, descriptions, etc. from a Google Sheet?
if you've already got a list of orgs and stuff, it's not hard to turn it into a map like the AIS one by copying the code, drawing a new background, and swapping out the URL of the spreadsheet.
This is what I meant, yeah.
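For anyone who wants to try it, here's a minimal sketch of that approach in Python, assuming the sheet is published to the web as CSV. The sheet URL, the column names (name, lat, lon), and the background image filename are all placeholders I made up; the real AIS map's setup may differ.

```python
# A minimal sketch, not the actual AIS map code. Assumes the sheet is
# published to the web as CSV and has columns "name", "lat", "lon".
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical URL: swap in the published-CSV link of your own sheet.
SHEET_CSV = "https://docs.google.com/spreadsheets/d/<SHEET_ID>/export?format=csv"

orgs = pd.read_csv(SHEET_CSV)

fig, ax = plt.subplots(figsize=(10, 6))

# Your redrawn background, stretched to cover the whole lat/lon range.
background = plt.imread("map_background.png")
ax.imshow(background, extent=[-180, 180, -90, 90], aspect="auto")

# One dot per org, labelled with its name.
ax.scatter(orgs["lon"], orgs["lat"], s=40, color="crimson", zorder=2)
for _, row in orgs.iterrows():
    ax.annotate(row["name"], (row["lon"], row["lat"]),
                xytext=(4, 4), textcoords="offset points", fontsize=7)

ax.set_axis_off()
plt.savefig("biosecurity_map.png", dpi=200, bbox_inches="tight")
```

Once the sheet is the single source of truth, whoever maintains the org list can update the map without touching the code at all.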
There's also an issue of "low probability" meaning fundamentally different things in the case of AI doom vs supervolcanoes.
P(supervolcano doom) > 0 is a frequentist statement. "We know from past observations that supervolcano doom happens with some (low) frequency." This is a fact about the territory.
P(AI doom) > 0 is a Bayesian statement. "Given our current state of knowledge, it's possible we live in a world where AI doom happens." This is a fact about our map. Maybe some proportion of technological civilisations do in fact get exterminated by AI. But maybe we're just confused and there's no way this could ever actually happen.
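To make the distinction concrete, here's a toy calculation with completely made-up numbers:

```python
# Toy illustration of the two kinds of "low probability".
# All numbers here are invented for the sake of the example.

# Frequentist: estimate an annual rate from the geological record.
# Suppose we count roughly 4 supereruptions in 1,000,000 years:
eruptions_observed = 4
years_observed = 1_000_000
p_supervolcano_per_year = eruptions_observed / years_observed  # ~4e-6
# This estimates a stable frequency out in the territory.

# Bayesian: there's no record of AI doom to count. Instead we spread
# credence over hypotheses about which world we actually live in:
p_doom_given_hypothesis = {
    "doom can't really happen": 0.0,
    "alignment is hard but tractable": 0.1,
    "alignment is effectively impossible": 0.9,
}
credence_in_hypothesis = {
    "doom can't really happen": 0.5,
    "alignment is hard but tractable": 0.4,
    "alignment is effectively impossible": 0.1,
}
p_ai_doom = sum(
    p_doom_given_hypothesis[h] * credence_in_hypothesis[h]
    for h in p_doom_given_hypothesis
)
# This number describes our map; new arguments (not new eruptions)
# are what move it.

print(f"P(supervolcano doom)/yr ~ {p_supervolcano_per_year:.1e}")
print(f"P(AI doom) ~ {p_ai_doom:.2f}")
```

The first number only moves if we find more eruptions in the record; the second moves whenever we become less confused.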
I have a master's degree in machine learning and I've been thinking a lot about this for like 6 years, and here's how it looks to me:
I’m paralysed by the thought that I really can’t do anything about it.
IMO, a lot of people in the AI safety world are making preventable mistakes, and there's real value in making the scene more legible. If you're a content writer, then honestly trying to understand what's going on and communicating your evolving understanding is actually pretty valuable. Just write more posts like this.