Hi - a lil bit of anonymity is intended here so I can feel freer to contribute and engage more closely with this community. I'm trying to learn quickly by experimenting.
It turns out I'm also rather busy, so I may not have a lot of time generally. Still trying to iterate, though.
FWIW, I think this post: https://forum.effectivealtruism.org/posts/J4cLuxvAwnKNQxwxj/how-does-ai-progress-affect-other-ea-cause-areas
is a way better version of what I was trying to get at here, and MacAskill's answer is pretty good.
In my extreme case, I think it looks something like this: we've got really promising solutions in the works for alignment, and they'll arrive in time to actually be implemented.
Or perhaps, in the case where solutions aren't forthcoming, we have some really robust structures (international governments and the big AI labs coordinating, or something like that) to avoid developing AGI.
Nice post, and I appreciate you noticing something that bugged you and posting about it in a pretty constructive manner.
Casual comment on my own post here, but even the x-risk tag symbol is a supervolcano. Imagine we keep presenting, over and over, symbols that… aren't actually very representative of the field? I guess that's my main argument here.
I think this is a fair point; thanks for making it. I certainly overgeneralize at times here: I believe I've experienced moments that indicate such a schism, but not enough of them to justify labeling it as such in a public post. Idk!
Agreed! I think Geoffrey Miller makes this point rather excellently here:
I updated a bit from this post toward being more concerned about the AIs themselves; I think your depiction really evoked my empathy. I'd previously been so concerned with human doom that I'd almost refused to consider it, but going forward I'll definitely make an effort to be conscious of this sort of possibility.
For a fictional representation of my thinking (what your post reminded me of), Ted Chiang has a short story, "The Lifecycle of Software Objects," about virtual beings that can be cloned, some of which are potentially abused.
Re: the Existential Risk Persuasion Tournament, one thing to consider with forecasters is that they think a lot about the future, so asking them to imagine a future where a ton of their preconceived predictions may not occur could be a significant model shift. Or something like:
Forecasters are biased toward the status quo because it's easier to predict from. Imagine you had to take everything into account all at once in your prediction: "Will X marry by 2050? Well, there's a 1% chance both parties are dead because of AI…" feels absurd.
But I guess forecasters also have a more accurate world model anyway. Still, this felt like something I wanted to write out, since I was trying to explain the forecasters' low x-risk estimates. (Again, status quo bias against extreme changes.)