Trying to learn
Appreciate the post quite a bit, thank you for taking the time to share.
I use it to see if I've missed anything significant, especially since I've started looking at LessWrong more (uh, apologies about that? It's more of a cause-specific thing with AI, plus getting more into rationalism).
I don't think I typically click on that many links, but I might leave the digest unread in my inbox until I can give it a complete read-through. I could imagine reading through it, seeing some post that sends me down a rabbit hole, and by the time I get back to the email tab needing to just mark it unread to review again. I wouldn't be surprised if that has happened, that is.
Idk, not much more to say: I like the setup and do actually use it as described above as a sort of, well, I guess newsletter, huh.
Hi! I have little time, but I have spoken with someone who was really excited about the potential of:
Re: the Existential Risk Persuasion Tournament, I wonder if one thing to consider with forecasters is that they already think a lot about the future; asking them to then imagine a future where a ton of their preconceived predictions may not occur could be a significant model shift. Or something like:
forecasters are biased toward the status quo because it's easier to predict from. Imagine you had to take everything into account all at once in your prediction: "will X marry by 2050? Well, there's a 1% chance both parties are dead because of AI…" feels absurd.
But I guess forecasters also have a more accurate world model anyway. This still felt like something worth writing out, since I was trying to explain the forecasters' low x-risk estimates. (Again, status quo bias against extreme changes.)
FWIW, I think this post: https://forum.effectivealtruism.org/posts/J4cLuxvAwnKNQxwxj/how-does-ai-progress-affect-other-ea-cause-areas
is a way better version of what I was trying to get at here, and MacAskill's answer is pretty good.
In my extreme case, I think it looks something like: we've got really promising solutions in the works for alignment, and they will arrive in time to actually be implemented.
Or perhaps, in the case where solutions aren't forthcoming, we have some really robust structures (international governments and big AI labs coordinating, or something) to avoid developing AGI.
Nice post, and I appreciate you noticing something that bugged you and posting about it in a pretty constructive manner.
This is very fair hahaha
Casual comment on my own post here, but even the x-risk tag symbol is a supervolcano. Imagine we just presented, over and over again, symbols that… aren't actually very representative of the field? I guess that's my main argument here.
I think this is a fair point, thanks for making it. I certainly overgeneralize at times here: I believe I've experienced moments that indicate such a schism, but not enough to just label it as such in a public post. Idk!