I will be running a large survey to find out which arguments for caring about existential risk get people hooked and which don’t. Since there are many such arguments, I’d appreciate feedback on which ones to prioritize for message testing via this form. It should only take about three minutes!
Feedback on my research proposal would also be greatly appreciated and could significantly influence the direction I take. This too should only take a few minutes.
Thanks a lot for your thoughtful feedback!
I share the hesitancy around promoting arguments that don’t seem robust. To keep the form short, I tried to communicate only the thrust of each argument. There are stronger and more detailed versions of most of them, which I plan to use. On the cases you pointed to:
Some existential risks could certainly play out rather painlessly. But others could be painful, so while the argument is perhaps not all-encompassing, I think it still stands. Nevertheless, I’ll change it to something more like “you and everyone you know and love will die prematurely.”
Other intelligent life is certainly a possibility, but even if it exists, I think we can still consider ourselves cosmically significant. I’ll use a less objectionable version of this argument, such as “... destroy what could be the universe’s only chance…”
I got the co-benefits argument from this paper, which lists a number of co-benefits of GCR work, one of which could replace the “better healthcare infrastructure” bit. I’ll try to get a few more opinions on this.
In any case, thanks again for your comment—I hadn’t considered some of the objections you pointed out!