I will be running a large survey to find out which arguments for caring about existential risk get people hooked and which don't. As there are many such arguments, I'd appreciate feedback, via this form, on which ones to prioritize for message testing. It should only take about three minutes!
 

Feedback on my research proposal would also be greatly appreciated and could influence the direction I take significantly. This too should only take a few minutes. 


Thanks for running the survey, I'm looking forward to seeing the results!

I've filled out the form, but I find some of the potential arguments problematic. It could be worth seeing how persuasive others find these arguments, but I would be hesitant to promote arguments that don't seem robust. In general, I think more disjunctive arguments work well.

For example (being somewhat nitpicky):

Everyone you know and love would suffer and die tragically.  

Some existential catastrophes could happen painlessly and quickly.

We would destroy the universe's only chance at knowing itself...

Aliens (maybe!) or (much less likely imo) another intelligent species evolving on Earth.

There are co-benefits to existential risk mitigation: prioritizing these risks means building better healthcare infrastructure, better defense against climate change, etc. 

It seems that work on biorisk prevention does involve "building better healthcare infrastructure", but it may be misleading to characterise it this way, since I imagine people think of something different when they hear that term. There are also drawbacks to some (proposed) existential risk mitigation interventions.
 

Thanks a lot for your thoughtful feedback! 

I share the hesitancy around promoting arguments that don’t seem robust. To keep the form short, I only tried to communicate the thrust of the arguments. There are stronger and more detailed versions of most of them, which I plan on using. In the cases you pointed to:

Some existential catastrophes could indeed happen rather painlessly. But some could also happen painfully, so while the argument is perhaps not all-encompassing, I think it still stands. Nevertheless, I'll change it to something more like "you and everyone you know and love will die prematurely."

Other intelligent life is definitely a possibility, but even if it's a reality, I think we can still consider ourselves cosmically significant. I'll use a less objectionable version of this argument, like "... destroy what could be the universe's only chance..."

I got the co-benefits argument from this paper, which lists a number of co-benefits of GCR work, one of which I could swap in for the "better healthcare infrastructure" bit. I'll try to get a few more opinions on this.

In any case, thanks again for your comment—I hadn’t considered some of the objections you pointed out!
