All of michaelB's Comments + Replies

This comment reads to me as unnecessarily adversarial and as a strawman of the authors' position.

> It sounds to me like their real complaint is something like: How dare EA/utilitarianism prioritize other things over my pet causes, just because there's no reason to think that my pet causes are optimal?

I think a more likely explanation of the authors' position includes cruxes like:

  • disagreeing with the assumption of maximization (and underlying assumptions about the aggregation of utility), such that arguments about optimality are not relevant
  • moral partiality
... (read more)
5 · Ben Dean · 1y
  Speaking generally, it does seem like EA critics often equivocate between these two positions. For example, saying EA is bad for diverting money from soup kitchens to bednets, but not being willing to say money should be diverted the other way. IMO, focusing on philosophical issues like utilitarianism can equivocate further by implying more specific disagreements without really defending them. (I don't have any opinions about this book in particular.)

I'm responding to academic work by (at least some) professional academics, published by a top academic press. The appropriate norms for professional academic criticism are not the same as those for (say) making a newcomer feel welcome on the forum. It is (IMO) absolutely appropriate to state clearly when one thinks academic work is of low quality, and to explain why, as I did in my comment.

You're certainly welcome to form a different opinion of their work. But you shouldn't accuse me of "bad faith" just because I assessed their work more ... (read more)

Thanks for the post! Minor quibble, but it bothers me that "people" in the title is taken to mean "British adults". I would guess that the dietary choices of Brits aren't super indicative of the dietary choices of people in general, and since the Forum isn't a British platform, I don't think Brits are the default reference class for "people".

4 · EdMathieu · 2y
Thanks for the feedback! I've edited the title.

  • Military/weapons technologies, in particular nuclear weapons, biological weapons, chemical weapons, and cyberattacks
  • Infectious diseases, including COVID-19, Ebola, SARS, MERS, swine flu, and HIV/AIDS
  • Gene-edited humans (see coverage of / responses to the twins modified by He Jiankui)

Some more examples of risks which were probably not extreme*, but which elicited strong policy responses:

  • Y2K (though this might count as an extreme risk in the context of corporate governance)
  • Nuclear power plant accidents (in particular Three Mile Island and Chernobyl)
  • GMOs (both risks to human health and to the environment; see e.g. legislation in the EU, India, and Hawai'i)
  • various food additives (e.g. Red No. 2)
  • many, many novel drugs/pharmaceuticals (thalidomide, opioids, DES, Fen-phen, Seldane, Rezulin, Vioxx, Bextra, Baycol...)

*I'm not really sure how y... (read more)

Answer by michaelB · May 16, 2022 · 12

The 2014 NIH moratorium on funding gain-of-function research (which was lifted in 2017)

Answer by michaelB · May 16, 2022 · 10

The Asilomar Conference on Recombinant DNA, which Katja Grace has a report on: https://intelligence.org/2015/06/30/new-report-the-asilomar-conference-a-case-study-in-risk-mitigation/

If you want to draw useful lessons for successful risk governance from this research, it also seems pretty important to collect negative examples from the same reference class, i.e. conditions of extreme risk where policies were proposed but not enacted or enforced, or not proposed at all. For example (in the spirit of your example of the DoD's UFO detection program), I don't know of any policy governing the risk from SETI-style attempts to contact intelligent aliens.

Are you interested only in public policies related to extreme risk, or examples from corporate governance as well? Corporate risk governance likely happens in a way that's meaningfully different from public policy, and might be relevant for applying this research to e.g. AI labs.

2 · Joris P · 2y
Thanks Michael, also for the suggestions you made above! You raise good points and I would've loved to study negative examples and examples from corporate governance, but the scope of my thesis unfortunately has to be really limited - hopefully someone else can look at these later!