Debrief: "cash prizes for the best arguments against psychedelics"

by Milan_Griffes · 3 min read · 14th Jul 2019 · 20 comments


Epistemic status: personal reflections, "better written fast than not written at all"


In May, I published Cash prizes for the best arguments against psychedelics being an EA cause area. Discussion & debate ensued.

In June, prizes were awarded (the winners). A couple of people have since asked me to write up my thoughts about administering this prize, hence this debrief.


How to specify a cash prize?

There's a small tradition in EA of using cash prizes to incentivize more thinking about a topic: 1, 2, 3 [edit: 4]

This seems good, and I'd like to see more of it.

However, there's a dilemma involved in specifying prizes like this:

  1. Offer a prize for the most popular / most upvoted arguments – but this risks popularity-contest dynamics and activating identity-political tendencies
  2. Offer a prize for the best argument, where "best" is subjectively assessed by a judge – but the judge may be judging from their own biases, and there's a concern about nepotism

My (definitely imperfect) decision here was to mostly go with (1), though I also awarded a prize to the argument I thought was best.

After watching the voting unfold, I think the identity-politics concern is real. Many comments were heavily upvoted soon after being posted. Reminds me of Gordon's point about votes being "yays" & "boos".

I don't know what to do about this – identity-political tendencies seem to be baked pretty deeply into human cognition, and the other horn of the dilemma has its own costs.

There's also a dynamic whereby earlier submissions receive more votes / have more opportunity to receive votes (pointed out here). Probably this could be addressed by separating the submission window from the voting window, e.g. have people send submissions to [email] by deadline 1, then post all received submissions simultaneously and have voting be open until deadline 2.


"Cached arguments"

[Edit: After some pushback in the comments, I'm no longer sure how real this dynamic is. It definitely felt real at the time, and I still think is a thing to some extent, but now I believe the below is somewhat overstating it.]

I was surprised by how quickly arguments were submitted – four hours after announcing the prize, there were six submitted arguments (including the winning argument).

None of these engaged with the pro-psychedelic arguments I made in the main post; instead, they appeared to be arguing generally against psychedelics being a promising area.

I'm chalking this up to a "cached arguments" dynamic – people seem to have quick takes about topics stored somewhere in memory, and when asked about a topic, the immediate impulse is to deliver the pre-stored quick take (rather than engaging with the new material being presented).

If this is true, it's a hurdle on the road to good discourse. To have a productive disagreement, both parties need to "clear out their cache" before they can actually hear what the counterparty is saying.


Minimal engagement with the trauma argument

Relatedly, basically no one engaged with the trauma alleviation argument (section 3d) I gave in the main post. (Recap: trauma appears to be upstream of several burdensome health problems, and psychedelic therapy is showing promise at resolving trauma.)

I don't know why this wasn't engaged with. Probably related to the "cached arguments" thing, and also to the fact that many folks who wrote submissions are heavily longtermist.


How did running the prize change my mind?

After reading & engaging with the arguments against, I became:

  • less excited about psychedelic advocacy efforts (e.g. state-level ballot initiatives in the US) being a good fit for EA
  • more excited about psychedelic research (e.g. further studies on psychedelic therapy's efficacy for mental health issues, and studies on whether psychedelics boost altruism or creative problem-solving) being a good fit for EA

Many arguments against had the flavor of "more research is needed." I found this confusing, as "more research is needed" isn't actually an argument against "EA should fund more psychedelic research" (though it is an argument against funding advocacy).

It's straightforward to convert funding into high-quality research here (Good Ventures has already funded some, and there are shovel-ready opportunities in the space).

Gregory Lewis expressed bearishness about EA funding further research, though he seemed somewhat bullish on the value of information from such research being good. As he was mostly arguing from his priors and hadn't closely read the recent psychedelic research literature, I didn't find this compelling.


Status quo bias in EA

Administering the prize drew out some of the strangeness of the current EA funding landscape for me.

We're currently in a regime wherein:

  • EA supports deworming scale-up on the order of tens of millions USD annually, without funding much confirmatory research.
  • EA supports rationality training scale-up on the order of millions USD annually, without funding any confirmatory (academic) research.
  • EA does not support psychedelic scale-up at all. (Good Ventures does fund some academic research into psychedelics on the order of a million USD annually, though this funding isn't under the aegis of EA.)

From my current read, psychedelics have a stronger evidence base than rationality training programs, and about as strong an evidence base as deworming (more trials but less scrutiny per trial).

I think a lot of this differential in EA support can be attributed to status quo bias – rationality training programs & deworming have been supported by EA for many years for historically contingent reasons, so we're more likely to view them favorably (in some sense they're on the "inside"). Psychedelics don't have a pre-existing association with EA, so we're inclined to view them with more skepticism (they're on the "outside").

It's also interesting to try to compare psychedelic interventions to x-risk reduction interventions, which EA funds heavily. It's not easy to compare evidence bases here, as the case for x-risk reduction is largely theoretical (not empirical).

