
Epistemic status: personal reflections, "better written fast than not written at all"


In May, I published Cash prizes for the best arguments against psychedelics being an EA cause area. Discussion & debate ensued.

In June, prizes were awarded (the winners). A couple of people have since asked me to write up my thoughts on administering this prize, hence this debrief.


How to specify a cash prize?

There's a small tradition in EA of using cash prizes to incentivize more thinking about a topic: 1, 2, 3 [edit: 4]

This seems good, and I'd like to see more of it.

However, there's a dilemma involved in specifying prizes like this:

  1. Offer a prize for the most popular / most upvoted arguments – the concern here is popularity-contest dynamics and the activation of identity-political tendencies
  2. Offer a prize for the best argument, where "best" is subjectively assessed by a judge – the concern here is that the judge judges from their own biases, plus the possibility of nepotism

My (definitely imperfect) decision here was to mostly go with (1), though I also awarded a prize to the argument I thought was best.

After watching the voting unfold, I think the identity-politics concern is real. Many comments were heavily upvoted soon after being posted. Reminds me of Gordon's point about votes being "yays" & "boos".

I don't know what to do about this – identity-political tendencies seem to be baked pretty deeply into human cognition, and the other horn of the dilemma has its own costs.

There's also a dynamic whereby earlier submissions have more opportunity to accumulate votes (pointed out here). This could probably be addressed by separating the submission window from the voting window: e.g. have people send submissions to [email] by deadline 1, then post all received submissions simultaneously and keep voting open until deadline 2.


"Cached arguments"

[Edit: After some pushback in the comments, I'm no longer sure how real this dynamic is. It definitely felt real at the time, and I still think it's a thing to some extent, but I now believe the below somewhat overstates it.]

I was surprised by the rapidity of submitted arguments – four hours after announcing the prize, there were six submitted arguments (including the winning argument).

None of these engaged with the pro-psychedelic arguments I made in the main post; instead, they appeared to be arguing generally against psychedelics being a promising area.

I'm chalking this up to a "cached arguments" dynamic – people seem to have quick takes about topics stored somewhere in memory, and when asked about a topic, the immediate impulse is to deliver the pre-stored quick take (rather than engaging with the new material being presented).

If this is true, it's a hurdle on the road to good discourse. To have a productive disagreement, both parties need to "clear out their cache" before they can actually hear what the counterparty is saying.


Minimal engagement with the trauma argument

Relatedly, basically no one engaged with the trauma alleviation argument (section 3d) I gave in the main post. (Recap: trauma appears to be upstream of several burdensome health problems, and psychedelic therapy is showing promise at resolving trauma.)

I don't know why this wasn't engaged with. It's probably related to the "cached arguments" dynamic, and also to the fact that many of the folks who wrote submissions are heavily longtermist.


How did running the prize change my mind?

After reading & engaging with the arguments against, I became:

  • less excited about psychedelic advocacy efforts (e.g. state-level ballot initiatives in the US) being a good fit for EA
  • more excited about psychedelic research (e.g. further studies on psychedelic therapy's efficacy for mental health issues, or studies on whether psychedelics boost altruism or creative problem-solving) being a good fit for EA

Many arguments against had the flavor of "more research is needed." I found this confusing, as "more research is needed" isn't actually an argument against "EA should fund more psychedelic research" (though it is an argument against funding advocacy).

It's straightforward to convert funding into high-quality research here (Good Ventures has already funded some, and there are shovel-ready opportunities in the space).

Gregory Lewis expressed bearishness about EA funding further research, though he seemed somewhat bullish on the value of information here. As he was mostly arguing from his priors and hadn't closely read the recent psychedelic research literature, I didn't find this compelling.


Status quo bias in EA

Administering the prize drew out some of the strangeness of the current EA funding landscape for me.

We're currently in a regime wherein:

  • EA supports deworming scale-up on the order of tens of millions USD annually, without funding much confirmatory research.
  • EA supports rationality training scale-up on the order of millions USD annually, without funding any confirmatory (academic) research.
  • EA does not support psychedelic scale-up at all. (Good Ventures does fund some academic research into psychedelics on the order of a million USD annually, though this funding isn't under the aegis of EA.)

From my current read, psychedelics have a stronger evidence base than rationality training programs, and about as strong an evidence base as deworming (more trials but less scrutiny per trial).

I think a lot of this differential in EA support can be attributed to status quo bias – rationality training programs & deworming have been supported by EA for many years for historically contingent reasons, so we're more likely to view them favorably (in some sense they're on the "inside"). Psychedelics don't have a pre-existing association with EA, so we're inclined to view them with more skepticism (they're on the "outside").

It's also interesting to try to compare psychedelic interventions to x-risk reduction interventions, which EA funds heavily. It's not easy to compare evidence bases here, as the case for x-risk reduction is largely theoretical (not empirical).


Comments

First of all, thanks for running this - I think prizes are a great idea - and congratulations to the winners!

I somewhat disagree with your take here, however:

I was surprised by the rapidity of submitted arguments – four hours after announcing the prize, there were six submitted arguments (including the winning argument).
None of these engaged with the pro-psychedelic arguments I made in the main post; instead, they appeared to be arguing generally against psychedelics being a promising area.
I'm chalking this up to a "cached arguments" dynamic – people seem to have quick takes about topics stored somewhere in memory, and when asked about a topic, the immediate impulse is to deliver the pre-stored quick take (rather than engaging with the new material being presented).
If this is true, it's a hurdle on the road to good discourse. To have a productive disagreement, both parties need to "clear out their cache" before they can actually hear what the counterparty is saying.

You asked for the best arguments against psychedelics, not for counter-arguments to your specific arguments in favour, so this doesn't seem that surprising. Probably if you had specifically asked for counter-arguments, as opposed to merely saying they were there to 'seed discussion', people would have interacted with them more.

Kit

I also came to note that the request was for 'the best arguments against psychedelics, not for counter-arguments to your specific arguments in favour'.

However, I also wrote one of the six responses referred to, and I contest the claim that

None of these engaged with the pro-psychedelic arguments I made in the main post

The majority of my response explicitly discusses the weakness of the argumentation in the main post for the asserted effect on the long-term future. To highlight a single sentence which seems to make this clear, I say:

I don't see the information in 3(a) or 3(b) telling me much about how leveraged any particular intervention is.

I also referred to arguments made by Michael Plant, which in my amateur understanding appeared to be stronger than those in the post. To me, it seems good that others engaged primarily with arguments such as Michael's, because engaging with stronger arguments tends to lead to more learning. When I drafted my submission, I considered whether it was unhealthy to primarily respond to what I saw as weaker arguments from the post itself. Yet, contra the debrief post, I did in fact do so.

Kit

Huh. The winning response, one of the six early responses, also engages explicitly with the arguments in the main post in its section 1.2 and section 2. This one discussed things mentioned in the post without explicitly referring to the post. This one summarises the long-term-focused arguments in the post and then argues against them.

I worry I'm missing something here. Dismissing these responses as 'cached arguments' seemed stretched already, but the factual claim made to back that decision up, that 'None of these engaged with the pro-psychedelic arguments I made in the main post', seems straightforwardly incorrect.

Thanks, I think I overstated this in the OP (added a disclaimer noting this). I still think there's a thing here but probably not to the degree I was holding.

In particular it felt strange that there wasn't much engagement with the trauma argument or the moral uncertainty / moral hedging argument ("psychedelics are plausibly promising under both longtermist & short-termist views, so the case for psychedelics is more robust overall.")

There was also basically no engagement with the studies I pointed to.

All of this felt strange (and still feels strange), though I now think I was too strong in the OP.

You asked for the best arguments against psychedelics, not for counter-arguments to your specific arguments in favour, so this doesn't seem that surprising.

Fair enough. I think I felt surprised because I've spent a long time thinking about this & tried to give the best case I could in support, and then submissions for "best case against" didn't seem to engage heavily with my "best case for."

Nice post!

Why are popularity-contest dynamics harmful, precisely? I suppose one argument is: If you are looking for the best new argument against psychedelics, popularity-contest dynamics are likely to get you the argument that resonates with the most people, or perhaps the argument that the most people can understand, or the argument that the most people had in their head already. These could still be useful to learn about, though.

For judging, you could always get a third party to judge. I'm also curious about a prize format like "$X to anyone who's able to change my mind substantially about Y". (This might be the closest thing I've seen to that so far.) Or a prize format which attempts to measure & reward novelty/variety among the responses somehow.

You mentioned status quo bias. It's interesting that all 3 of the prizes you link at the top are cases where people presented a new EA initiative and paid the community for the best available critiques. One idea for evening things out is to offer prizes for the best arguments against established EA donation targets! I do think you're right that more outsider-y causes are asked to meet a higher standard of support.

  • For example, this recent post on EA as an ideology did very little to critique global poverty, but there's a provocative argument that our focus on global poverty is one of the most ideological aspects of EA: It is easily the most popular EA cause area, but my impression is that less has been written to justify a focus on global poverty than other cause areas--it seems to have been "grandfathered in" due to the drowning child argument.

  • Similarly, we could turn the tables on the EA Hotel discussion by asking mainstream EA orgs to justify why they pay their employees such high salaries to live in high cost of living areas. I've also heard tales through the grapevine about the perverse incentives created by the need to fundraise for projects in EA, and my perception is that this is a big issue in the cause area I'm most excited about (AI safety). (Here is a recent LW thread on this topic.)

One idea for evening things out is to offer prizes for the best arguments against established EA donation targets!

This is a great idea!

(This might be the closest thing I've seen to that so far.)

Whoa, I didn't know about this one. Thanks for the link!

Why are popularity-contest dynamics harmful, precisely?

A similar sort of thing is a big part of the reason why Eliezer had difficulty advocating for AI safety, back in the 2000s.

Oh I thought you were talking about popularity contest dynamics for arguments, not causes.

Sounds like you are positing a Matthew Effect where causes which many people are already working on will tend to have greater awareness (due to greater visibility) and also greater credibility ("so many people are working on this cause, they must be on to something!"). Newcomers to EA will probably be especially tempted by causes which many people are already working on, since they won't feel they are in a position to evaluate causes for themselves.

If true, an unfortunate side effect would be that neglected causes tend to remain neglected.

I think in practice how things work nowadays is that there are a few organizations in the community (OpenPhil, 80K) which have a lot of credibility and do their own in-depth evaluation of causes, and EA resources end up getting directed based on their evaluations. I'm not sure this is such a bad setup overall.

... I'm not sure this is such a bad setup overall.

Yeah it doesn't seem terrible. It probably misses a lot of upside, though.

From my current read, psychedelics have a stronger evidence base than rationality training programs

I agree if for CFAR you are looking at the metric of how rational their alumni are. If you instead look at CFAR as a funnel for people working on AI risk, the "evidence base" seems clearer. (Similarly to how we can be quite confident that 80K is having an impact, despite there not being any RCTs of 80K's "intervention".)

I agree if for CFAR you are looking at the metric of how rational their alumni are. If you instead look at CFAR as a funnel for people working on AI risk, the "evidence base" seems clearer.

Sure, I was pointing to the evidence base for the techniques taught by CFAR & other rationality training programs.

CFAR could be effective at recruiting people into AI risk due to Schelling-point dynamics, without the particular techniques it teaches being efficacious. (I'm not sure that's true, just pointing out an orthogonality here.)

If you instead look at CFAR as a funnel for people working on AI risk, the "evidence base" seems clearer.

Do you know if there are stats on this, somewhere?

e.g. Out of X workshop participants in 2016, Y are now working on AI risk.

I don't know of any such stats, but I also don't know much about CFAR.

EA does not support psychedelic scale-up at all. (Good Ventures does fund some academic research into psychedelics on the order of a million USD annually, though this funding isn't under the aegis of EA.)

I wonder if this is partly because of fears it would make EA look weird(er). Whether or not it's been an issue up to this point, perhaps some EAs would be keener to support these efforts in future if they were not closely associated with the EA 'brand'.

[anonymous]

What did you find compelling about the comment that you found to be the best argument?

1. I like the originality of it. (It's not just saying "the evidence base isn't strong enough!")

2. The objection better accords with my current worldview.

After watching the voting unfold, I think the identity-politics concern is real. Many comments were heavily upvoted soon after being posted.

I'm not sure what timescale "soon after being posted" represents here. Is your concern more along the lines of:

(a) People seem to have been upvoting comments without having had time to read/think about them,

(b) People seem to have been upvoting comments without having had time to read/think about your post and how it interacted with those comments, or

(c) People seem to have been upvoting comments without having had time to read/think about all the other comments up to that point?

(Or some mix of those, of course.)

I remember taking 5-10 minutes to read some of the shorter arguments, then upvoting them because they made an interesting point or linked me to an article I found useful.

It feels like people aren't likely to spend more than 10 minutes reading/thinking about a Forum comment other than in exceptional cases, but perhaps there are ways you can encourage the right kind of slow thinking in contest posts?

(b), perhaps with a dash of (a) too