This post is arriving late — my fault, not that of any other judge. We’re catching up on a Prize backlog and expect to be current again by the time October prizes are given.
CEA is pleased to announce the winners of the August 2020 EA Forum Prize!
- In first place (for a prize of $500): “Donor Lottery Debrief,” by Timothy Telleen-Lawton.
- Second place ($300): “The case of the missing cause prioritisation research,” by weeatquince.
- Third place ($200): “Using Subjective Well-Being to Estimate the Moral Weights of Averting Deaths and Reducing Poverty,” by Michael Plant, Joel McGuire, and Clare Donaldson.
- Fourth place ($200): “More empirical data on 'value drift',” by Benjamin Todd.
- Fifth place ($200): “Research Summary: The Subjective Experience of Time,” by Jason Schukraft.
The following users were each awarded a Comment Prize ($75):
- Johannes Ackva and Danny Bresler on deaths from climate change
- Carl Shulman on X-risk externalities
- Kieran Greig on the Animal Welfare Fund
See here for a list of all prize announcements and winning posts.
What is the EA Forum Prize?
The Prize is an incentive to create content like the winning posts and comments above. But more importantly, we see it as an opportunity to showcase excellent work as an example and inspiration to the Forum's users.
About the winning posts and comments
Note: I write this section in first person based on my own thoughts, rather than by attempting to summarize the views of the other judges.
This is the second time a donor lottery winner has written a post about where they gave, and I hope to see many more such posts.
Elements I liked from this writeup:
- It is an expanded version of a past comment. I appreciate that even after thoroughly explaining part of his lottery donation, the author decided to finish the job.
- I’d love to see more posts on the Forum that are updates of this kind — what’s happening with all the interesting projects we hear about?
- Lots of discussion of how funding decisions were impacted by structural considerations (e.g. helping an organization make a key hire by providing money faster than an institutional funder would). Cost-effectiveness analyses are useful, but it’s also good to note when the size or timing of a donation can open up new options for an organization without much of a track record.
- A thorough evaluation of the process around making these donations — giving large sums of money can be complex, and I can picture other donors reading this and learning from e.g. the author’s sub-optimal investment of the funds during his planning period, or his regrets about not advertising his available funds more widely.
The post is also beautifully written; every sentence adds new information and flows logically from the last. (This is true of most winning posts, but for whatever reason, I really noticed it here.)
Note: The author also received enough votes to win a prize for a second post this month, but we only give out one prize per author, per month. This makes them the second “multi-winner” we’ve had, after Tegan McCaslin in March 2019. Congratulations!
> So my request to you: Either disagree with me, tell me that sufficient progress is happening, or change how you act in some small way. Be a bit more uncertain, a bit more willing to donate to fund or to go into cause prioritisation research. And if you work in an EA org please stop focusing so much on the cause areas you each believe are most important and increase the amount of cause-neutral work and funding that you do.
I can’t think of many better ways to sum up the EA Forum than: “Either disagree with me [...] or change how you act in some small way.” Magnificent!
I found that line especially elegant, but I also appreciated the rest of this post; I think it argues convincingly that less cause prioritization work is currently happening than would be ideal for our movement as a whole.
Another thing I liked: Rather than making a blanket claim about cause prioritization as a whole, the author splits out that concept into a set of smaller concepts (e.g. “empirical cause selection beyond RCTs”, “consideration of different views and ethics”). This allows for more nuanced judgments about progress in different areas.
> Our main purpose here is not to argue that the WELLBY method should be used, although we will briefly motivate it later. Rather, we want to show how it can be used.
I can’t recall many other Forum posts with this level of ambition. The authors don’t only propose a new metric we could use to estimate the value of different outcomes — they also make a detailed attempt to draw out what that metric would imply for certain real-world interventions. I’ve heard arguments before that EA should make more use of subjective well-being, but this is the first time I’ve had a concrete sense of what that might look like.
Other elements I liked about this post:
- The authors explain the limitations of the metric as it stands, and what steps might be taken to improve it (this is the sort of thing that helps students generate ideas for research papers)
- The link to a Guesstimate model, so that readers can try their own estimates for certain numbers and see how that affects the final analysis
> My estimate for someone who’s highly engaged, enthusiastic, and socially integrated would be about ~10% over 5 years. I estimate there are ~500 people in the community who are at this level of risk for value drift.
I’m always interested to see more discussion about value drift, which seems like an important factor in the long-term flourishing of the EA community. Marisa Jurczyk’s qualitative analysis of the phenomenon won a Forum Prize. Now, Benjamin Todd has gone quantitative, combining a series of surveys and other data sources to produce an estimate of how often dedicated members stop participating actively in the community.
The post explains the context behind each source (good!), covers some considerations beyond those sources that might raise or lower our estimates (great!), and suggests additional work that others could do to further our knowledge in this area (splendid!!!).
> If you’ve ever had the misfortune of being in a car accident or fighting in a war zone or being attacked by a wild animal, you may already be familiar with putative differences in the subjective experience of time.
> When confronted with life-threatening circumstances, humans often report that time seems to slow down. Events that are over in tens of seconds seem to stretch on for minutes, allowing rapid assessment of the scene and quickfire decisions that, in some cases, save one’s life. These types of differences are also sometimes induced artificially. An LSD trip might seem to extend for days when in fact it was over in an afternoon.
Jason Schukraft has a track record of publishing really interesting research on the Forum — now, he’s taken steps to make some of it even more accessible, through a summary that condenses two previous research posts to under 5% of their total word count. As a professional writer, I can attest that this is very hard to do, and I love Schukraft’s commitment to helping people actually read what he’s written.
If you know of a long research post that you wish had gotten more engagement — whether you wrote it, or someone else did — try producing your own summary!
The winning comments
I won’t write up an analysis of each comment. Instead, here are my thoughts on selecting comments for the prize.
The voting process
The winning posts were chosen by four people (Rob Wiblin didn’t vote this month).
All posts published in the titular month qualified for voting, save for those in the following categories:
- Procedural posts from CEA and EA Funds (for example, posts announcing a new application round for one of the Funds)
- Posts linking to others’ content with little or no additional commentary
- Posts which accrued zero or negative net karma after being posted
  - Example: a post which had 2 karma upon publication and wound up with 2 karma or less
Voters recused themselves from voting on posts written by themselves or their colleagues. Otherwise, they used their own individual criteria for choosing posts, though they broadly agreed with the goals outlined above.
Judges each had ten votes to distribute between the month’s posts. They also had a number of “extra” votes equal to [10 - the number of votes made last month]. For example, a judge who cast 7 votes last month would have 13 this month. No judge could cast more than three votes for any single post.
The winning comments were chosen by Aaron Gertler, though the other judges had the chance to nominate other comments and to veto comments they didn’t think should win.
If you have thoughts on how the Prize has changed the way you read or write on the Forum, or ideas for ways we should change the current format, please write a comment or contact me.