CEA is pleased to announce the winners of the final EA Forum Prize! This prize covers a three-month period, May-July 2021.
- In first place (for a prize of $0*): “All Possible Views About Humanity's Future Are Wild,” by Holden Karnofsky.
- Second place ($1275): “Seven things that surprised us in our first year working in policy,” by Jack Rafferty and Lucia Coulter.
- Third place ($975): “2018-2019 Long-Term Future Fund Grantees: How did they do?” by Nuño Sempere.
- Fourth place ($975): “COVID: How did we do? How can we know?” by Ghost_of_Li_Wenliang.
- Fifth place ($975): “Opinion: Digital marketing is under-utilized in EA,” by JSWinchell.
* Holden works for Open Philanthropy, one of CEA’s main funders. In keeping with our conflict-of-interest policy, he won’t receive a monetary prize; we’ve divided that money among the other winners.
The following users were each awarded a Comment Prize ($125):
- Miranda Zhang on the U.S. patient philanthropy debate
- Jsevillamol on talking to civil servants about risk management
- So-Low Growth on consanguinity
- Jackson Wagner, explaining a downvote
- sbehmer on social welfare functions and the use of willingness-to-pay
- Max Daniel on key numbers that (almost) everyone in EA should know
- Akash Wasil on mental health interventions
- Linchuan Zhang on existential risk and Progress Studies
- Louis Dixon on learning about global risks through university courses
- Fin Moorhouse on free will
See here for a list of all prize announcements and winning posts.
What is the EA Forum Prize?
The Prize is an incentive to create content like the posts and comments above. But more importantly, we see it as an opportunity to showcase excellent work as an example and inspiration to the Forum's users.
The end of the Forum Prize, and a new beginning
We’ve enjoyed giving out the Forum Prize over the last three years. The winning authors deserved their rewards and recognition; I was always especially excited when I had the chance to notify a first-time winner who’d never expected that kind of response to their work.
However, the actual impact of the Prize has been hard to determine, and our best evidence (qualitative and quantitative) suggests that it isn’t a strong incentive for most authors.
We still plan to use our budget to reward and promote great writing, but what we do in the future won’t be the Forum Prize — instead, we’re considering other options, including contests for the best posts on various topics and prizes aimed specifically at first-time authors.
(If you have ideas along these lines, add a comment here — though you may also want to see the list of things we’re thinking about.)
My gratitude goes out to all of our judges, who spent many hours over months or years evaluating posts for no payment and very little glory.
About the winning posts and comments
Note: I write this section in first person based on my own thoughts, rather than by attempting to summarize the views of the other judges.
“According to me, there's a decent chance that we live at the very beginning of the tiny sliver of time during which the galaxy goes from nearly lifeless to largely populated. That out of a staggering number of persons who will ever exist, we're among the first. And that out of hundreds of billions of stars in our galaxy, ours will produce the beings that fill it.”
This post was a great introduction to its series, laying out the central idea behind Holden’s work and approaching it from many different directions. Some of what I liked about it:
- An interesting title — which is important if you want people to engage with a post!
- A detailed summary — more than I’d expect to see on a relatively short post, but all the more impressive as a result. (The more ways you can state your main points in different levels of detail, the more memorable and shareable they’ll be.)
- Well-designed charts (credit to Ludwig Schubert)
While this prize reflects the judges’ votes on this post alone, I suspect that the later posts in the series also shaped people’s impressions. I highly recommend reading those as well.
“Following the interest in our post announcing the launch of Lead Exposure Elimination Project (LEEP) eight months ago, we are now sharing an update unpacking seven findings that have surprised us in our experiences so far. We hope these will be relevant to others interested in policy change or starting a new project or charity.”
When people come to me with multiple ideas for posts, I often advise them to focus on their most surprising idea first. Readers have a lot of ideas before they see a post, and many of those ideas are basically right — so focus on trying to predict what your readers don’t know much about and/or the false beliefs you think they hold.
Compare the hypothetical posts “How to do good research” and “Surprising things I learned as a researcher” — the former might be more comprehensive, but I’d expect most people to get more value out of the latter (because so much of what constitutes “good research” is either intuitive or the kind of thing someone will inevitably discover on the job).
That’s what makes this post great — for most of the “things that surprised” Jack and Lucia, I was surprised right along with them, or at least had the reaction: “Huh, I hadn’t thought about that before.”
I also love the level of detail presented here — the names of individual ministries, the costs to run lead testing studies, government officials’ preferred meeting formats, the works. It helps me feel the authors’ journey in a way I might not without those details, and it also helps me add a bit more texture to my own model of the world.
“At the suggestion of Ozzie Gooen, I looked at publicly available information around past LTF grantees. We've been investigating the potential to have more evaluations of EA projects, and the LTFF grantees seemed to represent some of the best examples, as they passed a fairly high bar and were cleanly delimited.”
My greatest fear for EA as a movement is that our influence and resources outstrip the effort we put into evaluating our work, until we become a myopic, stumbling giant, acting often and learning little.
As such, I’m always happy to see external evaluations of big EA projects; even if the results aren’t especially detailed (as in this post), they still generate useful conversations and serve to guard against blind faith in big institutions.
Things I appreciated about Nuño’s post:
- The question he asked to get others’ views on how best to do similar work in the future
- The inclusion of the links he used to find the information in his sample evaluations — particularly the conversation notes with Vyacheslav Matyuhin, which was more detail than I had expected to see.
- Mostly, the fact that it exists, and will hopefully spur similar external evaluations of other EA Funds grants. (If you see this, and you’re interested in trying a similar evaluation, I might be willing to fund you — send me a Forum message.)
“When I talk about whether a given country's response to COVID was a success or a failure, smart friends reply [...] ‘it's easy to say the optimal response in hindsight’ [or] ‘it's difficult to compare different countries.’
But what would the best possible response look like? What did our institutions stop us from getting?”
I’ve been reading about the kinds of things this post summarizes for the last year-plus, but it’s hard to think of a better example of all that information stapled together in a single brief post. For example, if I wanted to brief someone who hadn’t been following COVID news (a hermit, say, or a Martian), I’d probably share this over any one of Zvi Mowshowitz’s COVID updates, even if the latter were more informative as a full collection.
And that summary is only half the post — the “happy timeline” proposal is interesting in and of itself, offering lots of specific points for people to debate and perhaps serving as a useful talking point for those trying to convince policymakers to prepare more seriously for a future pandemic. (Maybe no one will ever actually use the post this way, but I like that they at least could do so.)
Other things I liked:
- The fiery style — not a positive for me, but also not a negative, where it easily could have been. I’ve seen this done badly in many places, but on this post, phrases like “I feel the boot of others on my neck” and “negative-sum bullshit” feel earned, both by the amount of research the author clearly did and by their willingness to point out where things went well, or where a problem was especially hard to avoid.
- Lots of clarity around where the author was uncertain, which made it easy for some commenters to jump in and add more numbers.
- The inclusion of the code used to estimate QALY costs — giving readers a way to interact with the post and find new interpretations of the author’s data.
“In this post, I will make the case that digital marketing is under-utilized by EA orgs [...] A large part of what Effective Altruism is trying to do is to change people’s beliefs and behaviors. Digital advertising is one tool for achieving this goal. The fact that corporations, governments, and nonprofits repeatedly invest millions of dollars in digital marketing programs is evidence of their efficacy.”
Almost every EA Forum reader has a post in their soul that they’re better-positioned to write than any of the other few thousand readers, thanks to their unique personal experience. This seems to be JSWinchell’s soul post. It makes a very clear, extremely actionable case for something that will benefit many readers, and even offers the author’s further assistance!
Things I liked:
- The long list of sample use cases for YouTube ad targeting — including actual costs, which would have been easy to leave out but make the list far more useful.
- Links to previous posts written for EA audiences on the same topic — posts that hadn’t been published on the Forum and might have been difficult to find for this post’s target audience.
- The (short) length, especially in the introduction. I see a lot of posts spend seven paragraphs justifying proposals that seem simple and uncontroversial; JSWinchell finishes their “digital advertising is a useful tool” argument in three sentences.
The winning comments
I won’t write up an analysis of each comment. Instead, here are my thoughts on selecting comments for the prize.
The voting process
The current prize judges are:
All posts published during the period covered by the prize qualified for voting, save for those in the following categories:
- Procedural posts from CEA and EA Funds (for example, posts announcing a new application round for one of the Funds)
- Posts linking to others’ content with little or no additional commentary
- Posts which got fewer than five additional votes after being posted (not counting the author’s automatic vote)
Voters recused themselves from voting on posts written by themselves or their colleagues. Otherwise, they used their own individual criteria for choosing posts, though they broadly agreed with the goals outlined above.
Judges each had ten votes to distribute among the month’s posts. They also had a number of “extra” votes equal to [10 − the number of votes cast last month]. For example, a judge who cast 7 votes last month would have 13 this month. No judge could cast more than three votes for any single post.
The winning comments were chosen by Aaron Gertler.