CEA is pleased to announce the winners of the March 2020 EA Forum Prize!
In second place (for a prize of $500): “The case for building more and better epistemic institutions in the effective altruism community,” by Stefan Torges.
In third place (for a prize of $250): “Effective Animal Advocacy Nonprofit Roles Spot-Check,” by Jamie Harris.
The following users were each awarded a Comment Prize ($50):
- Arden Koehler and Richard Ngo on key ongoing debates in EA
- smclare and Derek for detailed feedback on charities’ impact estimates
- jackva on the drawbacks of Drawdown
For the previous round of prizes, see this post.
What is the EA Forum Prize?
The Prize is an incentive to create content like the posts and comments above. But more importantly, we see it as an opportunity to showcase excellent work as an inspiration to the Forum's users.
About the winning posts and comments
Note: I write this section in first person based on my own thoughts, rather than by attempting to summarize the views of the other judges.
This post describes issues that could apply to nearly every kind of EA work, with clear negative consequences for everyone involved. I especially liked the problem statement in this passage:
The key intuition is that in an uncooperative setting each altruist will donate to causes based on their own value system without considering how much other altruists value those causes. This leads to underinvestment in causes which many different value systems place positive weight on (causes with positive externalities for other value systems) and overinvestment in causes which many value systems view negatively (causes with negative externalities).
The post supports this point with a well-structured argument. Elements I especially liked:
- The use of tables to demonstrate a simple example of the problem
- References to criticism of EA from people outside the movement (showing that “free-riding” isn’t just a potential issue, but may be influencing how people perceive EA right now)
- References to relevant work already happening within the movement (so that readers have a sense for existing work they could support, rather than feeling like they’d have to start from scratch in order to address the problem)
- The author starting their “What should we do about this?” section by noting that they weren’t sure whether “defecting in prisoner’s dilemmas” was actually a bad thing for the EA community to do. It’s really good to distinguish between “behavior that might look bad” and “behavior that is actually so harmful that we should stop it.”
Like the prior post, this post contains a well-structured argument for addressing a problem that could be dragging down the overall impact of EA work across many different areas. You could summarize the main point in a way that makes it seem obvious (“EA should try to figure things out in a better way than it does now”), but in doing so, you’d be ignoring the details that make the post great:
- Pointing out examples of things the community has done that pushed EA in the right direction (e.g. influential criticism, expert surveys) in order to show that we could do even more work along the same lines.
- Comparing one reasonable proposal (better institutions) to other reasonable proposals (better norms, other types of institution, focusing on growth over institution-building) without arguing too vociferously in favor of the first proposal. I liked the language “I sketch a few considerations,” where some posts might have used “I show how X is superior to Y and Z.”
If you read this post, I also strongly recommend reading the comments! (This applies to the post above as well.)
Many people have strong opinions on the state of the EA job market, but it can be difficult to find enough data to support any particular viewpoint. I appreciate AAC’s efforts to chase down facts, and to present its methodology and results very clearly. I don’t have much to say about the style or structure of this post; it’s just clear and thorough, and I’d be happy to hear about other researchers using it as a template for presenting their own work.
(One note: I like that the “limitations” section also includes suggestions for further research. Posts that show how others can build on them seem likely to encourage further intellectual progress.)
The winning comments
I won’t write up an analysis of each comment. Instead, here are my thoughts on selecting comments for the prize.
The voting process
The winning posts were chosen by six judges.
All posts published in the month of March qualified for voting, save for those in the following categories:
- Procedural posts from CEA and EA Funds (for example, posts announcing a new application round for one of the Funds)
- Posts linking to others’ content with little or no additional commentary
- Posts which accrued zero or negative net karma after being posted
- Example: a post which had 2 karma upon publication and wound up with 2 karma or less
Voters recused themselves from voting on posts written by themselves or their colleagues. Otherwise, they used their own individual criteria for choosing posts, though they broadly agreed with the goals outlined above.
Judges each had ten votes to distribute between the month’s posts. They also had a number of “extra” votes equal to [10 - the number of votes made last month]. For example, a judge who cast 7 votes last month would have 13 this month. No judge could cast more than three votes for any single post.
The winning comments were chosen by Aaron Gertler, though the other judges had the chance to nominate other comments and to veto comments they didn’t think should win.
If you have thoughts on how the Prize has changed the way you read or write on the Forum, or ideas for ways we should change the current format, please write a comment or contact me.