
CEA is pleased to announce the winners of the November 2018 EA Forum Prize!

In first place (for a prize of $999*): stefan.torges, "Takeaways from EAF's Hiring Round".

In second place (for a prize of $500): Sanjay, "Why we have over-rated Cool Earth".

In third place (for a prize of $250): AdamGleave, "2017 Donor Lottery Report".

*As it turns out, a prize of $1000 makes the accounting more difficult. Who knew?


What is the EA Forum Prize?

Certain posts exemplify the kind of content we most want to see on the EA Forum. They are well-researched and well-organized; they care about informing readers, not just persuading them.

The Prize is an incentive to create posts like this, but more importantly, we see it as an opportunity to showcase excellent content as an example and inspiration to the Forum's users.

That said, excellence wasn't exclusive to the winners. Our users published dozens of excellent posts in November, and we had a hard time narrowing the field down to three. (There was even a three-way tie for third place this month, so we had to hold a runoff vote!)


About the November winners

While this wasn't our express intent, November's winners wound up representing an interesting cross-section of the ways the EA community creates content.

"Takeaways from EAF's Hiring Round" uses the experience of an established EA organization to draw lessons that could be useful to many other organizations and projects. The hiring process is documented so thoroughly that another person could follow it almost to the letter, from initial recruitment to a final decision. The author shares abundant data, and explains how EAF’s findings changed their own views on an important topic.

"Why we have over-rated Cool Earth" is a classic example of independent EA research. The author consults public data, runs his own statistical analyses, and reaches out to a charity with direct questions, bringing light to a subject on which the EA community doesn't have much knowledge or experience. He also offers alternative suggestions to fight climate change, all while providing enough numbers that any reader could double-check his work with their own assumptions.

To quote one comment on the post:

This sort of evaluation, which has the potential to radically change the consensus view on a charity, seems significantly under-supplied in our community, even though individual instances are tractable for a lone individual to produce.

"2017 Donor Lottery Report" is a different kind of research post, from an individual who briefly had resources comparable to an entire organization -- and used his fortunate position to collect information and share it with the community. He explains his philosophical background and search process to clarify the limits of his analysis, and shares the metrics he plans to use to evaluate his grants (which adds to the potential value of the post, since it opens the door for a follow-up post examining his results).


Qualities shared by all three winners:

  • Each post had a clear hierarchy of information, helping readers navigate the content and making discussion easier. Each author seems to have kept readers in mind as they wrote. This is crucial when posting on the Forum, since much of a post's value relies on its being read, understood, and commented upon.
  • The authors didn't overstate the strength of their data or analyses, but also weren't afraid to make claims when they seemed to be warranted. We encourage Forum posts that prioritize information over opinion, but that doesn't mean that informative posts need to avoid opinion: sometimes, findings point in the direction of an interesting conclusion.

The voting process

All posts published in November, except those written by CEA staff, qualified for voting.

Prizes were chosen by seven people. Four of them are the Forum's moderators (Max Dalton, Howie Lempel, Denise Melchin, and Julia Wise). The other three are the EA Forum users who had the most karma at the time the new Forum was launched (Peter Hurford, Joey Savoie, and Rob Wiblin).

All voters abstained from voting for content written by themselves or by organizations they worked with. Otherwise, they used their own individual criteria for choosing posts, though those criteria broadly align with the goals outlined above.

Winners were chosen by an initial round of approval voting, followed by a runoff vote to resolve ties.
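For readers curious about the mechanics, here is a minimal sketch of that kind of two-stage vote: an approval round, followed by a plurality-wins runoff for a tied final slot (the runoff rule is described in the comments below). The post names, ballots, and helper functions are hypothetical illustrations of the general method, not the actual script or data we used.

```python
from collections import Counter

def approval_round(ballots, n_winners=3):
    """Approval voting: each ballot is the set of posts a voter approves of.
    Returns the posts that clearly make the top n, plus any posts tied at the cutoff."""
    counts = Counter(post for ballot in ballots for post in ballot)
    ranked = counts.most_common()
    cutoff = ranked[n_winners - 1][1]  # approval count needed for the last slot
    clear = [post for post, c in ranked if c > cutoff]
    tied = [post for post, c in ranked if c == cutoff]
    return clear, tied

def plurality_runoff(runoff_votes):
    """Runoff to break a tie: each voter names one post; the most-named post wins."""
    return Counter(runoff_votes).most_common(1)[0][0]

# Hypothetical ballots from seven voters; each voter approves of any number of posts.
ballots = [
    {"Post A", "Post B", "Post C"},
    {"Post A", "Post B", "Post D"},
    {"Post A", "Post B", "Post E"},
    {"Post A", "Post B", "Post C"},
    {"Post A", "Post C", "Post D"},
    {"Post D", "Post E"},
    {"Post E"},
]
winners, tied = approval_round(ballots, n_winners=3)
if len(winners) + len(tied) > 3:
    # More tied posts than remaining slots: each voter names one tied post in a runoff.
    runoff_votes = ["Post C", "Post D", "Post C", "Post E", "Post C", "Post D", "Post E"]
    winners.append(plurality_runoff(runoff_votes))
else:
    winners.extend(tied)

print(winners)  # ['Post A', 'Post B', 'Post C']
```

In this toy example, two posts win outright and three tie for the final slot, mirroring the three-way tie for third place mentioned above.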


Next month

The Prize will continue with rounds for December and January! After that, we’ll evaluate whether we plan to keep running it (or perhaps change it in some way).

We hope that the Forum’s many excellent November posts will provide inspiration for more great material in the coming months.


Feedback on the Prize

We'd love to hear any feedback you have about the EA Forum Prize. Leave a comment or contact Aaron Gertler with questions or suggestions.

Comments (5)



I really like the prize idea as a method of content curation – I wasn't planning to read Sanjay's Cool Earth post (because I didn't understand what it was about & it didn't seem relevant to my interests at first brush), but now I will.

Thanks all for making this happen :-)

A little strange that a post's karma isn't part of the prize evaluation process.

(I guess it is implicitly, would be interesting to see an explicit karma component.)

I can't speak for any of the voters, but they can use any criteria they want (taking our goals for the Forum as a set of suggestions that, in practice, they broadly agree with). I'd guess that karma is something that voters consider, because it's a reasonable measure of how helpful people actually found a post.

...followed by a runoff vote to resolve ties.

Is the runoff vote also approval voting?

Thanks for splitting your questions into different comments! Good policy for threads that aren't too crowded. The runoff vote was plurality-wins, because we didn't want a tie to further delay the announcement (our voters have a lot of other things on their plates). We'll keep iterating on the process as we move forward.
