
CEA is pleased to announce the winners of the January 2019 EA Forum Prize!

In first place (for a prize of $999): "EA Survey 2018 Series: Cause Selections", by David_Moss, Neil_Dullaghan, and Kim Cuddington.

In second place (for a prize of $500): "EA Giving Tuesday Donation Matching Initiative 2018 Retrospective", by AviNorowitz.

In third place (for a prize of $250): "EAGx Boston 2018 Postmortem", by Mjreard.

We also awarded prizes in November and December.

What is the EA Forum Prize?

Certain posts exemplify the kind of content we most want to see on the EA Forum. They are well-researched and well-organized; they care about informing readers, not just persuading them.

The Prize is an incentive to create posts like this. But more importantly, we see it as an opportunity to showcase excellent content as an example and inspiration to the Forum's users.

The voting process

All posts published in the month of January qualified for voting, save for those written by CEA staff and Prize judges.

Prizes were chosen by five people. Three of them are the Forum's moderators (Aaron Gertler, Denise Melchin, and Julia Wise).

The others were two of the three highest-karma users at the time the new Forum was launched (Peter Hurford and Joey Savoie; Rob Wiblin took this month off).

Voters recused themselves from voting for content written by their colleagues. Otherwise, they used their own individual criteria for choosing posts, though they broadly agreed with the goals outlined above.

Winners were chosen by an initial round of approval voting, followed by a runoff vote to resolve ties.
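As a purely illustrative sketch of this two-stage process (the post doesn't describe the actual ballots or tooling, so the data shapes here are assumptions), approval voting with a runoff tiebreak might look like:

```python
from collections import Counter

def rank_posts(approval_ballots, runoff_votes):
    """Rank posts by approval count (each ballot lists every post the
    voter approves of), breaking ties with runoff votes (one choice per
    voter), then alphabetically so the ordering is deterministic."""
    approvals = Counter(post for ballot in approval_ballots for post in ballot)
    runoff = Counter(runoff_votes)
    return sorted(approvals, key=lambda p: (-approvals[p], -runoff[p], p))

# Three voters approve two posts each; all posts tie at 2 approvals,
# so the single runoff vote for "B" decides first place.
ranking = rank_posts([["A", "B"], ["A", "C"], ["B", "C"]], ["B"])
# → ["B", "A", "C"]
```

This is just one way to combine the two rounds; the judges' real procedure may differ.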

About the January winners

"EA Survey 2018 Series: Cause Selections", like the other posts in that series, makes important data from the EA Survey much easier to find. The summary and use of descriptive headings both increase readability, and the methodological details help to put the post's numbers in context.

As a movement, we collect a lot of information about ourselves, and it’s really helpful when authors report that information in a way that makes it easier to understand. All the posts in this series are worth reading if you want to learn about the EA community.

--

The EA Giving Tuesday program shows what a team of volunteers can do when they notice an opportunity — and how much more good can be done when those volunteers actively work to improve their project (in this case, they raised the matching funds they obtained by a factor of 10 between 2017 and 2018).

"EA Giving Tuesday Donation Matching Initiative 2018 Retrospective" illustrates this well, taking readers through the setup and self-improvement processes of the Initiative in a way that offers lessons for any number of other projects.

Documentation like this is important for keeping a project going even if a key contributor stops being available to work on it. We hope that others will learn from the EA Giving Tuesday example to create such documents (and Lessons Learned sections) for their own projects.

"EAGx Boston 2018 Postmortem" is a well-designed guide to running a small EA conference, which explains many important concepts in a clear and practical way using stories from a particular event.

Notable features of the post:

  • The author links directly to materials they used for the event (like a template for inviting speakers), helping other organizers save time by giving them something to build on.
  • The takeaway section for each subtopic helps readers find the knowledge they want, whether they're planning a full event or are just curious to see how another conference handled food.

I personally expect to share this postmortem whenever someone asks me about running an EA event (whether with 20 people or 200), and I hope to see an updated version after this year’s EAGx Boston!

The future of the Prize

When we launched the EA Forum Prize, we planned on running the program for three months before deciding whether to keep awarding monthly prizes. We still aren’t sure whether we’ll do so. Our goals for the program were as follows:

  1. Create an incentive for authors to put more time and care into writing posts.
  2. Collect especially well-written posts to serve as an example for other authors.
  3. Offer readers a selection of curated posts (especially those who don’t have time to read most of the content published on the Forum).

If you have thoughts on whether the program should continue, please let us know in the comments, or by contacting Aaron Gertler. We’d be especially interested to hear whether the existence of the Prize has led you to write anything you might not have written otherwise, or to spend more time on a piece of writing.

Comments

The prize definitely seems useful for encouraging deeper, better content. One question: would a smaller, more frequent set of prizes be more effective? Maybe a prize every two weeks?

My intuition says a $1000 top prize won't generate twice as much impact as a $500 top prize awarded every two weeks. I'm thinking along the lines of prospect theory, where a win is a win and winning $500 is worth much more than half of winning $1000, or of the prison reform literature, where a high chance of a small punishment deters crime more effectively than a small chance of a big punishment.

These prize posts probably create buzz and motivate people to begin, improve, and finish their posts; doubling their frequency and halving their payout could be more effective at the same cost.

(Counterargument: the biggest cost isn't money, it's time, and a two-week turnaround is a lot to ask of moderators. Not sure how to handle that.)

Any chance you would consider aspects from my Impact Prizes blog post, for future competitions? https://forum.effectivealtruism.org/posts/2cCDhxmG36m3ybYbq/impact-prizes-as-an-alternative-to-certificates-of-impact

Realize this may be a long shot, but I'd like to see more experimentation here.

How would you recommend incorporating these ideas into the Forum prize?

Given the small amounts of money involved, it seems like "tokenizing" a post would be difficult (a lot of effort for not much reward), and I'm a little worried that public "prediction markets" around post prizes would create a strange atmosphere for judges (e.g. if we know which people will lose or gain money based on how we award prizes, it could create the appearance of collusion even if we never look at the market).

But I may have misunderstood which "aspects" you were thinking of, or something about the Impact Prize idea more generally, so I'm curious to hear your thoughts in more detail!

Agreed that tokenizing and markets would be difficult in the short term.

The main aspect worth borrowing would be evaluating many projects and estimating their impact, as opposed to just ranking the very top projects. A rubric could be used for the evaluations.

For example, say every project is scored on a rubric with criteria like "importance", "novelty", and "quality". Then you divvy up the prize money in proportion to those rubric scores and make the results public.
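A minimal sketch of the proportional split described above (the criteria names and scores are hypothetical, not anything CEA actually uses):

```python
def split_prize(pool, scores):
    """Divide a prize pool in proportion to each post's total rubric score.
    `scores` maps each post to a list of per-criterion marks
    (e.g. importance, novelty, quality)."""
    totals = {post: sum(marks) for post, marks in scores.items()}
    grand_total = sum(totals.values())
    return {post: round(pool * t / grand_total, 2) for post, t in totals.items()}

# Illustrative rubric scores for three posts sharing a $1000 pool.
payouts = split_prize(1000, {"A": [5, 4, 3], "B": [2, 2, 2], "C": [1, 1, 2]})
# → {"A": 545.45, "B": 272.73, "C": 181.82}
```

One design consequence: every scored post receives some money, so the scheme rewards breadth rather than concentrating the incentive on a few winners.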

That could be an uncomfortable level of transparency for some people, but it would help foster a discussion of which projects are the most valuable.

I like the idea of trying to be more granular with evaluation, though I don't like the idea of making judges do a lot more work. Right now, I'd estimate that the value of the time it takes for judges to vote + CEA to administrate the prize is more than half the cost of the prize itself.

I could see something like "divide up winnings by number of votes", since we have approval voting already, though that won't track impact very precisely (a post with one vote is probably less than 1/6 as "valuable" as a post that gets a unanimous vote from all 6 judges). I'll keep thinking about different systems, though I think the current amounts will be kept stable for at least another few months.