CEA is pleased to announce the winners of the September 2020 EA Forum Prize!
- First place ($500): “Does Economic History Point Toward a Singularity?” by Ben Garfinkel.
- Second place ($300): “A Case and Model for Aggressively Funding Effective Charities,” by Aaron Hamlin.
- Third place ($200): “AI Governance: Opportunity and Theory of Impact,” by Allan Dafoe.
- Fourth place ($200): “Some thoughts on EA outreach to high schoolers,” by Buck Shlegeris.
- Fifth place ($200): “Asking for advice,” by Michelle Hutchinson.
The following users were each awarded a Comment Prize ($75):
- Mike McLaren on the minimum viable human population
- Bob Jacobs, creating brightly-colored guides to good discussion
- Paul Christiano on the implications of economic history
- So-Low Growth and Ben Pace, discussing the ease of funding high-fidelity YouTube content
See here for a list of all prize announcements and winning posts.
What is the EA Forum Prize?
Certain posts and comments exemplify the kind of content we most want to see on the EA Forum. They are well-researched and well-organized; they care about informing readers, not just persuading them.
The Prize is an incentive to create content like this. But more importantly, we see it as an opportunity to showcase excellent work as an example and inspiration to the Forum's users.
About the winning posts and comments
Note: I write this section in first person based on my own thoughts, rather than by attempting to summarize the views of the other judges.
Does Economic History Point Toward a Singularity?
> Over the next several centuries, is the economic growth rate likely to remain steady, radically increase, or decline back toward zero? This question has some bearing on almost every long-run challenge facing the world, from climate change to great power competition to risks from AI.
Not bad, for a “hobby project that got out of hand”! I also have some of those, but none so far have ended in 50-page research documents. I’ll refrain from talking about how this had the characteristics of a “good post” — honestly, it reads like multiple chapters from a good book out of Oxford University Press, albeit a book with a lively comments section. Garfinkel chose an important question, searched seriously for an answer, and produced something which gave the community a chance to poke and prod at his ideas as part of the search for truth. That’s the EA Forum at its best.
A Case and Model for Aggressively Funding Effective Charities
> Here’s the common wisdom from the nonprofit sector: Avoid providing or taking significant funding that comes largely from one source. But is that always good advice?
>
> If it is good advice, then no one told the larger nonprofits. Their funding is undeniably concentrated with funding from few funding sources within the same domain. This is how most nonprofits get large.
I’ve long enjoyed Aaron Hamlin’s writing aimed at donors, and I was pleased to see this argument against the idea that concentrated funding is best avoided. (Not because I have an opinion myself, but because disagreeing with conventional wisdom often begets interesting discussion.)
I like that this post:
- Notes that there are cases where concentrated funding actually is bad (if there are good arguments against your point, it’s nice to share them).
- Provides concrete examples of cases where concentrated funding worked out well (e.g. research on the birth control pill), rather than just making abstract arguments.
- Presents a nonprofit’s-eye view of concentrated funding discussions; it seems to me like many funding discussions in EA mostly involve donors’ perspectives, and don’t engage as much with the day-to-day demands of operating a charity and how that interacts with fundraising.
AI Governance: Opportunity and Theory of Impact
> AI governance concerns how humanity can best navigate the transition to a world with advanced AI systems. It relates to how decisions are made about AI, and what institutions and arrangements would help those decisions to be made well. I believe advances in AI are likely to be among the most impactful global developments in the coming decades, and that AI governance will become among the most important global issue areas.
Honestly, I didn’t have a great mental definition of “AI governance” until I read this post. Dafoe’s writing changed that very quickly. This is a brief but informative introduction to an important field, and I don’t think I’ve seen any similar posts on the topic before — which makes this one especially welcome.
I liked Dafoe’s breakdown of different problems within AI governance, but my favorite part of the post was the theory of impact and discussion of prioritization at the end. It can be easy to feel like the connection between research and impact is very clear, but the exact ways in which research is valuable — answering specific questions? Providing a more general set of resources and frameworks to decision-makers? — could easily change which types of research we want to focus on.
Some thoughts on EA outreach to high schoolers
> I’ve gotten the sense that many EAs are pessimistic about trying to engage high school students. But I think [past unsuccessful interventions] were ineffective for reasons unrelated to their target audience, and that other interventions aimed at high school students seem comparably promising to working with university students.
When we fail to achieve a goal, we shouldn’t just assume the goal isn’t realistic — instead, we may have been going about it wrong. Arguments of this type seem really valuable to discuss, and I enjoyed Shlegeris’s discussion of helping high-school students become involved in the movement. While I think there are a lot of risks to working with minors (as Shlegeris mentions, and as others discuss in the comments), I think I’d have been much happier had I found EA in high school, and I certainly think there are students of that age who are capable of contributing a lot to the community. This is a conversation worth having, and I’m glad that we were able to further it on the Forum.
Asking for advice
Note: The author also would have won a Comment Prize this month had she not won a post prize — I really liked this comment on giving kind feedback.
If there’s a single behavior EA tries to encourage more than any other, that behavior is… well, it’s donating to charity, but if there were a second single behavior, it would be asking people for advice. EA conferences tend to be focused on mentorship; so are EA groups, internships, career coaching…
...but if you aren’t part of a formal program that delivers advice to you, how do you go about finding some? From experience, I think Michelle Hutchinson’s thoughts on this are exactly right. Making it easy for people to give you advice will help you get more of it; taking notes, and then following up later, will reward your advisor for their help (and make them more inclined to offer further help later on). I was lucky enough to learn much of this as a college journalist (a trial by fire in getting experts to talk to me), but when I meet non-journalists getting started in the community, I expect I’ll often send them this article.
The winning comments
I won’t write up an analysis of each comment. Instead, here are my thoughts on selecting comments for the prize.
The voting process
The winning posts were chosen by five judges.
All posts published during the month of September qualified for voting, except those in the following categories:
- Procedural posts from CEA and EA Funds (for example, posts announcing a new application round for one of the Funds)
- Posts linking to others’ content with little or no additional commentary
- Posts which accrued zero or negative net karma after being posted
  - Example: a post which had 2 karma upon publication and wound up with 2 karma or less
Voters recused themselves from voting on posts written by themselves or their colleagues. Otherwise, they used their own individual criteria for choosing posts, though they broadly agreed with the goals outlined above.
Judges each had ten votes to distribute among the month’s posts. They also had a number of “extra” votes equal to [10 - the number of votes made last month]. For example, a judge who cast 7 votes last month would have 13 this month. No judge could cast more than three votes for any single post.
The winning comments were chosen by Aaron Gertler, though the other judges had the chance to nominate other comments and to veto comments they didn’t think should win.
If you have thoughts on how the Prize has changed the way you read or write on the Forum, or ideas for ways we should change the current format, please write a comment or contact me.