Preliminary Voting: Dec 14th – Jan 15th
Final Voting: Jan 15th – Feb 1st

This is the Final Voting phase. During this phase, you'll read reviews, reconsider posts in the context of today, and cast or update your votes. At the end we'll have a final ordering of the Forum's favorite EA writings of all time.


How exactly do the votes work?

Submitting reviews

If you have any trouble, please contact the Forum team, or leave a comment on this post.



This analysis accurately frames my starting point in working on Dozy. It convinced me to commit full-time to the project on the merits of its potential impact, and the awareness the post generated helped me raise a small pre-seed round from EAs, which has been invaluable. The claims about problem size and treatment have been largely substantiated by studies published since, but I underestimated the challenge of getting users through the complete treatment program. Also, a few direct-to-consumer CBT-i options are now coming out, so the coun…

Disclaimer: this is an edited version of a much harsher review I wrote at first. I have no connection to the authors of the study or to their fields of expertise, but I am someone who enjoyed the paper critiqued here and, in fact, think it is very sound and very conservative in its numbers (the current post claims the opposite). I disagree with this post and think it is wrong in an obvious and fundamental way, and therefore it should not be in the Decade Review, in the interest of not promoting wrong science. At the same time, it is well-written and exhibits a good…

Excellent and underrated post. I actually told Greg a few years ago that this has become part of my cognitive toolkit and that I use it often (I think there are similarities to the Tinbergen Rule, a basic principle of effective policy, which states that to achieve n independent policy targets you need at least n independent policy instruments).

This tool actually caused me to deprioritize crowdfunding with Let's Fund, which I realized was a multi-objective optimization problem (moving money to effective causes and doing re…

This was a very practical post. I return to it from time to time to guide my thinking on what to research next, I suggest it to others for consideration, and I think about ways to build on the work and develop a database. I believe it may have helped catalyse a lot of good outcomes.

This piece examines the accuracy of Peter Singer’s expanding moral circle theory through reasoning and examples. Since the validity of this arguably foundational thesis has wide implications, I recommend this post.

While the community may welcome the central thesis of expanding one’s moral circle, this post does not sell it well. This is exemplified by the “One might be biased towards AIA if…” section, which makes assumptions about individuals who focus on AI alignment. Further, while the post includes a section on cooperation, it discourages it. [Edit: Prima facie,] the post does not invite critical discussion. Thus, I would not recommend this post to most readers interested in moral circle expansion, AI alignment, or cooperation; I would recommend it only to those readers who are also interested in a vibrant discourse.

This framing of the “drowning child” thought experiment may appeal most to philosophy professors (who will feel as if they are hearing it from a friend), so it can be shared with that niche audience. Some popular versions include this video (more neutral, appropriate for audiences of diverse ages) and this video (marketed to younger audiences). The experiment should be paired with more rational writing on high-impact opportunities and the specifics of engagement in order to motivate people to enjoy high-impact involvement.

144 Reviewed Posts (588 Nominated)