Recent Discussion

Knowing that a large number of people donate inefficiently because of scope insensitivity and other cognitive errors/biases/heuristics, some organisations can do well by (intentionally) being inefficient. What counts is not necessarily the relative cost-effectiveness of an intervention but its absolute impact.

An example:

A transparent Organization 1, focused on maximizing impact per dollar, uses honest advertising but offers few social-reward incentives, and is therefore only able to raise $300K per year. Being very efficient, it saves one lifeyear* per $10.

That equals 30K lifeyears* saved.
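The arithmetic in this example can be written as a one-line helper (a trivial sketch using only the hypothetical figures from the example above):

```python
def life_years_saved(annual_budget_usd: float, cost_per_life_year_usd: float) -> float:
    """Absolute impact: total life-years saved for a given budget."""
    return annual_budget_usd / cost_per_life_year_usd

# Organization 1 from the example: $300K/year at $10 per life-year.
print(life_years_saved(300_000, 10))  # → 30000.0
```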

Another Organization 2 focuses on inc... (Read more)

I just wanted to say that thanks to your question, I added the following bullet point to my article List of ways in which cost-effectiveness estimates can be misleading:

  • Ease of fundraising / counterfactual impact of donations. Let’s say you are deciding which charity you should start. Charity A could do a very cost-effective intervention but only people who already donate to cost-effective charities would be interested in supporting it. Charity B could do a slightly less cost-effective intervention but would have a mainstream appeal and could fundraise f
... (read more)
When can I eat meat again?

By Claire Yip, co-founder of Cellular Agriculture UK. These views are my own.


Timeline: When we can expect highly similar cost-competitive alternatives to animal products


  • There is a lot of uncertainty around when we will be able to eat meat grown from cells, and how we should divide our efforts between that, plant-based alternatives, and other forms of animal advocacy. This post seeks to give sensible, unbiased views on the future of alternative proteins.
  • However, these views are uncertain too: I have c.40-60% confidence. These estimates are not set in stone. Factors like investment a
... (Read more)
MichaelStJules:

Shouldn't the burden be the other way? Why should you care that it's real if it's otherwise indistinguishable? It sounds like you prefer real meat just to spite animal advocates. There are reasons to break the tie the other way:

  1. Moral uncertainty. You might assign some possibility to it being wrong. Are you 100% sure animals don't matter? If you're 100% sure or close to it, is that confidence justified? Also, you don't have to believe in animal "rights" per se to recognize that animal farming causes harm to animals, and it's better to avoid this, all else equal.
  2. The harm it causes other humans who care about animals because they care about animals. Imagine if we started farming children with severe intellectual disabilities and torturing them. It's horrifying for us in the same way.
  3. Environmental harms.
  4. Public health.
  5. Injuries, PTSD and other mental health issues caused by slaughterhouse work.
  6. Increased crime rates in areas with slaughterhouses. (I'm not sure how strong the causal relationship is here, though, but it's plausible given mental health effects.)

Infants (<1 year) and many nonverbal humans who are nonverbal because of intellectual disability still have interests, e.g. in not suffering involuntarily.

If my own involuntary suffering is bad in itself, and I recognize that at least one other individual's involuntary suffering is bad in itself, then it's on me to justify treating some involuntary suffering as bad in itself and others not, and if I can't do this, then I should accept that it's always bad in itself, or that no other individual's suffering is bad in itself (and maybe not my own, either).

Are you not concerned with others' welfare for their sakes, and not just how it benefits you to be concerned with their welfare in other ways? What are the things that, at a fundamental level, make a person better or worse off? Don't those (or at least some of those) also apply to nonhuman animals?
I don't think alcohol is a good

Many good points.

Moral uncertainty doesn't give you what you want. It gives you everything and nothing. You don't use it to question your own values, but only as a rhetorical device to get other people to question their values, and only those that disagree with your current values. Maybe the Logic of the Larder goes through. Maybe animal farming is good for wild animals. Maybe animal suffering is intrinsically morally good. You can't point to uncertainty to privilege your current moral preference.

The costs to slaughterhouse workers are inter... (read more)

Crossposting from the Effective Altruism community on Reddit. Thought it may be helpful to have a discussion here as well for those who don't frequent r/EffectiveAltruism.

For those who are thinking about how they can leverage their donations towards this cause area, where should we be donating to?

Bail funds are getting the most media attention right now, with the Minnesota Freedom Fund receiving $20M. With that, I'm not sure if there is a funding need right now for bail funds, compared to other neglected organizations in the same cause area. I'm also not sure on how to compare... (Read more)

Answer by alexrjl:

It's really disappointing to see this post repeatedly down-voted without any responses. When people approach the EA community and ask about the most effective way to deal with an issue they care about, surely there's a better way to respond than "I think there are more pressing causes so I'm not even going to dignify your polite request with a polite response".

In answer to the question, there's not been a huge amount of EA research on this, mostly because, for several reasons, it tends to be more cost-effective to focus on the world's poorest countries if you intend on helping people today. However:

  • The Open Philanthropy Project has made grants focused on criminal justice reform, which seems highly relevant. []
  • While I haven't seen a CEA for [], one of the founders, Samuel Sinyangwe, has been getting lots of positive attention from EAs for his data-driven approach, and has recently become a 538 contributor.
David_Moss: I didn't downvote it, but some commenters might have done because an almost identical question [] was asked a few days ago.

Upvoted, I didn't see that one, hopefully that's the case!

Climate Change Is Neglected By EA

A year ago Louis Dixon posed the question “Does climate change deserve more attention within EA?”. On May 30th I will be discussing the related question “Is Climate Change Neglected Within EA?” with the Effective Environmentalism group. This post is my attempt to answer that question.

Climate change is an incredibly complex issue where a change in greenhouse gas concentrations is warming the planet, which has a long list of knock-on impacts including heatwaves, more intense rainfall, more intense droughts, sea level rise, increased storm surges, increased wildfires, ... (Read more)

Nothing you've written here sounds like anything I've heard anyone say in the context of a serious EA discussion. Are there any examples you could link to of people complaining about causes being "too mainstream" or using religious language to discuss X-risk prevention?

The arguments you seem to be referring to with these points (that it's hard to make marginal impact in crowded areas, and that it's good to work toward futures where more people are alive and flourishing) rely on a lot of careful economic and moral reasoning about the real world, and I think

... (read more)

Previously titled “Climate change interventions are generally more effective than global development interventions”. Because of an error, the conclusions have changed significantly. [old version]. I have extended the analysis and now provide a more detailed spreadsheet model below. In the comments below, Benjamin_Todd uses a different Guesstimate model and finds that climate change comes out ~80x better than global health (even though the point estimate found that global health is better).
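Benjamin_Todd's Guesstimate model isn't reproduced here, but the general phenomenon it illustrates (a point estimate favoring one intervention while the heavy-tailed expected value favors the other) can be sketched with entirely made-up distributions:

```python
import random

random.seed(0)

# Made-up lognormal inputs -- NOT the numbers from the models discussed
# in the post. They only illustrate how a heavy right tail can make the
# mean and a point estimate (here, the median) disagree on which wins.
def sample_ratio() -> float:
    climate = random.lognormvariate(0.0, 1.5)  # wide uncertainty
    health = random.lognormvariate(0.7, 0.3)   # narrow uncertainty
    return climate / health  # > 1 means climate looks better

samples = sorted(sample_ratio() for _ in range(100_000))
median_ratio = samples[len(samples) // 2]
mean_ratio = sum(samples) / len(samples)

# Analytically the ratio is lognormal(-0.7, sqrt(2.34)):
# median = exp(-0.7) ≈ 0.50 (health wins),
# mean   = exp(-0.7 + 2.34/2) ≈ 1.60 (climate wins in expectation).
print(f"median: {median_ratio:.2f}  mean: {mean_ratio:.2f}")
```

The design point: with skewed uncertainty, "which intervention is better in expectation" and "which is better at the point estimate" are different questions, which is why the two models can disagree without either containing an arithmetic error.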

Word count: ~1800

Reading time: ~9 mins

Keywords: Climate change, climate policy, glob... (Read more)

Thanks for the comment!

the title of the article could maybe use editing!

I think I'll just leave the title for now; it is confusing as it is, but I'm not sure it's worth it to redo/rewrite the analysis. I should probably have just called it "How to compare the relative effectiveness of development vs. climate interventions". I'll make a note at the beginning of the post linking to your Guesstimate, saying that you found different results.

I can't quite follow your analysis from the screenshots (perhaps you could l... (read more)

HaukeHillebrandt: I had emailed all the authors of this analysis and asked them, but they didn't get back to me, so I think it's ambiguous and not really replicable. But yes, I agree it's a fairly small uncertainty compared to the others.

COVID-19 has precipitated an emerging food crisis that is unprecedented in the last 75 years; the WFP has described it as potentially leading to "famines of biblical proportions". We've earlier described the crisis and made the case for action on this front. A combination of supply-line disruptions, labour and movement restrictions leading to shortfalls, rapidly growing locust swarms, and many other factors is causing this crisis.

Together with the team behind the successful Coronavirus Tech Handbook, we’re designing a Food System Handbook to help compile a... (Read more)

Together with the team behind the successful Coronavirus Tech Handbook...

What makes you say the Coronavirus Tech Handbook has been successful? I assume it's been useful to many people, but I'm interested in specifics: who's made use of it, what projects have been helped by it, etc.

Hi everyone, I'm Giang Nguyen, born and bred in Vietnam. I have been part of EA York for the last 3 years and involved in (too) many EA events and retreats. If you are interested in chatting more about Effective Altruism Vietnam please get in touch. We only have a Google Site and a brief plan but nothing much!  

My email is Thank you and take care. 

Could you share a link to the Google Site? I'd be curious to see it (have you translated any English-language EA material into Vietnamese?)

Executive Summary

An animal’s capacity for welfare is how good or bad its life can go. An animal’s moral status is the degree to which an animal’s experiences or interests matter morally. It’s plausible that animals differ in their capacity for welfare and/or their moral status. These differences could affect the way we ought to allocate resources across interventions and/or cause areas. Unfortunately, measuring capacity for welfare and moral status is tremendously difficult.

When donors or researchers choose to focus on cause areas or interventions that target certain species rather than other

... (Read more)

Appreciate the care taken, especially in the atomistic section. One thing is that it seems to assume that the best we can do with such a research agenda is to analyze correlates, where what we really want is a causal model.

Jason Schukraft:

Hi Zach,

Thanks for your comment. Measuring and comparing welfare across species is a tremendous theoretical and practical challenge. For measuring capacity for welfare, we would want to get a rough sense of the range of physical pain and pleasure an animal can experience as well as the range of emotional pain and pleasure an animal can experience. We would also want to know the degree to which physical and emotional pain/pleasure contribute to overall welfare, and this may differ by species. (We will need to account for combination effects: among other things, "stacking" one unit of physical pain on top of one unit of emotional pain may create more or less than two units of overall suffering.) All else being equal, if two animals have the same range of possible physical pains and pleasures, but animal A has a greater range of possible emotional pains and pleasures than animal B, we would expect animal A to have a greater capacity for welfare than animal B.

One thing to keep in mind is that what ultimately matters morally is realized welfare, not capacity for welfare. In many instances, judging the effectiveness of an intervention will require looking at species-specific differences in the way welfare is realized. Two animals may have the same overall capacity for welfare, and they may be subject to the same conditions (solitary confinement, say), but species-specific differences (one is a social animal and the other is not, say) may indicate that one animal suffers much more than the other in those conditions.

Nonetheless, I do believe thinking about capacity for welfare will help increase the efficiency with which our resources are allocated across interventions, especially when applied to big-picture questions, like "What percentage of our resources should ideally go to fish or crustaceans or insects?"
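The "combination effects" caveat can be illustrated with a toy aggregation rule (entirely hypothetical; the exponent `k` is an assumed free parameter for illustration, not part of the actual research agenda):

```python
def combined_suffering(physical: float, emotional: float, k: float) -> float:
    """Toy combination rule: an L^k-style aggregation of two pain channels.

    k == 1 reproduces simple addition; k < 1 makes stacking superadditive
    (the combination hurts more than the sum); k > 1 makes it subadditive.
    """
    return (physical ** k + emotional ** k) ** (1 / k)

# One unit of physical pain stacked on one unit of emotional pain:
print(combined_suffering(1, 1, k=1))    # → 2.0 (plain addition)
print(combined_suffering(1, 1, k=0.5))  # → 4.0 (more than the sum)
print(combined_suffering(1, 1, k=2))    # ≈ 1.41 (less than the sum)
```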
Brief update on EA Grants

I published an update about EA Grants last November. I'm now publishing another quick update to announce that EA Grants is no longer considering new grantmaking. We encourage grantseekers to apply to one of the EA Funds instead. 

Applications for the next round for the EA Meta Fund, the Animal Welfare Fund, and the Long-Term Future Fund should be submitted by 15 June.

You can find links to each Fund’s application form on their respective pages on the EA Funds website (the Global Health and Development Fund does not currently accept open submissions).

I’m currently on medical leave with

... (Read more)

Just letting people know that applications to EA Community Building Grants are still open (someone mentioned being unsure about this based on this update).


  • It's sometimes reasonable to believe things based on heuristic arguments, but it's useful to be clear with yourself about when you believe things for heuristic reasons as opposed to having strong arguments that take you all the way to your conclusion.
  • A lot of the time, I think that when you hear a heuristic argument for something, you should be interested in converting this into the form of an argument which would take you all the way to the conclusion except that you haven't done a bunch of the steps--I think it's healthy to have a map of all the argumentative steps
... (Read more)
RomeoStevens:

I really enjoyed this. A related thing is a possible reason why more debate doesn't happen. I think when rationalist-style thinkers debate, especially in public, it feels a bit high stakes. There is pressure to demonstrate good epistemic standards, even though no one can define a good basis set for that. This goes doubly so for anyone who feels like they have a respectable position or are well regarded. There is a lot of downside risk to them engaging in debate and little upside.

I think the thing that breaks this is actually pretty simple and is helped out by the 'sorry' command concept. If it's a free move socially to choose whether or not to debate (which avoids the thing where a person mostly wants to debate only if they're in the mood and about the thing they are interested in, but doesn't want to defend a position against arbitrary objections that they may have answered lots of times before, etc.), and also a free move to say 'actually, some of my beliefs in this area are cached sorries, so I reserve the right to not have perfect epistemics here already, and we also recognize that even if we refute specific parts of the argument, we might disagree on whether it is a smoking gun, so I can go away and think about it and I don't have to publicly update on it', then it derisks engaging in a friendly, yet still adversarial, form of debate.

If we believe that people doing a lot of this play fighting will on average increase the volume and quality of EA output, both through direct discovery of more bugs in arguments and in providing more training opportunity, then maybe it should be a named thing like Crocker's rules? Like people can say 'I'm open to debating X, but I declare Kid Gloves' or something. (What might be a good name for this?)
EdoArad: I agree with this. Perhaps we are on the same page. But I think that this is in an important way orthogonal to the Planner-vs-Hayekian distinction, which I think is the more crucial point here. I'd argue that if one wants to solve a problem, it would be better to have a sort of roadmap and to learn things along the way. I agree that it might be great to choose subproblems if they give you some relevant tools, but there should be a good argument as to why these tools are likely to help. In many cases, I'd expect choosing subproblems closer to what you really want to accomplish to help you learn more relevant tools. If you want to get better at climbing stairs, you should practice climbing stairs.

I think having a roadmap, and choosing subproblems as close as possible to the final problem, are often good strategies, perhaps in a large majority of cases.

However, I think there at least three important types of exceptions:

  • When it's not possible to identify any clear subproblems or their closeness to the final problem is unclear (perhaps AI alignment is an example, though I think it's less true today than it was, say, 10 years ago - at least if you buy e.g. Paul Christiano's broad agenda).
  • When the close, or even all known, subproblems hav
... (read more)

The fourth Workshop on Mechanism Design for Social Good is taking place this August. 

In addition to requesting papers and demonstrations, they are requesting "problem pitches" where people (say working in policy/NGOs) can submit a problem they have whose solution may involve mechanism design (a subfield of game theory). If accepted, it may interest academics working on these subjects. 

This might be a good opportunity to pitch some problems related to EA. Perhaps related to

  1. Donor coordination.
  2. Impact prize.
  3. Moral trade.

I'm sure that there are many more concrete examples within specific o

... (Read more)

By longtermism I mean "the view that the most important determinant of the value of our actions today is how those actions affect the very long-run future."

I want to clarify my thoughts around longtermism as an idea - and to understand better why some aspects of how it is used within EA make me uncomfortable despite my general support of the idea.

I'm doing a literature search, but because this is primarily an EA concept that I'm familiar with from within EA, I'm mostly familiar with work by advocates of this position (e.g. Nick Beckstead). I'd ... (Read more)

This sounds like a misunderstanding to me. Longtermists concerned with short AI timelines are concerned with them because of AI's long-lasting influence on the far future.

RandomEA: As an update, I am working on a full post that will excerpt 20 arguments against working to improve the long-term future and/or working to reduce existential risk, as well as responses to those arguments. The post itself is currently at 26,000 words, and there are six planned comments (one of which will add 10 additional arguments) that together are currently at 11,000 words. There have been various delays in my writing process, but I now think that is good because several new and important arguments have been developed in the past year. My goal is to begin circulating the draft for feedback within three months.

Tags are now live on the EA Forum!

They appear just above the comment section of each post, like this:

You’ll be able to select from any existing tag when tagging a post, but you won’t be able to create your own tag. For now, only moderators have that ability, because we want to make sure new tags don’t proliferate too quickly (lest we end up with separate tags for “AI alignment,” “AI safety,” and “AI risk”).

We’re thrilled to be introducing this feature; we hope it will make it much easier to find content that suits your interests.

You can see a list of all existing tags here. Each has its own pa

... (Read more)

Can you please add the tag directory to the sidebar?


  • Dictators who exhibited highly narcissistic, psychopathic, or sadistic traits were involved in some of the greatest catastrophes in human history. (More)

  • Malevolent individuals in positions of power could negatively affect humanity’s long-term trajectory by, for example, exacerbating international conflict or other broad risk factors. (More)

  • Malevolent humans with access to advanced technology—such as whole brain emulation or other forms of transformative AI—could cause serious existential risks and suffering risks. (More)

  • We therefore consider interventions to reduce the expec

... (Read more)

I am skeptical of this line of reasoning because I see little reason to believe that malevolence determined the policies in question. Game-theoretic political scientists argue that different institutional structures make it rational or irrational for leaders to distribute public goods or targeted goods, practice repression, or allow political parties. For a more in-depth treatment, see The Dictator's Handbook by Bruce Bueno de Mesquita and Alastair Smith. Their core argument is that because dictators must appease a very small group of powerful interest lead... (read more)
