Recent Discussion

Review of Climate Cost-Effectiveness Analyses
18 · 3d · 15 min read

This post was prompted by the comments on my proposed updated 80K Hours Climate Change Problem Profile.

It’s important to make clear up front the surprising truth: there is genuinely very little quantitative research into the impacts of climate change of 4°C and above. The research which does exist is necessarily limited in scope and makes a large number of assumptions, many of which will tend to undervalue the overall impact of climate change.

In this post I examine four previous attempts to examine aspects of the impact of climate change and/or the cost-effectiveness of c... (Read more)

Thanks for this. Have you seen the reports produced by BreakThrough?

The authors argue that:

  1. the IPCC reports and those based on them are overly conservative and under-report the probability and impact of climate risks
  2. the serious impacts that are often considered in 2100 scenarios will more likely come around 2050

I don't have the expertise to describe how the calculations you've done above would be affected by this, but hopefully someone else will.

2 Denkenberger 8h — Nuclear winter would be approximately an 8°C change in only one year, and this is unlikely to cause extinction. 10°C of climate warming over a century would have much lower impact, because there is time to relocate infrastructure and people (and nuclear winter also reduces solar radiation). So I have put it in the intensity category of an abrupt 10% agricultural shortfall. Based on a survey of GCR researchers, this has a mean long-term reduction in far-future potential of approximately 5%. Combined with a probability of about 2%, this gives about a 0.1% reduction in far-future potential. Full-scale nuclear war is estimated to cause a 17% reduction in long-term future potential. There is great uncertainty in the probability of full-scale nuclear war, but I think 0.1% per year, or 10% in the next 100 years, is reasonably conservative. Therefore, full-scale nuclear war is more likely than extreme climate change and would also have significantly greater consequences if it were to happen. But then the question is how much it would cost to significantly mitigate each problem. Since solar radiation management is risky, the present value of the cost of largely solving the climate change problem by reducing emissions is around $10 trillion (there was an EA Forum post on the value of information of this, but I can’t seem to find it). I have researched both energy efficiency and renewable energy for years, and I do think there is still some low-hanging fruit of energy efficiency that pays for itself. However, actually solving the problem will cost a lot of money. On the other hand, reducing the far-future impact of nuclear winter by about 17% would cost around $100 million [https://forum.effectiveal
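The expected-value arithmetic in this comment can be sketched as follows; the probabilities and conditional losses are the figures quoted above, and the helper name `expected_loss` is illustrative:

```python
# Expected reduction in far-future potential = probability of the event
# times the long-term loss conditional on it happening.
def expected_loss(probability, conditional_loss):
    return probability * conditional_loss

# Extreme climate change: ~2% probability, ~5% conditional loss (figures from the comment)
climate = expected_loss(0.02, 0.05)   # 0.001, i.e. ~0.1% of far-future potential

# Full-scale nuclear war: ~10% probability over a century, ~17% conditional loss
nuclear = expected_loss(0.10, 0.17)   # 0.017, i.e. ~1.7%

print(f"climate: {climate:.1%}, nuclear war: {nuclear:.1%}")
```

This makes the comment's comparison explicit: on these numbers the nuclear-war term is roughly 17 times larger than the extreme-climate term.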
1 StevenKaas 9h — I was thinking e.g. of Nordhaus's result that a modest amount of mitigation is optimal. He's often criticized for his assumptions about discount rate and extreme scenarios, but neither of those is causing the difference in estimates here. According to your link, recent famines have killed about 1M per decade, so for climate change to kill 1-5M per year through famine, it would have to increase the problem by a factor of 10-50 despite advancing technology and increasing wealth. That seems clearly wrong as a central estimate. The spreadsheet based on the WHO report says 85k-95k additional deaths due to undernutrition, though as you mention, there are limitations to this estimate. (And I guess famine deaths are just a small subset of undernutrition deaths?) Halstead also discusses this issue under "crops".
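The factor-of-10-50 claim above follows from simple arithmetic on the comment's figures, which can be sketched as:

```python
# Recent famines: ~1M deaths per decade (figure quoted in the comment).
recent_famine_deaths_per_decade = 1_000_000
baseline_per_year = recent_famine_deaths_per_decade / 10  # ~100k/year

# For climate change to kill 1-5M per year through famine,
# the baseline problem would need to grow by this factor:
low_factor = 1_000_000 / baseline_per_year   # 10x
high_factor = 5_000_000 / baseline_per_year  # 50x

print(low_factor, high_factor)  # 10.0 50.0
```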
Probability estimate for wild animal welfare prioritization
6 · 9h · 16 min read

In this article I calculate my subjective probability estimate that the problem of wild animal suffering is the most important cause area in effective altruism. I will use a Fermi estimate to calculate lower and upper bounds of the probability that research about interventions to improve wild animal welfare should be given top priority. A Fermi estimate breaks the probability up into several factors such that the estimate of the total probability is the product of the estimates of the factors. In superforecasting, this method is known to increase accuracy and predictive power.
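The multiplication described here can be sketched minimally; the factor values below are made up purely for illustration (the actual factors and numbers are in the full post):

```python
from math import prod

# Hypothetical per-factor probability estimates (illustrative values only).
# Each entry is one factor of the Fermi decomposition.
lower_factors = [0.5, 0.2, 0.1]
upper_factors = [0.9, 0.6, 0.5]

# The total probability estimate is the product of the factor estimates.
lower_bound = prod(lower_factors)  # ≈ 0.01
upper_bound = prod(upper_factors)  # ≈ 0.27

print(f"lower: {lower_bound:.2f}, upper: {upper_bound:.2f}")
```

Decomposing an estimate this way lets each factor be judged on its own evidence, which is why the product tends to be better calibrated than a single holistic guess.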

With the lower and... (Read more)

Suffering-focused ethics can also avoid the repugnant sadistic conclusion, which is the most counterintuitive implication of total utilitarianism (the view that maximizes the sum of everyone’s welfare). Consider the choice between two situations. In situation A, a number of extremely happy people exist. In situation B, the same people exist but have extreme suffering (maximal misery), and a huge number of extra people exist, all with lives barely worth living (slight positive welfare). If the extra population in B is large enough, the total welfare in B become
... (Read more)
2 Pablo_Stafforini 3h — "the repugnant sadistic conclusion of total utilitarianism" Note that total utilitarianism does not lead to what is known as the "sadistic conclusion". This conclusion was originally introduced by Arrhenius, and results when adding a number of people each with net negative welfare to a population is better than adding some (usually larger) number of people each with net positive welfare to that population. Given what you say in the rest of the paragraph, I think by 'repugnant sadistic conclusion' you mean what Arrhenius calls the 'very repugnant conclusion', which is very different from the sadistic conclusion. (Personally, I think the sadistic conclusion is a much more serious problem than the repugnant conclusion or even the very repugnant conclusion, so it's important to be clear about which of these conclusions is implied by total utilitarianism.)
Older people may place less moral value on the far future
19 · 2d · 14 min read


In a study initiated by SoGive, we sought to understand to what extent study participants care about (or place moral value on) people in the far future. The study examined stated opinions on the topic at an abstract level, stated opinions taking into account a concrete (but hypothetical) example, and (in another attempt to make this more concrete) the choice to donate to global poverty versus climate change charities. The study was intended to support SoGive’s research on charities.

This paper also introduces a new segmentation of presentdayist/longte... (Read more)

In addition to the analyses SoGive conducted that Sanjay has listed, Rethink Priorities conducted some tests which provide further evidence for the claims made, and speak to the question Will MacAskill raised: “Do younger people actually have more future-oriented views?” The samples do appear to be more presentdayist than longtermist, especially so for older respondents.

Explicit questions suggest more presentdayist preferences

In both samples we find evidence in support of the hypothesis that there is a preference for prioritising helping people now rather

... (Read more)
Defending the Procreation Asymmetry with Conditional Interests
18 · 10d · 7 min read

The Procreation Asymmetry consists of these two claims together:

  1. it’s bad to bring into existence an individual who would have a bad existence, other things being equal, or the fact that an individual would have a bad existence is a reason to not bring them into existence; and
  2. it’s at best indifferent to bring into existence an individual who would have a good existence, other things being equal, or the fact that an individual would have a good existence is not a reason to bring them into existence.

However, if a bad existence can be an "existential harm" (according to c... (Read more)

got it! :-)

What to know before talking with journalists about EA
9 · 12mo · 7 min read

Journalists regularly contact individuals, groups, and organizations who are involved in the effective altruism space. At first glance, opportunities to speak with journalists may seem like a good way to spread information about important work and ideas. However, we have found that they can also be a good way to create misunderstandings or negative impressions of EA or of particular projects. Because evaluating and engaging in successful media engagements requires specialized skills and knowledge, it’s important to seek advice or resources, proceed carefully, and be prepared.

Quick takea... (Read more)

I would like to add something that the authors of this piece may be too polite or professional to say themselves: the financial pressures within the media industry have made journalism among the most dishonest professions in society.

Of course there are many fantastic scrupulous people working in the media. And there are a handful of outlets that maintain high levels of integrity.

But the median journalist is under enormous pressure to find some sensationalist angle for their stories in order to drive a lot of clicks. They're also under great pressure t... (Read more)

Making Donating Fun
14 · 10d · 3 min read

So here is an idea:

Donation Game App

An application which allows users to play games against each other.

  • The user funds their account with money, and receives tokens in return (which are pegged to USD).
  • The user buys into a game with tokens (different buy-in levels)
  • The winner(s) of the game receives the tokens in the prize pool for that game.
  • A user is not allowed to withdraw the tokens, they can only be used to donate to effective charities (decided by the application).
  • The app takes zero commission from buy-ins, so all of the money goes to the charities.
  • Users can track where their tokens h
... (Read more)

Very good points, I agree with all.

Don't be afraid to sound negative; honest feedback can save a whole career!

[Question]What are your top papers of the 2010s?
20 · 1d · 1 min read

Which papers published in the last decade most influenced your thinking?

This question is inspired by today's Future Perfect newsletter (signup link), see answer below. Dylan Matthews restricted himself to papers in "economics, political science, sociology, psychology, and philosophy", but I'd be interested in papers from any domain.

2 anonymous_ea 11h — Can you expand on how this influenced you?

It basically represents my transition from thinking that algorithms were basically fair and fine, to thinking they're biased because people are biased and so bias is baked in, e.g. through bad data, to realising there are a very wide variety of ways that algorithms can unintentionally discriminate.

They're not a particularly EA-related pair of papers, but they are very interesting.

Helen Toner: Building Organizations
6 · 7h · 8 min read

At the time of this talk (2016), Helen Toner was a senior research analyst at the Open Philanthropy Project. Here, she shares management lessons — such as holding regular one-on-ones and soliciting feedback — that she learned in her first year at GiveWell and the Open Philanthropy Project.

Below is a transcript of the talk, which we have edited lightly for clarity. You can also watch Helen’s talk on YouTube or read its transcript on

The Talk

As I’m sure you all know by now, effective altruism [EA] is about doing the most good you can with the res... (Read more)

Ramiro's Shortform
2 · 7d

Assessing the impact of Brazilian donors and EA community

We’re thinking about testing if our actions for promoting EA in this year (translations, meetings, networking...) have led to an observable increase in donations from Brazil - particularly outside the group of more "engaged" members. Even if we haven't observed an increase in high-quality engagement (such as GWWC pledges), we do see an increase in some "cheaper signals", such as the number of Facebook group members and the amount of donations to AMF (which, curious... (Read more)

[Link]IGDORE forum for discussing metascience
2 · 11h · 1 min read

I recently joined the Institute for Globally Distributed Open Research and Education (IGDORE). They have decided to invite groups interested in metascience (specifically open and replicable science) to create a space on their forum. I think a few EAs are interested in aspects of improving the scientific process, and wonder if anybody would be interested in creating an EA space there?

I wouldn't suggest that this replace the EA forum in any way, but it could act as a space where EAs interested in this area could engage with non-EAs who have similar (metascience) interests.

‘Included in thi... (Read more)

I'm organizing a preconference on 'The Evolutionary Psychology of Existential Risk' for the Human Behavior and Evolution Society (HBES) conference next June 24-27, 2020 in Detroit, with 4-6 speakers.

I'd like to include some EA experts who are interested in the psychological challenges associated with understanding & managing X-risks.

Any suggestions of possible speakers who might be interested?

Feel free to reply here or to email me directly. Thanks!

[Link]Oddly, Britain has never been happier
14 · 20h · 1 min read

This article is about the little-noticed fact that happiness has been climbing in the UK for 20 years (despite endless bad news), and misery has almost disappeared. And why this might be.

It also outlines measuring happiness and using it to guide policies, as governments are starting to do (and arguably so too should Effective Altruism).

All comments very welcome, particularly from experts in the field.

I also thought the World Happiness Survey looked flat, but it has gone up. A rise of 0.25/10 is not to be sniffed at.

WHS has a much smaller sample size - around 1,000 per year - whereas the Office for National Statistics asks around 300,000 people a year. ONS data also shows a rise of about 0.3/10 between 2011 and 2019 (
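The sample-size point can be quantified: the standard error of a sample mean scales with 1/√n, so a rough sketch (the standard deviation of 2.0 on the 0-10 happiness scale is an illustrative assumption, not a figure from either survey) shows how much more tightly the larger sample pins down the mean:

```python
from math import sqrt

sd = 2.0  # assumed standard deviation of 0-10 happiness responses (illustrative)

# Standard error of the mean = sd / sqrt(sample size)
se_whs = sd / sqrt(1_000)     # WHS-sized annual sample
se_ons = sd / sqrt(300_000)   # ONS-sized annual sample

print(round(se_whs, 3), round(se_ons, 3))  # 0.063 0.004
```

On these assumptions a rise of 0.25-0.3 points is many standard errors wide in the ONS data, but only a few in a WHS-sized sample.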

I’m not entirely sure that there is really no other official source for local group funding. Please correct me in the comments.

It seems that by now, CEA's community building grants programme is the only source of funding for local group leaders and community builders worldwide. (The EA meta fund distributed some funding to community builders in earlier rounds, although the bulk of the money went to other targets. In the latest round, they referred community builders to the community building grants programme, and they plan to continue doing so in the future.)

According to CEA’... (Read more)

Hi Jan,

Thanks for flagging your concerns here. The scope of EA Community Building Grants (CBG) doesn't encompass all funding decisions regarding community building, but is limited to providing funding for people to do part-time or full-time community building with a specific EA group (i.e. at a university, city and national level). We’ve made some grants outside this category, though they account for less than 10% of the total funding we’ve granted out. Within this category of location specific EA community building, the CBG programme li... (Read more)

Technical AGI safety research outside AI
57 · 6d · 3 min read

I think there are many questions whose answers would be useful for technical AGI safety research, but which will probably require expertise outside AI to answer. In this post I list 30 of them, divided into four categories. Feel free to get in touch if you’d like to discuss these questions and why I think they’re important in more detail. I personally think that making progress on the ones in the first category is particularly vital, and plausibly tractable for researchers from a wide range of academic backgrounds.

Studying and understanding safety problems

  1. How strong are the econo
... (Read more)

For reference, some other lists of AI safety problems that can be tackled by non-AI people:

Luke Muehlhauser's big (but somewhat old) list: "How to study superintelligence strategy"

AI Impacts has made several lists of research problems

Wei Dai's, "Problems in AI Alignment that philosophers could potentially contribute to"

Kaj Sotala's case for the relevance of psychology/cog sci to AI safety (I would add that Ought is currently testing the feasibility of IDA/Debate by doing psychological research)
