This is a cross-post from my blog. I've seen analysis like this previously on the forum, but nothing recently, so I thought it might be useful to share one up-to-date practical exploration of climate offset donations.
I want to start donating annually to offset my carbon footprint. I don’t really think of this as a charitable cost - instead it’s internalizing my externalities.
This is the first time I am systematically deciding to make an annual donation - I wanted to walk through my thinking in case it’s useful for anyone else! This post also serves as pro-Effective Altruism propaganda.
- How much carbon do I need to offset?
- The average American seems to emit about 15-20T of CO2 per year (source, source, source). I’ll assume 20T.
- But I travel a lot. A round-trip flight from London to New York emits ~1T of CO2. This year I took 5 international flights - most had multiple legs, so I’ll assume I emitted 15T more than the average American.
- So let’s say I have to offset 35T of CO2 each year.
- Where should I donate?
- I trust Vox’s Future Perfect on stuff like this. They recommend donating to a climate change fund such as the Climate Change Fund from Founders Pledge.
- How much should I donate?
- I’ll use the top recommended climate charity from Vox’s Future Perfect as a benchmark. As of December 2023, this is the Clean Air Task Force.
- Founders Pledge estimates that a donation to CATF can avert 1T of CO2 emissions for $0.10-$1.
- That would put the amount I have to donate to offset all my emissions at $3.50-$35 per year.
- To be on the safe side, I’ll assume I should donate $35.
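The arithmetic above can be sketched as a quick back-of-the-envelope calculation. All figures are the post's own estimates (20T baseline, 15T from flights, Founders Pledge's $0.10-$1 per tonne for CATF), not authoritative data:

```python
# Back-of-the-envelope offset cost, using the post's own assumptions.

baseline_emissions_t = 20   # average American, tonnes CO2 per year (post's estimate)
flight_emissions_t = 15     # extra from 5 international flights (post's estimate)
total_t = baseline_emissions_t + flight_emissions_t  # 35 tonnes to offset

# Founders Pledge's estimated cost to avert 1 tonne via CATF, in dollars
cost_per_tonne_low, cost_per_tonne_high = 0.10, 1.00

low = total_t * cost_per_tonne_low    # lower bound of annual donation
high = total_t * cost_per_tonne_high  # upper bound of annual donation
print(f"Offset cost range: ${low:.2f}-${high:.2f} per year")
# Prints: Offset cost range: $3.50-$35.00 per year
```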
Conclusion: I just donated $35 to the Climate Fund from Founders Pledge to offset my yearly carbon footprint. I intend to make this donation annually going forward, and encourage you to as well!
—
Effective Altruism has taken some heat lately, with the collapse of FTX and the drama around the OpenAI board ousting Sam Altman.
EA is both a philosophy and a community. I think the above exercise illustrates why both are really good, despite recent drama.
The philosophy of Effective Altruism gave me the intellectual motivation to donate in the first place. And it informs my decision about where to donate: I should not just donate to whatever feels best - I should donate where my dollar will have the highest impact in terms of tons of CO2-eq averted.
The community of EA has created institutions (in this case Vox’s Future Perfect, and Founders Pledge) that help me quickly[1] identify a good donation opportunity and direct my funds effectively. A post on the EA Forum also provided extra social motivation to make this donation.
Is this system perfect? No. Perhaps I could have spent more time finding a better charity to donate to. Perhaps I should be doing more in my lifestyle, or in political activism, to address the problem of climate change.
But I think my actions here are a lot better than they would be if Effective Altruism did not exist[2]. So overall I remain proud of Effective Altruism - both the philosophy and the community.
GiveWell has dozens of researchers putting tens of thousands of hours of work into coming up with better models and variable estimates. Their most critical inputs are largely determined by RCTs, and they are constantly working to get better data. A lot of their uncertainty comes from differences in moral weights in saving vs. improving lives.
Founders Pledge builds models using Monte Carlo simulations over complex theory-of-change models where the variable ranges are made up, because they are largely unknowable. It's mostly Johannes, with a few assistant researchers, putting a few hundred hours into model choice and parameter selection - with many more hours spent on writing and coding for their Monte Carlo analysis (which GiveWell doesn't have to do, because they have much simpler impact models in spreadsheets).

FP has previously made $1/mtCO2e cost-effectiveness claims based on models like this, which were amplified in MacAskill's WWOTF. That model is wildly optimistic. FP now disowns that particular model, but won't take it down or publicly list it as a mistake. They no longer publish their per-intervention CEAs publicly, though they may resume soon.

My biggest criticism is that when making these complex theory-of-change models, the structure of the model often matters more than the variable inputs. While FP tries to pick "conservative" variable value assumptions (they rarely are), the model structure is wildly optimistic for their chosen interventions (generally technology innovation policy). For model feedback, FP doesn't have a good culture or process in place for dealing with criticism, a complaint I've heard from several people in the EA climate space. I think FP's uncertainty work has promise as a tool, but I think the recommendations they come up with are largely wrong given their chosen model structure and inputs.
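To make the criticism concrete, here is a toy Monte Carlo cost-effectiveness sketch in the general style described above. Every parameter range and the theory-of-change structure are invented for illustration - this is not Founders Pledge's actual model - but it shows how multiplying wide, made-up ranges lets the model structure dominate the result:

```python
import random

def sample_cost_per_tonne(rng):
    # Hypothetical theory of change: donation -> policy advocacy ->
    # chance a policy passes -> emissions averted if it does.
    # Both ranges below are invented for illustration.
    budget = 1_000_000                                # dollars donated
    p_policy_passes = rng.uniform(0.01, 0.10)         # made-up probability range
    tonnes_averted_if_passes = rng.uniform(1e6, 1e8)  # made-up impact range
    expected_tonnes = p_policy_passes * tonnes_averted_if_passes
    return budget / expected_tonnes                   # $/tCO2 averted

rng = random.Random(0)  # fixed seed for reproducibility
samples = sorted(sample_cost_per_tonne(rng) for _ in range(10_000))
median = samples[len(samples) // 2]
print(f"Median cost-effectiveness: ${median:.2f}/tCO2")
```

Notice that the headline number here is driven almost entirely by the decision to multiply two wide uniform ranges together - changing that structure moves the result far more than nudging any individual "conservative" endpoint does, which is exactly the concern raised above.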
GiveWell's recommendations in the health space are of vastly higher quality and certainty than FP's in the climate space.