
I have to travel soon and I have two options: one is to fly for 3 hours, the other is to take a high-speed train for 11.5 hours. The flight ticket is about 100 USD cheaper than the train ticket.

My aim is to minimize the environmental impact of my travel, so taking the train seems to be the obvious choice. But I wonder if I'd be better off saving the 100 USD and donating the money to organizations such as the Clean Air Task Force or the Sunrise Movement, which may actually have a better impact on the environment?

2 Answers

According to Founders Pledge estimates, the CO2 savings from donating 100 USD (maybe 1 tonne averted per USD, with high uncertainty) will greatly exceed the emissions from your flight (which are likely on the order of 1 tonne) [1]. Donating 100 USD to Atmosfair, while less effective, would also offset this flight [2]. If you include the value of your time, the cost of the train trip might be far, far higher.
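A rough back-of-envelope version of this comparison, using the figures above as stated (both the ~1 tonne-per-USD effectiveness and the ~1 tonne flight footprint are highly uncertain assumptions, not precise estimates):

```python
# Back-of-envelope: CO2 averted by donating the price difference vs. flight emissions.
# Both inputs are rough assumptions with wide error bars.

FLIGHT_EMISSIONS_TONNES = 1.0    # ~1 tCO2 for a ~3-hour flight (assumption)
TONNES_AVERTED_PER_USD = 1.0     # Founders Pledge-style estimate, high uncertainty
PRICE_DIFFERENCE_USD = 100

averted = PRICE_DIFFERENCE_USD * TONNES_AVERTED_PER_USD   # ~100 tonnes averted
net_saving = averted - FLIGHT_EMISSIONS_TONNES            # ~99 tonnes net

print(f"Donation averts ~{averted:.0f} t; flight emits ~{FLIGHT_EMISSIONS_TONNES:.0f} t; "
      f"net ~{net_saving:.0f} t CO2 saved by flying and donating")
```

Even if the effectiveness estimate is off by an order of magnitude or two, the donation still dominates the flight's footprint on these assumptions.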

Plane emissions are further complicated, if you live in the EU, by emissions certificates: because intra-EU flights are covered by a cap-and-trade scheme, the counterfactual CO2 saving from deciding not to fly might be almost zero.

I personally stopped flying in 2015 - but not out of EA considerations. I made that decision because it feels good and unlocks new forms of adventure. It also simplifies my life (by removing certain travel options) and opens up conversations with interesting people who pursue the same lifestyle. I also consider it an exercise in personal growth. It is not a good way to help the climate when compared with donations, and I am open to the possibility of flying again in the future.

[1] https://founderspledge.com/research/fp-climate-change

[2] https://www.atmosfair.de/en/offset/fix/

> If you include the value of your time, the cost of the train trip might be far, far higher.

Just to make this explicit: that would imply donating that value in addition to those 100 USD.

Linch
No, it could come from having a high-impact job (where nonzero marginal hours go into it), or from donating a fraction of the difference rather than all of it. I also think that if you believe donations to other charities have higher marginal impact than donations to climate charities, it'd be less moral to donate to climate charities instead.
Guy Raveh
True; this still means you're doing something with the "profit" from that extra time and not just letting the information sit in your head: you're putting it into an impactful job (rather than playing video games), or you're using the money to mitigate the damage.

I think there are at least two points against believing this. First, you're directly harming the world in a specific way by flying instead of taking the train, and you don't want to take a moral position where it's OK to harm some people in order to help others "more effectively". Second, some cause areas many people here believe in are enticing in that investing in them moves the money back to you or to people you know, instead of directly to those you're trying to help. That's not necessarily a reason to drop them, but it is, in my opinion, certainly a reason not to treat them as the single cause you want to put all your eggs into. It's easier just to see them as the most moral no matter the circumstances, but I think that's dangerous.
Linch
This is not a full defense of my normative ethics, but I think it's reasonable to "pull" in the classical trolley problem, and I want to note that I think this is the most common position among EAs, philosophers, and laymen. In addition, the harm from increasing CO2 emissions is fairly abstract, and to me it should not invoke many of the same non-consequentialist moral intuitions as, e.g., agent-relative harms like lying, breaking a promise, or ignoring duties to a loved one.

I don't personally agree with this line of reasoning. There are a bunch of nuances here*, but at heart my view is that usually either you believe the cognitive-bias arguments are strong enough to drop your top cause area(s), or you don't. So I do think we should be somewhat wary of arguments that lead to us having more resources/influence/comfort (but not infinitely so). However, the most productive use of this wariness is to subject arguments or analyses that oh-so-coincidentally benefit ourselves to stronger overall scrutiny, rather than to hedge at less important levels. Donation splitting is possibly a relevant prior discussion here.

*For example, there might be unusually tractable actions individuals can take for non-top cause areas that have amazing marginal utility (e.g. voting as a US citizen in a swing state).

Agree with Lukas: better to book the flight. Not least because a 100 USD donation to Founders Pledge or CATF can likely be doubled by various 2022 matching opportunities. Every.org's promotion is an example.

A slightly similar choice came up for us when we bought a car in 2020. (A new job required one.) We would've preferred a used EV or hybrid, but during the peak of the pandemic a dealer was willing to deliver a used non-hybrid vehicle to our door for many thousands of USD less. That allowed us to invest a bit more while asset prices were in the doldrums. In the last two months we've steadily donated those appreciated assets to CATF, Carbon180, and dozens of other EA charities. Through trading donations (e.g., we donated to an AI charity in exchange for another EA giving to CATF or Carbon180), we have driven >1,000 USD to EA-embraced climate charities from that car purchase. More if you consider alternative protein charities to have a climate impact, as we do.

There are also likely strong "pandemic externality" reasons to choose the option that puts you in public for fewer hours. You might want to consult microCOVID's fantastic calculator to see how that math works out.

Why does trading donations help? And how can I find people to trade donations?

jared_m
It mostly helps when there are rule-bound matching funds available. Let's say you think CATF is a very effective charity when it comes to issues you care about, and that the Good Food Institute is somewhat less effective. Person B has the exact opposite perspective. If there's an Every.org-style matching opportunity and you give $200 to CATF, Every.org will only match $100 of that ($300 total for CATF). Likewise for Person B and GFI: her $200 becomes $300 for GFI. If you find each other through the EA Forum and coordinate to split your $200 personal gifts, each giving $100 to CATF and $100 to GFI, then EVERY dollar you both give will be matched. So each charity receives $400 instead of $300 from the same level of donations from you and Person B, as your giving is 100% matched instead of only 50%.
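The matching arithmetic above can be sketched in a few lines. This assumes an Every.org-style rule of up to $100 matched per donor per charity (the hypothetical cap used in the example):

```python
# Donation trading under a per-donor, per-charity match cap (assumed $100,
# as in the example above; real matching rules vary by campaign).

MATCH_CAP = 100  # USD matched per donor, per charity

def matched_total(gifts_by_donor):
    """Total a charity receives: each donor's gift plus its matched portion."""
    return sum(gift + min(gift, MATCH_CAP) for gift in gifts_by_donor)

# Without coordination: each donor gives $200 to their favorite charity.
catf_solo = matched_total([200])        # 200 gift + 100 match = 300
gfi_solo = matched_total([200])         # 300

# With splitting: both donors give $100 to each charity.
catf_split = matched_total([100, 100])  # (100+100) gifts + (100+100) matches = 400
gfi_split = matched_total([100, 100])   # 400
```

The total personal outlay is the same $400 in both scenarios, but splitting keeps every dollar under the cap, so the charities collect $800 instead of $600.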