Remix of: Purchase Fuzzies and Utilons Separately

It can be tough as an EA to watch something urgent and important happen while seeing no relevant giving opportunity as effective as the ones you're already supporting. You may feel guilty for not helping the visible crisis, and also guilty if you help it at the expense of a larger problem.

Here's my personal way of dealing with it:

1. Figure out your anticipated EA donation amount for this year. (Take into account your financial circumstances; this is a weird year.) Leave that amount alone: don't donate less to EA because you're donating to the new cause.

2. If, from what's left, you feel comfortably able to give to the new cause, then it's your money to spend on whatever you want, including helping the world! You should feel proud of yourself for doing that.

3. If you don't feel comfortably able to do so, then that's really okay; I'm glad you're putting your own mask on first! (Also, this may be a sign that you were overzealous when calculating #1. Remember that it's okay to care about yourself more than others, as long as your altruism doesn't go to waste.)

(I'm taking my own advice, by the way; during the nationwide protests over the killing of George Floyd, I've donated $1000 to Campaign Zero, but I'm not counting it toward my 10% EA donation pledge.)

[EDITED TO ADD: Promoted from the comments: if you want to optimize within this cause area, other reasonable places to donate can be found in this document by Chloe Cockburn (Open Phil).]

Comments (12)

As a matter of pragmatic trade-offs and community health, I broadly agree with this. However, I do also think it's important to point out that you[1] don't have to throw out all your EA principles when making "emotional" donating decisions. If it's necessary for your happiness to donate to cause area X, you can still try to make your donation to X as effective as possible, within your time constraints.

I suspect that the best way to do this is often to think about how narrow the cause area you're drawn to actually is. Would you feel bad if you donated to anything other than exactly X, narrowly defined? This is an important question, since if X is the national cause du jour it's likely to be getting a lot of attention and funding, and even small extensions in X beyond what's in the news every day are likely to open up big opportunities to have more impact. The more you can comfortably extend the remit for your donation, the more impact you're likely to have[2].

This has come up in both of the recent questions on the Forum about racial injustice, and not only in comments by me. If your goal is to tackle racism or discrimination broadly, there's no particular reason to limit your concern to recent high-profile cases in the US. I'd predict that dollars going towards, say, helping largely-forgotten Rohingya refugees would be far more cost-effective than contributing even more money to a cause that's currently all over the global news. Even better would be to find a group that's been the victim of horrific attacks that no one in the West has heard of.

Of course, none of this is to say you have to do that. We're assuming ex hypothesi that this is "discretionary" donating that doesn't count towards your GWWC pledge or whatever, and if the only way for you to not feel guilty is to donate to combating something very specific, like reducing police brutality against racial minorities in the USA, then you should (within this framing) do that. (Though even there, there's a lot of value in thinking about how to do that as effectively as possible, and I'm glad some people have been doing that.)

Overall, for this kind of discretionary/personal-wellbeing donating, I think an algorithm like the following would probably be a good idea:

  1. Consider the cause area you feel like you need to contribute to. Think about a few ways you might extend it (e.g. in space, in time, in mechanism, in species). Would you feel okay with making those extensions? If so, do so, and repeat until your remit is as wide as you can make it without feeling you're betraying the cause (or whatever other feelings are spurring these donations).
  2. Within that remit, think/read/ask about how you could make your donation as effective as possible, within whatever time and emotional limits apply.
  3. Make your donation in accordance with the findings from (2).

    1. In all cases, I'm using "you" in the general sense, not specifically to address orthonormal. ↩︎

    2. Trivially, the value of the highest-impact opportunity will monotonically increase as the breadth of the remit expands; at full generality, you're just back to EA again, but the principle applies to partial extensions as well. ↩︎
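To state footnote 2's monotonicity claim precisely, here is a minimal formalization (the notation is mine, not the commenter's): writing $v(o)$ for the impact of a donation opportunity $o$ and treating a remit as a set of opportunities,

$$R_1 \subseteq R_2 \;\Longrightarrow\; \max_{o \in R_1} v(o) \le \max_{o \in R_2} v(o),$$

since widening the remit from $R_1$ to $R_2$ only adds candidate opportunities, so the value of the best available option can never decrease (though it need not strictly increase).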

I agree that EA thinking within a cause area is important, but the racist police brutality crisis in the USA is the particular motivating cause area I wrote this post about, and the Rohingya don't enter into that.

Given the framing of discretionary donations, how broad you're willing to go with your spending is entirely up to you. Broader means (sometimes much) more impact but less of...whatever hard-to-exactly-define thing it is that motivates people to donate to specific causes rather than for general impact. I imagine different people will set their thresholds for that trade-off in different places. My main point is that it would be good to explicitly consider how one might broaden the remit, not that there is necessarily a right or wrong place to put the boundary.

On the object level, there is a reading of your comment here that I do disagree with quite strongly, but it doesn't seem terribly valuable to me to argue about it here.

I'm new to EA and this was a great reminder. I've had this on-and-off internal conflict about donating to EA vs non-EA cause areas. From a personal finance framing, I have EA donations as one line item and "Random Acts of Kindness" as another line item in my monthly budget (e.g. ranging from paying for a friend's meal to donating to a non-EA cause area such as criminal justice reform).

Side note: What was your decision-making process for choosing to donate to Campaign Zero? I'm trying to assess where my donations would have the most impact. I'm hesitant to donate to them compared to other organizations that Open Phil has vetted through their criminal justice reform grants, as well as their recent letter to interested donors in response to the protests.

One of my friends mentioned it, and it also came up in this post. They look extremely legit.

But I also could have gone with one of Chloe Cockburn's recommendations, had I seen them before I donated.

I'm not sure why I didn't receive that letter/email from Chloe, but I feel like it bears more exposure here, at least as a reply to the other posts asking where to donate.

"I'm taking my own advice, by the way; during the nationwide protests over the killing of George Floyd, I've donated $1000 to Campaign Zero, but I'm not counting it toward my 10% EA donation pledge."

Have you considered Open Phil's suggestions, ASJ and the Bronx Freedom Fund?

I looked but didn't find those recommendations until I'd already donated! Thank you for suggesting them for others.
