
Crossposting from the Effective Altruism community on Reddit. Thought it may be helpful to have a discussion here as well for those who don't frequent r/EffectiveAltruism.

For those who are thinking about how they can leverage their donations towards this cause area, where should we be donating to?

Bail funds are getting the most media attention right now, with the Minnesota Freedom Fund receiving $20M. Given that, I'm not sure there is still a funding need for bail funds compared to other, more neglected organizations in the same cause area. I'm also not sure how to compare effectiveness or tractability between organizations.

Note: I understand that there are more effective cause areas such as malaria or x-risk. However, many of my non-EA peers want to donate in this specific cause area and I feel like I could help them choose the most effective charity. I, too, would like to donate to this cause area, but still maintain my usual donations to EA charities.



4 Answers

It's really disappointing to see this post repeatedly down-voted without any responses. When people approach the EA community and ask about the most effective way to deal with an issue they care about, surely there's a better way to respond than "I think there are more pressing causes so I'm not even going to dignify your polite request with a polite response".

In answer to the question: there's not been a huge amount of EA research on this, mostly because, for several reasons, it tends to be more cost-effective to focus on the world's poorest countries if you intend to help people today. However:

I wanted to share this document, which Chloe Cockburn, who runs criminal justice reform strategy at Open Phil, posted in response to donors asking for advice.

https://docs.google.com/document/d/1GGgEZ8ebFd6--C4wLeJV9XrX1OPPg40NL6F1QDo53Bs/edit

Thanks for sharing Campaign Zero! Reading about their organization, donating to them feels analogous to donating to an EA longtermist organization. It's great that they are data-informed and that their strategic initiatives are backed by research. Yet, as with any longtermist organization (EA or not), I have a hard time donating money without knowing the amount of impact it could have. This is why I just donate to GiveWell and other global health charities right now.

I didn't downvote it, but some commenters might have done because an almost identical question was asked a few days ago.

As a person still new to EA, it was disheartening to see the downvotes. You can see in my post history that I rely on this community to be educated and engaged on EA, including how I can apply it to my life.

After I saw the downvotes, I came away with the impression that this community is exclusive. I'm glad I was made aware that there was a duplicate question, which I apologize for missing. Still, I'm now a little apprehensive about posting anything that doesn't seem to fit the bucket of EA cause areas.

The purpose of my post wasn't to give more attention to a non-EA cause. I want to apply the principles and concepts of EA so that I and others can make an informed, confident decision on how our dollars can make the greatest amount of impact in this specific cause area.

If this community is only receptive and knowledgeable of EA cause areas, such that discussions around non-EA causes won't provide meaningful value, then please tell me so that I can engage in a different community.

Just to point out that, at the time of writing, this question is now at 41 karma, which is pretty good. So whoever was downvoting it at the beginning appears to have been outvoted. :-)

As I said in my other comment, I think this is a good question, well-phrased and thoughtful, and I'd be happy to see more like it on the Forum. Thank you for contributing here.

I think that applying EA principles and concepts to different areas is really valuable, even if they’re areas that EA hasn’t focused on a lot up to this point. I’m glad you asked this question!

Will Bradshaw:
I was going to come and link to that other question, but if people are downvoting based on that without saying anything, that seems pretty bad/dumb to me. Easy enough to just leave a comment saying "this seems to be a duplicate, see this other question", rather than silently (and unhelpfully) downvoting. (I also think this question is better than the other one: the question is better defined, and the asker has done some initial homework and shows awareness of cause prioritisation concerns, which makes it easier to pitch the answer.)
alex lawsen:
Upvoted, I didn't see that one, hopefully that's the case!

I like Campaign Zero's data-driven prioritization of solutions, but it's not clear to me how they'd use marginal funds. I suspect this gap explains its absence from CEAs and Open Phil recommendation lists.

warrenjordan:
CEA has a recommendation list for criminal justice reform? I can't seem to find it on their website.

Campaign Zero is getting a *lot* of criticism, e.g. https://twitter.com/PowerDignity/status/1268735286646726656

They do "sound good" because they're paying attention to "data", but personally I wouldn't feel comfortable supporting them unless you had a very good reason to think that the criticism is not legitimate.

The tweet you linked to says that these 8 principles are already being used across the country and haven't worked.

AFAIK that isn't true - they aren't being used uniformly.

Other than that, the tweet doesn't have specific criticism - it just says the principles "won't work". Have you seen anything more specific?

alex lawsen:
The main substantive objection seems to be that it's not demanding enough, especially compared to, for example, defunding all police departments. There are also lots of 'this state had some of these policies and still killed someone' type objections. I don't know enough about social science to predict the effect of different people making demands of different strength. It's a little sad to see Sam Sinyangwe called an apologist/shill for the police, though, when, regardless of your prediction of the impact, he's pretty clearly someone who has dedicated his career to reducing rates of police violence.

Edit: there's a reply from @SamSwey here, followed by a ton of personal attacks on him and what looks like one legitimate question about methodology. https://twitter.com/samswey/status/1269298269055856641
Kirsten:
Yes, I think that's a much better criticism - Campaign Zero works within the current policing framework, and we could potentially do better by rethinking public safety at a more fundamental level.

My position on this topic remains the same as in the other similar question that came up recently: namely, I suspect that the same "developing-world multiplier" applies to tackling racism (and discrimination more broadly) as to most other near-term cause areas, and one would probably be better off looking for opportunities there than focusing on very high-profile cases in the US.

But, if one is really fixed on spending in the developed world, I agree that OpenPhil's criminal-justice-reform grantees are a good place to start.

I think there are two tacks to take here, depending on whether your goal is reducing racial disparities or addressing discrimination itself.

1. Because of the heavy focus on poverty in developing countries, donations to normal EA charities also serve to reduce international racial disparities. The development gap between Africa and the rest of the world is strongly tied to colonial policies designed to enrich European countries at the expense of majority-black countries.

2. If you want to focus on racial discrimination, I'd suggest charities aiming to provide help to refugees of genocides. In this case, I'd suggest donating to GiveDirectly's refugee assistance programs, which aim to provide those fleeing racial, ethnic, and religious genocides with enough money to survive.

Has anyone seen a study on how much of the income gap is due to colonialism?

Dale:
Colonialism and Modern Income - Islands as Natural Experiments
Closed Limelike Curves:
Many studies have been done on this in econometrics; very few of them are good.

Good answer. Helping refugees of ethnic cleansing is a good way to go here, I think.

I think we're at such an early stage, with limited access to data, that my intuition is to run some experiments and monitor them closely, plus look for 'meta' opportunities that multiply impact. Giving to ActBlue itself, so it can scale up, is a bet that it will facilitate a lot more than the tens of millions of dollars it has raised already, and acknowledges that better opportunities may arise in the near future (but will still be funded through that platform).

In terms of personal, counterfactual donations this year, in addition to my 'normal' EA donations and to help facilitate a conversation about this issue, I have:

  • donated $50 to ActBlue, to support their operations and technical services: https://secure.actblue.com/donate/supportactblue
  • donated $50 to the 11 orgs suggested in ActBlue's recent post "Support orgs fighting against racism and police brutality" (https://secure.actblue.com/donate/ab_mn):
    • Black Lives Matter Global Network
    • Reclaim the Block
    • National Bail Out
    • Black Visions Collective
    • NAACP Legal Defense and Educational Fund
    • The National Police Accountability Project
    • Color of Change Education Fund
    • Unicorn Riot
    • Campaign Zero
    • Advancement Project
    • The Marshall Project

Edit: My intuition is that US criminal justice dysfunction is an undervalued global risk, as it contributes to political instability, but I would very much welcome more careful thought into why that may or may not be the case :)

3 Comments

First off, I think this is absolutely the right place for asking these sorts of questions. I’m really glad to have been able to read the discussion of the topic so far. So thanks for introducing it. That being said, I’m unconvinced that EAs should be donating any money to charities focussing on systemic racial injustice right now.

For me, the problem with the question is that it’s putting the cart before the horse. The thing that brings me to the EA community isn’t a particular interest in schistosomiasis, factory farmed fish or nuclear war. Rather, it’s a desire to donate my money to charities that have the highest rates of return per dollar to reduce suffering. The EA community holds charities to extremely high standards of transparency and data collection. It’s the mindset that has led to some of the unusual cause areas that we support. I feel that to start with the cause area and work backwards is akin to doing an experiment to prove a result, rather than to find one.

The burden of proof needs to be on new charities to show us that they are more effective than the ones we currently endorse. If a charity can't meet that burden of proof, or is yet to meet it, I think we ought not to donate to it, and should continue donating to those charities that have.

I don’t think that you or anyone else should donate any money to a charity based on felt compassion or intuition, and I hope I don’t seem like too much of a robot for saying so, but this is me taking part in the discussion and I'd love to know what others think.

I agree with everything you said regarding EAs focusing on the cause areas that are going to do the most good, and about those organizations carrying the burden of proof so that we can reduce the most suffering per dollar.

That being said, I’m unconvinced that EAs should be donating any money to charities focussing on systemic racial injustice right now.

I'm not trying to convince others that this is a top priority cause area. It's definitely not, and I wouldn't encourage people to donate to it if their singular goal is to do the most good in the world.

I have more than one goal here, however. My goal in this post is to find out how I can do the most good within this cause area. I don't want people to stop donating to x-risk, global health, or farm animal welfare; I'm still donating to GiveWell charities, and they still make up the majority of my donations.

I feel that to start with the cause area and work backwards is akin to doing an experiment to prove a result, rather than to find one.

As GiveWell points out, that's difficult to accomplish, even with their own top charities. I agree that we should continue donating to places that have shown the evidence. Yet, we can't expect the same GiveWell-like evidence in other cause areas, whether it's an EA cause or not. That's why they started Open Philanthropy.

I don’t think that you or anyone else should donate any money to a charity based on felt compassion or intuition

I'm not the type of person to be a perfect utilitarian robot, nor do I want to be. If I were, then I wouldn't have donated money to my best friend's father's funeral, which they couldn't afford.

Peter Singer says in his famous TED talk that EA combines both the head and the heart.

  • My heart is in global health and poverty, as someone whose parents and family grew up poor in a third-world country and suffer from chronic disease because of it. My head tells me to donate to GiveWell charities.
  • My heart is also in criminal justice reform, as a POC who's had family members and friends incarcerated and who has faced similar injustices. My head wants to find the most effective donation I can make.

Thanks for posting such a considered reply. I think I understand where you're coming from much better now.

I read the Julia Wise article you linked, and thought it made a lot of sense. I don't see any point in feeling bad when we spend our time or money on things that aren't optimised to reduce suffering.

I'm certainly no perfect utilitarian robot myself, I just think that I should be. But I don't feel bad that I'm not, and I don't think I should feel bad.

Reading it again, I think my original reply was too prescriptive; I was probably trying to answer a question that you weren't asking. At the same time, I still believe that you "shouldn't" donate to charities that aren't the most effective ones, and that if you were to change your mind and put that money towards e.g. the Against Malaria Foundation, it would be the "right" thing to do, or at least a "better" one.

So yeah, sorry for seeming preachy. I 100% don't think you should ever feel bad for supporting a charitable cause; there are enough things to worry about without adding that one.
