
I'm curious whether anyone has put thought or time into improving or overhauling existing charities from an EA perspective. Has there been much discussion of the idea of making existing charities more effective?

There are lots of organizations out there that contract with nonprofits to make their marketing or fundraising more effective, but has anyone contemplated creating a consulting organization that would work with nonprofits within an EA framework? This seems to be not just a potentially large opportunity to effect change but a really big empty space that no one is working in.


There are of course a few potential pitfalls. It is hard to instigate change anywhere, but particularly in organizations that believe they are already doing good work, or that have been around for a long time. This bias against change would be a hard one to overcome, but I think EAs have gotten particularly good at asking very pointed questions about how to do great charity work. This insight could be a huge resource for groups that truly want to improve.


I've thought about this myself quite frequently and would be stoked to hear thoughts from others.

Comments



I think this is a great area to experiment with, so I'd be keen for people to just go and try it on a small scale and see what works.

One problem to bear in mind is that the best EA content is about cause selection and intervention selection, and charities are usually unwilling to change on these dimensions, whereas there's already a lot of advice for people who just want to implement an intervention more effectively.

I agree. The bulk of the variance in 'charity effectiveness' looks to be along intervention lines. If charities are fairly hard to budge on these, then efforts to shift the entire distribution of charities to the right look less likely to work than focusing on the extreme right tail in the first place.

I agree that trying to branch out to, or add an EA cause to, a current charity is unlikely to succeed. My experience is that you are right: there are lots of services and advice out there for charities that want to improve implementation or strategy (mainly focused on cultivating donors).

I would be interested to know whether there are many resources out there aimed at getting organizations to collect more data and to assess their success rates more scientifically. It is also my understanding that the advice out there on more effective implementation is usually based around just getting better numbers, not whether those numbers actually create change in a given cause area.

What would you suggest as a good place to start for small-scale experimentation? I think you are right: just doing some of this is the best way to gauge tractability.

I'm interested in helping organizations collect more data, using independent surveys of households to measure bed net usage, as well as surveys around deworming programs. One organization that conducts independent surveys is PMA2020. They currently have family planning and WASH surveys, but may add additional modules in the future.

In the world of animal protection, we have Faunalytics (formerly Humane Research Council). Full disclosure: I'm the founder and executive director. We've been around for 15 years, since before "EA" became a common term, but that's essentially what we do. We're a nonprofit research provider, and we encourage animal charities to collect and utilize data. We identify and summarize third-party research that is relevant to animal advocacy, conduct fee-for-service projects for animal groups, and carry out independent studies to further animal advocacy. We are a backbone organization: we do not directly advocate for animals ourselves, but we strive to make animal charities more effective. I'd be happy to talk about our experience sometime, or you can learn more at https://faunalytics.org

I am very intrigued by the potential upside of this idea. As I see it, one can change charity culture by changing consumer demand (generally what GiveWell does), which will eventually lead to a change in product. Alternatively, one can change charity culture by changing the product directly, on the assumption that many consumers care more about the brand than the product.

Would the service be free to the nonprofits? Would it help nonprofits conduct studies to assess their impact?

Anecdata: I have a friend who works at a big-name nonprofit who has been trying to find exactly this service.

Ben Todd made this comment here detailing organizations he knows about (sort-of) working in that vein. Try forwarding that list to your friend!

I’ve recently chatted with Tara of CEA about this, and my recent post on raising the effectiveness waterline goes in a similar direction. Such programs will be limited to charities that operate in areas that allow for high effectiveness, and the charity has to be willing to take part, of course.

My first, charitable interpretation of the situation was that if a charity has the potential to be highly effective given its cause area and is willing to optimize its effectiveness but still fails to be on par with a top charity in the same area, it must be lacking something that is hard to obtain, namely specialized knowledge it could get from the top charity. Cooperation between the charities would furthermore serve to thwart wasteful competition between them.

Tara’s experience with nonprofit counseling with Toyota, however, has been that what such charities lacked was not so much this specialized knowledge as general skills in accounting, controlling, and I don’t fully remember what else she mentioned. If the most salient problems of these charities are in such general areas, then a general EA consultancy firm would make sense.

The services of such a consultancy firm may be highly subsidized by donations, but I think the charity will be more likely to implement advice that it has paid for, and charging something should also make it easier for the consultancy firm to pay its bills. I haven’t done any calculations, but it feels to me like it will be very hard to keep this sort of operation afloat financially.

An alternative might be to find an existing, established consultancy firm with knowledge in the area of nonprofits that is ready to advise charities as to how they can maximize impact rather than just fundraising success. An EA funder, a charity, and this company could then agree on prices and a cofunding plan. This will usually involve lots of money, though, since this sort of optimization will be most cost-effective with charities that move a lot of non-EA donations, and those charities will be large and complex.

For your information, if effective altruism were to spearhead such consulting projects, they probably wouldn't be initiated by GiveWell (see my comment here). The Centre for Effective Altruism, in particular Effective Altruism Ventures, might be the organization best poised to initiate such work.

When I met Holden Karnofsky (the executive director of GiveWell) at the 2014 Effective Altruism Summit, I asked him if GiveWell ever intended to consult for or revamp charities to make them more effective, rather than just evaluating and recommending already effective charities. He said no. His reason is that he believes it's substantially more difficult to create an effective charity than to evaluate existing ones. I'm inclined to agree with him, as the risks and rewards of creating a charity are spread across the whole non-profit world, rather than jeopardizing the potential value of GiveWell's marginal resources. Note I just mean I now think it makes sense for GiveWell not to go into consulting; I still think others should try to create effective charities.

Note that Mr. Karnofsky's statement reflects the position of GiveWell's leadership, but this doesn't mean other effective altruists working for, aligned with, or near GiveWell couldn't get involved in such a project. GiveWell already values independent thinking among its employees, exemplified by the annual blog posts about where each of its research employees intends to donate and why.

I'm inclined to agree with Holden for a number of reasons, first and foremost that this isn't really what GiveWell does. They are very good at what they do, which is evaluating existing charities; while I see the tie-in with knowing how a good charity is run, that is a far cry from making organizational changes. Which brings me to the other reason I agree with him: doing this is hard. Like really, really, substantially hard.

However, I think 'hard' and 'not worth doing' are very different things. I also agree that CEA or EA Ventures would be more appropriate venues to incubate a testable idea around this. After speaking with Kerry at CEA about this, I found he agrees that while this is very exciting and something that would be great, no one yet seems to have a good answer for how to go about it. I think the next step is asking lots and lots of people how they would go about doing this, and what the very first change would, should, or could be.

I am really interested to hear whether any of this was implemented in a concrete project. We at Effective Altruism Netherlands receive an increasing number of requests from very skilled people (e.g. from finance, data science, the legal professions, and change management) who want to contribute to existing charities. We are currently talking to effective charities to see if they need skilled volunteers, but I have strong doubts they can absorb all the interest from skilled volunteers.

Letting them work at existing, less effective charities to make those charities more effective could be worthwhile for the reasons mentioned in this post. We can provide volunteers with some formal training and standardized methods to ensure high quality. I've looked into Benjamin Todd's post, and we could try to collaborate with one of the organisations mentioned there.

Does anyone here have any shareable experiences on this?

Except for the purpose of obtaining more information later on, the general agreement within the EA crowd is that one should invest the vast majority of one's eggs in one basket: the best basket.

I just want to point out that the exact same logic applies here: if someone wants to make a charity more effective, choosing Oxfam or the Red Cross would be a terrible idea, but trying to make AMF, FHI, SCI, etc. more effective would be a great idea.

Effective altruism is a winner-takes-all kind of thing, where the goal is to make the best better, not to make everyone else as good as the best.

This is true with respect to where a rational, EA-inclined person chooses to donate, but I think you're taking it too far here. Even in the best case scenario, there will be MANY people who donate for non-EA reasons. Many of those people will donate to existing, well-known charities such as the Red Cross. If we can make the Red Cross more effective, I can't see how that would not be a net good.

At the end of the day, the metric will always be the same. If you can make the entire Red Cross more effective, it may be that each unit of your effort was worth it. But if you anticipate more and more donations going to EA-recommended charities, then making those even more effective may be more powerful.

See also DavidNash's comment.

Of course. But as I understand it, the hypothesis here is that given (i) the amount of money that will invariably go to sub-optimal charities, and (ii) the likely room for substantial improvements in sub-optimal charities (see DavidNash's comment), one (arguably) might get more bang for one's buck trying to fix sub-optimal charities. I think it's a plausible hypothesis.

I'm doubtful that one can make GiveWell charities substantially more effective. Those charities are already using the EA lens. It's the ones that aren't using the EA lens for which big improvements might be made at low cost.

EDIT: I suppose I'm assuming that's the OP's hypothesis. I could be wrong.

Yes this is indeed my hypothesis; thank you for stating it so plainly. I think you've summed up my initial idea quite well.

My assumption is that trying to improve a very effective charity is potentially a lot of work and research, while trying to improve an ineffective but well-funded charity, even a little, could require less intense research and have a very large payoff. This is particularly true given that there are very few highly effective charities but LOTS of semi-effective or ineffective ones, meaning there is a larger opportunity. Even if only 10% of non-EA charities agree to improve their programs by 1%, I believe the potential for an overall decrease in suffering is greater (a rough sketch of this arithmetic is below).
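To make that intuition concrete, here is a minimal back-of-envelope sketch in Python. Every number in it (the number of charities, their budgets, and the 100x effectiveness multiplier for top charities) is a hypothetical placeholder chosen for illustration, not an estimate from this thread or any source:

```python
# Back-of-envelope comparison of the two strategies discussed above.
# All figures are hypothetical placeholders, not real estimates.

# Scenario A: nudge a slice of the large pool of ordinary charities.
num_ordinary_charities = 1_000    # hypothetical pool of non-EA charities
avg_ordinary_budget = 50_000_000  # hypothetical average annual budget (USD)
fraction_persuaded = 0.10         # "10% of non-EA charities agree..."
improvement = 0.01                # "...to improve their programs by 1%"

# Gain measured in "baseline-effectiveness dollars" of better spending/year.
gain_a = (num_ordinary_charities * avg_ordinary_budget
          * fraction_persuaded * improvement)

# Scenario B: squeeze the same 1% out of a few already-top charities.
num_top_charities = 5             # hypothetical
avg_top_budget = 100_000_000      # hypothetical annual budget (USD)
effectiveness_multiplier = 100    # hypothetical: 100x more good per dollar

gain_b = (num_top_charities * avg_top_budget
          * improvement * effectiveness_multiplier)

print(f"Scenario A (many ordinary charities): ${gain_a:,.0f}-equivalent/year")
print(f"Scenario B (few top charities):       ${gain_b:,.0f}-equivalent/year")
```

With these made-up numbers Scenario B still comes out ahead, which mainly shows how sensitive the conclusion is to the effectiveness multiplier and to how tractable each 1% improvement actually is; the sketch frames the disagreement rather than settling it.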

There is also the added benefit of signalling. Having an organization that is working to improve effectiveness (despite the funding problems [see Telofy's comment]) shows organizations that donors and community members really care about measuring and improving outcomes. It plants the idea that effectiveness and an EA framework are valuable and worth considering, even if charities don't use the service initially.

My thought here is that this is another way (possibly a very fast one) to spread EA values through the charity world. Creating a shift in nonprofit culture toward valuing similar things seems very beneficial.

The question I would ask, then, is: if you want to influence larger organizations, why not governmental organizations, which have the largest quantities of resources that can be redirected by one individual? If you get a technical position in a public-policy-related organization, you may be responsible for substantial changes in the allocation of resources.

I think that governmental orgs would be a great way to do this!

I do worry that doing this as an individual has its drawbacks. I think getting to this sort of position requires embedding yourself in a dysfunctional culture, and I worry about getting sucked into the dysfunction, or succumbing to the multiple pressures and constraints within such an organization, whereas an independent organization could remain more objective and focused on effectiveness.

If you can make an organisation that deals with billions of dollars 1% more effective, I think that could have a similar outcome to making an effective charity that works with millions of dollars 1% more effective (see the break-even sketch after this comment).

There may also be more scope for change if the organisation isn't that effective to begin with.

Also, getting higher up in an organisation will lead to greater opportunities to change it from within, rather than always staying outside because it isn't as efficient.
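As a rough way to see when the "billions vs. millions" comparison above breaks even, here is a small sketch. The budget figures are hypothetical placeholders, not numbers from the thread:

```python
# Break-even sketch for the comment above: when does a 1% improvement at a
# huge, averagely effective organisation match a 1% improvement at a small,
# highly effective charity? All figures are hypothetical placeholders.

big_org_budget = 5_000_000_000   # e.g. a multi-billion-dollar organisation
top_charity_budget = 50_000_000  # e.g. a much smaller effective charity
improvement = 0.01               # the same 1% effectiveness gain for both

# If the big org does baseline (1x) good per dollar and the top charity does
# m times more good per dollar, the gains (in baseline-equivalent dollars) are:
#   gain_big = big_org_budget * improvement * 1
#   gain_top = top_charity_budget * improvement * m
# Setting gain_big == gain_top and solving for m:
break_even_multiplier = big_org_budget / top_charity_budget
print(f"Break-even multiplier: {break_even_multiplier:.0f}x")
```

With these placeholder budgets, the two gains match only if the top charity is about 100x more effective per dollar; below that the big organisation wins, above it the small effective charity does. Which side of that line reality falls on is exactly the open question in this thread.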
