
First, I want to say that this is not a post about how you should give. This is a post about spreading EA values in the non-EA community. It's about how even if you can't spread every EA value to every person, everyone can do good better. Without further ado...

Cost-effective giving relies on evidence

If you're here, you know that a lot of people give to charities without doing research or considering impact data. Maybe they give based on "My friend runs it," or "My sister was affected by this issue." Maybe before you were an EA, this was also how you gave.

But no longer! Now you give to the charities with the Highest Cost-Effectiveness, not just in your community but In The World. And you know their cost-effectiveness because the charity collected lots of Data in the form of Numbers.

If you're here, I doubt you have a large portion of your budget directed toward unquantified grassroots efforts in your community. But there's a good chance you've talked to someone outside of EA who does. (If you haven't, your community-building might be stuck in a bubble.) I bet you've wondered how you can possibly spread EA ideas to someone with that focus. I'm here to tell you that you can.

At my last non-profit, I gave the communications coordinator a copy of Doing Good Better. "This is cool stuff," she said, "but we work with a lot of people in Indigenous communities who don't do quantitative monitoring and evaluation. They want to give within their communities. But does it make sense to talk to them about effective giving at all?"

Yes.

The best evidence about grassroots charities might not be quantitative

No matter where they are, small communities dominated by grassroots initiatives face a couple of problems when it comes to quantitative data.

  1. Collecting quantitative data on your impact takes time, human resources, an understanding of methodology, and money. Small, poor, or rural communities don't always have these resources.
  2. Under the right conditions, you might have good reasons to trust expert testimony more than quantitative data.

What reasons are those? Expert testimony is a form of evidence with its own pros and cons, just like quantitative data. Under ideal conditions, I won't tell you it's the better form of evidence...

[Image: the hierarchy of evidence pyramid (Physiopedia)]

But what if you're trying to compare among a group of initiatives with a much longer and richer history of informal observation than of written/recorded data? What if you're comparing among a group of initiatives with a lot of intangible effects that couldn't be included in a model unless you had a lot of resources to spend on modelling?

Then, I think the expert testimony of a long-time community leader who has been watching the initiatives in your community for 70 years, and has talked to the people who have been affected by every initiative that is being run in that community, is a form of evidence you should use. It's evidence that's a hell of a lot more valuable than "My friend runs it," or "My sister was affected by this issue." Community leaders can tell you what intervention is helping the most people better than you can surmise on your own.

Community leaders in small, old, tight-knit communities (whether it's Indigenous communities in Canada like my colleague interfaces with, or rural communities in the US, or villages in Uganda like Anthony Kalulu supports) have not only their own lifetime of experience to draw from, but also the experience passed down from generations of community leaders before them, perhaps for centuries or millennia. If you ask them what works, they could give you a better, more reliable answer than a quantitative model -- if the only quantitative model they have access to has but a few months' worth of data collected under questionable conditions.

So if you're talking to someone who doesn't think they can consider effective giving when they support their community because of a lack of quantitative data, tell them this: non-quantitative evidence is evidence. It can tell you more about the cost-effectiveness within a group of grassroots organizations than not using evidence at all. And that can help you make effective giving decisions to the most effective grassroots initiatives in your community!

Is this EA enough?

Look, not everyone is an EA. Not everyone has bought into every EA principle. And I'm not going to force them to. If someone can adopt the idea of caring about cost-effectiveness, even if they only apply this to their community and don't adopt the idea of giving to the very most in-need people in the world, that's an idea worth spreading.

It's an idea that, as EAs, we should spread. When we do community building, we are not just trying to get people to identify as EAs. We are trying to get them to adopt one or more principles that help them do good better with their money and time.

I am not asking you to start giving to grassroots organizations in place of organizations with lots of monitoring and evaluation behind them.

But when you're talking to someone who wants to focus on supporting their community, don't pass judgment on them for not immediately adopting a global lens and every other EA value. And don't exclude them from the effective giving conversation just because their community doesn't have data-driven projects. The wisdom of their community leaders can be a powerful kind of evidence to draw on as they work to increase the effectiveness of their giving.


Comments (9)



Strong upvote. I love this and agree with the central thesis. In general I agree with taking whatever clear opportunities arise to increase the good done in the world, even if it isn't our primary thing.

Unfortunately, where quantitative models are available, even very poor ones, I find they seem (with great uncertainty) more convincing than community leader opinion.

"Community leaders in small, old, tight-knit communities (whether it's Indigenous communities in Canada like my colleague interfaces with, or rural communities in the US, or villages in Uganda like Anthony Kalulu supports) have not only their own lifetime of experience to draw from, but also the experience passed down from generations of community leaders before them, perhaps for centuries or millennia. If you ask them what works, they could give you a better, more reliable answer than a quantitative model -- if the only quantitative model they have access to has but a few months' worth of data collected under questionable conditions."

My experience in Uganda (unfortunately) doesn't corroborate this. I agree it's possible that they "could" give you a better answer than a quantitative model, but I don't think they usually do. It hurts me a little to confess that BOL Fermi-ish quantitative models (where possible), done by experienced people with expertise in the field, seem in my limited experience usually better than the thoughts of an experienced community leader.

But your general point still stands: influencing non-EA people, usually with non-quantitative data, to give to and focus on better local causes could have great impact, often with little effort. There's also the chance of slowly swinging people towards more mainstream EA causes with this lighter-touch approach.

(Can't say enough how much I appreciate it when people take my words of uncertainty like "could" literally!) Indeed, in most situations I can think of, I'd prefer a quantitative model. Especially by an experienced expert! Would that it were always available. Thanks for your comment!

Upvoted. This is what longtermism is already doing (relying heavily on non-quantitative, non-objective evidence) and the approach can make sense for more standard local causes as well.

Strong upvoted. I think it's good to encourage non-EAs to give more effectively, and I think it's good to broaden what we think of as "evidence" and consider its pros and cons.

I work with a community in my city that gives primarily locally (leaving aside my judgment on that), and I find that many people think that they're not giving based on any idea of effectiveness: e.g. they'll say they're giving based on community need, or trust in a relationship they have, or values-alignment. But usually there's an implicit sense of "what is effective" underneath that, and it's helpful to push people to make that explicit: if you're giving because you trust the relationship you have with this organization, how good of a signal is that about the organization's work? Is it a better signal than other evidence you have access to?

(Aside: Quite often with small grassroots organizations, I think a strong relationship with the right people honestly is one of the best available signals! In particular, I find that the organizations that community leaders consider important/tractable/neglected - though not using those words - are not always the ones that gain a lot of media attention, external funding, etc.)

Thanks for writing, and great meeting you back in August.

This consideration is already "priced in" to GiveDirectly's worldview; the whole "reforming paternalistic versions of charity by transferring cash / they know what they need better than we do" framing is well-established and remains held in high regard to this day.

GiveDirectly is a great option for people who put a high value on beneficiary autonomy and are open to giving anywhere in the world! This post is more about including people in the effective giving conversation who want to give back to their own community -- maybe because they already live in one of the communities in the world with extreme poverty, or maybe because they're not all the way EA and that's just how they prefer to give.

An interesting perspective that I had not thought about in this way before. It links very well with more common views about highly valuing the input (and needs) of your target communities.

Blake Hannagan and I are currently piloting a MEL training and coaching program for animal and vegan advocacy charities. Part of it is to investigate to what level organizations can and should collect quantitative data, and when they'd do better to rely on more qualitative information, recognizing that MEL in the animal space might be a bit different from MEL in global health or poverty.

Thanks for writing this!


Thanks for writing this, Spencer! 
