Pretty much everyone starts off drinking milk, and while adult consumption varies culturally, genetically, and ethically, if I put milk on my morning bran flakes that's a neutral choice around here. If my breakfast came up in talking with a friend they might think it was dull, but they wouldn't be surprised or confused. Some parts of effective altruism are like this: giving money to very poor people is, to nearly everyone, intuitively and obviously good.
Most of EA, however, is more like cheese. If you've never heard of cheese it seems strange and maybe not so good, but at least in the US most people are familiar with the basic idea. Distributing bednets or deworming medication, improving the treatment of animals, developing vaccines, or trying to reduce the risk of nuclear war are mild cheeses like Cheddar or Mozzarella: people will typically think "that seems good" if you tell them about it, and if they don't it usually doesn't take long to explain.
In general, work that anyone can see is really valuable is more likely to already be getting the attention it needs. This means that people who are looking hard for what most needs doing are often going to be exploring approaches that are not obvious, or that initially look bizarre. Pursuit of impact pushes us toward stranger and stronger cheeses, and while humanity may discover yet more non-obvious cheeses over time I'm going to refer to the far end of this continuum as the casu marzu end, after the cheese that gets its distinctive flavor and texture from live maggots that jump as you eat it. EAs who end up out in this direction aren't going to be able to explain to their neighbor why they do what they do, and explaining to an interested family member probably takes several widely spaced conversations.
Sometimes people talk casually as if the weird stuff is longtermist and the mainstream stuff isn't, but if you look at the range of EA endeavors the main focus areas of EA all have people working along this continuum. A typical person likely easily sees the altruistic case for "help governments create realistic plans for pandemics" but not "build refuges to protect a small number of people from global catastrophes"; "give chickens better conditions" but not "determine the relative moral weight of insects at different ages"; "plan for the economic effects of ChatGPT's successors" but not "formalize what it means for an agent to have a goal"; "organize pledge drives" but not "give money to promising high schoolers". And I'd rate these all at most bleu.
I've seen this dynamic compared to motte-and-bailey or bait-and-switch. The idea is that someone presents EA to newcomers and only talks about the mild cheeses, when that's not actually where most of the community—and especially the most highly-engaged members—think we should be focusing. People might then think they were on board with EA when they actually would find a lot of what goes on under its banner deeply weird. I think this is partly fair: when introducing EA, even to a general audience, I think it's important not to give the impression that these easy-to-present things are the totality of EA. In addition to being misleading, that also risks people who would be a good fit for the stranger bits bouncing off. On the other hand, EA isn't the kind of movement where "on board" makes much sense. We're not about signing onto a large body of thought, or expecting everyone within the movement to think everyone else's work is valuable. We're united by a common question, how we can each do the most good, along with culture and intellectual tools for approaching this question.
I think it's really good that EA is open to the very weird, the mainstream, and everything in between. One of the more valuable things that EA provides, however, is intellectual company for people who are, despite often working in very different fields, pushing down this fundamentally lonely path away from what everyone can see is good.
To continue the metaphor, suppose EA is the dairy industry, and realizes markedly higher profits (impact) the further up the dairy weirdness ladder a consumer goes (e.g., it makes 5x as much from a cheddar consumer as from a milk consumer, 5x as much from a bleu consumer as from a cheddar one, etc.).
What does the extended metaphor suggest about how to market to maximize profit/impact? Obviously you want to make milk, cheddar, bleu, and casu marzu customers feel like welcome members of the dairy empire. Given that the potential market size substantially diminishes as you step up the weird-dairy ladder, and the cost of customer acquisition increases, how much of your marketing resources should be spent on promoting each type of dairy?
My guess is that the EA ecosystem under-emphasizes acquiring new cheddar consumers, but I could easily be wrong. My theory is that the potential market for cheddar is still very large, and that most conversions to bleu will come from the cheddar crowd anyway.
I'm not sure the metaphor holds up.
I imagine there are many more people interested in AI safety, biosecurity, or nuclear risks who would be put off if they had to start by learning about the GWWC pledge.
Kelsey Piper writing about Vox analytics - 'Global poverty stuff doesn’t do very well. This is something that makes me very sad, and it makes my mother very sad. She reads all my articles, and she’s like, “The global poverty stuff is the best, you should do more of that.” I also would love to do more of that. I think it’s a really important topic, but it doesn’t get nearly as many views or as much attention as both the existential risk stuff and sort of the animal stuff and the weird big ideas sort of content.'
Fair point (although Vox's readers may not be representative of all or even most audiences, and pageviews may be only loosely correlated with willingness to commit; I find many things interesting to read and even write about that I wouldn't devote my career or serious money to).
Maybe it's not true of all potential cause areas, but I think most of them have a range of options from cheddar to maggot cheese. So cheddar does not necessarily imply global health, and maggots don't necessarily imply x-risk.
I think you're maybe treating the "clearly good" / mild end of this spectrum as being specific to global poverty? But I think there's a lot of x-risk work that's towards this end too: reducing the risk of nuclear war, reducing airborne pathogen spread, etc.
But with Jason's extension of the metaphor, I also think maybe Kelsey's audience on Vox wants to be challenged a bit, and the clearly-good stuff is less interesting. But that doesn't mean hitting them with the weirdest ideas anyone within EA is playing with is going to work well! You still need to match your offering to your audience, and balance wanting to introduce stranger things against not overwhelming them with something too different.
I think every cause can be presented normally or weirdly depending on how you do it; it's just that in that example Kelsey was discussing global development. I also think a lot of people in EA assume that more people are interested in global development than actually are, because they're just looking outside their bubble into a slightly larger bubble.
I would agree that it's usually best to introduce people to ideas closer to their interests (in any cause area) before moving on to related ones. Although sometimes they'll be more interested in the 'weird' ideas before getting involved in EA, and EA helps them approach those ideas practically.
On FB someone replied:
My response was that work you thought was positive on the basis of complicated reasoning is unusually likely to turn out to be negative for reasons you missed, and this is a real risk of trying to go so far from well-explored territory. So I'll endorse this aspect of the metaphor.
[EDIT: also see Counterproductive Altruism: The Other Heavy Tail]
I still think it works for some causes. I've met people who thought it wasn't just bad, but evil, to do wild animal welfare work. I'm not sure why; maybe their introduction to the idea was about predator euthanasia or something.
Yeah, my personal intro to EA is generally pretty aversive: "this is weird and you might not like it", rather than leading with bednets. The people who push through that are, I think, happy to be in a weird movement, but I wouldn't want people to be blindsided.
I found it surprising that you described cash transfers as "milk" and bednets, vaccines and avoiding nuclear war as "cheese".
In my experience, it's more likely to be the latter category which is, "to nearly everyone, intuitively and obviously good."
By contrast, I've heard lots of people confidently and knowingly say that cash transfers don't work (because they don't get to the root of the problem, because the poor will waste the money on alcohol, etc.).
I interpret those criticisms of cash transfers as people saying they think you can do more good other ways, not that poor people having more money is neutral or harmful?
For the ones I described as mild cheeses, the idea is there's a little background knowledge required before you can see that the work is valuable, but people tend to already have that background.
One way to get at this is to look at what you see in world religions around charity: there's a lot about giving to the poor and not much about more complex ways of trying to make the world better.
Actually I think the popular concept is that cash transfers are neutral or harmful. That's one reason why there was no charity like GiveDirectly until ~15 years ago, and arguably GiveDirectly would not exist today without funding from EA sources. The earliest news coverage I could find about GiveDirectly is not until 2011 (Time/NPR/Boston.com) and two of those pieces described it as "radical".
Thanks for digging up the early news coverage!
I interpret the "radical" claim in Time and NPR as "give directly proposes a massive change in how we address poverty". What about it makes you think it's intended in a "you would think that this proposal is actually harmful, but it's not" sort of way?
Unfortunately all three articles no longer have a comment section, and I couldn't load the comments through the Internet Archive. But my memory of the non-EA discussion at the time was that it was all "there's got to be something better you can do" and not "this is useless or counterproductive"?
In my experience, an extremely common lay objection to GiveDirectly is something along the lines of, "Won't recipients waste the money on alcohol/drugs/tobacco/luxuries/etc.?", with a second-tier objection of, "Won't cash transfers cause inflation/conflict/dependence/etc.?".
I think both these questions have been pretty well addressed by the research, but those who are not aware of (or do not trust) that research are, I think, pretty likely to believe that cash transfers are neutral or harmful.
The second objection does sound like saying it is harmful, thanks!
The first one is more mixed. My interpretation has always been that people were saying they didn't think it was very useful, not that it was harmful: I doubt the person making the objection thinks that all of the money will go to buy luxuries, and if some of the money goes to buy valuable things while some goes to buy luxuries that are essentially morally neutral, then the effect is less positive than if it all went to valuable things. But maybe they think that providing luxuries is actually harmful, and not just neutral? (Which, conditional on thinking recipients spend lots of the money on drugs and alcohol, it could easily be, since it's funding people to buy addictive drugs they won't be able to continue consuming.)