
Summary

In this post, I ask: Is there an existing framework or communication norm within EA for discussing EA ideas with people we know outside of the community? If not, is it worth investing time in developing one?

The crux of this idea is that I believe that accurately communicating EA values to one other person could potentially double your impact (presuming the person you convince has the same potential for impact as you). And for most people in your life, you may be the best person they could hear about EA from (since you know them well enough to give a customized explanation, and they’re more inclined to listen to your advice).
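As a rough sketch of the arithmetic (my own framing, not taken from any EA source): let I be your expected lifetime impact and p the chance that a conversation actually persuades someone with similar potential. Then

    expected total impact = I + p × I = (1 + p) × I

so p = 1 gives the doubling above, and even a modest p adds a meaningful fraction of your own impact per conversation.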

I've read advice in this area along the lines of "different approaches work for different people", which I totally agree with. I've also read that just providing information doesn't necessarily move people to change - it's the Aha! experience that counts, which leads to insight and change. I've found this Communicating about EA guide useful, and I've also found this article on practical debiasing very useful. But the difference between those guides and what I'm looking for is that I want to understand whether there's a way to figure out which approach will work best for a particular person.

This framework might look something like:

  1. Recognise an opportunity to bring up EA
  2. Understand this person's values and current beliefs
  3. Establish a space where this person feels comfortable/safe changing their mind
    a. Help to dissociate a person's reasoning from their identities.
    b. Ask them to talk through their beliefs or their understanding.
    c. Depoliticise topics and establish some "skin in the game" to help people be open to changing their mind.
  4. Try to understand what the Aha! moment might be for this person and help them reach it

Introduction

Books are generally more reliable than we are at communicating EA ideas. When the opportunity to discuss EA comes up in conversation, I've tended to point people to introductory articles and books, since these can introduce ideas and build on them in a logical way. Conversations, by contrast, can quickly derail into tangents, ideas can be taken out of context, and we're wrestling with all sorts of cognitive biases. However, over time I've realised that getting people to read books and take them seriously is hard, so having conversations about EA is important.

So far, whenever the opportunity to discuss EA ideas has come up in conversation with friends or family, I've always sort of floundered. It's easy for the conversation to derail as people raise various objections, which makes it hard to introduce and build on the ideas the way a book can.

However, I feel like people might engage with these ideas if they're presented in the context of what that particular person cares about. For example, someone might ask me during dinner, "Why did you go vegan?". Veganism has several arguments in its favour - animal autonomy, climate change, the knock-on effects on human suffering, etc. This would be a good opportunity to first explore their current understanding of why people go vegan (in my experience, people have assumed veganism is for personal health reasons rather than reducing animal suffering or climate change). From there I could try to find out what they care about, then discuss the most powerful ideas (for them) first.

I want to propose some initial ideas for how this framework might look. These ideas need lots of work, but my hope is that they will eventually become a concrete, clear framework that can be used.

1. Recognise an opportunity to bring up EA

From personal experience I’ve found a few examples where a conversation with a friend/family/colleague might lead towards discussing EA:

  • We’ve been discussing social justice generally (or reacting to news)
  • Someone collecting for charity has approached us/we've seen a TV advert for a charity
  • I’ve been asked why I'm vegan during a meal

There may be many more examples you could name and recognise in the moment, so I won't try to write an exhaustive list. Essentially, I'd just like to emphasise that I'm interested in a framework for discussing EA in the context of a friendly "pub chat" - this is not a framework for a recruitment drive; it's about being best prepared to discuss EA should it come up organically.

2. Understand this person's values and current beliefs

I believe that after recognising an opportunity in conversation to discuss EA, the next step should probably be establishing this person's moral beliefs and what they want to see in the world (i.e. whether they would find meaning in contributing to global health, animal welfare, longtermism, etc.). What does the person value now, and how is this understanding tied to their sense of identity?

This raises the questions:

  • How do you map out what someone already believes?
  • How do you make it clear to them that you understand their views?
  • How do you convey that you think you can help them do more good in a way that aligns with their current values?

(To quote The Good Place) Of course, the exact opposite may be true! 

Should you instead start with a standard “elevator pitch” for EA, and then follow up on whichever parts seem to catch the listener’s interest? There are pros to this strategy as well — I’ll go into more detail in future sections.

3. Establish a space where this person feels comfortable/safe changing their mind

In the chapter on Reason in Enlightenment Now, Steven Pinker discusses a few ideas that I like and think might help build a framework. I've tried to succinctly state each idea lifted from the book, then present a takeaway that could be applied to a framework. 

(A lot of the text here in italics is lifted directly from the book, though some words are removed for brevity; there's much more in the book that's useful if people are interested.)

a. Identity-protective cognition, motivated reasoning and cognitive dissonance reduction

When people are first confronted with information that contradicts a staked-out position, they become even more committed to their original position. Feeling their identity threatened, belief holders double down and muster ammunition to fend off the challenge. As the counter-evidence builds up, the dissonance can mount until it becomes too much to bear and the opinion topples over (the affective tipping point). This tipping point depends on the balance between how badly the opinion holder's reputation would be damaged by relinquishing the opinion and whether the counter-evidence is so blatant and public as to be common knowledge.

My takeaway here is: Help to dissociate a person's reasoning from their identities.

For example, an identity of "meat is manly" might be holding someone back from being open to discussions about veganism. It should be possible to create a safe discussion where you can deconstruct this idea with someone. Additionally, hopefully the fact that you're discussing this with a person you have an existing relationship with will help to create a safe space for this person to explore and dismantle a harmful identity.

b. Talking things through fully

People understand concepts only when they are forced to think them through, to discuss them with others, and to use them to solve problems. People don't spontaneously transfer what they learned from one concrete example to others in the same abstract category. Students in a critical thinking course who are taught to discuss the American Revolution from both the British and American perspectives will not make the leap to consider how the Germans viewed World War I. With these lessons about lessons under their belt, psychologists have recently devised debiasing programs that fortify logical and critical thinking curricula. They encourage students to spot, name, and correct fallacies across a wide range of contexts. Practices of successful forecasters have been compiled into a set of guidelines for good judgment (for example, start with the base rate; seek out evidence and don't overreact or underreact to it; don't try to explain away your own errors but instead use them as a source of calibration). These and other programs are provably effective: students' newfound wisdom outlasts the training session and transfers to new subjects.

...the mere requirement to explicate an opinion can shake people out of their overconfidence - the Illusion of Explanatory Depth. When people with die-hard opinions on Obamacare or NAFTA are challenged to explain what those policies actually are, they soon realise that they don't actually know what they are talking about and become more open to counter-arguments.

My takeaway here is: "If it isn't said out loud, I don't have to deal with it." - Ask the person to talk through their beliefs or their understanding. Ask open-ended questions and help to think them through completely with them. 

Using veganism again as an example, I believe most arguments against veganism would fall apart here if someone had to fully articulate and justify their view. The line "people don't spontaneously transfer what they learned from one concrete example to others in the same abstract category" really sticks out to me. This is the "Make the Link" argument people make about veganism - transferring someone's empathy for dogs, cats, and other animals to the animals being raised to be killed and eaten (this article on conflicted omnivores is very interesting and deals with similar issues). But the goal here is not to catch someone out; the goal is to give this person a non-judgemental space to fully explore their understanding - something they may not have previously had the opportunity to do.

This can be a bit of a tightrope walk - people know when their beliefs are being challenged, and when they're being pushed toward a conclusion they don't like. There's a difference between being asked to explain something factual (e.g. NAFTA) and a matter of personal philosophy/ethics - people aren't under the same explanatory pressure when something is "just what I believe". It's important that these conversations happen in a context that isn't just "safe", but is comfortable and pleasantly engaging - two friends talking, rather than an authority figure/expert and a layperson talking.

Establishing early in a conversation that you're both discussing these ideas with a scout mindset would help decouple ideas from personal values. It's important to make it clear that you're not trying to shoot them down and that you don't think you're better than them. The goal is to establish truth, and to make it clear that you have some really interesting ideas to share. This includes acknowledging that they might have an understanding you're not aware of - we're searching for that too!

c. Skin in the game

People are less biased when they have skin in the game and have to live with the consequences of their opinions. "Contrary to common bleak assessments of human reasoning abilities, people are quite capable of reasoning in an unbiased manner, at least when they are evaluating arguments rather than producing them, and when they are after the truth rather than trying to win a debate." When issues are not politicized, people can be altogether rational. Experiments have shown that when people hear about a new policy, such as welfare reform, they will like it if it is proposed by their own party and hate it if it is proposed by the other - all the while convinced that they are reacting to it on its objective merits.

The factual state of affairs should be unbundled from remedies that are freighted with symbolic political meaning. People are less polarized in their opinion about the very existence of anthropogenic climate change when they are reminded of the possibility that it might be mitigated by geoengineering than when they are told that it calls for stringent controls on emissions.

My takeaway here is: Depoliticising topics and establishing some "skin in the game" can help people be open to changing their mind.

I'm especially drawn to the idea of establishing skin in the game and exploring how to get people to live with the consequences of their opinions. It makes sense that if people have to live with the consequences of a decision, they will spend more time critically reviewing and understanding it.

But I'm unsure how this might look in practice, or how you would establish it. Theoretically, you could make someone confront the consequences of meat-eating by having them watch footage of factory farms, but this doesn't seem particularly compassionate or useful. "Skin in the game" might be a hard thing to establish in casual conversation, but thought experiments like The Drowning Child have in the past challenged me to realise what I owe to other people. So thought experiments might be a useful tool for getting people to ask themselves whether they have obligations to act.

d. Techniques

So broadly, I think asking someone what they believe, their reasons for believing it, and what would cause them to change their mind would be a great place to start. From there, you can tailor the discussion to their values and how they make decisions. Here are some other good techniques suggested in Enlightenment Now:

  • Ask people to switch sides in a debate and argue the opposite position
  • Have people try to reach a consensus in a small discussion group, forcing them to defend their opinions to their group mates (with the truth usually winning)
  • Adversarial collaboration - work together to get to the bottom of an issue, setting up empirical tests that people agree beforehand will settle it.

4. The Aha! moment

I think the goal ultimately is to set the stage as much as possible for an Aha! moment - to almost be the mediator between someone and the ideas of EA, and to create a safe space for this person to explore those ideas. I think most people change significantly when they have an Aha! moment rather than when presented with lots of information - like alcoholics resolving to get sober when they hit some sort of rock bottom or other significant realisation, or people resolving to lose weight when there is some sort of paradigm shift in their relationship to food or their body.

My paradigm shift for veganism (sorry to keep bringing veganism up, it's just useful for examples!) came from the book Sapiens by Yuval Noah Harari. In it, he discusses animals' capacity for inner lives and the suffering we subject them to on factory farms. It was something I hadn't considered before (and had never been asked to explain out loud, so had never been challenged on). It was all explained in a very non-judgemental way, and it connected the dots for me in a way that being exposed to veganism in other contexts never had. Additionally, it kind of came out of left field - it was a history book! I wasn't seeking out information to challenge my perception of veganism, so I'm very thankful that my view was challenged, and in a very kind, thoughtful way. Other people may have gone vegan from watching a documentary etc. - that was their Aha! moment.

(What I'm trying to say here is that I don't want this framework to be seen as manipulative. I very much view it as a kindness that someone pointed out how my actions were affecting the world around me in a way I hadn't realised, and gently guided me into how to change them. Many arguments made by vegans treat the conclusion as if it were obvious, which I can find very off-putting. I often ask myself "if I read this when I wasn't a vegan, would it lead me to change my mind?" - the answer is often no. Conversations can too often be a kind of gotcha! in an attempt to win, rather than to find the truth.)

A core goal of this framework should be to find the Aha! moment with this person - with the distinct advantage that this is someone you know and have a relationship with, rather than the blanket approach of introducing someone through an article or book.

This is my first longform post, and it's way longer than I was expecting! I'm excited to read any and all contributions! :)

Many thanks to Aaron Gertler for feedback on drafts of this post.

Comments



I think for me, it might be best to use a straightforward “join us!” pitch.

Most people I know have considered the idea that there are better and worse ways to help the world. But they don't extend that thinking to realize the implication that there might be a set of best ways. Nor do they have the long-tail-of-value concept. They also don't have any emotional impulse pushing them to explore "what's the best way to help the world?" Nor do they have any links to the community besides me.

My experience is that most of my friends and family have very limited bandwidth for considering or acting on altruistic ideas. And when they do, they have even less bandwidth for thinking critically about effectiveness with an open mind.

So I’m thinking it might be good to try a conversation that goes something like this:

“I’m in the effective altruism movement!”

“What’s that?”

“We research to figure out the most effective ways to make the world a better place. You should join, it would be awesome to have you!”

“Hm, that sounds cool. But how do you figure something like that out?”

“Oh it’s super interesting. Takes quite a bit of thought of course, but it’s also fun. I can show you if you want?”

“Sure....”

“Ok, so what’s a way you want to help the world, maybe by volunteering or donating or something?”

“Um, I donated to a food bank for Chanukah.”

“Great! So here’s how we’d think about that at EA. Basically we want to start by figuring out the principle behind why you picked a food bank. Why’d you donate there?”

“I heard the food banks were running low because of COVID, plus I like to cook.”

“Cool, that makes sense. So partly it fits with your interests, and partly it’s about making sure people have enough to eat?”

“Yeah, pretty much.”

“Gotcha. Ok. So in EA, we focus on the ‘help other people’ part especially, so let’s set aside the fact that you like to cook and focus on the getting food to people part, is that ok?”

“Yeah.”

“So this might seem like kind of a silly question, but why is it important for people to get enough to eat?”

“So they don’t starve, or go hungry.”

“Right. I mean those things are obviously bad, and we want to think about what exactly is bad about starving, or going hungry?”

“Well, you could die. Or just be really miserable. It makes kids not be able to think straight in school. Plus you might not be able to work and you could end up homeless.”

“Right. So misery, death, and just struggling to be able to keep your life together?”

“Yeah.”

“Ok. So this is where EA gets into the picture. So first off, EAs think that everybody’s lives matter equally, like a kid in Africa’s life matters just as much as a kid in America. Do you agree with that?”

“Definitely!”

“Right, I figured! And where do you think people are struggling more with food insecurity, here in our city or in a place like, say, Yemen?”

“Uh, definitely Yemen.”

“And where do you think the money you donated would go further toward buying food, here or in a place like Yemen?”

“Probably also Yemen? Except they have a war going on I think, so maybe it’s hard to get food there?”

“You’re already thinking like an EA! You can already kind of see where this leads, right? We’re trying to think of where to make your donation go farthest, plus make sure it actually accomplishes something. Like, maybe the food pantry in our city is low on food, but maybe there are places where people have nothing to eat at all.”

“Right, right... but the thing is, don’t we have a responsibility to help people here? And plus, how would you, like, figure out where to donate to to help people in Yemen? How do you know the charity actually works?”

“Well basically, I’d start by saying this is a really complicated subject, and I’d be happy to talk it out for as long as you’re interested. It’s one of my favorite topics. But this is why I think it’s really important to join EA. We basically have a whole community of people and nonprofits who are super focused on all this stuff. We think through those thorny questions like whether it’s best to focus on helping people in your own community. Also doing, like, tens of thousands of hours on charities to see which ones really work, which basically nobody was doing before we started the movement. So the point is, if you’re in EA, you don’t have to figure it all out for yourself. Want to join?”

I know it seems silly to frame it as a club that you join, but also... why not?

I love this! 

I think for me a real barrier is the fact that I barrel ahead with the ideas too quickly... like I want to jump straight in at the deep-end with "we should think of all lives as equally important and we should be trying to consider the ways our donation can go farthest" - that idea on its own maybe isn't controversial, but probably hasn't engaged my conversational partner in the same way as in your example.  

One of the main motivations for me writing this post was to have a mental checklist when discussing EA so that I don't barrel ahead without bringing the other person along for the ride :) 

So for me, I think it's useful to have a framework in my head so I can ensure that these ideas build upon each other:

1. Do they want to do some good in the world?

2. Do they agree that all lives are equally important?

3. Do they agree that there are some situations where your donation/time will make far more of a difference than others?

4. Do they agree that it is possible/worthwhile to figure out which interventions are the most effective?

5. This stuff is really engaging, and there is already a whole movement that you can join so you don't have to do all this on your own!

That's a simplified framework (I just tried to pick out the key beats in your conversation example), but it definitely helps me to have one :)

Hey Aaron -- really want to sit down and read this thoroughly when I have a moment. Someone sent me the link to your post, otherwise, I haven't been on EA Forum for a minute.

That said, I did a talk on just this topic back at the EA Global "Unconference" over the summer. Would love to maybe be in touch about this idea...the link to my talk is here:

I loved watching this talk, thanks for sharing!

It would be great to talk further about this idea (though based on your talk, it would seem you have already given way more thought to it than I have).

I enjoyed this article and found it useful, thanks for writing it! I think it could be interesting to think about how these ideas might apply to situations like running a local EA group, where it's not just discussing EA when it comes up organically.

We are actually going to discuss this article at my local university group next week, so it would be interesting to consider how we might apply the ideas to the group - thanks for the suggestion! :)

...the next step should probably be establishing this person's moral beliefs and what they want to see in the world (i.e. whether they would find meaning in contributing to global health, animal welfare, longtermism, etc.). What does the person value now, and how is this understanding tied to their sense of identity?


This actually reminds me of a technique that's used in political campaigning.

Back in my pre-EA days, my husband and I were involved with a local political party. People making campaign calls etc. were trained to find something the person they were speaking to valued, and then tie that to one of the party policies. E.g. "oh, you care about child poverty? Our MPs are passionate about that too! We're working on this policy/proposal etc."

The idea was to frame voting for the party as a natural extension of the person's own values: as something they might want to do, rather than as something we were trying to persuade them to do.  It can come across a bit scungy/manipulative if the tone isn't just right, but it seemed to be pretty effective overall.

I don't know how common the approach is outside of that particular political party, but it seems likely to be a more widespread campaign technique. There's definitely some precedent for the approach, in any case. 


Ahh that's really interesting to know!

But yeah, I definitely would feel a bit manipulative if I didn't feel like I knew the person properly - I want to present ideas that I think they'd really engage with and find interesting, rather than giving them the impression I'm trying to force a viewpoint on them.

Thanks for this! Just gonna comment so that I can find this whenever I need to borrow some arguments made in this piece, if that's alright.

Absolutely, go for it :) 
