
TL;DR - Rationale for conducting a defined outreach effort: a series of articles tailored to the interests or specialties of very specific audiences outside EA, written by sources those audiences trust, to orient readers to effective giving.

I will define "effective giving" in this case as a lighter, more consumer-friendly selection of key ideas from EA, including the disparity in effectiveness between charities, which seems to demand we use reasoning and evidence when choosing where to give; the surprisingly low cost of strong impact; the general concept of the ITN framework; and so on.

This is followed by an acknowledgement that the outreach effort described might be tempered by the desire to publicize EA ideas more slowly in general, or to continue with the status quo.

---<>---

This entry seeks feedback on its potential efficacy, arguments for and against, and ideas to make it stronger. It was originally submitted as a reply to this post by Julian Hazell about contributing to GWWC as a writer or content creator. His and other responses suggested that my entry might gain more helpful feedback as a standalone, so I am reprinting it here, with several addenda.

It is not unlikely that something akin to the following idea is already in the works somewhere in EA and, if so, I am sure the community will not let me remain uninformed. 

The idea below is not half-baked; nor is it completely cooked. I have confidence in it as an approach to outreach, but there may be better ideas for reaching targeted or general audiences more efficiently, such as through wise use of social media or by relying on existing media voices (Vox’s “Future Perfect” comes to mind) that are already—arguably—laying the groundwork for widespread acceptance of EA ideas.

More foundationally, there is legitimate debate over the optimal rate of EA movement growth, insofar as it means expanding the circle of outreach to a more general public. I do not address this in the detail it deserves in this post. My original concern was simply to consider unexplored channels for a crafted and controlled EA message, but this debate is certainly relevant and, some would argue, existential to the movement. It may need to be resolved as an antecedent to any meaningful outreach effort.

However, even as we discuss it, the nature of information, a restive bird in a crepe-paper cage, may render the discussion moot.

---<>---

THINKING LIKE AN ARCHER: TARGETING SPECIFIC "INTEREST" GROUPS

There is much discussion, as EA increasingly meets the world, about how to disseminate information about the movement’s ideas clearly, delicately—after all, you are asking people to examine their values—and in manageable doses. First impressions are oh, so important. 

You can see discussions about this around the forums. A few quick examples include this Forum post from @weeatquince; this podcast with Luke Freeman and Geetanjali Basarko discussing this set of guidelines; this discussion from Helen about framing EA as a question and avoiding “-ist”; portions of this Forum post from Catherine Low et al. about (among many other things) the dangers of a “low-fidelity [first] exposure” with effective giving; and this video providing teacher Danny Lipsitz’s views on risks and solutions around external movement building. 

As marketers know, a specific target audience is easier to reach than a very broad one. You can choose a channel that already targets that audience with a message tailored to reader and context (e.g., a magazine about knitting reaches knitters particularly and quite efficiently). Plus, you might benefit from the medium itself if its ideas are trusted by and shared widely among people in that target population.

Thus, alongside any efforts to write content for broad distribution, one might visualize a discrete project to turn out a series of highly focused introductions to effective giving, targeted toward specific audiences outside the central circles of EA and written by—or at least in the voice of—an "insider" representing the target audience. The target audience might be defined, for example, by profession (photographers, economists, entrepreneurs...) or interests (highly competitive sports, eco-travel, role-playing games...).

These introductions would be seeded to social media and/or pitched to publications and other channels relevant to the respective target.

The example that sparked this idea was an intro to EA written specifically for product managers by Clement Kao, speaking their language and showing parallels between their approaches and EA's that would, one hopes, make Kao's fellow product managers feel 1) well understood by the author, who is "one of them," and 2) positively inclined toward the approaches and attitudes that lie behind effective giving.

RATIONALE

The underlying issue is this: Effective Altruism is going to continue to gain more attention in the coming months and years, especially if it is very successful (or the precise opposite, which I will not contemplate here). Indeed, the idea of accelerating the development of a broad culture of giving—in workplaces, in schools, among the most fertile donor classes, and more generally—is an unarguable good that many EAers are striving to cultivate. Controlling the message now, or at least starting to do so more deliberately, allows reasonable and attractive ideas to proliferate.

Besides continuing to motivate a giving mindset in different groups and potentially increasing the real numbers around effective giving,* I believe outreach now can soften the landing of EA's trickier messages, if and when they reach a broader public stage. These might include longtermism, cause agnosticism, reliance on expected value, what I like to call "overcoming proximity bias"—anything that asks people to break the seal on their moral compass.

And, as mentioned, reaching specific groups in the voice of someone they trust exploits in-group bias in a positive way.

Information wants to be free, so you might as well dress it for success.

WHO AND HOW

This targeted outreach could be addressed to any community: Unitarian Universalists; sci-fi fans; AARP members; esports gamers; you name it. But certainly EA has been looking to establish more momentum in reaching people in the workplace (through programs both focused and more wide-reaching), and there are widely distributed publications within just about any professional community. As an example, consider how many developers' eyeballs meet mass-distribution magazines like CODE or .NET. These publications speak their language, talk about topics dear to them, and make them feel like members of a tight-knit, or at least defined, community.

Such a project could start by targeting the broadest and potentially most EA-aligned audiences—for our Market Testing team to identify, of course—and aim to be published in top specialized media for those groups. While drawing from a central, common set of well-crafted ideas and terminology, each article would address the particular concerns of people in its target group, using their own idiom to do so. Texts would highlight particular ways effective giving fits their worldview and how its tenets can help them improve their work or their lives.

In order to speak in the voices of authentic insiders, we might do well to mine the multitalented ranks of EA for writers with experience or specialties in various areas, as well as to look to donors, pledgers, and others active with EA who could represent different professions or areas of interest.

One additional idea from Julian is to frame such an effort as a series of profiles of people in defined disciplines, e.g., "How this software engineer approaches charity." It is possible this framing might be a good alternative to reach people in disciplines similar to that of the person profiled, and might even relate to a wider audience, if that is the objective of the messaging.

Can anyone see a downside risk here? I haven't so far—with some caveats discussed under the Broader Question below—and it seems to me that, with careful attention to leading readers toward further engagement with effective giving and, beyond that, EA, such an effort might also cultivate a growing crop of EA groups in the workplace or among other targeted communities.

BROADER QUESTION: HOW RAPIDLY SHOULD THE MOVEMENT GROW?

So, I am not the most qualified person to bring this issue up, and it deserves its own post—maybe even its own forum—but those who commented on my initial post are right that the vision above does need to acknowledge the question of how quickly the EA movement should be encouraged to expand its reach to the general population (however you define it). There is a real argument that EA as a movement is better off growing deliberately rather than rapidly. At the risk of oversimplifying this topic enormously, my observations would be:

  1. Trying to increase PR and build a public face too quickly makes it difficult to control the message. One of my main interests since I jumped into the EA swimming pool has been to understand the best channels, messages, and levels of effort for ensuring that the messages EA sends out are clear, convincing, and motivational—leading both to understanding of and active support for EA ideas. There is always the danger of an audience receiving "low fidelity" or possibly off-putting messages. Something as simple as one widely disseminated message containing misinformation or an unfortunate framing can do a lot of hard-to-repair damage. A problem could just as easily arise through an EA organization misjudging a particular audience's tolerance for reflecting on their current moral choices.

    The above outreach idea does address this somewhat, insofar as it is a measured and targeted release of information.
     
  2. Perhaps an organization near the center of EA could take on this effort and help define the particular ideas about effective giving that are most valuable to share. Coordination by one particular EA entity might help steer outreach toward a consistent, vetted message using consistent EA language—as opposed to what is happening currently: ad hoc (though not necessarily ineffective) outreach through various organizations in the community, each to its own audience and based on its specialization. However, this could be far too confining an idea; it could be argued that letting each group reach its own audience is exactly the best way to get the message out most flexibly.

    Note: I would be curious to hear what others think about how ideas around effective giving are being disseminated in the current moment. Is there something I should understand differently? 
     
  3. Whatever the intentions of the movement, the need to expand the culture of giving and the nature of information mean we can expect more attention to EA in the coming months and years, and it will not be possible for any one group to have full control over this spread, especially given the burgeoning number of EA-aligned, outreach-oriented individuals, groups, and interventions emerging at this exciting moment. My question is whether it is better to make deliberate moves now to shape this growth than to debate it until it's too late to get the bird back into the cage.

 

* And possibly engagement with EA itself, e.g., through joining a relevant EA workplace group.

 

Postscript: I am as guilty as anyone of sometimes conflating “EA” with “effective giving”—I hope I have not done so here, as this can make the debate muddier. The above idea is about planting seeds for the latter and not necessarily explicitly promoting the former. If this deserves its own debate, I have brought it on myself!

 

(Thanks to David Reinstein and Devon Fritz for feedback on various drafts, to Sunnie Huang for extra encouragement, and to Peter Slattery and Julian Hazell for the thoughtful suggestion to make the former threaded afterthought into a "real" boy.)


 

Comments

A lot of interesting points here. “Like to like” can be a great approach. In addition to the shared persona, this technique can also help inform distribution. For example, LinkedIn comes to mind as a place for leveraging network effects. That said, Facebook Groups, Subreddits, Discord Channels, and other niche communities could produce higher engagement rates.

Still, while a shared profession might prequalify a reader, offer the creator special access, and/or hold an audience’s attention longer, crafting meaningful content remains a key difficulty. You mention, "articles would differ in addressing the particular concerns of people in that target group," which is a solid goal. However, targeted content can often be reduced to baseline commonalities. So, a potential downside risk with professional targeting is writing toward a job title rather than a person.

Using the example, "How this software engineer approaches charity” -- noting that this is likely a placeholder title -- I’d start developing the content by asking:

  1. Who is the piece for?
  2. What does the piece hope to accomplish?

At first glance, the title indicates that the article would be written for software engineers. However, it could be argued that this is more the intention of the author and that the audience is really people who might be interested in this particular software engineer’s charitable musings. So, unless the software engineer is a thought leader or influencer in their space, this content might be too niche to achieve a sizable impact. Conversely, the article might intrigue someone generally interested in giving and charity, but the specificity of the software engineer makes it less tailored for them.

When designing both titles and content, I find it helpful to shift perspectives from writer to reader. Here are some questions I use:

  • Why is this piece of content interesting to the reader?
  • How does it speak to their personal goals or pain points?
  • Does the piece offer value and/or provide solutions?
  • Is the message engaging…helpful…meaningful?

Using these questions, one might arrive at titles like:

  • How I Made Software Engineering a Fulfilling Career (Audience: Engineers looking for meaning through their career)
  • Giving Like a Coder: How I Hacked My Charitable Contributions (Audience: Engineers looking to optimize every area of their life)
  • How You Can Maximize Impact as a Software Engineer (Audience: Engineers looking to do more through their career)
  • How Software Engineers Can Save Lives (Audience: Engineers interested in doing important work)
  • Top 10 Software Engineers Who Are Giving Back (Audience: Engineers aspiring to be like their respected contemporaries)

While I employed some hooks with these titles, I'm shaping them through the lens of a software engineer's presumed hopes, interests, issues, etc.—not just the shared persona. You can pull this out further and see how each title could then fulfill the promise of its premise and, ultimately, align with the second question: "What does the piece hope to accomplish?"

All of that said, the content that might result from a framework like this could have its own downside risks:

  1. Disingenuous writing: Tailoring too much for an audience and/or applying marketing best practices (hooks, keywords, SEO, etc.) has the potential to compromise core messaging.
  2. Low fidelity: Due to its often "snackable" nature, viral/shareable content can lack important nuance.
  3. Unrepresentative associations: A successful article could be shared by the unengaged for purposes such as virtue signaling, risking the reputation of the EA community and/or the appropriation of EA-related indicators, e.g., #effectivealtruism.

You mention some of these risks in your post, so perhaps additional guidelines should be considered when pursuing external targeted movement building.

All of that said, I think professional outreach + meaningful content has strong potential to reach and activate people.

Great points! I appreciate your concern about the original ideas being aimed too much at the job title and not enough at the individual, and your thoughts on downside risks are also well taken. I like where you take these ideas from a marketing standpoint, as well.

I have been encouraged by recent developments like the appointment of a head of communications at CEA, and hope ideas like those in my entry above—and improvements upon them, much as you have offered—will be considered increasingly in the coming months.

Thanks, Adam! And thank you for starting a conversation around this approach (I don't think I mentioned that in my original comment). I've actually applied to some of the new comms positions at CEA and would love the opportunity to further explore these ideas and others...
