One criticism EA gets all the time is that we're coldhearted, Borg-like, cost-benefit-obsessed utility maximizers. Personally, I like that about EA, but I see huge value in being, and being perceived as, warm and fuzzy and hospitable.

Over at LessWrong, jenn just wrote an insightful post about her top four lessons from 5,000 hours working at a non-EA charity: the importance of long-term reputation, cooperation, slack, and hospitality.

Here, I propose a modification to the EA norm of donating 10% of income annually to an EA-aligned/effective charity: donate 8% of income to EA-aligned/effective charities, and 2% to charities that are local, feel-good, or ones we're passionate about or identify with on a personal or cultural level.

As an example, if you make $80,000/year, you might consider donating $6,400 to GiveWell and $1,600 to the local food bank. If you work as an employee of an EA-aligned organization (i.e., 40 hours of direct work per week), you might consider doing 4-5 hours/week of volunteering to help the homeless.
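For anyone who wants to plug in their own numbers, here's a minimal sketch of the split (mine, not part of GWWC or any pledge tooling; the function name and default percentages are just illustrative):

```python
# Illustrative only: a tiny calculator for the 2%/8% split described above.
# The function name and defaults are my own choices, not any official tooling.

def split_donation(income: float, effective_pct: float = 0.08, local_pct: float = 0.02) -> dict:
    """Return suggested annual donation amounts under an effective/local split."""
    return {
        "effective": income * effective_pct,  # e.g., GiveWell-recommended charities
        "local": income * local_pct,          # e.g., the local food bank
    }

print(split_donation(80_000))  # {'effective': 6400.0, 'local': 1600.0}
```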

Here are some reasons why I think this is a good idea:

  • The average American donates about 2% of their income to charity. Under this standard, the 8% we'd donate to EA causes comes on top of the amount most people already give. That makes EA less likely to be perceived as clawing donors away from other charities in a zero-sum competition; instead, it encourages people to donate more - growing the pie.
  • It makes EA friendlier and more cooperative with value systems that are different from our own.
  • It boosts our reputation with people in our social and cultural network.
  • It gives participants in EA an outlet to get their need for warm-and-fuzzy feelings met.
  • It projects slack - instead of being associated with a stringent "no room for compromise, the stakes are too great" attitude, EA can convey the "there is so much good we can do in the world" message that we actually mean, in a way that resonates with the average person who isn't an EA.
  • It makes it possible to tell a combination of stories about the work we do in the world. Taking action locally for the good of our own community is often easier to see and feel and talk about at the dinner table than giving anonymous-feeling donations to global health institutions or X-risk research groups.

If you prefer, you could simply add 2%-of-income on top of the 10% Giving What We Can pledge, or do whatever combination makes sense for your situation. In fact, I think it's probably best if we treat 2%/8% as a rough anchoring benchmark, while encouraging people to pick the blend that makes sense to them. Encouraging more individual choice and less adherence to a potentially rigid-seeming rule, while still having an anchoring point so the commitment means something, seems good for EA.

If we adopt this standard, I suggest we find additional ways to frame it besides the coldhearted-sounding rules-and-percentages framing I've used here. Rather than "we advocate giving 2% locally and 8% to effective charities, mainly for perception reasons," I would suggest explaining the rule with a qualitative, friendly-sounding statement like "we try to mix our donations and efforts to help our local communities while also working on the world's biggest problems."

Comments (9)



I disagree. In particular:

  1. Roughly, I think the community isn't able (isn't strong enough?) to both think much about how it's perceived and think well or in-a-high-integrity-manner about how to do good, and I'd favor thinking well and in a high-integrity manner.
  2. I'd guess donating for warm fuzzies is generally an ineffective way to gain influence/status.

(Of course you should be friendly and not waste weirdness points.)

Roughly, I think the community isn't able (isn't strong enough?) to both think much about how it's perceived and think well or in-a-high-integrity-manner about how to do good, and I'd favor thinking well and in a high-integrity manner.

Just want to flag that I completely disagree with this, and that moreover I find it bewildering that in EA and rationalism this seemingly passes almost as a truism.

I think we can absolutely think both about perceptions and charitable effectiveness - their tradeoffs, how to get the most of one without sacrificing too much of the other, how they might go together - and both my post here and jenn's post that I link to are examples of that.

People can think about competing values and priorities, and they do it all the time. I want to have fun, but I also want to make ends meet. I want to do good, but I also want to enjoy my life. I want to be liked, but I also want to be authentic. These are normal dilemmas that just about everybody deals with all the time. The people I meet in EA are mostly smart, sophisticated people, and I think that's more than sufficient to engage in this kind of tradeoffs-and-strategy-based reasoning.

I'd guess donating for warm fuzzies is generally an ineffective way to gain influence/status.

As a simple and costless way to start operationalizing this disagreement: I claim that if I asked my mom (not an EA, pretty opposed to the vibe) whether she'd like EA better with a 2%/8% standard, she'd say yes, and that she'd think warmly of a movement that encouraged this style of donating. I'm only sort of being facetious here - I think having accurate models of how to build the movement's reputation is important, and EAs need a way to gather evidence and update.

Just flagging that I disagree with the language that EAs "should" donate 10% (in the sense that it's morally obligatory). I think whether or not someone donates is a complicated choice, and a norm of donating 10% a) sets a higher bar of demandingness than I think makes sense for inclusion in EA, and b) isn't even necessarily the good-maximizing action, depending on personal circumstances (e.g., some direct workers may be better off spending on themselves and exerting more effort on their work).

Sorry to be pedantic, but I think it's really easy for these sorts of norms to accidentally emerge based on casual language and for people to start feeling unwelcome.

I think donating at least 10% of one's income per year should be a norm for any person who identifies as part of the EA community, unless doing so would cause them significant financial hardship.

The whole point of EA is to actually do altruism. If someone's not doing direct work, has been going to EA meetups for a year, identifies as an EA, and doesn't at least have stated plans to donate, what makes them EA?

Even EAs who are doing direct work, I would argue, should still donate 10% unless that would cause them significant financial hardship.

What happened to the lesson of the drowning child?

My post is related to the Giving What We Can pledge and the broad idea of focusing on "utilons, not fuzzies." From the wording of your comment I'm unclear on whether you're unfamiliar with these ideas or are just taking this as an opportunity to say you disagree with them. If you don't think standards like the GWWC pledge are good for EA, then what do you think of the 2%/8% norm I propose here as a better alternative - even if you'd still consider it far worse than having no pledge at all?

I don't think taking the GWWC pledge should be a prerequisite to considering yourself an EA (and it isn't one now). If your post had said "GWWC members should..." or "EAs who donate 10% should..." instead of "EAs should...", I wouldn't have disagreed with the wording.

That makes sense. I don't think there are any official prerequisites to being an EA, but there are community norms. I think the GWWC pledge (or a direct-work equivalent) is a common enough practical or aspirational norm that I'm comfortable conflating EA and GWWC-adjacent EA for the purposes of this post, but I acknowledge you'd prefer to keep them separate, for a sensible reason.

Thanks for the post! I think there's a more effectiveness-oriented version of your recommendation that would still accomplish its goals while staying more faithful to the message that effectiveness matters.

2% could go to feel-good effective charities that you can talk to your mom about, like GiveWell-recommended charities, The Humane League, or Mercy For Animals. The other 8% could go to the EA cause areas less palatable to most people, like AI sentience, shrimp welfare, AI safety, wild animal welfare, etc.

At least for me, I would just feel like I was shirking on my moral duty if I was donating a significant amount to an obviously less cost-effective charity. I would feel like I was putting my warm fuzzies over helping others.
