Four years ago, GWWC announced that 5,000 people had signed the pledge to donate 10% of their annual income to effective charities. I am surprised that number has not doubled since then.

For EAs who have not yet taken the pledge, I am curious why.

Separately but relatedly: if you have taken the pledge but are not using the new diamond symbol to promote it, I am curious why. I have been surprised to see that people who have advocated for giving 10 percent to effective charities are not using the 🔸, but maybe it's just because they have not yet gotten around to it.


I honestly don't like seeing it on the forum. It has a virtue-signal-y sort of feel to me, I guess because I see its potential for impact as someone who doesn't know about the pledge saying "oh, what's that orange thing all about?" and then reading up on it when they wouldn't have otherwise, and I doubt there are many people on the forum who fit that bill.

I think that's kind of the whole point of Giving What We Can? It's trying to change social norms in a more generous direction, which requires public signaling from people who support (and follow) the proposed 10% norm. (Impact doesn't just come from sharing abstract info - as if anyone were strictly unaware that it would be possible for them to donate 10% - but also from social conformity, wanting to be more like people we like and respect, etc.) I think the diamond icon is great for this purpose.

Sometimes people use "virtue signal" in a derogatory sense, meaning a kind of insincere signal of pseudo-virtue: prioritizing looking good over doing good. But it doesn't have to be like that. Some norms are genuinely good -- I think this is one -- and signaling your support for those norms is a genuinely good thing!

I initially felt similarly to Tristan, but then Richard's comment also was persuasive to me, so now I am thinking about it more.

I am fairly confident of these claims:

  1. It is not wrong to use the orange diamond symbol on EAF.
  2. It is less valuable to use the orange diamond symbol on EAF than on LinkedIn etc.

It seems to me that there is huge value in something (10% pledging, veganism, effective career choices, etc.) going from being so rare that many people do not know anyone in that category, to being common enough that most people (in some relevant reference class) have encountered the ideas and the people. However, if e.g. 90% of EAF users pledged and used the diamond, I think this would be socially hard for some of the remaining 10%. This is partly the point, re social norms. But I also think there are legitimate reasons not to want to pledge (yet), so the norm I would love is one where everyone knows about the pledge, knows lots of people who have taken it, and has seriously considered it, but without much more pressure than that.

I suppose another issue for me is that I am sad humans are so socially conformist, such that the fraction of our friends using a symbol will greatly affect our decision. But this basically just is the case, so maybe I need to get over my qualms about using some forms of the dark arts for good.

And as to @Michael_2358 🔸 's original question, @Lizka has written about not taking the pledge here and discussed it at EAG London recently.

Yeah, Oscar captured this pretty well. You say that Giving What We Can is trying to change social norms, but how well is that really achieved on the EA Forum, where maybe 70% or more of users are already familiar with the pledge?

I support the aspect of creating a community around it, but I also just don't really get that feeling from seeing emojis in other people's EA Forum profiles? I think you'd focus on other things if creating a community among givers were your goal, and to me this likely just pressures those who haven't pledged, for whatever reason, into taking it, which might not be the right decision.

I agree that signaling your support for good social norms is a positive thing, though, and I feel differently when this is used on LinkedIn, for example. I just don't think the abstract benefits you point to actually cash out when adding the orange emoji to Forum profiles.

Norms = social expectations = psychological pressure. If you don't want any social pressure to take the 10% pledge (even among EAs), what you're saying is that you don't want it to be a norm.

Now, I don't think the pressure should be too intense or anything: some may well have good reasons for not taking the pledge. The pressure/encouragement from a username icon is pretty tame, as far as social pressures go. (Nobody is proposing a "walk of shame" where we all throw rotten fruit and denounce the non-pledgers in our midst!) But I think the optimal level of social pressure/norminess is non-zero, because I expect that most EAs on the margins would do better to take the pledge (that belief is precisely why I do want it to become more of a norm -- if I already trusted that the social environment was well-calibrated for optimal decisions here, we wouldn't need to change social norms).

So that's why I think it's good, on the Forum and elsewhere, to use the diamond to promote the 10% pledge.

To be clear:

(1) I don't think the audience "being familiar" with the pledge undercuts the reasons to want it to be more of a norm among EAs (and others).

(2) The possibility that something "might not be the right decision" for some people does not show that it shouldn't be a norm. You need to compare the risks of over-pledging (in the presence of a norm) to the risks of under-pledging (in the absence of a norm). I think we should be more worried about the latter. But if someone wants to make the comparative argument that the former is the greater risk, that would be interesting to hear!

I have taken the pledge, but I'm not currently donating 10%, so I don't feel I can authentically promote it to others right now.

The Giving What We Can Pledge is a public commitment to donate at least 10% of your lifetime income [...].
https://forum.effectivealtruism.org/posts/Y5QKkt9PFhqvG7CEn/5-things-you-ve-got-wrong-about-the-giving-what-we-can#Misconception__1__If_you_sign_the_pledge__you_have_to_donate_at_least_10__of_your_income_each_year_

You do not need to donate 10% each year; you can donate 5% one year and more in other years.

The pledge you took is still significant; you can be proud of taking and promoting it.

I can't figure out how to change it on the EA Forum. Perhaps because I've already changed my name once before and there's a limit?

But I understand that there are many people who take the pledge but don't feel comfortable sharing it publicly. I think different circles and different cultures look differently on "bragging" about donating. I know I don't feel comfortable doing it on LinkedIn or Instagram, mostly out of fear of judgement I guess, so my mind could easily change.

I'm the same; I have no idea how to put it on the Forum.

You can DM a moderator (e.g. me) or ask forum staff via these channels

Usernames can be changed here: https://forum.effectivealtruism.org/account but only once

They're working on creating an option to make it easy for posters to add the diamond, but in the meantime you can DM the forum team (I did!) 

It's also the case that the 10% pledge is not the best course of action for everyone in the EA movement.

Putting an emoji by your name is just a really blunt tool and I'm not sure it's the right tool to encourage people already interested in or part of EA to donate more.

Especially in the absence of other badges, my gut worry is that this leads to unhelpful social pressure (though I'm not sure what percentage of users have the emoji, etc.).

This also makes the EA forum and online social spaces slightly more cult-like via increased social pressure.
