
Since Giving What We Can was formally founded in 2009, with just 23 founding members, the community has grown significantly – and we’re approaching 10,000 10% Pledgers (hopefully in the next couple of months)! The community as it stands today has now donated over $250 million and is predicted to donate over $1 billion across the lifetimes of our members. 

If you’re a pledger, join us in sharing stories this week:

To celebrate our community’s impact over the last 15 years, we’re hoping to light up the internet for the next week with everyone’s stories and quotes. We’re inviting all of you to share (or re-share) your stories about pledging, photos from the early days, what the pledge has meant to you, or even your hopes for Giving What We Can and the 10% Pledge in the future. We’ll then compile a selection of these stories and photos (and anything else we receive) to highlight how powerful it can be to give significantly and effectively.

This is not only a great way to reflect on your giving and the community but also to show your friends and networks what the Pledge is all about – and hopefully encourage some of them to join us during next month’s Pledge Week (December 16th-22nd) as we near 10,000 lifetime pledges.

How to get involved:

  • Post on social media tagging Giving What We Can and share your thoughts/story about pledging (anything from what motivated you to take the pledge to the story of how you found out about it and how it's impacted your life so far!) along with your hopes for the future of Giving What We Can. (Bonus if you include a photo of you holding your pledge and/or wearing your pin – you could take a current one or share something from the past! If you prefer, you can instead use one of these customisable images to accompany your story, or even post this separately later in the week!)
  • If you don’t use social media, submit a quote and photo to this form so we can share your story on our blog and social media.
  • You can also add your story to our EA Forum thread.

You can find some example post ideas and more information here.

Don’t forget to add a diamond to your LinkedIn or X accounts to show you’re a pledger! Several new pledgers have mentioned the diamonds as one of the reasons they’ve pledged!

And most importantly, we hope you enjoy seeing memories and reflections from across the community during this exciting anniversary.

The last 15 years have shown that there are thousands of people willing to take giving to the most effective charities seriously, and to give away a significant share of their income in pursuit of a better world for those living now and in the future.

We hope the next 15 years will bring us significant growth, and show that giving effectively and significantly can truly become a cultural norm.

Thank you for your continued giving, and for your continued support of Giving What We Can.

 

P.S. Here’s a throwback to the GWWC website in 2010 – that Toby Ord coded himself!

Comments (1)



GWWC website in 2010:
For a person earning £15,000 per year, this would mean saving 5 lives every year

£300/$450 (~£450/$650 inflation-adjusted) per life back then... unfathomably low.

https://old.reddit.com/r/EffectiveAltruism/comments/1gmtdrm/has_average_cost_to_save_a_life_increased_or/
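For context, that per-life figure seems to follow from simple arithmetic; a rough sketch, assuming the full 10% Pledge on a £15,000 salary and an exchange rate of roughly $1.50 per £ around 2010:

```latex
% Rough reconstruction (assumptions: full 10% Pledge, ~$1.50 per GBP in 2010)
\[
  0.10 \times \pounds 15{,}000 = \pounds 1{,}500 \text{ donated per year},
  \qquad
  \frac{\pounds 1{,}500}{5 \text{ lives}} = \pounds 300 \approx \$450 \text{ per life}
\]
```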
