Giving season schedule:

* Nov 10: Funding strategy week
* Nov 17: Marginal funding week
* Nov 24: Donation election
* Dec 8: Why I donate week (a week to share the personal stories and reasons behind our donations)
* Dec 15: Donation celebration
"It may sound paradoxical to associate giving away 10% of one's income with increased freedom, but until that point, I had felt limited and frustrated by the constraints on my ability to effect meaningful change through my career alone. The opportunity to use my earnings as a direct, powerful lever for good was a breath of fresh air." So well put! I felt similarly when I took the Pledge in 2017! 
My decision to take the Giving What We Can (now the 10%) Pledge in September 2020 was based on the responsibility I felt came with my great power, on improving my own quality of life, and on the logic of it all. However, these thoughts were all quite unconscious back then, and my decision to take the pledge was immediate as soon as I learned about it. To celebrate "Why I donate week", I decided to think through my reasoning, which was implicit back then, in more detail.

With great power comes great responsibility

My commitment to donate 10% of my income for life is fundamentally rooted in a strong belief: with great power comes great responsibility. The power I hold is the freedom to choose what I do with a decent portion of my income, a power that countless individuals across the globe simply do not possess as they struggle daily to meet their basic needs. My personal responsibility is to use this power to support those who have not been afforded the same chance in life.

This stems directly from a place of undeniable privilege. Growing up in the United Kingdom, I was raised in a stable family environment where all my financial needs were met. Crucially, I received a high-quality education that essentially set me up for life, guaranteeing a pathway to financial security. This is a privilege I try not to take for granted.

The raw reality of global inequality was impressed upon me from a young age during annual visits to my birthplace: Mumbai, India. Witnessing the pervasive, inescapable poverty was emotionally draining and deeply frustrating. At the time, I didn't know how to address such overwhelming injustice, but the profound unfairness of the world's distribution of opportunity has stayed with me for life.

Giving away my money increased my quality of life

Initially, I sought to channel my desire to make the world better through my career. For ten years, I worked in sustainable tourism development, using tourism as a tool for impact. While I did donate to causes
Awesome post! 
Four years ago, I decided to donate at least 10% of my income going forward. Here are four reasons why. Any single reason below would probably be enough on its own; together, they make this one of the most clearly positive and rewarding decisions I've ever made.

1. I think it's the right thing to do

I follow Peter Singer's arguments. From any consistent moral framework I can support, I end up in the same place: given the coincidence of being born in a rich country, I should be helping others significantly. I'd have to do serious logical and moral gymnastics to avoid this conclusion, and I'm not interested in that kind of self-deception.

2. I actually care

Straightforward: when I read about someone's specific situation (their health, their opportunities, their constraints), I naturally want to help. It takes active effort not to care. There's a moving post on the EA Forum that captures this: "Somehow, a single paragraph of explanation can transform someone from nameless and faceless to someone that I deeply care about. When I hear this person's story, I feel willing to give up a nice vacation or two to help them." I don't need to convince myself to care. I need to remind myself of the reality of suffering out there, and that I can actually do something about it.

3. It grounds my everyday work

I'm early career and work in a large corporation. I enjoy my job: it's challenging, I'm learning constantly, and I work with great people. But I'm not under any illusion that my daily tasks maximize impact on the world's most pressing problems. Some days, work feels meaningful. Other days, I'm drowning in corporate busywork. Knowing that I donate a significant part of my earnings changes the frustrating parts. In those moments, I can think: this tedious work funds something that genuinely matters. It lifts the pressure to find cosmic meaning in every boring meeting. I'll likely look for a more impactful role at a later career stage. But knowing that I'm funding real impact ev
Donation Election Fund: $15,213 raised (includes our match on the first $5,000). Voting and donating have now closed.

Quick takes

Lizka:
When thinking about the impacts of AI, I've found it useful to distinguish between different reasons why automation in some area might be slow. In brief:

1. raw performance issues
2. trust bottlenecks
3. intrinsic premiums for "the human factor"
4. adoption lag
5. motivated/active protectionism towards humans

I'm posting this mainly because I've wanted to link to this a few times now when discussing questions like "how should we update on the shape of AI diffusion based on...?". Not sure how helpful it will be on its own!

---

In a bit more detail:

(1) Raw performance issues

There's a task that I want an AI system to do. An AI system might be able to do it in the future, but the ones we have today just can't do it.

For instance:

* AI systems still struggle to stay coherent over long contexts, so it's often hard to use an AI system to build out a massive codebase without human help, or to write a very consistent detective novel.
* Or I want an employee who's more independent; we can get aligned on some goals and they will push them forward, coming up with novel pathways, etc.
* (Other capability gaps: creativity/novelty, need for complicated physical labor, ...)

A subclass here might be performance issues that are downstream of "interface mismatch".[1] These are cases where AI might be good enough at the fundamental task we're thinking of (e.g. summarizing content, or similar), but where the systems that surround that task, or the interface through which we're running the thing (trivial for humans), are a very poor fit for existing AI, and AI systems struggle to get around that.[2] (E.g. if the core concept is presented via a diagram, or the task requires computer use.) In some other cases, we might separately consider whether the AI system has the right affordances at all.

This is what we often think about when we think about the AI tech tree / AI capabilities. But the others are often important, too:

(2) Veri
Linch:
I wrote a short intro to stealth (the radar evasion kind). I was irritated by how bad existing online introductions are, so I wrote my own! I'm not going to pretend it has direct EA implications. But one thing that I've updated more towards in the last few years is how surprisingly limited and inefficient the information environment is. For example: obvious concepts known to humanity for decades or centuries don't have clear explanations online; obvious and very important trends have very few people drawing attention to them; you can just write the best review of a popular book that's been around for decades; etc. I suppose one obvious explanation here is that most people who can write stuff like this have more important and/or interesting things to do with their time. Which is true, but also kind of sad.
A rule of thumb that I follow for generating data visualizations: one story = one graph.

* The best visualizations are extremely simple and easy to read: e.g. a line or bar chart that tells you exactly what you care about.
* If you are struggling to figure out what to visualize, zoom out and ask yourself: what story are you trying to tell? Once you have clarity on that, figure out the simplest way to illustrate it.
* If you have multiple stories to tell, make multiple graphs :)

Some made-up stories and solutions (sketched in code below):

* Total engagement hours steadily went down over this year = you want a line graph of engagement over the year, and possibly you want to smooth your data to show the trend: e.g. graph the rolling 7-day average over time, or include a trend line.
* Engagement really spiked on May 1, 2025 = you want a line graph of engagement per day, zoomed out far enough to show how it's changed over time, and maybe a labelled vertical line on May 1.
* Engagement this giving season is much stronger than last year = you want to plot two lines: engagement per day in 2025 over giving season, plus the equivalent engagement per day in 2024. Here, the comparison is the story you want to tell, so you want to make sure your 2025 and 2024 data are apples to apples.

Sharing communication advice a few colleagues have found helpful.
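To make the first two stories concrete, here is a minimal matplotlib sketch; the engagement numbers, the downward trend, and the 7-day smoothing window are all my own illustrative assumptions, not anything from the original take:

```python
# A minimal sketch of "one story = one graph", using made-up engagement data.
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

# Hypothetical daily engagement hours for 2025: a gentle downward trend,
# noise, and an artificial spike on May 1 (all numbers invented).
rng = np.random.default_rng(0)
days = pd.date_range("2025-01-01", "2025-12-31", freq="D")
engagement = pd.Series(
    120 - 0.05 * np.arange(len(days)) + rng.normal(0, 10, len(days)),
    index=days,
)
engagement.loc[pd.Timestamp("2025-05-01")] += 80

# Story 1: "engagement steadily went down this year" -> one smoothed line.
fig, ax = plt.subplots()
engagement.rolling(7).mean().plot(ax=ax)  # 7-day rolling average hides daily noise
ax.set_title("Engagement hours trended down over 2025")
ax.set_ylabel("Hours (7-day rolling average)")

# Story 2: "engagement spiked on May 1" -> raw daily line with a labelled marker.
fig, ax = plt.subplots()
engagement.plot(ax=ax)
ax.axvline(pd.Timestamp("2025-05-01"), linestyle="--", label="May 1 spike")
ax.legend()
ax.set_title("Engagement spiked on May 1, 2025")
ax.set_ylabel("Hours per day")

plt.show()
```

Each figure carries exactly one claim, which is the point of the rule: a reader should be able to state the story from the title and one glance at the line.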
I have the impression that the most effective interventions, especially in global health and poverty, are usually temporary, in the sense that you need to keep reinvesting regularly, usually because the intervention provides a consumable good. For example, malaria chemoprevention needs to be provided yearly. In contrast, solutions that seem more permanent in the long term (e.g. a hypothetical malaria vaccination, or building infrastructure) are typically much less cost-effective on the margin because of their high upfront cost. How do we balance pure marginal effectiveness against eventually moving towards more permanent solutions? Could it be that by overly optimising for marginal cost-effectiveness, we are missing a better 'global maximum' in the utility landscape, and we just need to descend from the current 'local maximum' to be able to get there eventually?
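One way to make this tradeoff concrete (my framing, not the questioner's): compare a recurring intervention costing c per year with a one-time "permanent" one costing C, assuming equal benefits. With a discount rate r, the recurring stream has a present value of roughly c/r (the perpetuity formula), so the permanent option wins whenever C < c/r. A toy calculation with invented numbers:

```python
# Toy break-even comparison between a recurring and a one-time intervention.
# All figures are illustrative assumptions, not real cost-effectiveness data.

annual_cost = 5.0     # $/person/year for a recurring intervention (e.g. chemoprevention)
one_time_cost = 60.0  # $/person for a hypothetical permanent solution
discount_rate = 0.04  # annual discount rate

# Present value of paying annual_cost indefinitely (perpetuity formula): c / r.
pv_recurring = annual_cost / discount_rate

print(f"PV of recurring intervention: ${pv_recurring:.0f} per person")
print(f"One-time 'permanent' cost:    ${one_time_cost:.0f} per person")
if one_time_cost < pv_recurring:
    print("Under these assumptions the permanent option is cheaper in the long run,")
    print("even though it looks worse on this year's marginal cost-effectiveness.")
else:
    print("Under these assumptions the recurring intervention remains cheaper.")
```

On these made-up numbers the permanent option breaks even well below its price, which illustrates the question: a strictly marginal, single-year comparison can rank it last even when the discounted lifetime comparison favours it.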
* Re the new 2024 Rethink Cause Prio survey: "The EA community should defer to mainstream experts on most topics, rather than embrace contrarian views. [“Defer to experts”]" 3% strongly agree, 18% somewhat agree, 35% somewhat disagree, 15% strongly disagree.
* This seems pretty bad to me, especially for a group that frames itself as recognizing intellectual humility and how often we (going by the base rate for an intellectual movement) are wrong.
* (Charitable interpretation) It's also just the case that EAs tend to hold lots of contrarian views because they're trying to maximize the expected value of information (often justified with something like: "usually contrarians are wrong, but when they are right, they provide more valuable information than the average person who just agrees").
* If this is the case, though, I fear that some of us are confusing being contrarian for instrumental reasons with being contrarian for "being correct" reasons. Tho lmk if you disagree.