
Epistemic status

Feelings, anecdotes, and impressions.

Declaration

I used to work at Giving What We Can, but I’m writing about my personal experience with the Pledge here.

Introduction

I have the impression that, at least among effective altruists who are directly using their careers to do good, donating is somewhat falling out of style. I feel neither happy nor sad about this. Perhaps ironically, my stance on whether someone doing directly impactful work should donate is that it’s a personal decision, to borrow an eye-roll-inducing platitude from the philanthropic field.

I hold this belief because I view the decision of whether to donate as vastly different from the decision of where to donate. The former often hinges on personal circumstances, such as where one is in their career, what their financial situation looks like, and whether they have any dependents. The claim that the choice of where to donate is a personal decision strikes me as much more tenuous; I’m preaching to the choir here, but all else being equal, one ought to support charities that do more good over those that do less.

I understand why some of my peers who work directly on the world’s most pressing problems choose not to take the Giving What We Can Pledge. At the end of the day, I don’t care how the good is done, so long as it is done. My default is to trust the judgement of people who are making a good faith effort to think carefully about what they can do to help others as much as possible. If you feel that donating isn’t the best road to impact, I have faith in the reasoning behind your belief.

Yet as someone who is working directly on what I view as one of the world’s most pressing problems, I still feel that effective giving is a core part of my plan to do what I can to make the world as good as possible.

Here’s why.

Nothing can take donating away from me, not even a bad day

I’m currently spending my time researching ways to steer the development of transformative artificial intelligence in a positive direction. This means I work in a field with few, if any, clear feedback loops — at least none that can concretely indicate whether the things I’m doing are actually improving others’ lives.

Like many, I have struggled with imposter syndrome, and this has been exacerbated by the messy and opaque causal links between the things I do on a day-to-day basis and actual downstream improvements in others’ lives. Questions like “Am I smart enough to belong in this community?”, “Am I actually doing any good?” and “Will this paper I’m writing help anyone?” pop into my head more than I’d like to admit.

These kinds of thoughts and feelings suck. They aren’t helpful, and I recognise that. But sometimes they’re hard to avoid, and they can be debilitating when they strike. I want to help others so badly, and the thought of failing to do that is agonising.

Being able to donate makes me hopeful. No matter how rough of a day I have, or how unclear the impact of my work is, nothing can take donating away from me. There is no imposter: I can literally see a number on my Giving What We Can dashboard, and I can feel proud about knowing those funds are going to help others. Hitting a brick wall on figuring out AI governance, or having colossal uncertainties about what I ought to do with my career won’t change that. In fact, nothing will, and the sense of agency that brings me is incredibly motivating.

To be clear, I believe the vast majority of my positive impact on the world will come from using my career to work on problems that could negatively impact humanity’s long run potential. But in my mind, that has little bearing on the reality that $3,500 can save someone’s life. This empirical fact — the one that got me excited about EA in the first place, mind you — still gives me that “oh my god, that’s a real person” feeling. I don’t think that will ever change, and I intend to use it as fuel for as long as I can.

Productivity

Another argument I’ve heard against donating is that, due to the overwhelming importance of productivity for people working directly on longtermist problems, someone like me should focus on turning money into productivity to maximise my expected impact. I’m sympathetic to this argument, but I’m not wholly convinced, at least as it applies to me.

Besides the fact that this line of reasoning often strikes me as slightly Pascalian — yeah, maybe ordering delivery every night will increase my productivity by 0.0001%, which multiplied by how much my work reduces overall x-risk, which multiplied by 10^42 digital minds… you get the point — I don’t actually feel like my productivity is going to jump tremendously by spending that 10% on myself. To be clear, if you feel that using the 10% on your own productivity would help you do more good, then I encourage you to do just that. But I caution you to really scrutinise your mental model for how you can turn money into productivity. I worry about a state of affairs where this line of thinking goes too far, and biases allow us to rationalise frivolous purchases that really aren’t going to move the needle in an impactful way. In my mind, this is what we have the other 90% for.

Conversely, being legibly reminded that I’m actually capable of doing good things in the world when I give effectively is probably quite good for my productivity and motivation.

Funding constraints

The EA community is currently navigating a weird era in which we are significantly less funding constrained than before. That’s great in my books — we can now do even more good things. But a useful distinction to keep in mind is that being less funding constrained than we were previously isn’t the same as not being funding constrained at all.

Until the 700 million people living in poverty reach significantly better standards of living, or factory farming is a horror from the past, we’re funding constrained, and my “drop in the bucket” can still do a lot of good. To this end, I try to remind myself as much as possible that the drop I contribute to the bucket might be small in the grand scheme of things, but is still huge for someone out there. This line of thinking runs parallel to my desire to make sure the long run future goes well for people and non-human animals I will never exist alongside.

Signalling

I find it reasonably convincing that if EA is to do big, ambitious things, we will need to grow considerably. The goal of growing big will necessarily put more eyeballs (and thus more scrutiny) on the EA community than we have ever had before. Even if you don’t subscribe to the worldview that big growth is necessary, you likely concede that at least some targeted growth is. And all else being equal, I think we will be much more likely to bring new people into the community if we can clearly demonstrate that we’re serious about doing good. The Pledge can help with this.

Yet because someone doesn’t need to take the Pledge to be impactful, I think it is absolutely crucial that we do not drive people away by seeming too zealous or giving off the impression that one has to be self-sacrificial to contribute to the EA project. It’s possible to do incredibly impactful work without donating a cent. If this applies to you, then I say great — keep it up!

But I will caveat this by saying that it will likely be advantageous for us as a community to have clear signals to the outside world that we really do take the idea of doing the most good we can seriously. One way of doing this is by taking a public pledge — one that indicates to the world that you’re motivated to use your resources to help others in a way that using your career doesn’t quite obviously signal.

I’ve personally found the Pledge very helpful for chatting about EA with friends and peers. In fact, one of my closest friends has started giving around 5% of his salary to effective charities, and it seems likely that he will apply to work at EA/EA-adjacent organisations as a software engineer in the near future.[1] Of course, this might have happened anyway without the Pledge, but my best guess is that it played a key role in helping me demonstrate my excitement for EA in a way that talking about my career wouldn’t have.

Concluding thoughts

I recently hit my two-year pledge anniversary, and I’m really happy I took it. I feel that the Pledge has been complementary to the direct work I’m doing, and it motivates me knowing that effective giving can raise the floor of how much good I’m doing in the world during times in which the ceiling is ambiguous. While the Pledge is not for everyone, it could be valuable for you to reflect upon whether it might motivate you the same way it has for me.

Acknowledgements

Thank you to Luke Freeman & Frances Lorenz for their feedback on this post.

  1. ^

    Please reach out to me if you’re looking to hire a very smart and personable software engineer with ~2 years of experience.

Comments (9)



Awesome post Julian! 

Just to agree but in a different way.

It's a little unfashionable at the moment, but I still find the old-school, obligation-style framing of giving pretty convincing.

I'm wearing an expensive suit.  The child is drowning.

Sure there's a drowning child, but what we really need is major political change. The child is still drowning. 

Ok ok, I get it, but I just got this great new job working on AI governance.  The child is still drowning. 

Well the real situation is that there are countless children drowning and for me to save this one seems kinda low yield something something population ethics.  The child is still drowning. 

Giving is great! 

I really liked this post, and resonate strongly with the sentiment of "Nothing can take donating away from me, not even a bad day". 

Although I do direct work on biosecurity, my donations (~15% of gross income) go almost entirely to global health and wellbeing, and some of this is because I want to be reassured that I had a positive impact, even if all my various speculative research ideas (and occasional unproductive depressive spirals) amount to nothing.

I would be curious how you feel that intersects with the wording of the GWWC pledge, which includes 

I shall give __ to whichever organisations can most effectively use it to improve the lives of others

As the sort of pedant who loves a solemn vow, I wonder if my global health and wellbeing donations are technically fulfilling this pledge, based on my judgements of how to improve the lives of others. That said, this only bothers me a little because, you know, this mess of incoherent commitments is out here giving what she can, and I recognize that might not meet a theoretical threshold of "most effective".

Hey! A few thoughts:

  • From an instrumental POV, donating to an effective charity that keeps you motivated to continue direct work is probably a good strategy. I sometimes donate to the LTFF, but would probably feel less motivated if all of my donations went there. “Fuzzies” from AMF help me stay motivated, and I think that increases my overall impact. If you’re concerned about the direct wording of the pledge and you feel longtermist charities are better in that regard, there’s probably some allocation between those and GH&D charities that would allow you to be in the sweet spot.

  • I can’t quite articulate why, but I feel that effective giving should be exciting and motivating. I would be sad if I knew people were stressing out about whether they’re doing the most good (impossible to tell) when they’re already doing a hell of a lot of good by giving to charities like AMF. Being excited and signalling to others that effective giving isn’t a chore is also good for inspiring others.

  • GWWC as an organisation recommends GiveWell top charities (what I assume you mean by GH&W charities), and a large fraction of the community gives there as well. That should be a pretty strong signal.

  • Within the broad portfolio of effective charities that EAs support, the decision of where to give often hinges on worldviews that are highly uncertain (should I donate to AMF, the Good Food Institute, or a wild animal research institute? What about a longtermist charity? It’s unclear a priori). To me, it’s perfectly sensible to hedge a bit with your overall altruistic portfolio (donate to neartermist global health charities while still doing direct longtermist work).

  • All of this stuff is really hard and unclear, so just do what you think is the best while also remembering it is nearly impossible to be a perfect effective altruist who always “maximizes the good”. And be proud of yourself for caring enough to think about it :)

some of this is because I want to be reassured that I had a positive impact, even if all my various speculative research ideas (and occasional unproductive depressive spirals) amount to nothing.

You may be interested to know that Open Philanthropy reasons similarly. At least that's what I got from Ajeya Cotra's discussion on worldview diversification with Rob Wiblin on the 80K podcast:

I think there’s something to be said for not going all in on what you believe a rigorously philosophical accounting would say to value. I think one way you could put it is that Open Phil is — as an institution — trying to place a big bet on this idea of doing utilitarian-ish, thoughtful, deep intellectual philanthropy, which has never been done before, and we want to give that bet its best chance. And we don’t necessarily want to tie that bet — like Open Phil’s value as an institution to the world — to a really hyper-specific notion of what that means.

Thanks for writing this!

On the particular example of ordering meals, I think some easy-to-prepare meals or basic things like (store-bought) hummus and veggies are actually about as much work as ordering food from a restaurant, sometimes less. You can get grocery delivery ahead of time. People ordering meals may be rationalizing luxuries, or paying to avoid thinking about food until they're hungry, or they just aren't aware that there are very quick, cheaper options.

Edit: I suppose there's a separate question of whether eating food you enjoy more makes you more motivated and productive.

Also, are the Long-Term Future Fund and the Infrastructure Fund granting out all the donations they receive? If so, it doesn't seem very reasonable to me to believe that ordering meals to save a few minutes, or for your own enjoyment, is a better use of money than their marginal grants for a longtermist supporting the same causes.

If you're in such a special position that these meals are justified, then I think you should probably have a personal assistant. If your employer or these funds aren't willing to pay for one for you (and you can ask/apply to check!) or to pay you enough to afford one yourself, then that's good evidence that others with relevant expertise evaluating your work do not think you're in such a position.

Do we have any data on donation amounts from highly engaged EAs? The key question seems to be how this number compares to the past.

We do, at least from the EA survey. However, estimates of changes over time are somewhat confounded by potential differences in survey response rates across years.
