
The podcast is now live, here.

I'll be interviewing Allan Saldanha on Tuesday, December 10th. I'll release a podcast version of the interview and a transcript a few days after we record. Leave a comment here if you'd like me to ask Allan a question on your behalf!

A message from Allan:

My name is Allan Saldanha. I’m a 47-year-old compliance testing manager at an investment bank, and I’m married with twin 16-year-old boys.

I have been a Giving What We Can member since 2014.

In my first year after taking the pledge, I gave away 20% of my income. However, I had been able to save and invest much of my disposable income from my relatively well-paid career before taking the pledge, and so had built up strong financial security for myself and my family. As a result, I increased my donations over time, and since 2019 I have given away 75% of my income.

Since taking the pledge, I’ve earned £1.2m and given away 60% of it. I’m full of admiration for the many young GWWC members who have taken the pledge as students or early in their working lives without any significant savings, and their generosity has also motivated me to increase my donations.

Initially I made all my donations to anti-malaria and deworming charities; however, when I read about the scale of wild animal suffering, I started donating to animal welfare charities. I have also donated to the EA Infrastructure Fund and EA organisations.

However, when I read that Toby Ord and other experts believed there was a 1-in-6 chance of complete extinction of human life in the next 100 years, I was shocked and decided that I should give almost all my donations to longtermist funds.

I currently split my donations between the Longview Philanthropy Emerging Challenges Fund and the Long-Term Future Fund; I believe in giving to funds and letting experts with much more knowledge than me identify the best donation opportunities.

The best article I’ve seen on earning to give is this forum post by AGB.

I’m happy to take any questions on earning to give, although I don’t think I’d have many insights on picking good donation targets.

Some topics you might want to ask about:

  • How Allan first got into giving.
  • What helps Allan stay motivated after over a decade of giving.
  • How Allan decides where his money goes.

I'll ask a bunch of my own questions as well if there aren't enough here for an hour-long interview. Thanks all!

Comments (14)



Not a question, but simply: thank you, Allan! What you do is amazing, and really cool. Kudos! 

I'm curious how you first got interested in giving, especially as Giving What We Can skewed towards students and (very) young professionals at the time.

What motivated you to increase the percentages over time?

How do your wife and teenage children feel about your giving?

First just wanted to say that this:

In my first year after taking the pledge, I gave away 20% of my income. However, I had been able to save and invest much of my disposable income from my relatively well-paid career before taking the pledge, and so had built up strong financial security for myself and my family. As a result, I increased my donations over time, and since 2019 I have given away 75% of my income.


...is really inspiring :).

I'm interested in knowing more about how Allan decides where to donate. For example:

I currently split my donations between the Longview Philanthropy Emerging Challenges Fund and the Long-Term Future Fund; I believe in giving to funds and letting experts with much more knowledge than me identify the best donation opportunities.

How did Allan arrive at this decision, and how confident does he feel in it? Also, how connected does Allan feel with other EtG'ers who are giving similar amounts based on a similar worldview?

I'd be curious about the emotional journey of increasing the giving percentages.

I just made my 10% pledge very recently and am really struggling to find the right percentage to donate. Currently, with a €65k base income, I just go with the 10% pre-tax and put 50% of my bonuses post-tax on top.

One month, I think I am donating too little. The next month, I'm scared of saving too little. It sometimes feels hard to justify to myself that increasing it further is the right thing to do, since everyone I know saves most of the money for themselves and there's essentially 0 positive feedback for donating. The money is just gone. 

Could you describe how these decisions to increase came to be and what it did to you emotionally? Did you have times of doubt, or did every step feel right?

Where will the podcast be released?

It'll be on the EA Forum "Curated and popular" podcast feed, but I'll post a transcript and links on the Forum as well.

Here it is. Still uploading to Spotify etc... I think. I'll link it when it's done.

What are his thoughts on impact-based giving?

What do you mean by "impact-based giving"? Do you mean giving that considers effectiveness (like any effective giving), or do you mean high upside, low likelihood of success giving?

Hits-based giving, sorry! I wrote too fast.

Thanks, Toby and Allan.

However, when I read that Toby Ord and other experts believed there was a 1-in-6 chance of complete extinction of human life in the next 100 years, I was shocked and decided that I should give almost all my donations to longtermist funds.

@Allan_Saldanha, I encourage you to check David Thorstad’s series “Exaggerating the Risks”. I think Toby's and other experts' guesses for the risk of human extinction are unreasonably high. For example, I estimated a nearterm annual extinction risk from nuclear war of 5.93*10^-12, which is only 1.19*10^-6 (= 5.93*10^-12/(5*10^-6)) of the 5*10^-6 that I understand Toby Ord assumed in The Precipice.
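For concreteness, here is that ratio spelled out as a quick sanity check (a minimal sketch, using only the figures quoted above):

```python
# Sanity check of the ratio quoted above, using only the figures from the comment.
nuclear_war_annual_risk = 5.93e-12  # my estimate of nearterm annual extinction risk from nuclear war
precipice_annual_risk = 5e-6        # annual risk I understand Toby Ord assumed in The Precipice

ratio = nuclear_war_annual_risk / precipice_annual_risk
print(f"{ratio:.2e}")  # 1.19e-06, i.e. roughly a millionth of the assumed risk
```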

1/6 might be high, but perhaps not too many orders of magnitude off. There is an interview on the 80,000 Hours podcast (https://80000hours.org/podcast/episodes/ezra-karger-forecasting-existential-risks/) about a forecasting contest in which experts and superforecasters estimated AI extinction risk this century at 1% to 10%. And after all, AI is likely to dominate the prediction.

Thanks for sharing, Pablo. I had listened to that podcast discussing The Existential Risk Persuasion Tournament (XPT), but what I take from this is that there is huge dispersion in the extinction risk predictions. 

In addition, many forecasters predicted a probability of human extinction from 2023 to 2100 of exactly 0 (the shares below are recomputed in the sketch after this list):

  • For extinction, 3.18% (5/157).
  • For AI extinction, 4.29% (7/163).
  • For nuclear extinction, 6.21% (10/161).
  • For non-anthropogenic extinction (excluding non-anthropogenic pathogens), 5.66% (9/159).
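Each percentage above is simply the count of zero answers over the number of forecasters for that question; a minimal sketch recomputing them from the counts:

```python
# Recompute the shares of forecasters who predicted exactly 0, from the counts quoted above.
zero_answers = {
    "extinction": (5, 157),
    "AI extinction": (7, 163),
    "nuclear extinction": (10, 161),
    "non-anthropogenic extinction": (9, 159),
}

for question, (zeros, total) in zero_answers.items():
    print(f"{question}: {zeros / total:.2%}")
# extinction: 3.18%
# AI extinction: 4.29%
# nuclear extinction: 6.21%
# non-anthropogenic extinction: 5.66%
```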

A risk of exactly 0 is obviously wrong, but it goes to show that some superforecasters and domain experts guess the risk of human extinction is negligible. You can also qualitatively appreciate this from some comments in Appendix 7 of the report. Here are some I collected about the risk of nuclear extinction (emphasis mine):

  • “Most forecasters whose probabilities were near the median factored in a range of possible risks, including world wars, nuclear winters, and even artificial-intelligence-driven NERs [nuclear extinction risks], but concluded that even under worst case scenarios, the extinction of humanity (give or take 5000 people) would be near impossible...even if an NER [nuclear existential risk] had set humanity on a path that made eventual extinction a foregone conclusion, existing resources on earth would allow at least 5000 survivors to hang on for seventy-eight years”.
  • “For many, the thought of getting to less than 5000 humans alive was simply too far fetched an outcome and they couldn't be persuaded otherwise in what they saw as credible scenarios”.
  • “[T]he set of circumstances required for this to happen are quite low, though obviously not impossible. These circumstances are that there will be a nuclear conflict between 2 nations both capable and willing to fire at everyone everywhere between the two of them: 'very bad case scenarios' where India and Pakistan, or the US and Russia, or China and anyone else, fired everything they had at just each other, or even at each other and each other's close allies, would likely not cause extinction…it requires some of the big nuclear powers to decide to try to take literally everyone down with them, and that they actually succeed”.
  • “So we think that the probabilities in this question are dominated by scenarios of total nuclear war before 2050 which cause civilizational and climate collapse to the point where long-term survival becomes impossible save for very well-prepared shelters. But even pessimistic scenarios seem unlikely to lead to a collapse that is fast enough to reduce the global population to below 5000 by 2100”.
  • “There aren't compelling arguments on the higher end for this question again due to the fact that this is a very high bar to achieve”.
  • “The team predicts that there will be pockets of people who survive in various regions of the world. Their survival may be at Neolithic standards, but there will be tribes of people who band together and restart mankind. After all, many mammals survived the asteroid and ice age that killed the dinosaurs”.
  • “[A] certain number of team members feel that even if there was a full strategic exchange and usage of all of the world's nuclear arsenal still humanity would be able to keep its numbers over 5000. The argument for this is the number [a]nd population of uncontacted tribes, or isolated human populations like the Easter island population pre-contact, that have managed to hold numbers of over 5000 in extremely harsh conditions”.
  • “[A]lmost certainly some people would survive on islands or in caves given even the worst of worst cases”.
  • “Southern Hemisphere likely to be less impacted – New Zealand, Madagascar, Pacific Islands, Highlands of Papua New Guinea, unlikely to be targeted and include areas with little global and technology dependence…Just the population of Antarctica in its summer is ~5000 people. Even small islands surviving could easily mean more than 5k people”.
  • “[There are s]everal regions in the world that would not be affected by nuclear conflict directly and have decent climatic conditions to support 100 of millions even in a NW [nuclear winter]”.