This is a linkpost for #105 - Alexander Berger on improving global health and wellbeing in clear and direct ways. You can listen to the episode on that page, or by subscribing to the '80,000 Hours Podcast' wherever you get podcasts.

In the episode, Alexander advocates for focusing on improving the world today in identifiable ways. He explains the case in favour of adopting the ‘global health and wellbeing’ mindset, while going through the arguments for the longtermist approach that he finds most and least convincing.

Rob and Alexander also tackle:

  • Why it should be legal to sell your kidney, and why Alexander donated his to a total stranger
  • Why it’s shockingly hard to find ways to give away large amounts of money that are more cost effective than distributing anti-malaria bed nets
  • How much you gain from working with tight feedback loops
  • Open Philanthropy’s biggest wins
  • Why Open Philanthropy engages in ‘worldview diversification’ by having both a global health and wellbeing programme and a longtermist programme
  • Whether funding science and political advocacy is a good way to have more social impact
  • Whether our effects on future generations are predictable or unforeseeable
  • What problems the global health and wellbeing team works to solve and why
  • Opportunities to work at Open Philanthropy

…when I think about trying to move an amorphous, vague, hard-to-quantify measure like ‘societal judgment’ — versus just making people healthier and wealthier — I’m like wow, I’m so much more excited about making people healthier and wealthier, because we know how to do that…

–Alexander Berger

Key points

What is neartermism, and what should we call it?

Alexander Berger: Sort of accidentally, we had taken to calling it neartermism just by contrast to longtermism. Originally it had been short-termism, and that was even more of a slur [laughs]. So we’ve gone from short to near, and we felt like that was a very marginal improvement, but we thought we could do better. And so we spent a long time going through a process of trying to brainstorm names and come up with a sense of what are we really trying to do? What do we think about the affirmative case for this? Not just what is it defined against. And we did a bunch of surveys of folks inside and outside of Open Phil, and we came away thinking that ‘global health and wellbeing’ was the best option. We also thought about this phrase ‘evident impact,’ which I noticed that you used in a tweet about the show, and I think in our survey that was like the third most popular.

Alexander Berger: I think there is something in the tendency that that term gets right, which is around the idea of feedback loops and evidence and improving over time versus just this broad, utilitarian feeling of global health and wellbeing. But I like that global health and wellbeing ends up being actually about what we are about, which is maximizing global health and wellbeing rather than maximizing feedback loops or maximizing concreteness, which is a nice positive thing, but not the thing that I see as actually core to the world or core to the project.

Rob Wiblin: I suppose neartermism wouldn’t be such an unreasonable name for, say, the moral philosophy position that it’s better to benefit people sooner. If you can help someone today versus someone in a hundred years’ time, it’s just better because it happens sooner rather than later. Or, potentially, I suppose it could be a not-unreasonable name if you think it’s more important to benefit people who are alive now, rather than future generations. But of course, there’s lots of other reasons why people work on things like GiveWell and reducing poverty and so on.

Alexander Berger: Yeah, and it seems to me that nobody… I think the philosophical position that it’s better to help people sooner rather than later does not seem to have very many defenders. And I certainly wouldn’t want to sign up for that position. I think there’s probably some lay appeal to it. I think that part of the concern with neartermism is that it seems to be more about population ethics, or your view on temporal discounting, which is very much not how we think about it.

Arguments for working on global health and wellbeing

Alexander Berger: I think arguments for global health and wellbeing are first and foremost about the actual opportunities for what you can do. So, I think you can just actually go out and save a ton of lives. You can change destructive, harmful public policies so that people can flourish more. You can do so in a way that allows you to get feedback along the way, so that you can improve and don’t just have one shot to get it right. And at the end, you can plausibly look back and say, “Look, the world went differently than it would have counterfactually if I didn’t do this.” I think that is pretty awesome, and pretty compelling.

Alexander Berger: I [also] think we just don’t have good answers on longtermism. The longtermist team at Open Phil is significantly underspending its budget because they don’t know where to put the money.

Alexander Berger: When I think about the recommended interventions or practices for longtermists, I feel like they either quickly become pretty anodyne, or it’s quite hard to make the case that they are robustly good. And so I think if somebody is really happy to take those kinds of risks, really loves the philosophy, is really excited about being on the cutting edge, longtermism could be a great career path for them. But if you’re more like, “I want to do something good with my career, and I’m not excited about something that might look like overthinking it,” I think it’s pretty likely that longtermism is not going to be the right fit or path for you.

Cluelessness

Rob Wiblin: Yeah, it’s interesting that on both the most targeted longtermist work and on the broader work people talk about this term ‘cluelessness.’ Basically because the world is so unpredictable, it’s really hard to tell the long-term effects of your actions. If you’re focused on the very long-term future, then it’s just plausible that it’s almost impossible to find something that is robustly positive, as you say. On almost anything that you can actually try to do, some people would argue that you could tell a similarly plausible story about how it’s going to be harmful as how it’s going to be helpful. We were talking about how you can do the same thing with the broader work — like improving judgment, maybe that’s bad because that would just lead people to be more aggressive and more likely to go to war. I suppose if you put that hat on, where you think it’s just impossible to predict the effect of your actions hundreds of thousands of years in the future, where do you think that leads?

Alexander Berger: I think cluelessness cuts against basically everything, and just leaves you very confused. One place where I encountered it is, I mentioned earlier this idea that if you lived 10,000 years ago and you saved the neighbor’s life, most of the impact that you might have had there — in a counterfactual utilitarian sense, at least in expectation — might have run through basically just speeding up the next 10,000 years of human history such that today maybe a hundred more people are alive because you basically sped up history by one-millionth of a year or something.

Alexander Berger: That impact is potentially much larger than the immediate impact of saving your neighbor’s life. When that happens with very boring colloquial-type things… By the way, that argument is from a blog post by Carl Shulman.

Alexander Berger: The impact that you actually would want to care about, maybe, from my utilitarian ethics, is this vastly disconnected future quantity that I think you would have had no way to predict. I think it makes you want to just say wow, this is all really complicated and I should bring a lot of uncertainty and modesty to it. I don’t think that should make us abandon the project, but it does make me look around and say everything here is just more complicated than it seems, and I want to be a little bit more sympathetic to people who are skeptical of totalism and making big concentrated bets, and are maybe not even that interested in altruism. Or they’re just like, I’m just doing my thing.

Alexander Berger: Again, I think it would be better if people were more altruistic. I don’t think that’s the right conclusion, but really genuinely feeling the cluelessness does make me feel more like, wow, the world’s really rich and complicated and I don’t understand it at all. I don’t know that I can make terribly compelling demands on people to do things that are outside of their everyday sphere of life.
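
(To make the back-of-the-envelope argument above concrete: if saving a life 10,000 years ago effectively advanced all of subsequent history by some tiny interval, the number of extra people alive today is roughly that interval times the current rate of population growth. The sketch below is purely illustrative; the one-millionth-of-a-year figure comes from Alexander’s example, the growth rate is a rough assumption, and none of it is taken from Carl Shulman’s post or Open Phil’s analysis.)

```python
# Illustrative back-of-the-envelope sketch, not Open Phil's or Carl Shulman's actual numbers.
speed_up_years = 1e-6          # assumed "speed-up" of history from saving one life: ~one-millionth of a year
pop_growth_per_year = 80e6     # rough current world population growth, people per year (assumption)

extra_people_today = speed_up_years * pop_growth_per_year
print(f"Extra people alive today, roughly: {extra_people_today:.0f}")
# -> on the order of 100, which is why the diffuse long-run effect can swamp
#    the immediate effect of saving one neighbour's life.
```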

Examples of valuable feedback loops

Alexander Berger: I think a really central one is just being able to see what works and then do more of it, which is a funny low-hanging fruit. But I think often, in other categories where you don’t even know what intermediate metrics you’re aiming for, you don’t have that benefit. So for instance, the amount of resources flowing into cage-free campaigns in farm animal welfare has, I think, well over 10x-ed because they were working. And it was like, oh okay, we have found a strategy or tactic that works, and we can scale. I think that accounts for a very material portion of the whole farm animal welfare movement’s impact over the past decade. But if you were somehow unable to observe your first victories, you wouldn’t have done it. So I think that there’s something about literally knowing if something is making progress. That’s a really, really important one.

Alexander Berger: Also, on the other side, being able to notice if the bets aren’t paying off. So, we have a program that’s focused on U.S. criminal justice reform. We don’t do calculations for every individual grant, necessarily. We make the big bets on Chloe Cockburn, who leads that program. But if after five years the U.S. prison population was growing, that would raise questions for us. We don’t observe the counterfactual, but given our cost-effectiveness bar, there’s a level of reduced incarceration we would need to be hitting for this to pencil out compared to other opportunities for us. Being able to observe the state of the world and ask, “Is the state of the world consistent with what it would need to be in order for these investments to be paying off?” is an important benefit that you can get on the neartermist global health and wellbeing side that you can’t necessarily get in the longtermist work.

Alexander Berger: Another one is just really boring stuff, but you can run randomized controlled trials that tell you, okay, the new generation of insecticide-treated bed nets is 20% more effective, because resistance to the old insecticide had been reducing effectiveness by 20%. You wouldn’t necessarily have known that if you couldn’t collect the data and improve. So none of those are necessarily order-of-magnitude kinds of things, but I do think if you think about the compounding benefits of all of those, and the ways in which basically the longtermist project of trying something and maybe having very little feedback for a very long time is quite unusual relative to normal human affairs… It wouldn’t shock me if the expected value impact of having no feedback loops is a lot bigger than you might naively think. That’s not to say that longtermists have no feedback loops, though: they’ll see, are we making any intellectual progress? Are we able to hire people? There are things along the way, so I don’t think it’s a total empty space.

Rob Wiblin: Yeah, longtermist projects are pretty varied in how much feedback they get. I mean, I suppose people doing really concrete safety work focused on existing ML systems, trying to get them to follow instructions better and not have these failure modes in a sense, they get quite aggressive feedback when they see, have we actually fixed this technical problem that we’ve got?

Alexander Berger: Yeah and I think that work is awesome. My colleague Ajeya has written about how she’s hoping that we can find some more folks who want to do that practical applied work with current systems and fund more of it. Again, this is just a heuristic or a bias of mine, but I’m definitely a lot more excited to bet on tactics that we’ve been able to use and have worked before, relative to models where it’s like, we have to get it right the first time or humanity is doomed. I’m like, “Well, I think we’re pretty doomed if that’s how it’s going to be.”

GiveWell’s top charities are (increasingly) hard to beat

Rob Wiblin: Open Phil has been making grants to reliable, proven GiveWell charities for a while. Things like the Against Malaria Foundation, which distributes bed nets. But it’s been hoping to maybe find things that are better than that by using science and politics and maybe other methods to get leverage, and so it’s been exploring these new approaches, trying to find things that might win out over helping the world’s poorest people. And you’d been doing that by working on scientific research and policy change in the United States, but the leverage that you’d gotten from those potentially superior approaches was something like 10x to 1,000x, probably closer to 10x than 1,000x. And that wasn’t enough to offset the 100x leverage that you get from transferring money from one of the world’s richest countries to the world’s poorest people. Is that right?

Alexander Berger: Yeah. I think that’s a great summary.

Rob Wiblin: Okay. That raises the question to me, if you were able to get even 10x leverage using science and policy by trying to help Americans, by like, improving U.S. economic policy, or doing scientific research that would help Americans, shouldn’t you then be able to blow Against Malaria Foundation out of the water by applying those same methods, like science and policy, in the developing world, to also help the world’s poorest people?

Alexander Berger: Let me give two reactions. One is I take that to be the conclusion of that post. I think the argument at the end of the post was like, “We’re hiring. We think we should be able to find better causes. Come help us.” And we did, in fact, hire a few people. And they have been doing a bunch of work on this over the last few years to try to find better causes. And I do think, in principle, being able to combine these sources of leverage to… I think of it as multiplying 100x, you should be able to get something that I think is better than the AMF-type GiveWell margin.

Alexander Berger: But I don’t think it blows it out of the water by any means. So this prefigures the conclusion in some ways from some of our recent work. I think we think the GiveWell top charities, like AMF, are actually ten times better than the GiveDirectly-type cash transfer model of just moving resources to the poorest people in the world. That already gives you the 10x multiplier on the GiveWell side, and so then we need to go find something that is a multiplier on top of that. I actually think that’s quite a bit harder to do, because that’s a much more specialized, targeted intervention relative to the relatively broad, generic, just give cash to the world’s poorest people, which is a little bit easier to get leverage on.

Alexander Berger: I do think we should be optimistic. I think we should expect science and advocacy causes that are aimed towards the world’s poor to be able to compete with the 10x multiplier of cost effectiveness and evidence that GiveWell gets from AMF to GiveDirectly. But I’m uncertain to skeptical after a few years of work on this that we’re going to be able to blow it out of the water. And so I think about it as, it gets you, with a lot of work and a lot of strategic effort, into the ballpark. And so we have a couple of these new causes that I could talk about where we think we’re in the ballpark of the GiveWell top charities, but we haven’t found anything yet that feels like it’s super scalable and, in expectation, ten times better than the Against Malaria Foundation. We’re working hard to find stuff that’s in the ballpark.
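
(A minimal sketch of the multiplier arithmetic in this exchange, with round placeholder numbers drawn from the conversation rather than Open Phil’s actual estimates: roughly 100x from moving money from a rich country to the world’s poorest people, roughly 10x from GiveWell’s top charities over GiveDirectly-style cash transfers, and an assumed ~10x of leverage from science or advocacy.)

```python
# Round placeholder numbers for illustration only, not Open Phil's estimates.
rich_to_poor = 100        # ~100x: value of $1 to the world's poorest vs. spending it in a rich country
top_charity_vs_cash = 10  # ~10x: GiveWell top charities (e.g. AMF) vs. GiveDirectly-style cash transfers
science_policy = 10       # assumed ~10x leverage from science or policy advocacy

givewell_benchmark = rich_to_poor * top_charity_vs_cash   # ~1,000x a rich-country baseline
rich_country_leverage = science_policy                    # ~10x: leverage aimed only at a rich country
combined = science_policy * rich_to_poor                  # ~1,000x: leverage aimed at the world's poorest

print(givewell_benchmark, rich_country_leverage, combined)
# The combined bet lands in the same ballpark as the GiveWell top charities
# rather than blowing them out of the water, which is Alexander's point.
```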

Alexander Berger: Around the time when I donated, I actually wrote an op-ed in The New York Times arguing that we should have a government compensation system for people who want to donate. Because there’s just an addressable shortage, where you can donate and live a totally good life. It’s not very risky. This is fine. And like, there’s a government system for allocating kidneys to people who need them the most. And it would actually literally save the government money, because people are mostly on dialysis, which is really expensive and it’s very painful and you die sooner. And so giving them a transplant, it saves money, it extends life. And just not enough people sign up to do it voluntarily. And so we can make it worth their while, similarly to how we pay cops and firefighters to take risks.

Alexander Berger: I don’t feel like this is some crazy idea. But I think that the reason it doesn’t happen is actually opposition from people who are worried about coercion and have a sense of bodily integrity as something inviolable.

Alexander Berger: That makes them feel like there’s something really bad here. Honestly, I think the Catholic Church is actually one of the most important forces globally against, in any country, allowing people to be compensated for donation. It’s very interesting to me, because I just do not share that intuition. I mean, if you think about it as treating people as a means to an end, I could imagine it. Like if you thought it was a super exploitative system, where the donors were treated really badly, I could get myself in the headspace.

Rob Wiblin: But then it just seems like you could patch it by raising the amount that they’re paid and treating people better. So banning it wouldn’t be the solution.

Alexander Berger: Yeah, treat people better. One of the things I said in my op-ed, I think, was like look. We should pay people and treat them as good people. A little bit like paid surrogacy. I think people have bioethical qualms about it sometimes, but by and large, people think of surrogates as simultaneously motivated by the money and good people doing a good thing. And I think we should aspire to have a similar system with state payments for people who donate kidneys. Where it’s like, you did a nice altruistic thing, and it paid for college or whatever.

Transcript

Rob’s intro [00:00:00]

Hi listeners, this is the 80,000 Hours Podcast, where we have unusually in-depth conversations about the world’s most pressing problems, what you can do to solve them, and why it’s great to get an offer for your kidney that you can’t refuse. I’m Rob Wiblin, Head of Research at 80,000 Hours.

One of the classic divisions within the effective altruist research community has been between folks who think we should explicitly aim to improve the very long-term trajectory of humanity, even if it’s harder to tell whether we’re doing so successfully, and others who think we should try to make the world better today in concrete observable ways, even if that means working on problems that are smaller in scale.

The former school of thought is usually called longtermism, while the latter hasn’t really had a name, but has sometimes been called neartermism, in order to contrast it with longtermism.

We’ve directly discussed longtermism several times over the years, including in episode 6 with Toby Ord back in 2017, the second 80k team chat with Benjamin Todd in September of 2020, and episode 90 with Ajeya Cotra in January of this year.

But we haven’t had an episode directly about this other broad school of thought, which cares more about having positive impacts sooner, and being able to tell whether what you’re doing seems to be working.

Keiran and I asked around to see who would be a great guest to talk about this, and the word on the street was I should talk to Alexander Berger. Alexander has been a philanthropic grantmaker for ten years, and was recently promoted to lead all of Open Philanthropy’s grantmaking under the Global Health and Wellbeing umbrella, which is Open Phil’s term for non-longtermist approaches to doing good.

Open Philanthropy, for those who don’t know, is a major foundation which uses rigorous effective altruist-flavoured thinking to have more impact with their giving, the same way 80,000 Hours tries to use that thinking to figure out how people can have more impact with their careers. Perhaps unsurprisingly, for a few years Open Phil has been 80,000 Hours’ largest funder, though our grants don’t fall under the global health and wellbeing umbrella.

I’m so glad we’ve finally gotten to addressing this topic, and very glad we chose Alexander to do it with. We had so much material to cover in this interview it could easily have become a mess, but to my great relief it worked out as intended. Alexander was able to give a robust case in favour of neartermism, and then later on explain the arguments for longtermism that he finds most persuasive, which is a sign of someone who knows their material.

If you’d be interested to work on Alexander’s global health and wellbeing team, he highlights some amazing job opportunities in that program in the last third of the conversation. We also talk about his decision to donate his kidney to a total stranger, and why we both think it should be fine to compensate those who offer their kidney to people they don’t know.

Alright, without further ado, here’s Alexander Berger.

Rob Wiblin: Today I’m speaking with Alexander Berger. Alexander is co-CEO of Open Philanthropy (“Open Phil”), where he leads the global health and wellbeing work. The global health and wellbeing umbrella is a broad one, which includes causes across scientific research, policy advocacy, and global development. Right before that, Alexander worked as a researcher at GiveWell, where he was one of their first hires and was working on evidence reviews and cost-effectiveness analysis for potential top charities as early as 2011. In 2011, he also donated one of his kidneys to a total stranger and published an op-ed in The New York Times on why it should be legal to sell your kidney if you so choose. Before all that, Alexander studied philosophy and education policy at Stanford University. Thanks for coming on the podcast, Alexander.

Alexander Berger: Thanks, Rob. It’s great to be here. I’m a long-time listener, so it’s really a pleasure.

Rob Wiblin: I hope we’ll get to talk about the case for working on programs where it’s more practical to demonstrate what impact they’re having, as well as roles that are currently available at Open Phil. But first, what are you working on at the moment and why do you think it’s important?

Alexander Berger: I’m the co-CEO of Open Philanthropy, which is a foundation that currently gives away a couple hundred million dollars a year in a way that aims to maximize our impact. So it’s probably familiar to a lot of your listeners from the effective altruism community, because we’re sort of part of that community. I lead our work on what we’ve recently decided to call global health and wellbeing, which includes giving to GiveWell top charities, a bunch of work in scientific research, policy and farm animal welfare, and also some of the new causes that we’re exploring going into now and I’m hoping to talk about later. We’re growing pretty quickly, and so a lot of my work is around managing a team that tries to find new causes for Open Phil to expand into, as we’re trying to give away our main donors’ money over the course of their lifetimes.

Alexander Berger: My colleague, Holden, who I think has been on the show before, leads the rest of Open Phil’s work, which is primarily aimed at making sure that the long-term future goes well, and we call that longtermism. In terms of why I think it’s important, I think the global health and wellbeing work comes down to the idea that there’s a huge amount of really cheaply preventable suffering in the world. And the idea that by applying some sort of basic analysis and putting resources behind it, we can do a lot to prevent that kind of suffering. So, kids die from malaria for lack of insecticide-treated bed nets that cost a few dollars, and something like a billion people around the world are still living very close to subsistence levels.

Alexander Berger: And of course, billions of farm animals are treated horrendously every year in ways that are totally — in principle — preventable. It’s not like we need farm animals to survive. And so I think all of these problems have interventions that are actually quite cost effective, and so you can end up having vastly more impact on them than you might think would be possible if you were just starting out naively. And we think that against these problems, we can make concrete, measurable progress and learn and improve over time in a way that can set out a new model for philanthropy. And we see that as an inspirational set of problems to put resources against where we can just make a lot of concrete progress.

Rob Wiblin: So in the past Open Phil has had all of these different problems that you’re making grants towards solving, but they’re being mixed together in some sense, they’re all part of Open Phil, and now it’s slightly fissuring into these two different umbrella groups. One which is focused on the long-term future, and another which is focused on helping people in the here and now, which you’re in charge of.

Alexander Berger: I think that’s a reasonable interpretation, although for us it feels a lot more continuous than that. For a long time Holden has actively overseen the longtermist work, which includes work on artificial intelligence risk, biosecurity and pandemic preparedness, and our work to grow the effective altruism community. And I’ve run a lot of our work on scientific research and policy and advocacy. And I manage Lewis Bollard, who runs our farm animal welfare team. And so we’ve already had those structures, but I recently was promoted to co-CEO and we recently decided to name my part of the organization global health and wellbeing, to better acknowledge that cluster of ideas, rather than just justifying it against the longtermist cluster of ideas that Holden mostly oversees.

Rob Wiblin: So you’ve got the global health and development stuff under you, as well as the farm animal welfare and some scientific research and some other issues as well. What do you actually do on a day-to-day basis? There must be so much going on you probably can’t track it on a detailed level.

Alexander Berger: In terms of the actual grantmaking work that most of our program officers work on, I’m not very involved at this point. We delegate a lot to the people who we hired to make those decisions, and we aim for them to have a really high level of autonomy. So, in terms of the farm animal welfare team’s work, I’m really glad you had Lewis on the show (1, 2). Lewis is really the person who’s coming up with our strategy there. And similarly for our science team, that’s on Chris Somerville and Heather Youngs primarily, and not in my head.

Alexander Berger: A lot of what I had actually spent the last couple of years working on is managing this team of researchers that we’re now calling the global health and wellbeing cause prioritization team. It’s devoted to finding the next causes for Open Phil to expand into. Part of the reason why I wanted to come on the show today was that we’re hiring for people to join that team, and I think it could be a really good fit for some of your listeners.

Rob Wiblin: Fantastic. We’ll get back to that towards the end of the interview.

Rob Wiblin: As I said in the intro, you started at GiveWell in the early days, before the term effective altruism was coined. And I suppose only a couple of years after GiveWell was founded. How did you get involved so early on?

Alexander Berger: Like a lot of people, I picked up a Peter Singer book in college. I think I had a pretty…maybe not standard for EA, but a pretty standard overall reaction to that, which was feeling like, wow, these arguments are compelling, but I’m not inspired, I’m being beaten over the head with a stick. And so I wanted to resist them and say, do I really have these kinds of obligations? Am I really supposed to take big actions for the global poor that have no real connection to my life? And I wasn’t sure how to think about it. I had these naive ideas of, maybe people who are living in poverty are actually just totally happy, and they’re not stuck with the concerns of modernity or something.

Alexander Berger: I felt like I couldn’t resolve that kind of question sitting in Palo Alto, and so I took some time off from school and lived in India for a little while. And my experience there, I think, did show that there actually are a bunch of things that we can do to help. But the specific thing that brought me into effective altruism was every day on the way to the school where I was working, I would walk by these kids who were begging, and I really wanted to do something to help the kids. But I knew that if I just gave them money, that would be an incentive not to be in school. And so I was like, okay, what I will do is when I get back to the U.S., I’ll find the best charity I can and give them a couple of hundred bucks.

Alexander Berger: But I still had to walk by the kids everyday, and I still felt really terrible about it. And so I decided, okay, I’m going to find the charity I’m going to give the money to, because maybe if I focus on that, I’ll feel better. And so I think I literally Googled ‘best charity.’ This would have been 2010, and I found the GiveWell website on the first page of Google hits. And… Can I swear on this podcast?

Rob Wiblin: Yeah, absolutely.

Alexander Berger: I remember just thinking holy shit, this is exactly what I wanted. I really cannot remember another product where I’ve had that reaction of like, wow, this was what I was looking for, but better than I could have imagined. I sent an email while I was still in India to a friend being like holy crap, this would be my dream job, but I can’t imagine — they had like three employees at the time — I can never work there. And so I ended up donating a little bit when I got back to the U.S. and Elie emailed me like, “Dear Mr. Berger, thank you so much for your generous contribution.” And I was like, “Well, I’m a junior at Stanford, but I love you guys. Could I volunteer?” And so I ended up spending part of a summer adding footnotes to their website and adding page numbers to citations, and after college ended up joining. And I think, yeah, when I joined, maybe there were four other people at GiveWell.

Rob Wiblin: Yeah, you’re not the only one to come in through searching for ‘best charity’ and then finding GiveWell and then finding everything else. That’s actually a surprisingly common origin story.

Alexander Berger: Yeah, maybe that’ll come up later when we’re talking about how much EA should focus on global health and wellbeing-type stuff versus longtermism.

Global Health and Wellbeing [00:10:01]

Rob Wiblin: Yeah, that’s a great data point. We’ll come back to more of the work that you’ve done over the years towards the end of the show, but let’s push on towards the real meat of the conversation, which is talking about the school of thought that until now has been called neartermism, but both of us probably are pretty happy to rename. Because you’re in charge of this entire huge umbrella of projects at Open Phil, you seemed like maybe the best person to talk to to understand what the people who are focused on improving the here-and-now in the biggest way actually do, and what are the arguments for and against taking that approach to having a really large impact, as opposed to other ways that people try to get an edge.

Rob Wiblin: There is a risk that this episode could get a little bit messy or a bit confused, because we’re hoping to give an overview of quite a broad intellectual tendency within effective altruist thought, which is — well one way of putting it would be, it’s everything other than longtermism. But of course, just as longtermism is a pretty broad school of thought, and there’s lots of motivations that people have for going into that, everything else, or neartermism, or I suppose, global health and wellbeing, is also a very broad school of thought with a lot of factors going into it.

Alexander Berger: Yeah. I think that’s totally right.

Rob Wiblin: But that said, we’ll do our best, and I suppose make reasonable generalizations about this whole category. With that warning out of the way, what do you actually think we should call this cluster or tendency within effective altruism?

Alexander Berger: We just recently went through a long and painful process of trying to come up with a name for it, because sort of accidentally, we had taken to calling it neartermism just by contrast to longtermism. Originally it had been short-termism, and that was even more of a slur [laughs]. So we’ve gone from short to near, and we felt like that was a very marginal improvement, but we thought we could do better. And so we spent a long time going through a process of trying to brainstorm names and come up with a sense of what are we really trying to do? What do we think about the affirmative case for this? Not just what is it defined against. And we did a bunch of surveys of folks inside and outside of Open Phil, and we came away thinking that global health and wellbeing was the best option. We also thought about this phrase ‘evident impact,’ which I noticed that you used in a tweet about the show, and I think in our survey that was like the third most popular.

Alexander Berger: I think there is something in the tendency that that term gets right, which is around the idea of feedback loops and evidence and improving over time versus just this broad, utilitarian feeling of global health and wellbeing. But I like that global health and wellbeing ends up being actually about what we are about, which is maximizing global health and wellbeing rather than maximizing feedback loops or maximizing concreteness, which is a nice positive thing, but not the thing that I see as actually core to the world or core to the project.

Rob Wiblin: Part of the reason why it’s hard to give a name to this may be because there’s three separate factors that are going into people’s decision making that happened to agree on some potential decisions that people should make, but they come in independently. So I suppose neartermism wouldn’t be such an unreasonable name for, say, the moral philosophy position that it’s better to benefit people sooner. If you can help someone today versus someone in a hundred years’ time, it’s just better because it happens sooner rather than later. Or, potentially, I suppose it could be a not-unreasonable name if you think it’s more important to benefit people who are alive now, rather than future generations. But of course, there’s lots of other reasons why people work on things like GiveWell and reducing poverty and so on.

Alexander Berger: Yeah, and it seems to me that nobody… I think the philosophical position that it’s better to help people sooner rather than later does not seem to have very many defenders. And I certainly wouldn’t want to sign up for that position. I think there’s probably some lay appeal to it. I think that part of the concern with neartermism is that it seems to be more about population ethics, or your view on temporal discounting, which is very much not how we think about it.

Rob Wiblin: It’s interesting, it seems like in moral philosophy the idea that sooner rather than later is better is not a very occupied position. Not many people will defend that. Although some people do defend the idea that people who are alive now should be given extra weight. I think I’ve heard reports by people who’ve gone out and spoken to people who have no exposure to effective altruist thinking — or really any moral philosophy — that those people often do have this intuition that sooner is better, or that maybe it just doesn’t matter what impact we have on future generations. But that seems to be an idea that the more people inspect it, the less they tend to hold it, because they find that it’s actually in tension with other commitments that they have.

Alexander Berger: I think that’s right. I think there’s two other clusters that are in this bucket. So, economists actually seem, on average, much more comfortable with pretty high rates of pure time preference, because people behave myopically. And so I think sometimes there’s a sense that we should just read off people’s true values from their everyday decisions. And I think philosophers (and me) are a little bit more skeptical of that. And then I think there’s a second point, which is, I actually think that there’s a weird disconnect between the clean, philosophical population ethics debate over the total view versus the person-affecting view and what you might want to call a colloquial population ethics, which would say… For instance, the person-affecting view seems to have totally crazy (to me) implications that you don’t want to put a lot of weight on, but I think a lot of people think it’s important and good to have a higher average level of wellbeing in the future, even if they’re not particularly into either shrinking or growing the total population. That intuition doesn’t seem to cash out into clean philosophical positions, but I actually place some personal weight on it.

Rob Wiblin: So that’s the moral philosophy angle, but I think actually maybe the dominant grouping is people who think that it’s important to be able to really get to grips with what problem you’re trying to solve and understand the nature of the problem, how it might be solved, and then measure, to some degree, whether you’re actually solving the problem with the projects that you’re funding or involved with. And that’s maybe the motivation that ‘evident impact’ captures really well. It’s like, well, we want to be able to actually see something that’s happening in the world that we can tell is working. And it’s understandable that people have that idea, because if you can’t tell whether what you’re doing is working there’s a pretty high risk that you choose the wrong thing at the start and then you just keep doing it.

Alexander Berger: Yeah. I think that sense of feedback loops is an important piece of the recipe here. But I also feel like, again, you wouldn’t want to go all in on maximizing feedback loops, right? It feels like it’s an obviously myopic goal, like it’s lacking some sort of terminal traction, I think. And so I think that the bigger thing, in my mind, is more… It’s tied to a sense of not wanting to go all in on everything. So maybe being very happy making expected value bets in the same way that longtermists are very happy making expected value bets, but somehow wanting to pull back from that bet a little bit sooner than I think that the typical longtermist might. And so maybe a different way to put it would be, I think a lot of it is a matter of degree, rather than kind.

Alexander Berger: And another way I’d try to get at this is to say a lot more of the disagreement seems to be about epistemics, and how to reason under uncertainty, as opposed to values, or even things like how important is a feedback loop in order to ensure success probabilistically. That’s something that I think in principle a longtermist could totally have a view on, and I don’t think that our disagreements really come down to what parameter would you have for that.

Cluster thinking [00:16:36]

Rob Wiblin: Yeah. I think you’re gesturing towards the third intellectual tendency that I think drives global health and wellbeing. I guess I don’t have a great name for this, but I’ve categorized it as epistemic modesty in my notes here. And maybe some of the aspects of that are like, wanting to follow common sense more than do things that other people think are really strange, and wanting to learn from other people around you and generally have something that’s consistent with what broader society thinks is valuable. And also to spend less time thinking about philosophy and more time thinking about how we make concrete changes in the real world. So more economics and social science and politics perhaps, and a bit less moral philosophy and epistemics and decision theory and that sort of thing. Is there a way of wrapping up what this stream of argument is?

Alexander Berger: Yes. I totally think this is the crux, this is the main thing going on in my mind. And I think I would call it cluster thinking. My colleague Holden wrote a blog post in 2014 on sequence versus cluster thinking, and it’s about a style of reasoning about cost effectiveness and expected value. And the idea of cluster thinking is not that you don’t care about maximizing expected value, you might still totally care about that, but it’s how do you seek the truth? And is the path a series of linear arguments, linearly multiplying things together, or a bunch of different outside views and heuristics and judgments and weights on different factors that you still might try to aggregate up into some form of expected value thinking? You can still want to maximize expected value.

Alexander Berger: But there’s a question of, when push comes to shove, how much are you writing down and multiplying through a bunch of assumptions, versus saying, hey, maybe I don’t understand all of this. Maybe I’m confused. I want to pull back and put more weight on heuristics and priors. ‘Outside view’ is going to be a fuzzy term that I’m not going to be able to defend, but things like that, relative to a very formal, explicit model of the world.

Rob Wiblin: So Holden wrote this post about cluster thinking versus sequence thinking. To give a brief explanation of cluster thinking, it’s never giving tons of weight to a single argument, no matter how powerful it would seem if it was right. So preferring to have many somewhat compelling arguments than just one really compelling calculation that seems good. And I suppose that would lead to this epistemic modesty idea, or caring about priors and caring about the outside view, rather than just philosophical arguments that you can run through and claim you’ve proven.

Alexander Berger: And I think maybe it’s motivated by epistemic modesty more than it causes epistemic modesty, or something. I think actually a part of a little bit of the global health and wellbeing perspective — or really worldview — relative to the longtermist worldview, is being a little bit more comfortable with pointing towards a cluster, or heuristic, and saying yeah, I’m not totally sure I can give you a tight, concise, philosophical defense of that, but I’m pretty committed to it relative to the practice of tight, concise, philosophical defenses of everything.

Alexander Berger: In terms of what I think is the main motivation for why people work in global health and wellbeing, I still think their main motivation is the object-level case of, you can do a lot to prevent people’s suffering. And you can do so in really cost-effective ways. We treat animals really terribly, and you can just actually make a lot of progress against that goal. And so the back and forth on epistemic modesty and cluster thinking, I think that might explain why people might not want to go all in on longtermism, or care about these separate-from-longtermist kinds of philosophical problems. But why are people into global health and wellbeing? I think that the dominant explanatory factor is they see problems in the world that they can make progress against, and they’re compelled to do so. And that is the first-order answer I want to give there, as opposed to, well, they have these complicated philosophical views about how you think about longtermism. Which yeah, sure, some people have, but I think the first-order answer is that people care a lot and are trying to make the world a better place, and they see these as good ways to do so.

Rob Wiblin: That makes a lot of sense. And I guess, yeah, there’s a challenge where we’re maybe constantly defining this set of ideas in contrast with the other options that people who have thought about this a lot are interested in, but of course, for most people it’s just the object-level argument of: I can benefit people a ton, it seems like there’s strong evidence that I can do that, and that would be valuable, so that’s what I’m going to do.

GHW cause areas [00:20:37]

Rob Wiblin: Okay. So I think we’ve captured the main streams of thought that feed into people wanting to work on global health and wellbeing, but let’s try to be more concrete and maybe a bit less philosophical, in keeping with the global health and wellbeing mentality. What are the global health and wellbeing cause areas that Open Phil focuses on?

Alexander Berger: So, there’s a few that we’ve been working on for a few years, and then some new ones that we’re getting started on now. So the biggest one that we’ve funded by far is the GiveWell top charities. That’s been something like half of the total global health and wellbeing giving to date, and those charities do things like give out insecticide-treated malaria nets to prevent the spread of malaria, sometimes direct cash transfers, and other kinds of global public health interventions where there’s a lot of evidence about what works and what’s cost effective. Farm animal welfare, you’ve had my colleague Lewis on to talk about that extensively, so I won’t go too far into it. Scientific research, particularly in global health, I think there’s just a lot of areas where there’s not a commercial incentive to do a lot, but public-spirited R&D can go really far to help solve some of the diseases of poverty.

Alexander Berger: We’ve also done a bunch of work on U.S. policy causes, including criminal justice reform and some work around land use and housing and macroeconomic stabilization. We’re not currently planning to grow that work as much because we think we probably can find some better opportunities in the future. And so that gets to the new causes bucket. And so two that we just posted job postings for are South Asian air quality and global aid advocacy. South Asian air quality is around air pollution, especially in India where it’s a huge problem for health. And global aid advocacy is about funding advocacy to increase or improve the quality of foreign aid. And we think that kind of leverage might make it more cost effective than some kinds of direct services. The last area where we’re focused and growing right now is around that cause prioritization team that does the research to pick those new causes.

Rob Wiblin: Nice. And what are a couple of archetypal grants from the global health and wellbeing program?

Alexander Berger: I think the most stereotypical, in some sense, would be some of the GiveWell top charities. So take the Against Malaria Foundation, which distributes bed nets in African countries, primarily, where malaria is a big problem. And those bed nets are treated with an insecticide that kills mosquitoes. And there’s a bunch of randomized controlled trials over decades that show that these work. AMF has actually contributed to some of those more recent RCTs on new generations of bed nets that help with insecticide resistance. And we continue to think that the quality of evidence and the cost effectiveness of those opportunities is actually just really quite hard to beat, even if you become willing to take more risks and look at less evidence-based interventions. And so I expect that to continue to be a big part of our portfolio going forward.

Alexander Berger: In farm animal welfare, my colleague Lewis has come on the show. I think a huge outlier opportunity has been corporate campaigns, especially around eliminating battery cages for egg-laying hens. There’ve been a lot of successes in the U.S. I think in some ways a lot of the credit for those predates us, but taking some of those campaigns internationally, like with The Humane League, for instance, has been a major area of focus for us, and I think is a place where we’ve been able to have some impact. And then another big bucket of grants is in scientific research, where a lot of our work is focused on neglected diseases of the poor, where there’s not necessarily a big market. A good example is our work on malaria where I think we gave $17.5 million a few years ago to a consortium called Target Malaria that’s trying to develop a gene drive for mosquitoes.

Alexander Berger: So a particular species of mosquito contributes to most of the malaria burden in Sub-Saharan Africa, and a gene drive is a new biotechnology that could allow you to potentially crash the population of that one species of mosquito in order to prevent the spread of malaria going forward. I think that’s a good example of a potentially higher return activity. It’s certainly not something that the pharmaceutical industry is going to invest in.

Rob Wiblin: Among the people who are drawn to global health and wellbeing work, maybe one of the key splits is between people who are most focused on the evidence-based stuff and people who are more excited about the hits-based giving/high-risk stuff that comes with scientific R&D — or I suppose policy advocacy and so on, where causal attribution is a bit harder and it’s far harder to estimate ahead of time what the odds are of a given grant really paying off. Because by their natures, policy advocacy and scientific R&D are just super unpredictable, they require difficult judgment calls. Do we have any terminology perhaps for distinguishing, I suppose, the people who are most focused on the evidence-based stuff, where they want to, say, be distributing bed nets, where you can really demonstrate that this works, from people who want to work on problems that you can understand and grapple with and see in broad terms whether they’re being solved, but who are happy to use a hits-based approach to tackling them?

Alexander Berger: I don’t think we have a super crisp description of that. I mean, we definitely think about the second bucket that you were just describing as the hits-based approach, but the global health and wellbeing team at Open Phil is very happy with both of those approaches. One parallel I draw is venture capitalists versus value investors. And I think sometimes you might have people who are dispositionally closer to one or the other, where they’re like, they’re all about base rates. And they’re really about finding the beaten-down, neglected old company that’s going to slightly beat the market over the future, versus the sexy new Facebook that’s going to blow things away in the future. We’re really excited to make both of those bets.

Alexander Berger: I think program officers and areas sometimes have a disposition or an approach that’s more oriented for one versus the other. But I actually think you can even have an analytical base-rates value-investor approach, even in relatively high-risk areas like policy advocacy or like scientific research. And so I think a lot of it is… It’s disposition plus also a leverage or risk assessment of an area. And there’s not, I think, one clean, tight distinction that I want to draw between them.

Rob Wiblin: For the purposes of this conversation, we’ll talk about that stuff as hits-based giving. I think it’s slightly important to bracket that off, because a lot of the things that you might say about distributing bed nets maybe don’t apply to the scientific R&D hits-based stuff, and vice versa. That’s an important division within this area.

Alexander Berger: One interesting and potentially controversial view that I hold is that neither of those dominate each other. I think we might get to this later in the conversation, but I actually think that if you just try to be as rigorous as you can about where will the actual expected value calculations take you within that broad scientific research versus policy advocacy versus the GiveWell top charities… My actual take is that I don’t think one is going to come out purely ahead. I think you’re going to actually want to end up pursuing a mix of opportunities. But we can get into more of the reasoning for that later.

Rob Wiblin: That’s a really interesting conclusion, that on average, they can both be somewhat similarly effective — which means that you would then probably split your resources between them depending on what looks best. That distinction looms large for me because I remember five or ten years ago people would debate this distinction a lot: Should we go into science R&D, or should we do the stuff that’s really most rigorous and most proven to work? These days I don’t hear that so much anymore. It seems like more people involved in effective altruist thought take this agnostic approach, where they’re like, “Yeah, maybe policy could be better, maybe it isn’t, we’re just going to have to take it on a bit of a case-by-case basis.”

Alexander Berger: Yeah, I think that’s totally right. And I think, again, if you had more of a dominance argument, where it’s like, look, the returns to the policy are just always going to outweigh the returns to evidence-based aid, then I think you would end up with more back and forth and debate between them. But when you see the arguments for cost effectiveness actually ending up in the same ballpark, the same universe, it’s just like, cool, we can all get along. People are going to sort into buckets that appeal to them, or styles that work for them, or interventions that they’re more personally excited about. I think that’s totally healthy. And then when some of them think, actually, the expected value-type argument seems to lead you to think one of these is going to just totally destroy the other, that’s where I think you get a little bit more friction and tension and debate sometimes.

Rob Wiblin: I think my intuition — and also some of the arguments I’ve heard — suggests that in theory, you would expect policy and science funding to be more effective on average, although super volatile. We call it hits-based giving because most of the time it doesn’t work, but then every so often it really hits it out of the park. Maybe we can return to that later, because you wrote this really interesting blog post about this topic a couple of years ago that we can cover.
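
(A toy expected-value comparison of the two styles Rob describes, with invented numbers purely for illustration: a hits-based grant that usually fails but occasionally pays off enormously can land in the same expected-value ballpark as a reliable, evidence-based grant, just with far more variance, which is why neither approach obviously dominates.)

```python
# Invented numbers for illustration only, not Open Phil estimates.
def expected_value(outcomes):
    """outcomes: list of (probability, value) pairs."""
    return sum(p * v for p, v in outcomes)

evidence_based = [(1.0, 10)]               # reliable ~10x return, e.g. a proven bed net program
hits_based = [(0.95, 0), (0.05, 250)]      # usually nothing, occasionally a huge policy or science win

print(expected_value(evidence_based))      # 10.0
print(expected_value(hits_based))          # 12.5 -- same ballpark, far more volatile
```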

Contrasts with longtermism [00:28:08]

Rob Wiblin: We’re going to spend some of the rest of this conversation contrasting the arguments for working on global health and wellbeing versus longtermism. A slight challenge that I’ve had putting together the questions here is that sometimes these things really overlap, or it could be hard to come up with clean things that might only be justified on one set of grounds, because of course, people who are focused on improving the long-term future also care about health and wellbeing.

Rob Wiblin: That’s the whole motivation. They’re just thinking about where it might cash out in that, and thinking about doing it further in the future rather than directly in the present. But also, even for so many things that longtermists are spending money on, like pandemic control or preventing wars and all these other things, you could possibly make a case that those things might be the best way of improving health and wellbeing in the present. It could get harder for some stuff that won’t pay off for a very long time, or is perhaps the most speculative, but some of the projects that longtermists do these days are not that wacky anymore — some of them have come to approach common sense a bit. And so maybe there’s this middle ground in between the most philosophical, perhaps, the most out-there longtermist stuff, and then perhaps the more sensible global health and wellbeing work.

Alexander Berger: Before we dive in there, I know your audience probably is pretty familiar with longtermism, but is it worth just giving the definition so that people will have something to react to? You should probably do it instead of me.

Rob Wiblin: Sure. In a nutshell, longtermism is the idea that when we’re evaluating ways that we can potentially have the largest impact on the world, we should mostly think about what impact those actions are going to have quite far in the future — potentially hundreds of thousands of years in the future. The idea is that there’s so much potential benefit in future generations rather than just the present generation, so if you think about and try to estimate what impact it’s going to have on all of that long-term future, then that will end up dominating the most important moral consequences. And that should then end up usually guiding or determining your decision. Is that about right?

Alexander Berger: That sounds good to me. I was just going to say that I’m very… My own perspective on this, which is more personal than professional, is that I’m very sympathetic to the idea that ethical value is distributed that way — vastly loaded into the far future. And then the follow-up claim that I’m a lot more skeptical or uncertain of is that it should guide our reasoning and actions. It just strikes me as totally plausible that our ability to reason about and concretely influence those things is quite limited. And so that’s not to say… I’m actually extraordinarily glad that Open Phil is, I think, by far the largest funder of longtermist initiatives in the philanthropic world — I think that’s really important work. I’m really glad we do a bunch of it, but I think I would be pretty personally reticent to go 100% all in on that, because I feel like there’s just a really high chance that somehow we’re confused, or we’re going to mislead ourselves if we try to tell a story where we can predict the future super well.

Rob Wiblin: Yeah. Have you noticed this seeming increasing overlap where there’s plenty of stuff where it’s a little bit hard to say, is this in the longtermism bucket or is this in the global health and wellbeing bucket? Maybe it’s in both.

Alexander Berger: Yeah. I do think that there are a bunch of things that you could potentially justify on either grounds. But I want to push back on this, because I think that there can be an instinct to want to present longtermism as — in practice — a little less exotic than I think it actually is. And so I’m curious, when you think about it, what are the stakes of longtermism? Or, where do I, Rob, think about the impact of my career? What portion of your expected value of longtermist impact in the future is running through like, artificial intelligence risk?

Rob Wiblin: Me, personally? I suppose a large fraction of it. I’m not quite sure exactly what the question is, but I suppose if humanity never creates machines that are smarter than humans, then it seems like the stakes of the future are just potentially way, way smaller, because it would probably mean that we could never really leave the solar system, and therefore most of the energy and matter in the universe is unavailable to us.

Alexander Berger: I think that’s right. I also just… When you look at the chart in Toby Ord’s book of his assessment of the various contributors to risk, I think AI, and then secondarily, risks from biotechnology, are the dominant factors, and then everything else… My recollection is that there’s quite a steep drop off. And so I feel like it comes to a debate of a thick versus thin concept of effective altruism or longtermism.

Alexander Berger: It’s like, the abstract statement of longtermism as caring a lot about the far future that should guide our actions…I think you’re totally right, that could justify a lot of different kinds of work, and a lot of different people could sign on to that abstract statement. But the actually existing EA community is quite loaded on longtermism and the GiveWell top charities, in spite of the fact that “We should try to do more good rather than less” might be an anodyne statement that a lot of people would be pretty ready to sign on to. I think that longtermism… I get nervous when it feels like people aren’t totally owning the degree to which it loads on very specific views about…particularly I think AI and biorisk.

Rob Wiblin: I think AI is pretty dominant, and we do talk about it a great deal. One reason that we don’t just talk about it all the time is that there’s lots of other people out there who want to do other things, and not everyone is going to be suited to working on artificial intelligence specifically. So while 80,000 Hours should put some effort into that, we should also talk about a wider range of things, because it allows us to have more impact. But I guess I want to push back a little bit on thinking that it’s just all in AI, because as you’re saying, there’s biotechnology, but then I also think there’s the risk of war, or the risk of political collapse or a permanent negative political state, or just of humanity adopting bad values that are really persistent.

Alexander Berger: I would be really interested in your sense of what portion of the actual existing effective altruism community is primarily working on those things. I think it’s a very small portion.

Rob Wiblin: I think among people who 80,000 Hours talks to, many people are interested in working on that. And I guess part of that comes from, they don’t feel like they have a good personal fit for working on artificial intelligence or biorisk, but also just many people think these are potentially really important issues, that they’re risk factors, or things that could ultimately have a big bearing on where humanity ends up going in the long-term future, even if maybe that ends up being mediated in some way by what happens with artificial intelligence in the long term. So I think we’re not all in on working just on machine learning and related issues.

Alexander Berger: I think it’s great that 80,000 Hours has episodes on these other issues. I think I perceive 80,000 Hours as engaging in the project of trying to broaden longtermism a little bit, and I’m happy with that project. Or at least interested and open to it. But my perception… Maybe it’s just where I sit in San Francisco, but the Open Phil longtermist team in fact works on AI, biorisk, and growing the EA community. I perceive those as being the dominant three focuses of effective altruists who identify as longtermists. And so the idea that people are doing a lot of work on climate change, and preventing war… To me, I’m like, I don’t know the people you’re talking about. Which is not to say that they don’t exist, but I don’t feel like that is a dominant driver of my perception of longtermists as they exist in my world.

Rob Wiblin: Yeah, that makes sense. I think you might be getting a bit influenced by the location in San Francisco and the connections that Open Phil has to the AI scene. I don’t want to soft pedal this and say that AI isn’t… I think it probably is the dominant stream, or the number one concern, if you had to list them, within longtermism. But I think there are plenty of people who are thinking about governance, and institutional decision making, and international relations, and these other things that perhaps are more positive or negative risk factors for how the future goes, even if the era, say, when humanity creates AI that is capable of doing things that humanity can’t do… That might end up being a decisive era, but the circumstances in which that happens could end up affecting what the outcome is. And of course there’s other people involved in longtermism who don’t really buy the AI story. They think it’s going to happen much further in the future, or they don’t think it’s going to be such an interesting time.

Alexander Berger: Yeah, totally. I think that 80,000 Hours’ perch of career advising might give you a much better cross-section of people who are motivated by this stuff than I have visibility into. From where I sit it seems like there’s a philosophical interest in defending the principles, and so people are interested in cashing out things like what would patient longtermism look like, or some of these other ideas that are outside of the AI space. But I guess my view is just very loaded on Open Phil’s own longtermist work, which is very focused on AI and biorisk. I mean, I think biorisk is a good example where the bio team ended up doing some work on COVID, but it really is not their dominant priority to prevent things like COVID. They’re really focused on how you can prevent pandemics that are ten or one hundred times worse than COVID. And so things that might have helped with COVID, but would not help with a ten or one hundred times worse pandemic, are actually not their top priority.

Alexander Berger: I don’t think that’s wrong. I think it’s the correct assessment given the focus on the long-term future of humanity, which I believe in. So I actually think we are making the correct (all things considered) judgment call there. But I sometimes worry that we soft pedal the degree to which that’s a controversial take when we say things like, “Oh yeah, longtermists. They’re doing normal things like pandemic preparedness. And like, look, pandemic preparedness is such an obviously good thing to do.” I would just emphasize that — I’m not totally sure about this, but — it wouldn’t shock me if in terms of the dollars, the science team ended up spending more on pandemic preparedness around COVID than our biosecurity team did. That could be wrong. I didn’t run a report before this, but that is, I think, a relevant indicator as to where — at least at Open Phil — our longtermist motivation is cashing out.

Rob Wiblin: Yeah. It’s a really interesting point. I think on the COVID-19 one, I’ve spoken to people who both think that the important pandemic prevention work that you would do in terms of the long-term future is very different from what you would do if you want to control COVID-19, and also people who think no, it’s quite continuous with that, because anything that helps you control pandemics in general is going to help you control the very worst pandemics as well. So, yeah, there’s an active debate about that. I guess maybe it doesn’t get publicized that much. I haven’t actually seen this written up publicly anywhere, but it’s something that people disagree about. And I guess it’s understandable that that’s not going to be a super simple question to answer.

Alexander Berger: I actually think there’s an interesting case where the philosophical commitment to longtermism pushes you towards one side of that debate, because basically the more true it is that anything that you do to help with COVID is going to help with long-term biorisk, the more long-term biorisk actually starts to look quite crowded and a lot less neglected. And so in that world, if you are motivated by longtermist biorisk versus preventing COVID-like pandemics, you’re really going to want to lever up on whatever are the things that are going to matter for long-term risks, but not COVID. So it’s like, you could totally rationally think most of the work to prevent COVID in fact does simultaneously reduce long-term risks, but given that everyone else is now excited about preventing COVID, the rational response is for us to go focus on the things that aren’t going to be addressed by the world’s response to COVID-like pandemics, because the world’s already getting its act together. I think that might be, again, a totally correct logical inference, but I think it points to the ways in which the philosophical implications of longtermism are a little bit more radical than you might think upon just hearing the ideas.

Rob Wiblin: That does seem like it could be a reasonable interpretation, or if you were previously focused on biorisk because of the long-term impact, and you thought that you didn’t have to do anything unusual to deal with the worst-case scenarios, and now a whole lot of people are going to be diving into that, then maybe you think, “Wow. We’re going to be doing all of this sequencing and all this mRNA vaccine work, and it’s probably going to work, so maybe the risk of extinction is now far lower than it used to be.” I could imagine someone believing that.

Rob Wiblin: Maybe let’s launch a little bit more formally into a discussion of the cut and thrust on the arguments in favor of working on longtermism versus global health and wellbeing.

Arguments for working on global health and wellbeing [00:39:16]

Rob Wiblin: This is maybe one of the most well-worn debates within effective altruism, and we’ve outlined the case for longtermism many times on the show. So I want to spend some time letting you elaborate on the arguments in favor of listeners going into careers that benefit global health and wellbeing, rather than things that are doing more unusual longtermist work. What’s the key argument in your mind for working on global health and wellbeing instead of working on longtermist stuff?

Alexander Berger: I think arguments for global health and wellbeing are first and foremost about the actual opportunities for what you can do. So, I think you can just actually go out and save a ton of lives. You can change destructive, harmful public policies so that people can flourish more. You can do so in a way that allows you to get feedback along the way, so that you can improve and don’t just have one shot to get it right. And at the end, you can plausibly look back and say, “Look, the world went differently than it would have counterfactually if I didn’t do this.” I think that is pretty awesome, and pretty compelling. But honestly, if somebody were coming to me and saying, “I buy the longtermist gospel. I’m all on board.” I would be super uninterested in trying to talk that person out of it. I think that is great. I think there’s not enough longtermists in the world.

Alexander Berger: I really think the fact that longtermism is new and small is totally crazy; it should be huge, and it should be a really popular idea. The world is in a crazy place in that people don’t understand and appreciate our position in the world and the universe, and how big the future could be. I get a lot of value from seeing more concrete impacts of my work and feeling like I can work on problems where I can make progress, but I’m not at all interested in talking people out of spending their career on longtermism.

Rob Wiblin: That’s good to know, but it’d be helpful to have you maybe lay out — not so much a devil’s advocate case, but maybe like steel man the case in favor of working on global health and wellbeing? If there was someone who came to you who was on the fence between doing something that was more unusually longtermist and something that was, say, within your own umbrella of the work at Open Phil, and they said, “What’s the strongest case that you can offer for doing the latter rather than the former,” what kind of arguments would you make?

Alexander Berger: I think I would make two cases. Again, I don’t think these are “correct intellectual”…I’m making air quotes, you can’t see them…“correct intellectual arguments” against longtermism, but I think that there are things that would make it correct for a person to not necessarily choose to work on a longtermist career. And I think a really central one in my mind is I think we just don’t have good answers on longtermism. The longtermist team at Open Phil is significantly underspending its budget because they don’t know where to put the money.

Alexander Berger: When I think about the recommended interventions or practices for longtermists, I feel like they either quickly become pretty anodyne, or it’s quite hard to make the case that they are robustly good. And so I think if somebody is really happy to take those kinds of risks, really loves the philosophy, is really excited about being on the cutting edge, longtermism could be a great career path for them. But if you’re more like, “I want to do something good with my career, and I’m not excited about something that might look like overthinking it,” I think it’s pretty likely that longtermism is not going to be the right fit or path for you.

Rob Wiblin: Maybe you don’t want to call out specific examples, but can you gesture towards cases where you think things are either a little bit too anodyne or it’s hard to make a case that something is really robustly positive?

Alexander Berger: So I think this actually gets to be a pretty big debate, and I think it’s worth having. So on the anodyne side, you were alluding earlier to the fact that you’re now counseling a lot of people who are interested in international relations, and trying to make sure that there’s democracy, and… I actually put that word in your mouth, you didn’t use it, but I feel like sometimes this manifests as folks who are interested in “improving societal judgment,” and I’m skeptical that that’s a thing. I would be really interested in seeing a concrete measure and a sense of what that is and how it relates to the outcomes that we care about. But honestly, when I think about trying to move an amorphous, vague, hard-to-quantify measure like societal judgment — versus just making people healthier and wealthier, which is what I think of as, in some sense, the core project of the global health and wellbeing team — I’m like wow, I’m so much more excited about making people healthier and wealthier, because we know how to do that. That’s a real thing.

Alexander Berger: I think there’s a lot to be said for the historical association between those things and other good qualitative outcomes. And so the idea that because something is branded or inspired by longtermism and is presented in those rhetorical terms… I think it leads people to be a little bit too credulous sometimes about the longtermist impact being obviously better than just working on conventional global health and wellbeing causes, where we actually know a lot about what you can do and how you can improve the world. Now obviously that whole argument works best if you generally think that good things are correlated. And the more you think that, say, economic growth is terrible because it’s going to accelerate the arrival of the singularity and end humanity, the less my advice to do generically good things is going to appeal to you. But I’m also like wow, you’ve gotten to a pretty weird place, and you should really question how confident you are in those views.

Rob Wiblin: So putting my very, very longtermist hat on, I guess the question that I’d ask there is in terms of how you’re trying to shift society in one way or the other in order to try to make the long-term future go better. Is it more valuable to have people be richer and healthier, or is it better to — at least in theory — have people be more educated or more informed about the world as a whole, or better able to make collective decisions so that they don’t do disastrous things? I guess one thing might be to say well, the latter is just really hard to do, so you’re not going to have much impact there. But it sounded like you’re also saying, are there really ways of changing that meaningfully that particularly bear on how humanity goes in the long-term future, or is it all just washed out?

Alexander Berger: It’s not that I think it’s necessarily washed out. It’s that I’m very interested in better understanding the actual concrete measure that people are trying to increase, and how correlated they think it is with the outcome. I think education is a good example. You can measure education, you can improve education, you can use standardized test scores. There’s a lot of things that you can do to make that number go up and to the right. And then a) I literally think health investments might be the best way to do that, I don’t think that’s at all out of the question, and b) I’m pretty skeptical that that is necessarily going to do better for making the long-term future go well than anything else. It’s just a very broad cluster.

Alexander Berger: Another example that really sticks with me for this — and it’s just a parallel, so it’s not going to move you that much, but — I did debate in high school, and there’s a really common argumentative style that links everything’s impact back to American hegemony, and says that American hegemony is so important for the world. If American hegemony decays, then the world’s going to end in nuclear war. And so it’s essentially a really proto-longtermist argument. And my reaction was always just like, American hegemony is not a scale that you can easily quantify and say like, did this move up or down, and where it’s like, oh, if it goes below a critical threshold, suddenly the world’s going to have nuclear war. And similarly for education, I’m just like, the putative connection with the long-term impact I feel like is so weak as to be meaningless, and the impact that you’re trying to have there… I’m just not sure why it goes to education rather than wealth or health.

Rob Wiblin: Yeah, I think by the time you’re doing something as broad as just trying to improve education or wisdom across a broad swath of the population in general, then it does really seem like it’s converged on… Now you really are using a very similar currency between, well, why don’t we just make people richer or why don’t we give them more spare time to think about things that are important to them, or improve their health. I guess among serious longtermists who have tried doing things that are related to decision making and judgment, mostly they have… Basically this happened because people got excited about the Tetlock forecasting work and his efforts to try to improve decision making within foreign policy and the CIA and organizations like that, organizations that do seem more connected directly with ways that things could go incredibly badly due to causing conflict or just having very bad policy decisions at the national level.

Rob Wiblin: And I think that stuff… There is a more plausible claim that that is more levered on the long-term future than just making people in America richer in general. But then people have struggled to find other stuff within decision making, improving societal prudence, that also seems both tractable and likely to affect the long-term future in particular.

Alexander Berger: My colleague Luke Muehlhauser, who does Open Phil’s work on this — and I think it’s cool, interesting stuff, and I’m happy that we fund some of it, but — we’ve gone back and forth on this a lot, because I am skeptical about the… Again, I think when you use big words, it can feel like oh yeah, those are probably connected. But when you try to actually specify what’s the causal chain, it starts to just feel quite tenuous to me. So, if you look at the gains in Tetlock’s work… There are gains, but I don’t think that they’re astronomically huge. The difference in Brier score between people who are trained and people who are not trained is meaningful, but again not astronomical. And the idea that making a random CIA analyst better at predicting the death of some foreign leader because they know how to use base rates… I’m just having a hard time buying the story where that’s preventing the AI apocalypse.

Rob Wiblin: Well, I’m not sure whether it will affect the AI stuff. I think people who are working on that have a broader view.

Alexander Berger: There’s a premise here that I would be really interested in seeing more on, and I haven’t done the work to find it — and by the way, I think I sound more anti on this stuff than I am, I think it’s cool stuff — which is: is instrumental rationality in foreign affairs actually correlated with peace? I think a core premise is that if people were better at making predictions, that would prevent conflict. And I just don’t know why I would believe that premise. That does not seem at all like an obvious premise to me. According to realist international relations, I think it could go the other direction.

Alexander Berger: I don’t know the literature on this. I could be totally wrong. But it strikes me as a good example of something where there’s what I take to be a core premise of the argument that, as far as I know, no one has written down or argued about. And I’m pretty worried that if somebody is walking around thinking, “I’m having really big impact in the world because I’m aimed at longtermist goals,” but a bunch of core steps have never been articulated or studied… I’m just getting pretty nervous that this is the right decision-making framework.

Rob Wiblin: I actually did put that question to Tetlock, either in my first or second interview with him, or possibly both. I think people find it very intuitive that instrumental rationality is likely to reduce the risk of unintended disasters like wars, just because that kind of thing is undesirable to everyone. And so you would think people who are very incompetent and just blundering about making terrible mistakes, that’s likely to lead to disastrous outcomes more often than people who are very prudent and have a good idea about what the consequences of their actions will be. But I agree it’s not completely obvious. Because if you’re better at predicting the effect of your actions, then you can potentially ride the line a bit more and be more aggressive and try to extract more from other countries because you think you can outfox them.

Alexander Berger: I think that’s right. If you look at current tensions around Taiwan and China — again, this is very much not my area, but — my impression is that the U.S. position is strategic ambiguity. And if China had more confidence about how the U.S. would respond, it might be more likely to invade Taiwan rather than less. And so, again, I could be wrong, this is very much not my discipline. It’s not what I work on. But I worry that people are too willing to load up on pretty abstract priors in these kinds of settings.

Broad longtermism [00:50:40]

Rob Wiblin: We’ve gone quite a bit down the track, maybe we should back up. I think there’s been this attempted synthesis of longtermist thinking and maybe global health and wellbeing causes, sometimes called ‘broad longtermism.’ This is the idea we’ve been talking about: the best way to improve the long-term future isn’t to do anything about some specific technology or some specific person or institution, because it’s too hard to predict what’s really going to matter in the future. Instead we should just improve society in a very broad way, and then that ends up not looking that different from what you would do if you were just working on the global health and wellbeing program with no particular concern for the long-term future. But it sounds like you think that this doesn’t make a whole lot of sense. If you want to think about the long-term future, you should be doing something weird, and there isn’t this convergent middle position of broad longtermism.

Alexander Berger: So I’m not actually sure that if you care about longtermism you should be doing something weird, but I’m somewhat skeptical that caring about broad longtermism should get you to something that neither looks good according to something like global health and wellbeing, nor is especially levered on the long-term existential considerations. Things like broad international peace stand out to me as an example of this. I’m just reporting a visceral reaction, I don’t think I have any devastating arguments, but I’m really worried about your ability to make traction on that. I think it might be quite crowded, depending on your interpretation, so I’m not sure about neglectedness and tractability. And then when it comes to importance… The idea that, in the long run, we are better off because some people did some work on international peace and security rather than because people did some work on economic growth or global health is, to me, just a premise that really, really demands justification.

Alexander Berger: That is a very negative version. The pro argument would be: If you imagine a much safer, more egalitarian world — or a set of parameters for the settings on the world that is more in that direction — and you think about what are the best things to do, I actually think that there are fairly plausible longtermist arguments for much more conventional stuff like just saving lives. It grows the number of brains, and that probably increases innovation a little bit, and I think that probably should flow through a little bit. And so when I hear people advocating for things that seem to me like much worse interventions with much worse feedback loops versus things that we know and think are good, I’m a lot less confident than I think other people are that the idea that they’re aimed distinctively at the long-term future is going to actually make them better at delivering on the long-term future.

Alexander Berger: This struck me actually… You had this previous guest who I really enjoyed your conversation with, Christian Tarsney. I went and read his paper after the conversation, and he has this argument like, look, longtermism should go through because in the worst case, if you spent all of global GDP on it, you should be able to reduce existential risk by 1%. And that should really mean that the average cost effectiveness is actually really good once you count up all the future generations. I’m just really worried about that form of argument, because I think there’s a ton of sign uncertainty. Just doing GDP stuff might be better for the long-term future than doing a ton of stuff that’s ‘aimed’ at the long-term future.

Rob Wiblin: Could you elaborate on the argument that saving lives and making people richer plausibly does have significant positive flow-through effects to hundreds and thousands of years in the future?

Alexander Berger: One example is actually in Bostrom’s original essay.

Rob Wiblin: Astronomical Waste?

Alexander Berger: Yeah. So I think there’s two kinds of arguments that you see people make for this. One that I actually feel mixed on — and I think ends up being more of an argument for what you might call medium-termism or something — is just the impact of compounding growth. And I think this argument is interesting because I think it’s super importantly right, historically. If you look at what has caused economic growth over time, innovation has compounded in a really important way, and population compounded along with it. If you saved a life 10,000 years ago, I actually think that the impact today is vastly bigger than the immediate impact on the life you saved 10,000 years ago, at least in expectation. That’s because more people were able to come up with more innovations, and have bigger markets, and are able to do more trade, and those all feed economic growth and innovation.

Alexander Berger: In some sense all of history moves forward. And because history could be so big, that could actually be a really big impact. There’s a more EA version of this from Astronomical Waste about if we let a millisecond go by where we’re not taking over the stars, the loss of that is actually really huge. I think that the typical existential risk arguments against that are strong, and I buy that it makes sense to focus more on existential risk than on these broad growth cases if you’re in the longtermist framework. But then I’m like, okay, if we’re not going to focus on AI or biorisk and are instead doing these vague big buckets of things… I’m not so sure that we’re doing that.

Rob Wiblin: I see, okay. So once you’re out of doing targeted longtermist work that looks like it’s especially levered on how the long-term future might go — in some way that you’re reasonably confident about, or have a particular reason to think has very high expected value — and you’re just doing general societal improvement, then you think it’s not so clear that just making people richer and healthier isn’t having a larger impact, either by speeding up progress or just by improving society in all the ways it’s been improved over the last 200 years — which seems like it’s been positive, and seems like it’s put us in a better position.

Alexander Berger: Yep. So I think that improving things does seem good. It might have made things riskier from a technology perspective for a while, but without technology I think asteroid risk and other natural risks would eventually end humanity. So having technology, if we manage our own risks in the meantime, could enable us to last vastly longer than we would if we were just roving bands of Neanderthals or whatever for a million years. And so there is a sense in which, over the very, very long run… Technology — in the next centuries or whatever — might pose more existential risk than it’s reducing. But at the million-year level, I think it’s really clear that technology is net reducing risk. And that’s an important reason why you might have a prior of good things going together: if I have lots of good opportunities to just make things better, I should expect that to — in a very vague, amorphous way — make the long run go better too.

Rob Wiblin: So this has been a slightly vexed issue over the years. One way of putting it would be: would humanity’s long-term prospects be better if global GDP growth were 3.5% rather than 3%, or 3% rather than 2.5%? And at first blush, it doesn’t seem like the effect of GDP growth this year on the long-term future of humanity is that big. So we’re probably talking about relatively small effects, but we can think of a bunch of them. One is, as you were saying, it speeds up innovation, it speeds up education, it means that there are more smart people out there. Because we’re reducing poverty, more people have an opportunity to contribute, and so on. Then maybe you might think that the fact that China has gotten richer really quickly over many decades has potentially created the risk of a new Cold War. Maybe that could end up doing humanity in, and we’d be better off with a unipolar world, or potentially we would want countries to develop more gradually so their culture shifts as they become more powerful.

Rob Wiblin: And there’s a bunch of other things you could maybe put on this list on the pro and con side. I mean, I think the effect is probably positive, although I guess we probably both agree that the effect is relatively small.

Alexander Berger: Yeah, and I don’t want to over-claim here. I’m not that confident that the effect is positive. Basically, my concern is, I’m in favor of these concentrated specific bets on specific global catastrophic risks, because I think that they are big risks that are worth reducing. And it’s crazy how much they’re under-invested in by society. And then when we get to trying to increase other vague aggregates for putatively longtermist reasons, rather than more normally good vague aggregates, I think we might be in an uncanny valley where we’re not actually getting that much — and we might be losing a lot — in terms of optimization power, ability to evaluate ourselves, ability to learn and improve along the way.

Rob Wiblin: That’s interesting.

Targeted longtermism [00:58:49]

Rob Wiblin: Let’s maybe turn away from the improving broad aggregates, broad longtermist approach to thinking more about the targeted stuff, which I guess it sounds like you’re more enthusiastic about. If someone were choosing a career, say between one of those more targeted things — pandemic prevention or preventing the worst pandemics, or preventing the worst artificial intelligence outcomes — and taking a job, maybe at Open Phil, working under global health and wellbeing… It sounds like you think both choices would be reasonable. But what could you say in favor of the global health and wellbeing approach?

Alexander Berger: One thing that I would be interested in seeing — and it relates to my previous point about sign uncertainty between conventional global health and wellbeing causes and broad longtermist causes — would be more engagement with sign uncertainty on some of the conventional longtermist work. I don’t think it’s clear that improving health and economic growth is going to be sign positive for longtermism, or definitely cost effective from a long-term perspective or something — I just think it’s an argument that’s worth grappling with. I think if you have the opposite perspective and think we live in a really vulnerable world — maybe an offense-biased world where it’s much easier to do great harm than to protect against it — then increasing attention to anthropogenic risks could be really dangerous in that world. Because, as we discussed, not very many people go around thinking about the vast future.

Alexander Berger: If one in every 1,000 people who go around thinking about the vast future decide, “Wow, I would really hate for there to be a vast future; I would like to end it,” and if it’s just 1,000 times easier to end it than to stop it from being ended, that could be a really, really dangerous recipe where, again, everybody’s well intentioned — we’re raising attention to these risks that we should reduce — but the increased salience could turn out to be net negative.

Alexander Berger: I think there’s a couple of examples from history that this reminds me of. So, Michael Nielsen is a quantum physicist who is popular on Twitter and is a friend of mine. I think he’s one of the smartest people I know. He has tweeted about how he thinks one of the biggest impacts of EA concerns with AI x-risk was to cause the creation of DeepMind and OpenAI, and to accelerate overall AI progress. I’m not saying that he’s necessarily right, and I’m not saying that that is clearly bad from an existential risk perspective. I’m just saying that strikes me as a way in which well-meaning efforts to increase the salience and awareness of risks could have turned out to be harmful — in a way that I haven’t seen get a lot of grappling or attention from the EA community. I think you could tell obvious parallels around how talking a lot about biorisk could turn out to be a really bad idea.

Cluelessness [01:01:26]

Rob Wiblin: Yeah, it’s interesting that on both the most targeted longtermist work and on the broader work people talk about this term ‘cluelessness.’ Basically because the world is so unpredictable, it’s really hard to tell the long-term effects of your actions. If you’re focused on the very long-term future, then it’s just plausible that it’s almost impossible to find something that is robustly positive, as you say. On almost anything that you can actually try to do, some people would argue that you could tell a similarly plausible story about how it’s going to be harmful as how it’s going to be helpful. We were talking about how you can do the same thing with the broader work — like improving judgment, maybe that’s bad because that would just lead people to be more aggressive and more likely to go to war. I suppose if you put that hat on, where you think it’s just impossible to predict the effect of your actions hundreds of thousands of years in the future, where do you think that leads?

Alexander Berger: I think it’s super interesting and super complicated, and it’s actually been something I’ve been grappling with recently, because I’m not much of a moral realist, but in practice, I think about myself as mostly a utilitarian. The two uncertainties about utilitarianism that have always been most compelling to me… One I’ve had for a long time is that I’m just not sure what I’m trying to maximize. Hedonism doesn’t seem very compelling to me, objective list theories don’t seem very compelling to me. So I’m like well, there’s something out there that is good, and I want there to be more of it, and we should have more of it, and that seems good. So I’ve always recognized that my maximand is under-theorized.

Alexander Berger: But the second thing that I don’t think I had adequately grappled with before is this idea of cluelessness. I don’t think it just cuts against longtermism. I think it cuts against basically everything, and just leaves you very confused. One place where I encountered it is, I mentioned earlier this idea that if you lived 10,000 years ago and you saved the neighbor’s life, most of the impact that you might have had there — in a counterfactual utilitarian sense, at least in expectation — might have run through basically just speeding up the next 10,000 years of human history such that today maybe a hundred more people are alive because you basically sped up history by one-millionth of a year or something.

Alexander Berger: That impact is potentially much larger than the immediate impact of saving your neighbor’s life. When that happens with very boring colloquial-type things… By the way, that argument is from a blog post by Carl Shulman.

Rob Wiblin: Yeah, we’ll link to that.

Alexander Berger: The impact that you actually would want to care about, maybe, from my utilitarian ethics, is this vastly disconnected future quantity that I think you would have had no way to predict. I think it makes you want to just say wow, this is all really complicated and I should bring a lot of uncertainty and modesty to it. I don’t think that should make us abandon the project, but it does make me look around and say everything here is just more complicated than it seems, and I want to be a little bit more sympathetic to people who are skeptical of totalism and making big concentrated bets, and are maybe not even that interested in altruism. Or they’re just like, I’m just doing my thing.

Alexander Berger: Again, I think it would be better if people were more altruistic. I don’t think that’s the right conclusion, but really genuinely feeling the cluelessness does make me feel more like, wow, the world’s really rich and complicated and I don’t understand it at all. I don’t know that I can make terribly compelling demands on people to do things that are outside of their everyday sphere of life.

Rob Wiblin: Yeah, it’s interesting, even with the thing where you saved someone’s life and then that leads to a hundred more people living over the next couple of centuries because of all of the children they have, even there you can’t be confident that it was positive over that time, because we don’t know whether people’s lives are positive rather than negative. Especially when they’re living in pre-industrial times, it’s pretty possible that the life of a typical person in the 18th century could have been net negative. When you think about the effect that they have on other people, or the effect they have on the natural world, animals… Yeah, it is just really humbling once you start actually counting, itemizing all of these different impacts and trying to put signs on them and seeing how they keep flipping over time as things play out. I think if I was going to just give up on trying to improve the world, I think it would be concerns like this that would make me lose my spirit.

Alexander Berger: Yeah, my colleague Ajeya Cotra came on the show a while back and she had this metaphor that I really loved that really stuck with me of a train to crazy town. She had just gone super deep into the potential scale of how big human civilization or human-derived civilization could be, and it turned out that it seemed to depend on these considerations around what portion of their future resources a civilization that takes over the stars would devote to simulating our society right now. She was just like, “This is a really weird place to be.” And it made her want to step back and say, “Wow, how confident should I be in all of this?” She’s still a hardcore longtermist, but I feel like that sense of, this is going really weird places, and I’m just not sure that I am very capable of reasoning very well about it… That specific analogy of the train to crazy town really stuck with me and made me want to just be a lot more modest about everything and a lot less confident that I can make good predictions about the world, or even know what is good. I feel honestly a little unstable about that. I’m not sure where I’m going to land.

Rob Wiblin: Yeah. I think I don’t get off the train to crazy town because it’s crazy per se, but I think the more you keep considering these deeper levels of philosophy, these deeper levels of uncertainty about the nature of the world, the more you just feel like you’re on extremely unstable ground about everything. Because if I kept thinking about this and I kept unveiling these new crucial considerations, then that could just flip all of the things that I think now. Then at some point you’re like, “Well, there’s probably just all of these things, all of these considerations that I haven’t thought about, that mean that what I’m doing now is very naive. And so why would I even bother to do it? And I’m not intelligent enough to think them through and get to the right answer.”

Rob Wiblin: So, that could be demoralizing. I guess it gets people maybe to want to try to cling to the hope that like, maybe I can find some worldview that is hopefully reasonable, and maybe I’m going to bank on this more common-sense approach — or on going down this line of reasoning to the point where I felt like I could actually understand what was going on and reach conclusions, and then just stopping there — maybe that’s my best shot, even if there’s a good chance that it’s going to end up being wrong.

Alexander Berger: I want to defend caring about the craziness, or maybe it’s just a motivational defense. But I think once you’re in the mode of, “Wow, everything’s chaotic, I’m just giving up,” I feel like there’s something bad about that, there’s some failure. And being able to say, “Okay, I want to simultaneously hold in my head the idea that I have not answered all the questions, and my life could totally turn out to cause great harm to others due to the complicated, chaotic nature of the universe in spite of my best intentions.” I think that’s just true. And that it’s something that we all need to own and acknowledge and learn how to inhabit. Sorry, suddenly this became the Buddhism podcast, right?

Rob Wiblin: Yeah, this just got dark.

Alexander Berger: But I think it is true that we cannot in any way predict the impacts of our actions. And if you’re a utilitarian, that’s a very odd, scary, complicated thought. But I think that in some sense, basically ignoring it and living your life like you are able to preserve your everyday normal moral concerns and intuitions to me seems actually basically correct. Again, I don’t want to say that this is privileged normative terrain, where if other people don’t have the same conclusions I do, they’re wrong about it. But my take is that going back and saying like, yeah, I’m going to rely on my everyday moral intuition that saving lives is good, and I’m going to keep doing that, even though I recognize the normative confusion or foolishness — I think that’s actually just probably good life advice. I think it’s universalizable — I think if everybody followed it, it would be good — but I’m not saying that it’s philosophically true.

Rob Wiblin: It’s very practical and very appealing on a personal level, but of course the problem is that the cluelessness bites there as well — you can’t escape. Because you’re doing this thing that you think is good, but the whole point is that you have good reason to think it will have significant effects, and you don’t really have very compelling reasons to think those effects will be positive.

Alexander Berger: I think that’s right. But I think this came up actually in a previous episode, you’ve got to leave the house. So in some ways, again, I think this project of thinking it all through should destabilize you in some sense. Then it’s just like, but actually, what are you going to do? I think it would be a mistake to feel like you’re on super high normative ground where you have found the right answer. I think the EA community probably comes across as wildly overconfident about this stuff a lot of the time, because it’s like we’ve discovered these deep moral truths, then it’s like, “Wow, we have no idea.” I think we are all really very much — including me — naive and ignorant about what impact we will have in the future. I think finding a place that works for you and then ignoring it is basically the right advice.

Feedback loops [01:10:01]

Rob Wiblin: Let’s switch gears for a minute and go back to a more affirmative case for global health and wellbeing. What are some examples of valuable feedback loops that you’ve observed that have helped people to have more impact than they might have if they weren’t able to see what was going on?

Alexander Berger: I think a really central one is just being able to see what works and then do more of it, which is a funny low-hanging fruit. But I think often, in other categories where you don’t even know what intermediate metrics you’re aiming for, you don’t have that benefit. So for instance, the amount of resources flowing into cage-free campaigns in farm animal welfare has, I think, well over 10x-ed because they were working. And it was like, oh okay, we have found a strategy or tactic that works, and we can scale. I think that accounts for a very material portion of the whole farm animal welfare movement’s impact over the past decade. But if you were somehow unable to observe your first victories, you wouldn’t have done it. So I think that there’s something about literally knowing if something is making progress. That’s a really, really important one.

Alexander Berger: Also, on the other side, being able to notice if the bets aren’t paying off. So, we have a program that’s focused on U.S. criminal justice reform. We don’t do calculations for every individual grant, necessarily. We make the big bets on Chloe Cockburn, who leads that program. But say after five years the U.S. prison population was growing. We don’t observe the counterfactual, but given our cost-effectiveness bar and the level of reduced incarceration we would need to be hitting for this to pencil out compared to other opportunities, that would raise questions for us. Being able to observe the state of the world and ask, “Is the state of the world consistent with what it would need to be in order for these investments to be paying off?” is an important benefit that you get on the neartermist global health and wellbeing side that you can’t necessarily get in the longtermist work, I don’t think.

Alexander Berger: Another one is just really boring stuff, but you can run randomized controlled trials that tell you, okay, the new generation of insecticide-treated bed nets is 20% more effective, because resistance to the old insecticide had been reducing effectiveness by 20%. You wouldn’t necessarily have known that if you couldn’t collect the data and improve. So none of those are necessarily order-of-magnitude kinds of things, but I do think if you think about the compounding benefits of all of those, and the ways in which the longtermist project of trying something and maybe having very little feedback for a very long time is quite unusual relative to normal human affairs… It wouldn’t shock me if the expected value cost of having no feedback loops is a lot bigger than you might naively think. That’s not to say that longtermists have no feedback loops, though — they’ll see: are we making any intellectual progress? Are we able to hire people? There are things along the way, so I don’t think it’s a totally empty space.

Rob Wiblin: Yeah, longtermist projects are pretty varied in how much feedback they get. I mean, I suppose people doing really concrete safety work focused on existing ML systems — trying to get them to follow instructions better and not have these failure modes — in a sense they get quite aggressive feedback, because they can see whether we’ve actually fixed this technical problem that we’ve got.

Alexander Berger: Yeah and I think that work is awesome. My colleague Ajeya has written about how she’s hoping that we can find some more folks who want to do that practical applied work with current systems and fund more of it. Again, this is just a heuristic or a bias of mine, but I’m definitely a lot more excited to bet on tactics that we’ve been able to use and have worked before, relative to models where it’s like, we have to get it right the first time or humanity is doomed. I’m like, “Well, I think we’re pretty doomed if that’s how it’s going to be.”

Rob Wiblin: Yeah. I mean, this keeps turning into darkness again, but yeah, it is possible. You could imagine you’re unsure: is this a case where we have to just get everything right the first time, and we’re not going to have any feedback? Or maybe this is one where we can cross the river by feeling the stones, and we’ll solve the technical problems as they come up. Even if you place 50/50 chances on those two scenarios, you might just think, well, the first is a write-off — if that’s the case, there’s almost nothing we can do — so we should just bank on the second one and cross our fingers.

Rob Wiblin: In which case you are, again, working in an area where you can get at least a reasonable level of feedback. I guess within bio as well, I imagine many of the grants that the bio team at Open Phil is working on, they do get feedback on, well, were these policies implemented? Has this technology actually worked? Are the machines being bought by anyone? So there’s a whole spectrum. I suppose the global health and wellbeing stuff tends to be stronger on the side of getting feedback.

Alexander Berger: Yeah, I think that’s really right, and the place where I think the feedback can be worst is when you can’t observe whether progress on your intermediate goals is actually advancing your ultimate goal. In biosecurity, maybe you could say, “Okay, we advanced the field of metagenomic sequencing, and we’re seeing more sequencers being used.” But what was your thesis? That metagenomic sequencing was going to be able to reduce risk, correct? That’s a thesis that I basically don’t see how you make good progress on. Again, it depends a little bit on your threat model. This goes back to the distinction you were drawing before: do you need stuff that’s only going to be helpful for existential risk, or is it going to be in use every day?

Alexander Berger: I actually am pretty sympathetic to the perspective that if it’s not used every day, it’s not going to be helpful. So you might be able to observe, is the metagenomic sequencing apparatus that we invested in being used for everyday diagnostics? Is it detecting new pathogens? So I think you could have a sense that, well, at least that part of the bet was on track. But we never observed the base rate of existential risk. So the fundamental question of how much of it we are reducing feels quite irresolvable to me.

Rob Wiblin: Yeah, yeah. So, I guess there’s two stages of feedback. One is: did the project accomplish its intermediate goals, the ones that you plausibly can see? Then the second step is: does that actually help, which is a whole lot less certain. And I guess with existential risk in particular, the problem bites especially hard, because the only time you would ever… Well, I guess you can’t really know, because you’re not going to be around to see the case in which you fail, among other problems.

Alexander Berger: I mean, when we’re all dying of a pandemic, we can recognize that we failed to prevent it, but yeah…

Rob Wiblin: Yeah, stick that in our impact evaluation as our last act. I suppose the feedback loop case in its strongest form, as you were saying, relies on this compounding: say you work on two different projects and you notice that one of them is getting traction and seems to be accomplishing its intermediate goals, so you pile more into that, and then you start improving it based on the parts of that project that seemed to be working really well, and then you see that those intermediate goals do seem to be cashing out in the ultimate thing. So you go into that more, and then maybe you branch off something that’s similar but a bit different, and see if it’s better. So over time you get this iterated improvement that maybe really does add up to being a primary concern, even if year to year it’s not so central.

Alexander Berger: Yeah, much better put than I would have said it.

Grants vs. careers [01:16:26]

Rob Wiblin: Do you think the considerations in favor of the global health and wellbeing approach are different when you’re trying to disburse money as grants? I guess especially when you have billions of dollars to get rid of, as opposed to maybe when you’re just planning out your own career and you’re going to be there watching the whole thing over a 40-year period.

Alexander Berger: I do think it’s different. In a lot of ways, the career case for longtermism seems better to me. I also think it makes sense for the EA community to be pretty invested in longtermism. One way to put it would just be, when I think about who are the major longtermist thinkers and what are the major longtermist institutions, I feel like a huge portion of them are coming out of and motivated by the effective altruism community, because it’s, in a lot of ways, I think a weird, surprisingly new idea. So effective altruism is very important to longtermism, in a way that isn’t as true for global health and wellbeing. Scientific research for global health, global health funding itself, policy causes — there’s billions of dollars going to those things, and there are already large existing communities around them.

Alexander Berger: So I think that both gives you scaffolding for good ways to spend money today in really concrete, helpful ways — and I think that does matter when you’re trying to give away a lot of money — but it also makes it so that in some ways, I think global health and wellbeing needs effective altruism a little bit less, especially in terms of sourcing people for careers, than I think longtermism needs effective altruism and needs EAs.

Alexander Berger: So I think that’s a pretty important argument for… As an individual, I think if you have a career you’re excited about that would be longtermist, I think that could be a really good pro. And when it comes to money, I think there’s just a lot more opportunities that are able to absorb a lot more money in some sense on the global health and wellbeing side. So I think that’s an important place for Open Phil to diversify in.

Rob Wiblin: Yeah, I was thinking one way in which longtermism might be a bit easier to implement in your career is that you don’t have the same principal-agent issues that Open Phil perhaps has when offering grants. So, inasmuch as we’re very concerned with the challenge of figuring out whether anything you’re going to do is actually going to be useful, Open Phil might have a lot of trouble figuring out, say, what grant is going to improve international relations or reduce the risk of a war between the U.S. and China. But if you could become someone who was deeply embedded in the State Department and focused on exactly those issues related to China — by that point you would have decades of experience in which you’ve learned to understand the system and hopefully be able to have some predictable, positive impact there — you can then actually just apply that knowledge. You’ll be getting feedback from your own work and seeing what impact your actions have, in a way that’s very hard to then pass back to a grantmaker in order to convince them that you’re having a predictably positive impact. Does that resonate?

Alexander Berger: I think that’s right, but there’s a couple of countervailing factors that I would want to weigh on the other side, in my mind. So one is that careers are just really long, and the more specialized your career is, I think the higher risk you face that longtermist thinking will leave you behind. So maybe right now people are interested in international relations, but the probability that in 20 years people still think that’s one of the top career paths for longtermists strikes me as…just given our maturity and where we are in the intellectual development, and thinking through all of these crucial considerations that we talked about…quite, quite low.

Alexander Berger: So, if you think that the career capital you’re going to develop is highly specific, and the only case for it is a pretty contingent longtermist case, I think that would make me really nervous. Because a career is just…it’s a long thing, and it’s yours. So I’m a little bit more reluctant to make one big expected value bet with my career than with money. In my career, I have to keep feeling enthusiastic, I have to keep going to work every day. So it also feels to me like personal taste is going to do a lot more work in career decision making than I think it needs to do in donations. I mean, obviously you can diversify donations and so do some of both, but I think people — over the length of a career — are pretty likely to flame out if they’re not motivated reasonably intrinsically by the work that they’re doing every day.

Rob Wiblin: Yeah. I think I agree with that. I mean, the argument relies on you potentially advancing your career to the point where you’re a greater expert in the area than say Open Phil is when inspecting grants, which means that you have to stick with it for a long time, which is why I think we pretty rarely recommend that people go into some path that they’re not excited to be in. So, I guess I think of that as a slightly orthogonal issue.

Worldview diversification [01:20:43]

Rob Wiblin: One really important consideration that plays into Open Phil’s decisions about how to allocate its funding — and also it really bears importantly on how the effective altruism community ought to allocate its efforts — is worldview diversification. Yeah, can you explain what that is and how that plays into this debate?

Alexander Berger: Yeah, the central idea of worldview diversification is that the internal logic of a lot of these causes might be really compelling and a little bit totalizing, and you might want to step back and say, “Okay, I’m not ready to go all in on that internal logic.” So one example would be just comparing farm animal welfare to human causes within the remit of global health and wellbeing. One perspective on farm animal welfare would say, “Okay, we’re going to get chickens out of cages. I’m not a speciesist, and I think that a chicken-day of suffering in a cage is somehow very similar to a human-day of suffering in a cage, and I should care similarly about these things.”

Alexander Berger: I think another perspective would say, “I would trade an infinite number of chicken-days for any human experience. I don’t care at all.” If you just try to put probabilities on those views and multiply them together, you end up with this really chaotic process where you’re likely to either be 100% focused on chickens or 0% focused on chickens. Our view is that that seems misguided. It does seem like animals could suffer. It seems like there’s a lot at stake here morally, and that there’s a lot of cost-effective opportunities that we have to improve the world this way. But we don’t think that the correct answer is to either go 100% all in where we only work on farm animal welfare, or to say, “Well, I’m not ready to go all in, so I’m going to go to zero and not do anything on farm animal welfare.”
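
To make the instability Alexander is describing concrete, here is a minimal sketch of the expected value calculation he has in mind (every number is hypothetical, chosen only for illustration; none of them are Open Phil or GiveWell estimates):

```python
# Hypothetical illustration of why naive expected-value maximization over
# uncertain moral weights tends to allocate either everything or nothing
# to a cause like chicken welfare.

def chicken_share(p_chickens_count, chicken_weight,
                  chicken_days_helped_per_dollar=100,
                  human_dalys_per_dollar=0.02):
    """Share of the budget a pure EV-maximizer gives to chicken welfare."""
    ev_chickens = p_chickens_count * chicken_weight * chicken_days_helped_per_dollar
    ev_humans = human_dalys_per_dollar
    return 1.0 if ev_chickens > ev_humans else 0.0

# Small shifts in guesswork inputs flip the allocation from all-in to nothing:
print(chicken_share(p_chickens_count=0.10, chicken_weight=0.01))   # 1.0 (100% chickens)
print(chicken_share(p_chickens_count=0.10, chicken_weight=0.001))  # 0.0 (0% chickens)
```

Worldview diversification, as Alexander goes on to describe it, is the refusal to let a knife-edge calculation like this one set the whole portfolio.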

Alexander Berger: We’re able to work on multiple things, and the effective altruism community is able to work on multiple things. A lot of the idea of worldview diversification is to say that even though the internal logic of some of these causes might be totalizing and demanding, and ask so much of you, being able to preserve space to say, “I’m going to make some of that bet, but I’m not ready to make all of that bet,” can be a really important move at the portfolio level, both for people to make in their individual lives and for Open Phil to make as a big institution.

Rob Wiblin: Yeah. It feels so intuitively clear that when you’re to some degree picking these numbers out of a hat, you should never go 100% or 0% based on stuff that’s basically just guesswork. I guess, the challenge here seems to have been trying to make that philosophically rigorous, and it does seem like coming up with a truly philosophically grounded justification for that has proved quite hard. But nonetheless, we’ve decided to go with something that’s a bit more cluster thinking, a bit more embracing common sense and refusing to do something that obviously seems mad.

Alexander Berger: And I think part of the perspective is to say look, I just trust philosophy a little bit less. So the fact that something might not be philosophically rigorous…I’m just not ready to accept that as a devastating argument against it. So the intuitive appeal of saying look, there’s no one worldview where this is the right reaction, but in some sense it is a good way to compromise across incommensurate sorts of worldviews or incommensurate sorts of goods…I think there’s something that’s correct about that intuitive appeal, even though I don’t think I have a devastating argument about the reasoning there. But hopefully even if you’re a longtermist who’s ready to be all in on longtermism, the appeal to how should we think about the farm animal welfare case might motivate you to see, oh yeah, I get why some-but-not-all is a desirable place to be able to end up, even though the internal logic of any of these positions is going to be very greedy, and going to say, “I want all of it, and I round your thing to zero.” We want to do a bunch of these things that we think are really cost effective, really high-impact ways to help the world, but we don’t want to say everything goes to zero.

Rob Wiblin: To some extent we don’t have to go through everything to do with worldview diversification, because fortunately we covered that in quite a bit of detail with Ajeya Cotra back in the episode that came out in January of this year. So people can refer back to that if they want a lot more information. I guess it seems like Open Phil has three main worldviews that it embraces. One is human health and wellbeing, and then there’s animal welfare, and then there’s longtermism. I guess an interesting question is, why not add more? What’s the limiting principle on this worldview diversification?

Alexander Berger: I think that’s a really hard question. I would say I’m a relative partisan of fewer is better, where I think that there’s a lot to be said for having a unified metric where you can compare things in one term and optimize, and say, I might not prioritize education because I think health is just consistently more valuable. If you’re so obsessed with worldview diversification — or so ready to diversify at any choice point — then I think you end up just recapitulating the diversity of the world, and you’re not able to have a perspective, or an edge, really. So where to draw the line between too much and too little worldview diversification is, I think, a really tough problem that I don’t have any good first-principles answers to. I actually think we end up in some pretty post-hoc places in some interesting ways.

Alexander Berger: So one of the things that the cause prioritization research team has been thinking through recently is how should we think about the value of economic growth? Because I think there’s a community around progress studies that thinks about the value of economic growth and says, “That’s actually the key thing historically. It’s the thing that has made the world go well, and we should be really excited about accelerating more of it in the future.” When we’ve tried to analyze how good the policies that could accelerate more economic growth are, I think our current take — and my colleague Tom Davidson has been working on this and is planning to hopefully publish it later this year — is that you can actually end up very much in the ballpark of normal global health and wellbeing kinds of priority areas. Where it’s like, probably there’s advocacy that you could do around economic growth that looks like it could compete with the GiveWell top charities, according to these estimates, but not necessarily beat them.

Alexander Berger: So, it’s in the mix, I think, according to where our estimates are landing, and I think that’s an attractive place. On the other hand, if you had a value that came out of that process that was astronomically higher, where it’s like anything you could do for economic growth is worth a million times saving one life today, I think we could be having a worldview diversification conversation about it and say, “Okay, maybe we want to give economic growth 20% of the portfolio,” or something like that. Where it’s between the farm animal welfare and longtermism amount, or something. Obviously that 20% is totally made up and uncertain, I don’t have a rigorous philosophical defense of it, but I think that when you end up…your optimizations, your comparable units are getting you to an answer that seems reasonable, then you don’t need the worldview diversification. But sometimes the worldview diversification feels really necessary if one consideration is threatening to swamp the whole calculation, if it’s seemingly out of proportion to the stakes.

Alexander Berger: I guess one other point on this — and I promise I’ll shut up and let you talk eventually — is that we actually think we are making a pretty big contrarian bet even on the global health and wellbeing side, where for most people, their central worldview is that they care about their family, they care about their community, they care about their nation. So even the starting point of the GiveWell-type charities being cosmopolitan and saying, “We value lives all around the world and not just in our local community,” I think is actually quite a contrarian, expected value-flavored bet relative to the world. So even just saying we have three worldviews that we compare within but not between I think is very, very far in the reductive, expected value-maximizing, calculating direction relative to philanthropy writ large, or just how most people think about the world.

Rob Wiblin: If you think that’s contrarian, I’ve got some people you should meet, Alexander…

Alexander Berger: I feel like that’s the theme of this podcast. I’m weirder than 99.9% of people, but not quite as weird as the last 0.1%.

Rob Wiblin: Yeah. I guess it’s the, what is it, the narcissism of small differences? Although it is interesting that these quite small differences lead to such different conclusions, which I guess is the whole issue. That the conclusions are so unstable based on whether you go 99.9% or 99.99%.

Alexander Berger: I think that’s another consideration that should make you think it’s fine to follow your passion and follow whatever feels compelling to you. I’m actually quite open to people prioritizing that and going all in on it and not trying to hedge themselves in some way. But I do want them to hold it a little loosely, and I feel like it goes back to our conversation earlier about cluelessness.

Rob Wiblin: Yeah. Okay, we’ll come back to worldview diversification as part of wrapping up on this section, but a few more small points. I guess there’s some folks who decide not to focus on longtermism in part because they think humanity surviving for millions of years or ever having effects outside of the solar system is just wildly unlikely, and can be dismissed. Do you have any sympathy with that position?

Alexander Berger: I have shockingly little sympathy for that position, especially the millions of years idea. The idea that it’s wildly unlikely seems totally crazy to me. Lots of other mammal species survived millions of years. I think humanity now is way, way more widespread than a typical mammalian species. And so the idea that it’s just in principle impossible seems totally crazy to me.

Alexander Berger: And I find this idea actually extremely inspiring. Humans are a very young species. We’ve only had modern economic growth for a couple hundred years. So what we think of as history is just so wildly short, and the future could be unbelievably, massively large.

Alexander Berger: I think there’s something really beautiful about contemplating the scale of what is possible, and how good it could be. Going back to our cluelessness conversation, I do think we have huge uncertainty. We don’t know what we’re doing. We’re really naive in some very deep sense, but also we’re in this crazy position in the universe. And I would be really sad if Open Phil wasn’t able to simultaneously put a lot of resources into improving lives in concrete, obvious ways, and trying to make these really high-stakes bets on how to make the vast future of humanity go a lot better.

Instrumental benefits of GHW [01:30:22]

Rob Wiblin: Okay, to wrap up this section on arguments in favor of global health and wellbeing, let’s talk a bit about some of the instrumental reasons why it’s worth Open Phil — or the effective altruism community — putting additional resources into global health and wellbeing beyond what they otherwise would, based just on the expected value of the direct impact of that work. What kinds of indirect or instrumental benefits of having that approach alongside others stand out to you?

Alexander Berger: Yeah, I actually wanted to get to this earlier, because when we were talking about what’s the case for worldview diversification and what’s the case for not just going all in on longtermism, I said one part of that case is some skepticism that you really want to bet everything on one brittle expected value argument. I think we just want to stop and step back and say like, “Eh, let’s not go that far.” The second case is internal to the expected value thinking, actually just thinking that something like global health and wellbeing is just really justified on longtermist grounds. And one part of that case was saying I think there’s a lot more sign uncertainty on longtermism than people think. And I think in some ways, actually just saving lives and doing pedestrian good things today could be really good, and could actually be better according to longtermism than things that are branded as longtermism.

Alexander Berger: I don’t think that’s true for AI risk, but again, then you have the concern over sign uncertainty, and maybe some of the things that we might do on AI risk could actually be increasing it rather than decreasing it. So that’s all about the longtermist side. But then I think that there are these practical benefits of the global health and wellbeing work that I actually think can just add a lot of value by longtermist lights, because of those practical benefits. A lot of these arguments are in our original post on worldview diversification from several years ago. And one of them is just optionality. If we think we’re going to go through a lot of different causes over time as our thinking changes — and I think that’s pretty plausible — global health and wellbeing gives us a lot of knowledge and opportunity to build up experience in causes that might turn out to be relevant.

Alexander Berger: So we know a lot more about policy advocacy in the U.S. than we did five years ago when we were getting started. And that doesn’t primarily come from the work on longtermism, even though it might turn out to really significantly benefit the work on longtermism. A second practical benefit that I think makes a big difference is around feedback loops. We talked earlier about some areas of just being able to see concretely what is working. I think we can learn lessons about what kind of grants go well versus don’t. I think there are generalizable lessons there. And a third example is these very concrete relationships, and grantees that have been able to move from one side to the other.

Alexander Berger: So there’s an organization that we fund called Waitlist Zero that primarily worked on advocacy around allowing compensation for living kidney donors. And during the pandemic, the person who had started that was able to pivot with some of our funding to start this organization 1Day Sooner that was working on allowing human challenge trials for COVID vaccines. And if we hadn’t been funding the work on Waitlist Zero, I don’t think he would have been in that position to start the work on human challenge trials. Another example is that a grantee from our macroeconomic stabilization portfolio has turned out to be really helpful for a bunch of policy work in biosecurity and pandemic preparedness. And that’s something I don’t think we would necessarily have expected at the beginning. And there’s even more pedestrian stuff, like recruiting and, eventually, fundraising. We think Open Phil will be a lot more conventionally successful and attractive if it’s a 50/50 longtermist and global health and wellbeing organization, versus if the only thing we work on is AI risk. We think we would be leaving a lot of value on the table there.

Rob Wiblin: Yeah. I think all of that makes sense. The reason I put this in fairly late is that I feel a bit uncomfortable with the idea of advocating for doing this big program and spending all this money just because it incidentally benefits this other program. There’s something about that that feels uncomfortable. I guess you want to put the affirmative case for it first, rather than just say, “As a side effect, it benefits the longtermist work.”

Alexander Berger: I think that’s right, and I do think that the affirmative case is first. The affirmative case is you get to do a lot of good. I think that’s why people do it. I think it’s why people are interested in it. And all this other stuff is the third thought. But I don’t actually think it’s that crazy as a consideration, because basically the longtermist view should just be a lot more comfortable with this, because the longtermist view is kind of like everything is these weird bank shots to influence the distant future. So anything that you might do to help somebody today according to longtermism is fundamentally towards the terminal goal of their distant impact on the far future. And so if what you think is that most of the moral value is in the far future and we should act accordingly, you’re committed to all of these weird bank shots, because there’s nothing else, in some sense.

Alexander Berger: So I think if people were cynical about it and they’re like, “Oh, I’m going to pull the wool over people’s eyes,” I think that would be really unfortunate. But I think being honest that we worry that we might have gotten a little bit crazy-seeming here, and we like to do concrete, good things because that helps people understand that we are motivated by concrete, good action… I think there’s something about actually making your appearance in line with your true deep values there where that’s not deceptive, that’s just like honest and correct. And I find that totally compelling.

Rob Wiblin: I think part of my discomfort is the idea that some of these arguments can get a bit close to ones that might lack integrity. So you can imagine, what if nobody at Open Phil actually thought that this work was very effective, and they clearly thought that this other umbrella was much better, but nonetheless, they were doing it just because it kind of looks good or brought positive attention, or got people to take them seriously. If that was the way that things were, then I think that would raise serious integrity issues to me. And I’d think, well, maybe even if that’s good in some narrow sense, do we really want to be willing to engage in this…like, let PR dominate what we’re doing to such a degree? I guess the fortunate thing is that there’s lots of people who do think that this really is the most justified thing on the merits, or at least that it should absolutely be part of the thing. And then you can make the argument that everyone else should be pretty happy about that, because it happens to benefit everyone and makes the pie bigger.

Alexander Berger: I think that’s reasonable, but I actually don’t totally agree. So one thing I think would be really objectionable about your story was if there was a pure cynicism to it. So if people were like, “We’re wasting this money. How can we possibly spend it this way? But we’re trying to buy good PR.” I totally agree that would be a really undesirable, inauthentic place to be. And that’s not what we’re into. But I think about my own case, where I think about myself as ethically pretty comfortable with longtermism. I get some real comfort that I think that the work that I can do at Open Phil both has these concrete impacts of I think making people’s lives better today, and helping further the work of our longtermist team.

Alexander Berger: And I guess if, in my mind, it was like the only reason I do the work was because of the longtermist work, I feel like I’d be a little bit more sympathetic to the, “Okay, that’s one thought too many.” But again, because longtermism has in some ways a shortage of good things to do, the idea that actually just concretely helping people today because that will help mainstream longtermism… I don’t think that would be a necessarily terribly objectionable mindset, unless the view was, “Oh, I’m going to dupe people.” Then I’m like, yeah, okay, that seems like a very undesirable framing.

Rob Wiblin: So your take is more, “I like longtermism, I like global health and wellbeing, and it turns out that working on global health and wellbeing gets me a pretty good outcome on both of these worldviews, so that’s just a bonus.” And that doesn’t seem really objectionable or lacking integrity at all.

Alexander Berger: Yeah, exactly. Right. If I was like, “Oh, I wasted my days because of this work,” I think that would lack integrity. And I think that would be really objectionable. But if it’s like, I think I get to do good stuff and I have a lot of moral sympathy for this other view, and the fact that I think that the good stuff also could, in some sense, further that other moral view is all to the good, in my mind.

Rob Wiblin: Something that I feel incredibly icky about is misleading people about my actual views. It’s possible in my role in particular that’s a big no-no, because you want people to take seriously what you’re saying and not think that you’re just running some ploy in order to get them to do what you want. I guess that’s not what’s happening here, because there’s just different people who have different views. And I suppose people are getting exposed to people who have these different opinions, and then realizing that there’s a debate within effective altruism and within Open Phil about how to approach these really difficult questions.

Alexander Berger: And I think part of the tension or the feeling of inauthenticity might come from a view that the actual motivations for global health and wellbeing and longtermism are very different. If you thought of global health and wellbeing as being motivated by a person-affecting view of population ethics, then you might think, “Oh, you’re doing violence to that work, because you’re actually motivated by the total view instead.” So there’s some fraud going on. And my view is it’s a lot less about different values and a lot more about epistemics and uncertainty and reasoning. And so it’s not like I’m pretending to care about people dying. I super care about people dying, and want to prevent it. And then I also have some sympathy for this view that’s like, “Wow, the long-term future could be huge.”

Alexander Berger: And the fact that my position helps allow me to marry those things rather than treat them as polar opposites is something that I actually think is cool and good and unobjectionable. And I think that your point of trying to be honest about this is actually really interesting and challenging. Because yeah, for the people who I perceive as most deep in the EA community, and again, I have a biased sample, sitting in San Francisco, I think a lot of people are super, super focused on AI. And then I perceive 80,000 Hours as trying to diversify that in some sense. But I think if you had the most representative sample of your sense of the risk space or something, you might end up with nine out of ten podcasts being about AI. And the podcasts are actually quite weighted towards AI, but maybe at 30%, or I don’t know what the ratio is.

Rob Wiblin: Yeah. One of these instrumental arguments that I’m a little bit skeptical about is the idea that we’ll learn from experience within some domains, and then that will carry over to us being able to have more impact in other areas because of the lessons that we learned. I guess there’s two reasons. One is, how much did the people who are working on AI at Open Phil really learn from the experience of grants from other areas that are very different, like animal welfare? I would expect that the crossover lessons would be pretty weak, just because these are such different disciplines. And then the other thing is, if you wanted to learn from experience in philanthropy or in trying to do good, it seems like there’s a wealth of other people who you could study who aren’t within effective altruism specifically, or aren’t your specific colleagues. And so running programs with the goal of learning from those experiences would seem somewhat wasteful relative to just going in and reading existing histories. What do you make of that?

Alexander Berger: I think those are both really interesting points. Let me answer them in reverse order. On the second point, we’ve invested a lot in the history of philanthropy, and we think that we can learn a material amount from those successes, and that we should try to do more, and I think that is cross-cuttingly applicable, and it’s something that we think is really valuable and we’re glad to do. I also think sometimes it’s hard to learn from history, and you need to make your own mistakes. I do think people learn more from their own colleagues or own experiences than they do from books a lot of the time. And if that weren’t the case, that would be really cool. But I do think there’s a real difference there.

Alexander Berger: And then on your first question, I don’t want to overstate the degree of cross-cuttingness of lessons, but things like how do you do policy advocacy philanthropy really well? There’s a lot of stuff at that level of abstraction on tactics that I think actually is quite cross-cutting, and where the longtermists are now trying to draw on some of that knowledge and some of those experiences, sometimes literally on the same people or firms that we’ve used. And I think that they are quite happy to have those experiences drawn upon. And so another way you might put it would be, if you thought that every dollar of longtermist spending had an undifferentiated, equal level of cost effectiveness, maybe you wouldn’t want to trade any of it off. But I actually think within longtermism you should expect quite steeply diminishing returns, where the marginal dollar is vastly, vastly worse than the average. Potentially orders of magnitude so.

Alexander Berger: And so the longtermists actually might be quite willing to give up the last 10% of their money to have some lessons that are going to make their average grant or frontier grant better. And so I don’t think this would get you to a 50/50 split between longtermism and global health and wellbeing, but I think it might get you to, I don’t know, 10%, 20% of the portfolio should be on global health and wellbeing, even if you only had “In the long run, we want humanity’s future to go well,” in mind.

Rob Wiblin: Depending on what catastrophic risks you think are most serious, it seems like the effect of further work could become negative relatively quickly. These are potentially very sensitive areas, where you really want to know what you’re doing if you’re going to go and meddle in them, for the reasons you’ve given earlier. And I think that’s one reason why you don’t want people who aren’t actually really interested in the topics, or don’t have suitable skills, to be going in. Because it’s not as if it’s a really absurd tail risk to have a negative impact; it seems like that’s entirely possible. And that’s one reason why you might not want to just go in with billions and billions of dollars that you’re trying to give away as quickly as possible. Because you could just end up funding things that are neutral or harmful.

Alexander Berger: I think we think that a lot of things might be harmful, and there’s a lot of sign uncertainty over specific stuff. And so it’s part of why, “Well, let’s just throw money at anybody who says they have these values” would probably not be a very good strategy towards the goal. And I think it’s surprising sometimes how big these considerations can be, especially if you have a little bit more of a fragile worldview. I was surprised recently to go back and understand that Carl Sagan had written about this, when there was discussion over detecting asteroids and potentially being able to redirect them if they were coming on a path towards Earth. And he made this argument that it’s fine to invest in detection, but we should really not build the technology to be able to redirect an asteroid’s path, because the base rate of risk is so low that it’s just vastly more likely that if you had that technology, you would use it to extort richer countries or something, or just threaten the world with a doomsday cult, rather than be able to prevent the base rate.

Alexander Berger: And I think that when you think about analogs to biorisk, and arguably maybe to some forms of AI, these might be weird enough, esoteric enough kinds of considerations that even just raising the alarm could end up sometimes being net harmful. And that’s something that my colleagues do think a lot about, but I don’t know that it has necessarily trickled down into the rest of the EA community. And I think that the way that interacts with an expected value calculation, where uncertainty will make you want to regress your estimates, can help you understand why, expected value-wise, you could still end up thinking that a lot of global health and wellbeing might be desirable, just because of how much uncertainty you have.

Arguments against working on global health and wellbeing [01:44:05]

Rob Wiblin: I want to turn to arguments against working on global health and wellbeing. I had a bunch of arguments that I was potentially going to make, but I think it seems like you just have such a balanced perspective and you’re aware of the arguments on both sides, so I’m actually just curious to probe your thinking. Earlier it sounded like you were skeptical about the broad longtermist work, but that you’re more excited about some of the targeted longtermist work. So I’m keen to hear the affirmative case in favor of that, given the things that you know from your work.

Alexander Berger: I’m really very glad that Open Phil does some work on AI risk. I always worry that it’s not my area, I don’t understand it, I could just be terribly, painfully confused. But the fact that we’re able to spend some of our resources on this looming change that’s coming, seemingly very plausibly this century, to the vast future of the universe, to me feels like a huge, huge deal. And I’m really glad that we’re able to devote work to it. I think the big question is, what can we do to make it go better? And I sometimes feel like there’s a little bit of disassociation between the big philosophical case for why we care so much about it and why the stakes are so high — which I buy — and then in practice, somebody is like, “Yeah, I’m going to train a language model to reliably not swear.”

Alexander Berger: And it’s not to say that there’s no deep connection. I think it’s just hard to have the same, “Wow, this is the world’s most important thing,” when you’re training a language model to not swear. That’s not to say it’s not, I don’t know. But part of what I like about the diversity of our work is that it lets you feel like you can do some of that and some of these other things and still be making concrete progress.

Rob Wiblin: What stuff are you most excited about that’s outside of the global health and wellbeing bucket? It sounded like some of the targeted AI work you think could just end up being incredibly important to how things go.

Alexander Berger: Yeah, absolutely. And I’m really glad that we’re able to do it. We also invest a lot in growing the EA community, and I think those are good investments. I feel pretty good about that community. Getting people to take impact seriously in their careers and to donate more are things that I see a lot of value in and would like to see disseminated more. And then the biosecurity work I’m a little bit less up to speed on, so I don’t feel like I have as good a picture of it. But coming out of COVID, I think the case for why this might be high expected value work feels very intuitive to people. And so honestly, yeah. We talked about this earlier, but the biggest thing on longtermism is… I totally buy the appeal, and it’s just cluelessness that makes me uncertain about it. And I think that also makes me somewhat uncertain about a bunch of the global health and wellbeing stuff, too.

Rob Wiblin: So setting aside AI and bio stuff for a second, I guess earlier I slightly put you in a corner where you were strongly encouraged to dis the other longtermist concerns like international relations or decision making. Is there anything in that more general world improvement, or other longtermist interest areas that you think is promising and you’d like to see people explore whether there’s something great to be done there?

Alexander Berger: Yeah, absolutely. I’m nervous about the fuzziness of some things like improving human judgment or improving international relations, and I think those might end up in that bad part of the solution space, according to me. But I’m super excited about longtermism as an idea, and I think trying to come up with really concrete, robustly good proposals for other longtermist ideas beyond AI risk and biorisk is a really promising and interesting area. One issue is that I don’t have enough of a view on the risk profile myself to say whether, on the margin, even more x-risk-motivated people should be going into AI than currently are. But I think the case for Open Phil to spend more money on longtermism might actually be a little bit stronger if the set of problems that the longtermists are working on were a little bit wider or more diverse. Because the more it’s one or two concentrated bets on AI and bio, the more it feels like we want to put something into these bets but also hedge them. We want to do some other things first.

Alexander Berger: But if it were like, actually, there’s ten different uncorrelated causes for longtermism that all look like they’re good, and for some of them you can make concrete, measurable progress… I think that the case that you don’t want to just bet on this one worldview would get a little bit weaker, because you would have a lot more internal benefits of diversification where you’d be learning lessons. You’d be recruiting different kinds of people and you’d be meeting different people in the world. And so I like the idea of longtermism growing and being a bigger tent as a community. I think those are good. I just feel like the things that get classified as broad longtermism right now often feel to me like they’re just so fuzzy that I’m not sure how to engage.

Rob Wiblin: It was a little bit depressing earlier, but let’s re-engage with the cluelessness concern briefly. I feel like maybe I didn’t back longtermism enough earlier because I was just letting you lay out your view, but if we take cluelessness quite seriously, if we try to put that hat on, what could we then do? What hope can we grab onto? It seems like one approach would be to say I’m going to bank on the idea that a really crucial time is coming quite soon, and so I’m going to be able to understand that situation and actually influence it, because it’s actually happening. Most of the decisions are going to be made now.

Rob Wiblin: And this is kind of the AI story, where you’re like well, the future at some point is going to become more regular and predictable, and what if that is going to be put into motion quite soon? Then I might know the people or the organizations that are involved, and I can try to influence that in the normal way. I suppose you have unintended consequences there, even when you’re just operating in normal life, but they might not be so grim as trying to create a series of consequences that play out over hundreds of thousands of years.

Rob Wiblin: Another approach might be to… I think Bostrom has a really excellent presentation on this from back in 2014 or something, which I think was seminal at the time and maybe we just haven’t managed to advance this research agenda very much. But it was basically saying cluelessness is this really big problem, but what we need to do is find some guideposts. Find some things that we can improve about the world now, that we can measure now, that we think correlate as well as is possible, as well as is practical, with good, long-term outcomes for humanity as a whole.

Rob Wiblin: And you have a reasonable list of things that could be on there, like are countries reasonably democratic? How violent are people by nature? How well do we understand science in general? And in all these cases you can come up with stories under which improving that kind of guidepost could be positive or it could be negative. But the philosophy was surely we can find something that we can influence, some measure about the nature of the world and how humanity is doing that is more likely to be positive than negative, even if we’re still going to be quite unsure. And then that’s the thing that will be most impactful to push on from a broad longtermist point of view.

Rob Wiblin: I don’t know exactly what that best guidepost, the best thing that we could change about the present world now in order to influence the long term would be, but I would be surprised if there wasn’t something in that category where I would say, “Yes, I think it’s more likely than not to be good to change this thing.” What do you make of that line of argument?

Alexander Berger: I guess I’m open to it. I’m skeptical that we can really reason well enough about the future that we can get to more than 50+ɛ% directional confidence. I don’t want to be a nihilist about it. I think the future of humanity is so important and so big, or hopefully so big, that people making these crazy bets about how they try to influence and make it go better is a really good thing. And I don’t want to talk your listeners out of doing that. I think there’s a ton morally at stake, and if people feel like they can have good, meaningful lives trying to pursue these probabilistic bets, I think that’s healthy.

Alexander Berger: It’s almost like an attitudinal thing. I want people going into that with a sense of, it is vastly, overwhelmingly likely that the channel through which I imagine impacting the far future is totally misguided. And that’s not to say that they shouldn’t take the action. I actually think maybe people should even be willing to make vast personal sacrifices to take those kinds of actions, because the expected value could be so high. But I think that there’s something attitudinally around really inhabiting and recognizing how overwhelmingly likely to be wrong and naive we are when we try to shape the world a million years in the future that makes me want to say… Not that we shouldn’t try it, but we shouldn’t bet everything on it. And that we should bring the right attitude of modesty and uncertainty to that work.

Alexander Berger: I just feel like if you look back and try to play history forward, asking how much of it you could have predicted, I think you’d be really bad at it. I think we’re really bad at predicting international relations. I think there’s so much contingency. Not to say we shouldn’t do things, or that they always end up being worth zero, but appreciating that contingency, I think, will push back against a bunch of forms of fanaticism.

Rob Wiblin: It’s a view that lends itself to more modesty and much less to fanaticism, because the whole thing is like wow, it’s such a struggle to find anything that’s going to improve the future. But yeah, I don’t feel as hopeless about it as it sounds like you do. If you think about reducing climate change, is that more likely to make the future go better? Or is it more likely to make the future go badly? I suppose you could spend a lot of time analyzing that, but my guess is you’d end up concluding that it’s more likely to make the future go better.

Rob Wiblin: If America ceases to be a democratic country, is that likely to make the future go better or likely to make it go worse? Those things don’t seem super levered, where one person is going to have a massive influence on that and then have a massive influence on how humanity plays out in the long term. But there are a lot of things where I’m like 55/45 or 60/40 that this is more likely to be good than bad.

Alexander Berger: I just feel like with most bets about what you would actually do, a) their counterfactual could also be good, so there are more trade-offs and factors on each side that you need to weigh, and b) the actual thing in front of you is many, many of those bets together, where each one might be 55/45. Maybe the U.S. becoming an autocracy is bad for the future. Sure, that sounds like maybe more than 55/45, right?

Rob Wiblin: Yeah.

Alexander Berger: But is that a meaningful thing? And how bad in expectation is the badness for the future? Is it one in a million or one in a trillion? And whether that’s a question that I can actually form a reasonable, coherent view on is something I’m really uncertain about.

Alexander Berger: Going back to your climate change point, I agree that climate change seems directionally worse rather than better for the future. But is climate change worse for the future in a way that’s different from how saving the life of a random kid born in rural West Africa is better or worse for the future? Whether it’s differently so is, I think, the claim that longtermists actually need to win, and I’m really, really uncertain of that. Again, I’m not saying it’s better. I really don’t want to imply that I know it’s better. I’m just saying uncertainty should make you want to hedge.

Rob Wiblin: That seems like a very open question.

Alexander Berger: Yeah.

Rob Wiblin: What do you think of this argument? I’ve thought of this a few times and then never actually told it to anyone. Say we’re 50/50 unsure whether the world is predictable and understandable and influenceable in a way that humans can actually act on. Then you might think, well, there’s a 50% chance that this is a hopeless enterprise. That the world is just such a crapshoot and so chaotic and it’s so unpredictable what effects your actions have that it’s basically a write-off. And I’m just going to say, “Well, if that’s the world we live in, then too bad, and I can’t really do anything good.” But there’s a 50% chance that it is somewhat predictable; that you’re more likely to do good than bad if you try to make things better.

Rob Wiblin: Then, in expectation across these two worldviews, you have a positive influence, and also you should bank on the idea that you’re in a positive world. I think the way that this wouldn’t work is if there was a third equally likely possibility where when you do things you’re likely to make them negative; likely to make them worse. But I find the idea that when people are trying to produce an outcome, they’re more likely to make it happen than to reduce it by an equal degree, kind of reasonable.
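
Rob’s argument, and the failure mode he flags, can be written out as a tiny expected value calculation (the probabilities and payoffs here are purely illustrative):

```python
# Rob's two-world argument: a chaotic, unpredictable world contributes nothing
# either way, so any chance of a predictable world leaves the expected value positive.
ev_two_worlds = 0.5 * (+1) + 0.5 * 0
print(ev_two_worlds)  # 0.5

# The third possibility Rob mentions: if a world where your efforts backfire
# is about as likely as one where they help, the expected value collapses to zero.
ev_three_worlds = (1/3) * (+1) + (1/3) * 0 + (1/3) * (-1)
print(ev_three_worlds)  # 0.0
```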

Alexander Berger: I think that’s a good example of a prior that seems very reasonable in normal situations, but an extremely bad prior, honestly, when you’re worried about anthropogenic risks to humanity. Just to be clear, I wanted to go back and agree with your first two points. I’m, on the one hand, very glad that people do a lot of longtermism, even though I think the right attitude is deep, deep uncertainty and self-skepticism. And I really don’t want to do expected value over the claimed impact, where it’s like… I feel like longtermists will often gesture towards these claims of, “If we’re right, then it’s 10 to the 40 stars that we’re going to take over and populate in the future.”

Alexander Berger: Once you start to try to reason about that in an explicit, expected value kind of way, that number so dominates everything else in a way that I think is actually perverse. Because all that happened is somebody said something. There’s no deep truth-trackingness to these sorts of claims. I don’t want to be so deflationary about it that people walk away and they’re like, “Wow, nobody should be longtermist.” I think longtermism is great. I think more people should be longtermists. But I think that the uncertainty that we have about this is, in some sense, deeper and more like cluster-thinking-y. Not that we shouldn’t do some, but the totalism that would be implied by typical or basic expected-value thinking is where I want to get off the train.

How much to allocate to different umbrellas [01:56:54]

Rob Wiblin: Let’s wrap up this part of the interview talking about global health and wellbeing versus longtermism. To bring it all together, it would be good to weigh, I suppose, the competing arguments here alongside the worldview diversification issue, which sounds like it’s a primary consideration for you. Doing that, where do you come down on how much Open Phil or the effective altruism community should allocate to each of these different umbrellas?

Alexander Berger: I think more about the Open Phil question than the effective altruism community question, but I feel like something in the ballpark of 50% to longtermism and 50% to the global health and wellbeing portfolio makes sense, and I think that’s also probably where we’re expecting to end up. It could be anywhere in a 30% to 70% range in either direction, and we’ll try to do more to figure that out in the coming years as we get a better sense of what opportunities are available to us and where we can cost-effectively spend money, and also maybe through a more philosophical process on how to think about things like worldview diversification in a reasonably compelling, coherent way. I really wish we had better principles for allocating and figuring out what the right amount is there, but I really don’t think we do at this point.

Rob Wiblin: I saw this survey from last year that was aggregating a bunch of different sources on giving among effective altruists and I think it concluded that about two thirds was going to global health and wellbeing and about one third to longtermism. I’m not sure whether that’s so much a conscious decision, necessarily. At least on Open Phil’s part it seems like the limiting factor on the longtermist grantmaking is actually finding things that you’re confident are good on longtermist grounds. Even if you tried to go all in on longtermism, it sounds like you might struggle to actually give away all of the money you have within any plausible timeframe.

Alexander Berger: Yeah, both sides of Open Phil are still growing, and are well short of where we should be spending, given our planned budget allocations. And so we’re overall trying to figure out where we can spend most cost effectively going forward. And that’s true on the global health and wellbeing side and the longtermist side. But I think global health and wellbeing is closer to where it needs to be, in some sense, whereas longtermists have a lot more growth ahead of them that I expect to be associated with trying to figure out more about the world.

Alexander Berger: I also think that they’re going to learn a lot more from the next decade or two about the future of technology, whereas for global health and wellbeing, we actually think probably with every passing year our opportunity set gets a little bit worse, and so we have a little bit of a stronger incentive to get going. That’s a lot of what my team is focused on these days.

Rob Wiblin: Yeah, that makes sense. It sounds like the baseline is something around 50/50 but it could go 30/70, 70/30. What kinds of things could you learn, or could Open Phil learn, that would potentially shift that one way or the other?

Alexander Berger: I think a couple of things. One would be are we finding good opportunities on either side? If neither side feels like they can spend their portfolio in a reasonable fashion, that’s a pretty strong argument to give more to the other side. And in practice, I expect global health and wellbeing to be able to find more large-scale ways to spend money, just because it’s more aligned with the way that most folks in the world currently think. That could definitely be a factor.

Alexander Berger: Another factor could be if the longtermists just came up with a really scalable way to spend money that they thought had a good, more robust chance, maybe that would argue for more rather than less. And maybe if there were more longtermist causes… I don’t know if Holden necessarily agrees with me, and so I’m freelancing a little bit here, but I could imagine if there was more internal diversity to the longtermist side, where it didn’t feel like all of longtermism was one bet on a very, very specific expected value play, then I could also imagine that being the case for marginally more rather than marginally less. But I think that would be a tweak around the edges, not a fundamental change to that ratio.

Rob Wiblin: Yeah. An interesting phenomenon that’s different between grantmaking versus career planning is that if you set out to have a career that has an effect on the very long-term future, in a sense you’re doing something that is going to develop over 40 years’ time. So, you have potentially quite a while to see this space grow and to rely on later research on what’s going to be useful. But that doesn’t really help Open Phil make grants right now. That maybe suggests that the program as a whole might flourish and find more really useful ways to allocate grants in coming decades. But maybe the fact that the area is young is more challenging for present-day grantmaking than it is for long-term career planning.

Alexander Berger: I think that’s probably right, although I think of it as also a big problem for long-term career planning. You might have more insight as to how to do that well than I do. We’re so early in this we don’t know so much, and so it makes me really… If I were planning my own career, and I were a dyed-in-the-wool longtermist, I think I’d really be emphasizing optionality. And trying to do things that would be relatively robust and scalable, because I personally, as Alexander, really expect the longtermists’ best recommendations to change, and so I think doing something that’s somehow going to be robustly useful for longtermism, or a skillset that would allow you to be useful in the future, feels to me like something that’s at stake there. And it’s hard to do as a funder, because it’s like what is robustly useful? We try to invest in research but…

Rob Wiblin: Just keep the money in the bank.

Alexander Berger: Yeah. Which is implicitly what you do when you’re not spending it.

‘GiveWell’s Top Charities Are (Increasingly) Hard to Beat’ [02:01:58]

Rob Wiblin: Okay, let’s push on and talk quickly about this blog post that you wrote in 2019 called GiveWell’s Top Charities Are (Increasingly) Hard to Beat. Do you mind if I give a brief summary of how I read this post, and then maybe my main question about it?

Alexander Berger: Yeah, totally.

Rob Wiblin: As I understood it, basically it was saying Open Phil has been making grants to reliable, proven GiveWell charities for a while. Things like the Against Malaria Foundation, which distributes bed nets. But it’s been hoping to maybe find things that are better than that by using science and politics and maybe other methods to get leverage, and so it’s been exploring these new approaches, trying to find things that might win out over helping the world’s poorest people. And you’d been doing that by working on scientific research and policy change in the United States, but the leverage that you’d gotten from those potentially superior approaches was something like 10x to 1,000x, probably closer to 10x than 1,000x. And that wasn’t enough to offset the roughly 100x leverage that you get from transferring money from one of the world’s richest countries to the world’s poorest people. Is that right?

Alexander Berger: Yeah. I think that’s a great summary.

Rob Wiblin: Okay. That raises the question to me, if you were able to get even 10x leverage using science and policy by trying to help Americans, by like, improving U.S. economic policy, or doing scientific research that would help Americans, shouldn’t you then be able to blow Against Malaria Foundation out of the water by applying those same methods, like science and policy, in the developing world, to also help the world’s poorest people?

Alexander Berger: Let me give two reactions. One is I take that to be the conclusion of that post. I think the argument at the end of the post was like, “We’re hiring. We think we should be able to find better causes. Come help us.” And we did, in fact, hire a few people. And they have been doing a bunch of work on this over the last few years to try to find better causes. And I do think, in principle, being able to combine these sources of leverage… I think of it as multiplying by that 100x… you should be able to get something that’s better than the AMF-type GiveWell margin.

Alexander Berger: But I don’t think it blows it out of the water by any means. So this prefigures the conclusion of some of our recent work in some ways. I think we think the GiveWell top charities, like AMF, are actually ten times better than the GiveDirectly-type cash transfer model of just moving resources to the poorest people in the world. That already gives you a 10x multiplier on the GiveWell side, and so then we need to go find something that is a multiplier on top of that. I actually think that’s quite a bit harder to do, because that’s a much more specialized, targeted intervention relative to the relatively broad, generic approach of just giving cash to the world’s poorest people, which is a little bit easier to get leverage on.

Alexander Berger: I do think we should be optimistic. I think we should expect science and advocacy causes that are aimed towards the world’s poor to be able to compete with the 10x multiplier of cost effectiveness and evidence that GiveWell gets from AMF to GiveDirectly. But I’m uncertain-to-skeptical, after a few years of work on this, that we’re going to be able to blow it out of the water. And so I think about it as, it gets you, with a lot of work and a lot of strategic effort, into the ballpark. And so we have a couple of these new causes that I could talk about where we think we’re in the ballpark of the GiveWell top charities, but we haven’t found anything yet that feels like it’s super scalable and, in expectation, ten times better than the Against Malaria Foundation. We’re working hard to find stuff that’s in the ballpark.

Rob Wiblin: Yeah. It seems like if distributing bed nets is something like ten times as good as just giving people the equivalent amount of cash, shouldn’t you then be able to get leverage on top of that by lobbying governments to allocate more aid funding to malaria prevention, including distributing bed nets, or doing scientific research into a malaria vaccine, which it seems like there’s a pretty good candidate that’s come out recently that might really help us get rid of malaria completely? Why don’t those, in addition, help you get further leverage and have even more impact?

Alexander Berger: You see the issue with infinite regress, right? It’s like, “Well, why can’t you go one layer more meta than that, and advocate for people to…” I think the answer is that in a weird way, the problems of the world actually will just not support giving at that scale in a super cost-effective way. I think this is an interesting point that I wish effective altruists would pay a little bit more attention to. I haven’t done a good job articulating it, so it’s not something that people just necessarily understand, but I think the GiveWell charities actually set a very, very, very high bar in terms of spending at large, large scale.

Alexander Berger: One way to put it would be like, there’s the Institute for Health Metrics and Evaluation at the University of Washington. They compiled the Global Burden of Disease report to try to say how many life years are lost to every cause of death around the world every year. And they estimate that there’s something like two and a half billion DALYs lost to all causes every year. This is off the top of my head, so I could be wrong, but I think GiveWell thinks they can save a disability-adjusted life year for something like $50. If you were trying to spend just a billion dollars a year, which is 3% of the NIH budget, less than 0.3% of U.S. philanthropic dollars every year, on stuff that’s as cost effective as that, then you would need to be reducing total global life years lost from all causes everywhere by just under 1%.

Alexander Berger: I think that if you sit with that number, that’s just really, really high. Amongst other things, it just shows that if you were trying to do that at a scale a hundred times bigger, you literally couldn’t because you would have already solved all health problems. I don’t know where the curve is of declining marginal returns, but I would guess it sets in pretty steeply before even ten times bigger than that. I think people sometimes underestimate the size of the opportunities when they think, “Oh, we can make a leveraged play that could be ten times better.” Maybe an individual donor could, but Open Phil will need to eventually be giving away a billion dollars a year, maybe more. That is actually not the relevant benchmark for us. We’re giving at a scale where it has to be able to absorb more resources.
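
To make the scale argument concrete, here is a rough back-of-the-envelope version of the calculation Alexander sketches above, using only the approximate figures he quotes (all of which he flags as off the top of his head):

```python
# Back-of-the-envelope check of the scale argument above.
# All inputs are the rough figures quoted in the conversation, not precise estimates.

global_dalys_lost_per_year = 2.5e9  # ~2.5 billion disability-adjusted life years lost to all causes
cost_per_daly_averted = 50          # rough GiveWell top-charity cost effectiveness, in dollars
annual_spending = 1e9               # hypothetical $1 billion/year of spending at that cost effectiveness

dalys_averted = annual_spending / cost_per_daly_averted              # 20 million DALYs per year
share_of_global_burden = dalys_averted / global_dalys_lost_per_year

print(f"DALYs averted per year: {dalys_averted:,.0f}")                  # 20,000,000
print(f"Share of total global burden: {share_of_global_burden:.1%}")    # ~0.8%, i.e. just under 1%
```

Scaling that spending up 100 times at the same cost effectiveness would notionally require averting around 80% of all global disease burden, which is the sense in which returns have to diminish well before that point.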

Rob Wiblin: Okay, so there could be particularly good grants in science and politics that do this, but it’s just they’re not going to be able to absorb nearly as much money as you need to be able to give away. So you want to make those, but then it’s also going to be very important to find other things that can actually take billions.

Alexander Berger: Yep.

Rob Wiblin: Okay. Yeah, that’s super interesting.

Alexander Berger: One other observation is just that when we’ve tried to look through history at grants other people have made, and when we try to analyze our own grants, we typically see things that look more like 10x leverage than 100x leverage. I think we almost never see 100x, even from hits. And the more that what you are pursuing seems idiosyncratic, I think the harder it is to even get 10x leverage. Because the leverage operates through these intermediary forces in the world: in science, it’s driven by academic scientific incentives; in advocacy, it has to go through Congress… There are just limits to the level of alignment you can get when you’re trying to leverage these bigger systems.

Rob Wiblin: It’s interesting that you think of distributing bed nets as having leverage over cash transfer, so GiveDirectly. What is the source of leverage there? Is it that you know that maybe bed nets are better than the recipients realize, and so you’re willing to pay more for them in principle than they are?

Alexander Berger: I think that’s a really great question, and I don’t think people ask it enough. I think in my first year at GiveWell I wrote a blog post about how we should think about bed nets versus cash transfers. And I think there’s a few things. One is, the idea that people are doing expected value evaluations when they buy bed nets or don’t buy bed nets I think is just not an accurate description of human affairs, and they would be really bad at doing them if they tried. I don’t think people are good at making that kind of judgment.

Alexander Berger: Also, the beneficiaries are mostly kids, and so it’s not like they can be self-interested and invest in their own future. It requires altruism. And, actually, I think people think that something like 50% of the benefits of bed nets are externalities, because they’re insecticide-treated, and so you actually kill mosquitoes, you don’t just prevent them from eating you. And so the typical public goods argument, for providing public goods and addressing externalities, I think applies there, and would make you want to invest in bed nets and not just cash transfers.

Alexander Berger: But yeah, I think GiveWell is broadly showing that trying to be evidence-backed and having a really intense lens on cost effectiveness can get you, roughly speaking, a 10x gain. And then I think going from the rich world to the poor world can get you, roughly speaking, a 100x gain. And I do think that’s an interesting parallel to keep in mind when you think about things like, for effective altruists, is your edge philosophy and values, or is your edge more analytical, evidence-based reasoning? And I think, in a lot of ways, often it feels like the edge is more the values.

Rob Wiblin: Yeah. Interesting. Some people are really skeptical of the idea that experts at an organization like GiveWell could be smarter about spending money than the world’s poorest people spending their own money on themselves. And I think if they were trying to spend a large fraction of their budget, that would be very well-placed skepticism. But I guess the bar here is can we find anything that these people are significantly undervaluing, where it’d be really good if they got the product rather than just the equivalent amount of cash. And with a significant research team that’s relying on all this other research that’s been done, it’s a big effort just to find something that costs $5 per person or $5 per household that these people have undervalued. And that seems like a more plausible bar to clear.

Alexander Berger: I have some personal sympathy for the people who love the autonomy case for just giving cash, but I really do think that when it comes down to it you actually can just find opportunities that look a lot better.

South Asian air quality [02:11:12]

Rob Wiblin: Let’s push on and talk about jobs at Open Phil, and the new problem areas that you’re potentially moving into. You mentioned these briefly at the beginning, but let’s recap. What roles are you hiring for at the moment, or potentially in the near future?

Alexander Berger: There’s two clusters. One is a couple of new program officers in new areas, South Asian air quality and global aid advocacy. And then a second is folks to work on cause prioritization on the global health and wellbeing team, to basically pick new causes, just like we just did with South Asian air quality and global aid advocacy. And if you’re up for it, I’d be pretty interested in talking through a little bit of the case for South Asian air quality. I think it’s pretty interesting, it’s Open Phil’s first new cause area in a long time, and I think it gives you a sense of what the work would look like, what kinds of things you would be thinking about…

Rob Wiblin: Yeah, I was so excited to see this in your notes. I’m fascinated to learn about it. What’s the case in favor, and how did you end up pulling the trigger on it?

Alexander Berger: So we need to hire and see if we can get somebody good to come join us, but South Asian air quality I think is a really interesting example where — you know this, but your listeners might not — we have these three criteria that we use for picking causes: importance, tractability, and neglectedness. And on importance, I think this is a crazy case. So I mentioned earlier IHME, who produce the Global Burden of Disease report. They estimate that almost 3% of all life years lost to all causes globally are lost due to air pollution in India. And that’s a mix of indoor smoke from cooking, and outdoor air pollution from burning coal, from cars, and from crop burning. And in some ways, I think it’s appropriate when you hear numbers like that to be skeptical and to say, should I really believe these?

Alexander Berger: And you have to rely on some social science to get figures like that; you can’t really run randomized controlled trials where you expose people to a lifetime of air pollution, thankfully. And so as with all social science literature, I think there’s some reasonable concern, or a question of whether the magnitude we’re getting is right. But I don’t think it’s going to make you want to downweight that by a lot. Maybe it’s a factor of two or something. And so you’re starting from such a high base that the importance just ends up continuing to be huge. And then on the neglectedness criterion, it gets a really small amount of philanthropy right now. So the best report we’ve seen on this I think estimated something like $7 million per year of funding for air quality work in India.

Alexander Berger: For something that’s causing so much of all of the health problems in the world, that’s a trivial, trivial fraction. And a lot of those funders are actually motivated by climate. Climate will get you some of the benefits that you care about in air pollution, but they can come apart. And so I think there’s a lot more to be done there. The last criterion honestly is the weak point on this one, where tractability is a challenge. Funding in India as a foreign foundation is hard, and frankly getting harder. And air pollution has a bunch of different causes, and there’s no one silver bullet policy that’s like okay, if you could just get the legislature to pass this, then you would be okay. There are a bunch of things, from trying to encourage modern stove usage, to getting coal power plants to adopt these units that remove small particulate matter from the air, to changing the emissions standard for new vehicles… These all seem like they would have a reasonable shot at this.

Alexander Berger: If we did something like quadruple the funding in the field, we would only need to reduce air pollution in India by something like 1% relative to the counterfactual in order for that to be more cost effective than the GiveWell top charities. And I really don’t think that’s trivial, I think that’s actually a hard, high bar, but I think it’s probably doable. So maybe that’s an example of something where you’re like, why can’t you beat the GiveWell top charities by ten or a hundred times? And I’m like well, I have something that’s a huge, huge problem that gets no other philanthropy, and I think I can make progress on it, but I really don’t think I can make enough progress on it that I’m going to be able to be ten times more cost effective than the GiveWell top charities.
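
As a rough check on that last claim, here is the same kind of back-of-the-envelope arithmetic applied to the figures Alexander quotes for South Asian air quality (the ~$50 per DALY benchmark is carried over from the earlier GiveWell discussion; everything here is approximate):

```python
# Rough sketch of the "quadruple the funding, need about a 1% reduction" claim.
# Every input is an approximate figure from the conversation; treat the output as an
# order-of-magnitude check, not an estimate.

global_dalys_lost_per_year = 2.5e9
india_air_pollution_share = 0.03      # ~3% of all life years lost globally
india_air_pollution_dalys = global_dalys_lost_per_year * india_air_pollution_share  # ~75 million/year

current_funding = 7e6                 # ~$7 million/year of philanthropy on Indian air quality
extra_funding = 3 * current_funding   # quadrupling the field adds ~$21 million/year

givewell_cost_per_daly = 50           # rough top-charity benchmark, in dollars
dalys_needed_to_match = extra_funding / givewell_cost_per_daly     # ~420,000 DALYs/year

required_reduction = dalys_needed_to_match / india_air_pollution_dalys
print(f"Required reduction in India's air pollution burden: {required_reduction:.2%}")  # ~0.6%, of order 1%
```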

Rob Wiblin: It’s so interesting to hear those numbers laid down. I don’t think I’d fully appreciated just what it would imply to be able to massively beat the GiveWell recommended charities. It’s good to take a minute for that to fully sink in.

Rob Wiblin: I guess regular listeners would know that this is a bit of a hobby horse of mine, that people just don’t seem to appreciate how severely damaging air pollution is. How much time did you spend critiquing these papers? Because I guess my understanding is that people have come at this from different fields using different methods, and they just keep finding that serious air pollution, especially for children, shortens people’s lives, and it also seems to cause serious problems with education and productivity; people find it harder to work and get things done when there’s air pollution all around them.

Alexander Berger: We have not gone deep on the education and productivity and cognitive literature; we really came at it from a health perspective. And I think that especially for acute health problems, there’s just very, very good evidence from a ton of different settings showing that especially young kids and older adults suffer from acute problems when air pollution is really bad. It’s harder to get good evidence on the chronic problems, because you just don’t have the same high-frequency variation. And so how bad it is to be exposed to particulate matter levels of 70 versus 10 over decades is a hard problem to study. But our read on the research, from, again, a bunch of different sources, is that it looks consistent with the numbers that the Global Burden of Disease report is using.

Alexander Berger: One area where I wish there was a little bit better work, or that the work was a little bit more compelling, is animal models. It really does feel like you should be able to just basically expose rats to a lot of particulate matter and see if they die more. And my impression is that that had been done in an older generation of literature, and they had not necessarily found big lifespan effects. I don’t think you should think of that as decisive, because rats aren’t humans; our lungs are actually different.

Rob Wiblin: As I understand it, that’s even true of cigarettes. This was actually one of the defenses used [by the tobacco companies], was that it seems like cigarettes don’t harm rats as much as they harm people.

Alexander Berger: That actually makes me feel a lot better, so I’m glad to know that. Thank you for telling me that, I did not know that.

Rob Wiblin: I’ll go chase that up, but remember listeners, smoking is very bad.

Alexander Berger: But you do see a bunch of animal evidence that it causes specific problems, like atherosclerosis, like lung problems. And so even if you’re not seeing animals recapitulating the lifespan effects, I think we do see some biological evidence that these are plausible channels. So I think overall the case just looks really strong. And you see some of this research from the U.S. where it’s like, yeah, strengthening air pollution standards actually more than paid for, in a regulatory sense, a bunch of Obama-era climate regulations, because coal plants kill a ton of people, and we placed a high weight on the value of a statistical life.

Rob Wiblin: Yeah. What kind of grants could you imagine them making?

Alexander Berger: So this is where the hire becomes really important. And we want to hire somebody who knows more about this field and is better connected than we are. So we don’t totally know yet, but the one grant we made so far is to pay for a lot better low-cost monitoring: these relatively cheap sensor systems that can be deployed across India and give people just a very rough, quick sense of what the air outside is like. I think they’re literally from the same website that I would check during the fires in California to see what the cheap sensors that my neighbors have are showing. So it’s an academic partnership with some Indian universities and UC Berkeley.

Alexander Berger: So monitoring is one aspect of the problem, and then there’s a bunch of trying to source… actually breaking down all the different causes of this, so that you can try to have relevant policy solutions. I think it’d be unlikely that we literally pay for these scrubbers for coal plants, but you could advocate for enforcement of the existing regulation that says they’re supposed to have these units that limit coal plant emissions. And so there’s just a bunch of things like that. A lot of it would end up being advocacy, technical assistance, training, monitoring, and it’s a tough, multifaceted problem. So I don’t think it would be trivial to make progress on it, but it seems like a swing worth taking to us.

Global aid advocacy [02:18:39]

Rob Wiblin: The other new program you’re probably going to launch is global aid advocacy, right?

Alexander Berger: Yeah, that’s right. And the basic idea there is very much inspired by this leverage point. It’s to say, look, we think we’ve been able to sometimes influence state legislators on criminal justice policy, we’ve sometimes been able to move macroeconomic policymakers, but let’s focus on trying to get more and better aid spending from rich countries. And so I think exactly the sort of logic you were pushing earlier is the logic here. The Gates Foundation funds a lot of stuff in this space, but we’re not sure that they’ve necessarily exhausted all the options.

Alexander Berger: We think that there’s room to grow and room to do more. There’s probably a couple of hundred million dollars a year of philanthropic spending trying to do this kind of advocacy. And we don’t want to just focus on the U.S. necessarily. I think Europe… There’s been a fairly active debate recently in the U.K. about this, and some maybe emerging donor countries like South Korea or Japan where there’s less history could all be interesting places to try to fund advocacy for more and better aid.

Rob Wiblin: I love to see it, I love both of these programs. We’ve slightly had this thing where I have to act like I’m not excited about global health and wellbeing stuff, but whenever I see breakthroughs in this work I’m like “Yes, they’re going to spend a whole lot of aid on really good stuff!” I was just so happy about the malaria vaccine breakthrough early this year. And also, the fact that there’s so much damage done by this pollution and no one’s doing anything about it drives me crazy. So seeing someone go into it, I’m just like, yes humanity, finally.

Alexander Berger: I think that’s a lot of the heuristic that is partially driving us: we should be able to make progress on things. The malaria vaccine one is super interesting, by the way. I think some of the people we talked to are pretty nervous about how scalable the vaccine that recently made headlines is. It requires three doses, I think it’s pretty hard to scale… There’s something about the way it was tested, where it was maybe tested at the ideal time… So I’m not actually sure that that one is necessarily going to be a huge breakthrough, but from what we’ve heard, mRNA vaccines have had a huge impact and been de-risked. People are pretty optimistic, and they think you might be able to make some next generations of vaccines that will just be vastly more effective. What a cool synergy and breakthrough that COVID will have accelerated that new technology to have such a big impact.

Rob Wiblin: Yeah, absolutely. One of the most requested audience questions was “Why doesn’t Open Phil fund X?” Or Y, or Z, whatever is their favorite thing. Maybe the most common one that I’ve heard is why doesn’t Open Phil fund more work on mental health? I guess most speculatively, perhaps things to do with psilocybin or cutting-edge treatments that might really be valuable for treating serious mental health issues. Do you have any comments on that?

Alexander Berger: Good Ventures actually had done a little bit of work on psilocybin and MDMA, and funded some relatively early research and some of the clinical trials that folks had done there. I actually think that field has just advanced a ton, and now there’s a lot of commercial funding. And so I think on the margin, it’s less obvious to me that a ton of philanthropic funding is needed going forward. That’s not to say there’s not stuff to do, we haven’t kept up with it actively, but I think it is a case where it’s pretty interesting to see the progress and I think it’s a pretty cool example of progress that has happened. On mental health, I think it’s on our long list. And so if anybody wants to join our cause prioritization team to help us work through more things, I think that there plausibly are good opportunities there.

Alexander Berger: Again, bed nets are really, really cost effective. And so if you can save a life so cheaply, it can be hard… Mental health is often just not as well understood, the interventions are often not super effective, and they’re sometimes quite expensive. And so the cost to improve things, relative to just letting somebody live and grow up and take their shot, can often be pretty high. So that’s not to say that we shouldn’t do more on mental health. We’re interested in doing more, we’re interested in understanding more, but it leaves me not super optimistic that we’re going to find stuff there that we think is more cost effective than the GiveWell top charities.

Rob Wiblin: So it has to be perhaps apps that can be delivered at massive scale for almost no marginal cost, or I suppose if SSRIs work reasonably effectively, they cost cents per day I think, so maybe you could get some traction there. I guess some people make the case that severe depression is significantly worse than death, at least on a per-day basis. And then maybe that will give it a slight edge.

Alexander Berger: So I think that last case is actually what you need to make things pencil, is my impression. And so yeah, if you have a moral theory that allows value to go very negative… Similarly, I think you can think that pain is really bad. And so you might want to make sure that people have access to opioids if they’re suffering from bone cancers. There are people who work on this stuff and know a lot more about it than I do. We funded a little bit of relatively basic work on pain and non-opioid painkillers on somewhat similar grounds. That’s another kind of diversification that you could do. And I think it’s not crazy.

Roles at Open Phil [02:23:22]

Rob Wiblin: So it sounds like there’s three roles that you’re currently hiring for: the South Asian air pollution program lead, a lead for the global aid advocacy program, and then I guess also people to join the cause prioritization team, where they could consider questions like, is mental health going to be competitive with GiveWell top charities? Let’s just go through them one by one. And maybe could you explain the kind of person who’d be a great fit for each? So if there’s someone in the audience who would be suitable, or who knows someone who would be, they can pass it on.

Alexander Berger: So on South Asian air quality, we’re looking for somebody who probably has a lot of experience in India, probably on either policy around something like air quality, so if it’s not air quality, maybe climate or a related topic, or somebody who has a lot of air quality experience, but maybe not as much policy background. And then we’re also looking for somebody who’s relatively Open Phil-ish in their outlook and strategic thinking. So, comfortable with explicit expected value thinking, trying to maximize, trying to have as much impact as possible, and very analytical about their work.

Alexander Berger: I think that’s going to be a really hard role to fill. And frankly, if we don’t end up finding somebody for it, we might just not actually enter the space. Because we always think that having the right program officer is just a really key ingredient to being able to do the work. And so that’s not to say we definitely wouldn’t (maybe we’d try to reallocate somebody or hire somebody else), but that person in that seat for us is really, really crucial. Global aid advocacy is a pretty similar recipe, where we’re looking for somebody with a lot of experience in that field. That field is significantly bigger, and so I think there are more potential people to draw on. They could be based anywhere: we could imagine somebody who’s more focused on Europe, or somebody who’s more focused on the U.S. And it could be somebody who has a background in advocating for more aid, or working in the legislature, or something like that.

Alexander Berger: Or somebody who’s more focused on effectiveness and how you make a dollar go further, and has that perspective. And again, we’re looking for somebody who’s really quite analytical, quantitative, comfortable with thinking through, okay, what is the expected value of a different strategic approach or different grantee in that world? And then on the cause prioritization side, we actually have two different roles. So in the past, we’ve only ever hired for this research fellow role, which is a little bit more academic, a little bit more focused on social science, but we’re also hiring for the first time for a new strategy fellow role. And a lot of the work will be the same: reviewing reports, talking to experts, doing some of these back-of-the-envelope calculations. But the research fellow is more focused on research skills and social science and really trying to interrogate complicated academic papers.

Alexander Berger: The strategy fellow role, meanwhile, is more about engaging with practitioners and experts, and doing quicker, more assumption-driven calculations. I think in terms of background, it’s pretty likely that the research fellow has maybe some graduate training in economics, or at least could have gone that direction. And the strategy fellow role might be more folks who are coming out of consulting or buy-side finance or maybe some think tanks, where they’re more interested in spending some time talking to people, but also comfortable thinking in spreadsheets and things like that.

Rob Wiblin: Normally we’d spend a bunch of time talking about the office culture, what’s distinctive about Open Phil, but people who are interested to hear that can probably go back and listen to the interviews with Lewis Bollard, Ajeya Cotra, David Roodman, and Holden Karnofsky. We have quite a few interviews with people from Open Phil, so maybe we’ve got that covered.

Alexander Berger: I’m glad to be able to join the crowd.

Rob Wiblin: Is there anything you want to add that’s new, or a different take you have on who’d be a good fit for Open Phil?

Alexander Berger: I think that the roles that we’re hiring for are very much loaded on overall Open Phil culture. For the global health and wellbeing cause prioritization team, it might be a little bit more econ-y and a little bit less philosophy by background. A lot of the folks that we’ve hired in the past have been EA-adjacent, but we’d welcome folks who’ve never heard of EA, or folks who are diehard fans. So I think yeah, a lot of people could do really well.

Biggest GHW wins [02:27:12]

Rob Wiblin: What are some of the biggest wins that you’re most proud of from the global health and wellbeing program? Because I guess you’ve been involved in some sense in this over the last ten years. There must have been some really exciting moments.

Alexander Berger: I think that the work on cage-free campaigns, and especially taking them international, has been really impressive. I think a lot of the success in the U.S. was baked in before we came along, but I think we came in and saw that and helped really scale things to the next level. I think that’s a huge testament to the farm animal team and to a bunch of the grantees, like The Humane League and a number of others, that have done really impactful work and just changed how chickens are treated around the world, and there are just astronomical numbers there.

Alexander Berger: Another example is, and this one is probably a little bit weirder for your audience, but we have funded work on macroeconomic stabilization policy for several years. And I think the Federal Reserve and macroeconomic policymakers in the U.S. have really moved in our direction and adopted a lot more expansionary policies and become more focused on increasing employment, relative to worrying about inflation. I think it’s really hard to attribute impact, so I’m not sure how much of that has to do with our funding, but it’s an area where I think the world has really moved our way, and we might’ve played a small role in that, and the stakes I think are quite high in terms of human wellbeing there. So I see that as a big win. One of our grantees there has actually been able to go on to help other Open Phil focus areas think about how to prevent the next pandemic. And so I see that as a good example of the kind of synergy I was talking about earlier.

Alexander Berger: One more example from just science is a broad-spectrum flu vaccine that’s now in phase I human trials with the NIH. And I think it could mean that eventually people wouldn’t have to get seasonal flu shots anymore, and it could also help reduce future pandemic risk. And so that’s phase I, that’s still really early, but it’s actually a good example of… I think we funded that work almost five years ago, and science just takes a long time to play out. And so it’ll be interesting to watch that evolve over the coming years and see if it does end up getting into humans and making a difference.

Rob Wiblin: Some pretty big successes there. This raises the question of to what degree you can successfully and empirically quantify the expected value of these science and policy grants against GiveWell’s top charities, given that this is a hits-based business, where you fund a research project and it may have massive impact, or probably it will have no impact, and the whole portfolio could be paid for by a single success.

Rob Wiblin: You could imagine that you’d funded hundreds of science research projects, and they were all busts except for this mRNA vaccine one, say, and then suddenly the COVID pandemic happens and then it pays for all of it and then a whole bunch more. It seems like our estimates of the expected value ex ante of these projects are just always going to be incredibly, incredibly uncertain.

Alexander Berger: I think that is extremely correct. One bias I have, and again, this is more of a bias than I think a true view, is that there are more things that force your ex ante expected value estimates up than down. And so that makes me usually think that they’re probably biased up. And it’s just, you never give yourself a negative expected value. You just want to make the grant. These are hard things to do. They’re done by program officers who want to make the grants.

Alexander Berger: So I think there’s a lot of structural forces that make you more likely to overestimate cost effectiveness ex ante. We don’t do cost-effectiveness estimates for every grant. We want to be able to do things where it’s like, it’s a structural argument, or it’s much more like cluster thinking rather than sequence thinking. But yeah, we do think about this, and we do think if a whole portfolio, according to our own estimates, is not looking like it’s as good as the GiveWell top charities, that is a reason to step back and say, “Is this actually justifying the work?”

Rob Wiblin: It’s interesting that, on the macroeconomic stabilization stuff — and I know the listeners to this show are obsessed with macroeconomic stabilization, so let’s dwell on that a little bit — it seems like the groups that you were funding have totally won the argument, or there’s been a massive sea change in macroeconomic policy regarding fiscal policy, monetary policy. And yet this is one of the programs that you’re winding down, or at least you don’t make many grants to anymore. Is that maybe just because you spied an opportunity where there needed to be a change, and now that change has happened, and now you’re just not sure in which direction you want to push macroeconomics anymore?

Alexander Berger: So we’re not totally sure about the future of that program. We’re not actively winding it down, but we haven’t been doing a lot more. We have been thinking about pivoting a little bit more to work in Europe, where if you just compare the E.U. policy response to the Great Recession to the American one, I think there’s a huge gap. And also, frankly, on the recoveries from the Great Recession — as much as I complained about the U.S. policy response, the degree of self-inflicted wounds by European monetary policymakers is I think genuinely somewhat astonishing.

Alexander Berger: Obviously there are concerns. We’re an American funder. We don’t know as much about policy in Europe as we do about the U.S., and so there’s risks there, and we try to be cognizant of those. But I think we might continue to do a little bit more in that space and focus more on Europe. Or at some point we might say like… I don’t know if it would be literally declaring victory, but we might say like, we’re not sure there’s a ton more that we need to do here. The case doesn’t look as good as it did before. Why don’t we just step back?

Rob Wiblin: I guess in the U.S. they’re slightly worried that possibly the pendulum has swung too far in the other direction. People always respond to the last thing that went wrong, and now we’ve over-learned the lesson from 2008. But in the E.U., it doesn’t seem like people have learned the lesson from 2008 all that much. It seems like the E.U. would basically go and do exactly the same thing again. Which is strange.

Alexander Berger: I agree. It’s strange.

Why Alexander donated a kidney [02:32:16]

Rob Wiblin: Let’s push and talk about a really interesting and unusual thing that you did, which is just volunteering to hand over your kidney to a total stranger in order to save their life. Can you tell us a bit about what that experience was like, and what motivated you to do that?

Alexander Berger: I got interested in it while I was still in college. I think I was a college senior who was about to join GiveWell, and I think I emailed Holden and Elie to say, hey, by the way, I’m already kind of far along in this process. Can I take a month off at some point to go donate a kidney? And I think they were like, oh God, who did we hire, who is this person. But my motivation was actually pretty normal, I think. Pretty typical utilitarian considerations, like the benefit to the recipient is pretty big. I think you can extend somebody’s life by about ten years, in expectation.

Alexander Berger: I ended up donating to start a chain of people who each had somebody who wanted to donate to them, but who wasn’t a compatible donor. And so I think my chain had like six steps. In expectation, I think that probably only translates to maybe one extra donation. But still, that’s an increase in the expected value. And then the risk to the donor in surgery is small: it’s roughly a one in 3,000 risk of death. And I actually think that the long-term health risks are probably an order of magnitude higher there. When I first heard about this, I remember just thinking “Wow, these donors seem really weird. I do not understand the appeal of this at all.” There was a New Yorker profile that Larissa MacFarquhar wrote. I remember reading it and being like, “These people seem so weird.”

Alexander Berger: And then somehow I came across it separately later on, in a separate context. And somebody was just writing about other people who donated and the fact that it was really safe. And I was like, oh wait. So this is not a crazy decision. It’s just like, you run a very small calculable risk to yourself, and it can benefit other people a ton. And I was like, oh, okay. That makes a lot more sense to me. That might seem like a good decision, like a very reasonable way to help other people. And I like to think about lots of reasonable ways to help other people, not just one.

Rob Wiblin: So just to get the numbers out here, did you say it was a one in 1,000 or one in 10,000 risk of death as a donor?

Alexander Berger: So I think the numbers are like one in 3,000, and that’s basically the risk of death in surgery. And if you’re relatively healthy, it’s lower. And then I think that the long-term risks to your health are harder to track, right? Because it’s hard to do the 20-year follow-ups. And frankly, living donation is not an ancient phenomenon, and so there’s not millennia of data there. But I think that the long-term increase in your risk of kidney disease might be something like one percentage point. And so that’s significantly bigger than the one in 3,000 risk of death in surgery. But I’m a bit of a techno-optimist. I think that eventually we’ll be able to get kidneys for human transplantation from pigs probably, or be able to grow them in a vat, or whatever. And also, they jump you to the top of the waiting list if you’re actually a kidney donor. So I’m not too worried for my own sake.
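
A rough expected-value sketch of the numbers Alexander gives here (the 40-year figure for a donor’s remaining life expectancy is an illustrative assumption, not something he cites):

```python
# Illustrative expected-value sketch of non-directed kidney donation, using the rough
# numbers from the conversation. The donor's remaining life expectancy is a made-up
# assumption for illustration.

life_years_per_transplant = 10        # rough benefit to a recipient, in expectation
extra_transplants_enabled = 1         # a ~6-step chain treated as roughly one extra donation in expectation
expected_benefit_years = life_years_per_transplant * extra_transplants_enabled   # ~10 life years

death_risk_in_surgery = 1 / 3000
assumed_remaining_life_years = 40     # hypothetical assumption for a relatively healthy donor
expected_cost_years = death_risk_in_surgery * assumed_remaining_life_years       # ~0.013 years

print(f"Expected benefit: ~{expected_benefit_years:.0f} life years")
print(f"Expected cost from surgical mortality alone: ~{expected_cost_years * 365:.0f} days")  # ~5 days
# Alexander notes the long-term risk (~1 percentage point extra chance of kidney disease)
# is probably an order of magnitude larger than the surgical risk, which still leaves the
# expected benefit far larger than the expected cost.
```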

Rob Wiblin: I actually don’t understand why people think that this is so strange. A one in 3,000 risk of death… It’s not absolutely negligible, but people take that kind of risk all the time in other jobs that they take in order to benefit other people. Or even just, they spend more than one third of their life doing something highly unpleasant because they think it benefits others. And it probably has less of a benefit than saving multiple people’s lives. So what’s going on?

Alexander Berger: It’s actually worse than that. I think people do that all the time for fun. I think the risk of death climbing Everest is significantly higher than that.

Rob Wiblin: I do think that is a bit nuts.

Alexander Berger: Fair enough. But I think it’s like an order of magnitude or two higher, so yeah.

Rob Wiblin: Right. Yeah.

Alexander Berger: The other comparison I use is, I think it’s around the same as the risk of death in giving birth. Which is totally… I mean, people do it all the time, right? I think it’s honestly not about the risk of death. I think the bodily integrity thing is doing a lot of work in people’s minds. I don’t identify that much with my body; I’m a little more in my own head or something. And so for me that just wasn’t a huge obstacle.

Rob Wiblin: Do you think that some of the people who think that it’s a really odd thing to do maybe just massively overestimate how dangerous it is?

Alexander Berger: Totally possible, and obviously people are terrible about reasoning about small risks. So that seems like a very plausible idea, but I don’t actually think people think that it’s deadly. I think people think it’s weird, in some sense. You know? You’re agreeing to surgery. So it’s like, even if you were definitely going to live, there’s a sense of giving something of yourself up, in a fairly costly way. Like I took a few weeks off work, it’s moderately painful. I think people are weirder about that sacrifice than they are about literally the risk of death. But I could totally be wrong. This is folk psychology.

Rob Wiblin: How does this fit into the broader ethics of your life? Do you think it’s comparable in impact to your work at GiveWell, say?

Alexander Berger: Personally, I really am interested in trying to be a good person from multiple perspectives, or in multiple magisteria. So I’m not that interested in that question. Soon after I donated a kidney, I got into this back-and-forth with Jeff Kaufman, who’s an effective altruist in Boston who I really like. And he was saying, look, it doesn’t seem that rational to donate a kidney. The benefits are only as good as saving a life. And if I give to GiveWell, I can save a life for something like $5,000. And I would definitely rather give $5,000 more than donate my kidney. So I’m just not going to donate my kidney.

Alexander Berger: And I don’t think that’s literally wrong, but I guess I want to deny that you face that choice. It’s like, you have a choice of whether to donate a kidney, and you have a choice of whether to donate more money. And I’m pretty happy with those being separate reasonable choices that you can think about within their own framework, rather than driving everything to be in totally comparable units and having only one overall view, where you think about your donations in exactly the same way as you think about your career, in exactly the same way that you think about whether to donate a kidney.

Alexander Berger: Another thing that comes to mind for me on this is like, I have a one-year-old daughter, and I think that having kids is a remarkably pro-social, good thing. Not like the best EA thing you can do, but just like normal, run-of-the-mill good. I think it’s a good thing for the world, not a bad thing for the world. I think it would be good if more people did more of it. I actually see it as a weakness of the EA community that it feels like because of effectiveness, we’re so focused on our main advice being the biggest things, like really cost-effective uses of money and big uses of your career. But I actually think having other suggestions for people to consider as part of what it means to be a good person or an altruistic life would actually just be good. People might do them, and that would be good. And then also, I think it could help with community growth.

Rob Wiblin: I agree with the sentiment that it’s really dangerous for people’s lives to just become focused on one thing, and on having only one goal. And I guess, occasionally you see that with effective altruism: people want to justify everything in their life in terms of the impartial impact that it has. And I just think that’s a path to nowhere. It often leads people to be unhappy.

Rob Wiblin: And it’s much more practical, as a real human being, to have multiple different goals and multiple different parts of your life that you optimize separately and put some resources into, because you care about all of those goals intrinsically for different reasons. I might push back on a little bit of that, though. If I was talking to someone who was trying to decide, should I work to make money to save a life by giving to GiveWell, or should I give my kidney? And say their salary was high enough that, over the course of a week, they could make enough money that it would save more lives, and say they would rather go to work than have their kidney taken out… It does seem like you can make a dominance argument that it saves more lives and is more pleasant for the person.

Alexander Berger: Totally. I’m really open to that dominance argument. And I think it’s correct in some cases. But like, I don’t know. Jeff works at Google, and Google will give you a month off if you donate a kidney. So the concrete trade-off is not forced. And also, I don’t actually think his view is crazy. If you were trying to always be perfectly scrupulous about every decision that you make, you would go crazy. So some sense of triaging or prioritizing I actually think is totally correct and healthy. This was in the space of decisions that felt big enough and interesting enough, and that I felt good about. It was a concrete way to help people. But yeah, not everybody’s vegetarian, because people make different trade-offs. I don’t think that that’s crazy.

Rob Wiblin: Yeah, and they focus on having a positive impact in different aspects of their lives, where I guess it’s a trade-off between the impact that it has on the world and how much it matters to you to make that change.

Rob Wiblin: Let’s go back to this question of why people think that it’s so strange to give your kidney. I guess I just cannot put myself in the mentality of… It sounds like you’re saying people have this intuition that bodily integrity is somehow extremely important, and that ever losing anything from your body is just pro tanto extremely bad, or something you shouldn’t do except for the most extreme reasons, which I guess even saving someone’s life doesn’t rise to the level of being. Can you put yourself in that headspace better than I can?

Alexander Berger: No. I’m too much of a utilitarian.

Rob Wiblin: Okay. But not even a utilitarian, just a pragmatist.

Alexander Berger: Yeah. I really don’t get it. Around the time when I donated, I actually wrote an op-ed in The New York Times arguing that we should have a government compensation system for people who want to donate. Because there’s just an addressable shortage, where you can donate and live a totally good life. It’s not very risky. This is fine. And like, there’s a government system for allocating kidneys to people who need them the most. And it would actually literally save the government money, because people are mostly on dialysis, which is really expensive and it’s very painful and you die sooner. And so giving them a transplant, it saves money, it extends life. And just not enough people sign up to do it voluntarily. And so we can make it worth their while, similarly to how we pay cops and firefighters to take risks.

Alexander Berger: I don’t feel like this is some crazy idea. But I think that the reason it doesn’t happen is actually opposition from people who are worried about coercion and have a sense of bodily integrity as something inviolable.

Alexander Berger: That makes them feel like there’s something really bad here. Honestly, I think the Catholic Church is actually one of the most important forces globally against, in any country, allowing people to be compensated for donation. It’s very interesting to me, because I just do not share that intuition. I mean, if you think about it as treating people as a means to an end, I could imagine it. Like if you thought it was a super exploitative system, where the donors were treated really badly, I could get myself in the headspace.

Rob Wiblin: But then it just seems like you could patch it by raising the amount that they’re paid and treating people better. So banning it wouldn’t be the solution.

Alexander Berger: Yeah, treat people better. One of the things I said in my op-ed, I think, was like look. We should pay people and treat them as good people. A little bit like paid surrogacy. I think people have bioethical qualms about it sometimes, but by and large, people think of surrogates as simultaneously motivated by the money and good people doing a good thing. And I think we should aspire to have a similar system with state payments for people who donate kidneys. Where it’s like, you did a nice altruistic thing, and it paid for college or whatever.

Rob Wiblin: I have strong feelings about this topic. It’s one of the ones that drives me to become a frothing-at-the-mouth lunatic. Because I’m extremely frustrated with the people that I’m debating with. I mean, I have looked into this a lot, because there’s a lot of counterarguments. And as far as I can tell, the arguments against allowing people to sell their kidneys voluntarily under a suitably regulated system are all terrible. I think there’s actually no good ones, basically. And obviously this isn’t the most pressing problem in the world, because the stakes are like tens of thousands of lives in America. So I guess globally, it would be hundreds of thousands of lives, potentially. So there’s bigger issues. But it is just astonishing that just through not being willing to do moral philosophy properly, or not being willing to get over our disgust and think about things sensibly, we’re not only allowing hundreds of thousands of people to die globally, we’re forcing them to. We’re using state-backed violence to prevent people from taking voluntary actions that would save lives and cure diseases. So I think of these as like government-backed killings every time someone dies because they weren’t able to buy a kidney in time. And I just think it’s pretty appalling.

Alexander Berger: You agreed with me too strongly. Now I have to take the other side.

Rob Wiblin: Okay. Go for it.

Alexander Berger: One of my professors from college, Debra Satz, has written on this. I think she’s really worried about the egalitarian implications of allowing payments, in the sense that if you give people offers that they can’t refuse, or if you allowed people to collateralize their kidneys — which you might if it was genuinely a pure market — you could end up with people in very bad situations, even though ostensibly you are trying to widen their choice set. I will admit, I don’t find this argument compelling myself. And I probably didn’t do the best job rendering it. So you might want to go check out her book. But I don’t find this super persuasive, just given the lives at stake. It feels like too much weight on that kind of concern to me.

Alexander Berger: And I also feel like, if you’re really worried about poor people donating under duress, you could only allow rich people to donate. I feel like there are very valid corner solutions. You could have a long waiting period. You could make people take mental health evaluations. You could only allow people to use the money for very pro-social things. Charity might not be enough of an incentive, but maybe college or, you know, whatever. So you could imagine systems that work here to solve the problem and not cause the social harm that people are worried about. But I have found it frustrating also, and it strikes me as a kind of thing where it’s like… It’s not the world’s biggest problem. But it is a frustrating own goal, where policy actually is a big part of the problem. Where if policy just got out of the way a little bit more, people could just flourish.

Rob Wiblin: So I think I said all the counterarguments are either all bad or they’re all solvable. And I don’t know exactly what I think about people borrowing money against their kidneys. I also feel nervous about that. That doesn’t sound so great. But it seems like you can just ban that, like just say you can never collect someone’s kidney. You can never force them to give up their kidney. If you try to do that, we’ll stick you in prison. So there’s almost always a much narrower solution to these problems than banning it outright.

Rob Wiblin: And on the idea of an offer you can’t refuse, I think the notion there is that the price would be so high that someone would feel like they can’t say no. But I would love someone to offer me a price on my kidney that’s so high that I can’t imagine saying no. The only thing worse than an offer you can’t refuse in this context seems like not getting an offer that you can’t refuse. Because almost by definition, it’s so much money that you value the money much more than you want the kidney.

Alexander Berger: I’m pretty sympathetic to that, and to the background argument for autonomy. That makes me wonder: should I try to trigger you by like… Do you want to talk more about your feelings towards bioethics as a profession? Because I feel like with some of the COVID stuff too, it might’ve come up. And it seems like a related argument.

Rob Wiblin: I have a lot to say about that. But I think maybe we’ll have to save that for another episode. I have been trying to line up someone to discuss bioethics as a potentially important area of policy reform. Fingers crossed that we’ll manage to get an episode out in the next year or two. And I’ll have to do my best to maintain my calm, I think, during that recording.

Rob Wiblin: One final thing is that people worry, alternatively, about the price for kidneys being too low. But I mean, you might also worry about the price for labor being too low, people being paid too little. But with that, we realized that the answer isn’t to ban work, or to ban jobs, or people ever being paid for doing stuff. We think that the answer is probably a minimum wage, or unions, or something like that.

Alexander Berger: Government top-ups. And I think similarly, even if the market-clearing rate for a kidney was only $10,000 or something, if you’re worried that’s not enough for people to take the risk reasonably, it is worth it just from literally a health system savings perspective to pay up to $100,000. So there’s so much surplus to go around here. We can make this work.

Rob Wiblin: One more thing I’ll say is that I’ve been astonished by people who I normally think of as being very thoughtful and analytical, how they’ll give in to their feelings on this one. That they just feel revolted by the prospect of someone selling their kidney, I guess especially maybe if they think it’s not enough money or whatever other reason. But I think it’s just so important in these cases where the stakes are many people’s lives, to step outside of that and not allow yourself to be guided by disgust, which historically has been an atrocious guide to moral behavior.

Rob Wiblin: People used to think it was blatantly obvious that homosexuality was grossly immoral, because they found the idea disgusting. I don’t think anyone accepts…or at least, I can imagine very few people in the audience accept that as a legitimate argument now. But similarly, the fact that people find the thought of someone selling their kidney kind of distasteful I just think is no moral guide to whether it’s good or bad.

Alexander Berger: I agree with that. And I also think people had the same concerns about living donation originally. There are these weird Freudian analyses about how you would have to be messed up in order to voluntarily sacrifice yourself for a sibling or something. A funny thing, though, is that nobody says that to me. Nobody is like, oh, that’s so perverse, how could you do that? People say nice things to me. So I don’t get this argument as much as you might get it, when it comes up.

Rob Wiblin: I guess let’s wrap up on the kidney section. If anyone didn’t like what I said, you can send your complaints to podcast@80000hours.org. I guess the only thing I’ll add is that I think it’s fantastic that you gave your kidney, Alexander. It made me think a lot more of you at the time, and it still does today.

Alexander Berger: Thanks.

Stories from the early days of GiveWell [02:49:04]

Rob Wiblin: We’ve been keeping you for quite a while, so we should let you get back to managing this enormous program at Open Phil. You’ve been around for ten years, though. So I’m curious to know whether you have any great, entertaining, or interesting stories from the earlier — and no doubt far more scrappy — days of GiveWell back in 2011, 2012, 2013.

Alexander Berger: When I joined there were four other people, and we worked at a co-working space in New York. And we would just argue with each other loudly about population ethics, and the cost per life saved for different charities. And my recollection is that the graphic designers who worked at desks around us were all just like, who are these people? Like what is going on? And another fun, early, small GiveWell moment was at one point that first year, we went bowling and GiveWell paid for a pitcher of beer. And I think we realized it was the first time GiveWell had ever paid for alcohol for a staff event. I think we ended up sending multiple donors a photo of the receipt to show that we were growing up, we were becoming legitimate, and look how we’re spending your money.

Alexander Berger: I guess a slightly more serious one is I have this memory of a highlight from early on, which is Holden staying late one night to argue with me, trying to convince me that the way I had thought about taking the job at GiveWell was totally wrong. And he convinced me, actually. And I came away thinking that while I might have been thinking about it wrong, I had definitely ended up in the right place. Because it was somebody who was able to engage really deeply with my weird marginal value argument that had gotten me to GiveWell and say, “You’re totally wrong.” And in a very interesting, convincing way. But it also made me think like, okay. I’m here for the wrong reasons, but—

Rob Wiblin: I want to be around these people.

Alexander Berger: —but this is really where I should be. And obviously, I’ve been really lucky over the last few years to grow a lot with GiveWell and then with Open Phil as we spun out.

Rob Wiblin: Was Holden arguing against you coming to GiveWell, in effect?

Alexander Berger: Yeah. Basically, yeah.

Rob Wiblin: That coming to GiveWell wasn’t as effective as you were claiming it was?

Alexander Berger: Not that it wasn’t as effective as I thought it was, but like, my thinking on it had been wrong. So basically, I had been trying to decide between GiveWell and a nonprofit organization that consults for other organizations, which was bigger and more established at the time. And I thought, well, look. If they don’t hire me, they’re going to hire somebody else just like me. They get like 200 applications for every job. Whereas at the time I was like, if GiveWell doesn’t hire me, they’re not going to hire anybody. So I get full counterfactual credit for my contribution at GiveWell, whereas at the other organization, I would’ve only gotten a very small portion of my counterfactual credit. And I thought because they were bigger and more established, that cut the other way. And so, Holden convinced me that’s thinking about it wrong.

Alexander Berger: I was only thinking about one step in the chain of displaced jobs. And if you think more rigorously about all the subsequent displaced steps in the job chain, under some assumptions, you can come back to thinking that the first-order impact of the job is actually a closer estimate of your impact than the difference between you and the next person in that job. And those are controversial assumptions. I don’t think that this is obviously correct. But I think it’s a good example of how, sometimes the first step towards EA thinking can be misguided, actually. And if you go deeper and deeper, sometimes it comes back around to the common-sense approach. And that’s definitely been part of my journey over the last few years.
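
To make the arithmetic behind that chain-of-displaced-jobs point concrete, here is a minimal toy sketch. The model and every number in it are hypothetical illustrations, not anything GiveWell or Open Phil actually uses. It leans on the two strong assumptions Alexander alludes to: each displaced person produces roughly as much in the next role down the chain as they would have in the one they lost, and the chain ends at a role that would otherwise have gone unfilled.

```python
# Toy model of the displacement-chain argument (hypothetical numbers).
# Assumption: each displaced candidate slides one role down the chain and
# produces about as much there as they would have in the role they lost,
# and the chain ends at a role that would otherwise have stayed empty.

your_output = 100                  # value per year you'd add in the role
displaced_outputs = [95, 90, 85]   # runner-up, then each person they displace in turn

# Step-one "replaceability" thinking: your impact is just the gap between
# you and the runner-up for this one job.
naive_impact = your_output - displaced_outputs[0]          # = 5

# Chain thinking: if you take the job, everyone else slides one role down
# and keeps producing, and the last role filled would otherwise be empty.
world_with_you = your_output + sum(displaced_outputs)      # = 370
world_without_you = sum(displaced_outputs)                 # = 270 (chain shifts up one role)
chain_impact = world_with_you - world_without_you          # = 100, i.e. your first-order output

print(naive_impact, chain_impact)  # 5 vs. 100
```

Relax either assumption (say, the chain peters out quickly, or displaced people land in much less valuable roles) and the answer moves back toward the naive estimate, which is why Alexander describes the assumptions as controversial.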

Rob Wiblin: I can’t tell whether Holden is terrible at recruitment or an absolute recruitment genius.

Alexander Berger: Exactly. Yep. I ended up feeling like I was in the right place. I know you want to end on that, but I guess I’ll just end with a final plug for people to check out our jobs page. We’d be really excited for folks to apply, and we think it’s a really exciting, high-impact place to work.

Rob Wiblin: Just one final thing I was going to say: your story about discussing population ethics and all of these bizarre moral considerations while the graphic designers looked askance at you… I think I had exactly the same experience when I moved to Oxford to work at the Centre for Effective Altruism, and was suddenly surrounded by constant talk about what’s going to be effective.

Rob Wiblin: Things were so new, and there was this constant froth of exciting ideas coming up. Of course, we were sharing an office at the time with the Future of Humanity Institute. So I think almost nothing that we could plausibly say or fund would make them think anything strange about us. If anything, probably, we were the normal ones in the office. Maybe that’s influenced my intellectual trajectory in a way I haven’t fully appreciated.

Alexander Berger: I’m sure it has. Yep.

Rob Wiblin: Well, I really hope that someone in the audience can either fill one of those roles, or find someone else who can. I’m going to be so sad if the South Asia air pollution thing doesn’t get off the ground. I would just love to see that program flourish. So best of luck filling all of those positions.

Alexander Berger: Thanks so much.

Rob Wiblin: My guest today has been Alexander Berger. Thanks so much for coming on the 80,000 Hours podcast.

Alexander Berger: It’s been a pleasure. I’m glad I’ll get to listen to myself, and I’m sure I’ll feel great pain over my own voice.

Outro [02:53:27]

A few months ago we launched a compilation of ten episodes of the 80,000 Hours Podcast called ‘Effective Altruism: An Introduction’.

We chose those ten to cover the core material, and to help listeners quickly get up to speed on where, in broad terms, effective altruist research stands today. We’re going to substitute this interview into that series to cover global health and wellbeing, which we didn’t have a reference episode on when we launched.

I’ve been excited to see a regular flow of people starting Effective Altruism: An Introduction even though we haven’t kept promoting it, which suggests listeners are sharing it with people they know.

If you’d like to see what the other nine episodes are and listen to them in a convenient format, just search for ‘effective altruism’ wherever you get podcasts. And if you’d like to introduce someone to this show, you could let them know that that compilation of key episodes is there for them.

Alright — the 80,000 Hours Podcast is produced by Keiran Harris.

Audio engineering by Ben Cordell.

Full transcripts with links to learn more are available on our website and produced by Sofia Davis-Fogel.

Thanks for joining, talk to you again soon.

Learn more

Funding effective non-profits (international development)

Future generations and their moral significance

Comments (1)

Coming out of this interview, I really look forward to seeing the work Open Phil pursues around air quality in Asia. Those interested in this topic may enjoy following:
