
Summary

I’m new here, but not to EA. I’m torn on a classic EA question of risk aversion: 

Is it better to pursue a very small chance of gargantuan impact, or a very high chance of smaller (but still important) impact?

I know this is a basic and well-covered EA question, but still a controversial one. In recent months I have read, considered, and benefitted from various EA answers, but I have reservations about each of them and remain open to persuasion. In this post I reproduce my inner debate in hopes that EAs with similar values can address my reservations, or at least explain how they approach the question themselves. Because I am still working through my ideas, they may be a bit unpolished.

The post is also a brief advertisement. My search for answers prompted me to organize a related panel discussion at the Yale Philanthropy Conference, to be held on February 12th. Anyone interested can register here, or read more about it in this separate post.

The question

I’m a grad student on the board of my EA student group. My particular graduate program is versatile enough to allow me to choose from a broad range of career options. In attempting to do so, I’ve grappled with an ethical question that seems pretty central to EA. 

(I know there are abundant resources to help individuals find the highest impact career for them. For now, I’m not looking for personalized advice based on my particular skillset. I’m just hoping smart people can weigh in on the broader question itself, to help me arrive at the most thoughtful and tested answer possible in the abstract.)

The question is, how risk-averse should we be in striving for higher impact, if doing so risks making no impact at all (or worse, a negative one)? Is it better to pursue a very small chance of immense positive impact, or a very high chance of a smaller one? How useful are raw expected value calculations to answering this question? When might these calculations become too fuzzy to trust? Is Pascal’s mugging a convincing rebuttal to those chasing minute probabilities of immensely positive impacts? And how different are these answers for individuals than for the EA community as a whole?

(Note that by “risk” I mean risk-to-impact, not catastrophic X- or S-risks here.)

I have read much of 80,000 Hours’ literature on the topic, as well as relevant blog posts here, here, and here, and an EA forum post here. I also asked Benjamin Todd a condensed version of the question at the EA Student Summit in October, and his answer helpfully nudged me towards increased risk tolerance. Still, I have reservations. My current thinking goes as follows.

The inner debate

My name is Andrew, so for ease of reading I’ll present this as a debate between Andy and Drew. Andy is on team risk neutrality, and Drew is on team risk aversion.

 

Andy: Risk aversion stems largely from the law of diminishing returns. In our personal lives, the loss of $20,000 feels worse than the gain of $30,000 feels good – in part because the nearer we get to zero, the more precious each dollar becomes. Likewise, doubling the EA movement would not be as good as reducing it to zero would be bad. With this in mind, perhaps highly influential EA leaders should be risk averse as well.
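To put toy numbers on that asymmetry (purely illustrative, not a claim about anyone’s actual utility function): with a concave utility like log wealth and a starting wealth of $50,000,

$$\ln(30{,}000) - \ln(50{,}000) \approx -0.51, \qquad \ln(80{,}000) - \ln(50{,}000) \approx +0.47$$

A coin flip between losing $20,000 and gaining $30,000 comes out positive in expected dollars but negative in expected utility. That’s the formal engine behind individual risk aversion.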

But most of us work only at the margins of the overall EA community. Humanity’s progress towards our shared goals depends less on our individual efforts than it does on the aggregate impact of the larger team. Our personal impacts are rarely a large enough portion of the whole for our returns to really diminish. Therefore, risk neutrality is the maximizing long-run strategy for the EA movement as a whole.

All it takes to recognize this is to detach yourself enough from your personal life to keep the macro perspective in mind. A perfectly selfless EA shouldn’t care whether her individual contribution makes a decisive difference – for the fate of the world is not on her shoulders alone.

 

Drew: Well, maybe I’m not a perfectly selfless EA, but my contribution matters to me all the same. To reflect on my deathbed that my entire professional career was for naught sounds pretty awful. And it sounds even less bearable to expect this in advance! – to wake up each day all but knowing my efforts will be futile, and then have to put forth effort anyway.

I was first drawn to EA to be more confident that my charitable donations had positive impact. It would seem a great irony to dive so deep down the rabbit hole that I come out perfectly happy to have no confidence whatsoever.

Most of us would say we’re into EA to do the most good we can do. But when it comes down to it, maybe what I mean is “to do the most good I can confidently do.” I’m not convinced that’s equivalent to chasing the highest product of two hazy estimates in an expected value formula. Pascal’s Mugger weighs heavy on me: the nearer a probability gets to zero, the less crunching the numbers seems to pass the common-sense test. We write of “expected value” – but what does it mean to “expect” something you strongly believe won’t occur?

How injured do you expect to become from driving to work on Monday? What are your expected winnings from the lottery? The colloquial answer is “none.” In common parlance, if I tell someone the probability of an event is one in 300 million and then ask how much good they expect to result from it, their response will not likely depend on the value being divided by 300 million. There are chances so low we safely discard them every day – lest our heuristic tools use us, instead of vice versa.
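Run the formal calculation and the disconnect becomes vivid. Take a made-up lottery with a 1-in-300-million chance at a $300 million jackpot:

$$\mathbb{E}[\text{winnings}] = \frac{1}{300{,}000{,}000} \times \$300{,}000{,}000 = \$1$$

The formula says to “expect” a dollar per ticket, but no one buying a ticket expects anything of the sort. The formal and colloquial senses of “expect” come apart exactly where the probabilities get tiny.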

This touches on a broader debate about how much we can trust our intuitions. There are certainly times we cannot. But there also seem to be times our intuitions tip us off that we’re doing the wrong math from some faulty assumption or another. My values incline me to be cautious and humble about my ability to expect much of anything. And that makes what good I can foresee feel too precious to gamble on longshots.

 

Andy: Okay, but just as you want to feel confident you’re helping, don’t you also want to work on the most infuriating injustices of our time?

“Did you exchange, 
a walk-on part in the war, 
for a lead role in a cage?”              - Pink Floyd

War. Climate change. Entrenched inequality. Mass incarceration, and systemic racism. Authoritarian populism, fueled by misinformation, putting democracy and liberalism on the brink. These are vexing, stubborn issues that future generations will remember us by. They are not very tractable. The path to change is hazy, and perhaps dependent on forces beyond your control. They’re rarely the same issues where your individual efforts are likely to be decisive.

And yet…they feel too urgent to deprioritize. This is where the common criticism that EA neglects systemic change comes from (a criticism I agree is ill-founded). Atrocities committed by my government – in my name, using my tax dollars – feel like my responsibility to stop in a way that malaria just doesn’t.

Disease is tragic, but it’s always been here. It can seem like an “act of God” or nature – like getting struck by lightning – that isn’t necessarily anyone’s fault. Violence and oppression, on the other hand, are clearly humanity’s fault. These are acts of man that we intentionally inflict on one another. For whatever reason, that feels tougher to keep off my conscience. I feel a stronger obligation to struggle against that oppression – even if it’s not my doing – perhaps because it seems likelier to define my generation looking back. I think it was Patton who told his troops:

“Thirty years from now when you're sitting by your fireside with your grandson on your knee and he asks, 'What did you do in the great World War Two?' You won't have to cough and say, 'Well, your granddaddy shoveled shi-[stosomiasis pills] in [Liberia].'”

…or something like that.

I’m rambling, but the point is this: I’m more willing to set aside causes with very low odds of success than I am to dismiss causes addressing very unjust circumstances, even if they’re the same causes!  What gives?

I think part of it is that I’m not strictly utilitarian. For one thing, I think we have stronger negative moral obligations (an obligation not to kill) than positive ones (an obligation to save life). Most people seemingly agree. It is illegal (and universally odious) to murder – but not to give nothing to charity, even when you have enough to save many lives. And most people extend this distinction even to murders they personally played no part in. We are more upset by the news of a mass shooting than we would be by news that a comparable number of victims died in a car crash. An officer kneeling on George Floyd’s neck bothers people more than someone dying from cancer. Both are tragic, but one seems more than that; one seems unjust.

If injustice weighs heavier than mere misfortune, maybe my expected value calculations should be weighted accordingly. And if so, maybe I should tolerate more risk on low-odds/high-injustice issues than I would on other longshots, and embrace work on systemic change accordingly. I’m more willing to be a mere cog in a machine if that machine is slowly fixing injustice I see right in front of me than I am if it’s aiming to solve hypothetical problems we might not even have.
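Schematically (this is a sketch of my intuition, not a worked-out moral theory), I seem to be ranking causes not by probability times value alone, but by something like

$$\text{score} = p \times V \times w, \qquad w > 1 \text{ when the harm is unjust rather than merely unfortunate}$$

On that scoring, a longshot against oppression can outrank a safer bet against misfortune even when the raw expected values say otherwise.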

 

Drew: That’s just letting emotion – or the romantic allure of revolutionary change – cloud your judgment. Maybe you feel worse about George Floyd than another malaria death, but you shouldn’t. And you especially shouldn’t when you individually held the power to avert the latter, but chose to dump it into something speculative instead.

Systemic change is systemic precisely because it hinges on broad socioeconomic forces none of us can command. It takes time, and monetary intervention cannot always accelerate its arrival. This leads to yet another argument for risk aversion: deep uncertainty about the future.

Some figures in our expected value formula involve much fuzzier guesswork than others. We often misunderstand cause and effect. When we do understand it, we often miscalculate which effects or outcomes are really desirable, or underestimate the impact of unforeseen consequences. The less certain the outcome becomes, the more we have to adjust the murky chance X will result from Y by the murky chance that X would even be good, or relevant, or wouldn’t happen anyway, etc.
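And those adjustments compound. As a toy illustration: if a theory of change has five murky links – funding leads to research, research to advocacy, advocacy to policy, and so on – and I generously give each link an 80% chance of holding, the whole chain survives only about a third of the time:

$$0.8^5 \approx 0.33$$

Every additional step of guesswork multiplies the doubt.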

The further into the future we project, the worse this problem gets. One year ago today, I’d have been dumbstruck to learn how the rest of 2020 turned out. It feels bold to envision the world even one year from now – let alone ten, or fifty, or five hundred.

Although Hilary Greaves would disagree, this uncertainty inclines me to prioritize the good I can confidently do right away, absent some intuitive reason to suspect it would have adverse long-term consequences. Without veering too far into population ethics, I feel we have stronger moral obligations to existing lives we can confidently impact than to potential people we have little clue how to help. This would seemingly favor short-term interventions like bednets or cash transfers over advocacy for more speculative projects.

 

Andy: To the contrary, uncertainty means we’ll increase our chance of impact by hedging our bets across multiple causes. The best way to do that is through the specialization of labor. If your personal skillset is best suited for a low-chance-of-huge-impact EA cause, that’s the one you should pursue.

Suppose you are agonizing over whether to apply your life’s work to project A (with a low chance of immense impact) or project B (with a high chance of smaller impact). Stressed, you go to a bar to calm your nerves. While you’re there, you run into another stressed-out person. In the course of conversation, you discover that he, too, is torn between projects A and B. Amazing!

By night’s end, you make a deal. You pledge your life’s work to project A, while he pledges his to project B, such that each project goes to the person for whom it is a slightly better fit in interests or abilities.

Without this agreement, you each may have tried to straddle both projects in a less efficient way, just to ensure you did some minimum amount of good. By splitting up instead, you increase the overall output expected between you. And even if project A flamed out, or wound up being way less important than B, would you really deserve less credit for B’s eventual achievement? You were part of the team that got it done!
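To put toy numbers on the bar deal (invented to illustrate the mechanism, nothing more): suppose each of you is 20% more productive on your better-fit project, and that without the deal each of you would have split your effort evenly between A and B. Then

$$\text{Split: } 2 \times (0.5 \times 1.2 + 0.5 \times 1.0) = 2.2, \qquad \text{Specialize: } 1.2 + 1.2 = 2.4$$

The pair’s combined expected output rises about 9%, while the portfolio across the two causes stays just as diversified.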

Just broaden your conception of the team to the whole EA community, and stop worrying about how much of the “credit” is yours. Decide which causes are neglected within the community – and among those, pick your best personal fit. Dedicating your contributions to that cause is totally justifiable even if you personally never accomplish much in your lifetime.

It can be a relief to think of it this way, because it lowers the stakes of our individual life choices. This allows us more freedom to apply our comparative advantages wherever they’d be most useful, which probably makes us happier in other ways too.

 

Drew: Okay, but…what if you both did B?

Teamwork is well and good, but it doesn’t make individual decisions any less real or consequential to simply imagine them as linked with other decisions. In the above example, you may have both chosen B upon further reflection. If so, the communal approach reduced your aggregate impact, by causing one of you to fritter away his efforts on what wound up, in hindsight, being a dead end. Hedging your bets can be selfish too, if all it does in practice is free you from the need to choose well, or liberate you to do what you wanted for ulterior reasons. No matter how you slice it, everyone ultimately has to place their bets independently and live with the consequences.

***

I’ll let Drew have the last word at that. You can see how I spin myself in circles, in what feels as much an emotional choice as a rational one.

I imagine I’ve bitten off more than I can chew, in that I touched on more controversial debates in philosophy than even the most patient interlocutor could address all at once. Please feel no obligation to try – but also, don’t hold back in correcting me where I made novice errors. I’m here to learn and I have thick skin.

The Yale Philanthropy Conference

Finally, as mentioned, I decided to host a panel discussion on how prominent philanthropic organizations address these questions in setting their strategic priorities. Even if your personal views on the question are settled, the panel could be an opportunity to both learn about and influence the thinking of consequential philanthropic decision-makers. For more details, see my separate post on the panel and conference here.

Comments

I don't have any answers for you, I'm afraid- but I wanted to say that I really like the way you wrote this up. Framing your inner conflict as a debate between Andy and Drew made it very clear and engaging to read. 

Thanks for the encouragement!  The framing's for my own benefit, too. I've found it helps me navigate big decisions to write out the best case I can think of for both sides, and then reread sometime later to see which best convinces me.

Yeah, I can see how that would be helpful-- I'm thinking of having a go at it as a decision-making tool myself. 

The approach kind of reminds me of internal family systems therapy, actually: trying to reconcile different parts of yourself by imagining them as different people. The main difference being that there's  no trauma in this kind of scenario (hopefully, anyway!), and a lot less psychotherapy jargon :)

 Just broaden your conception of the team to the whole EA community, and stop worrying about how much of the “credit” is yours.

To me, this is the crux. If you can flip that switch, problem (practically) solved—you can take on huge amounts of personal risk, safe in the knowledge that the community as a whole is diversified.

Easier said than done, though: by and large, humans aren’t wired that way. If there’s a psychological hurdle tougher than the idea that you should give away everything you have, it’s the idea that you should give away everything you have for an uncertain payout.

What if you and your friend bring the same skills and effort to the team, each of you taking big bets on cause areas, but your friend’s bets pay out and yours don’t? All credit goes to your friend, and you feel like a failure. Of course you do!—because effort and skill and luck are all hopelessly tangled up; your friend will be (rightfully) seen as effortful and skilled, and no one will ever be able to tell how hard you tried.

What can make that possibility less daunting?

  1. Notice when you’re thinking in terms of moral luck. Try to appreciate your teammates for their efforts, and appreciate them extra for taking risks.
  2. Get close with your team. There’s a big difference, I expect, between knowing you’re a cog in a machine and feeling the machine operating around you. A religious person who goes to church every day is a cog in a visceral machine. An EA who works in a non-EA field and reads blogs to stay up to date on team strategy might feel like a cog in a remote, nebulous machine.

That's all good, intuitive advice. I'd considered something like moral luck before but hadn't heard the official term, so thanks for the link.

I imagine it could also help, psychologically, to donate somewhere safe if your work is particularly risky. That way you build a safety net. In the best case, your work saves the world; in the worst case, you're earning to give and saving lives anyway, which is nothing to sneeze at.

My human capital may best position me to focus my work on one cause to the exclusion of others. But my money is equally deliverable to any of them. So it shouldn't be inefficient to hedge bets in this way if the causes are equally good.


My summary on concerns with difference-making risk aversion might be relevant to this discussion.
