
[Edit Oct 2021 – For some reason folk are still reading this post (it keeps getting the occasional upvote) so adding a note to say I have an updated 2021 post that makes similar points in a hopefully better, less confrontational, although slightly more long-winded way. See: https://forum.effectivealtruism.org/posts/wyHjpcCxuqFzzRgtX/a-practical-guide-to-long-term-planning-and-suggestions-for ]



I recently wrote, perhaps somewhat flippantly, that effective altruism longtermism relies on armchair philosophising and the occasional back-of-the-envelope expected value calculation, with little or no concern about those calculations being highly uncertain as long as the numbers tend to infinity. Upon deeper reflection I have decided that there is truth in this (at least a technical sense in which it is correct).

This post explores some of my thoughts in a little more detail. The summary is:

Expected value calculations[1], the favoured approach for EA decision making, are all well and good for comparing evidence backed global health charities, but they are often the wrong tool for dealing with situations of high uncertainty, the domain of EA longtermism.

I reached this conclusion by digging into the stories of communities who deal with decision making under uncertainty, so I am going to start by sharing those stories with you.

(Disclaimer: I would like to note that I am not a historian. I think I have got the intellectual ideas correct but do worry that I have slapped a historical narrative on a bunch of vague dates drawn from Wikipedia. Would love more research, but doing the best I can in my spare time.)

Are you sitting comfortably? OK, let’s begin:

 

 

Story 1: RAND and the US military

In a way this story begins in the 1920s, when the economist Frank Knight made a distinction between risk and uncertainty:
Risk denotes the calculable (the probability of an event times the loss if the event occurred) and thus controllable part of all that is unknowable. The remainder is the uncertain—incalculable and uncontrollable

But our story does not really take off until the 1980s. In the late 1980s the RAND Corporation and the US military began looking at how to harness growing computing power and other tools to make better decisions in situations of high uncertainty. This work developed especially as the US military adjusted its plans in the post-Cold War era. As it became apparent that computing power was not sufficient to predict the future, these tools and models focused less on trying to predict and more on trying to support decision makers to make the best decisions despite the uncertainty.

Tools included things like:

  • Assumption Based Planning – writing down an organization’s plans, identifying the load-bearing assumptions, and assessing the vulnerability of the plan to each assumption.
  • Exploratory Modeling – rather than trying to model all available data to predict the most likely outcome, these models map out a wide range of assumptions and show how different assumptions lead to different consequences.
  • Scenario planning [2] – identifying the critical uncertainties, developing a set of internally consistent descriptions of future events based on each uncertainty, then developing plans that are robust [3] to all options (a toy sketch of this robustness-across-scenarios idea follows this list).
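
To make the exploratory modelling / robustness idea concrete, here is a minimal sketch in Python. Everything in it (the plan names, the assumption ranges, the toy outcome model) is hypothetical and purely illustrative; it is not drawn from any real DMDU analysis:

```python
# A toy exploration: instead of predicting one future, evaluate each candidate
# plan across a grid of assumption combinations ("scenarios") and prefer the
# plan that performs acceptably in all of them.
from itertools import product

growth_assumptions = [0.01, 0.02, 0.04]   # hypothetical annual growth rates
shock_assumptions = [0.0, 0.3, 0.6]       # hypothetical severity of a future shock

def outcome(plan, growth, shock):
    """Toy model of how a plan performs under one combination of assumptions."""
    base = {"expand": 100, "hedge": 70, "wait": 50}[plan]
    exposure = {"expand": 1.0, "hedge": 0.4, "wait": 0.2}[plan]
    return base * (1 + growth * 10) * (1 - shock * exposure)

plans = ["expand", "hedge", "wait"]
scenarios = list(product(growth_assumptions, shock_assumptions))

# Robust choice: maximise the worst-case outcome across all explored futures,
# rather than the expected outcome under one predicted future.
worst_case = {p: min(outcome(p, g, s) for g, s in scenarios) for p in plans}
print(worst_case, "->", max(worst_case, key=worst_case.get))
```

The point is the framing rather than the numbers: the model is used to explore how each plan fares across many futures, not to predict which future will happen.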

Research developed into the 21st century and gradually got more in-depth. The idea of different levels of uncertainty sprang up, and the cases with the most uncertainty became known as Deep Uncertainty. This is where:
analysts either struggle to or cannot specify the appropriate models to describe interactions among the system’s variables, select the probability distributions to represent uncertainty about key parameters in the models, and/or value the desirability of alternative outcomes

As the community of researchers and academics interested in this grew it attracted other fields, encompassing policy analysts, engineers and, most recently, climate scientists. And more and more tools were developed (with even more boring-sounding names) such as:

  • Consensus building tools – research on how to present ideas to decision makers to facilitate collaboration, highlight the full implications of various options and lead to the best policy solution being chosen.
  • Engineering Options Analysis – a process for assigning a value to flexibility and optionality.
  • Info-Gap Decision Theory – a non-probabilistic, non-predictive decision tool that computationally evaluates plans to maximise robustness[3] against failure modes (or similar metrics); see the sketch after this list.
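
Info-gap analysis is the most algorithmic of these, so here is a rough sketch of the core calculation. The model, numbers and names below are hypothetical (a made-up capacity-planning toy), intended only to show the shape of the method: for each plan, find the largest deviation from your best-guess estimate that the plan can tolerate while still meeting a minimum performance requirement.

```python
# Hypothetical info-gap style robustness calculation.
import numpy as np

best_guess_demand = 100.0   # nominal estimate (made up)
required_payoff = 40.0      # minimum acceptable performance (made up)

def payoff(capacity, demand):
    """Toy model: serve demand up to capacity, pay a fixed cost per unit of capacity."""
    return min(capacity, demand) - 0.5 * capacity

def robustness(capacity, step=1.0, max_h=200.0):
    """Largest uncertainty horizon h such that the payoff stays acceptable for
    every demand in [best_guess - h, best_guess + h] (clipped at zero)."""
    h = 0.0
    while h <= max_h:
        demands = np.linspace(max(0.0, best_guess_demand - h), best_guess_demand + h, 50)
        if min(payoff(capacity, d) for d in demands) < required_payoff:
            return max(0.0, h - step)
        h += step
    return max_h

for capacity in [80, 100, 120]:
    print(capacity, robustness(capacity))   # prints an uncertainty horizon for each plan
```

Notice that the output is a "how wrong can we be and still be OK" number for each plan, not a probability of any outcome.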

The current academic community is still backed by RAND. It is now a beautifully odd mix of serious military types and young bubbly climate change researchers. It focuses on developing the field of Decision Making Under Deep Uncertainty, or as they call it, DMDU.

One thing DMDU practitioners dislike is predictions. They tend to take the view that trying to predict the probability of different outcomes in situations of deep uncertainty is an unnecessary step that adds complexity and has minimal value to decision making.[4] None of the tools listed above involve predicting the probabilities of future events. They say:
“all the extant forecasting methods—including the use of expert judgment, statistical forecasting, Delphi and prediction markets—contain fundamental weaknesses” ... “Decisionmaking in the context of deep uncertainty requires a paradigm that is not based on predictions of the future”.[5]

 

 

Story 2: risk management in industry

Traditional risk management is fairly simple. List the possible risks, work out the impact that each risk could have and the likelihood of each risk. Then mitigate or plan for the risks, prioritising the highest impact and the most likely risks.
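
In code, the traditional approach is just a scored and sorted list. A minimal sketch, with entirely made-up risks and scores:

```python
# Traditional risk register (hypothetical entries): score = impact x likelihood,
# then work down the list from the highest score.
risks = [
    {"name": "supplier failure", "impact": 8, "likelihood": 0.3},
    {"name": "data breach",      "impact": 9, "likelihood": 0.1},
    {"name": "key staff leave",  "impact": 4, "likelihood": 0.5},
]

for r in risks:
    r["score"] = r["impact"] * r["likelihood"]

for r in sorted(risks, key=lambda r: r["score"], reverse=True):
    print(f'{r["name"]}: {r["score"]:.1f}')
```

The next paragraphs are about what happens when the likelihood column is the part you cannot fill in.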

Now this approach has to be adapted for situations of high uncertainty. How can you prioritise on likelihood if you have only the vaguest idea of what the likelihood actually is?

We can draw some lessons from this report from 1992 on risk in nuclear power stations. One recommendation is, for high-impact low-probability risks, to put less weight on the likelihood assessment. The guidance also cautions against overusing cost benefit analyses for prioritising. It says they are a useful tool but should not be the only way of making decisions about risks.

In 2008 the financial crisis hit. Governments responded with sweeping regulations to reduce risks across the financial sector. Senior bankers were now responsible for the mistakes of everyone underneath them and could even face criminal punishment for not doing enough to address risk in their firms. This drove innovation in Enterprise Risk Management best practice across the finance sector.

The current best practice mostly does away with likelihood assessments. It primarily uses a vulnerability assessment approach: risks are assessed and compared in terms of both the scale of the risks and the level of vulnerability of the business to those risks. There is an assumption that all risks could feasibly materialise and a focus on reducing vulnerability to them and building preparedness.

This approach is complemented by developing two sets of worst-case scenarios. The first set illustrates the scale of the risk and expected damage pre-mitigation (using the assumption that there is no risk response planning) – this allows risks to be compared. The second set illustrates the level of residual risk and damage expected after mitigation – this highlights to decision makers the level of damage they are still willing to accept: the cut-off point at which further mitigation is deemed too costly.
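
Put as a sketch (again with entirely hypothetical risks and scores, just to show the shape of the assessment):

```python
# Vulnerability-based assessment (hypothetical scores): no likelihood column.
# Risks are compared on scale and current exposure, assuming each could materialise,
# alongside pre- and post-mitigation worst-case losses.
risks = [
    {"name": "pandemic",        "scale": 9, "vulnerability": 7, "pre_loss": 90, "post_loss": 40},
    {"name": "cyber attack",    "scale": 7, "vulnerability": 8, "pre_loss": 60, "post_loss": 15},
    {"name": "market downturn", "scale": 6, "vulnerability": 4, "pre_loss": 45, "post_loss": 30},
]

for r in sorted(risks, key=lambda r: r["scale"] * r["vulnerability"], reverse=True):
    print(f'{r["name"]}: priority={r["scale"] * r["vulnerability"]}, '
          f'worst case {r["pre_loss"]} -> accepted residual {r["post_loss"]}')
```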

The vulnerability assessment approach has a number of advantages. It highlights the gaps that need closing and supports flexible risk planning. In situations of high uncertainty it reduces the extent to which important decisions are made based on highly speculative predictions of risk likelihoods that can be orders of magnitude out, as well as saving risk assessors from unproductive and difficult debates over assessing likelihood. The approach also avoids needing to specify a timeline over which the risk assessment is made.

In the last few years these approaches have been adopted more widely, for example in government agencies.

 

 

Interlude

Maybe you can already tell where I am going with this. I have talked about two other communities who deal with uncertainty. And, based on my rather hazy histories, it does appear that both communities have, over time, shifted away from making decisions based on predicting the future, expected value calculations and cost benefit analyses and developed bespoke tools for handling situations of high uncertainty.

The effective altruism community looks a bit different...

 

 

Story 3: effective altruism 

There is a problem with giving to charity – the donor is not the recipient – so there is no feedback, no inbuilt mechanism to ensure that the donor understands the impact of their donations or to ensure that the charity uses donations as effectively as possible. And so the world became rife with ineffective charities that had minimal impact. By the early 2000s the idea that charity does not work was trending.

For people who cared about making the world better, those ideas were a bit of a blow. But surely, they reasoned, there has to be some programs somewhere that work. In the mid 2000s some Oxford academics decided to work out how to give effectively, focusing on the world's poorest and using tools like DCP2 to compare interventions. Turns out that the problem of doing good can be solved with maths and expected value calculations. Hurrah! Giving What We Can was set up in 2009 to spread the good word.

As the community grew it spread into new areas – Animal Charity Evaluators was founded in 2012 looking at animal welfare – and the community also connected to the rationalist community that was worried about AI and to academics at FHI thinking about the long-term future. Throughout all of this expected value calculations remained the gold standard for making decisions on how to do good. The idea was to shut up and multiply. Even as effective altruism decision makers spread into areas of greater and greater uncertainty they have (as far as I can tell) mostly continued to use the same decision making tools (expected value calculations), without much questioning of whether these were the best tools.

 

Why expected value calculations might not be the best tools for thinking about the long term

[Note: Apologies. I edited this section after posting and made a few other changes based on feedback in the comments that this post was hard to engage with. I mostly aimed to make the crux of my argument clearer and less story-like. For fairness to readers I will try to avoid further changes, but do let me know if anything still doesn't make sense]

Firstly, we have seen from the stories above that the best approaches to risk management are more advanced than just looking at the probability and scale of each risk, and that dealing with situations of high uncertainty requires more tools than simple cost benefit analysis. These shifts have allowed those experts to make quicker, better, less error-prone decisions.[6] Maybe the EA community should shift too.

Secondly, the more tools the better. When it comes to decision making, the more decision making tools of different types you use, the better. For example if you are deciding what project to work on you can follow your intuition, speak to friends, speak to experts, do an expected value calculation, steelman the case against each project, scope out each project in more detail, refer to a checklist of what makes a useful project, etc, etc. The more of these you use the better the decision will be.[7] Furthermore, the more bespoke a decision making tool is to a specific situation, the better. For example traders might use decision making tools carefully designed to assist them in the kind of trading they are doing.[8]

Thirdly, there is some evidence that the expected value calculation approach does not work in the longtermism context. For example I recommend reading this great paper by the Global Priorities Institute that looked at the problem of cluelessness[9] and concluded that the only solution was to use decision making processes other than expected value calculations.

I would also ask: if the tools we were using were not the right tools for thinking about the long-term future, if they had a tendency to lead us astray, how would we know? What would that look like? Feedback is weak; the recipients of our charity do not yet exist. We could look elsewhere, at other communities, and try to find the best tools available (which returns us to the first point above), we could try to identify whether our tools are working (point three above), or we could just use a more diverse range of tools (point two above).

 

Please do note the difference between “expected value” and “expected value calculations as a decision making tool”. I am not claiming in this post that maximising the true expected value of your actions is not the ideal aim of all decision making.[10] Just that we need better tools in our decision making arsenal.

Also note that I am not saying expected value calculations are useless. Expected value calculations and cost benefit analyses are useful tools, especially for comparisons between different domains. But in cases of high uncertainty they can mislead and distract from better tools.

 

So what? you may be asking. Maybe we can do a bit more scenario planning or something, but how does this affect longtermist EAs?

 

What could this mean for longtermist EAs?

(Or, the 8 mistakes of longtermist EAs, number 6 will shock you!)

So if you accept for a moment the idea that in situations of high uncertainty it may make sense to avoid cost benefit analysis and develop/use other bespoke decision making tools, what might this suggest about longtermism in EA?

Some suggestions below for how we might currently be getting things wrong:

 

1. There is too much focus on expected value calculation as the primary decision making tool.

The obvious point: the longtermism community could use a wider tool set, ideally one more customised for high uncertainty, such as the tools listed in the stories above. This may well lead to different conclusions (elaborated on in the points below).

 

2. Undervaluing unknown risks and broad interventions to shape the future 

In most cases using expected value calculations is not actively bad, although it may be a waste of time, but one notable flaw is that expected value calculations can give a false impression of precision. This can lead to decision makers investing too heavily (or not enough) in specific options when the case for doing so is actually highly uncertain. It can also lead to decision makers ignoring the unknown unknowns. I expect a broader toolkit would lead to the EA community focusing more on broad interventions to shape the future and less on specific risks than it has done to date.
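
To illustrate the false-precision worry with a deliberately artificial example (all the numbers below are made up and not tied to any real intervention): a single point estimate hides how far the answer moves when the inputs are themselves only known to within a few orders of magnitude.

```python
# Hypothetical back-of-the-envelope EV estimate and the range it conceals.
probabilities = [1e-6, 1e-4, 1e-2]   # plausible range for "chance the intervention works"
values = [1e6, 1e9, 1e12]            # plausible range for "value if it works"

point_estimate = 1e-4 * 1e9          # the single number a quick calculation reports
low = min(probabilities) * min(values)
high = max(probabilities) * max(values)

print(f"point estimate: {point_estimate:.0e}")
print(f"range across plausible inputs: {low:.0e} to {high:.0e}")  # spans ten orders of magnitude
```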

 

3. Not planning sufficiently for when the world changes.

DMDU highlights the advantages of a prepare-and-adapt approach rather than a predict-then-act approach. I felt that some people in the EA community seemed somewhat surprised that the arguments about AI made in Bostrom’s book Superintelligence did not apply well to the world of machine learning AI five years later.[11] A prepare-and-adapt approach could imply putting more effort into identifying and looking out for key future indicators that might be a sign that plans should change. It might also push individuals towards more regularly reviewing their assumptions and building more general expertise, e.g. risk management policy or general policy design skills, rather than specifically AI policy.[12]

 

4. Searching for spurious levels of accuracy.

I see some EA folk who think it is useful to estimate things like the expected number of future people, how many stars humans might colonise and so forth (GPI and CLR and to some degree OpenPhil have done research like this). I expect there is no need and absolutely minimal use for work like this. As far as I can tell other groups that work with high uncertainty decisions try to avoid this kind of exercise and I don’t think other sectors of society debate such spurious numbers. We certainly do not need to look beyond the end of the life of the sun (or anywhere even close to that) to realise the future is big and therefore important.[13]

 

5. Overemphasising the value of improving forecasting

The EA community has somewhat of an obsession with prediction and forecasting tools, putting high value on improving forecasting methodologies and on encouraging the adoption of forecasting practice. These are nice and all, and good for short- to medium-term planning. But they should be seen as only one tool of many in a diverse toolbox, and probably not even the best tool for long-term planning.

 

6. Worrying about non-problems such as the problem of cluelessness or Pascal's Mugging.

I highly recommend reading the aforementioned GPI paper: Heuristics for clueless agents. The GPI paper looks into the problem of cluelessness.[9] It concludes that there are no solutions that involve making expected value calculations of your actions – the only solutions are to shift to other forms of decision making.

I would take a tiny step further and suggest that the whole problem of cluelessness is a non-problem. It is just the result of trying to use expected value calculations as a decision making tool where it is not appropriate to do so. As soon as you realise that humans have more ways of making decisions the problem “promptly disappears in a puff of logic”[14].

I expect the Pascal's mugging problem[15] to similarly be a non-problem due to using the wrong decision making tools. I think the EA community should stop getting bogged down in these issues. 

 

7. Overconfidence in the cause prioritisation work done to date

Given all of the above, given the gulf between how EA folk think about risk and how risk professionals think about risk, given the lack of thought about decision tools, one takeaway I have is that we should be a bit wary of the longtermist cause prioritisation work done to date.

This is not meant as a criticism of all the amazing people working in this field, simply a statement that it is a difficult problem and we should be cautious of overestimating how far we have come.

 

8. Not learning enough from other existing communities

Sometimes it feels to me like the EA community has a tendency to stick its head in the sand. I expect there is a wealth of information out there from existing communities about how to make decisions about the future, about policy influencing, about global cooperation, about preventing extreme risks. We could look at organisations that plan for the long term, historical examples of groups trying to drive long-term change and current groups working on causes EAs care about. I don’t think EA folk are learning enough from these sources.

For example, I expect that the longtermism community could benefit from looking at business planning strategies.[16] Maybe the longtermism community could benefit from a clearer vision of what it wants the world to look like in 3, 10 or 30 years time.[17]

 

 

Thank you for reading

How much does this all matter?

On the one hand I don’t think further investigation along these lines will greatly weaken the case for caring about the long run future and future risks. For a start, expected value calculations are not bad; they are just not always the best tool for the job. Furthermore, most approaches to managing uncertainty seem to encourage focus on preventing extreme risk from materialising.

On the other hand I think it is plausible that someone approaching longtermism with a different toolkit might reach different conclusions. For me, looking into all of this strengthens my belief that doing the most good is really difficult and often lacking in clear feedback loops, that it is easy to be led astray and become overconfident, and that you need to find ways to do good that have reasonable evidence supporting them. [18]

I think we need to think a little more carefully about how we shape a good future for all. 

I hope you find this interesting and I hope this post sparks some discussion. Let me know which of the 8 points above you agree or disagree with.

 

 

 

Footnotes and bits I cut

[1] In case it needs clarifying, expected value calculations involve: having some options, predicting the likelihood and utility of the possible outcomes of each, multiplying and then summing probabilities and utilities to give the expected future utility of each option, then going with whichever option has the highest number.
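
A minimal sketch of that calculation, with made-up options and numbers purely for illustration:

```python
# Expected value calculation over hypothetical options.
options = {
    # option: list of (probability, utility) pairs for its possible outcomes
    "fund_bednets":  [(0.9, 100), (0.1, 0)],
    "fund_research": [(0.05, 5000), (0.95, -10)],
}

expected_values = {
    name: sum(p * u for p, u in outcomes)
    for name, outcomes in options.items()
}

best = max(expected_values, key=expected_values.get)
print(expected_values, "-> choose", best)  # the speculative option wins on EV: 240.5 vs 90.0
```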

[2] RAND actually developed scenario planning techniques much earlier than this, I think in the 50s, but it was used as the basis for further tools developed at this time.

[3] The robust option is the option that produces the most favorable outcome across a broad range of future scenarios. The point is to minimise the chance of failure across scenarios.

[4] DMDU practitioners would argue that predictions are unnecessary. Those involved in DMDU would ask: why build a complex model to predict the outcome of option x and other models for options y and z? Instead you could just build a complex model to tell you the best thing to do to minimise regret for options x, y and z and everything in between. “Within the world of DMDU, a model is intended to be used not as a prediction tool, but as an engine for generating and examining possible futures (i.e., as an exploration tool)”. Those involved in risk management would say that trying to make predictions leads to lengthy unnecessary arguments about details.

[5] All the italicized quotes in this section are from Chapters 1 and 2 of “Decision Making under Deep Uncertainty From Theory to Practice” – which I recommend reading if you have the time.

[6] Apparently there is a whole book on this called Radical Uncertainty by economists John Kay and Mervyn King 

[7] There is a good talk on this here: https://www.youtube.com/watch?v=BbADuyeqwqY

[8] See Judgment in Managerial Decision Making by Max Bazerman, I think Chapter 12.

[9] The problem of cluelessness is the idea that when you are trying to decide on any action there is so much uncertainty about the long-term effects that it is impossible to know if it was the correct action. Talking to a friend might lead to them leaving to go home 5 minutes later, which could lead to a whole chain of effects culminating in them meeting someone new, marrying that person, and their great-great-great-grandchild being the next Einstein or Hitler. You just cannot know.

[10] I don’t have a view on this, and am not sure how useful it would be to have a view on this.

[11] https://80000hours.org/podcast/episodes/ben-garfinkel-classic-ai-risk-arguments/

[12] I am not super in with the rationalist community but I wonder if it is missing exploration of these topics. I have read “Harry Potter and the Methods of Rationality” and this seems to be the mistake Harry makes: he decides that Quirrell is not Voldemort then doesn’t review as new evidence comes to light. His “shut up and multiply” rationality approach to decision making does not capture uncertainty and change well.

[13] I would add that even if your decision tool of choice is an expected value calculation trying to pinpoint such numbers can be problematic. This paper (p20) by GPI highlights a “less is more effect” where adding tangentially relevant inputs to models decreases the predictive accuracy of these models.

[14] Quote from The Hitchhiker's Guide to the Galaxy, by Douglas Adams

[15] Pascal's mugging is a philosophical problem discussed sometimes in EA. Imagine a person stops you in the street and says “Oy you – give us ya wallet – otherwise I will use my advanced technology powers to cause infinite suffering”. An expected value calculation would say that if there is a tiny tiny non-zero chance they are telling the truth and the suffering could be infinite then you should do what they say. I repeatedly tried to explain Pascal's mugging to DMDU experts. They didn’t seem to understand the problem. At the time I thought I was explaining it badly but reading more on this topic I think it is just a non-problem: it only appears to be a  problem to those whose only decision making tool is an expected value calculation.

[16] I have not looked in detail but I see business planning on quarterly, annual, 3 year and then longer cycles. Organisations, even those with long term goals, do not make concrete plans more than 30 years ahead according to The Good Ancestor, by Roman Krznaric. (Which is probably the most popular book on longtermism, that for some reason no one in EA has read. Check it out.)

[17] Like if I play chess I don’t plot the whole game in my mind, or even try to. I play for position, I know that moving my pieces to places where they have more reach is a stronger position so I make those moves. Maybe the animal rights community does this kind of thinking better, having a very clear long term goal to end factory farming and a bunch of very clear short term goals to promote veganism and develop meat alternatives.

[18] I think that highly unusual claims require a lot of evidence. I believe I can save the life of a child in the developing world for ~£3000 and can see a lot of evidence for this. I am keen to support more research in this area but if I had to decide today between donating to, say, technical AI safety research like MIRI or an effective developing world charity like AMF, I would give to the latter.

Comments

I find this hard to engage with -- you point out lots of problems that a straw longtermist might have, but it's hard for me to tell whether actual longtermists fall prey to these problems. For most of them my response is "I don't see this problem, I don't know why you have this impression".

Responding to the examples you give:

(GPI and CLR and to some degree OpenPhil have done research like this)

I'm not sure which of GPI's and CLR's research you're referring to (and there's a good chance I haven't read it), but the Open Phil research you link to seems obviously relevant to cause prioritization. If it's very unlikely that there's explosive growth this century, then transformative AI is quite unlikely and we would want to place correspondingly more weight on other areas like biosecurity -- this would presumably directly change Open Phil's funding decisions.

For example, I expect that the longtermism community could benefit from looking at business planning strategies. It is notable in the world that organisations, even those with long term goals, do not make concrete plans more than 30 years ahead

... I assume from the phrasing of this sentence that you believe longtermists have concrete plans more than 30 years ahead, which I find confusing. I would be thrilled to have a concrete plan for 5 years in the future (currently I'm at ~2 years). I'd be pretty surprised if Open Phil had a >30 year concrete plan (unless you count reasoning about the "last dollar").

I find this hard to engage with -- you point out lots of problems that a straw longtermist might have, but it's hard for me to tell whether actual longtermists fall prey to these problems.

Thank you ever so much, this is really helpful feedback. I took the liberty of making some minor changes to the tone and approach of the post (not the content) to hopefully make it make more sense. Will try to proof read more in future.

I tried to make the crux of the argument more obvious and less storylike here: https://forum.effectivealtruism.org/posts/znaZXBY59Ln9SLrne/how-to-think-about-an-uncertain-future-lessons-from-other#Why_expect_value_calculations_might_not_be_the_best_tools_for_thinking_about_the_longterm

Does that help?

The aim was not to create a strawman but rather to see what conclusions would be reached if the reader accepts a need for more uncertainty focused decision making tools for thinking about the future.

 

On your points:

I'm not sure which of GPI's and CLR's research you're referring to (and there's a good chance I haven't read it)

Examples: https://globalprioritiesinstitute.org/christian-tarsney-the-epistemic-challenge-to-longtermism/ and https://www.emerald.com/insight/content/doi/10.1108/FS-04-2018-0037/full/html (the latter of which I have not read)

the Open Phil research you link to seems obviously relevant to cause prioritization. If it's very unlikely that there's explosive growth this century, then transformative AI is quite unlikely and we would want to place correspondingly more weight on other areas like biosecurity -- this would presumably directly change Open Phil's funding decisions.

I don’t see the OpenPhil article as that useful – it is interesting but I would not think it has a big impact on how we should approach AI risk. For example for the point of view you raise about deciding to prioritise AI over bio – who is to say based on this article that we do not get extreme growth due to progress in biotech and human enhancement rather than AI. 

I assume from the phrasing of this sentence that you believe longtermists have concrete plans more than 30 years ahead, which I find confusing. I would be thrilled to have a concrete plan for 5 years in the future (currently I'm at ~2 years). I'd be pretty surprised if Open Phil had a >30 year concrete plan (unless you count reasoning about the "last dollar").

Sorry my bad writing. I think the point I was trying to make was that it could be nice to have some plans for a few years ahead, maybe 3, maybe 5, maybe (but not more than) 30 about what we want the world to look like.

Does that help?

I buy that using explicit EV calculations is not a great way to reason. My main uncertainty is whether longtermists actually rely a lot on EV calculations -- e.g. Open Phil has explicitly argued against it (posts are from GiveWell before Open Phil existed; note they were written by Holden).

Examples: https://globalprioritiesinstitute.org/christian-tarsney-the-epistemic-challenge-to-longtermism/ and https://www.emerald.com/insight/content/doi/10.1108/FS-04-2018-0037/full/html (the latter of which I have not read)

I haven't read these so will avoid commenting on them.

I don’t see the OpenPhil article as that useful – it is interesting but I would not think it has a big impact on how we should approach AI risk.

I mean, the report ended up agreeing with our prior beliefs, so yes it probably doesn't change much. (Though idk, maybe it does influence Open Phil.) But it seems somewhat wrong to evaluate the value of conducting research after the fact -- would you have been confident that the conclusion would have agreed with our prior beliefs before the report was done? I wouldn't have been.

My main uncertainty is whether longtermists actually rely a lot on EV calculations -- e.g. Open Phil has explicitly argued against it (posts are from GiveWell before Open Phil existed; note they were written by Holden).

Ya, I think this is difficult to conclude either way without attempting a more systematic review. I can definitely find examples relying mainly on explicit modelling and EV estimation and arguments that don't rely much on them.

80,000 Hours' cause area analysis framework is EV estimation (on a log-scale), and they endorse strong longtermism and make a simple EV-based astronomical waste argument in support (although they also cite other articles supporting strong longtermism). They endorse EV maximization in the ideal case, but caution against explicit attempts at calculating and maximizing EV and propose alternatives.

Open Phil also decides on cause areas based on (essentially) the same factors as 80,000 Hours: importance, neglectedness and tractability, although I'm not willing to claim this is effectively EV estimation or that they rely primarily on this, since I haven't looked closely. (They also of course have more individualized reasoning for individual grants). See:

 

You can find other EV-based arguments around, like Bostrom's astronomical waste argument, this model for the Global Priorities Project, and this one for the Global Priorities Institute (which I found pretty interesting and impressive). Both of these models I've linked to either allow you to do your own sensitivity analysis or already include one, but they don't make a good case for the claim that we could predictably and robustly improve the far future, rather than just affect it (possibly making it far worse), and they have specific and poorly justified or unjustified minimum values on important success probabilities. They assume away a lot of deep uncertainty/complex cluelessness this way.

The case for strong longtermism by Greaves and MacAskill illustrates with some back-of-the-envelope estimates and cites others' estimates (GiveWell's, Matheny's). They acknowledge the problem of deep uncertainty/complex cluelessness for some of their proposed interventions, and they propose research on the cost-effectiveness of longtermist interventions and building resources as more robust interventions, but don't argue for the tractability of eventually identifying robustly positive longtermist interventions.

Patient philanthropy is being justified on account of EV estimates, but there are also broader arguments, e.g. made in the 80,000 Hours podcast episode with Phil Trammell.

The expected value of extinction risk reduction is positive by Jan M. Brauner and Friederike M. Grosse-Holz took a cluster-thinking approach.

Plenty of important longer texts to look at, like Bostrom's Superintelligence, Beckstead's On the Overwhelming Importance of Shaping the Far Future, Ord's The Precipice. You might expect longer texts to grapple with non-EV calculation arguments further because they have the space for it.

Hmm, I should note that I am in strong support of quantitative models as a tool for aiding decision-making -- I am only against committing ahead of time to do whatever the model tells you to do. If the post is against the use of quantitative models in general, then I do in fact disagree with the post.

Some things that feel like quantitative models that are merely "aiding" rather than "doing" decision-making:

this model for the Global Priorities Project
The case for strong longtermism by Greaves and MacAskill illustrates with some back-of-the-envelope estimates and cites others' estimates (GiveWell's, Matheny's).
Patient philanthropy is being justified on account of EV estimates

Dear MichaelStJules and rohinmshah

Thank you very much for all of these thoughts. It is very interesting and I will have to read all of these links when I have the time.

I totally took the view  that the EA community relies a lot on EV calculations somewhat based on vague experience without doing a full assessment of the level of reliance, which would have been ideal, so the posted examples are very useful.

*

To clarify one point:

If the post is against the use of quantitative models in general, then I do in fact disagree with the post.

I was not at all against quantitative  models. Most of the DMDU stuff is quantitative models. I was arguing against the overuse of quantitative models of a particular type.

*

To answer one question

would you have been confident that the conclusion would have agreed with our prior beliefs before the report was done?

Yes. I would have been happy to say that, in general, I expect work of this type is less likely to be useful than other research work that does not try to predict the long-run future of humanity. (This is in a general sense, not considering factors like the researchers background and skills and so forth).

Yes. I would have been happy to say that, in general, I expect work of this type is less likely to be useful than other research work that does not try to predict the long-run future of humanity.

Sorry, I think I wasn't clear. Let me make the case for the ex ante value of the Open Phil report in more detail:

1. Ex ante, it was plausible that the report would have concluded "we should not expect lots of growth in the near future".

2. If the report had this conclusion, then we should update that AI risk is much less important than we currently think. (I am not arguing that "lots of growth => transformative AI", I am arguing that "not much growth => no transformative AI".)

3. This would be a very significant and important update (especially for Open Phil). It would presumably lead them to put less money into AI and more money into other areas.

4. Therefore, the report was ex ante quite valuable since it had a non-trivial chance of leading to major changes in cause prioritization.

Presumably you disagree with 1, 2, 3 or 4; I'm not sure which one.

Some things that feel like quantitative models that are merely "aiding" rather than "doing" decision-making:

Are there any particular articles/texts you would recommend?

Imo, the Greaves and MacAskill paper relies primarily on explicit calculations and speculative plausibility arguments for its positive case for strong longtermism. Of course, the paper might fit within a wider context, and there isn't enough space to get into the details for each of the proposed interventions.

My impression is that relying on a mixture of explicit quantitative models and speculative arguments is a problem in EA generally, not unique to longtermism. Animal Charity Evaluators has been criticized a few times for this, see here and here. I'm still personally not convinced the Good Food Institute has much impact at all, since I'm not aware of a proper evaluation that didn't depend a lot on speculation (I think this related analysis is more rigorous and justified). GiveWell has even been criticized for relying too much on quantitative models in practice, too, despite Holden's own stated concerns with this.

Are there any particular articles/texts you would recommend?

Sorry, on what topic?

Imo, the Greaves and MacAskill paper relies primarily on explicit calculations and speculative plausibility arguments for its positive case for strong longtermism.

I see the core case of the paper as this:

... putting together the assumption that the expected size of the future is vast and the assumption that all consequences matter equally, it becomes at least plausible that the amount of ex ante good we can generate by influencing the expected course of the very long-run future exceeds the amount of ex ante good we can generate via influencing the expected course of short-run events, even after taking into account the greater uncertainty of further-future events.

They do illustrate claims like "the expected size of the future is vast" with calculations, but those are clearly illustrative; the argument is just "there's a decent chance that humanity continues for a long time with similar or higher population levels". I don't think you can claim that this relies on explicit calculations except inasmuch as any reasoning that involves claims about things being "large" or "small" depends on calculations.

I also don't see how this argument is speculative: it seems really hard to me to argue that any of the assumptions or inferences are false.

Note it is explicitly talking about the expected size of the future, and so is taking as a normative assumption that you want to maximize actual expected values. I suppose you could argue that the argument is "speculative" in that it depends on this normative assumption, but in the same way AMF is "speculative" in that it depends on the normative assumption that saving human lives is good (an assumption that may not be shared by e.g. anti-natalists or negative utilitarians).

Animal Charity Evaluators has been criticized a few times for this, see here and here.

I haven't been following animal advocacy recently, but I remember reading "The Actual Number is Almost Surely Higher" when it was released and feeling pretty meh about it. (I'm not going to read it now, it's too much of a time sink.)

GiveWell has even been criticized for relying too much on quantitative models in practice, too, despite Holden's own stated concerns with this.

Yeah I also didn't agree with this post. The optimizer's curse tells you that you should expect your estimates to be inflated, but it does not change the actual decisions you should make. I agree somewhat more with the wrong-way reductions part, but I feel like that says "don't treat your models as objective fact"; GiveWell frequently talks about how the cost-effectiveness model is only one input into their decision making.

More generally, I don't think you should look at the prevalence of critiques as an indicator for how bad a thing is. Anything sufficiently important will eventually be critiqued. The question is how correct or valid those critiques are.

I'm still personally not convinced the Good Food Institute has much impact at all, since I'm not aware of a proper evaluation that didn't depend a lot on speculation

I'm interpreting this as "I don't have >90% confidence that GFI has actually had non-trivial impact so far (i.e. an ex-post evaluation)". I don't have a strong view myself since I haven't been following GFI, but I expect even if I read a lot about GFI I'd agree with that statement.

However, if you think this should be society's bar for investing millions of dollars, you would also have to be against many startups, nearly all VCs and angel funding, the vast majority of scientific R&D, some government megaprojects, etc. This bar seems clearly too stringent to me. You need some way of doing something like hits-based funding.

Sorry, on what topic?

To make a strong case for strong longtermism or a particular longtermist intervention, without relying too much on quantitative models and speculation.

I see the core case of the paper as this:

(...)

I also don't see how this argument is speculative: it seems really hard to me to argue that any of the assumptions or inferences are false.

I don't disagree with the claim that strong longtermism is plausible, but short-termism is also plausible. The case for strong longtermism rests on actually identifying robustly positive interventions aimed at the far future, and making a strong argument that they are indeed robustly positive (and much better than short-termist alternatives). One way of operationalizing "robustly positive" is that I may have multiple judgements of EV for different plausible worldviews, and each should be positive (although this is a high bar). I think their defences of particular longtermist interventions are speculative (including patient philanthropy), but expecting more might be unreasonable for a paper of that length which isn't focused on any particular intervention.

I'm interpreting this as "I don't have >90% confidence that GFI has actually had non-trivial impact so far (i.e. an ex-post evaluation)".

Yes, and I'm also not willing to commit to any specific degree of confidence, since I haven't seen any in particular justified. This is also for future impact. Why shouldn't my prior for success be < 1%? Can I rule out a negative expected impact?

However, if you think this should be society's bar for investing millions of dollars, you would also have to be against many startups, nearly all VCs and angel funding, the vast majority of scientific R&D, some government megaprojects, etc. This bar seems clearly too stringent to me. You need some way of doing something like hits-based funding.

I think in many of these cases we could develop some reasonable probability distributions to inform us (and when multiple priors are reasonable for many interventions and we have deep uncertainty, diversification might help). FHI has done some related work on the cost-effectiveness of research. It could turn out to be that the successes don't (or ex ante won't) justify the failures in a particular domain. Hits-based funding shouldn't be taken for granted.

I feel like it's misleading to take a paper that explicitly says "we show that strong longtermism is plausible", does so via robust arguments, and conclude that longtermist EAs are basing their conclusions on speculative arguments.

If you want robust arguments for interventions you should look at those interventions. I believe there are robust arguments for work on e.g. AI risk, such as Human Compatible. (Personally, I prefer a different argument, but I think the one in HC is pretty robust and only depends on the assumption that we will build intelligent AI systems in the near-ish future, say by 2100.)

Yes, and I'm also not willing to commit to any specific degree of confidence, since I haven't seen any in particular justified. This is also for future impact. Why shouldn't my prior for success be < 1%? Can I rule out a negative expected impact?

Idk what's happening with GFI, so I'm going to bow out of this discussion. (Though one obvious hypothesis is that GFI's main funders have more information than you do.)

Hits-based funding shouldn't be taken for granted.

I mean, of course, but it's not like people just throw money randomly in the air. They use the sorts of arguments you're complaining about to figure out where to try for a hit. What should they do instead? Can you show examples of that working for startups, VC funding, scientific R&D, etc? You mention two things:

  • Developing reasonable probability distributions
  • Diversification

It seems to me that longtermists are very obviously trying to do both of these things. (Also, the first one seems like the use of "explicit calculations" that you seem to be against.)

If you want robust arguments for interventions you should look at those interventions. I believe there are robust arguments for work on e.g. AI risk, such as Human Compatible.

Thank you!

I feel like it's misleading to take a paper that explicitly says "we show that strong longtermism is plausible", does so via robust arguments, and conclude that longtermist EAs are basing their conclusions on speculative arguments.

I'm not concluding that longtermist EAs are in general basing their conclusions on speculative arguments based on that paper, although this is my impression from a lot of what I've seen so far, which is admittedly not much. I'm not that familiar with the specific arguments longtermists have made, which is why I asked you for recommendations.

I think showing that longtermism is plausible is also an understatement of the goal of the paper, since it only really describes section 2, and the rest of the paper aims to strengthen the argument and address objections. My main concerns are with section 3, where they argue specific interventions are actually better than a given short-termist one. They consider objections to each of those and propose the next intervention to get past them. However, they end with the meta-option in 3.5 and speculation: 

It would also need to be the case that one should be virtually certain that there will be no such actions in the future, and that there is no hope of discovering any such actions through further research. This constellation of conditions seems highly unlikely.

I think this is a Pascalian argument: we should assign some probability to eventually identifying robustly positive longtermist interventions that is large enough to make the argument go through. How large and why?

It seems to me that longtermists are very obviously trying to do both of these things. (Also, the first one seems like the use of "explicit calculations" that you seem to be against.)

I endorse the use of explicit calculations. I don't think we should depend on a single EV calculation (including by taking weighted averages of models or other EV calculations; sensitivity analysis is preferable). I'm interested in other quantitative approaches to decision-making as discussed in the OP.

My major reservations about strong longtermism include: 

  1. I think (causally or temporally) longer causal chains we construct are more fragile, more likely to miss other important effects, including effects that may go in the opposite direction. Feedback closer to our target outcomes and what we value terminally reduces this issue.
  2.  I think human extinction specifically could be a good thing (due to s-risks or otherwise spreading suffering through space) so interventions that would non-negligibly reduce extinction risk are not robustly good to me (not necessarily robustly negative, either, though). Of course, there are other longtermist interventions.
  3. I am by default skeptical of the strength of causal effects without evidence, and I haven't seen good evidence for the major claims of causation I've come across, but I also have only started looking, and pretty passively.

I think showing that longtermism is plausible is also an understatement of the goal of the paper

Yeah, that's a fair point, sorry for the bad argument.

Hey Sam, thanks for this. I always appreciate the critical, reflective perspective you bring to these discussions. It's really valuable. I think you're right that we should consider the failure modes to which we're vulnerable and consider adopting useful tools from other communities.

I think perhaps it's a bit premature to dismiss the value of probabilistic predictions and forecasting. One thing missing from this post is discussion of Tetlock's Expert Political Judgement work. Through the '90s and '00s, Tetlockian forecasters went head-to-head against analysts from the US military and intelligence communities and kicked their butts. I think you're right that, despite this, forecasting hasn't taken over strategic decisionmaking in these communities. But as far as I know Tetlock has continued to work with and receive funding from intelligence projects, so it seems that the intelligence people see some value in these methods.

I think I'd agree with other commenters too that I'm not sure longtermist grants are that reliant on expected value calculations. The links you provide, e.g. to David Roodman's recent paper for Open Phil, don't seem to support this. Roodman's paper, for example, seems to be more of a test of whether or not the idea of explosive economic acceleration this century is plausible from a historical perspective rather than an attempt to estimate the value of the future. In fact, since Roodman's paper finds that growth tends to infinity by 2047 it's not actually helpful in estimating the value of the future.

Instead, it seems to me that most longtermist grantmaking these days relies more on crucial considerations-type analysis that considers the strength of a project's causal connection to the longterm future (e.g. reducing ex risk).

P.S. If you ever feel that you're struggling to get your point across I'd be happy to provide light edits on these posts before they go public - just message me here or email me at work (stephen [at] founderspledge [dot] com)


Sorry, it's possible I missed it upon a first read, but what's the evidence that the US military is unusually good at risk management (doesn't have to be hard evidence. Could be track record, heuristics, independent analyses, expert interviews, literally anything)? It feels wrong to use reference class X to implicitly say that the actions the reference class takes are good and we ought to emulate them, without ever an explicit argument that the reference class's actions or decision procedures are good!

Great question


Clarification:

I don’t think I said that the US military was good at risk management. I think I said that 
a) the DMDU community (RAND, US military and others) was good at making plans that manage uncertainty, and 
b) that industry was good at risk management


Slight disagreement:

It feels wrong to use reference class X to implicitly say that the actions the reference class takes are good and we ought to emulate them, without ever an explicit argument that the reference class's actions or decision procedures are good!

I do think where reference class X is large and dominant enough it does make sense to assume some trust in their approach or that it is worth some investigation of their approach before dismissing it. For example most (large) businesses have a board and a CEO and a hierarchical management structure so unless I had a good reason to do otherwise that sets a reasonable prior for how I think it is best to run a business.

For more on this see Common sense as a prior.

So even if I had zero evidence I think it would make sense for someone in the EA community to spend time looking into the topic of what tools had worked well in the past to deal with uncertainty and that the US military would be a good place to look for ideas.


Answer:

Answering: is the US military good at making plans that manage uncertainty:

  • Historical evidence – no.
    I have zero empirical historical evidence that DMDU tools have worked well for the US military.
  • Theoretical evidence – yes.
    I think the theoretical case for these tools is strong,  see the case here and here.
  • Interpersonal evidence – yes.
    I believe Taleb in Black Swan describes how the people he met in the US military had very good empirical ways of thinking about risk and uncertainty (I don’t have the book here so cannot double check). Similarly to Taleb, I have been much impressed by folk in the UK working on counter-terrorism etc, compared to other policy folk who work on risks. 
  • Evidence from trust – mixed.
    I mostly expect the US military have the right incentives in place to aim to do this well and the ability to test ideas in the field but also would not be surprised if there were a bunch of perverse incentives  that corrupted this.

So all in all pretty weak evidence.


Caveat:

My views are probably somewhat moved on from when I wrote this post a year ago. I should revisit it at some point 

David Thorstad and I are currently writing a paper on the tools of Robust Decision Making (RDM) developed by RAND and the recommendation to follow a norm of 'robust satisficing' when framing decisions using RDM. We're hoping to put up a working paper on the GPI website soon (probably about a month). Like you, our general sense is that the DMDU community is generating a range of interesting ideas, and the fact that these appeal to those at (or nearer) the coalface is a strong reason to take them seriously. Nonetheless, we think more needs to be said on at least two critical issues.

Firstly, measures of robustness may seem to smuggle probabilities in via the backdoor. In the locus classicus for discussions of robustness as a decision criterion in operations research, Rosenhead, Elton, and Gupta note that using their robustness criterion is equivalent to maximizing expected utility with a uniform probability distribution given certain assumptions about the agent's utility function. Similarly, the norm of robust satisficing invoked by RDM is often identified as a descendant of Starr's domain criterion, which relies on a uniform (second-order) probability distribution. To our knowledge, the norm of robust satisficing appealed to in RDM has not been stated with the formal precision adopted by Starr or Rosenhead, Elton, and Gupta, but given its pedigree, it's natural to worry that it ultimately relies implicitly on a uniform probability measure of some kind. But this seems at odds with the skepticism toward (unique) probability assignments that we find in the DMDU literature, which you note. 

Secondly, we wonder why a concern for robustness in the face of deep uncertainty should lead to adoption of a satisficing criterion of choice. In expositions of RDM, robustness and optimizing are frequently contrasted, and a desire for robustness is linked to satisficing choice. But it's not clear what the connection is here. Why satisfice? Why not seek out robustly optimal strategies? That's basically what Starr's domain criterion does - it looks for the act that maximizes expected utility relative to the largest share of the probability assignments consistent with our evidence.  

Hi Andreas, Excited you are doing this. As you can maybe tell I really liked your paper on Heuristics for Clueless Agents (although not sure my post above has sold it particularly well). Excited to see what you produce on RDM.

Firstly, measures of robustness may seem to smuggle probabilities 

This seems true to me (although not sure I would consider it to be "by the backdoor").
Insofar as any option selected through a decision process will in a sense be the one with the highest expected value, any decision tool will have probabilities inherent in it, either implicitly or explicitly. For example you could see a basic Scenario Planning exercise as implicitly stating that all the scenarios are of reasonable (maybe equal) likelihood.

I don't think the idea of RDM is to avoid probabilities; it is to avoid the traps of decisions based on explicit expected value calculations. For example, by avoiding explicit predictions it prevents users from making important shifts to plans based on highly speculative estimates. I'd be interested to see if you think it works well in this regard.

 

Secondly, we wonder why a concern for robustness in the face of deep uncertainty should lead to adoption of a satisficing criterion of choice

Honestly I don’t know (and don't fully understand this), so good luck finding out. Some thoughts:

In engineering you design your lift or bridge to hold many times the capacity you think it needs, even after calculating all the things you can think of that could go wrong – this helps protect against the things you didn’t think of going wrong.
I could imagine a similar principle applying to DMDU decision making – aiming for the option that is satisfactorily robust to everything you can think of might give a better outcome than aiming elsewhere, as it may also be the option that is most robust to the things you cannot think of.

But I am not sure, and I am not sure how much empirical evidence there is on this. It also occurs to me that some of the anti-optimizing sentiment could be driven by rhetoric and a desire to be different.
 

Very late here, but a brainstormy thought: maybe one way one could start to make a rigorous case for RDM is to suppose that there is a “true” model and prior that you would write down if you had as much time as you needed to integrate all of the relevant considerations you have access to. You would like to make decisions in a fully Bayesian way with respect to this model, but you’re computationally limited so you can’t. You can only write down a much simpler model and use that to make a decision.

We want to pick a policy which, in some sense, has low regret with respect to the Bayes-optimal policy under the true model. If we regard our simpler model as a random draw from a space of possible simplified models that we could’ve written down, then we can ask about the frequentist properties of the regret incurred by different decision rules applied to the simple models. And it may be that non-optimizing decision rules like RDM have a favorable bias-variance tradeoff, because they don’t overfit to the oversimplified model. Basically they help mitigate a certain kind of optimizer’s curse.
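For what it's worth, here is a rough Monte Carlo sketch of the kind of comparison described above. It is entirely a toy construction of mine – the payoff matrix, the noise model for the "simplified model" and the satisficing threshold are all made up – but it shows how one could measure the regret of the two decision rules empirically.

```python
# Compare the regret (vs the Bayes-optimal act under the true probabilities) of
# (a) optimising expected value against a noisy "simplified model" and
# (b) a crude robust-satisficing rule that ignores the probability estimate.
import numpy as np

rng = np.random.default_rng(0)

n_scenarios, n_actions, n_trials = 8, 5, 10_000
payoffs = rng.uniform(0, 100, size=(n_actions, n_scenarios))  # toy payoff matrix
p_true = rng.dirichlet(np.ones(n_scenarios))                  # "true" model the agent can't compute
threshold = 40                                                # satisficing level (arbitrary)

ev_true = payoffs @ p_true
bayes_best = np.argmax(ev_true)                               # what the ideal agent would do

regret_ev, regret_robust = [], []
for _ in range(n_trials):
    p_est = rng.dirichlet(p_true * 20 + 0.5)                  # a noisy draw: the "simplified model"
    act_ev = np.argmax(payoffs @ p_est)                       # optimise against the simple model
    act_robust = np.argmax((payoffs >= threshold).mean(axis=1))  # satisfice across scenarios
    regret_ev.append(ev_true[bayes_best] - ev_true[act_ev])
    regret_robust.append(ev_true[bayes_best] - ev_true[act_robust])

print("mean regret, optimise on simplified model:", np.mean(regret_ev))
print("mean regret, robust satisficing:          ", np.mean(regret_robust))
```

Whether the satisficing rule actually comes out ahead will depend entirely on the payoffs, the noise model and the threshold; the point is just that the frequentist comparison described above can be run.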

This makes sense to me, although I think we may not be able to assume a unique "true" model and prior even after all the time we want to think and use information that's already accessible. I think we could still have deep uncertainty after this; there might still be multiple distributions that are "equally" plausible, but no good way to choose a prior over them (with finitely many, we could use a uniform prior, but this still might seem wrong), so any choice would be arbitrary and what we do might depend on such an arbitrary choice.

For example, how intense are the valenced experiences of insects and how much do they matter? I think no amount of time with access to all currently available information and thoughts would get me to a unique distribution. Some or most of this is moral uncertainty, too, and there might not even be any empirical fact of the matter about how much more intense one experience is than another (I suspect there isn't).

Or, for the US election, I think there was little precedent for some of the considerations in this election (how coronavirus would affect voting and polling), so thinking much more about them could only have narrowed the set of plausible distributions so much.

I think I'd still not be willing to commit to a unique AI risk distribution with as much time as I wanted and perfect rationality but only the information that's currently accessible.

See also this thread.

At the time I thought I was explaining [Pascal's mugging] badly but reading more on this topic I think it is just a non-problem: it only appears to be a problem to those whose only decision making tool is an expected value calculation.

This is quite a strong claim IMO. Could you explain exactly which other decision making tool(s) you would apply to Pascal's mugging that makes it not a problem? The descriptions of the tools in stories 1 and 2 are too vague for me to clearly see how they'd apply here.

Indeed, if anything, some of those tools strengthen the case for giving into Pascal's mugging. E.g. "developing a set of internally consistent descriptions of future events based on each uncertainty, then developing plans that are robust to all options": if you can't reasonably rule out the possibility that the mugger is telling the truth, paying the mugger seems a lot more robust. Ruling out that possibility in the literal thought experiment doesn't seem obviously counterintuitive to me, but the standard stories for x- and s-risks don't seem so absurd that you can treat them as probability 0 (more on this below). Appealing to the possibility that one's model is just wrong, which does cut against naive EV calculations, doesn't seem to help here.

I can imagine a few candidates, but none seem satisfactory to me:

  • "Very small probabilities should just be rounded down to zero." I can't think of a principled basis for selecting the threshold for a "very small" probability, at least not one that doesn't subject us to absurd conclusions like that you shouldn't wear a seatbelt because probabilities of car crashes are very low. This rule also seems contrary to maximin robustness.
  • "Very high disutilities are practically impossible." I simply don't see sufficiently strong evidence in favor of this to outweigh the high disutility conditional on the mugger telling the truth. If you want to say my reply is just smuggling expected value reasoning in through the backdoor, well, I don't really consider this a counterargument. Declaring a hard rule like this one, which treats some outcomes as impossible absent a mathematical or logical argument, seems epistemically hubristic and is again contrary to robustness.
  • "Don't do anything that extremely violates common sense." Intuitive, but I don't think we should expect our common sense to be well-equipped to handle situations involving massive absolute values of (dis)utility.

This is a fascinating question – thank you.

Let us think through the range of options for addressing Pascal's mugging. There are basically three options:

  • A: Bite the bullet – if anyone threatens to cause infinite suffering then do whatever they say.
  • B: Try to fix your expected value calculations to remove your problem.
  • C: Take an alternative approach to decision making that does not rely on expected value.

It is also possible that all of A and B and C fail for different reasons.*

Let's run through.

 

A:

I think that in practice no one does A. If I emailed everyone in the EA/longtermism community saying "I am an evil wizard, please give me $100 or I will cause infinite suffering!", I doubt I would get any takers.

 

B:

You made three suggestions for addressing Pascal's mugging. I think I would characterise suggestions 1 and 2 as ways of adjusting your expected value calculations to aim for more accurate expected value estimates (not as using an alternate decision making tool).

I think it would be very difficult to make this work, as it leads to problems such as the ones you highlight.

You could maybe make this work using heavy discounting based on "optimiser's curse"-type considerations to reduce the expected value of high-uncertainty, high-value options. I am not sure.

(The GPI paper on cluelessness basically says that expected value calculations can never work to solve this problem. It is plausible you could write a similar paper about Pascal's mugging. It might be interesting to read the GPI paper and mentally replace "problem of cluelessness" with "problem of Pascal's mugging" and see how it reads.)

 

C:

I do think you could make your third option, the common sense version, work. You just say: if I follow this decision rule it will lead to very perverse outcomes, such as me having to give everything I own to anyone who claims they will otherwise cause me infinite suffering; it seems so counter-intuitive that I should do this that I will decide not to do this. I think this is roughly the approach that most people follow in practice. This is similar to how you might dismiss this proof that 1+1=3 even if you cannot see the error. It is, however, a somewhat dissatisfying answer as it is not very rigorous: it is unclear when a conclusion is so absurd as to require outright rejection.

It does seem hard to apply most of the DMDU approaches to this problem. An assumption-based planning approach would have you write out all of your assumptions and look for flaws – I am not sure where that would lead.

If looking for a more rigorous approach, the flexible risk planning approach might be useful. Basically, make the assumption that as uncertainty goes up, the ability to pinpoint the exact nature of the risk goes down (I think you can investigate this empirically). So placing a reasonable expected value on a highly uncertain event means that, in reality, events vaguely of that type are more likely but events specifically as predicted are themselves unlikely. For example, you could worry about future weapons technology that could destroy the world and try to explore what this would look like – but you can safely say it is very unlikely to look like your explorations. This might allow you to avoid the Pascal's mugger and invest appropriate time into more general, more flexible evil wizard protection.
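A minimal numeric sketch of that last point (my own made-up numbers, not from the post): the credence you place on a broad class of risks spreads thinly over the many specific stories one could tell about it.

```python
# Probability assigned to a broad class of risks vs the probability of the one
# specific story the mugger tells; all numbers are illustrative placeholders.
p_class = 0.01        # credence that *some* world-ending weapons tech emerges
n_specific = 1_000    # distinct, roughly equally plausible specific stories about it
p_specific = p_class / n_specific

print(f"P(class of risk)            = {p_class:.4f}")
print(f"P(the mugger's exact story) = {p_specific:.6f}")
# Spending on broad, flexible protection buys cover against the whole class;
# paying the mugger only covers one very unlikely specific story.
```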

 

Does that help?

 

 * I worry that I have made this work by defining C as everything else, so that the above is just saying: paradox -> no clear solution -> everything else must be the solution.

Thanks for your reply! :)

I think that in practice no one does A.

This is true, but we could all be mistaken. This doesn't seem unlikely to me, considering that our brains simply were not built to handle such incredibly small probabilities and incredibly large magnitudes of disutility. That said, I won't practically bite the bullet, any more than people who would choose torture over dust specks probably do, or any more than pure impartial consequentialists truly sacrifice all their own frivolities for altruism. (This latter case is often excused as just avoiding burnout, but I seriously doubt the level of self-indulgence of the average consequentialist EA, myself included, is anywhere close to altruistically optimal.)

In general—and this is something I seem to disagree with many in this community about—I think following your ethics or decision theory through to its honest conclusions tends to make more sense than assuming the status quo is probably close to optimal. There is of course some reflective equilibrium involved here; sometimes I do revise my understanding of the ethical/decision theory.

This is similar to how you might dismiss this proof that 1+1=3 even if you cannot see the error.

To the extent that I assign nonzero probability to mathematically absurd statements (based on precedents like these), I don't think there's very high disutility in acting as if 1+1=2 in a world where it's actually true that 1+1=3. But that could be a failure of my imagination.

It is however a bit of a dissatisfying answer as it is not very rigorous, it is unclear when a conclusion is so absurd as to require outright objection.

This is basically my response. I think there's some meaningful distinction between good applications of reductio ad absurdum and relatively hollow appeals to "common sense," though, and the dismissal of Pascal's mugging strikes me as more the latter.

For example you could worry about future weapons technology that could destroy the world and try to explore what this would look like – but you can safely say it is very unlikely to look like your explorations.

I'm not sure I follow how this helps. People who accept giving into Pascal's mugger don't dispute that the very bad scenario in question is "very unlikely."

This might allow you to avoid the pascal mugger and invest appropriate time into more general more flexible evil wizard protection.

I think you might be onto something here, but I'd need the details fleshed out because I don't quite understand the claim.

I think that highly unusual claims require a lot of evidence. I believe I can save the life of a child in the developing world for ~£3000 and can see a lot of evidence for this. I am keen to support more research in this area, but if I had to decide today between donating to, say, technical AI safety research like MIRI or an effective developing-world charity like AMF, I would give to the latter.

 

I think longtermists sometimes object that shorttermists (?) often ignore the long-term (or indirect) effects of their actions as if they neatly cancel out, but they too are faced with deep uncertainty and complex cluelessness. For example, what are the long-term effects of the Against Malaria Foundation – via population sizes, economic growth, technological development and future values – and could they be more important than the short-term effects and have opposing signs (or opposing signs relative to a different intervention)?

Furthermore, if you have deep uncertainty about the effects of an intervention or the future generally, this can infect all of your choices, because you're comparing all interventions, not just each intervention against some benchmark "do nothing" intervention*. You might think that donating to AMF is robustly better ex ante than doing nothing, while a particular longtermist intervention is not, but that doesn't mean donating to AMF is robustly better ex ante than the longtermist intervention. So why choose AMF over the longtermist intervention?

As another example, if you were to try to take into all effects of the charities, are we justified in our confidence that AMF is better than the Make-A-Wish Foundation (EA Forum post) or even doing nothing? What are the population effects? Are we sure they aren't negative? What are the effects on nonhuman animals, both farmed and wild? How much weight should we give the experiences of nonhuman animals?

Maybe we should have a general skepticism of causal effects that aren't sufficiently evidence-backed and quantified, though, so unless you've got a reliable effect size estimate, or estimates of bounds on an effect size, we should assume it's ~0.

Some discussion here and in the comments. I defend shorttermism there.

 

*although I want to suggest that the following rule is reasonable, and for now endorse it:

If I think doing X is robustly better than doing nothing in expectation, and no other action is robustly better than doing X in expectation, then X is permissible.

I think a lot of us do it anyway (whenever we discuss the sign of some intervention or effect, we're usually comparing to "doing nothing") without considering that it might be unjustified for a consequentialist who shouldn't distinguish between acts and omissions, and I suspect this might explain your preference for AMF over MIRI.
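For concreteness, here is a minimal sketch of that rule (my own formalisation; the candidate actions and the payoff numbers are invented purely for illustration):

```python
# The rule above gives a sufficient condition for permissibility: X is permissible
# if X is robustly better than doing nothing under every plausible distribution,
# and no other action is robustly better than X under every plausible distribution.
import numpy as np

# Expected value of each action under each of three "equally plausible" distributions.
payoffs = {
    "do_nothing": np.array([0.0, 0.0, 0.0]),
    "AMF":        np.array([5.0, 3.0, 1.0]),
    "MIRI":       np.array([9.0, 4.0, -2.0]),
}

def robustly_better(a, b):
    """True if a beats b under every plausible distribution."""
    return bool(np.all(payoffs[a] > payoffs[b]))

def rule_permits(x):
    beats_nothing = robustly_better(x, "do_nothing")
    undominated = not any(robustly_better(y, x) for y in payoffs if y != x)
    return beats_nothing and undominated

for act in payoffs:
    print(act, rule_permits(act))   # with these toy numbers, only AMF satisfies the condition
```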

I illustrate these points in the context of intervention portfolio diversification for deep and moral uncertainty in this post.

Strongly upvoted, very interesting. Here is a scenario under which you might be wrong:

Not making forecasts (a) is easier, and (b) doesn't expose you to accusations of being overconfident, uncalibrated and clueless. It is possible that this was the driving force for abandoning forecasts, and that ideological justification came afterwards. This could happen through mechanisms like those outlined in Unconscious economics.

You can see this dynamic going on really clearly in this Wall Street Journal article: Travel CFOs Hesitant on Forecasts as Pandemic Fogs Outlook, or in this model by Deloitte, whose authors, when contacted, refused to give a confidence interval.

It is still possible that they have uncovered some useful tools, but the assertion "the best way to maximize expected utility is to not think in terms of probabilities" just sounds really suspicious.

I'm not sure I agree with that comment anymore; the original post has grown on me over the past months. For example, when thinking about funding ALLFED, an EV calculation is just really noisy.

[3] The robust option is the option that produces the most favorable outcome across a broad range of future scenarios. The point is to minimise the chance of failure across scenarios.

This sounds like the maxipok rule, which Bostrom has used to argue for prioritizing extinction risks (as long as we expect to do more good than bad). On the other hand, I think maximin (maximize the value of the worst outcome) or minimizing the risk of very horrible outcomes might lead us to prioritize risks of (badly) net negative futures (a subclass of s-risks, still longtermist, again as long as we expect to do more good than bad), contrary to Bostrom's claim "maximin implies that we should all start partying as if there were no tomorrow."
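As a toy illustration of how these rules can pull apart (the outcomes, values and probabilities below are entirely my own made-up numbers):

```python
# Compare expected value, a crude maxipok-style rule (maximise the probability
# of an "OK" outcome) and minimising the chance of a very bad outcome.
import numpy as np

values = np.array([100.0, 0.0, -100.0])   # flourishing, extinction, bad (net-negative) future
ok     = np.array([True, False, False])   # only flourishing counts as an "OK" outcome here

probs = {
    "reduce_extinction_risk": np.array([0.70, 0.05, 0.25]),
    "reduce_s_risk":          np.array([0.60, 0.30, 0.10]),
}

for act, p in probs.items():
    print(f"{act:24s} EV={p @ values:6.1f}  "
          f"P(ok)={p[ok].sum():.2f}  P(very bad)={p[values < 0].sum():.2f}")

# With these numbers maxipok favours reducing extinction risk, while minimising
# the risk of a very bad future favours the s-risk intervention.
```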

Greaves and MacAskill also argue here that risk aversion with respect to welfare should lead us to prioritize existential risks, and cite a claim that risk aversion with respect to the difference you make is contrary to impartiality, since it singles you out. Of course, risk aversion and ambiguity aversion (favouring robustness) are different, but I think many approach ambiguity aversion by focusing on worst cases, so the same argument could work.

  • Assumption Based Planning – having a written version of an organization’s plans and then identify load-bearing assumptions and assessing the vulnerability of the plan to each assumption.
  • Exploratory Modeling – rather than trying to model all available data to predict the most likely outcome these models map out a wide range of assumptions and show how different assumptions lead to different consequences
  • Scenario planning [2] – identifying the critical uncertainties, developing a set of internally consistent descriptions of future events based on each uncertainty, then developing plans that are robust [3] to all options.

Can you clarify how these tools are distinct? My (ignorant) first impression is that they just boil down to "use critical thinking".

2. Undervaluing unknown risks and broad interventions to shape the future 

What decision procedures would lead to valuing broad interventions more? Aren't the effects often more speculative than, say, the reduction of global catastrophic risks (although that doesn't mean GCR reduction is net positive – we could be saving bad futures)? EDIT: I suppose a cluster-thinking approach using many weak (and not well-quantified) arguments could value broad interventions more.

Also, these do get some attention in EA; see here to start, discussing institutional reform, global priorities research and growing the EA movement. Broad interventions (aimed at the near-ish future, I suppose) seem plausibly more neglected in the global health and development space in EA, discussed here.
