
Overview

In this post I provide a brief sketch of The case for strong longtermism as put forward by Greaves and MacAskill, and proceed to raise and address possible misconceptions that people may have about strong longtermism. Some of these misconceptions I have come across directly, whilst others I simply suspect may be held by some people in the EA community.

The goal of this post isn’t to convert people as I think there remain valid objections against strong longtermism to grapple with, which I touch on at the end of this post. Instead, I simply want to address potential misunderstandings, or point out nuances that may not be fully appreciated by some in the EA community. I think it is important for the EA community to appreciate these nuances, which should hopefully aid the goal of figuring out how we can do the most good.

NOTE: I certainly do not consider myself to be any sort of authority on longtermism. I partly wrote this post to push me to engage with the ideas more deeply than I already had. No-one has read through this before my posting, so it’s certainly possible that there are inaccuracies or mistakes in this post and I look forward to any of these being pointed out! I’d also appreciate ideas for other possible misconceptions that I have not covered here.

Defining strong longtermism

The specific claim that I want to address possible misconceptions about is that of axiological strong longtermism, which Greaves and MacAskill define in their 2019 paper The case for strong longtermism as the following:

Axiological strong longtermism (AL): “In a wide class of decision situations, the option that is ex ante best is contained in a fairly small subset of options whose ex ante effects on the very long-run future are best.”

Put more simply (and phrased in a deontic way that assumes that what we should do is what will result in the best consequences), one might say that:

“In most of the choices (or, most of the most important choices) we face today, what we ought to do is mainly determined by possible effects on the far future.”

Greaves and MacAskill note that an implication of axiological strong longtermism is that:

“for the purposes of evaluating actions, we can in the first instance often simply ignore all the effects contained in the first 100 (or even 1000) years, focussing primarily on the further-future effects. Short-run effects act as little more than tie-breakers.”

I think most people would agree that this is a striking claim.

Sketch of the strong longtermist argument 

The argument made by Greaves and MacAskill (2019) begins with a plausibility argument that goes roughly as follows:

Plausibility Argument:

  • In expectation, the future is vast in size (in terms of expected number of beings)
  • All consequences matter equally (i.e. it doesn’t matter when a consequence occurs, or if it was intended or not)
  • Therefore it is at least plausible that the amount of ex ante good we can generate by influencing the expected course of the very long-run future exceeds the amount of ex ante good we can generate via influencing the expected course of short-run events, even after taking into account the greater uncertainty of further-future effects.
  • Also, because of the near-term bias exhibited by the majority of existing actors, we should expect tractable longtermist options (if they exist) to be systematically under-exploited at the current margin

The authors then consider the intractability objection: that it is essentially impossible to significantly influence the long-term future ex ante, perhaps because the magnitude of the effects of one’s actions (in expected value-difference terms) decays with time (or “washes out”) sufficiently quickly as to make short-term effects dominate expected value.

The authors then proceed to suggest possible examples of interventions that may avoid the intractability objection, in the following categories:

  • Speeding up progress
    • Provided value per unit time doesn’t plateau at a modest level, bringing forward the march of progress could have long-lasting beneficial effects compared to the status quo
  • Mitigating extinction risk
    • Extinction is an “attractor state” in that, once humans go extinct, they will stay that way forever (assuming humans don’t re-evolve). Non-extinction is also an attractor state, although to a lesser extent. Increasing the probability of achieving the better attractor state (probably non-extinction by a large margin, if we make certain foundational assumptions) has high expected value that stretches into the far future. This is because the persistence of the attractor states allows the expected value of reducing extinction risk not to “wash out” over time. There also seem to be tractable ways to reduce extinction risk
  • Steering towards a better rather than a worse “attractor state” in contexts that do not involve a threat of extinction, including:
    • Mitigating climate change. Climate change could result in a slower long-run growth rate or permanently reduce the planet’s carrying capacity
    • Ensuring institutions that may be developed in the next century or two are constituted in ways that are better for wellbeing than others. Institutions could persist indefinitely
    • Ensuring advanced AI has goals that are conducive to wellbeing. AI could persist indefinitely and exert very significant control over human affairs
  • Funding research into longtermist interventions / saving money to fund future opportunities

Possible Misconceptions

Henceforth I will simply refer to “longtermism” to mean strong longtermism, and to “longtermists” to be people who act according to strong longtermism.

"Longtermists have to predict the far future"

Possible misconception: “Trying to influence the far future is pointless because it is impossible to forecast that far.”

My response: “Considering far future effects doesn’t necessarily require predicting what will happen in the far future.”

Note that most of Greaves and MacAskill’s longtermist interventions involve steering towards or away from certain “attractor states” that, when you enter them, you tend to persist in them for a very long time (if not forever). The persistence of these attractor states is what allows these interventions to avoid the “washing out” of expected value over time.

A key thing to notice is that some of these attractor states could realistically be entered in the near future. Nuclear war could feasibly happen tomorrow. Climate change is an ongoing phenomenon and catastrophic climate change could happen within decades. In The Precipice, Toby Ord places the probability of an existential catastrophe occurring within the next 100 years at 1 in 6, which is concerningly high.

Ord is not in the business of forecasting events beyond a 100-year time horizon, nor does he have to be. These existential threats affect the far future on account of the persistence of their effects if they occur, but not on account of the fact that they might happen in the far future. Therefore whilst it is true that a claim has to be made about the far future, namely that we are unlikely to ever properly recover from existential catastrophes, this claim seems less strong than a claim that some particular event will happen in the far future.

Having said that, not all longtermist interventions involve attractor states. “Speeding up progress” becomes a credible longtermist intervention provided the value of the future, per century, is much higher in the far future than it is today, perhaps due to space settlement or because some form of enhancement renders future people capable of much higher levels of well-being. The plausibility of speeding up progress being a credible longtermist intervention then appears to depend on somewhat speculative claims about what will happen in the (potentially far) future.

"Cluelessness affects longtermists more than shorttermists"

Possible misconception: “When we peer into the far future there are just too many complex factors at play to be able to know that what we’re doing is actually good. Therefore we should just do interventions based on their short-term effects.”

My response: “Every intervention has long-term effects, even interventions that are chosen based on their short-term effects. It often doesn’t seem reasonable to ignore these long-term effects. Therefore cluelessness is often a problem for shorttermists, just as it is for longtermists.”

It can be tempting to claim that we can’t be very confident at all about long-term effects, and therefore that we should just ignore them and decide to do interventions that have the best short-term effects. 

There are indeed scenarios when we can safely ignore long-term effects. To steal an example from Phil Trammell’s note on cluelessness, when we are deciding whether to conceive a child on a Tuesday or a Wednesday, any chance that one of the options might have some long-run positive or negative consequence will be counterbalanced by an equal chance that the other will have that consequence. In other words there is evidential symmetry across the available choices. Hilary Greaves has dubbed such a scenario “simple cluelessness”, and argues, in this case, that we are justified in ignoring long-run effects.

However it seems that often we don’t have such evidential symmetry. In the conception example we simply can’t say anything about the long-term effects of choosing to conceive a child on a particular day, and so we have evidential symmetry. But what about, say, giving money to the Against Malaria Foundation? It seems that we can say some things about both the short-term and long-term effects of doing so. We can reasonably say that giving to AMF will save lives and therefore probably have long-term population effects. We can reasonably say that population changes should impact on things like climate change, animal welfare, and economic growth. We can also say that the total magnitude of these indirect (unintended) effects is very likely to exceed the magnitude of the direct (intended) effects, namely averting deaths due to malaria. We perhaps can’t however feel justified in saying that the net effect of all of these impacts is positive in terms of value (even in expectation) - there are just too many foreseeable effects that might plausibly go in different directions and that seem large in expected magnitude. Forming a realistic credence on even the sign of the net value of giving to AMF seems pretty hopeless. This scenario is what Greaves calls “complex cluelessness”, and she feels that this poses a problem for someone who wants to do the most good by giving to AMF.

Can we just ignore all of those “indirect” effects because we can’t actually quantify them, and just go with the direct effects that we can quantify (averting death due to malaria)? Well, this seems questionable. Imagine an omniscient being carries out a perfect cost-benefit analysis of giving to AMF which accurately includes impacts on saving people from malaria, climate change, animal welfare and economic growth (i.e. the things we might reasonably think will be impacted by giving to AMF). Now imagine the omniscient being blurs out all of the analysis, except the ‘direct effect’ of saving lives, before handing the analysis to you. Personally, because I know that the foreseeable ‘indirect’ effects make up the vast majority of the total value and could in certain cases realistically be negative, I wouldn’t feel comfortable just going with the one ‘direct effect’ I can see. I would feel completely clueless about whether I should give to AMF or not. Furthermore, this ‘blurred’ position seems to be the one we are currently in with regards to GiveWell’s analysis of AMF.
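To make the structure of this worry explicit, here is a minimal formalisation of my own (the notation is mine, not Greaves’s or GiveWell’s): write the overall value of an intervention as the sum of its direct and indirect effects.

```latex
% Toy decomposition of an intervention's overall value (my notation):
V_{\text{total}} \;=\; V_{\text{direct}} \;+\; \sum_{i} V_{\text{indirect},\,i}
% Complex cluelessness is the situation in which we have reason to believe
%   | \sum_i V_{\text{indirect},i} |  >>  | V_{\text{direct}} |
% while being unable to form a precise credence about even the sign of the
% indirect term, so knowing V_direct alone settles very little.
```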

I’m not absolutely sure that this cluelessness critique of AMF is justified, but I do think that thinking in this way illustrates that cluelessness can be a problem for shorttermists, and that there seems to be no real reason why the problem should be more salient for longtermists. Every intervention has long-term effects, and deep uncertainty of these effects is often problematic for us when deciding how to do the most good.

Greaves actually argues that deliberately trying to beneficially influence the course of the very far future might allow us to find interventions where we more robustly have some clue that what we're doing is beneficial, and of how beneficial it is. In other words, Greaves thinks that cluelessness may be less of a problem for longtermists. This may be the case because, for many longtermist interventions, the direct (intended) impact may be so large in expected value as to outweigh the indirect (unintended) impacts. For example, AI alignment research may be so good in expected value as to nullify relatively insubstantial concerns about indirect harm from engaging in such research. Overall I’m not sure if cluelessness is less of an issue for longtermists, but it seems possible.

"Longtermists have to ignore non-human animals"

Possible misconception: “I’m mainly concerned about reducing/preventing suffering of non-human animals, but longtermism is a philosophy centred around humans. Therefore I’m not really interested.”

My response: “Longtermists shouldn’t ignore non-human animals. It is plausible that there are things we can do to address valid concerns about the suffering of non-human animals in the far future. More research into the tractability of certain interventions could have high expected value.”

A nitpick I have with Greaves and MacAskill’s paper is that they don’t mention non-human animals. For example, when they are arguing that the future is vast in expectation they say: “It should be uncontroversial that there is a vast number of expected beings in the future of human civilisation.” I take this to imply that they restrict their analysis to humans. I see some possible reasons for this:

  1. In aiming to introduce longtermism to the (non-EA) academic world, the authors decided to focus on humans in order to make the core argument seem less ‘weird’, or to remain ‘conservative’ in terms of numbers of beings so as to make the argument more convincing
  2. For some reason, non-human animals aren’t as relevant from a longtermist point of view

The first reason is a real possibility, and it may well have been a fair choice.

The second reason is an interesting possibility. It could be the case, for example, if there aren’t a vast number of expected non-human animals in the future. It does indeed seem possible that farmed animals may cease to exist in the future on account of being made redundant by cultivated meat, although I certainly wouldn’t be sure of this given some of the technical problems with scaling up cultivated meat to become cost-competitive with cheap animal meat. Wild animals seem highly likely to continue to exist for a long time, and currently vastly outnumber humans. Therefore it seems that we should consider non-human animals, and perhaps particularly wild animals, when aiming to ensure that the long-term future goes well.

The next question to ask then is if there are attractor states for non-human animals that differ in terms of value. I think there are. For example, just as with humans, non-human animal extinction and non-extinction are both attractor states. It is plausible that the extinction of both farmed and wild animals is better than existence, as some have suggested that wild animals tend to experience far more suffering than pleasure and it is clear that factory-farmed animals undergo significant suffering. Therefore causing non-human animal extinction may have high value. Even if some non-human animals do have positive welfare, it may be better to err on the side of caution and cause them to go extinct, making use of any resources or space that is freed up to support beings that have greater capacity for welfare and that are at lower risk of being exploited e.g. humans (although this may not be desirable under certain population axiologies).

The next question then is if we can tractably cause the extinction of non-human animals. In terms of farmed animals, as previously mentioned, cultivated meat has the potential to render them redundant. Further research into overcoming the technical difficulties of scaling cultivated meat could have high expected value. In terms of wild animals, mass sterilisation could theoretically help us achieve their extinction. However, the tractability of causing wild animals to go extinct, and the indirect effects of doing so, are uncertain. Overall, one could argue that making non-human animals go extinct may be less urgent than mitigating existential risk, as the former can be done at any time, although I do think it might be particularly difficult to do if we have spread to the stars and brought non-human animals with us.

There are potential longtermist interventions that have a non-human animal focus and that don’t centre around ensuring their extinction. Tobias Baumann suggests that expanding the moral circle to include non-human animals might be a credible longtermist intervention, as a good long-term future for all sentient beings may be unlikely as long as people think it is right to disregard the interests of animals for frivolous reasons such as the taste of meat. Non-human animals are moral patients that are essentially at our will, and it seems plausible that there are non-extinction attractor states for these animals. For example, future constitutions might (or might not) explicitly include protections for non-human animals, and then persist for a very long time. Depending on whether they include protections or not, the fate of non-human animals in the far future could be vastly better or worse. Trying to ensure that future constitutions do provide protections for non-human animals might require us to expand the moral circle such that a significant proportion of society believes non-human animals to have moral value. It isn’t clear however how tractable moral circle expansion is, and further research on this could be valuable.

Finally, it is worth noting that some of Greaves and MacAskill’s proposed longtermist interventions could help reduce animal suffering, even if that isn’t the main justification for carrying them out. For example, aligned superintelligent AI could help us effectively help animals.

"Longtermists won't reduce suffering today"

Possible misconception: “Greaves and MacAskill say we can ignore short-term effects. That means longtermists will never reduce current suffering. This seems repugnant.”

My response: “It is indeed true that ignoring short-term effects means ignoring current suffering, and people may be justified in finding this repugnant. However, it is worth noting that it is possible that longtermists may end up reducing suffering today as a by-product of trying to improve the far future. It isn't clear however that this is the case when reducing existential risk. In any case, it is important to remember that longtermists only claim longtermism is true on the current margin.”

Greaves and MacAskill’s claim that we can ignore “all the effects contained in the first 100 (or even 1000) years” is certainly a striking claim. Essentially, they claim this because the magnitude of short-term effects we can tractably influence will simply pale in comparison to the magnitude of the long-term effects we can tractably influence, if strong longtermism is true. It is natural to jump to the conclusion that this necessarily means longtermists won't reduce suffering today.

It is indeed true that Greaves and MacAskill's claim implies that longtermists shouldn't concern themselves with current suffering (remember I am referring to the strong version of longtermism here). One could be forgiven for finding this repugnant, and it makes me feel somewhat uneasy myself.

However, it is worth noting that it is possible that longtermists may end up reducing suffering today as a by-product of trying to improve the far future. Indeed one of the plausible longtermist interventions that Greaves and MacAskill highlight is ‘speeding up progress’, which would likely involve some alleviation of current suffering. Tyler Cowen argues that boosting economic growth may be the most important thing to do if one has a long-term focus, which should entail a reduction in current suffering.

In addition, I mentioned previously that moral circle expansion could be a credible longtermist intervention. It seems plausible that one of the most effective ways to expand the moral circle could be to advance cultivated or plant-based meat, as stopping people from eating animals may then allow them to develop moral concern for them. In this case, short-term and expected long-term suffering reduction could coincide, although this is all admittedly fairly speculative.

In practice however, longtermists tend to focus on reducing existential risks, which indeed doesn't seem to entail reducing current suffering. Note however that part of Greaves and MacAskill’s argument was that longtermist interventions, if they exist, are likely to be underexploited at the current margin. This is because other “do gooders” tend to be focused on current suffering, as opposed to the welfare of future generations. Therefore longtermism may only hold at the current margin because of where others are placing their attention. It isn’t clear, and actually seems quite unlikely, that longtermists should want everyone in the world to work on reducing existential risk, and therefore ignore current suffering completely. Even if the future is vast, there’s only so much that people can do to exploit that, and we can expect diminishing returns for longtermists.

"Longtermists have to think future people have the same moral value as people today"

Possible misconception: “I think we have special obligations towards people alive today, or at least that it is permissible to place more weight on people alive today. Therefore I reject longtermism.”

My response: “Whilst there may be a justifiable reason for privileging people alive today, the expected vastness of the future should still lead us to a longtermist conclusion.”

This one might surprise people. After all, the claim that all consequences matter equally regardless of when in time they occur (which is generally considered to be quite uncontroversial) is one of the foundations of Greaves and MacAskill’s argument. I contend however that you don’t need this assumption for longtermism to remain valid.

My explanation for this is essentially based on Andreas Mogensen’s paper “The only ethical argument for positive 𝛿”. Delta in this context is the “rate of pure time preference” and reflects the extent to which a unit of utility or welfare accruing in the future is valued less than an equal unit of utility enjoyed today. If delta is greater than zero, we are essentially saying that the welfare of future people matters less simply because they are in the future. If delta = 0, equal units of utility are valued equally, regardless of when they occur.

In his paper, Mogensen argues that a positive delta may be justifiable in terms of agent-relative reasons, and furthermore that this seems to be the only credible ethical argument for positive delta. The basic idea here is that we may have justification for being partial to certain individuals, such as our children. For example, if someone chooses to save their child from a burning building as opposed to two children they don’t know from a separate building, we tend not to judge them, and in fact we might even think they did the right thing. Applying this partiality thinking to ‘the world community now’, Mogensen argues that we may be justified in caring more about the next generation than about those in succeeding generations. Mogensen calls this ‘discounting for kinship’.

Importantly however, Mogensen notes that under such discounting we shouldn’t be valuing the welfare of one of our distant descendants any less than the welfare of some stranger who is currently alive today. These two people seem similarly distant from us, just across different dimensions. Therefore, provided we care about strangers at least to some extent, which seems reasonable, we should also care about distant descendants. So, whilst delta can be greater than zero, it should decline very quickly over time when discounting the future, to allow for distant descendants to have adequate value. Given this, if the future is indeed vast in expectation and there are tractable ways to influence the far future, the longtermist thesis should remain valid.
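For readers who like to see this spelled out, here is a minimal formal sketch (my notation, not Mogensen’s) of how a rate of pure time preference enters a discounted-utilitarian value function, and why the kinship argument only supports a delta that falls away quickly:

```latex
% Discounted total welfare with a constant rate of pure time preference \delta,
% where u_t is the welfare accruing at time t (standard discounted utilitarianism):
W \;=\; \sum_{t=0}^{\infty} \frac{u_t}{(1+\delta)^{t}}
% With \delta = 0 all times are weighted equally; with a constant \delta > 0 the
% weight (1+\delta)^{-t} shrinks geometrically and the far future becomes negligible.
% Mogensen's kinship argument instead motivates a time-varying \delta(t) that is
% positive for the next generation or two but falls towards zero, so the weight
w(t) \;=\; \prod_{s=1}^{t} \frac{1}{1+\delta(s)}
% stays bounded well above zero: distant descendants count roughly as much as
% present-day strangers, and a vast expected future can still dominate.
```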

"Longtermists must be consequentialists"

Possible misconception: “The longtermist argument seems to rest on some naive addition of expected utilities over time. As someone who doesn’t feel comfortable with maximising consequentialism, I reject longtermism.”

My response: “A particular concern for the future may be justified using other ethical theories including deontology and virtue ethics.”

Toby Ord has put forward arguments for why reducing existential risk may be very important for deontologists and virtue ethicists. These arguments also seem to be applicable to longtermism more generally.

In The Precipice, Ord highlights a deontological foundation for reducing existential risk by raising Edmund Burke’s idea of a partnership of the generations. Burke, one of the founders of political conservatism, wrote about how humanity’s remarkable success has relied on intergenerational cooperation, with each generation building on the work of those that have come before. In 1790 Burke wrote of society:

“It is a partnership in all science; a partnership in all art; a partnership in every virtue, and in all perfection. As the ends of such a partnership cannot be obtained except in many generations, it becomes a partnership not only between those who are living, but between those who are living, those who are dead, and those who are to be born.”

Ord highlights that such an idea might give us reasons to safeguard humanity that are grounded in our past - obligations to our grandparents, as well as our grandchildren. Ord suggests that we might have a duty to repay a debt to past generations by "paying it forward" to future generations.

Ord also appeals to the virtues of humanity, likening humanity’s current situation to that of an adolescent that is often incredibly impatient and imprudent. Ord writes:

“Our lack of regard for risks to our entire future is a deficiency of prudence. When we put the interests of our current generation far above those of the generations to follow, we display our lack of patience. When we recognise the importance of our future yet still fail to prioritise it, it is a failure of self-discipline. When a backwards step makes us give up on our future - or assume it to be worthless - we show a lack of hope and perseverance, as well as a lack of responsibility for our own actions.”

Ord hopes that we can grow from an impatient, imprudent adolescent, to a wiser, more mature adult, and that this will necessarily require a greater focus on the future of humanity.

"Longtermists must be total utilitarians"

Possible misconception: “Reducing extinction risk is only astronomically important if one accepts total utilitarianism, which I reject. Therefore I’m not convinced by longtermism.”

My response: “It may be that there are tractable longtermist interventions that improve average future well-being, conditional on humanity not going prematurely extinct. These will be good by the lights of many population axiologies.”

OK, so this isn’t actually my response; it is covered in Greaves and MacAskill’s paper. They concede that the astronomical value of reducing extinction risk relies on a total utilitarian axiology. This is because the leading alternative view - a person-affecting one - doesn’t find extinction to be astronomically bad.

However, they note that their other suggested longtermist interventions, including mitigating climate change, institutional design, and ensuring aligned AI, are attempts to improve average future well-being, conditional on humanity not going prematurely extinct. They then state that any plausible axiology must agree that this is a valuable goal, and therefore that the bulk of their longtermist argument is robust to plausible variations in population axiology.

From my point of view, the two animal-focused interventions that I floated earlier (making non-human animals go extinct, and expanding the moral circle) are also pretty robust to population axiology. Both of them centre on reducing suffering, which any plausible population axiology should consider important. One could counter and say that causing non-human animals to go extinct may be bad if many non-human animals live lives that are worth living, appealing to a total utilitarian population axiology. However, as I stated earlier, even if some non-human animals do have positive welfare, it may be better to err on the side of caution and cause them to go extinct, making use of any resources or space that is freed up to support beings that have greater capacity for welfare and that are at lower risk of being exploited, e.g. humans.

"Longtermists must be classical utilitarians"

Possible misconception: “I think it is more valuable to improve the wellbeing of those with lower wellbeing (I’m a prioritarian). Therefore I think it more valuable to improve the lives of those in extreme poverty today, as opposed to future people who will be better off.”

My response: “It isn’t clear that future people will in fact be better off. Also, the prioritarian weighting may need to be quite extreme to avoid the longtermist conclusion.”

Again - not actually my response. This is also covered in Greaves and MacAskill’s paper (and given that they word this pretty well I’m stealing much of their wording here).

The authors first note that there are serious possibilities that future people will be even worse off than the poorest people today — for example, because of climate change, misaligned artificial general superintelligence, or domination by a repressive global political regime. They also note that many of their contenders for longtermist interventions are precisely aimed at improving the plight of these very badly off possible future people, or reducing the chance that they have terrible as opposed to flourishing lives. 

Otherwise, the authors note that, given the large margin by which (they argue) longtermist interventions deliver larger improvements to aggregate welfare than similarly costly shorttermist interventions, only quite an extreme priority weighting would lead to wanting to address global poverty over longtermist interventions. Even if some degree of prioritarianism is plausible, the degree required might be too extreme to be plausible by any reasonable lights.

"Longtermists must embrace expected utility theory" 

Possible misconception: “Greaves and MacAskill’s argument relies on maximising expected value. I don’t subscribe to this decision theory.”

My response: “They consider a few other decision theories, and conclude that longtermism is robust to these variations.”

Again - not my response. I will have to completely defer to Greaves and MacAskill on this one. In their strong longtermism paper they consider a few alternatives to maximising expected value.

They note that under ‘Knightian uncertainty’ - when there is little objective guidance as to which probability distributions over possible outcomes are appropriate vs inappropriate - a common decision rule is “maximin”, whereby one chooses the option whose worst possible outcome is least bad. They argue that this supports axiological longtermism, as the worst outcomes are ones in which the vast majority of the long-run future is of highly negative value (or, at best, has zero or very little positive value). Therefore, according to maximin, the only consideration that is relevant to ex ante axiological option evaluation is the avoidance of these long-term catastrophic outcomes.
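For concreteness, the maximin rule they appeal to can be written as follows (standard formulation, my notation):

```latex
% Maximin: choose the option whose worst-case outcome is best (i.e. least bad),
% where A is the set of options, S the set of possible states, and V(a, s) the
% value of option a in state s:
a^{*} \;=\; \arg\max_{a \in A} \; \min_{s \in S} \, V(a, s)
% If the worst states are ones where the long-run future is vast and negative,
% this rule is driven almost entirely by avoiding those long-term outcomes.
```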

They also consider risk-weighted expected utility theory. It’s a similar story to the above, as risk aversion with respect to welfare (i.e. value is a concave function of total welfare) will make it more important to avoid very low welfare outcomes.

Admittedly, I am unsure if Greaves and MacAskill have tackled this question thoroughly enough. I look forward to further work on this.

Genuine Issues for Longtermism

Despite my defence of longtermism above, I do think that there remain genuine issues for longtermists to grapple with.

Tractability

A challenge for longtermists remains the tractability objection. Greaves and MacAskill “regard this as the most serious objection to axiological strong longtermism”.

It was the concept of attractor states that allowed Greaves and MacAskill to avoid the “washing out” of expected value over time. Even if attractor states exist however, it must be possible to somewhat reliably steer between them for longtermism to be valid. In the case of reducing existential risk for example, there have to be things that we can actually do to reduce these risks, and not just temporarily. Toby Ord argues in The Precipice that there are such things we can do, but it seems that more research on this question would be useful.

What about improving institutions? Greaves and MacAskill argue that institutions can be constituted in ways that are better for wellbeing than others and that these institutions may persist indefinitely. For institutional reform to be a credible longtermist intervention it must be possible to figure out what better institutions look like (from a longtermist point of view) and it must be possible, in practice, to actually redesign institutions in these ways. MacAskill and John suggest interesting ideas, but research in this area still seems quite nascent.

As mentioned, the tractability of interventions like moral circle expansion or making non-human animals go extinct is also disputable, and further research on this could be valuable.

Fanaticism

In “The Epistemic Challenge to Longtermism”, Christian Tarsney develops a simple model in which the future gets continually harder to predict, and then considers if this means that the expected value of our present options is mainly determined by short-term considerations. Tarsney’s conclusion is that expected value maximisers should indeed be longtermists. However, Tarsney cautions that, on some plausible empirical worldviews, this conclusion may rely on minuscule probabilities of astronomical payoffs. Whether expected value maximisation is the correct decision theory in such cases isn’t necessarily clear, and is a question that philosophers continue to grapple with. If one isn’t comfortable with basing decisions on minuscule probabilities of astronomical payoffs, the case for longtermism may not hold up.

What small probabilities drive the superiority of longtermism in Tarsney’s model? In Tarsney’s talk about his paper he highlights the following:

  • The probability of being able to steer between attractor states
  • The probability of large-scale space settlement, conditional on survival
  • The probability of a “Dyson sphere” rather than “space opera” scenario, conditional on space settlement
  • The probability of a stable future

Tarsney also suggests, given current beliefs about such empirical parameters, that we should be longtermists on the scale of thousands or millions of years, as opposed to billions or trillions. This latter point isn’t really an argument against longtermism, but it is worth bearing in mind.
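As a purely illustrative sketch (this is not Tarsney’s actual model, and the parameter names are mine), the fanaticism worry can be seen by multiplying such parameters together:

```latex
% Toy expected-value calculation for a longtermist intervention (illustrative only):
EV \;\approx\; p_{\text{steer}} \times p_{\text{settle}} \times p_{\text{Dyson}}
      \times p_{\text{stable}} \times V_{\text{astronomical}}
% Even if each probability is tiny, a sufficiently astronomical V can make the
% product exceed any short-term benefit - exactly the pattern of "minuscule
% probabilities of astronomical payoffs" that makes some people uneasy about
% straightforward expected value maximisation.
```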

Concluding Remarks

There remain genuine issues for longtermists to grapple with and I look forward to further research in these areas. However, there are also what I believe to be fairly common misconceptions about longtermism that can lead people to level objections that may not be entirely valid, or at least are more nuanced than is generally realised. I think it is important for the EA community to appreciate these nuances, which should hopefully aid the goal of figuring out how we can do the most good.

Comments

It is not obvious that non-extinction is an attractor state. If there is some minimal background risk of extinction that we cannot get below (whether due to asteroids, false vacuum decay, nuclear war, everyone becoming a negative utilitarian and ceasing to reproduce, or whatever), then it is the nature of exponential discounting that the very long-term future can quickly become essentially unimportant.
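To spell out the arithmetic behind this point (my own illustration, not the commenter's):

```latex
% Suppose there is an irreducible extinction probability p per century, so the
% chance of surviving n centuries is (1-p)^n. With a bounded value of v per
% surviving century, the expected value of the entire future is at most
\sum_{n=0}^{\infty} v\,(1-p)^{n} \;=\; \frac{v}{p}
% e.g. p = 0.1% per century caps the expected future at about 1,000 centuries'
% worth of value - large, but not astronomical, which is why an unavoidable
% background risk cuts against treating non-extinction as a true attractor state.
```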

I think expansive space colonization would reduce the risk asymptotically, since it's unlikely for all of a large number of very distant civilizations to go extinct around the same time. The more distant the civilizations, the more roughly independent their risks should be. And the more civilizations there are, the more likely at least one is around at any time.

It's certainly not as strong an attractor state as extinction, but I still think it's an attractor state to some extent. Certainly wild animals (especially when you consider aquatic life) have existed, and likely will exist, for a very long time unless we take extreme action to get rid of them or there's a particularly intense catastrophe.

Also I agree with Michael on the relevance of space colonisation. Many total utilitarians can't wait for space colonisation as it will significantly reduce x-risk. I get this thinking, but I hope we don't bring non-human animals. As I say in the post, it seems safer to make them go extinct.

On discounting, uncertainty over future discount rates (perhaps due to uncertainty about future x-risk, which may become lower than it is now) leads to a declining discount rate over time, and the result that we should discount the long-term future as if we were in the safest world among those we find plausible. This is known as Weitzman discounting. From Greaves' paper Discounting for Public Policy:

In a seminal article, Weitzman (1998) claimed that the correct results [when uncertain about the discount rate] are given by using an effective discount factor for any given time t that is the probability-weighted average of the various possible values for the true discount factor R(t): R_eff(t) = E[R(t)]. From this premise, it is easy to deduce, given the exponential relationship between discount rates and discount factors, that if the various possible true discount rates are constant, the effective discount rate declines over time, tending to its lowest possible value in the limit t → ∞.

Therefore we can't really wave away the very long-term future, assuming of course that Weitzman is correct (he may not be, see the "Weitzman-Gollier puzzle").
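Here is a minimal sketch of why averaging over discount factors in this way yields a declining effective rate (my own rendering of the result quoted above, in continuous time):

```latex
% The true (constant) discount rate is uncertain: it takes value r_i with
% probability p_i. Weitzman's proposal is to average the discount factors:
R_{\text{eff}}(t) \;=\; \mathbb{E}\left[e^{-rt}\right] \;=\; \sum_i p_i\, e^{-r_i t}
% The implied effective discount rate is
r_{\text{eff}}(t) \;=\; -\tfrac{1}{t}\,\ln \sum_i p_i\, e^{-r_i t}
% As t grows, the term with the smallest r_i dominates the sum, so r_eff(t)
% tends to min_i r_i: in the long run we discount as if we were in the safest
% (lowest-rate) world among those we find plausible.
```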

There are indeed scenarios when we can safely ignore long-term effects. To steal an example from Phil Trammell’s note on cluelessness, when we are deciding whether to conceive a child on a Tuesday or a Wednesday, any chance that one of the options might have some long-run positive or negative consequence will be counterbalanced by an equal chance that the other will have that consequence. In other words there is evidential symmetry across the available choices. Hilary Greaves has dubbed such a scenario “simple cluelessness”, and argues, in this case, that we are justified in ignoring long-run effects. However it seems that often we don’t have such evidential symmetry. In the conception example we simply can’t say anything about the long-term effects of choosing to conceive a child on a particular day, and so we have evidential symmetry.

I think you're accurately reflecting what these authors say, but my independent impression is that this point is mistaken. I.e., I think the idea of a qualitative distinction between simple cluelessness and complex cluelessness doesn't make sense. 

I describe my position, and why I think the examples typically used don't support the points the authors want them to support, here. Here I'll just briefly suggest some reasons why we can say something about the long-term effects of choosing to conceive a child on a particular day:

One day later means the child should be expected to be born roughly one day later, and thus will be roughly one day younger at any future point. This probably very slightly slows down GDP growth, intellectual progress, population growth (via the child later having their own children), growth in carbon emissions due to what the child does themselves, maybe also cuts in carbon emission due to tech advancement or policy change or whatever, etc. Then this could be good or bad for the long-term future based on whether things like GDP growth, intellectual progress, population growth, etc. are good or bad for the long-term future, which it also seems we can say something about (see e.g. Differential Progress).

I don't want to start a pointless industry of alternatively 'shooting down' & refining purported cases of simple cluelessness, but just for fun here is another reason for why our cluelessness regarding "conceiving a child on Tuesday vs. Wednesday" really is complex:

Shifting the time of conception by one day (ignoring the empirical complication pointed out by Denise below) also shifts the probability distribution of birth date by weekday, e.g. whether the baby's birth occurs on a Tuesday or Wednesday. However, for all we know the weekday of birth has a systematic effect on birth-related health outcomes of mother or child. For instance, consider some medical complication occurring during labor with weekday-independent probability, which needs to be treated in a hospital. We might then worry that on a Wednesday healthcare workers will tend to be more overworked, and so slightly more likely to make mistakes, than on a Tuesday (because many of them will have had the weekend off and so on Wednesday they've been through a larger period of workdays without significant time off). On the other hand, we might think that people are reluctant to go to a hospital on a weekend such that there'll be a "rush" on hospitals on Mondays, which takes until Wednesday to "clear" - making in fact Monday or Tuesday more stressful for healthcare workers. And so on and so on ...

(This is all made up, but if I google for relevant terms I pretty quickly find studies such as Weekday of Surgery Affects Postoperative Complications and Long-Term Survival of Chinese Gastric Cancer Patients after Curative Gastrectomy or Outcomes are Worse in US Patients Undergoing Surgery on Weekends Compared With Weekdays or Influence of weekday of surgery on operative complications. An analysis of 25.000 surgical procedures or ...

I'm sure many of these studies are terrible but their existence illustrates that it might be pretty hard to justify an epistemic state that is committed to the effect of different weekdays exactly canceling out.)

((It doesn't help if we could work out the net effect on all health outcomes at birth, say b/c we can look at empirical data from hospitals. Presumably some non-zero net effect on e.g. whether or not we increase the total human population by 1 at an earlier time would remain, and then we're caught in the 'standard' complex cluelessness problem of working out whether the long-term effects of this are  net positive or net negative etc.))

I'm wondering if a better definition of simple cluelessness would be something like: "While the effects don't 'cancel out', we are justified in believing that their net effect will be small compared to differences in short-term effects."

I'm wondering if a better definition of simple cluelessness would be something like: "While the effects don't 'cancel out', we are justified in believing that their net effect will be small compared to differences in short-term effects."

I think that that's clearly a good sort of sentence to say. But: 

  • I don't think we need the "simple vs complex cluelessness" idea to say that
  • I really don't want us to use the term "clueless" for that! That sounds very absolute, and I think was indeed intended by Greaves to be absolute (see her saying "utterly unpredictable" here).
  • I don't want us to have two terms that (a) sound like they're meant to be sharply distinct, and (b) were (if I recall correctly) indeed originally presented as sharply distinct.

(I outlined my views on this a bit more in this thread, which actually happens to have been replies to you as well.)

Why can't we simply talk in terms of having more or less "resilient" or "justified" credences, in terms of how large the value of information from further information-gathering or information-analysis would be, and in terms of the value of what we could've done with that time or those resources otherwise?

It seems like an approach that's more clearly about quantitative differences in degree, rather than qualitative differences in kind, would be less misleading and more useful.

It's been a year since I thought about this much, and I only read 2 of the papers and a bunch of the posts/comments (so I didn't e.g. read Trammell's paper as well). But from memory, I think there are at least two important ways in which the standard terms and framing of simple vs complex cluelessness have caused issues:

  1. Many people seem to have taken the cluelessness stuff as an argument that we simply can't say anything at all about the long-term future, whereas we can say something about the near-term future, so we should focus on the near-term future.
  2. Greaves seems to instead want to argue that we basically, at least currently, can't say anything at all about the long-term effects of interventions like AMF, whereas we can say something about the long-term effects of a small set of interventions chosen for their long-term effects (e.g., some x-risk reduction efforts), so we should focus on the long-term future.
    1. See e.g. here, where Greaves says the long-term effects of short-termist interventions are "utterly unpredictable".

My independent impression is that both of these views are really problematic, and that the alternative approach used in Tarsney's epistemic challenge paper is just obviously far better. We should just think about how predictable various effects on various timelines from various interventions are. We can't just immediately say that we should definitely focus on neartermist interventions or that we should definitely focus on longtermist interventions; it really depends on specific questions that we actually can improve our knowledge about (through efforts like building better models or collecting more evidence about the feasibility of long-range forecasting).

Currently, this is probably the main topic in EA where it feels to me like there's something important that's just really obviously true and that lots of other really smart people are missing. So I should probably find time to collect my thoughts from various comments into a single post that lays out the arguments better.

When this post went up, I wrote virtually the same comment, but never sent it! Glad to see you write it up, as well as your below comments. I have the impression that in each supposed example of 'simple cluelessness' people just aren't being creative enough to see the 'complex cluelessness' factors, as you clarify with the chairs in your other comment.

My original comment even included saying how Phil's example of simple cluelessness is false, but it's false for different reasons than you think: If you try to conceive a child a day later, this will not in expectation impact when the child will be born. The impact is actually much stronger than that. It will affect whether you are able to conceive in this cycle at all, since eggs can only be fertilized during a very brief window of time (12-24 hours). If you are too late, no baby.

To be honest I'm not really sure how important there being a distinction between simple and complex cluelessness actually is. The most useful thing I took from Greaves was to realise there seems to be an issue of complex cluelessness in the first place - where we can't really form precise credences in certain instances where people have traditionally felt like they can, and that these instances are often faced by EAs when they're trying to do the most good.

Maybe we're also complexly clueless about what day to conceive a child on, or which chair to sit on, but we don't really have our "EA hat on" when doing these things. In other words, I'm not having a child to do the most good, I'm doing it because I want to. So I guess in these circumstances I don't really care about my complex cluelessness. When giving to charity, I very much do care about any complex cluelessness because I'm trying to do the most good and really thinking hard about how to do so.

I'm still not sure if I would class myself as complexly clueless when deciding which chair to sit on (I think from a subjective standpoint I at least feel simply clueless), but I'm also not sure this particular debate really matters.

I'm also inclined to agree with this. I actually only very recently realized that a similar point had also been made in the literature: in this 2019 'discussion note' by Lok Lam Yim, which is a reply to Greaves's cluelessness paper:

This distinction between ‘simple’ and ‘complex’ cases of cluelessness, though an ingenious one, ultimately fails. Upon heightened scrutiny, a so-called ‘simple’ case often collapses into a ‘complex’ case. Let us consider Greaves’s example of a ‘simple’ case: helping an old lady cross the road. It is possible that this minor act of kindness has some impacts of systematic tendencies of a ‘complex’ nature. For instance, future social science research may show that old ladies often tell their grandchildren benevolent stories they have encountered to encourage their grandchildren to help others. Future psychological research may show that small children who are encouraged to help others are usually more charitable, and these children, upon reaching adulthood, are generally more sympathetic to the effective altruism movement, which Greaves considers a ‘complex’ case. This shows that a so-called ‘simple’ decision (such as whether to help an old lady to cross the road) can systematically lead to consequences of a ‘complex’ nature (such as an increase in the possibility of their grandchildren joining the effective altruism movement), thereby suffering from the same problem of genuine cluelessness as a ‘complex’ case.

Morally important actions are often, if not always, others-affecting. With the advancement of social science and psychological research, we are likely to discover that most others-concerning actions have some systematic impacts on others. These systematic impacts may lead to another chain of systematic impacts, and so on. Along the chain of systematic impacts, it is likely that at least one of them is of a ‘complex’ nature.

Interesting - that's fairly similar to the counterarguments I gave for the same case here:

I think all of [the three key criteria Greaves proposes for a case to involve complex cluelessness] actually appl[y] to the old lady case, just very speculatively. One reason to think [the first criterion applies] is that the old lady and/or anyone witnessing your kind act and/or anyone who's told about it could see altruism, kindness, community spirit, etc. as more of the norm than they previously did, and be inspired to act similarly themselves. When they act similarly themselves, this further spreads that norm. We could tell a story about how that ripples out further and further and creates huge amount of additional value over time.

Importantly, there isn't a "precise counterpart, precisely as plausible as the original", for this story. That'd have to be something like people seeing this act therefore thinking unkindness, bullying, etc. are more the norm that they previously thought they were, which is clearly less plausible.

One reason to think [the second criterion applies] for the old lady case could jump off from that story; maybe your actions sparks ripples of kindness, altruism, etc., which leads to more people donating to GiveWell type charities, which (perhaps) leads to increased population (via reduced mortality), which (perhaps) leads to increased x-risk (e.g., via climate change or more rapid technological development), which eventually causes huge amounts of disvalue.

Your critique of the conception example might be fair actually. I do think it's possible to think up circumstances of genuine 'simple cluelessness' though where, from a subjective standpoint, we really don't have any reasons to think one option may be better or worse than the alternative. 

For example we can imagine there being two chairs in front of us and making a choice of which chair to sit on. There doesn't seem to be any point stressing about this decision (assuming there isn't some obvious consideration to take into account), although it is certainly possible that choosing the left chair over the right chair could be a terrible decision ex post. So I do think this decision is qualitatively different to donating to AMF. 

However I think the reason why Greaves introduces the distinction between complex and simple cluelessness is to save consequentialism from Lenman's cluelessness critique (going by hazy memory here). If a much wider class of decisions suffer from complex cluelessness than Greaves originally thought, this could prove problematic for her defence. Having said that, I do still think that something like working on AI alignment probably avoids complex cluelessness for the reasons I give in the post, so I think Greaves' work has been useful.

I do think it's possible to think up circumstances of genuine 'simple cluelessness' though where, from a subjective standpoint, we really don't have any reasons to think one option may be better or worse than the alternative. 

So far, I feel I've been able to counter any proposed example, and I predict I would be able to do so for any future example (unless it's the sort of thing that would never happen in real life, or the information given is less than one would have in real life).

For example we can imagine there being two chairs in front of us and making a choice of which chair to sit on.

Some off-the-top-of-my-head reasons we might not have perfect evidential symmetry here:

  • One chair might be closer, so walking to it expends less energy and/or takes less time, which has various knock-on effects
  • One chair will be closer to some other object in the world, making it easier for you to hear what's going on over there and for people over there to hear you, which could have various knock-on effects
  • One chair might look very slightly older, and thus very slightly more likely to have splinters or whatever

There doesn't seem to be any point stressing about this decision (assuming there isn't some obvious consideration to take into account)

I totally agree, but this is a very different claim from there being a qualitative, absolute distinction between simple and complex cluelessness. 

My independent impression is that, for the purpose of evaluating longtermism and things like that, we could basically replace all discussion of simple vs complex cluelessness with the following points:

  • You'll typically do a better job achieving an objective (in expectation) if you choose a plan that was highlighted in an effort to try to achieve that objective, rather than choosing a plan that was highlighted in an effort to try to achieve some other objective
    • This seems like commonsense, and also is in line with the "suspicious convergence" idea
  • Plans like "donate to AMF" were not highlighted to improve the very long-term future
  • Plans like "donate to reduce AI x-risk" were highlighted largely to improve the very long-term future
    • A nontrivial fraction of people highlighted this plan for other reasons (e.g., because they wanted to avoid extinction for their own sake or the sake of near-term generations), but a large fraction highlighted it for approximately longtermist reasons (e.g., Bostrom)
  • On the object-level, it also seems like existing work makes a much more reasonable case for reducing AI x-risk as a way to improve the long-term future than for AMF as a way to improve the long-term future
  • But then there's also the fact that those far-future effects are harder to predict than nearer-future effects, and nearer-future effects do matter at least somewhat, so it's not immediately obvious whether we should focus on the long-term or the short-term. This is where work like "The Epistemic Challenge to Longtermism" and "Formalising the 'Washing Out Hypothesis'" becomes very useful.
  • Also, there are many situations where it's not worth trying to work out which of two actions are better, due to some mixture of that being very hard to work out and the stakes not being huge
    • E.g., choosing which chair to sit on; deciding which day to try to conceive a child on
    • This is basically just a point about value of information and opportunity cost; it doesn't require a notion of absolute evidential symmetry
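Here's the toy sketch referenced above. It's purely illustrative (not the model from either of the pieces just mentioned), with made-up numbers: it just shows why an intervention whose per-period effect decays is dominated by its near-term effects, while one that shifts the probability of a persistent attractor state keeps accruing value as the horizon grows.

```python
# Toy sketch of the "washing out" idea. All numbers are invented for illustration.

HORIZON = 1_000_000      # periods (e.g. years) we sum value over
decay = 0.95             # per-period retention of an ordinary intervention's effect
initial_effect = 1.0     # expected value added in the first period

# Ordinary intervention: its expected effect decays geometrically, so its total
# value converges to initial_effect / (1 - decay) no matter how long the horizon.
washing_out_value = initial_effect / (1 - decay)

# Attractor-state intervention: a small shift in the probability of ending up in
# a persistent good state, whose value accrues every period and never decays.
prob_shift = 0.0001      # change in probability of reaching the good state
value_per_period = 1.0   # value of being in the good state, per period
attractor_value = prob_shift * value_per_period * HORIZON

print(f"washing-out intervention: {washing_out_value:.1f}")  # 20.0, horizon-independent
print(f"attractor intervention:   {attractor_value:.1f}")    # 100.0, grows with the horizon
```

With a fast enough decay rate, the first intervention's value is essentially fixed by its near-term effects, whereas the second's scales with how long the future lasts, which is the sense in which attractor states avoid washing out.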

(I used AMF and AI x-risk as the examples because you did; we could also state the points in a more general form.)

so I think Greaves' work has been useful.

FWIW, I also think other work of Greaves has been very useful. And I think most people - though not everyone - who've thought about the topic think the cluelessness stuff is much more useful than I think it is (I'm just reporting my independent impression here), so my all-things-considered belief is that that work has probably been more useful than it seems to me.

So far, I feel I've been able to counter any proposed example, and I predict I would be able to do so for any future example (unless it's the sort of thing that would never happen in real life, or the information given is less than one would have in real life).

I think simple cluelessness is a subjective state. In reality one chair might be slightly older, but one can be fairly confident that it isn't worth trying to find out (in expected value terms). So I think I can probably just modify my example to one where there doesn't seem to be any subjectively salient factor to pull you to one chair or the other in the limited time you feel is appropriate to make the decision, which doesn't seem too far-fetched to me (let's say the chairs look the same at first glance, they are both in the front row on either side of the aisle, etc.).

I think invoking simple cluelessness in the case of choosing which chair to sit on is the only way a committed consequentialist can feel OK making a decision one way or the other - otherwise they fall prey to  paralysis. Admittedly I haven't read James Lenman closely enough to know if he does in fact invoke paralysis as a necessary consequence for consequentialists, but I think it would probably be the conclusion.

EDIT: To be honest I'm not really sure how important the distinction between simple and complex cluelessness actually is. The most useful thing from Greaves' work was realising that there seems to be an issue of complex cluelessness in the first place, where we can't really form precise credences.

FWIW, I also think other work of Greaves has been very useful. And I think most people - though not everyone - who've thought about the topic think the cluelessness stuff is much more useful than I think it is

For me, Greaves' work on cluelessness just highlighted a problem I didn't think was there in the first place. I do feel the force of her claim that we may no longer be able to justify certain interventions (for example giving to AMF), and I think this should hold even for shorttermists (provided they don't discount indirect effects at a very high rate). The decision-relevant consequence for me is trying to find interventions that don't fall prey to the problem, which might be the longtermist ones that Greaves puts forward (although I'm uncertain about this).

I think simple cluelessness is a subjective state.

I haven't read the relevant papers since last year, but I think I recall the idea being not just that we currently don't have a sense of what the long-term effects of an action are, but also that we basically can't gain information about them. In line with that memory of mine, Greaves writes here that the long-term effects of short-termist interventions are "utterly unpredictable" - a much stronger claim than just that we currently have no real prediction.

(And I think that that idea is very problematic, as discussed elsewhere in this thread.)

(I'm writing this while a little jetlagged, so it might be a bit incoherent or disconnected from what you were saying.)

I think invoking simple cluelessness in the case of choosing which chair to sit on is the only way a committed consequentialist can feel OK making a decision one way or the other - otherwise they fall prey to  paralysis. 

I don't think this is right. I think the key thing is to remember that doing more analysis (thinking, discussing, researching, whatever) is itself a choice, and itself has a certain expected value (which is related to how long it will take, how likely it is to change what other decision you make, and how much of an improvement that change might be). Sometimes that expected value justifies the opportunity cost, and sometimes it doesn't. This can be true whether you can or can't immediately see any difference in the expected value of the two "concrete choices" (this is a term I'm making up to exclude the choice to do further analysis).

E.g., I don't spend time deciding which of two similar chairs to sit in, and this is the right decision for me to make from a roughly utilitarian perspective, because:

  • It seems that, even after quite a while spent analysing which chair I should sit in, the expected value I assign to each choice would be quite similar
  • There are other useful things I can do with my time
  • The expected value of just choosing a chair right away and then doing certain other things is higher than the expected value of first spending longer deciding which chair to sit in

(Of course, I don't explicitly go through that whole thought process each time I implicitly make a mundane decision.)

But there are also some cases where the expected values we'd guess each of two actions would have are basically the same and yet we should engage in further analysis. This is true when the opportunity cost of the time spent on that analysis seems justified, in expectation, by the probability that that analysis would cause us to change our decision and the extent to which that change might be an improvement.
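To make the value-of-information point above concrete, here's a minimal sketch of the comparison being described. It isn't anyone's actual decision procedure, and the function name and numbers are hypothetical; it just shows that whether further analysis is worthwhile depends on the chance it changes your decision, the size of the improvement if it does, and the opportunity cost of the analysis itself.

```python
# Toy sketch: is further analysis worth it before choosing between two options?
# All inputs are illustrative assumptions, not estimates from anywhere.

def worth_analysing(p_change_decision, gain_if_changed, cost_of_analysis):
    """Return True if more analysis has higher expected value than just deciding now.

    p_change_decision: probability the analysis changes which option we pick
    gain_if_changed: expected improvement (in arbitrary value units) if it does
    cost_of_analysis: value of the best alternative use of that time
    """
    value_of_information = p_change_decision * gain_if_changed
    return value_of_information > cost_of_analysis

# Choosing a chair: analysis is very unlikely to change the choice, and any
# improvement would be tiny, so just sitting down is the better call.
print(worth_analysing(0.01, 0.001, 1.0))   # False

# Comparing donation targets: even a modest chance of switching to a much
# better option can justify a substantial time cost.
print(worth_analysing(0.2, 100.0, 5.0))    # True
```

Nothing here requires an absolute notion of evidential symmetry; the same calculation covers both the chair case and cases where further analysis is clearly worth it.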

So I don't think the concept of "simple cluelessness" is necessary, and I think it's unhelpful in that:

  • It sounds absolute and unchangeable, whereas in many cases one either already has or could come to have a belief about which action would have higher expected value
  • It implies that there's something special about certain cases where one has extremely little knowledge, whereas really what's key is how much information value various actions (e.g., further thinking) would provide and what opportunity cost those actions have

I think another genuine issue for longtermism is complex cluelessness/deep uncertainty and moral uncertainty, although it's not specific to longtermism. Even if you identify an intervention that you think has predictably large effects on the far future, you may not be able to weigh the arguments and evidence in such a way as to decide that it's actually net positive in expectation.

It's easy to forget the possibility that you'll do more harm than good, or give it too little weight, and I suspect this is worse when we get into very small probabilities of making a difference, and especially Pascalian cases, since we're especially bad at estimating such (differences in) probabilities and considering many very small (differences in) probabilities.

There seems to be some bias at some influential EA orgs against writing about the idea that the far future could be bad (or worse conditional on the survival of humanity or our descendants), which can lead us to systematically underestimating the risks of backfire in this way or related ways. There are other ways seemingly good interventions can backfire, e.g. the research we publish could be used for harm (even if they're actually doing good according to their own views!), or some AI safety work could be accelerating the development (and adoption) of AGI. Without good feedback loops, such biases and blindspots can persist more easily, and shorttermist interventions tend to have better feedback loops than longtermist ones.

There seems to be some bias at some influential EA orgs against writing about the idea that the far future could be bad (or worse conditional on the survival of humanity or our descendants), which can lead us to systematically underestimating the risks of backfire in this way or related ways.

I think that the claim you make is plausible, but I don't think the post you link to provides good evidence of it. If readers were going to read and update on that post, I'd encourage them to also read the commentary on it here. (I read the post myself and found it very unconvincing and strange.)

I think the guidelines and previous syllabi/reading lists are/were biased against downside-focused views, practically pessimistic views, and views other than total symmetric and classical utilitarianism (which are used most to defend work against extinction) in general, as discussed in the corresponding sections of the post. This is both on the normative ethics side and in the discussion of how the future could be bad or extinction could be good. I discussed CLR's guidelines with Jonas Vollmer here. CLR's guidelines are here, and the guidelines endorsed by 80,000 Hours, CEA, CFAR, MIRI, Open Phil and particular influential EAs are here. (I don't know if these are current.)

On the normative ethics side, CLR is expected to discuss moral uncertainty and non-asymmetric views in particular to undermine asymmetric views, while the other side is expected to discuss moral uncertainty and s-risks but not asymmetric views in particular. This biases us away from asymmetric views, according to which the future may be bad and extinction may be good.

On discussion of how the future could be bad or extinction could be good, from CLR's guidelines:

Minimize the risk of readers coming away contemplating causing extinction, i.e., consider discussing practical ways to reduce s-risks instead of saying how the future could be bad

(...)

In general, we recommend writing about practical ways to reduce s-risk without mentioning how the future could be bad overall. We believe this will likely have similar positive results with fewer downsides because there are already many articles on theoretical questions.

(emphasis mine)

So, CLR associates are discouraged from arguing that the future could be bad and extinction could be good, biasing us against these hypotheses.

I'm not sure that the guidelines for CLR are actually bad overall, though, since I think the arguments for them are plausible, and I agree that people with pessimistic or downside-focused views should not seek to cause extinction, except possibly through civil discussion and outreach causing people to deprioritize work on preventing extinction.  But the guidelines rule out ways of doing the latter, too.

 

I have my own (small) personal example related to normative ethics, too. The coverage of the asymmetry on this page, featured on 80,000 Hours' Key Ideas page, is pretty bad:

One issue with this is that it’s unclear why this asymmetry would exist.

The article does not cite any literature making positive cases for the asymmetry (although they discuss the repugnant conclusion as being a reason for person-affecting views). I cite some in this thread.

The bigger problem though is that this asymmetry conflicts with another common sense idea.

Suppose you have the choice to bring into existence one person with an amazing life, or another person whose life is barely worth living, but still more good than bad. Clearly, it seems better to bring about the amazing life, but if creating a happy life is neither good or bad, then we have to conclude that both options are neither good nor bad. This implies both options are equally good, which seems bizarre.

There are asymmetric views to which this argument does not apply, some published well before this page, e.g. this and this. Also, the conclusion may not be so bizarre if the lives are equally content/satisfied, in line with negative accounts of welfare (tranquilism/Buddhist axiology, antifrustrationism, negative utilitarianism, etc.).

Over a year ago, I criticized this for being unfair in the comments section of that page, linking to comments in my own EA Forum shortform and other literature with arguments for the asymmetry, and someone strong-downvoted the comments in my shortform with a downvote strength of 7 and without any explanation. There was also already another comment criticizing the discussion of the asymmetry.

FWIW, I think that the specific things you point to in this comment do seem like some evidence in favour of your claim that some influential EA orgs have some bias against things broadly along the lines of prioritising s-risks or adopting suffering-focused ethical views. And as mentioned in my other comment, I also did already see that claim as plausible. 

(I guess more specifically, I see it as likely that at least some people at EA orgs have this bias, and likely that there's at least a little more of this bias than of an "opposite" bias, but not necessarily likely - just plausible - that there's substantially more of that bias than of the "opposite" bias.)

Also, on reflection, I think I was wrong to say "I don't think the post you link to provides good evidence [for your claim]." I think that the post you link to does contain some ok evidence for that claim, but also overstates the strength of this evidence, makes other over-the-top claims, and provides as evidence some things that don't seem worth noting at all, really.

And to put my own cards on the table on some related points: 

  • I'd personally like the longtermist community to have a bit of a marginal shift towards less conflation of "existential risk" (or the arguments for existential risk reduction) with "extinction risk", more acknowledgement that effects on nonhumans should perhaps be a key consideration for longtermists, and more acknowledgement of s-risks as a plausible longtermist priority
  • But I also think we're already moving in the right direction on these fronts, and that we're already in a fairly ok place

From what I've read, moral uncertainty tends to work in favour of longtermists, provided you're happy to do something like maximising expected choice-worthiness. E.g. see here for moral uncertainty about population axiology implying we should choose options preferred by total utilitarianism (disclaimer - I've only read the abstract!). If Greaves and MacAskill's claim about the robustness of longtermism to different moral views is fair, it seems longtermism should remain fairly robust in the face of moral uncertainty.
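For readers unfamiliar with the approach, here's a minimal sketch of how maximising expected choice-worthiness works. The theories, credences, and scores are entirely made up for illustration; the point is just that a theory with much larger stakes (like total utilitarianism over a vast future) tends to dominate the calculation even at moderate credence.

```python
# Toy sketch of maximising expected choice-worthiness (MEC) under moral
# uncertainty. Credences and scores are invented purely for illustration.

credences = {"total_utilitarianism": 0.5,
             "person_affecting_view": 0.5}

# Choice-worthiness of each option under each moral theory (arbitrary units).
choiceworthiness = {
    "longtermist_intervention": {"total_utilitarianism": 100,
                                 "person_affecting_view": 1},
    "neartermist_intervention": {"total_utilitarianism": 5,
                                 "person_affecting_view": 5},
}

def expected_choiceworthiness(option):
    return sum(credences[theory] * score
               for theory, score in choiceworthiness[option].items())

best = max(choiceworthiness, key=expected_choiceworthiness)
print(best)  # "longtermist_intervention": expected scores are 50.5 vs 5.0
```

This is the mechanical sense in which, if intertheoretic comparisons like these are allowed, moral uncertainty can favour the longtermist option.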

In terms of complex cluelessness in a more empirical sense, I admit I haven't properly considered the possibility that something like "researching AI alignment" may have realistic downsides. I do however find it a tougher sell that we're complexly clueless about working on AI alignment in the same way that we are about giving to AMF.

Fantastic post, thanks for taking the time to write it :)

Thank you, glad you liked it!

tl;dr for this comment: 

  • I appreciate you highlighting that longtermism doesn't necessarily entail ultimately focusing on humans.
  • But I think it'd be better to broaden your discussion to humans, non-human animals, and other types of non-humans that might be moral patients (e.g., artificial sentiences)
  • And I think you imply that, when it comes to non-humans, we must necessarily focus on suffering-reduction
    • I think it'd be better to broaden your discussion so that it's open to other goals regarding non-humans, such as increasing happiness

---

A nitpick I have with Greaves and MacAskill’s paper is that they don’t mention non-human animals. 

This was also a nitpick I had. Here's a relevant part of the notes I wrote on their paper (which I should be posting in full soon):

  • It seems like the paper implicitly assumes that humans are the only moral patients
    • I think it makes sense for the paper to focus on humans, since it makes sense for many papers to tackle just one thorny issue at a time
    • But I think it would’ve been good for the paper to at least briefly acknowledge that this is just a simplifying assumption
      • Perhaps just in a footnote
      • Otherwise the paper is kind-of implying that the authors really do take it as a given that humans are the only moral patients (which I think wouldn’t actually match the authors’ views) 

---

"Longtermists shouldn’t ignore non-human animals. It is plausible that there are things we can do to address valid concerns about the suffering of non-human animals in the far future. More research into the tractability of certain interventions could have high expected value."

I appreciate you highlighting that longtermism doesn't have to focus on humans. But I personally think your framing is still narrower than it should be: You could be interpreted as implying that longtermists should focus on either humans, or on the suffering of non-human animals, or on some combination of those goals. 

But I think it's also quite important to consider other possible moral patients that are neither humans nor animals, such as artificial sentiences. (Perhaps arguably some artificial sentiences would be considered by some people to be effectively humans or non-human  animals, but this may not be the case, and other artificial sentiences might be more starkly different.)

And it could also be a moral priority to decrease bad things other than suffering among non-humans, such as death or a lack of freedom. (This would of course require that utilitarianism be false, or at least that we be quite uncertain about it.) And it could also be a moral priority to increase good things for non-humans (e.g., allow there to be large numbers of happy non-human beings).

Tobias Baumann suggests that expanding the moral circle to include non-human animals might be a credible longtermist intervention, as a good long-term future for all sentient beings may be unlikely as long as people think it is right to disregard the interests of animals for frivolous reasons such as the taste of meat. Non-human animals are moral patients that are essentially at our will, and it seems plausible that there are non-extinction attractor states for these animals.

I do think that this is all plausible. But I think people have sometimes jumped on this option too quickly, with too little critical consideration. See also this section of a post and this doc.

Also, I think "Non-human animals are moral patients" (emphasis added) is too strong; I'm not sure we should be practically certain that any nonhuman animals are moral patients, and I definitely don't think we should be practically certain that all are (e.g., insects, crabs). 

To be clear, I'm vegan, and broadly supportive of people focusing on animal welfare, and I think due to moral uncertainty / expected value society should pay far more attention to animals than it does. But I still think it's quite unclear which animals are moral patients. And my impression is that people who've looked into this tend to feel roughly similar (see e.g. Muehlhauser's report).

For example, just as with humans, non-human animal extinction and non-extinction are both attractor states. It is plausible that the extinction of both farmed and wild animals is better than existence, as some have suggested that wild animals tend to experience far more suffering than pleasure and it is clear that factory-farmed animals undergo significant suffering. Therefore causing non-human animal extinction may have high value. Even if some non-human animals do have positive welfare, it may be better to err on the side of caution and cause them to go extinct, making use of any resources or space that is freed up to support beings that have greater capacity for welfare and that are at lower risk of being exploited e.g. humans (although this may not be desirable under certain population axiologies).

My impression is that people interested in wild animal welfare early on jumped a bit too quickly to being confident that wild animal lives are net negative, possibly because one of the pioneers of this area (Brian Tomasik) is morally suffering-focused (hence the early arguments tended to focus on suffering). 

I don't have a strong view on whether wild animal lives tend to be net negative, but it seems to me that more uncertainty is warranted. 

See also the EAG talk Does suffering dominate enjoyment in the animal kingdom? | Zach Groff.

I don't think this undermines the idea that maybe longtermists should focus on non-humans, but it suggests that maybe it's unwise to place much more emphasis on reducing suffering (and maybe even causing extinction) than on other options (e.g., improving their lives or increasing their population). I think we should currently see both options as plausible priorities.

tl;dr: It's plausible to me that the future will involve far more nonbiological sentience (e.g., whole brain emulations) than biological sentience, which might make farm animals redundant and wild animals vastly outnumbered.

You write:

The second reason is an interesting possibility. It could be the case if perhaps there aren’t a vast number of expected non-human animals in the future. It does indeed seem possible that farmed animals may cease to exist in the future on account of being made redundant due to cultivated meat, although I certainly wouldn’t be sure of this given some of the technical problems with scaling up cultivated meat to become cost-competitive with cheap animal meat. Wild animals seem highly likely to continue to exist for a long time, and currently vastly outnumber humans. Therefore it seems that we should consider non-human animals, and perhaps particularly wild animals, when aiming to ensure that the long-term future goes well.

I'm not sure why you think it's highly likely that wild animals will continue to exist for a long time, and in particular in large numbers relative to other types of beings (which you seem to imply, though you don't state it outright)? It seems plausible to me that the future will involve something like massive expansion into space by (mostly) humans, whole-brain emulations, or artificial sentiences, without spreading wild animals to these places. (We might spread simulated wild animals without spreading biological ones, but we also might not.)

Relatedly, I think another reason farmed animals might be made redundant is that humanity may simply move from biological to digital form, such that there is no need to eat actual food of any kind. (Of course, it seems hard to say whether this'll happen, but over a long time-scale I wouldn't say it's highly unlikely.)

For arguments for and against these sorts of points I'm making, see Should Longtermists Mostly Think About Animals? and the comments there.

You list as a misconception that "Longtermists have to predict the future". I think what you intend to say is a valuable point, but that your current phrasing is a bit off. I think what you really mean is something like that it's a misconception that "Longtermists have to make very precise predictions about the far future".

I agree with this point, and think it's important:

Therefore whilst it is true that a claim has to be made about the far future, namely that we are unlikely to ever properly recover from existential catastrophes, this claim seems less strong than a claim that some particular event will happen in the far future. 

But longtermists still do need to predict the future - in particular, the case for the most popular longtermist interventions requires:

  • predicting events like existential risks or trajectory changes in the coming years, decades, or centuries
  • predicting that those events would cause the world to be "locked-in" to a particular state/trend, or something like that

Predicting the future is also necessary to make the case for neartermist/shortermist interventions, but that tends to require predicting things that are closer to the present and for which we have more precedents/data to go on.

Thanks for this post! I expect this will indeed clarify some important points for people, and I've already recommended this post to a couple people interested in these sorts of topics.

I'd suggest editing the post to put the misconceptions in the headings in quote marks, or something like that. Otherwise on some level people's minds might initially process the headings as claims you're making - especially if they're skimming or just glancing at the post. 

(This suggestion is informed by my previous reading on topics like how misinformation can accidentally emerge, spread, and be sticky. A particularly relevant paper is The effects of subtle misinformation in news headlines. I haven't checked for replications of this stuff, but the basic takeaway here seems to make intuitive sense to me as well.)

Thanks for all your comments Michael, and thanks for recommending this post to others!

I have read through your comments and there is certainly a lot of interesting stuff to think about there. I hope to respond but I might not be able to do that in the very near future.  

I'd suggest editing the post to put the misconceptions in the headings in quote marks

Great suggestion thanks, I have done that.

Thank you for this Jack.

Floating an additional idea here, in the terms of another misconception that I sometimes see. Very interested in your feedback:

 

Possible misconception: Someone has made a thorough case for "strong longtermism"

Possible misconception: “Greaves and MacAskill at GPI have set out a detailed argument for strong longtermism.”

My response: “Greaves and MacAskill argue for 'axiological strong longtermism' but this is not sufficient to make the case that what we ought to do is mainly determined by focusing on far future effects”

Axiological strong longtermism (AL) is the idea that: “In a wide class of decision situations, the option that is ex ante best is contained in a fairly small subset of options whose ex ante effects on the very long-run future are best.”

The colloquial use of strong longtermism on this forum (CL) is something like  “In most of the ethical choices we face today we can focus primarily on the far-future effects of our actions".

Now there are a few reasons why this might not follow (why CL might not follow from AL):

  1. The actions that are best in the short run are the same as the ones that are best in the long run (this is consistent with AL, see p10 of the Case for Strong Longtermism paper) in which case focusing attention on the more certain short term could be sufficient.
  2. Making decisions solely by evaluating ex ante effects is not a useful way of making decisions or otherwise interacting with the world.
  3. Etc

Whether or not you agree with these reasons it should at least be acknowledged that the Case for Strong Longtermism paper focuses on making a case for AL – it does not actually try to make a case for CL. This does not mean there is no way to make a case for CL but I have not seen anyone try to and I expect it would be very difficult to do, especially if aiming for philosophical-level rigour.

 

– – 

This misconception can be used in discussions for or against longtermism. If you happen to be a super strong believer that we should focus mainly on the far future it would whisper caution and if you think that Greaves and MacAskill's arguments are poor it would suggest being careful not to overstate their claims.



(PS. Both 1 and 2 seem likely to be true to me)
 

Thanks for this! I guess I agree with your overall point that the case isn’t as airtight as it could be. It’s for that reason that I’m happy that the Global Priorities Institute has put longtermism front and centre of their research agenda. I’m not sure I agree with your specific points though.

1. The actions that are best in the short run are the same as the ones that are best in the long run (this is consistent with AL, see p10 of the Case for Strong Longtermism paper) in which case focusing attention on the more certain short term could be sufficient.

I’m sceptical of this. It would seem to me to be surprising and suspicious convergence that the actions that are best in terms of short-run effects are also the actions that are best in terms of long-run effects. We should be predisposed to thinking this is very unlikely to be the case.

Even if there are some cases where the actions that have the best short run effects are also the ones that have the best long-run effects, I think it would be important for us to justify doing them based on their long-run effects (so I disagree that colloquial longtermism would be undermined as you say). This is because, if axiological strong longtermism is true, the vast majority of the value of these actions will in fact be coming from the long-run effects. Ignoring this fact and just doing them based on their short-run effects wouldn’t seem to me to be a great idea, as if we were to come across evidence or otherwise conclude that the action isn’t in fact good from a long-run perspective, we wouldn’t be able to correct for this (and correcting for it would be very important). So I’m not convinced that AL doesn’t imply CL.

2. Making decisions solely by evaluating ex ante effects is not a useful way of making decisions or otherwise interacting with the world.

I would need to know more about your proposed alternative to comment. I would just point out (something I didn’t mention in my post) that Greaves and MacAskill also argue for “deontic strong longtermism” in their paper. I.e. that we ought to be driven by far future effects. They argue that the deontic longtermist claim follows from the axiological claim because, if axiological strong longtermism is true, it is true by a large margin, and a plausible non-consequentialist theory has to be sensitive to the axiological stakes, becoming more consequentialist in output as the axiological stakes get higher.

3. Etc

I hope this doesn’t come across as snarky, but “etc.” makes it sound like there is a long list of obvious problems but, to be honest, I’m not sure what these are beyond the ones I mention in my post so it would probably be helpful for you to specify these.

Hi Jack, Thank you for your thoughts. Always a pleasure to get your views on this topic.

I agree with your overall point that the case isn’t as airtight as it could be

I think that was the main point I wanted to make (the rest was mostly to serve as an example). The case is not yet made with rigour, although maybe soon. Glad you agree.

I would also expect (although I can't say for sure) that if you went and hung out with GPI academics and asked how certain they are about x, y, and z about longtermism, you would perhaps find less certainty than comes across from the outside or than you might find on this forum, and I think it is useful for people to realise that.

Hence I thought it might be one for your list.

 

– – 

The specific points 1. and 2. were mostly to serve as examples for the above (the "etc" was entirely in that vein, just to imply that there may be things that a truly rigorous attempt to prove CL would throw up).

Main point made, and even roughly agreed on :-), so I'm happy to opine a few thoughts on the truth of 1. and 2. anyway:

 

– – 

1. The actions that are best in the short run are the same as the ones that are best in the long run

Please assume that by short-term I mean within 100 years, not within 10 years.

A few reasons you might think this is true:

  • Convergence: See your section on "Longtermists won't reduce suffering today". Consider some of the examples in the paper: speeding up progress, preventing climate change, etc. are quite possibly the best things you could do to maximise benefit over the next 100 years. AllFed justify working on extreme global risks based on expected lives saved in the short run. (If this is suspicious convergence, it goes both ways: why are many of the examples in the paper so suspiciously close to what is best in the short run?)
  • Try it: Try making the best plan you can accounting for all the souls in the next 1x10^100 years, but no longer. Great, done. Now make the best plan but only take into account the next 1x10^99 years. Done? Does it look any different? Now try 1x10^50 years. How different does that look? What about the best plan for 100000 years? Does that plan look different? What about 1000 years or 100 years? At what point does it look different? Based on my experience of working with governments on long-term planning, my guess would be that it would start to differ significantly after about 50-100 years. (Although it might well be the case that this number is higher for philanthropists rather than policy makers.)
  • Neglectedness: Note that two thirds of the next century (everything after ~33 years from now) basically does not feature in almost any planning today. That means most of the next 100 years is almost as neglected as the long-term future (and easier to impact).

On:

Even if there are some cases where the actions that have the best short run effects are also the ones that have the best long-run effects ... the value of these actions will in fact be coming from the long-run effects

I think I agree with this (at least intuitively agree, not having given it deep thought). I raised 1. as I think it is a useful example of where the Case for Strong Longtermism paper focuses on AL rather than CL. See section 3 p9 – the authors say if short-term actions are also the best long-term actions then AL is trivially true, and then move on. The point you raise here is just not raised by the authors as it is not relevant to the truth of AL.

 

– – 

2. Making decisions solely by evaluating ex ante effects is not a useful way of making decisions or otherwise interacting with the world.

I agree that AL leads to 'deontic strong longtermism'.

I don’t think the expected value approach (which is the dominant approach used in their paper) or the other approaches they discuss fully engage with how to make complex decisions about the far future. I don’t think we disagree much here (you say more work could be done on decision-theoretic issues, and on tractability).

I would need to know more about your proposed alternative to comment.

Unfortunately, I am running out of time (and weekend) to go into this in much depth, so I hope you don’t mind if, instead of a lengthy answer here, I just link you to some reading.

I have recently been reading the following, which you might find an interesting introduction to how one might go about thinking about these topics, and which is fairly close to my views:

https://blog.givewell.org/2014/06/10/sequence-thinking-vs-cluster-thinking/

https://www.givewell.org/modeling-extreme-model-uncertainty

 

– –

Always happy to hear your views. Have a great week

I think I agree with this (at least intuitively agree, not having given it deep thought). I raised 1. as I think it is a useful example of where the Case for Strong Longtermism paper focuses on AL rather than CL. See section 3 p9 – the authors say if short-term actions are also the best long-term actions then AL is trivially true, and then move on. The point you raise here is just not raised by the authors as it is not relevant to the truth of AL.

I just don't really see a meaningful / important distinction between AL and CL to be honest. Let's consider that AL is true, and also that cultivated meat happens to be the best intervention from both a shortermist and longtermist perspective. 

A shortermist might say: I want cultivated meat so that people stop eating animals, reducing animal suffering now.

A longtermist might say: I want cultivated meat so that people stop eating animals and therefore develop moral concern for all animals. This will reduce the risk of us locking in persistent animal suffering in the future.

In this case, if AL is true, I think we should also be colloquial longtermists and justify cultivated meat in the way the longtermist does, as that would be the main reason cultivated meat is good. If evidence were to come out that stopping eating meat doesn't improve moral concern for animals, cultivated meat may no longer be great from a longtermist point of view - and it would be important to reorient based on this fact. In other words, I think AL should push us to strive to be colloquial longtermists.

Otherwise, thanks for the reading, I will have a look at some point!

I’m sceptical of this. It would seem to me to be surprising and suspicious convergence that the actions that are best in terms of short-run effects are also the actions that are best in terms of long-run effects. We should be predisposed to thinking this is very unlikely to be the case.

Even if there are some cases where the actions that have the best short run effects are also the ones that have the best long-run effects, I think it would be important for us to justify doing them based on their long-run effects (so I disagree that colloquial longtermism would be undermined as you say).

I think I essentially agree, and I think that these sorts of points are too often ignored. But I don't 100% agree. In particular, I wouldn't be massively surprised if, after a few years of relevant research, we basically concluded that there's a systematic reason why the sort of things that are good for the short-term will tend to also be good for the long-term, and that we can basically get no better answers to what will be good for the long-term than that. (This would also be consistent with Greaves and MacAskill's suggestion of speeding up progress as a possible longtermist priority.)

I'd bet against that, but not with massive odds. (It'd be better for me to operationalise my claim more and put a number on it, rather than making these vague statements - I'm just taking the lazy option to save time.)

And then if that was true, it could make sense to most of the time just focus on evaluating things based on short-term effects, because that's easier to evaluate. We could have most people focusing on that proxy most of the time, while a smaller number of people continue checking whether that seems a good proxy and whether we can come up with better ones.

I think most longtermists are already doing something that's not massively different from that: Most of us focus most of the time on reducing existential risk, or some specific type of existential risk (e.g., extinction caused by AI), as if that's our ultimate, terminal goal. Or we might even most of the time focus on an even more "proximate" or "merely instrumental" proxy, like "improving institutions' ability and motivation to respond effectively to [x]", again as if that's a terminal goal. 

(I mean this to stand in contrast to consciously focusing on "improving the long-term future as much as possible", and continually re-deriving what proxies to focus on based on that goal. That would just be less efficient.) 

Then we sometimes check in on whether the proxies we focus on are actually what's best for the future.

I think this approach makes sense, though it's also good to remain aware of what's a proxy and what's an ultimate goal, and to recognise our uncertainty about how good our proxies are. (This post seems relevant, and in any case is quite good.)

Greaves and MacAskill also argue for “deontic strong longtermism” in their paper. I.e. that we ought to be driven by far future effects.

Yeah, this is also what came to mind for me when I read weeatquince's comment. I'd add that Greaves and MacAskill also discuss some possible decision-theoretic objections, including objections to the idea that one should simply make decisions based on what seems to have the highest expected value, and argue that the case for longtermism seems robust to these objections. (I'm not saying they're definitely right, but rather that they do seem to engage with those potential counterarguments.)

I agree that CL may or may not follow from AL depending on one's other ethical and empirical views.

However, I'm not sure I understand if and why you think this is a problem for longtermism specifically, as opposed to effective altruism more broadly. For instance, consider the typical EA argument for donating to more rather than less effective global health charities. I think that argument essentially is that donating to a more effective charity has better ex-ante effects. 

Put differently, I think many EAs donate to AMF because they believe that GiveWell has established that marginal donations to AMF have pretty good ex-ante effects compared to other donation options (at least if we only look at a certain type of effect, namely short-term effects on human beneficiaries). But I haven't seen many people arguing on the EA Forum that, actually, it is a misconception that someone has made a thorough case for donating to AMF because maybe making decisions solely by evaluating ex-ante effects is not a useful way of interacting with the world. [1]

So you directing a parallel criticism at longtermism specifically leaves me a little confused. Perhaps I'm misunderstanding you?

(I'm setting aside your potential empirical defeater '1.' since I largely agree with the discussion on it in the other responses to your comment. I.e. I think it is countered strongly, though not absolutely decisively, by the 'beware suspicious convergence' argument.)

 

[1] People have claimed that there isn't actually a strong case for donating to AMF; but usually such arguments are based on types of effects (e.g. on nonhuman animals or on far-future outcomes) that the standard pro-AMF case allegedly doesn't sufficiently consider rather than on claims that, actually, ex-ante effects are the wrong kind of thing to pay attention to in the first place.

tl;dr – The case for giving to GiveWell top charities is based on much more than just expected value calculations.

The case for longtermism (CL) is not based on much more than expected value calculations; in fact many non-expected value arguments currently seem to point the other way. This has led to a situation where there are many weak arguments against longtermism and one very strong argument for longtermism. This is hard to evaluate.

We (longtermists) should recognise that we are new and there is still work to be done to build a good theoretical base for longtermism.

 

Hi Max,

Good question. Thank you for asking.

– – 

The more I have read by GiveWell (and to a lesser degree by groups such as Charity Entrepreneurship and Open Philanthropy) the more it is apparent to me that the case for giving to the global poor is not based solely on expected value but is based on a very broad variety of arguments. 

For example I recommend reading:

  1. https://blog.givewell.org/2014/06/10/sequence-thinking-vs-cluster-thinking
  2. https://blog.givewell.org/2011/08/18/why-we-cant-take-expected-value-estimates-literally-even-when-theyre-unbiased/
  3. https://www.givewell.org/modeling-extreme-model-uncertainty
  4. https://forum.effectivealtruism.org/posts/h6uXkwFzqqr2JdZ4e/joey-savoie-tools-for-decision-making

The rough pattern of these posts is that taking a broad variety of different decision-making tools and approaches, and seeing where they all converge and point to, is better than just looking at expected value (or using any other single tool). Expected value calculations are not the only way to make decisions, and the authors suggest they would not have been convinced to give to the global poor by expected value calculations alone, without historical evidence, good feedback loops, expert views, strategic considerations, etc.

For example in [1.] Holden describes how he was initially sceptical that:
"donations can do more good when targeting the developing-world poor rather than the developed-world poor "
but he goes on to say that:
"many (including myself) take these arguments more seriously on learning things like “people I respect mostly agree with this conclusion”; “developing-world charities’ activities are generally more robustly evidence-supported, in addition to cheaper”; “thorough, skeptical versions of ‘cost per life saved’ estimates are worse than the figures touted by charities, but still impressive”; “differences in wealth are so pronounced that “hunger” is defined completely differently for the U.S. vs. developing countries“; “aid agencies were behind undisputed major achievements such as the eradication of smallpox”; etc."

– –

Now I am actually somewhat sceptical of some of this writing. I think much of it is a pushback against longtermism. Remember the global development EAs have had to weather the transition from "give to global health, it has the highest expected value" to "give to global health, it doesn't have the highest expected value (longtermism has that) but is good for many other reasons". So it is not surprising that they have gone on to express that there are many other reasons to care about global health that are not based in expected value calculations.

– –  

But that possible "status quo bias" does not mean they are wrong. It is still the case that GiveWell have made a host of arguments for global health beyond expected value and that the longtermism community has not done so. The longtermism community has not produced historical evidence or highlighted successful feedback loops or demonstrated that their reasoning is robust to a broad variety of possible worldviews or built strong expert consensus. (Although the case has been made that preventing extreme risks is robust to very many possible futures, so that at least is a good longtermist argument that is not based on expected value.)

In fact to some degree the opposite is the case. People who argue against longtermism have pointed to cases where long-term-type planning historically led to totalitarianism, or to the common-sense weirdness of longtermist conclusions, etc. My own work in risk management suggests that, especially when planning for disasters, it is good to not put too much weight on expected value but to assume that something unexpected will happen.

The fact is that the longtermist community has much more weird conclusions than the global health community yet has put much less effort into justifying those conclusions.

– – 

To me it looks like all this has led to a situation where there are many weak arguments against longtermism (CL) and one very strong argument for longtermism (AL->CL). This is problematic as it is very hard to compare one strong argument against many weak arguments, and which side you fall on will depend largely on your empirical views and how you weigh up evidence. This ultimately leads to unconstructive debate.

– – 

I think the longtermist view is likely roughly correct. But I think that the case for longtermism has not been made rigorously or even particularly well (certainly it does not stand up well to Holden's "cluster thinking" ideals). I don’t see this as a criticism of the longtermist community, as the community is super new and the paper arguing the case even just from the point of view of expected value is still in draft! I just think it is a misconception worth adding to the list that the community has finished making the case for longtermism – we should recognise our newness and that there is still work to be done, and not pretend we have all the answers. The EA global health community has built a broad theoretical base beyond expected value and so can we, or we can at least try.

– – 

I would be curious to know the extent to which you agree with this?

Also, I think my way of mapping the situation is a bit more nuanced here than in my previous comment, so I want to acknowledge a subtle change of views between my earlier comment and this one, ask that if you respond you respond to the views as set out here rather than above, and of course thank you for your insightful comment that led to my views evolving – thank you Max!


– –
– – 

(PS. On the other topic you mention. [Edited: I am not yet sure of the extent to which I think] the 'beware suspicious convergence' counter-argument [applies] in this context. Is it suspicious that if you make a plan for 1000 years it looks very similar to if you make a plan for 10000 years? Is it suspicious that if I plan for 100000 years or 100 years what I do in the next 10 years looks the same? Is it suspicious that if I want to go from my house in the UK to Oslo the initial steps are very similar to if I want to go from my house to Australia – ie. book ticket, get bus to train station, get train to airport? Etc? [Would need to give this more thought but it is not obvious] )



 

Hi Sam, thank you for your thoughtful reply.

Here are some things we seem to agree on:

  • The cases for specific priorities or interventions that are commonly advocated based on a longtermist perspective (e.g. "work on technical AI safety") are usually far from watertight. It could be valuable to improve them, by making them more "robust" or otherwise.
  • Expected-value calculations that are based on a single quantitative model have significant limitations. They can be useful as one of many inputs to a decision, but it would usually be bad to use them as one's sole decision tool.
    • (I am actually a big fan of the GiveWell/Holden Karnofsky posts you link to. When I disagree with other people it often comes down to me favoring more "cluster thinking". For instance, these days this happens a lot to me when talking to people about AI timelines, or other aspects of AI risk.)

However, I think I disagree with your characterization of the case for CL more broadly, at least for certain uses/meanings of CL.

Here is one version of CL which I believe is based on much more than just expected-value calculations within a single model: This is roughly the claim that (i) in our project of doing as much good as possible we should at the highest level be mostly guided by very long-run effects and (ii) this makes an actual difference for how we plan and prioritize at intermediate levels.

Here I have a picture in mind that is roughly as follows:

  • Lowest level: Which among several available actions should I take right now?
  • Intermediate levels: 
    • What are the "methods" and inputs (quantitative models, heuristics, intuitions, etc.) I should use when thinking about the lowest level?
    • What systems, structures, and incentives should we put in place to "optimize" which lowest-level decision situations I and other agents find ourselves in in the first place?
    • How do I in turn best think about which methods, systems, structures, etc. to use for answering these intermediate-level questions?
    • Etc.
  • Highest level: How should I ultimately evaluate the intermediate levels?

So the following would be one instance of part (i) of my favored CL claim: When deciding whether to use cluster thinking or sequence thinking for a decision, we should aim to choose whichever type of thinking best helps us find the option with most valuable long-run effects. For this it is not required that I make the choice between sequence thinking or cluster thinking by an expected-value calculation, or indeed any direct appeal to any long-run effects. But, ultimately, if I think that, say, cluster thinking is superior to sequence thinking for the matter at hand, then I do so because I think this will lead to the best long-run consequences.

And these would be instances of part (ii): That often we should decide primarily based on the proxy of "what does most reduce existential risk?"; that it seems good to increase the "representation" of future generations in various political contexts; etc.

Regarding what the case for this version of CL rests on:

  • For part (i), I think it's largely a matter of ethics/philosophy, plus some high-level empirical claims about the world (the future being big etc.). Overall very similar to the case for AL. I think the ethics part is less in need of "cluster thinking", "robustness" etc. And that the empirical part is, in fact, quite "robustly" supported.
  • [This point made me most want to push back against your initial claim about CL:] For part (ii), I think there are several examples of proxy goals, methods, interventions, etc., that are commonly pursued by longtermists which have a somewhat robust case behind them that does not just rely on an expected value estimate based on a single quantitative model. For instance, avoiding extinction seems very important from a variety of moral perspectives as well as common sense, there are historical precedents of research and advocacy at least partly motivated by this goal (e.g. nuclear winter, asteroid detection, perhaps even significant parts of environmentalism), there is a robust case for several risks longtermists commonly worry about (including AI), etc. More broadly, conversations involving explicit expected value estimates, quantitative models, etc. are only a fraction of the longtermist conversations I'm seeing. (If anything I might think that longtermists, at least in some contexts, make too little use of these tools.) E.g. look at the frontpage of LessWrong, or their curated content. I'm certainly not among the biggest fans of LessWrong or the rationality community, but I think it would be fairly inaccurate to say that a lot of what is happening there is people making explicit expected value estimates. Ditto for longtermist content featured in the EA Newsletter, etc. etc. I struggle to think of any example I've seen where a longtermist has made an important decision based just on a single EV estimate.

 

Rereading your initial comment introducing AL and CL, I'm less sure if by CL you had in mind something similar to what I'm defending above. There certainly are other readings that seem to hinge more on explicit EV reasoning or that are just absurd, e.g. "CL = never explicitly reason about anything happening in the next 100 years". However, I'm less interested in these versions since they to me would seem to be a poor description of how longtermists actually reason and act in practice.

Not sure this "many weak arguments" way of looking at it is quite correct either. I had a quick look at the arguments given against longtermism and there are not that many of them. Maybe a better point is that there are many avenues and approaches that remain unexplored.

Very balanced assessment! Nicely done :) 

Possible misconception: “Greaves and MacAskill say we can ignore short-term effects. That means longtermists will never reduce current suffering. This seems repugnant.”

'This seems repugnant' doesn't seem like a justifiable objection to me, so not something an advocate of SLT should be obliged to take on directly.

If I said "this doctor's theory of liver deterioration suggests that I should reduce my alcohol intake, which seems repugnant to me", you would not feel compelled to respond that "actually, some of the things the doctor is advocating could allow you to drink more alcohol".

(I suspect that beyond the "this seems repugnant" there is a more coherent critique -- and that is the critique we should focus on.)

In response, you stated:

However, it is worth noting that it is possible that longtermists may end up reducing suffering today as a by-product of trying to improve the far future.

It might be worth re-stating this. Thinking about objective functions and constraints, either

R1. SLT implies that resources should be devoted in a way that does less to reduce current suffering (i.e., implies more current suffering than absent SLT) or

R2. SLT does not change our objective function, or it coincidentally implies an allocation that has no differential effect on current suffering (a 'measure zero', i.e., coincidental result)

R3. SLT implies that resources should be devoted in a way that leads to less current suffering

R3 seems unlikely to be the case, particularly if we imagine bounds on altruistic capacity. And, if there were an approach that could use the same resources to reduce current suffering even more, it already should have been chosen in the absence of SLT.

If R2 is the case then SLT is not important for our resource decision so we can ignore it.

If R1 holds (which seems most likely to me), then following SLT does imply an increase in current suffering, and we are back to the main objection.
