This is a crosspost from my blog.

In this post, I’m going to offer a series of criticisms of William MacAskill’s What We Owe The Future (or WWOTF for short). It has been almost four years since the book was released, but I still think it’s worth criticizing since it remains a foundational text for longtermism. (For those who don’t know, longtermism is the view that working to positively shape the far future is a key moral priority of our time.)

In the first section of this post, I will offer criticisms of WWOTF as a justification for a worldview. In the second section, I will offer criticisms of WWOTF as a piece of persuasion for a general audience. Then, in the final section, I will offer some additional thoughts.

(As an aside, since it has been so long since WWOTF was released, MacAskill and many of his fellow researchers have changed some of their views on longtermism. Where I know of something MacAskill has written or said since the book’s release that differs from its claims, I will point it out.)

Criticisms of WWOTF As a Justification For a Worldview

General Criticisms

We may not be able to predictably influence the far future.

A core assumption of WWOTF is that we live in a world where we are able to predictably influence the far future. This seems plausible, but there are a few reasons to think it might not be the case.

First, we may not have enough information to make sufficiently accurate predictions about how the future will go. The most basic reason to think this is that, historically, humans have been very bad at predicting the future. Given that track record, it would not be surprising if an omniscient observer considered our current predictions to be wildly off base.

Second, we may have enough information to make sufficiently accurate predictions about the future, but be unable to properly do so. It could be that there is simply too much information for us to keep track of, so we can’t make predictions that are particularly useful. It could also be that the ways we use past information to make predictions about the future are deeply flawed.

Third, we may not have any actions available to us that can influence the far future. If, for instance, we live in a world where the far future converges to a single outcome no matter what we do, our actions would not be able to influence it. This could be the case if, for example, some technology that humanity develops almost entirely decides how humanity’s future goes. It could be that this technology causes us to go extinct, or that it shapes our future in some extremely path-dependent way that we have no influence over. (MacAskill and Guive Assadi have since addressed this to some degree in “Beyond Existential Risk.”)

Lastly, we may have actions available to us that can influence the far future, but we may not be able to determine what they are due to the butterfly effect. If we live in a world where very slight changes to starting conditions result in radically different outcomes, it could be that no one can predictably influence the far future, because the actual effects of our actions are radically different from what anyone could reasonably expect. One reason to think this could be the case is that the social and technological changes of the past have been radically unpredictable, and there is little reason to expect the future to be different.
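To make the butterfly-effect worry concrete, here is a minimal sketch (my own toy illustration, not anything from the book) using the logistic map, a standard example of how tiny differences in starting conditions compound into completely different outcomes:

```python
# Toy illustration of sensitivity to initial conditions (the butterfly effect).
# The logistic map with r = 4 is chaotic: two trajectories that start one
# part in a billion apart become completely uncorrelated within ~40 steps.

def logistic_map(x0: float, r: float = 4.0, steps: int = 40) -> float:
    x = x0
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

a = logistic_map(0.300000000)
b = logistic_map(0.300000001)  # perturbed by one part in a billion
print(a, b, abs(a - b))        # the two end states are far apart
```

If the long-run effects of our actions behave anything like this, forecasting them would be hopeless even with excellent models of the present.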

There are no “robustly good” longtermist actions available to us.

MacAskill argues that certain actions available to longtermists are “robustly good,” which I take to mean that, in expectation, their effects are almost entirely positive. I think this is not true of any of the longtermist interventions MacAskill mentions, since it’s easy to think of plausible ways these interventions could actually have tremendously negative effects.

For instance, early in the book, MacAskill writes “Decarbonisation is a proof of concept for longtermism. Clean energy innovation is so robustly good, and there is so much still to do in that area that I see it as a baseline longtermist activity against which other potential actions can be compared. It sets a high bar.” I think this is an extreme claim because it seems plausible to me that the development of renewable energy could be seriously harmful. For instance, by creating renewable energy sources, we may enable totalitarian regimes to indefinitely lock in surveillance states, because they will no longer have to deal with the risk of running out of the energy necessary to maintain such a state. Additionally, if it turns out that climate change reduces wild animal biomass and wild animals experience significant suffering, then clean energy innovation could be bad: by reducing the rate of climate change, it would preserve wild animal populations and hence increase wild animal suffering.

We should expect future humans to be significantly different from us.

In the chapters on values, stagnation, and whether humanity will have a net positive impact on the future, MacAskill relies on the assumption that future humans will have a nature similar to ours. Not only do I think this assumption is unlikely to hold, I think we should actively expect future humans to be different from us. Future humans could diverge from us through natural selection, genetic editing, eugenics, technological change, or technological enhancement. Some of these mechanisms could even have persistent, path-dependent effects that cause future humans to be drastically different from how we are.

Chapter Specific Criticisms

The significance, contingency, persistence framework has little to do with the rest of the book.

In the chapter “You Can Shape the Course of History,” MacAskill introduces the significance, contingency, persistence framework, which holds that the importance of an action can be assessed by how much of an effect it has, whether that effect would have occurred otherwise, and how long the effect lasts. This would be an interesting idea if the book were merely about shaping the future in general, but the book is instead about trying to shape the future “millions, billions, or even trillions” of years from now. As such, the framework seems disconnected from the book’s core ideas: MacAskill’s own arguments imply that the persistence of any action he recommends should always be set to the permanent end of civilization, leaving that term with no comparative work to do.
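To spell the framework out, here is a minimal sketch of one natural multiplicative reading of it (my gloss for exposition; the book presents the framework informally rather than as a formula):

```python
# A rough multiplicative reading of significance/contingency/persistence.
# All names and numbers here are illustrative assumptions, not MacAskill's.

def long_run_value(significance: float, contingency: float, persistence: float) -> float:
    """significance: average value added per year while the change holds;
    contingency: probability the change would NOT have happened anyway;
    persistence: how many years the change lasts."""
    return significance * contingency * persistence

# The criticism above, in these terms: if longtermist reasoning always sets
# persistence to "until the permanent end of civilization" (some fixed huge
# number), the persistence term cancels out of every comparison.
print(long_run_value(significance=1.0, contingency=0.5, persistence=1e9))
```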

The use of expected value promotes strong longtermism, not longtermism.

In the chapter “You Can Shape the Course of History,” MacAskill promotes “expected value theory,” under which individuals assign probabilities to outcomes and then use those probabilities to determine which actions to take. The problem is that, if we are merely calculating expected value, we should believe in strong longtermism, the view that we should be entirely focused on shaping the far future, not longtermism, the view that shaping the far future should be just one of our concerns. This is because, at least according to MacAskill’s logic, we should expect the value of our actions over the long term to be vastly higher than their effects over the near term.
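Here is a back-of-envelope sketch of why the expected-value math pushes toward strong longtermism (all numbers are mine, purely for illustration):

```python
# If far-future value dwarfs near-term value in expectation, then under pure
# expected-value reasoning the far future dominates any comparison, which is
# strong longtermism rather than mere longtermism. Illustrative numbers only.

near_term_value = 1.0    # stipulated value of a near-term intervention
far_future_value = 1e9   # assumed vastly larger value if the far future goes well
p_influence = 1e-4       # even a tiny probability of influencing the far future...

ev_near = near_term_value
ev_far = p_influence * far_future_value
print(ev_far / ev_near)  # ...leaves the far-future option 100,000x better
```

Under numbers like these, a pure expected-value maximizer would allocate everything to the far future; nothing in the framework itself preserves near-term causes as merely “one of” our concerns.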

Trying to positively shape humanity’s value could backfire.

At the end of the chapter “Moral Change,” MacAskill writes, “when trying to improve society’s values, we should focus on promoting more abstract or general moral principles or, when promoting particular moral actions, tie them into a more general worldview. This helps ensure that these moral changes stay relevant and robustly positive into the future” (emphasis mine).

I think MacAskill overstates the benefit of spreading positive values; it seems entirely plausible to me that promoting them could backfire. For one, the values one supports could end up promoting a slightly different but far worse set of values, if variants of one’s views spread more effectively than the originals. Similarly, by promoting a set of values, one could accidentally provoke pushback that leaves those values even less favored than they were originally. Lastly, even values that seem positive now may turn out to be grossly immoral.

AGI might not cause lock-in.

In the chapter “Value Lock-in,” MacAskill seems to imply that AGI would almost certainly cause value lock-in. I think it’s worth pointing out that it’s plausible this would not occur. Most people don’t want the future to be determined by a small group, so if people started using AGI in this way, I think we could expect widespread pushback across the globe. Additionally, the issue could be prevented before it ever arises: the world has developed norms against human cloning that have resulted in laws against it, and if society develops norms against using AGI to create lock-in, it could likewise create laws against it before the technology is even developed. (For context, Forethought, the research institute MacAskill currently works for, has a lot of ongoing work related to this idea.)

MacAskill’s arguments don’t support a very high risk of extinction.

In the chapter “Extinction,” MacAskill argues that man-made pandemics and great power wars could pose a significant risk of extinction to humanity this century. Although his arguments suggest that these events could cause tremendous amounts of death, it seems to me that we should still think they’re unlikely to cause extinction, because at least some people could survive them by living in bunkers for an extended period, and extinction requires that no one survives.

We should probably expect civilization to recover from collapse.

I have not looked into civilizational collapse in any depth, but I find it extremely unlikely that humans would be unable to recover from collapse. Considering how intelligent and creative our species is, we should expect that, even in very dire conditions, we would be able to rebuild civilization.

The argument for stagnation is undersupported.

In the chapter “Stagnation,” MacAskill argues that we need to ensure continuous technological development, because if stagnation occurs it could leave humanity at a heightened risk of extinction for an extended duration. I find this argument quite weak, for multiple reasons. First, the level of technology at which we stagnate might carry a very low risk of extinction by default. Second, the argument presupposes that there is some level of technological development at which humanity’s risk of extinction will be extremely low. Lastly, the argument ignores the possibility that technological development could increase our risk of extinction rather than decrease it: as technology advances, we may develop more ways to destroy ourselves than ways to safeguard ourselves from destruction, and if this imbalance becomes too extreme, extinction could be practically guaranteed. (For context, MacAskill addresses the second criticism in a comment on the EA Forum.)
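The stagnation worry rests on a simple compounding calculation; here is a minimal sketch of it (my illustrative numbers, not MacAskill’s):

```python
# If stagnation exposes humanity to a constant per-century extinction risk,
# survival probability decays exponentially with how long stagnation lasts.
# The risks and durations below are assumptions for illustration only.

def survival_probability(risk_per_century: float, centuries: int) -> float:
    return (1.0 - risk_per_century) ** centuries

# A modest 1% risk per century compounds badly over a 500-century plateau:
print(survival_probability(0.01, 500))    # ~0.007, i.e. under a 1% chance of surviving
# But if the risk at the stagnation level were already tiny, the worry weakens:
print(survival_probability(0.0001, 500))  # ~0.95
```

The same arithmetic frames the third criticism: if further development raises the per-century risk rather than lowering it, compounding works against continued development too.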

The argument that humanity will have a net positive impact is undersupported.

In the chapter “Will the Future Be Good or Bad?,” MacAskill argues that humanity will have a net positive impact on the world. I agree with this claim, but I find that his argument doesn’t sufficiently support it, for two reasons.

First, MacAskill spends most of the chapter discussing the historical and present welfare of animals and humans on Earth. This seems, at least to me, mostly irrelevant to the question of the welfare of future beings. For one thing, if humanity creates an interstellar civilization, as MacAskill seems to believe we might, then we should be concerned with what the welfare of beings around distant stars will be like, not what the welfare of beings today is like. For another, there are many aspects of the current state of welfare in the world that we should expect to change. For instance, I think we should almost certainly expect farm animal suffering to end if civilization continues to progress, and I think we should expect human welfare to either increase or decrease, but certainly not to stay the same.

Second, MacAskill argues that we should expect humanity to have a net positive impact because we should expect the very best futures to be more likely than the very worst futures. I find this argument compelling, but for it to work, MacAskill needs to justify why we should think the most extreme outcomes will account for most of the expected value. (For context, MacAskill expands upon these ideas significantly in the first three articles of the Better Futures series.)

Criticisms of WWOTF As a Piece of Persuasion For a General Audience

Comparing distance in time to distance in space is not convincing.

In the chapter “The Case for Longtermism,” MacAskill writes, “Distance in time is like distance in space. People matter even if they live thousands of miles away. Likewise, they matter even if they live thousands of years hence.” I think this argument is much less convincing than MacAskill believes, since most people have a strong intuition that their obligations to others weaken the farther away those others are.

The focus on the far future is unpersuasive.

Throughout the book, MacAskill emphasizes that our actions matter because they could influence the future “millions, billions, or even trillions of years” from now. I find this interesting to think about, but in general it weakens his argument, since the common view is that we simply can’t reliably impact the future on time scales this vast. Moreover, invoking the far future reminds readers of just how unpredictable it is.

Additional Notes

Another book on longtermism could be written.

It feels to me like WWOTF explores a series of interesting ideas in the space of longtermism, but that it neither offers a sufficiently broad view of how we can influence the future nor sufficiently addresses the epistemic challenge that longtermists face (not that this is necessarily what MacAskill was intending to do). As such, it feels like there is still another book on longtermism to be written, and that such a book could become even more foundational than MacAskill’s text.

Longtermists should mention concrete interventions more often.

Considering that longtermism is an offshoot of effective altruism, a highly pragmatic philosophy, I think longtermists should mention concrete interventions more often. I noticed this gap in WWOTF, and I think it’s also quite noticeable in Forethought’s current work.
