
This post is a crosspost from my blog.

In this post, I’m going to offer my thoughts on William MacAskill’s “Better Futures” essay series (three of the essays are co-authored with other authors). I will begin with my general thoughts on the series, then give thoughts on specific essays, then share my takeaways, and then conclude. (For context: this post is somewhat dry and assumes you have already read the series.)

General Thoughts

  1. MacAskill’s central argument is that, by default, we should expect the future to have far less value than it could, and that there are actions available to us that can increase this value. I think his argument is surprisingly robust, but the series often speculates excessively about how the future will go and overstates the extent to which we can predict the effects of our actions. For instance, most ideas in the series concern how we should try to affect a future after AGI is developed and triggers an intelligence explosion, but it’s worth pointing out that no one really knows what a post-AGI future will look like. Given this, I think we should be quite skeptical that we can predict how our actions will actually shape the future. If we pass space treaties now, that could be a bad idea if ASI would eventually have enabled us to write better treaties down the line. Similarly, if we ensure democracies are created but that enables immediate space colonization, that too could lead to a serious loss of value.
  2. I also think MacAskill understates the implications of his argument. On his reasoning, you could conclude that work on flourishing is astronomically more important than work on survival. For instance, if the value of the future is really a product of factors and there are hundreds of these, then you could reasonably expect the average future to have only around 10^-20 of the value of a near-best future (see the sketch below). Given this, even if work on flourishing is extremely intractable, it’s probably more important than work on survival.
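
As a rough illustration of that arithmetic (a toy sketch with numbers I’ve chosen for illustration, not figures from the series): if the future’s value is the product of 100 independent factors and the average score on each is 0.63, the compounded shortfall is about twenty orders of magnitude.

```python
import math

# Toy version of the multiplicative model (my own construction, not
# MacAskill's actual numbers): the future's value is the product of n
# independent factors, each in (0, 1], where 1 means that dimension of
# the future goes as well as it possibly could.
n = 100              # assumed number of factors
mean_factor = 0.63   # assumed average score per factor: decent, not perfect

# Value relative to a near-best future (all factors ~ 1). With
# independent factors, the expectation of the product is the product
# of the means.
relative_value = mean_factor ** n
print(f"average future / near-best future ~ 10^{math.log10(relative_value):.0f}")
# prints: average future / near-best future ~ 10^-20
```

The point is just that, with hundreds of factors, even mild per-factor shortfalls compound into an astronomical gap, which is what makes flourishing work look so large in expectation on this model.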

No Easy Eutopia

  1. MacAskill and Moorhouse have us consider a model of the far future in which its value is the product of a series of factors, each taking a value greater than zero and at most one. I think this is a major mistake because we could easily produce futures of negative value. If humanity made a grave moral error, such as spreading suffering wildlife across the universe, failing to give digital beings moral consideration, or engaging in excessive punishment of wrongdoers, the result could be a future of net negative value (see the sketch below). Given this consideration, certain ways of trying to improve the far future, such as ones that involve increasing humanity’s expected power, could be actively harmful rather than beneficial.
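
To make the objection concrete, here is a signed variant of the model (again my own sketch, not the authors’): restricting every factor to (0, 1] builds in the assumption that the worst possible future is merely worthless, whereas letting even one factor go negative allows the whole product, and hence the future, to be net negative.

```python
# Signed variant of the toy model (my own construction): one factor is
# negative, representing a grave moral error (e.g. spreading suffering,
# or denying digital beings moral consideration) rather than a dimension
# that merely falls short of the ideal.
factors = [0.8] * 99 + [-0.5]   # 99 decent dimensions, one catastrophic one

value = 1.0
for f in factors:
    value *= f

print(value)  # a small negative number: the future is net bad, not just suboptimal
```

A signed multiplicative model has quirks of its own (two negative factors would implausibly cancel out), but it shows that the restriction to (0, 1] is doing real work in the authors’ framework.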

Convergence and Compromise

  1. The authors should have foregrounded their primary assumption: that if moral realism is true and people are able to come to correct moral views, then reflection is the mechanism by which they do so. This is not obvious to me, and I think it should be justified. It’s also worth pointing out that, if you don’t think reflection is this mechanism, then you should promote whatever mechanism you think it is.
  2. I think this essay is really an argument against the view that the far future will be positive in expectation at all. I walked away from reading it thinking: it’s pretty unclear how much value the future will hold, since determining this involves a wide range of considerations, such as how power will be distributed in the future, how technology will affect the distribution of power and human preferences, and the extent to which we should expect some mechanism to lead people toward more correct moral views, if such views exist at all.
  3. This essay has an interesting implication:
    1. If you think the future will almost certainly converge to the correct moral views, you should work on survival.
    2. If you think that the future will almost certainly not converge to the correct moral views, you should try to ensure that your views are the ones that determine the future.
    3. If you think that we are likely to be somewhere in the middle, you should try to ensure that future people engage in more reflection so that they come to the correct moral views.
    4. And, whichever of these views you hold, you should make sure that future people are actually able to create a positive future (if, that is, you expect the future to be net positive).
  4. It’s also worth pointing out that, in order to think moral trade is important, you must believe that only some people will have correct moral views and that those people will have significant power, which seems like a very specific (and perhaps unlikely) constellation of conditions.

Persistent Path Dependence

  1. In this essay, MacAskill argues that events occurring this century will have persistent path dependence, which is to say that their effects will be “(i) path dependent; (ii) extremely persistent (comparable to the persistence of extinction); and (iii) predictably influence the expected value of the future.” He gives surprisingly robust arguments for the first two claims, but he fails to give any evidence for the third. This matters because it seems entirely plausible to me that merely narrowing the variation of possible futures could have minimal effect on the actual expected value of the far future. For instance, making it possible for people to become immortal certainly shapes the future, but it’s unclear whether the effect would be negative or positive. It could be, for instance, that immortals are more likely to learn from their mistakes than mortals are, which could give them vastly more wisdom in the long term.
  2. Four types of persistent path dependence that MacAskill misses and that I would mention:
    1. People could permanently alter their own environment or the environment of others by narrowly controlling what information they gain access to.
    2. Those in power could ban technological development, which would reduce disruption since technology often causes social and political changes.
    3. Even if people modify future generations only slightly over reasonably short time periods, this could result in humanity being radically and unrecognizably transformed if such changes compound in particular directions.
    4. The alignment of agents that assist in decision-making could significantly determine the future: in unpredictable ways if people are able to align their own agents, and in highly predictable ways if a single group decides how all agents are permanently aligned.
  3. In the essay, MacAskill mentions two forms of persistent path dependence: events that allow actors to gain “greater control over the future” and events that allow them to face “less disruption.” A third category worth considering would be events that significantly determine the future in unknown ways, such as point 4 in the list above. Working to prevent these could be important if you expect the future to be net positive.

How to Make The Far Future Better

  1. Overall, I think that MacAskill’s arguments are surprisingly robust. Many of these interventions may fail to have the desired effect, or any effect at all, but on average we should expect them to make the future better if significant persistent path dependence occurs this century.
  2. At the end, MacAskill sketches a brief research agenda. The major thing I think he misses is that the primary focus of Better Futures research should currently be identifying and comparing cause areas, so that we can find and act upon the most pressing interventions before it’s too late.
  3. Additionally, many of his proposed research questions seem essentially impossible to answer, such as: “How likely is it that a future society gets many things right but some crucial things wrong? (For example, how plausible is a future society that is generally eutopian, except that it gets the ethics of digital beings wrong?)” or “Should we expect future decisions to be guided by ideology rather than self-interest, because, due to enormous wealth, future people will have satiated their self-interested preferences but not their ideological preferences?”
  4. In this essay, MacAskill presents tentative ideas for how to shape the far future, which he himself acknowledges could be misguided. Two interventions in particular seem worth flagging as potentially harmful.
    1. He mentions using AGI to help with moral reflection, but this could turn out to be a grave error if moral sense theory is true but AI lacks a moral sense.
    2. He also mentions focusing on aligning AI with the correct meta-ethical values and giving it strong moral character to guide people’s decision-making, but, if such an AI is slightly misaligned, this could be very harmful.
