This is a crosspost of Peter Singer's "The Hinge of History", published in Project Syndicate on October 8, 2021.


The dangers of treating extinction risk as humanity’s overriding concern should be obvious. Viewing current problems through the lens of existential risk to our species can shrink those problems to almost nothing, while justifying almost anything that increases our odds of surviving long enough to spread beyond Earth.

PRINCETON – Twelve years ago, during the International Year of Astronomy that marked the 400th anniversary of Galileo’s first use of a telescope, I wrote “The Value of a Pale Blue Dot” – a reflection on how astronomy has revealed a vast universe filled with an unimaginable number of stars, thus shrinking the significance of our sun and our planet. The “pale blue dot” refers to how the Earth appears in a 1990 photograph taken by the Voyager spacecraft as it reached the outer limits of our solar system. The essay suggests that the knowledge gained from astronomy “forces us to acknowledge that our place in the universe is not particularly significant.”

A recent blog post by Holden Karnofsky has led me to reconsider that thought. Karnofsky is co-CEO of Open Philanthropy, a foundation that researches the best opportunities for philanthropic grant-making, and publishes the reasons for its decisions. Thinking about the long-term significance of today’s philanthropic decisions is therefore part of Karnofsky’s role. He is thinking very long term indeed.

Karnofsky points out that we could be living “at the very beginning of the tiny sliver of time during which the galaxy goes from nearly lifeless to largely populated.” That “tiny sliver of time” began, we might say, with the first use of tools by our ancestors, around three million years ago. It will end when our descendants – who might be digital minds, rather than biological organisms – inhabit the entire galaxy, perhaps ushering in a civilization consisting of an enormous number of conscious beings that would last for tens of billions of years. There is a good chance, Karnofsky argues, that this process of populating the galaxy will begin during this century. By 2100, we could develop the technology to construct self-sufficient settlements on other planets.

This thought echoes one expressed in 2011 by the late philosopher Derek Parfit, who wrote, near the end of the second volume of On What Matters: “We live during the hinge of history.” Like Karnofsky, Parfit was thinking of the arrival of technologies that, if used wisely, would enable our species to survive “its most dangerous and decisive period,” and our descendants to spread through our galaxy. Parfit refers to “the next few centuries,” rather than just this one, as the time it may take before humans can live independently on other planets, but even that will be only a sliver of time compared to what is to come. Our most significant contribution to this development would be to ensure the survival of intelligent life on our planet.

Perhaps, though, the idea that we are essential to this process is merely the latest version of the self-important delusion that humans are the center of existence. Surely, in this vast universe, there must be other forms of intelligent life, and if we don’t populate the Milky Way galaxy, someone else will.

Yet, as the physicist Enrico Fermi once asked fellow scientists over lunch at Los Alamos National Laboratory, “Where is everybody?” He wasn’t commenting on empty tables in the lab’s dining room, but on the absence of any evidence of the existence of extraterrestrials. The thought behind that question is now known as the Fermi Paradox: if the universe is so stupendous, and has existed for 13.7 billion years, why haven’t other intelligent forms of life made contact?

Karnofsky draws on a 2018 paper by researchers at the University of Oxford’s Future of Humanity Institute to suggest that the most likely answer is that intelligent life is extremely rare. It is so rare that we may be the only intelligent beings in our galaxy, and perhaps in the much larger Virgo supercluster to which our galaxy belongs.

This is what Karnofsky means when he says that the future of humanity is “wild.” The idea that we, the inhabitants of this pale blue dot at this particular moment, are making choices that will determine whether billions of stars are populated, for billions of years, does seem wild. But it could be true. Granting that, however, what should we do about it?

Karnofsky does not draw any ethical conclusions from his speculations, other than advocating “seriousness about the enormous potential stakes.” But, as Phil Torres has pointed out, viewing current problems – other than our species’ extinction – through the lens of “longtermism” and “existential risk” can shrink those problems to almost nothing, while providing a rationale for doing almost anything to increase our odds of surviving long enough to spread beyond Earth. Marx’s vision of communism as the goal of all human history provided Lenin and Stalin with a justification for their crimes, and the goal of a “Thousand-Year Reich” was, in the eyes of the Nazis, sufficient reason for exterminating or enslaving those deemed racially inferior.

I am not suggesting that any present exponents of the hinge of history idea would countenance atrocities. But then, Marx, too, never contemplated that a regime governing in his name would terrorize its people. When taking steps to reduce the risk that we will become extinct, we should focus on means that also further the interests of present and near-future people. If we are at the hinge of history, enabling people to escape poverty and get an education is as likely to move things in the right direction as almost anything else we might do; and if we are not at that critical point, it will have been a good thing to do anyway.

Comments

mic

I was surprised to read this from Peter Singer, a thoroughgoing utilitarian whose beliefs I often see as a little extreme in how EA they are.

I don't particularly agree with this conclusion:

> When taking steps to reduce the risk that we will become extinct, we should focus on means that also further the interests of present and near-future people. If we are at the hinge of history, enabling people to escape poverty and get an education is as likely to move things in the right direction as almost anything else we might do

It seems extremely unlikely to me that global poverty is just as good at reducing existential risk as things that are more targeted, such as AI safety research. At least, Singer's point requires significant elaboration on why he believes this to be the case. MichaelStJules writes more about this in his comment here.

Nevertheless, I found it valuable to see how Peter Singer views longtermism, which can provide a window into future public perceptions.

Yaroslav Elistratov writes more on Peter Singer's thoughts on existential risk here.

I agree with your assessment. It is interesting to note that Singer's comments are in response to Holden, who used to hold a similar view but no longer does (I believe).

The other part I found surprising was Singer's comparison of longtermism with past harmful ideologies. At least in principle, I do think that, when evaluating moral views, we should take into consideration not only the contents of those views but also the consequences of publicizing them. But:

  1. These two types of evaluation should be clearly distinguished and done separately, both for conceptual clarity and because they may require different responses. If the problem with a view is not that it is false but that it is dangerous, the appropriate response is probably not to reject the view, but instead to be strategic about how one discusses it publicly (e.g. give preference to less public contexts, frame the discussion in ways that reduce the view's dangers, etc.).
  2. As Richard Chappell pointed out recently, if one is going to consider the consequences of publicizing a view when evaluating it, one should also consider the consequences of publicizing objections to that view. And it seems like objections of the form "we should reject X because publicizing X will have bad consequences" have often had bad consequences historically.
  3. The moral evaluation of the consequences expected to result from public discussion of a view should not beg the question against the view under consideration! Longtermists believe that people in the future, no matter how removed from us, are moral patients whom we should help. So in evaluating longtermism, one cannot ignore that, from a longtermist perspective, publicly demonizing this view—by comparing it to the Third Reich, Soviet communism, or white supremacy—will likely have very bad consequences (e.g. by making society less willing to help far-future people). (Note that this is very different from the usual arguments for utilitarianism being self-effacing: those arguments purport to establish that publicizing utilitarianism has bad consequences, as evaluated by utilitarianism itself. Here, by contrast, a non-longtermist moral standard is assumed when evaluating the consequences of publicizing longtermism.)
  4. Picking reference classes is tricky. Perhaps it's plausible to put longtermism in the reference class of "utopian ideology with considerable abuse potential". But it also seems plausible to put longtermism in the reference class of "enlightened worldview that seeks to expand the circle of moral concern" (cf. Holden's "Radical empathy"). In considering the consequences of publicizing longtermism, it seems objectionable to highlight one reference class, which suggests bad consequences, and ignore the other reference class, which suggests good consequences.

> It seems extremely unlikely to me that global poverty is just as good at ...

Wealth inequality is an x-risk factor. See the HANDY model.

https://www.sciencedirect.com/science/article/pii/S0921800914000615

https://arxiv.org/pdf/1908.02870.pdf
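
For readers unfamiliar with it: HANDY (Human And Nature DYnamics, the first link above) is a four-variable ODE model in which elite and commoner populations draw on nature and accumulated wealth, and the inequality factor κ (how much more an elite consumes per capita) can push an otherwise sustainable society into collapse. Here's a minimal Python sketch of the dynamics, assuming forward-Euler integration; the parameter values are illustrative, loosely based on Motesharrei, Rivas & Kalnay (2014), so consult the paper for the canonical settings and scenarios:

```python
# Minimal sketch of the HANDY model (Motesharrei, Rivas & Kalnay 2014).
# State: xC (commoners), xE (elites), y (nature), w (accumulated wealth).
# Parameter values below are illustrative, not the paper's exact scenarios.

alpha_m, alpha_M = 1.0e-2, 7.0e-2   # minimum / famine-maximum death rates
beta_C = beta_E = 3.0e-2            # birth rates (commoners, elites)
s = 5.0e-4                          # subsistence consumption per capita
rho = 5.0e-3                        # threshold wealth per capita
gamma, lam = 1.0e-2, 100.0          # nature regeneration rate and capacity
delta = 6.67e-6                     # depletion (production) factor
kappa = 10.0                        # inequality factor: elite consumption multiple

def handy_step(xC, xE, y, w, dt=1.0):
    """Advance the four HANDY state variables by one Euler step (dt in years)."""
    w_th = rho * xC + kappa * rho * xE              # wealth threshold
    scarcity = min(1.0, w / w_th) if w_th > 0 else 1.0
    C_C = scarcity * s * xC                         # commoner consumption
    C_E = scarcity * kappa * s * xE                 # elite consumption
    # Death rates climb toward alpha_M as consumption falls below subsistence
    alpha_C = alpha_m + max(0.0, 1 - C_C / (s * xC)) * (alpha_M - alpha_m) if xC > 0 else alpha_m
    alpha_E = alpha_m + max(0.0, 1 - C_E / (s * xE)) * (alpha_M - alpha_m) if xE > 0 else alpha_m
    dxC = (beta_C - alpha_C) * xC                   # commoner population growth
    dxE = (beta_E - alpha_E) * xE                   # elite population growth
    dy = gamma * y * (lam - y) - delta * xC * y     # logistic regrowth minus depletion
    dw = delta * xC * y - C_C - C_E                 # production minus consumption
    return xC + dt * dxC, xE + dt * dxE, y + dt * dy, w + dt * dw

# Run a millennium and inspect the end state
xC, xE, y, w = 100.0, 25.0, lam, 0.0
for t in range(1000):
    xC, xE, y, w = handy_step(xC, xE, y, w)
print(f"after 1000 years: commoners={xC:.1f}, elites={xE:.1f}, nature={y:.1f}, wealth={w:.2f}")
```

In the paper, raising κ while holding everything else fixed is what moves the system from sustainable equilibria into collapse scenarios, which is the sense in which inequality itself acts as a risk factor.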

Maybe the solution is to institutionalize a sustainable system that is positive for all, one that both Singer and Karnofsky could endorse. Possibly, Peter Singer emphasizes ‘making sure that the future is good for individuals,’ a thought that Holden Karnofsky seeks to provoke[1] in more of the people whose interest was originally captured by high-tech solutions that benefit a few elites.

  1. ^

    Holden Karnofsky specifies the “appropriate reaction” to the most important century thesis as "... Oh ... wow ... I don't know what to say and I somewhat want to vomit ... I have to sit down and think about this one."

My interpretation of Peter Singer's thesis is that we should be extremely cautious about acting on a philosophy that claims that an issue is extremely important, since we should be mindful that such philosophies have been used to justify atrocities in the past. But I have two big objections to his thesis.

First, it actually matters whether the philosophy we are talking about is a good one. Singer provides a comparison to communism and Nazism, both of which were used to justify repression and genocide during the 20th century. But is either of these philosophies even theoretically valid, in the sense of being both truth-seeking and based on compassion? I'd argue no. And the fact that these philosophies are invalid was partly why people committed crimes in their name.

Second, this argument proves too much. We could have presented an identical argument to a young Peter Singer in the context of animal farming. "But Peter, if people realize just how many billions of animals are suffering, then this philosophy could be used to justify genocide!" Yet my guess is that Singer would not have been persuaded by that argument at the time, for an obvious reason.

Any moral philosophy which permits ranking issues by importance (and are there any which do not?) can be used to justify atrocities. The important thing is whether the practitioners of the philosophy strongly disavow anti-social or violent actions themselves. And there's abundant evidence that they do in this case, as I have not seen even a single prominent x-risk researcher publicly recommend that anyone commit violent acts of any kind.

I think some moral views, e.g. some rights-based ones or ones with strong deontological constraints, would pretty necessarily disavow atrocities on principle, not just for fairly contingent reasons based on anticipated consequences like (act) utilitarians would. Some such views could also still rank issues.

I basically agree with the rest.

I think failing to act can itself be atrocious. For example, the failure of rich nations to intervene in the Rwandan genocide was an atrocity. Further, I expect Peter Singer to agree that this was an atrocity. Therefore, I do not think that deontological commitments are sufficient to prevent oneself from being party to atrocities.

You could have deontological commitments to prevent atrocities, too, but with an overriding commitment that you shouldn't actively commit an atrocity, even in order to prevent a greater one. Or, something like a harm-minimizing consequentialism with deontological constraints against actively committing atrocities.

Of course, you still have to prioritize and can make mistakes, which means some atrocities may go ignored, but I think this takes away the intuitive repugnance and moral blameworthiness.
