
TLDR: Predictions about apocalyptic AI parallel both historical and contemporary Christian apocalyptic claims (which I take to be untrustworthy). Apocalyptic claims about climate change, by contrast, are not analogous to such religious claims. Therefore we should treat the apocalyptic claims of climate scientists as more credible than those of AI researchers, and as a result, EA should place climate change as a higher priority than AI alignment. This is not to say that AI isn’t a risk, nor that alignment shouldn’t be a priority.
 

Acknowledgments: My thanks to the 80,000 Hours podcast for sending me a copy of Toby Ord’s The Precipice, and to my religious studies professor for giving me feedback on this paper.
 

Epistemic Transparency: I am an undergraduate student (going into my final year) studying philosophy and religious studies. I have done an independent study and have just completed a summer research project on Existential Risk. This paper was originally written for one of my religious studies classes and was edited for submission to this contest.

I am extremely confident (95%) that predictions of apocalyptic AI parallel religious narratives, and that this should, at least to some degree, negatively affect the credibility of such claims. I am uncertain as to how much this should affect their credibility. I am personally extremely distrustful of anything that looks like a religious narrative. However, this is due to my own philosophical beliefs, and those with different views on the nature of religion are likely to have different opinions.

 

Introduction: 

 

Every prediction that the world would come to an end in some year before the current one has been incorrect. No one can ever look back on a history in which humanity has gone extinct; such an event makes the existence of such a person impossible. As a result, apocalyptic claims, i.e., claims about the nature, likelihood, and timeframe of the end of the world, have a unique epistemic status. Such claims are unverifiable, but not in the way that, for example, moral claims are. Rather, it is because we are human beings that information about the nature, likelihood, and timeframe of human extinction is unverifiable. Such an event would prevent us from reflecting upon it. This is a massive problem for Existential Risk Studies, and indeed for anyone who wishes to reduce the risk of human extinction, as an accurate risk assessment is necessary if organizing bodies are to effectively allocate resources to addressing threats to humanity. This is one of the many theses defended in Toby Ord’s book The Precipice, in which he gives his subjective probabilities for the chance of any given risk causing an existential catastrophe in the next 100 years. However, although I agree that a well-grounded risk assessment is necessary, I worry about the implicit assumptions that might bias such an assessment. Specifically, in the case of apocalyptic AI, it seems that implicit religious narratives might warp our understanding. In my view, this results in Ord overemphasizing the danger of AI relative to other risks, such as climate change. In another sphere, many Christian evangelicals predict that the world will soon come to an end, and as a result, are unconcerned with climate change. In this paper, I will compare the apocalyptic claims made by three groups: evangelical Christians, climate scientists, and AI researchers. 

This paper will begin by discussing both the debate in existential risk literature regarding techno-utopianism in the field and the debate in religious studies regarding whether transhumanism is or is not a religious movement. Section 1 will focus on showing 1) that the apocalyptic claims made by evangelical Christians are untrustworthy, while the apocalyptic claims made by climate scientists are trustworthy, and 2) that the apocalyptic claims made by AI researchers are more analogous to claims made by evangelicals than they are to the claims made by climate scientists. Section 2 will argue that the religious context in which claims about apocalyptic AI exist explains why predictions regarding the future of artificial intelligence have been consistently incorrect. Together these sections attempt to show that Ord’s apocalyptic claims about AI are more analogous to the apocalyptic claims made by some evangelicals than to the apocalyptic claims made by climate scientists. Finally, section 3 will provide a process-oriented account of apocalyptic AI and discuss how the broad adoption of artificial intelligence might make climate change more difficult to address. This is done in order to show that 1) discussion of apocalyptic AI can be disentangled from religious narratives and 2) that despite these narratives AI does pose a very real risk to humanity. 
 

Literature Review:

Existential Risk (also referred to as Global Catastrophic Risk) is the focus of a collection of nonprofits, think tanks, and research initiatives that aim to gain an accurate understanding of risks to humanity in order to prevent them from manifesting. These organizations are a natural extension of the Effective Altruism movement, and both groups share many of the same assumptions and thinkers. Both Ord and Bostrom are central figures within Effective Altruism and Existential Risk Studies.

This paper attempts to bridge the gap between critiques of Existential Risk Studies and the discourse around treating transhumanism as a religious movement. It is both a critique and a defense of Bostrom and Ord. While many critiques of the techno-utopian elements of the field target their entire conceptual framework, i.e., the combination of utilitarianism, long-termism, and transhumanism, I aim to show that these issues are rooted in transhumanism rather than in the broader methodological assumptions. 

There is significant disagreement within the literature as to how existential risk assessment should be done and how much weight such assessments should be given. On one side of the argument, techno-utopians such as Nick Bostrom and Toby Ord are focused on the threat of future technologies. While at the moment their techno-utopian approach is dominant within Existential Risk Studies, this could change, as the techno-utopian elements of the field have recently started to be held up to scrutiny. Some critics, such as Carla Zoe Cremer and Luke Kemp in their paper “Democratizing Risks: In Search of a Methodology to Study Existential Risk,” focus on how the dominance of utilitarianism, which is a non-representative moral view, might unduly bias the analysis of Existential Risk Studies, and as a result, make its risk assessments untrustworthy. Additionally, there is a concern about the influence that the disproportionate prominence of those with hegemonic identities might have on the field. In their paper “Worlding beyond ‘the’ ‘End’ of ‘the World’,” Audra Mitchell and Aadita Chaudhury discuss how such biases might undermine the otherwise benevolent intentions of the field. Many of these critiques seek to attack the entire framework employed by the techno-utopian elements of the field, and while they offer substantive critiques, it is unclear how the field could properly operate without its consequentialist assumptions.

A separate but tangentially related discourse surrounds whether or not transhumanism should be considered a religious movement. Authors like Robert M. Geraci argue that transhumanism should be understood as a religious movement due to its historical connections to apocalyptic Christianity. On the other side of this debate, figures like Nick Bostrom argue that the scientific or philosophical expertise of many transhumanists means that it is, by definition, not a religious movement. 

 

Section 1: How to Assess Apocalyptic Claims
 

In The Precipice, Ord gives an outline of the risk landscape and gives rough estimates of what he takes to be the likelihood of each risk leading to an existential catastrophe in the next 100 years. This begins in chapter three, where he discusses the natural risks that could lead to human extinction. Natural risks have existed for a long time, and so one can study the frequency at which they have historically taken place. For example, the fossil record can be used to create a rough estimate of the frequency at which a supervolcanic eruption capable of causing a mass extinction event has occurred. I do not take this part of the text to be particularly contentious because 1) there is significantly more scientific consensus regarding how likely these risks are, 2) these risks are not particularly likely when compared to anthropogenic risks (i.e., risks of human origin), and 3) these risks are of little relevance to what actions we should take because, at the moment, we cannot do much to meaningfully affect them. Ord’s assessment becomes much more uncertain when he begins his discussion of anthropogenic risks. These risks differ from natural risks because 1) they are relatively new, and as such, there is little relevant data that can be used to assess them, and 2) unlike natural risks, assessments of anthropogenic risks are of massive political importance, as our political and economic systems are both the initial cause of these risks and a necessary part of any path towards lowering them. As we get into Ord’s discussion of anthropogenic risk, his claims become increasingly contentious.

Ord focuses his discussion on which risks are likely to lead to human extinction in the next 100 years. This approach is not without its merits. As Ord says, “risks that strike later can be dealt with later, while those striking sooner cannot” (Ord 306). This matches a common-sense approach to risk management; it lines up, for example, with the type of decision one might expect people to make about their health. (The comedian Gabriel Iglesias, discussing switching to a diet higher in cholesterol after developing type-2 diabetes, said that “Cholesterol’s gonna take 10 years to kill me, while diabetes is gonna kill me in 2. Right now I’m winning by 8.”) However, to be effectively applied, this principle requires an accurate understanding of the particular risks as well as how soon they might strike. This is particularly difficult for risks that have delayed effects, such as climate change, as it is difficult to know exactly what constitutes a point of no return (at which point humanity’s extinction is guaranteed), and how close such a point might be (Ord 280). Ord sets his time horizon to the next 100 years largely because of the threat of unaligned artificial intelligence, to which he gives a 10% chance of causing an existential catastrophe in the next century. In the scenarios that Ord imagines, the AI’s superior intellect results in humanity losing control of our destiny and becoming just like any other animal (239). Consequently, Ord views AI as being significantly more dangerous than climate change, which he believes has about a 0.1% chance of causing an existential catastrophe in the next century (279). These theoretical risks ultimately play the role of downplaying the climate crisis: despite Ord saying that all risks are deserving of attention, artificial intelligence, being in his view by far the most immediate threat to humanity, takes center stage. 

Belief in a sooner, more likely end-of-the-world scenario leading to a downplaying of the climate crisis is not unique to Ord. A 2014 poll showed that only 28% of white evangelicals believed that human activity was responsible for climate change, as opposed to 50% of the general US population (Pew). While not all evangelicals are apocalyptic, for those that are, apocalypticism justifies a lack of concern regarding the climate crisis. As Hillary Scanlon puts it, believing “that the end times are near” (Scanlon 11), “evangelicals have shorter sociotropic time horizons, which makes them less likely to demonstrate concern for the environment and for policies that would address climate change” (Scanlon 12). In both the case of Ord and the case of white American evangelicals, the position is internally consistent: there is little reason to care about historical processes that will become irrelevant in the next 100 years. 

Predictions that the world will soon come to an end, taken as a general category, necessarily do not have a great track record. As a result, unless a particular group can 1) draw a clear distinction between their prediction and other predictions of the end times and 2) show why this distinction gives them the epistemic credibility necessary to make an apocalyptic claim, we should treat them as epistemically untrustworthy. The apocalyptic claims made by some Christian evangelicals are not meaningfully distinct from the long list of apocalyptic claims that have yielded false predictions. Such claims are generally made on biblical evidence, which lacks precedent for providing accurate predictions (Scanlon 13). On the other hand, claims made by climate scientists regarding the current climate crisis are both credible and apocalyptic. When the UN Intergovernmental Panel on Climate Change said that “if human-caused global warming isn’t limited to just another couple tenths of a degree, an Earth now struck regularly by deadly heat, fires, floods and drought in future decades will degrade in 127 ways with some being ‘potentially irreversible’” (Borenstein), this claim should be treated as trustworthy. Climate scientists have made falsifiable predictions that have been consistently verified. An assessment of peer-reviewed climate models published between 1970 and 2000 notes that “all of the 17 models correctly projected global warming (as opposed to either no warming or even cooling)” and that most of the model projections (10 out of 17) produced global average surface warming projections that were quantitatively consistent with the observed warming rate (Drake). Additionally, climate scientists do not predict a fundamental break from currently observable historical processes. While climate scientists do predict a coming set of cataclysms that will reshape the entire world, such cataclysms are the direct result of the political structures of the current world rather than being fundamentally separate from them, and our actions in the current world are relevant to either preventing, mitigating, or preparing for these cataclysms. I will now attempt to show how belief in apocalyptic AI is more analogous to the apocalyptic claims of evangelicals than to the apocalyptic claims of climate scientists, and as a result, Ord is unjustified in placing the risk from unaligned artificial intelligence as high as he does. 

While it is concerning that a focus on immediate threats undermines our ability to fight long-term risks such as climate change, this doesn’t necessarily make such a time horizon unjustifiable. If the development of an unaligned artificial intelligence in the next 100 years is highly probable, and its development would result in either the extinction of humanity or a new utopian society in which climate change would be immediately “fixed”, then there would be little reason to concern ourselves with fighting climate change. Ord’s main argument for placing AI as the greatest existential threat to humanity is based on the opinions of AI researchers, resting on polling that shows that when “asked when an AI system would be ‘able to accomplish every task better and more cheaply than human workers’, on average they estimated a 50 percent chance of this happening by 2061” (Ord 141). Ord then argues that if a general artificial intelligence significantly more intelligent than humans is developed, it will take over the world. I will focus on disputing the legitimacy of Ord’s appeal to expertise by showing that the predictions of AI researchers have been particularly inaccurate.

The development of AI technology has a long history of failing to live up to its hype. Since its inception, AI research has fallen into a pattern of making overly ambitious promises that it fails to keep. Professor Lighthill’s report on AI research in the U.K., published in 1973, notes that “Most workers in AI research and in related fields confess to a pronounced feeling of disappointment in what has been achieved in the past twenty-five years. Workers entered the field around 1950, and even around 1960, with high hopes that are very far from having been realised in 1972” (Lighthill). While this report is specific to AI research in the U.K., it also marks a broader trend of state and economic bodies becoming disillusioned with AI research during this time period. Artificial intelligence would eventually recover from this ‘AI winter’; however, after recovering, the field continued to make overly ambitious claims. For example, as HP Newquist notes in The Brain Makers: Genius, Ego, and Greed in the Quest for Machines That Think, "On June 1, 1992, The Fifth Generation Project”–a much-hyped AI project, launched in 1981, into which the Japanese Ministry of International Trade and Industry poured $850 million–“ended not with a successful roar, but with a whimper." The bursting of the Fifth Generation Project bubble led to another AI winter in the early 1990s. 

The predictions of apocalyptic AI researchers are neither based on a concrete data set nor do they have a history of being accurate. As a result, they are not as credible as predictions of apocalyptic climate change, which are both based on concrete data and have a long track record of accuracy. This is not to say that the failures of AI research to live up to its hype imply that AI will not dramatically change the world we live in. However, in the face of climate catastrophe, considering unaligned artificial intelligence to be the largest risk to humanity is unjustified.

 

Section 2: Religious Narratives in Apocalyptic AI

 

Why do AI researchers keep making overly ambitious promises about the future of artificial intelligence? In general, one might expect that researchers in any given field will, on average, have an inflated sense of their importance; yet the tendency of AI researchers to make apocalyptic predictions is distinctive, and this general tendency does not fully explain the phenomenon. To properly understand the claims of AI apocalypticism, this tradition must be placed within the broader historical context of the apocalyptic religious movements that have influenced it. As Robert M. Geraci notes in his paper “Apocalyptic AI: Religion and the Promise of Artificial Intelligence”,

Early Jewish and Christian apocalyptic traditions share several basic characteristics which also appear in the twentieth-century popular science books on robotics and AI. Ancient Jews and Christians, caught in alienating circumstances, eagerly anticipated God's intervention in history. After the end of history, God will create a new world and resurrect humanity in glorified new bodies to eternally enjoy that world. Apocalyptic AI advocates cannot rely upon divine forces to guarantee the coming Kingdom, so they turn to evolution as a transcendent guarantee for the new world. Even without God, evolution guarantees the coming of the Kingdom. Apocalyptic AI looks forward to a mechanical future in which human beings will upload their minds into machines and enjoy a virtual reality paradise in perfect virtual bodies. (140)
 

Advocates of apocalyptic AI are likely to resist this characterization by claiming that, unlike other apocalyptic claims, their claims are based on science. Yet this defense is ultimately unconvincing because it treats religion and science as necessarily mutually exclusive. Geraci notes that “[w]e commonly speak of science and religion as though they are two separate endeavors but, while they do have important distinctions that make such everyday usage possible, they are neither clearly nor permanently demarcated; the line separating the two changes from era to era and from individual to individual” (159). It is important to note that, while discussions of the singularity are currently conceived of as a topic of “scientific” interest, such theories are necessarily unfalsifiable and therefore cannot be developed through the scientific method. In his essay “Why I Want to be a Posthuman When I Grow Up,” Nick Bostrom, one of the founders of Existential Risk Studies, considers how desirable it would be to be a posthuman. Much of this essay focuses on comparing human lives to what posthuman lives would be like. He remarks, “It seems to me fairly obvious why one might have reason to desire to become a posthuman in the sense of having a greatly enhanced capacity to stay alive and stay healthy” (6). At first, this seems sensible, as it matches up with what I imagine a posthuman future might be like. Given, however, that Bostrom himself begins the paper by “setting aside issues of feasibility, costs, risks, side-effects, and social consequences” (Bostrom 3), it is difficult to see how his assessment can be meaningfully distinguished from an enjoyable example of speculative science fiction. Posthumans do not exist, and so their average lifespan is necessarily unknown. Bostrom’s conception of posthumanity parallels how early apocalyptics believed that God would give them immortal bodies after the end of the world (Geraci 145). Perhaps, one day, posthumanity will come to exist, and at that point the average lifespan of posthumans could be compared to the average lifespan of humans; but, until such things come to pass, promises of a posthuman future will remain reminiscent of apocalyptic Christians explaining the benefits of angelic bodies. Even if such theories are in some cultural sense scientific, they are directly analogous to claims that are unanimously agreed to be religious. (This is similar to how belief in alien encounters is often not classified as a religious belief despite alien encounters being phenomenologically similar to angelic encounters.) The historic ties that apocalyptic AI has to apocalyptic religious movements undermine the view that its apocalyptic claims are meaningfully distinct from the consistently false predictions of apocalyptic religious movements.

 

Section 3: An Artificial Polytheism 

 

One of the main differences between the apocalyptic claims made by climate scientists and those made by both evangelicals and AI researchers is how these groups understand the historical process. In the case of climate science, there is respect for causal relationships. It is not as if, when CO2 reaches X parts per million, climate change will suddenly happen. Rather, the greenhouse effect is a causal relationship between the amount of greenhouse gases in the atmosphere and the average global temperature, and the apocalyptic claim is made only on the assumption that historically observable causal relationships won’t vanish. On the other hand, evangelicals and advocates of apocalyptic AI claim that there will be a sudden, and unprecedented, breakdown of the historical process in the next 100 years. This is, however, not the only possible interpretation of AI that maintains its status as a threat to humanity. In this section, I will attempt to historicize AI within the broader context of the development of capitalist systems and argue that the current risks posed by AI are the same risks posed by any system that seeks the maximization of a given value at any cost.

Treating AI as either the ultimate risk or as the only hope for salvation is highly suspect. In both cases, unfounded religious narratives sneak their way into risk assessment. However, while I think that the risk of the singularity is massively overblown, this is not to say that AI doesn’t keep me up at night. Rather, focusing on the singularity results in us becoming blind to the more mundane risk posed by artificial intelligence. To address this blind spot, I will use the final section of this paper to discuss how Ord frames AI risk, and I will offer a different framing. 

Ord begins his discussion of AI by asking us to consider

“What would happen if sometime this century researchers created an artificial general intelligence surpassing human abilities in almost every domain? In this act of creation, we would cede our status as the most intelligent entities on Earth. So without a very good plan to keep control, we should also expect to cede our status as the most powerful species, and the one that controls its own destiny.” (240)

In this passage, humanity is treated as a single actor that currently possesses full control over its destiny, and the focus is on a moment when artificial intelligence would, in a single stroke, take that control away. This framing obfuscates more than it illuminates for two reasons: 1) “humanity” has never existed as a unified body and has never spoken in a unified voice, and 2) power is disproportionately distributed, such that some have more control over humanity's collective destiny than others. A weaker version of Ord’s claim might simply be that a group of humans currently makes all decisions relevant to humanity's destiny. Yet this too is an overstatement, as it ignores the rapidly increasing prominence of machines making decisions on behalf of humans. In 2018, 80% of the daily decisions made in the US stock market were made by machines (Amaro). This reliance on machine intelligence is not limited to the stock market; the research firm IDC predicts that by 2024, 80% of Global 2000 companies will hire, fire, and train workers with automated systems (IDC). Ord imagines a scenario where a single super-intelligent machine takes control in an instant, and so misses that over the past few decades, those with the power to do so have slowly handed control over our economy to a plethora of “intelligent” machines. As a result, rather than asking ourselves how “we” might maintain control of our collective destiny, readers might be better off asking themselves how they, or groups that they are a part of (necessarily including but not limited to humanity), could take back control over their destiny. 

However, just because machine learning is exerting an increasing degree of control doesn’t necessarily make it an existential risk to humanity. This will necessarily depend on the real effects of these algorithms, and whether or not the values of these algorithms are aligned with so-called “human values”. In his essay “Ethical Issues in Advanced Artificial Intelligence,” Nick Bostrom discusses how artificial intelligence seeking an arbitrary goal might cause a global catastrophe.

“This could result, to return to the earlier example, in a super-intelligence whose top goal is the manufacturing of paperclips, with the consequence that it starts transforming first all of earth and then increasing portions of space into paperclip manufacturing facilities. More subtly, it could result in a superintelligence realizing a state of affairs that we might now judge as desirable but which in fact turns out to be a false utopia, in which things essential to human flourishing have been irreversibly lost. We need to be careful about what we wish for from a superintelligence because we might get it.” (Bostrom)

Our immediate circumstance is of course non-analogous to Bostrom’s example, as our economy is not at the moment controlled by a single superintelligence; rather, it is increasingly controlled by a plethora of artificial intelligences of varying degrees of sophistication. Yet the concern remains: are their values conducive to human flourishing? The answer is of course no. The switch to AI management by corporations is done to maximize their profit, and the infinite maximization of this metric of value conflicts with human flourishing, and perhaps even human survival. So while the world might not be traded for paperclips, the algorithms will happily trade the world for whatever makes money. As such programs are programmed merely to maximize profit, they can’t ensure that a world remains in which such profit has value. This type of maximization is not unique to AI-run systems, as most corporations have an explicit obligation to maximize shareholder profit over all else. AI management is not a break from the previous historical process of capitalist profit extraction but rather a further streamlining of this process. As a result, while prophecies of a single apocalyptic AI seem to lack credibility, the prevalence of automated systems in the management of the global economy may make it more difficult to properly address the climate crisis.
 

Conclusion: 

In this essay, I discussed three different groups that make apocalyptic claims: Christian evangelicals, climate scientists, and AI researchers. I argued that the apocalyptic claims made by a subset of evangelicals are untrustworthy because 1) these claims are not based on a methodology that is generally truth-seeking, and 2) apocalyptic claims made by evangelicals have historically failed to come true. On the other hand, I argued that the apocalyptic claims made by climate scientists are trustworthy because 1) these claims are based on a methodology that is generally truth-seeking, and 2) predictions made by climate scientists have historically come true. I then argued that the apocalyptic claims made by AI researchers are more analogous to the claims of evangelicals because 1) these claims are not based on a methodology that is generally truth-seeking, 2) apocalyptic claims made by AI researchers have historically failed to come true, and 3) the narratives that underlie apocalyptic AI share a history with the apocalyptic claims made by evangelical Christians. As a result, these claims are untrustworthy, and Ord is unjustified in placing the risk from unaligned artificial intelligence as high as he does. However, it is important not to write off the dangers of artificial intelligence: while it seems improbable that AI will make previous historical processes irrelevant, the development of this technology is part of a historical process that is directly at odds with human flourishing.

 

Bibliography:

Amaro, Silvia. “Sell-Offs Could Be down to Machines That Control 80% of the US Stock Market, Fund Manager Says.” CNBC, CNBC, 5 Dec. 2018, https://www.cnbc.com/2018/12/05/sell-offs-could-be-down-to-machines-that-control-80percent-of-us-stocks-fund-manager-says.html.

Borenstein, Seth. “UN Climate Report: 'Atlas of Human Suffering' Worse, Bigger.” AP NEWS, Associated Press, 28 Feb. 2022, https://apnews.com/article/climate-science-europe-united-nations-weather-8d5e277660f7125ffdab7a833d9856a3. 

Bostrom, Nick. “Ethical Issues in Advanced Artificial Intelligence.” 2003, https://nickbostrom.com/ethics/ai.html.

Bostrom, Nick. “Why I Want to Be a Posthuman When I Grow Up.” Nickbostrom.com, 2006, https://nickbostrom.com/posthuman.pdf.

Cremer, Carla Zoe, and Luke Kemp. “Democratizing Risks: In Search of a Methodology to Study Existential Risk.” The Future of Humanity Institute & Centre for the Study of Existential Risk, 2021, https://arxiv.org/abs/2201.11214.

Drake, Henri. “Historical Climate Models Accurately Projected Global Warming.” MIT Department of Earth, Atmospheric and Planetary Sciences, 10 Dec. 2019, https://eapsweb.mit.edu/news/2019/historical-climate-models-accurately-projected-global-warming.

Ord, Toby. The Precipice: Existential Risk and the Future of Humanity. Bloomsbury Publishing, 2021. 

Geraci, Robert M. “Apocalyptic AI: Religion and the Promise of Artificial Intelligence.” Journal of the American Academy of Religion, vol. 76, no. 1, 2008, pp. 138–66, http://www.jstor.org/stable/40006028. Accessed 14 Apr. 2022.

“IDC FUTURESCAPE: Top 10 Predictions for the Future of Work.” IDC, 18 Nov. 2018, https://www.idc.com/getdoc.jsp?containerId=prUS48395221.

Lighthill, James. Lighthill Report, 1973, http://www.chilton-computing.org.uk/inf/literature/reports/lighthill_report/p001.htm.

Mitchell, Audra, and Aadita Chaudhury. “Worlding beyond ‘the’ ‘End’ of ‘the World’: White Apocalyptic Visions and BIPOC Futurisms.” SagePub, 2020, https://journals.sagepub.com/doi/pdf/10.1177/0047117820948936. 

Newquist, Harvey P. The Brain Makers: Genius, Ego, and Greed in the Quest for Machines That Think. 1994.

Pew Research Center. “Religion and Views on Climate and Energy Issues.” Pew Research Center Science & Society, 20 Aug. 2020, https://www.pewresearch.org/science/2015/10/22/religion-and-views-on-climate-and-energy-issues/.

Scanlon, Hillary. “Evangelicals and Climate Change.” Religion in Environmental and Climate Change: Suffering, Values, Lifestyles, Sept. 2020, https://doi.org/10.5040/9781472549266.ch-007.

Soper, Spencer. Bloomberg.com, Bloomberg, https://www.bloomberg.com/news/features/2021-06-28/fired-by-bot-amazon-turns-to-machine-managers-and-workers-are-losing-out. 



Comments:

As someone working on climate for almost ten years and participating in climate discourse for longer it strikes me as odd to not also describe the very strong apocalyptic tendencies and pressures in climate circles.

There's lots of good work on how environmentalism has a predictable (because socio-culturally useful) apocalyptic streak, so there is strong reason to also be distrustful of apocalyptic claims by environmentalists and climate scientists affected by this cultural environment (see e.g. here: https://doi.org/10.1111/j.1540-8159.2005.09566.x-i1)

I guess I just think they are valid, and it seems like the predictions have consistently come true. It is my view that the apocalyptic tendencies in the environmentalist movement do not directly parallel religious apocalypticism in the way apocalyptic AI does. However, I could be mistaken. 

To clarify my argument it is not the fact that an apocalyptic claim is being made, but rather that the claim is analogous to religious predictions that exist in the same culture that negatively affects its epistemic credibility.

From reading the abstract of this paper it doesn't seem to be about religious narratives in climate science. I lack access to the paper, if you have a free link I will check it out.

I don't have a free link but I think it's freely available somewhere on the internet.

The paper and lots of other research (by Mary Douglas, Aaron Wildavsky etc) is, among other things, about the similarity of apocalyptic belief systems found in religious sects and parts of modern environmentalism, so this research seems very relevant to your question.

On your point on predictive accuracy, I think you are comparing apples to oranges. Lots of intermediate predictions of climate science have become true, but so have lots of predictions on speed of AI progress, whereas predictions of apocalyptic outcomes have not materialized in either yet.

My point is not that one should not update downwards on AI risk based on worries about doomism being cultural rather than entirely based on objective analysis, just that applying an asymmetrical update in favor of taking climate more seriously seems mistaken given very similar dynamics.

My view is not that religious sects exist within AI and this is a reason to dismiss it (this would apply equally to climate change). It is that the way apocalyptic AI is framed parallels religious apocalypses and religious narratives around the apocalypse, i.e. virtual heaven, reshaping of the world in the image of a single entity, etc. This simply isn't true to the same degree for climate change.

I don't agree that apocalyptic outcomes haven't manifested from climate change yet. Larger and longer hurricane and fire seasons. The glaciers melting away. It hasn't killed us all yet, but it certainly looks apocalyptic. Soon entire countries will sink beneath the waves. In fairness, with AI this seems like it can't happen until it's the apocalypse, so this is apples to oranges. 

Religious groups involved in environmentalism are not relevant insofar as they are not claiming to be non-religious (if they are making religious arguments). I guess the argument could be made that the Bible says there will be more storms before the end and climate change predicts this too, but given that this is actually happening, this doesn't seem to hold.

It is not doomerism that I'm worried about; it's old religious stories reinventing themselves in secular guises. Things like the singularity, virtual heavens, space colonization, etc. This doesn't apply to the concern of AI systems as a group controlling the economy to human detriment, something I am very concerned about.

Hey, thanks for sharing! I thought this was well researched and written. As somebody who’s pretty convinced by the arguments for AI risk, I do mostly disagree with it, but I’d just like to ask a question and share an interesting line of research:

First, do you think there was ever a time when climate change predictions were more similar to religious apocalypse claims? For example, before there was substantial evidence that the Earth was getting warmer, or when people first started hypothesizing how chemicals and the atmosphere worked. The greenhouse effect was first proposed in 1824 long before temperatures started to rise — was the person who proposed it closer to a religious prophet than a scientist?

(I would say no because scientists can make good predictions about future events by using theory and careful experiments. For example, Einstein predicted the existence of “gravitational waves” in 1916 based on theory alone, and his theory wasn’t confirmed with empirical evidence until nearly 100 years later by the LIGO project. AI risk is similarly a prediction based on good theory and careful experiments that we can conduct today, despite the fact that we don’t have AGI yet and therefore don’t know for certain.)

Second, you mention that no existential harm has ever befallen humanity. It’s worth pointing out that, if it had, we wouldn’t be here talking about it today. Perhaps the reason we don’t see aliens in the sky is because existential catastrophes are common for intelligent life, and our survival thus far is a long string of good luck. I’m not an expert on this topic and I don’t quite believe all the implications, but there is a field of study devoted to it called anthropics, and it seems pretty interesting.

More on anthropics: https://www.briangwilliams.us/human-extinction/doomsday-and-the-anthropic-principle.html, https://nickbostrom.com/papers/anthropicshadow.pdf

Hope to read more from you again!

I don't think that there was a point where climate change predictions were more similar to religious apocalypses, due to the pre-existing movements concerned with ecology that were already dealing with pre-existing forms of ecological destruction. It seems to me that combating climate change became part of those movements as it became a more credible threat, and it doesn't seem like it was ever irrationally focused on.

That first paragraph is supposed to be a nod to the anthropic principle and is meant to situate the reader in the special epistemic situation of not being able to rely on the historical record. I love anthropics; I'm about to submit another post on its implications for nuclear war.

You might find the thread "The AI messiah" and the comments there interesting.

You quote AI results from the 70s and 90s as examples of overly optimistic AI predictions.

In recent years there are many many examples of predictions being too conservative (e.g. Google beating Lee Sedol at Go in 2016, GPT-3, Minerva, Imagen ...).
Self-driving seems to be the only field where progress has been slower than some expected. See e.g.
https://bounded-regret.ghost.io/ai-forecasting-one-year-in/? "progress on ML benchmarks happened significantly faster than forecasters expected" (even if it was sensitive to the exact timing of a single paper, I think it's a useful data point).

Would that make you increase the importance of AI risk as a priority?

I will check out the article.

I was unaware of these more recent predictions; these increase the credibility of AI risk in my mind to some degree.

Prior to generalized artificial intelligence actually existing, I couldn't ever view it as a higher priority than climate change. If it existed but was contained I would treat it as a higher priority, and if there were points where it almost escaped but didn't, even more so. 

Interesting tack at the problem!

As a skeptic of AI as the main (certainly as the only) existential risk to focus on, I do agree there are  "vibe" similarities with apocalyptic religious claims. The same applies to climate change discussions outside of EA. My guess as to why would be that it is almost impossible to  discuss existential risks purely rationally - we cannot help but feel anxiety and other strong negative emotions at the prospect of dying, which can cloud our judgment.

Ultimately, however, I do not view that as a sufficient reason to dismiss existential risk claims. 

If you'll pardon me for the somewhat somber example - take a hypochondriac who becomes  obsessed with the idea they might die from lung cancer. They quit smoking and start doing all the right things. However they eventually die in a car accident that could have been averted had they been wearing a seatbelt. 

What can we conclude from this scenario?
1. That person was right to worry about existential risk. Yes, while they were alive, the very fact that they were talking about existential risk showed that their fears hadn't yet materialized. But they did die from an existential risk in the end.
2. That person was right to worry about existential risks from lung cancer. Smoking does help cause lung cancer. Had they survived the accident but not quit smoking, who knows what would have happened.
3. That person was wrong not to worry about existential risks from car accidents.
4. That person was probably wrong to obsess over only one existential risk out of many.
5. That person probably would have been better off not living in fear. They could have enjoyed themselves while living wisely and prudently.

What we cannot conclude from this scenario:
1. Humans cannot die.
2. Humans cannot die from lung cancer.
3. I can smoke ten packs a day for decades without fear of consequence. 

I don't think the vibe of climate apocalyptic claims is analogous to apocalyptic religious claims in the way apocalyptic AI is. This is because climate change is a process, while religious apocalypses and apocalyptic AI are events. Whether or not that is enough reason to dismiss the claims is subjective, but it should decrease their credibility to some degree.

I am confused as to how your thought experiment interacts with the argument in this paper. If no one had ever died of lung cancer before, the man would be irrational, no?


To be clear the argument of the thought experiment was more that "just because someone is being a bit of a maniac about an existential risk does not mean that they're wrong or that existential risk does not exist." So that's why I took an example of a risk we know can happen - the existential risk to one human. It was not an attempt at a full analogy to AI X-risk. 

It is true that the difference between humans dying and the entire human species going extinct is that we can know that humans have died in the past without dying  ourselves.

So if we're going for an analogy here, it is a scenario in which no one has yet died of X, but there is a plausible reason to believe that X can happen, X is somewhat likely to happen (as likely as these things can be anyway), and if X happens, it can cause death. 

I would argue that if you can establish that well enough, the claim is worth taking seriously regardless of "weirdness"/apocalyptic overtones, which are somewhat excusable due to the emotions involved in fear of death.  Of course if the claim can be made without them even better!
 

I agree with all of this. The argument is not that AI risk claims are wacky or emotional and therefore should be considered untrustworthy. A claim being wacky or emotional, IMPO, doesn't affect its epistemic credibility. It is that they directly parallel other apocalyptic claims that they both share a history with and co-exist with in the same culture (those being Christian apocalyptic beliefs). Additionally, this is not proof that AI risk isn't real; it is merely a reason to think it is less epistemically credible. 

Regardless of the reality of AI risk, this is a reason that people will justifiably distrust it.

I've upvoted this because I think the parallels between A.I. worries and apocalyptic religious stuff are genuinely epistemically worrying, and I am inclined to think that the most likely path is that A.I. risk turns out to be yet another failed apocalyptic prediction. (This is compatible with work on it being high value in expectation.)

But I think there's an issue with your framing of "whose predictions of apocalypse should we trust more, climate scientists or A.I. risk people": if apocalyptic predictions mean predictions of human extinction, it's not clear to me that most climate scientists are making them (at least in official scientific work). I think this is certainly how people prioritizing A.I. risk over climate change interpret the consensus among climate scientists.

I am using apocalyptic broadly, such that it applies both to existential risk and to global catastrophic risk. IMPO, if a nuclear war kills 99% of the population it is still apocalyptic.

I think that distinguishing between global catastrophic risk and existential risk is extremely difficult. While climate scientists don't generally predict human extinction, I think this is largely because of pressure to not appear alarmist and due to state and corporate interests to downplay and ignore climate change. On the other hand, it doesn't seem like there are any particular forces working to downplay AI risk. 

Fantastic essay - one of the most original and challenging I've seen on the forum. 

I'm interested in this argument not so much as it relates to the value of working on AI alignment, but rather the internal narratives / structures of meaning people carry into their EA work. 

Many people come to EA  from intense or high-control religious backgrounds - evangelicalism, Mormonism, orthodox and ultra-orthodox Judaism, and more. Especially in the kinds of educated circles that tend to overlap with EA, there's a huge cultural vacuum for shared structures of meaning. I suspect we underestimate the power of this vacuum at our own peril. We've got to acknowledge that AI apocalypse narratives (focusing on the narrative here; not the issue itself) have a powerful religious pull. They offer a kind of salvation / destruction binary that offers us what we want most in the world - relief from uncertainty. 

I see a young movement  with a ton of fervor / new converts and I wonder - are we honest with ourselves about what we're looking for? Are we being smart about where we get our sense of belonging, meaning, and purpose? 

Lots of folks are worried about burnout, and I am too. I see a bunch of brilliant 23 year olds in STEM (and others!) who haven't had a chance to develop an understanding of their emotional / relational / spiritual needs and are heading for a crash. 

Thank you for the high praise.

This certainly seems to be an area ripe for further exploration, and I am curious about how it applies to other parts of EA

I think it's an open question as to how much we can learn from failed apocalyptic predictions- https://forum.effectivealtruism.org/posts/2MjuJumEaG27u9kFd/don-t-be-comforted-by-failed-apocalypses

Also with the possible exception of the earliest Christians who were hoping for an imminent second coming, I'm quite sure most Christians have not predicted an imminent apocalypse so we're talking about specific sects and pastors (admittedly some quite influential).  You do say you're talking about early Christians at the start of the article, but I think the conclusion of your post makes it sound like religious people are constantly making apocalyptic predictions that fail to come true.

Anthropic shadow certainly creates some degree of uncertainty; however, it seems to apply less in this case than it might in, say, the case of nuclear war. (I'm actually about to submit another submission about anthropic shadow in that case.) It seems like AI development wasn't slowed down by a few events but rather by overarching complexities in its development. It's my understanding that anthropic shadow is mostly applicable in cases where there have been close calls, and is less applicable in non-linear cases. However, I might be mistaken.

The conclusion doesn't read this way to me, as to me the statement "apocalyptic claims made by Christians" doesn't imply that all Christians make apocalyptic claims. However, it does seem to have created unnecessary confusion, I will add the word some. 

Look forward to your next post!

I think this can be made more skimmable, with bolded subsections and statements and clear communication of your reasoning for each section, rather than just a tl;dr at the top.

This is a good point; I will add that when I get the chance.

Glad to help! Thanks for trying to contribute to the conversation.


My guess is that you don't understand AI risk arguments well enough to be able to pass an Ideological Turing Test for them, ie, be able to restate them in your own words in a way that proponents would buy. 

[This comment is no longer endorsed by its author]

It seems helpful to be able to understand contrary ideas so you can reject them more confidently! :) 

I hope you take care too.

[This comment is no longer endorsed by its author]

This is a good point. I might write up a paragraph if I get the chance. In my head, I took it for granted that everyone would be on board with this but it'd probably be better to go over some of the data.

I am confused as to why anyone would downvote this.


I agree with your assessment of the groupthink and why your comment was probably downvoted, but for what it's worth, I don't think it's weird that people here are sensitive to what sounds like a dismissal of AI risk, since many people in EA circles are genuinely deeply afraid of it and some plan their lives around it. 

This makes sense

I appreciate the explanation. 

Basic answer: They aren't different, and a lot of climate change/environmentalism has apocalyptic tendencies as well. Ember is just biased toward climate change being true. A shorter version of jackava's comment follows.

Specifically, both climate change and AI risk have apocalyptic elements in people's minds, and this is virtually unsurprising given people's brains. Climate Change is real, but crucially very likely not to be a risk. AI risk could very well go the same way.

There's a study showing that climate change/environmentalism has apocalyptic elements to it: https://doi.org/10.1111/j.1540-8159.2005.09566.x-i1.

I will add a section to this paper to clarify this. The argument isn't that AI is apocalyptic and therefore untrustworthy, but that the apocalyptic narratives in AI parallel other apocalyptic narratives that are untrustworthy (specifically old religious narratives). That study shows that there are apocalyptic narratives within environmentalist circles, but that alone isn't enough for my argument to apply.

I also don't understand why no one has commented on my reframing of AI risk, which I don't take to be suspect. I very obviously view AI as a threat; there's an entire section on it.
