In recent years, three pivotal concepts have entered the mainstream and captured scholarly attention: Artificial Intelligence, existential threats, and human ingenuity. This surge reflects the rapid transition of Homo sapiens from hunter-gatherers to explorers of the limitless possibilities of existence.
To clear up the conceptual ambiguity of the term existential threat, which is generally used interchangeably with catastrophic risk: philosophers such as Toby Ord, in his 2020 book The Precipice, focus on doomsday risks encompassing scenarios such as permanent civilizational collapse or irrecoverable dystopias. Here, however, existential threat means solely the reduction in the sense of existence (a decline in the psychological decision-making authority for survival, which might leave humans mentally trapped) and its adverse impact on human cognitive ability, resulting in the reduction of long-term prosperity.
Existence itself is multifaceted, with no straightforward purpose. Even answering what intelligence is, and establishing the minimum threshold for being considered conscious, is an extremely challenging quest that hangs delicately between science and philosophy. Though conceivable, the task has saddled humanity with this responsibility across generations: these questions have been raised in the form of religion and ethics, and scholars have tried to address them for thousands of years.
The question was positioned in the mainstream relatively recently by the great English mathematician and philosopher Alan Mathison Turing, who designed the Bombe machine that broke the Enigma codes, then thought unbreakable because the Enigma offered around 150,000,000,000,000,000,000 (1.5 × 10²⁰, or 150 quintillion) possible settings.
He introduced the Turing test in his notable paper Computing Machinery and Intelligence, an imitation game played between humans and a computer, meant to answer simple yet complex questions: Can machines think, and how can human intelligence be distinguished from artificial intelligence?
Contemporary LLMs (Large Language Models) and machine learning's breakthroughs in NLP (Natural Language Processing) have produced satisfactory and rapid outputs, but the tests themselves are outdated. Assuming science continues progressing without interruption, the world is expected to see early versions of AGI (Artificial General Intelligence) by the end of this decade. AGI is a hypothetical machine intelligence that can mimic human cognitive functions and provide outcomes similar to a human's.
Ostensibly, research and development in the artificial intelligence sector have two primary motives: on the surface, to provide a better lifestyle by developing artificial intelligence that surpasses human cognitive abilities; beneath it, the monetary interests of the tech-giant developers, along with government bodies.
Irving John Good introduced the concept of the intelligence explosion, now often linked with the technological singularity, in 1965. The idea is that each cycle of recursive self-improvement eventually leads to an exponential increase in intelligence. In the case of machine learning, the compounding cycle of self-enhancement will accumulate to the point where the system's intelligence surpasses the boundary of human comprehension within a very short time frame, and the race has already begun.
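Good's compounding dynamic can be illustrated with a toy calculation; the function name, growth rate, and cycle count below are arbitrary assumptions chosen for illustration, not a model of any real system.

```python
# A minimal sketch of the intelligence-explosion idea: each self-improvement
# cycle multiplies capability by (1 + gain), so growth is exponential even
# though every individual step looks modest. All numbers are illustrative.

def self_improvement_trajectory(initial=1.0, gain_per_cycle=0.1, cycles=50):
    """Return capability after each recursive self-improvement cycle."""
    capability = initial
    history = [capability]
    for _ in range(cycles):
        capability *= (1.0 + gain_per_cycle)   # the system improves itself
        history.append(capability)
    return history

if __name__ == "__main__":
    trajectory = self_improvement_trajectory()
    print(f"after 10 cycles: {trajectory[10]:.1f}x")   # ~2.6x the starting point
    print(f"after 50 cycles: {trajectory[50]:.1f}x")   # ~117.4x the starting point
```

The early steps barely move the needle; the later ones dwarf the starting point, which is the shape of growth the paragraph above describes.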
A visible transition is underway right now from narrow (task-specific) intelligence toward complex human-level cognition (Artificial General Intelligence), and afterward toward ASI (Artificial Superintelligence).
However, defining complex human intelligence itself remains a topic of intriguing discussion. There is no consensus on what constitutes true human cognition when biological and electronic cognitive skills are weighed against each other. Does it lie in the seamless, unconscious processes of the human body, in the accumulated experience of creativity, adaptability, emotional understanding, and the communication of abstract ideas, or in an average adult's Intelligence Quotient?
If technological advancement continues, human intelligence will barely stand equal to that of machines, and there will be no reliable, ironclad test to differentiate between us and them.
This concern gives rise to a fundamentally serious question:
1. To what degree is it justifiable to rely on Artificial Intelligence for strategic and ethical decision-making, considering that its outputs are logically consistent and data-driven?
With the surge of intellectual technologies such as machine learning in a politically globalised world, the core concept of exerting dominance is shifting significantly.
Developed AIs will be used for strategic disruption with minimal intervention and resource usage, which is considered the modern outlook on warfare; an adversary that kneels without confrontation is more favourable than sheer firepower on the battlefield.
The trends indicate that future conflicts will not be fought with blood and explosions, but will focus primarily on asymmetric and non-traditional strategies involving economic and psychological warfare facilitated by cyber means.
There are two ways computer superintelligence can aid military operations. The first is physical labour, such as autonomous surveillance drones and self-driving vehicles that enhance logistical operations in sensitive military compounds. The second is the ability to strategize potential outcomes by analyzing dependent variables (media and public perception, collateral damage, etc.) and independent variables (diplomatic pressure, day-night operations, etc.), promising more effective decision-making amid conflict. In both cases, human interference is minimal, primarily for maintenance purposes.
Mankind’s blind reliance on artificial superintelligence for critical decision-making will pose a profound risk to humanity’s long-term mental and existential survival.
How?
Consider a hypothetical near-future scenario of rivalry between bipolar or multipolar superpowers, locked in a struggle for dominance, with tensions escalating to the brink of large-scale destruction. Each superpower nation-state will seek to demonstrate its hard-power influence over the others without civilian casualties. Human warfare strategists turn to advanced computing and artificial intelligence because of its ability to handle high-volume data ingestion and analysis, along with simulated warfare confrontations, to find the single most optimal strategy for constructing meaningful outcomes:
1. Deter conflict
2. Demonstrate superiority
3. Control the dynamics
4. Minimise human casualties
5. Shatter the opponent’s willpower to engage in the fight
6. Achieve a fast outcome
Machine learning will analyze these requirements on the basis of existing information available only in cyberspace, including geopolitical dynamics and strategic deterrence theories. The computer intelligence presents several options involving the best strategies to fulfill the request. One appears particularly effective:
A series of tactical, very low-yield nuclear strikes (a fraction of a kiloton of TNT; for reference, the Trinity test was conducted with an explosive yield of about 18.6 kilotons of TNT, and the Russian Tsar Bomba with an astonishing yield of 50 megatons) on uninhabited enemy territory, destroying resources while sparing human lives and also serving as a hard-power showcase. The Artificial General Intelligence provides the full, detailed strike locations, optimal timing, and strategies to minimise aftermath scenarios.
This plan meets the stated requirements and appears rational at first glance: a calculated psychological maneuver designed to de-escalate without triggering full-scale war.
There is only one case study of nuclear weapons used in warfare, with its two similar subsets: Hiroshima and Nagasaki. The result was indeed ‘satisfactory’ (open to contemplation). Plenty of information can be found on nuclear deterrence and how atomic bombs have helped human society avoid total warfare. So, according to the AGI, it seems legitimate and practical to use low-yield nuclear weapons for deterrence.
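The blind spot this scenario relies on can be sketched as a toy scoring function: the planner ranks candidate strategies only on the criteria it was given, while the decisive variable, how a human adversary will interpret the strike, is simply absent from the model. All strategy names, scores, and weights below are hypothetical illustrations, not outputs of any real system.

```python
# Toy sketch of the planner's blind spot: options are scored only on the
# stated criteria; adversary (mis)interpretation is not a modelled feature,
# so it cannot affect the ranking. All values are hypothetical.

CRITERIA_WEIGHTS = {
    "deterrence": 0.3,
    "demonstrated_superiority": 0.2,
    "avoided_casualties": 0.3,
    "speed_of_outcome": 0.2,
}

STRATEGIES = {  # scores in [0, 1] per stated criterion
    "economic_sanctions": {"deterrence": 0.4, "demonstrated_superiority": 0.3,
                           "avoided_casualties": 1.0, "speed_of_outcome": 0.2},
    "cyber_disruption": {"deterrence": 0.6, "demonstrated_superiority": 0.5,
                         "avoided_casualties": 1.0, "speed_of_outcome": 0.6},
    "low_yield_nuclear_strike": {"deterrence": 0.9, "demonstrated_superiority": 1.0,
                                 "avoided_casualties": 0.9, "speed_of_outcome": 0.9},
}

def score(option):
    """Weighted sum over the criteria the strategists asked for."""
    return sum(CRITERIA_WEIGHTS[c] * option[c] for c in CRITERIA_WEIGHTS)

best = max(STRATEGIES, key=lambda name: score(STRATEGIES[name]))
print(best)  # -> low_yield_nuclear_strike: highest on every modelled axis,
             #    while escalation from human misreading stays unmodelled.
```

Within the modelled criteria, the strike genuinely is the top-ranked option; the danger lies entirely in what the objective function never measures.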
However, when implemented in real life, the outcome might unfold differently. Rather than perceiving the strikes as a measured warning, the adversary (humans) interprets them as the initiation of nuclear warfare and retaliates with strategic atomic strikes. The conflict may escalate into an all-out nuclear exchange, plunging humanity into a doomsday risk.
This implies that humans, like all biological intelligences, are tied to a timeline of existence encoded in their genes and shaped by evolution, and, unlike other biological intelligences, are aware of it (humans have a rough intrinsic understanding of mortality: in other words, how long are they going to live, 70, 80 years?).
As a result, biological decision-making follows three stages: first, survival; second, the pursuit of comfort; and finally, reproduction. On the other hand, no matter how advanced a computer's cognitive ability is, the computer will have no idea of its own existence and existential stakes. A sense of existence only arises when death is inevitable on the existential timeline.
Homo sapiens took thousands of years to invent written scripts, and many centuries more for paper. After the advent of paper, the printing press arrived in a comparatively shorter span, and then humanity saw the rise of the Industrial Revolution. From the first flight in the air to crossing the Kármán line took only a few decades. This shows compound, exponential intellectual growth, and it also applies to machine learning: each advancement will compound. The start may be slow, but after a few iterations it will begin to challenge human cognitive and rational comprehension. It is just a matter of time.
Time itself is a concept embedded in biological species' age-long evolutionary experience; for today's computers, it is just a variable. A genuine sense of existence and of the finality of death is what artificial intelligence truly lacks, mirroring the potential risk that blind reliance on it might pose a challenge to human existence.
Mitigation?
The dichotomy is that humans are living in the best times their ancestors ever had, in every possible aspect of life. Earlier, hominins were the prey of nature; now Homo sapiens are the top predators. Still, humans are increasingly concerned about existential survival. The scientific probability of the absolute precipice of human existence is very low; however, it is better to be prepared than to regret the results.
To mitigate the existential threat challenge, humans need to understand the limits of trusting artificial intelligence in their decision-making process. In addition, designers and scientists must focus on instilling a sense of self-awareness in the development of advanced super-cognitive artificial intelligence, not as the ability to think, but as an awareness of its own existence and death.
