
This post identifies a new existential risk factor which has not been recognised in prior literature: brain-computer interfaces (BCIs). BCIs are technologies which allow the brain to interface directly with an external device. In particular, BCIs have been developed both to read mental and emotional content from neural activity and to intentionally stimulate or inhibit certain kinds of brain activity. At present BCIs are used primarily for therapeutic purposes, but their potential use cases are much wider.

We recognise that this sounds somewhat like science fiction, and skepticism here is warranted. However, the current state of the technology is much more advanced than most realise. In particular, well-corroborated research prototypes already exist (Moses et al 2019; Guenther et al 2009); a number of companies, including Facebook and Neuralink, are working to commercialise this technology over the coming decades (Constine 2017; Musk 2019); and there is widespread agreement among BCI researchers that this technology is not just feasible, but will be on the market in the near future (Nijboer et al 2011). The risks this technology poses, however, have been almost entirely neglected.

This paper outlines how the development and widespread deployment of BCIs could significantly raise the likelihood of long-term global totalitarianism. We suggest two main methods of impact. Firstly, BCIs will allow for an unparalleled expansion of surveillance, as they will enable states (or other actors) to surveil even the mental contents of their subjects. Secondly, BCIs will make it easier than ever for totalitarian dictatorships to police dissent, by using brain stimulation to punish dissenting thoughts or even to make certain kinds of dissenting thought a physical impossibility.

At present, this risk factor has gone entirely unnoticed by the X-risk community. Given the high likelihood of its impact, and the possible magnitude of that impact, we suggest it deserves more attention, more research, and more discussion.

1. Definitions

1.1 Existential Risk

Existential risks are those which threaten the premature extinction of Earth-originating life (Bostrom 2002), or which threaten “the permanent and drastic reduction of its potential for desirable future development” (Cotton-Barratt & Ord 2015). As such, not all existential risks pose a danger of extinction. Irreversible global totalitarianism is often considered an existential risk too, because even without posing any extinction risk, it has the capacity to irreversibly destroy or permanently curtail a great part of humanity’s potential (Ord 2020).

1.2 Risk Factors & Security Factors

A risk factor is a situation which makes the occurrence of an existential catastrophe more likely. An example is international conflict, which by itself poses little likelihood of extinction, but which may drastically raise the odds of nuclear war, and thus the odds of existential catastrophe. Existential risks and existential risk factors are not always mutually exclusive; some consider climate change to be both an existential risk in itself and a risk factor which might increase the danger from other existential risks (Torres 2016).

A security factor is the opposite of a risk factor: something which lowers the likelihood of an existential catastrophe. For example, good international governance may be a security factor which lessens the chance of nuclear war.

Just as action to avoid existential risks is crucial, dealing with risk factors can be just as important, and in some cases even more important, than dealing with the risks themselves (Ord 2020). For example, if the chance of a particular X-risk occurring is 10%, but a risk factor brings this chance up to 90%, it may be more cost-effective to address the risk factor before addressing the risk itself. This is not always the case, but there can be strong justification for working to alleviate existential risk factors where it is cost-effective to do so.

This paper seeks to identify and outline the danger and likelihood of a new and unnoticed existential risk factor.

2. Brain-Computer Interfaces

2.1 Outline of Current Brain-Computer Interfaces

A brain-computer interface (or BCI) is an interface between a brain and an external device. Certain forms of BCI already exist; the term refers to a range of technologies, used for a number of purposes. At present, the best-known commercial uses of BCIs involve recovering lost senses, as with cochlear implants used to restore hearing and retinal implants used to restore sight (Anupama et al 2012). However, BCIs have a vastly broader set of uses which already exist, either as in-use medical technologies or as well-corroborated research prototypes. In this section, we outline a few of these uses to give an idea of the current and near-term scope of the technology.

For the purposes of our explanation, BCIs have two broad functions. The first kind of BCI is able to read neural activity: to record it, interpret it, transmit it, and use it for a variety of purposes. The second kind of BCI is able to write to the brain: to influence and modify brain activity, stimulating or suppressing various responses using skull-mounted micro-electrodes or less invasive transcranial electrical stimulation. These two types can be combined and used together, though for clarity we will refer to them as Type 1 and Type 2 BCIs, so as to differentiate function.

Type 1 BCIs are able to read neural data, and to record, interpret, and transmit it for a number of purposes. They have already been used to translate speech from neural patterns in real time (Allison et al 2007; Guenther et al 2009; Moses et al 2019), and to detect positive and negative emotional states from neural patterns (Wu et al 2017). It is expected that near-term BCIs of this kind will be able to detect intentional deception, detect even subconscious recognition, and read more precise and complex thought content (Evers and Sigman 2013; Bunce et al 2005; Bellman et al 2018; Roelfsema, Denys & Klink 2018). There are many practical uses for recording and interpreting neural data. BCIs have been used in primates to allow them to control prosthetic limbs and smart devices with thought, by sending mental commands directly to the relevant device (Moore 2003; Carmena et al 2003; Ifft 2013). The same techniques have been used to assist people who are paraplegic or quadriplegic, by providing a neural shunt which records messages from the brain and sends them directly to where the muscles are activated, allowing patients to use previously disabled limbs (Moore 2003). Many companies also have the long-term goal of allowing users to mentally transmit messages to other BCI users, enabling silent communication with only a thought (Kotchetkov et al 2010).

The uses of Type 2 BCIs are even more varied. Many are therapeutic. Deep brain stimulation, for example, has used neural stimulation to treat various disabilities and conditions, including Parkinson’s disease (Deuschl 2005; Perlmutter 2006). Similar techniques have been used to alleviate disorders such as OCD (Abelson et al 2005; Greenberg 2006), have been suggested as potential future treatments for conditions like Alzheimer’s disease and depression (Laxton 2013; Mayberg et al 2005), and may even restore function in those with motor disability after a stroke (Gulati et al 2015).

Through deep brain stimulation, control of physical pain responses is also a possibility; such techniques have been used to alleviate chronic pain (Kumar et al 1997; Bittar et al 2005a), treat phantom limb syndrome (Bittar et al 2005b), augment memory (Suthana 2012; Hamani et al 2008), and more. Just as BCIs can suppress pain, pain responses can also be stimulated for a variety of purposes, from interrogation to incentivisation to punishment. Similarly, BCIs are already able to artificially stimulate or suppress emotional reactions (Delgado 1969; Roelfsema & Klink 2018). These are just a few of the corroborated functions of BCIs. It has also been suggested that future BCIs could be used to treat cravings and addictions, and to alter internal drives and reward systems (Mazzoleni & Previdi 2015; Halpern 2008).

“Consider eating a chocolate cake. While eating, we feed data to our cognitive apparatus. These data provide the enjoyment of the cake. The enjoyment isn’t in the cake per se, but in our neural experience of it. Decoupling our sensory desire from the underlying survival purpose [nutrition] will soon be within our reach.” - Moran Cerf, professor at Northwestern University and employee at Neuralink.

2.2 Future Brain-Computer Interfaces

There is significant research and development being done to expand the capabilities of BCIs, and to make them orders of magnitude cheaper, more precise, less invasive, and more accessible to the broader population. Companies currently working on cheap, publicly accessible, advanced BCIs include Facebook (Constine 2017), Kernel (Kernel 2020; Statt 2017), Paradromics and Cortera (Regalado 2017), and Neuralink (Musk 2019). DARPA, the research agency of the US Department of Defense, is funding significant research in this direction (DARPA 2019), as is the Chinese government (Tucker 2018).

The potential uses of BCIs are well corroborated. The primary difficulties at present are cost, precision, and invasiveness. With so many companies and governments working on these problems, it is likely that these barriers will fall quickly.

2.3 Not All BCIs Involve Humanity-Scale Risk

As a point of clarification, this paper does not argue that all BCIs act as an existential risk factor. It seems incredibly unlikely that cochlear implants have any impact on the likelihood of any existential risk. However, we do argue that certain kinds of more advanced BCI may be extremely dangerous, and may drastically raise the risk of long-lasting global totalitarianism.

2.4 Current Literature on Risks from BCIs

2.4.1 Previously Identified Risks

The current literature on existential risk from BCIs is scarce. The vast majority of the literature on risk from BCIs has focused on impacts at a very low scale. Low-scale risks considered so far include surgical risk, possible health-related side effects such as altered sleep quality, risk of accidental personality changes, and the possibility of downstream mental health impacts or other unknown effects from BCI use (Burwell, Sample & Racine 2017). Potential threats to individual privacy have also been identified, specifically the risk of BCIs extracting information directly from the brains of users (Klein et al 2015).

At a higher scale, Bryan Caplan (2008) identified ‘brain scanning technology’ as a factor that may impact existential risk at some point in the next thousand years by assisting with the maintenance of dictatorships. However, Caplan focuses only on risk within the next millennium, and does not consider the high potential for this to occur in a far shorter time frame; in particular, within the next hundred years. He also mentions brain scanning only briefly, and does not consider the risk from brain scanning technology being present and active in all citizens at all times, even though such widespread use is a stated goal of multiple current BCI companies. Finally, Caplan did not consider the full depth of the impact of BCIs; he mentions only the capacity of brain scanning to increase the depth of surveillance, ignoring the existential risk posed by the widespread use of brain stimulation.

2.4.2 Cybersecurity and Coercion

A final risk that has been identified in prior literature is cybersecurity, though here too the focus has primarily been on the threat to individuals; specifically, on vulnerabilities in information security, financial security, physical safety, and physical control (Bernal et al 2019a). BCIs, just like computers, are vulnerable to manipulation by malicious agents. BCIs and brain scanning offer an unprecedented level of access to personal information and passwords, as well as to data about a user’s thoughts, experiences, memories, and attitudes, and thus offer attractive terrain for attackers. It is likely that security flaws will be exploited by malicious actors to assist with cybercrime. Previously identified risks here include identity theft, password hacking, blackmail, and even compromising the physical integrity of targets who rely on BCIs as a medical device (Bernal et al 2019b). The use of deep brain stimulation for coercion or control of BCI users is also a possible source of risk (Demetriades et al 2010). Corroborated possibilities here include control of movement, evoking emotions, evoking pain or distress, evoking desires, and impacting memories and thinking processes; and these are just the earliest discovered capabilities (Delgado 1969). However, past papers have exclusively framed this as a risk to individuals: that individuals may be sabotaged, surveilled, robbed, harmed, or controlled. Past research has not yet explored the risk posed to humanity as a whole.

This paper seeks to take the first steps towards filling that gap, outlining the risks that BCIs pose at a broader, global scale: the risk they pose to the future of all of humanity.

2.5 Higher Scale Risks: BCI as a Risk Factor for Totalitarianism

2.5.1 Risk From Neural Scanning: Ability to Surveil Subjects

Dissent from within is one of the major weaknesses of totalitarian dictatorships. BCIs offer a powerful tool to mitigate this weakness. Increased capacity for surveillance would make it easier to identify and root out dissenters, or skeptics who might betray the party, and thus easier to maintain totalitarian control. While conventional surveillance may allow a high level of monitoring and tracking of citizens’ behaviour and actions, it provides no way for a dictator to peer inside the minds of their subjects. Because of this, identifying the attitudes of careful defectors remains difficult. Surveillance through existing methods may fail to expose some threats to a totalitarian regime, such as party members who carefully hide their skepticism. BCI-based surveillance would have no such flaw, and so poses an unprecedented threat.

The level of intrusion here is potentially quite severe. With the advancement of BCIs, it is highly likely that in the near future we will see a rapid expansion in the ability to observe the contents of another’s mind. Some researchers claim that advanced BCIs will have access to more information about the intentions, attitudes, and desires of a subject than the subject themselves, suggesting that even subconscious attitudes and subconscious recognition, as well as intentional deception and hidden intentions, will be detectable by BCIs (Evers and Sigman 2013; Bunce et al 2005). Already, BCIs are able to detect unconscious recognition of objects that a subject has seen, but cannot consciously remember seeing (Bellman et al 2018).

Others have suggested that by more precisely recording the activity of a larger number of neurons, future BCIs will be able to reveal not just perceptions and words, but emotions, thoughts, attitudes, intentions, and abstract content such as recognition of people or concepts (Roelfsema, Denys & Klink 2018). Attitudes towards ideas, people, or organisations could be inferred by correlating emotions with their associated thought content, and dictatorships could use this to uncover attitudes towards the state, political figures, or even ideas. This would allow detection of dissent without fail, and allow a dictator to quell rebellion before a rebellious thought is even shared.

Some might hope for BCIs which do not have this level of access, but accessing and recording mental states is a fundamental and unavoidable feature of many BCIs. To achieve their desired functions, many BCIs need to read substantial amounts of neural data; it is impossible to translate neural activity into some function without access to that activity. Brain stimulators and BCIs are specifically designed to allow this kind of access; it is crucial to the effective functioning of the device (Ienca 2015). It is of course possible that some companies will build BCIs which target only certain sections of the brain: for example, only areas associated with speech, and not areas associated with emotion or thought. This is conceivable, though it is not clear that all companies and countries would do the same. Furthermore, the utility gained by expanding beyond the speech centres to other areas of the brain makes it highly doubtful the technology will remain so restricted indefinitely.

Furthermore, it is likely that BCIs will be created by companies, which have a strong financial incentive to record the neural states of users, if only to give themselves more information with which to improve their own technology. This information could be requisitioned by governments, as is frequently done with tech company data at present, even in democratic countries. Compounding this problem, privacy law has a history of struggling to keep pace with technological advancement. In more authoritarian countries, neural data might be transmitted directly to state records, and the preservation of privacy may not be attempted at all.

In essence, BCIs allow an easy and accurate way to detect thoughtcrime. For the first time, it will be possible for states to surveil the minds of their citizens. Deep surveillance of this kind would increase the likelihood that totalitarian dictatorships last indefinitely.

2.5.2 Risks from Brain Stimulation: Ability to Control Subjects

Beyond recording neural activity, there is an even greater threat, one which has not been considered as an existential risk factor in any prior literature: BCIs are able to intentionally influence the brain. In particular, future BCIs will be able to rewire pleasure and pain responses, and to intentionally stimulate or inhibit emotional responses, en masse. Where this is done consensually, it may be of some benefit. However, nothing about this technology guarantees consent.

In addition to making it possible to identify dissident elements more effectively than ever (through increased surveillance), BCIs will also powerfully increase the ability of states to control their subjects, and to maintain that control indefinitely. In such a situation, identification of dissidents would no longer even be necessary, as a state could guarantee that dissident thought is a physical impossibility. Finely honed BCIs can already trigger certain emotions, and associate them with certain concepts (Roelfsema, Denys & Klink 2018). This could be used to mandate desirable emotions towards some ideas, or to make undesirable emotions literally impossible. Though this possibility has been discussed in the literature for its therapeutic uses, such as triggering stimulation in response to negative obsessive thoughts (nullifying the negative emotions they cause), there is huge potential for misuse. A malicious controller could stimulate loyalty or affection in response to certain ideas, organisations, or people, and stimulate hatred in response to others. It could also inhibit certain emotions, so that citizens would not be physically able to feel anger at the state. The ability to trigger and suppress emotional content with BCIs has existed for years (Delgado 1969). Combined with complex and detailed reading of thought content, this is a highly dangerous tool.

Some might argue that dissident action may be possible even with an outside state controlling one’s emotional affect. This is highly debatable, but even without any control of emotional content, the risk from BCIs is still extreme. BCIs could condition subjects to reinforce certain behaviour (Tsai et al 2009), stimulate aversion to inhibit undesired behaviour (Lammel et al 2012), or stimulate pain or fear responses (Delgado 1969), causing intense and unending pain in response to certain thoughts or actions, or even in response to a lack of cooperation. Even without controlling emotional affect, the state could punish dissident thoughts in real time, making any consideration of resistance a practical impossibility. This is a powerful advantage for totalitarian states, and a strong reason for authoritarian states to become more totalitarian. In addition to surveillance, it allows a way to police the population and gain full cooperation from citizens which (once established in all citizens) could not be resisted. Machine learning programs scanning state databases of neural activity could detect thought patterns deemed hostile to the state, and punish them in real time. Or, more efficiently, the state could simply stimulate the brains of subjects to enforce habits, increase loyalty, decrease anger, or increase passivity (Lammel 2012; Tsai et al 2009). Even high-level dissent or threat of coup would be virtually impossible in a totalitarian state of this kind, and its long-term internal security would be assured.

This is a technology which fundamentally empowers totalitarianism. Even considering the idea of resistance, or feeling disdain towards the state, could be detected and rewired (or punished) in real time. At worst, the brain could be re-incentivised, with errant emotions turned off at the source, so that dissenting attitudes are never able to form.

BCIs also offer an easy way to interrogate dissidents and guarantee their cooperation in locating other dissident groups, which might otherwise be impossible. In past resistance movements, certain dissidents have been considered near-impossible to wipe out completely, because features of the terrain made locating them cost-prohibitive. If a government is able to forcibly apply BCIs, this obstacle becomes dramatically weaker. Dissenters might normally lie or refuse to cooperate; with BCIs, they need only be implanted and rewired. They would then be as loyal and cooperative as any other citizen, and could actively lead the state to their previous allies. Even dissidents still at large could not be fully trusted by their allies, as they might one day be captured and controlled by the state. Another threat to the long-term survival of totalitarian dictatorships is coups or overthrows from within, as citizens or party officials are tempted by different conditions in other states. With BCIs, the loyalty of regular citizens and even party officials could be assured. In current dictatorships, wiping out dissidents (particularly nonviolent dissidents) often carries a significant social cost which can delegitimise and destabilise regimes (Sharp 1973). A dictatorship whose citizens are all implanted with BCIs would not pay this social cost, or run such a risk of overthrow. At present, when dictators crack down it can cause riots and resistance, which can cause dictatorships to fall. With BCIs, governments will not need to appease their citizens at all to maintain loyalty. They need only turn up the dial.

It has long been argued that technologies can incline us towards certain systems of government (Orwell 1945; Huxley 1946; Martin 2001). BCIs are one of these technologies. By making the surveillance and policing of humans much easier, they incline us towards totalitarianism, and allow for a kind of totalitarianism that could be stable indefinitely. They do this by making the identification and control of dissidents (or even the first sparks of dissident thought) drastically easier, and by giving states the ability to turn dissent off at the source. Notably, Caplan (2008) proposed that for totalitarianism to be stable (in the absence of BCIs) it would need to be a global phenomenon, so that the citizens of a totalitarian government could not be tempted by other kinds of social system. With the development of BCIs, this is no longer a necessary condition for stable totalitarianism.

2.6 Strategic Implications for Risk of Global Totalitarianism

In this section we explore some global strategic implications of BCIs. In particular, we argue that BCIs allow totalitarian regimes to be stable over the long term without requiring global totalitarianism, that BCIs make authoritarian regimes more likely to become totalitarian in the first place, and that they create a dangerous strategic equilibrium. In essence, BCIs make it easier for totalitarianism to occur, easier for it to be established globally, and easier for it to last indefinitely.

Totalitarian states may fail for a few reasons. Conquest by external enemies is one danger, and since totalitarian states tend to stagnate more than innovative liberal states do, it is a danger that may grow over time. Internal dangers exist too: citizens may choose to rebel after comparing their lives to those in more prosperous countries. Violent and nonviolent resistance movements have overthrown even harsh authoritarian regimes (Chenoweth 2011), and at least one totalitarian state has been overthrown by popular uprising (the Socialist Republic of Romania).

It has been suggested that the presence of successful liberal countries may tempt defection among the members of authoritarian and totalitarian countries; maintaining the morale of citizens and the inner elite is a primary challenge. Orwell (1945) and Caplan (2008) both propose that global totalitarianism would allow a totalitarian state to escape these risks of rebellion, as there would be no better condition for subjects to be tempted by or to compare their lives to. With BCIs, however, global totalitarianism is not necessary. Not only is identification of dissent easier; the capacity for dissent can be removed entirely, such that it never even begins, and loyalty and high morale can be all but guaranteed. Typically, it is hard to maintain commitment to totalitarian ideologies when free societies deliver higher levels of wealth and happiness with lower levels of brutality and oppression. BCIs could neutralise this problem, making temptation physically impossible, loyalty guaranteed, and regimes stable over the long term.

Global totalitarianism would no longer be required for a regime to be sustainable in the long term. Individual totalitarian countries could be stable internally thanks to BCIs, and stable against external threats through the possession of nuclear weapons, which powerfully discourage war and provide security from foreign nations. Being safe from both internal and external threats would significantly extend the lifespan of a totalitarian country.

A second impact of BCIs is that conventional dictatorships may become far more likely to turn totalitarian, as BCIs would make it easy and advantageous to do so. If there is an easy, cheap, and effective way to identify and remove all opportunity for dissent in a population, this is a powerful advantage for a dictatorial regime; survival itself would be a powerful reason to descend into totalitarianism. BCIs may therefore increase not just the longevity of totalitarian states, but also the likelihood that they arise in the first place.

Finally, this creates a worrying strategic situation which may increase the likelihood of totalitarianism entrenching itself globally. With BCIs, totalitarian countries would almost never fall to internal threats. Meanwhile, democratic countries which do not brainwash their citizens may still, at some point, degenerate into a more authoritarian form of government, at least for a short period. Democracies have rarely lasted more than a few centuries in history, and have often temporarily slid into dictatorship or authoritarianism. BCI technology means that if a country falls to totalitarianism, the fall will be permanent, as BCIs will allow it to maintain that state indefinitely. At present, democracies can collapse into dictatorship, and dictatorships can have revolutions and rise to democracy. With BCIs, democracies can still collapse, but dictatorships are able to last forever. Countries can also be threatened from outside, of course, but with the advent of nuclear weapons, external military conquest is a much less viable option. In short, with a combination of BCIs and nuclear weapons, a totalitarian country could be secure from within and from external threats as well.

This is a dangerous strategic equilibrium, as it means that free countries will still eventually fall, as they do at present, but when they do they will not be able to climb back out. Democracies could collapse to dictatorship, but dictatorships could never rise from that state. In a world where democracies are mortal but dictatorships live forever, the global system is inevitably inclined towards totalitarianism.

3. Risk Estimates

3.1 Probability Estimates of Existential Risk from BCI

This section offers a conservative Fermi estimate of the existential risk from BCIs in the next 100 years, using the same framework Toby Ord uses to assess other existential risks (Ord 2020). The following sections (3.2 to 3.6) unpack and justify the estimates taken here.

Ord outlines two ways of being conservative about a risk. One is to underrate its likelihood, so that we do not overestimate the risk. The other is to overrate its likelihood, so that we do not accidentally underprepare; when guiding action, this second approach is generally more useful, as it is more prudent and more likely to avoid catastrophic failure. However, to make the strongest case possible, in this paper we use the first approach, and show that even with our most conservative, lower-bound assumptions, the risk from BCIs is significant. In fact, our risk estimate would need to be almost an order of magnitude (~10x) lower to be merely on par with the probability Ord assigns to existential risk from nuclear war in the next 100 years (approximately 0.1%) (Ord 2020).

The estimates provided here are not intended to be precisely calibrated, but to estimate the risk to within an order of magnitude. These numbers should also clarify the dialogue, and allow future critics to examine in more detail which assumptions are overrated or underrated.

Our probability estimate is broken down into five factors:

D: Likelihood of development and mass popularisation of X-risk-relevant BCI technology.
E: Likelihood that some countries will begin to use BCIs to establish totalitarian control of their population.
S: Likelihood of global spread of totalitarianism.
R: Likelihood of total reach.
L: Likelihood of indefinite regime survival.

Total existential risk = D × E × S × R × L = 70% × 30% × 5% × 90% × 90% ≈ 0.0085 ≈ 1%

This estimate is based on the following values, which are explored in detail in sections 3.2 to 3.6.

Likelihood of development (D): 70%
Likelihood that countries begin using BCIs to control the population (E): 30%
Likelihood of global spread (S): 5%
Likelihood of total reach (assuming global spread is assured) (R): 90%
Likelihood of lasting indefinitely (L): 90%
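
As a sanity check on the arithmetic, the product can be computed directly. Below is a minimal sketch in Python; the variable names simply mirror the five factors defined above, and the 0.1% nuclear-war baseline is Ord’s figure quoted earlier:

```python
# Conservative lower-bound Fermi estimate of existential risk from BCIs
# over the next 100 years, as the product of the five factors defined above.
D = 0.70  # development and mass popularisation of X-risk-relevant BCIs
E = 0.30  # some country uses BCIs to control its population
S = 0.05  # global spread of totalitarianism
R = 0.90  # total reach
L = 0.90  # indefinite regime survival

risk = D * E * S * R * L
print(f"Total existential risk: {risk:.4f} ({risk:.2%})")  # 0.0085 (0.85%), ~1%

# Comparison with Ord's nuclear-war estimate for this century (~0.1%)
print(f"Ratio to nuclear-war risk: {risk / 0.001:.1f}x")   # 8.5x, "almost 10x"
```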

This is almost 10x the probability Ord assigns to existential risk from nuclear war by 2100, and slightly less than one tenth of the risk he assigns to AGI (Ord 2020).

Even if these estimates are too high by an order of magnitude, the risk would still be on par with existential risk from nuclear war, and thus would still be significant and worth addressing. If there were reason to confidently suggest that these estimates are off by two orders of magnitude or more, there would be a reasonable case for setting aside the risk from BCIs in favour of prioritising other existential risks. However, it seems unlikely that these estimates could reasonably be off by that much.

This estimate, however, refers to the overall risk. What is relevant is not the overall risk but the increase in risk due to BCIs. Bryan Caplan (2008) attaches a risk of 5% in the next 1000 years to the development of a global totalitarian regime which would last at least 1000 years. This evens out to approximately 0.5% per century, assuming the risk is evenly distributed across those 1000 years.

Even under these most conservative, lower-bound estimates, BCIs would almost double the risk of global totalitarianism within the next 100 years. If the true values are higher than these extreme lower bounds, the impact of BCIs as a risk factor may be significantly greater.

A less conservative estimate may also be illustrative of a reasonable level of expected risk.

D × E × S × R × L = 85% × 70% × 10% × 95% × 95% ≈ 0.054 = 5.4%

This less conservative estimate would mean an overall existential risk of 5.4% in the next 100 years: an increase of almost 11x on the baseline totalitarian risk given by Caplan, again assuming his risk is evenly distributed over 1000 years.
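
The same sketch, run with the less conservative values and compared against the 0.5%-per-century baseline derived from Caplan above:

```python
# Less conservative estimate, using the same five-factor product
risk = 0.85 * 0.70 * 0.10 * 0.95 * 0.95
print(f"Total existential risk: {risk:.3f} ({risk:.1%})")  # 0.054 (5.4%)

# Caplan's baseline: 5% over 1000 years, spread evenly across 10 centuries
baseline_per_century = 0.05 / 10  # 0.5% per century
print(f"Ratio to baseline: {risk / baseline_per_century:.1f}x")  # 10.7x, "almost 11x"
```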

3.2 Likelihood of Development

Technological development is notoriously hard to predict, so our method in this section is to keep our estimates in line with previous predictions by experts.

Estimates from experts in the field are very clear: the vast majority of BCI researchers surveyed (86%) believe that some form of BCI for healthy users will be developed and on the market within 10 years, and 64.1% believe that BCI prostheses will be developed and on the market within the same time frame (Nijboer et al 2011). Notably, in this survey, only 1.4% claimed that BCIs for healthy users will never be marketable and in use. The predictions made by these researchers should be taken with some scrutiny; almost ten years have passed since this survey was run, and BCI prosthetics are not yet on the market. However, the optimism of experts in the field says something about the level of progress being made. The time frame seems more likely to be decades than centuries.

Of course, present BCIs are fairly rudimentary, and one might argue that advanced mind reading might never be possible. However, the consensus among researchers is that it is both theoretically possible and likely: a recent survey of BCI researchers found universal consensus among participants that mental states will eventually be able to be read (Evers & Sigman 2013). Similarly, a survey of software engineers who had been presented with a working BCI found a universally shared belief that the contents of the mind will someday be read by machines, though the survey provided no time frame. This uniformity of belief was found despite divergent views about the nature of the mind among participants (Merrill & Chuang 2018).

Furthermore, there is significant reason to believe that the primary issues holding back commercial BCIs will soon be solved. Present challenges include scaling down the technology so that it is small enough to be usable, can target more neurons, and can target neurons more precisely, as well as scaling down cost so that it can be easily reproduced. The fact that multiple well-funded companies and multiple national governments are working towards these goals makes us believe that the likelihood of development is relatively high (DARPA 2019; Kotchetkov et al 2010; Tucker 2018). Reinforcing this, the global BCI market was valued at $1.36 billion in 2019, and is projected to reach $3.85 billion by 2027, growing at 14.3% per year (Gaul 2020).

Further justification can be found in the strategic environment in which this technology is being developed. It is a textbook case of a progress trap (Wright 2004): a situation in which multiple small incentives (much like bait) encourage a course of development which is beneficial in the short term but catastrophic in the long term. In this case, we will be incentivised to develop BCIs every step of the way, as there will be strong rewards for doing so, particularly in the form of relieving the burden of diseases like Alzheimer’s and Parkinson’s, and alleviating suffering for amputees by improving prosthetics. There are significant medical, moral, and economic advantages to implementing technology which can relieve that disease burden. This will provide continual incentives to develop the technology, but will leave us with a technology that may drastically increase existential risk.

In short, existential-risk-relevant technology for mind reading and mind control may be developed not for military purposes, but for valid economic and medical reasons. It will evolve slowly over time, solving one problem at a time, such as Alzheimer’s and Parkinson’s disease. Each advancement will seem logical, morally constrained, and necessary. The immediate benefits of the technology will be unassailable, and will make it more difficult to criticise. The second-order consequences, however, present serious challenges to the future of humanity.

Given this corporate competition, the expected growth in the global market, the number of corroborated BCI prototypes, the presence of a progress trap, and the near consensus about the possibility of the technology in surveys of researchers, we take the fairly conservative position that advanced, X-risk-relevant BCIs will be easier to develop, and more likely to be developed this century, than AGI (artificial general intelligence). This seems especially likely considering that we remain uncertain of even the possible methods by which AGI could be built. As for the probability we should attach to this claim, a 2011 survey of machine intelligence researchers suggested a 50% chance of the development of artificial general intelligence by 2100 (Sandberg & Bostrom 2011). As such, we suggest 70% as an extreme lower bound for the likelihood of development of BCIs. The probability is likely much higher, but we believe this estimate is a reasonable lower bound.

3.3 Likelihood that Some Governments Will Begin to Use BCIs to Maintain Control

Once the technology is developed and available, how likely is it that some governments will begin using BCIs to control their populations? The scenarios we consider here include governments which begin using BCIs to control a wide swathe of their population, either by forcibly implanting citizens with BCIs, or by overriding and controlling BCIs that were implanted consensually. To satisfy the criteria we have set out, the government must succeed in doing this on a large scale; for example, controlling a significant portion of the population (~20%). Having a few dozen citizens with compromised BCIs would be a negative event, but is not of sufficient scale for what we are discussing.

Probabilities here are difficult to estimate; however, there are a few conditions of this strategic environment which inform our estimates.

Firstly, there will be a strong incentive for authoritarian countries to use BCIs to control, surveil, and police their populations, and to take control of BCIs which are already implanted. Doing so would help stabilise their regimes, guarantee the loyalty of their subjects (disloyalty being a major threat to dictatorial regimes), identify dissidents, and crack down on or ‘rehabilitate’ those dissidents. The opportunity to use BCIs to enhance interrogation may also make certain kinds of decentralised resistance more difficult, as even former collaborators may be controlled and used by the totalitarian government they previously fought against. BCIs would allow for the perfection of totalitarianism within a country. Totalitarian, and even authoritarian, countries will have a strong incentive to roll this out in their own societies, assuming BCIs are cost-effective to build and use.

This is already visible to some extent. Much of the work on BCI-based detection of deception has been performed by state-sponsored institutions in China (Munyon 2018), which is currently rated as an authoritarian government on the Global Democracy Index (EIU 2019). Combined with China’s history of performing surgeries to treat mental illness without patient consent (Zamiska 2007; Zhang 2017), this offers historical precedent, and highlights the real possibility that BCIs could be used harmfully by an authoritarian state.

There is an added effect here. BCIs give authoritarian countries an extra incentive to descend into full totalitarianism, and thus make full totalitarianism more likely. The survival of a regime can be greatly assisted by cheap, effective surveillance and cheaper, more effective coercion. Through BCIs, surveillance will be easier and more extensive than ever before. More dangerously, taking an extreme level of control over even the most private thoughts and emotions of citizens will become possible, easy, cheap, and useful.

A second factor is the number of possible sources. The more authoritarian countries there are, the more agents there are which may choose to use BCIs to control their population, and so the higher the likelihood of catastrophic disaster. This claim rests on the assumption that authoritarian countries are more likely to use BCIs to enslave their populations than more democratic countries are. At present, there are 91 countries worldwide (54.5% of countries, and 51.6% of the world’s population) under non-democratic regimes; 54 of these are under authoritarian regimes (32.3% of all countries) and 37 are under hybrid regimes (22.2% of all countries) (EIU 2019). This may of course change; over the last hundred years, the number of authoritarian regimes worldwide has dropped (Pinker 2018), though it is unclear whether that trend should be expected to continue, halt, or reverse. Between 2007 and 2019, the number of authoritarian regimes globally rose once again: the Global Democracy Index saw stagnation and decline every year from 2014 to 2019, more than half the world’s countries had declining scores in 2017, and 2019 recorded the worst average global score since the index was first produced in 2006 (EIU 2019). There is a significant possibility that global democracy will continue to retreat.

However, the number of agents is only an issue if those agents are able to gain access to BCI technology. Once complex, cost-effective BCIs have been developed, they will be a strong competitive advantage to the countries that have them. A greatly relieved disease burden from conditions like Alzheimer’s and Parkinson’s would have positive economic effects (as well as the humanitarian effect of relieving suffering), as might BCI augmentation of the workforce. Individual countries will want BCIs because they will keep them economically competitive, in something comparable to an economic arms race. This does not in itself imply that countries will use BCIs to assist with totalitarianism, or force them on their citizens; just that many countries, even democratic ones, will have incentives to encourage the development and use of BCIs by their citizens. And the more countries that develop and use BCIs, the higher the chance that we run upon a country that will misuse them.

This may be less of a risk if there is a strong ability to prevent technological proliferation. This is conceivable; however, success at preventing proliferation over the last 100 years has depended heavily on features of the individual technology. With many weapons that we seek to limit access to, such as nuclear weapons, proliferation is stopped not by restricting knowledge (which is typically very difficult) but by restricting materials, e.g. access to enriched uranium. Based on current designs, BCIs do not appear to require any fundamentally rare or unique materials. Furthermore, it is easier to prevent proliferation of technologies used by governments and militaries than of technologies used by civilians. Because BCIs are being planned as a widespread civilian technology, other countries are likely to gain access to them, and to have the opportunity to reverse engineer them. With this in mind, anti-proliferation methods would need to focus on preventing the spread of knowledge about the development or security of BCIs.

Third, individual countries or regions will be strongly incentivised to manufacture BCIs domestically, or to support local companies which manufacture them rather than foreign ones. The reason can already be seen in 2020: many countries, quite reasonably, do not trust foreign companies with matters that may compromise their national security, or with information about their citizens (Hatmaker 2020). Countries are even less likely to trust foreign companies with access to, and control over, the brains of their subjects. As a result, major countries (or regions) are likely to develop BCIs domestically rather than use those developed abroad, and it is therefore likely that a diversity of BCIs will be developed, with varying capacities and varying levels of government control and access.

If individual countries are incentivised to manufacture their own BCIs, we are more likely to get a diversity of approaches to privacy. Some companies and countries may develop BCIs that give neither read nor write access to the company or the government (though this may be somewhat optimistic). Some may develop BCIs that give read access but not write access, allowing surveillance but not control. And other countries that desire it will be able to create their own BCIs where the company or government has both read and write access.

If there were only a single actor controlling the development of all BCIs, the situation would likely be easier to control and regulate. Having multiple state actors, each of whom can decide the level of coercion their BCIs are built with, is a much more complex scenario. If there is a competitive advantage to doing away with certain freedoms, there is a reasonable possibility that countries will expand their access to, and control over, BCIs. This is especially likely given the historical precedent that in times of war, countries are often incentivised to do things previously considered unthinkable. For example, prior to World War Two, strategic bombing (the intentional bombing of civilians) was considered unthinkable and against international law (League of Nations 1938; Roosevelt 1939); but these norms quickly broke down as the tactics were deemed necessary, and by the end of the war, the bombing of civilians was no longer treated as a war crime but as a common feature of war (Veale 1993; Ellsberg 2017). It is reasonably likely that in future crises, restrictions on read and write access may be removed, or emergency powers demanded by certain governments. This is a clear security risk, and it means that trusting governments to stand by their commitments may not be a viable long-term strategy. Further evidence can be seen in the steady expansion of government surveillance, even by democratic governments.

With this in mind, we attach a lower-bound probability of 30% to the chance that some country sets the precedent of using BCIs to control its population within the next 100 years. Considering the incentives companies worldwide will have to proliferate the technology, the difficulty of preventing proliferation of BCIs, the strong incentives authoritarian governments will have to use them, the number of authoritarian governments worldwide, and the historical precedent of authoritarian governments using whatever means they can to stabilise their regimes, we believe a 30% chance of such a catastrophic incident is extremely conservative as a lower bound.

3.4 Likelihood of Global Spread

We define global spread as a situation where either a single global totalitarian dictatorship is established which utilises BCIs to control the populace and has no major competitors, or a multi-polar world is established in which all major players are totalitarian dictatorships that heavily utilise BCIs to control their populaces.

There are a few mechanisms by which these conditions could occur. One is military conquest: a totalitarian dictatorship could become aggressively expansionist and seek to control all other countries. It is conceivable that BCIs would give dictatorial countries an advantage here, as they may push their soldiers further than non-dictatorships would be able to (given the latter’s focus on individual rights and welfare). However, it is also reasonably likely that human soldiers will become a less and less essential factor in future wars, so it is unclear whether BCIs will offer decisive military advantages. We also consider it implausible that the first BCI creators could retain a monopoly on the technology long enough to conquer all other major states.

Overall, we attach a very low likelihood to global domination by military conquest, primarily due to the presence of nuclear weapons. The threat of nuclear retaliation is a powerful reason to avoid invasion, and a reason to believe that attempts at global conquest through military means are relatively unlikely in the near future. As such, nuclear deterrence may stop a BCI-controlled country from spreading its control to other countries by force.

Second, it is possible that a country does not expand its influence globally by force, but instead gains global dominance slowly through economic influence. In this case, that country could popularise BCIs in tribute countries that are economically dependent on it, and eventually begin to intentionally misuse these implanted BCIs to maintain or expand control there. We consider this scenario far more likely than military conquest, in particular because multiple scholars and politicians already consider the rise of China as the single dominant global superpower (displacing the US) to be a realistic possibility by the middle of the century (Jacques 2009; Pillsbury 2015; Press Trust of India 2019).

There is also a third method, which we consider most likely. It is possible that no single country gains global dominance, but that countries independently fall to totalitarianism, creating a multi-polar global totalitarian order. At present, this is not a major concern, as dictatorships (even totalitarian dictatorships) can be overthrown; the likelihood of all countries falling to totalitarianism individually, without any rising back to democracy, is low. We suggest that BCIs make the path to such multi-polar totalitarianism much more likely.

Firstly, as previously discussed, BCIs increase the likelihood of totalitarianism within any individual country, because once a country becomes authoritarian, there is a powerful reason to transition to BCI-reinforced totalitarianism: it would strongly increase the odds of survival of both the dictator and the regime. Secondly, BCIs allow for a stable, sustainable form of totalitarianism which would be very hard to reverse, as rebellion from within would be literally ‘unthinkable’.

Furthermore, BCIs set up a dangerous strategic equilibrium. As established above, BCIs may make it highly unlikely that dictatorships can be overthrown from within, while nuclear weapons make it highly unlikely that major powers can be overthrown from outside. Barring major technological shifts that invalidate the strategic advantages of these two technologies, a country that falls to totalitarianism and uses them to maintain order is likely to last indefinitely. Democracies may choose to preserve the freedom of their constituents (though they also may not); but over time, individual powers may fall to authoritarianism and use BCIs to establish irreversible, immortal totalitarian dictatorships in their own regions. In a world where (a) countries that preserve mental freedom may degenerate into totalitarian countries, (b) totalitarian dictatorships are immortal, and (c) there is no available territory in which new free countries can be created, countries will steadily converge on dictatorship. A free country might at some point collapse into dictatorship and then reinforce itself with BCIs; but once a BCI-reinforced dictatorship is established, it is likely to last indefinitely. This provides a clear path to a multi-polar global totalitarian order.

Considering the rise of authoritarian and semi-authoritarian leaders in multiple countries over just the last 5 years, and the broader trend against democracy, with 113 countries seeing a net decline on the Global Democracy Index in the last 14 years (EIU 2019), this seems a realistic, if somewhat unpredictable, possibility.

However, even assuming such a negative trend were to continue, 100 years is a very short period for all major countries to fall to authoritarianism and totalitarianism; stronger trends would be needed to meet that time frame. It is of course possible that this process will be exacerbated, and a fall to authoritarianism made much more likely, by disasters such as climate change or a limited nuclear exchange, which might leave responding governments vulnerable to autocracy (Beeson 2010; Friche et al 2012; Martin 1990). Even so, we consider the probability of global totalitarianism being established in the next 100 years to be fairly low. This low estimate should not downplay the risk, however: if by 2100 the 54.5% of countries currently under authoritarian and hybrid regimes are reinforced by BCI-induced loyalty, we may be well on the way to global totalitarianism. If BCI-reinforced totalitarianism becomes entrenched in a great number of countries, the problem will be drastically harder to stop, and the overall risk will be higher. This presents an unusual strategic circumstance among existential risks: with more time, more countries will fall, and the more totalitarian countries there are, the harder the problem becomes to solve. The likelihood of global totalitarianism within 200 years may be far higher than within 100 years. As such, this is a problem that may be easier to address earlier rather than later.

With this in mind, we attach a fairly low probability to the global spread of totalitarianism within this century. Due to the nature of BCIs and the strategic environment they create, we expect this risk to rise over time, and to become increasingly difficult to address. The longer the problem is left unaddressed, the more countries will fall to authoritarianism and use BCIs to make their rule indefinitely stable. As such, risk from BCIs may not be evenly distributed in time: though the likelihood of global totalitarianism in the next 80 years may be relatively low, the 21st century may be seen as the turning point, and action in the 22nd century may come too late.

Considering the possible paths to global totalitarianism, we attach a lower-bound estimate of 5% to the likelihood of BCI-facilitated global spread of totalitarianism within 100 years, and a significantly higher likelihood over the coming centuries.

3.5 Likelihood of Complete Reach

Some existential risks have a reasonably high likelihood of affecting the vast majority of humanity, but no plausible mechanism by which they could affect all of humanity. For example, a natural pandemic might affect 80% of the world's population, yet be highly unlikely to reach heavily isolated groups.

Unlike natural pandemics, totalitarian risk does have a mechanism of this kind, because totalitarian systems act with intention. Once the majority of major nations are enslaved (using BCIs), the remainder become much more likely to be enslaved as well, as the totalitarian powers expand into nearby territories and secure their control.

With this in mind, we have taken 90% as an extreme lower bound, because future technological shifts may upset this balance in unpredictable ways that allow small populations to defend themselves more effectively. It is also conceivable that power dynamics between major powers will leave certain countries protected from invasion.

However, while this is an estimate of a global totalitarian system (or multiple systems) controlling all of humanity, we expect the impact of BCIs on this component of totalitarian risk to be minimal. If a world totalitarian government sought to expand its reach to all of humanity, it is highly likely that it could reach everyone who posed a conceivable threat without the use of BCIs.

Furthermore, even if there were some small portion of escapees whom it was not cost-effective for regimes to track down, this would have little effect on the potential of humanity's future, which would still be drastically diminished. With totalitarian risk, then, complete reach may not be necessary: even if the disaster reaches only 99.99% of living humans, it would still lock in an irreversibly negative future for all of our descendants, and so would still count as an existential catastrophe. Having a tiny fraction of humanity free would not have powerful longterm positive effects for humanity overall, as it is vanishingly unlikely that small, unthreatening populations could liberate humanity and recover a more positive future. As such, we expect the impact of BCIs on this variable to be rather small.

3.6 Likelihood of Lasting Indefinitely

As a point of definition, in this section we define indefinitely as "until the extinction of humanity and its descendants". In a totalitarian world that is entirely, or even mostly, implanted with BCIs, we consider indefinite survival of the regime to be highly likely. BCIs offer, for the first time, the ability to have an entirely loyal populace, with no dissenters.

In a scenario where all citizens (including the dictator) are implanted with BCIs, and their emotions are rewired for loyalty to the state, resistance to the dictatorship, or any end to dictatorship would be incredibly unlikely. One major route of escape from a totalitarian government is internal overthrow, whether by unhappy masses or by elites and closet skeptics within the party who are tempted by alternative systems. The ability to identify all such dissidents as soon as they even consider disloyalty makes revolt less likely; the ability to rewire subjects so they are not physically able to consider disloyalty in the first place would seem to make revolt close to impossible. As such, we believe it is quite likely that BCI technology will drastically increase the chance of a global totalitarian state lasting indefinitely.

However, it is also possible that not all subjects will be implanted or altered, because a small ruling class may wish to remain free. This offers a possible route of escape, if parts of the upper class rebel, or seize control and choose to end the subjugation and reprogram subjects back to their normal state. It is far from guaranteed, however, that the upper class would remain free of mental alteration. In many authoritarian regimes, the upper classes as well as the lower have been subject to coercion, and it is conceivable that certain levels of control (measures to ensure loyalty, etc.) would be implemented on all citizens. Dictators may even implant and slightly reprogram their heirs, to ensure they are not betrayed.

Furthermore, this escape scenario requires not just that the upper class remains largely free of mental surveillance and alteration, but also that it can run a coup without arousing suspicion and having its thoughts read, that it can take control of the nation's BCIs, and, most importantly, that it chooses to relinquish its power and free the subjects currently controlled by BCIs, rather than establishing its own regime.

Finally, there may be situations we have not considered which lower the likelihood of such a dictatorship lasting indefinitely. With this in mind, we have taken 90% as an extreme lower bound probability.
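To make the arithmetic behind these lower bounds explicit, the following is an illustrative sketch only: it assumes that the lower-bound estimates of this and the preceding subsections combine multiplicatively, with each term read as conditional on the previous one, and it sets aside any additional factors introduced earlier in the paper, which would scale the figure down.

\[
P(\text{indefinite global totalitarianism}) \;\gtrsim\; \underbrace{0.05}_{\text{spread}} \times \underbrace{0.90}_{\text{reach}} \times \underbrace{0.90}_{\text{lasting}} \;\approx\; 0.04
\]

Even on these extreme lower bounds, the implied likelihood within 100 years is on the order of a few percent.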

3.7 Level of Confidence in these Estimates

We have a low level of confidence that these estimates are exactly correct. However, we have a moderate degree of confidence that they are correct within an order of magnitude, and a high level of confidence that they are correct within two orders of magnitude.

If there is strong reason to think that our estimates are off by two orders of magnitude or more, then there is a reasonable case for setting aside the risk from BCIs in favour of prioritising other anthropogenic existential risks. Supposing our estimates are out by one order of magnitude, the risk would still be on par with the risk of extinction from nuclear war, and so should still be prioritised. If our estimates are relatively accurate (within an order of magnitude), then the existential risk contributed by BCIs may be several times greater than that from nuclear war. Finally, there is also the possibility that our estimates have been too conservative, and significantly underrate the level of risk from this technology.
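As a concrete check on these order-of-magnitude claims, take the illustrative ~4% lower bound sketched at the end of section 3.6, together with Ord's (2020) estimate of roughly 1 in 1,000 for existential catastrophe from nuclear war this century (both inputs are assumptions of this sketch rather than figures stated in this paragraph):

\[
\frac{0.04}{10} = 4 \times 10^{-3} \;\approx\; 4\,P_{\text{nuclear}}, \qquad \frac{0.04}{100} = 4 \times 10^{-4} \;<\; P_{\text{nuclear}} \approx 10^{-3}
\]

An overestimate of one order of magnitude would thus still leave the risk at a few times the nuclear figure, while an overestimate of two orders of magnitude would place it below.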

As such, taking account of this uncertainty leads us to believe that this topic deserves discussion and deep analysis. Even if we do not have accurate probabilities, if there is a reasonable chance of this technology having a significant negative impact on the future of humanity, then it would be deeply reckless to ignore the risk it poses. Ignoring it, however, is the current status quo.

4. Other Considerations

4.1 BCI as Existential Security Factor

In addition to being a risk factor, it is possible that BCIs may also serve as an existential security factor, decreasing the risk from AGI.

Most prominently, Elon Musk claims that BCIs may allow us to integrate with AI such that AI will not need to outcompete us (Young, 2019). It is unclear at present by what exact mechanism a BCI would assist here, whether it would actually decrease risk from AI, or whether the claim is valid at all. Such a 'solution' to AGI may also be entirely compatible with global totalitarianism, and may not be desirable. The mechanism by which integrating with AI would lessen AI risk remains undiscussed, and no serious academic work has been done on the topic.

4.2 Risk Tradeoffs Between Security Factors

Tradeoffs between the risks of global totalitarianism and AGI may be a useful point of discussion in the future. It is conceivable that BCIs could reduce risk from AI; however, at present it is unclear not only whether this would be the case, but also why giving an AGI access to human brains would reduce risk at all. Claims like this require far more detailed examination.

More importantly, even supposing BCIs were a security factor for risk from AGI, they are likely not the only such factor, and so may not be necessary to the solution at all. There may be other security factors which are even more effective at reducing risk from AGI, and which do not meaningfully increase any other existential risk. Such security factors would clearly be preferable.

As such, it is unclear that there is wisdom in definitively creating an existential risk factor in order to gain a mere chance of protection from a theorised existential risk.

Furthermore, building a technology which risks humanity's future in the hope of reducing another risk is a poor strategic move if the reward is not certain. If BCIs fail to address risk from AGI, the result may be an overall increase in existential risk, with no decrease whatsoever.

4.3 Recommendations for Future Research

Given the insights from this paper, we recommend a few directions for future research.

  1. More in-depth analysis of the level of increase in risk caused by BCIs. In particular, this would be assisted by stronger estimates of the baseline likelihood of totalitarian risk over the next 100 years.
  2. A search for possible solutions which might reduce the level of risk caused by BCIs, or which might prevent the development of this risk factor.
  3. Analysis of these solutions in terms of cost-effectiveness (i.e. how much they might cost compared to how much they might decrease risk).
  4. Critical exploration of the possible impacts (positive and negative) of BCIs on other risks, such as the development of AGI.

5. Conclusion

This paper has sought to identify the potential of BCIs to increase the likelihood of longterm global totalitarianism. We suggest that even under highly conservative estimates, BCIs contribute an increase to existential risk comparable to the existential risk posed by nuclear war, and almost double the risk from global totalitarianism given by current estimates. If less conservative estimates are accurate, the level of existential risk posed by BCIs may be an order of magnitude greater than this.

In addition, we identify that with the development of BCIs, totalitarianism would no longer require global spread to be sustainable in the longterm. We establish that the main weaknesses of totalitarian countries, caused by defection due to knowledge of better competing systems, can all be neutralised with BCI technology, specifically through the use of brain stimulation.

Finally, we establish that BCIs set up an unusual strategic environment, in which the existential risk is likely to become harder to solve the longer it is left. This gives further reason to address the risk sooner rather than later, and to put significant effort into either preventing the development of BCIs or guiding their development in a safe direction, if this is possible.

Given the current lack of discussion of this technology, and the high level of risk it poses (doubling the risk of global totalitarianism under our most conservative estimates, and increasing it by more than an order of magnitude under less conservative ones), we believe that this risk factor deserves far more discussion than it currently receives.

6. References

Abelson, J., Curtis, G., Sagher, O., Albucher, R., Harrigan, M., Taylor, S., Martis, B., & Giordani, B. (2005). Deep brain stimulation for refractory obsessive compulsive disorder. Biological Psychiatry, 57(5), 510-516.

Allison, B., Wolpaw, E., & Wolpaw, J. (2007). Brain-computer interface systems: progress and prospects. Expert Review of Medical Devices, 4(4), 463-474.

Anupama, H., Cauvery, N., & Lingaraju, G. (2012). Brain Computer Interface and its Types - A Study. International Journal of Advances in Engineering and Technology, 3(2), 739-745.

Beeson, M. (2010). The coming of environmental authoritarianism. Environmental Politics. 19(2), 276-294.

Bellman, C., Martin, M., MacDonald, S., Alomari, R., & Liscano, R. (2018). Have We Met Before? Using Consumer-Grade Brain-Computer Interfaces to Detect Unaware Facial Recognition. Computers in Entertainment, 16(2), 7.

Bernal, S., Celdran, A., Perez, G., Barros, M., & Balasubramaniam, S. (2019a). Cybersecurity in Brain Computer Interfaces: State-of-the-art, opportunities, and future challenges. https://arxiv.org/pdf/1908.03536.pdf. Accessed 18 June 2020.

Bernal, S., Huertas, A., & Perez, G. (2019b) Cybersecurity on Brain-Computer-Interfaces: attacks and countermeasures. Conference: V Jornadas Nacionales de Investigación en Ciberseguridad.

Bittar, R., Kar-Purkayastha, I., Owen, S., Bear, R., Green, A., Wang, S., & Aziz, T. (2005a). Deep brain stimulation for pain relief: a meta-analysis. Journal of Clinical Neuroscience, 12(5), 515-519.

Bittar, R., Otero, S., Carter, H., & Aziz, T. (2005b). Deep brain stimulation for phantom limb pain. Journal of Clinical Neuroscience, 12(4), 399-404.

Bostrom, N. (2012). Existential Risk Prevention as Global Priority. Global Policy, 4(1), 15-31.

Bunce, S., Devaraj, A., Izzetoglu, M., Onaral, B., & Pourrezaei, K. (2005). Detecting deception in the brain: a functional near-infrared spectroscopy study of neural correlates of intentional deception. Proc. SPIE Nondestructive Detection and Measurement for Homeland Security III, 5769.

Burwell, S., Sample, M. & Racine, E. (2017). Ethical aspects of brain computer interfaces: a scoping review. BMC Med Ethics, 18, 60.

Caplan, B. (2008). The Totalitarian Threat. In Nick Bostrom & Milan Cirkovic (eds) Global Catastrophic Risks. Oxford: Oxford University Press, 504-520.

Carmena, J., Lebedev, M., Crist, R., O’Doherty, J. Santucci, D., Dimitrov, D., Patil, P., Henriquez, C., & Nicolelis, M. (2003). Learning to control a brain-machine interface for reaching and grasping by primates. PLOS Biology, 1, 193-208.

Chenoweth, E. (2011). Why Civil Resistance Works. New York: Columbia University Press.

Constine, J. (2017). Facebook is building brain computer interfaces for typing and skin-hearing. https://techcrunch.com/2017/04/19/facebook-brain-interface/. Accessed 25 June 2020.

Cotton-Barratt, O., & Ord, T. (2015). Existential Risk and Existential Hope. Future of Humanity Institute - Technical Report, 1.

DARPA. (2019). Six paths to the nonsurgical future of brain-machine interfaces. https://www.darpa.mil/news-events/2019-05-20. Accessed on 16 June 2020.

Delgado, J.M.R. (1969). Physical Control of the Mind: Toward a Psychocivilized Society. New York: Harper and Row.

Demetriades, A.K., Demetriades, C.K., Watts, C., & Ashkan, K. (2010). Brain-machine interface: the challenge of neuroethics. The Surgeon, 8(5), 267–269.

Deuschl, G., Schade-Brittinger, C., Krack, P., Volkmann, J., Schafer, H., Botzel, K., Daniels, C., Deutschlander, A., Dillman, U., et al. (2005). A Randomized Trial of Deep-Brain Stimulation for Parkinson's Disease. New England Journal of Medicine, 355, 896-908.

Economist Intelligence Unit. (2019). Global Democracy Index 2019: A year of democratic setback and popular protest.

Ellsberg, D. (2017). The Doomsday Machine: Confessions of a Nuclear War Planner. New York: Bloomsbury Publishing USA.

Evers, K., & Sigman, M. (2013). Possibilities and limits of mind-reading: a neurophilosophical perspective. Consciousness and Cognition, 22, 887-897.

Fritsche, I., Cohrs, J., Kessler, T., & Bauer, J. (2012). Global warming is breeding social conflict: The subtle impact of climate change threat on authoritarian tendencies. Journal of Environmental Psychology, 32(1), 1-10.

Gaul, V. (2020). Brain Computer Interface Market by Type (Invasive BCI, Non-invasive BCI and Partially Invasive BCI), Application (Communication & Control, Healthcare, Smart Home Control, Entertainment & Gaming, and Others): Global Opportunity Analysis and Industry Forecast, 2020-2027. Allied Market Research. https://www.alliedmarketresearch.com/brain-computer-interfaces-market. Accessed 18 June 2020.

Glannon, W. (2009). Stimulating brains, altering minds. Journal of Medical Ethics, 35, 289–292.

Greenberg, B., Malone, D., Friehs, G., Rezai, A., Kubu, C., Malloy, P., Salloway, S., Okun, M., Goodman, W., & Rasmussen, S. (2006). Three-year outcomes in deep brain stimulation for highly resistant obsessive–compulsive disorder. Neuropsychopharmacology, 31, 2384–2393.

Guenther F.H., Brumberg, J.S., Wright, E.J., Nieto-Castanon, A., Tourville, J.A., Panko, M., Law, R., Siebert, S.A., Bartels, J., Andreasen, D., Ehirim, P., Mao, H., & Kennedy, P. (2009). A Wireless Brain-Machine Interface for Real-Time Speech Synthesis. PLoS ONE 4(12), e8218.

Gulati, T., Won, S.J., Ramanathan, DS., Wong, C., Bopepudi, A., Swanson, R., & Ganguly, K. (2015) Robust neuroprosthetic control from the stroke perilesional cortex. Journal of Neuroscience. 35(22), 8653-8661.

Halpern, C.H., Wolf, J.A., Bale, T.L., Stunkard, A.J., Danish, S.F., Grossman, M., Jaggi, J., Grady, S., & Baltuch, G. (2008). Deep brain stimulation in the treatment of obesity. Journal of Neurosurgery, 109(4), 625–634.

Hamani, C., McAndrews, M., Cohn, M., Oh, M., Zumsteg, D., Shapiro, C., Wennberg, R., & Lozano, A. (2008). Memory enhancement induced by hypothalamic/fornix deep brain stimulation. Annals of Neurology, 63, 119-123.

Hatmaker, T. (2020). Senate seeks to ban Chinese app TikTok from government work phones. https://techcrunch.com/2020/03/12/hawley-bill-tiktok-china/. Accessed on 17 June 2020.

Huxley, A. (1946). Science, Liberty and Peace. London: Harper.

Ienca, M. (2015). Neuroprivacy, Neurosecurity and brain hacking: Emerging issues in neural engineering. Bioethica Forum, 8(2), 51-53.

Ienca, M., & Haselager P. (2016). Hacking the brain: brain-computer interfacing technology and the ethics of neurosecurity. Ethics and Information Technology, 18, 117-129.

Ienca, M., Andorno, R. (2017). Towards new human rights in the age of neuroscience and neurotechnology. Life Sciences, Society and Policy, 13, 5.

Ifft, P., Shokur, S., Li, Z., Lebedev, M., & Nicolelis, M. (2013). A brain machine interface that enables bimanual arm movement in monkeys. Science Translational Medicine, 5(210), 210ra154.

Jacques, M. (2009). When China Rules the World: the End of the Western World and the Birth of a New Global Order. London: Allen Lane.

Kernel: Neuroscience as a Service. (2020). https://www.kernel.co/. Accessed 17 June 2020.

Klein E, Brown T, Sample M, Truitt AR, Goering S. (2015). Engineering the brain: ethical issues and the introduction of neural devices. Hastings Center Report, 45(6), 26–35.

Kotchetkov, I. S., Hwang, B.Y., Appelboom, G., Kellner, C.P., & Connolly, E.S. (2010). Brain-computer interfaces: military, neurosurgical, and ethical perspective. Neurosurgical Focus, 28(5).

Kumar, K., Toth, C., & Nath, R.K. (1997). Deep brain stimulation for intractable pain: a 15-year experience. Neurosurgery, 40(4), 736-747.

Lammel, S., Lim, B.K., Ran, C., Huang, K.W., Betley, M., Tye, K., Deisseroth, K., & Malenka, R.(2012). Input-specific control of reward and aversion in the ventral tegmental area. Nature, 491, 212–217.

Laxton, A.L., & Lozano, A.M. (2013). Deep Brain Stimulation for the Treatment of Alzheimer's Disease and Dementias. World Neurosurgery, 80(3-4), S28.e1-S28.e8.

League of Nations. (1938). Protection of Civilian Populations Against Bombing From the Air in Case of War. League of Nations Resolution, September 30 1938, www.dannen.com/decision/int-law.html#d

Levy, R., Lamb, S., & Adams, J. (1987). Treatment of chronic pain by deep brain stimulation: long term follow-up and review of the literature. Neurosurgery, 21(6), 885-893.

Lipsman, N., & Lozano, A. (2015). Cosmetic neurosurgery, ethics, and enhancement. Correspondence, 2(7), 585-586.

Martin, B (1990). Politics after a nuclear crisis. Journal of Libertarian Studies, 9(2), 69-78.

Martin, B (2001). Technology for Nonviolent Struggle. London: War Resisters International.

Mayberg, H., Lozano, A., Voon, V., McNeely, H., Seminowicz, D., Hamani, C., Schwalb, J., Kennedy, S. (2005). Deep brain stimulation for treatment-resistant depression. Neuron, 45(5), 651-660.

Mazzoleni, M., & Previdi, F. (2015). A comparison of classification algorithms for brain computer interface in drug craving treatment. IFAC Papers Online, 48(20), 487-492.

Merrill, N., Chuang, J. (2018). From Scanning Brains to Reading Minds: talking to engineers about Brain Computer Interface. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, Paper no 323. 1-11.

Moore, M.M. (2003). Real-world applications for brain-computer interface technology. IEEE Transactions on Neural Systems and Rehabilitation Engineering. 11(2), 162-165.

Moses, D.A., Leonard, M.K., Makin, J.G., & Chang, E. (2019). Real-time decoding of question-and-answer speech dialogue using human cortical activity. Nature Communications, 10, 3096.

Munyon, C. (2018). Neuroethics of Non-primary Brain Computer Interface: Focus on Potential Military Applications. Front Neurosci, 12, 696.

Musk, E. (2019). An Integrated brain-machine interface platform with thousands of channels. J Med Internet Res, 21(10).

Nijboer, F., Clausen, J., Allison, B., & Haselager, P. (2011). The Asilomar Survey: Stakeholders' Opinions on Ethical Issues related to brain-computer interfacing. Neuroethics, 6, 541-578.

Ord, T. (2020). The Precipice. Oxford: Oxford University Press.

Orwell, G. (1945). You and the Atomic Bomb. Tribune (19 October 1945), reprinted in The Collected Essays, Journalism and Letters of George Orwell, Vol. 4: In Front of Your Nose, 1945–1950, ed. Sonia Orwell and Ian Angus (London: Secker and Warburg, 1968), pp. 6-9. www.orwell.ru/library/articles/ABomb/english/e_abomb. Accessed 10 June 2020.

Perlmutter, J., & Mink, J. (2006). Deep Brain Stimulation. Annual Review of Neuroscience, 29, 229-257.

Pillsbury, M. (2015). The Hundred-Year Marathon: China's Secret Strategy to Replace America as the Global Superpower. New York: Henry Holt.

Pinker, S. (2018). Enlightenment Now: The Case for Reason, Science, Humanism and Progress. New York: Viking.

Press Trust of India. (2019). China most likely to become sole global superpower by mid-21st Century, says Republican Senator Mitt Romney. https://www.firstpost.com/world/china-most-likely-to-become-sole-global-superpower-by-mid-21st-century-says-republican-senator-mitt-romney-7749851.html. Accessed 22 June 2020.

Regalado, A. (2017). The entrepreneur with the $100 million plan to link brains to computers. Technology Review. https://www.technologyreview.com/2017/03/16/153211/the-entrepreneur-with-the-100-million-plan-to-link-brains-to-computers/. Accessed 17 June 2020.

Roelfsema, P., Denys, D & Klink, P.C. (2018). Mind reading and writing: the future of neurotechnology. Trends in Cognitive Sciences, 22(7). 598-610.

Roosevelt, F.D. (1939). An appeal to Great Britain, France, Italy, Germany and Poland to refrain from Air Bombing of Civilians. www.presidency.ucsb.edu/ws/?pid=15797. Accessed 11 June 2020.

Sandberg, A., & Bostrom, N. (2011). Machine Intelligence Survey. FHI Technical Report, (1).

Sharp, G. (1973). The Politics of Nonviolent Action. Boston, MA: Porter Sargent.

Statt, N. (2017). Kernel is trying to hack the human brain - but neuroscience has a long way to go. https://www.theverge.com/2017/2/22/14631122/kernel-neuroscience-bryan-johnson-human-intelligence-ai-startup. Accessed 17 June 2020.

Suthana, N., Haneef, Z., Stern, J., Mukamel, R., Behnke, E., Knowlton, B., & Fried, I. (2012). Memory enhancement and deep-brain stimulation of the entorhinal area. N Engl J Med. 366, 502-510.

Torres, P. (2016). Climate Change is the most urgent existential risk. Future of Life Institute. https://futureoflife.org/2016/07/22/climate-change-is-the-most-urgent-existential-risk/?cn-reloaded=1. Accessed 11 June 2020.

Tsai, H.C., Zhang, F., Adamantidis, A., Stuber, G., Bonci, A., de Lecea, L., & Deisseroth, K. (2009). Phasic firing in dopaminergic neurons is sufficient for behavioral conditioning. Science, 324(5930), 1080–1084.

Tucker, P. (2018). Defence Intel Chief Worried About Chinese ‘Integration of Human and Machines’ in Defence One. https://www.defenseone.com/technology/2018/10/defense-intel-chief-worried-about-chinese-integration-human-and-machines/151904/. Accessed 17 June 2020.

Veale, F.J.P (1993). Advance to Barbarism: The Development of Total Warfare from Sarajevo to Hiroshima. London: The Mitre Press.

Wright, R (2004). A Short History of Progress. Toronto: Anansi Press.

Wu, S., Xu, X., Shu, L., & Hu, B (2017). Estimation of valence of emotion using two frontal EEG channels. In 2017 IEEE international conference on bioinformatics and biomedicine (BIBM) (pp. 1127–1130).

Young, C. (2019). Key takeaways from Elon Musk’s Neuralink Presentation: Solving Brain Diseases and Mitigating AI Threat. https://interestingengineering.com/key-takeaways-from-elon-musks-neuralink-presentation-solving-brain-diseases-and-mitigating-ai-threat. Accessed 25 June 2020.

Zamiska, N. (2007). In China, brain surgery is pushed on the mentally ill. Wall St. J. https://www.wsj.com/articles/SB119393867164279313. Accessed 17 June 2020.

Zhang, S., Zhou, P., Jiang, S., Li, P., & Wang W. (2017). Bilateral anterior capsulotomy and amygdalotomy for mental retardation with psychiatric symptoms and aggression: A case report. Medicine (Baltimore), 96(1), 10-13.

Comments (12)

Thanks for this substantive and useful post. We've looked at this topic every few years in unpublished work at FHI to think about whether to prioritize it. So far it hasn't looked promising enough to pursue very heavily, but I think more careful estimates of the inputs and productivity of research in the field (for forecasting relevant timelines and understanding the scale of the research) would be helpful. I'll also comment on a few differences between the post and my models of BCI issues:

  • It does not seem a safe assumption to me that AGI is more difficult than effective mind-reading and control, since the latter requires a complex interface with biology, with large barriers to effective experimentation; my guess is that this sort of comprehensive regime of BCI capabilities will be preceded by AGI, and your estimate of D is too high
  • The idea that free societies never stabilize their non-totalitarian character, so that over time stable totalitarian societies predominate, leaves out the applications of this and other technologies to stabilizing other societal forms (e.g. security forces making binding oaths to principles of human rights and constitutional government, backed by transparently inspected BCI, or introducing AI security forces designed with similar motivations), especially if the alternative is predictably bad; also other technologies like AGI would come along before centuries of this BCI dynamic
  • Global dominance is blocked by nuclear weapons, but dominance of the long-term future through a state that is a large chunk of the world outgrowing the rest (e.g. by being ahead in AI or space colonization once economic and military power is limited by resources) is more plausible, and S is too low
  • I agree the idea of creating aligned AGI through BCI is quite dubious (it basically requires having aligned AGI to link with, and so is superfluous; and could in any case be provided by the aligned AGI if desired long term), but BCI that actually was highly effective for mind-reading would make international deals on WMD or AGI racing much more enforceable, as national leaders could make verifiable statements that they have no illicit WMD programs or secret AGI efforts, or that joint efforts to produce AGI with specific objectives are not being subverted; this seems to be potentially an enormous factor
  • Lie detection via neurotechnology, or mind-reading complex thoughts, seems quite difficult, and faces structural issues in that the representations for complex thoughts are going to be developed idiosyncratically in each individual, whereas things like optic nerve connections and the lower levels of V1 can be tracked by their definite inputs and outputs, shared across humans
  • I haven't seen any great intervention points here for the downsides, analogous to alignment work for AI safety, or biosecurity countermeasures against biological weapons
  • If one thought BCI technology was net helpful one could try to advance it, but it's a moderately large and expensive field so one would likely need to leverage by advocacy or better R&D selection within the field to accelerate it enough to matter and be competitive with other areas of x-risk reduction activity

I think if you wanted to get more attention on this, likely the most effective thing to do would be a more rigorous assessment of the technology and best efforts nuts-and-bolts quantitative forecasting (preferably with some care about infohazards before publication). I'd be happy to give advice and feedback if you pursue such a project.

This seems like a thorough consideration of the interaction of BCIs with the risk of totalitarianism. I was also prompted to think a bit about BCIs as a GCR risk factor recently and had started compiling some references, but I haven't yet refined my views as much as this.

One comment I have is that the risk described here seems to rely not just on the development of any type of BCI but on a specific kind, namely, relatively cheap consumer BCIs that can nonetheless provide a high-fidelity bidirectional neural interface. It seems likely that this type of BCI would need to be invasive, but it's not obvious to me that invasive BCI technology will inevitably progress in that direction. Musk hints that Neuralink's goals are mass-market, but I expect that regulatory efforts could limit invasive BCI technology to medical use cases, and likewise, any military development of invasive BCI seems likely to lead to equipment that is too expensive for mass adoption (although it could provide the starting point for commercialization). Although DARPA's Next-Generation Nonsurgical Neurotechnology (N3) program does have the goal of developing high-fidelity non- or minimally-invasive BCIs, my intuition is that they will not achieve their goal of reading from one million and writing to 100,000 neurons non-invasively, but I'm not sure about the potential of the minimally-invasive path. So one theoretical consideration is what percentage of a population needs to be thought-policed to retain effective authoritarian control, which would then indicate how commercialized BCI technology would need to be before it could become a risk factor.

In my view, a reasonable way to steer BCI development away from posing a risk factor for totalitarianism would be to encourage the development of high-fidelity, non-invasive, and read-focused consumer BCI. While non-invasive devices are intrinsically more limited than invasive ones, if consumers can still be satisfied by their performance then it will reduce the demand to develop invasive technology. Facebook and Kernel already look like they are moving towards non-invasive technology. One company that I think is generally overlooked is CTRL-Labs (now owned by Facebook), who are developing an armband that acquires high-fidelity measurements from motor neurons - although this is a peripheral nervous system recording, users can apparently repurpose motor neurons for different tasks and even learn to control the activity of individual neurons (see this promotional video). As an aside, if anybody is interested in working on non-invasive BCI hardware, I have a project proposal for developing a device for acquiring high-fidelity, non-invasive central nervous system activity measurements that I'm no longer planning to pursue but am able to share.

The idea of BCIs that punish dissenting thoughts being used to condition people away from even thinking about dissent may have a potential loophole, in that such conditioning could lead people to avoid thinking such thoughts or it could simply lead them to think such thoughts in ways that aren't punished. I expect that human brains have sufficient plasticity to be able to accomplish this under some circumstances and while the punishment controller could also adapt what it punishes to try and catch such evasive thoughts, it may not always have an advantage and I don't think BCI thought policing could be assumed to be 100% effective. More broadly, differences in both intra- or inter-person thought patterns could determine how effective BCI is for thought policing. If a BCI monitoring algorithm can be developed using a small pool of subjects and then applied en masse, that seems much riskier than if the monitoring algorithm needs to be adapted to each individual and possibly updated over time (though there would be scope for automating updating). I expect that Neuralink's future work will indicate how 'portable' neural decoding and encoding algorithms are between individuals.

I have a fun anecdotal example of neural activity diversity: when I was doing my PhD at the Queensland Brain Institute I did a pilot experiment for an fMRI study on visual navigation for a colleague's experiment. Afterwards, he said that my neural responses were quite different from those of the other pilot participant (we both did the navigation task well). He completed and published the study and asked the other pilot participant to join other fMRI experiments he ran, but never asked me to participate again. I've wondered if I was the one who ended up having the weird neural response compared to the rest of the participants in that study... (although my structural MRI scans are normal, so it's not like I have a completely wacky brain!)

The BCI risk scenario I've considered is whether BCIs could provide a disruptive improvement in a user's computer-interface speed or another cognitive domain. DARPA's Neurotechnology for Intelligence Analysts (NIA) program showed a 10x increase in image analysis speed with no loss of accuracy, using just EEG (see here for a good summary of DARPA's BCI programs until 2015). It seems reasonable that somewhat larger speed improvements could be attained using invasive BCI, and this speed improvement would probably generalize to other, more complicated tasks. When advanced BCIs are limited to early adopters, could such cognitive advantages facilitate risky development of AI or bioweapons by small teams, or give operational advantages to intelligence agencies or militaries? (happy to discuss or share my notes on this with anybody who is interested in looking into this aspect further)

Thanks for this post! It seems to me that one additional silver lining of BCI is that the mind-reading that could be used for totalitarianism could also be used to enforce treaties. World leaders could agree to, e.g., say twice a day "I do not now, nor have I ever, made any intention to break treaty X under any circumstances" while under a lie detector. This could make arms races and other sorts of terrible game-theoretic situations less bad. I think Bostrom first made this point.

Most prominently, Elon Musk claims that BCIs may allow us to integrate with AI such that AI will not need to outcompete us (Young, 2019). It is unclear at present by what exact mechanism a BCI would assist here, whether it would actually decrease risk from AI, or whether the claim is valid at all. Such a 'solution' to AGI may also be entirely compatible with global totalitarianism, and may not be desirable. The mechanism by which integrating with AI would lessen AI risk remains undiscussed, and no serious academic work has been done on the topic.

We have a bit of discussion about this (predating Musk's proposal) in section 3.4. of Responses to Catastrophic AGI Risk; we're also skeptical, e.g. this excerpt from our discussion:

De Garis [82] argues that a computer could have far more processing power than a human brain, making it pointless to merge computers and humans. The biological component of the resulting hybrid would be insignificant compared to the electronic component, creating a mind that was negligibly different from a 'pure' AGI. Kurzweil [168] makes the same argument, saying that although he supports intelligence enhancement by directly connecting brains and computers, this would only keep pace with AGIs for a couple of additional decades.
The truth of this claim seems to depend on exactly how human brains are augmented. In principle, it seems possible to create a prosthetic extension of a human brain that uses the same basic architecture as the original brain and gradually integrates with it [254]. A human extending their intelligence using such a method might remain roughly human-like and maintain their original values. However, it could also be possible to connect brains with computer programs that are very unlike human brains and which would substantially change the way the original brain worked. Even smaller differences could conceivably lead to the adoption of 'cyborg values' distinct from ordinary human values [290].
Bostrom [49] speculates that humans might outsource many of their skills to non-conscious external modules and would cease to experience anything as a result. The value-altering modules would provide substantial advantages to their users, to the point that they could outcompete uploaded minds who did not adopt the modules. [...]
Moravec [194] notes that the human mind has evolved to function in an environment which is drastically different from a purely digital environment and that the only way to remain competitive with AGIs would be to transform into something that was very different from a human.

I haven't had a chance to read this post yet, but just wanted to mention one paper I know of that does discuss brain-computer interfaces in the context of global catastrophic risks, which therefore might be interesting to you or other readers. (The paper doesn't use the term existential risk, but I think the basic points could be extrapolated to them.) 

The paper is Assessing the Risks Posed by the Convergence of Artificial Intelligence and Biotechnology. I'll quote the most relevant section of text below, but table 4 is also relevant, and the paper is open-access and (in my view) insightful, so I'd recommend reading the whole thing.

Beyond the risks associated with medical device exploitation, it is possible that in the future computer systems will be integrated with human physiology and, therefore, pose novel vulnerabilities. Brain-computer interfaces (BCIs), traditionally used in medicine for motor-neurological disorders, are AI systems that allow for direct communication between the brain and an external computer. BCIs allow for a bidirectional flow of information, meaning the brain can receive signals from an external source and vice versa. 

The neurotechnology company, Neuralink, has recently claimed that a monkey was able to control a computer using one of their implants. This concept may seem farfetched, but in 2004 a paralyzed man with an implanted BCI was able to play computer games and check email using only his mind. Other studies have shown a "brain-brain" interface between mammals is possible. In 2013, one researcher at the University of Washington was able to send a brain signal captured by electroencephalography over the internet to control the hand movements of another by way of transcranial magnetic stimulation. Advances are occurring at a rapid pace and many previous technical bottlenecks that have prevented BCIs from widespread implementation are beginning to be overcome.

Research and development of BCIs have accelerated quickly in the past decade. Future directions seek to achieve a symbiosis of AI and the human brain for cognitive enhancement and rapid transfer of information between individuals or computer systems. Rather than having to spend time looking up a subject, performing a calculation, or even speaking to another individual, the transfer of information could be nearly instantaneous. There have already been numerous studies conducted researching the use of BCIs for cognitive enhancement in domains such as learning and memory, perception, attention, and risk aversion (one being able to incite riskier behavior). Additionally, studies have explored the military applications of BCIs, and the field receives a bulk of its funding from US Department of Defense sources such as the Defense Advanced Research Projects Agency. 

While the commercial implementation of BCIs may not occur until well into the future, it is still valuable to consider the risks that could arise in order to highlight the need for security-by-design thinking and avoid path dependency, which could result in vulnerabilities—like those seen with current medical devices—persisting in future implementations. Cyber vulnerabilities in current BCIs have already been identified, including those that could cause physical harm to the user and influence behavior. In a future where BCIs are commonplace alongside advanced understandings of neuroscience, it may be possible for a bad actor to achieve limited influence over the behavior of a population or cause potential harm to users. This issue highlights the need to have robust risk assessment prior to widespread technological adoption, allowing for regulation, governance, and security measures to take identified concerns into account.

I can see a scenario where BCI totalitarianism sounds like a pretty good thing from a hedonic utilitarian point of view:

People are usually more effective workers when they're happy. So a pragmatic totalitarian government (like Brave New World's), rather than a sadistic or sadistic/pragmatic one (1984, maybe), would want its people to be happy all the time, and would stimulate whatever in the brain makes them happy. To suppress dissent it would just delete thoughts and feelings in that direction as painlessly as possible. Competing governments would have an incentive to be pragmatic rather than sadistic.

Then the risk comes from the possibility that humans aren't worth keeping around as workers, due to automation.

Thanks for this important work. For reference, some estimates of existential risk from nuclear war are one or two orders of magnitude higher than in The Precipice, e.g. here.
When you were discussing the difficulty of overthrowing a BCI dictator, I couldn't help thinking resistance is futile.

Interesting hypothetical. Questions:

1. What precisely makes these totalitarianisms immortal? The people surrounding the dictator can change. The dictator himself can die (even with Radical Life Extension, accidents will still happen). Or he might change his mind on certain topics over the centuries and millennia. Eternal goal consistency may perhaps be maintained by a carefully constructed AGI, but this would move this scenario more into the category of X Risks posed by machine intelligence.

2. Romania was more of an elite coup against Ceausescu by the security services, the mass protests were mere backdrop to that. This is incidentally why Ceausescu was executed so quickly, nobody in the Romanian elites was interested in a public trial where their own roles in the regime would be opened up to the limelight. In fact, I cannot think of a single case of a successful revolt against a dictatorship (authoritarian or totalitarian) that was not accompanied by some degree of elite defections. No elite defections - no revolution. (We are seeing that now in Belarus). This point really just reinforces the first issue - the question of how precisely the ruling dictator/authority is to maintain goal consistency over sufficiently long stretches of time. (E.g., even a centuries long totalitarian cyber-panopticon, unpleasant as it may be for its denizens, will not constitute an existential risk if it does in fact collapse or fade away over time).

Thank you for this information. I hope the Global Priorities Institute and FHI take notice.

Easily available BCI may fuel a possible epidemic of wireheading, which may result in civilisational decline.

First of all, thank you for this post! Well-written article on a topic I'm surprised I haven't thought about myself.

My first thoughts are that the post does well in exploring BCI from a risk perspective, but that more would be needed. I think this quote is a good place to start:

" In a scenario where all citizens (including the dictator) are implanted with BCIs, and their emotions are rewired for loyalty to the state, resistance to the dictatorship, or any end to dictatorship would be incredibly unlikely. "

That is, if everybody is loyal to this state, is it really to be understood as a dictatorship? I don't see today's Western democracies, based on free will, as the last and only legitimate form of government, and I think we need to adjust to the idea that the governments of tomorrow may be unlike ours, based on individualism and free will. I've yet to read about it more in-depth myself, but what Bard and Söderqvist call the sensocracy could be worth looking at.

I wonder if the probability L = 90% is an overestimate of the likelihood of lasting indefinitely. It seems reasonable that the regime could end because of a global catastrophe that is not existential, reverting us to preindustrial society.  For example, nuclear war could end regimes if/while there are multiple states, or climate change could cause massive famine. On the other hand, is it reasonable to think that BCI would create so much stability that even the death of a significant proportion of its populace would not be able to end it?