
Summary: When observing intelligence agencies, the hardened parts are hard to see and the soft, corrupt parts are easy to see. This creates a bias: large numbers of people overestimate how prevalent the easily-observed soft and harmless parts are. It can even produce a dangerous and widespread belief, including among people whose careers are much further along than yours, that an entire intelligence agency is harmless and irrelevant when it actually isn't. Intelligence agencies are probably a mix of less-functional, less-relevant parts and more-functional, more-relevant parts that have a disproportionately large influence over governments and policies; it is a mistake to assume that intelligence agencies are homogeneously composed of non-functional, non-relevant parts not worth paying any attention to, even if that belief is a popular norm.

In the transformative slow takeoff scenario anticipated by people like Christiano and Kokotajlo, forecasters need to pay attention to all sources and forms of power that will react/interact with upheavals and change the course of history, not just the economic power and lawmaking/regulatory power stemming from legislative bodies like the US Congress.  


Why intelligence agencies are dangerous 

There are a wide variety of situations where an intelligence agency suddenly becomes relevant, without warning. For example, most or all of the US national security establishment might suddenly and unanimously change its stance on gain-of-function (GOF) research, such as if US-China or US-Russia relations once again hit a new 25-year low (which has actually happened frequently over the last few years).

The leadership of an agency, a powerful individual with the authority to execute operations, or a corrupt clique might make a judgement that the best way to expedite or restart GOF research is to target the people who are most efficient or effective at opposing it.

This need not actually be the most effective way to expedite or protect GOF research; it just needs to look that way, sufficiently for someone to sign off on it, or even for them to merely think that it would look good to their boss.

Competent or technologically advanced capabilities can obviously coexist with incompetent administration and decisionmaking in the mixed-competence model of intelligence agencies. An intelligence agency that is truly harmless, irrelevant, and not worth paying attention to (as opposed to having an incentive to falsely give off that appearance) would have to be an agency that is both technologically unsophisticated and too corrupt for basic functioning, such as running operations.

This would be an extremely naive belief to hold about the intelligence agencies of the US, Russia, and China; particularly the US and China, which have broad prestige, sophisticated technology, and thriving private-sector talent pools to recruit from.

When calculating the expected value of policy advocacy work that someone somewhere absolutely must carry out, like pushing for sensible policymaking on GOF research that could cause human extinction, many people are already aware that the risk of the relevant community disappearing or dissolving substantially reduces the expected value of everything that community produces; e.g. a 10% chance of the community ceasing to exist reduces the expected value produced by that entire community by roughly 10%.
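The discount above works like a simple survival-weighted expected value. A minimal sketch (the numbers are purely illustrative, not estimates from the post):

```python
# Toy survival-weighted expected value: if a community produces value V
# conditional on surviving, and dissolves with probability p (producing ~0),
# its expected value is V * (1 - p). All numbers here are illustrative.

def discounted_ev(value_if_survives: float, p_dissolution: float) -> float:
    """Expected value of a community's output given a dissolution risk."""
    return value_if_survives * (1.0 - p_dissolution)

baseline = discounted_ev(100.0, 0.0)   # no dissolution risk
at_risk = discounted_ev(100.0, 0.10)   # 10% dissolution risk -> ~10% less EV
```

This also shows why even small existential risks to a community are worth buying down: the discount applies to everything the community would ever produce.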

Most people I've encountered have in mind a massive totalitarian upheaval, like those of the early-to-mid 20th century, which would be a hard boundary between being secure and not being secure. However, in the 21st century, especially after COVID and the 2008 recession, experts and military planners are more focused on the international balance of power (e.g. the strength of the US, Russia, and China relative to each other and to other independent states) being altered by economic collapse or alliance paralysis rather than by revolution or military conquest. This is because the world today is fundamentally different from what it was 70 years ago.

It makes more sense to anticipate slower and incomplete backsliding, with results like a shift toward a hybrid regime, in which abuses of power by intelligence and internal security agencies become increasingly commonplace due to corruption, and accountability erodes due to a broad priority placed on hybrid warfare and on preventing foreign adversaries like Russia and China from leveraging domestic elites such as billionaires, government officials, and celebrities/thought leaders who are influential among key demographics (like Yann LeCun).

An example of an angle on this, from the top comment on Don't Take the Organization Chart Literally:

...a lot of what goes on in government (and corrupt corporate orgs) is done with tacit power. Few DOJ, CIA, and FBI officers have a full picture of just how their work is misaligned with the interests of America. But most all of them have a general understanding that they are to be more loyal to the organization than they are to America.[1] Through his familial and otherwise corrupt connections, [Department of Justice leader] Barr is part of the in-group at the US corrupt apparatus. It can be as simple as most inferior officers knowing he's with them.

So Barr doesn't have to explicitly tell the guards to look the other way, he doesn't have to tell the FBI to run a poor investigation, he doesn't have to tell the DOJ to continue being corrupt ... Lower-level bosses who have the full faith and confidence of their inferiors put small plans into place to get it done. It's what the boss wants and the boss looks out for them.

Picture Musk's possible purchase of Twitter. Do you think that if Musk bought Twitter, even as a private owner, he would suddenly have full control of the whole apparatus? Of course not. The people with real power would be his inferiors who have been there for a while and are part of the in-group. The only way for Musk to get a hold of Twitter would be to fire quite a lot of people, many who are integral to the organization. 


It's hard to see the hardened parts

(Note: this is a cleaned up version of a previous post, whose quality I wasn't satisfied with. Feel free to skip this if you've already read it). 

Some social structures can evolve that allow secrets to be kept among larger numbers of people. For example, intelligence agencies are not only compartmentalized; the employees making them up all assume that if someone approaches them offering to buy secrets, it's probably one of the routine counterintelligence operations within the agency that draw out and prosecute untrustworthy employees. As a result, employees basically one-box their agency and virtually never accept bribes from foreign agents, no matter how ludicrously large the promised payout. And any that do fall through the cracks are hard to disentangle from disinformation by double/triple agents posing as easily-bribed people.

It's much more complex than that, but that's just one example of a secret-keeping system evolving inside institutions: effective enough not just to keep secrets, but also to thwart or misinform outside agents intelligently trying to rupture secret-keeping networks (and it emerged almost a hundred years ago or earlier).

The upper echelons of intelligence agencies are difficult to observe. It is not clear whether a lack of visible output is caused primarily by incompetence and disinterest, or whether the incentive dynamics inside such a powerful structure cause competent individuals to waste their capabilities on internal competition and on eliminating their colleagues. However, it is dangerous to take the average lower- and mid-level government official or bureaucrat, who is easier to access and observe, and extrapolate to the difficult-to-observe higher echelons. The higher echelons might be substantially out-of-distribution. For example, consider a thought experiment with the oversimplified Gervais model of a corporate hierarchy (in which the "sociopaths" are highly social and love potlucks, the "clueless" are a reservoir of deep organizational insights, and the "losers" live very happy lives, the main thing they "lose" to being the same aging process as everyone else). An individual progressing up the pyramid would gradually discover a thanksgiving-turkey effect: human beings self-sort, so the people at the top of the organization, who already successfully pursued wealth incentives, have unusual and qualitatively different combinations of personal traits than the more easily-observed people at the middle and bottom of the pyramid.

The Gervais model is explicitly stated to be a company hierarchy; it is explicitly stated not to describe intelligence agencies or interesting nonprofits, which experience Moloch in a different way than most private-sector firms.

Although the libertarian school of thought is the most grounded in empirical observations of government being generally incompetent, this should not distract us from the fundamental point that the top 20% of an organization, holding 80% of the power, is largely unknown territory due to the difficulty of observation, and all sorts of strange top-specific dynamics may explain government's failures. Although models must be grounded in observations, it is still risky to overdepend on the libertarian school of thought, which largely takes low-level bureaucrats, imagines government as uniformly composed of them, and extrapolates those individuals to the highest and most desired positions. Intelligence agencies have surely noticed that posing as an incompetent bureaucrat makes for excellent camouflage, and it is also well known throughout government that mazes of paperwork deter predators.

The top-performing altruists who make up EA, substantially fewer than 0.1% of all altruists globally, are at the extreme peak due to highly unusual and extreme circumstances, including substantial competence, luck, intelligence, motivation, and the capacity to spontaneously organize in productive ways to achieve instrumentally convergent goals. Unlike EA, however, the top 0.1% of people at intelligence, military, and internal security agencies face incredible evolutionary optimization pressure: the threat of regime change, a wide variety of wealthy and powerful elites looking up at them, and continuous strategic infiltration by foreign intelligence agencies. It is not at all clear what sorts of structures would end up evolving at the peak of power brokers in a democracy, and it is not epistemically responsible to automatically defer to the libertarian school of thought here, even if that school is correct about the countless people whose lives were ruined by incompetent government intervention and regulation. Competent people and groups still get sorted to the top, where they face darwinistic pressures, even if a large majority of competent people bounce off bureaucratic nonsense along the way. The operations of intelligence agencies are the results we observe from such people being given incredible power, impunity, the ability to monopolize information, and the ability to exploit power and information asymmetries between themselves and the large, technologically advanced private corporations they share a country with (with corporate lobbyists available to facilitate, and even cash-incentivize, a wide variety of complex bargains between the two, notably including revolving-door employment of top talent, which is further facilitated by the power and prestige of intelligence agencies).


It's easy to see the soft parts

Intelligence agencies are capable of penetrating hardened bureaucracies and other organizations, moving laterally by compromising networks of people, and steering the careers of people in executive departments and legislative branches/parliaments around the world, likely including domestically.

People with relevant experience understand that moving upwards and laterally through a bureaucracy is a science (it is also many other things, most of them extremely unpleasant). It is also a much more precise science in the minds of people who have advanced further than you than it is in yours; given that they were so successful, they have likely done many things right and learned many things along the way that you haven't.

Likewise, it is an even more precise science in the minds of the specialists at intelligence agencies, which have been systematically penetrating, controlling, and deceiving the hardened parts of hardened bureaucracies (and other organizations) all over the world for generations (though only a handful of generations). Human civilization is built on a foundation of specialization and division of labor, and intelligence agencies are the people who specialized in doing this.[1]

This asymmetry of information is made even greater by the necessary dependence on anecdata, and further complicated by the phenomenon of many people making decisions based on vibes from their time working at one specific part of an agency.

This is notable because the parts of an agency with high turnover, where a disproportionately large number of people enter and exit, occupy a disproportionately large share of observation and testimony. This further contributes to the dynamic where it is hard to see the hardened parts and easier to see the softer parts: corruption, incompetence, thuggery/factionalism, and low engagement are each known to increase turnover substantially, whereas units with high-value secrets, relatively competent management, interesting work, and mission-oriented workers have lower turnover and are also more amenable to recruiting top talent from top companies.

Furthermore, there is the risk of anti-inductive situations, which come with the territory of evaluating organizations whose missions include a very long history of propaganda, disinformation, and particularly counterintelligence, along with using advanced technology to exploit human psychology (including through data science, mass surveillance, and AI). Going off of vibes, in particular, is a very bad approach, because vibes are emotional, subconscious, and easy to gather large amounts of data on and study scientifically. The better you understand something, the easier it is to find ways to get specific outcomes by poking it with specific stimuli.

Dealing with hypothetical groups of rich and powerful people, who specifically use their wealth and influence to avoid giving away their positions to also-rich-and-powerful foes, requires an understanding of the human cognitive biases involved in dealing with unfalsifiable theories. My model looks great, it's a fun topic to play around with in your head, and the theory of hard-to-spot islands of competence-monopolization is an entirely different tier from flying spaghetti monsters and invisible dragons; but these considerations must also be evaluated with a quantitative mindset. Ultimately, aside from policy outcomes and publicly-known military/intelligence outcomes, there is little good data, and both hypotheses (uniform incompetence vs. non-uniform incompetence within intelligence agencies) must be handled with the best epistemology available. I recommend Yudkowsky's Belief in Belief, Religion's Claim to be Non-Disprovable, and An Intuitive Explanation of Bayes' Theorem (if you haven't read them already), and also Raemon's Dark Forest Theories. The constraints I've described in this post are critical for understanding intelligence agencies.

The study of these institutions warrants much better epistemics than what seems to have taken place so far. 


Functioning lie detectors as a turning point in human history

All of human society and its equilibria derive in part from a fundamental trait of the human brain: lies are easier for the human brain to generate than to detect, even during in-person conversations where massive amounts of intensely revealing nonverbal communication are exchanged (e.g. facial expressions, subtle changes in body posture). You cannot ask people if they are planning to betray you; everything would be different if you could.

If functioning lie detectors were to be invented, incentive structures as we know them would be completely replaced with new ones that are far more effective. E.g. you could just require all your subordinates to wear an EEG or go into an fMRI machine, ask all of them who the smartest/most competent person in the office is, promote the people who are actually top performers, and fire any cliques/factions whom you detect coordinating around a common lie. Most middle managers with access to functioning lie detection technology would think of those things, and many other strategies that have not yet occurred to me, over the course of the thousands of hours they spend as middle managers with that technology.
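To show how mechanically simple such a strategy would be, here is a sketch of the promotion step under the (entirely hypothetical) assumption that answers are verifiably truthful; the names and the aggregation rule are illustrative, not a real system:

```python
# Hypothetical sketch: if a working lie detector made answers verifiably
# honest, a manager could aggregate "who is the most competent person
# here?" nominations directly. Illustrative only.
from collections import Counter

def top_performer(nominations: dict) -> str:
    """Return the most-nominated colleague, assuming all answers are honest."""
    tally = Counter(nominations.values())
    return tally.most_common(1)[0][0]

# Each employee names whoever they believe is most competent.
answers = {"ann": "dee", "bob": "dee", "cat": "dee", "dee": "bob"}
best = top_performer(answers)
```

The point is not the code but that the entire hard part of the problem today, verifying honesty, is what the hypothetical technology removes; everything downstream becomes trivial bookkeeping.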

If your immediate reflexive response to lie detection technology is "well, lie detection technology is currently incredibly inaccurate and ineffective," then that's a very understandable mistake, but still unambiguously a mistake. I've talked to many people about this, and almost all of them confidently output basically that exact string of text, yet had no idea where it came from or what was backing it up. I don't doubt that it was possibly true 40 or even 20 years ago, but with modern technology it's much more of a toss-up. The best paper (that I'm willing to share) covering government/military interest in and access to lie detection technology, current or potentially monopolized in the future, is here; among many other things, it also covers the reputation of lie detection technology (which is one of the easier things to observe and study).

This is likely one of the most significant ways that the next 30 years of human civilization will be out-of-distribution relative to the last 80 years of human civilization (it is NOT #1).


Information I found helpful:

Don't take the organizational chart literally (highly recommended)

LLMs will be great for censorship

Raemon's Dark Forest Theories

Joseph Nye's Soft Power

The US is becoming less stable

  1. ^

    Parliaments and legislative bodies, on the other hand, are more about giving a country's elites legitimate and sustainable access to influence so that they have an outlet other than playing dirty (and there are a wide variety of ways for a country's top elites to play dirty at or near the peak of wealth and power; try imagining what a 175-IQ person could get up to). Authoritarian regimes, unlike democracies, focus more on walling elites off. Democracies, by contrast, are specialists in friendly things, like robustness and policymaking.







Why do you think that Western intelligence services could be dangerous for EA? My political view is the opposite: EA is the distilled form of our elite ideology. If they are not investing in us, it is precisely because they are not very efficient, nor very long-term oriented.

In fact, if I were at 80,000 Hours I would make a special program on NatSec positions: we need committed utilitarians in the NatSec apparatus (are our goals anything different from the extension of an American Pax Democratica to the entire world?).

FWIW I think it's still the case that psychologists/neuroscientists are nowhere near developing an accurate lie detector. And the paper you cite doesn't seem to support the claim that lie detection technology is accurate. From the abstract (emphasis mine):

Analyzing the myriad issues related to fMRI lie detection, the article identifies the key limitations of the current neuroimaging of deception science as expert evidence and explores the problems that arise from using scientific evidence before it is proven scientifically valid and reliable. We suggest that courts continue excluding fMRI lie detection evidence until this potentially useful form of forensic science meets the scientific standards currently required for adoption of a medical test or device.

There are methodological challenges associated with the typical studies done on lie detection. From a 2016 paper (emphasis mine):

Great hopes and expectations were expressed regarding the potential use of brain imaging techniques for the detection of deception. Contrary to what has been advocated by many researchers as well as practitioners (e.g., Bles & Haynes, 2008; Farwell, 2012; Langleben et al., 2005), the introduction of new measures such as P300 and fMRI is by no means a solution to the problems associated with the ANS-based CQT polygraph test. The CQT has been criticized for lacking proper controls and being unstandardized. In addition, its outcome is often contaminated by prior information available to the examiner. None of these criticisms can be resolved by replacing ANS recordings with fMRI measures.

Moreover, all paradigms face a similar logical problem: deception cannot be directly inferred either from the presence of emotional arousal in the CQT or from attentional orienting or inhibition in the CIT or DoD, regardless of whether ANS, reaction times, ERPs, or fMRI measures have been used.

So I'm not sure what the basis is for saying it's an "unambiguous mistake" to think accurate lie detection technology is a long way off.

Incredible post, thanks for writing it.

Thanks! I think that things with even a ~3% chance of destroying EA are still worthwhile for people to look into.
