
The Weapon of Openness is an essay published by Arthur Kantrowitz and the Foresight Institute in 1989. In it, Kantrowitz argues that the long-term costs of secrecy in adversarial technology development outweigh the benefits, and that openness (defined as "public access to the information needed for the making of public decisions") will therefore lead to better technology relative to adversaries and hence greater national security. As a result, more open societies will tend to outperform more secretive societies, and policymakers should tend strongly towards openness even in cases where secrecy is tempting in the short term.

The Weapon of Openness presents itself as a narrow attack on secrecy in technological development. In the process, however, it makes many arguments which seem to generalise to other domains of societal decision-making, and can hence be viewed as a more general attack on certain kinds of secretiveness[1]. As such, it seems worth reviewing and reflecting on the arguments in the essay and how they might be integrated with a broader concern for information hazards and the long-term future.

The essay itself is fairly short and worth reading in its entirety, so I've tried to keep this fairly brief. Any unattributed blockquotes in the footnotes are from the original text.

Secrecy in technological development

The benefits of secrecy in adversarial technological development are obvious, at least in theory. Barring leaks, infiltration, or outright capture in war, the details of your technology remain opaque to outsiders. With these details obscured, it is much more difficult for adversaries to either copy your technology or design countermeasures against it. If you do really well at secrecy, even the relative power level of your technology remains obscured, which can be useful for game-theoretic reasons[2].

The costs of secrecy are more subtle, and easier to miss, but potentially even greater than the benefits. This should sound alarm bells for anyone familiar with the failure modes of naïve consequentialist reasoning.

One major cost is cutting yourself off from the broader scientific and technological discourse, greatly restricting the ability of experts outside the project to either propose new suggestions or point out flaws in your current approach. This is bad enough by itself, but it also makes it much more difficult for project insiders to enlist outside expertise during internal disputes over the direction of the project. The result, says Kantrowitz, is that disputes within secret projects have a much greater tendency to be resolved politically, rather than on the technical merits. That means making decisions that flatter the decision-makers, those they favour and those they want to impress, and avoiding changes of approach that might embarrass those people. This might suffice for relatively simple projects that involve making only incremental improvements on existing technology, but when the project aims for an ambitious leap in capabilities (and hence is likely to involve several false starts and course corrections) it can be crippling[3].

This claimed tendency of secret projects to make technical decisions on political grounds hints at Kantrowitz's second major argument[4]: that secrecy greatly facilitates corruption. By screening not only the decisions but the decision-making process from outside scrutiny, secrecy greatly reduces the incentive for decision-makers to make decisions that could be justified to outside scrutinisers. Given the well-known general tendency of humans to respond to selfish incentives, the result is unsurprising: greatly increased toleration of waste, delay and other inefficiencies, up to and including outright corruption in the narrow sense, when these inefficiencies make the lives of decision-makers or those they favour easier, or increase their status (e.g. by increasing their budget)[5].

This incentive to corruption is progressive and corrosive, gradually but severely impairing general organisational effectiveness in ways that will obviously impair the effectiveness of the secret project. If the same organisation performs other secret projects in the future, the corrosion will be passed to these successor projects in the form of normalised deviance and generalised institutional decay. Since the corrupted institutions are the very ones responsible for identifying this corruption, and are screened from most or all external accountability, this problem can be very difficult to reverse.

Hence, says Kantrowitz, states that succumb to the temptations of secret technological development may reap some initial gains, but will gradually see these gains eaten away by impaired scientific/technological exchange and accumulating corruption until they are on net far less effective than if they'd stayed open the whole time. The implication of this seems to be that the US and its allies should tend much more towards openness and less towards secrecy, at least in the technological domain in peacetime[6].

Secrecy as a short-term weapon

Finally, Kantrowitz makes the interesting argument that secrecy can be a highly effective short-term weapon, even if it isn't a viable long-term strategy.

When a normally-open society rapidly increases secrecy as a result of some emergency pressure (typically war), it initially retains the strong epistemic institutions and norms fostered by a culture of openness, and can thus continue to function effectively while reaping the adversarial advantages provided by secrecy. In addition, the pressures of the emergency can provide an initial incentive for good behaviour: "the behavior norms of the group recruited may not tolerate the abuse of secrecy for personal advancement or interagency rivalry."

As such, groups that previously functioned well in the open can continue to function well (or even better) in secret, at least for some short time. If the emergency persists for a long time, however, or if the secret institutions persist past the emergency that created them, the corroding effects of secrecy – on efficacy and corruption – will begin to take root and grow, eventually and increasingly compromising the functionality of the organisation.

Secrecy may therefore be good tactics, but bad strategy. If true, this would explain how some organisations (most notably the Manhattan Project) produce such impressive achievements while remaining highly secretive, while also explaining why these are exceptions to the general rule.

Speculating about this myself, this seems like an ominous possibility: the gains from secrecy are clearly legible and acquired rapidly, while the costs accrue gradually and in a way difficult for an internal actor to spot. The initial successes justify the continuation of secrecy past the period where it provided the biggest gains, after which the accruing costs of declining institutional health make it increasingly difficult to undo. Those initial successes, if later made public, also serve to provide the organisation with a good reputation and public support, while the organisation's declining performance on current work is kept secret. As a result, the organisation's secrecy could retain both public and private support well past the time at which it begins to be a net impediment to efficacy[7].

If this argument is true, it suggests that secrecy should be kept as a rare, short-term weapon in the policy toolbox. Rather than an indispensable tool of state policy, secrecy might then be regarded analogously to a powerful but addictive stimulant: to be used sparingly in emergencies and otherwise avoided as much as possible.

Final thoughts

The Weapon of Openness presents an important-seeming point in a convincing-seeming way. Its arguments jibe with my general understanding of human nature, incentives, and economics. If true, they seem to present an important counterpoint to concerns about info hazards and information security. At the same time, the piece is an essay, not a paper, and goes to relatively little effort to make itself convincing beyond laying out its central vision: Kantrowitz provides few concrete examples and cites even fewer sources. I am, in general, highly suspicious of compelling-seeming arguments presented without evidentiary accompaniment, and I think I should be even more so when those arguments are in support of my own (pro-academic, pro-openness) leanings. So I remain somewhat uncertain as to whether the key thesis of the article is true.

(One point against that thesis that immediately comes to mind is that a great deal of successful technological development in an open society is in fact conducted in secret. Monetised open-source software aside, private companies don't seem to be in the habit of publicly sharing their product before or during product development. A fuller account of the weapon of openness would need to account for why private companies don't fail in the way secret government projects are alleged to[8].)

If the arguments given in the Weapon of Openness are true, how should those of us primarily concerned with the value of the long-term future respond? Long-termists are often sceptical of the value of generalised scientific and technological progress, and in favour of slower, more judicious, differential technological development. The Weapon of Openness suggests this may be a much more difficult needle to thread than it initially seems. We may be sanguine about the slower pace of technological development[9], but the corrosive effects of secrecy on norms and institutions would seem to bode poorly for the long-term preservation of the good values required for the future to go well.

Insofar as this corrosion is inevitable, we may simply need to accept serious information hazards as part of our narrow path towards a flourishing future, mitigating them as best we can without resorting to secrecy. Insofar as it is not, exploring new ways[10] to be secretive about certain things while preserving good institutions and norms might be a very important part of getting us to a good future.


  1. It was, for example, cited in Bostrom's original information-hazards paper in discussion of reasons one might take a robust anti-secrecy stance. ↩︎

  2. Though uncertainty about your power can also be very harmful, if your adversaries conclude you are less powerful than you really are. ↩︎

  3. Impediments to the elimination of errors will determine the pace of progress in science as they do in many other matters. It is important here to distinguish between two types of error which I will call ordinary and cherished errors. Ordinary errors can be corrected without embarrassment to powerful people. The elimination of errors which are cherished by powerful people for prestige, political, or financial reasons is an adversary process. In open science this adversary process is conducted in open meetings or in scientific journals. In a secret project it almost inevitably becomes a political battle and the outcome depends on political strength, although the rhetoric will usually employ much scientific jargon.

    ↩︎
  4. As a third argument, Kantrowitz also claims that greater openness can reduce "divisiveness" and hence increase societal unity, further strengthening open societies relative to closed ones. I didn't find this as well-explained or convincing as his other points so I haven't discussed it in the main text here. ↩︎

  5. The other side of the coin is the weakness which secrecy fosters as an instrument of corruption. This is well illustrated in Reagan's 1982 Executive Order #12356 on National Security (alarmingly tightening secrecy) which states {Sec. 1.6(a)}: "In no case shall information be classified in order to conceal violations of law, inefficiency, or administrative error; to prevent embarrassment to a person, organization or agency; to restrain competition; or to prevent or delay the release of information that does not require protection in the interest of national security." This section orders criminals not to conceal their crimes and the inefficient not to conceal their inefficiency. But beyond that it provides an abbreviated guide to the crucial roles of secrecy in the processes whereby power corrupts and absolute power corrupts absolutely. Corruption by secrecy is an important clue to the strength of openness.

    ↩︎
  6. We can learn something about the efficiency of secret vs. open programs in peacetime from the objections raised by Adm. Bobby R. Inman, former director of the National Security Agency, to open programs in cryptography. NSA, which is a very large and very secret agency, claimed that open programs conducted by a handful of mathematicians around the world, who had no access to NSA secrets, would reveal to other countries that their codes were insecure and that such research might lead to codes that even NSA could not break. These objections exhibit NSA's assessment that the best secret efforts, that other countries could mount, would miss techniques which would be revealed by even a small open uncoupled program. If this is true for other countries is it not possible that it also applies to us?

    ↩︎
  7. Kantrowitz expresses similar thoughts: "The general belief that there is strength in secrecy rests partially on its short-term successes. If we had entered WWII with a well-developed secrecy system and the corruption which would have developed with time, I am convinced that the results would have been quite different." ↩︎

  8. There are various possible answers to this I could imagine being true. The first is that private companies are in fact just as vulnerable to the corrosive effects of secrecy as governments are, and that technological progress is much lower than it would be if companies were more open. Assuming arguendo that this is not the case, there are several factors I could imagine being at play. I originally had an itemised list here but the Forum is mangling my footnotes, so I'll include it as a comment for now. ↩︎

  9. How true this is depends on how much importance you place on certain kinds of adversarialism: how important you think it is that particular countries (or, more probably, particular kinds of ideologies) retain their competitive advantage over others. If you believe that the kinds of norms that tend to go with an open society (free, democratic, egalitarian, truth-seeking, etc) are important to the good quality of the long-term future you may be loath to surrender one of those societies' most important competitive advantages. If you doubt the long-term importance of those norms, or their association with openness, or the importance of that association to the preservation of these norms, this will presumably bother you less. ↩︎

  10. I suspect they really will need to be new ways, and not simply old ways with better people. But I as yet know very little about this, and am open to the possibility that solutions already exist about which I know nothing. ↩︎

Comments

Thanks for this, both the original work and your commentary were an edifying read.

I'm not persuaded, although this is mainly owed to the common challenge that noting considerations 'for' or 'against' in principle does not give a lot of evidence of what balance to strike in practice. Consider something like psychiatric detention: folks are generally in favour of (e.g.) personal freedom, and we do not need to think very hard to see how overruling this norm 'for their own good' could go terribly wrong (nor look very far to see examples of just this). Yet these considerations do not tell us what the optimal policy should be relative to the status quo, still less how it should be applied to a particular case.

Although the relevant evidence can neither be fully observed nor fairly sampled, there's a fairly good prima facie case for some degree of secrecy not leading to disaster, and sometimes being beneficial. There's some wisdom of the crowd account that secrecy is the default for some 'adversarial' research; it would be surprising if technological facts proved exceptions to the utility of strategic deception. Bodies that conduct 'secret by default' work have often been around decades (and the states that house them centuries), and although there's much to suggest this secrecy can be costly and counterproductive, the case for their inexorable decay attributable to their secrecy is much less clear cut.

Moreover, technological secrecy has had some eye-catching successes: the NSA likely discovered differential cryptanalysis years before it appeared in the open literature; discretion by early nuclear scientists (championed particularly by Szilard) on what to publish credibly gave the Manhattan project a decisive lead over rival programs. Openness can also have some downsides - the one that springs to mind from my 'field' is that Al-Qaeda started exploring bioterrorism after learning of the United States expressing concern about the same.

Given what I said above, citing some favourable examples doesn't say much (although the nuclear weapon one may have proved hugely consequential). One account I am sympathetic to would be talking about differential (or optimal) disclosure: provide information in the manner which maximally advantages good actors over bad ones. This will recommend open broadcast in many cases: e.g. where there aren't really any bad actors, where the bad actors cannot take advantage of the information (or they know it already, so letting the good actors 'catch up'), where there aren't more selective channels, and so forth. But not always: there seem instances where, if possible, it would be better to preferentially disclose to good actors versus bad ones - and this requires some degree of something like secrecy.

Judging the overall first-order calculus, let alone weighing this against second-order concerns (such as those noted above), is fraught: although, for what it's worth, I think 'security service' norms tend closer to the mark than 'academic' ones. I understand cybersecurity faces similar challenges around vulnerability disclosure, as 'don't publish the bug until the vendor can push a fix' may not perform as well as one might naively hope: for example, 'white hats' postponing their discoveries hinders collective technological progress, and risks falling behind a 'black hat' community avidly trading tips and tricks. This consideration can also point the other way: the more capable the 'white hats' are relative to their typically fragmented and incompetent adversaries, the greater the danger of their work 'giving bad people good ideas'. The FBI or whoever may prove much more adept at finding vulnerabilities terrorists could exploit than the terrorists themselves. They would be unwise to blog their red-teaming exercises.

Thanks Greg! I think a lot of what you say here is true, and well-put. I don't yet consider myself very well-informed in this area, so I wouldn't expect to be able to convince someone with a considered view that differs from mine, but I would like to get a better handle on our disagreements.

I'm not persuaded, although this is mainly owed to the common challenge that noting considerations 'for' or 'against' in principle does not give a lot of evidence of what balance to strike in practice.

I basically agree with this, with the proviso that I'm currently trying to work out what the considerations to be weighed even are in the first place. I currently feel like I have a worse explicit handle on the considerations militating in favour of openness than those militating in favour of secrecy. I do think these higher-level issues (around incentives, institutional quality, etc) are likely to be important, but I don't yet know enough to put a number on that.

Given that, and given how little actual evidence Kantrowitz marshals, I don't think someone with a considered pro-secrecy view should be persuaded by this account. I do suspect that, if such a view were to turn out to be wrong, something like this account could be an important part of why.

Bodies that conduct 'secret by default' work have often been around decades (and the states that house them centuries), and although there's much to suggest this secrecy can be costly and counterproductive, the case for their inexorable decay attributable to their secrecy is much less clear cut.

Do you think there is any evidence for institutional decay due to secrecy? I'm interested in whether you think this narrative is wrong, or just unimportant relative to other considerations.

My (as yet fairly uninformed) impression is that there is also evidence of plenty of hidden inefficiency and waste in secret organisations (and indeed, given that those in those orgs would be highly motivated to use their secrecy to conceal this, I'd expect there to be more than we can see). All else equal, I would expect a secret organisation to have worse epistemics and be more prone to corruption than an open one, both of which would impair its ability to pursue its goals. Do you disagree?

I don't know anything about the NSA, but I think Kantrowitz would claim the Manhattan project to be an example of short-term benefits of secrecy, combined with the pressures of war, producing good performance that couldn't be replicated by institutions that had been secret for decades (see footnote 7). So what is needed to counter his narrative is evidence of big wins produced by institutions with a long history of secret research.

Judging the overall first-order calculus, let alone weighing this against second-order concerns (such as those noted above), is fraught.

By "second order concerns", do you mean the proposed negative effect of secrecy on institutions/incentives/etc? Because if so that does seem to me to weigh more clearly in one direction (i.e. against secrecy) than the first-order considerations do. Though this probably depends a lot on what you count as first vs second order...

All else equal, I would expect a secret organisation to have worse epistemics and be more prone to corruption than an open one, both of which would impair its ability to pursue its goals. Do you disagree?

No I agree with these pro tanto costs of secrecy (and the others you mentioned before). But key to the argument is whether these problems inexorably get worse as time goes on. If so, then the benefits of secrecy inevitably have a sell-by date, and once the corrosive effects spread far enough one is better off 'cutting ones losses' - or never going down this path in the first place. If not, however, then secrecy could be a strategy worth persisting with if the (~static) costs of this are outweighed by the benefits on an ongoing basis.

The proposed trend of 'getting steadily worse' isn't apparent to me. Many organisations which typically do secret technical work have been around for decades (the NSA is one, most defence contractors are others, (D)ARPA, etc.). A skim of what they were doing in (say) the 80s versus the 50s doesn't give the impression they got dramatically worse, despite 30 years of secrecy's supposed corrosive impact. Naturally, the attribution is very murky (e.g. even if their performance remained okay, maybe secrecy had gotten much more corrosive but this was outweighed by countervailing factors like much larger investment; maybe they would have fared better under a 'more open' counterfactual), but dissecting out the 'being secret * time' interaction term and showing it is negative is a challenge that should be borne by the affirmative case.

But key to the argument is whether these problems inexorably get worse as time goes on.

Yeah, I was thinking about this yesterday. I agree that this ("inexorable decay" vs a static cost of secrecy) is probably the key uncertainty here.

"I'm not persuaded, although this is mainly owed to the common challenge that noting considerations 'for' or 'against' in principle does not give a lot of evidence of what balance to strike in practice."
I basically agree with this, with the proviso that I'm currently trying to work out what the considerations to be weighed even are in the first place. I currently feel like I have a worse explicit handle on the considerations militating in favour of openness than those militating in favour of secrecy. I do think these higher-level issues (around incentives, institutional quality, etc) are likely to be important, but I don't yet know enough to put a number on that.

I think the points each of you make there are true and important.

As a further indication of the value of Will's point, I think a big part of the reason we're having this discussion at all is probably Bostrom's paper on information hazards, which is itself much more a list of considerations than an attempt to weigh them up. Bostrom makes this explicit:

The aim of this paper is to catalogue some of the various possible ways in which information can cause harm. We will not here seek to determine how common and serious these harms are or how they stack up against the many benefits of information—questions that would need to be engaged before one could reach a considered position about potential policy implications.

(We could describe efforts such as Bostrom's as "mapping the space" of consequences worth thinking about further, without yet engaging in that further thought.)

It seems possible to me that we've had more cataloguing of the considerations against openness than those for it, and thus that posts like this one can contribute usefully to the necessary step that comes before weighing up all the considerations in order to arrive at a well-informed decision. (For the same reason, it could also help slightly-inform all the other decisions we unfortunately have to make in the meantime.)

One caveat to that is that a post that mostly covers just the considerations that point in one direction could be counterproductive for those readers who haven't seen the other posts that provide the counterbalance, or who saw them a long time ago. But that issue is hard to avoid, as you can't cover everything in full detail in one place, and it also applies to Bostrom's paper and to a post I'll be making on this topic soon.

Another caveat in this particular case is that there are two related reasons why decisions on whether to develop/share (potentially hazardous) information may demand somewhat more caution than the average decision: the unilateralist's curse, and the fact that hard-to-reverse decisions destroy option value.

I personally think that it's still a good idea to openly discuss the reasons for openness, even if a post has to be somewhat lopsided in that direction for brevity and given that other posts were lopsided in the other direction. But I also personally think it might be good to explicitly note those extra reasons for caution somewhere within the "mostly-pro" post, for readers who may come to conclusions on the basis of that one post by itself.

(Just to be clear, I don't see this as disagreeing with Greg or Will's comments.)

The original version of footnote 8 (relating to how the narrative of the Weapon of Openness interacts with secrecy in private enterprise):

"There are various possible answers to this I could imagine being true. The first is that private companies are in fact just as vulnerable to the corrosive effects of secrecy as governments are, and that technological progress is much lower than it would be if companies were more open. Assuming arguendo that this is not the case, there are several factors I could imagine being at play:

  • Competition (i.e. the standard answer). Private companies are engaged in much more ferocious competition over much shorter timescales than states are. This provides much stronger incentives for good behaviour even when a project is secret.
  • Selection. Even if private companies are individually just as vulnerable to the corrosive effects of secrecy as state agencies, the intense short-term competition private firms are exposed to means that those companies with better epistemics at any given time will outcompete those that do not and gain market share. Hence the market as a whole can continue to produce effective technology projects in secret, even as secrecy continuously corrodes individual actors within the market.
  • Short-termism. It's plausible to me that, with rare exceptions, secret projects in firms are of much shorter duration than in state agencies. If this is the case, it might allow at least some private companies to continuously exploit the short-term benefits of secrecy while avoiding some or all of the long-term costs.
  • Differences in degrees of secrecy. If a government project is secret, it will tend to remain so even once completed, for national security reasons. Conversely, private companies may be less attached to total, indefinite secrecy, particularly given the pro-openness incentives provided by patents. It might also be easier to bring external experts into secret private projects, through NDAs and the like, than it is to get them clearance to consult on secret state ones.

I don't yet know enough economics or business studies to be confident in my guesses here, and hopefully someone who knows more can tell me which of these are plausible and which are wrong."

Regarding the footnote issue, it sounds like maybe you had the same issue I had, so I'll share the fix I found in case that helps.

Standard footnotes on EAF and LessWrong only work for one paragraph. If you have a second paragraph (or dot points) that you want included, it just disappears.

To fix that, use "bignotes", as described here and here. (And it still works with dot points; just put 4 spaces before the dot point, as you would before any other paragraph.)

Also, as a general point, it seems there's little info on how writing posts/comments works on EAF, but there is for LessWrong, and a lot of that applies. So when I get stuck, I search on LessWrong (e.g., here). (And if that fails too, I google my problem + "markdown", as that's the syntax used).

It might be that you knew all that and your issue was different, but just thought I'd share in case it helps.

Thanks, I'll try this out next time!

Interesting points, thanks for sharing!

One minor thought, in response to:

It might also be easier to bring external experts into secret private projects, through NDAs and the like, than it is to get them clearance to consult on secret state ones.

It seems to me that it's also possible that that's more of a symptom than a cause of the secrecy being less corrosive/corrupting in private than state projects (if indeed that is the case). That is, perhaps there's some other reason why secrecy leads to less corruption and distortion of incentives in businesses than in governments, and then because of that, those in-the-know in business are more willing to let external experts get NDAs-or-similar and look at what's going on than are those in-the-know in government.

Thanks for this post.

By screening not only the decisions but the decision-making process from outside scrutiny, secrecy greatly reduces the incentive for decision-makers to make decisions that could be justified to outside scrutinisers. Given the well-known general tendency of humans to respond to selfish incentives, the result is unsurprising: greatly increased toleration of waste, delay and other inefficiencies, up to and including outright corruption in the narrow sense, when these inefficiencies make the lives of decision-makers or those they favour easier, or increase their status (e.g. by increasing their budget)

Relatedly, in a reply to Gregory Lewis you write:

All else equal, I would expect a secret organisation to have worse epistemics and be more prone to corruption than an open one, both of which would impair its ability to pursue its goals. Do you disagree?

I don't think I know enough about this to clearly disagree or agree, but I've seen some arguments that I think would push against your claims, if the arguments are sound. (I'm not sure the arguments are sound, but I'll describe them anyway.)

As you say, "secrecy greatly reduces the incentive for decision-makers to make decisions that could be justified to outside scrutinisers." The arguments I have in mind could see that as a good thing. It could be argued that this frees organisations/individuals to optimise for what really matters, rather than for acting in ways they could easily defend after the fact. (See here for Habryka's discussion of similar arguments, focusing on the idea of "legibility" - though I should note that he seems to not necessarily or wholeheartedly endorse these arguments.)

For example, it is often claimed that confidentiality/secrecy in cabinet or executive meetings is very important so that people will actually share their thoughts openly, rather than worrying about how their statements might be interpreted, taken out of context, used against them, etc. after the fact.

For another example, I saw it claimed somewhere that things like credentials, prestige of one's university, etc. are overly emphasised in hiring processes partly because people in charge of hiring aren't just optimising for the best candidate, but for the candidate they can best justify hiring if things do turn out badly. There may be many "bets" they think would be better in expectation, but if they turned out poorly the hirer would struggle to justify their decision to their superiors, whereas if the person from Harvard turned out badly the hirer could claim all the evidence looked good before the fact. (I have no idea if this is true; it's just a claim I've seen.)

In other words, in response to "Given the well-known general tendency of humans to respond to selfish incentives", these arguments might highlight that there's also a tendency for at least some people to truly want to do what they believe is "right", but to be restrained by incentives of a "justifying" or "bureaucratic" type. So "secrecy" (of a sort) could, perhaps, sometimes allow individuals/organisations to behave more effectively and have better epistemics, rather than optimising for the wrong things, hiding their real views, etc.

But again, these are just possible arguments. I don't know if I actually agree with them, and I think more empirical evidence would be great. I think there are also two particular reasons for tentative suspicion of such arguments:

  • They could be rationalisations of secrecy that is actually in the organisation/individual's interest for other reasons
  • There are often reasons why people should "tick the boxes" and optimise what's "justifiable"/legible/demanded rather than "what they believe is right". This can happen when individuals are wrong about what's right, and their organisations or superiors do know better, and did put those tick-boxes there for a good reason.

My own past experience as a teacher suggests (weakly and somewhat tangentially) that there's truth to both sides of this debate.

Remarkably, and I think quite appallingly, I and most other teachers at my school could mostly operate in secret, in the sense that we were hardly ever observed by anyone except our students (who probably wouldn't tell our superiors anything less than extreme misconduct). I do think that this allowed increased laziness, distorting results to look good (e.g., "teaching to the test" in bad ways, or marking far too leniently[1]), and semi-misconduct (e.g., large scary male teachers standing over and yelling angrily at 13 year olds). This seems to tangentially support the idea that "secrecy" increases "corruption".

On the other hand, the school, and curriculum more broadly, also had some quite pointless or counterproductive policies. Being able to "operate in secret" meant that I could ditch the policies that were stupid, not waste time "ticking boxes", and instead do what was "really right".

But again, the caveat should be added that it's quite possible that the school/curriculum was right and I was wrong, and thus I would've been better off being put under the floodlights and forced to conform. I tried to bear this sort of epistemic humility in mind, and therefore "go my own way" only relatively rarely, when I thought I had particularly good reasons for doing so.

This all also makes me think that the pros and cons of secrecy will probably vary between individuals and organisations, and in part based on something like how "conscientious" or "value aligned" the individual is. In the extreme, a highly conscientious and altruistic person with excellent morals and epistemics to begin with might thrive if able to operate somewhat "secretly", as they are then freed to optimise for what really matters. (Though other problems could of course occur.) Conversely, for someone with a more normal level of conscientiousness, self-interest, flawed beliefs about what's right, and flawed beliefs about the world, secrecy may free them instead to act self-interestedly, corruptly, or based on what they think is right but is actually worse than what others would've told them to do.

I see you mention the NSA in a footnote. One thing worth keeping in mind is that the NSA is both highly secretive and generally believed, based on past leaks and cases of "catching up" by public researchers, to be roughly 30 years ahead of publicly disclosed cryptography research. It's possible this situation is not stable, but my best guess as an outsider is that they are a proof by example that secrecy as a strategy for maintaining a technological lead against adversaries can work. There are likely a lot of specifics to making that work, though, so you should probably expect any random attempt at secrecy of this sort to be less successful than the NSA's, i.e. the NSA is a massive outlier in this regard.

generally believed, based on past leaks and cases of "catching up" by public researchers, to be roughly 30 years ahead of publicly disclosed cryptography research

I have never heard this and would be extremely surprised by this. Like, willing to take a 15:1 bet on this, at least. Probably more.

Do you have a source for this?

Ugh, I'd have to dig things up, but some things come to mind that I count as evidence of this and that could be confirmed by looking:

  • the lag between when the DES recommended magic numbers were given out and when the public figured out what they were about
  • NSA lead on public key crypto and sending agents to discourage mathematicians from publishing (this one was likely shorter because it was earlier)
  • lag on figuring out the problems with elliptic curve during which the NSA encouraged its use

Even if all of these turn out to be quite significant, that would at most imply a lead of something like 5 years.

The elliptic curve one doesn't strike me at all as a case where the NSA had a big lead. You are probably referring to this backdoor:

https://en.wikipedia.org/wiki/Dual_EC_DRBG

This backdoor was basically immediately identified by security researchers the year it was embedded in the standard. As you can read in the Wikipedia article:

Bruce Schneier concluded shortly after standardization that the "rather obvious" backdoor (along with other deficiencies) would mean that nobody would use Dual_EC_DRBG.

I can't really figure out what you mean by the DES recommended magic numbers. There were some magic numbers in DES that were used for defense against the differential cryptanalysis technique. Which I do agree is probably the single strongest example we have of an NSA lead, though it's important to note that that technique was developed at IBM, and then given to the NSA, and not developed internally at the NSA.

To be clear, a 30 (!) year lead seems absolutely impossible to me. A 3 year broad lead seems maybe plausible to me, with a 10 year lead in some very narrow specific subset of the field that gets relatively little attention (in the same way research groups can sometimes pull ahead in a specific subset of the field that they are investing heavily in).

I have never talked to a security researcher who would consider 30 years remotely plausible. The usual impression that I've gotten from talking to security researchers is that the NSA has some interesting techniques and probably a variety of backdoors, which they primarily installed not by technological advantage but by political maneuvering, but that in overall competence they are probably behind the academic field, and almost certainly not very far ahead.

though it's important to note that that technique was developed at IBM, and then given to the NSA, and not developed internally at the NSA.

So I think this is actually a really important point. I think by default the NSA can contract out various tasks to industry professionals and academics and on average get results back from them that are better than what they could have done internally. The differential cryptanalysis situation is a key example of that. IBM could instead have been contracted by some random other group and developed the technology for them, which means that the NSA had basically no lead in cryptography over IBM.

I think 30 years is an overstatement, though it's hard to quantify. However, I can think of a few things that make me think this gap is likely to exist, and be significant in cryptography, and even more specifically in cryptanalysis. For hacking, the gap is clearly smaller, but still a nontrivial amount - perhaps 2 years.
