willbradshaw's Comments

Thoughts on The Weapon of Openness

But key to the argument is whether these problems inexorably get worse as time goes on.

Yeah, I was thinking about this yesterday. I agree that this ("inexorable decay" vs a static cost of secrecy) is probably the key uncertainty here.

Thoughts on The Weapon of Openness

Thanks, I'll try this out next time!

Thoughts on The Weapon of Openness

Thanks Greg! I think a lot of what you say here is true, and well-put. I don't yet consider myself very well-informed in this area, so I wouldn't expect to be able to convince someone with a considered view that differs from mine, but I would like to get a better handle on our disagreements.

I'm not persuaded, although this is mainly owed to the common challenge that noting considerations 'for' or 'against' in principle does not give a lot of evidence of what balance to strike in practice.

I basically agree with this, with the proviso that I'm currently trying to work out what the considerations to be weighed even are in the first place. I currently feel like I have a worse explicit handle on the considerations mitigating in favour of openness than those mitigating in favour of secrecy. I do think these higher-level issues (around incentives, institutional quality, etc) are likely to be important, but I don't yet know enough to put a number on that.

Given that, and given how little actual evidence Kantrowitz marshals, I don't think someone with a considered pro-secrecy view should be persuaded by this account. I do suspect that, if such a view were to turn out to be wrong, something like this account could be an important part of why.

Bodies that conduct 'secret by default' work have often been around decades (and the states that house them centuries), and although there's much to suggest this secrecy can be costly and counterproductive, the case for their inexorable decay attributable to their secrecy is much less clear cut.

Do you think there is any evidence for institutional decay due to secrecy? I'm interested in whether you think this narrative is wrong, or just unimportant relative to other considerations.

My (as yet fairly uninformed) impression is that there is also evidence of plenty of hidden inefficiency and waste in secret organisations (and indeed, given that members of those orgs would be highly motivated to use their secrecy to conceal it, I'd expect there to be more than we can see). All else equal, I would expect a secret organisation to have worse epistemics and be more prone to corruption than an open one, both of which would impair its ability to pursue its goals. Do you disagree?

I don't know anything about the NSA, but I think Kantrowitz would claim the Manhattan project to be an example of short-term benefits of secrecy, combined with the pressures of war, producing good performance that couldn't be replicated by institutions that had been secret for decades (see footnote 7). So what is needed to counter his narrative is evidence of big wins produced by institutions with a long history of secret research.

Judging the overall first-order calculus, let alone weighing this against second-order concerns (such as those noted above), is fraught.

By "second order concerns", do you mean the proposed negative effect of secrecy on institutions/incentives/etc? Because if so that does seem to me to weigh more clearly in one direction (i.e. against secrecy) than the first-order considerations do. Though this probably depends a lot on what you count as first vs second order...

Thoughts on The Weapon of Openness

The original version of footnote 8 (relating to how the narrative of the Weapon of Openness interacts with secrecy in private enterprise):

"There are various possible answers to this I could imagine being true. The first is that private companies are in fact just as vulnerable to the corrosive effects of secrecy as governments are, and that technological progress is much lower than it would be if companies were more open. Assuming arguendo that this is not the case, there are several factors I could imagine being at play:

  • Competition (i.e. the standard answer). Private companies are engaged in much more ferocious competition over much shorter timescales than states are. This provides much stronger incentives for good behaviour even when a project is secret.
  • Selection. Even if private companies are individually just as vulnerable to the corrosive effects of secrecy as state agencies, the intense short-term competition private firms are exposed to means that those companies with better epistemics at any given time will outcompete those with worse epistemics and gain market share. Hence the market as a whole can continue to produce effective technology projects in secret, even as secrecy continuously corrodes individual actors within the market.
  • Short-termism. It's plausible to me that, with rare exceptions, secret projects in firms are of much shorter duration than in state agencies. If this is the case, it might allow at least some private companies to continuously exploit the short-term benefits of secrecy while avoiding some or all of the long-term costs.
  • Differences in degrees of secrecy. If a government project is secret, it will tend to remain so even once completed, for national security reasons. Conversely, private companies may be less attached to total, indefinite secrecy, particularly given the pro-openness incentives provided by patents. It might also be easier to bring external experts into secret private projects, through NDAs and the like, than it is to get them clearance to consult on secret state ones.

I don't yet know enough economics or business studies to be confident in my guesses here, and hopefully someone who knows more can tell me which of these are plausible and which are wrong."

The Intellectual and Moral Decline in Academic Research

Unrelatedly, I'm quite enjoying watching the karma on this comment go up and down. Currently at -1 karma after 7 votes. Interesting data on differing preferences over commenting norms.

The Intellectual and Moral Decline in Academic Research

Yeah, I don't want to imply that I strongly support the original claims. I think there are lots of very serious problems with incentives and epistemics in science, but nevertheless that both the incentives and the epistemics of scientists are unusually good in important ways.

(As an anecdote that probably shouldn't be taken as strong evidence, but that I found striking, I once tried out the 2-4-6 test on my lab, and IIRC something like two-thirds of members got the right answer first-time, and both group leaders present did so fairly quickly.)

I'm also very worried about the effects of corporate funding on research, at least in some domains.

The Intellectual and Moral Decline in Academic Research

Thanks Gavin.

I'd be interested in seeing data on the distribution of causes of retraction and how it's changed over time. I know RetractionWatch likes to say that scientists tend to underestimate the proportion of retractions that are down to fraud. I do think some (many?) retractions are due to serious technical errors with no implication of deliberate fraud or misconduct. I suspect RetractionWatch has data on this.

I'm not claiming that it's inevitably true that more retractions indicates better community epistemics, but I do think it's a big part of the story in this case. A paper retraction requires someone to notice that the paper is worthy of retraction, bring that to the editors and, very often, put a lot of pressure on the editors (who are usually extremely reluctant to do so) to retract the paper. That requires people to be on the lookout for papers that might need to be retracted, and willing to put in the time and effort to get them retracted.

In the past this was very rare, and only extremely flagrant fraud or misconduct (or unusually honest scientists retracting their own work) led to retractions. Now, partly as a side consequence of the replication crisis, but also of more general (and incomplete) changes in norms, we have a lot more people who spend a lot of time actively searching for data manipulation and other retraction-worthy things in papers.

This is just the science version of the common claim that a recorded increase (or decrease) in the rate of a particular crime, or a particular mental disorder, or some such, is mainly due to changes in how closely we're looking for it.
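The detection-effort point can be made concrete with toy numbers (all figures below are hypothetical illustrations, not real estimates): hold the true rate of retraction-worthy papers fixed and vary only how hard the community is looking.

```python
# Toy model: observed retractions depend on both the true number of
# retraction-worthy papers and how intensely the community searches for them.
# All numbers are hypothetical illustrations, not real estimates.

def observed_retractions(flawed_papers: int, scrutiny: float) -> float:
    """Expected retractions, where `scrutiny` is the probability that a
    retraction-worthy paper is noticed and successfully retracted."""
    return flawed_papers * scrutiny

past = observed_retractions(1000, 0.02)  # era of little active checking
now = observed_retractions(1000, 0.20)   # era of replication-crisis scrutiny

# Same underlying flaw rate, ten times the observed retractions.
```

Under these assumptions the observed retraction count rises tenfold even though the quality of the underlying science is unchanged; only the scrutiny term moved.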

The Intellectual and Moral Decline in Academic Research

Sure.

Taken at face value, the claim is that taxpayer funding and number of retractions have increased over time, at rates not hugely different from one another. I think both can almost entirely be accounted for by an increase in the total number of researchers. If you have more researchers producing papers, this will result in both a big increase in funding required and in number of papers retracted without any change in the quality distribution.

I would want to see evidence for a big increase in retractions per number of researchers, researcher hours or some other aggregative measure before taking this seriously as a claim that science has got worse over time. It's well-known that if you don't control for the total number of people in a place or doing a thing, all sorts of things will correlate (homicides and priests, ice-cream sales and suicides, etc.).
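The scaling argument above can be sketched as a toy calculation (every number is hypothetical): if funding per researcher, papers per researcher, and the per-paper retraction rate are all held constant, then growing the workforce inflates total funding and total retractions by exactly the same factor, with no change in quality.

```python
# Toy model of a research community whose quality distribution never changes.
# Every constant below is a hypothetical illustration, not a real estimate.

PAPERS_PER_RESEARCHER = 2
FUNDING_PER_RESEARCHER = 100_000  # dollars per year, hypothetical
RETRACTION_RATE = 0.0004          # per paper, held constant over time

def totals(n_researchers: int) -> tuple[float, float]:
    """Total funding and expected retractions for a given workforce size."""
    papers = n_researchers * PAPERS_PER_RESEARCHER
    funding = n_researchers * FUNDING_PER_RESEARCHER
    retractions = papers * RETRACTION_RATE
    return funding, retractions

funding_then, retractions_then = totals(50_000)
funding_now, retractions_now = totals(400_000)  # 8x more researchers

# Both aggregates grow 8x; the per-paper retraction rate is identical.
```

In this model the two aggregate ratios match exactly, which is why only a per-researcher or per-paper rate increase would count as evidence of decline.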

More substantively, I also disagree with the claim that a big increase in retractions is evidence of scientific decline. Insofar as there has been any increase in the per-capita rate of retractions, I regard this as a sign of increasing epistemic standards, and think both editors and scientists are still way too reluctant to retract papers. It's like the replication crisis: the problems have always been there, but we only started paying attention to them recently. That's a good sign, not a bad one.

The Intellectual and Moral Decline in Academic Research

In short, maybe the author is burnt out or has only ever worked with poor colleagues? Or hasn't been funded in a while?

I downvoted this comment based on this paragraph. Arch speculations that a position taken is probably due to inadequacies and personal frustrations of the author are nearly always uncharitable, unwarranted and, in my experience, well-correlated with sloppy and defensive thinking.

No, the guy probably isn't just mad because he couldn't cut it in academia.

The Intellectual and Moral Decline in Academic Research

From 1970 to 2010, as taxpayer funding for public health research increased 700 percent, the number of retractions of biomedical research articles increased more than 900 percent, with most due to misconduct.

https://www.tylervigen.com/spurious-correlations
