
Epistemic Status: Quickly written, uncertain. I'm fairly sure there's very little public or government concern about AGI claims, but I'm sure there's a lot I'm missing. I'm not at all an expert on government or on AI policy.

This was originally posted to Facebook here, where it had some discussion.  Many thanks to Rob Bensinger, Lady Jade Beacham, and others who engaged in the discussion there.


Multiple tech companies now are openly claiming to be working on developing AGI (Artificial General Intelligence).

As a lot of work on AGI argues (see Superintelligence, for example), if any firm does establish sufficient dominance in AGI, it might gain some really powerful capabilities. For instance, it could:

  • Write bots that could convince (some) people to do almost anything
  • Hack into government weapons systems
  • Dominate vital parts of the economy
  • Find ways to interrupt other efforts to make AGI

And yet, from what I can tell, almost no one seems to really mind? Governments, in particular, seem really chill with it. Companies working on AGI get treated similarly to other exciting AI companies.

If some company were to make a claim like,

"We're building advanced capabilities that can hack and modify any computer on the planet"

or,

"We're building a private nuclear arsenal",

I'd expect that to draw attention.

But with AGI, crickets.

I assume that governments dismiss corporate claims of AGI development as overconfident marketing-speak or something.

You might think,

"But concerns about AGI are really remote and niche. State actors wouldn't have come across them."

That argument probably applied 10 years ago. But at this point, the conversation has spread a whole lot. Superintelligence was released in 2014 and was an NYT bestseller. There are now hundreds of books about concerns over increasing AI capabilities. Elon Musk and Bill Gates have both talked about it publicly. This should be one of the easiest social issues at this point for someone technically savvy to find.

The risks and dangers (of a large power-grab, not of alignment failures, though those too) are really straightforward and have been public for a long time.

Responses

In the comments on my post, a few points were made, some of which I was roughly expecting. They include:

  1. Companies saying they are making AGI are ridiculously overconfident
  2. Governments are dramatically incompetent
  3. AGI will roll out gradually and not give one company a dominant advantage

My quick responses would be:

  1. I think many longtermist effective altruists believe these companies might have a legitimate chance of building AGI in the next 10 to 50 years, in large part because of a lot of significant research (see everything on AI and forecasting on LessWrong and the EA Forum). At the same time, my impression is that most of the rest of the world is indeed incredibly skeptical of a serious AGI transformation.
  2. I think this is true to an extent. My impression is that government nonattention can change dramatically and quickly, particularly in the United States, so if this is the crux, it might be a temporary situation.
  3. I think there's substantial uncertainty here. But I would be very hesitant to put over a 70% chance that: (a) one, or a few, of these companies will gain a serious advantage, and (b) the general-purpose capabilities of these companies will come with significant global power capabilities. Because AGI is general-purpose, it seems difficult to be sure that your company can make it without it becoming an international security issue of one sort or another.

Updates

This post was shared on Reddit and Hacker News, where it received a total of around 100 more comments. The Hacker News crowd mostly suggested Response #1 ("AGI is a pipe dream that we don't need to worry about").

Comments

This isn't intended to be a complete response to your post, but for comparison, here are some other things that ambitious tech companies have serious plans to accomplish:

  • SpaceX has a pretty credible plan to create a revolutionary space-launch system (Starship) and use it to literally colonize another planet. The USA doesn't seem as excited about this as it rationally ought to be, nor do China or other countries seem as concerned as one might expect.

  • Various crypto companies and projects have big plans to literally replace fiat currency, undermining one of the fundamental ways that governments wield power. Crypto does seem to be on the radar of most governments, but still less than one might expect given the long-term stakes.

  • Nuclear fusion companies like Helion, General Fusion, and others have collectively raised billions of dollars to make "energy too cheap to meter" a reality. You'd think governments would be more excited about subsidizing potentially incredible energy sources like this, but maybe they've been burned too many times before or maybe they're just slow.

  • There is a lot we could straightforwardly do to be better prepared for future pandemics, but we're mostly not doing that stuff (hugely scaled-up metagenomic sequencing, improved ventilation and PPE, making vaccines in advance for all potential pandemic viruses, etc.), even though it's totally legible, it doesn't require any risky breakthroughs, and it's extremely salient due to COVID.

Obviously the consequences of even Mars colonization, full cryptoization of the economy, abundant power from fusion, and significantly mitigated biorisk pale in comparison to the transformative power of AGI. But from a government's perspective they might all seem to be in the same reference class. Yet it's surprisingly close to "crickets" on all counts. (I admit that AGI might be especially neglected even here, though -- SpaceX at least gets normal NASA contracts, etc.)

I think SpaceX's regular non-Mars-colonization activities are in fact taken seriously by relevant governments, and the Mars colonization stuff seems like it probably won't happen and also wouldn't be that big a deal if it did (in terms of, like, national security; it would definitely affect who gets into the history books). So it doesn't seem to me like governments are necessarily acting irrationally there.

Same with cryptocurrency; its implications for investor protection, tax evasion, capital controls evasion, and facilitating illicit transactions are indeed taken seriously, and while governments would obviously care quite a lot if it displaced fiat currency, I just don't think there's any way that's happening. If it does, then this is probably because fiat currency itself somehow stopped working and something was needed to fill the void; if governments think this scenario is at all plausible, then presumably their attention would be on the first part where fiat currency fails, since that's much more within their control and cryptocurrency isn't really a relevant input.

The scientific and regulatory culture around fusion power seems to be shaped, as you suggest, by the long history of failures in that domain; judging by similar situations in other fields, I wouldn't be surprised if no one wanted to admit to putting any credence in it, so that they wouldn't look stupid in case it fails again.

The state of pandemic preparedness does indeed seem like just straight-up government incompetence.

That's a good point, and I like the examples, thanks!

Governments are concerned about and interested in near-term AI. See EU, US, UK, and Chinese regulation and investment. They're maybe about as interested in it as in clean tech and satellites, and more interested than in lab-grown meat.

Transformative AI is several decades away, governments aren't good at planning for possibilities over long time periods. If/when we get closer to transformative capabilities, governments will pay more attention. See: nuclear energy + weapons, bioweapons + biotech, cryptography, cyberweapons, etc etc. 

Jade Leung's thesis is useful on this. So too are Jess Whittlestone's conceptual clarifications of the near-term/long-term distinction (with Carina Prunkl) and of transformative AI (with Ross Gruetzemacher).

What makes you confident that "Transformative AI is several decades away"? Holden estimates "more than a 10% chance we'll see transformative AI within 15 years (by 2036)", based on a variety of reports taking different approaches (that are IMO conservative). Given the magnitude of what is meant by "transformative", governments (and people in general) should really be quite a bit more concerned. As the analogy goes - if you were told that there was a >10% chance of aliens landing on Earth in the next 15 years, then you should really be doing all you can to prepare, as soon as possible!

Governments have trouble responding to things more than a few years away, and even then, only when it's effectively certain. If they had reliable data that aliens were showing up in 10 years, I'd expect them to respond by fighting about it and commissioning studies.

Yep. Watched Don't Look Up last night; can imagine that.

Fictional evidence! And I haven't seen the movie, but expect it to be far too under-nuanced about how government works.

Median estimate is still decades away.  I personally completely agree people should be more concerned.

Median is ~3-4 decades away. I'd call that "a few", rather than "several" (sorry to nitpick, but I think this is important: several implies "no need to worry about it, probably not going to happen in my lifetime", whereas a few implies (for the majority of people) "this is within my lifetime; I should sit up and pay attention.")

The way I sometimes phrase it to people is that I now think it's more urgent than Climate Change (and people understand that Climate Change is getting quite urgent, and is something that will have a big impact within their lifetimes).

Thanks! 
(For those casually browsing, I just want to flag that Haydn works directly in this field, and has much more experience and knowledge in it than I do. I wish it were easier to point this out on the EA Forum.)

This interview with Obama that Allan Dafoe once pointed to is pretty instructive on these questions. On reflection, reasonable government actors see the case; it's just really hard to prioritize given short-run incentives ("maybe in 20 years the next generation will see things coming and deal with it").

My basic model is that government actors are all tied up by the stopping problem. If you ever want to do something good you need to make friends and win the next election. The voters and potential allies are even more short-termist than you. Availability bias explains why people would care about private nuclear weapons. Superintelligence codes as dinner party/dorm room chat. It will sell books, but it's not action relevant.  

"The average age of Members of the House at the beginning of the 117th Congress was 58.4 years; of Senators, 64.3 years."

This is a good point, but I'd flag that there are many departments of the government with different levels of autonomy. It seems easy for me to imagine some special cluster in the military or intelligence departments spending a lot of time on AGI, but so far I don't have evidence of anything like that.

Fair point. First, let me add another piece of info about Congress: "The dominant professions of Members are public service/politics, business, and law."

Now on to your point. 

 

  • How old are the leaders of the military? How many of them know what Python is? What was their major in college? Now ask yourself the same thing about the CIA, NSA, etc. This isn't a rhetorical question; I assume each department will differ. Though there may be a bit of smugness implicit in asking.
  • Conditional on such a cluster existing: How likely do you think it is that it would be declassified? I don't find it that unlikely that the NSA or CIA could be running a program and not speaking about it, and it seems possible to figure this out simply by accounting for where every CS/AI graduate in the US works. I feel less strongly that the military would hide such a project. FWIW, my epistemic confidence is very low for this entire claim; I am not someone who has obsessed over governmental classification and things like that.
  • How many CS PhDs are there in the US government in total? How many master's degrees? How many bachelor's?

I think there is also more to say about the variety of reasons people feel more comfortable giving their input on economic, social, and foreign policy issues (even if they have no business doing so),  which I think could leak into leaders just naturally trending towards dealing with those issues, but I think this is a much more delicate argument that I don't feel comfortable fleshing out right now. 

 

I think aogara's point above is reasonable and mostly true, but I don't think it goes as far as explaining the discrepancy. This is incredibly skewed because of who I associate with (not all of my friends are EAs, though), but anecdotally I think AGI is starting to gain some recognition as a very important issue among people my age (early 20s), specifically those in STEM fields. Not a lot, but certainly more than it is talked about in the mainstream. Let's be real, though: none of my friends will ever be in the military or run for office, nor do I believe they will work for the intelligence agencies. My point is, in addition to age, we have a serious problem with under-representation of STEM in high-up positions and over-representation of lawyers. It would be interesting to test the leaders of various government departments on their level of computer science competency/comprehension.

What do you think it would look like if the US government were minding companies explicitly making AGIs?

I feel like there's a whole lot I could imagine seeing.
Different parts of the government mind a whole lot of things. Here in Berkeley, there are regulations you need to abide by for all sorts of things (often they go too far, in my opinion). I also know of people who got reported to the CIA or FBI for a lot of very minor hacking/IT issues. 

Some quick things:
- Politicians talking about AGI publicly.
- Members of the CIA/NSA attending meetups/conferences around AGI and asking a lot of questions. 
- Government security or military professionals engaging with both longtermists concerned about AGI, and with AI companies working on AGI.
- Early legislation that really calls out AGI or similar general-purpose AI issues.
- Reports from government agencies that go into detail on potential scenarios.
- The hiring of promising AGI people (both technical and policy) into secretive or public government organizations.

There are clearly others around our community who have more expertise here (I'm really an amateur on this topic), so other suggestions are appreciated.

One of EA’s most important and unusual beliefs is that superintelligent AGI is imminently possible. While ideally effective altruism is just an ethical framework that can be paired with any set of empirical beliefs, it is a very important fact that people in this community hold extremely unusual beliefs about the empirical question of AI progress.

Nobody I have ever met outside of the EA sphere seriously believes that superintelligent computer systems could take over the world within decades. I study computer science in school, I work in the field of data science, and everybody I know anticipates progress-as-usual for the foreseeable future. GPT-3 is a cool NLP algorithm, but it doesn’t spell world takeover anytime soon. The stock market would arguably agree, with DeepMind receiving a valuation of only $400M in 2014, though more recent progress within Google and Facebook has not received public financial valuations. The AI Impacts investigation into the history of technological progress revealed just how rare it is for a single innovation to bring decades’ worth of progress on an important metric. Much more likely, in my opinion, is a gradual and progressive acceleration of progress in AI and ML systems, where the 21st century sees a booming Silicon Valley but no clear “takeoff point” of discontinuous progress, and where the supposed impacts of AGI (such as automating most of the labor force or >2xing the global GDP growth rate) do not emerge for a century or centuries.

To be clear, I agree that unprecedented AI progress is possible and important. There are some strong object-level arguments, particularly Ajeya’s OpenPhil analysis of the size of the human brain vs. the size of our biggest computers. These arguments have helped convince influential experts to write books, conduct research, and bring attention to the problem of AGI safety. Perhaps the more persuasive argument is that no matter how slim the chances are, they cannot be disproven, and the impact of such a transformation would be so great that a group of people should be seriously thinking about it. But it shouldn’t be a surprise when other groups do not take the superintelligence revolution seriously, nor should it be a surprise if the revolution does not come this century.

Epistemic Status: Possibly overstated.

EDIT: Here’s a better summary of my views. https://forum.effectivealtruism.org/posts/7ZZpWPq5iqkLMmt25/aogara-s-shortform?commentId=xZFEv84LGqbRFwt4G

Nobody I have ever met outside of the EA sphere seriously believes that superintelligent computer systems could take over the world within decades.

Yep, this roughly matches my impressions. I think very, very few people really believe that superintelligent systems will be that influential.

One notable exception, of course, would be the AGI companies themselves. I'm fairly confident that people in these groups really do think they have a good shot at making AGI, and that it will be transformative.

This would be an example of Response 1 that I listed. 

As to the question, "Since everyone besides AGI companies and select longtermists doesn't seem to think this is an issue, maybe it isn't an issue?": I'm specifically not that interested in discussing it here. That sort of question is just very different and gets discussed in depth elsewhere.

But I think the discrepancy is interesting to understand, to better understand why society at large is doing what it's doing.

Agreed, and I don't have any specific explanation of why government is unconcerned with dramatic progress in AI. As usual, government seems just a bit slow to catch up to the cutting edge of technological development and academic thought. Charles_Guthmann's point on the ages of people in government seems relevant. Appreciate your response though, I wasn't sure if others had the same perceptions.

I think very, very few people really believe that superintelligent systems will be that influential.

 

A lot of prominent scientists, technologists and intellectuals outside of EA have warned about advanced artificial intelligence too. Stephen Hawking, Elon Musk, Bill Gates, Sam Harris, everyone on this open letter back in 2015 etc.

I agree that the number of people really concerned about this is strikingly small given the emphasis longtermist EAs put on it. But I think these many counter-examples warn us that it's not just EAs and the AGI labs being overconfident or out of left field. 

Counterpoint on market sentiment: Anthropic raised a $124M Series A with few staff and no public-facing product. The money comes from a handful of individuals, including Jaan Tallinn and Eric Schmidt, which makes unusual beliefs more likely to govern the bid (think unilateralist’s curse). But this seems like it has to be a financial bet on the possibility of incredible AI progress.

Separate question: Anthropic seems to be composed largely of people from OpenAI, another well-funded and socially-minded AGI company. Why did they leave OpenAI?

I think market sentiment is a bit complicated. Very few investors are talking about AGI, but organizations like OpenAI still seem to think that talking about AGI is good marketing for them (for talent, and I'm sure for money, later on).  

I think most of the Anthropic investment was from people close to effective altruism: Jaan Tallinn, Dustin Moskovitz, and Center for Emerging Risk Research, for example. 
https://www.anthropic.com/news/announcement

On why those people left OpenAI, I'm not at all an expert here. I think it's common for different teams to have different ways of seeing things, and wanting independence. In this case, I think there weren't all too many reasons to stay part of the same org (it's easy enough to get funding independently, as is evidenced by the Anthropic funding). I guess if Anthropic stayed close to OpenAI, it could have been part of scaling GPT-3 and similar, but I'm not sure how valuable that was to the rest of the team (especially in comparison to having more freedom to do things their own ways). I'd note that right now, there seem to be several more technical alignment focused people at Anthropic.

You're modeling government as a single coherent actor - and I think that's the most critical mistake. That's not to say they are incompetent, just that governments aren't actually looking at what companies do to decide how to respond. (And many would say this is a feature, not a bug!)

Sorry if my post made it seem that way, but I don't feel like I've been thinking of it that way. In fact, it's sort of worse if it's not a single actor; many different departments could have done something about this, but none of them seemed to take public action.

I'm not sure how to understand your second sentence exactly. It seems pretty different from your first sentence, from what I can tell?

A multi-actor system is constrained in ways that a group of single actors are not. Individual agencies can't do their own thing publicly, and you can't see what they are doing privately.

For the agencies that do pay attention, they can't publicly respond - and the lack of public monitoring and response by government agencies which can slap new regulations on individual companies or individuals is what separates a liberal state from a dictatorship. If US DOD notices something, they really, really aren't allowed to respond publicly, especially in ways that would be seen as trying to interfere with business or domestic policy. If NSA or the FBI notices something, they can only enforce extant laws, and are limited in their legal ability. And agencies which can respond, like the FTC, are in fact already working on drafting regulations for relevant applications of AI. (And yes, Congress could act to respond, but it's really fundamentally broken.)

Ah, that’s really good to know… and kind of depressing. Thanks so much.

Are you sure that they don't mind? I would be surprised if intelligence agencies weren't keeping some track of the technical capabilities of foreign entities, and I'd be unsurprised if they were also keeping track of domestic entities as well. If they thought we were six months away from transformative AGI, they could nationalize it or shut it down.

Are you sure that they don't mind?

I don't have any inside information on the government; it's of course possible there are secretive programs somewhere.

"If they thought we were six months away from transformative AGI, they could nationalize it or shut it down."
Agreed, in theory. In practice, many different parts of the government think differently. It seems very likely that one part will think "there might be a 5% chance we're six months away from transformative AGI", but the parts that could take action just wouldn't.
 

AGI concerns are outside the Overton window and are often considered actively harmful. The narrative "The whole debate about existential risks AI poses to humanity in the far off future is a huge distraction" (as illustrated in this post: https://www.skynettoday.com/editorials/dont-worry-agi/) is quite widespread in the AI policy community.

In this situation, actors who raise AGI concerns thus additionally risk being portrayed as working against the public interest.

You seem to have a small formatting mistake in the link; this should work, though:
https://www.skynettoday.com/editorials/dont-worry-agi/

My guess is that this site focuses on the prosaic, mainstream sense of AI harms, e.g. automation, privacy, competition, what Acemoglu means here.

 

By the way, of the content on the webpage, "advancing trustworthy AI" seems like it could be the most relevant to AGI/ASI risk. But the link is broken, which is really on the nose!

 

The link for the trustworthy AI section wasn't broken for me? https://www.ai.gov/strategic-pillars/advancing-trustworthy-ai/#Use-of-AI-by-the-Federal-Government

But unsurprisingly, it mostly seems like they are talking about bigoted algorithms and not the singularity.

However it did link this:

https://www.nscai.gov/

Find their abridged 2021 report here:

https://reports.nscai.gov/final-report/table-of-contents/ 

https://reports.nscai.gov/final-report/chapter-7/ 

Personally, this looked more promising than anything else I had seen. There was a section titled "adversarial AI", which I thought might be about AGI, but upon further reading, it wasn't. So this appears to also be in the vein of what Ozzie is saying. However, it seems they have events semi-frequently; I think someone from EA should really try to go if they are allowed. The second link is the closest chapter in the report to AGI stuff, if anyone wants to take a look - again, though, it's not that impressive.

Also, I found this: https://www.dod-coe4ai-ml.org/leadership-members

But I can't really tell if this is the DOD's org or Howard University's; it seems like they only hire Howard professors and students, so probably the latter.

Closest paper I could find from them to anything AGI related: https://www.techrxiv.org/articles/preprint/Recent_Advances_in_Trustworthy_Explainable_Artificial_Intelligence_Status_Challenges_and_Perspectives/17054396/1

Yep; the US government is definitely taking some actions to progress AI development in general.

Its work to promote AI safety, and particularly to regulate or at least discuss what to do about AGI, seems to be much more lacking.

There are two governance-related proposals in the second EA megaprojects thread. One is to create a really large EA-oriented think tank. The other is essentially EA lobbying, i.e. to put major funding behind political parties and candidates who agree to take EA concerns seriously.

Making one of these megaprojects a reality could get officials in governments to take AGI more seriously and/or get it more into the mainstream political discourse.

Andrew Yang made transformative AI a fairly central part of his 2020 presidential campaign. To the OP's point though, I don't recall him raising any alarms about the existential risks of AGI.

One possibility is that either the plausibility of AGI being developed soon is smaller than we think, or the danger it poses is smaller than we think. This is far from the only explanation, though.

Yea; I think this fits into response 1, "Companies saying they are making AGI are ridiculously overconfident". 

I think it's pretty clear that almost everyone outside of EAs and AGI developers is very skeptical of AGI. Very arguably, they're the ones who are correct. (Personally, I'm in between; I just mean to point out the discrepancy.)

I think governments are not aware of the stop-button problem, and they think that in case of emergency they can just shut down the company or the servers running the AGI using force. That's what happened in the past with digital currencies (which Jackson Wagner mentions here as a plausible member of the same reference class as AGI for governments) before Bitcoin - they either failed on their own or, if successful, were shut down by the government (https://en.wikipedia.org/wiki/Digital_currency#History).

Fair point. Honestly, any of them would suffice. My impression is that all are quiet on these issues right now.

I would have expected specific agencies or the military to care, in particular. 

National governments seem incredibly well prepared for possible global wars 30 to 100 years out. (Or at least, they spend a lot of attention on it.) They also generally have financial infrastructure that lasts for a very long time, and similar.

A whole lot of national government strategy seems fairly long-term to me.

I don't have any thorough knowledge of military history but this is not at all my impression:

  • France was extremely underprepared for both WW1 and WW2.
  • Nazi Germany didn't really have any grand war plan and was lucky to stumble into a blitzkrieg and open another front by launching Barbarossa out of severe economic necessities.
  • Israel seemed to be underprepared for Operation Badr, the initially successful Egyptian offensive during Yom Kippur War. (Well, not a global war but definitely existential for Israel.)

To be clear, my main point was that they spend a lot of attention/work on the issue, not that they're doing a highly competent job. The US spends almost $800B per year on the military, a whole lot of which is just there to prepare for future potential conflicts. Other countries of course also have large military presences, even if they don't have active conflicts.

My impression is that a lot of this money is being spent highly inefficiently, but it's definitely being spent. 

https://www.statista.com/statistics/272473/us-military-spending-from-2000-to-2012/

On "incredibly well prepared", I just meant, "well for what I could expect from the government". The US military has 600 international bases, and the US does lots of diplomacy in order to better secure its longstanding military strategic position. Other large governments do similar diplomatic measures. 

https://en.wikipedia.org/wiki/List_of_United_States_military_bases

I think it's very easy to find flaws in these systems, but they seem more important to said governments than the vast majority of their priorities, and I think they're correspondingly often taking fairly reasonable actions.

I have a lot of unnecessary knowledge about military history and I don't agree with these examples:

France was extremely underprepared for both WW1 and WW2.

In both WW1 and WW2, France was formidable. In WW2, many people thought it was crazy for the Germans to go to war with or attack France as soon as they did. It was astonishing to many to learn of the speed of Germany's success.

Nazi Germany didn't really have any grand war plan and was lucky to stumble into a blitzkrieg and open another front by launching Barbarossa out of severe economic necessities.

Nazi Germany had a very solid war plan and position before Barbarossa. No one, including the Allies, thought the Soviets would hold out. It's unclear what "severe economic necessities" means, but it does suggest some intense, proximate motivation for an attack. Yet the Soviets were delivering trainloads of precious raw materials, rare metals, and oil under a lavish trade deal, right up to the last hours before the attack.

Probably a better example is Japan, which had a very poor position in natural resources and felt forced to attack the US as a result of the US oil embargo.

Israel seemed to be underprepared for Operation Badr, the initially successful Egyptian offensive during Yom Kippur War. (Well, not a global war but definitely existential for Israel.)

I don't think the Egyptians' initial success, which was reversed by the end of the conflict, is related to this point. If anything, Israel's supreme, carefully maintained military position in the region for half a century is evidence of the original point. The Israeli nuclear program, which definitely would have played a role if things had become "existential" in 1973, is probably a good example of 40-100 years of planning.
 

I think my responses above don't touch on the main issue. 

The more direct reply is that ex-post realizations of losing or winning aren't a good argument that good military planning or strategy is ineffectual or isn't extensively planned 40 years in advance. It seems that governments spend enormous effort trying to ensure success. The fact that some get stomped doesn't seem a surprising outcome for a complex adversarial conflict.


 

Thank you; really appreciate this comment! Short on time, so briefly:

  • I will basically affirm that you are right about Israel (overall) being a supporting example;
  • I still disagree about WW2 and was aware of the things you mentioned. I think I would need to think more, but at the least my initial comment wasn't appropriately qualified. Further, severe economic necessities (a constant shortage of metals/grains/oil, which the USSR covered only partially) might make it self-defeating.

Methodologically, yeah, ex-post cherry-picking is bad, as most of the successes are unseen (when war actually doesn't happen, like between NATO and the USSR/China). But enormous trying isn't in itself supportive, as not all bloated, prestigious bureaucracies are doing a reasonable job.
