Current AI projects will likely accelerate present trends toward a far higher likelihood of existential catastrophe. The risks are multiplied by the many uncoordinated global AI projects and their varied experimental applications: genetic engineering in less scientifically fastidious jurisdictions in particular, but also the many social, political, and military uses of misaligned AI. AI safety work may be well-intentioned, but it is likely irrelevant; these genies will not be put back into their bottles. Optimistically, since we have survived our existential risks for quite some time, we may yet find a means to pass the Great Filter suggested by Fermi's Paradox.
Did the RAND scientists never entertain the possibility that they were being used by the industrialists who stood to benefit from every "sprint", and who may have had a hand in supplying the raw data behind the "mistaken" assessments of enemy capabilities? Perhaps calling these assessments "mistakes" is a face-saving way of admitting to having been manipulated by people less technically brilliant?