I'm glad you agree! For the sake of controversy, I'll add that I'm not entirely sure that scenario is out of consideration from an EV point of view, firstly because the exhaust will have a lot of energy and I'm not sure what will happen to it, and secondly because I'm open to a "diminishing returns" model of population ethics under which the forgone computational capacity does not have an overwhelmingly higher value.
On singletons, I think the distinction between "single agent" and "multiple agents" is more of a difference in how we imagine a system than an actual difference. Human civilization is divided into minds, with a high level of communication and coordination within each mind and a significantly lower level between minds. This pattern is an accident of evolutionary history, and if technological progress continues I doubt it will persist into the distant future, but I also don't think there will be perfect communication and coordination between the parts of a future civilization either. Even within a single human mind the communication and coordination are imperfect.
I guess. I don't like the concept of a singleton; I prefer to think that describing a specific failure mode gives a more precise model of exactly what kind of coordination is needed to prevent it. Also, we definitely shouldn't assume a coordinated colonization will follow the Armstrong-Sandberg method. I'm also motivated by a "lamppost approach" to prediction: this model of the future has a lot of details that I think could be worked out with a great deal of mathematical precision, which makes it a good case study. Finally, if the necessary kind of coordination is rare, then even if it's not worth it from an EV view to plan for our civilization to end up like this, we should still expect alien civilizations to look like this.
> historical cases are earlier than would be relevant directly
Practically all previous pandemics were far enough back in history that their applicability is unclear. I think it's unfair to discount your example on those grounds, because every other positive or negative example can be discounted the same way.
I've just examined the two Wikipedia articles you link to and I don't think this is an independent discovery. The race between Einstein and Hilbert was over finding the Einstein field equations, which put general relativity in its finalized form. However, the original impetus for developing general relativity was Einstein's proposed Equivalence Principle in 1907, and in 1913 he and Grossmann published the proposal that it would involve spacetime being curved (with a pseudo-Riemannian metric). Certainly after 1913 general relativity was inevitable, and perhaps it was inevitable after 1907, but it still all depended on Einstein's first ideas.
That's a far cry from saying that these ideas wouldn't have been discovered until the 1970s, which I'm basing mainly on hearsay and which I confess is much more dubious.
I don't recall the source, but I remember hearing from a physicist that if Einstein hadn't discovered the theory of special relativity it would surely have been discovered by other scientists at the time, but if he hadn't discovered the theory of general relativity it wouldn't have been discovered until the 1970s. More specifically, general relativity has an approximation known as linearized gravity which suffices to explain most of the experimental anomalies of Newtonian gravity but doesn't contain the concept that spacetime is curved, and that could have been discovered instead.
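For concreteness, here is the standard textbook form of linearized gravity (my own summary, not something from the source I half-remember): one writes the metric as flat spacetime plus a small perturbation, $g_{\mu\nu} = \eta_{\mu\nu} + h_{\mu\nu}$ with $|h_{\mu\nu}| \ll 1$, and keeps only terms of first order in $h_{\mu\nu}$. In the Lorenz gauge the field equations then reduce to the wave equation $\Box \bar{h}_{\mu\nu} = -\frac{16\pi G}{c^4} T_{\mu\nu}$, where $\bar{h}_{\mu\nu} = h_{\mu\nu} - \tfrac{1}{2}\eta_{\mu\nu} h$. This reproduces most weak-field predictions while treating $h_{\mu\nu}$ as just another field on flat spacetime, rather than as curvature.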
I'm puzzled by Mallatt's response to the last question about consciousness in computer systems. It appears to me that he and Feinberg are applying a double standard when judging the consciousness of computer programs. I don't know what he has in mind when he talks about the enormous complexity of consciousness, but based on other parts of the interview we can see some of the diagnostic criteria Mallatt uses to judge consciousness in practice. These include behavioral tests, such as going back to places where an animal saw food before, tending wounds, and hiding when injured, as well as structural tests, such as multiple levels of intermediate processing between sensory input and motor output. Existing AIs already pass the structural test I listed, and I believe they could pass the behavioral tests given a simple virtual environment and reward function. I don't see a principled way of including the simplest types of animal consciousness while excluding any form of computer consciousness.
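To make that concrete, here is a minimal sketch, entirely my own illustration and not something from the interview, of the kind of "virtual environment plus reward function" setup I mean: a tabular Q-learning agent in a toy gridworld that learns to return to the cell where it found food, which is the first behavioral criterion listed above. All names and parameters are placeholders.

```python
import random

GRID = 5                 # 5x5 gridworld
FOOD = (4, 4)            # fixed food location the agent must learn to revisit
START = (0, 0)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1
q = {}                   # Q-table: (state, action_index) -> value


def step(state, action):
    """Apply an action, clipping to the grid; reward 1 only at the food cell."""
    x = min(max(state[0] + action[0], 0), GRID - 1)
    y = min(max(state[1] + action[1], 0), GRID - 1)
    new_state = (x, y)
    reward = 1.0 if new_state == FOOD else 0.0
    return new_state, reward, new_state == FOOD


def choose_action(state):
    """Epsilon-greedy choice over the Q-table."""
    if random.random() < EPSILON:
        return random.randrange(len(ACTIONS))
    values = [q.get((state, a), 0.0) for a in range(len(ACTIONS))]
    return values.index(max(values))


for episode in range(500):
    state = START
    for _ in range(100):                       # cap episode length
        a = choose_action(state)
        new_state, reward, done = step(state, ACTIONS[a])
        best_next = max(q.get((new_state, b), 0.0) for b in range(len(ACTIONS)))
        old = q.get((state, a), 0.0)
        q[(state, a)] = old + ALPHA * (reward + GAMMA * best_next - old)
        state = new_state
        if done:
            break
```

After a few hundred episodes the greedy policy reliably navigates back to the food square from the start, and with extra reward terms one could train analogues of "tending wounds" or "hiding when injured" just as easily, which is why I don't think the behavioral tests on their own can separate animals from programs.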
On the second paragraph: making your point succinctly is a valuable skill that is also important for anti-debates. A key part of this skill is understanding which parts of your argument are crucial for your conclusion and which merit less attention. The bias towards quick arguments and the bandwagon effect also exist in natural conversation, and I'm not sure whether they're any worse in competitive debating. I have little experience with competitive debating, so I can't make the comparison and am just arguing from how this should work in principle.
On the other hand, in natural conversation you want to economize on both the audience's time and its cognitive resources, whereas competitive debate weights minimizing time much more heavily, which distorts how people learn succinctness from it. Also, the time constraint in competitive debate may be much more severe than the mental-resource constraint in the most productive natural conversations, so many important skills that are only exercised in long-form conversation are not practiced at all.
I reached this article through a link that already revealed it was about self-care, but I didn't notice the "self-care" in the title, so I expected the rhetoric to be a bait-and-switch: starting by talking about how aiming for the minimum in directly impact-related things is bad, and then switching to arguing that the same reasoning applies to self-care.