One piece of advice I gave to EAs of various stripes in early 2021 was: do everything you can to make the government sane around biorisk, in the wake of the COVID pandemic, because this is a practice run for AI.
I said things like: if you can't get the world to coordinate on banning gain-of-function research, in the wake of a trillions-of-dollars tens-of-millions-of-lives pandemic "warning shot", then you're not going to get coordination in the much harder case of AI research.
Biolabs are often publicly funded (rather than industry-funded). The economic forces arrayed behind this recklessly foolish and impotent research consist of "half a dozen researchers thinking it's cool and might be helpful". (Meanwhile, the work that would actually be helpful, such as removing needless bureaucracy around vaccines and investing in vaccine infrastructure, languishes.) Compared to the problem of AI, where the economic forces arrayed in favor of "ignore safety and rush ahead" are enormous and the argument for expecting catastrophe is much murkier and more abstract, the problem of getting a sane civilizational response to pandemics (in the wake of a literal pandemic!) is ridiculously easier.
And—despite valiant effort!—we've been able to do approximately nothing.
We're not anywhere near global bans on gain-of-function research (or whatever equivalent-or-better feats of coordination the people who actually know what they're talking about on biorisk would tell you to target instead).
The government continues to fund research that is actively making things worse, while failing to put any serious funding towards the stuff that might actually help.
I think this sort of evidence has updated a variety of people towards my position; a variety of others have not updated. As I understand the counterarguments (from a few different conversations), there are two main reasons people see this evidence and continue to hold out hope for a sane government response:
1. Perhaps the sorts of government interventions needed to make AI go well are not all that large, and not that precise.
I confess I don't really understand this view. Perhaps the idea is that AI is likely to go well by default, and all the government needs to do is, like, not use antitrust law to break up some corporation that's doing a really good job at AI alignment just before they succeed? Or perhaps the idea is that AI is likely to go well so long as it's not produced first by an authoritarian regime, and working against authoritarian regimes is something governments are in fact good at?
I'm not sure. I doubt I can pass the ideological Turing test of someone who believes this.
2. Perhaps the ability to cause governance to be sane on some issue is tied very directly to the seniority of the government officials who are advocating for sanity.
EAs only started trying to affect pandemic policy a few years ago, and aren't very old or recognized among the cacophony of advisors. But if another pandemic hit in 20 years, the sane EA-ish advisors would be much more senior, and a lot more would get done. Similarly, if AI hits in 20 years, sane EA-ish advisors will be much more senior by then. The observation that the government has not responded sanely to pandemic near-misses is potentially screened off by the inexperience of the EAs advising governance.
I have some sympathy for the second view, although I'm skeptical that sane advisors have significant real impact. I'd love a way to test it as decisively as we've tested the "government (in its current form) responds appropriately to warning shots" hypothesis.
On my own models, the "don't worry, people will wake up as the cliff-edge comes more clearly into view" hypothesis has quite a lot of work to do. In particular, I don't think it's a very defensible position in isolation anymore. The claim "we never needed government support anyway" is defensible; but if you want to argue that we do need government support but (fortunately) governments will start behaving more reasonably after a warning shot, it seems to me like these days you have to pair that with an argument about why you expect the voices of reason to be so much louder and more effectual in 2041 than they were in 2021.
(Which is then subject to a bunch of the usual skepticism that applies to arguments of the form "surely my political party will become popular, claim power, and implement policies I like".)
See also: the law of continued failure, and Rob Bensinger's thoughts on the topic.
Nate thinks we should place less of our hope and focus on governments, and more of it on corporations; but corporations obviously aren't perfect rational actors either.
This isn't well predicted by "perfect rational actor or bust", but it's well predicted by "Nate thinks the problem is at a certain (high) level of difficulty, and the best major governments are a lot further away from clearing that difficulty bar than the best corporations are".
From Nate's perspective, AGI is a much harder problem than anything governments have achieved in the past (including the good aspects of our response to nuclear, Y2K, 9/11, and asteroids). In order to put a lot of our hope in a sane government response, we would need clear signs that EA intervention can cause at least one government to perform better than any government ever has in history.
COVID's relevance here isn't "a-ha, governments failing on COVID proves that they never do anything right, and therefore won't do AGI right"; it's "we plausibly won't get any more opportunities (that are at least this analogous to AGI risk) to test the claim that EAs can make a government perform dramatically better than they ever have before; so we should update on what data we have (insofar as we even need more data for such an overdetermined claim), and pin less of our hopes on government outperformance".
If EAs can't even get governments to perform as well as they have on other problems, in the face of a biorisk warning shot, then we've failed much more dramatically than if we'd merely succeeded in making a government's response to COVID as sane as its response to the Y2K bug or the collapse of the Soviet Union.
(This doesn't mean that we should totally give up on trying to improve government responses — marginal gains might help in some ways, and unprecedented things do happen sometimes. But we should pin less of our hope on it, and treat it as a larger advantage of a plan if the plan doesn't require gov sanity as a point of failure.)
Are there other things you think show Nate is misunderstanding relevant facts about gov that aren't explained by disagreements like "Nate thinks the problem is harder than you do"?