
One piece of advice I gave to EAs of various stripes in early 2021 was: do everything you can to make the government sane around biorisk, in the wake of the COVID pandemic, because this is a practice-run for AI.

I said things like: if you can't get the world to coordinate on banning gain-of-function research, in the wake of a trillions-of-dollars tens-of-millions-of-lives pandemic "warning shot", then you're not going to get coordination in the much harder case of AI research.

Biolabs are often publicly funded (rather than industry-funded). The economic forces arrayed behind this recklessly foolish and impotent research consist of “half-a-dozen researchers thinking it’s cool and might be helpful”. (While the work that would actually be helpful—such as removing needless bureaucracy around vaccines and investing in vaccine infrastructure—languishes.) Compared to the problem of AI—where the economic forces arrayed in favor of “ignore safety and rush ahead” are enormous and the argument for expecting catastrophe much murkier and more abstract—the problem of getting a sane civilizational response to pandemics (in the wake of a literal pandemic!) is ridiculously easier.

And—despite valiant effort!—we've been able to do approximately nothing.

We're not anywhere near global bans on gain-of-function research (or equivalent but better feats of coordination that the people who actually know what they're talking about when it comes to biorisk would tell you are better targets than gain-of-function research).

The government continues to fund research that is actively making things worse, while failing to put any serious funding towards the stuff that might actually help.

I think this sort of evidence has updated a variety of people towards my position. I think that a variety of others have not updated. As I understand the counter-arguments (from a few different conversations), there are two main reasons that people see this evidence and continue to hold out hope for sane government response:

 

1. Perhaps the sorts of government interventions needed to make AI go well are not all that large, and not that precise.

I confess I don't really understand this view. Perhaps the idea is that AI is likely to go well by default, and all the government needs to do is, like, not use anti-trust law to break up some corporation that's doing a really good job at AI alignment just before they succeed? Or perhaps the idea is that AI is likely to go well so long as it's not produced first by an authoritarian regime, and working against authoritarian regimes is something governments are in fact good at?

I'm not sure. I doubt I can pass the ideological Turing test of someone who believes this.

 

2. Perhaps the ability to cause governance to be sane on some issue is tied very directly to the seniority of the government officials advising sanity.

EAs only started trying to affect pandemic policy a few years ago, and aren't very old or recognized among the cacophony of advisors. But if another pandemic hit in 20 years, the sane EA-ish advisors would be much more senior, and a lot more would get done. Similarly, if AI hits in 20 years, sane EA-ish advisors will be much more senior by then. The observation that the government has not responded sanely to pandemic near-misses, is potentially screened-off by the inexperience of EAs advising governance.

I have some sympathy for the second view, although I'm skeptical that sane advisors have significant real impact. I'd love a way to test it as decisively as we've tested the "government (in its current form) responds appropriately to warning shots" hypotheses.

On my own models, the "don't worry, people will wake up as the cliff-edge comes more clearly into view" hypothesis has quite a lot of work to do. In particular, I don't think it's a very defensible position in isolation anymore. The claim "we never needed government support anyway" is defensible; but if you want to argue that we do need government support but (fortunately) governments will start behaving more reasonably after a warning shot, it seems to me like these days you have to pair that with an argument about why you expect the voices of reason to be so much louder and more effectual in 2041 than they were in 2021.

(Which is then subject to a bunch of the usual skepticism that applies to arguments of the form "surely my political party will become popular, claim power, and implement policies I like".)

 

See also: the law of continued failure, and Rob Bensinger's thoughts on the topic.

Comments

I have some sympathy for the second view, although I'm skeptical that sane advisors have significant real impact. I'd love a way to test it as decisively as we've tested the "government (in its current form) responds appropriately to warning shots" hypotheses.

On my own models, the "don't worry, people will wake up as the cliff-edge comes more clearly into view" hypothesis has quite a lot of work to do. In particular, I don't think it's a very defensible position in isolation anymore....if you want to argue that we do need government support but (fortunately) governments will start behaving more reasonably after a warning shot, it seems to me like these days you have to pair that with an argument about why you expect the voices of reason to be so much louder and more effectual in 2041 than they were in 2021.

(Which is then subject to a bunch of the usual skepticism that applies to arguments of the form "surely my political party will become popular, claim power, and implement policies I like".)

I think the second view is basically correct for policy in general, although I don't have a strong view yet of how it applies to AI governance specifically. One thing that's become clear to me as I've gotten more involved in institution-focused work and research is that large governments and other similarly impactful organizations are huge, sprawling social organisms, such that I think EAs simultaneously underestimate and overestimate the amount of influence that's possible in those settings. The more optimistic among us tend to get too excited about isolated interventions (e.g., electing a committed EA to Congress, getting a voting reform passed in one jurisdiction) that, even if successful, would only address a small part of the problem. On the other hand, skeptics see the inherent complexity and failures of past efforts and conclude that policy/advocacy/improving institutions is fundamentally hopeless, neglecting to appreciate that critical decisions by governments are, at the end of the day, made by real people with friends and colleagues and reading habits just like anyone else.

Viewed through that lens, my opinion, and one that I think is shared by people with experience in this domain, is that the reason we have not seen more success influencing large-scale bureaucratic systems is that we have been under-resourcing it as a community. By "under-resourcing it" I don't just mean in terms of money, because as the Flynn campaign showed us it's easy to throw millions of dollars at a solution that hits rapidly diminishing returns. I mean that we have not been investing enough in strategic clarity, in a broad diversity of approaches that complement one another and collectively increase the chances of success, and in the patience to see those approaches through. In the policy world outside of EA, activists consider it normal to have a 6-10 year timeline to get significant legislation or reforms enacted, with the full expectation that there will be many failed efforts along the way. But reforms do happen -- just look at the success of the YIMBY movement, which Matt Yglesias wrote about today, or recent legislation to allow Medicare to negotiate prescription drug prices, which was in no small part the result of an 8-year, $100M campaign by Arnold Ventures.

Progress in the institutional sphere is not linear. It is indeed disappointing that the United States was not able to get a pandemic preparedness bill passed in the wake of COVID, or that the NIH is still funding ill-advised research. But we should not confuse this with the claim that we've been able to do "approximately nothing." The overall trend for EA and longtermist ideas being taken seriously at increasingly senior levels over the past couple of years is strongly positive. Some of the diverse factors include the launch of the Future Fund and the emergence of SBF as a key political donor; the publication of Will's book and the resulting book tour; the networking among high-placed government officials by EA-focused or -influenced organizations such as Open Philanthropy, CSET, CLTR, the Simon Institute, Metaculus, fp21, Schmidt Futures, and more; and the natural emergence of the initial cohort of EA leaders into the middle third of their careers. Just recently, I had one senior person tell me that Longview Philanthropy's hiring of Carl Robichaud, a nuclear security grantmaker with 20 years of experience, is what got them to pay attention to EA for the first time. Each of these, by itself, is not enough to make a difference, and judged on its own terms will look like a failure. But all of it combined is what creates the possibility that more can be accomplished the next time around, and all of the time in between.

"I think the second view is basically correct for policy in general, although I don't have a strong view yet of how it applies to AI governance specifically. One thing that's become clear to me as I've gotten more involved in institution-focused work and research is that large governments and other similarly impactful organizations are huge, sprawling social organisms, such that I think EAs simultaneously underestimate and overestimate the amount of influence that's possible in those settings."

 

This is a problem I've spoken often about, and I'm currently writing an essay on for this forum based on some research I co-authored. 

People wildly underestimate how hard it is not only to pass governance measures, but to make sure they are abided by, and to balance the various stakeholders that are required. The AI Governance field has a massive sociological, socio-legal, and even ops-experience gap that means a lot of very good policy and governance ideas die in their infancy because no one who wrote them has any idea how to enact them feasibly. My PhD is on the governance end of this and I do a bunch of work within government AI policy, and I see a lot of very good governance pitches go splat against the complex, ever-shifting beast that is the human organisation purely because the researchers never thought to consult a sociologist, or incorporate any socio-legal research methods.

And—despite valiant effort!—we've been able to do approximately nothing.

Why not?

I apologize for an amateur question but: what all have we tried and why has it failed?

It's possible there's a more comprehensive writeup somewhere, but I can offer two data points regarding the removal of $30B in pandemic preparedness funding that was originally part of Biden's Build Back Better initiative (which ultimately evolved into the Inflation Reduction Act):

  • I had an opportunity to speak earlier this summer with a former senior official in the Biden administration who was one of the main liaisons between the White House and Congress in 2021 when these negotiations were taking place. According to this person, they couldn't fight effectively for the pandemic preparedness funding because it was not something that representatives' constituents were demanding.
  • During his presentation at EA Global DC a few weeks ago, Gabe Bankman-Fried from Guarding Against Pandemics said that Democratic leaders in Congress had polled Senators and Representatives about their top three issues as Build Back Better was being negotiated in order to get a sense for what could be cut without incurring political backlash. Apparently few to no members named pandemic preparedness as one of their top three. (I'm paraphrasing from memory here, so may have gotten a detail or two wrong.)

The obvious takeaway here is that there wasn't enough attention to motivating grassroots support for this funding, but to be clear I don't think that is always the bottleneck -- it just seems to have been in this particular case.

I also think it's true that if the administration had wanted to, it probably could have put a bigger thumb on the scale to pressure Congressional leaders to keep the funding. Which suggests that the pro-preparedness lobby was well-connected enough within the administration to get the funding on the agenda, but not powerful enough to protect it from competing interests.

Surely grassroots support for pandemic preparedness wouldn't be too hard to get, would it? Is anyone working on this? Should someone work on this?

I'm not aware of anyone working on it really seriously!

I think the Pandemic Prevention Network, formerly No More Pandemics, is active in this space. (Definitely in the UK, maybe some work in the US?) More info on them from:

On the other hand, as of October 23, 2022, this page on their site states "Our operations are currently on hold," so maybe they're not active at the moment.

I don’t follow US pandemic policy closely, but wasn’t some $bn (albeit much less than $30bn) still approved for pandemic preparedness, and isn't more still being discussed? (A very quick google points to $0.5b here and $2b here, etc., and I expect there is more.) If so, that seems like a really significant win.

Also, your reply was about government, not about EA or adjacent organisations. I am not sure anyone in this post/thread has given any evidence of a "valiant effort" yet, such as listing campaigns run or even policy papers written. The only post-COVID policy work I know of (in the UK, see comment below) seemed very successful, and I am not sure it makes sense to update against "making the government sane" without understanding what the unsuccessful campaigns have been. (Maybe also Guarding Against Pandemics: are they doing stuff that people feel ought to have had an impact by now, and has it?)

As opposed to speaking with Congressmen, is "prepare a scientific report and meet with the NIH director/his advisors" an at-all plausible mechanism for shutting down the specific research grant Soares linked?

Or if not, becoming NIH peer reviewers?

This post makes the case that warning shots won't change the picture in policy much, but I could imagine a world where some warning shot makes the leading AI labs decide to focus more on safety, or agree to slow down their deployment, without policy change occurring. Maybe this could buy a couple of years' time for safety researchers?

This isn't a well-developed thought, just something that came to mind while reading.

To be honest, Nate’s analysis about the hope for government action sometimes comes across as if it’s from someone who never studied political economy or other poli-sci topics, assumed governments would act rationally, and then concluded governments are hopeless when that assumption turned out to be flawed.

If you started out with the assumption that governments are rational and fast-acting, then yes, you obviously need to update away from that in light of the lack of COVID response (and many other examples predating COVID). And yes, you do need to have some plausible causal chains for success.

But COVID definitely shouldn’t be taken as proof of hopelessness (or of other pessimistic claims like “Warning Shots Probably Wouldn't Change The Picture Much”), if only because of examples where government did take sensible or at least strong actions in response to threats/events (sometimes even before they occurred). See for example: 9/11, Y2K, the Nunn-Lugar Cooperative Threat Reduction program, the asteroid tracking system, etc. Some of these have even been analyzed in posts here on the EA Forum! (It's plausibly also worth learning about MADD: Mothers Against Drunk Driving.)

Ultimately, factors such as “Probability of affecting the decision-makers personally,” “existing options for response,” “temporal sharpness of harm,” “good political narratives,” “influential advisors,” etc. matter. Are those things guaranteed for AI? No, but I don’t think this COVID situation is a good/sufficient reason to assume that we can’t act rationally in response to a clear warning shot for AI.

Models of success should reflect this uncertainty clearly in their reasoning/modeling, in case later analysis shows this to be false (i.e., warning shots for AI probably won’t matter).

To be honest, Nate’s analysis about the hope for government action sometimes comes across as if it’s from someone who never studied political economy or other poli-sci topics, assumed governments would act rationally, and then concluded governments are hopeless when that assumption turned out to be flawed.

Nate thinks we should place less of our hope and focus on governments, and more of it on corporations; but corporations obviously aren't perfect rational actors either.

This isn't well predicted by "perfect rational actor or bust", but it's well predicted by "Nate thinks the problem is at a certain (high) level of difficulty, and the best major governments are a lot further away from clearing that difficulty bar than the best corporations are".

From Nate's perspective, AGI is a much harder problem than anything governments have achieved in the past (including the good aspects of our response to nuclear, Y2K, 9/11, and asteroids). In order to put a lot of our hope in sane government response, there should be clear signs that EA intervention can cause at least one government to perform better than any government ever has in history.

COVID's relevance here isn't "a-ha, governments failing on COVID proves that they never do anything right, and therefore won't do AGI right"; it's "we plausibly won't get any more opportunities (that are at least this analogous to AGI risk) to test the claim that EAs can make a government perform dramatically better than they ever have before; so we should update on what data we have (insofar as we even need more data for such an overdetermined claim), and pin less of our hopes on government outperformance".

If EAs can't even get governments to perform as well as they have on other problems, in the face of a biorisk warning shot, then we've failed much more dramatically than if we'd merely succeeded in making a government's response to COVID as sane as its response to the Y2K bug or the collapse of the Soviet Union.

(This doesn't mean that we should totally give up on trying to improve government responses — marginal gains might help in some ways, and unprecedented things do happen sometimes. But we should pin less of our hope on it, and treat it as a larger advantage of a plan if the plan doesn't require gov sanity as a point of failure.)

Are there other things you think show Nate is misunderstanding relevant facts about gov, that aren't explained by disagreements like "Nate thinks the problem is harder than you do"?

Re "AGI is a harder problem", see Eliezer's description:

[...] I think there's a valid argument about it maybe being more possible to control the supply chain for AI training processors if the global chip supply chain is narrow (also per Carl).

It is in fact a big deal about nuclear tech that uranium can't be mined in every country, as I understand it, and that centrifuges stayed at the frontier of technology and were harder to build outside the well-developed countries, and that the world ended up revolving around a few Great Powers that had no interest in nuclear tech proliferating any further.

Unfortunately, before you let that encourage you too much, I would also note it was an important fact about nuclear bombs that they did not produce streams of gold, and then ignite the atmosphere if you turned the stream of gold up too high, with the actual thresholds involved being unpredictable.

[...]

I would be a lot more cheerful about a few Great Powers controlling AGI if AGI produced wealth, but more powerful AGI produced no more wealth; if AGI was made entirely out of hardware, with no software component that could keep getting orders of magnitude more efficient using hardware-independent ideas; and if the button on AGIs that destroyed the world was clearly labeled.

That does take AGI to somewhere in the realm of nukes.

I would be curious how much the pandemic preparedness stuff is actually a crux. E.g. if gain of function research is restricted within the next year, would that noticeably change your estimate of how helpful a warning shot will be?

I think this is kind of testing your argument (2) – EA advisors might possibly become more influential within the next year. (And also it just takes forever to do anything in policy.)

Random meta point: You can now crosspost posts to the EA Forum from LW and vice-versa, which automatically adds a link to the crosspost to the top, and adds a link to the comment section on the other side to the bottom of the comment section (together with a counter of the number of comments). Seems like this would have been a bit nicer for this case.

Readers might be interested in the comments over here, especially Daniel K.'s comment:

The only viable counterargument I've heard to this is that the government can be competent at X while being incompetent at Y, even if X is objectively harder than Y. The government is weird like that. It's big and diverse and crazy. Thus, the conclusion goes, we should still have some hope (10%?) that we can get the government to behave sanely on the topic of AGI risk, especially with warning shots, despite the evidence of it behaving incompetently on the topic of bio risk despite warning shots.

Or, to put it more succinctly: The COVID situation is just one example; it's not overwhelmingly strong evidence.

Can't we produce really good text-to-image educational videos? E.g. Eliezer's fictional writings are really fun, and have introduced many of us to this topic. Bonus points if these videos accurately predict the future, gaining us some sort of reputation.

I just wanted to share, as my experience was so radically different from yours. Based in the UK during the pandemic, I felt like:

  • No one was really doing anything to try to "make the government sane around biorisk". I published a paper targeted at government on managing risks. I remember at the time (in 2020) it felt like no one else was shifting to focus on policy change based on lessons learned from COVID.
  • When I tried doing stuff, it went super well. As mentioned here (and here), this work went much better than expected. The government seemed willing to update and commit to being better in future.

I came away from the situation with a feeling that influencing policy was easy and impactful and neglected, and hopeful about what policy work could achieve – but just disappointed that not more was being done to "make the government sane around biorisk".
 

This leads me to the question: why are our experiences so different? Some hypotheses that I have are:

  • Luck / randomness – maybe I was lucky, or US advocates were unlucky, and we should assume the truth lies somewhere in the middle.
  • Different country – the US is different, harder to influence, or less sane than some (or many) other places.
  • Different methodology – The standard policy advocacy sector really sucks: it is not evidence-based and there is little M&E (monitoring and evaluation). It might be that advocacy run in an impact-focused way (like was happening in the UK) is just much better than funding standard advocacy organisations (which I guess was happening in the US). See discussion on this here.
  • Different amount of work – your post mentions that a "valiant effort" was made, but does not provide evidence for this. This makes it hard to form an opinion on what works and why. It would be great to get an answer to this (see Susan's comment), e.g. links to a few campaigns in this space.

Grateful for your views.

And—despite valiant effort!—we've been able to do approximately nothing.

This should update judgements on whether GOF research is as easy to influence as was thought in 2021.

Some resources I recommend on GOF research are the first two chapters of Mearsheimer's Tragedy of Great Power Politics (2014) and the first two chapters of Schelling's Arms and Influence (1966).

I'll just note that I have a prediction market on this here, which is currently at a 7% chance of some prominent event causing mainstream AI capabilities researchers to start taking the risk more seriously by 2028.
