I sometimes get a vibe that many people trying to ambitiously do good in the world (including EAs) are misguided about what doing successful policy/governance work looks like. An exaggerated caricature would be activities like: dreaming up novel UN structures, spending time in abstract game theory and ‘strategy spirals[1]’, and sweeping analysis of historical case studies.
Instead, people who want to make the world safer through policy/governance should become experts on very specific and boring topics. One of the most successful people I’ve met in biosecurity got their start by getting really good at analyzing obscure government budgets.
Here are some crowdsourced example areas I would love to see more people become experts in:
- Legal liability - obviously relevant to biosecurity and AI safety, and I’m especially interested in how liability law would handle spreading infohazards (e.g. if a bio lab publishes a virus sequence that is then used for bioterrorism, or if an LLM is used maliciously in a similar way).
- Privacy / data protection laws - could be an important lever for regulating dangerous technologies.
- Executive powers for regulation - what can and can't the executive actually do to get AI labs to adhere to voluntary security standards, or get DNA synthesis appropriately monitored?
- Large, regularly reauthorized bills (e.g., NDAA, PAHPA, IAA) and ways in which they could be bolstered for biosecurity and AI safety (both in terms of content and process).
- How companies validate customers, e.g., for export control or FSAP reasons (know-your-customer), and the statutes and technologies around this.
- How legal restrictions on possessing or creating certain materials are justified and implemented (e.g., the Chemical Weapons Convention, narcotics laws, the Toxic Substances Control Act)
- The efficacy of tamper-proof and tamper-evident technology (e.g. in voting machines, anti-counterfeiting printers)
- Biochemical supply chains - which countries make which reagents, and how are they affected by export controls and other trade policies?
- Consumer protection laws and their application to emerging tech risks (e.g. how do product recalls work? Could they apply to benchtop DNA synthesizers or LLMs?)
- Patent law - can companies patent dangerous technology in order to prevent others from developing or misusing it?
- How do regulations on 3D-printed firearms work?
- The specifics of congressional appropriations, federal funding, and procurement: what sorts of things does the government purchase, how does this relate to biotech or AI (software)? Related to this, becoming an expert on the Strategic National Stockpile and understanding the mechanisms of how a vendor managed inventory could work.
A few caveats. First, I spent like 30 minutes writing this list (and crowdsourced heavily from others). Some of these topics are going to be dead ends. Still, I’d be more excited about somebody pursuing one of these concrete, specific dead ends, getting real feedback from the world, and then pivoting[2] than about somebody doing broad strategy work and risking a never-ending strategy spiral. Moreover, the most impactful topics are probably not on this list and will be discovered by somebody who got deep into the weeds of something obscure.
For those of you who are trying to do good with an EA mindset, this also means getting out of the EA bubble and spending lots of time with established experts[3] in the relevant fields. Every so often, I’ll get the chance to collect biosecurity ideas and send them to interested people in DC. To be helpful, these ideas need to be super specific, e.g., “this specific agency needs to task this other subagency to raise this obscure requirement to X.” Giving broad input like “let’s have better disease monitoring” is not helpful. Experts capable of producing these specific ideas are much more impactful, and impact-oriented people should aspire to work with and eventually become those experts.[4]
I appreciated feedback and ideas on the crowdsourced list from Tessa Alexanian, Chris Bakerlee, Anjali Gopal, Holden Karnofsky, Trevor Levin, James Wagstaff, and a number of others.
- ^
‘Strategy spiral’ is the term I use for spending many hours doing ‘strategy’ with very little feedback from the real world, very little sense of what decision-makers would actually find helpful or action-relevant, and no real methodology for making progress or getting clarity. The strategy simply goes in circles. Strategy is important, so doing strategy can make you feel important, but I think people often underestimate the value of getting their hands dirty directly; in the long run, it will also help you do better strategy.
- ^
And then if you write up a document explaining why this was a dead end, you benefit everybody else trying to have an impact (or perhaps inspire somebody to see a different angle on the problem).
- ^
One of the people reading this said ‘I feel like one thing I didn't understand until pretty recently is how much of (the most powerful version of) this kind of expertise basically requires being in a government office where you have to deal with an annoying bureaucratic process. This militates in favor of early-career EAs working in government instead of research roles’
- ^
Concretely, this looks like either getting an entry level job in government, or being at a think tank but working closely with somebody in government who actually wants your analysis, or drilling deep on a specific policy topic where there is a clear hypothesis for it being ‘undervalued’ by the policy marketplace. Doing independent research is not a good way of doing this.
I think I just don't have sufficiently precise models to know whether it's more valuable for people to do implementation or strategy work on the current margin.
I think that compared to a year ago, implementation work has gone up in value because there appears to be an open policy window, so we want to have shovel-ready policies we think are, all things considered, good. I think we've also got a bit more strategic clarity than we had a year or so ago, thanks to the strategy writing that Holden, Ajeya, and Davidson have done.
On the other hand, I think there's still a lot of strategic ambiguity, and for many of the most important strategy questions there's only one report, with massive uncertainty, that's been done. For instance, both bio anchors and Davidson's takeoff speeds report assume we could get TAI just by scaling up compute. This seems like a pretty big assumption. We have no idea what the scaling laws for robotics are, and there are constant references to race dynamics but only one non-empirical paper from 2013 that's modelled them at the firm level (although there's another coming out). The two recent Thorstad papers are, I think, a pretty strong challenge to longtermism not grounded in digital minds being a big deal.
I think people, especially junior people, should be biased towards work with good feedback loops, but this is a different axis from strategy vs. implementation. Lots of Epoch's work is strategy work but also has good feedback loops. The Legal Priorities Project and GPI both do pretty high-level work, but I think both are great because they're grounded in academic disciplines. Patient philanthropy is probably the best example of really high-level, purely conceptual work that is great.
In AI in particular, some high-level work that I think would be great: a book on what good post-TAI futures look like, forecasting the growth of the Chinese economy under different political setups, scaling laws for robotics, modelling the elasticity of the semiconductor supply chain, proposals for transferring ownership of capital to the population more broadly, and investigating different funding models for AI safety.