I sometimes get a vibe that many people trying to ambitiously do good in the world (including EAs) are misguided about what doing successful policy/governance work looks like. An exaggerated caricature would be activities like: dreaming up novel UN structures, spending time in abstract game theory and ‘strategy spirals[1]’, and sweeping analysis of historical case studies.

Instead, people who want to make the world safer with policy/governance should become experts on very specific and boring topics. One of the most successful people I’ve met in biosecurity got their start by getting really good at analyzing obscure government budgets.

Here are some crowdsourced example areas I would love to see more people become experts in:

  • Legal liability - obviously relevant to biosecurity and AI safety, and I’m especially interested in how liability law would handle spreading infohazards (e.g. if a bio lab publishes a virus sequence that is then used for bioterrorism, or if an LLM is used maliciously in a similar way).
  • Privacy / data protection laws - could be an important lever for regulating dangerous technologies. 
  • Executive powers for regulation - what can and can't the executive actually do to get AI labs to adhere to voluntary security standards, or get DNA synthesis appropriately monitored? 
  • Large, regularly reauthorized bills (e.g., NDAA, PAHPA, IAA) and ways in which they could be bolstered for biosecurity and AI safety (both in terms of content and process).
  • How companies validate customers, e.g., for export control or FSAP reasons (know-your-customer), and the statutes and technologies around this.
  • How are legal restrictions on possessing or creating certain materials justified and implemented (e.g. the Chemical Weapons Convention, narcotics laws, the Toxic Substances Control Act)? 
  • The efficacy of tamper-proof and tamper-evident technology (e.g. in voting machines, anti-counterfeiting printers)
  • Biochemical supply chains - which countries make which reagents, and how are they affected by export controls and other trade policies?
  • Consumer protection laws and their application to emerging tech risks (e.g. how do product recalls work? Could they apply to benchtop DNA synthesizers or LLMs?)
  • Patent law - can companies patent dangerous technology in order to prevent others from developing or misusing it?
  • How do regulations on 3D-printed firearms work?
  • The specifics of congressional appropriations, federal funding, and procurement: what sorts of things does the government purchase, and how does this relate to biotech or AI (software)? Related to this, becoming an expert on the Strategic National Stockpile and understanding the mechanisms of how a vendor-managed inventory could work.

A few caveats. First, I spent like 30 minutes writing this list (and crowdsourced heavily from others). Some of these topics are going to be dead ends. Still, I’d be more excited about somebody pursuing one of these concrete, specific dead ends and getting real feedback from the world (and then pivoting[2]), rather than trying to do broad strategy work and risk ending up in a never-ending strategy spiral. Moreover, the most impactful topics are probably not on this list and will be discovered by somebody who got deep into the weeds of something obscure.

For those of you who are trying to do good with an EA mindset, this also means getting out of the EA bubble and spending lots of time with established experts[3] in these relevant fields. Every so often, I’ll get the chance to collect biosecurity ideas and send them to interested people in DC. In order to be helpful, these ideas need to be super specific, e.g. this specific agency needs to task this other subagency to raise this obscure requirement to X. Giving broad input like ‘let’s have better disease monitoring’ is not helpful. Experts capable of producing these specific ideas are much more impactful, and impact-oriented people should aspire to work with and eventually become those experts.[4]


I appreciated feedback and ideas on the crowdsourced list from Tessa Alexanian, Chris Bakerlee, Anjali Gopal, Holden Karnofsky, Trevor Levin, James Wagstaff, and a number of others.

  1. ^

    'Strategy Spiral' is the term I use to describe spending many hours doing ‘strategy’ with very little feedback from the real world, very little sense of what decision-makers would actually find helpful or action-relevant, and no real methodology to actually make progress or get clarity. The strategy simply goes in circles. Strategy is important, so doing strategy can make you feel important; but I think people often underestimate the importance of getting your hands dirty directly, and in the long run that direct experience will help you do better strategy.

  2. ^

    And then if you write up a document explaining why this was a dead end, you benefit everybody else trying to have an impact (or perhaps inspire somebody to see a different angle on the problem).

  3. ^

     One of the people reading this said ‘I feel like one thing I didn't understand until pretty recently is how much of (the most powerful version of) this kind of expertise basically requires being in a government office where you have to deal with an annoying bureaucratic process. This militates in favor of early-career EAs working in government instead of research roles’

  4. ^

    Concretely, this looks like either getting an entry level job in government, or being at a think tank but working closely with somebody in government who actually wants your analysis, or drilling deep on a specific policy topic where there is a clear hypothesis for it being ‘undervalued’ by the policy marketplace.  Doing independent research is not a good way of doing this.

Comments

I think I mostly disagree with this post. 

I think Michael Webb would be an example of someone who did pretty abstract stuff (are ideas, in general, getting harder to find?) at a relatively junior level (PhD student) but then, because his work was impressive and rigorous, became very senior in the British government and at DeepMind. 

Tamay Besiroglu's MPhil thesis on ideas getting harder to find in ML should, I think, be counted as strategy research by a junior person, but it has been important in feeding into various Epoch papers and the Davidson takeoff speeds model. I expect the Epoch papers and the takeoff model to be very impactful. Tamay is now deputy director of Epoch. 

My guess is that it's easier to do bad strategy research than it is to get really good at a niche but important thing, but I think it's very plausible that strategy research is the better expected-value decision, provided you can make it legibly impressive and it is good research. It seems plausible that doing independent strategy research that one isn't aiming to publish in a journal is particularly bad, since it doesn't provide good career capital, there isn't good mentorship or feedback, and there's no clear path to impact.

I would guess that economists are unusually well-suited to strategy research because it can often be published in journals, which is legibly impressive and so good career capital, and because the type of economics strategy research one does is either empirical, and so has good feedback loops, or model-based but drawn from economic theory, and so much more structured than typical theory would be. I think this latter type of research can clearly be impactful - for instance, Racing to the Precipice is a pure game theory paper but informs much of the conversation about avoiding race dynamics. Economics is also generally respected within government, and economists are often hired as economists, which is unusual amongst the social sciences. 

My training is as an economist and it's plausible to me that work in political science, law, and political philosophy would also be influential but I have less knowledge of these areas. 

I don't want to overemphasise my disagreement - I think lots of people should become experts in very specific things - but I think this post is mostly an argument against doing bad strategy research that doesn't gain career capital. I expect doing strategy research at an organization that is experienced at doing good and/or legibly impressive research, e.g. in academia, mostly solves this problem. 

A final point: I think this post underrates the long-run influence of ideas on government and policy. The neoliberals of the 60s and 70s are a well-known example of this, but so are Jane Jacobs's influence on US urban planning, the influence of legal and literary theory on the modern American left, and the importance of Silent Spring for the US environmental movement. Research in information economics has been important in designing healthcare systems, e.g. the mandate to buy healthcare under the ACA. The EU's cap-and-trade scheme is another idea that came quite directly from pretty abstract research. 

This is a different kind of influence from proposing or implementing a specific policy in government - which is a very important kind of influence - but I suspect that over the long run it is more important (though I hold this with weak confidence, and I don't think it's especially cruxy).

[anonymous]

Commenting to try to figure out where you disagree with the original poster, and what your cruxes are.

It sounds like you're saying:
1) Conditional on being able to do legibly impressive and good strategy research, it's more valuable for junior people to do strategy research than to become an expert in a specific boring topic
2) Nonetheless, many people should become experts in specific boring topics
3) Over the long run, ideas influence the government more than the OP suggests they do (and probably more than the proposal/implementation of specific policies do)

Does that sound right to you?

I think there are (at least) two ways to read the original post: either as a claim about the total comparative utility of boring/specific expertise vs. strategy work, or as a claim about the marginal utility of boring/specific expertise vs. strategy work. 
For example:
A) As a general rule, if you're a junior person who wants to get into policy, becoming an expert on a specific boring topic is more useful than attempting strategy work ("Instead, people that want to make the world safer with policy/governance should become experts on very specific and boring topics")
B) On the margin, a junior person having expertise on a specific boring topic is more useful than a junior person doing strategic work ("I’d be more excited about somebody pursuing one of these concrete, specific dead ends and getting real feedback from the world (and then pivoting[2]), rather than trying to do broad strategy work and risk ending up in a never-ending strategy spiral")

It wasn't clear to me whether you agree with A, B, both, or neither. Agreeing with A is compatible with accepting 1-3 (e.g. maybe most junior people can't do legibly impressive and good strategy work), as is disagreeing with A and agreeing with B (e.g. maybe junior people likely can do legibly impressive and good strategy work, but the neglectedness of boring/specific expertise means that its marginal utility is higher than that of strategy work). Where do you stand on A and B, and why? And do you think it's the case that many junior people could do legibly impressive and good strategy research?

(I'm sorry if this comment sounds spiky - it wasn't meant to be! I'm interested in the topic and trying to get better models of where people disagree and why :) )

I think I just don't have sufficiently precise models to know whether it's more valuable for people to do implementation or strategy work on the current margin. 

I think that, compared to a year ago, implementation work has gone up in value because there appears to be an open policy window, so we want to have shovel-ready policies we think are, all things considered, good. I think we've also got a bit more strategic clarity than we had a year or so ago thanks to the strategy writing that Holden, Ajeya and Davidson have done. 

On the other hand, I think there's still a lot of strategic ambiguity, and for lots of the most important strategy questions there's like one report, with massive uncertainty, that's been done. For instance, both bioanchors and Davidson's takeoff speeds report assume we could get TAI just by scaling up compute. This seems like a pretty big assumption. We have no idea what the scaling laws for robotics are, and there are constant references to race dynamics but like one non-empirical paper from 2013 that's modelled them at the firm level (although there's another coming out). The two recent Thorstad papers to come out are, I think, a pretty strong challenge to longtermism not grounded in digital minds being a big deal.  

I think people, especially junior people, should be biased towards work with good feedback loops, but I think this is a different axis from strategy vs implementation. Lots of Epoch's work is strategy work but also has good feedback loops. The Legal Priorities Project and GPI both do pretty high-level work, but I think both are great because they're grounded in academic disciplines. Patient philanthropy is probably the best example of really high-level, purely conceptual work that is great. 

In AI in particular, some high-level stuff that I think would be great includes: a book on what good post-TAI futures look like, forecasting the growth of the Chinese economy under different political setups, scaling laws for robotics, modelling the elasticity of the semiconductor supply chain, proposals for transferring ownership of capital to the population more broadly, and investigating different funding models for AI safety. 

Thanks, I thought these were useful comments, particularly about the longer-term influence of big ideas (neoliberalism etc.). I would be interested in reading/skimming the Thorstad papers you refer to; where are they? I found https://onlinelibrary.wiley.com/doi/full/10.1111/papa.12248 which is presumably one of them. Do they have EA Forum versions, and if not, do you know if David is planning to put them up as such? Seems potentially valuable.

Suggestion for how people can go about developing this expertise from ~scratch, in a way that should be pretty adaptable to e.g. the context of an undergraduate or grad-level course, or independent research (a much better/stronger version of things I've done in the past, which involved lots of talking and take-developing but not a lot of detail or publication, both of which I think are really important):

  1. Figure out who, both within the EA world and not, would know at least a fair amount about this topic -- maybe they'd just be able to explain why it's useful in more context than you have, maybe they know what papers you should read or acronyms you should familiarize yourself with -- and talk to them, roughly in increasing order of scariness/value of their time, such that you've at least had a few conversations by the time you're talking to the scariest/highest-time-value people. Maybe this is like a list of 5-10 people?
  2. During these conversations, take note of what's confusing you, ideas that you have, connections you or your interlocutors draw between topics, takes you find yourself repeating, etc.; you're on the hunt for a first project.
  3. Use the "learning by writing" method and just try to write "what you think should happen" in this area, as in, a specific person (maybe a government agency, maybe a funder in EA) should take a specific action, with as much detail as you can, noting a bunch of ways it could go wrong and how you propose to overcome these obstacles.
  4. Treat this proposal as a hypothesis that you then test (meaning, you have some sense of what could convince you it's wrong), and you seek out tests for it, e.g. talking to more experts about it (or asking them to read your draft and give feedback), finding academic or non-academic literature that bears on the important cruxes, etc., and revise your proposal (including scrapping it) as implied by the evidence.
  5. Try to publish something from this exercise -- maybe it's the proposal, maybe it's "hey, it turns out lots of proposals in this domain hinge on this empirical question," maybe it's "here's why I now think [topic] is a dead end." This gathers more feedback and importantly circulates the information that you've thought about it a nonzero amount.

Curious what other approaches people recommend!

Honestly, my biggest recommendation would be just getting a job in policy! You'll get to see what "everyone knows", where the gaps are, and you'll have access to a lot more experts and information to help you upskill faster if you're motivated.

You might not be able to get a job on the topic you think is most impactful but any related job will give you access to better information to learn faster, and make it easier to get your next, even-more-relevant policy job.

In my experience, getting a policy job is relatively uncorrelated with knowing a lot about a specific topic, so I think people should aim for this early. You can also see if you actually LIKE policy jobs and are good at them before you spend too much time!

Agree, basically any policy job seems to start teaching you important stuff about institutional politics and process and the culture of the whole political system!

Though I should also add this important-seeming nuance I gathered from a pretty senior policy person who said basically: "I don't like the mindset of, get anywhere in the government and climb the ladder and wait for your time to save the day; people should be thinking of it as proactively learning as much as possible about their corner of the government-world, and ideally sharing that information with others."

Thanks for writing this!

+1 on "specialist experts are surprisingly accessible to enthusiastic youth"; cf. some relevant advice from Alexey Guzey

Another +1 that it's surprisingly easy to get experts to talk to you. Once for a job I had to find out this super obscure thing about the Federal Reserve. Instead of spending hours trying to research it on my own (which I don't think would have gone anywhere), I found a Fed expert at a think tank. He also didn't know the answer to the question, but helped immensely in tracking an answer down. I was surprised by how much time he spent on it!

If you're becoming an expert in something neglected, chances are there won't be good public writing about it, so you should really lean on speaking with experts.

Strongly agree. Having been on both sides of the policy advocacy fence (i.e. in government and as a consultant/advocate working from the outside), policy ideas have to be concrete. Asking the government to improve disease surveillance (as opposed to something specific, e.g. implementing threatnet) is about as useful as asking a government to improve education outcomes by improving pedagogy, or to boost the economy by raising productivity.

Of course, you don't have to be an expert yourself per se, but you have to talk to those who are, and get their inputs - and beyond a certain point, if your knowledge of that specific space becomes great enough after working in it for a long time, you're practically an expert yourself.

While EA is great, a lot of us have naive views of how governance works, and for that matter have overly optimistic theories of change of how abstract ideas and research affect actual policies and resource allocation, let alone welfare.

So of course developing expertise in something important is better than "ending up in a never-ending strategy spiral." But I largely disagree about your focus on established, specific, boring, prosaic questions. The relevant way I can impact governance is by discovering something good, telling EAs, and thus causing EAs to use their influence to make it happen. And I think I can do that much better by starting at questions like "if AI goes well, what happened," "what intermediate goals improve AI safety," and "what should labs do" than with the kind of topics on your list.

Speaking just to the little slice of the world I know:

Using a legal research platform (e.g. Westlaw, LexisNexis, Casetext) could be really helpful with several of these. If you're good at thinking up search terms and analogous products/actors/circumstances (3D-printed firearms, banned substances, and squatting on patents are good examples here), there's basically always a case where someone wasn't happy that someone else was doing X, so they hired lawyers to figure out which laws were implicated by X and filed a suit/indicted someone, usually on multiple different theories/laws. 

The useful part is that courts will then write opinions on the theories which provide a clear, lay-person explanation of the law at issue, what it's for, how it works, etc. before applying it to the facts at hand. Basically, instead of you having to stare at some pretty abstract, high-level, words in a statute/rule and imagine how they apply, a lot of that work has already been done for you, and in an authoritative, citable way. Because cases rely on real facts and circumstances, they also make things more concrete for further analogy-making to the thing you care about. 

Downside is these tools seem to cost at least ~$150/mo, but you may be able to get free access through a university or find other ways to reduce this. Google Scholar case law is free, but pretty bad. 

Man, the topics you mention feel too boring (i.e. too far removed from the important questions). Quoting from Richard Ngo's AGI safety career advice for another flavor of question:

Here are some topics where I wish we had a world expert on applying it to AGI safety [...]:

  1. What regulatory apparatus within the US government would be most effective at regulating large training runs?
  2. What tools and methods does the US government have for auditing tech companies?
  3. What are the biggest gaps in the US export controls to China, and how might they be closed?
  4. What AI applications or demonstrations will society react to most strongly?
  5. What interfaces will humans use to interact with AIs in the future?
  6. How will AI most likely be deployed for sensitive tasks (e.g. advising world leaders) given concerns about privacy?
  7. How might political discourse around AI polarize, and what could mitigate that?
  8. What would it take to automate crucial infrastructure (factories, weapons, etc)?

(I know you weren't just writing about AI safety. Even given that, these questions are substantially more relevant to stuff that matters than yours, and I think that's good.)

1-3 seem good for generating more research questions like ASB's, but the narrower research questions are ultimately necessary to get to impact. 4-8 seem like things EA is over-invested in relative to what ASB lays out here, not that more couldn't be done there. 

4 and 7 are not really questions that one can meaningfully develop expertise on. Even politicians, whose jobs depend on understanding public opinion, are worse at this than just running a poll, and depend heavily on polling to assess public opinion when they have the money to run adequate polls. They do bring a useful amount of additional judgment to that process and can give you a sense of when a poll result is likely to not hold up in an adversarial environment, but I don't think you can develop an equivalent skill without actually spending a lot of time talking to the public. I also don't think that would allow you to do much prediction of where public opinion is headed. Hillary Clinton would probably have been elected President in 2008 if she had been able to predict how Dem primary voters' opinions on her Iraq vote would change, and she never lacked access to world-class experts at the time she was making her decision. 

You could spend $30k to run a poll and get a better sense of current public sentiment and the specific ways opinions can be pushed given information currently available. A world-expert-level pollster could perhaps help you write better questions, and you could review the history of public opinion on topics you find to be analogous. I think with all that you'd outperform most unelected policymakers in understanding current public opinion, but only because their condescension toward the average person makes them especially bad at it (see, e.g., their obvious bungling of covid). I'd be extremely skeptical that you'd do any better at predicting what shifts public and elite opinion than an average swing-district member of Congress who took 10 min to review your poll.

Mostly disagree. "What AI applications or demonstrations will society react to most strongly?" depends a lot on what AI applications will be powerful in the future, not just what people say in polls today.

And polls today leave lots of uncertainty about "How might political discourse around AI polarize" -- that depends not just on what people say in polls today but also on what the future of AI looks like, and especially on how polarization works.

I think I strongly agree with this post on becoming specific, having observed how the fossil fuels vs renewables "fight" is playing out. Perhaps it is not directly relevant to bio, but climate action takes place in obscure places, often at the state level in the US, where the fossil fuel lobby hired actors to influence some local politics (not recommended!), and very important decisions are made in esoteric hearings of unknown government entities. I am not saying bio will play out similarly, but it is important to find these arenas/levers within the large and complex government landscape, build alliances, and engage on specific points in the bureaucracy. 

I think this is true if for some particular reason you are set on working outside government and want to be an external academic expert advising policymakers. I think this is what the poster’s experience has been.

But if you want to optimise for impacting policy, it is much higher impact to get roles inside government and to become a policymaker yourself, which usually requires broad, generalist expertise and skills. So overall I strongly disagree with this recommendation, but I can see how the poster’s personal experience led them to their view.

At a glance, each of these questions could easily (1 week to 6 mos) be answered by a U.S. lawyer or dedicated law student. If you have money to spend, hire someone; if not, link up with a law firm's pro bono program or a law school's security clinic.

Just a quick note on "Patent law - can companies patent dangerous technology in order to prevent others from developing or misusing it?" -- No. The U.S. patent tradeoff is that you make an idea public (describe it fully and accurately) in return for (max) 20 years of protection. So the patent process would publicize the dangerous tech, and then (at max, if you had good lawyers and good judges) protect the tech for 20 years -- after which anyone could use the tech you had publicized. Please don't do this.

I mostly agree. Though having worked in research, policy, and now policy analysis, it's also the case that sectors can get stuck in established practices. Big-picture / blue-sky thinking is of course also important. But I agree that the value of work to get the basics/details right is under-recognised. So thumbs up for your post, except the framing of "boring"; one person's boring is another's wildly fascinating - who are we to judge? 

I mostly agree with this, with one addition. 

You yourself probably know better than ASB or anyone else which specific boring areas you could become an expert in that would be useful and impactful. 

Or maybe you already are an expert in a boring area and just need to build on that. 

For example, a chemical engineer will already have studied areas like explosion risk-management, quality control/assurance and safety. An engineer with Pharma experience might also have had extensive dealings with the FDA and their audits. To you, this might seem obvious - but you can be sure that your knowledge represents a vast, complex and yes, boring field of knowledge to someone who hasn't had your experience and learning. 

It feels like these all provide important perspective for discussions on AI safety, safety verification, etc. - perspective that most of the people working on these topics probably will not have. There are so many comments like "we need an agency like the FDA or the IAEA" from people who have only a superficial notion of how these agencies operate. So if you have a deep understanding, that could be valuable, impactful knowledge. In a policy discussion, it is a powerful perspective to be able to argue "this is how the FDA does it". 

One area where there seems to be a huge lack of expertise in the EA community is China. I don't know how many discussions I've had about AI safety and governance in the context of China, but almost nobody (certainly not me!) has been in those discussions providing perspective on how things actually work in China: how policies can be influenced, how corporations and government actually influence each other (i.e. who meets whom, when, where, what flexibility do they have, ...). So this could be a great area for someone who has worked in that kind of area in China, or who has a network of people in that area. 

ASB - thanks for sparking a fascinating discussion, and to the many comment-writers who contributed. 

I'm left with mixed feelings about the pros and cons of developing narrow, 'boring', expertise in specific policy topics, versus more typical EA-style big-picture thinking.

The thing is, there are important and valuable roles for people who specialize in connecting these two approaches, in serving as 'middle men' between the specialist policy wonks and the EA strategists. This requires some pro-active networking, some social skills, a capacity for rapid getting-up-to-speed in new areas, a respect for subject matter experts, and an ability to understand what can help policy experts do their jobs, and advance their careers, more effectively. This intermediary role could probably benefit from a few years of immersion in the gov't/policy/think tank world -- but not such deep immersion that one soaks up all of the conventional wisdom and groupthink and unexamined assumptions that tend to characterize many policy subcultures. So, the best intermediaries may still keep one foot in the EA subculture and one foot in a very specialized policy subculture.

(I say this as someone who's spent most of his academic career trying to connect specialist knowledge in evolutionary and genetic theory to bigger-picture issues in human psychology.)

A biosecurity area I'd add to the list is regulation and policy around contact tracing apps.

Example questions:

  1. How come Google and Apple got to dictate the privacy constraints rather than democratic governments?
  2. What policies could boost uptake of the apps?
  3. Why didn't politicians make the tracing legally enforceable, unlike traditional contact tracing? (This was the case in the UK; I'm not sure about the international situation.)

I might be able to help on this one — I did some of the early work on Exposure Notification tech and was somewhat involved with discussions between Apple/Google/a few governments.  

We partially answer some of your questions here: https://arxiv.org/abs/2306.00873. I'm also happy to try to answer any related questions (some parts are closer to informed guesses and didn't make it into the academic paper).

Thanks! I don't have any particular insights or questions (I'm on the tech side of things), just that: I think these apps are really promising, we need better (lower cost and less blunt) tools for dealing with outbreaks, and the biggest issues seem to be on the regulatory/acceptability/political/etc side rather than technical. I will have a look at your paper though, thank you for the link.

Add Electromagnetic Spectrum policy and regulations to the list. The world is wireless, and a few experts at the ITU-R, as well as state-run regulatory bodies like the USA's NTIA, decide what that wireless world will be. Becoming an expert in Electromagnetic Spectrum policy and regulations is a great way to change the world (and the money isn't bad either).

Terrible advice. I did that, became a Superforecaster. Checked a bazillion boring facts. Came to the conclusion of a 1% AI X-Risk in the XPT tournament, together with many of my thoughtful peers. 

Then our (repeated, active) attempts to engage and foster a dialog with the "AI experts" in the tournament went nowhere. There was disagreement at the end of the tournament. All the pundits, once results went up, said surely this can't possibly be. Pundits also missed that in areas where we did engage in productive reasoning with actual experts during the tournament (cough cough, nuclear X-risk), there was a lot of agreement in the end.

Motivated reasoning beats boring each time. Become a pompous self-promoter, please! EA needs more egomaniacs!
