
This is a sister post to "Problem areas beyond 80,000 Hours' current priorities".

Introduction

In this post, we list some more career options beyond our priority paths that seem promising to us for positively influencing the long-term future.

Some of these are likely to be written up as priority paths in the future, or wrapped into existing ones, but we haven't written full profiles for them yet—for example policy careers outside AI and biosecurity policy that seem promising from a longtermist perspective.

Others, like information security, we think might be as promising for many people as our priority paths, but because we haven't investigated them much we're still unsure.

Still others seem like they'll typically be less impactful than our priority paths for people who can succeed equally in either, but still seem high-impact to us and like they could be top options for a substantial number of people, depending on personal fit—for example research management.

Finally some—like becoming a public intellectual—clearly have the potential for a lot of impact, but we can't recommend them widely because they don't have the capacity to absorb a large number of people, are particularly risky, or both.

We compiled this list by asking 6 advisers about paths they think more people in the effective altruism community should explore, and which career ideas they think are currently undervalued—including by 80,000 Hours. In particular, we were looking for paths that seem like they may be promising from the perspective of positively shaping the long-term future, but which aren't already captured by aspects of our priority paths. If something was suggested twice and also met those criteria, we took that as a presumption in favor of including it. We then spent a little time looking into each one and put together a few thoughts and resources for those that seemed most promising. The result is the list below.

We'd be excited to see more of our readers explore these options, and plan on looking into them more ourselves.

Who is best suited to pursue these paths? Of course the answer is different for each one, but in general, pursuing a career where less research has been done on how to have a large impact—especially if few of your colleagues will share your perspective on how to think about impact—may require you to think especially critically and creatively about how to do an unusual amount of good in that career. Ideal candidates, then, would be self-motivated, creative, and inclined to think rigorously and often about how they can steer toward the highest-impact options for them—in addition to having strong personal fit for the work.

What are the pros and cons of each of these paths? Which are less promising than they might at first appear? What particular routes within each one are the most promising and which are the least? What especially promising high-impact career ideas is this list missing?

We're excited to read people's reactions in the comments. And we hope that for people who want to pursue paths outside those we talk most about, this list can give them some fruitful ideas to explore.

Career ideas we're particularly excited about beyond our priority paths

Become a historian focusing on large societal trends, inflection points, progress, or collapse

We think it could be high-impact to study subjects relevant to the long-term arc of history—e.g., economic, intellectual, or moral progress from a long-term perspective, the history of social movements or philanthropy, or the history of wellbeing. Better understanding long-run trends and key inflection points, such as the Industrial Revolution, may help us understand what could cause other important shifts in the future (see more promising topics).

Our impression is that although many of these topics have received attention from historians and other academics (examples: 1, 2, 3, 4, 5), some are comparatively neglected, especially from a more quantitative or impact-focused perspective.

In general, there seem to be a number of gaps that skilled historians, anthropologists, or economic historians could help fill. Revealingly, the Open Philanthropy Project commissioned their own studies of the history and successes of philanthropy because they couldn’t find much existing literature that met their needs. Most existing research is not aimed at deriving action-relevant lessons.

However, this is a highly competitive path, which is not able to absorb many people. Although there may be some opportunities to do this kind of historical work in foundations, or to get it funded through private grants, pursuing this path would in most cases involve seeking an academic career. Academia generally has a shortage of positions and, especially in the humanities, often doesn't provide many backup options. It seems less risky to pursue historical research as an economist, since an economics PhD does give you other promising options.

How can you estimate your chance of success as a history academic? We haven't looked into the fields relevant to history in particular, but some of our discussion of parallel questions for philosophy academia or academia in general may be useful.

It may also be possible to pursue this kind of historical research in ‘non-traditional academia,’ such as at groups like the Future of Humanity Institute or Global Priorities Institute. Learn more about the Global Priorities Institute by listening to our podcast episode with Michelle Hutchinson.

Become a specialist on Russia or India

We’ve argued that because of China’s political, military, economic, and technological importance on the world stage, helping western organizations better understand and cooperate with Chinese actors might be highly impactful.

We think working with China represents a particularly promising path to impact. But a similar argument could be made for gaining expertise in other powerful nations, such as Russia or India.

This is likely to be a better option for you if you are from or have spent a substantial amount of time in these countries. There’s a real need for people with a deep understanding of their cultures and institutions, as well as fluency in the relevant languages (e.g. at the level where one might write a newspaper article about longtermism in Russian).

If you are not from one of these countries, one way to get started might be to pursue area or language studies in the relevant country (one source of support available for US students is the Foreign Language and Area Studies scholarship programme), perhaps alongside economics or international relations. You could also start by working in policy in your home country and slowly concentrate more and more on issues related to Russia or India, or try to work in philanthropy or directly on a top problem in one of these countries.

There are likely many different promising options in this area, both for long-term career plans and useful next steps. Though they would of course have to be adapted to the local context, some of the options laid out in our article on becoming a specialist in China could have promising parallels in other national contexts as well.

Become an expert in AI hardware

Advances in hardware, such as the development of more efficient, specialized chips, have played an important role in raising the performance of AI systems and allowing them to be used economically.

There is a commonsense argument that if AI is an especially important technology, and hardware is an important input in the development and deployment of AI, specialists who understand AI hardware will have opportunities for impact—even if we can't foresee exactly the form they will take.

Some ways hardware experts may be able to help positively shape the development of AI include:

  • More accurately forecasting progress in the capabilities of AI systems, for which hardware is a key and relatively quantifiable input.
  • Advising policymakers on hardware issues, such as export, import, and manufacturing policies for specialized chips. (Read a relevant issue brief from CSET.)
  • Helping AI projects make credible commitments by allowing them to verifiably demonstrate the computational resources they’re using.
  • Helping advise on and fulfill the hardware needs of safety-oriented AI labs.

These ideas are just examples of ways hardware specialists might be helpful. We haven't looked into this area very much, so we are pretty unsure about the merits of different approaches, which is why we've listed working in AI hardware here instead of as a part of the AI technical safety and policy priority paths.

We also haven't come across research laying out specific strategies in this area, so pursuing this path would likely mean both developing skills and experience in hardware and thinking creatively about opportunities to have an impact in the area. If you do take this path, we encourage you to think carefully through the implications of your plans, ideally in collaboration with strategy and policy experts also focused on creating safe and beneficial AI.

Information security

Researchers at the Open Philanthropy Project have argued that better information security is likely to become increasingly important in the coming years. As powerful technologies like bioengineering and machine learning advance, improved security will likely be needed to protect these technologies from misuse, theft, or tampering. Moreover, the authors have found few security experts already in the field who focus on reducing catastrophic risks, and predict there will be high demand for them over the next 10 years.

In a recent podcast episode, Bruce Schneier also argued that applications of information security will become increasingly crucial, although he pushed back on the special importance of security for AI and biorisk in particular.

We would like to see more people investigating these issues and pursuing information security careers as a path to social impact. One option would be to try to work on security issues at a top AI lab, in which case the preparation might be similar to the preparation for AI safety work in general, but with a special focus on security. Another option would be to pursue a security career in government or a large tech company with the goal of eventually working on a project relevant to a particularly pressing area. In some cases we've heard it's possible for people who start as engineers to train in information security at large tech companies that have significant security needs.

Compensation is usually higher in the private sector. But if you eventually want to work on classified projects, it may be better to pursue a public sector career, as it may better prepare you to earn a high level of security clearance.

There are certifications for information security, but it may be better to get started by investigating on your own the details of the systems you want to protect, and/or participating in public 'capture the flag' cybersecurity competitions. At the undergraduate level, it seems particularly helpful for many careers in this area to study CS and statistics.

Information security isn't listed as a priority path because we haven't spent much time investigating how people working in the area can best succeed and have a big positive impact. Still, we think there are likely to be exciting opportunities in the area, and if you’re interested in pursuing this career path, or already have experience in information security, we'd be interested to talk to you. Fill out this form, and we will get in touch if we come across opportunities that seem like a good fit for you.

Become a public intellectual

Some people seem to have a very large positive impact by becoming public intellectuals and popularizing important ideas—often through writing books, giving talks or interviews, or writing blogs, columns, or open letters.

However, it’s probably even harder to become a successful and impactful public intellectual than a successful academic, since becoming a public intellectual often requires a degree of success within academia while also having excellent communication skills and spending significant time building a public profile. Thus this path seems to us to be especially competitive and a good fit for only a small number of people.

As with other advocacy efforts, it also seems relatively easy to accidentally do harm if you promote mistaken ideas, or even promote important ideas in a way that turns people off. (Read more about how to avoid accidentally doing harm.)

That said, this path seems like it could be extremely impactful for the right person. We think building awareness of certain global catastrophic risks, of the potential effects of our actions on the long-term future, or of effective altruism might be especially high value, as well as spreading positive values like concern for foreigners, nonhuman animals, future people, or others.

There are public intellectuals who are not academics—such as prominent bloggers, journalists, and authors. However, academia seems unusually well suited to becoming a public intellectual: it requires you to become an expert in something, it trains you to write (a lot), and its high standards lend credibility to your opinions and work. For these reasons, if you are interested in pursuing this path, going into academia may be a good place to start.

Public intellectuals can come from a variety of disciplines—what they have in common is that they find ways to apply insights from their fields to issues that affect many people, and they communicate these insights effectively.

If you are an academic, experiment with spreading important ideas on a small scale through a blog, magazine, or podcast. If you share our priorities and are having some success with these experiments, we’d be especially interested in talking to you about your plans.

Journalism

For the right person, becoming a journalist seems like it could be highly valuable for many of the same reasons being a public intellectual might be.

Good journalists keep the public informed and help positively shape public discourse by spreading accurate information on important topics. And although the news media tend to focus more on current events, journalists also often provide a platform for people and ideas that the public might not otherwise hear about.

However, this path is also very competitive, especially when it comes to the kinds of work that seem best for communicating important ideas (which are often complex), i.e., writing long-form articles or books, podcasts, and documentaries. And like being a public intellectual, it seems relatively easy to make things worse as a journalist by directing people's attention in the wrong way, so this path may require especially good judgement about which projects to pursue and with what strategy. We therefore think journalism is likely to be a good fit for only a small number of people.

Check out our interview with Kelsey Piper of Vox’s Future Perfect to learn more.

Policy careers that are promising from a longtermist perspective

There is likely a lot of policy work with the potential to positively affect the long run future that doesn’t fit into either of our priority paths of AI policy or biorisk policy.

We aren't sure what it might be best to ultimately aim for in policy outside these areas. But working in an area that is plausibly important for safeguarding the long-term future seems like a promising way of building knowledge and career capital so that you can judge later what policy interventions seem most promising for you to pursue.

Possible areas include:

See our problem profiles page for more issues, some of which you might be able to help address through a policy-oriented career.

There is a spectrum of options for making progress on policy, ranging from research to work out which proposals make sense, to advocacy for specific proposals, to implementation. (See our write-up on government and policy careers for more on this topic.)

It seems likely to us that many lines of work within this broad area could be as impactful as our priority paths, but we haven't investigated enough to be confident about the most promising options or the best routes in. We hope to be able to provide more specific guidance in this area in the future.

Be a research manager or a PA for someone doing really valuable work

Some people may be extraordinarily productive compared to the average. (Read about this phenomenon in research careers.) But these people often have to spend much of their time on work that doesn’t take the best advantage of their skills, such as bureaucratic and administrative tasks. This may be especially true for people who work in university settings, as many researchers do, but it is also often true of entrepreneurs, politicians, writers, and public intellectuals.

Acting as a personal assistant can dramatically increase these people's impact. By supporting their day-to-day activities and freeing up more of their time for work that other people can’t do, you can act as a ‘multiplier’ on their productivity. We think a highly talented personal assistant can make someone 10% more productive, or perhaps more, which is like having a tenth (or more) of the impact they would have. If you're working for someone doing really valuable work, that's a lot.

A related path is working in research management. Research managers help prioritize research projects within an institution and help coordinate research, fundraising, and communications to make the institution more impactful. Read more here. In general, being a PA or a research manager seems valuable for many of the same reasons working in operations management does—these coordinating and supporting roles are crucial for enabling researchers and others to have the biggest positive impact possible.

Become an expert on formal verification

'Proof assistants' are programs used to formally verify that computer systems have various properties—for example that they are secure against certain cyberattacks—and to help develop programs that are formally verifiable in this way.
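
As a rough, hypothetical illustration of what working with a proof assistant involves (a toy example of ours, not something from the post), here is a tiny Lean 4 snippet that defines a function and then mechanically verifies a simple property of it:

```lean
-- Toy definition: a "saturating decrement" on natural numbers.
def satDec : Nat → Nat
  | 0     => 0
  | n + 1 => n

-- The proof assistant checks this proof mechanically: if the file compiles,
-- the property `satDec n ≤ n` is guaranteed to hold for every input.
theorem satDec_le (n : Nat) : satDec n ≤ n := by
  cases n with
  | zero   => exact Nat.le_refl 0
  | succ m => exact Nat.le_succ m
```

Scaling this kind of machine-checked guarantee from toy functions up to properties of realistic systems is what makes the area difficult, and is where further progress would be valuable.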

Currently, proof assistants are not very highly developed, but the ability to create programs that can be formally verified to have important properties seems like it could be helpful for addressing a variety of issues, perhaps including AI safety and cybersecurity. So improving proof assistants seems like it could be very high-value.

For example, it might be possible to use proof assistants to help solve the AI ‘alignment problem’ by creating AI systems that we can prove have certain properties we think are required for the AI system to reliably do what we want it to do. Alternatively, we may be able to use proof assistants to generate programs that we need to solve some sub-part of the problem. (Read our career review of researching risks from AI.)

We haven't yet looked into formal verification much, but both further research in this area and the application of existing techniques to important issues seem potentially promising to us. You can enter this path by studying formal verification at the undergraduate or graduate level, or by learning about it independently if you have a background in computer science. Jobs in this area exist both in industry and in academia.

Use your skills to meet a need in the effective altruism community

As a part of this community, we may have some bias here, but we think helping to build the community and make it more effective might be one way to do a lot of good. Moreover, unlike other paths on this list, it might be possible to do this part time while you also learn about other areas.

There are many ways of helping build and maintain the effective altruism community that don’t involve working within an effective altruism organisation, such as consulting for one of these organisations, providing legal advice, or helping effective altruist authors with book promotion.

Within this set of roles, we’d especially like to highlight organizing student and local effective altruism groups. Our experience suggests that these groups can be very useful resources for people to learn more about different global problems and connect with others who share their concerns (more resources for local groups).

We think these roles are particularly good to pursue if you are very familiar with the effective altruism community, already have the relevant skills, and are keen to bring them to bear in a more impactful way.

Nonprofit entrepreneurship

If you can find a way to address a key bottleneck to progress in a pressing problem area, and that approach hasn’t been tried and isn’t being covered by an effective organisation, starting an organisation of your own can be extremely valuable.

That said, this path seems to us to be particularly high-risk, which is why we don't list it as a priority path. Most new organizations struggle, and non-profit entrepreneurship can often be even more difficult than for-profit entrepreneurship. Setting up a new organisation will also likely involve diverting resources from other organisations, which means it’s easier than it seems to set the area back. The risks are greater if you’re one of the first organizations in an area, as you could put off others from working on the issue, especially if you make poor progress (although this has to be balanced against the greater information value of exploring an uncharted area).

In general, we wouldn’t recommend that someone start off by aiming to set up a new organisation. Rather, we’d recommend starting by learning about and working within a pressing problem area, and then, if through the course of that work you come across a gap that can’t be filled by an existing organisation, considering founding a new one. Organisations developed more organically like this, driven by the needs of a specific problem area, usually seem to be much more promising.

There is far more to say about the question of whether to start a new organisation, and how to compare different non-profit ideas and other alternatives. A great deal depends on the details of your situation, making it hard for us to give general advice on the topic.

If you think you may have found a gap for an organisation within one of our priority problem areas, or problem areas that seem promising that we haven’t investigated yet, then we’d be interested to speak to you.

Even if you don't have an idea right now, if you're interested in spearheading new projects focusing on improving the long-run future you might find it thought-provoking and helpful to fill out this survey for people interested in longtermist entrepreneurship, run by Jade Leung as part of a project supported by Open Philanthropy.

You might also be interested in checking out these resources on effective nonprofits, or the organization Charity Entrepreneurship, especially if you’re interested in global health or animal welfare.

Non-technical roles in leading AI labs

Although we think technical AI safety research and AI policy are particularly impactful, we think having very talented people focused on safety and social impact at top AI labs may also be very valuable, even when they aren’t in technical or policy roles.

For example, you might be able to shift the culture around AI more toward safety and positive social impact by talking publicly about what your organization is doing to build safe and beneficial AI (example from DeepMind), helping recruit safety-minded researchers, designing internal processes to consider social impact issues more systematically in research, or helping different teams coordinate around safety-relevant projects.

We're not sure which roles are best, but in general ones involved in strategy, ethics, or communications seem promising. Or you can pursue a role that makes an AI lab's safety team more effective—like in operations or project management.

That said, it seems possible that some such roles could have a veneer of contributing to AI safety without doing much to head off bad outcomes. For this reason it seems particularly important here to continue to think critically and creatively about what kinds of work in this area are useful.

Some roles in this space may also provide strong career capital for working in AI policy by putting you in a position to learn about the work these labs are doing, as well as the strategic landscape in AI.

Create or manage a long-term philanthropic fund

Some of the best opportunities for making a difference may lie far in the future. In that case, investing resources now in order to have many more resources available at that future time might be extremely valuable.

However, right now we have no way of effectively and securely investing resources over such long time periods. In particular, there are few if any financial vehicles that can be reliably expected to persist for more than 100 years and stay committed to their intended use, while also earning good investment returns. Figuring out how to set up and manage such a fund seems to us like it might be very worthwhile.

Founders Pledge—an organization that encourages effective giving for entrepreneurs—is currently exploring this idea and is actively seeking input. It seems likely that only a few people will be able to be involved in a project like this, as it's not clear there will be room for multiple funds or a large staff. But for the right person we think this could be a great opportunity. Especially if you have a background in finance or relevant areas of law, this might be a promising path for you to explore.

Explore a potentially pressing problem area

There are many neglected global problems that could turn out to be as or even more pressing than those we currently prioritise most highly. We’d be keen to see more people explore them by acquiring relevant training and a network of mentors, and getting to know the relevant fields.

If the problem area still seems potentially promising once you've built up a background, you could take on a project or try to build up the relevant fields, for instance by setting up a conference or newsletter to help people working in the area coordinate better.

If, after investigating, working on the issue doesn't seem particularly high impact, then you’ve helped to eliminate an option, saving others time.

In either case we'd be keen to see write-ups of these explorations, for instance on this forum.

We can't really recommend this as a priority path because it's so amorphous and uncertain. It also generally requires especially high degrees of entrepreneurialism and creativity, since you may get less support in your work, especially early on, and it’s challenging to think of new projects and research ideas that provide useful information about the promise of a less explored area. However, if you fit this profile (and especially if you have existing interest in and knowledge of the problem you want to explore), this path could be an excellent option for you.


Comments (28)

I'm excited to see this! One thing I'd mention on the historian path and its competitiveness is that you could probably do a lot of this sort of work as an economic historian with a PhD in economics. Economic historians study everything from gender roles to religion and do ambitious if controversial quantitative analyses of long-term trends. While economists broadly may give little consideration to historical context, the field of economic history prides itself on actually caring about history for its own sake as well, so you can spend time doing traditional historian things, like working with archival documents (see the Preface to the Oxford Encyclopedia of Economic History for a discussion of the field's norms).

The good thing here is that it probably allows for better outside options and potentially less competitiveness than a history PhD, given the upsides of an economics PhD. You could also probably do similar work in political science.

>> Our impression is that although many of these topics have received attention from historians (examples: 1, 2, 3, 4, 5), some are comparatively neglected within the subject, especially from a more quantitative or impact-focused perspective.

I'd also note that some of these cites I don't think are history—2 and 4 are written by anthropologists. (I think in the former case he's sometimes classified as a biologist, psychologist, or economist too.)

I really do hope we have EAs studying history and fully support it, and I just wanted to give some closely related options!

Good points!

This comment isn't really a reply, but is about the same career idea, so I figured it might make sense to put it here.

At EAGx, I talked with someone interested in history, so I made some notes about topics I'd be excited to see historians explore (or at least to see non-historians explore in a similar style), as well as some relevant resources. I'll share those notes here, in case they're helpful to others.

Disclaimer: These are quick thoughts from someone (me) who's only ~6 months into doing longtermist research, and who has no background in academic history. Also, this list of topics is neither comprehensive nor prioritised.

  • History of economic, technological, moral, etc. growth and progress (this post already mentioned this topic)
  • History of societal collapse and recovery (some relevant sources)
  • Historical scenarios of movements growing, having influence, collapsing, etc. (this post already mentioned this one) (some relevant sources)
    • This could provide evidence relevant to what might happen to EA or related movements and how to steer things well.
  • History of proliferation and nonproliferation efforts in the case of nuclear weapons or other military technologies/weapons
  • History of efforts to regulate technology (or otherwise influence the direction or applications of technological development)
    • See Grace and Grace for works I haven't actually read but that seem relevant to this topic or the above topic.
  • History of predictions, predictions of things like extinction or collapse, millenarianism, and how often people have been right vs wrong about these and other things
    • Knowing more about this would help us know how much to trust predictions of various kinds, which is relevant to things like whether we're at the Hinge of History and how high existential risk is. We currently seem to know very little about this. See e.g. Muehlhauser, me, and me again.
  • History of legal and other efforts to represent future generations or other neglected populations (animals, slaves, etc.)
  • History of moral circle expansion
  • Counterfactual history related to what factors might’ve led the Nazi regime, Soviet Union, etc. to last, and how long they would’ve lasted if those factors had been present.
    • This could inform how high the risk from dystopias/totalitarianism is, etc.
    • I'd guess that the question of what factors might have led those regimes to last wouldn't be neglected by mainstream historians, but that the question of just how long they might have lasted is probably neglected. But this is purely a guess.

And as a broadly relevant resource here, there was an EAG 2018 talk entitled From the Neolithic Revolution to the Far Future: How to do EA History.

It seems to me that a recurring theme is that EAs without a background in history have done relatively brief analyses of these topics, and then other people have been very interested and maybe made big decisions based on them, but there’s been no deeper or more rigorous follow-up. I'd therefore be quite excited to see more historians (or similar types) in/interacting with EA. (That said, I have no specific reason to believe this is a more valuable career path than the others mentioned in this post; I think it might just be that the lack of EAs on this career path happens to be more noticeable to me as I do my current work. Also, I'm not saying the existing analyses were bad - I've very much appreciated many of them.)

A potential counterexample to that "recurring theme" is AI Impacts' research into "historic cases of discontinuously fast technological progress". My understanding is that that research has indeed been done by EAs without a background in history, but it also seems quite thorough and rigorous, and possibly more useful for informing key decisions than work on that topic by most academic historians would've been. (I hold that view very tentatively.) I'm not sure if that's evidence for or against the value of EAs becoming historians.

By the way, I've added this comment to A central directory for open research questions. So if anyone reading this thinks of other topics worth mentioning, or knows of other collections of topics it would be high-priority from an EA perspective for historians to look into, please comment about that below.

Semi-relevant update: I've now created a History tag, and listed some posts there. Posts with this tag might be useful for EAs considering this career idea, as they might provide some ideas about the sorts of topics from history EAs are interested in, and about how historical research methods can be useful for EA. 

Hey Michael,

Thanks (as often) for this list! I'm wondering, might you be up for putting it into a slightly more formal standalone post or google doc that we could potentially link to from the blurb?

Really love how you're collecting resources on so many different important topics!

Happy to hear this list seems helpful, and thanks for the suggestion! I've now polished & slightly expanded my comment into a top level post: Some history topics it might be very valuable to investigate.

(I also encourage people in that post to suggest additional topics they think it'd be good to explore, so hopefully the post can become something of a "hub" for that.)

Great! Linked.

Just to let you know I've revised the blurb in light of this. Thanks again!

Thanks for these points! Very encouraging that you can do this work from such a variety of disciplines. I'll revise the blurb in light of this.

On "become a specialist on Russia or India": I've sometimes wondered if there might be a case for focusing on countries that could become great powers, or otherwise geopolitically important, on somewhat longer timescales, say 2-5 decades.

[ETA: It occurred to me that the reasoning below also suggests that EA community building in these countries might be valuable. See also here, here, here, and here for relevant discussion.]

One example could be Indonesia: it is the 4th most populous country, its economy is the 16th largest by nominal GDP but has consistently grown faster than those of high-income countries. It's also located in an area that many think will become increasingly geopolitically important, e.g. Obama's foreign policy is sometimes described as a 'pivot to the Pacific'. (Caveat: I've spent less than 10 minutes looking into Indonesia.)

More generally, we could ask: if we naively extrapolated current trends forward by a couple of decades, which countries would emerge as, say, "top 10 countries" on relevant metrics by 2050?

Some arguments in favor:

  • Many EAs are in their twenties, and in the relevant kind of social science disciplines and professional careers my impression is that people tend to have most of their impact in their 40s to 50s. (There is a literature on scientific productivity by age, see e.g. Simonton, 1997, for a theoretical model that also surveys empirical findings. I haven't checked how consistent or convincing these findings are.) Focusing on countries that become important in a couple of decades would align well with this schedule.
  • It's plausible to me that due to 'short-term biases' in traditional foreign policy, scholarship etc., these countries will be overly neglected by the 'academic and policy markets' relative to their knowable expected value for longtermists.
  • For this reason, it might also be easier to have an outsized influence. E.g. in all major foreign policy establishments, there already are countless people specializing on Russia. However, there may be the opportunity to be one of the very few, say, 'Indonesia specialists' by the time there is demand for them (and also to have an influence on the whole field of Indonesia specialists due to founder effects).

Some arguments against:

  • The whole case is just armchair speculation. I have no experience in any of the relevant areas, don't understand how they work etc.
  • If it's true that, say, Indonesia isn't widely viewed as important today, this also means there will be few opportunities (e.g. jobs, funding, ...) to focus on it.
  • Even if the basic case was sound, I expect it would be an attractive option only for very few people. For example, I'd guess this path would involve living in, say, Indonesia for months to years, which is not something many people will be prepared to do.

Hm - interesting suggestion! The basic case here seems pretty compelling to me. One question I don't know the answer to is how predictable countries' trajectories are -- like how much would a naive extrapolation have predicted the current balance of power 50 years ago? If very unpredictable it might not be worth it in terms of EV to bet on the extrapolation.

But I feel more intuitively excited about trying to foster home-grown EA communities in a range of such countries, since many of the people working on it would probably have reasons to be in and focus on those countries anyway because they're from there.

>> For example, it might be possible to use proof assistants to help solve the AI ‘alignment problem’ by creating AI systems that we can prove have certain properties we think are required for the AI system to reliably do what we want it to do.

I don't think this is particularly impactful, primarily because I don't see a path by which it has an impact, and I haven't seen anyone make a good case for this particular path to impact.

(It's hard to argue a negative, but if I had to try, I'd point out that if we want proofs, we would probably do those via math, which works at a much higher level of abstraction and so takes much less work / effort; formal verification seems good for catching bugs in your implementations of ideas, which is not the core of the AI risk problem.)

However, it is plausibly still worthwhile becoming an expert on formal verification because of the potential applications to cybersecurity. (Though it seems like in that case you should just become an expert on cybersecurity.)

As someone in the intersection of these subjects I tend to agree with your conclusion, and with your next comment to Arden describing the design-implementation relationship.

Edit 19 Feb 2022: I want to clarify my position, namely, that I don't see formal verification as a promising career path. As for what I write below, I both don't believe it is a very practical suggestion, and I am not at all sold on AI safety.

However, while thinking about this, I did come up with a (very rough) idea for AI alignment, where formal verification could play a significant role.
One scenario for AGI takeoff, or for solving AI alignment, is to do it inductively - that is, each generation of agents designs the next generation, which should be more sophisticated (and hopefully still aligned). Perhaps one plan to achieve this is as follows (I'm not claiming that any step is easy or even plausible):

  1. Formally define what it means for an agent to be aligned, in such a way that subsequent agents designed by this agent are also aligned.
  2. Build your first generation of AI agents (which should be as lean and simple as possible, to make the next step easier).
  3. Let a (perhaps computer assisted) human prove that the first generation of AI is aligned in the formal sense of 1.

Then, once you deploy the first generation of agents, it is their job to formally prove that further agents designed by them are aligned as well. Hopefully, since they are very intelligent, and plausibly good at manipulating the previous formal proofs, they can find such proofs. Since the proof is formal, humans can trust and verify it (for example using traditional formal proof checkers), despite not being able to come up with the proof themselves.

This plan has many pitfalls (for example, each step may turn out to be extremely hard to carry out, or maybe your definition of alignment will be so strict that the agents won't be able to construct any new and interesting aligned agents), however it is a possible way to be certain about having aligned AI.

I agree (with the caveat that I have much less relevant domain knowledge than Rohin, so you should presumably give less weight to my view).

Several years ago, I attended a small conference on 'formal mathematics', i.e. proof assistants, formal verification, and related issues. As far as I remember, all of the real-world applications mentioned there were of the type "catching bugs in your implementations of ideas". For example, I remember someone saying they had verified software used in air traffic control. This does suggest formal verification can be useful in real-world safety applications. However, I'd guess (very speculative) that these kinds of applications won't benefit much from further research, and wouldn't need to be done by someone with an understanding of EA - they seem relatively standard and easy to delegate, and at least in principle doable with current tools.

Thanks Max -- I'll pass this on!

Hi Rohin,

Thanks for this comment. I don't know a lot about this area, so I'm not confident here. But I would have thought that it would sometimes be important for making safe and beneficial AI to be able to prove that systems actually exhibit certain properties when implemented.

I guess I think this first because bugs seem capable of being big deals in this context (maybe I'm wrong there?), and because it seems like there could be some instances where it's more feasible to use proof assistants than math to prove that a system has a property.

Curious to hear if/why you disagree!

>> I would have thought that it would sometimes be important for making safe and beneficial AI to be able to prove that systems actually exhibit certain properties when implemented.

We can decompose this into two parts:

1. Proving that the system that we design has certain properties

2. Proving that the system that we implement matches the design (and so has the same properties)

1 is usually done by math-style proofs, which are several orders of magnitude easier to do than direct formal verification of the system in a proof assistant without having first done the math-style proof.

2 is done by formal verification, where for complex enough systems the specification for the formal verification often comes from the output of a math proof.

>> I guess I think this first because bugs seem capable of being big deals in this context

I'm arguing that after you've done 1, even if there's a failure from not having done 2, it's very unlikely to cause x-risk via the usual mechanism of an AI system adversarially optimizing against humans. (Maybe it causes x-risk in that due to a bug the computer system says "call Russia" and that gets translated to "launch all the nukes", or something like that, but that's not specific to AI alignment, and I think it's pretty unlikely.)

Like, idk. I struggle to actually think of a bug in implementation that would lead to a powerful AI system optimizing against us, when without that bug it would have been fine. Even if you accidentally put a negative sign on a reward function, I expect that this would be caught long before the AI system was a threat.

I realize this isn't a super compelling response, but it's hard to argue against this because it's hard to prove a negative.

>> there could be some instances where it's more feasible to use proof assistants than math to prove that a system has a property.

Proof assistants are based on math. Any time a proof assistant proves something, it can produce a "transcript" that is a formal math proof of that thing.

Now you might hope that proof assistants can do things faster than humans, because they're automated. This isn't true -- usually the automation is things like "please just prove for me that 2*x is larger than x, I don't want to have to write the details myself", or "please fill out and prove the base case of this induction argument", where a standard math proof wouldn't even note the detail.
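
As a small, hypothetical Lean 4 example of the kind of automation described here, the built-in `omega` decision procedure can discharge a minor linear-arithmetic fact that a written math proof wouldn't even mention (stated with ≤, since over the natural numbers 2*x is not strictly greater than x when x = 0):

```lean
-- `omega` automatically proves linear-arithmetic goals over Nat/Int.
example (x : Nat) : x ≤ 2 * x := by omega
```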

Sometimes a proof assistant can do better than humans, when the proof of a fact is small but deeply unintuitive, such that brute force search is actually better than finetuned human intuition. I know of one such case, that I'm failing to find a link for. But this is by far the exception, not the rule.

(There are some proofs, most famously the map-coloring theorem, where part of the proof was done by a special-purpose computer program searching over a space of possibilities. I'm not counting these, as this feels like mathematicians doing a math proof and finding a subpart that they delegated to a machine.)

EDIT: I should note that one use case that seems plausible to me is to use formal verification techniques to verify learned specifications, or specifications that change based on the weights of some neural net, but I'd be pretty surprised if this was done using proof assistants (as opposed to other techniques in formal verification).

Thanks so much Rohin for this explanation. It sounds somewhat persuasive to me, but I don't feel in a position to have a good judgement on the matter. I'll pass this on to our AI specialists to see what they think!

Chiming in on this very late. (I worked on formal verification research using proof assistants for a sizable part of undergrad.)

- Given the stakes, it seems like it could be important to verify 1. formally after the math proofs step. Math proofs are erroneous a non-trivial fraction of the time.

- While I agree that proof assistants right now are much slower than doing math proofs yourself, verification is a pretty immature field. I can imagine them becoming a lot better such that they do actually become better to use than doing math proofs yourself, and don't think this would be the worst thing to invest in.

- I'm somewhat unsure about the extent to which we'll be able to cleanly decompose 1. and 2. in the systems we design, though I haven't thought about it much.

- A lot of the formal verification work on proof assistants seems to me like it's also work that could apply to verifying learned specifications? E.g. I'm imagining that this process would be automated, and the automation used could look a lot like the parts of proof assistants that automate proofs.

Something that I think EAs may be undervaluing is scientific research done with the specific aim of identifying new technologies for mitigating global catastrophic or existential risks, particularly where these have interdisciplinary origins.


A good example of this is geoengineering (the merger of climate/environmental science and engineering) which has developed strategies that could allow for mitigating the effects of worst-case climate change scenarios. In contrast, the research being undertaken to mitigate worst-case pandemics seem to focus on developing biomedical interventions (biomedicine started as an interdisciplinary field, although it is now very well established as its own discipline). As an interdisciplinary scientist, I think there is likely to be further scope for identifying promising interventions from the existing literature, conducting initial analysis and modelling to demonstrate these could be feasible responses to GCRs, and then engaging in field-building activities to encourage further scientific research along those paths. The reason I suggest focusing on interdisciplinary areas is that merging two fields often results in unexpected breakthroughs (even to researchers from the two disciplines involved in the merger) and many 'low-hanging' discoveries that can be investigated relatively easily. However, such a workflow seems uncommon both in academia (which doesn't strongly incentivise interdisciplinary work or explicitly considering applications during early-stage research) and EA (which [with the exception of AI Safety] seems to focus on finding and promoting promising research after it has already been initiated by mainstream researchers).


Still, this isn't really a career option as much as it is a strategy for doing leveraged research, which seems like it would be better done at an impact-focused organisation than at a university. I'm personally planning to use this strategy and will attempt to identify and then model the feasibility of possible antiviral interventions at the intersection of physics and virology (although I haven't yet thought much about how to effectively promote any promising results).

Interesting! I think this might fall under global priorities research, which we have as a 'priority path' -- but it's not really talked about in our profile on that, and I agree it seems like it could be a good strategy. I'll take a look at the priority path and consider adding something about it. Thanks!

Great post, thanks for sharing it! I also like that 80k seem to be posting on the forum a bit more often lately - seems like a great way to spark discussion, solicit input, etc., with everyone else able to directly see and learn from that discussion and input as well.

And I liked that this post outlined the methodology that was used (in the "We compiled this list by asking 6 advisers..." paragraph). I feel like my relationship with 80k has typically been something like "black box hums for a while, then spits out deep wisdom, and I quit my teaching job and move to the UK to do what it says." This isn't a criticism - I know 80k doesn't want me to simply "do what it says", and actually I'm very grateful for the wisdom the black box spat out. But it's also nice to get an extra peek inside sometimes :)

EDIT: On reflection, I think I usually feel I have a good understanding of things like 80k's arguments for why a particular problem or career path might be important, and that what I've felt unsure about is just how 80k decides what to look into in the first place.

Also, since the matter of accidental harm was raised a few times, I'll also mention this collection I made of sources relevant to that, and my collections of sources on the related matters of the unilateralist's curse, information hazards, and differential progress.

Thanks for this feedback (and for the links)!

It's not clear to me why research management is typically less valuable than paths such as policy or research. I think this is because the model of impact described seems wrong:

>> In general, being a PA or a research manager seems valuable for many of the same reasons working in operations management does—these coordinating and supporting roles are crucial for enabling researchers and others to have the biggest positive impact possible.

Heads of Research can have significant sway on the entire research direction of an organization; that seems very different to "enabling" the research, it's actively determining what the research will look like. For example, I expect this to be true for someone like Jade Leung at GovAI. Jess Whittlestone seems to hold a similar view, which she explicitly distinguishes from the model of impact described above:

>> I used to think of good research management mostly in terms of helping individual researchers to be more effective. But there’s also a different kind of management: the kind that provides high-level strategy for a group and a structure within which people can collaborate towards shared goals - which seems even more neglected... I might be able to do a lot more good by helping build a team who can work effectively together in this way, than I could through my own research.

Hey, thanks for this comment -- I think you're right there's a plausibly more high-impact thing that could be described as 'research management' which is more about setting strategic directions for research. I'll clarify that in the writeup!

Thanks for the reply and clarification! The write-up looks the same as before, I think.

Hi Arden! Big fan of your work expanding 80,000 Hours list of priorities with more research and writeups of potentially impactful career paths. I've written up some thoughts about a career path to improve existential safety that 80,000 Hours reviewed back in 2016: nuclear weapons security. Would be very interested in any feedback from you or others on the 80,000 Hours team! Here's the link :)

[I'm doing a bunch of low-effort reviews of posts I read a while ago and think are important. Unfortunately, I don't have time to re-read them or say very nuanced things about them.]

Maybe worth including, for similar reasons to its sister post.

I happen to have met someone working in formal verification. What they do is use a SAT solver to check whether a railroad switching system works as expected. I don't think they would consider themselves to be doing AI security. But for software used in critical infrastructure (like railways), it is an important safety measure.
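
For illustration, here is a minimal sketch of what this kind of check can look like, using the Z3 solver's Python bindings on a made-up toy interlocking model (my own example, not the commenter's actual system): we encode the control rules and ask the solver whether any state satisfies the rules while violating the safety property.

```python
from z3 import And, Bools, Implies, Not, Solver, sat

# Toy interlocking model (illustrative only): two conflicting routes share one
# switch, and each route may only be set if the switch is in the position it needs.
route_a, route_b, switch_normal = Bools("route_a route_b switch_normal")

interlocking_rules = And(
    Implies(route_a, switch_normal),       # route A needs the switch "normal"
    Implies(route_b, Not(switch_normal)),  # route B needs the switch "reverse"
)

# Safety property: it should be impossible to set both routes at once.
# We ask for a state that satisfies the rules AND violates the property;
# if none exists (unsat), the property holds for this model.
solver = Solver()
solver.add(interlocking_rules, And(route_a, route_b))

if solver.check() == sat:
    print("Counterexample found:", solver.model())
else:
    print("Safe: no state satisfying the rules sets both routes.")
```

Here the query is unsatisfiable, so the property holds for this toy model; real interlocking systems involve far larger models, but the basic pattern of encoding rules and searching for a violating state is similar.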
