Thanks for the question.
Asteroid risk probably has the most cooperation and the most transparent communication. It is notable for its high degree of agreement: all parties around the world agree that it would be bad for Earth to be hit by a large rock, that there should be astronomy programs to detect nearby asteroids, and that if a large Earthbound asteroid is detected, there should be some sort of mission to deflect it away from Earth. There are some points of disagreement, such as over the use of nuclear explosives for asteroid deflection, but these are largely matters of detail.
Additionally, the conversation about asteroid risk is heavily driven by scientific communities. Scientists have a strong orientation toward transparency, such as publishing research in the open literature, including details on methods, etc. There are relatively few aspects of asteroid risk that involve the sorts of information that are less transparent, such as classified government information or proprietary business information. There is some, such as regarding nuclear explosives, but it's overall a small portion of the topic. The result is a relatively transparent conversation about asteroid risk.
The question of scalability is harder to answer. A lot of the relevant governance activities are singular or top-down in ways that make scalability less relevant. For example, it's hard to talk about the scalability of initiatives to deflect asteroids or to make sound nuclear weapon launch decisions, because these are things that only need to be done in a few isolated circumstances.
It's easier to talk about the scalability of initiatives for reducing climate change because there's such a broad ongoing need to reduce greenhouse gases. For example, a notable recent development in the climate change space is the rapid growth in the market for electric bicycles; this is a technology that is rapidly maturing and can be manufactured at scale. Certain climate change governance concepts can also scale, for example urban design concepts that are initially implemented in a few neighborhoods and then expanded more widely. Scaling things like this up is often difficult, but it can at least in principle be done.
The best way to answer this question is probably in terms of GCRI's three major areas of activity: research, outreach, and community support, plus the fourth item of organization development.
GCRI's ultimate goal is to reduce global catastrophic risk. Everything we do is oriented toward that end. Our research develops ideas and reduces uncertainty about how best to reduce global catastrophic risk. Our outreach gets those ideas to important decision-makers and helps us understand what research questions decision-makers would benefit from answers to. Our community support advances the overall population of people working on global catastrophic risk, including people who work with us on research and outreach. Our organization development work provides us with the capacity to do all of these things.
Phrased in terms of three problems: (1) We don't know the best ways of reducing global catastrophic risk, and so we are advancing research to understand this better. (2) We are not positioned to take all of the necessary actions to reduce global catastrophic risk on our own, so we are doing outreach to other people who are well positioned to have an impact and we are supporting the overall community of people who are working on the risks. (3) We don't have the capacity to do as much to reduce global catastrophic risk as we could, so we are developing the organization to increase our capacity.
I appreciate that this is all perhaps a bit vague. Because we work across so many topics within global catastrophic risk, it's hard to specify three more specific problems that we face. Some further detail is available at our Summary of 2021-2022 GCRI Accomplishments, Plans, and Fundraising, and in other comments on this AMA.
I regret that I don't have a good answer to this question. Global catastrophic risk doesn't have much in the way of statistics, due to the lack of prior global catastrophes. (Which is a good thing!)
There are some statistics on the amount of work being done on global catastrophic risk. For that, I would recommend the paper Accumulating evidence using crowdsourcing and machine learning: A living bibliography about existential risk and global catastrophic risk by Gorm Shackelford and colleagues at CSER. It finds that there is a significant body of work on the topic, in contrast with some prior concerns, such as those comparing the amount of research on global catastrophic risk to the amount of research on dung beetles.
Thanks for the question. I see that the question is specifically on neglected areas of research, not other types of activity, so I will focus my answer on that. I'll also note that my answers to this question map pretty closely to my own research agenda, which may be a bit of a bias, though it's also the case that I try to focus my research on the most important open questions.
For AI, there are a variety of topics in need of more attention, especially (1) the relation between near-term governance initiatives and long-term AI outcomes; (2) detailed concepts for specific, actionable governance initiatives in both public policy and corporate governance; (3) corporate governance in general (see discussion here); (4) the ethics of what an advanced AI should be designed to do; and (5) the implications of military AI for global catastrophic risk. There may also be neglected areas of research on how to design safe AI, though that is less within my own expertise, and it already gets a relatively large amount of investment.
For asteroids, I would emphasize the human dimensions of the risk. Prior work on asteroid risk has included a lot of contributions from astronomers and from the engineers involved in space missions, and I think comparatively little attention from social scientists. The possibility of an asteroid collision causing inadvertent nuclear war is a good example of a topic in need of a wider range of attention.
For climate change, one important line of research is on characterizing climate change as a global catastrophic risk. The recent paper Assessing climate change’s contribution to global catastrophic risk by S. J. Beard and colleagues at CSER provides a good starting point, but more work is needed. There is also a lot of opportunity to apply insights from climate change research to other global catastrophic risks. I've done this before here, here, here, and here. One good topic for new research would be evaluating the geoengineering moral hazards debate in terms of its implications for other risky technologies, including debates over what ideas shouldn't be published in the first place, e.g. Was breaking the taboo on research on climate engineering via albedo modification a moral hazard, or a moral imperative?
For nuclear weapons, I would like to see more on policy measures that are specifically designed to address global catastrophic risk. My winter-safe deterrence paper is one effort in that direction, but more should be done to develop this sort of idea.
For biosecurity, I'm less at the forefront of the literature, so I have fewer specific suggestions, though I would expect that there are good opportunities to draw lessons from COVID-19 for other global catastrophic risks.
Thanks for the question. To summarize, I don't have a clear ranking of the risks, and I don't think it makes sense to rank them in terms of tractability. There are some tractable opportunities across a variety of risks, but how tractable they are can vary a lot depending on one's background and other factors.
First, tractability of a risk can vary significantly from person to person or from opportunity to opportunity. There was a separate question on which risks a few select individuals could have the largest impact on; my answer to that is relevant here.
Second, this is a good topic to note the interconnections between risks. There is a sense in which AI, nuclear weapons, asteroid impacts, extreme climate change, and biosecurity are not distinct from each other. For example, nuclear power helps with climate change but can increase nuclear weapons risks, as in the international debate over the nuclear program of Iran. Nuclear explosives have been proposed to address asteroid risk, but this could also affect nuclear weapons risks; see discussion in my paper Risk-risk tradeoff analysis of nuclear explosives for asteroid deflection. Pandemics can affect climate change; see e.g. Impact of COVID-19 on greenhouse gases emissions: A critical review. Improving international relations and improving the resilience of civilization help across a range of risks. This makes it all the more difficult to compare the tractability of these various risks.
Third, I see tractability and neglectedness as being closely related. When a risk gets a lot of attention, a lot of the most tractable opportunities have already been taken or will be taken anyway.
With those caveats in mind, some answers:
Climate change is distinctive in the wide range of opportunities to reduce the risk. On one hand, this makes it difficult for dedicated effort to significantly reduce the overall risk, because so many efforts are needed. On the other hand, it does create some relatively easy opportunities to reduce the risk. For example, when you're walking out of a room, you might as well turn the lights off. This might not achieve a massive risk reduction, but the unit of work is trivially small. More significant examples include living somewhere in which you don't need to drive everywhere and eating more of a vegan diet; these are both also worth doing for a variety of other reasons. That said, the most significant examples involve changes to policy, industry, etc. that are unfortunately generally difficult to implement.
Nuclear weapons opportunities vary a lot in terms of tractability. There is a sense in which reducing nuclear weapons risk is easy: just don't launch the nuclear weapons! There is a different sense in which reducing the risk is very difficult: at its core, the risk derives from adversarial relations between certain major countries, and reducing the risk may depend on improving these relations, which is difficult. In between, there are a lot of opportunities to influence nuclear weapons policy. These are mostly very high-skill activities that benefit from advanced training in both international security and global catastrophic risk. For people who are able to train in these fields, I think the opportunities are quite good. Otherwise, there still are opportunities, but they are perhaps more limited.
Asteroid risk is an interesting case because the extreme portion of the risk may actually be more tractable. Large asteroids cause more extreme collisions, and because they are larger, they are also easier for astronomy research to detect. Indeed, a high percentage of the largest asteroids are believed to have already been detected, and none of those detected are on a collision course with Earth. Much of the residual global catastrophic risk may involve more complex scenarios, such as smaller asteroids triggering inadvertent nuclear war; see my papers on this scenario here and here. My impression is that there may be some compelling opportunities to reduce the risk from these scenarios.
For AI, at the moment I think there are some excellent opportunities related to near-term AI governance. The deep learning revolution has put AI high on the agenda for public policy. There are active high-level initiatives to establish AI policy going on right now, and there are good opportunities to influence these policies. Once these policies are set, they may remain largely intact for a long time. It's important to take advantage of these opportunities while they still exist. Additionally, I think there is low-hanging fruit in other domains. One example is corporate governance, which has gotten relatively little attention especially from people with an orientation toward long-term catastrophic risks; see my recent post on long-term AI corporate governance with Jonas Schuett of the Legal Priorities Project. Another example is AI ethics, which has gotten surprisingly little attention; see my work with Andrea Owe of GCRI here, here, here, and here. There may also be good opportunities on AI safety design techniques, though I am less qualified to comment on this.
For biosecurity, I am less active on it at the moment, so I am less qualified to comment. Also, COVID-19 significantly changes the landscape of opportunities. So I don't have a clear answer on this.
Interesting question, thanks. To summarize my answer: I believe nuclear weapons have the largest opportunities for a few select individuals to make an impact; climate change has the smallest opportunities; and AI, asteroids, and biosecurity are somewhere in between.
First, please note that I am answering this question without regard for the magnitude of the risks. One risk might offer larger opportunities for an individual to make an impact simply because it is a much larger risk. However, accounting for that turns this into a question about which risks are larger, whereas it seems more fruitful to focus on other aspects of the risks.
Second, all of these risks require a lot more than 10 people to address. Indeed, a lot of important roles involve engaging with lots of other people: lawmakers setting policy that influences the activities of government agencies, private citizens, etc.; researchers who develop ideas that influence other people's thinking; startup founders who build companies with large numbers of employees; etc. This is an important caveat.
With that in mind, I believe the answer is nuclear weapons. The president of the United States has a very high degree of influence over nuclear weapons risk, including the sole authority to order the launch of nuclear weapons. This is a point of ongoing debate; see e.g. this. I am less familiar with procedures in other countries but at least some of them may be similar. There are significant opportunities for a variety of people to impact nuclear weapons risk (see this for discussion), but I think it's still the risk in which a few well-placed individuals can have the largest impact, for better or worse.
On the opposite end of the spectrum, a few powerful individuals probably have the least influence over climate change. A central characteristic of climate change is that its solutions are highly distributed. Greenhouse gas emissions are distributed widely across countries and economic sectors. Solutions for reducing emissions must likewise be implemented across countries and economic sectors, and must additionally be maintained over extended periods of time. Technological solutions like renewable energy depend less on a single brilliant idea or a single policy enactment and more on sustained investment in research, development, and deployment. The main exception I can think of is the idea of a geoengineering "greenfinger", in which a rogue actor unilaterally implements a geoengineering regime. I'm not up to speed on the research on this idea, and I don't have a good sense of whether it is viable in practice.
For AI, the largest opportunities may involve a research group developing technological solutions that, once developed, would be readily adopted by other groups, though the adoption process can be a limiting factor that requires larger numbers of people.
For asteroids, the largest opportunities may involve leading a program to detect and deflect incoming asteroids; the program itself would require larger numbers of people, though there may be a role for a few well-placed government officials to have a major impact.
For biosecurity, the best example that comes to mind involves an increase in the risk. There are scenarios in which a research lab creates and (intentionally or accidentally) releases a dangerous pathogen. See debates on "gain of function" experiments, "dual-use research of concern", etc.
Finally, some collective action theory is relevant here. Opportunities for a few individuals to have an impact may be especially large in "single best effort" situations, in which the problem can be solved by one effort: a single best technological solution for AI, a single best detection/deflection effort for asteroids, or even a single effort to launch nuclear weapons or develop a pathogen. In contrast, reducing greenhouse gas emissions is an "aggregate effort" situation, in which results come from the total amount of effort aggregated across everyone who contributes. Geoengineering is more in the direction of a single best effort situation, though perhaps not to the same extent as the other examples. For more on this theory, see my paper Collective action on artificial intelligence: A primer and review, especially Section 2.3, or work by Scott Barrett, especially his book Why Cooperate? The Incentive to Supply Global Public Goods.
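To make the distinction concrete, the collective action literature (including Barrett's work) often formalizes these situations in terms of how individual contributions aggregate into the total supply of the public good. The notation below is a standard sketch of that idea, not drawn verbatim from either source:

```latex
% Aggregate effort: total supply Q is the sum of everyone's
% contributions q_i (e.g., global greenhouse gas emissions reductions)
Q_{\text{aggregate}} = \sum_{i=1}^{n} q_i

% Single best effort: total supply is determined by the single largest
% contribution (e.g., one successful asteroid deflection mission)
Q_{\text{best}} = \max_{i} \, q_i
```

Under aggregate effort, every contribution matters at the margin; under single best effort, only the top contribution does, which is part of why a few well-placed individuals can matter so much in the latter type of situation.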
That's an interesting question, thanks. To summarize my remarks below: AI and climate change are more market-oriented, asteroids and nuclear weapons are more government-oriented, biosecurity is a mix of both, and philanthropy has a role everywhere.
First, market solutions will be limited for all global catastrophic risks because the risks inevitably involve major externalities. The benefits of reducing global catastrophic risks go to people all over the world and future generations. Markets aren't set up to handle that sort of value.
That said, there can still be a role for market activity in certain global catastrophic risks, especially AI and climate change. AI and climate change are distinctive in that both involve highly profitable activity from some of the largest corporations in the world. Per https://companiesmarketcap.com, the current top five largest companies are Apple, Microsoft, Alphabet, Saudi Aramco, and Amazon. My own work on AI corporate governance is largely motivated by my prior experience working on climate change policy, including my PhD dissertation.
There are ways to make money while reducing climate change risk, such as by reducing expenditures on energy or building transit-oriented housing. The climate benefits are more incidental, but they can still be significant. Likewise, for AI, market demand for safe near-term AI technologies can have some incidental benefits for improving the safety of long-term AI technologies like AGI. These are good opportunities to pursue, as are opportunities to influence corporate governance to better align corporate activities with reducing global catastrophic risk.
Second, governments can play important roles in all of the global catastrophic risks. Even for corporate activity related to AI and climate change, governments have important roles as regulators. That said, governments are especially important for nuclear weapons and asteroids. There certainly is a role for a variety of non-governmental actors in reducing nuclear weapons risk (see this for an overview). That said, it's still the case that governments control the weapons and make the major decisions about them. Governments also play a central role in addressing asteroid impacts, supplemented by a robust scientific community, though I believe the science is also largely funded by governments.
Third, philanthropy can play important roles in all of the global catastrophic risks. Philanthropic and nonprofit activity is highly versatile and can play roles that markets and governments can't or won't do. Ten years ago, prior to the deep learning revolution, I believe almost all work on global catastrophic risk from AI was from philanthropy; now the portfolio of work is more diverse. For the current state of affairs, I don't have a specific answer to the question.
And briefly, regarding biosecurity, that is a risk in which governments and markets are both quite important. This is seen in the ongoing pandemic, for example in the role of the pharmaceutical industry in developing and manufacturing vaccines and the role of governments in supporting vaccine development and distribution and a variety of other policy responses.
Hi everyone. Thanks for all the questions so far. I'll be online for most of the day today and I'll try to get to as many of your questions as I can.
Thanks for the question. This is a good thing to think critically about. With respect to strong AI, the short answer is that it's important to develop these sorts of ideas in advance. If we wait until we already have the technology, it could be too late. There are some scenarios in which waiting is more viable, such as the idea of a long reflection, but this is only a portion of the total scenario space, and even then, the outcomes could depend on the initial setup. Additionally, ethics can also matter for near-term / weak AI, including in ways that affect global catastrophic risk, such as in the context of environmental or military affairs.
Glad to hear that you're interested in these topics. It's a good area to pursue work in.
Regarding how to get involved, to a large extent my advice is just general advice for getting involved in any area: study, network, and pursue opportunities as you get them. The networking can often be the limiting factor for people new to something. I would keep an eye on fellowship programs, such as the ones listed here. One of those is the GCRI Advising and Collaboration Program, which to a large extent exists to provide an entry point for people interested in these topics. We try to connect participants to other people in our networks to help them get plugged in. That said, I would encourage you to not restrict yourself to formal programs like these, but instead to try to create your own opportunities. Finally, regarding AI policy specifically, it's good to monitor ongoing policy initiatives, e.g. this in the US, and research on AI policy, especially on the gcr/xrisk dimensions, e.g. GCRI's AI research (though definitely look at more than just GCRI). If you can draw connections between ongoing policy initiatives and the ideas being developed in research, that's a really valuable skill that there will almost certainly be continued demand for over the years.