I will be online to answer questions from morning through afternoon, US Eastern time, on Friday 17 December. Ask me anything!
About me:
- I am co-founder and Executive Director of the Global Catastrophic Risk Institute.
- I am also an editor at the journals *Science and Engineering Ethics* and *AI & Society*, and an honorary research affiliate at CSER.
- I’ve been involved in global catastrophic risk since around 2008 and co-founded GCRI in 2011, so I have seen the field grow and evolve over the years.
- My work focuses on bridging the divide between theoretical ideals about global catastrophic risk, the long-term future, outer space, etc. and the practical realities of how to make a positive difference on these issues. This includes research to develop and evaluate viable options for reducing global catastrophic risk, outreach to important actors (policymakers, industry, etc.), and activities to support the overall field of global catastrophic risk.
- The topics I cover are a bit eclectic. I have worked across a range of global catastrophic risks, especially artificial intelligence, asteroids, climate change, and nuclear weapons. I also work with a variety of research disciplines and non-academic professions. A lot of my work involves piecing together these various perspectives, communities, etc. This includes working at the interface between EA communities and other communities relevant to global catastrophic risk.
- I do a lot of advising for people interested in getting more involved in global catastrophic risk. Most of this is through the GCRI Advising and Collaboration Program. The program is not currently open; it will open again in 2022.
Some other items of note:
- Common points of advice for students and early-career professionals interested in global catastrophic risk, a write-up of recurring themes from the advising I do (originally posted here).
- Summary of 2021-2022 GCRI Accomplishments, Plans, and Fundraising, our recent annual report on the current state of affairs at GCRI.
- Subscribe to the GCRI newsletter or follow the GCRI website to stay informed about our work, next year’s Advising and Collaboration Program, etc.
- My personal website here.
I’m happy to field a wide range of questions, such as:
- Advice on how to get involved in global catastrophic risk, pursue a career in it, etc. Also specific questions on decisions you face: what subjects to study, what jobs to take, etc.
- Topics I wish more people were working on. There are many, so please provide some specifics of the sorts of topics you’re looking at. Otherwise I will probably say something about nanotechnology.
- The details of the global catastrophic risks and the opportunities to address them, and why I generally favor an integrated, cross-risk approach.
- What’s going on at GCRI: our ongoing activities, plans, funding, etc.
- The intersection of animal welfare and global catastrophic risk/long-term future, and why GCRI is working on nonhumans and AI ethics (see recent publications 1, 2, 3, 4).
- The world of academic publishing, which I’ve gotten a behind-the-scenes view of as a journal editor.
One type of question I will not answer is advice on where to donate money. GCRI does take donations, and I think GCRI is an excellent organization to donate to. We do a lot of great work on a small budget. However, I will not engage in judgments about which other organizations may be better or worse.
Thanks for the question. I see that the question is specifically about neglected areas of research, not other types of activity, so I will focus my answer on that. I'll also note that my answers map pretty closely to my own research agenda, which may introduce some bias, though I do try to focus my research on the most important open questions.
For AI, there are a variety of topics in need of more attention, especially (1) the relation between near-term governance initiatives and long-term AI outcomes; (2) detailed concepts for specific, actionable governance initiatives in both public policy and corporate governance; (3) corporate governance in general (see discussion here); (4) the ethics of what an advanced AI should be designed to do; and (5) the implications of military AI for global catastrophic risk. There may also be neglected areas of research on how to design safe AI, though that is less my area of expertise and it already receives a relatively large amount of investment.
For asteroids, I would emphasize the human dimensions of the risk. Prior work on asteroid risk has included many contributions from astronomers and from the engineers involved in space missions, but comparatively little attention from social scientists. The possibility of an asteroid collision causing inadvertent nuclear war is a good example of a topic in need of a wider range of attention.
For climate change, one important line of research is on characterizing climate change as a global catastrophic risk. The recent paper "Assessing climate change’s contribution to global catastrophic risk" by S. J. Beard and colleagues at CSER provides a good starting point, but more work is needed. There is also a lot of opportunity to apply insights from climate change research to other global catastrophic risks. I've done this before here, here, here, and here. One good topic for new research would be evaluating the geoengineering moral hazard debate in terms of its implications for other risky technologies, including debates over what ideas shouldn't be published in the first place, e.g. "Was breaking the taboo on research on climate engineering via albedo modification a moral hazard, or a moral imperative?"
For nuclear weapons, I would like to see more on policy measures that are specifically designed to address global catastrophic risk. My winter-safe deterrence paper is one effort in that direction, but more should be done to develop this sort of idea.
For biosecurity, I'm less at the forefront of the literature, so I have fewer specific suggestions, though I would expect that there are good opportunities to draw lessons from COVID-19 for other global catastrophic risks.