I will be online to answer questions from morning through afternoon US Eastern time on Friday 17 December. Ask me anything!
About me:
- I am co-founder and Executive Director of the Global Catastrophic Risk Institute.
- I am also an editor at the journals Science and Engineering Ethics and AI & Society, and an honorary research affiliate at CSER.
- I have seen the field of global catastrophic risk grow and evolve over the years. I’ve been involved in global catastrophic risk since around 2008 and co-founded GCRI in 2011.
- My work focuses on bridging the divide between theoretical ideals about global catastrophic risk, the long-term future, outer space, etc. and the practical realities of how to make a positive difference on these issues. This includes research to develop and evaluate viable options for reducing global catastrophic risk, outreach to important actors (policymakers, industry, etc.), and activities to support the overall field of global catastrophic risk.
- The topics I cover are a bit eclectic. I have worked across a range of global catastrophic risks, especially artificial intelligence, asteroids, climate change, and nuclear weapons. I also work with a variety of research disciplines and non-academic professions. A lot of my work involves piecing together these various perspectives, communities, etc. This includes working at the interface between EA communities and other communities relevant to global catastrophic risk.
- I do a lot of advising for people interested in getting more involved in global catastrophic risk. Most of this is through the GCRI Advising and Collaboration Program. The program is not currently open; it will open again in 2022.
Some other items of note:
- Common points of advice for students and early-career professionals interested in global catastrophic risk, a write-up of running themes from the advising I do (originally posted here).
- Summary of 2021-2022 GCRI Accomplishments, Plans, and Fundraising, our recent annual report on the current state of affairs at GCRI.
- Subscribe to the GCRI newsletter or follow the GCRI website to stay informed about our work, next year’s Advising and Collaboration Program, etc.
- My personal website here.
I’m happy to field a wide range of questions, such as:
- Advice on how to get involved in global catastrophic risk, pursue a career in it, etc. Also specific questions on decisions you face: what subjects to study, what jobs to take, etc.
- Topics I wish more people were working on. There are many, so please provide some specifics of the sorts of topics you’re looking at. Otherwise I will probably say something about nanotechnology.
- The details of the global catastrophic risks and the opportunities to address them, and why I generally favor an integrated, cross-risk approach.
- What’s going on at GCRI: our ongoing activities, plans, funding, etc.
- The intersection of animal welfare and global catastrophic risk/long-term future, and why GCRI is working on nonhumans and AI ethics (see recent publications 1, 2, 3, 4).
- The world of academic publishing, which I’ve gotten a behind-the-scenes view of as a journal editor.
One type of question I will not answer is advice on where to donate money. GCRI does take donations, and I think GCRI is an excellent organization to donate to. We do a lot of great work on a small budget. However, I will not engage in judgments about which other organizations may be better or worse.
Thanks for the question.
Asteroid risk probably has the most cooperation and the most transparent communication. It is notable for its high degree of agreement: all parties around the world agree that it would be bad for Earth to get hit by a large rock, that there should be astronomy to detect nearby asteroids, and that if a large Earthbound asteroid is detected, there should be some sort of mission to deflect it away from Earth. There are some points of disagreement, such as on the use of nuclear explosives for asteroid deflection, but these are more matters of detail.
Additionally, the conversation about asteroid risk is heavily driven by scientific communities. Scientists have a strong orientation toward transparency, such as publishing research in the open literature, including details on methods. Relatively few aspects of asteroid risk involve the sorts of information that are less transparent, such as classified government information or proprietary business information. There is some, such as information on nuclear explosives, but it's overall a small portion of the topic. The result is a relatively transparent conversation about asteroid risk.
The question of scalability is harder to answer. A lot of the relevant governance activities are singular or top-down in ways that make scalability less relevant. For example, it's hard to talk about the scalability of initiatives to deflect asteroids or make sound nuclear weapon launch decisions, because these are things that only need to be done in a few isolated circumstances.
It's easier to talk about the scalability of initiatives for reducing climate change because there's such a broad ongoing need to reduce greenhouse gas emissions. For example, a notable recent development in the climate change space is the rapid growth in the market for electric bicycles; this is a technology that is rapidly maturing and can be manufactured at scale. Certain climate change governance concepts can also scale, for example urban design concepts that are initially implemented in a few neighborhoods and then adopted more widely. Scaling such initiatives is often difficult, but it can at least in principle be done.