All of SethBaum's Comments + Replies

AMA: Seth Baum, Global Catastrophic Risk Institute

Thanks for the question.

Asteroid risk probably has the most cooperation and the most transparent communication. Asteroid risk is notable for its high degree of agreement: all parties around the world agree that it would be bad for Earth to get hit by a large rock, and that there should be astronomy to detect nearby asteroids, and that if a large Earthbound asteroid is detected, there should be some sort of mission to deflect it away from Earth. There are some points of disagreement, such as on the use of nuclear explosives for asteroid deflection, but this... (read more)

AMA: Seth Baum, Global Catastrophic Risk Institute

The best way to answer this question is probably in terms of GCRI's three major areas of activity: research, outreach, and community support, plus a fourth item, organization development.

GCRI's ultimate goal is to reduce global catastrophic risk. Everything we do is oriented toward that end. Our research develops ideas and reduces uncertainty about how best to reduce global catastrophic risk. Our outreach gets those ideas to important decision-makers and helps us understand what research questions decision-makers would benefit from answers to. Our comm... (read more)

AMA: Seth Baum, Global Catastrophic Risk Institute

I regret that I don't have a good answer to this question. Global catastrophic risk doesn't have much in the way of statistics, due to the lack of prior global catastrophes. (Which is a good thing!)

There are some statistics on the amount of work being done on global catastrophic risk. For that, I would recommend the paper Accumulating evidence using crowdsourcing and machine learning: A living bibliography about existential risk and global catastrophic risk by Gorm Shackelford and colleagues at CSER. It finds that there is a significant body of work on the... (read more)

AMA: Seth Baum, Global Catastrophic Risk Institute

Thanks for the question. I see that the question is specifically on neglected areas of research, not other types of activity, so I will focus my answer on that. I'll also note that my answers to this question map pretty closely to my own research agenda, which may be a bit of a bias, though it's also the case that I try to focus my research on the most important open questions.

For AI, there are a variety of topics in need of more attention, especially (1) the relation between near-term governance initiatives and long-term AI outcomes; (2) detailed concepts... (read more)

AMA: Seth Baum, Global Catastrophic Risk Institute

Thanks for the question. To summarize, I don't have a clear ranking of the risks, and I don't think it makes sense to rank them in terms of tractability. There are some tractable opportunities across a variety of risks, but how tractable they are can vary a lot depending on one's background and other factors.

First, tractability of a risk can vary significantly from person to person or from opportunity to opportunity. There was a separate question on which risks a few select individuals could have the largest impact on; my answer to that is relevant here.

Se... (read more)

AMA: Seth Baum, Global Catastrophic Risk Institute

Interesting question, thanks. To summarize my answer: I believe nuclear weapons have the largest opportunities for a few select individuals to make an impact; climate change has the smallest opportunities; and AI, asteroids, and biosecurity are somewhere in between.

First, please note that I am answering this question without regard for the magnitude of the risks. One risk might offer larger opportunities for an individual to make an impact simply because it is a much larger risk. However, accounting for that turns this into a question about which risks are large... (read more)

AMA: Seth Baum, Global Catastrophic Risk Institute

That's an interesting question, thanks. To summarize my remarks below: AI and climate change are more market-oriented, asteroids and nuclear weapons are more government-oriented, biosecurity is a mix of both, and philanthropy has a role everywhere.

First, market solutions will be limited for all global catastrophic risks because the risks inevitably involve major externalities. The benefits of reducing global catastrophic risks go to people all over the world and future generations. Markets aren't set up to handle that sort of value.

That said, there can sti... (read more)

Madhav Malhotra (8mo):
This is a very comprehensive answer! I especially appreciate your summary up top and you linking to sources. Thank you :-)
AMA: Seth Baum, Global Catastrophic Risk Institute

Hi everyone. Thanks for all the questions so far. I'll be online for most of the day today and I'll try to get to as many of your questions as I can.

AMA: Seth Baum, Global Catastrophic Risk Institute

Thanks for the question. This is a good thing to think critically about. With respect to strong AI, the short answer is that it's important to develop these sorts of ideas in advance. If we wait until we already have the technology, it could be too late. There are some scenarios in which waiting is more viable, such as the idea of a long reflection, but this is only a portion of the total scenario space, and even then, the outcomes could depend on the initial setup. Additionally, ethics can also matter for near-term / weak AI, including in ways that affect global catastrophic risk, such as in the context of environmental or military affairs.

AMA: Seth Baum, Global Catastrophic Risk Institute

Glad to hear that you're interested in these topics. It's a good area to pursue work in.

Regarding how to get involved, to a large extent my advice is just general advice for getting involved in any area: study, network, and pursue opportunities as you get them. The networking can often be the limiting factor for people new to something. I would keep an eye on fellowship programs, such as the ones listed here. One of those is the GCRI Advising and Collaboration Program, which to a large extent exists to provide an entry point for people interested in these ... (read more)

AMA: Seth Baum, Global Catastrophic Risk Institute

Thanks for your questions. In reply:

I would not ever expect governments to respond to catastrophic risks to a degree that I (for one) think is proportionate to the importance of the risks. This is because I would rate the risks as being more important than most other people would. There are a variety of reasons for this, including the risks' intergenerational and global nature, as well as some psychological and institutional factors. Jonathan Wiener's paper The Tragedy of the Uncommons is a good read on this.

That said, I do see potential for government... (read more)

Ben Stewart (8mo):
Thanks!
Common Points of Advice for Students and Early-Career Professionals Interested in Global Catastrophic Risk

Thank you for these thoughtful comments.

Regarding exploration vs. exploitation:

First, my understanding of what you mean by this is that exploration involves taking time to learn more about an area, whereas exploitation involves focusing on trying to make an impact within that area. On one hand, it can be important to learn more in order to better orient oneself in the right direction. On the other hand, spending too much time on exploration can mean not making much of an impact. My apologies if this is not what you intended.

There often is a need for balanc... (read more)

The case for long-term corporate governance of AI

Thanks for sharing this - looks like good work.

2019 AI Alignment Literature Review and Charity Comparison

My commendations on another detailed and thoughtful review. A few reactions (my views, not GCRI's):

The only case I can think of where scientists are relatively happy about punitive safety regulations, nuclear power, is one where many of those initially concerned were scientists themselves.

Actually, a lot of scientists & engineers in nuclear power are not happy about the strict regulations on it. Note: I've been exposed to this because my father worked as an engineer in the nuclear power industry, and I've had other interact... (read more)

Long-Term Future Fund: April 2019 grant recommendations

Thanks, that makes sense. This is one aspect in which audience is an important factor. Our two recent nuclear war model papers (on the probability and impacts) were written to be accessible to wider audiences, including audiences less familiar with risk analysis. This is of course a factor for all research groups that work on topics of interest to multiple audiences, not just GCRI.

Long-Term Future Fund: April 2019 grant recommendations

All good to know, thanks.

I'll briefly note that I am currently working on a more extended discussion of policy outreach suitable for posting online, possibly on this site, that is oriented toward improving the understanding of people in the EA-LTF-GCR community. It's not certain I'll have the chance to complete it given my other responsibilities, but hopefully I will.

Also, if it would help, I can provide suggestions of people at other organizations who can give perspectives on various aspects of GCRI's work. We could follow up privately about that.

Long-Term Future Fund: April 2019 grant recommendations

I actually had a sense that these broad overviews were significantly less valuable to me than some of the other GCRI papers that I've read, and I predict that other people who have thought about global catastrophic risks for a while would feel the same.

That is interesting to hear. Some aspects of the overviews are of course going to be more familiar to domain experts. The integrated assessment paper in particular describes an agenda and is not intended to have much in the way of original conclusions.

The argument seemed to mostly consist of a few con
... (read more)
Raemon (3y):
Just wanted to make a quick note that I also felt the "overview"-style posts aren't very useful to me (since they mostly encapsulate things I already had thought about). At some point I was researching some aspects of nuclear war and reading up on a GCRI paper that was relevant, and what I found myself really wishing was that the paper had just drilled deep into whatever object-level, empirical data was available, rather than being a high-level summary.
Long-Term Future Fund: April 2019 grant recommendations
I do view this publishing of the LTF-responses as part of an iterative process.

That makes sense. I might suggest making this clear to other applicants. It was not obvious to me.

Long-Term Future Fund: April 2019 grant recommendations

Oliver Habryka's comments raise some important issues, concerns, and ideas for future directions. I elaborate on these below. First, I would like to express my appreciation for his writing these comments and making them available for public discussion. Doing this on top of the reviews themselves strikes me as quite a lot of work, but also very valuable for advancing grant-making and activity on the long-term future.

My understanding of Oliver's comments is that while he found GCRI's research to be of a high intellectual quality, he did not ha... (read more)

Habryka (3y):
I want to make sure that there isn't any confusion about this: When I do a grant writeup like the one above, I am definitely only intending to summarize where I am personally coming from. The LTF-Fund had 5 voting members last round (and will have 4 in the coming rounds), and so my assessment is necessarily only a fraction of the total assessment of the fund. I don't currently know whether the question of the target audience would have been super valuable for the other fund members, and given that I already gave a positive recommendation, their cruxes and uncertainties would have actually been more important to address than my own.
Habryka (3y):
(Breaking things up into multiple replies, to make things easier to follow, vote on, and reply to.)

Of those, I had read "Long-term trajectories of human civilization" and "The far future argument for confronting catastrophic threats to humanity: Practical significance and alternatives" before I made my recommendation (which I want to clarify was a broadly positive recommendation, just not a very-positive recommendation).

I actually had a sense that these broad overviews were significantly less valuable to me than some of the other GCRI papers that I've read, and I predict that other people who have thought about global catastrophic risks for a while would feel the same. I had a sense that they were mostly retreading and summarizing old ground, while being more difficult to read and of lower quality than most of the writing that already exists on this topic (a lot of it published by FHI, and a lot of it written on LessWrong and the EA Forum). I also generally found the arguments in them not particularly compelling (in particular, I found the arguments in "The far future argument for confronting catastrophic threats to humanity: Practical significance and alternatives" relatively weak, and thought that it failed to really make a case for significant convergent benefits of long-term and short-term concerns. The argument seemed to mostly consist of a few concrete examples, most of which seemed relatively tenuous to me. Happy to go into more depth on that).

I highlighted "A model for the probability of nuclear war" not because it was the only paper I read (I read about 6 GCRI papers when doing the review and two more since then), but because it was the paper that did actually feel to me like it was helping me build a better model of the world, and something that I expect to be a valuable reference for quite a while. I actually don't think that applies to any of the three papers you linked above. I don't currently have a great operationalization of what I mean
Habryka (3y):
Thanks for posting the response! Some short clarifications:

My perspective only played a partial role in the discussion of the GCRI grant, since I am indeed not the person with the most policy expertise on the fund. It only so happens that I am also the person who had the most resources available for writing things up for public consumption, so I wouldn't update too much on my specific feedback. Though my perspective might still be useful for understanding the experience of people closer to my level of expertise, of which there are many, and I do obviously think there is important truth to it (and obviously as a way to help me build better models of the policy space, which I do think is valuable).

I strongly agree with this, and also think that a lot of the best work is cross-cutting and interdisciplinary. I think the degree to which things are interdisciplinary is part of the reason why there is some shortage of EA grantmaking expertise. Part of my hope with facilitating public discussion like this is to help me and other people in grantmaking positions build better models of domains where we have less expertise.
Have we underestimated the risk of a NATO-Russia nuclear war? Can we do anything about it?

Thanks for this conversation. Here are a few comments.

Regarding the Ukraine crisis and the current NATO-Russia situation, I think Max Fisher at Vox is right to raise the issue as he has, with an excellent mix of insider perspectives. There should be more effort like this, in particular to understand Russia's viewpoint. For more on this topic I recommend recent work by Rajan Menon [http://nationalinterest.org/feature/newsflash-america-ukraine-cannot-afford-war-russia-13137], [http://nationalinterest.org/feature/avoiding-new-cuban-missile-crisis-ukraine-1294... (read more)

I am Seth Baum, AMA!

I see the logic here, but I would hesitate to treat it as universally applicable. Under some circumstances, more centralized structures can outperform. For example, if China or Wal-Mart decides to reduce greenhouse gas emissions, then you can get a lot more than if the US or the corner store decides to, because the latter are more decentralized. That's for avoiding catastrophes. For surviving them, sometimes you can get similar effects. However, local self-sufficiency can be really important. We argued this in http://sethbaum.com/ac/2013_AdaptationRecovery.ht... (read more)

I am Seth Baum, AMA!

OK, I'm wrapping up for the evening. Thank you all for these great questions and discussion. And thanks again to Ryan Carey for organizing.

I'll check back in tomorrow morning and try to answer any new questions that show up.

RyanCarey (7y):
Thanks very much for giving some of your time to discuss this important topic with all of us! It's great to build a stronger connection between effective altruists and GCRI and to get a better idea of how you're thinking about analysing and predicting risks. Good luck with GCRI and I look forward to hearing how GCRI comes along with its new, research-focussed direction.
Randomized, Controlled (7y):
Thanks again for your time, comments and being a nucleation point for conversation!
I am Seth Baum, AMA!

For what it's worth, I became a (bad) vegan/vegetarian because, at its worst, industrial animal husbandry seems to do some truly terrible things. And sorting out the provenance of animal products is just a major PITA, fraught with all sorts of uncertainty and awkward social moments, such as being the doof at the restaurant who needs to ask five different questions about where/how/when the cow got turned into the steak. It's just easier for me to order the salad.

I mainly eat veg foods too. It reduces environmental problems, which helps on gcr/xrisk. And i... (read more)

I am Seth Baum, AMA!

I took an honors BA which included a pretty healthy dose of post-structuralist inflected literary theory, along with math and fine arts. I did a masters in architecture, worked in that field for a time, then as a 'creative technologist' and now I'm very happy as a programmer, trying to learn as much math as I can in my free time.

Very interesting!

I am Seth Baum, AMA!

It looks like a good part of the conversation is starting to revolve around influencing policy. I think there are some big macro social/cultural forces that have been pushing people to be apolitical for a while now. The most interesting reform effort I've heard about lately is Lawrence Lessig's anti-PAC in the US. How can we effectively level up our political games?

I agree there are macro factors pushing people away from policy. However, that can actually increase the effectiveness of policy engagement: less competition.

A great way to level up in politics... (read more)

I am Seth Baum, AMA!

Total mixed bag of questions, feel free to answer any/all. Apologies if you've already written on the subject elsewhere; feel free to just link if so.

No worries.

What is your current marginal project(s)? How much will they cost, and what's the expected output (if they get funded)?

We're currently fundraising in particular for integrated assessment, http://gcrinstitute.org/integrated-assessment. Most institutional funders have programs on only one risk at a time. We're patching integrated assessment work from other projects, but hope to get more dedicat... (read more)

I am Seth Baum, AMA!

One of the major obstacles to combating global warming at the governmental level in America is the large financial investment that the fossil fuel industry makes in politicians in return for tens of billions of dollars in government assistance every year (the numbers vary widely depending on how one calculates the incentives, tax breaks, money for research, and so on). There seems to me to be only one way to change the current corrupt money-for-control-of-politicians process, and that is to demand that all political donations be made anonymously, given

... (read more)
I am Seth Baum, AMA!

Oops, I think I answered this question up above. I think this is the link: http://effective-altruism.com/ea/fv/i_am_seth_baum_ama/2v9

I am Seth Baum, AMA!

What funding will GCRI require over the coming year to maintain these activities?

GCRI has a small base of ongoing funding that keeps the doors open, so to speak, except that we don't have any actual doors. I will say, not having an office space really lowers costs!

The important thing is that GCRI is in an excellent place to convert additional funding into additional productivity, mainly by freeing up additional person-hours of work.

I am Seth Baum, AMA!

Then I guess you don't think it's plausible that we can't expect to make many permanent gains. Why?

I'll have to look at that link later, but briefly: I do think it can be possible to make some permanent gains, but there seem to be significantly more opportunities to avoid permanent losses. That said, I do not wish to dismiss the possibility of permanent gains, and am very much willing to consider them as potentially of comparable significance.

I am Seth Baum, AMA!

Here's one question: which risks are you most concerned about?

I shy away from ranking risks, for several reasons:

  • The risks are often interrelated in important ways. For example, we analyzed a scenario in which geoengineering catastrophe was caused by some other catastrophe: http://sethbaum.com/ac/2013_DoubleCatastrophe.html. This weekend Max Tegmark was discussing how AI can affect nuclear war risk if AI is used for nuclear weapons command & control. So they're not really distinct risks.

  • Ultimately what's important to rank is not the risks thems

... (read more)
I am Seth Baum, AMA!

What are GCRI's current plans or thinking around reducing synthetic biology risk? Frighteningly, there seems to be underinvestment in this area.

We have an active synbio project modeling the risk and characterizing risk reduction opportunities, sponsored by the US Dept of Homeland Security: http://gcrinstitute.org/dhs-emerging-technologies-project.

I agree that synbio is an under-invested-in area across the gcr community. Ditto for other bio risks. GCRI is working to correct that, as is CSER.

Also, with regard to the research project on altruism, my shoo

... (read more)
Randomized, Controlled (7y):
For what it's worth, I became a (bad) vegan/vegetarian because, at its worst, industrial animal husbandry seems to do some truly terrible things. And sorting out the provenance of animal products is just a major PITA, fraught with all sorts of uncertainty and awkward social moments, such as being the doof at the restaurant who needs to ask five different questions about where/how/when the cow got turned into the steak. It's just easier for me to order the salad.

My interest in x-risk comes from wanting to work on big/serious problems. I can't think of a bigger one than x-risk.
SethBaum (7y):
I shy away from ranking risks, for several reasons:

  • The risks are often interrelated in important ways. For example, we analyzed a scenario in which geoengineering catastrophe was caused by some other catastrophe: http://sethbaum.com/ac/2013_DoubleCatastrophe.html. This weekend Max Tegmark was discussing how AI can affect nuclear war risk if AI is used for nuclear weapons command & control. So they're not really distinct risks.

  • Ultimately what's important to rank is not the risks themselves, but the actions we can take to reduce them. We may sometimes have better opportunities to reduce smaller risks. For example, maybe some astronomers should work on asteroid risks even though this is a relatively low-probability risk.

Also, the answer to this question varies by time period. For, say, the next 12 months, nuclear war and pandemics are probably the biggest risks. For the next 50-100 years, we need to worry about these plus a mix of environmental and technological risks.

There's the classic Margaret Mead quote, "Never underestimate the power of a small group of committed people to change the world. In fact, it is the only thing that ever has." There's a lot of truth to this, and I think the EA community is well on its way to being another case in point. That is, as long as you don't slack off! :)

That said, I keep an eye on a mix of politicians, other government officials, researchers, activists, celebrities, journalists, philanthropists, entrepreneurs, and probably a few others. They all play significant roles and it's good to be able to work with all of them.
I am Seth Baum, AMA!

thank you for your time and work!

You're welcome!

If I wanted to work at GCRI or a similar think-tank/institution, what skills would make me most valuable?

Well, I regret that GCRI doesn't have the funds to be hiring right now. Also, I can't speak for other think tanks; GCRI runs a fairly distinctive operation. But I can say a bit about what we look for in people we work with.

Some important things to have for GCRI include: (1) a general understanding of gcr/xrisk issues, for example by reading research from GCRI, FHI, and our colleagues; (2) deep familiarity w... (read more)

Randomized, Controlled (7y):
I took an honors BA which included a pretty healthy dose of post-structuralist inflected literary theory, along with math and fine arts. I did a masters in architecture, worked in that field for a time, then as a 'creative technologist' and now I'm very happy as a programmer, trying to learn as much math as I can in my free time.
I am Seth Baum, AMA!

Thanks Ryan! And thanks again for organizing.

My last question for now: what do you think is the path from risk-analysis to policy? Some aspiring effective altruists have taken up a range of relevant jobs, for instance working for politicians, in think tanks, in defence and in international governance. Can they play a role in promoting risk-reducing policies? And more generally, how can researchers get their insights implemented?

This is a really, really important question. In a sense, it all comes down to this. Otherwise there's not much point in doing ... (read more)

I am Seth Baum, AMA!

Hi Ales,

Are you coordinating with FLI and FHI to have some division of labor?

We are in regular contact with both FLI & FHI. FHI is more philosophical than GCRI. The most basic division of labor there is for FHI to develop fundamental theory and GCRI to make the ideas more applied. But this is a bit of a simplification, and the coordination there is informal. With FLI, I can't yet point to any conceptual division of labor, but we're certainly in touch. Actually, I was just spending time with Max Tegmark over the weekend in NYC, and we had some nice con... (read more)

I am Seth Baum, AMA!

what kind of researchers do you think are needed most at GCRI?

Right now, I would say researchers who can do detailed risk analysis similar to what we did in our inadvertent nuclear war paper: http://sethbaum.com/ac/2013_NuclearWar.html. The ability to work across multiple risks is extremely helpful. Our big missing piece has been on biosecurity risks. However, we have a new affiliate Gary Ackerman who is helping out with that. Also I'm participating in a biosecurity fellowship program that will also help. But we could still use more on biosecurity. That... (read more)

I am Seth Baum, AMA!

Good questions!

Of all the arguments you've heard for de-prioritizing GCR reduction, which do you find most convincing?

The only plausible argument I can imagine for de-prioritizing GCR reduction is if there are other activities out there that can offer permanent expected gains that are comparably large as the permanent expected losses from GCRs. Nick Beckstead puts this well in his dissertation discussion of far future trajectories, or the concept of "existential hope" from Owen Cotton-Barratt & Toby Ord. But in practical terms the bulk of... (read more)

Alexander (7y):
Then I guess you don't think it's plausible that we can't expect to make many permanent gains [http://www.overcomingbias.com/2014/02/dust-in-the-wind.html]. Why?
RyanCarey (7y):
What funding will GCRI require over the coming year to maintain these activities?