
Jared Leibowich

Forecaster
23 karma · Joined Nov 2022

Bio

I forecast for various organizations:

  • Samotsvety
  • Swift Centre
  • Metaculus (Pro Forecaster)
  • Good Judgment (Superforecaster)

I also teach forecasting classes.

If you'd like to get a hold of me, I can be reached at jleibowich@gmail.com.

How I can help others

I am happy to provide forecasting and/or tutoring for both individuals and organizations.

Comments

Glad the diagram is helpful for you! As for the highest-EV path, here are some of my thoughts:

Ideal plan: The easiest route to lowering the probability of almost every path in my diagram is simply to ensure that AI never reaches a certain level of advancement. This is something I'm very open to. While there are economic and geopolitical incentives to create increasingly advanced AI, I don't think this is an inevitable path that humans have to take. For example, we as a species have more or less agreed that nuclear weapons should essentially never be used (even though some countries have them) and that it's undesirable to pursue nuclear weapons research aimed at making cheaper and more powerful weapons (although this is still done to some extent).

If there were a treaty setting capability limits that all countries (and companies) had to abide by, I think this would be a good thing, because huge economic gains could still be had even without super-advanced AI. I am hopeful that this is actually possible. I think many people were genuinely freaked out when they saw what GPT-4 was capable of, and GPT-4 is not even that close to AGI. So I am confident that there will be pushback from society as a whole against creating increasingly advanced AI.

I don't think there is an inevitable path that technology has to take. For example, I don't think the internet was destined to operate the way it currently does. We might have to accept that AI is one of those things we place research limits on, just as we do with nuclear weapons, bioweapons, and chemical weapons.

Second plan (if the first plan doesn't work): If humanity decides not to place limits on how advanced AI is allowed to get, my next recommendation is to minimize the chance that AGI systems succeed in their EC attempts. I think this is doable through some kind of international treaty (the same way we have nuclear weapons treaties), with a UN body focused on ensuring that agreed-upon barriers are put in place to cut AGI off from weapons of mass destruction.

Also, some kind of watermarking standard should perhaps be implemented so that communication between nations can be trusted and AGI cannot trick countries into conflict with fabricated information. That said, watermarking is hard, and people (and probably AI) eventually find a way around any watermark.

I think plan #2 is considerably worse than plan #1 because, if AGI were to become intelligent enough, it would be significantly harder to prevent AGI systems from achieving their goals.

I think both #1 and #2 could be relatively cheap (and easy) to implement if the political will is there.

Going back to your question about how it would start and how long it would take:

  • If there were an international effort, humanity could start #1 and/or #2 tomorrow.

  • I don’t see any reason why these could not be successfully implemented within the next year or two.

While my recommendations might come across as naïve to some, I am more optimistic than I was several months ago because I have been impressed with how quickly many people got freaked out by what AI is already capable of. This gives me reason to think that if AI capabilities continue to advance, there will be increasing pushback from society, especially as AI starts affecting people's personal and professional lives in more jarring ways.

Thank you for your comment and insight. The main reason my forecast for this scenario is not higher is that I think there is a sizable risk of an existential catastrophe unrelated to AGI occurring before the scenario you mentioned could resolve positively.

I am very open to adjusting my forecast, however. Are there any resources you would recommend that argue for forecasting a higher probability for this scenario relative to other AGI x-risk scenarios? And what are your thoughts on the likelihood of another existential catastrophe befalling humanity before an AGI-related one?

Also, please excuse any delay in my response: I will be away from my computer for the next several hours, but I will try to respond to any points you make within the next 24 hours.