...If AI systems replace humanity, that outcome would undoubtedly be an absolute disaster for the eight billion human beings currently alive on Earth. However, it would be a localized, short-term disaster rather than an astronomical one. Bostrom's argument, strictly interpreted, no longer applies to this situation. The reason is that the risk is confined to the present generation of humans: the question at stake is simply whether the eight billion people alive today will be killed or allowed to continue living. Even if you accept that killing eight billion pe
I think your formulation is elegant, but the real possibilities seem lumpier to me and span many more orders of magnitude (OOMs). Here's a modification from a comment on a similar idea:
I think there would be some probability mass on technological stagnation and population reductions, though the cumulative number of lives would still be much larger than the number alive today. Then there would be some mass on maintaining something like 10 billion people for a billion years (no AI, staying on Earth either by choice or for technical reasons). Then there would...
I didn't realize it was that much money. This has relevance to the debates about whether AI will value humans. Though EA has not focused as much on making mainstream money more effective, there have been some efforts.
But my main response is: why the focus on cultivated meat? It seems like efforts on plant-based meat, fermentation, or leaf protein concentrate have a much greater likelihood of achieving parity in the near term.
It could even be that mitigating existential risk is the most cost-effective way of saving species, though I realize that is pro...
Confusion in "What mildest scenario do you consider doom?"
My probability distribution looks like what you call the MIRI Torch, and what I call the MIRI Logo: Scenarios 3 to 9 aren't well described in the literature because they are not stable equilibria. In the real world, once you are powerless, worthless, and an obstacle to those in power, you just end up dead.
This question was not about probability, but about what one considers doom. But let's talk probability. I think Yudkowsky and Soares believe that one or more of 3-5 has decent likeliho...
I'm not sure if you consider LessWrong serious literature, but cryonically preserving all humans was mentioned here. I think nearly everyone would consider this doom, but there are people defending extinction (which I think is even worse) as not doom, so I included them all for completeness.
Yes, one could take many hours thinking through these questions (as I have), but even if one doesn't have that time, I think it's useful to get an idea of how people are defining doom, because a lot of people use the term, and I suspect that there is a wide variety of defi...
Minimum P(doom) that is unacceptable to develop AGI
80%: I think even if we were disempowered, we would likely get help from the AGI to quickly solve problems like poverty, factory farming, aging, etc., and I do think that is valuable. If humanity were disempowered, I think there would still be some value in expectation from the AGI settling the universe. I am worried that a pause before AGI could become permanent (until there is population and economic collapse due to fertility collapse, after which it likely doesn't matter), and that could prevent the settle...
P(disempowerment|AGI)
60%: If humans stay biological, it's very hard for me to imagine ASI, with its vastly superior intelligence and processing speed, still taking direction from feeble humans in the long run. I think if we could get human brain emulations going before AGI got too powerful, perhaps by banning ASI until it is safe, then we would have some chance. You can see why, for someone like me with a much lower P(catastrophe|AGI) than P(disempowerment|AGI), it's very important to know whether disempowerment is considered doom!
P(catastrophe|AGI)
15%: I think sparing Earth from overheating would only require delaying AGI's settlement of the universe by around a month, which would cost something like one part in 1 trillion of the attainable value (from galaxies receding out of reach during the delay), assuming no discounting. The continuing loss of value from sparing enough sunlight for the Earth (and directing the infrared radiation from the Dyson swarm away from Earth so it doesn't overheat) is completely negligible compared to all the energy/mass available in the galaxies that could be settled. I think it is relatively un...
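To put rough numbers on this, here's a back-of-the-envelope sketch in Python; the shrink rate of the reachable universe, the stars per galaxy, and the reachable-galaxy count are order-of-magnitude assumptions of mine for illustration, not figures from any particular source:

```python
import math

# (1) Cost of a one-month delay from receding galaxies.
# Assume the reachable universe shrinks by roughly one part in 1e11 per year
# due to cosmic expansion (order-of-magnitude assumption).
shrink_per_year = 1e-11
shrink_per_month = shrink_per_year / 12
print(f"Fraction of reachable resources lost per month of delay: {shrink_per_month:.1e}")  # ~1e-12

# (2) Ongoing cost of sparing Earth's sunlight.
R_EARTH = 6.371e6   # m, Earth radius
AU = 1.496e11       # m, Earth-Sun distance
frac_sunlight = math.pi * R_EARTH**2 / (4 * math.pi * AU**2)
print(f"Fraction of the Sun's output Earth intercepts: {frac_sunlight:.1e}")  # ~4.5e-10

# That is a tiny fraction of one star, out of ~1e11 stars per galaxy and
# ~1e9+ reachable galaxies (assumptions), so the ongoing cost is negligible.
stars_per_galaxy = 1e11
reachable_galaxies = 1e9
frac_of_total = frac_sunlight / (stars_per_galaxy * reachable_galaxies)
print(f"Sparing Earth's sunlight as a fraction of total stellar output: {frac_of_total:.1e}")
```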
What mildest scenario do you consider doom?
“AGI takes control bloodlessly and prevents competing AGI and human space settlement in a light touch way, and human welfare increases rapidly:” I think this would result in a large reduction in long-term future expected value, so it qualifies as doom for me.
...Seeing the amount of private capital wasted on generative AI has been painful. (OpenAI alone has raised about $80 billion and the total, global, cumulative investment in generative AI seems like it’s into the hundreds of billions.) It’s made me wonder what could have been accomplished if that money had been spent on fundamental AI research instead. Maybe instead of being wasted and possibly even nudging the U.S. slightly toward a recession (along with tariffs and all the rest), we would have gotten the kind of fundamental research progress needed for usefu
Right - only 5% of EA Forum users surveyed want to accelerate AI:
"13% want AGI never to be built, 26% said to pause AI now in some form, and another 21% would like to pause AI if there is a particular event/threshold. 31% want some other regulation, 5% are neutral and 5% want to accelerate AI in a safe US lab."
Quoting myself:
So I do think that it is a vocal minority in EA and LW that has median timelines before 2030.
Now we have some data on AGI timelines for EA (though it was only 34 responses, so of course there could be large sampling error and selection bias): about 15% expect it by 2030 or sooner.
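To give a rough sense of how noisy 34 responses are, here's a quick Wilson-interval sketch (I'm assuming the split was 5 of 34, i.e. ~15%; the exact count is my guess):

```python
from math import sqrt

n, k = 34, 5          # assumed: 5 of 34 respondents expected AGI by 2030
p_hat = k / n
z = 1.96              # ~95% confidence

# Wilson score interval (better behaved than the normal approximation at small n)
centre = (p_hat + z**2 / (2 * n)) / (1 + z**2 / n)
half_width = z * sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2)) / (1 + z**2 / n)
print(f"point estimate: {p_hat:.0%}, 95% CI: roughly {centre - half_width:.0%} to {centre + half_width:.0%}")
# -> roughly 6% to 30%, before even considering who chose to respond (selection effects)
```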
Wow - @Toby_Ord, then why did you have such a high existential risk estimate for climate? Did you put large probability on AGI taking 100 or 200 years, despite a median date of 2032?
Most of these statistics (I haven't read the links) don't necessarily imply that current practices are unsustainable. The soil degradation sounds bad, but how much has it actually reduced yields? Yields have roughly doubled in the last ~70 years despite soil degradation. I talk some about supporting 10 billion people sustainably at developed-country standards of living in my second 80,000 Hours podcast.
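For a rough sense of scale, a one-line sketch of the implied growth rate, taking the ~2x over ~70 years figure above as given:

```python
# Implied average annual yield growth if yields ~doubled over ~70 years.
annual_growth = 2 ** (1 / 70) - 1
print(f"~{annual_growth:.1%} per year")  # ~1.0% per year, despite soil degradation
```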
Yeah, and there are lots of influences. I got into X risk in large part due to Ray Kurzweil's The Age of Spiritual Machines (1999) as it said "My own view is that a planet approaching its pivotal century of computational growth - as the Earth is today - has a better than even chance of making it through. But then I have always been accused of being an optimist."
Interesting idea.
As we switch to wind/solar, we can get the same energy services with less primary energy - something like a factor of 2.
We're a factor ~500 too small to be Type I.
- Today: 0.3 VPP
- Type I: 40 VPP
But 40 is only ~130x 0.3.
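To make the arithmetic explicit, a quick sketch (the VPP figures are as quoted above; the efficiency numbers in the second part are my own illustrative assumptions):

```python
# Quick check of the two factors above.
today_vpp, type_i_vpp = 0.3, 40.0      # units as quoted in the thread
print(f"Type I / today: {type_i_vpp / today_vpp:.0f}x")   # ~133x, not ~500x

# Rough illustration of "same energy services with ~2x less primary energy":
# assume ~35% of fossil primary energy ends up as useful services (thermal and
# engine losses), vs ~85% for renewable electricity with efficient end uses.
useful_fossil, useful_renewable = 0.35, 0.85   # illustrative assumptions
print(f"Primary energy saving for the same services: ~{useful_renewable / useful_fossil:.1f}x")  # ~2.4x
```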
There is some related discussion here about distribution.
I'm not sure exactly, but ALLFED and GCRI have had to shrink, and ORCG, Good Ancestors, Global Shield, EA Hotel, Institute for Law & AI (renamed from the Legal Priorities Project), etc. have had to pivot to approximately all AI work. SFF is now almost all AI.
Interesting. The claim I heard was that some rationalists anticipated that there would be a lockdown in the US and figured out who they wanted to be locked down with, especially to keep their work going. That might not have been put on LW when it was happening. I was skeptical that the US would lock down.
Year of Crazy (2029)
I'm using a combination of the scenarios in the post: one or more of them happening significantly before AGI.
Year of Singularity (2040)
Though I think we could get explosive economic growth with AGI or even before, I'm going to interpret this as explosive physical growth, i.e., that we could double physical resources every year or faster. I think it will take years after AGI to, e.g., crack robotics/molecular manufacturing.
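Just to illustrate what a doubling time of a year or less means (the 10-year window is arbitrary):

```python
# Growth factor from doubling physical resources once a year.
years = 10
print(f"Yearly doubling for {years} years -> {2 ** years}x the starting physical resources")  # 1024x
```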
Year of AGI (2035)
Extrapolating the METR graph here <https://www.lesswrong.com/posts/6KcP7tEe5hgvHbrSF/metr-how-does-time-horizon-vary-across-domains> suggests a superhuman coder fairly soon, but I think it's going to take years after that for the tasks that are slower on that graph, and many tasks aren't on the graph at all (even with the speedup from having a superhuman coder).
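For a concrete sense of that extrapolation, here's a minimal sketch; the current horizon, doubling time, and target are illustrative assumptions of mine, not numbers from the linked post:

```python
from math import log2

# Assumptions (for illustration only): current 50%-success time horizon ~2 hours,
# doubling roughly every 7 months; domains vary widely around this.
current_horizon_hours = 2.0
doubling_time_months = 7.0
target_horizon_hours = 167.0   # ~1 work-month of coding tasks (~40 h/week)

doublings_needed = log2(target_horizon_hours / current_horizon_hours)
months_needed = doublings_needed * doubling_time_months
print(f"{doublings_needed:.1f} doublings, i.e. roughly {months_needed:.0f} months, "
      f"to reach month-long coding tasks")
# Slower domains on the graph, or tasks not on it at all, would push this out further.
```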
Here's another example of someone in the LessWrong community thinking that LLMs won't scale to AGI.
Welcome, Denise! You may be interested in ALLFED, as one of the things we investigate is resilience to tail end climate catastrophes.
"I don't want to encourage people to donate (even to the same places as I did) unless you already have a few million dollars in assets"
I do see advantages of the abundance mindset, but your threshold is extremely high; it excludes nearly everyone in developed countries, let alone the world. Plenty of people without millions of dollars in assets have an abundance mindset (myself included).
Shameless plug for ALLFED: four of our former volunteers moved into paid work in biosecurity, and they were volunteers before we did much direct work in biosecurity. Now we are doing more biosecurity work directly. Since ALLFED has had to shrink, the contribution from volunteers has become relatively more important. So I think ALLFED is a good place for young people to skill up in biosecurity and have impact.
I have a very hard time believing that the average or median person in EA is more aware of issues like p-hacking (or the replication crisis in psychology, or whatever) than the average or median academic working professionally in the social sciences. I don't know why you would think that.
Maybe "aware" is not the right word, then. But I do think that EAs updated more quickly on the replication crisis being a big problem. I think this is somewhat understandable, as academics have strong incentives to get a statistically significant result to publish pa...
You said:
I see no evidence that effective altruism is any better at being unbiased than anyone else.
So that's why I compared to non-EAs. But ok, let's compare to academia. As you pointed out, there are many different parts of academia. I have been a graduate student or professor at five institutions, but in only two countries and only one field (engineering, though I have published some work outside of engineering). As I said in the other comment, academia is much more rigorously referenced than the EA Forum, but the disadvantage of this is that academia pus...
Daniel said "I would say that there’s like maybe a 30% or 40% chance that something like this is true, and that the current paradigm basically peters out over the next few years."
It might have been Carl on the Dwarkesh podcast, but I couldn't easily find a transcript. I've also heard from several others (maybe Paul Christiano?) that they put a 10-40% chance on AGI taking much longer (or even being impossible), either because the current paradigm doesn't get us there, or because we can't keep scaling compute exponentially as fast as we have in the last decade once it becomes a significant fraction of GDP.
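As a rough sketch of that GDP constraint, here's a toy calculation; the starting spend, growth rate, world GDP, and threshold share are all my own illustrative assumptions:

```python
from math import log

# When would AI compute spending hit a given share of world GDP if recent growth continued?
spend_today = 200e9        # $/yr, rough current AI compute/infrastructure spend (assumption)
growth_per_year = 2.5      # multiplicative growth per year (assumption)
world_gdp = 110e12         # $/yr (assumption)
threshold_share = 0.05     # share of GDP where further scaling gets hard (assumption)

years = log(threshold_share * world_gdp / spend_today) / log(growth_per_year)
print(f"~{years:.1f} years until spend reaches {threshold_share:.0%} of world GDP")  # ~3.6 years
```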
Wouldn't a global totalitarian government (or a global government of any kind) require advanced technology and a highly developed, highly organized society? That implies a high level of recovery from a collapse, but then why would global totalitarianism be more likely in such a recovery scenario than it is right now?
Though global totalitarianism may indeed be more likely after recovery from collapse, I was referring to a scenario where there was no collapse, but the catastrophe pushed us towards totalitarianism. Some pe...
The movie's reviews and ratings have been hurt by its rather frustrating ending, but I think that's unfair to its overall dramatic excellence.
The link didn't work.
Spoilers: the fatality estimate is ~1 order of magnitude too high. It's true that if there are lots of nukes headed towards your missile silos, there is great urgency to launch before they are destroyed. However, there is no such urgency to launch if a city is targeted, so that part seemed contrived. I was not aware that ground-based interceptors have to physically hit the ICBM, instead of having an...
I agree that extinction has been overemphasized in the discussion of existential risk. I would add that it's not just irrecoverable collapse, but also the potentially increased risk of subsequent global totalitarianism or of worse values ending up in AI. Here are some papers that I have been on that address some of these issues: 1, 2, 3, 4. And here is another relevant paper: 1, and a very relevant project: 2.
I think where academic publishing would be most beneficial for increasing the rigour of EA’s thinking would be AGI.
AGI is a subset of global catastrophic risks, so EA-associated people have published extensively on AGI in academic venues - I personally have about 10 publications related to AI.
...Examples of scandalously bad epistemic practices include many people in EA apparently never once even hearing that an opposing point of view on LLMs scaling to AGI even exists, despite it being the majority view among AI experts, let alone understanding the reasons be
Several variables a...