>It's plausible humans will go extinct from AI. It's also plausible humans will go extinct from supervolcanoes.
Our primitive and nontechnological ancestors survived tens of millions of years of supervolcano eruptions (not to mention mass extinctions from asteroid/comet impacts), and our civilization's ability to withstand them is unprecedentedly high and rapidly increasing. That's not plausible; it's enormously remote, well under 1/10,000 this century.
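To get a feel for why that number comes out so small, here is a rough back-of-the-envelope sketch (the eruption rate and the conditional extinction probability below are assumed illustrative figures, not precise estimates):

```python
# Illustrative sketch, not a precise estimate: treat super-eruptions as a
# roughly constant-rate process and apply a small conditional extinction
# probability, reflecting that nontechnological ancestors survived many
# such eruptions.

rate_per_year = 1 / 50_000   # assumed: roughly one super-eruption per 50,000 years
years = 100                  # this century

p_eruption = 1 - (1 - rate_per_year) ** years           # ~0.2%
p_extinction_given_eruption = 0.01                       # assumed, arguably still generous
p_extinction = p_eruption * p_extinction_given_eruption

print(f"P(super-eruption this century) ~ {p_eruption:.2%}")      # ~0.20%
print(f"P(extinction from it)          ~ {p_extinction:.4%}")    # ~0.002%, well under 1/10,000
```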
I think there are whole categories of activity that are not being tried by the broader world but that people focused on the problem attend to, with big impacts in both bio and AI. That work has its own diminishing-returns curve.
The thing to watch is whether the media attention translates into action, with more than a few hundred people working on the problem as such rather than getting distracted, and with governments prioritizing it even when it conflicts with competing goals (like racing to the precipice). One might have thought Covid-19 meant that GCBR (global catastrophic biological risk) pandemics would stop being neglected, but that doesn't seem right. The Biden administration has asked for Congressional approval of a pretty good pandemic prevention bill (very similar to what EAs have suggested), but it has been rejected because it's still seen as a low priority. And engineered pandemics remain off the radar, with not much improvement as a result of a recent massive pandemic.
AI safety (AIS) has always had outsized media coverage relative to the number of people actually doing something about it, and that may continue.
I actually do, every so often, go over the talks from the past several EAGs on YouTube, and find that this works better. Some important additional benefits are being able to turn on speedup and subtitles, being able to skip forward or bail more easily if a talk turns out to be bad, and not being blocked from watching two good talks that were scheduled simultaneously.
In contrast, a lot of people really love in-person meetings compared to online video or phone.
I disagree with the idea that short AI timelines are not investable (although I agree interest rates are a bad and lagging indicator compared to AI stocks). People foreseeing increased expectations of AI sales as a result of scaling laws, shortish AI timelines, and the eventual magnitude of success have already made a lot of money investing in Nvidia, DeepMind, and OpenAI. Incremental progress increases those expectations, and they can increase even in worlds where AGI winds up killing or expropriating all investors, so long as enough investors are expected to think ownership will continue to matter. In practice I know lots of investors expecting near-term TAI who are betting on it (in AI stocks, not interest rates, because the returns are better). They are also more attracted to cheap 30-year mortgages and similar sources of mild, cheap leverage. They put weight on worlds where society is not completely overturned and property rights matter after AGI, as well as during an AGI transition (e.g. a coalition of governments wanting to build AGI is more likely to succeed earlier and more safely with more compute and talent available to it, so it has reason to make credible promises that those who provide such resources will actually be compensated post-AGI; there is also the philanthropic value of being able to donate such resources).
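A toy expected-value calculation may make the logic clearer (all of the probabilities and returns below are made up for illustration, not anyone's actual estimates):

```python
# Toy illustration with made-up numbers: even with a substantial probability
# that AGI expropriates investors, holding AI stocks can have positive expected
# value as long as rising expectations of AI sales drive large revaluations in
# the worlds where ownership continues to matter.

p_expropriated = 0.4   # assumed: worlds where ownership stops mattering
p_boom = 0.4           # assumed: scaling success drives a large revaluation
p_flat = 0.2           # everything else

ret_expropriated = -1.0   # the stake becomes worthless to its holder
ret_boom = 4.0            # assumed 5x revaluation, i.e. +400%
ret_flat = 0.0

expected_return = (p_expropriated * ret_expropriated
                   + p_boom * ret_boom
                   + p_flat * ret_flat)
print(f"Expected return: {expected_return:+.0%}")   # +120% in this toy example
```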
And at the object level, from reading statements from investors and talking to them, investors weighted by their trading in AI stocks (and overwhelmingly so for the far larger bond market that sets interest rates) largely don't have short AI timelines (held with enough confidence to be willing to invest on) or expect explosive growth in AI capabilities. There are investors like Cathie Wood who do, with tens or hundreds of billions of dollars of capital, but they are few enough relative to the investment opportunities available that they are not setting, e.g., prices for the semiconductor industry. I don't see the point of indirect arguments from interest rates for the possibility that investors, or the market as a whole, could believe in AGI soon but only in versions where owning the AI chips or AI developers won't pay off, when at the object level that possibility is known to be false.
If you haven't read this piece by Ajeya Cotra, "Without specific countermeasures, the easiest path to transformative AI likely leads to AI takeover," I would highly recommend it. Some of the posts on AI alignment here (aimed at a general audience) might also be helpful.
Well, Musk was the richest, and he notably pulled out; after that the money seems mostly not to have manifested. I haven't seen a public breakdown of the commitments those sorts of statements were based on.
Alone and directly (not as a contributing factor to something else later), far enough below 0.1% that I evaluate nuclear interventions based mainly on their casualties and disruption, not extinction. I would support them (and have) on the same kind of metric GiveWell uses, not on extinction risk.
In the event of an all-out WMD war (including one with a rogue AGI as a belligerent) that leads to extinction, nukes could be a contributing factor combined with bioweapons and AI (strategic WMD war raises the likelihood of multiple WMDs being used together).