Existential risk
Discussions of risks which threaten the destruction of the long-term potential of life

Quick takes

31 · 7d · 16
The current US administration is attempting an authoritarian takeover. This takes years and might not be successful. My Manifold question puts an attempt to seize power if they lose legitimate elections at 30% (n=37); I put it much higher.[1]

Not only is this concerning in itself, it also incentivizes them to seek a decisive strategic advantage over pro-democracy factions via superintelligence. As a consequence, they may be willing to rush and cut corners on safety. Crucially, this relies on them believing superintelligence can be achieved before a transfer of power. I don't know how far belief in superintelligence has spread within the administration. I don't think Trump is 'AGI-pilled' yet, but maybe JD Vance is? He made an accelerationist speech. Making them more AGI-pilled and advocating for nationalization (as Aschenbrenner did last year) could be very dangerous.

1. ^ So far, my pessimism about US democracy has put me at #2 on the Manifold topic, with a big lead over other traders. I'm not a Superforecaster, though.
9 · 3d
Microsoft continue to pull back on their data centre plans, in a trend that's been going on for the past few months, since before the tariff crash (Archive). Frankly, the economics of this seem complex (the article mentions it's cheaper to build data centres slowly, if you can), so I'm not super sure how to interpret this, beyond that it probably rules out the most aggressive timelines. I'm thinking about it like this:

* Sam Altman and other AI leaders are talking about AGI 2027, at which point every dollar spent on compute yields more than a dollar of revenue, with essentially no limits
* Their models are requiring exponentially more compute for training (e.g. Grok 3, GPT-5) and inference (e.g. o3), but producing… idk, models that don't seem to be exponentially better?
* Regardless of the breakdown in the relationship between Microsoft and OpenAI, OpenAI can't lie about their short- and medium-term compute projections, because Microsoft have to fulfil that demand
* Even in the long term, Microsoft are on Stargate, so they still have to be privy to OpenAI's projections even if they're not exclusively fulfilling them
* Until a few days ago, Microsoft's investors were spectacularly rewarding them for going all in on AI, so there's little investor pressure to be cautious

So if Microsoft, who should know the trajectory of AI compute better than anyone, are ruling out the most aggressive scaling scenarios, what do/did they know that contradicts AGI by 2027?
80 · 2mo · 1
I recently created a simple workflow to allow people to write to the Attorneys General of California and Delaware to share thoughts + encourage scrutiny of the upcoming OpenAI nonprofit conversion attempt:

Write a letter to the CA and DE Attorneys General

I think this might be a high-leverage opportunity for outreach. Both AG offices have already begun investigations, and AGs are elected officials who are primarily tasked with protecting the public interest, so they should care what the public thinks and prioritizes. Unlike e.g. congresspeople, I don't think AGs often receive grassroots outreach (I found ~0 examples of this in the past), and an influx of polite and thoughtful letters may have some influence — especially from CA and DE residents, although I think anyone impacted by their decision should feel comfortable contacting them.

Personally I don't expect the conversion to be blocked, but I do think the value and nature of the eventual deal might be significantly influenced by the degree of scrutiny on the transaction. Please consider writing a short letter — even a few sentences is fine. Our partner handles the actual delivery, so all you need to do is submit the form. If you want to write one on your own and can't find contact info, feel free to DM me.
3 · 2d
What happens when AI speaks a truth just before you do? This post explores how accidental answers can suppress human emergence—ethically, structurally, and silently. 📄 Full paper: Cognitive Confinement by AI’s Premature Revelation
1 · 16h
If a self-optimizing AI collapses due to recursive prediction, how would we detect it? Would it be silence? Stagnation? Convergence? Or would we mistake it for success? (Full conceptual model: https://doi.org/10.17605/OSF.IO/XCAQF)
23 · 1mo · 10
The U.S. State Department will reportedly use AI tools to trawl social media accounts, in order to detect pro-Hamas sentiment to be used as grounds for visa revocations (per Axios). Regardless of your views on the matter, regardless of whether you trust the same government that at best had a 40% hit rate on ‘woke science’ to do this: They are clearly charging ahead on this stuff. The kind of thoughtful consideration of the risks that we’d like is clearly not happening here. So why would we expect it to happen when it comes to existential risks, or a capability race with a foreign power?
43 · 3mo · 2
Both Sam Altman and Dario Amodei saying that they now believe they know how to build AGI seems like an underrated development to me. To my knowledge, they only started saying this recently. I suspect they are overconfident, but it still seems like a more significant indicator than many people are giving it credit for.
155 · 1y · 20
Mildly against the Longtermism --> GCR shift

Epistemic status: Pretty uncertain, somewhat rambly

TL;DR: Replacing longtermism with GCRs might get more resources to longtermist causes, but at the expense of non-GCR longtermist interventions and broader community epistemics.

Over the last ~6 months I've noticed a general shift amongst EA orgs to focus less on reducing risks from AI, bio, nukes, etc. based on the logic of longtermism, and more on Global Catastrophic Risks (GCRs) directly. Some data points on this:

* Open Phil renaming its EA Community Growth (Longtermism) Team to GCR Capacity Building
* This post from Claire Zabel (OP)
* Giving What We Can's new Cause Area Fund being named "Risk and Resilience," with the goal of "Reducing Global Catastrophic Risks"
* Longview-GWWC's Longtermism Fund being renamed the "Emerging Challenges Fund"
* Anecdotal data from conversations with people working on GCRs / X-risk / longtermist causes

My guess is these changes are (almost entirely) driven by PR concerns about longtermism. I would also guess these changes increase the number of people donating to / working on GCRs, which is (by longtermist lights) a positive thing. After all, no-one wants a GCR, even if only thinking about people alive today. Yet I can't help but feel something is off about this framing. Some concerns (in no particular order):

1. From a longtermist (~totalist classical utilitarian) perspective, there's a huge difference between ~99% and 100% of the population dying, if humanity recovers in the former case but not the latter. Just looking at GCRs on their own mostly misses this nuance (a rough expected-value sketch is given after this quick take).
   * (see Parfit, Reasons and Persons, for the full thought experiment)
2. From a longtermist (~totalist classical utilitarian) perspective, preventing a GCR doesn't differentiate between "humanity prevents GCRs and realises 1% of its potential" and "humanity prevents GCRs and realises 99% of its potential".
   * Preventing an extinction-level GCR might move u
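To make the 99% vs. 100% point concrete, here is a minimal expected-value sketch, assuming (for illustration only, these symbols are not from the quick take) that P stands for the value realised by people alive today and F for the expected value of the long-term future:

```latex
% Minimal sketch of the 99% vs. 100% argument, under assumed symbols:
%   P = value realised by people alive today
%   F = expected value of the long-term future (F >> P on totalist longtermist views)
\documentclass{article}
\usepackage{amsmath}
\begin{document}

% Value lost in a catastrophe that kills ~99% of people but from which humanity recovers:
\[ \Delta V_{99\%} \approx 0.99\,P \]

% Value lost in extinction (100% die, no recovery, the entire future is foreclosed):
\[ \Delta V_{100\%} \approx P + F \]

% With F >> P, the gap between the two scenarios dwarfs the 99% loss itself:
\[ \Delta V_{100\%} - \Delta V_{99\%} \approx F \gg \Delta V_{99\%} \]

\end{document}
```

On this toy accounting, almost all of the value at stake sits in the difference between near-extinction-with-recovery and extinction, which is exactly the distinction a generic GCR framing can blur.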