In my opinion, we have known for at least two years that the risk of AI catastrophe is too high and too close. At that point, it's time to work on solutions (in my case, as leader of PauseAI US, advocating through protests and lobbying for an indefinite pause on frontier model development until it's safe to proceed). 

Not every policy proposal is as robust to timeline length as PauseAI. It can be totally worth it to make a quality timeline estimate, both to inform your own work and as a tool for outreach (like ai-2027.com). But most of these timeline updates simply are not decision-relevant if you have a strong intervention. If your intervention is so fragile and contingent that every little update to timeline forecasts matters, it’s probably too finicky to be working on in the first place. 

I think people are psychologically drawn to discussing timelines all the time because they want to have the “right” answer and because it feels like a game, not because the day and the hour really matter for… what are these timelines even leading up to anymore? They used to lead up to “AGI”, but (in my opinion) we’re basically already there. The point of no return? Some level of superintelligence? It’s telling that they are almost never measured in terms of actions we can take or opportunities for intervention.

Indeed, the purpose of timelines is not really to help us act. I see people make bad updates on them all the time. I see people give up projects that have a chance of working but might not reach their peak returns until 2029, and instead spend a few precious months looking for a faster project that is, not surprisingly, also worse (or else why weren’t they doing it already?) and probably even lower EV over the same time period. For some reason, people tend to think their work has to be completed by the “end” of the (median) timeline or else it won’t count, rather than seeing their impact as the integral over the entire project within the median timeline estimate, or taking into account the worlds where timelines run longer than the median. I think these people would have done better knowing 90% less of the timeline discourse than they did. 
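
To make that concrete, here’s a minimal sketch with made-up numbers (a toy lognormal over arrival dates and a project that pays off gradually until 2029 -- none of it is anyone’s real forecast or model). It just shows how much “only count work finished before the median” and “integrate impact over the whole distribution” can disagree:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy distribution over the year it becomes "too late" (made-up numbers, not a real forecast).
arrival_year = 2025 + rng.lognormal(mean=1.0, sigma=0.6, size=100_000)
median_year = np.median(arrival_year)  # lands around 2027-2028 with these toy numbers

def impact_by(year, start=2025, peak=2029):
    """A project started in 2025 that accrues impact steadily until it peaks in 2029."""
    return np.clip((year - start) / (peak - start), 0.0, 1.0)

# The mistake: only count the project if it is "done" before the median timeline.
value_with_median_cutoff = impact_by(median_year) if median_year >= 2029 else 0.0

# The alternative: integrate the impact accrued by the arrival date over all the worlds.
expected_value = impact_by(arrival_year).mean()

print(f"median arrival year:             {median_year:.1f}")
print(f"value under the 'median cutoff': {value_with_median_cutoff:.2f}")
print(f"expected value over all worlds:  {expected_value:.2f}")
```

Under these toy numbers, the cutoff heuristic values the project at zero because the median date lands before 2029, while the integral says it still captures a large share of its possible value: partial progress counts in the fast worlds, and the full payoff arrives in the slower ones.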

I don’t think AI timelines pay rent for all the oxygen they take up, and they can be super scary to new people who want to get involved without really helping them act. Maybe I’m wrong and you find lots of action-relevant insights there. But if timeline updates frequently change your actions, your intervention may not be robust enough to the assumptions that go into the timeline, or to its uncertainty, anyway. In which case, you should probably pursue a more robust intervention. Like, if you are changing your strategy every time a new model drops with new capabilities that advance timelines, you clearly need to take a step back and account for more and more powerful models in your intervention in the first place. It looks like you shouldn’t be playing it so close to the trend line. PauseAI, for example, is an ask that works under a wide variety of scenarios, including AI development going faster than we thought, because it is not contingent on the exact level of development we have reached since we passed GPT-4.

Be part of the solution. Pick a timeline-robust intervention. Talk less about timelines and more about calling your Representatives.

Comments

There's a grain that I agree with here, which is that people excessively plan around a median year for AGI rather than a distribution for various events, and that planning around that kind of distribution leads to more robust and high-expected-value actions (and perhaps less angst).
However, I strongly disagree with the idea that we already know "what we need." Off the top of my head, here are several ways in which narrowing the error bars on timelines -- which I'll operationalize as "the distribution of the most important decisions with respect to building transformative AI" -- would be incredibly useful:

  • To what extent will these decisions be made by the current US administration, or by people governed by the current administration? This affects the political strategy everyone -- including, I propose, PauseAI -- should adopt.
  • To what extent will the people making the most important AI decisions remember stuff people said in 2025? This is very important for the relative usefulness of public communications versus research, capacity-building, etc.
  • Are these decisions soon enough that the costs of being "out of the action" outweigh the longer-term benefits of e.g. going to grad school, developing technical expertise, etc? Clearly relevant for lots of individuals who want to make a big impact.
  • When should philanthropists spend their resources? As I and others have written, there are several considerations that point towards spending later; these are weakened a lot if the key decisions are in the next few years.
  • To what extent will the most transformative models be technically similar to the ones we have today? That answer determines the value of technical safety research.

I also strongly disagree with the framing that the important thing is us knowing what we know. Yes, people who have been immersed in AI content for years often believe that very scary and/or awesome AI capabilities are coming within the decade. But most people, including most of the people who might take the most important actions, are not in this category and do not share this view (or at least don't seem to have internalized it). Work that provides an empirical grounding for AI forecasts has already been very useful in bringing attention to AGI and its risks from a broader set of people, including in governments, who would otherwise be focused on any one of the million other problems in the world.

I agree that not everyone already knows what they need to know. Our crux is probably "who needs to get it and how will they learn it?" I think we more than have the evidence to teach the public and to set an example of what it looks like to know. I think you think we need to make a very respectable and detailed case to convince elites. I think you can take multiple routes to influencing elites and that they will be more receptive when the reality of AI risk is a more popular view. I don't think timelines are a great tool for convincing either of these groups, because they create such a sense of panic and there's such an invitation to quibble with the forecasts instead of facing the thrust of the evidence. 

I definitely agree there are plenty of ways we should reach elites and non-elites alike that aren't statistical models of timelines, and insofar as the resources going towards timeline models (in terms of talent, funding, bandwidth) are fungible with the resources going towards other things, maybe I agree that more effort should be going towards the other things (but I'm not sure -- I really think the timeline models have been useful for our community's strategy and for informing other audiences).

But also, they only sometimes create a sense of panic; I could see specificity being helpful for people getting out of the mode of "it's vaguely inevitable, nothing to be done, just gotta hope it all works out." (Notably the timeline models sometimes imply longer timelines than the vibes coming out of the AI companies and Bay Area house parties.)

I feel subtweeted :p As far as I can tell, most of the wider world isn't aware of the arguments for shorter timelines, and my pieces are aimed at them, rather than people already in the bubble.

That said, I do think there was a significant shortening of timelines from 2022 to 2024, and many people in EA should reassess whether their plans still make sense in light of that (e.g. general EA movement building looks less attractive relative to direct AI work compared to before).

Beyond that, I agree people shouldn't be making month-to-month adjustments to their plans based on timelines, and should try to look for robust interventions.

I also agree many people should be on paths that build their leverage into the 2030s, even if there's a chance it's 'too late'. It's possible to get ~10x more leverage by investing in career capital / org building / movement building, and that can easily offset. I'll try to get this message across in the new 80k AI guide.

Also agree for strategy it's usually better to discuss specific capabilities and specific transformative effects you're concerned about, rather than 'AGI' in general. (I wrote about AGI because it's the most commonly used term outside of EA and was aiming to reach new people.)

Honestly, I wasn't thinking of you! People planning their individual careers is one of the better reasons to engage with timelines imo. It's more the selection of interventions where I think the conversation is moot, not where and how individuals can connect to those interventions. 

The hypothetical example of people abandoning projects that culminate in 2029 was actually inspired by PauseAI -- there is a contingent of people who think protesting and IRL organizing take too long and that we should just be trying to go viral on social media. I think the IRL protests and community are what make PauseAI a real force, and we have greater impact, including by drawing social media attention, all along that path -- not just once our protests are big. 

That said, I do see a lot of people making the mistakes I mentioned about their career paths. I've had a number of people looking for career advice through PauseAI say things like, "well, obviously getting a PhD is ruled out", as if there is nothing they can do to have impact until they have the PhD. I think being a PhD student can be a great source of authority and a flexible job (with at least some income, often) where you have time to organize a willing population of students! (That's what I did with EA at Harvard.) The mistake here isn't even really a timelines issue; it's not modeling the impact distribution along a career path well. Seems like you've been covering this: 

>I also agree many people should be on paths that build their leverage into the 2030s, even if there's a chance it's 'too late'. It's possible to get ~10x more leverage by investing in career capital / org building / movement building, and that can easily offset. I'll try to get this message across in the new 80k AI guide
 

Thanks Holly. I agree that fixating on just trying to answer the "AI timelines" question won't be productive for most people. Though, we all need to come to terms with it somehow. I like your callout for "timeline-robust interventions". I think that's a very important point. Though I'm not sure that implies calling your representatives.

I disagree that "we know what we need to know". To me, the proper conversation about timelines isn't just "when AGI", but rather, "at what times will a number of things happen", including various stages of post-AGI technology, and AI's dynamics with the world as a whole. It incorporates questions like "what kinds of AIs will be present".

This allows us to make more prudent interventions: what technical AI safety and AI governance work we need depends on the nature of the AI that will be built. The important AI to address isn't just orthogonality-thesis-driven paperclip maximizers.

Seeing the way AI is emerging, I think it's clear that some classic AI safety challenges are not as relevant anymore. For example, it seems to me that "value learning" is looking much easier than classic AI safety advocates thought. But versions of many classic AI safety challenges are still relevant. The same issue remains: if we can't verify that something vastly more intelligent than us is acting in our interests, then we are in peril.

I don't think it would be right for everyone to be occupied with these AI timeline and AI scenario questions, but I think they deserve very strong efforts. If you are trying to solve a problem, the most important thing to get right is what problem you're trying to solve. And what is the problem of AI safety? That depends on what kind of AI will be present in the world and what humans will be doing with it.

I agree that a better understanding of progress and of which problems are more or less challenging is valuable, but it seems clear that timelines get far more attention than needed in places where they aren't decision-relevant.

In what places do timelines get a lot of attention despite not being very decision-relevant?

AI Safety Twitter, this Forum, Bay Area parties… 

I thought that David meant "intellectual areas" as opposed to physical/digital venues. Seems like lots of discussion in those venues isn't particularly decision-relevant, and timelines aren't really an unusually decision-irrelevant case.

I thought you were taking issue with the claim they were overdiscussed and asking where.

The areas where timelines are overdiscussed are numerous. Policy and technical safety career advice are the biggest ime.

Yeah, I was mostly thinking about policy - if we're facing 90% unemployment, or existential risk, and need policy solutions, the difference between 5 and 7 years is immaterial. (There are important political differences, but the needed policies are identical.)

I disagree that "we know what we need to know". To me, the proper conversation about timelines isn't just "when AGI", but rather, "at what times will a number of things happen", including various stages of post-AGI technology, and AI's dynamics with the world as a whole. It incorporates questions like "what kinds of AIs will be present".

See I think forecasts like that don't really give us useful enough information about how to plan for future contingencies. I think we are deluded if we think we can make important moves based, for example, on the kinds of AIs that we project could be present in the future. The actual state of our knowledge is very coarse and we need to act accordingly. I really think the only prospective chance for impact is to do things that slow development and create real human and democratic oversight, and we have almost no chance of nudging the trajectory of development in a technical direction that works for us from here. (Maybe we will after we've secured the time and will to do so!)

I agree.

A big reason why I think timeline forecasting gets too much attention—which you alluded to—is that no matter how much forecasting you do, you'll never have that much confidence about how AI is going to go. And certain plans only work under a narrow set of future scenarios. You need to have a plan that works even if your forecast is wrong, because there will always be a good chance that your forecast is wrong.

Slowing down AI has downsides, but it seems to me that slowing down AI is the plan that works under the largest number of future scenarios. Particularly an international treaty to globally slow down AI, so that all developers slow down simultaneously. That seems hard to achieve, but I think peaceful protests increase the chance of success by cultivating political will for a pause/slowdown treaty.

Quick thoughts:

  1. I've previously been frustrated that "AI forecasting" has focused heavily on "when will AGI happen" as opposed to other potential strategic questions. I think there are many interesting strategic questions. That said, I think in the last 1-2 years things have improved here. I've been impressed by a lot of the recent work by Epoch, for instance.
  2. My guess is that a lot of our community is already convinced. But I don't think we're the target market for much of this.
  3. Interestingly, OP really does not seem to be convinced. Or rather, they have a few employees who are convinced of short timelines, but their broader spending really doesn't seem very AGI-pilled to me (tons of non-AI spending, for instance). I'm happy for OP to spend more money investigating these questions, to inform whether OP should spend more in this area in the future.
  4. It sounds like you have some very specific people in mind, judging from parts like "If your intervention is so fragile and contingent that every little update to timeline forecasts matters, it’s probably too finicky to be working on in the first place." I'm really not sure who you are referring to here.
  5. I'd agree that the day-to-day of "what AI came out today" gets too much attention, but this doesn't seem like an "AI timelines" thing to me, more like an over-prioritization of recent news.
  6. On ai-2027.com: I see this as dramatically more than answering "when will AGI happen." It's trying to be very precise about what a short-timeline world would look like, and it contains a lot of relevant strategic questions/discussions. 

Yeah I tried to exempt AI 2027 from my critique. They are doing a lot more, and well.

This seems to complement @nostalgebraist's complaint that much of the work on AI timelines (Bio Anchors, AI 2027) relies on a few load-bearing assumptions (e.g. the permanence of Moore's law, the possibility of a software intelligence explosion) and then does a lot of work crunching statistics and Fermi estimates to "predict" an AGI date, when really the end result is overdetermined by those initial assumptions and not affected very much by changing the secondary estimates. It is thus largely a waste of time to focus on improving those estimates (a toy sketch after the lists below illustrates the point) when there is a lot more research to be done on the actual load-bearing assumptions:

  • Is Moore's law going to continue indefinitely?
  • Is software intelligence explosion plausible? (If yes, does it require concentration of compute?)
  • Is technical alignment easy?
  • ...

These are the actual cruxes for the most controversial AI governance questions, like:

  • How much should we worry about regulatory capture?
  • Is it more important to reduce the rate of capabilities growth or for the US to beat China?
  • Should base models be open-sourced?
  • How much can friction when interacting with the real world (e.g. time needed to build factories and perform experiments (poke @titotal), regulatory red tape, labor unions, etc.) prevent AGI?
  • How continuous are "short-term" AI ethics efforts (FAccT, technological unemployment, military uses) with "long-term" AI safety?
  • How important is it to enhance collaboration between US, European and Chinese safety organizations?
  • Should EAs work with, for, or against frontier AI labs?
  • ...
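
As a toy illustration of the overdetermination point (all the numbers below are made up, not anyone's actual model): flip the one binary assumption and the forecast moves by decades; jiggle all the secondary estimates and it moves by a couple of years.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# The load-bearing binary assumption, with a toy 50/50 prior:
# is a software intelligence explosion possible?
explosion = rng.random(n) < 0.5

# All the carefully crunched "secondary" parameters, lumped into a few years of noise.
secondary = rng.normal(loc=0.0, scale=2.0, size=n)

# Toy model: ~2032 with an explosion, ~2060 without one, plus the secondary adjustments.
agi_year = np.where(explosion, 2032.0, 2060.0) + secondary

gap_from_assumption = agi_year[~explosion].mean() - agi_year[explosion].mean()
spread_from_secondaries = agi_year[explosion].std()

print(f"shift from flipping the assumption:  ~{gap_from_assumption:.0f} years")
print(f"spread from all secondary estimates: ~{spread_from_secondaries:.1f} years")
```

With numbers like these, refining the secondary estimates barely moves the headline date; nearly all of the action is in the assumption itself, which is the complaint.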

I'm not directly working on an AI pause, but if I were, I would think that timelines were very strategically relevant:

  • How soon will we see warning shots, and will they arrive before society is disempowered?
  • What kinds of preparation should I do before we see a warning shot?
  • Will progress be slow enough that we get frog-boiled?
  • How quickly should we aim to spend our limited human capital and financial resources?
  • How "in the water" will (or could) AI be over the next few years?

Discussion of AI timelines is helpful for answering many of the questions above, as are the final forecasts themselves. I suspect that working on AI pauses is much less robust to timelines than you think it is, and that discussion of timelines is much more helpful than you think for developing models of AI development and risk (and those models are highly decision-relevant).

Strong agree that people talk about AI timelines way too much. I think that the level of e.g. 2 vs 5 vs 20 vs way longer is genuinely decision-relevant, but that anything much more fine-grained than that often isn't. And it's so uncertain and the evidence is so weak that I think it's difficult to do much more than putting decent probability on all of those categories and shifting your weightings a bit.

I am having trouble understanding why AI safety people are even trying to convince the general public that timelines are short. 

If you manage to convince an investor that timelines are very short without simultaneously convincing them to care a lot about x-risk, I feel like their immediate response will be to rush to invest briefcases full of cash into the AI race, thus helping make timelines shorter and more dangerous. 

Also, if you make a bold prediction about short timelines and turn out to be wrong, won't people stop taking you seriously the next time around? 

If you manage to convince an investor that timelines are very short without simultaneously convincing them to care a lot about x-risk, I feel like their immediate response will be to rush to invest briefcases full of cash into the AI race, thus helping make timelines shorter and more dangerous. 

I'm the corresponding author for a paper that Holly is maybe subtweeting, and I was worried about this before publication, but I don't really feel like those fears were realized.

Firstly, I don't think there are actually very many people who sincerely think that timelines are short but aren't scared by that. I think what you are referring to is people who think "timelines are short" means something like "AI companies will 100x their revenue in the next five years", not "AI companies will be capable of instituting a global totalitarian state in the next five years." There are some people who believe the latter and aren't bothered by it but in my experience they are pretty rare.

Secondly, when VCs get the "AI companies will 100x their revenue in the next five years" version of short timelines they seem to want to invest into LLM-wrapper startups, which makes sense because almost all VC firms lack the AUM to invest in the big labs.[1] I think there are plausible ways in which this makes timelines shorter and more dangerous but it seems notably different from investing in the big labs.[2]

Overall, my experience has mostly been that getting people to take short timelines seriously is very close to synonymous with getting them to care about AI risk.

[1] Caveat that ~everyone has the AUM to invest in publicly traded stocks. I didn't notice any bounce in share price for e.g. NVDA when we published and would be kind of surprised if there was a meaningful effect, but hard to say.

[2] Of course, there's probably some selection bias in terms of who reaches out to me. Masayoshi Son probably feels like he has better info than what I could publish, but by that same token me publishing stuff doesn't cause much harm.

Yes, I agree. I think what we need to spend our effort on is convincing people that AI development is dangerous and needs to be handled very cautiously if at all, not that superintelligence is imminent and there's NO TIME. I don't think the exact level of urgency or the exact level of risk matters much once p(doom) is above something like 5%. The thing we need to convince people of is how to handle the risk. 

A lot of AI Safety messages expect the audience to fill in most of the interpretive details -- "As you can see, this forecast is very well-researched. ASI is coming. You take it from here." -- when actually what they need to know is what those claims mean for them and what they can do.

I think this is an important tension that's been felt for a while. I believe there's been discussion on this at least 10 years back. For a while, few people were "allowed"[1] to publicly promote AI safety issues, because it was so easy to mess things up. 

I'd flag that there isn't much work actively marketing information about there being short timelines. There's research here, but generally EAs aren't excited to heavily market this research broadly. I think there's a tricky line between "doing useful research in ways that are transparent" and "not raising alarm in ways that could be damaging."

Generally, there is some marketing of AI-safety-focused content. For example, see Robert Miles or Rational Animations.

[1] As in, if someone wanted to host a big event on AI safety, and they weren't close to (and respected by) the MIRI cluster, they were often discouraged from this. 

Those are reasonable points, but I'm not sure they are enough to overcome the generally reasonable heuristic that dramatic events will go better if people involved anticipate them and have had a chance to think about them and plan responses beforehand, than if they take them by surprise. 

I don't think she's saying that people shouldn't think and plan responses; I think it's more that endless navel-gazing about timelines and rapidly shifting responses isn't the most useful response.

I think I directionally agree!

One example of timelines feeling very decision-relevant: for people looking to specialise in partisan influence, the larger your credence in TAI/ASI by Jan 2029, the more you might want to specialise in Republicans; whereas with longer timelines, Democrats have on priors a ~50% chance of controlling the presidency from 2029, so specialising in Dem political comms could make more sense.

Nice points, Holly! However, I think they only apply to small disagreements about AI timelines. I liked Epoch After Hours' podcast episode Is it 3 Years, or 3 Decades Away? Disagreements on AGI Timelines by Ege Erdil and Matthew Barnett (linkpost). Ege has much longer timelines than the ones you seem to endorse (see the text I bolded below), and is well informed. He is the first author of the paper about Epoch AI's compute-centric model of AI automation, which was announced on 21 March 2025.

Ege

Yeah, I mean, I guess one way to try to quantify this is when you expect, I don’t know, we often talk about big acceleration, economic growth. One way to quantify is when do you expect, maybe US GDP growth, maybe global GDP growth to be faster than 5% per year for a couple of years in a row. Maybe that’s one way to think about it. And then you can think about what is your median timeline until that happens. I think if you think about like that, I would maybe say more than 30 years or something. Maybe a bit less than 40 years by this point. So 35. Yeah. And I’m not sure, but I think you [Matthew Barnett] might say like 15 or 20 years.

Relatedly, in 2023 the median expert thought the median date of full automation would be 2073.

[Figure: CDF from the ESPAI survey showing the median and central 50% of expert responses.]

I remain open to betting up to $10k against short AI timelines. I understand this does not work for people who think doom or utopia is certain soon after AGI, but I would say that is a super extreme view; it also reminds me of religious views that are unbettable or unfalsifiable. Banks may offer loans on better terms, but as long as my bet is beneficial, one should take bank loans until the marginal loan is neutral, and then also take my bet.

(I don't particularly endorse any timeline, btw, partly bc I don't think it's a decision-relevant question for me.)

Nice post and I fully agree.

Unfortunately it all goes back to inadequate math education and effective disinformation campaigns. Whether it was tobacco or climate change, those who opposed change and regulation have always focused on uncertainty as a reason not to act, or to delay. And they have succeeded in convincing the vast majority of the public. The mentality is: "even the scientists don't agree on whether we'll have a global catastrophe or total human extinction - so until we're sure which one it is, let's just keep using fossil fuels and pumping out carbon dioxide." 

With AI, I liken most of humanity's mentality to that of a lazy father watching a football game who needs a soda. And there is a store just across a busy highway from his house. He could go get the soda, but he might miss an important score. So instead he sends his 7-year-old son to the store. Because, realistically, there's a good chance that his son won't get hit by a car, while if he goes himself, it is certain that he'll miss a part of the game. 

No parent would think like that. But when it comes to AI, that's how we think. 

And timelines are just the nth excuse to keep thinking that way. "We don't need to act yet, it mightn't happen for 5 years - some people say even 10 years."

The challenge for us is to somehow wake people up before it's too late, despite the fact that the people in the best position to pause are the most gung-ho of all, whether they are CEOs or the US president, because they personally have everything to gain from accelerating AI, even if it ends up screwing everyone else (and let's be realistic, they don't really care about anyone else). 

Upvoted. I think this is a great argument. Timelines are a way overrated thing to be incessantly talking about, and they are often a distraction from what can be done.
