All of Benjamin_Todd's Comments + Replies

Minor, but I actually think DeepSeek was pretty on trend for algorithmic efficiency (as explained in the post). The main surprise was that a Chinese company was near the forefront of algorithmic efficiency (though here, several months before, I suggest that the Chinese are close to the frontier there).

It's the first chapter in a new guide about how to help make AI go well (aimed at new audiences).

I think it's generally important for people who want to help to understand the strategic picture.

Plus in my experience the thing most likely to make people take AI risk more seriously is believing that powerful AI might happen soon. 

I appreciate that talking about this could also wake more people up to AGI, but I expect the guide overall will proportionally boost the safety talent pool a lot more than the pool of people speeding up AI.

(And long term I t... (read more)

Yes I basically agree that's the biggest limiting factor at this point.

However, a better base model can improve agency via e.g. better perception (which is still weak).

And although reasoning models are good at science and math, they still make dumb mistakes reasoning about other domains, and very high reliability is needed for agents. So I expect better reasoning models also helps with agency quite a bit.

I feel subtweeted :p As far as I can tell, most of the wider world isn't aware of the arguments for shorter timelines, and my pieces are aimed at them, rather than people already in the bubble.

That said, I do think there was a significant shortening of timelines from 2022 to 2024, and many people in EA should reassess whether their plans still make sense in light of that (e.g. general EA movement building looks less attractive relative to direct AI work compared to before).

Beyond that, I agree people shouldn't be making month-to-month adjustments to their ... (read more)

7
Holly Elmore ⏸️ 🔸
Honestly, I wasn't thinking of you! People planning their individual careers is one of the better reasons to engage with timelines imo. It's more the selection of interventions where I think the conversation is moot, not where and how individuals can connect to those interventions.

The hypothetical example of people abandoning projects that culminate in 2029 was actually inspired by PauseAI-- there is a contingent of people who think protesting and irl organizing takes too long and that we should just be trying to go viral on social media. I think the irl protests and community is what make PauseAI a real force and we have greater impact, including by drawing social media attention, all along that path-- not just once our protests are big.

That said, I do see a lot of people making the mistakes I mentioned about their career paths. I've had a number of people looking for career advice through PauseAI say things like, "well, obviously getting a PhD is ruled out", as if there is nothing they can do to have impact until they have the PhD. I think being a PhD student can be a great source of authority and a flexible job (with at least some income, often) where you have time to organize a willing population of students! (That's what I did with EA at Harvard.) The mistake here isn't even really a timelines issue; it's not modeling the impact distribution along a career path well. Seems like you've been covering this:

> I also agree many people should be on paths that build their leverage into the 2030s, even if there's a chance it's 'too late'. It's possible to get ~10x more leverage by investing in career capital / org building / movement building, and that can easily offset. I'll try to get this message across in the new 80k AI guide.

Apparently there's a preprint showing Gemini 2.5 gets 20% on the Olympiad questions, which would be in line with the o3 result.

I wouldn't totally defer to them, but I wouldn't totally ignore them either. (And this is mostly beside the point, since overall I'm critical of using their forecasts and my argument doesn't rest on this.)

I only came across this paper in the last few days! (The post you link to is from 5th April; my article was first published 21st March.) 

I want to see more commentary on the paper before deciding what to do about it. My current understanding:

o3-mini seems to be a lot worse than o3 – it only got ~10% on Frontier Math, similar to o1. (Claude Sonnet 3.7 only gets ~3%.) 

So the results actually seem consistent with Frontier Math, except they didn't test o3, which is significantly ahead of other models.

The other factor seems to be that they evaluated the quality of the proofs rather than the ability to get a correct numerical answer.

I'm not sure data leakage is a big part of the difference.


Here we're also talking about capabilities rather than harm. If you want to find out how fast cars will be in 5 years, asking the auto industry seems like a reasonable move.

4
NickLaing
I wouldn't consider car company CEOs a serious data point here for the same reasons. I agree it seems a reasonable move but I don't think it actually is. Asking workers and technicians within companies, especially off the record, though, is something I would consider a useful data point, although still biased of course. I would have thought there might even be data on the accuracy of industry head predictions, because there would be a lot of news sources to look back on which could now be checked for accuracy. Might have a look.

Is it? Wouldn't you expect the auto industry to have incentives to exaggerate their possible future accomplishments in developing faster cars because it has a direct influence on how much governments will prioritise it as a means of transport, subsidise R&D, etc.? 

So, OpenAI is telling the truth when it says AGI will come soon and lying when it says AGI will not come soon?

I don't especially trust OpenAI's statements on either front.

The framing of the piece is "the companies are making these claims, let's dig into the evidence for ourselves" not "let's believe the companies".

(I think the companies are most worth listening to when it comes to specific capabilities that will arrive in the next 2-3 years.)

I agree those two statements don't obviously seem inconsistent, though independently it seems to me Dario probably has been too optimistic historically.

I discuss expert views here. I don't put much weight on the superforecaster estimates you mention at this point because they were made in 2022, before the dramatic shortening in timelines due to ChatGPT (let alone reasoning models).

They also (i) made compute forecasts that were very wrong, (ii) don't seem to know that much about AI, and (iii) were selected for expertise in forecasting near-term political events, which might not generalise very well to longer-term forecasting of a new technology.

I agree we should consider the forecast, but I think it's ultimately... (read more)

Thank you!

I am roughly in agreement with this post by an AI expert responding to the other (less good) short-timeline article going around.

This post just points out that the AI 2027 article is an attempt to flesh out a particular scenario, rather than an argument for short timelines, which the authors of AI 2027 would agree with.

I thought instead of critiquing the parts that I'm not an expert in, I might take a look at the part of this post that intersects with my field, when you mention material science discovery, and pour just a little bit of cold

... (read more)
2
titotal
As I said, I don't think your statement was wrong, but I want to give people a more accurate perception as to how AI is currently affecting scientific progress: it's very useful, but only in niches which align nicely with the strengths of neural networks. I do not think similar AI would produce similarly impressive results in what my team is doing, because we already have more ideas than we have the time and resources to execute on.  I can't really assess how much speedup we could get from a superintelligence, because superintelligences don't exist yet and may never exist. I do think that 3xing research output with AI in science is an easier proposition than building digital super-einstein, so I expect to see the former before the latter. 

I think Ege is one of the best proponents of longer timelines, and link to that episode in the article.

I don't put much stock in the forecast of AI researchers the graph is from. I see the skill of forecasting as very different from the skill of being a published AI researcher. A lot of their forecasts also seem inconsistent. A bit more discussion here: https://80000hours.org/2025/03/when-do-experts-expect-agi-to-arrive/

Financially, I'm already heavily exposed to short AI timelines via my investments.

5
Yarrow Bouchard 🔸
Then what was the point of quoting Sam Altman, Dario Amodei, and Demis Hassabis' timelines at the beginning of your article?

The section of the post "When do the 'experts' expect AGI to arrive?" suffers from a similar problem: downplaying expert opinion when it challenges the thesis and playing up expert opinion when it supports the thesis. What is the content and structure of this argument? It just feels like a restatement of your personal opinion.

I also wish people would stop citing Metaculus for anything. Metaculus is not a real prediction market. You can't make money on Metaculus. You might as well just survey people on r/singularity.

Over the next few years, I expect AI revenues to continue to increase 2-4x per year, like they have recently, which gets you to those kinds of numbers in 2027.

There won't be widespread automation; rather, AI will make money from a few key areas with few barriers, especially programming.

You could then reach an inflection point where AI starts to help with AI research. AI inference gets mostly devoted to that task for a while. Major progress is made, perhaps reaching AGI, without further external deployment.

Revenues would then explode after that point, but OpenAI ... (read more)

5
Yarrow Bouchard 🔸
So, OpenAI is telling the truth when it says AGI will come soon and lying when it says AGI will not come soon? Sam Altman’s most recent timeline is "thousands of days", which is so vague. 2,000 days (the minimum "thousands of days" could mean) is 5.5 years. 9,000 days (the point before you might think he would just say "ten thousand days") is 24.7 years. So, 5-25 years?

This is my understanding too – some crucial questions going forward:

  1. How useful are AIs that are mainly good at these verifiable tasks?
  2. How much does getting better at reasoning on these verifiable tasks generalise to other domains? (It seems like at least a bit e.g. o1 improved at law)
  3. How well will reinforcement learning work when applied at scale to areas with weaker reward signals?

Pretty sure o1 and Gemini have access to the internet.

The main way it's potentially misleading is that it's not a log plot (most benchmark results will look like exponentials on a linear scale) – however, I expect Deep Research would still seem above trend even if it was. I also think it's helpful to new readers to see some of the charts on linear scales, since in some ways it's more intuitive. 

9
Tim Hua
While you can use o1 and Gemini with internet access, I think they almost certainly evaluated it without such access (see the original paper here). I really really do not think you should put the plot there. It's like comparing two different students' performance except one of them has access to the internet. I think it's extremely misleading. If you want to illustrate progress you could just use the FrontierMath/GPQA results or even ARC-AGI.

Glad it's useful! I categorise RL on chain of thought as a type of post-training, rather than test time compute. (Sometimes people lump them together as both 'inference scaling', but I think that's confusing.) I agree RL opens up novel capabilities you can't get just from next token prediction on the internet.

For test time compute, benchmark accuracy improves roughly linearly with the log of compute: you need order-of-magnitude increases in compute to get linear increases in accuracy. It's similar to the pretraining scaling law.
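
As a rough illustration of that scaling shape (the coefficients and compute values below are made up purely for illustration, not taken from any real benchmark):

```python
import math

# Illustrative log-linear scaling: accuracy = a + b * log10(compute).
# a and b are hypothetical, chosen only to show the shape of the curve.
a, b = 20.0, 8.0  # ~20% accuracy at 1 unit of compute, +8 points per 10x compute

for compute in [1, 10, 100, 1_000, 10_000]:
    accuracy = a + b * math.log10(compute)
    print(f"{compute:>6}x compute -> ~{accuracy:.0f}% accuracy")

# Each further +8 points of accuracy costs 10x more compute than the last jump,
# which is why scaling test time compute alone isn't explosive.
```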

I agree test time compute isn't especially explosive – it mainly serves to "pull forward" more advanced capabilities by 1-2 years.

More broadly, you can swap training for inference: https://epoch.ai/blog/trading-off-compute-in-training-and-inference

On brute force, I mainly took Toby's thread to be saying we don't clearly have enough information to know how effective test time compute is vs. brute force. 

3
tobycrisford 🔸
Ah, that's a really interesting way of looking at it, that you can trade training-compute for inference-compute to only bring forward capabilities that would have come about anyway via simply training larger models. I hadn't quite got this message from your post.

My understanding of Francois Chollet's position (he's where I first heard the comparison of logarithmic inference-time scaling to brute force search - before I saw Toby's thread) is that RL on chain of thought has unlocked genuinely new capabilities that would have been impossible simply by scaling traditional LLMs (or maybe it has to be chain of thought combined with tree-search - but whatever the magic ingredient is he has acknowledged that o3 has it).

Of course this could just be his way of explaining why the o3 ARC results don't prove his earlier positions wrong. People don't like to admit when they're wrong! But this view still seems plausible to me, it contradicts the 'trading off' narrative, and I'd be extremely interested to know which picture is correct. I'll have to read that paper!

But I guess maybe it doesn't matter a lot in practice, in terms of the impact that reasoning models are capable of having.

Spitballing: EA entrepreneurs should be preparing for one of two worlds:

i) Short timelines with 10x today's funding

ii) Longer timelines with relatively scarce funding

1
Guive
Scarce relative to the current level or just < 10x the current level? 

The response to Michael is an interesting point, but it only concerns diminishing returns in individual capabilities of new members. 

Diminishing returns are mainly driven by the best opportunities being used up, rather than by the capabilities of new members.

IIRC, a 10x increase in resources for a 3x increase in impact was a typical answer in the old coordination forum survey responses.
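
As a back-of-the-envelope way to see what that ratio implies (a rough framing of my own, not from the survey): if impact scales as resources^k, then 10x resources for 3x impact pins down k.

```python
import math

# Hypothetical power-law model of diminishing returns: impact ∝ resources**k.
# The "10x resources for 3x impact" figure implies the exponent k below.
k = math.log(3) / math.log(10)  # ≈ 0.48
print(f"implied exponent k ≈ {k:.2f}")

# Under that exponent, each further scale-up buys proportionally less impact:
for scale in [1, 2, 5, 10]:
    print(f"{scale:>2}x resources -> ~{scale**k:.1f}x impact")
```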

In the past at 80k I'd often assume a 3x increase in inputs (e.g. advising calls) to get a 2x increase in outputs (impact-adjusted plan changes), and that seemed to be roughly consistent with th... (read more)

Aside: A more compelling argument against growth in this area to me is something like "EA should focus on improving its brand and comms skills, and on making reforms & changing its messaging to significantly reduce the chance of something like FTX happening again, before trying to grow aggressively again"; rather than "the possibility of scandals means it should never grow".

Another one is "it's even higher priority to grow some other movement X than EA" rather than "EA is net negative to grow".

Less importantly, I also feel less confident coordination benefits would mean impact per member goes up with the number of members.

I understand that the value of a social network like Facebook grows with the number of members. But many forms of coordination become much harder with the number of members.

As an analogy, it's significantly easier for 2 people to decide where to go to dinner than for 3 people to decide. And 10 people in a group discussion can take ages to come to consensus.

Or, it's much harder to get a new policy adopted in an organisation of 1... (read more)

Thanks for the analysis! I think it makes sense to me, but I'm wondering if you've missed an important parameter: diminishing returns to resources.

If there are 100 community members they can take the 100 most impactful opportunities (e.g. writing DGB, publicising that AI safety is even a thing), while if there are 1000 people, they will need to expand into opportunities 101-1000, which will probably be lower impact than the first 100 (e.g. becoming the 50th person working on AI safety).

I'd guess a 10x increase to labour or funding working on EA things (eve... (read more)

2
Ben_West🔸
Thanks! See my response to Michael for some thoughts on diminishing returns. 10x increase in labor leading to 3x increase in impact feels surprising to me. At least in the regime of ~2xing supply I doubt returns diminish that quickly. But I haven't thought about this deeply and I agree that there is some rate of diminishing marginal returns which would make marginal growth net negative.
2
Benjamin_Todd
Less importantly, I also feel less confident coordination benefits would mean impact per member goes up with the number of members.

I understand that the value of a social network like Facebook grows with the number of members. But many forms of coordination become much harder with the number of members.

As an analogy, it's significantly easier for 2 people to decide where to go to dinner than for 3 people to decide. And 10 people in a group discussion can take ages to come to consensus.

Or, it's much harder to get a new policy adopted in an organisation of 100 than an organisation of 10, because there are more stakeholders to consult and compromise with, and then more people to train in the new policy etc. And large organisations are generally way more bureaucratic than smaller ones.

I think these analogies might be closer than the analogy of Facebook. You also get effects like in a movement of under 1000, it's possible to have met in person most of the people, and know many of them well; while in a movement of 10,000, coordination has to be based on institutional mechanisms, which tend to involve a lot of overhead and not be as good.

Overall it seems to me that movement growth means more resources and skills, more shared knowledge, infrastructure and brand effects, but also many ways that it becomes harder to work together, and the movement becoming less nimble. I feel unsure which effect wins, but I put a fair bit of credence on the term decreasing rather than increasing. If it were decreasing, and you also add in diminishing returns, then impact per member could be going down quite fast.

Thank you, I appreciate that.

Hey JWS, 

These comments were off-hand and unconstructive, have been interpreted in ways I didn't intend, and twitter isn't the best venue for them, so I apologise for posting, and I'm going to delete them. My more considered takes are here. Hopefully I can write more in the future.

Hey Ben, I'll remove the tweet images since you've deleted them. I'll probably rework the body of the post to reflect that, and I'm happy to make any edits/retractions that you think aren't fair.

I apologise if you got unfair pushback as a result of my post, and regardless of your present/future affiliation with EA, I hope you're doing well.

I'd be interested to hear a short explanation of why this seems like a different result from Leopold's paper, especially the idea that it could be better to accelerate through the time of perils.

6
Owen Cotton-Barratt
That talks about the effect of growth on existential risk; this analysis is explicitly not considering that. Here's a paragraph from this post: Leopold's analysis is of this "revised case" type.

I should maybe have been more cautious - how messaging will pan out is really unpredictable.

However, the basic idea is that if you're saying "X might be a big risk!" and then X turns out to be a damp squib, it looks like you cried wolf.

If there's a big AI crash, I expect there will be a lot of people rubbing their hands saying "wow those doomers were so wrong about AI being a big deal! so silly to worry about that!"

That said, I agree if your messaging is just "let's end AI!", then there's some circumstances under which you could look better after a crash e... (read more)

I agree people often overlook that (and also future resources).

I think bio and climate change also have large cumulative resources.

But I see this as a significant reason in favour of AI safety, which has become less neglected on an annual basis recently, but is a very new field compared to the others.

Also a reason in favour of the post-TAI causes like digital sentience.

Or you might like to look into Christian's grantmaking at Founders Pledge: https://80000hours.org/after-hours-podcast/episodes/christian-ruhl-nuclear-catastrophic-risks-philanthropy/

Thanks, that's helpful background!

I agree tractability of the space is the main counterargument, and MacArthur might have had good reasons to leave. Like I say in the post, I'd suggest people think about this issue carefully if they're interested in giving to this area.

It's worth separating two issues:

  1. MacArthur's longstanding nuclear grantmaking program as a whole
  2. MacArthur's late 2010s focus on weapons-usable nuclear material specifically

The Foundation had long been a major funder in the field, and made some great grants, e.g. providing support to the programs that ultimately resulted in the Nunn-Lugar Act and Cooperative Threat Reduction (See Ben Soskis's report). Over the last few years of this program, the Foundation decided to make a "big bet" on "political and technical solutions that reduce the world’s reliance on ... (read more)

I don't focus exclusively on philanthropic funding. I added these paragraphs to the post to clarify my position:

I agree that a full accounting of neglectedness should consider all resources going towards the cause (not just philanthropic ones), and that 'preventing nuclear war' more broadly receives significant attention from defence departments. However, even considering those resources, it still seems similarly neglected as biorisk.

And the amount of philanthropic funding still matters because certain important types of work in the space can only be funde

... (read more)
2
Vasco Grilo🔸
Thanks for clarifying, Ben!

Agreed, although my understanding is that you think the gains are often exaggerated. You said: Again, if the gain is just a factor of 3 to 10, then it makes all sense to me to focus on cost-effectiveness analyses rather than funding.

Agreed. However, deciding how much to weight a given relative drop in a fraction of funding (e.g. philanthropic funding) requires understanding its cost-effectiveness relative to other sources of funding. In this case, it seems more helpful to assess the cost-effectiveness of e.g. doubling philanthropic nuclear risk reduction spending instead of just quantifying it.

The product of the 3 factors in the importance, neglectedness and tractability framework is the cost-effectiveness of the area, so I think the increased robustness comes from considering many interventions. However, one could also (qualitatively or quantitatively) aggregate the cost-effectiveness of multiple (decently scalable) representative promising interventions to estimate the overall marginal cost-effectiveness (promisingness) of the area.

I agree, but I did not mean to argue for deemphasising the concept of cause area. I just think the promisingness of areas had better be assessed by doing cost-effectiveness analyses of representative (decently scalable) promising interventions.

To clarify, the estimate for the cost-effectiveness of corporate campaigns I shared above refers to marginal cost-effectiveness, so it does not directly refer to the cost-effectiveness of ending factory-farming (which is far from a marginal intervention). My guess would be that the acquired career capital would still be quite useful in the context of the new top interventions, especially considering that welfare reforms have been top interventions for more than 5 years[1]. In addition, if Open Philanthropy is managing their funds well, (all things considered) marginal cost-effectiveness should not vary much across time. If the top interventions in 5 years were

It might take more than $1bn, but around that level, you could become a major funder of one of the causes like AI safety, so you'd already be getting significant benefits within a cause.

Agree you'd need to average 2x for the last point to work.

Though note the three pathways to impact - talent, intellectual diversity, OP gaps - are mostly independent, so you'd only need one of them to work.

Also agree in practice there would be some funging between the two, which would limit the differences, that's a good point.

I'd also be interested in that. Maybe worth adding that the other grantmaker, Matthew, is younger. He graduated in 2015 so is probably under 32.


Intellectual diversity seems very important to figuring out the best grants in the long term.

If atm the community has, say, $20bn to allocate, you only need a 10% improvement to future decisions to be worth +$2bn.

Funder diversity also seems very important for community health, and therefore our ability to attract & retain talent. It's not attractive to have your org & career depend on such a small group of decision-makers.

I might quantify the value of the talent pool around another $10bn, so again, you only need a ~10% increase here to be worth a b... (read more)

6
Jason
I find it plausible that a strong fix to the funder-diversity problem could increase the value of the talent pool by 10% or even more. However, having a new independent funder with $1B in assets (spending much less than that per year) feels more like an incremental improvement. You'd need to do that consistently (no misses, unless counteracted by >2x grants) and efficiently (as incurring similar overhead as OP with $1B of assets would consume much of the available cash flow). That seems like a tall order.  Moreover, I'm not sure if a model in which the new major funder always gets to act "last" would track reality very well. It's likely that OP would change its decisions, at least to some extent, based on what it expected the other funder to do. In this case, the new funder would end up funding a significant amount of stuff that OP would have counterfactually funded.

One quick point is that divesting, while it would help a bit, wouldn't obviously solve the problems I raise – AI safety advocates could still look like alarmists if there's a crash, and other investments (especially including crypto) will likely fall at the same time, so the effect on the funding landscape could be similar.

With divestment more broadly, it seems like a difficult question.

I share the concerns about it being biasing and making AI safety advocates less credible, and feel pretty worried about this.

On the other side, if something like TAI starts to h... (read more)

6
Greg_Colbourn ⏸️
I would encourage EAs to go even further against the EMH than buying AI stocks. EAs have been ahead of the curve on lots of things, so we should be able to make even better returns elsewhere, especially given how crowded AI is now. It's worth looking at the track record of the HSEACA investing group[1], but, briefly, I have had 2 cryptos that I learnt about in there in the last couple of years go up 1000x and 200x respectively (realising 100x and 50x gains respectively, so far in the case of the second).

Lots of people also made big money shorting stock markets before the Covid crash, and there have been various other highly profitable plays, and promising non-AI start-ups posted about. There are plenty of other opportunities out there that are better than investing in AI, even from a purely financial perspective. More EAs should be spending time seeking them out, rather than investing in ethically questionable companies that go against their mission to prevent x-risk, and are very unlikely to provide significant profits that are actually usable before the companies they come from cause doom, or collapse in value from being regulated to prevent doom.

1. ^ Would actually be great if someone did an analysis of this sometime!
3
Greg_Colbourn ⏸️
It is! In fact, I think non-doom TAI worlds are highly speculative[1].

1. ^ I've still not seen any good argument for them making up a majority of the probability space, in fact.

I want to be clear it's not obvious to me OP is making a mistake. I'd lean towards guessing AI safety and GCBRs are still more pressing than nuclear security. OP also have capacity constraints (which make it e.g. less attractive to pursue smaller grants in areas they're not already covering, since it uses up time that could have been used to make even larger grants elsewhere). Seems like a good fit for some medium-sized donors who want to specialise in this area.

3
Arepo
I don't know if they're making a mistake - my question wasn't meant to be rhetorical. I take your point about capacity constraints, but if no-one else is stepping up, it seems like it might be worth OP stepping up their capacity constraints. I continue to think the EA movement systematically underestimates the x-riskiness of nonextinction events in general and nuclear risk in particular by ignoring much of the increased difficulty of becoming interstellar post-destruction/exploitation of key resources. I gave some example scenarios of this here (see also David's results) - not intended to be taken too seriously, but nonetheless incorporating what I think are significant factors that other longtermist work omits (e.g. in The Precipice, Ord defines x-risk very broadly, but when he comes to estimate the x-riskiness of 'conventional' GCRs, he discusses them almost entirely in terms of their probability of making humans immediately go extinct, which I suspect constitutes a tiny fraction of their EV loss).

Interesting. I guess a key question is whether another wave of capabilities (e.g. GPT-5, agent models) comes in soon or not.

Agree it's most likely already in the price.

Though I'd stand behind the idea that markets are least efficient when it comes to big booms and busts involving large asset classes (in contrast to relative pricing within a liquid asset class), which makes me less inclined to simply accept market prices in these cases.

You could look for investments that do neutral-to-well in a TAI world, but have low-to-negative correlation to AI stocks in the short term. That could reduce overall portfolio risk but without worsening returns if AI does well.

This seems quite hard, but the best ideas I've seen so far are:

  1. The cluster of resources companies, electricity producers, commodities, land. There's reason to think these could do quite well during a TAI transition, but in the short term they do well when inflation rises, which tends to be bad for AI stocks. (And they were effective
... (read more)

I should maybe have added that several people mentioned "people who can practically get stuff done" is still a big bottleneck.

My impression is that of EA resources focused on catastrophic risk, 60%+ are now focused on AI safety, or issues downstream of AI (e.g. even the biorisk people are pretty focused on the AI/Bio intersection).

AI has also seen dramatic changes to the landscape / situation in the last ~2 years, and my update was focused on how things have changed recently.

So for both reasons most of the updates that seemed salient to me concerned AI in some way.

That said, I'm especially interested in AI myself, so I focused more on questions there. It would be ideal to hear from more bio people.

I also briefly mention nuclear security, where I think the main update is the point about lack of funding.

 

9
DavidNash
I think there is more value in separating out AI vs bio vs nuclear vs meta GCR than having posts/events marketed as GCR but be mainly on one topic. Both from the perspective of the minor causes and the main cause which would get more relevant attention.  Also the strategy/marketing of those causes will often be different and so it doesn't make as much sense to lump them together unless it is about GCR prioritisation or cross cause support.

Hi Wayne,

Those are good comments!

On the timing of the profits, my first estimate is for how far profits will need to eventually rise. 

To estimate the year-by-year figures, I just assume revenues grow at the 5yr average rate of ~35% and check that's roughly in line with analyst expectations. That's a further extrapolation, but I found it helpful to get a sense of a specific plausible scenario.
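
For concreteness, the extrapolation is just compounding at that rate; the starting revenue and horizon below are placeholders rather than figures from the post:

```python
# Rough compound-growth extrapolation of the kind described above.
# The starting revenue and horizon are placeholder values, not figures from the post.
starting_revenue_bn = 100.0  # hypothetical annual revenue, $bn
growth_rate = 0.35           # ~35% p.a., the assumed 5yr average growth rate

revenue = starting_revenue_bn
for year in range(1, 6):
    revenue *= 1 + growth_rate
    print(f"year {year}: ~${revenue:.0f}bn revenue")
# The resulting path can then be sanity-checked against analyst expectations,
# and profits read off by applying an assumed margin.
```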

(I also think that if Nvidia revenue growth looked to be under 20% p.a. over the next few quarters, the stock would sell off, though that's just a judgement call.)

On the ... (read more)

I mean "enter the top of the funnel".

For example, if you advertise an event as being about it, more people will show up to the event. Or more people might sign up to a newsletter.

(We don't yet know how this translates into more intense forms of engagement.)

It's fair that I only added "(but not more)" to the forum version – it's not in the original article which was framed more like a lower bound. Though, I stand by "not more" in the sense that the market isn't expecting it to be *way* more, as you'd get in an intelligence explosion or automation of most of the economy. Anyway I edited it a bit.

I'm not taking revenue to be equivalent to value. I define value as max consumer willingness to pay, which is closely related to consumer surplus.

I agree risk also comes into it – it's not a risk-neutral expected value (I discuss that in the final section of the OP).

Interesting suggestion that the Big 5 are riskier than Nvidia. I think that's not how the market sees it – the Big 5 have lower price & earnings volatility and lower beta. Historically chips have been very cyclical. The market also seems to think there's a significant chance Nvidia loses market share to TPUs or AMD. I think the main reason Nvidia has a higher PE ratio is due to its earnings growth.

I agree all these factors go into it (e.g. I discuss how it's not the same as the mean expectation in the appendix of the main post, and also the point about AI changing interest rates).

It's possible I should hedge more in the title of the post. That said, I think the broad conclusion actually holds up to plausible variation in many of these parameters.

For instance, margin is definitely a huge variable, but Nvidia's margin is already very high. More likely the margin falls, and that means the size of the chip market needs to be even bigger than the estimate.

I do think you should hedge more given the tower of assumptions underneath.

The title of the post is simultaneously very confident ("the market implies" and "but not more"), but also somewhat imprecise ("trillions" and "value"). It was not clear to me that the point you were trying to make was that the number was high.

Your use of "but not more" implies you were also trying to assert the point that it was not that high, but I agree with your point above that the market could be even bigger. If you believe it could be much bigger, that seems inconsistent with... (read more)
