We are still working on getting a more official version of this on arXiv, possibly with estimates for the remaining parameters.
When we do that, we'll also upload full replication files. But I don't want to keep anyone waiting for the data in case they have some uses for it, so see here for the main CSV we used: https://github.com/parkerwhitfill/EOS_AI
In complete generality, you could write effective labor as
$$L = F(H, C_{\text{inf}}, C_{\text{train}}).$$
That is, effective labor is some function of the number of human researchers we have ($H$), the effective inference compute we have ($C_{\text{inf}}$, the quantity of AIs we can run), and the effective training compute ($C_{\text{train}}$, the quality of the AIs we trained).
The perfect substitution claim is that once training compute is sufficiently high, then eventually we can spend the inference compute on running some AI that substitutes for human researchers. Mathematically, for some threshold $\bar{C}$,
w...
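One way to formalize this (my notation, not necessarily the paper's: $H$ human researchers, $C_{\text{inf}}$ effective inference compute, $C_{\text{train}}$ effective training compute): for some threshold $\bar{C}$,

$$C_{\text{train}} \ge \bar{C} \;\Longrightarrow\; F(H, C_{\text{inf}}, C_{\text{train}}) = G\big(H + \psi(C_{\text{train}})\, C_{\text{inf}}\big),$$

where $\psi(C_{\text{train}})$ converts a unit of inference compute into its human-researcher equivalent.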
Here is a fleshed-out version of Cheryl's response. Let's suppose actual research capital is $\tilde{K} = qK$ but we just used $K$ in our estimation equation.
Then the true estimation equation is
$$\log(\tilde{K}_t / L_t) = \alpha - \sigma \log(p_{K,t} / p_{L,t}).$$
Substituting $\tilde{K}_t = q K_t$ and re-arranging, we get
$$\log(K_t / L_t) = (\alpha - \log q) - \sigma \log(p_{K,t} / p_{L,t}).$$
So if we regress $\log(K_t/L_t)$ on a constant and $\log(p_{K,t}/p_{L,t})$, then the coefficient on $\log(p_{K,t}/p_{L,t})$ is still $-\sigma$, as long as $q$ is independent of the price ratio.
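A quick sanity check by simulation (made-up parameters, not our actual data): generate data where true capital is $\tilde{K} = qK$ with $q$ independent of the relative price, regress the mismeasured quantity on the price, and the price coefficient still comes out at $-\sigma$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
sigma = 2.0  # illustrative "true" elasticity

log_p = rng.normal(0.0, 1.0, n)    # log relative price, log(p_K / p_L)
log_q = rng.normal(0.0, 0.5, n)    # measurement factor, independent of log_p
noise = rng.normal(0.0, 0.1, n)

# True relation uses effective capital K~ = q K:
#   log(K~ / L) = a - sigma * log_p
# so the mismeasured series satisfies
#   log(K / L) = a - log_q - sigma * log_p
log_KL = 1.0 - log_q - sigma * log_p + noise

# OLS of log(K / L) on a constant and log_p
X = np.column_stack([np.ones(n), log_p])
beta = np.linalg.lstsq(X, log_KL, rcond=None)[0]
print(beta[1])  # close to -sigma; the q term only loads on the intercept and residual
```

The measurement error just inflates the error term, which is why independence of $q$ from the price ratio is the key assumption.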
Nevertheless, I think this should increase your uncertainty in our estimates because there is clearly a lot go...
Note that if you accept this, our estimation of $\sigma$ in the raw compute specification is wrong.
The cost-minimization problem becomes
$$\min_{C,\,L}\; p_C C + p_L L \quad \text{s.t.} \quad \big(\alpha (A C)^{\rho} + (1-\alpha) L^{\rho}\big)^{1/\rho} = \bar{Y}.$$
Taking FOCs and re-arranging,
$$\log(C/L) = \sigma \log\frac{\alpha}{1-\alpha} - \log A - \sigma \log\!\left(\frac{p_C}{A\, p_L}\right).$$
So our previous estimation equation was missing an $A$ on the relative prices. Intuitively, we understated the degree to which compute was getting cheaper. Now $A$ is hard to observe, but let's just assume it's growing exponentially with an 8-month doubling time, per this Epoch paper.
Imputing this guess of A, and estimating via OLS w...
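The imputation itself is simple (a sketch, with the 8-month doubling time as the assumed growth rate of $A$ and a placeholder price series standing in for the data):

```python
import numpy as np

months = np.arange(48)                         # monthly observations over 4 years
doubling_time = 8.0                            # months, per the Epoch estimate
log_A = (months / doubling_time) * np.log(2)   # A_t = 2^(t / 8)

# The effective price of compute is p_C / A, so the corrected regressor is
# log(p_C / p_L) - log(A).
log_price_ratio = np.zeros(48)   # placeholder for the observed log(p_C / p_L) series
adjusted = log_price_ratio - log_A
print(adjusted[8])  # after 8 months, effective compute is 2x cheaper: -log(2)
```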
This is a good point, we agree, thanks! Note that you need to assume that the algorithmic progress that gives you more effective inference compute is the same progress that gives you more effective research compute. This seems pretty reasonable, but it is worth discussing.
Although note that this argument works only with the CES-in-compute formulation. For the CES in frontier experiments, you would have the $A$ in both the numerator and the denominator, so the $A$ cancels out.[1]
You might be able to avoid this by adding the A's in a less naive fashion. You don't have to train larger models
Thanks for the insightful comment.
I take your overall point to be that the static optimization problem may not be properly specified. For example, costs may not be linear in labor because of adjustment costs to growing very quickly, or costs may not be linear in compute because of bulk discounting. Moreover, these non-linear costs may be changing over time (e.g., adjustment costs might only matter in 2021–2024, as OpenAI and Anthropic have been scaling labor aggressively). I agree that this would bias the estimate of $\sigma$. Given the data we have, there sho...
Great paper as always, Phil.
I'm curious to hear your thoughts a bit more about whether we can salvage SWE by introducing non-standard preferences.
Minor quibble: "There is then no straightforward sense in which economic growth has historically been exponential, the central stylized fact which SWE and semi-endogenous models both seek to explain"
I agree that there is no consumption aggregate under non-homothetic preferences, but we can still say economic growth has been exponential in the sense that GDP growth is exponential. Perhaps it is not a ...
People often appeal to Intelligence Explosion/Recursive Self-Improvement as a win condition for current model developers; e.g., Dario argues Recursive Self-Improvement could enshrine the US's lead over China.
This seems non-obvious to me. For example, suppose OpenAI trains GPT 6 which trains GPT 7 which trains GPT 8. Then a fast follower could take GPT 8 and then use it to train GPT 9. In this case, the fast follower has a lead and has spent far less on R&D (since they didn't have to develop GPT 7 or 8 themselves).
I guess people are thinking that OpenAI will be able to ban GPT 8 from helping competitors? But has anyone argued for why they would be able to do that (either legally or technically)?
Here is a counterargument: focusing on the places where there is altruistic alpha is 'defecting' against other value systems. See discussion here
Agreed with this. I'm very optimistic about AI solving a lot of incentive problems in science. I don't know if the end case (full audits) as you mention will happen, but I am very confident we will move in a better direction than where we are now.
I'm working on some software now that will help a bit in this direction!
Since it seems like a major goal of the Future Fund is to experiment and gain information on types of philanthropy: how much data collection and causal inference are you doing or planning to do on the grant evaluations?
Here are some ideas I quickly came up with that might be interesting.
I'd say it's close, and it depends on which courses you'd be missing with an econ minor instead of a major. If those classes are 'economics of x' classes (such as media or public finance), then your time is better spent on research. If those classes are still in the core (intermediate micro, macro, econometrics, maybe game theory), I'd probably take those before research.
Of course, you are right that admissions care a lot about research experience - but it seems the very best candidates have all those classes AND a lot of research experience.
Is your sense that that's better than math major + econ minor + a few classes in stats and computer science + econ research (doing econ research with the time that would have otherwise gone to extra econ classes)? I'd guess this makes sense since I've heard econ grad schools aren't too impressed by econ majors and care a lot about research experience.
One case where this doesn't seem to apply is an economics Ph.D. For that, it seems taking very difficult classes and doing very well in them is largely a prerequisite for admissions. I am very grateful I took the most difficult classes and spent a large fraction of my time on schoolwork.
The caveat here is that research experience is very helpful too (working as an RA).
Is there a strong reason to close applications in January?
I'm only familiar with the deadlines for economics graduate school, but for that you get decisions back in February–March along with the funding package. Therefore, it would be useful to be able to apply for this depending on the funding package you receive (e.g., if you are fully funded you don't need to apply, but if you are given little or no funding, it would be important to apply).
I highly recommend Cold Turkey Blocker, link here. It offers many of the features you listed above, including scheduled blocking, blocking the whole internet, blocking specific URLs or search phrases (moreover, this can be done with regex, so you can make the search terms very general), password-protected blocks, and no current loopholes (if there are any, please don't post them, I don't want to know!); the loopholes that used to exist (proxies) got fixed.
Pricing seems better than Freedom's, as it's $40 for lifetime usage. My only complaint is that there is no phone version.
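For example, the regex blocking can be made quite general (a hypothetical pattern to illustrate the idea, not Cold Turkey's exact syntax; tested here with Python's `re`):

```python
import re

# One pattern covers many spellings of a blocked search term.
pattern = re.compile(r"(?i)\bp[o0]ker\b")

print(bool(pattern.search("Online P0KER sites")))  # True
print(bool(pattern.search("pokemon go")))          # False
```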
I'd still agree that we should factor in cooperation, but my intuition is then that it's going to be a smaller consideration than neglect of future generations, so more about tilting things around the edges, and not being a jerk, rather than significantly changing the allocation. I'd be up for being convinced otherwise – and maybe the model with log returns you mention later could do that. If you think otherwise, could you explain the intuition behind it?
I think one point worth emphasizing is that if the cooperative portfolio is a p...
Because my life has been a string of lucky breaks, ex post I wouldn’t change anything. (If I’d gotten good advice age 20, my life would have gone worse than it in fact has gone.) But assuming I don’t know how my life would turn out:
"A pretty standard view of justice is that you don't harm others, and if you are harming them then you should stop and compensate for the harm done. That seems to describe what happens to farmed animals."
I think this only applies to people who are contributing to the harm. But a vegan who is staunchly opposed to factory farming isn't harming the animals, so factory farming is not an issue of justice for them.
..."Whether we seek to alleviate poverty directly or indirectly, we might suppose that such efforts will get a privileged status over very different cause areas if we endorse the justice view. But our other cause priorities deal with injustices too; factory farming is an unjust emergency, and an existential catastrophe would clearly be a massive injustice that might only be prevented if we act now. And just like poverty, both of these problems have been furthered by selfish and corrupt international institutions which have also contributed to our wealth
arXiv link here: https://arxiv.org/abs/2507.23181