Thinking, writing, and tweeting from Berkeley, California. Previously, I ran programs at the Institute for Law & AI, worked on the one-on-one advising team at 80,000 Hours in London, and worked as a patent litigator at Sidley Austin in Chicago.
Your picture of EA work on AGI preparation is inaccurate enough that I don't think you made a serious effort to understand the space you're criticizing. Most of the work looks like METR benchmarking, model card/RSP policy (companies should test new models for dangerous capabilities and propose mitigations/make safety cases), mech interp, compute monitoring/export controls research, and testing for undesirable behavior in current models.
Other people do make forecasts that rely on philosophical priors, but those forecasts are extrapolating from and responding to the evidence being generated. You're welcome to argue that their priors are wrong or that they're overconfident, but comparing this to preparing for an alien invasion based on Oumuamua is bad faith. We understand the physics of space travel well enough to confidently put a very low prior on alien invasion. One thing basically everyone in the AI debate agrees on is that we do not understand where the limits of progress lie, even as data reflecting continued progress keeps coming in.
I agree there's logical space for something less than AGI making the investments rational, but I think the gap between that and full AGI is pretty small. That's a peculiarity of my own world model, though, so not something to bank on.
My interpretation of the survey responses is that selecting "unlikely" when "not sure" and "very unlikely" are also options suggests substantial probability (i.e., >10%) on the part of the respondents who say "unlikely" or "not sure." Reasonable uncertainty is all you need to justify work on something so important-if-true, and the cited survey seems to provide that.
I directionally agree that EAs are overestimating the imminence of AGI and will incur some credibility costs, but the bits of circumstantial evidence you present here don't warrant the confidence you express. 76% of experts saying it's "unlikely" the current paradigm will lead to AGI leaves ample room for a majority thinking there's a 10%+ chance it will, which is more than enough to justify EA efforts here.
And most of what EAs are working on is determining whether we're in that world and what practical steps we can take to safeguard value given what we know. It's premature to declare the case closed when the markets and the field are still mostly against you (at the 10% threshold).
I wish EA were a bigger and broader movement such that we could do more hedging, but given that the movement only has a few hundred people and a few hundred million dollars per year, it's reasonable to stake that on something this potentially important that no one else is doing effective work on.
I would like to bring back more of the pre-ChatGPT disposition, where people were more comfortable emphasizing their uncertainty while standing by the expected value of AI safety work. I'm also open to the idea that such modesty too heavily burdens our ability to have impact in the 10%+ of worlds where it really matters.
If AIs are a perfect substitute for humans with lower absolute costs of production – where "costs" mean the physical resources needed to keep a flesh-and-blood human alive and productive – humans will have a comparative advantage only in theory. In practice, it would make more sense to get rid of the humans and use the inputs that would have sustained them to produce more AI labor.
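As a rough illustration of that mechanism (the numbers and the symbols $y_H$ and $y_{AI}$ are made up for this sketch, not drawn from any source):

\[
\text{resources to sustain one human worker} = 100 \ \Rightarrow\ \text{output } y_H = 1,
\qquad
\text{same } 100 \text{ resources spent on AI} \ \Rightarrow\ \text{output } y_{AI} = 10.
\]

The human's wage can't exceed $y_H = 1$, but the opportunity cost of the resources keeping them alive and productive is $y_{AI} = 10$, so whoever controls those resources does better redirecting them to AI, whatever the human's comparative advantage looks like on paper.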
Yep. I agree there's too great a default towards optimism and that some people are wasting their time as a result.
Based on the numbers I saw when I worked on hiring, I'd say something like 125-200 of the GovAI applicants were determined to work in AIS, properly conceived, as the primary thing they wanted to do. FIG is harder to guess, but I wouldn't be surprised if they just got added to some popular lists of random/"easy" internship opportunities and got flooded with totally irrelevant apps.
No on appalled; no on oversaturated; yes on being clear that AIS projects are looking for the ultra-talented, while being mindful of how hard it is to be well calibrated on who is ultra-talented, including yourself.
In my experience, when applicant numbers are this large, a sizable majority of applicants are both plainly unqualified and don't understand the mission of AIS orgs. You shouldn't assume most or even many resemble you or other people on the EA Forum.
Everyone should be open to not being a fit for many projects and to the idea that better candidates are out there. I wish, for the world's sake, that I become unhirable!
Interesting exchange there. I agree that the vision should be to have EA so in-the-water that most people don't realize they're doing "Effective Altruism." I'm very uncertain about how you get from here to there. I doubt it makes sense to shrink or downplay the existing EA community. My intuition is you want to scale up the memes at every level. Right now we're selling everything to buy more AI safety memes. It's going okay, but it's risky and I'm conscious of the costs to everything else.
Specifically inspired by Mechanize's piece on technological determinism. It seems overstated, but I wonder what the altruistic thing to do would be if they were right.
My list is very similar to yours. I believe items 1, 2, 3, 4, and 5 have already been achieved to substantial degrees and we continue to see progress in the relevant areas on a quarterly basis. I don't know about the status of 6.
For clarity on item 1, AI company revenues in 2025 are on track to cover 2024 costs, so on a product basis, AI models are profitable; it's the cost of training new models that pulls annual figures into the red. I think this will stop being true soon, but that's speculation on my part, not evidence, so I remain open to the possibility that scaling will continue to make progress towards AGI, potentially soon.
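A toy version of that accounting, with hypothetical figures just to show the structure of the claim:

\[
\text{cost of 2024 models} = \$5\text{B}, \qquad \text{2025 revenue from those models} = \$6\text{B} \ \Rightarrow\ \text{product basis: } +\$1\text{B};
\]
\[
\text{2025 spend on next-generation models} = \$10\text{B} \ \Rightarrow\ \text{2025 annual: } \$6\text{B} - \$10\text{B} = -\$4\text{B}.
\]

Each year's models can more than pay for themselves while each year's income statement still shows a loss, because the loss is driven by spending on the next generation.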