Currently MIRI, formerly MATS, sometimes Palisade.
Opinions my own.
I took your comment to be critiquing two points:
My response was addressing point 2, not point 1, and this was intentional. I will continue not to engage on point 1, because I don't think it matters. If you're devoted to thinking Lintz is a race-stoking war hawk due to antecedent ambiguity in a quickly-drafted post, and my bid to dissuade you from this was ineffective, that's basically fine with me.
"Who do you mean by 'The country with the community of people who have been thinking about this the longest'"
The US — where ~all the labs are based, where ~all the AI safety research has been written, where Chinese ML engineers have told me directly that, when it comes to AI and ML, they think exclusively in English. Where a plurality of users of this forum live, where the large models that enabled the development of DeepSeek were designed to begin with, where the world's most powerful military resides, where....
Yes, a US-centric view is an important thing to inspect in all contexts, but I don't think it's unreasonable to think that the US is the most important actor in the current setting (could change but probably not), and to (for a US-based person who works on policy... in the US, as Lintz does, speaking to a largely US-based audience) use 'we' to refer to the US.
"What is your positive evidence for the claim that other communities (e.g., certain national intelligence communities) haven't thought about that for at least as long?"
I want to point out that this is unfair, since meeting this burden of proof would require comprehensive privileged knowledge of the activities over the past half century of every intelligence agency on the planet. My guess is you know that I don't have that!
Things I do know:
Finally: "Positive evidence.... have not" is a construction I would urge you to reconsider — positive evidence for a negative assertion is a notoriously difficult thing to get.
I'd like to say I'm grateful to have read this post; it helped explicate some of my own intuitions and drew my attention to a few major cruxes. I'm asking a bunch of questions because knowing what to do is hard and we should all figure it out together, not because I'm especially confident in the directions the questions point.
Should we take the contents of the section "What should people concerned about AI safety do now?" to be Alex Lintz's ordinal ranking of worthwhile ways to spend time? If so, would you like to argue for this ranking? If not, would you like to provide an ordinal ranking and argue for it?
"The US is unlikely to be able to prevent China (or Russia) from stealing model weights anytime soon given both technological constraints and difficulty/cost of implementing sufficient security protocols."
Do you have a link to a more detailed analysis of this? My guess is there's precedent for the government taking security seriously in relation to some private industry and locking all of its infrastructure down, but maybe this is somewhat different for cyber (and, in any case, more porous than would be ideal). Is the real way to guarantee cybersecurity just conventional warfare? (yikes)
What's the theory of change behind mass movement building and public-facing comms? We agitate the populace and then the populist leader does something to appease them? Or something else?
You call out the FATE people as explicitly not worth coalition-building with at this time; what about the jobs people (their more right-coded counterpart)? Historically we've been hesitant to ally with either group, since their models of 'how this whole thing goes' are sort of myopic, but that you mention FATE and not the jobs people seems significant.
"It would be nice to know if China would be capable of overtaking the US if we were to slow down progress or if we can safely model them as being a bit behind no matter how fast the US goes."
I think compute is the crux here. Dario was recently talking about how OOMs of chips matter, and the 10s of thousands necessary for current DeepSeek models would be difficult to scale to the 100s of thousands or millions that are probably necessary at some point in the chain. (Probably the line of 'reasoning models' descended from ~GPT-4 has worse returns per dollar spent than the line of reasoning models descended from GPT-5, esp. if the next large model is itself descended from these smaller reasoners). [<70 percent confident]
If that's true, then compute is the moat, and export controls/compute governance still get you a lot re: avoiding multipolar scenarios (and so shouldn't be deprioritized, as your post implies they should be).
I'm also not sure about this 'not appealing to the American Left' thing. Like, there's some subset of conservative politicians that are just going to support the thing that enriches their donors (tech billionaires), so we can't Do The Thing* without some amount of bipartisan support, since there are bad-faith actors on both sides actively working against us.
"Developing new policy proposals which fit with the interests of Trump’s faction"
I'd like to point out that there's a middle ground between trying to be non-partisan and convincing people in good faith of the strength of your position (while highlighting the ways in which it synergizes with their pre-existing concerns), and explicitly developing proposals that fit their interests. This latter thing absolutely screws you if the tables turn again (as they did last time, when we collaborated with FATE on, e.g., the EO), and the former thing (while more difficult!) is the path to more durable (and bipartisan!) wins.
*whatever that is
I wasn't really reading anything in this post as favoring the US over China in an "us vs them" way, so much as I was reading a general anxiety about multipolar scenarios. Also, very few of us are based in China relative to the US or EU. 'The country with the community of people who have been thinking about this the longest' is probably the country you want the thing to happen in, if it has to happen (somewhere).
fwiw I'm usually very anxious about this us vs them narrative, and didn't really feel this post was playing into it very strongly, other than in the above ways, which are (to me) reasonable.
small but expanding (like everything in the space) is my understanding; there are also a lot of non-RAND government and government-adjacent groups devoted to AI safety and nat sec.
I didn't mean to imply that the org had retooled to become entirely AI-focused or something; sorry if that's how it read!
I think your claim is directionally correct all-else-equal; I just don't think the effect is big enough in context, with high enough confidence, that it changes the top-line calculation you're responding to (that 4-5x) at the resolution it was offered (whole numbers).
The naive assumption that scholars can be arranged linearly according to their abilities and admitted one-by-one in accordance with the budget is flawed. If it were true, we could probably say that the marginal MATS scholar at selection was worth maybe <80 percent of the central scholar (the threshold at which I would have written 3-4x above rather than 4-5x). But it's not true.
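For concreteness, here's the arithmetic behind that parenthetical as I'd reconstruct it (a minimal sketch, assuming the 4-5x figure scales linearly with how much the marginal scholar is worth relative to a central one):

```python
# Sketch of the 80% threshold mentioned above; the figures are the ones from this thread,
# and the linear-scaling assumption is mine, not a precise model.
quoted_multiple = (4, 5)     # the "4-5x" figure
marginal_vs_central = 0.8    # hypothetical: marginal scholar worth 80% of a central one

adjusted = tuple(m * marginal_vs_central for m in quoted_multiple)
print(adjusted)  # (3.2, 4.0) -> reads as "3-4x" at whole-number resolution
```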
Mentors pick scholars based on their own criteria (MATS ~doesn't mess with this, although we do offer support in the process). Criteria vary significantly between mentors. It's not the case, for instance, that all of the mentors put together their ordered list of accepted and waitlisted scholars and end up competing for the same top picks. This happens some, but quite rarely relative to the size of the cohort. If what you've assumed actually had a strong effect, we'd expect every mentor to have the same (or even very similar) top picks. They simply don't.
MATS 6 is both bigger and (based on feedback from mentors) more skill-dense than any previous MATS cohort, because it turns out all else does not hold equal as you scale and you can't treat a talent pipeline like a pressure calculation.
Ah, really just meant it as a data point and not an argument! I think if I were reading this I'd want to know the above (maybe that's just because I already knew it?).
But to carry on the thread: It's not clear to me from what we know about the questions in the survey if 'creating' meant 'courting, retraining', or 'sum of all development that made them a good candidate in the first place, plus courting, retraining.' I'd hope it's the former, since the latter feels much harder to reason about commutatively. Maybe this ambiguity is part of the 'roughness' brought up in the OP.
I'm also not sure if 'the marginal graduate is worse than the median graduate' is strongly true. Logically it seems inevitable, but also it's very hard to know ex ante how good a scholar's work will be, and I don't think it's exactly right to say there's a bar that gets lowered when the cohort increases in size. We've been surprised repeatedly (in both directions) by the contributions of scholars even after we feel we've gotten a bead on their abilities (reviewed their research plans, etc).
Often the marginal scholar allows us to support a mentor we otherwise wouldn't have supported, who may have a very different set of selection criteria than other mentors.
In case it's useful to anyone: that 100k number is ~4-5x the actual cost of increasing the size of a MATS cohort by 1.
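Back-of-the-envelope, taking those figures at face value (a rough sketch, not exact accounting):

```python
# Implied marginal cost of adding one scholar, if ~$100k is 4-5x the actual cost.
quoted_cost = 100_000   # the ~100k number referenced above
multiple = (4, 5)       # "~4-5x the actual cost"

implied_marginal_cost = tuple(quoted_cost / m for m in multiple)
print(implied_marginal_cost)  # (25000.0, 20000.0) -> roughly $20-25k per additional scholar
```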
edit for more fleshed out thoughts and some questions....
and now edited again to replace those questions with answers, since the doc is available...
Reasoning about just how exceptional that exceptional technical researcher is turns out to be super hard for me, because even very sharp people in the space have highly varied impact (maybe 4+ OOMs between the bottom person I'd describe with the language you used and the top person I'd describe in the same language, e.g. Christiano).
Would have been interested to see a more apples-to-apples comparison with technical researchers on the policy side. Most technical researchers have at least some research and/or work experience (usually ~5 years of the two combined). One of the policy categories is massively underqualified in comparison, and the other is massively overqualified. I'd guess this is downstream of where the community has set the bar for policy people, but I'd take "has worked long enough to actually Know How Government Works, but has no special connections or string-pulling power" at like >10:1 against the kind of gov researcher listed (although I'd also take that kind of gov researcher at less than half the median exchange rate above).
Surprised a UN AI think tank (a literal first, afaik, and likely a necessary precursor for international coordination or avoiding an arms race) would be rated so low, whereas a US think tank (when many US think tanks, including the most important one, have already pivoted to spending a lot of time thinking about AI) was rated so highly.
Without commenting too much on a specific org (anonymity commitments, sorry!), I think we’re in agreement here and that the information you provided doesn’t conflict with the findings of the report (although, since your comment is focused on a single org in a way that the report is simply not licensed to be, your comment is somewhat higher resolution).
One manager creates bandwidth for 5-10 additional Iterator hires, so the two just aren’t weighted the same wrt something like ‘how many of each should we have in a MATS cohort?’ In a sense, a manager is responsible for ~half the output of their team, or is “worth” 2.5-5 employees (if, counterfactually, you wouldn’t have been able to hire those folks at all). This is, of course, conditional on being able to get those employees once you hire the manager. Many orgs also hire managers from within, especially if they have a large number of folks in associate positions who’ve been with the org > 1 year and have the requisite soft skills to manage effectively.
If you told me “We need x new safety teams from scratch at an existing org”, probably I would want to produce (1-2)x Amplifiers (to be managers), and (5-10)x Iterators. Keeping in mind the above note about internal hires, this pushes the need (in terms of ‘absolute number of heads that can do the role’) for Amplifiers relative to Iterators down somewhat.
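As a toy illustration of those ratios (the numbers are just the illustrative ranges above, and the function and its name are hypothetical, not a planning tool):

```python
# Toy headcount sketch for standing up new safety teams, using the (1-2)x / (5-10)x ranges above.
def headcount_for_new_teams(num_teams: int) -> dict:
    amplifiers = (1 * num_teams, 2 * num_teams)    # (1-2) Amplifiers (managers) per team
    iterators = (5 * num_teams, 10 * num_teams)    # (5-10) Iterators per team
    return {"amplifiers": amplifiers, "iterators": iterators}

print(headcount_for_new_teams(3))
# {'amplifiers': (3, 6), 'iterators': (15, 30)}

# And the manager-"worth" figure: responsible for ~half the team's output, with 5-10 reports,
# a manager is "worth" roughly 0.5 * 5 = 2.5 to 0.5 * 10 = 5 employees.
manager_worth = (0.5 * 5, 0.5 * 10)   # (2.5, 5.0)
```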
Fwiw, I think that research engineer is a pretty Iterator-specced role, although with different technical requirements from, e.g., “Member of Technical Staff” and “Research Scientist”, and that pursuing an experimental agenda that requires building a lot of your own tools (with an existing software development background) is probably great prep for that position. My guess is that MATS scholars focused on evals, demos, scalable oversight, or control could make strong research engineers down the line, and that things like CodeSignal tests would help catch strong Research Engineers in the wild.
...we’re looking for someone with experience in a research or engineering environment, who is excited about and experienced with people and project management, and who is enthusiastic about our research agenda and mission.
I’d also predict that, if management becomes a massive bottleneck to Anthropic scaling, they would restructure somewhat to make the prerequisites for these roles a little less demanding (as has DeepMind, with their People Managers, as opposed to Research Leads, and as have several growing technical orgs, as mentioned in the post).
This is a good point, and something that I definitely had in mind when putting this post together. There are a few thoughts, though, that would temper my phrasing of a similar claim:
Many interviewees said things like "I want 50 more iterators, 10 amplifiers to manage them, and 1-2 connectors." Interviewees were also working on diverse research agendas, meaning that each of these agendas could probably absorb >100 iterators if not for managerial bottlenecks and, to a lesser extent, funding constraints. This is even more true if those iterators have sufficient research taste (experience) to design their own followup experiments.
This points toward abundant low-hanging fruit and a massive experimental backlog field-wide. For this reason and others, I'd probably bump up the 100 number in your hypothetical by a few OOMs, which, given the growth of the field (fast in an absolute sense, but slow relative to our actual needs/funds), probably means the need for iterators holds even in long timelines, particularly if read as "for at least a few months, please prioritize making more iterators and amplifiers" and not "for all time, no more connectors are needed."
If we just keep tasting the soup, and figuring out what it needs as we go, we'll get better results than if any one-time appraisal or cultural mood becomes dogma.
There's a line I hear paraphrased a lot by the ex-physicists around here, from Paul Dirac, about physics in the early days of quantum mechanics: it was a time when "second-rate physicists could do first-rate work." The AI safety situation seems similar: the rate of growth, the large number of folks who've made meaningful contributions, the immaturity of the paradigm, the proliferation of divergent conceptual models, all point to a landscape in which a lot of dry scientific churning needs doing.
I definitely agree that marginal 'more-of-the-same' talent has diminishing returns. But I also think diverse teams have a multiplicative effect, and my intention in the post is to advocate for a diversified talent portfolio (as in the numbered takeaways section, which is in some sense a list of priorities, but in another sense a list of considerations that I would personally refuse to trade off against if I were the Dictator of AI Safety Field-building). That is, you get more from 5 iterators, one amplifier, and one connector working together on mech interp, than you do from 30 iterators doing the same. But I wasn't thinking about building the mech interp talent pool from scratch in a frictionless vacuum; I was looking at the current mech interp talent pool and trying to see how far it is, right now, from its ideal composition, then fill those gaps (where job openings, especially at small safety orgs, and preferences of grant makers, are a decent proxy for the gaps).
Sorry to go so hard in this response! I've just been living inside this for 4-5 months, and a lot of this type of background was cut from the initial post for concision and legibility (neither of which is particularly native to me). I'd hoped the comment section might be a good place for me to provide more context and tempering, so thanks so much for engaging!
Two years ago, short timelines to superintelligence meant decades. That you would structure this bet such that it resolves in just a few years is itself evidence that timelines are getting shorter.
That you would propose the bet at even odds doesn't exactly signal confidence, either.
Finally, what money means after superintelligence is itself hotly debated, especially for worst-case doomers (the people most likely to expect three-year timelines to ASI are also the exact people who don't expect to be alive to collect their winnings).
I think it's possible for a betting structure to prove some kind of point about timelines, but this isn't it.