AI forecasting & strategy at AI Impacts. Blog: Not Optional.
I haven't read the paper, don't plan to engage in much discussion, and am stating beliefs without justification, but I'm commenting briefly since you asked readers to explain disagreement:
I think this framework is bad and the probabilities are far too low, e.g.:
Separately, note that "AI that can quickly and affordably be trained to perform nearly all economically and strategically valuable tasks at roughly human cost or less" is a much higher bar than the-thing-we-should-be-paying-attention-to (which is more like takeover ability; see e.g. Kokotajlo).
Not really, or it depends on what kinds of rules the IAIA would set.
For monitoring large training runs and verifying compliance, see Verifying Rules on Large-Scale Neural Network Training via Compute Monitoring (Shavit 2023).
For more sketching of auditing via model evals, see Model evaluation for extreme risks (DeepMind 2023).
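(Purely to illustrate the kind of compute accounting such monitoring relies on: a toy sketch of my own, not from either paper, with made-up chip numbers and a made-up threshold.)

```python
# Toy illustration of compute accounting for a reported training run.
# All numbers below are placeholders, not real rules or real hardware specs.

def estimated_training_flop(num_chips: int, peak_flop_per_s: float,
                            utilization: float, seconds: float) -> float:
    """Rough total training compute: chips x peak throughput x utilization x wall-clock time."""
    return num_chips * peak_flop_per_s * utilization * seconds

REPORTING_THRESHOLD_FLOP = 1e25  # illustrative threshold, not any actual regulation

run = estimated_training_flop(
    num_chips=10_000,
    peak_flop_per_s=3e14,    # ~300 TFLOP/s per accelerator (placeholder)
    utilization=0.5,
    seconds=90 * 24 * 3600,  # ~90 days
)
print(f"Estimated run compute: {run:.2e} FLOP")
if run >= REPORTING_THRESHOLD_FLOP:
    print("Exceeds the illustrative threshold; would trigger reporting/verification.")
```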
I disagree. In particular:
(Of course you should be friendly and not waste weirdness points.)
Some categories where extraordinary evidence is common, off the top of my head:
Separately, fwiw I endorse the Mark Xu post but agree with you that (there's a very reasonable sense in which) extraordinary evidence is rare for stuff you care about. Not sure you disagree with "extraordinary evidence is common" proponents.
This is quite surprising to me. For the record, I don't believe that the authors believe that "carry out as much productive activity as one of today’s largest corporations" is a good--or even reasonable--description of superintelligence or of what's "conceivable . . . within the next ten years."
And I don't follow Sam's or OpenAI's communications closely, but I've recently noticed them seeming to decline to talk about AI as if it's as big a deal as I think they think it is. (Context for those reading this in the future: Sam Altman recently gave congressional testimony which, after briefly engaging with it, I think was mostly good, but notable in that Sam focused on weak AI and sometimes actively avoided talking about how big a deal AI will be and about x-risk, in a way that felt dishonest.)
(Thanks for engaging.)
This was hard to read, emotionally.
Some parts are good. I'm confused about why OpenAI uses euphemisms like
"it’s conceivable that within the next ten years, AI systems will exceed expert skill level in most domains, and carry out as much productive activity as one of today’s largest corporations."
(And I heard MATS almost had a couple strategy/governance mentors. Will ask them.)
(Again, thanks for being constructive, and in the spirit of giving credit, yay to GovAI, ERA, and CHERI for their summer programs. [This is yay for them trying; I have no knowledge of the programs and whether they're good.])
I mean, I don't think all of your conditions are necessary (e.g. "We invent a way for AGIs to learn faster than humans" and "We massively scale production of chips and power"), and I think that together they carve reality quite far from the joints, such that breaking the AGI question into these subquestions doesn't help you think more clearly [edit: e.g. because compute and algorithms largely trade off, so concepts like 'sufficient compute for AGI' or 'sufficient algorithms for AGI' aren't useful].