Holden Karnofsky


Comments for shorter Cold Takes pieces

[Placeholder for Describing Utopia comments]

Comments for shorter Cold Takes pieces

[Placeholder for Progress Studies comments]

“Biological anchors” is about bounding, not pinpointing, AI timelines

I think the sense in which your case is disjunctive is mostly that there are multiple potential "PONR-inducing tasks" (PONR: point of no return), and multiple potential ways to get to each one (brute-force trial-and-error on the full task, generalization from easier-to-learn tasks, decomposition into easier-to-learn tasks, a breakthrough new paradigm). But this sort of disjunctiveness seems like it was fundamentally there in 1970 and in 1990 as well. If it didn't predict transformative AI (or PONR AI) within 15 years then, what's different today?

I'm guessing your answer is something like: "Today, we are close to being able to train human-brain-sized models, if only on small-number-of-timestep tasks." I do think that's relevant. But GPT-3 has been out for more than a year, is within a factor of 1000 of the "human brain size" threshold, and seemingly nobody has found a way to get it to do something that looks much like a human performing an economically relevant task. That doesn't seem like enough to get over 50% probability by 2036.

Comments for shorter Cold Takes pieces

Comments for "Rowing, Steering, Anchoring, Equity, Mutiny" will go here for now. I hope to post the whole piece to the Forum separately, but I'm currently having trouble with formatting. I will post a link to it when it's up so that future comments can go there.

Has Life Gotten Better?

The fifth piece in this series is "Unraveling the evidence about violence among very early humans." I suggest that any comments on it go in this thread.
