Absolutely. A few comments:
I work at Netflix on the recommender. It's interesting to read this abstract article about something that's very concrete for me.
For example, the article asks, "The key question any model of the problem needs to answer is - why aren't recommender systems already aligned?"
Despite working on a recommender system, I genuinely don't know what this means. How does one go about measuring how much a recommender is aligned with user interests? Like, I guarantee 100% that people would rather have the recommendations given by Netflix ...
I'm not sure users definitely prefer the existing recommendations to random ones - I've actually been trying to turn off YouTube recommendations because they make me spend more time on YouTube than I want. Meanwhile, other recommendation systems send me news that is worse on average than the rest of the news I consume (from different channels). So in some cases at least, we could use a very minimal standard: a system is aligned if the user is better off because the recommendation system exists at all.
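To make that minimal standard concrete, the cleanest operationalization I can think of is a long-running holdout: keep a small group of users with recommendations disabled and compare a welfare proxy between the two groups. A minimal sketch, where `welfare_score` is a hypothetical proxy (surveys, self-reported regret) and the data plumbing is invented:

```python
import random
from statistics import mean

def holdout_alignment_check(users, welfare_score, holdout_frac=0.05, seed=0):
    """Compare a welfare proxy between users with recommendations on vs. off.

    users: list of user ids.
    welfare_score(user_id, recs_enabled): hypothetical welfare proxy,
        e.g. survey responses or self-reported regret - NOT watch time.
    Returns mean welfare with recs on minus mean welfare with recs off;
    the system clears this minimal bar if the difference is positive.
    """
    rng = random.Random(seed)
    holdout = set(rng.sample(users, max(1, int(len(users) * holdout_frac))))
    treated = [welfare_score(u, recs_enabled=True) for u in users if u not in holdout]
    control = [welfare_score(u, recs_enabled=False) for u in holdout]
    return mean(treated) - mean(control)
```

The whole difficulty is hiding in the proxy: if `welfare_score` is engagement, the check begs the question, which is exactly the YouTube failure mode above.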
This is a pretty blunt metric, and proba...
Excellent post.
I want to highlight something that I missed on the first read but nagged me on the second read.
You define transformative AGI as:
1. Gross world product (GWP) exceeds 130% of its previous yearly peak value
2. World primary energy consumption exceeds 130% of its previous yearly peak value
3. Fewer than one billion biological humans remain alive on Earth
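To be fair, conditions 1 and 2 are at least crisply checkable: each is a threshold test against the running historical peak. A minimal sketch of that check (the sample figures are placeholders, not real GWP data):

```python
def exceeds_previous_peak(series: dict[int, float], year: int, ratio: float = 1.3) -> bool:
    """True if the value in `year` exceeds `ratio` times the peak of all prior years."""
    prior_peak = max(v for y, v in series.items() if y < year)
    return series[year] > ratio * prior_peak

# Placeholder series (illustrative only): a ~35% jump in 2043 trips the condition.
gwp = {2040: 100.0, 2041: 104.0, 2042: 108.0, 2043: 145.8}
assert exceeds_previous_peak(gwp, 2043)  # 145.8 > 1.3 * 108.0
```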
You predict when transformative AGI will arrive by building a model that predicts when we'll have enough compute to train an AGI.
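If I'm reading the model right, its shape is: assume a training-compute requirement for AGI, extrapolate available compute forward, and solve for the crossing year. A toy version, where every number is my own illustrative placeholder rather than anything from the post:

```python
import math

def crossing_year(start_year: int, flops_now: float,
                  growth_per_year: float, flops_required: float) -> int:
    """First year in which available compute reaches the assumed AGI training requirement.

    Solves flops_now * growth_per_year**t >= flops_required for integer t.
    """
    t = math.log(flops_required / flops_now) / math.log(growth_per_year)
    return start_year + max(math.ceil(t), 0)

# Illustrative placeholders: 1e26 FLOP available now, doubling yearly, 1e30 needed.
print(crossing_year(2024, 1e26, 2.0, 1e30))  # -> 2038
```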
But I feel like there's a giant missing link - what are the odds tha...