I'm a senior at Harvard, where I run the Harvard AI Safety Team (HAIST). I also do research with David Krueger's lab at Cambridge University.
I'm pretty unconvinced that your argument that the evidence "suggests a significant number of fundamental breakthroughs remain to achieve PASTA" is strong enough to justify odds of "approximately 0," especially when that evidence is mostly an expectation that tasks will stay hard as we scale (something that seems hard to predict, and easy to get wrong). Though innovation in certain domains may involve long episode lengths and inaccurate human evaluation, innovation in other fields (e.g., math) could easily avoid this problem (i.e., in cases where verifying a solution is much easier than producing one).
I'd like his thoughts on focusing on "Long-Termism" vs. "Existential Risk," especially in the context of community building (i.e., responding to "Long-Termism" vs. "Existential Risk" and "Simplify EA Pitches to 'Holy Shit, X-Risk'").
Worth noting that (1) the AST is for people already planning to go into alignment after graduating (and isn't an intro program), and (2) I usually have backups prepared in case people have already read the thing (I don't think showing up 30 minutes in would be great!).
Thanks for the Harvard AI Safety Team shout-out! I do think in-person reading is great, because it (1) creates a super low barrier to showing up, and (2) feels good/productive to be in a room with everyone silently reading. Two points on this: