I'm a senior at Harvard, where I run the Harvard AI Safety Team (HAIST). I also do research with David Krueger's lab at Cambridge University.
I'm pretty unconvinced that your "suggests a significant number of fundamental breakthroughs remain to achieve PASTA" is strong enough to justify odds of "approximately 0," especially when the evidence is mostly just expecting tasks to stay hard as we scale (something that seems hard to predict and easy to get wrong). Though innovation in certain domains may involve long episode lengths and inaccurate human evaluation, innovation in other fields (e.g., math) could easily avoid this problem, namely in cases where verifying a solution is much easier than producing one.
Thanks for this! Want to note that this was co-authored with 7 other people (the names weren't transferred when it was crossposted from LW).