Impact Markets link: https://app.impactmarkets.io/profile/clfljvejd0012oppubuwne2k2
I think his answer is here:
Some hope for some sort of international treaty on safety. This seems fanciful to me. The world where both the CCP and USG are AGI-pilled enough to take safety risk seriously is also the world in which both realize that international economic and military predominance is at stake, that being months behind on AGI could mean being permanently left behind. If the race is tight, any arms control equilibrium, at least in the early phase around superintelligence, seems extremely unstable. In short, "breakout" is too easy: the incentive (and the fear that others will act on this incentive) to race ahead with an intelligence explosion, to reach superintelligence and the decisive advantage, too great.
At the very least, the odds we get something good-enough here seem slim. (How have those climate treaties gone? That seems like a dramatically easier problem compared to this.)
I think we still see really good engagement with the videos themselves. The average view duration for the AI video is currently 58.7% of the video's length, and 25% of viewers watched the whole video.
This average percentage refers to organic traffic only, right? The APV (average percentage viewed) for paid traffic must be much lower, something like 5%?
There's a perhaps naive way of reading their plan that leads to this objection:
"Once we have AIs that are human-level AI alignment researchers, it's already too late. That's already very powerful and goal-directed general AI, and we'll be screwed soon after we develop it, either because it's dangerous in itself or because it zips past that capability level fast since it's an AI researcher, after all."
What do you make of it?
Can I freely promote your courses on Rational Animations? I think it would be a good idea, since people can go through the readings by themselves. My calls to action would be similar to this post I made on the Rational Animations subreddit: https://www.reddit.com/r/RationalAnimations/comments/146p13h/the_ai_safety_fundamentals_courses_are_great_you/
For me, perhaps the biggest takeaway from Aschenbrenner's manifesto is that even if we solve alignment, we still face an incredibly thorny coordination problem between the US and China: each is massively incentivized to race ahead and develop military power using superintelligence, putting both countries and the rest of the world at immense risk. And I wonder whether, having seen this in advance, we can sit down and solve this coordination problem in a way that offers a higher chance of a good outcome than the "race ahead" strategy and doesn't risk a period of incredibly volatile geopolitical instability in which both nations develop, and possibly use, never-before-seen weapons of mass destruction.
Edit: although I can see how any attempt to intervene and raise the salience of the issue risks making the situation worse.