Currently doing local AI safety movement building in Australia and NZ.
As an AI safety person who believes short timelines are very possible, I'm extremely glad to see this shift.
For those who are disappointed, I think it's worth mentioning that I just took a look at the Probably Good website and it seems much improved since I last checked. I had previously been a bit reluctant to recommend it, but it now seems like a pretty good resource, and I'm sure they'll be able to make it even better with more support.
Given that The 80,000 Hours Podcast is increasing its focus on AI, it's worth highlighting Asterisk Magazine as a good resource for exploring a broader set of EA-adjacent ideas.
I've only just CTRL-F'd the report, so I could have missed something, but I guess the key question for me is: what does a multilateral project mean in terms of security/diffusion of the technology?
My intuition is that preventing diffusion of the tech in a multilateral project would be hard, if not impossible, and I see this consideration as something that could kill the desirability of such a project by itself, even if there are several other strong arguments in favour.
I know you mention this in the potential future work section, but I do think it would be worth editing in a paragraph or two on why you think we might want to consider this model anyway (it's impossible to address everyone's pet objection, but my guess is that this will prove to be one of the major objections people make).
I have neither upvoted nor downvoted this post.
I suspect that the downvoting is because the post assumes this is a good donation target rather than making the argument for it (even a paragraph or two would likely make a difference). Some folks may feel that it's bad for the community for posts like this to be at +100, even if they agree with the concrete message, as it undermines the norm of EA Forum posts containing high-quality reasoning rather than other kinds of appeals.
I think it's worth bringing in the idea of an "endgame" here, defined as "a state in which existential risk from AI is negligible either indefinitely or for long enough that humanity can carefully plan its future".
Some waypoints are endgames, some aren't, and some may be treated as an endgame by one strategy but not by another.
For the record, I see the new field of "economics of transformative AI" as overrated.
Economics has some useful frames, but it also tilts people towards being too "normy" on the impacts of AI, and its track record on advanced AI so far is not very good.
I'd much rather see multidisciplinary programs/conferences/research projects, including economics as just one of the perspectives represented, than economics of transformative AI qua economics of transformative AI.
(I'd have been more enthusiastic about building economics of transformative AI as a field if we'd started five years ago, but these things take time and it's pretty late in the game now, so I'm less enthusiastic about investing field-building effort here and more enthusiastic about pragmatic projects combining a variety of frames.)