Nonexistence is preferable to intense suffering, and I think there are enough S-risks associated with the array of possible futures ahead of us that we should prioritize reducing S-risks over X-risks, except when reducing X-risks is instrumental to reducing S-risks. So to be specific, I would only agree with this to the extent that "value" == lack of suffering -- I do not think we should build for a utopia that might never come to pass because we wipe ourselves out first; I just think it is vastly more important to prevent dystopia.
A quick counterargument from the alt-protein side: while $100k to an animal welfare nonprofit might alleviate $100k worth of suffering, it isn't going to lead to a state change unless it facilitates a permanent intervention that meat producers would have no incentive to reverse. The same amount of money directed toward innovation in cultivated meat is progress toward a potential nonlinear tipping point that could fully displace factory farming, and I don't think we should take it as a guarantee that alt-protein technologies will break through and disrupt the meat market without the right amount of wind in their sails.
I broadly agree with the thesis that AI safety needs to incorporate sociopolitical thinking rather than shying away from it. No matter what we do on the technical side, the governance side will always run up against moral and political questions that have existed for ages. That said, I have some specific points to raise about this post, most of which are misgivings:
the "Left wants control, Right wants decentralization" dichotomy here seems not only narrowly focused on Republican - Democrat dynamics, but also wholly incorrect in terms of what kinds of political ideology actually leads one to support one versus the other. Many people on the left would argue uncompromisingly for decentralization due to concerns about imperfect actors cementing power asymmetries through AI. Much of the regulation that the center-left advocates for is aimed at preventing power accumulation by private entities, which could be viewed as a safeguard against the emergence of some well-resourced, corporate network-state that is more powerful than many nation-states. I think Matrice nailed it above in that we are all looking at the same category-level, abstract concerns like decentralization versus control, closed-source versus open-source, competition versus cooperation, etc. but once we start talking about object-level politics -- Republican versus Democratic policy, bureaucracy and regulation as they actually exist in the U.S. -- it feels, for those of us on the left, like we have just gotten off the bus to crazy town. The simplification of "left == big government, right == small government" is not just misleading; it is entirely the wrong axis for analyzing what left and right are (hence the existence of the still oversimplified, but one degree less 1-dimensional, Political Compass...). It seems to me that it is important for us to all step outside of our echo chambers and determine precisely what our concerns and priorities are so we can determine areas where we align and can act in unison.
Some points I liked / agree with: