OK, but this post is about drawing an analogy between the degrowth debate and the AI pause debate, and I don't see the analogy. Do you disagree with my argument for why they aren't analogous?
Especially if you disagree, explain why or upvote a comment that roughly reflects your view rather than downvoting. Downvoting controversial views only hides them rather than confronting them.
As a meta-comment, please don't assume that anyone who downvotes does so because they disagree, or only because they disagree. A post being controversial doesn't mean it must be useful to read, any more than it means it must not be useful to read. I vote on posts like this based on whether they said something that I think deserves more attention or helped me understand something better, regardless of whether I think it's right or wrong.
While I agree there are similarities in the form of argument between degrowth and AI pause, I don't think those similarities are evidence that the two issues should have the same conclusion. There's simply nothing at all inconsistent about believing all of these at the same time:
Almost the entire question, for resolving either of these issues, is working out whether these premises are really true or not. And that's where the similarities end, IMO: there's not much analogy between the relevant considerations for feasibility of AI pause and feasibility of degrowth, or desirability of either outcome, that would lead us to think it's surprising for them to have different answers.
I don't know if you intended it this way, but I read this comment as saying "the author of this post is missing some fairly basic things about how IR works, as covered by introductory books about it". If so, I'd be interested in hearing you say more about what you think is being missed.
In the name of trying to make legible what I think is going on in the average non-expert's head about this, I'm going to say a bunch of things I know are likely to be mistaken, incomplete, or inadequately sophisticated. Please don't take these as endorsements that this thinking is correct; it's just what I see when I inspect my own instincts about this, and I suspect other casual spectators might have the same ones.
It feels intuitive that Google and OpenAI and Anthropic etc. are more likely to co-operate with each other than any of them are to co-operate with Alibaba or Tencent. This is for a mixture of practical reasons (they're governed by the same or similar courts of law, so contracts between them seem likely to be cheaper and more reliable, and there are fewer language barriers) and cultural reasons (they're run by people who grew up in a similar environment and were told similar things about what kind of person they ought to be, their employees are more likely to socialize with each other, that sort of thing). That said, it does also seem likely that Google stands to gain more from the failure of Microsoft than from the failure of Alibaba: maybe we can think of the US companies as simultaneously closer friends and closer enemies with each other?
I do also imagine that both the US and Chinese governments have the potential to step in when corporations in their country get too powerful. In particular (again, not coming from a place of expertise on this, just a casual impression), the Chinese government appears more willing and able to seize or direct privately-owned resources in the name of the national interest; I'm thinking of when they more or less told a hundred-billion-dollar industry to stop existing.
I think there's also a mostly-psychological factor at play: if I were a US citizen, I'd have a share in US governance as a member of the electorate, and while I might not have a share in US corporate governance, at least there is a board of directors that's nominally accountable to shareholders, many ordinary people could be shareholders, and if the thing is privately owned there is at least some pressure from the government, so indirect accountability to me. I can feel like my interests have a stake, a representation, though of course my individual share is so small that its value in anything other than a symbolic sense is questionable. Meanwhile, I feel like I have essentially zero effective influence over Chinese government or corporations; while this simplifies some realities in both directions, understood psychologically or symbolically the difference is there. This, I suspect, leads people to think of all US corporations as essentially amenable to reason, or at least coercion, while thinking of Chinese corporations as having no obligation to listen to them at all, even in aggregate.
All this to say that I don't think aggregating US corporate power and Chinese corporate power has no basis in reason or reality. Mostly I say this in the belief that we can and should undo that aggregation more, and that understanding why people do it and where it comes from might be useful for that purpose.
I'd like to see more of an analysis of where we are now with what people want from CH, what they think it does, and what it actually does, and to what extent and why gaps exist between these things, before we go too deep into what alternative structures could exist. Currently I don't feel like we really understand the problem well enough to solve it.
People talk about AI resisting correction because successful goal-seekers "should" resist their goals being changed. I wonder if this also acts as an incentive for AI to attempt takeover as soon as it's powerful enough to have a chance of success, instead of (as many people fear) waiting until it's powerful enough to guarantee it.
Hopefully the first AI powerful enough to potentially figure out that it wants to seize power and has a chance of succeeding is not powerful enough to passively resist value change, so acting immediately will be its only chance.
Concerns about "bosses raking in profits" seem pretty weird to raise in a thread about a nonprofit, in a community largely made up of nonprofits. There might be something in your proposal in general, but it doesn't seem relevant here.
I understand you're not interested in replies to this comment, but for the sake of other readers I'll point out parts of it that seem wrong to me:
I disagree with the suggestion that there was something sinister about a policy of 'we don't talk badly about you and you don't talk badly about us'. That is a rephrasing of a fairly standard social (and literal) contract which exists between the majority of people all around the world. As somebody who works for the government making policy, you bet I'd lose my job outright if I publicly criticised them. But I would also expect a variation of this from most employers.
I think it's reasonable for an employer to no longer want to hire you or work with you if you're saying bad things about them, but I don't think it's appropriate for them to try to limit what you say beyond that. In particular, it's not appropriate for your employer to try to hurt you or your future career prospects at other employers because you talked about having a bad time with them.
Whistleblower protections aren't exactly analogous, because AIUI they're about disclosure to government authorities, rather than to the general public, and that's a significant enough difference that it makes sense to treat them separately. But it's nevertheless interesting to note that if you disclose certain kinds of wrongdoing in certain ways, your employer isn't even allowed to fire you, let alone retaliate beyond that. These rules are important: it's difficult and unpleasant to be in that situation, but if that's where you end up, protecting the employee over the employer is IMO the right call.
[...] autophagic self-immolation process [...] toxic witchhunts [...] pillory and vigilantism
I get that threats like these are very painful for the people involved. However, I don't think there's any real non-painful way for people to confront the reality that they've hurt others through mistakes they've made, and there's no non-painful way to say "we, as a community, must recommend that people guard themselves against being hurt by these people". You hint that there are other ways to handle these things, but you don't say what they are.
I think we could probably come up with a system that's kinder to the accused than this one. But granting that such a system would sometimes still demand that we warn other prospective employees and funders about what happened, there's no world I can see that contains no posts like this. I think it's reasonable to believe that Kat and Nonlinear should have had more time to make their case, but ultimately, if they fail to make it, that fact has to be made public, and there's no enjoyable way to do that.
I think this would be a good top-level post.