Anything I write here is written purely on my own behalf, and does not represent my employer's views (unless otherwise noted).
I don't think people object to these topics being heated either. I think there are probably (at least) two things going on:
Either way, I don't think the problem is centrally about exclusionary beliefs, and I also don't think it's centrally about disagreement. But anyway, it sounds like we mostly agree on the important bits.
I was a bit confused by this comment. I thought "controversial" commonly meant something more than just "causing disagreement", and indeed I think that seems to be true. Looking it up, the OED defines "controversial" as "giving rise or likely to give rise to controversy or public disagreement", and "controversy" as "prolonged public disagreement or heated discussion". That is, a belief being "controversial" implies not just that people disagree over it, but also that there's an element of heated, emotional conflict surrounding it.
So it seems to me like the problem might actually be controversial beliefs, and not exclusionary beliefs? For example, antinatalism, communism, anarcho-capitalism, vaccine skepticism, and flat earthism are all controversial, and could plausibly cause the sort of controversy being discussed here, while not being exclusionary per se. (There are perhaps also some exclusionary beliefs that are not that controversial and therefore accepted, e.g., some forms of credentialism, but I'm less sure about that.)
Of course I agree that there's no good reason to exclude topics/people just because there's disagreement around them -- I just don't think "controversial" is a good word to fence those off, since it has additional baggage. Maybe "contentious" or "tendentious" are better?
Perhaps Obamacare is one example of this in America? I think Trump had a decent amount of rhetoric saying he would repeal it, but then didn't do anything when he reached power.
My recollection was that Trump spent quite a lot of effort trying to repeal Obamacare, but in the end didn't get the votes he needed in the Senate. Still, I think your point that actual legislation often looks different from campaign promises is a good one.
Let me see if I can rephrase your argument, because I'm not sure I get it. As I understand it, you're saying:
Now I'm a bit unsure whether you're saying that you find it extremely unlikely that any AI will be vastly better than all humans in the areas I mentioned, or that you find it extremely unlikely that any AI will be vastly better than all humans and all other AIs in those areas.
If you mean 1-4 to suggest that no AI will be better than all humans and other AIs, I'm not sure whether 4 follows from 1-3, but it seems plausible at least. But if this is what you mean, I'm not sure what your original comment ("Note humans are also trained on all those abilities, but no single human is trained to be a specialist in all those areas. Likewise for AIs.") was meant to say in response to my original comment, which was meant as pushback against the view that AGI would be bad at taking over the planet since it wouldn't be intended for that purpose.
If you mean 1-4 to suggest that no AI will be better than all humans, I don't think the analogy holds, because the underlying factor (IQ versus AI scale/algorithms) is different. Like, it seems possible that even unspecialized AIs could just sweep past the most intelligent and specialized humans, given enough time.
For an agent to conquer the world, I think it would have to be close to the best across all those areas
That seems right.
I think this is super unlikely based on it being super unlikely for a human to be close to the best across all those areas
I'm not sure that follows? I would expect improvements on these types of tasks to be highly correlated in general-purpose AIs. I think we've seen that with GPT-3 to GPT-4, for example: GPT-4 got better pretty much across the board (excluding the tasks that neither of them can do, and the tasks that GPT-3 could already do perfectly). That is not the case for a human, who will typically improve in just one domain or a few from one year to the next, depending on where they focus their effort.
A 10-fold increase in the number of GPUs above GPT-5 would require a 1 to 2.5 GW data center, which doesn’t exist and would take years to build, OR would require decentralized training using several data centers. Thus GPT-5 is expected to mark a significant slowdown in scaling runs.
Why do you think decentralized training using several data centers will lead to a significant slowdown in scaling runs? Gemini was already trained across multiple data centers.
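For what it's worth, here's the back-of-envelope arithmetic I take the quoted claim to be making; the baseline cluster power figure below is my own assumption, not something stated in the post:

```python
# Rough sketch of the implied arithmetic (assumed numbers, not from the post):
# if a GPT-5-scale training cluster draws roughly 100-250 MW, then a 10-fold
# increase in GPU count at similar power per GPU implies a 1-2.5 GW data center.
baseline_cluster_power_mw = (100, 250)  # assumed power draw of a GPT-5-scale cluster, in MW
scale_factor = 10                       # 10-fold increase in the number of GPUs

implied_power_gw = tuple(p * scale_factor / 1000 for p in baseline_cluster_power_mw)
print(implied_power_gw)  # (1.0, 2.5) -- matches the quoted 1 to 2.5 GW range
```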
Interesting post! Another potential downside (which I don't think you mention) is that strict liability could disincentivize information sharing. For example, it could make AI labs more reluctant to disclose new dangerous capabilities or incidents (when that's not required by law). That information could be valuable for other AI labs, for regulators, for safety researchers, and for users.
I don't think you're wrong exactly, but AI takeover doesn't have to happen through a single violent event, or through a treacherous turn or whatever. All of your arguments also apply to the situation with H. sapiens and H. neanderthalensis, but those factors did not prevent the latter from going extinct largely due to the activities of the former:
The fact that those considerations were not enough to prevent Neanderthal extinction is one reason to think they are not enough to prevent AI takeover, although of course the analogy is not perfect or conclusive, and it's just one reason among several. A couple of relevant parallels include: