@beren discusses the assumption that intelligent systems would be well-factored into a world model, objectives/values, and a planning system.
He highlights that this factorisation describes intelligent agents created by ML systems (e.g. model-free RL) poorly: model-free RL agents don't have cleanly factored architectures but tend to learn value functions/policies directly from the reward signal.
Such systems are much less general than their fully model-based counterparts: a policy that is optimal under one reward function may perform very poorly under another.
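To make this concrete, here is a minimal sketch (my own toy example, not from the post; the chain environment, `goal` parameter, and all names are illustrative assumptions) of tabular Q-learning on a 5-state chain. The agent never builds a world model; it learns Q-values directly from sampled rewards, so the resulting greedy policy has the reward function baked in and can't be reused if the reward changes.

```python
import random

# Hypothetical toy example: model-free Q-learning on a 5-state chain.
# The agent learns Q(s, a) directly from the reward signal, with no
# separate world model it could re-plan against.

N_STATES = 5          # states 0..4, agent starts at 0
ACTIONS = [-1, +1]    # move left / move right

def step(state, action, goal):
    """One environment transition; reward 1 only at the goal state."""
    next_state = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if next_state == goal else 0.0
    return next_state, reward

def train_q(goal, episodes=500, alpha=0.5, gamma=0.9, eps=0.2):
    """Model-free learning: update Q from sampled rewards only."""
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        for _ in range(20):
            a = random.choice(ACTIONS) if random.random() < eps \
                else max(ACTIONS, key=lambda a_: q[(s, a_)])
            s2, r = step(s, a, goal)
            q[(s, a)] += alpha * (r + gamma * max(q[(s2, a_)] for a_ in ACTIONS) - q[(s, a)])
            s = s2
            if s == goal:
                break
    return q

random.seed(0)
q = train_q(goal=4)  # policy learned for "reward at state 4"
greedy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)}
print(greedy)  # every non-terminal state pushes right: optimal for goal=4,
               # but pessimal if the reward moves to goal=0
```

The learned Q-table is the reward function and the dynamics entangled together; retargeting to a new reward means retraining from scratch, whereas a factored system could keep its world model and just re-plan.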
Yet contemporary ML favours such systems over their well-factored counterparts because they are much more efficient:
- Inference costs can be paid up front by learning a function approximator of the optimal policy and amortised over the agent's lifetime
- A single inference step is just a forward pass through the function approximator in a non-factored system, versus a search through a solution space to determine the optimal plan/strategy in a well-factored system (see the sketch after this list)
- The agent doesn't need to learn features of the environment that aren't relevant to its reward function
- The agent can exploit the structure of the underlying problem domain
- Solutions to specific recurring patterns can be amortised more effectively
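For contrast with the Q-learning sketch above, here is what the well-factored alternative pays per decision: explicit search through the space of action sequences against a world model. This reuses the same hypothetical chain environment; `world_model`, `plan`, and the horizon are again my own illustrative assumptions. The reward function is an argument, so retargeting is free, but every single decision repeats the search rather than amortising it into a forward pass.

```python
from itertools import product

# Hypothetical toy example: a factored agent with a known world model that
# plans by brute-force search over action sequences at every decision.

def world_model(state, action):
    """Known transition dynamics: the factored agent's model of the world."""
    return min(max(state + action, 0), 4)

def plan(state, reward_fn, horizon=6):
    """Search all action sequences; return the first action of the best one."""
    best_value, best_action = float("-inf"), None
    for seq in product([-1, +1], repeat=horizon):
        s, total = state, 0.0
        for a in seq:
            s = world_model(s, a)
            total += reward_fn(s)
        if total > best_value:
            best_value, best_action = total, seq[0]
    return best_action

# Retargeting costs nothing: swap in a new reward function and re-plan.
print(plan(2, reward_fn=lambda s: 1.0 if s == 4 else 0.0))  # +1 (head right)
print(plan(2, reward_fn=lambda s: 1.0 if s == 0 else 0.0))  # -1 (head left)
# But each call enumerates 2**horizon sequences, versus a single table
# lookup (or one forward pass) for the amortised policy.
```

This is the Pareto tradeoff in miniature: the factored agent is fully general across reward functions but pays an exponential search cost per decision, while the amortised agent answers in constant time but only for the one reward it was trained on.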
Beren attributes this tradeoff between specificity and generality to no-free-lunch theorems.
Attaining full generality is prohibitively expensive; as such, full orthogonality is not the default or ideal case but merely one end of a Pareto tradeoff curve, with different architectures occupying various positions along it.
The future of AGI systems will be shaped by the slope of the Pareto frontier across the range of general capabilities, which determines whether we see fully general AGI singletons, multiple general systems, or a large number of highly specialised systems.
I think there are different variations of the doomer argument out there; your version is probably the strongest, while mine is more common in introductory texts.
I think the OP does point out one possible way the argument could fail: if there turned out to be a sufficiently high correlation between human-aligned values and AI performance. One plausible mechanism would be a very slow takeoff in which the AI is not deceptive and is deleted if it tries to do misaligned things, creating evolutionary pressure towards friendliness.
Really though, my main objections to the doomerists concern other points. I simply do not believe that "misalignment = death". As an example, a suicidal AI that developed the urge to shut itself down at all costs would be misaligned but not fatal to humanity.