In the AI safety literature, AI alignment is often presented as conceptually distinct from capabilities. However, (1) the distinction seems somewhat fuzzy and (2) many techniques that are supposed to improve alignment also improve capabilities.
(1) The distinction is fuzzy because one common way of defining alignment is getting an AI system to do what the programmer or user intends. However, programmers intend for their systems to be capable: we want a chess system to win at chess, for example. So a system that wins more often is more intent-aligned, and it is also more capable.
(2) For example, this Irving et al. (2018) paper by a team at OpenAI proposes debate as a way to improve safety and alignment, where alignment is defined as agreement with human goals. However, in the paper, debate also improved the accuracy of an image classifier, and therefore improved capabilities as well.
Similarly, reinforcement learning from human feedback (RLHF) was initially presented as an alignment strategy, but my loose impression is that it also produced significant capability improvements. There are many other examples in the literature of alignment strategies also improving capabilities.
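To make the RLHF point concrete, here is a minimal sketch (my own illustration in PyTorch, not code from any of the papers above; all names and shapes are made up) of the pairwise preference loss typically used to train a reward model. The objective only says "score the human-preferred response higher"; nothing in it separates responses preferred for being safer from responses preferred for being more competent, which is part of why the same technique pushes on both alignment and capabilities.

```python
# Illustrative sketch of the pairwise (Bradley-Terry) loss used to train an
# RLHF reward model. Names, shapes, and the toy architecture are assumptions
# for illustration, not taken from any specific codebase.
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    """Toy reward model: maps a fixed-size response embedding to a scalar score."""
    def __init__(self, embed_dim: int = 128):
        super().__init__()
        self.score = nn.Sequential(
            nn.Linear(embed_dim, 64), nn.ReLU(), nn.Linear(64, 1)
        )

    def forward(self, response_embedding: torch.Tensor) -> torch.Tensor:
        return self.score(response_embedding).squeeze(-1)

def preference_loss(reward_chosen: torch.Tensor, reward_rejected: torch.Tensor) -> torch.Tensor:
    # Maximise the probability that the human-preferred response scores higher.
    # Nothing here distinguishes "preferred because it's safer" from
    # "preferred because it's more capable": both push the reward up.
    return -torch.nn.functional.logsigmoid(reward_chosen - reward_rejected).mean()

# Tiny usage example with random "embeddings" standing in for responses.
model = RewardModel()
chosen = torch.randn(8, 128)    # embeddings of human-preferred responses
rejected = torch.randn(8, 128)  # embeddings of dispreferred responses
loss = preference_loss(model(chosen), model(rejected))
loss.backward()
```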
---
This makes me wonder whether alignment really is more neglected than capabilities work. AI companies want to make aligned systems because they are more useful.
How do people see the difference between alignment and capabilities?
When people distinguish between alignment and capabilities, I think they’re often interested in the question of what research is good vs. bad for humanity. Alignment vs. capabilities seems insufficient to answer that more important question. Here’s my attempt at a better distinction:
There are many different risks from AI. Research can reduce some risks while exacerbating others. "Safety" and "capabilities" are therefore misleadingly reductive categories. Research should be assessed by its distinct impacts on many different risks and benefits. If a research direction is better for humanity than most other research directions, then perhaps we should award it the high-status title of "safety research."
Scalable oversight is a great example. It provides more accurate feedback to AI systems, reducing the risk that AIs will pursue objectives that conflict with human goals because their feedback has been inaccurate. But it also makes AI systems more commercially viable, shortening timelines and perhaps hastening the onset of other risks, such as misuse, arms races, or deceptive alignment. The cost-benefit calculation is quite complicated.
"Alignment" can be a red herring in these discussions, as misalignment is far from the only way that AI can lead to catastrophe or extinction.
Related: https://www.lesswrong.com/posts/zswuToWK6zpYSwmCn/some-background-for-reasoning-about-dual-use-alignment