I was listening to https://80000hours.org/podcast/episodes/paul-christiano-ai-alignment-solutions recently and found it very helpful. However, I was left with a question at the end of it.
What is the plan that organizations like OpenAI have to prevent bad outcomes from AGI? From how Paul Christiano frames it, it seems like it's "create AGI, and make sure it's aligned."
But I don't understand how this plan accounts for competition. To use a stupid analogy: if I were concerned that cars in America weren't safe, I might start my own car company to manufacture and sell safer cars. Maybe I'd spend a lot of time engineering a much safer car. But my efforts would be for naught if my cars weren't very popular (and hence my company wasn't very successful), even if they were groundbreakingly safe.
It seems like this latter part is most of the trick, at least in the domain of cars.
I'd like to understand in more detail how this analogy breaks down. I can imagine several ways, but would love to hear it straight from the horse's mouth.
Paul Christiano has a notion of competitiveness, which seems relevant. Directions and desiderata for AI control seems to be the place where it's stated most clearly.
The following quote (emphasis in the original) gives one of his reasons for wanting competitiveness, and it seems to be in the same ballpark as the concern you raise:
Seems right, though I don't know to what extent Paul's view is representative of OpenAI's overall view.