My intuition is that driving is a narrow enough domain that it should not require AGI; indeed, it should be solvable by a system of far less sophistication and reasoning capability than an AGI. SAE Level 5 autonomy — which requires a vehicle to be able to drive autonomously wherever and whenever a typical human driver could — has not been achieved by any company. All autonomous driving projects currently require a human in the loop, either in the driver’s seat or available to provide remote assistance.
In a world where AGI is achieved by, say, 2030 or 2035, what are the odds that Level 5 autonomy hasn’t been solved by 2023? My intuition is that autonomous vehicles would be low-hanging fruit, plucked relatively early in the trajectory from AI solving video games to AI solving ~everything.
There are a few reasons why this intuition could be wrong:
- Maybe self-driving is actually an AGI-level problem or much closer to AGI-level than my intuition tells me. (I would rate this as highly plausible.)
- Maybe AI progress is such a steep exponential that the lag time between Level 5 autonomy and AGI is much shorter than my intuition tells me. (I would rate this as moderately plausible.)
- Perhaps Internet-scale data simply isn’t available to train self-driving AIs. (I would rate this as fairly implausible; it would be much more plausible if Tesla weren’t such a clear counterexample.)
- Robotics in general could prove to be either too hard or unimportant for an otherwise transformative or general AI. (I would rate this as highly implausible; it strikes me as special pleading.)
- Onboard compute for Teslas, which is a constraint on model size, is tightly limited, whereas LLMs that live in the cloud don’t have to worry nearly as much about the physical space they take up, the cost of the hardware, or their power consumption. (I would rate this as the most plausible objection, but I wonder why Tesla wouldn't put a ton of GPUs in the trunk of a car and see if that works.)
- Self-driving cars don’t get to learn through trial-and-error and become gradually more reliable, whereas LLMs do. (I would rate this as somewhat plausible; the counterargument is that Tesla's Autopilot is allowed to make mistakes, which humans can correct.)[1]
Please enumerate any additional reasons you can think of in the comments. Also, please present any arguments or evidence you can think of as to why I should accept any of the reasons given above.
Great question!
My understanding was that self-driving cars are already less likely to get into accidents than humans are.
However, they certainly can't "drive autonomously wherever and whenever a typical human driver could"; adapting current self-driving technology to each new city requires a costly, one-city-at-a-time process.
What does this tell us about how far we are from AGI? In particular, should this make us less enthusiastic about the generative AI direction than we might otherwise be? If it's so powerful, shouldn't we be able to use it to solve self-driving?
I guess it doesn't feel to me that we should make a huge update on this, because anyone who is at all familiar with generative AI already knows it is incredibly unreliable without having to bring self-driving cars into the equation.
The question then becomes how insurmountable the unreliability problem is. There are certainly challenges here, but it's not clear that they are insurmountable. The short-timelines scenarios are pretty much always contingent on us discovering some kind of self-reinforcing improvement loop. Is this likely? It's hard to tell, but there are already very basic techniques like self-consistency or reinforcement learning from AI feedback, so it isn't completely implausible. And it's not really clear to me why the current lack of self-driving cars is a strong reason to believe that attempts to set up such a loop will fail.
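To make the self-consistency point concrete, here is a minimal sketch of the idea: sample an unreliable model several times at high temperature and return the majority-vote answer. The `sample_answer` function below is a hypothetical stand-in that merely simulates an unreliable solver; in a real setup it would be a call to an LLM plus some answer parsing.

```python
import random
from collections import Counter

def sample_answer(question: str) -> str:
    """Hypothetical stand-in for one high-temperature model sample.
    Simulates an unreliable solver that returns the right answer ~70%
    of the time and a scattered wrong answer otherwise."""
    return "42" if random.random() < 0.7 else str(random.randint(0, 100))

def self_consistent_answer(question: str, n_samples: int = 20) -> str:
    """Self-consistency: draw several independent samples and take the
    majority-vote answer. Agreement across samples serves as a cheap
    reliability signal even though each individual sample is unreliable."""
    votes = Counter(sample_answer(question) for _ in range(n_samples))
    answer, _ = votes.most_common(1)[0]
    return answer

if __name__ == "__main__":
    print(self_consistent_answer("What is 6 * 7?"))
```

In this toy simulation, each individual sample is wrong roughly a third of the time, yet the majority vote over 20 samples is almost always right, which is the kind of cheap reliability gain the comment is gesturing at.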