I'm adding it to the forum because the author makes compelling points that I don't see addressed often enough in conversations with other EAs or in courses like BlueDot's AI Governance.
To clarify: I don't know whether Cochrane is correct in his claims. Maybe. Maybe not. Time will tell. Regardless, promoting regulation without addressing the critiques (especially those about the serious limitations and failures of the regulatory state) seems harmful. With the current surge in status of AI Governance within EA, I'm hoping essays like this will reduce some of the echo-chamber effects of the conversation.
Here's a three-paragraph summary by Claude:
Cochrane argues against extensive regulation of AI, contending that throughout history, attempts to predict and regulate the societal impacts of new technologies have often been misguided or harmful. He points out that major technological innovations, from the printing press to the internet, have had unforeseen consequences that regulators failed to anticipate. The essay suggests that preemptive regulation of AI based on speculative threats to democracy and society is likely to be ineffective and potentially counterproductive.
The essay criticizes the idea that government regulators can effectively manage the development of AI to mitigate social and political risks. It argues that regulatory bodies often lack the necessary information and foresight, and are susceptible to capture by industry interests. Cochrane contends that attempts to regulate AI communication technologies could amount to censorship, potentially threatening rather than protecting democracy. He advocates for competition and market forces as better mechanisms for addressing potential AI-related issues.
Regarding economic concerns, the piece dismisses fears that AI will lead to widespread unemployment, drawing parallels to similar unfounded fears about past technological innovations. Instead, it suggests that AI has the potential to significantly boost productivity and economic growth, particularly in developing regions. Cochrane concludes by arguing for a more hands-off approach to AI development, emphasizing the importance of rule of law, competition, and strengthening democratic institutions rather than relying on preemptive regulation to address potential challenges posed by AI.
I'm generally a fan of John Cochrane. I would agree that government regulation of AI isn't likely to work out well, which is why I favor an international pause on AI development instead (less need for government competence on detailed technical matters).
His stance on unemployment seems less understandable. I guess he either hasn't considered the possibility that AGI could drive wages below human subsistence levels, or thinks that's fine (humans just work for the same low wages as AIs and governments make up the difference with a "broad safety net that cushions all misfortunes")?
Oh, of course he also doesn't take x-risk concerns seriously enough, but that's more understandable for an economist who probably just started thinking about AI recently.
So, most of this is a heavily biased and cherry-picked polemic against regulation in general. Like, they look at climate change and pick on EV subsidies and move on, not mentioning all the other climate interventions that are actually working. I don't think anyone credibly thinks the free market would have solved climate change on its own.
With regard to AI, I agree with them that the future is very hard to predict. But the present isn't, and I think there are present-day, real-world harms that can and should be regulated.