It seems to me that most people concerned about AI regulation (and about calls for a "CERN for AI", or proposals such as the OAA) are worried not about regulation per se but about the monopolisation of AI. And within AI monopolisation (or oligopolisation), they mostly fear the concentration of power, and perhaps to a lesser degree the bias that may creep into a monopolistic AI: the recommendations it makes to users, the answers it gives on contested questions, its ethical worldview, or even the language it works best in and the vocabulary it prefers.

Most of these people are probably fine with regulatory boundaries backed by law: AI shouldn't give instructions for making bioweapons, plan terror attacks, and so on.

The key question, of course, is how to prevent AIs from breaking the law in these ways without effectively creating an AI oligopoly through a stringent regulatory approval regime.

The only approach to this conundrum that has at least a chance of working, it seems to me, is an Open Agency architecture: each AI service is dedicated to some part of the generative world model (materials science, rocketry, macroeconomics, virology, etc.), and there are also "glue AIs", such as LLMs, that solve problems by calling these services but are not exceedingly smart themselves and don't internalise much specialised knowledge.
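To make the division of labour concrete, here is a minimal sketch of the glue-AI-plus-specialised-services split. All names (`SpecialisedService`, `GlueAI`, the domain keys) are hypothetical illustrations, not part of any actual OAA proposal: each specialised service exposes a narrow domain interface, while the glue layer only routes and composes, holding no deep domain knowledge itself and refusing domains for which no approved service exists.

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class SpecialisedService:
    domain: str
    approved: bool                   # has passed regulatory approval
    answer: Callable[[str], str]     # narrow, domain-limited interface

class GlueAI:
    """Routes questions to approved specialised services; refuses otherwise."""
    def __init__(self, services: Dict[str, SpecialisedService]):
        self.services = services

    def solve(self, domain: str, question: str) -> str:
        svc = self.services.get(domain)
        if svc is None or not svc.approved:
            return "REFUSED: no approved service for this domain"
        return svc.answer(question)

# Usage: a toy approved materials-science service and an unapproved one.
services = {
    "materials": SpecialisedService("materials", True,
                                    lambda q: f"[materials model] {q}"),
    "virology": SpecialisedService("virology", False,
                                   lambda q: f"[virology model] {q}"),
}
glue = GlueAI(services)
print(glue.solve("materials", "melting point of tungsten?"))
print(glue.solve("virology", "anything at all"))  # refused
```

The point of the sketch is that capability lives behind the per-domain approval boundary, not in the router.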

All the specialised services are approved (and therefore oligopolised within a domain), with dangerous knowledge erased from them (or available only to users with security clearance), and their inference is constrained to conform to other regulatory and legal requirements. Antitrust agencies require each service to be developed by an independent business or non-profit entity, to prevent the concentration of power.

Glue AIs could be independently developed or open-source, on the condition that they don't use any deeply specialised knowledge during training (apart from fine-tuning with the specialised services as tools). This could be checked semi-automatically, perhaps through the use of approved datasets (cleaned of sensitive specialised data) and zero-knowledge proofs of training.
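One small piece of such a check can be sketched without the cryptography: verifying that a glue AI's claimed training corpus matches a registry of approved (sanitised) datasets by content hash. A real scheme would pair this with zero-knowledge proofs that the model was actually trained on that data; the registry and function names below are purely hypothetical stand-ins.

```python
import hashlib

def dataset_digest(records: list[str]) -> str:
    """Order-sensitive content hash of a dataset."""
    h = hashlib.sha256()
    for r in records:
        h.update(r.encode("utf-8"))
    return h.hexdigest()

# Hypothetical registry of approved, sensitive-data-scrubbed datasets.
APPROVED_DIGESTS: set[str] = set()

approved_corpus = ["general text A", "general text B"]
APPROVED_DIGESTS.add(dataset_digest(approved_corpus))

def training_data_approved(records: list[str]) -> bool:
    return dataset_digest(records) in APPROVED_DIGESTS

print(training_data_approved(approved_corpus))                       # True
print(training_data_approved(approved_corpus + ["specialised data"]))  # False
```

The hash check only attests *which* data was declared; tying the resulting weights to that declaration is exactly what the proof-of-training component would have to do.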

I think this model addresses the core concerns of the anti-AI-regulation folks: the concentration of power and freedom in general political and ethical views.

In this world, there would still have to be a lot of nasty compute surveillance and restrictions to prevent people from unilaterally developing AIs that don't conform to the above model, or from running their inference (perhaps new GPU models must verify that the weight matrices belong to an approved or self-approved AI before doing the computation). Some people who are against AI regulation would probably be pissed off by such surveillance, too. But I don't see a way to remove surveillance from the picture and still maintain an acceptable level of risk, per the Vulnerable World Hypothesis.
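The GPU-side gate imagined in the parenthetical can be caricatured in a few lines: before performing a multiply, the device checks the weight tensor's hash against an on-device allowlist. Everything here (the allowlist, `approve`, `gated_matvec`) is a hypothetical toy; real hardware attestation would be vastly more involved and would have to resist tampering.

```python
import hashlib

APPROVED_WEIGHT_HASHES: set[str] = set()   # hypothetical on-device allowlist

def weight_hash(weights: list[float]) -> str:
    return hashlib.sha256(repr(weights).encode()).hexdigest()

def approve(weights: list[float]) -> None:
    """Register weights as approved (or self-approved) for inference."""
    APPROVED_WEIGHT_HASHES.add(weight_hash(weights))

def gated_matvec(weights: list[float], x: list[float]) -> list[float]:
    """Refuse the computation unless the weights are on the allowlist."""
    if weight_hash(weights) not in APPROVED_WEIGHT_HASHES:
        raise PermissionError("weights not approved for inference")
    # trivial 1-row 'matrix'-vector product, for illustration only
    return [sum(w * xi for w, xi in zip(weights, x))]

w = [0.5, -1.0, 2.0]
approve(w)
print(gated_matvec(w, [1.0, 1.0, 1.0]))   # [1.5]
```

Even this toy makes the enforcement cost visible: the check has to run on every computation, which is exactly the kind of pervasive surveillance the paragraph above concedes.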

Cross-posted on LessWrong.



