Introduction
Thanks to my friend in the UK for attending the Oxford Civic AI Conference and producing detailed notes and analysis. I passed the materials he provided to ChatGPT and asked it to evaluate issues such as Civic AI and alignment in light of my previous research. Within just a few minutes, it generated a lengthy article, which readers are welcome to explore on their own.
Here, however, I would first like to use my own language—yes, human language!—to present a more intuitive interpretation of the key discussion points, so that readers may gain an initial impression. I have also shared the article with NotebookLM and asked it to generate a concise infographic for comparison.
Technical safety governance
To begin with, “technical safety governance” is discussed, which represents the mainstream approach adopted by tech companies. Its aim is to ensure that the development of AI has clear boundaries and does not cause harm to humanity. However, this approach still suffers from problems such as concentration of power, top-down control, lack of accountability, and structural inequality. Loong vividly describes this as a system of a “benevolent dictator.”
Civic infrastructure
Secondly, the idea of “civic infrastructure” is introduced, which shifts the focus from purely technical considerations to institutional questions, emphasizing that AI must be accountable to people and responsive to community needs. One major attempt in this direction is the development of agent-based deliberation systems, where AI agents represent citizens in public discussion and decision-making. Habermolt, as a kind of “Habermas machine,” is one such important experiment presented at the conference.
To me, this resembles a form of traditional representative democracy, where citizens delegate their views to representatives. Its advantage is that these representatives (AI agents in this case) do not pursue private interests or abuse power, and can participate in policy debates in a more rational and balanced way. Moreover, there is no practical limit to the number of representatives or citizens.
Its limitation, however, is that the mechanism tends to reflect expressed opinions and pre-existing positions in a rather mechanical way. It has limited capacity to actively advance discussion, facilitate learning, enhance communication, or foster genuine consensus.
Swarm AI
Finally, the potential of "Swarm AI" is explored. This can be understood as a more automated and decentralized mode of self-organization among AI agents, built upon the civic infrastructure described above. The concept originates from discussions in my new book Demotopia: An Exploration of Democracy and Artificial Intelligence (2026).
To put it more concretely, I see this as resembling a kind of corporatist or functional constituency model, though certainly not the interest-dividing mechanism associated with entrenched elites. Instead, AI agents are endowed with different functional roles. For example, some agents act as experts, while others reflect public opinion; some prioritize scientific evidence, while others emphasize affect and emotions; some focus on short-term societal needs, while others consider long-term interests; some represent mainstream values, while others speak for minorities and marginalized groups; still others may represent future citizens, non-citizens, or even non-humans such as animals, ecosystems, and the planet itself. This approach draws in part on human–machine ecosystem research conducted by Jeremy Pitt and his team at Imperial College London, whose theoretical framework is in turn closely connected to Josiah Ober's Demopolis: Democracy before Liberalism in Theory and Practice (2017).
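To make the functional-role idea more tangible, here is a minimal sketch in Python. It is purely illustrative: the class name, the role tags, and the activation rule are my own assumptions, not taken from any system presented at the conference.

```python
from dataclasses import dataclass

@dataclass
class FunctionalAgent:
    """A deliberative AI agent defined by the values it is tasked to voice.
    All role names here are illustrative, not drawn from an actual system."""
    name: str
    roles: set[str]  # overlapping functions, e.g. {"expert", "long-term"}

    def is_activated_by(self, issue_tags: set[str]) -> bool:
        # An agent joins deliberation when its roles overlap the issue's tags.
        return bool(self.roles & issue_tags)

# A small illustrative pool: roles overlap and interweave rather than
# partitioning agents into fixed interest groups.
pool = [
    FunctionalAgent("A1", {"expert", "scientific-evidence"}),
    FunctionalAgent("A2", {"public-opinion", "affect"}),
    FunctionalAgent("A3", {"long-term", "future-citizens"}),
    FunctionalAgent("A4", {"minorities", "public-opinion"}),
    FunctionalAgent("A5", {"non-human", "ecosystems", "long-term"}),
]

issue_tags = {"long-term", "scientific-evidence"}
activated = [a.name for a in pool if a.is_activated_by(issue_tags)]
print(activated)  # → ['A1', 'A3', 'A5']
```

Note how one agent can carry several roles at once, which is exactly what distinguishes this from a fixed interest-group scheme: membership in a deliberation is recomputed per issue rather than assigned permanently.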
Under such a model, AI agents can still participate in discussions in a rational and balanced manner, but they do not directly mirror current public opinion. Does this violate the fundamental principles of democracy? In my view, compared with real-world parliamentary and party systems, it still holds significant potential for improvement. Ultimately, the system must adhere to two fundamental principles, which serve as the foundation for democratic values, human-centered governance, and effective human–AI alignment:
First, public deliberation and decision-making conducted by AI agents must be sufficiently open and transparent, allowing citizens to understand and engage with the process;
Second, once citizens have gained such understanding, they must be able to provide meaningful and effective feedback, exercising real oversight and checks on AI decision-making.
One can easily imagine that AI agents in this system do not represent fixed interest groups, but instead embody overlapping and interwoven values and needs. They can be flexibly assembled according to context, without a predetermined decision-making structure. When a new issue is introduced into the system, some or all agents are activated automatically, engaging in deliberation without direct human intervention, and generating one or more policy proposals. These proposals are then returned to citizens for feedback, forming a continuous and iterative cycle, contributing to mutual learning and further improvements.
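The iterative cycle just described (automatic activation, deliberation without direct human intervention, citizen feedback, revision) can be sketched as a simple loop. Everything below is a hypothetical interface of my own invention, meant only to show the shape of the cycle.

```python
def deliberation_cycle(issue, agents, review, max_rounds=3):
    """Hypothetical sketch (all interfaces are assumptions): agents
    deliberate on an issue, citizens review the resulting proposals,
    and citizen feedback seeds the next round until consensus."""
    proposals = []
    for _ in range(max_rounds):
        # 1. Activation: agents whose roles touch the issue join automatically.
        active = [a for a in agents if a["roles"] & issue["tags"]]
        # 2. Deliberation without direct human intervention:
        #    each active agent contributes a proposal.
        proposals = [a["propose"](issue, proposals) for a in active]
        # 3. Proposals are returned to citizens; their feedback revises
        #    the issue and decides whether consensus has been reached.
        consensus, issue = review(proposals, issue)
        if consensus:
            break
    return proposals

# Minimal stand-ins to show the flow (purely illustrative):
agents = [
    {"roles": {"expert"},
     "propose": lambda i, p: f"evidence-based plan (round {i['round']})"},
    {"roles": {"long-term"},
     "propose": lambda i, p: f"long-horizon plan (round {i['round']})"},
]

def review(proposals, issue):
    # Citizens accept once more than one perspective is represented.
    issue = {**issue, "round": issue["round"] + 1}
    return len(proposals) >= 2, issue

result = deliberation_cycle({"tags": {"expert", "long-term"}, "round": 1},
                            agents, review)
print(result)
```

The key design point the sketch tries to capture is that human citizens sit outside the inner loop but control its termination: the `review` step is where the two fundamental principles above (transparency and effective feedback) would have to be enforced.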
As my friend noted in his fieldnotes, Audrey Tang's "Civic AI OpenClaw Bootstrap Guide" and his own idea of a "Regenerative Climate Commons" have already moved quite far from the original Habermolt model. At first glance, they too resemble a functional constituency system more than a traditional representative model.
