In a previous post, I proposed we use torrents of AI safety funding to buy research infrastructure and turn it into a commons for AI safety work. This is obviously not universal basic income (UBI), but I thought I would write this post anyway to outline some lessons we can learn from their theoretical parallels.
Differences Between UBI and Research Commons
Universal basic income (UBI) and a pooled research-infrastructure commons (hereafter, research commons) for AI safety are very different proposals. One is a cash floor for everyone; the other is free compute, curated datasets, and fine-tuned models for a defined research community. UBI sits inside welfare economics. The gift-economy proposal sits inside science funding and AI policy. The literatures barely speak to each other.
| | UBI | AI Safety Research Commons |
|---|---|---|
| What it provides | Cash floor for everyone | Compute, datasets, models for researchers |
| Who it's for | General public | Defined research community |
| Field | Welfare economics | Science funding & AI policy |
But the two proposals share a key structural move: each takes a domain organized around evaluative allocation — prove you deserve it, then we'll give you some — and replaces it with frictionless access. So this post is, indirectly, about why creating research commons might be an undervalued intervention.
The Main Commonality: No More Evaluative Allocation
Welfare states gate welfare. Means tests, work tests, asset tests, application forms, caseworker interviews, periodic recertification all ask: are you really poor enough, really trying hard enough, really not hiding a partner's income, for this support? UBI's signature move is to abolish the evaluative apparatus. Per BIEN, Van Parijs and Vanderborght define basic income precisely by the absence of gating — cash, to individuals, regardless of work status, regardless of need test. The point isn't generosity; it's unconditionality.
In research funding, the default mode is also gating. Write a proposal, defend a budget, get reviewed by a committee, win or lose a slot. The grant economy is the welfare state of intellectual life: a sprawling apparatus whose job is to decide who deserves resources. The gift-economy proposal — pooled compute, shared datasets, specialized models, available to a defined research community without proposal review — abolishes (most of) that apparatus the same way UBI does for cash.
A Very Brief Academic Literature Review On UBI-Adjacent Topics
The idea of research commons builds on at least three lines of academic thinking that have criticized evaluative allocation.
First, evaluative allocation suffers from an imperfect information problem. Hayek's argument is that no central planner can aggregate the dispersed, tacit, time-and-place-specific knowledge that rational allocation requires (my restatement: no omniscience, no optimum). Free universal provision is one response: let recipients figure out what to do with the resources. Whereas a free market uses prices as a proxy for knowledge, research commons give the resources straight to the knowledge source.
Second, evaluative allocation of research resources seems to be at least somewhat ineffective, for two reasons. First, it creates additional administration for researchers: a 2018 Workload Survey found that principal investigators spend 16% of their time preparing proposals and an extra 6.3% on pre-award admin. Second, forming grant committees takes up financial, temporal, and coordination resources. Research commons save both researchers' time and committees' costs.
Third, abundance has shifted the bottleneck from accessing money to accessing resources. Brynjolfsson and McAfee argue that UBI works because, when capital and money are abundant, what limits human flourishing or productive work is what Van Parijs called real freedom, not money per se. Research commons' deep premise is parallel: the binding constraint on serious AI-safety research isn't abstract funding, but ease-of-access to non-fungible inputs (compute hours on the right hardware, datasets under the right licenses, models too expensive to train solo). Said otherwise: make abundant what researchers need, without money (or grants) as an intermediary, and flourishing becomes institutionally frictionless.
A Very Brief Frameworks Review
The idea of research commons synthesizes a few prior frameworks.
The closest existing intellectual cousin is the Universal Basic Services literature. Coote and Percy argue for universal in-kind provision of essential needs — housing, transport, digital access, care — on capability-theoretic grounds. UBS is the move I'm describing, applied to a different basket of goods. The gift-economy proposal is, in effect, UBS for research infrastructure.
Korpi and Palme's paradox of redistribution is adjacent: universal welfare programs reduce poverty more effectively than targeted ones, because universalism builds the cross-class coalitions that keep programs generous and politically defended. Targeting concentrates benefits on groups too marginal to mount a defence when governments start budget-slashing. The political-resilience logic applies here: a research commons can serve more than just the researchers who would have passed the grant gates. When the going gets rough, broad access may become a life-raft: the researchers, students, journalists, and independent scholars who use the commons become, collectively, the political base that makes it worth protecting.
Hess and Ostrom's Understanding Knowledge as a Commons is the closest framework on the research side. They treat libraries, datasets, and scholarly infrastructure as commons whose value is in access, not in gatekeeping. They never connect the framework to UBI as the same allocative principle, but everything they say about the research half of my argument is already in their book. When it comes to barriers to access, a research commons approach plays limbo — how low can we go?
There are still more analogies to draw, and lessons to be learned. The interesting work will be figuring out if any of this theorizing ports into the real world.
In the meantime: I'm working on grant proposals for different organizations that would buy, build, or fund different research commons for AI-safety research. If you've worked on adjacent problems — fiscal sponsorship structures, governance of shared technical infrastructure, the political economy of research commons, the financial architecture of nonprofits whose unit economics have to actually work — I'd like to hear from you. You can reach me in the comments or via DM.

There is already some commons work being done. It didn't feel appropriate to include these examples in this specific post, since it focuses on UBI parallels. I wanted to shout some out here anyway to: acknowledge that people have proposed similar framings before; draw attention to these live experiments; and prompt grounded reflection on this post's merits.
The CAIS Compute Cluster is a partial version of what I'm proposing: the Center for AI Safety is already running a small-scale compute commons, and reports supporting ~350 researchers and ~109 papers through its compute cluster. This follows the shared-compute-fund project idea that's been floating around.
Foresight Institute's AI for Science & Safety Nodes are opening in SF and Berlin, offering "grant funding, office and community spaces, and local compute"... basically a commons that bridges physical space, services, and digital resources.