There is already some commons work being done. It didn't feel appropriate to include these examples in the post itself, since it focuses on UBI parallels, but I wanted to shout some out here anyway: to acknowledge that people have proposed a similar framing before; to draw attention to these live experiments; and to spark grounded reflection on this post's merits.
The CAIS Compute Cluster exists and is a partial version of what I'm proposing. The Center for AI Safety is basically already running a small-scale compute commons, and reports having supported ~350 researchers and ~109 papers through it. This follows the "Shared compute fund" project idea that's been floating around.
Foresight Institute's AI for Science & Safety Nodes are opening in SF and Berlin, offering "grant funding, office and community spaces, and local compute": basically a commons that bridges physical space, services, and digital resources.
A few other works inspired this post, and I am grateful to all of them:
People write about diversity as a moral imperative because it's useful: mostly, it helps us come up with creative solutions and avoid groupthink. But is diversity intrinsically good, a thing worth maximizing for its own sake? Here's a quick question to test that; I would love your thoughts:
Does being principled produce the same choice outcomes as being a long-term consequentialist?
Leadership circles[1] emphasize putting principles first. Utilitarianism rejects this approach: it focuses on maximizing outcomes, with little normative attention paid to the process (or, as the quip goes: the ends justify the means). This (apparent) distinction pits EA against conventional wisdom and, speaking from my experience as a group organizer,[2] is a turn-off.
However, this dichotomy seems false to me. I can easily imagine a conflict between a myopic utilitarian and a deontologist (e.g. the first might rig the lottery to send more money to charity).[3] I have more trouble imagining a conflict between a provident utilitarian and a principles-first person (e.g. cheating may help in the short term, but in the long-term, I may be barred from playing the game).[4]
Even if principles sometimes butt heads (e.g. being kind vs. being honest), so can different choice outcomes (e.g. minimizing animal suffering vs. maximizing human flourishing). Both these differences are resolved by changing the question's parameters or definitions:[5] being dishonest is an unkindness; we need to take both sufferings into account.
All in all, it seems like both approaches face the same internal problems, the same resolutions, and could produce the same answer set. If this turns out to be true, there are a few possible consequences:
I'm thinking of Stephen Covey's works "7 Habits of Highly Effective People" (1989) and "Principle-Centered Leadership" (1992). If these leadership models are outdated, please correct me.
When tabling for a new EA group, mentioning utilitarianism cast a shadow on a noticeable share (~40%) of conversations. When I explained how we choose between the lives we save every day, people seemed more empathetic, but it felt like a harder sell than it had to be.
I would love for someone to do proper math to see if this expected value works out. Quick math, making assumptions along the way: assume a $100M jackpot, an 80% chance of getting caught, a salary of $200k per year, and a 10-year prison sentence for rigging. EV of lottery rigging = winning profits + losing costs = 0.2 × $100M − 0.8 × ($200k/yr × 10 yr) = $18.4M.
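For anyone who wants to play with the assumptions, here is a minimal sketch of the footnote's back-of-the-envelope calculation. All the numbers (jackpot size, catch probability, salary, sentence length) are the assumptions stated above, not real data:

```python
# Back-of-the-envelope EV of the lottery-rigging example.
# Every constant below is an assumption from the footnote, not real data.

JACKPOT = 100_000_000    # $100M prize if the rigging succeeds
P_CAUGHT = 0.80          # assumed probability of getting caught
SALARY = 200_000         # $200k/yr of income forgone while imprisoned
YEARS_IN_PRISON = 10     # assumed sentence for rigging

# EV = P(win) * jackpot - P(caught) * (lost salary over the sentence)
ev = (1 - P_CAUGHT) * JACKPOT - P_CAUGHT * SALARY * YEARS_IN_PRISON
print(f"Expected value: ${ev:,.0f}")  # Expected value: $18,400,000
```

Tweaking `P_CAUGHT` or adding other costs (legal fees, reputational damage, disutility of prison beyond lost salary) would make this less naive; as written it just reproduces the $18.4M figure.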
I'm assuming that we live in a society that doesn't value cheating...
This strategy is Captain Kirk's when solving the Kobayashi Maru.
Its modus tollens comes to the same conclusion as utilitarianism: if you have the wrong consequences, you must have had the wrong processes.
Couldn't comment on the doc, so here are a few others I found:
Fair points — I should have been more careful with the "free access" framing. Here's a quick revision:
Yes, UBI is rationed per-person. These commons should be too. In both UBI and the commons, we abolish evaluative rationing — rationing based on perceived deservingness.
I'll split the "projects you would not approve of" concern in two parts. The first is a harm worry (dangerous use); the second is a dilution worry (most compute goes to non-safety projects).
Amount-rationing could address the harm worry by not giving anyone enough compute to do harm, but it would not address the dilution worry, which may be as much of a problem for funders, if not more.
Which is why, upon further reflection, I'd like to introduce a second gate: a gate by project type. Shevlane (2022) and Bucknall & Trager (GovAI, 2023) have argued basically this: that selective access can preserve safety-positive work while limiting misuse.
So yes, I think my model needs some gates. But I would still push hard to keep the gates non-evaluative. That's how, I think, to inspire safety research that would never have been done, by people who'd never have applied for a grant. And that's what the commons is for.