Epistemic status: Some fundamental errors but worth leaving up
I think the error with this piece is that it is a hammer looking for a nail, rather than the reverse. Austin makes the point well here. Rather than asking "how do we institute Futarchy?", it should ask "how do we solve problems people already have?"
- I would like there to be a regranter using prediction markets and I want your thoughts on how that process should work
- Manifund, a new granting org, is connected to the charity prediction market, Manifold, so is a uniquely good candidate
- Create a prediction market for each proposed project, scoring it against some metric both if it happens and if it doesn't
- Order all projects by how much additional value they offer if funded versus if not
- Fund projects in that order until the Futarchy regrantor has used its allocated budget for that period
- A discussion of different metrics
- A discussion of how this might be different from current grantmaking orgs
- Please correct any errors (even grammatical errors, I think I'd publish on here more if I didn't have to send it round to 10 people to check grammar first)
What is Futarchy and why is this a good opportunity?
Futarchy is a decision-making system in which prediction markets are used to decide what actions to take. You agree on some kind of metric that you care about, and then predict how each decision will affect that metric. Futarchy generally implies a deterministic system, though this proposal is more about ironing the kinks out of the system, so it could initially be advisory.
We can imagine running decision markets for a Manifund regrantor. Manifund is a new regranting org: individuals get regranting budgets and then publicly allocate them. I guess that if they use their budgets well they can argue to be given more.
Manifund is a good test bed for Futarchy for three reasons:
- They already run an impact market system, so one could try some kind of futures market on the impact certificates. I don't know the legality or technical feasibility of this.
- It's associated with a play-money prediction market, Manifold. Austin Chen is a cofounder of both Manifold (the prediction market) and Manifund (the regrantor), hence the shared part of the name. I think it would be much easier to implement features here than on any comparable platform.
- The Manifund team ship things very quickly and are into mechanism design in general, so it just seems more likely to work here than in other places.
How might it work?
So, it occurred to me that this would be a cool thing to exist, and this is me trying, badly, to sketch it. I don't pretend I'm going to do a good job here. I'm going to lay out the way I would do it; please correct me where I'm wrong.
For each grant proposal you want a prediction market covering two conditionals, either as 4 binary options or as 2 continuous options in a single market:
- The metric (perhaps binary) if the grant is fulfilled
- The metric (perhaps binary) if the grant is not fulfilled
The regrantor needs to prioritise grant opportunities above some funding bar. The key question is "what's the value add?". Here, the answer is the difference in the metric between the world where the grant happens and the world where it doesn't. Perhaps divide by the size of the grant.
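The prioritisation step can be sketched in code. This is a toy model of my own, not anything Manifund runs: assume each proposal carries the market's conditional estimates of the metric and a requested grant size, then rank by value-add per dollar and fund greedily until the budget runs out.

```python
from dataclasses import dataclass

@dataclass
class GrantProposal:
    name: str
    metric_if_funded: float      # market's estimate of the metric, conditional on funding
    metric_if_not_funded: float  # market's estimate, conditional on no funding
    grant_size: float            # requested amount

def value_add_per_dollar(p: GrantProposal) -> float:
    """Difference in the metric between the funded and unfunded worlds, per dollar."""
    return (p.metric_if_funded - p.metric_if_not_funded) / p.grant_size

def allocate(proposals: list[GrantProposal], budget: float) -> list[GrantProposal]:
    """Greedily fund the highest value-add-per-dollar proposals until the budget is spent."""
    funded = []
    for p in sorted(proposals, key=value_add_per_dollar, reverse=True):
        if p.grant_size <= budget:
            funded.append(p)
            budget -= p.grant_size
    return funded
```

Greedy allocation by value-add per dollar is only one reasonable rule; a real regrantor would also want the funding bar mentioned above, i.e. a minimum value-add below which nothing gets funded even with budget left over.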
So, this requires that we have a clear metric, which is going to be a problem. Here are three suggested metrics:
- Impact markets. We could see whether Manifund want to run a futures market (is this even legal? are they already doing it?) or we could run our own prediction markets (charity prediction markets, sure) on the future value of the grant. I don't actually know what the time horizon should be. 5 years? 10? I also don't quite understand how the non-funding branch of this works. I guess you choose a funding bar and only fund things with a good enough return on investment.
- It is a financial mechanism that already exists
- They may already have this process
- If the markets are on Manifold, there isn't a way for many investors to get their money back, so they may not invest properly
- It may be illegal somehow
- I have some vague foreboding that this won't actually work
- Community/expert assessment. The community or a small group scores grants 5-10 years later on how much value they created or destroyed, then pays out the prediction markets based on who was correct. This is similar to an impact market, but doesn't require huge, liquid impact markets. My intuition from the accuracy of prediction markets vs Metaculus is that if you gatekeep the voting well (say, to LessWrong/EA Forum users), this is as accurate as a liquid market.
- Doesn't need lots of liquidity
- Have to decide who votes and who doesn't
- Not real money
- A vote. We could vote in 5-10 years on whether the grant should have been funded. This seems the easiest to implement, but I don't know that it can account for second-order effects. If there were hidden or complex costs or benefits, I'd expect a market or an assessment to figure that out; a straight vote might boil down much more to vibes.
- Easy to implement
- Worse incentives
- Can you think of a better one?
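To make the community-assessment option concrete, here is a toy settlement sketch (my own illustration, not Manifold's actual payout logic): traders stake on the eventual retrospective score in each conditional branch; years later the panel scores the grant, the branch that actually happened pays out by prediction accuracy, and the counterfactual branch is voided with stakes refunded.

```python
def settle(positions, funded, retro_score):
    """Settle a two-branch conditional market against a retrospective score in [0, 1].

    positions: list of (trader, branch, stake, predicted_score), with branch in
    {"funded", "not_funded"}. The branch that didn't occur is voided and stakes
    are refunded. The realised branch pays in proportion to prediction accuracy,
    which is a toy rule; a real market would settle share prices or use a proper
    scoring rule.
    """
    realised = "funded" if funded else "not_funded"
    payouts = {}
    for trader, branch, stake, predicted in positions:
        if branch != realised:
            payout = stake  # counterfactual branch: refund
        else:
            payout = stake * 2 * (1.0 - abs(predicted - retro_score))
        payouts[trader] = payouts.get(trader, 0.0) + payout
    return payouts
```

The refund-the-void-branch rule is what makes the market conditional: you only risk money on the world that actually obtains.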
Issues with futarchy
I reserve the right to edit these.
Issues I buy
- Issues of causality (ht Lizka). Prediction markets are markets, and so allow hedging (your expressed probability is skewed by the other assets you hold) and strange correlations: the universes where an asset is more valuable may cause the decision rather than result from it.
- In this case an example might be a grant for Ajeya Cotra to work on global health. The worlds where she is willing to do this are ones where alignment is solved and her talents are better used elsewhere, and in those worlds we would consider it valuable work. But that doesn't mean it's a good idea in this world. Markets are capable of this sleight of hand, and if we don't notice, we can take the wrong signal from them.
- The additional complexity isn't worth it. I am a mechanism design guy. I love the idea that our world is run by a series of beautiful systems. But I am a pragmatist too. And while such systems reduce friction (you can dispute a grant by buying shares in a prediction market rather than having to talk to the grant funder, if you even know who they are), they increase complexity. In my experience this tradeoff is often quite poor.
- There is some chance it will seem too weird. I am open to the idea that this process will look weird, and so could be harmful if it ever allocated a lot of money, especially in a surprising way.
- This is a hammer in search of a nail. While it's interesting to think about how Futarchy would work, it may not actually solve real problems grantmakers have; in that case the effort would be better spent solving those. Austin (who co-runs Manifund) gives an example: "The problem that seems most important to solve: finding projects that turn out to be orders of magnitude more successful/impactful than the rest." To the extent this doesn't solve the top problems, it's not the best idea.
- People are better at predicting 5 years out than betting 5 years out. Lizka makes this point and I am unsure about it. Getting people to bet 5-10 years ahead seems like a stretch, but equally, that's what OpenPhil does. Perhaps the real question is one of incentives: grantmakers are well incentivised to think long term, while bettors, especially those betting in fiat (as opposed to a currency that appreciates over time, or betting on impact), are explicitly disincentivised from longer-term bets by discount rates.
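The discount-rate point can be made with a two-line calculation. The numbers here are illustrative, mine rather than anyone's measured discount rate:

```python
def present_value(payout: float, annual_discount: float, years: float) -> float:
    """Value today of a payout received `years` from now, at a given annual discount rate."""
    return payout / (1 + annual_discount) ** years

# A $100 win on a market resolving in 7 years, at an assumed 10% personal
# discount rate, is worth only about $51 today, roughly halving the incentive
# to research and bet relative to a market that resolves now.
pv = present_value(100, 0.10, 7)
```

A currency that appreciates at the bettor's discount rate, or an impact certificate, would cancel this effect, which is presumably why those options come up above.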
The issues I don't buy
- People will manipulate the markets. I don't buy that someone can jump on the markets and mess with them. Manifold (or, better, real-money) markets have strong incentives against this. I am unaware of any case where this has happened in a way that would matter in a live prediction market.
Comparison to current grantmaking orgs
To me it seems more like GiveDirectly than GiveWell: something less effective (losing the benefits of both secrecy and an org that manages decisions) but much more scalable. I'm doubtful that a futarchy regrantor would scale soon, but equally I sense that one day all regranting will happen like this.
It will be attractive to a certain kind of donor. I sense a certain kind of donor (often crypto) will find the idea very attractive. And since large crypto donors are comparatively prevalent (and, come a bull run, will be even more so), there seems reason to expect more of them.
It would be good to test futarchy. I would like to understand why we haven't seen more futarchy. What are the kinks that need ironing out? I think there is value in testing it in a real-world situation. Maybe there are lessons transferable to other orgs.
This was written quickly, let me know what you think
I dictated this and spent about an hour editing it. Please correct errors or make suggestions. I don't currently intend to put this into place, so if you want to, do! Please let me know.
It doesn't seem hard to remove this downside: a committee could veto markets that seem obviously manipulated.
People's intuition seems to be that there should be 2 separate markets: "if A" and "if not A". To me this seems wrong. If you make a profit on such a market, you need a way to take the money out of the market, and (I can go into the details in the comments) this requires a "will A happen?" market. So you might as well have all 4 options in a single market.
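A toy sketch of the four-option claim, with made-up prices of my own: once the market prices all four joint outcomes, both conditional estimates and the implicit "will it be funded?" probability fall out directly, which is why a single combined market suffices.

```python
# Toy four-outcome market for a binary funding decision and a binary metric.
# Prices are the market's probabilities for each joint outcome and sum to 1.
prices = {
    ("funded", "metric_hit"): 0.30,
    ("funded", "metric_miss"): 0.20,
    ("not_funded", "metric_hit"): 0.15,
    ("not_funded", "metric_miss"): 0.35,
}

def p_metric_given(decision):
    """Conditional probability of hitting the metric given the funding decision,
    read straight off the joint prices."""
    hit = prices[(decision, "metric_hit")]
    miss = prices[(decision, "metric_miss")]
    return hit / (hit + miss)

# The implicit "will it be funded?" market is just the sum of the two funded outcomes.
p_funded = prices[("funded", "metric_hit")] + prices[("funded", "metric_miss")]
value_add = p_metric_given("funded") - p_metric_given("not_funded")
```

With only the two conditional markets you can't recover `p_funded`, which is exactly the market a trader needs in order to exit a profitable conditional position early.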
A criticism here is "5 years? That's ages", but normal grantmaking seems to operate on this timescale too.