
i. What is futarchy

Futarchy is a proposed system of governance that combines prediction markets with democratic decision-making. Developed by Robin Hanson (who is at my university!), it aims to improve policy outcomes by harnessing the efficiency of markets. Under futarchy, citizens would vote on desired outcomes or metrics of success, but not on specific policies. Instead, prediction markets would determine which policies are most likely to achieve the voted-upon goals. Traders in these markets would bet on the expected outcomes of different policy options, and the policies predicted to be most successful would be automatically implemented. You vote on goals, but bet on beliefs.

What I want to draw your attention to is an unintended, but beneficial, side effect of such a system. Policy will reflect the utility functions of the people, even when those functions are non-linear. Indeed, if there are no subsidies to the prediction market, non-linear utility functions are the only reason anyone would trade at all!

A quick word on what a prediction market is, in its simplest form — you have a security which pays out 1 if it resolves yes, and 0 if it resolves no. In other words, gambling. You are doing the same thing as when betting on whether the Orioles or the Blue Jays will win a baseball game. 
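To make the payoff structure concrete, here is a minimal sketch in Python (the price and the belief are made-up numbers, not anything from a real market):

```python
# Payoff logic of a binary prediction-market share: pays $1 if the event
# resolves yes, $0 if it resolves no.

def expected_profit(price: float, believed_prob: float, shares: int = 1) -> float:
    """Expected profit from buying YES shares at the given price."""
    expected_payout = believed_prob * 1.0 + (1 - believed_prob) * 0.0
    return shares * (expected_payout - price)

# If the market prices "Orioles win" at $0.40 but you believe the true
# chance is 55%, each share is worth $0.15 to you in expectation:
print(f"{expected_profit(price=0.40, believed_prob=0.55):.2f}")  # 0.15
```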

Why would you trade on this? If everyone were perfectly risk-neutral and rational, they would know that the only reason someone would come along and bet is that they possessed more information about the game. Perhaps they knew the Orioles’ starting pitcher had injured themselves, or that the Blue Jays had an outbreak of the flu. You would be foolish, under those circumstances, to take the bet. So why does anyone trade? In the case of sports, because people find it fun; in more boring markets, because of hedging. It is a truism about the world that people prefer the certain to the risky, and that money has declining returns. If a company benefits if an event happens, and is harmed if it doesn’t, then it can buy shares that pay out only when the event doesn’t happen. This way, it turns an uncertain level of profit into a certain level of profit. Its hedging will not distort the market away from the true probability, because the hedger is one and the other bidders are many. This ought not hold when people’s harms and benefits from an event are not symmetric across the market as a whole, however. Let’s explore why that’s an advantage in futarchy, though.
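To make the hedge arithmetic concrete, here is a small sketch (the firm and all its numbers are hypothetical; the shares pay $1 when the event does not happen):

```python
# Converting an uncertain profit into a certain one by buying "NO" shares
# that pay $1 if the event does not happen. Numbers are invented.

def hedged_profit(profit_if_event, profit_if_not, share_price):
    gap = profit_if_event - profit_if_not   # shortfall the hedge must cover
    cost = gap * share_price                # buy `gap` shares up front
    if_event = profit_if_event - cost       # shares expire worthless
    if_not = profit_if_not - cost + gap     # shares pay $1 each
    return if_event, if_not

# A firm earning $100 if the event happens and $20 if it doesn't, buying
# NO shares at $0.50 each, locks in $60 either way:
print(hedged_profit(100, 20, 0.50))  # (60.0, 60.0)
```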

ii. An example

Our society has voted to get the policy which is most likely to achieve a value of .5. (Imagine the value is a rate of GDP growth, which affects everyone.) Suppose there are two securities, both of which have an expected value of .5 and achieve that level or better precisely half the time. One’s outcome is drawn from a normal distribution tightly centered on .5, while the other returns 1 half the time and 0 the other half. If everyone is perfectly risk-neutral, traders will be indifferent between the two, and will buy and sell such that the two prices are the same. If the second policy were infinitesimally more likely to resolve to 1, it would become our policy, and we would enter a slightly weighted coin flip for our outcomes.

Suppose that everyone is risk-averse, however. A risk-averse person finds the loss of .5 of value, when the second security resolves to 0, to be a keener loss than the equal-sized gain of .5, when it resolves to 1, is a gain. Thus, people will not regard these securities, despite paying out the same on average, as having equal real value — the riskier will be priced lower than the safer. And what is at stake for traders is not just the payout from getting the bet right, but the value of living under the resulting government policy; if the government chooses policies on the basis of these prices, then the policies chosen will better reflect the preferences of the people.
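A quick way to see this is to compute certainty equivalents under a concave utility function. This is my own illustration; square-root utility and a standard deviation of 0.02 for the tight distribution are both arbitrary choices:

```python
# Certainty equivalents of the two securities under sqrt (risk-averse)
# utility. Sigma = 0.02 for the "tight" security is an assumed value.
import numpy as np

rng = np.random.default_rng(0)
u = np.sqrt  # concave utility => risk aversion

safe = rng.normal(0.5, 0.02, 1_000_000).clip(0, 1)    # tightly centered on .5
risky = rng.integers(0, 2, 1_000_000).astype(float)   # 1 half the time, 0 the other half

for name, payoff in [("safe", safe), ("risky", risky)]:
    ce = u(payoff).mean() ** 2   # invert u(x) = sqrt(x) to get the certainty equivalent
    print(f"{name}: E[payout] = {payoff.mean():.3f}, certainty equivalent = {ce:.3f}")

# safe:  E[payout] = 0.500, certainty equivalent = 0.500
# risky: E[payout] = 0.500, certainty equivalent = 0.250
# Same average payout, but a risk-averse trader bids far less for the coin flip.
```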

Assuming everyone is risk-averse is a big jump, though. Is this robust to cases where most people are risk-averse, but some people are risk-neutral? Let us say that, in an extreme case, precisely one person is risk-neutral. They will see the price of the second security, and bid it up until the price reflects the true probability. Of course, this “true probability” does not correlate perfectly with people’s utility, and since no trader’s funding is infinite, we would expect them to be outbid. As we increase the number of people, adding in arbitrary risk-neutral or even risk-preferring utility functions, the price of the securities should come to reflect the cardinally-weighted sum of all utility functions in the economy. This seems really, really good!

iii. Why it’s a challenge for prediction markets

One of the nice things about prediction markets is that you can simply read off the price of a security as the probability of the event occurring. This presumes risk-neutral traders, however. If everyone is hedging in the same direction, the price will diverge from the true probability. Scott Alexander, in section 4.6.1.1 of his Prediction Market FAQ, has us imagine that an export-import bank hedges against Trump winning the election; other people would then step in to arbitrage the mispricing away. If you presume that everyone wants to hedge against Trump winning, though, this shouldn’t hold! If there isn’t unlimited funding available to undo the hedging, the market price will differ from the true probability of the event. (Why would Trump have a chance of being elected, if people consider his election a really bad thing in this world? Because voting cannot take into account cardinal preferences, only ordinal ones. Voting is also essentially costless, and so people are free to vote for what they want to believe, rather than what they really believe. This is just a fundamental problem of voting.)
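Here is a toy model of that situation — entirely my own construction, with invented numbers — where hedgers all want to buy “event happens” shares and arbitrageurs can only post limited collateral:

```python
# Toy market: hedgers insist on buying a fixed number of "event happens"
# shares; arbitrageurs sell short whenever price > true probability, but
# shorting a share requires posting (1 - price) in collateral, so capital
# C supports only C / (1 - price) shares of supply. Numbers are invented.

TRUE_PROB = 0.45
HEDGE_DEMAND = 10_000   # shares hedgers insist on buying
ARB_CAPITAL = 3_000     # dollars of arbitrage collateral available

def clearing_price(true_prob, hedge_demand, arb_capital):
    price = true_prob
    while price < 1.0:
        arb_supply = arb_capital / (1.0 - price)
        if arb_supply >= hedge_demand:
            return price             # arbitrage can fully absorb the hedgers
        price += 0.001               # otherwise the hedgers bid the price up
    return 1.0

print(f"{clearing_price(TRUE_PROB, HEDGE_DEMAND, ARB_CAPITAL):.3f}")
# With unlimited capital the price would sit at 0.45; with only $3,000
# of collateral it clears near 0.70, well above the true probability.
```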

Is it such a big deal, though? I would argue that, even while the price may diverge from the true probability, it still provides useful information about the world. For recurring events, such as bad weather, you could assume that people’s hedging remains roughly constant over time, and adjust accordingly. Even if the information were unpredictably distorted, it would still surely provide some value.
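The adjustment could be as simple as estimating a constant bias from past resolved markets. A sketch, with invented numbers:

```python
# Estimating a constant hedging bias from similar, already-resolved
# markets, then debiasing a new price. All numbers are invented.
import numpy as np

past_prices = np.array([0.62, 0.58, 0.65, 0.60])  # prices of resolved markets
past_freq = 0.50                                   # fraction of those events that occurred

bias = past_prices.mean() - past_freq              # ~0.11 of hedging distortion
new_price = 0.67
print(f"debiased probability ~ {new_price - bias:.2f}")  # ~0.56
```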

iv. A concern

This came up while I was writing — hence its ugly appendage to this blog. I am concerned that futarchy may lead to excess risk-taking if people vote for unrealistically high goals. Suppose that people are universally risk-averse, and that the two policies from before (a tight distribution around .5, and a coin flip between 1 and 0) are available to bet on. This time, however, we vote on a goal of .99. The first policy will never achieve that, but the second policy will half the time, and so it wins — in spite of the fact that we may, as a society, find that the increased risk leaves us worse off than the policy which doesn’t achieve our goal, and that if it were put to a vote, the first policy would win unanimously. There may not be as much of a separation between goals and policy as we would hope — the choice of goal may lead us into sub-optimal policies.
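To check the arithmetic, assume the tight policy’s outcomes have a standard deviation of 0.05 (my choice; the example doesn’t specify one):

```python
# Which policy wins the market under the .99 goal. Sigma = 0.05 for the
# tight policy is an assumed spread; the post doesn't specify one.
from scipy.stats import norm

GOAL = 0.99
p_tight = norm.sf(GOAL, loc=0.5, scale=0.05)  # P(outcome >= .99)
p_coin = 0.5                                  # the coin lands on 1 half the time

print(f"tight policy: {p_tight:.1e}")  # ~6e-23: essentially never
print(f"coin flip:    {p_coin}")       # 0.5: wins the market outright
```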

Perhaps this example is unfair, because we are voting over unbounded possibilities but betting on only two. Allowing unlimited policies should not help, however, when the goal is not easily achievable. Imagining again that we are trying to choose a tax rate which will result in a given rate of economic growth, perhaps what we could do is have separate markets for each gradation of economic growth. The distribution of a policy’s outcomes would be implied by the differences in price between adjacent markets — so, in the prior example, you’d have a market on whether the outcome reaches .1, .2, .3, and so on. We would then see that the first policy reliably achieves every increment below .5 and essentially never any above it, while the second policy’s markets show the same fifty percent chance at every increment. We would then need to decide how risk-averse society should be, which could be voted on, or decided arbitrarily by some group of reasonable people.
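A sketch of how the gradated markets could be read, with hypothetical prices standing in for P(outcome ≥ x) and square-root utility standing in for whatever risk aversion society settles on:

```python
# Reading an implied outcome distribution off a ladder of markets, then
# scoring it with sqrt utility. The prices are hypothetical stand-ins.
import numpy as np

thresholds = np.linspace(0.1, 1.0, 10)   # one market per P(outcome >= x)

# Hypothetical prices for each policy:
tight = np.array([1.0, 1.0, 1.0, 1.0, 0.5, 0.0, 0.0, 0.0, 0.0, 0.0])
coin  = np.full(10, 0.5)

def implied_pmf(prices):
    """Mass landing in [x, x + 0.1): P(>= x) minus P(>= x + 0.1)."""
    return prices - np.append(prices[1:], 0.0)

def expected_utility(prices, u=np.sqrt):
    # Mass below the lowest threshold, 1 - prices[0], sits near 0 and
    # contributes u(0) = 0 here, so it drops out of the sum.
    return (implied_pmf(prices) * u(thresholds)).sum()

for name, prices in [("tight", tight), ("coin flip", coin)]:
    print(f"{name}: expected utility ~ {expected_utility(prices):.3f}")
# tight:     ~0.670  (all the mass is near .5)
# coin flip: ~0.500  (same mean, but the spread is penalized)
```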

This would probably still be an improvement over our present system. I am concerned, however, that people will vote aspirationally for what our goals should be, and thus unwittingly harm ourselves. We cannot entirely get away from the foolishness of the voter. If we believe that people will act more idealistically the farther they are from actually setting policy, then it would be better for people to vote for representatives, who then vote to set goals.

Comments



I have written a bit about this (and related topics) in the past:

 

Our society has voted to get the policy which is most likely to achieve a value of .5 [...]

I think you make a fairly good argument (in iv) about the problems with maximising the probability of achieving outcome x, where that probability may be small, but I expect futarchy proponents would argue that you can fix this by returning E[outcome] rather than P(outcome > x). So society would vote to get the policy that maximises the expected outcome rather than the probability of an outcome. (Or you could look at P(outcome > x) for a range of x.)
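A quick check of this fix, reusing the post’s two policies (with an assumed sigma of 0.05 for the tight one):

```python
# Comparing the two decision rules for the post's policies. Sigma = 0.05
# for the tight policy is an assumption; both policies have mean 0.5.
from scipy.stats import norm

GOAL = 0.99
tight_p = norm.sf(GOAL, loc=0.5, scale=0.05)  # P(outcome >= GOAL)
coin_p = 0.5

print(f"P(>= {GOAL}): tight={tight_p:.2g}, coin={coin_p}")  # coin flip wins
print("E[outcome]:  tight=0.50, coin=0.50")                 # a tie, not a win
```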

 

You wrote on reddit:

I have written a blog post exploring why the prices in a prediction market may not reflect the true probability of an event when the things we want to hedge against are correlated

But I think none of your explanation here actually relies on this correlation. (And I think this is extremely important.) I think risk-neutrality arguments are actually not the right framing. For example, a coin flip is a risky bet, but that doesn't mean the price will be less than 1/2, because there's a symmetry in whether you are bidding on heads or tails. It's just more likely that you don't bet at all, because if you are risk-averse, you value H at 0.45 and T at 0.45.

The key difference is that if the coin flip is correlated with the real economy, such that the dollar-weighted average person would rather live in a world where heads comes up than tails, they will pay more for tails than for heads.
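A small sketch of this point, with made-up numbers: a log-utility agent whose wealth is 1.2 in a heads-world and 0.8 in a tails-world has a higher reservation price for the tails share, even though the coin is fair:

```python
# Reservation prices for a log-utility agent whose wealth depends on the
# flip: 1.2 if heads, 0.8 if tails. All numbers are invented; the coin is fair.
import numpy as np
from scipy.optimize import brentq

W_HEADS, W_TAILS = 1.2, 0.8

def reservation_price(pays_on_heads: bool) -> float:
    """Price at which buying one $1-payout share leaves expected log
    utility unchanged."""
    base = 0.5 * np.log(W_HEADS) + 0.5 * np.log(W_TAILS)
    def gap(p):
        wh = W_HEADS - p + (1.0 if pays_on_heads else 0.0)
        wt = W_TAILS - p + (0.0 if pays_on_heads else 1.0)
        return 0.5 * np.log(wh) + 0.5 * np.log(wt) - base
    return brentq(gap, 1e-9, 0.79)   # bracket keeps wealth positive

print(f"heads share: {reservation_price(True):.3f}")   # ~0.296
print(f"tails share: {reservation_price(False):.3f}")  # ~0.475
# A fair coin, yet the agent pays more for the share that pays out in the
# state where they are poorer -- hedging demand, not information.
```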

Executive summary: Futarchy, a governance system combining prediction markets with democratic goal-setting, can reflect non-linear utility functions and risk preferences of citizens, but may lead to excessive risk-taking if unrealistic goals are set.

Key points:

  1. Futarchy allows citizens to vote on goals while prediction markets determine policies to achieve them.
  2. Risk-averse traders in prediction markets will price in their preferences, leading to policies that better reflect societal risk attitudes.
  3. This feature challenges the assumption that prediction market prices directly reflect probabilities.
  4. Concern: Voting for unrealistically high goals may lead to riskier policies being chosen against citizens' true preferences.
  5. Possible solution: Implement separate markets for different outcome levels and incorporate societal risk aversion into decision-making.
  6. Despite potential issues, futarchy likely improves upon current governance systems.

 

 

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
