Prediction markets are a tool that has gained significant attention and traction in EA. Although I agree that they can be useful in some circumstances, and that a better alternative does not always exist, they nonetheless have flaws which I believe deserve more attention.
I don't think most of this is new information, but as far as I know these issues have not been systematically discussed in a single place.
Why is this important to EA?
1. Being able to make accurate predictions is important to basically every EA cause area. Prediction markets have gotten a lot of attention as a tool to facilitate this, so if they are not actually effective, it may be necessary to look for other strategies.
2a. Some EA-aligned organizations, such as the FTX Future Fund, have placed emphasis on prediction markets as a potential EA project, which could be a problem if prediction markets are less useful than widely believed.
2b. The Future Fund has also given multiple grants related to prediction markets. (I did a cursory search of other major EA funders but found inconclusive information.) If prediction markets are less useful than widely believed in EA, it might be better to use that money elsewhere.
3. Overhyping prediction markets could theoretically be harmful to community epistemics. (I am the least confident in this point.)
Issue 1: Prediction markets become much less reliable in the long run
The one empirical study I found that directly addressed this question found that prediction markets are fairly well calibrated in the short term but not as well calibrated in the long term. The study in question defined "long term" as "more than one month away," but I expect (P = 0.85) that this problem would be as severe or more severe on the scale of years. As many questions that are highly relevant to EA depend on the outcome of events years in the future, this limits the usefulness of prediction markets to EA.
Issue 2: Prediction markets are bad at estimating the probability of very unlikely outcomes
There are many events where we might want to distinguish between multiple fairly low probabilities. For example, we might want to answer the question, "Will there be another pandemic that kills more than 1 million people worldwide before 2030?" It matters a lot whether the probability of this happening is more like 5% or more like 0.05%, but due to very low expected payouts, even if someone thinks that 5% is much too high, there is not much of an incentive for them to tie up their money betting on the market. (In some cases, they may even lose money due to inflation!)
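To make the incentive problem concrete, here is a rough sketch of the return from betting against an overpriced low-probability market. The specific numbers (a 5% market price, a true probability of 0.05%, an eight-year horizon, and zero fees) are my own illustrative assumptions, not taken from any real market:

```python
# Illustrative assumptions (not from a real market): an unlikely event
# trades at 5% YES, but you believe the true probability is only 0.05%.
price_yes = 0.05        # market price of a YES share
true_p_yes = 0.0005     # your estimate of the true probability
years = 8               # capital is locked up until, say, 2030

stake = 1 - price_yes   # cost of one NO share (pays 1 if the event doesn't happen)
# Expected gross return per unit staked on NO:
ev_gross = (1 - true_p_yes) * 1.0 / stake
total_return = ev_gross - 1           # over the whole lock-up period
annualized = ev_gross ** (1 / years) - 1

print(f"total expected return: {total_return:.1%}")  # ~5.2% over 8 years
print(f"annualized:            {annualized:.2%}")    # well under 1%/year
```

Even if the trader's estimate is exactly right, correcting a hundredfold mispricing earns less per year than typical inflation, which is why such prices can persist.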
For some practical examples, take a look at the Manifold Free Money tag. While some of the markets there do at least seem close to the true probability, some examples have significant discrepancies, such as "This market will resolve no," which is currently trading at 5%.
A similar but distinct problem occurs in prediction markets based on unlikely conditionals. For example, let's say I wanted to answer the question, "Conditional on Congress passing [some bill], how many degrees of warming will there be by 2100?" but the bill in question is very unlikely to pass. Even if the market probability is inaccurate, people are often reluctant to tie up their money in a market that will most likely just return it to them.
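The same arithmetic applies to unlikely conditionals: the expected return is diluted by the probability that the condition triggers at all. A toy sketch, where the 5% pass probability, 10% edge, and two-year lock-up are all hypothetical numbers chosen for illustration:

```python
p_condition = 0.05   # probability the bill passes (hypothetical)
edge = 0.10          # return you expect if the market resolves and you're right
years_locked = 2     # time your money sits in the market either way

# With probability 0.95 the market resolves N/A and just returns your stake,
# so the expected profit is scaled down by the condition's probability:
expected_return = p_condition * edge + (1 - p_condition) * 0.0
print(f"expected return over {years_locked} years: {expected_return:.1%}")  # 0.5%
```

A 10% edge on the conditional question shrinks to an expected 0.5% over the whole lock-up period, so correcting even a large mispricing is rarely worth the capital.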
Issue 3: The incentive for being right on many important questions is often asymmetric
Questions related to existential risk are often important to our work. However, in many such cases, one side of the market will never be able to collect, even if they are correct. For example, if someone asks "Will an unaligned AGI kill all humans by 2050?" there is very little incentive to bet yes, even if you believe the market is underrating the probability of this happening. As a result, prediction markets will systematically tend to err on the side of humanity not going extinct.
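The asymmetry can be stated in one line: because a correct YES bettor can never collect, the expected value of a YES position is negative at any price, regardless of the true probability. A toy illustration (both numbers are hypothetical):

```python
price_yes = 0.02   # market price for "unaligned AGI kills all humans by 2050"
p_doom = 0.20      # suppose you believe the true probability is this high

# Payoff to a YES share:
#   extinction happens     -> no one is alive to collect: payoff 0
#   extinction doesn't     -> YES share expires worthless: payoff 0
ev_yes = p_doom * 0.0 + (1 - p_doom) * 0.0 - price_yes
print(f"EV of a YES bet: {ev_yes:.2f}")  # always -price_yes, here -0.02
```

Since no value of `p_doom` makes the YES side profitable, the only trading pressure comes from the NO side, which pushes the market price toward zero regardless of the actual risk.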
A lesser form of this issue arises with long-term questions in general. If a prediction market asks about what will happen in 2100, most investors today will probably be dead by the time the market resolves, which means there is not much incentive to bet on it in either direction. I suspect, with low confidence, that this is a less serious issue because it does not produce a systematic skew in one direction.
Prediction markets can be a useful tool, but they have limitations, and it's important to be aware of them and not to overstate their potential benefits. These issues include poor long-term accuracy, inflated probability estimates for unlikely events, and systematically skewed incentives on certain topics.
Footnote: or much, much more than years, but as far as I know no one is trying to create prediction markets on events millions of years from now.