
I had a project last month to promote prediction markets in the Middle East. The idea was to publish some op-eds about applications of prediction markets in the US and build some traction. Unfortunately, the three newspapers I sent it to declined to run it. It's possible I need connections or a reputation to publish a piece like this. But it's also possible the piece is just badly written.

Anyway, I post it here for your critique.

Did US spies bet on Russia's invasion of Ukraine? Yes, and that is good.

The Russian invasion of Ukraine has seen major successes by the US and UK intelligence communities, alongside embarrassments for the Russian and French agencies. One factor in American and British success might be their use of prediction markets. Prediction markets are systems for betting on future events (usually in reputation points, not money). Despite their resemblance to the stock market and sports gambling, prediction markets solve critical problems intelligence services face.

Days before the invasion, Biden stated that he “was convinced” by his intelligence services that Putin would invade. It was a bold position: at my university in DC we regarded an invasion as inconceivable. International wars are vanishingly rare, an invasion would unite Russia’s enemies, and it would trap Russia in a bloody and expensive insurgency. Even the French army was surprised by the invasion, prompting the resignation of France’s head of military intelligence last week.

During the invasion, Russian intelligence faced its own embarrassment. Russia’s invasion planners anticipated a rapid collapse. Accordingly, they used rapid, unsupported armored thrusts and aerial landings behind Ukrainian lines. When Ukrainian resistance did not crumble, this invasion plan left Russian units isolated in front of long, insecure supply lines. Leaked Russian intelligence shows they believed support for the Ukrainian government was weak and would evaporate quickly. That failure has prompted a purge of the Russian intelligence agency, the FSB, whose leaders are now under house arrest.

The Russian story demonstrates some key challenges facing an intelligence service. States pay intelligence officers’ salaries partly to learn what will happen in the future. The inherent uncertainty of political and economic outcomes makes this exchange difficult. You can never be 100% certain about a future event, especially in a foreign country. Suppose an advisor states “the Ukrainian military will collapse with 80% probability”. If the collapse does not occur, how can we know if the analyst was wrong or just unlucky? You need a system for judging predictive ability across many statements. And what if some analysts believe an event is likely and others disagree? Leaked documents reveal some FSB agents dissented from the invasion plan. How can you aggregate the predictions of multiple analysts?

A prediction market uses betting to solve both these problems. A prediction market is similar to a stock market, in that participants buy and sell assets linked to a future payoff (options and futures are literally prediction markets). Unlike stock markets, the assets in a prediction market are tickets that pay out if certain events happen. For example, a ticket might pay 1 dollar if Russia invades Ukraine. If someone buys the ticket for 30 cents, we know they think an invasion has at least a 30% chance. Prediction markets naturally track performance over time; after a few bets my account balance will tell me just how competent I am. In non-monetary markets, the gap between predictions and outcomes can be calculated directly. Moreover, bets require clearly defined outcomes to attract participants. The bet “will Russia win their invasion” invites acrimonious debates, but “will 100 or more Russian troops enter Lviv before January 1” is clear to everyone.
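The two mechanisms in this paragraph, ticket prices as implied probabilities and scoring accuracy across many resolved questions, can be sketched in a few lines. This is a hypothetical illustration using a standard scoring rule (the Brier score), not a description of how the ICPM or any particular market actually works; the function names and the numbers are mine.

```python
def implied_probability(ticket_price: float, payout: float = 1.0) -> float:
    """A ticket paying `payout` if an event happens, bought at
    `ticket_price`, implies the buyer thinks the event is at least
    ticket_price / payout likely."""
    return ticket_price / payout

def brier_score(forecasts: list[tuple[float, bool]]) -> float:
    """Mean squared error between stated probabilities and outcomes.
    0.0 is perfect; always saying 50% scores 0.25."""
    return sum((p - float(happened)) ** 2 for p, happened in forecasts) / len(forecasts)

# A ticket bought at 30 cents on a $1 payoff implies at least 30% belief.
print(implied_probability(0.30))  # 0.3

# Judge an analyst across many statements, not one: here four
# illustrative forecasts, three of which pointed the right way.
history = [(0.8, True), (0.7, True), (0.2, False), (0.9, False)]
print(round(brier_score(history), 3))  # 0.245
```

The point of scoring over a whole history is exactly the problem raised above: a single missed 80% call could be bad judgment or bad luck, but a consistently high Brier score across many questions is hard to explain away.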

The US intelligence community started its main prediction market in 2010, the creatively named Intelligence Community Prediction Market (ICPM). Only people with top-secret clearances can access it, so we know very little about it. We know that war incidence is a common question type, and that the ICPM features precisely worded questions. The market is reasonably popular, with 190,000 predictions and 4,300 users as of 2018. The ICPM does not use real money, but a system of play money with no redeemable value. Real money would incentivize participants to intentionally mislead one another to get a better price, which obviously damages the whole enterprise. High scores are not even directly linked to commendations or promotions.

Did the ICPM give the US an edge in the Ukraine crisis? With no public access to the system, it’s hard to say. We know that public prediction markets like Metaculus performed quite well. Metaculus’s “will Russia invade?” question increased from 20% in December to 70% by the middle of February. But the intelligence community had plenty of other resources. Public statements show the US had an abundance of information on Russian plans, troop movements, and false flag attacks. Undoubtedly the invasion was discussed at length by high-ranking staff, who likely rely less on the ICPM.


Comments (3)



Oh god the paragraph breaks didn't go through. Fixing!

Hey, how is this any different from the assassination problem within prediction markets?

In the assassination markets problem, people manipulate outcomes to win bets. No one is doing that in this case.

Also, knowing when wars will happen is socially beneficial, because uncertainty increases the probability of war. If both sides think they are strong, they both take strong bargaining positions. When their offers are rejected, they fight. More knowledge -> bargains are more likely to be accepted.
