I believe effective altruism has an overconfidence problem, and since the collapse of FTX I have been thinking more about why that might be.

Here's part of what I suspect is going on: EAs get really excited about statistics. I think this is a good property for the movement to have, and I wish it were more common in other communities. However, I also think that in the excitement to 'do statistics', EAs sometimes spend less time than they should considering whether their data is good enough to support accurate conclusions from their models.

An example: Philip Tetlock's Superforecasting has been rightly embraced by EAs. But one form this has taken is trying to forecast the probability of events like nuclear armageddon, which, unlike the events studied in the Good Judgment Project, are arguably without precedent. Tetlock himself seems to think it's highly unclear how accurate we can expect such predictions to be.

I have a solution to propose.

The folks predicting nuclear armageddon (hypothetically) assume a high prior probability that the predictions derived from their data will be inaccurate, where 'inaccurate' means scoring worse than an agreed-upon Brier score threshold (i.e. landing above it, since lower Brier scores indicate better accuracy).
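
For concreteness, here's a minimal sketch of how such a Brier score check works for binary forecasts. The forecasts, outcomes, and 0.2 cutoff are invented purely for illustration, not taken from any real forecasting exercise.

```python
# Minimal sketch of a Brier score check for binary forecasts. The forecasts,
# outcomes, and 0.2 threshold are invented for illustration; lower scores
# mean better accuracy.

def brier_score(forecasts, outcomes):
    """Mean squared difference between forecast probabilities and 0/1 outcomes."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

forecasts = [0.9, 0.2, 0.7, 0.05]   # predicted probabilities of each event
outcomes  = [1,   0,   1,   0]      # what actually happened
score = brier_score(forecasts, outcomes)

AGREED_THRESHOLD = 0.2              # hypothetical agreed-upon cutoff
verdict = "acceptable" if score <= AGREED_THRESHOLD else "inaccurate"
print(f"Brier score: {score:.3f} ({verdict})")
```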

They then make a series of arguments about the data that allow them to update that probability all the way down to <1%. They publish those arguments along with their prediction, so that the community can critique them. 

If they believe they can get to <1% from 90%, they state that their confidence is Prior 90%—<1% (I'm open to suggestions for better notation). If they can only get there from 50%, it will be Prior 50%—<1%. If they think <1% is an unreasonably high bar, they can aim for <10% instead and write Prior 90%—<10%.
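
To get a feel for how strong those published arguments need to be, here is a minimal sketch of the update in Bayesian odds form. This is my illustration rather than anything specified above, and the Bayes factors assigned to the three hypothetical arguments are invented; the takeaway is that moving from a 90% prior of inaccuracy to under 1% requires arguments jointly worth a factor of roughly 900 against inaccuracy.

```python
# A sketch of the update in Bayesian odds form (an illustration, not the
# post's prescribed method). Each Bayes factor stands in for one published
# argument about the data; the values below are invented.

def update(prior_prob, bayes_factors):
    """Apply Bayes factors to the prior probability of 'inaccurate'.

    Factors below 1 are evidence against inaccuracy (i.e. for accuracy).
    """
    odds = prior_prob / (1 - prior_prob)
    for bf in bayes_factors:
        odds *= bf
    return odds / (1 + odds)

prior = 0.90                   # the skeptic's prior that the forecast is inaccurate
arguments = [0.2, 0.1, 0.05]   # hypothetical strength of three data-quality arguments
posterior = update(prior, arguments)

print(f"prior 90.0% -> posterior {posterior:.1%}")
# The combined factor of 0.001 takes odds of 9:1 down to 0.009:1, i.e. under 1%,
# which the post would label "Prior 90%—<1%".
```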

Here's a list of questions that were previously left implicit in catastrophic-risk forecasting, and that I think this process answers explicitly:

  • What is the agreed-upon lower bound for acceptable accuracy? (The agreed-upon Brier score threshold.)
  • How confident should we be that we can meet this lower bound? (The <x%.)
  • If I am trying to convince someone who is skeptical that I can meet this lower bound, how big of a skeptic can I convince? (The first %.)
  • How do I claim I can convince the skeptic? (The published argumentation.)
  • At least how much prior confidence in my model's data do I think is reasonable? (The first %.)
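
To make the bookkeeping concrete, here is one hypothetical way a published forecast could record these answers in a single place. The field names, example values, and the label() helper are mine, not part of the proposal.

```python
# Hypothetical record format (a sketch, not the post's proposal) for the
# quantities this process makes explicit about a single published forecast.
from dataclasses import dataclass

@dataclass
class PublishedForecast:
    question: str             # the event being forecast
    prediction: float         # the forecast probability of the event
    brier_threshold: float    # agreed-upon bound for acceptable accuracy
    prior_inaccuracy: float   # the skeptic's prior that the forecast is inaccurate
    target_inaccuracy: float  # the level argued down to (<1%, <10%, ...)
    arguments: tuple          # the published arguments about the data

    def label(self) -> str:
        """Render the post's proposed 'Prior X%—<Y%' confidence label."""
        return f"Prior {self.prior_inaccuracy:.0%}—<{self.target_inaccuracy:.0%}"

example = PublishedForecast(
    question="Nuclear armageddon before 2100",
    prediction=0.01,
    brier_threshold=0.2,
    prior_inaccuracy=0.90,
    target_inaccuracy=0.01,
    arguments=("argument about data relevance", "argument about base rates"),
)
print(example.label())  # Prior 90%—<1%
```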

This of course adds work to the forecasting process. But if we enjoy statistics, I think it's epistemically healthy to take extra care that we are building models that are accurate, not just fun. 

Comments

See also: "Model uncertainty" (EA Wiki, n.d.).

(I write this comment along with applying the tag because I think this Wiki page is unusually in-depth and worth reading, that it's very directly relevant to the post, and that more EAs should see it.[1])

  1. ^

    The latter point reflects my (moderate) agreement with the post's opening claim. I expect I perceive EAs as being (somewhat) overconfident about different things from what this post's author has in mind, however.