Metaculus is an online platform where users make and comment on forecasts; it has recently been particularly notable for forecasting various aspects of the pandemic on a dedicated subdomain. As well as displaying summary statistics of the community prediction, Metaculus also uses a custom algorithm to produce an aggregated "Metaculus prediction". More information on forecasting can be found in this interview with Philip Tetlock on the 80,000 Hours podcast.
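Metaculus doesn't publish the details of that algorithm, but as a rough illustration of what aggregating a community prediction involves, here is a minimal sketch of two standard pooling methods (the median, and the geometric mean of odds). To be clear, this is a generic example of forecast pooling, not the actual "Metaculus prediction":

```python
import math
from statistics import median

def pool_forecasts(probs):
    """Combine individual probability forecasts on a binary question.

    Returns two standard aggregates -- the community median and the
    geometric mean of odds. Neither is Metaculus's proprietary
    "Metaculus prediction"; they just illustrate forecast pooling.
    """
    med = median(probs)  # robust to outlier forecasters

    # Pool in odds space, then convert back to a probability.
    odds = [p / (1 - p) for p in probs]
    gmo = math.prod(odds) ** (1 / len(odds))
    return med, gmo / (1 + gmo)

# Example: five forecasters on one binary question.
print(pool_forecasts([0.30, 0.45, 0.50, 0.60, 0.72]))
```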

Questions on Metaculus are submitted by users, and a thread exists on the platform where people can suggest questions they'd like to see but do not have the time/skill/inclination to construct themselves. Question construction is non-trivial, not least because for forecasting to work, clear criteria need to be set for what counts as positive resolution. A useful intuition pump here is "if two people made a bet on the outcome of this question, would everyone agree who had won?"
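As a toy illustration of the betting intuition, the sketch below pairs a question with a mechanical resolution function, so two bettors looking at the same published number can't disagree about who won. The `BinaryQuestion` class, the GiveWell wording, and the $4,000 threshold are all invented for the example; this isn't Metaculus's data model:

```python
from dataclasses import dataclass
from datetime import date
from typing import Callable

@dataclass
class BinaryQuestion:
    # Toy model, not the Metaculus API: the point is that resolution
    # is mechanical, so any two bettors must agree on the outcome.
    text: str
    close_date: date
    resolve: Callable[[float], bool]

# "Will AI safety be taken more seriously by 2030?" is too vague to
# bet on; tying resolution to a single published number is not.
q = BinaryQuestion(
    text=("Will GiveWell's published cost per life saved equivalent "
          "for its top charity be below $4,000 at the end of 2021?"),
    close_date=date(2021, 12, 31),
    resolve=lambda published_cost: published_cost < 4000,
)

print(q.resolve(3500.0))  # True -- everyone agrees who won the bet
```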

Although there is already significant overlap between the EA community and the Metaculus userbase, I think it is likely that there exist many forecasting questions that would be very useful from an EA perspective but have not yet been written. As such, I've written this question as both a request and an offer.

The request:

Have a think about whether there are any forecasts that could have a large impact on decision-making within the EA community.

The offer:

If you do think of one, post it below, and I'll write it up for you and submit it to the site. The closer it is to "fully formed", the more quickly this is likely to happen, but please don't feel the need to spend ages choosing resolution criteria; I'm happy to help with this. I intend to choose questions based on some combination of the number of upvotes a suggestion has and how easy the question is to operationalise.

Examples of my question-writing on Metaculus are here, and I also recently became a moderator on the platform.

Some examples of EA-adjacent questions already on the platform:

How much will GiveWell guess it will cost to get an outcome as good as saving a life, at the end of 2021?

On December 1st, 2023 how many companies worldwide will pledge to uphold GAP standards for broiler chickens raised for meat?

How many reviews will Toby Ord's book The Precipice have on Amazon on January 1st 2021?

How many infections of SARS-CoV-2 (novel coronavirus) will be estimated to have occurred worldwide, before 2021?

If you're interested in having someone make a forecast about a question that's more personal to you, and/or something that you wouldn't expect the Metaculus community as a whole to have the right combination of interest in and knowledge of, I'd recommend checking out this offer from amandango.


Thanks for doing this, great idea! I think Metaculus could provide some valuable insight into how society's/EA's/philosophy's values might drift or converge over the coming decades.

For instance, I'm curious about where population ethics will be in 10-25 years. Something like, 'In 2030 will the consensus within effective altruism be that "Total utilitarianism is closer to describing our best moral theories than average utilitarianism and person affecting views"?'

Having your insight on how to operationalize this would be useful, since I'm not very happy with my ideas:

  1. Polling FHI and GW
  2. A future PhilPapers Survey, if there is one
  3. Some sort of citation count/number of papers on total utilitarianism/average utilitarianism/person-affecting views

It would probably also be useful to get the opinion of a population ethicist.

Stepping back from that specific question, I think Metaculus could play a sort of sanity-checking, outside-view role for EA. Questions like 'Will EA see AI risk (climate change/bio-risk/etc.) as less pressing in 2030 than they do now?', or 'Will EA in 2030 believe that EA should've invested more and donated less over the 2020s?'

I'd also be interested in forecasts on these topics.

I think Metaculus could play a sort of sanity-checking, outside-view role for EA. Questions like 'Will EA see AI risk (climate change/bio-risk/etc.) as less pressing in 2030 than they do now?', or 'Will EA in 2030 believe that EA should've invested more and donated less over the 2020s?'

It seems to me that there'd be a risk of self-fulfilling prophecies. 

That is, we'd hope that what'd happen is: 

  1. a bunch of forecasters predict what the EA community would end up believing after a great deal of thought [...]
Kirsten:
I agree this would be a genuine problem. I think it would be a little less of a problem if the question being forecasted wasn't about the EA community's beliefs but instead something about the state of AI/climate change/pandemics themselves.
alex lawsen (previously alexrjl):
This is really interesting, and potentially worth my abandoning the plan to write some questions on the outcomes of future EA surveys. The difficulty with "what will people in general think about X" type questions is how to operationalise them, but there's potentially enough danger in doing this for it not to be worth the tradeoff. I'm interested in more thoughts here. In terms of "how big a deal will X be", there are several questions already of that form. The Metaculus search function is not amazing, so I'm happy to dig things out if there are areas of particular interest, though several are mentioned elsewhere in this thread.
MichaelA:
Do you mean questions like "what will the state of AI/climate change/pandemics be"  (as Khorton suggests), or things like "How big a deal will Group A think X is"? I assume the former? I'm not sure I know what you mean by this (particularly the part after the comma). 
alex lawsen (previously alexrjl):
Yes. The "not" was in the wrong place; I've fixed it now. I had briefly got in touch with Rethink about trying to predict survey outcomes, but I'm not going ahead with this for now, as the concerns you raised seem bad even if low-probability. I'm considering, as an alternative, asking about the donation split of EAF in ~5 years, which I think kind of tracks related ideas but seems to have less downside risk of the form you describe.
MichaelA:
To lay out my tentative position a bit more: I think forecasts about what some actor (a person, organisation, community, etc.) will overall believe in future about X can add value compared to just having a large set of forecasts about specific events that are relevant to X. This is because the former type of forecast can also account for:

  • how the actor will interpret the evidence that those specific events provide regarding X
  • lots of events we might not think to specifically forecast that could be relevant to X

On the other hand, forecasts about what some actor will believe in future about X seem more at risk of causing undesirable feedback loops and distorted beliefs than forecasts about specific events relevant to X do.

I think forecasting the donation split of the EA Funds[1] would be interesting, and could be useful. This seems to be a forecast of a specific event that's unusually well correlated with an actor's overall beliefs. I think that means it would have more of both the benefits and the risks mentioned above than the typical forecast of a specific event would, but less than a forecast that's directly about an actor's overall belief would.

This also makes me think that another thing potentially worth considering is predicting the beliefs of an actor which:

  • is a subset of the EA community[2]
  • seems to have a good process of forming beliefs
  • seems likely to avoid updating problematically based on the forecast

Some spitballed examples, to illustrate the basic idea: Paul Christiano, Toby Ord, a survey of CEA staff, a survey of Open Phil staff.

This would still pose a risk of causing the EA community to update too strongly on erroneous forecasts of what this actor will believe. But it seems to at least reduce the risk of self-fulfilling prophecies/feedback loops, which somewhat blunts the effect.

I'm pretty sure this sort of thing has been done before (e.g., sort-of, here). But this is a rationale for doing it that I hadn't thought of.

The best operationalisation here I can see is asking that we be able to attach a few questions of this form to the 2030 EA survey, then asking users to predict what the results will be. If we can get some sort of pre-commitment from whoever runs the survey to include the questions, even better.

One thing to think about (and maybe for people to weigh in on here) is that as you get further out in time, there's less and less evidence that forecasting performs well. For that reason, it's worth also considering a 2025 date for these sorts of questions.

[anonymous]:
Another operationalisation would be to ask to what extent the 80k top career recommendations have changed, e.g. what percentage of the current top recommendations will still be in the top recommendations in 10 years.
alex lawsen (previously alexrjl):
This question is now open: How many of the "priority paths" identified by 80,000 Hours will still be priority paths in 2030?
alex lawsen (previously alexrjl):
I really like this and will work something out to this effect.
alex lawsen (previously alexrjl):
Do you want to have a look at the 2019 EA survey and pick a few things it would be most useful to get predictions on? I'll then write a few up.
jacobpfau:
I think the 'Diets of EAs' question could be a decent proxy for the prominence of animal welfare within EA. I think there are similar questions on Metaculus for the general US population (https://www.metaculus.com/questions/?order_by=-activity&search=vegetarian). I don't see the ethics question as all that useful, since I think most of population ethics presupposes some form of consequentialism.
alex lawsen (previously alexrjl):
It looks like a different part of the survey asked about cause prioritisation directly, which seems like it could be closer to what you wanted. My current plan (5 questions) for how to use the survey is here.

Somewhat unrelated, but I'll leave this thought here anyway: maybe EA Metaculus users could benefit from posting question drafts as short-form posts on the EA Forum.

alex lawsen (previously alexrjl):
I'm kind of hoping that this thread ends up serving that purpose. There's also a thread on Metaculus where people can post ideas; the difference there is that nobody's promising to write them up, and they aren't necessarily EA ideas, but I thought it was worth mentioning. (I do have some thoughts on the top-level answer here, but don't have time to write them now; will do soon.)

A while ago, Leah Edgerton of Animal Charity Evaluators gave an AMA, and one of the questions I asked was "What are some questions regarding EAA (effective animal advocacy) which are amenable to being forecasted?"

Her answer is in this video. In short:

  • Will corporations stick to their animal welfare commitments?
  • When will specific animal free food technologies become cost-competitive with their traditional animal counterparts?
  • Timelines for cultured meat coming to market?
  • When will technology exist which allows the identification of the sex of a chicken before it hatches? When, if ever, will such a technology be adopted?
  • When, if ever, will the global production and consumption of farmed animals stop growing? When will it stop completely?
  • When will specific countries or states adopt legal protection for animals / farmed animals?
  • When will EAA organizations have a budget of more than $500 million? $1 billion?
  • Questions related to the pandemic.
  • Questions related to the budget of EAA organizations in the immediate future.

Operationalizing these questions, and finding out what the most useful things to forecast are, may involve contacting ACE directly. For example, "corporations" is pretty general, so I imagine ACE has some particular ones in mind.

I post my own questions sometimes, but I have some ideas for questions that I'm not sure how to operationalize:

  • Will there ever be a broadly accepted answer on how to compare possible worlds that have a nonzero probability of containing infinite utility?
  • Are donations to GiveDirectly net positive?
  • How much will the EA movement grow over the next few decades?
  • What is the long-term rate of value drift among EAs?
  • If the Founders Pledge makes a long-term investment fund, how long will it last before shutting down? And why does it shut down? (Or the same question for some other long-term investment fund.)
  • Will a well-diversified long-term investor eventually go bankrupt due to a market crash?
  • What will the funding distribution look like across causes 5/10/20 years from now?
  • What will be the cost per human life saved equivalent according to Animal Charity Evaluators in 2031? (I asked a similar question on GiveWell, but it's not obvious how to determine a cost per human life saved equivalent from ACE's recommendations; see the toy conversion sketch after this list.)
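As a toy illustration of why this is non-obvious, here is one way such a conversion could work. Every number below, especially the moral weight, is invented for the example; these are not ACE's or GiveWell's figures, and the point is precisely that the result is dominated by the assumed weight rather than by anything ACE publishes:

```python
# All numbers below are invented for illustration; they are not ACE's
# or GiveWell's figures.

def cost_per_life_saved_equivalent(dollars_per_hen_year, hen_years_per_life):
    """Convert an animal-welfare figure (dollars per hen-year of caged
    life averted) into a 'human life saved equivalent', given a moral
    weight: how many hen-years averted one treats as as good as saving
    one human life. The output is dominated by that assumed weight."""
    return dollars_per_hen_year * hen_years_per_life

# E.g. $0.50 per hen-year averted, and a (made-up) weight of 10,000
# hen-years per life-saved equivalent:
print(cost_per_life_saved_equivalent(0.50, 10_000))  # $5,000 per life equivalent
```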

Lots of good ideas here, and I think I'll be able to help with several; I've sent you a PM.

Some fun, useful questions with shorter time horizons could be stuff like:

  • Will GiveWell add a new Top Charity to its list in 2020 (i.e. a Top Charity they haven't previously recommended)?
  • How much money will the EA Funds grant in 2020? (total or broken down by Fund)
  • How many new charities will Charity Entrepreneurship launch in 2020?
  • How many members will Giving What We Can have at the end of 2020?
  • How many articles in [The Economist/The New York Times/...?] will include the phrase "effective altruism" in 2020?

Stuff on global development and global poverty could also be useful. I don't know if we have data to resolve them (see the data-source sketch after the list), but questions like:

  • What will the global poverty rate be in 2021, as reported by the World Bank?
  • How many malaria deaths will there be in 2021?
  • How many countries will grow their GDP by more than 5% in 2021?
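On the data question: the World Bank does publish a poverty headcount indicator that could plausibly serve as a resolution source. Here's a hedged sketch of pulling it via their public API (endpoint format as I understand the World Bank API v2 docs; worth verifying before writing a question that relies on it):

```python
import requests

# One plausible resolution source: the World Bank's poverty headcount
# indicator (SI.POV.DDAY, % of population below the international
# poverty line), queried for the world aggregate via the v2 API.
url = ("https://api.worldbank.org/v2/country/WLD/indicator/"
       "SI.POV.DDAY?format=json&date=2021")

meta, records = requests.get(url, timeout=30).json()
for rec in records or []:
    # 'value' is None until the World Bank actually publishes a figure.
    print(rec["date"], rec["value"])
```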

These are great questions. Several similar questions are already up, which I've linked below (including one I approved after this post was written). I've also written three new questions based on your ideas, which I'm just waiting for someone else to proofread and will then add to this post.

Will one of GiveWell's 2019 top charities be estimated as the most cost-effective charity in 2031?

How much will GiveWell guess it will cost to get an outcome as good as saving a life, at the end of 2031?

Will the number of people in extreme poverty in 2020 be lower than t[...]

I'm interested in operationalizing two questions raised in this thread. The first seems substantially easier to operationalize than the second, but less interesting.

1. Will the next new constitution for a national government be closer to a presidential or parliamentary system?

2. To what extent would academics think that the work of Gerring, Thacker and Moreno was causally relevant to this choice?

These questions are not directly decision-relevant to me, but I'm generally excited about the idea of adding more quantification/forecasting to EA conversations.

I'm also interested in questions around approval voting in general, and the Center for Election Science in particular.

Some stuff:

  • Conditional on fewer than 5 cities with >=50,000 people having implemented approval voting by Dec 31, 2022, what will the funding for the Center for Election Science be during 2023? Context: according to CES's strategic plan, converting 5 cities with >=50,000 inhabitants is one of their main targets by 2022 (see p. 7). Conditional on them not achieving it, what will their funding look like? This can probably be operationalized with reference to IRS tax reports.
  • How many US cities with more than 50,000 people will have implemented approval voting by [date]?
  • What will CES funding look like in 2021, 2022, etc.?

For EAs who are investing to give, more questions about the market would be great, e.g. my comment here.

Metaculus did briefly experiment with a finance spinoff, but I don't believe it was successful. I can definitely write a couple, and I think they'd get a lot of interest, but I'd be surprised if making investment decisions based on Metaculus was a winning strategy in the long term. I'd be more optimistic, though still cautious, about political betting using Metaculus predictions.


Here are some current questions which seem relevant; you can find more here.

https://www.metaculus.com/questions/2807/will-the-uk-housing-market-crash-before-july... [...]

Denkenberger:
Thanks - very helpful.

In The Precipice, Toby Ord gives estimated chances of various existential risks happening within the next 100 years. It'd be cool if we could get estimates from Metaculus as well, although it may be impossible to implement, as tachyons would only be awarded when the world doesn't blow up.
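To spell out that incentive problem with a toy calculation (my own illustration, not Metaculus's actual scoring): suppose the true catastrophe probability is q, a forecaster reports p, and a log scoring rule is used, but points are only ever collected in worlds where the catastrophe doesn't happen:

```python
import math

# Toy model of the incentive problem (illustration only, not Metaculus
# scoring): true catastrophe probability q, reported probability p,
# log scoring rule -- but points are only collected in worlds where
# the catastrophe does NOT happen.
q = 0.10  # assumed true risk (illustrative)

for p in [0.01, 0.10, 0.50]:
    honest = q * math.log(p) + (1 - q) * math.log(1 - p)  # proper expected score
    collected = math.log(1 - p)  # score in the only worlds where it pays out
    print(f"p={p:.2f}  honest={honest:+.3f}  collected={collected:+.3f}")

# 'collected' is maximised as p -> 0 regardless of q: under these
# assumptions the rule stops rewarding honest estimates of
# extinction-level risk.
```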

Well, there's the Ragnarök question series, which seems to fit what you're looking for.
