
2022 has almost wrapped up — do you have EA-relevant predictions you want to register for 2023? List some in this thread! 

You’re encouraged to update them if others give you feedback, but we’ll save a copy of the thread on January 6, after which the predictions will be “registered.” 

Note that there's also a forecasting & estimation subforum now — consider joining or exploring it! 

Suggested format

Prediction - chances (optional elaboration)

Examples (with made-up numbers!): 

  • Will WHO declare a new Global Health Emergency in 2023? Yes. 60%
  • WHO declares a new Global Health Emergency in 2023 - 60% (I’m not very resilient on this — if I thought about it/did more research for another hour, I could see myself moving to 10-80%)

Additional notes

These can be low-effort! Here are some examples: a bunch of predictions from 2021 on Astral Codex Ten

Once someone has registered a prediction, feel free to reply to their comment and register your own prediction for that statement or question.

You can also suggest topics for people to predict on, even if you yourself don’t want to register a prediction. 

Other opportunities to forecast what will happen in 2023

  • Astral Codex Ten (ACX) is running a prediction contest, with 50 questions about the state of the world at the end of 2023 (you don’t have to predict on all the questions). There will be at least four $500 prizes. (Enter your predictions by 10 January or 1 February, depending on how you want to participate.)
  • You can also forecast on Metaculus (question categories here), Manifold Markets (here are the questions tagged “effective altruism”), and many other platforms. If some of the things you’re listing in this thread are predictions for questions available on some other platforms, you might be able to embed the question to display the current average predictions. 

Questions to consider

I think the questions from the ACX tournament are a great place to start (here they are on Manifold). Here are some of them (each about whether these things will be the case by January 1, 2024): 

  • Will Vladimir Putin be President of Russia?
  • Will a nuclear weapon be used in war (i.e. not a test or accident) and kill at least 10 people?
  • Will any new country join NATO?[1]
  • Will OpenAI release GPT-4?[2]
  • Will COVID kill at least 50% as many people in 2023 as it did in 2022?[3]
  • Will a cultured meat product be available in at least one US store or restaurant for less than $30?
  • Will a successful deepfake attempt causing real damage make the front page of a major news source?[4]
  • Will AI win a programming competition?[5]

And here are some other types of questions you might consider: 

[Image made with DALL-E's help.]
  1. ^

    Sweden and Finland completing the accession process would count as new countries.

  2. ^

    This resolves as positive if OpenAI publishes a paper or webpage implicitly declaring GPT-4 “released” or “complete”, showcasing some examples of what it can do, and offering some form of use in some reasonable timescale to some outside parties (researchers, corporate partners, people who want to play around with it, etc.). A product is “GPT-4” if it is either named GPT-4, or is a clear successor to GPT-3 to a degree similar to how GPT-3 was a successor to GPT-2 (and not branded as a newer version of GPT-3, e.g. ChatGPT3).

  3. ^

    According to https://ourworldindata.org/covid-deaths

  4. ^

    A deepfake is defined as any sophisticated AI-generated image, video, or audio meant to mislead. For this question to resolve positively, it must actually harm someone, not just exist. Valid forms of harm include but are not limited to costing someone money, or making some specific name-able person genuinely upset (not just “for all we know, people could have seen this and been upset by it”). The harm must come directly from the victim believing the deepfake, so somebody seeing the deepfake and being upset because the existence of deepfakes makes them sad does not count.

  5. ^

    This will resolve positively if a major news source reports that an AI entered a programming competition with at least two other good participants, and won the top prize. A good participant will be defined as someone who could be expected to perform above the level of an average big tech company employee; if there are at least twenty-five participants not specifically selected against being skilled, this will be considered true by default. The competition could be sponsored by the AI company as long as it meets the other criteria and is generally considered fair.

Comments (13)



This is really cool! Thank you for sharing. I was slightly surprised by how low these are:

  1. Ukraine-Russia war is over by EOY 2023 — 20%
  2. Putin deposed by EOY 2023 — 5%
  3. Putin leaving power by any means for at least 30 consecutive days (with start date in 2023) — 10%
    1. I think I'm also surprised by the difference between 2 and 3. Unless this is driven primarily by the possibility of a month-long disease (which doesn't weaken the regime)? (Maybe also it takes a while to depose someone in some cases? So e.g. he might go on a couple-month-long "holiday" while they figure things out?)

And these are really interesting: 

I think I'm also surprised by the difference between 2 and 3

I view deposing as involving something internal, quick, and forceful. I think if Putin retires willingly / goes quietly (even if under duress), this would count as him leaving power without being deposed. Likewise, if he died but not because of assassination, that wouldn't count as being deposed.

This is niche, but I've been wondering how well I can forecast the success of my TikToks. Vertical blue lines are 80% CIs, the horizontal blue line is the median estimate, a green dot means the actual result was within 10x of the median, and a red x means it wasn't.

[Calibration plot: 80% CIs and median forecasts vs. actual views for past videos.]
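For anyone curious how a plot like that might be put together, here's a minimal sketch with made-up numbers for the CIs and actuals (the real data are in the plot above):

```python
import matplotlib.pyplot as plt

# Hypothetical data: for each video, an 80% CI (lo, hi), a median forecast, and the actual views.
videos = [
    {"lo": 200, "median": 1_000, "hi": 8_000, "actual": 2_500},
    {"lo": 500, "median": 3_000, "hi": 40_000, "actual": 90_000},
    {"lo": 1_000, "median": 10_000, "hi": 150_000, "actual": 7_000},
]

for i, v in enumerate(videos):
    # Vertical blue line: the 80% credible interval.
    plt.plot([i, i], [v["lo"], v["hi"]], color="tab:blue")
    # Horizontal blue tick: the median estimate.
    plt.plot([i - 0.2, i + 0.2], [v["median"], v["median"]], color="tab:blue")
    # Green dot if the actual landed within 10x of the median, red x otherwise.
    if v["median"] / 10 <= v["actual"] <= v["median"] * 10:
        plt.scatter(i, v["actual"], color="green", marker="o")
    else:
        plt.scatter(i, v["actual"], color="red", marker="x")

plt.yscale("log")
plt.xlabel("video")
plt.ylabel("views")
plt.show()
```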

For the next year:

  1. 90% I have at least one video with 100k+ views
  2. 40% I have at least one video with 1M+ views
  3. 20% I make a serious effort to be popular again for at least a month
  4. 85% I have at least one video with 1M+ views conditional on me making a serious effort for at least a month

Thanks for sharing & good luck with the TikToks! I notice I'm curious about e.g. how likely 100M views on a video are, given 1M views (and similar questions).

Disclaimer: Ben is my manager.

Thanks! I previously found that my videos roughly fit a power-law distribution, and the one academic paper I could find on the subject also found that views were Zipf-distributed.

Since power-law distributions are scale-invariant, I think it's relatively easy to answer your question: $P(\text{views} > 100\text{M} \mid \text{views} > 1\text{M}) = P(\text{views} > 1\text{M} \mid \text{views} > 10\text{k})$, etc. In that original post I thought that my personal views roughly fit a power-law model of the form $\text{views} \propto \text{rank}^{-\alpha}$; I haven't looked at that recently though, and expect the coefficients have changed.
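To spell out the scale-invariance point with a toy calculation (the exponent below is an illustrative placeholder, not the fitted coefficient from the original post):

```python
def tail_prob_ratio(x_big, x_small, alpha=1.0):
    """P(views > x_big | views > x_small) under a Pareto (power-law) tail.
    alpha is an illustrative exponent, not the fitted value from the original post."""
    return (x_big / x_small) ** (-alpha)

# The conditional probability depends only on the ratio x_big / x_small,
# which is why P(100M+ | 1M+) equals P(1M+ | 10k+): both ratios are 100.
print(tail_prob_ratio(100e6, 1e6))  # 0.01 with alpha = 1
print(tail_prob_ratio(1e6, 10e3))   # 0.01 with alpha = 1
```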

I'll start off with a couple of predictions — all of these are very quick attempts (so not very resilient): 

  1. Vladimir Putin will be president of Russia — 85% (I made a quick base rate by taking a quick look at comparable regimes in this data, then also wanted to factor in the current chaos as an input & looked at this Metaculus question) (as mentioned above, I don't think this is resilient!)[1]
  2. Will a cultured meat product be available in at least one US store or restaurant for less than $30? — 50% 
  3. WHO declares a new Global Health Emergency in 2023 — 20% (extremely rough; I had estimated a base rate (probably poorly) for pandemics, multiplied it by a rough multiple, and don't have time to do any checks)
  1. ^

    In case anyone's interested, base rates seemed to give a 0.94 chance of him remaining president for another year, 0.89 for another two, 0.85 for three, 0.83 for four, and 0.81 for five. And here's the Metaculus question:

If Global Health Emergency is meant to mean a public health emergency of international concern (PHEIC), then the base rate is roughly 45% = 7 / 15.5: PHEICs have been declared 7 times since the relevant regulations came into force in mid-2007 (about 15.5 years ago).
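As a side note: 7/15.5 is a rate of declarations per year, and under a Poisson assumption (a modelling choice added here for illustration, not something stated above) the chance of at least one new PHEIC in a given year comes out slightly lower than the raw rate:

```python
import math

declarations = 7    # PHEICs declared since the IHR (2005) came into force in mid-2007
years = 15.5        # mid-2007 through the end of 2022
rate = declarations / years  # ~0.45 declarations per year

# Assuming declarations arrive as a Poisson process (an assumption for illustration),
# the probability of at least one declaration in a single year:
p_at_least_one = 1 - math.exp(-rate)
print(f"{rate:.2f} per year -> P(at least one in 2023) = {p_at_least_one:.2f}")  # ~0.36
```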

Great, thanks! Really appreciate this; I was really off — I think I had quickly taken my number/base rate for pandemics, and referenced a list of PHEICs I thought was for the 21st century without checking or noticing that this only starts in 2007. I might just go for this base rate, then. 

Hey, thanks for starting this!

Misha beat me to it re: PHEIC base rates, but I'd also be interested in the 50% figure for cultured meat, given FDA approval last month for UPSIDE Foods (formerly Memphis Meats). How much of the 50% figure is driven by pending USDA approval, vs. time to market, vs. the $30 figure?

I was thinking very loosely about both, without doing any proper homework. I had the sense that USDA approval would take a while (for a sense of how loosely I approached this: I didn't remember which of FDA or USDA had already approved this), and was under the vague impression (from a conversation?) that this wouldn't go straight to stores or chains, but would instead go to fancy restaurants first (just now confirmed that the restaurant listed here is very high-end). But then again, I vaguely expected ~full-enough approval in 2023, and I felt like "available for $30" could happen in lots of ways (e.g. there's some "tasting" option that's tiny and therefore decently cheap), etc. So I went with 50% without thinking much longer about it.

I went and checked just now, however, and am seeing this article (Nov 16), which notes 

Upside has previously said “end of 2022” as a launch date for its cultivated chicken. The company must still secure approvals from the United States Department of Agriculture (USDA) before it can actually sell to consumers. In a statement Upside promised more details on timing and launch to follow.

The link is to an article from April, and although companies might be over-optimistic and might over-promise, I guess that means that this timeline is vaguely feasible, which pushes me towards thinking that I should be more optimistic. 

Also, I just checked Metaculus, and it appears that in April, forecasters thought there was roughly a 19% chance that cultured meat would be available for sale in the US by the end of 2022 (the community prediction was 37%), which seems wild, but again makes me think that I was more pessimistic than necessary before: 

Oh, I'm looking through other Metaculus questions, and here's another relevant one: Will [the US] approve a cultivated meat product for consumption before April 2023? (Community prediction: 52%)

The comments are quite interesting, and imply that Upside Foods doesn't have enough meat to "sell soon", and also:

Meanwhile, FDA says it approved nothing:

"The voluntary pre-market consultation is not an approval process. Instead, it means that after our careful evaluation of the data and information shared by the firm, we have no further questions at this time about the firm’s safety conclusion."

According to the Formal Agreement Between FDA and USDA Regarding Oversight of Human Food Produced Using Animal Cell Technology Derived from Cell Lines of USDA-amenable Species, the next 2 stages in the FDA regulatory phase are:

"Oversee initial cell collection and the development and maintenance of qualified cell banks, including by issuing regulations or guidance and conducting inspections, as appropriate."

&

"Oversee proliferation and differentiation of cells through the time of harvest, including by issuing regulations or guidance and conducting inspections, as appropriate."

After that, it looks to me like there's another phase of regulatory steps that's centered more in the US Department of Agriculture.

The bear market in stocks will continue, and the S&P 500 will decline an additional 30-45%. VC-backed and unprofitable early stage companies will continue to remain at the center of the storm as access to capital further deteriorates. The crypto bear market also accelerates. Open Philanthropy has to cut its spending target again and GiveWell again falls short of its goals. The EA community realizes how linked it was to frothy financial asset valuations.

EAs appeal to Warren Buffett and Mark Zuckerberg to consider filling some of the gaps, and one or both of them commits to doing so.
