2022 has almost wrapped up — do you have EA-relevant predictions you want to register for 2023? List some in this thread!
You’re encouraged to update them if others give you feedback, but we’ll save a copy of the thread on January 6, after which the predictions will be “registered.”
Note that there's also a forecasting & estimation subforum now — consider joining or exploring it!
Suggested format: Prediction - chances (optional elaboration)
Examples (with made-up numbers!):
- Will WHO declare a new Global Health Emergency in 2023? Yes. 60%
- WHO declares a new Global Health Emergency in 2023 - 60% (I’m not very resilient on this — if I thought about it/did more research for another hour, I could see myself moving to 10-80%)
These can be low-effort! For inspiration, see this bunch of predictions from 2021 on Astral Codex Ten.
Once someone has registered a prediction, feel free to reply to their comment and register your own prediction for that statement or question.
You can also suggest topics for people to predict on, even if you yourself don’t want to register a prediction.
Other opportunities to forecast what will happen in 2023
- Astral Codex Ten (ACX) is running a prediction contest, with 50 questions about the state of the world at the end of 2023 (you don’t have to predict on all the questions). There will be at least four $500 prizes. (Enter your predictions by 10 January or 1 February, depending on how you want to participate.)
- You can also forecast on Metaculus (question categories here), Manifold Markets (here are the questions tagged “effective altruism”), and many other platforms. If some of the predictions you’re listing in this thread correspond to questions on those platforms, you might be able to embed the question to display the current average prediction.
Questions to consider
- Will Vladimir Putin be President of Russia?
- Will a nuclear weapon be used in war (i.e. not a test or accident) and kill at least 10 people?
- Will any new country join NATO?
- Will OpenAI release GPT-4?
- Will COVID kill at least 50% as many people in 2023 as it did in 2022?
- Will a cultured meat product be available in at least one US store or restaurant for less than $30?
- Will a successful deepfake attempt causing real damage make the front page of a major news source?
- Will AI win a programming competition?
And here are some other types of questions you might consider:
- Questions about how the numbers listed in this post will change over 2023: What happens on the average day?
- Explore EA-related questions on Metaculus, like “Will China have approved cultivated meat for human consumption?”
- Random topics that might spark someone:
  - How much of a big deal will fusion be?
  - Will some new medical treatments be in widespread use or important in some specified ways?
  - Will there be a universal flu vaccine available (approved by the FDA or the European Medicines Agency)?
Resolution details
- New NATO members: Sweden and Finland completing the accession process would count as new countries.
- GPT-4 release: This resolves positively if OpenAI publishes a paper or webpage implicitly declaring GPT-4 “released” or “complete”, showcasing some examples of what it can do, and offering some form of use on a reasonable timescale to some outside parties (researchers, corporate partners, people who want to play around with it, etc.). A product counts as “GPT-4” if it is either named GPT-4, or is a clear successor to GPT-3 to a degree similar to how GPT-3 was a successor to GPT-2 (and is not branded as a newer version of GPT-3, e.g. “ChatGPT3”).
- COVID deaths: Death counts according to https://ourworldindata.org/covid-deaths.
- Deepfakes: A deepfake is defined as any sophisticated AI-generated image, video, or audio meant to mislead. For this question to resolve positively, the deepfake must actually harm someone, not just exist. Valid forms of harm include, but are not limited to, costing someone money or making some specific, nameable person genuinely upset (not just “for all we know, people could have seen this and been upset by it”). The harm must come directly from the victim believing the deepfake; somebody seeing the deepfake and being upset because the existence of deepfakes makes them sad does not count.
- AI programming win: This resolves positively if a major news source reports that an AI entered a programming competition with at least two other good participants and won the top prize. A good participant is defined as someone who could be expected to perform above the level of an average big tech company employee; if there are at least twenty-five participants not specifically selected against being skilled, this will be considered true by default. The competition may be sponsored by the AI company as long as it meets the other criteria and is generally considered fair.