Have there ever been any efforts to set up EA-oriented funding
organisations that focus on investing donations in such a way as to fund
high-utility projects in very suitable states of the world? They could be pure
investment vehicles that have high expected utility, but that lose all their
money by some point in time in the modal case.
The idea would be something like this:
For a certain amount of dollars, to maximise utility, to first order, one has to
decide how much to spend on which causes and how to distribute the spending over
time.
However, with some effort, one could find investments that pay off conditionally
on states of the world where specific interventions might have very high
utility. Some super naive examples would be a long-dated option structure that
pays off if the price for wheat explodes, or a CDS that pays off if JP Morgan
collapses. This would then allow organisations to intervene through targeted
measures, for example, food donations.
This is similar to the concept of a “tail hedge” - an investment that pays off
massively when other investments do poorly, that is when the marginal utility of
owning an additional dollar is very high.
Usually, one would expect such investments to carry negatively, that is, to be
costly over time, possibly even with negative unconditional expected returns.
However, if an EA utility function is sufficiently different from that of a
typical market participant, this need not be the case, even in dollar terms
(though I'm not certain of this).
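As a toy illustration of that last point, here is a two-state sketch. All the numbers (crisis probability, payoff, marginal utilities) are invented for illustration, not estimates: the point is only that a hedge with negative expected dollar returns can still have positive expected utility for a funder whose marginal utility of a dollar spikes in the crisis state.

```python
# Two-state toy model with invented numbers: a tail hedge that loses money
# unconditionally but gains utility for a crisis-focused funder.
p_crisis = 0.02   # assumed probability of the crisis state (e.g. food-price spike)
premium = 1.0     # cost of the hedge today, in dollars
payoff = 30.0     # hedge payout in the crisis state, in dollars

# Unconditional expected dollar return: negative, i.e. the hedge "carries negatively".
ev_dollars = p_crisis * payoff - premium          # 0.02*30 - 1 = -0.4

# Assumed marginal utility per dollar in each state: a dollar is worth far
# more when the targeted intervention (e.g. food donations) is urgently needed.
mu_normal, mu_crisis = 1.0, 50.0

# Expected utility of buying the hedge, relative to keeping the premium as cash.
eu_hedge = p_crisis * payoff * mu_crisis - premium * mu_normal   # 30 - 1 = 29
print(ev_dollars, eu_hedge)   # -0.4 in dollar EV, +29.0 in utility terms
```

Whether real instruments clear this bar once premia, basis risk, and counterparty risk are accounted for is exactly the quantitative question the post raises.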
Clearly, the arguments here would have to be made a lot more rigorous and
quantitative to see whether this might be attractive at all. I’d be interested
in any references etc.
This is some advice I wrote about doing back-of-the-envelope calculations
(BOTECs) and uncertainty estimation, which are often useful as part of
forecasting. This advice isn’t supposed to be a comprehensive guide by any
means. The advice originated from specific questions that someone I was
mentoring asked me. Note that I’m still fairly inexperienced with forecasting.
If you’re someone with experience in forecasting, uncertainty estimation, or
BOTECs, I’d love to hear how you would expand or deviate from this advice.
1. How to do uncertainty estimation?
1. A BOTEC estimates a single number from a series of calculations. So I
think a good way to estimate uncertainty is to assign credible intervals
to each input of the calculation, then propagate the uncertainty in the
inputs through to the output of the calculation.
1. I recommend Squiggle for this (the Python version is squigglepy).
2. How to assign a credible interval:
1. Normally I choose a 90% interval. This is the default in Squiggle.
2. If you have a lot of data about the thing (say, >10 values), and the
sample of data doesn’t seem particularly biased, then it might be
reasonable to use the standard deviation of the data. (Measure this in
log-space if you have reason to think it’s distributed log-normally -
see next point about choosing the distribution.) Then compute the 90%
credible interval as +/- 1.645*std, assuming a (log-)normal distribution.
3. How to choose the distribution:
1. It’s usually a choice between log-normal and normal.
2. If the variable seems like the sort of thing that could vary by orders
of magnitude, then log-normal is best. Otherwise, normal.
1. You can use the data points you have, or the credible interval you
chose, to inform this.
3. When in doubt, I’d say that most of the quantities in a BOTEC can vary
by orders of magnitude, so I’d default to log-normal.
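The interval-propagation recipe above can be sketched with a plain Monte Carlo simulation. The BOTEC itself and all of its input intervals below are invented for illustration; Squiggle or squigglepy would do the same thing more conveniently:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# A 90% credible interval (lo, hi) for a log-normal variable maps to log-space
# via mu = midpoint of the log bounds, sigma = (log hi - log lo) / (2 * 1.645).
def lognormal_from_90ci(lo, hi, size):
    mu = (np.log(lo) + np.log(hi)) / 2
    sigma = (np.log(hi) - np.log(lo)) / (2 * 1.645)
    return rng.lognormal(mu, sigma, size)

# Hypothetical BOTEC: cost-effectiveness = people reached * effect per person / cost.
people = lognormal_from_90ci(1e3, 1e5, N)    # 90% CI: 1k to 100k people
effect = lognormal_from_90ci(0.01, 0.5, N)   # 90% CI: 0.01 to 0.5 units/person
cost   = lognormal_from_90ci(1e4, 1e6, N)    # 90% CI: $10k to $1M

output = people * effect / cost
lo_, hi_ = np.percentile(output, [5, 95])    # propagated 90% interval
print(f"90% CI: [{lo_:.3g}, {hi_:.3g}] units per dollar")
```

Sampling each input independently and pushing the samples through the arithmetic is the simplest form of uncertainty propagation; correlated inputs would need joint sampling.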
TL;DR: Someone should probably write a grant to produce a spreadsheet/dataset of
past instances where people claimed a new technology would lead to societal
catastrophe, with variables such as “multiple people working on the tech
believed it was dangerous.”
Slightly longer TL;DR: Some AI risk skeptics are mocking people who believe AI
could threaten humanity’s existence, saying that many people in the past
predicted doom from some new tech. There is seemingly no dataset which lists and
evaluates such past instances of “tech doomers.” It seems somewhat ridiculous*
to me that nobody has grant-funded a researcher to put together a dataset with
variables such as “multiple people working on the technology thought it could be
very bad for society.”
*Low confidence: could totally change my mind
I have asked multiple people in the AI safety space whether they were aware of
any kind of "dataset for past predictions of doom (from new technology)". There have
been some articles and arguments floating around recently such as "Tech Panics,
Generative AI, and the Need for Regulatory Caution", in which skeptics say we
shouldn't worry about AI x-risk because there are many past cases where people
in society made overblown claims that some new technology (e.g., bicycles,
electricity) would be disastrous for society.
While I think it's right to consider the "outside view" on these kinds of
things, I think that most of these claims 1) ignore examples of where there were
legitimate reasons to fear the technology (e.g., nuclear weapons, maybe
synthetic biology?), and 2) imply the current worries about AI are about as
baseless as claims like "electricity will destroy society," whereas I would
argue that the claim "AI x-risk is >1%" stands up quite well against most of
these historical comparisons.
(These claims also ignore the anthropic argument/survivor bias: if they ever
were right about doom, we wouldn't be around to observe it. But this is a less
central point.)
I especially would like to see a dataset like this put together.
Now you can explore more relationships between more forecast questions on
Metaculus, with conditional pairs that feature question group subquestions. To
submit your own:
1. Click 'Write a Question' on Metaculus
2. Select 'conditional pair' as the Question Type
3. Click 'Select Parent' and/or 'Select Child'
4. Search for the subquestion name, which will be indicated in parentheses
after the group name, or paste in the URL of the subquestion
Note: You can copy the URL of a subquestion by visiting the question group page,
clicking the ‘...’ more options menu, and selecting the ‘Copy Link’ option next
to the subquestion you’re focused on.
In addition to submitting a new conditional yourself, you can also request
questions here in the new question discussion post.
Here are some new subquestion conditional pairs to start forecasting on:
The next technological revolution could come this century and could last less
than a decade
This is a quickly written note that I don't expect to have time to polish.
This note aims to bound reasonable priors on the date and duration of the next
technological revolution, based primarily on the timings of (i) the rise of homo
sapiens; (ii) the Neolithic Revolution; (iii) the Industrial Revolution. In
particular, the aim is to determine how sceptical our prior should be that the
next technological revolution will take place this century and will occur very
quickly.
The main finding is that the historical track record is consistent with the next
technological revolution taking place this century and taking just a few years.
This is important because it partially undermines the claims that (i) the “most
important century” hypothesis is overwhelmingly unlikely and (ii) the burden of
evidence required to believe otherwise is very high. It also suggests that the
historical track record doesn’t rule out a fast take-off.
I expect this note not to be particularly surprising to those familiar with
existing work on the burden of proof for the most important century hypothesis.
I thought this would be a fun little exercise though, and it ended up pointing
in a similar direction.
* This is based on very little data, so we should put much more weight on other
evidence than this prior
* I don’t think this is problematic for arguing that the burden of evidence
required to think a technological revolution this century is likely is not
prohibitively high
* But these priors probably aren’t actually useful for forecasting – they
should be washed out by other evidence
* My calculations use the non-obvious assumption that the wait times between
technological revolutions and the durations of technological revolutions
decrease by the same factor for each revolution
* It’s reasonable to expect the wait times and durations to decrease, e.g.
because each revolution accelerates subsequent change
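To make the constant-factor assumption concrete, here is the arithmetic with rough conventional dates. All three timings are coarse assumptions, and this is a prior-bounding sketch rather than a forecast:

```python
# Years before present; rough conventional figures, not precise dates.
rise_sapiens = 300_000   # rise of homo sapiens
neolithic    = 12_000    # Neolithic Revolution
industrial   = 250       # Industrial Revolution

# Wait times between successive revolutions.
wait1 = rise_sapiens - neolithic   # ~288,000 years
wait2 = neolithic - industrial     # ~11,750 years

# Assumption from the note: each wait shrinks by the same factor.
factor = wait1 / wait2             # ~24.5
next_wait = wait2 / factor         # ~480 years after the Industrial Revolution
years_from_now = next_wait - industrial   # ~230 years from the present

print(f"shrink factor ~{factor:.1f}, next revolution ~{years_from_now:.0f} years away")
```

A point estimate a couple of centuries out, from a prior spread over at least an order of magnitude, is consistent with (though far from proof of) a revolution this century.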
Nice to see that there is now a sub-forum dedicated to Forecasting, this seems
like a good place to ask what might be a silly question.
I am doing some work on integrating forecasting with government decision making.
There are several roadblocks to this, but one of them is generating good
questions (See Rigor-Relevance trade-off among other things).
One way to avoid this might be to simply ask questions about the targets the
government has already set for itself; a lot of these are formulated in a
SMART way and are thus pretty forecastable. Forecasts on whether the
government will reach its target also seem like they will be immediately
actionable for decision makers. This seemed like a decent strategy to me, but I
think I have not seen them mentioned very often. So my question is simple: Is
there some sort of major problem here I am overlooking?
The one major problem I could think of is that there might be an incentive for a
sort of circular reasoning: If forecasters in aggregate think that the
government is not on track to achieve a certain target, then the government
might announce new policy to remedy the situation. Smart Forecasters might see
this coming and start their initial forecast higher.
I think you can balance this by having forecasters forecast on intermediate
targets as well. For example: Most countries have international obligations to
reduce their CO2 emissions by X% by 2030. Instead of just forecasting the 2030
target, you could ask for forecasts on all the intermediate years as well.
SMART stands for: Specific, Measurable, Assignable, Realistic, Time-related
- See https://en.wikipedia.org/wiki/SMART_criteria
For a long time I found this surprisingly nonintuitive, so I made a spreadsheet
that did it, which then expanded into some other things.
* Spreadsheet here, which has four tabs based on different views on how best to
pick the fair place to bet where you and someone else disagree. (The fourth
tab I didn't make at all, it was added by someone (Luke Sabor) who was
passionate about the standard deviation method!)
* People have different beliefs / intuitions about what's fair!
* An alternative to the mean probability would be to use the product of the
odds, i.e. the geometric mean of the odds.
Then if one person thinks .9 and the other .99, the "fair bet" will have
implied probability more than .945.
* The problem with using the geometric mean of probabilities can be
highlighted if player 1 estimates 0.99 and player 2 estimates 0.01.
This would actually lead player 2 to contribute ~90% of the bet for an EV
of 0.09, while player 1 contributes ~10% for an EV of 0.89. I don't like
that bet. In this case, mean prob and Z-score mean both agree at 50%
contribution and equal EVs.
* "The tradeoff here is that using Mean Prob gives equal expected values
(see underlined bit), but I don't feel it accurately reflects "put your
money where your mouth is". If you're 100 times more confident than the
other player, you should be willing to put up 100 times more money. In
the Mean prob case, me being 100 times more confident only leads me to
put up 20 times the amount of money, even though the expected values are
equal."
* Then I ended up making an explainer video because I was excited about it
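The comparison above can be made concrete with a small sketch. The stake convention here (the "yes" side posts q of the pot, the "no" side posts 1-q, winner takes the pot) is one common choice, and the numbers reproduce the 0.99 vs 0.01 example:

```python
# Fair betting odds for two players who disagree: p1 and p2 are their
# probabilities for the same event.
def stakes_and_evs(p1, p2, q, pot=1.0):
    """Bet at market probability q: the 'yes' player stakes q*pot, the
    'no' player stakes (1-q)*pot; the winner takes the whole pot."""
    ev_yes = (p1 - q) * pot    # EV for the player betting 'yes' (believes p1)
    ev_no  = (q - p2) * pot    # EV for the player betting 'no'  (believes p2)
    return q, 1 - q, ev_yes, ev_no

p1, p2 = 0.99, 0.01

# Method 1: arithmetic mean of probabilities -> equal stakes, equal EVs.
q_mean = (p1 + p2) / 2                      # 0.5
# Method 2: geometric mean of probabilities -> player 2 posts ~90% of the pot.
q_geo = (p1 * p2) ** 0.5                    # ~0.0995

print(stakes_and_evs(p1, p2, q_mean))  # stakes 0.5/0.5, EVs 0.49/0.49
print(stakes_and_evs(p1, p2, q_geo))   # stakes ~0.10/~0.90, EVs ~0.89/~0.09
```

This reproduces the numbers quoted above: the geometric mean makes the sceptical player post ~90% of the pot for an EV of ~0.09, while the confident player posts ~10% for an EV of ~0.89.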
Other spreadsheets I've seen in the space:
* Brier score betting (a fifth way to figure out the correct bet ratio!)
* Posterior Forecast Calculator
* Inferring Probabilities from PredictIt Prices
These three all by William Kiely.
Does anyone else know of any? Or want to argue for one method over another?
On January 6, 2022, at 4pm GMT, I am going to host a Gather.town meetup to go
through Scott Alexander's prediction contest in Blind Mode, which means you
only spend a maximum of 5 minutes on each question.
Because of that, and also possibly because these are the rules (I'm finding
out), we likely won't collaborate (though if the rules allow it, maybe we
will!). But if you've been wanting to enter and haven't yet made time, come
along, and we'll set some pomodoros and have a good time!
Event link here: