Hi Milan,
So far it has been used to back the donor lottery (this has no net effect in expectation, but requires funds to fill out each block and handle million-dollar swings up and down), make a grant to ALLFED, fund Rethink Priorities' work on nuclear war, and provide small seed funds for some researchers investigating two implausible but consequential-if-true interventions (including the claim that creatine supplements boost cognitive performance for vegetarians).
Mostly it remains invested. In practice I have usually been able to recommend major grants to other funders, so this fund is used when no other route is more appealing. Its grants have often involved special circumstances or restricted funding, and they should not be taken as recommendations to other donors to donate to the same things at the current margin in their circumstances.
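To spell out the 'no net effect in expectation' point about backing the donor lottery, here is a minimal arithmetic sketch; the block size \(B\), contribution total \(S\), and the rule that an unclaimed draw reverts the pot to the backstop are illustrative assumptions, not the exact CEA terms. Suppose participant \(i\) contributes \(c_i\), with \(S = \sum_i c_i \le B\), and directs the full block with probability \(c_i / B\):

\[
\mathbb{E}[\text{backstop's net outlay}] \;=\; \underbrace{\frac{S}{B}\,(B - S)}_{\text{tops up a winning block}} \;-\; \underbrace{\Bigl(1 - \frac{S}{B}\Bigr) S}_{\text{pot reverts to backstop}} \;=\; 0.
\]

So the backstop is neutral in expectation, even though it has to hold enough capital to cover a full block (and absorb the resulting swings) on any particular draw.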
There is some effect in this direction, but not a sudden cliff. There is plenty of room to generalize. We create models of alternative coherent lawlike realities, e.g. the Game of Life, or physicists interested in modeling different physical laws.
Thanks David, this looks like a handy paper!
Given all of this, we'd love feedback and discussion, either as comments here, or as emails, etc.
I don't agree with the argument that infinite impacts of our choices are of Pascalian improbability; in fact, I think we probably face them as a consequence of one-boxing decision theory, and some of the more plausible routes to local infinite impact are missing from the paper:
Here are two posts from Wei Dai, discussing the case for some things in this vicinity (renormalizing in light of the opportunities):
https://www.lesswrong.com/posts/Ea8pt2dsrS6D4P54F/shut-up-and-divide
https://www.lesswrong.com/posts/BNbxueXEcm6dCkDuk/is-the-potential-astronomical-waste-in-our-universe-too
Thanks for this detailed post on an underdiscussed topic! I agree with the broad conclusion that extinction via partial population collapse and infrastructure loss, rather than via a catastrophe potent enough to leave no or almost no survivors (or indirectly enabling some later extinction-level event), has very low probability. Some comments:
It sounds like you're assuming a common scale between the theories (maximizing expected choice-worthiness).
A common scale isn't necessary for my conclusion (I think you're substituting it for a stronger claim?), and I didn't invoke it. As I wrote in my comment, on negative utilitarianism, s-risks that are many orders of magnitude smaller than worse ones, without correspondingly huge differences in probability, get ignored in favor of the latter. On variance normalization, or bargaining solutions, or a variety of methods that don't amount to dictatorship of one theory, the weight for an NU view is not going to spend its decision-influence on the former rather than the latter when they're both non-vanishing possibilities.
I would think something more like your hellish example + billions of times more happy people would be more illustrative. Some EAs working on s-risks do hold lexical views.
Sure (which will make the s-risk definition even more inapt for those people), and those scenarios will be approximately ignored vs scenarios that are more like 1/100 or 1/1000 being tortured on a lexical view, so there will still be the same problem of s-risk not tracking what's action-guiding or a big deal in the history of suffering.
Just a clarification: s-risks (risks of astronomical suffering) are existential risks.
This is not true by the definitions given in the original works that defined these terms. Existential risk is defined to only refer to things that are drastic relative to the potential of Earth-originating intelligent life:
where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.
Any x-risks are going to be in the same ballpark of importance if they occur, and immensely important to the history of Earth-originating life: any x-risk is a big deal relative to that future potential.
S-risk is defined as just any case where there's vastly more total suffering than in Earth's history so far, not one where suffering is substantial relative to the downside potential of the future.
S-risks are events that would bring about suffering on an astronomical scale, vastly exceeding all suffering that has existed on Earth so far.
In an intergalactic civilization making heavy use of most stars, that would be met both by situations where things are largely utopian but 1 in 100 billion people per year get a headache, and by a hell where everyone is tortured all the time. These are both defined as s-risks, but the bad elements in the former are microscopic compared to the latter, or to the expected value of suffering.
With even a tiny weight on views valuing good parts of future civilization the former could be an extremely good world, while the latter would be a disaster by any reasonable mixture of views. Even with a fanatical restriction to only consider suffering and not any other moral concerns, the badness of the former should be almost completely ignored relative to the latter if there is non-negligible credence assigned to both.
So while x-risks are all critical for civilization's upside potential if they occur, almost all s-risks will be incredibly small relative to the potential for suffering, and something being an s-risk doesn't mean its occurrence would be an important part of the history of suffering, when both large and small s-risks have non-vanishing credence.
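To put rough numbers on the headache-versus-hell comparison above (placeholder figures, not from the s-risk paper): with \(N\) people alive per year, a per-moment disutility weight \(w_h\) for a headache versus \(w_t\) for torture, and a headache lasting a fraction \(d \ll 1\) of the year,

\[
\frac{\text{suffering in the headache world}}{\text{suffering in the hell world}} \;\approx\; \frac{10^{-11}\, N \, d \, w_h}{N \, w_t} \;\ll\; 10^{-11},
\]

since \(d \ll 1\) and \(w_h \ll w_t\). Both count as s-risks under the definition, but the first is negligible next to the second in any expected-suffering calculation.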
From the s-risk paper:
We should differentiate between existential risks (i.e., risks of “mere” extinction or failed potential) and risks of astronomical suffering (“suffering risks” or “s-risks”). S-risks are events that would bring about suffering on an astronomical scale, vastly exceeding all suffering that has existed on Earth so far.
The above distinctions are all the more important because the term “existential risk” has often been used interchangeably with “risks of extinction”, omitting any reference to the future’s quality. Finally, some futures may contain both vast amounts of happiness and vast amounts of suffering, which constitutes an s-risk but not necessarily a (severe) x-risk. For instance, an event that would create 10^25 unhappy beings in a future that already contains 10^35 happy individuals constitutes an s-risk, but not an x-risk.
If one were to make an analog to the definition of s-risk for loss of civilization's potential it would be something like risks of loss of potential welfare or goods much larger than seen on Earth so far. So it would be a risk of this type to delay interstellar colonization by a few minutes and colonize one less star system. But such 'nano-x-risks' would have almost none of the claim to importance and attention that comes with the original definition of x-risk. Going from 10^20 star systems to 10^20 star systems less one should not be put in the same bucket as premature extinction or going from 10^20 to 10^9. So long as one does not have a completely fanatical view and gives some weight to different perspectives, longtermist views concerned with realizing civilization's potential should give way on such minor proportional differences to satisfy other moral concerns, even though the absolute scales are larger.
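For scale, using the same numbers (simple arithmetic, not a modeling claim): losing one star system out of \(10^{20}\) is a fractional loss of potential of

\[
\frac{10^{20} - (10^{20} - 1)}{10^{20}} \;=\; 10^{-20},
\]

whereas premature extinction, or going from \(10^{20}\) to \(10^{9}\) star systems, forfeits

\[
\frac{10^{20} - 10^{9}}{10^{20}} \;=\; 1 - 10^{-11} \;\approx\; 1
\]

of that potential, i.e. essentially all of it. Lumping the two together under one label obscures a twenty-order-of-magnitude difference in stakes.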
Bostrom's Astronomical Waste paper specifically discusses such things, but argues that, since their impact would be so small relative to existential risk, they should not be a priority (at least in utilitarian-ish terms).
This disanalogy between the x-risk and s-risk definitions is a source of ongoing frustration to me, as s-risk discourse thus often conflates hellish futures (which are existential risks, and especially bad ones), or possibilities of suffering on a scale significant relative to the potential for suffering (or what we might expect), with bad events many orders of magnitude smaller, or with futures that are utopian by common-sense standards and compared to our world or the downside potential.
I wish people interested in s-risks that are actually near worst-case scenarios, or that are large relative to the background potential or expectation for downside, would use a different word or definition, one that would make it possible to say things like 'people broadly agree that a future constituting an s-risk is a bad one, and not a utopia' or at least 'the occurrence of an s-risk is of the highest importance for the history of suffering.'
The $1B commitment attributed to Musk early on is different from the later Microsoft investment. The former went away despite the media hoopla.
It's invested in unleveraged index funds, but was out of the market for the pandemic crash and bought in at the bottom. Because it's held with Vanguard as a charity account, it's not easy to invest as aggressively as I invest my personal funds earmarked for donation, given the lower risk-aversion appropriate for altruistic investors compared to those investing for personal consumption, although I am exploring options in that area.
The fund has been used to finance the CEA donor lottery, and to make grants to ALLFED and Rethink Charity (for nuclear war research). However, it should be noted that I only recommend grants for the fund that I think aren't a better fit for other funding sources I can make recommendations to, often with special circumstances or restricted funding, and the grants it has made should not be taken as recommendations from me for other donors to donate to the same things at the margin. [This applies to the object-level grants; using donor lotteries is generally sensible for a wide variety of donation views.]
Not particularly.