
I don’t claim originality for any content here; people who’ve been influential on this include Nick Beckstead, Phil Trammell, Toby Ord, Aron Vallinder, Allan Dafoe, Matt Wage, and, especially, Holden Karnofsky and Carl Shulman. Everything tentative; errors all my own. 

Introduction

Here are two distinct views:

Strong Longtermism := The primary determinant of the value of our actions is the effects of those actions on the very long-run future.
The Hinge of History Hypothesis (HoH) :=  We are living at the most influential time ever. 

It seems that, in the effective altruism community as it currently stands, those who believe longtermism generally also assign significant credence to HoH; I’ll precisify ‘significant’ as >10% when ‘time’ is used to refer to a period of a century, but my impression is that many longtermists I know would assign >30% credence to this view.  It’s a pretty striking fact that these two views are so often held together — they are very different claims, and it’s not obvious why they should so often be jointly endorsed.

This post is about separating out these two views and introducing a view I call outside-view longtermism, which endorses longtermism but finds HoH very unlikely. I won’t define outside-view longtermism here, but the spirit is that — as our best guess — we should expect the future to continue the trends of the past, and we should be sceptical of the idea that now is a particularly unusual time. I think that outside-view longtermism is currently a neglected position within EA and deserves some defense and exploration.

Before we begin, I’ll note I’m not making any immediate claim about the actions that follow from outside-view longtermism. It’s plausible to me that whether we have 30% or just 0.1% credence in HoH, we should still be investing significant resources into the activities that would be best were HoH true. The most obvious implication, however, is regarding what proportion of resources longtermist EAs should be spending on near-term existential risk mitigation versus what I call ‘buck-passing’ strategies like saving or movement-building. If you think that some future time will be much more influential than today, then a natural strategy is to ensure that future decision-makers, who you are happy to defer to, have as many resources as possible when some future, more influential, time comes. So in what follows I’ll sometimes use this as the comparison activity.


Getting the definitions down

We’ve defined strong longtermism informally above and in more detail in this post.

For HoH, defining ‘most influential time’ is pretty crucial. Here’s my proposal: 

a time t_i is more influential (from a longtermist perspective) than a time t_j iff you would prefer to give an additional unit of resources,[1] that has to be spent doing direct work (rather than investment), to a longtermist altruist living at t_i rather than to a longtermist altruist living at t_j.

(I’ll also use the term ‘hingier’ to be synonymous with ‘more influential’.)

This definition gets to the nub of the matter, for me. It seems to me that, for most times in human history, longtermists ought, if they could, to have been investing their resources (via values-spreading as well as literal investment) in order that they have greater influence at hingey moments when one’s ability to influence the long-run future is high. It’s a crucial question for longtermists whether now is a very hingey moment, and so whether they should be investing or doing direct work.

It’s significant that my definition focuses on how much influence a person at a time can have, rather than how much influence occurs during a time period. It could be the case, for example, that the 20th century was a bigger deal than the 17th century, but that, because there were 1/5th as many people alive during the 17th century, a longtermist altruist could have had more direct impact in the 17th century than in the 20th century. 

It’s also significant that, on this definition, you need to take into account the level of knowledge and understanding of the average longtermist altruist at the time. This seems right to me. For example, hunter-gatherers could contribute more to tech speed-up than people now (see Carl Shulman’s post here); but they wouldn’t have known, or been in a position to know, that trying to innovate was a good way to benefit the very long-run future. (In that post, Carl mentions some reasons for thinking that such impact was knowable, but prior to the 17th century people didn’t even have the concept of expected value, so I’m currently sceptical.) 

So I’m really bundling two different ideas into the concept of ‘most influential’: how pivotal a particular moment in time is, and how much we’re able to do something about that fact.  Perhaps we’re at a really transformative moment now, and we can, in principle, do something about it, but we’re so bad at predicting the consequences of our actions, or so clueless about what the right values are, that it would be better for us to save our resources and give them to future longtermists who have greater knowledge and are better able to use their resources, even at that less pivotal moment. If this were true, I would not count this time as being exceptionally influential. 


Strong longtermism even if HoH is not true

I mentioned that it’s surprising that strong longtermism and significant credence in HoH are so often held together. But here’s one reason why you might think you should put significant credence in HoH iff you believe longtermism: You might accept that most value is in the long-run future, but think that, at most times in history so far, we’ve been unable to do anything about that value. So it’s only because HoH is true that longtermism is true. But I don’t think that’s a good argument, for a few reasons. 

First, given the stakes involved, it’s plausible that even a small chance of being at a period of unusually high extinction or lock-in risk is enough for working on extinction risk or lock-in scenarios to be higher expected value than short-run activities. So, you can reasonably think that (i) HoH is unlikely (e.g. 0.1% likely), but that (ii) when combined with the value of being able to influence the value of the long-run future, a small chance of HoH being true is enough to make strong longtermism true. 

Second, even if we’re merely at a relatively hingey time — just not the most hingey time — as long as there are some actions that have persistent long-run effects that are positive in expected value, that’s plausibly sufficient for strong longtermism to be true.

Third, you could even be certain that HoH is false, and that there are currently no direct activities with persistent impacts, but still believe that longtermism is true if, as is natural to suppose, you have the option of investing resources, enabling future longtermist altruists to take action at a time which is more influential.

Arguments for HoH

In this post, I’m going to simply state, but not discuss, some views that entail something like HoH, and some arguments for thinking that HoH is likely. Each of these views and arguments requires a lot more discussion, and many have already received much more discussion elsewhere.

There are two commonly held views that entail something like HoH:


The Value Lock-in view

Most starkly, according to a view regarding AI risk most closely associated with Nick Bostrom and Eliezer Yudkowsky: it’s likely that we will develop AGI this century, and it’s likely that AGI will quickly transition to superintelligence. How we handle that transition determines how the entire future of civilisation goes: if the superintelligence ‘wins’, then the entire future of civilisation is determined in accord with the superintelligence’s goals; if humanity ‘wins’, then the entire future of civilisation is determined in accord with whoever controls the superintelligence, which could be everyone, or could be a small group of people. If this story is right, and we can influence which of these scenarios occurs, then this century is the most influential time ever. 

A related, but more general, argument is that the most pivotal point in time is when we develop techniques for engineering the motivations and values of the subsequent generation (such as through AI, but also perhaps through other technology, such as genetic engineering or advanced brainwashing technology), and that we’re close to that point. (H/T Carl Shulman for stating this more general view to me).


The Time of Perils view

According to the Time of Perils view, we live in a period of unusually high extinction risk, where we have the technological power to destroy ourselves but lack the wisdom to be able to ensure we don’t; after this point annual extinction risk will go to some very low level. Support for this view could come from both outside-view and inside-view reasoning: the outside-view argument would claim that extinction risk has been unusually high since the advent of nuclear weapons; the inside-view argument would point to extinction risk from forthcoming technologies like synthetic biology. 

The ‘unusual’ is important here. Perhaps extinction risk is high at this time, but will be even higher at some future times. In which case those future times might be even hingier than today. Or perhaps extinction risk is high, but will stay high indefinitely, in which case the future is not huge in expectation, and the grounds for strong longtermism fall away. 

And, for the Time of Perils view to really support HoH, it’s not quite enough to show that extinction risk is unusually high; what’s needed is that extinction risk mitigation efforts are unusually cost-effective. So part of the view must be not only that extinction risk is unusually high at this time, but also that longtermist altruists are unusually well-placed to decrease those risks — perhaps because extinction risk reduction is unusually neglected.


Outside-View Arguments

The Value Lock-In and Time of Perils views are the major views on which HoH — or something similar — would be supported. But there are also a number of more general, and more outside-view-y, arguments that might be taken as evidence in favour of HoH: 

  1. That we’re unusually early on in human history, and earlier generations in general have the ability to influence the values and motivations of later generations.[2]
  2. That we’re at an unusually high period of economic and technological growth.
  3. That the long-run trend of economic growth means we should expect extremely rapid growth into the near future, such that we should expect to hit the point of fastest-ever growth fairly soon, before slowing down.
  4. That we’re unusually well-connected and able to cooperate in virtue of being on one planet.
  5. That we’re unusually likely to become extinct in virtue of being on one planet.

My view is that, in the aggregate, these outside-view arguments should substantially update one from one’s prior towards HoH, but not all the way to significant credence in HoH.[3]

Arguments against HoH

#1: The outside-view argument against HoH

Informally, the core argument against HoH is that, in trying to figure out when the most influential time is, we should consider all of the potential billions of years through which civilisation might exist. Out of all those years, there is just one time that is the most influential. According to HoH, that time is… right now. If true, that would seem like an extraordinary coincidence, which should make us suspicious of whatever reasoning led us to that conclusion, and which we should be loath to accept without extraordinary evidence in its favour. We don’t have such extraordinary evidence in its favour. So we shouldn’t believe in HoH.

I’ll take each of the key claims in this argument in turn: 

  1. It’s a priori extremely unlikely that we’re at the hinge of history
  2. The belief that we’re at the hinge of history is fishy
  3. Relative to such an extraordinary claim, the arguments that we’re at the hinge of history are not sufficiently extraordinarily powerful 

Claim 1

That HoH is a priori unlikely should be pretty obvious. It’s hard to know exactly what ur-prior to use for this claim, though. One natural thought is that we could use, say, 1 trillion years’ time as an early estimate for the ‘end of time’ (due to the last naturally occurring star formation), and a 0.01% chance of civilisation surviving that long. Then, as a lower bound, there are an expected 1 million centuries to come, and the natural prior on the claim that we’re in the most influential century ever is 1 in 1 million. This would be too low in one important way, namely that the number of future people is decreasing every century, so it’s much less likely that the final century will be more influential than the first century. But even if we restricted ourselves to a uniform prior over the first 10% of civilisation’s history, the prior would still be as low as 1 in 100,000. 
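To make the arithmetic explicit, here is a minimal sketch using the illustrative figures above (the survival probability and the ‘end of time’ estimate are just the rough numbers from the text, not figures I place much weight on):

```python
# Rough lower bound on the prior that this is the most influential century,
# using the text's illustrative numbers.
years_until_last_star_formation = 1e12    # ~1 trillion years, used as an early 'end of time' estimate
p_survive_that_long = 1e-4                # 0.01% chance of civilisation surviving that long

expected_centuries = (years_until_last_star_formation / 100) * p_survive_that_long
print(f"Expected centuries to come (lower bound): {expected_centuries:.0e}")          # ~1e6
print(f"Uniform prior on 'most influential century': {1 / expected_centuries:.0e}")   # ~1e-6

# Even restricting a uniform prior to the first 10% of civilisation's history:
print(f"Prior restricted to first 10%: {1 / (0.1 * expected_centuries):.0e}")         # ~1e-5, i.e. 1 in 100,000
```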

(This is a very rough argument. I really don’t know what the right ur-prior is to set here, and I’d be keen to see further discussion, as it potentially changes one’s posterior on HoH by an awful lot.)

[Later Edit (Mar 2020): The way I state the choice of prior in the text above was mistaken, and therefore caused some confusion. The way I should have stated the prior choice, to represent what I was thinking of, is as follows:

The prior probability of us living in the most influential century, conditional on Earth-originating civilization lasting for n centuries, is 1/n.

The unconditional prior probability over whether this is the most influential century would then depend on one's priors over how long Earth-originating civilization will last for. However, for the purpose of this discussion we can focus on just the claim that we are at the most influential century AND that we have an enormous future ahead of us. If the Value Lock-In or Time of Perils views are true, then we should assign a significant probability to that claim. (i.e. they are claiming that, if we act wisely this century, then this conjunctive claim is probably true.) So that's the claim we can focus our discussion on.

It's worth noting that my proposal follows from the Self-Sampling Assumption, which is roughly (as stated by Teru Thomas (‘Self-location and objective chance’ (ms))): "A rational agent’s priors locate him uniformly at random within each possible world." I believe that SSA is widely held: the key question in the anthropic reasoning literature is whether it should be supplemented with the self-indication assumption (giving greater prior probability mass to worlds with large populations). But we don't need to debate SIA in this discussion, because we can simply assume some prior probability distribution over the size of the total population - the question of whether we're at the most influential time does not require us to get into debates over anthropics.]


Claim 2

Lots of things are a priori extremely unlikely yet we should have high credence in them: for example, the chance that you just dealt this particular (random-seeming) sequence of cards from a well-shuffled deck of 52 cards is 1 in 52! ≈ 1 in 10^68, yet you should often have high credence in claims of that form.  But the claim that we’re at an extremely special time is also fishy. That is, it’s more like the claim that you just dealt a deck of cards in perfect order (2 to Ace of clubs, then 2 to Ace of diamonds, etc) from a well-shuffled deck of cards. 

Being fishy is different from just being unlikely. The difference between unlikelihood and fishiness is the availability of alternative, not wildly improbable, hypotheses on which the outcome or evidence is reasonably likely. If I deal the random-seeming sequence of cards, I don’t have reason to question my assumption that the deck was shuffled, because there’s no alternative background assumption on which the random-seeming sequence is a likely occurrence.  If, however, I deal the deck of cards in perfect order, I do have reason to significantly update towards the deck not having been shuffled, because the probability of getting cards in perfect order if the cards were not shuffled is reasonably high. That is: P(cards not shuffled)P(cards in perfect order | cards not shuffled) >> P(cards shuffled)P(cards in perfect order | cards shuffled), even if my prior credence was that P(cards shuffled) > P(cards not shuffled), so I should update towards the cards having not been shuffled.
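To make the card analogy concrete, here is a toy Bayesian calculation; the specific prior and likelihood values are invented purely for illustration:

```python
from math import factorial

# Toy numbers, chosen only to illustrate fishiness: even a small prior that the deck
# wasn't really shuffled dominates once we observe a perfectly ordered deal.
p_not_shuffled = 0.001                         # hypothetical prior that the deck wasn't shuffled
p_shuffled = 1 - p_not_shuffled
p_perfect_given_not_shuffled = 0.5             # hypothetical: unshuffled decks are often in perfect order
p_perfect_given_shuffled = 1 / factorial(52)   # ~1.2e-68

posterior_not_shuffled = (p_not_shuffled * p_perfect_given_not_shuffled) / (
    p_not_shuffled * p_perfect_given_not_shuffled + p_shuffled * p_perfect_given_shuffled
)
print(posterior_not_shuffled)  # ~1.0: seeing perfect order should make you near-certain the deck wasn't shuffled
```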

Similarly, if it seems to me that I’m living in the most influential time ever, this gives me good reason to suspect that the reasoning process that led me to this conclusion is flawed in some way, because P(I’m reasoning poorly)P(seems like I’m living at the hinge of history | I’m reasoning poorly) >> P(I’m reasoning correctly)P(seems like I’m living at the hinge of history | I’m reasoning correctly). In contrast, I wouldn’t have the same reason to doubt my underlying assumptions if I concluded that I was living in the 1047th most influential century.

The strength of this argument depends in part on how confident we are in our own reasoning abilities in this domain. But it seems to me there’s a strong risk of bias in our assessment of the evidence regarding how influential our time is, for a few reasons:

  • Salience. It’s much easier to see the importance of what’s happening around us now, which we can see and is salient to us, than it is to assess the importance of events in the future, involving technologies and institutions that are unknown to us today, or (to a lesser extent) the importance of events in the past, which we take for granted and involve unsalient and unfamiliar social settings. 
  • Confirmation. For those of us, like myself, who would very much like for the world to be taking much stronger action on extinction risk mitigation (even if the probability of extinction is low) than it is today, it would be a good outcome if people (who do not have longtermist values) think that the risk of extinction is high, even if it’s low. So we might be biased (subconsciously) to overstate the case in our favour. And, in general, people have a tendency towards confirmation bias: once they have a conclusion (“we should take extinction risk a lot more seriously”), they tend to marshal arguments in its favour more than they should, rather than carefully assessing arguments on either side. Though we try our best to avoid such biases, it’s very hard to overcome them.
  • Track record. People have a poor track record of assessing the importance of historical developments. And in particular, it seems to me, technological advances are often widely regarded as being more dangerous than they are. Some examples include assessment of risks from nuclear power, horse manure from horse-drawn carts, GMOs, the bicycle, the train, and many modern drugs.[4]

I don’t like putting weight on biases as a way of dismissing an argument outright (Scott Alexander gives a good run-down of reasons why here). But being aware that long-term forecasting is an area that’s very difficult to reason correctly about should make us quite cautious when updating from our prior. 

If you accept you should have a very low prior in HoH, you need to be very confident that you’re good at reasoning about the long-run significance of events (such as the magnitude of risk from some new technology) in order to have  a significant posterior credence in HoH, rather than concluding we’re mistaken in some way. But we have no reason to believe that we’re very reliable in our reasoning in these matters. We don’t have a good track record of making predictions about the importance of historical events, and some track record of being badly wrong. So, if a chain of reasoning leads us to the conclusion that we’re living in the most important century ever, we should think it more likely that our reasoning has gone wrong than that the conclusion really is true. Given the low base rate, and given our faulty tools for assessing the claim, the evidence in favour of HoH is almost certainly a false positive.


Claim 3

I’ve described some of the arguments for thinking that we’re at an unusually influential time in the previous section above.

I won’t discuss the object-level of these arguments here, but it seems hard to see how these arguments could be strong enough to move us from the very low prior all the way to significant credence in HoH. To illustrate: a randomised controlled trial with a p-value of 0.05, under certain reasonable assumptions, corresponds to a Bayes factor of around 3; a Bayes factor of 100 is regarded as ‘decisive’ evidence. In order to move from a prior of 1 in 100,000 to a posterior of 1 in 10, one would need a Bayes factor of 10,000 — extraordinarily strong evidence. 
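In odds form, the required Bayes factor can be checked directly; a minimal sketch using the 1-in-100,000 prior and 1-in-10 posterior from the text:

```python
prior = 1 / 100_000
posterior = 1 / 10

prior_odds = prior / (1 - prior)                  # ~1e-5
posterior_odds = posterior / (1 - posterior)      # ~0.11
required_bayes_factor = posterior_odds / prior_odds
print(f"{required_bayes_factor:,.0f}")            # ~11,111, i.e. on the order of the 10,000 figure above
```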

But, so this argument goes, the evidence we have for either the Value Lock-in view or the Time of Perils view consists of informal arguments. They aren’t based on data (because they generally concern future events) nor, in general, are they based on trend extrapolation, nor are they based on very well-understood underlying mechanisms, such as physical mechanisms. And the range of deep critical engagement with those informal arguments, especially from ‘external’ critics, has, so far, been limited. So it’s hard to see why we should give them much more evidential weight than, say, a well-done RCT with a p-value at 0.05 — let alone assign them an evidential weight 3000 times that amount.

An alternative path to the same conclusion is as follows. Suppose that, if we’re at the hinge of history, we’d certainly have seeming evidence that we’re at the hinge of history; so say that P(E | HoH ) ≈ 1. But if we weren’t at the hinge of history, what would be the chances of us seeing seeming evidence that we are at the hinge of history? It’s not astronomically low; perhaps P(E | ¬HoH ) ≈ 0.01. (This would seem reasonable to believe if we found just one century in the past 10,000 years where people would have had strong-seeming evidence in favour of the idea that they were at the hinge of history. This seems conservative. Consider: the periods of the birth of Christ and early Christianity; the times of Moses, Mohammed, Buddha and other religious leaders; the Reformation; the colonial period; the start of the industrial revolution; the two world wars and the defeat of fascism; and countless other events that would have seemed momentous at the time but have since been forgotten in the sands of history. These might have all seemed like good evidence to the observers at the time that they were living at the hinge of history, had they thought about it.) But, if so, then our Bayes factor is 100 (or less): enough to push us from 1 in 100,000 to 1 in 1000 in HoH, but not all the way to significant credence. 
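A minimal sketch of this alternative calculation, using the rough likelihoods suggested above:

```python
p_hoh = 1 / 100_000        # prior from the previous section
p_e_given_hoh = 1.0        # if HoH were true, we'd almost certainly see seeming evidence for it
p_e_given_not_hoh = 0.01   # roughly 1 in 100 centuries throws up strong-seeming evidence anyway

posterior = (p_hoh * p_e_given_hoh) / (
    p_hoh * p_e_given_hoh + (1 - p_hoh) * p_e_given_not_hoh
)
print(f"{posterior:.2e}")  # ~1e-3, i.e. roughly 1 in 1,000
```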


#2: The Inductive Argument against HoH 

In addition to the previous argument, which relies on priors and claims we shouldn’t move drastically far from those priors, there’s a positive argument against HoH, which gives us evidence against HoH, whatever our priors. This argument is based on induction from past times. 

If, when looking into the past, we saw hinginess steadily decrease, that would be a good reason for thinking that now is hingier than all times to come, and so we should take action now rather than pass resources on to future longtermists.  If we had seen hinginess steadily increase, then we have some reason for thinking that the hingiest times are yet to come; if we had a good understanding of the mechanism of why hinginess is increasing, and knew that mechanism was set to continue into the future, that would strengthen that argument further.

I suggest that in the past, we have seen hinginess increase. I think that most longtermists I know would prefer that someone living in 1600 passed resources onto us, today, rather than attempting direct longtermist influence. (I certainly would prefer this.) One reason for thinking this would be if one thinks that now is simply a more pivotal point in time, because of our current level of technological progress. However, the stronger reason, it seems to me, is that our knowledge has increased so considerably since then. (Recall that on my definition a particularly hingey time depends both on how pivotal the period in history is and the extent to which a longtermist at the time would know enough to do something about it.) Someone in 1600 couldn’t have had knowledge of AI, or population ethics, or the length of time that humanity might continue for, or of expected utility theory, or of good forecasting practices; they would have had no clue about how to positively influence the long-run future, and might well have done harm. Much the same is true of someone in 1900 (though they would have had access to some of those concepts). It’s even true of someone in 1990, before people became aware of risks around AI. So, in general, hinginess is increasing, because our ability to think about the long-run effects of our actions, evaluate them, and prioritise accordingly, is increasing. 

But we know that we aren’t anywhere close to having fully worked out how to think about the long-run effects of our actions, evaluate them, and prioritise accordingly. We should confidently expect that in the future we will come across new crucial considerations — as serious as the idea of population ethics, or AI risk — or major revisions of our views. So, just as we think that people in the past should have passed resources on to us rather than doing direct work, so, this argument goes, we should pass resources into the future rather than do direct longtermist work. We should think, in virtue of future people’s far better epistemic state, that some future time is more influential.

There are at least three ways in which our knowledge is changing or improving over time, and it’s worth distinguishing them:

  1. Our basic scientific and technological understanding, including our ability to turn resources into things we want.
  2. Our social science understanding, including our ability to make predictions about the expected long-run effects of our actions.
  3. Our values.

It’s clear that we are improving on (1) and (2). All other things being equal, this gives us reason to give resources to future people to use rather than to use those resources now. The importance of this, it seems to me, is very great.  Even just a few decades ago, a longtermist altruist would not have thought of risk from AI or synthetic biology, and wouldn’t have known that they could have taken action on them. Even now, the science of good forecasting practices is still in its infancy, and the study of how to make reliable long-term forecasts is almost nonexistent. 

It’s more contentious whether we’re improving on (3) — for this argument one’s meta-ethics becomes crucial. Perhaps the Victorians would have had a very poor understanding of how to improve the long-run future by the lights of their own values, but they would still have preferred to do that rather than to pass resources on to future people, who would have done a better job of shaping the long-run future but in line with a different set of values. So if you endorse a simple subjectivist view, you might think that even in such an epistemically impoverished state you should still prefer to act now rather than pass the baton on to future generations with aims very different from yours (and even then you might still want to save money in a Victorian-values foundation to grant out at a later date). This view also makes the a priori unlikelihood of living at the hinge of history much less: from the perspective of your idiosyncratic values, now is the only time that they are instantiated in physical form, so of course this time is important!

In contrast, if you are more sympathetic to moral realism (or a more sophisticated form of subjectivism), as I am, then you’ll probably be more sympathetic to the idea that future people will have a better understanding of what’s of value than you do now, and this gives another reason for passing the baton on to future generations. To give just some of the ways in which we should expect moral progress: population ethics was first introduced as a field of enquiry in the 1980s (with Parfit’s Reasons and Persons); infinite ethics was first seriously discussed in moral philosophy in the early 1990s (e.g. Vallentyne’s Utilitarianism and Infinite Utility), and it’s clear we don’t know what the right answers are; moral uncertainty was first discussed in modern times in 2000 (with Lockhart’s Moral Uncertainty and its Consequences) and had very little attention until around the 2010s (with Andrew Sepielli’s PhD and then my DPhil), and again we’ve only just scraped the surface of our understanding of it.

So, just as we think that the intellectual impoverishment of the Victorians means they would have done a terrible job of trying to positively influence the long-run future, we should think that, compared to future people, we are thrashing around in ignorance. In which case we don’t have the level of understanding required for ours to be the most influential time. 


#3: The simulation update argument against HoH

The final argument[5] is: 

  1. If it seems to you that you’re at the most influential time ever, you’re differentially much more likely to be in a simulation. (That is: P(simulation | seems like HoH ) >> P(not-simulation | seems like HoH).)
  2. The case for focusing on AI safety and existential risk reduction is much weaker if you live in a simulation than if you don’t. (In general, I’d aver that we have very little understanding of the best things to do if we’re in a simulation, though there’s a lot more to be said here.)
  3. So we should not make a major update in the most action-relevant proposition, which is that we’re both at the hinge of history and not in a simulation.

The primary reason for believing (1) is that the most influential time in history would seem likely to be a very common subject of study by our descendants, and much more common than other periods in time. (Just as crucial periods in time, like the industrial revolution, get vastly more study by academics today than less pivotal periods, like 4th-century Indonesia.)  The primary reasons for believing (2) are that, if we’re in a simulation, it’s much more likely that the future is short, that extending our future doesn’t change the total amount of lived experience (because the simulators will just run some other simulation afterwards), and that we’re missing some crucial consideration around how to act.

This argument is really just a special case of argument #1: if it seems like you’re at the most influential point in time ever, probably something funny is going on. The simulation idea is just one way of spelling out ‘something funny going on’. I’m personally reluctant to make major updates towards our living in a simulation on this basis, rather than towards more banal hypotheses, such as some of the inside-view arguments simply not being very strong; but others might disagree on this.

Might today be merely an enormously influential time?

In response to the arguments I’ve given above, you might say: “Ok, perhaps we don’t have good reasons for thinking that we’re at the most influential time in history. But the arguments support the idea that we’re at an enormously influential time. And very little changes whether you think that we’re at the most influential time ever, or merely at an enormously influential time, even though some future time is even more influential again.”

However, I don’t think this response is a good one, for three reasons.

First, the implication that we’re among the very most influential times is susceptible to very similar arguments to the ones that I gave against HoH. The idea that we’re in one of the top-10 most influential times is 10x more a priori likely than the claim that we’re in the most influential time, and it’s perhaps more than 10x less fishy. But it’s still extremely a priori unlikely, and still very fishy. So that should make us very doubtful of the claim, in the absence of extraordinarily powerful arguments in its favour. 

Second, some views that are held in the effective altruism community seem to imply not just that we’re at some very influential time, but that we’re at the most influential time ever. On the fast takeoff story associated with Bostrom and Yudkowsky, once we develop AGI we rapidly end up with a universe determined in line with a singleton superintelligence’s values, or in line with the values of those who manage to control it. Either way, it’s the decisive moment for the entire rest of civilisation.  But if you find the claim that we’re at the most influential time ever hard to swallow, then you have, by modus tollens, to reject that story of the development of superintelligence.  

Third, even if we’re at some enormously influential time right now, if there’s some future time that is even more influential, then the most obvious EA activity would be to invest resources (whether via financial investment or some sort of values-spreading) in order that our resources can be used at that future, higher-impact, time. Perhaps there’s some reason why that plan doesn’t make sense; but, currently, almost no-one is even taking that possibility seriously.

Possible other hinge times

If now isn’t the most influential time ever, when is? I’m not going to claim to be able to answer that question, but in order to help make alternative possibilities more vivid I’ve put together a list of times in the past and future that seem particularly hingey to me.  

Of course, it’s much more likely, a priori, that if HoH is false, then the most influential time is in the future. And we should also care more about the hinginess of future times than of past times, because we can try to save resources to affect future times, but we know we can’t affect past times.[6] But past hinginess might still be relevant for assessing hinginess today: If hinginess has been continually decreasing over time, that gives us some reason for thinking that the present time is more influential than any future time; if it’s been up and down, or increasing over time, that might give us evidence for thinking that some future time will be more influential.

Looking through history, some candidates for particularly influential times might include the following (though in almost every case, it seems to me, the people of the time would have been too intellectually impoverished to have known how hingey their time was and been able to do anything about it[7]): 

  • The hunter-gatherer era, which offered individuals the ability to have a much larger impact on technological progress than today.
  • The Axial age, which offered opportunities to influence the formation of what are today the major world religions.
  • The colonial period, which offered opportunities to influence the formation of nations, their constitutions and values.
  • The formation of the USA, especially at the time just before, during and after the Philadelphia Convention when the Constitution was created.
  • World War II, and the resultant comparative influence of liberalism vs fascism over the world. 
  • The post-WWII formation of the first somewhat effective intergovernmental institutions like the UN.
  • The Cold War, and the resultant comparative influence of liberalism vs communism over the world. 

In contrast, if the hingiest times are in the future, it’s likely that this is for reasons that we haven’t thought of. But there are future scenarios that we can imagine now that would seem very influential:

  • If there is a future and final World War, resulting in a unified global culture, the outcome of that war could partly determine what values influence the long-run future.
  • If one religion ultimately outcompetes both atheism and other religions and becomes a world religion, then the values embodied in that religion could partly determine what values influence the long-run future.[8]
  • If a world government is formed, whether during peacetime or as a result of a future World War, then the constitution embodied in that could constrain development over the long-run future, whether by persisting indefinitely, having knock-on effects on future institutions, or by influencing how some other lock-in event takes place. 
  • The time at which settlement of other solar systems begins could be highly influential for longtermists. For example, the ownership of other solar systems could be determined by an auction among nations and/or companies and individuals (much as the USA purchased Alaska and a significant portion of the midwest in the 19th century[9]); or by an essentially lawless race between nations (as happened with European colonisation); or through war (as has happened throughout history). If the returns from interstellar settlement pay off only over very long timescales (which seems likely), and if most of the decision-makers of the time still intrinsically discount future benefits, then longtermists at the time would be able to cheaply buy huge influence over the future.
  • The time when the settlement of other galaxies begins, which might obey similar dynamics to the settlement of other solar systems.

Implications

I said at the start that it’s non-obvious what follows, for the purposes of action, from outside-view longtermism. The most obvious course of action that might seem comparatively more promising is investment, such as saving in a long-term foundation, or movement-building, with the aim of increasing the amount of resources longtermist altruists have at a future, more hingey time. And, if one finds my second argument compelling, then research, especially into social science and moral and political philosophy, might also seem unusually promising. 

These are activities that seem like they would have been good strategies across many times in the past. If we think that today is not exceptionally different from times in the past, this gives us reason to think that they are good strategies now, too.


[1] The question of what ‘resources’ in this context are is tricky. As a working definition, I’ll use 1 megajoule of stored but useable energy, where I’ll allow the form of stored energy to vary over time: so it could be in the form of grain in the past, oil today, and antimatter in the future.

[2] H/T to Carl Shulman for this wonderful quote from C.S. Lewis, The Abolition of Man: “In order to understand fully what Man’s power over Nature, and therefore the power of some men over other men, really means, we must picture the race extended in time from the date of its emergence to that of its extinction. Each generation exercises power over its successors: and each, in so far as it modifies the environment bequeathed to it and rebels against tradition, resists and limits the power of its predecessors. This modifies the picture which is sometimes painted of a progressive emancipation from tradition and a progressive control of natural processes resulting in a continual increase of human power. In reality, of course, if any one age really attains, by eugenics and scientific education, the power to make its descendants what it pleases, all men who live after it are the patients of that power. They are weaker, not stronger: for though we may have put wonderful machines in their hands we have pre-ordained how they are to use them. And if, as is almost certain, the age which had thus attained maximum power over posterity were also the age most emancipated from tradition, it would be engaged in reducing the power of its predecessors almost as drastically as that of its successors. And we must also remember that, quite apart from this, the later a generation comes — the nearer it lives to that date at which the species becomes extinct—the less power it will have in the forward direction, because its subjects will be so few. There is therefore no question of a power vested in the race as a whole steadily growing as long as the race survives. The last men, far from being the heirs of power, will be of all men most subject to the dead hand of the great planners and conditioners and will themselves exercise least power upon the future.

The real picture is that of one dominant age—let us suppose the hundredth century A.D.—which resists all previous ages most successfully and dominates all subsequent ages most irresistibly, and thus is the real master of the human species. But then within this master generation (itself an infinitesimal minority of the species) the power will be exercised by a minority smaller still. Man’s conquest of Nature, if the dreams of some scientific planners are realized, means the rule of a few hundreds of men over billions upon billions of men. There neither is nor can be any simple increase of power on Man’s side. Each new power won by man is a power over man as well. Each advance leaves him weaker as well as stronger. In every victory, besides being the general who triumphs, he is also the prisoner who follows the triumphal car.”

[3] Quantitatively: These considerations push me to put my posterior on HoH into something like the [0.1%, 1%] interval. But this credence interval feels very made-up and very unstable.

[4] These are just anecdotes, and I’d love to see someone undertake a thorough investigation of how often people tend to overreact vs underreact to technological developments, especially in terms of risk-assessment and safety. As well as for helping us understand how likely we are to be biased, this is relevant to how much we should expect other actors in the coming decades to invest in safety with respect to AI and synthetic biology.

[5] I note that this argument has been independently generated quite a number of times by different people. 

[6] Though if one endorses non-causal decision theory, those times might still be decision-relevant.

[7] An exception might have been some of the US founding fathers. For example, John Adams, the second US President, commented that: “The institutions now made in America will not wholly wear out for thousands of years. It is of the last importance, then, that they should begin right. If they set out wrong, they will never be able to return, unless by accident, to the right path." (H/T Christian Tarsney for the quote.)

[8] If you’re an atheist, it’s easy to think it’s inevitable that atheists will win out in the end. But because of differences in fertility rate, the global proportion of fundamentalists is predicted to rise and the proportion of atheists is predicted to decline. What’s more, religiosity is moderately heritable, so these differences could compound into the future. For discussion, see Shall the Religious Inherit the Earth? by Eric Kaufmann.

[9] Some numbers on this: The Louisiana purchase cost $15 million at the time, or $250 million in today’s money, for what is now 23.3% of US territory.  https://www.globalpolicy.org/component/content/article/155/25993.html Alaska cost $120 million in today’s money; its GDP today is $54 billion per year. https://fred.stlouisfed.org/series/AKNGSP

Comments

Hi Will,

It is great to see all your thinking on this down in one place: there are lots of great points here (and in the comments too). By explaining your thinking so clearly, it makes it much easier to see where one departs from it.

My biggest departure is on the prior, which actually does most of the work in your argument: it creates the extremely high bar for evidence, which I agree probably couldn’t be met. I’ve mentioned before that I’m quite sure the uniform prior is the wrong choice here and that this makes a big difference. I’ll explain a bit about why I think that.

As a general rule if you have a domain like this that extends indefinitely in one direction, the correct prior is one that diminishes as you move further away in that direction, rather than picking a somewhat arbitrary end point and using a uniform prior on that. People do take this latter approach in scientific papers, but I think it is usually wrong to do so. Moreover in your case in particular, there are also good reasons to suspect that the chance of a century being the most influential should diminish over time. Especially because there are important kinds of significant event (such as the value lock-in or an...

Hi Toby,

Thanks so much for this very clear response, it was a very satisfying read, and there’s a lot for me to chew on. And thanks for locating the point of disagreement — prior to this post, I would have guessed that the biggest difference between me and some others was on the weight placed on the arguments for the Time of Perils and Value Lock-In views, rather than on the choice of prior. But it seems that that’s not true, and that’s very helpful to know. If so, it suggests (advertisement to the Forum!) that further work on prior-setting in EA contexts is very high-value. 

I agree with you that under uncertainty over how to set the prior, because we’re clearly so distinctive in some particular ways (namely, that we’re so early on in civilisation, that the current population is so small, etc), my choice of prior will get washed out by models on which those distinctive features are important; I characterised these as outside-view arguments, but I’d understand if someone wanted to characterise that as prior-setting instead.

I also agree that there’s a strong case for making the prior over persons (or person-years) rather than centuries. In your discussion, you go via number of person...

Thanks for this very thorough reply. There are so many strands here that I can't really hope to do justice to them all, but I'll make a few observations.

1) There are two versions of my argument. The weak/vague one is that a uniform prior is wrong and the real prior should decay over time, such that you can't make your extreme claim from priors. The strong/precise one is that it should decay as 1/n^2 in line with a version of LLS. The latter is more meant as an illustration. It is my go-to default for things like this, but my main point here is the weaker one. It seems that you agree that it should decay, and that the main question now is whether it does so fast enough to make your prior-based points moot. I'm not quite sure how to resolve that. But I note that from this position, we can't reach either your argument that from priors this is way too unlikely for our evidence to overturn (and we also can't reach my statement of the opposite of that).

2) I wouldn't use the LLS prior for arbitrary superlative properties where you fix the total population. I'd use it only if the population over time was radically unknown (so that the first person is...
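For concreteness, here is a minimal sketch of one way Laplace's Law of Succession yields a prior decaying roughly as 1/n^2: treat each century as a trial on which a hinge-type event may occur, with a uniform prior on the unknown per-century probability. (This is only one possible reading of 'a version of LLS'; the exact form intended in the comment above may differ.)

```python
# With a uniform prior on the per-century probability p of a hinge-type event,
# P(no event in the first n centuries) = integral over [0,1] of (1-p)^n dp = 1/(n+1).
# Hence P(the first such event falls in century n) = 1/n - 1/(n+1) = 1/(n*(n+1)), roughly 1/n^2,
# and P(event in century n+1 | none in the first n) = 1/(n+2)   (Laplace's rule of succession).

def p_first_event_in_century(n: int) -> float:
    return 1 / (n * (n + 1))

def p_event_next_century_given_none_so_far(n: int) -> float:
    return 1 / (n + 2)

for n in [1, 10, 100, 2_000]:   # ~2,000 centuries since 200,000 BC
    print(n, p_first_event_in_century(n), p_event_next_century_given_none_so_far(n))
```

Note that the conditional quantity printed here is the chance of a first hinge-type event in the next century, which is related to, but not the same as, the chance of that century being the most influential one.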

I appreciate your explicitly laying out issues with the Laplace prior! I found this helpful.

The approach to picking a prior here which I feel least uneasy about is something like: "take a simplicity-weighted average over different generating processes for distributions of hinginess over time". This gives a mixture with some weight on uniform (very simple), some weight on monotonically-increasing and monotonically-decreasing functions (also quite simple), some weight on single-peaked and single-troughed functions (disproportionately with the peak or trough close to one end), and so on…

If we assume a big future and you just told me the number of people in each generation, I think my prior might be something like 20% that the most hingey moment was in the past, 1% that it was in the next 10 centuries, and the rest after that. After I notice that hingeyness is about influence, and causality gives a time asymmetry favouring early times, I think I might update to >50% that it was in the past, and 2% that it would be in the next 10 centuries.

(I might start with some similar prior about when the strongest person lives, but then when I begin to understand something about strength the generating mechanisms which suggest that the strongest people would come early and everything would be diminishing thereafter seem very implausible, so I would update down a lot on that.)

Toby_Ord:

I'm sympathetic to the mixture of simple priors approach and value simplicity a great deal. However, I don't think that the uniform prior up to an arbitrary end point is the simplest as your comment appears to suggest. e.g. I don't see how it is simpler than an exponential distribution with an arbitrary mean (which is the max entropy prior over R+ conditional on a finite mean). I'm not sure if there is a max entropy prior over R+ without the finite mean assumption, but 1/x^2 looks right to me for that.

Also, re having a distribution that increases over a fixed time interval giving a peak at the end, I agree that this kind of thing is simple, but note that since we are actually very uncertain over when that interval ends, that peak gets very smeared out. Enough so that I don't think there is a peak at the end at all when the distribution is denominated in years (rather than centiles through human history or something). That said, it could turn into a peak in the middle, depending on the nature of one's distribution over durations.
>> I doubt I can easily convince you that the prior I’ve chosen is objectively best, or even that it is better than the one you used. Prior-choice is a bit of an art, rather like choice of axioms. But I hope you see that it does show that the whole thing comes down to whether you choose a prior like you did, or another reasonable alternative... Additionally, if you didn’t know which of these priors to use and used a mixture with mine weighted in to a non-trivial degree, this would also lead to a substantial prior probability of HoH.

I think this point is even stronger, as your early sections suggest. If we treat the priors as hypotheses about the distribution of events in the world, then past data can provide evidence about which one is right, and (the principle of) Will's prior would have given excessively low credence to humanity's first million years being the million years when life traveled to the Moon, humanity becoming such a large share of biomass, the first 10,000 years of agriculture leading to the modern world, and so forth. So those data would give us extreme evidence for a less dogmatic prior being correct.

>> If we treat the priors as hypotheses about the distribution of events in the world, then past data can provide evidence about which one is right, and (the principle of) Will's prior would have given excessively low credence to humanity's first million years being the million years when life traveled to the Moon, humanity becoming such a large share of biomass, the first 10,000 years of agriculture leading to the modern world, and so forth.

On the other hand, the kinds of priors Toby suggests would also typically give excessively low credence to these events taking so long. So the data doesn't seem to provide much active support for the proposed alternative either.

It also seems to me like different kinds of priors are probably warranted for predictions about when a given kind of event will happen for the first time (e.g. the first year in which someone is named Steve) and predictions about when a given property will achieve its maximum value (e.g. the year with the most Steves). It can therefore be consistent to expect the kinds of "firsts" you list to be relatively bunched up near the start of human history, while also expecting relevant "mosts" (such as the most hingey year) to be relatively spread out.

That being said, I find it intuitive that periods with lots of "firsts" should tend to be disproportionately hingey. I think this intuition could be used to construct a model in which early periods are especially likely to be hingey.

William_MacAskill:

I don't think I agree with this, unless one is able to make a comparative claim about the importance (from a longtermist perspective) of these events relative to future events' importance - which is exactly what I'm questioning. I do think that weighting earlier generations more heavily is correct, though; I don't feel that much turns on whether one construes this as prior choice or an update from one's prior.
MichaelDickens:

A related outside-view argument for the HoH being more likely to occur in earlier centuries:

  1. New things must happen more frequently in earlier centuries because over time, we will run out of new things to do.
  2. HoH will probably occur due to some significant thing (or things) happening.
  3. HoH must coincide with the first occurrence of this thing, because later occurrences of the same thing or similar things cannot be more important.

If we accept these premises, this justifies using a diminishing prior like Laplace.
>> As a general rule if you have a domain like this that extends indefinitely in one direction, the correct prior is one that diminishes as you move further away in that direction, rather than picking a somewhat arbitrary end point and using a uniform prior on that.

Just a quick thought on this issue: Using Laplace's rule of succession (or any other similar prior) also requires picking a somewhat arbitrary start point. You suggest 200000BC as a start point, but one could of course pick earlier or later years and get out different numbers. So the uniform prior's sensitivity to decisions about how to truncate the relevant time interval isn't a special weakness; it doesn't seem to provide grounds for prefering the Laplacian prior.

I think that for some notion of an "arbitrary superlative," a uniform prior also makes a lot more intuitive sense than a Laplacian prior. The Laplacian prior would give very strange results, for example, if you tried to use it to estimate the hottest day on Earth, the year with the highest portion of Americans named Zach, or the year with the most supernovas.

>> Moreover in your case in particular, there are also good reasons to suspe...
ESRogs:

Doesn't the uniform prior require picking an arbitrary start point and end point? If so, switching to a prior that only requires an arbitrary start point seems like an improvement, all else equal. (Though maybe still worth pointing out that all arbitrariness has not been eliminated, as you've done here.)
Toby_Ord:

You are right that having a fuzzy starting point for when we started drawing from the urn causes problems for Laplace's Law of Succession, making it less appropriate without modification. However, note that in terms of people who have ever lived, there isn't that much variation as populations were so low for so long, compared to now.

I see your point re 'arbitrary superlatives', but am not sure it goes through technically. If I could choose a prior over the relative timescale of beginning to the final year of humanity, I would intuitively have peaks at both ends. But denominated in years, we don't know where the final year is and have a distribution over this that smears that second peak out over a long time. This often leaves us just with the initial peak and a monotonic decline (though not necessarily of the functional form of LLS). That said, this interacts with your first point, as the beginning of humanity is also vague, smearing that peak out somewhat too.
[anonymous]:

So your prior says, unlike Will’s, that there are non-trivial probabilities of very early lock-in. That seems plausible and important. But it seems to me that your analysis not only uses a different prior but also conditions on “we live extremely early” which I think is problematic.

Will argues that it’s very weird we seem to be at an extremely hingy time. So we should discount that possibility. You say that we’re living at an extremely early time and it’s not weird for early times to be hingy. I imagine Will’s response would be “it’s very weird we seem to be living at an extremely early time then” (and it’s doubly weird if it implies we live in an extremely hingy time).

If living at an early time implies something that is extremely unlikely a priori for a random person from the timeline, then there should be an explanation. These 3 explanations seem exhaustive:

1) We’re extremely lucky.

2) We aren’t actually early: E.g. we’re in a simulation or the future is short. (The latter doesn’t necessarily imply that xrisk work doesn’t have much impact because the future might just be short in terms of people in our anthropic reference class).

3) Early people don’t actually have outsized influen...

6
Toby_Ord
I don't think I'm building in any assumptions about living extremely early -- in fact I think it makes as few assumptions about that as possible. The prior you get from LLS or from Gott's doomsday argument says the median number of people to follow us is as many as have lived so far (~100 billion), that we have an equal chance of being in any quantile, and so for example we only have a 1 in a million chance of living in the first millionth. (Though note that since each order of magnitude contributes an equal expected value and there are infinitely many orders of magnitude, the expected number of people is infinite / has no mean.)
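A minimal numerical sketch of the prior Toby describes here, assuming (as in the comment) ~100 billion people so far and an equal chance of falling in any quantile of all people who will ever live:

```python
# Sketch of the Gott-style prior described above. Assumption from the comment:
# ~100 billion people have lived so far, and we are equally likely to fall in
# any quantile of everyone who will ever live.
N_SO_FAR = 100e9

def p_future_exceeds(k: float) -> float:
    """P(more than k further people). If we sit uniformly within a total of T
    people, P(we are within the first N_SO_FAR) = N_SO_FAR / T, so
    P(T > N_SO_FAR + k) = N_SO_FAR / (N_SO_FAR + k)."""
    return N_SO_FAR / (N_SO_FAR + k)

print(p_future_exceeds(N_SO_FAR))        # ~0.5: the median future is as big as the past
print(p_future_exceeds(1e6 * N_SO_FAR))  # ~1e-6: chance of being in the first millionth
# Each extra order of magnitude of future size is ~10x less likely but ~10x
# bigger, so the expected number of future people diverges.
```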
1
[anonymous]
If you're just presenting a prior I agree that you've not conditioned on an observation "we're very early". But to the extent that your reasoning says there's a non-trivial probability of [we have extremely high influence over a big future], you do condition on some observation of that kind. In fact, it would seem weird if any Copernican prior could give non-trivial mass to that proposition without an additional observation. I continue my response here because the rest is more suitable as a higher-level comment.
7
Liam_Donovan
What is a Copernican prior? I can't find any google results
7
[anonymous]
It's just an informal way to say that we're probably typical observers. It's named after Copernicus because he found that the Earth isn't as special as people thought.
4
JP Addison🔸
I don't know the history of the term or its relationship to Copernicus, but I can say how my forgotten source defined it. Suppose you want to ask, "How long will my car run?" Suppose it's a weird car that has a different engine and manufacturer than other cars, so those cars aren't much help. One place you could start is with how long it's currently been running for. This is based on the prior that you're observing it on average halfway through its life. If it's been running for 6 months so far, you would guess 1 year. There surely exists a more rigorous definition than this, but that's the gist.
3
Linch
Wikipedia gives the physicist's version, but EAs (and maybe philosophers?) use it more broadly. https://en.wikipedia.org/wiki/Copernican_principle The short summary I use to describe it is that "we" are not that special, for various definitions of the word we. Some examples on FB.

>> And it gets even more so when you run it in terms of persons or person years (as I believe you should). i.e. measure time with a clock that ticks as each lifetime ends, rather than one that ticks each second. e.g. about 1/20th of all people who have ever lived are alive now, so the next century is not really 1/2,000th of human history but more like 1/20th of it.

And if you use person-years, you get something like 1/7 - 1/14! [1]

>> I doubt I can easily convince you that the prior I’ve chosen is objectively best, or even that it is better than the one you used. Prior-choice is a bit of an art, rather like choice of axioms.

I'm pretty confused about how these dramatically different priors are formed, and would really appreciate it if somebody (maybe somebody less busy than Will or Toby?) could give pointers on how to read up more on forming these sorts of priors. As you allude to, this question seems to map to anthropics, and I'm curious how much the priors here necessarily map to your views on anthropics. E.g., am I reading the post and your comment correctly that Will takes an SIA view and you take an SSA view on anthropic questions?

In general, does anybody have pointers on how best to reason about anthropic and anthropic-adjacent questions?

[1] https://eukaryotewritesblog.com/2018/10/09/the-funnel-of-human-experience/


6
[anonymous]
On your prior, P(high influence) isn't tiny. But if I understand correctly, that's just because P(high influence | short future) isn't tiny whereas P(high influence | long future) is still tiny. (I haven't checked the math, correct me if I'm wrong). So your argument doesn't seem to save existential risk work. The only way to get a non-trivial P(high influence | long future) with your prior seems to be by conditioning on an additional observation "we're extremely early". As I argued here, that's somewhat sketchy to do.

I don't have time to get into all the details, but I think that while your intuition is reasonable (I used to share it) the maths does actually turn out my way. At least on one interpretation of what you mean. I looked into this when wondering if the doomsday argument suggested that the EV of the future must be small. Try writing out the algebra for a Gott-style prior that there is an x% chance we are in the first x%, for all x. You get a Pareto distribution that is a power law with infinite mean. While there is very little chance on this prior that there is a big future ahead, the size of each possible future compensates for that: each order of magnitude of increasing size of the future contributes an equal expected amount of population, so the sum is infinite.

I'm not quite sure what to make of this, and it may be quite brittle (e.g. if we were somehow certain that there weren't more than 10^100 people in the future, the expected population wouldn't be all that high), but as a raw prior I really think it is both an extreme outside view, saying we are equally likely to live at any relative position in the sequence *and* that there is extremely high (infinite) EV in the future -- not because it thinks there is any single future whose EV is high, but because the series diverges.

This isn't quite the same as your claim (about influence), but does seem to 'save existential risk work' from this challenge based on priors (I don't actually think it needed saving, but that is another story).
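A minimal check of the "equal expected population per order of magnitude" point, under the same Gott-style survival function Pr(total > t) = N/t for t >= N (the ~100 billion figure is the one used in the thread):

```python
# Sketch: under P(total population > t) = N / t (a Pareto tail with index 1,
# hence infinite mean), each order of magnitude of total size contributes
# roughly the same expected population, so the sum diverges.
N = 100e9  # people so far (figure used in the thread)

def prob_in_bracket(lo: float, hi: float) -> float:
    """P(lo < total population <= hi) under the Gott-style prior."""
    return N / lo - N / hi

contributions = []
for k in range(1, 8):
    lo, hi = N * 10 ** (k - 1), N * 10 ** k
    # Expected people contributed by totals in this bracket, using lo as a
    # (conservative) representative size for the bracket.
    contributions.append(prob_in_bracket(lo, hi) * lo)

print(contributions)  # each term ~0.9 * N: roughly equal, so the series diverges
```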

7
[anonymous]
Interesting point! The diverging series seems to be a version of the St Petersburg paradox, which has fooled me before. In the original version, you have a 2^-k chance of winning 2^k for every positive integer k, which leads to infinite expected payoff. One way in which it's brittle is that, as you say, the payoff is quite limited if we have some upper bound on the size of the population. Two other mathematical ways are 1) if the payoff is just 1.99^k or 2) if it is 2^(0.99k).
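A quick numerical illustration of that brittleness (the 1.99^k and 2^(0.99k) payoffs are from the comment; truncating at k = 1000 is just to show where the partial sums head):

```python
# St Petersburg-style sums: a 2^-k chance of payoff f(k) for k = 1, 2, ...
# The original payoff (f(k) = 2^k) diverges; the slightly perturbed ones converge.
def expected_payoff(f, k_max=1000):
    return sum(2 ** -k * f(k) for k in range(1, k_max + 1))

print(expected_payoff(lambda k: 2 ** k))          # ~k_max: grows without bound
print(expected_payoff(lambda k: 1.99 ** k))       # converges towards ~199
print(expected_payoff(lambda k: 2 ** (0.99 * k))) # converges towards ~144
```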
1
[anonymous]
On second thoughts, I think it's worth clarifying that my claim is still true even though yours is important in its own right. On Gott's reasoning, P(high influence | world has 2^N times the # of people who've already lived) is still just 2^-N (that's 2^-(N-1) if summed over all k>=N). As you said, these tiny probabilities are balanced out by asymptotically infinite impact. I'll write up a separate objection to that claim but first a clarifying question: Why do you call Gott's conditional probability a prior? Isn't it more of a likelihood? In my model it should be combined with a prior P(number of people the world has). The resulting posterior is then the prior for further enquiries.
2
Ofer
As you wrote, the future being short "doesn’t necessarily imply that xrisk work doesn’t have much impact because the future might just be short in terms of people in our anthropic reference class". Another thought that comes to mind is that there may exist many evolved civilizations whose behavior is correlated with ours. If so, our deciding to work hard on reducing x-risks makes it more likely that those other civilizations would also decide, during their early centuries, to work hard on reducing x-risks.
4
WilliamKiely
Under Toby's prior, what is the prior probability that the most influential century ever is in the past?
8
Toby_Ord
Quite high. If you think it hasn't happened yet, then this is a problem for my prior that Will's doesn't have. More precisely, the argument I sketched gives a prior whose PDF decays roughly as 1/n^2 (which corresponds to the chance of it first happening in the next period after n absences decaying as ~1/n). You might be able to get some tweaks to this such that it is less likely than not to happen by now, but I think the cleanest versions predict it would have happened by now. The clean version of Laplace's Law of Succession, measured in centuries, says there would only be a 1/2,001 chance it hadn't happened before now, which reflects poorly on the prior, but I don't think it quite serves to rule it out. If you don't know whether it has happened yet (e.g. you are unsure of things like Will's Axial Age argument), this would give some extra weight to that possibility.
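For readers wanting the arithmetic behind the 1/2,001 figure: under Laplace's rule, after k event-free centuries the chance that the next century is also event-free is (k+1)/(k+2), and chaining these over 2,000 centuries telescopes to 1/2,001. A minimal check:

```python
# Check of the 1/2,001 figure quoted above. Under Laplace's Law of Succession,
# P(century k+1 is event-free | k event-free centuries so far) = (k+1)/(k+2);
# the product over k = 0..n-1 telescopes to 1/(n+1).
n = 2000  # centuries of human history, as in the comment
p_no_event_so_far = 1.0
for k in range(n):
    p_no_event_so_far *= (k + 1) / (k + 2)

print(p_no_event_so_far, 1 / (n + 1))  # both ~0.0005 = 1/2001
```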
3
William_MacAskill
Given this, if one had a hyperprior over different possible Beta distributions, shouldn't 2000 centuries of no event occurring cause one to update quite hard against the (0.5, 0.5) or (1, 1) hyperparameters, and in favour of a prior that was massively skewed towards the per-century probability of a lock-in event being very low? (And noting that, depending exactly on how the proposition is specified, I think we can be very confident that it hasn't happened yet. E.g. if the proposition under consideration was 'a values lock-in event occurs such that everyone after this point has the same values'.)
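To illustrate the size of the update Will is pointing at, here is a minimal sketch comparing how likely 2,000 event-free centuries are under a few Beta priors; the Beta(1, 1000) option is an illustrative stand-in for a prior "massively skewed" towards a very low per-century probability, not a parameterisation anyone in the thread proposed:

```python
from math import exp
from scipy.special import betaln

# How strongly do 2,000 event-free centuries favour Beta priors skewed towards
# a tiny per-century probability? P(no event in n centuries | Beta(a, b) prior)
# = B(a, b + n) / B(a, b).
n = 2000

def marginal_likelihood_no_event(a: float, b: float) -> float:
    return exp(betaln(a, b + n) - betaln(a, b))

for a, b in [(0.5, 0.5), (1, 1), (1, 1000)]:
    print(f"Beta({a}, {b}): {marginal_likelihood_no_event(a, b):.4f}")
# ~0.013 for Beta(0.5, 0.5), ~0.0005 for Beta(1, 1), ~0.33 for Beta(1, 1000),
# so a hyperprior spread over such options gets pushed hard towards the skewed one.
```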
2
Toby_Ord
That's interesting. Earlier I suggested that a mixture of different priors that included some like mine would give a result very different to your result. But you are right to say that we can interpret this in two ways: as a mixture of ur-priors or as a mixture of priors we get after updating on the length of time so far. I was implicitly assuming the latter, but maybe the former is better and it would indeed lessen or eliminate the effect I mentioned. Your suggestion is also interesting as a general approach: choosing a distribution over these Beta distributions instead of debating between certainty in (0,0), (0.5, 0.5), and (1,1). For some distributions over Beta parameters the maths is probably quite tractable. That might be an answer to the right meta-rational approach rather than an answer to the right rational approach, or something, but it does seem nicely robust.
2
Tobias_Baumann
I don't understand this. Your last comment suggests that there may be several key events (some of which may be in the past), but I read your top-level comment as assuming that there is only one, which precludes all future key events (i.e. something like lock-in or extinction). I would have interpreted your initial post as follows: Suppose we observe 20 past centuries during which no key event happens. By Laplace's Law of Succession, we now think that the odds are 1/22 in each century. So you could say that the odds that a key event "would have occurred" over the course of 20 centuries is 1 - (1-1/22)^20 = 60.6%. However, we just said that we observed no key event, and that's what our "hazard rate" is based on, so it is moot to ask what could have been. The probability is 0. This seems off, and I think the problem is equating "no key event" with "not hingy", which is too simple because one can potentially also influence key events in the distant future. (Or perhaps there aren't even any key events, or there are other ways to have a lasting impact.)
3
lewish
I know this is an old thread, and I'm not totally sure how this affects the debate here, but for what it's worth I think applying principle of indifference-type reasoning here implies that the appropriate uninformative prior is an exponential distribution. I apply the principle of indifference  (or maybe of invariance, following Jaynes (1968)) as follows: If I wake up tomorrow knowing absolutely nothing about the world and am asked about the probability of 10 days into the future containing the most important time in history conditional on it being in the future, I should give the same answer as if I were to be woken up 100 years from now and were asked about the day 100 years and 10 days from now. I would need some further information (e.g. about the state of the world, of human society, etc.) to say why one would be more probable than the other, and here I'm looking for a prior from a state of total ignorance. This invariance can be generalized as: Pr(X>t+k|X>t) = Pr(X>t'+k|X>t') for all k, t, t'. This happens to be the memoryless property, and the exponential distribution is the only continuous distribution that has this property. Thus if we think that our priors from a state of total ignorance should satisfy this requirement, our prior needs to be an exponential distribution. I imagine there are other ways of characterizing similar indifference requirements that imply memorylessness. This is not to say our current beliefs should follow this distribution: we have additional information about the world, and we should update on this information. It’s also possible that the principle of indifference might be applied in a different way to give a different uninformative prior as in the Bertrand paradox. (The Jaynes paper: https://bayes.wustl.edu/etj/articles/prior.pdf)
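A tiny numerical check of the memorylessness being invoked (the rate parameter is an arbitrary illustrative choice; nothing here depends on its value):

```python
import math

# The exponential prior sketched above satisfies Pr(X > t + k | X > t) =
# exp(-lam * k), independent of t: the conditional forecast k units ahead is
# the same whether we ask now or far in the future.
lam = 0.01  # arbitrary illustrative rate

def survival(t: float) -> float:
    return math.exp(-lam * t)

k = 10
for t in [0, 10, 1000]:
    print(survival(t + k) / survival(t))  # identical for every t: exp(-lam * k)
```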
3
Tobias_Baumann
The following is yet another perspective on which prior to use, which questions whether we should assume some kind of uniformity principle: As has been discussed in other comments and the initial text, there are some reasons to expect later times to be hingier (e.g. better knowledge) and there are some reasons to expect earlier times to be hingier (e.g. because of smaller populations). It is plausible that these reasons skew one way or another, and this effect might outweigh other sources of variance in hinginess. That means that the hingiest times are disproportionately likely to be either a) the earliest generation (e.g. humans in pre-historic population bottlenecks) or b) the last generation (i.e. the time just before some lock-in happens). Our time is very unlikely to be the hingiest in this perspective (unless you think that lock-in happens very soon). So this suggests a low prior for HoH; however, what matters is arguably comparing present hinginess to the future, rather than to the past. And in this perspective it would be not-very-unlikely that our time is hingier than all future times. In other words, rather than there being anything special about our time, it could just be the case that a) hinginess generally decreases over time and b) this effect is stronger than other sources of variance in hinginess. I'm fairly agnostic about both of these claims, and Will argued against a), but it's surely likelier than 1 in 100,000 (in the absence of further evidence), and arguably likelier even than 5%. (This isn't exactly HoH because past times would be even hingier.)
2
Habryka
At least in Will's model, we are among the earliest human generations, so I don't think this argument carries much weight, unless you posit a very fast-diminishing prior (which so far nobody has done).

Thanks for this post Will, it's good to see some discussion of this topic. Beyond our previous discussions, I'll add a few comments below.


hingeyness

I'd like to flag that I would really like to see a more elegant term than 'hingeyness' become standard for referring to the ease of influence in different periods.

Even just a few decades ago, a longtermist altruist would not have thought of risk from AI or synthetic biology, and wouldn’t have known that they could have taken action on them.

I would dispute this. Possibilities of AGI and global disaster were discussed by pioneers like Turing, von Neumann, Good, Minsky and others from the founding of the field of AI.

The possibility of engineered plagues causing an apocalypse was a grave concern of forward thinking people in the early 20th century as biological weapons were developed and demonstrated. Many of the anti-nuclear scientists concerned for the global prospects of humanity were also concerned about germ warfare.

Both of the above also had prominent fictional portrayals to come to mind for longtermist altruists engaging in a wide-ranging search. If there had been a longtermist altruist movement trying to c... (read more)

So I would say both the population and pre-emption (by earlier stabilization) factors intensely favor earlier eras in per-resource hingeyness, constrained by the era having any significant lock-in opportunities and the presence of longtermists.

I think this is a really important comment; I see I didn't put these considerations into the outside-view arguments, but I should have done, as they make for powerful arguments.

The factors you mention are analogous to the parameters that go into the Ramsey model for discounting: (i) a pure rate of time preference, which can account for risk of pre-emption; (ii) a term to account for there being more (and, presumably, richer) future agents and some sort of diminishing returns as a function of how many future agents (or total resources) there are. Then given uncertainty about these parameters, in the long run the scenarios that dominate the EV calculation are where there’s been no pre-emption and the future population is not that high. e.g. There's been some great societal catastrophe and we're rebuilding civilization from just a few million people. If we think the inverse relationship between population size and hingeyness i... (read more)
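(For reference, the textbook Ramsey discounting rule being gestured at above, not stated in the comment itself, is r = δ + ηg: δ is the pure rate of time preference, here reinterpreted as covering the risk of pre-emption; η is the elasticity of marginal utility, capturing diminishing returns; and g is the growth rate of resources per agent, capturing there being more, and richer, future agents.)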


> Then given uncertainty about these parameters, in the long run the scenarios that dominate the EV calculation are where there’s been no pre-emption and the future population is not that high. e.g. There's been some great societal catastrophe and we're rebuilding civilization from just a few million people. If we think the inverse relationship between population size and hingeyness is very strong, then maybe we should be saving for such a possible scenario; that's the hinge moment.

I agree (and have used in calculations about optimal disbursement and savings rates) that the chance of a future altruist funding crash is an important reason for saving (e.g. medium-scale donors can provide insurance against a huge donor like the Open Philanthropy Project not entering an important area or being diverted). However, the particularly relevant kind of event for saving is the possibility of a 'catastrophe' that cuts other altruistic funding or similar while leaving one's savings unaffected. Good Ventures going awry fits that bill better than a nuclear war (which would also destroy a DAF saving for the future with high probability).

Saving extra for a catastro... (read more)

6
Tobias_Baumann
Maybe it's a nitpick but I don't think this is always right. For instance, suppose that from now on, population size declines by 20% each century (indefinitely). I don't think that would mean that later generations are more hingy? Or, imagine a counterfactual where population levels are divided by 10 across all generations – that would mean that one controls a larger fraction of resources but can also affect fewer beings, which prima facie cancels out. It seems to me that the relevant question is whether the present population size is small compared to the future, i.e. whether the present generation is a "population bottleneck". (Cf. Max Daniel's comment.) That's arguably true for our time (especially if space colonisation becomes feasible at some point) and also in the rebuilding scenario you mentioned.
2
CarlShulman
In expectation, just as a result of combining comparability within a few OOM on likelihood of a hinge in the era/transition, but far more in population. I was not ruling out specific scenarios, in the sense that it is possible that a random lottery ticket is the winner and worth tens of millions of dollars, but not an option for best investment. Generally, I'm thinking in expectations since they're more action-guiding.

Hi Carl,

Thanks so much for taking the time to write this excellent response, I really appreciate it, and you make a lot of great points.  I’ll divide up my reactions into different comments; hopefully that helps ease of reading. 

I'd like to flag that I would really like to see a more elegant term than 'hingeyness' become standard for referring to the ease of influence in different periods.

This is a good idea. Some options: influentialness; criticality; momentousness; importance; pivotality; significance. 

I’ve created a straw poll here to see as a first pass what the Forum thinks.

[Edit: Results:

Pivotality - 26% (17 votes)

Criticality - 22% (14 votes)

Hingeyness - 12% (8 votes)

Influentialness - 11% (7 votes)

Importance - 11% (7 votes)

Significance - 11% (7 votes)

Momentousness - 8% (5 votes)]

Now it's officially on BBC: https://www.bbc.com/future/article/20200923-the-hinge-of-history-long-termism-and-existential-risk

But here’s another adjective for our times that you may not have heard before: “hingey”.

Although it also says:

(though MacAskill now prefers the term “influentialness”, as it sounds less flippant)

Thinking further, I would go with importance among those options for 'total influence of an era' but none of those terms capture the 'per capita/resource' element, and so all would tend to be misleading in that way. I think you would need an explicit additional qualifier to mean not 'this is the century when things will be decided' but 'this is the century when marginal influence is highest, largely because ~no one tried or will try.'

Criticality is confusing because it describes the point when a nuclear reaction becomes self-sustaining, and relates to "critical points" in the related area of dynamical systems, which is somewhat different from what we're talking about.

I think Hingeyness should have a simple name because it is not a complicated concept - it's how much actions affect long-run outcomes. In RL, in discussions of prioritized experience replay, we would just use something like "importance". I would generally use "(long-run) importance" or "(long-run) influence" here, though I guess pivotality (from Yudkowsky's "pivotal act") is alright in a jargon-liking context (like academic papers).

Edit: From Carl's comment, and from rereading the post, the per-resource component seems key. So maybe per-resource importance.

I think this overstates the case. Diminishing returns to expenditures in a particular time favor a nonzero disbursement rate (e.g. with logarithmic returns to spending at a given time 10x HoH levels would drive a 10x expenditure for a given period)

Sorry, I wasn’t meaning we should be entirely punting to the future, and in case it’s not clear from my post, my actual all-things-considered view is that longtermist EAs should be endorsing a mixed strategy of some significant proportion of effort spent on near-term longtermist activities and some proportion of effort spent on long-term longtermist activities.

I do agree that, at the moment, EA is mainly investing (e.g. because of Open Phil and because of human capital and because much actual expenditure is field-building-y, as you say). But it seems like at the moment that’s primarily because of management constraints and weirdness of borrowing-to-give (etc), rather than a principled plan to spread giving out over some (possibly very long) time period. Certainly the vibe in the air is ‘expenditure (of money or labour) now is super important, we should really be focusing on that’. 

(I also don’t think that diminishing returns is entirely true: there are fixed costs and economies of scale when trying to do most things in the world, so I expect s-curves in general. If so, that would favour a lumpier disbursement schedule.)

I do agree that, at the moment, EA is mainly investing (e.g. because of Open Phil and because of human capital and because much actual expenditure is field-building-y, as you say). But it seems like at the moment that’s primarily because of management constraints and weirdness of borrowing-to-give (etc), rather than a principled plan to spread giving out over some (possibly very long) time period.

I agree that many small donors do not have a principled plan and are trying to shift the overall portfolio towards more donation soon (which can have the effect of 100% now donation for an individual who is small relative to the overall portfolio).

However, I think that institutionally there are in fact mechanisms to regulate expenditures:

  • Evaluations of investments in movement-building involve estimations of the growth of EA resources that will result, and comparisons to financial returns; as movement-building returns decline they will start to fall under the financial return benchmark and no longer be expanded in that way
  • The Open Philanthropy Project has blogged about its use of the concept of a 'last dollar' opportunity cost of funds, asking for current spending whether in exp
... (read more)
I agree we are learning more about how to effectively exert resources to affect the future, but if your definition is concerned with the effect of a marginal increment of resources (rather than the total capacity of an era), then you need to wrestle with the issue of diminishing returns.

I agree with this, though if we’re unsure about how many resources will be put towards longtermist causes in the future, then the expected value of saving will come to be dominated by the scenario where very few resources are devoted to it. (As happens in the Ramsey model for discounting if one includes uncertainty over future growth rates and the possibility of catastrophe.) This consideration gets stronger if one thinks the diminishing marginal returns curve is very steep.

E.g. perhaps in 150 years’ time, EA and Open Phil and longtermist concern will be dust; in which case those who saved for the future (and ensured that there would be at least some sufficiently likeminded people to pass their resources onto) will have an outsized return. And perhaps returns diminish really steeply, so that what matters is guaranteeing that there are at least some longtermists around. If the outsized return in th... (read more)

You might think the counterfactual is unfair here, but I wouldn’t regard it as accessible to someone in 1600 to know that they could make contributions to science and the Enlightenment as a good way of influencing the long-run future. 

Is longtermism accessible today? That's a philosophy of a narrow circle, as Baconian science and the beginnings of the culture of progress were in 1600. If you are a specialist focused on moral reform and progress today with unusual knowledge, you might want to consider a counterpart in the past in a similar position for their time.

To talk about what they would have been one needs to consider a counterfactual in which we anachronistically introduce at least some minimal version of longtermist altruism, and what one includes in that intervention will affect the result one extracts from the exercise.

I agree there’s a tricky issue of how exactly one constructs the counterfactual. The definition I’m using is trying to get it as close as possible to a counterfactual we really face: how much to spend now vs how much to pass resources onto future altruists. I’d be interested if others thought of very different approaches. It’s possible that I’m trying to pack too much into the concept of ‘most influential’, or that this concept should be kept separate from the idea of moving resources around to different times.

I feel that involving the anachronistic insertion of a longtermist altruist into the past, if anything, makes my argument harder to make, though. If I can’t guarantee that the past person I’m giving resources to would even be a longtermist, that makes me less inclined to give them resources. And if I include the possibility that longtermism might be wrong and that the future-person that I pass resources onto will recognise this, that’s (at least some) argument to me in favour of passing on resources. (Caveat subjectivist meta-ethics, possibility of future people’s morality going wayward, etc.)

I’d be interested if others thought of very different approaches. It’s possible that I’m trying to pack too much into the concept of ‘most influential’, or that this concept should be kept separate from the idea of moving resources around to different times.

I tried engaging with the post for 2-3 hours and was working on a response, but ended up kind of bouncing off at least in part because the definition of hingyness didn't seem particularly action-relevant to me, mostly for the reasons that Gregory Lewis and Kit outlined in their comments.

I also think a major issue with the current definition is that I don't know of any technology or ability to reliably pass on resources to future centuries, which introduces a natural strong discount factor into the system and seems like a major consideration in favor of spending resources now instead of trying to pass them on (and likely failing, as illustrated in Robin Hanson's original "giving later" post).

I would dispute this. Possibilities of AGI and global disaster were discussed by pioneers like Turing, von Neumann, Good, Minsky and others from the founding of the field of AI.

Thanks, I’ve updated on this since writing the post and think my original claim was at least too strong, and probably just wrong. I don’t currently have a good sense of, say, if I were living in the 1950s, how likely I would be to figure out AI as the thing, rather than focus on something else that turned out not to be as important (e.g. the focus on nanotech by the Foresight Institute (a group of idealistic futurists) in the late 80s could be a relevant example).

I'd guess a longtermist altruist movement would have wound up with a flatter GCR portfolio at the time. It might have researched nuclear winter and dirty bombs earlier than in OTL (and would probably invest more in nukes than today's EA movement), and would have expedited the (already pretty good) reaction to the discovery of asteroid risk. I'd also guess it would have put a lot of attention on the possibility of stable totalitarianism as lock-in.

5
SiebeRozendal
Some ideas: "Leverage", "temporal leverage", "path-dependence", "moment" (in relation to the concept from physics), "path-criticality" (meaning how many paths are closed off by decisions in the current time). Anyone else with ideas?
1
MichaelA🔸
I like "leverage" (which I'd imagine being used in ways like "the highest leverage time in history" or "the time in history where an altruist can have the highest leverage"). Compared to the other options Will suggested, "leverage" seems to me to somewhat more clearly signal the "per capita/resource" element highlighted above (or more simply the sense that one isn't just saying that x time is important, but also that something can predictably be done at x time to influence the future). One potential downside is that it's possible "leverage" would cause a bit of confusion for some people, if the financial sense of "leverage" comes to their mind more readily than the sort of "Give me a lever long enough and a fulcrum on which to place it, and I shall move the world" sense.
5
William_MacAskill
Agree that it might well be that even though one has a very low credence in HoH, one should still act in the same way. (e.g. because if one is not at HoH, one is a sim, and your actions don’t have much impact). The sim-arg could still cause you to change your actions, though. It’s somewhat plausible to me, for example, that the chance of being a sim if you’re at the very most momentous time is 1000x higher than the chance of being a sim if you’re at the 20th most hingey time, but the most hingey time is not 1000x more hingey than the 20th most hingey time. In which case the hypothesis that you’re at the 20th most hingey time has a greater relative importance than it had before.

Your argument seems to combine SSA-style anthropic reasoning with CDT. I believe this is a questionable combination, as it gives different answers from an ex-ante rational policy or from updateless decision theory (see e.g. https://www.umsu.de/papers/driver-2011.pdf). The combination is probably also Dutch-bookable.

Consider the different hingeynesses of times as the different possible worlds and your different real or simulated versions as your possible locations in that world. Say both worlds are equally likely a priori and there is one real version of you in both worlds, but the hingiest one also has 1000 subjectively indistinguishable simulations (which don't have an impact). Then SSA tells you that you are much less likely a real person in the hingiest time than you are to be a real person in the 20th hingiest time. Using these probabilities to calculate your CDT-EV, you conclude that the effects of your actions on the 20th most hingiest time dominate.

Alternatively, you could combine CDT with SIA. Under SIA, being a real person in either time is equally likely. Or you could combine the SSA probabilities with EDT. EDT would recommend acting as if you were controlling all simulati

... (read more)
1
Olle Häggström
Is this slightly off? The factor that goes into the expected impact is the chance of being a non-sim (not the chance of being a sim), so for the argument to make sense, you might wish to replace "the chance of being a sim [...] is 1000x higher than..." by "the chance of being a non-sim is just 1/1000 of..."?

Excellent work; some less meritorious (and borderline repetitious) remarks:

1) One corollary of this line of argument is that even if one is living at a 'hinge of history', one should not reasonably believe this, given the very adverse prior and the likely weak confirmatory evidence one would have access to.

2) The invest for the future strategy seems to rely on our descendants improving their epistemic access to the point where they can reliably determine whether they're at a 'hinge' or not, and deploying resources appropriately. There are grounds for pessimism about this ability ever being attained. Perhaps history (or the universe as a whole) is underpowered for these inferences.

3) Although, with the benefit of hindsight over previous times, we could assess the distribution of hingeyness/influence across them, to get a sense of that distribution, and so get a steer as to whether we should think there are hingey periods of vastly outsized influence in the first place.

4) If we grant the ground truth is occasional 'crucial moments', but we expect evidence at-the-time for living in one of these is scant, my intuition is the optimal strategy would to husban... (read more)

5
William_MacAskill
The way I'd think about it is that we should be uncertain about how justifiably confident people can be that they're at the HoH. If our current credence in HoH is low, then the chance that it might be justifiably much higher in the future should be the significant consideration. At least if we put aside simulation worries, I can imagine evidence which would lead me to have high confidence that I'm at the HoH. I think if that were one's credences, what you say makes sense. But it seems hard for me to imagine a (realistic) situation where I think there's a 1% chance of HoH this decade, but I'm confident that the chance will be much, much lower than that for all of the next 99 decades. For what it's worth, my intuition is that pursuing a mixed strategy is best; some people aiming for impact now, in case now is a hinge, and some people aiming for impact in many many years, at some future hinge moment.

One of the amusing things about the 'hinge of history' idea is that some people make the mediocrity argument about their present time - and are wrong.

Isaac Newton, for example, 300 years ago appears to have made an anthropic argument: that claims he lived in a special time, one which could be considered some kind of, say, 'Revolution' due to the visible acceleration of progress and recent inventions of technologies, were wrong, and that in reality there was an ordinary rate of innovation; the recent invention of many things merely showed that humans had a very short past and were still making up for lost time (because comets routinely drove intelligent species extinct).

And Lucretius ~1800 years before Newton (probably relaying older Epicurean arguments) made his own similar argument, arguing that Greece & Rome were not any kind of exception compared to human history - certainly humans hadn't existed for hundreds of thousands or millions of years! - and if Greece & Rome seemed innovative compared to the dark past, it was merely because "our world is in its youth: it was not created long ago, but is of comparatively recent origin. That is why at ... (read more)

8
trammell
Interesting finds, thanks! Similarly, people sometimes claim that we should discount our own intuitions of extreme historic importance because people often feel that way, but have so far (at least almost) always been wrong. And I’m a bit skeptical of the premise of this particular induction. On my cursory understanding of history, it’s likely that for most of history people saw themselves as part of a stagnant or cyclical process which no one could really change, and were right. But I don’t have any quotes on this, let alone stats. I’d love to know what proportion of people before ~1500 thought of themselves as living at a special time.
6
CarlShulman
My read is that Millenarian religious cults have often existed in nontrivial numbers, but as you say the idea of systematic, let alone accelerating, progress (as opposed to past golden ages or stagnation) is new and coincided with actual sustained noticeable progress. The Wikipedia page for Millenarianism lists ~all religious cults, plus belief in an AI intelligence explosion. So the argument seems, first order, to reduce to the question of whether credence in an AI growth boom (to much faster than Industrial Revolution rates) is caused by the same factors as religious cults rather than secular scholarly opinion, and to the historical share/power of those Millenarian sentiments as a share of the population. But if one takes a narrower scope (not exceptionally important transformation of the world as a whole, but more local phenomena like the collapse of empires or how long new dynasties would last), one frequently sees smaller distortions of relative importance for propaganda purposes (not that they were necessarily believed by outside observers).
2
William_MacAskill
Thanks for these links. I’m not sure if your comment was meant to be a criticism of the argument, though? If so: I’m saying “prior is low, and there is a healthy false positive rate, so don’t have high posterior.” You’re pointing out that there’s a healthy false negative rate too — but that won’t cause me to have a high posterior? And, if you think that every generation is increasing in influentialness, that’s a good argument for thinking that future generations will be more influential and we should therefore save.

I think the outside view argument for acceleration deserves more weight. Namely:

  • Many measures of "output" track each other reasonably closely: how much energy we can harness, how many people we can feed, GDP in modern times, etc.
  • Output has grown 7-8 orders of magnitude over human history.
  • The rate of growth has itself accelerated by 3-4 orders of magnitude. (And even early human populations would have seemed to grow very fast to an observer watching the prior billion years of life.)
  • It's pretty likely that growth will accelerate by another order of magnitude at some point, given that it's happened 3-4 times before and faster growth seems possible.
  • If growth accelerated by another order of magnitude, a hundred years would be enough time for 9 orders of magnitude of growth (more than has occurred in all of human history).
  • Periods of time with more growth seem to have more economic or technological milestones, even if they are less calendar time.
  • Heuristics like "the next X years are very short relative to history, so probably not much will happen" seem to have a very bad historical track record when X is enough time for lots of growth to occur, and so it seem
... (read more)
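A quick check of the "9 orders of magnitude in a hundred years" arithmetic in the bullets above; the ~2% per year baseline growth rate is my assumption, not a figure from the comment:

```python
import math

# If growth is continuously compounded at rate g, output multiplies by exp(g * T)
# over T years, i.e. grows by g * T / ln(10) orders of magnitude.
baseline_growth = 0.02          # per year (assumed baseline)
accelerated = 10 * baseline_growth
orders_of_magnitude = accelerated * 100 / math.log(10)
print(orders_of_magnitude)      # ~8.7, i.e. roughly 9 orders of magnitude
```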

Meta-comment: the level of discussion here has been fantastic. It's nice that these complex issues are discussed in this format; publicly and relatively informally (though other formats obviously have their advantages too). Thanks to all contributors.

9
SiebeRozendal
Exactly! It reminds me a lot of the Polymath Project in which maths problems were solved collaboratively. I really wish EA made more use of this - I think Will's recent decision to post his ideas to the Forum is turning out to be an excellent choice.
3
Stefan_Schubert
Cf. this LessWrong-post on the Parliamentary Model for moral uncertainty which explicitly mentions the Polymath Project. https://www.lesswrong.com/posts/whhsY6JQXfJs7rMFS/polymath-style-attack-on-the-parliamentary-model-for-moral

Great discussion here, top quality comments. To make one aspect of this a bit clearer I made this figure of different 'hingeiness' trajectories and their implications:

Will adds: "In this post I’m just saying it’s unlikely we’re at A2, rather than at some other point in that curve, or on a different curve, and the evidence we have doesn’t give us strong enough evidence to think we’re at A2.

But then yeah it’s a really good point that even if one thinks hinginess is increasing locally, and feels confident about that, it doesn’t mean we’re atop the last peak.

A related point from the graphs: even if hinginess is locally decreasing faster than the real rate of interest, that’s still not sufficient for spending, if there will be some future time when hinginess starts increasing or staying the same or slowing to less than the real rate of interest (as long as you can save for that long)."

Upvote for using graphics to elucidate discussion on the Forum. Haven't seen it often and it's very helpful!

As a side note, Derek Parfit was an early advocate of what you call the 'Hinge of History Hypothesis'. He even uses the expression 'hinge of history' in the following quote (perhaps that's the inspiration for your label):

We live during the hinge of history. Given the scientific and technological discoveries of the last two centuries, the world has never changed as fast. We shall soon have even greater powers to transform, not only our surroundings, but ourselves and our successors. If we act wisely in the next few centuries, humanity will survive its most dangerous and decisive period. Our descendants could, if necessary, go elsewhere, spreading through this galaxy. (On What Matters, vol. 2, Oxford, 2011, p. 616)

Interestingly, he had expressed similar views already in 1984, though back then he didn't articulate why he believed that the present time is uniquely important:

the part of our moral theory... that covers how we affect future generations... is the most important part of our moral theory, since the next few centuries will be the most important in human history. (Reasons and Persons, Oxford, 1984, p. 351)

Thanks, Pablo! Yeah, the reference was deliberate — I’m actually aiming to turn a revised version of this post into a book chapter in a Festschrift for Parfit. But I should have given the great man his due! And I didn’t know he’d made the ‘most important centuries’ claim in Reasons and Persons, that’s very helpful!

Thanks Pablo, I also didn't know he had claimed this at the very time he was introducing population ethics and extinction risk.

The most obvious implication, however, is regarding what proportion of resources longtermist EAs should be spending on near-term existential risk mitigation versus what I call ‘buck-passing’ strategies like saving or movement-building.

In his excellent Charity Cost Effectiveness in an Uncertain World, first published in 2013, Brian Tomasik calls this approach 'Punting to the Future'. Unless there are strong reasons for introducing a new label, I suggest sticking to Brian's original name, both to avoid unnecessary terminological profusion and to credit those who pioneered discussion of this idea.

Great post!

Even just a few decades ago, a longtermist altruist would not have thought of risk from AI or synthetic biology, and wouldn’t have known that they could have taken action on them.

Minor point, but I think this is unclear. On AI see e.g. here. On synbio I'm less familiar but I'm guessing someone more than a few decades ago was able to think thoughts like "Once we understand cell biology really well, seems like we might be able to engineer pathogens much more destructive than those served up by nature."

On synbio I'm less familiar but I'm guessing someone more than a few decades ago was able to think thoughts like "Once we understand cell biology really well, seems like we might be able to engineer pathogens much more destructive than those served up by nature."

+1. I don't know the intellectual history well but the risk from engineered pathogens should have been apparent 4 decades ago in 1975 if not (more likely, IMO) earlier.

A fairly random sample of writing on the topic:

  • Jack London's 1910 short story "An Unparalleled Invasion" [CW: really racist] imagines genocide through biological warfare and the possibility that a "hybridization" between pathogens created "a new and frightfully virulent germ" (I don't think he's suggesting the hybridization was intentional but it's a bit ambiguous).
  • the possibility of engineering pathogens was seriously discussed 4 decades ago at the Asilomar Conference in 1975.
  • There's a 1982 sci-fi book by a famous writer where a vengeful molecular biologist releases a pathogen engineered to be GCR-or-worse.
  • In 1986, a U.S. Defense Department official was quoted saying "“The t
... (read more)

Szilard anticipated nuclear weapons (and launched a large and effective strategy to cause the liberal democracies to get them ahead of totalitarian states, although with regret), and was also concerned about germ warfare (along with many of the anti-nuclear scientists). See this 1949 story he wrote. Szilard seems very much like an agenty sophisticated anti-xrisk actor.

4
CarlShulman
Plus the Soviet bioweapons program was actively at work to engineer pathogens for enhanced destructiveness during the 70s and 80s using new biotechnology (and had been using progressively more advanced methods through the 20th century).
6
William_MacAskill
Huh, thanks for the great link! I hadn’t seen that before, and had been under the impression that though some people (e.g. Good, Turing) had suggested the intelligence explosion, no-one really worried about the risks. Looks like I was just wrong about that.

Just a quick thought: I wonder whether the hingiest times were during periods of potential human population bottlenecks. E.g., Wikipedia says:

A 2005 study from Rutgers University theorized that the pre-1492 native populations of the Americas are the descendants of only 70 individuals who crossed the land bridge between Asia and North America.
[...]
In 2000, a Molecular Biology and Evolution paper suggested a transplanting model or a 'long bottleneck' to account for the limited genetic variation, rather than a catastrophic environmental change.[6] This would be consistent with suggestions that in sub-Saharan Africa numbers could have dropped at times as low as 2,000, for perhaps as long as 100,000 years, before numbers began to expand again in the Late Stone Age.

(Note that the Wikipedia article doesn't seem super well done, and also that it appears there has been significant scholarly controversy around population bottleneck claims. I don't want to claim that there in fact were population bottlenecks; I'm just curious what the implications in terms of hinginess would be if there were.)

As a first pass, it seems plausible to me that e.g. the action of any one of... (read more)

8
Max_Daniel
On second thought, maybe what we should do is: take some person at t_i (bracketing for a moment whether we draw someone uniformly at random, or take the one with most influence, or whatever) and then look at the difference between their actual actions (or the actions we'd expect them to take in the possible world we're considering, if the values of the person are also determined by our sampling procedure) and the actions they'd take if we "intervene" to assume this person in fact was a longtermist altruist. This definition would suggest that hinginess in the periods I mentioned wasn't that high: it's true that one of 70 people helping to hunt a bison made a big difference when compared to doing nothing; however, there is probably approximately zero difference between what that person has actually done and what they would have done if they had been a longtermist altruist: they'd have helped hunt a bison in both cases.
4
Pablo
I just realized that there are actually two separate reasons for thinking that the hingiest times in history were periods of population bottlenecks. First, because tiny populations are much more vulnerable to extinction than much larger populations are. Second, because in smaller populations an individual person has a larger share of influence than they do in larger populations, holding total influence constant. Compare population bottlenecks to one of Will's examples: Unlike the 17th century, which is hingier only because comparatively fewer people exist, periods of population bottlenecks are hingier both because of their unusually low population and because they are "a bigger deal" than other periods.
2
Tobias_Baumann
Do you think that this effect only happens in very small populations settling new territory, or is it generally the case that a smaller population means more hinginess? If the latter, then that suggests that, all else equal, the present is hingier than the future (though the past is even hingier), if we assume that future populations are bigger (possibly by a large factor). While the current population is not small in absolute terms, it could plausibly be considered a population bottleneck relative to a future cosmic civilisation (if space colonisation becomes feasible).
3
Max_Daniel
[Epistemic status: have never thought about this issue specifically in a focused way.] I think as a super rough first pass it makes sense to think that, all else equal, smaller populations mean more hinginess. I feel uncertain to what extent this is just because we should then expect any single person to own a greater share of total resources at some point in time. One extreme assumption would be that the relative distribution of resources at any given point in time is the prior for everyone's influence over the long-run future, perhaps weighted by how much they care about the long run. On that extreme assumption, this would probably mean that the maximum influence over all agents is higher today because global inequality is presumably higher than during population bottlenecks or in fact any past period. However, I think that assumption is too extreme: it's not the case that every generation can propagate their values indefinitely, with the share of their influence staying constant; for example, it might be that certain developments are determined by environmental conditions or other factors that are independent from any human's values. This turns on quite controversial questions around environmental/technological determinism that probably have a nuanced rather than simple answer.
Kit
23
0
0

This was very thought-provoking. I expect I'll come back to it a number of times.

I suspect that how the model works depends a lot on exactly how this definition is interpreted:

a time t_i is more influential (from a longtermist perspective) than a time t_j iff you would prefer to give an additional unit of resources,[1] that has to be spent doing direct work (rather than investment), to a longtermist altruist living at t_i rather than to a longtermist altruist living at t_j.

In particular, I think you intend direct work to include extinction risk reduction, and to be opposite to strategies which punt decisions to future generations. However, extinction risk reduction seems like the mother of all punting strategies, so it seems naturally categorised as not direct work for the purpose of considering whether to punt. Due to this, I expect some weirdness around the categorisation, and would guess that a precise definition would be productive.

(Added formatting and bold to the quote for clarity.)

JanB
13
0
0

How I see it:

Extinction risk reduction (and other types of "direct work") affects all future generations similarly. If the most influential century is still to come, extinction risk reduction also affects the people alive during that century (by making sure they exist). Thus, extinction risk reduction has a "punting to future generations that live in hingey times" component. However, extinction risk reduction also affects all the unhingey future generations directly, and the effects are not primarily mediated through the people alive in the most influential centuries.

(Then, by definition, if ours is not a very hingey time, direct work is not a very promising strategy for punting. The effect on people alive during the "most influential times" has to be small by definition. If direct work did strongly enable the people living in the most influential century (e.g. by strongly increasing the chance that they come into existence), it would also enable many other generations a lot. This would imply that the present was quite hingey after all, in contradiction to the assumption that the present is unhingey.)

Punting strategies, in contrast, affect future generations primarly via their effect on the people alive in the most influential centuries.

1
Kit
That seems like a sufficiently precise definition. Whether there are any interventions in that category seems like an open question. (Maybe it is a lot more narrow than Will's intention.)
7
Stefan_Schubert
I agree that it seems important to get more clarity over the direct work vs buck-passing/punting distinction. Building capacity for future extinction risk reduction work may be seen as more "meta"/"buck-passing/"punting" still. There has been an interesting discussion on direct vs meta-level work to reduce existential risk; see Toby Ord and Owen Cotton-Barratt.
7
Kit
Thanks! I hadn't seen the Cotton-Barratt piece before. Extinction risk reduction punts on the question of which future problems are most important to solve, but not how best to tackle the problem of extinction risk specifically. Building capacity for future extinction risk reduction work punts on how best to tackle the problem of extinction risk specifically, but not the question of which future problems are most important to solve. They seem to do more/less punting than one another along different dimensions, so, depending on one's definition of direct vs punting, each could be more of a punt than the other. I'm not clear on whether this means we should pick a dimension to talk about, or whether there is no meaningful single spectrum of directness vs punting.

Nice post :) A couple of comments:

even if we’re at some enormously influential time right now, if there’s some future time that is even more influential, then the most obvious EA activity would be to invest resources (whether via financial investment or some sort of values-spreading) in order that our resources can be used at that future, more high-impact, time. Perhaps there’s some reason why that plan doesn’t make sense; but, currently, almost no-one is even taking that possibility seriously.

To me it seems that the biggest constraint on being able to invest in future centuries is the continuous existence of a trustworthy movement from now until then. I imagine that a lot of meta work implicitly contributes towards this; so the idea that the HoH is far in the future is an argument for more meta work (and more meta work targeted towards EA longevity in particular). But my prior on a given movement remaining trustworthy over long time periods is quite low, and becomes lower the more money it is entrusted with.

But there are future scenarios that we can imagine now that would seem very influential:

To the ones you listed, I would add:

  • The time period during which we reach technological
... (read more)
1. It’s a priori extremely unlikely that we’re at the hinge of history
Claim 1

I want to push back on the idea of setting the "ur-prior" at 1 in 100,000, which seems far too low to me. I also will critique the method that arrived at that number, and propose a method of determining the prior that seems superior to me.

(One note before that: I'm going to ignore the possibility that the hingiest century could be in the past and assume that we are just interested in the question of how probable it is that the current century is hingier than any future century.)

First, to argue that 1 in 100,000 is too low: The hingiest century of the future must occur before civilization goes extinct. Therefore, one's prior that the current century is the hingiest century of the future must be at least as high as one's credence that civilization will go extinct in the current century. I think this is already (significantly) greater than 1 in 100,000.

I'll come back to this idea when I propose my method of determining the prior, but first to critique yours:

The method you used to come up with the 1 in 100,000 prior that our current century is hingier than any future... (read more)

Kit

Using a distribution over possible futures seems important. The specific method you propose seems useful for getting a better picture of argmax_i P(century i is the most leveraged). However, what we want in order to make decisions is something more akin to argmax_i E[leverage of century i]. The most obvious difference is that scenarios in which the future is short and there is little one can do about it score highly on expected ranking and low on expected value. I am unclear on whether a flat prior makes sense for expectancy, but it seems more reasonable than for probability.

Of course, even argmax_i E[leverage of century i] does not accurately reflect what we are looking for. Similarly to Gregory_Lewis' comment, the decision-relevant thing (if 'punting to the future' is possible at all) is closer still to argmax_i E[leverage of century i, as assessed at the time], i.e. whether we will have higher expected leverage in some future century according to our beliefs at that time. Thinking this through, I also find it plausible that even this does not make sense when using the definitions in the post, and will make a related top-level comment.

While I agree with you that P(this century is the most influential) is not that action-relevant, it is what Will is analyzing in the post, and I think that William Kiely's suggested prior seems basically reasonable for answering that question. As Will said explicitly in another comment:

Agree that it might well be that even though one has a very low credence in HoH, one should still act in the same way. (e.g. because if one is not at HoH, one is a sim, and your actions don’t have much impact).

I do think that the focus on P(this century is the most influential) is the part of the post that I am least satisfied by, and that makes it hardest to engage with, since I don't really know why we care about the question of "are we in the most influential time in history?". What we actually care about is the effectiveness of our interventions to give resources to the future, and the marginal effectiveness of those resources in the future, both of which are quite far removed from that question (because of the difficulties of sending resources to the future, and the fact that the answer to that question makes overall only a small difference for the total magnitude of the impact of any ... (read more)

Kit

I agree that, among other things, discussion of mechanisms for sending resources to the future would be needed to make such a decision. I figured that all these other considerations were deliberately excluded from this post to keep its scope manageable.

However, I do think that one can interpret the post as making claims about a more insightful kind of probability: the odds with which the current century is the one which will have the highest leverage-evaluated-at-the-time (in contrast to an omniscient view / end-of-time evaluation, which is what this thread mostly focuses on). I think that William_MacAskill's main arguments are broadly compatible with both of these concepts, so one could get more out of the piece by interpreting it as about the more useful concept.

Formally, one could see the thing being analysed as

P(E[leverage of century 1 | K_1] ≥ E[leverage of century i | K_i] for all future centuries i),

where K_i is the knowledge available at the beginning of century i. If we and all future generations may freely move resources across time, and some things that are maybe omitted from the leverage definition are held constant, this expression tells us with what odds we are correct to do 'direct work' today as oppo... (read more)

Another reason to think that MacAskill's method of determining the prior is flawed, which I forgot to write down earlier:

If one uses the same approach to come up with a prior that the second, third, fourth, ..., Xth century is the hingiest century of the future, and then adds these priors together, one ought to get 100%. This is true because exactly one of the set of all future centuries must be the hingiest century of the future. Yet with MacAskill's method of determining the priors, when one sums all the individual priors that the hingiest century is century X, one gets a number far greater than 100%. That is, MacAskill's estimate is that there are 1 million expected centuries ahead, so he uses a prior of 1 in 1 million that the first century is the hingiest (before the arbitrary 10x adjustment). However, his model assumes that it's possible that civilization could last as long as 10 billion centuries (1 trillion years). So what is his prior that e.g. the 2 billionth century is the hingiest? 1 in 1 million also? Surely this isn't reasonable, for if one uses a prior of 1 in 1 million for all 10 billion possible centuries, then one's prior expectation that one of the 10... (read more)
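To make the normalisation point concrete, here is a minimal sketch in Python using the numbers as stated in the comment above (the figures are the commenter's, not mine):

```python
# Sketch of the normalisation problem described above, with the numbers as
# stated in the comment.

n_possible_centuries = 10_000_000_000   # model allows up to ~1 trillion years of future
uniform_prior = 1 / 1_000_000           # prior that any given century is the hingiest

total = n_possible_centuries * uniform_prior
print(f"Sum of per-century priors: {total:,.0f}")   # 10,000 -- i.e. 1,000,000%, not 100%

# A coherent prior over "which future century is the hingiest" must sum to 1:
coherent_uniform = 1 / n_possible_centuries
print(f"Coherent uniform prior per century: {coherent_uniform:.0e}")   # 1e-10
```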

William_MacAskill
Thanks, William!  Yeah, I think I messed up this bit. I should have used the harmonic mean rather than the arithmetic mean when averaging over possibilities of how many people will be in the future. Doing this brings the chance of being the most influential person ever close to the chance of being the most influential person ever in a small-population universe.

But then we get the issue that being the most influential person ever in a small-population universe is much less important than being the most influential person in a big-population universe. And it’s only the latter that we care about.

So what I really should have said (in my too-glib argument) is: for simplicity, just assume a high-population future, since those are the action-relevant futures if you're a longtermist. Then take a uniform prior over all times (or all people) in that high-population future. So my claim is: “In the action-relevant worlds, the frequency of ‘most important time’ (or ‘most important person’) is extremely low, and so should be our prior.”
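To illustrate the arithmetic- vs harmonic-mean point, here is a small sketch; the two possible future population sizes and their probabilities are made up purely for illustration:

```python
# Toy illustration of the arithmetic- vs harmonic-mean point.

populations = [10**6, 10**12]   # hypothetical "small" and "big" futures
probs = [0.5, 0.5]

arithmetic_mean_n = sum(p * n for p, n in zip(probs, populations))
p_via_arithmetic = 1 / arithmetic_mean_n                              # 1 / E[N]  (the flawed route)
p_most_influential = sum(p / n for p, n in zip(probs, populations))   # E[1/N]    (the chance of being most influential)

print(f"1 / E[N] = {p_via_arithmetic:.2e}")     # ~2.0e-12
print(f"E[1/N]   = {p_most_influential:.2e}")   # ~5.0e-07, dominated by the small-population world
```

The E[1/N] figure is dominated by the small-population world, which is the issue raised in the comment: the corrected calculation mostly tracks the chance of being the most influential person in a small world, which is not the case a longtermist cares about.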
WilliamKiely
Thanks for the reply, Will. I go by Will too by the way. This assumption seems dubious to me because it seems to ignore the nontrivial possibility that there is something like a Great Filter in our future that requires direct work to overcome (or could benefit from direct work). That is, maybe if we get one challenge in our near-term future right (e.g. handing off the future to a benevolent AGI) then it will be more or less inevitable that life will flourish for billions of years, and if we fail to overcome that challenge then we will go extinct fairly soon. As long as you put a nontrivial probability on such a challenge existing in the short-term future and it being tractable, then even longtermist altruists in the small-population worlds (possibly ours) who try punting to the future / passing the buck instead of doing direct work and thus fail to make it past the Great-Filter-like challenge can (I claim, contrary to you by my understanding) be said to be living in an action-relevant world despite living in a small-population universe. This is because they had the power (even though they didn't exercise it) to make the future a big-population universe.
MichaelA🔸
Did you mean to say "assuming a 1% risk of extinction per century for 1000 centuries"? That seems to better fit the rest of what you said, and what's in your model, as best I can tell.
WilliamKiely
Yes, thank you for the correction!

Wouldn't your framework also imply a similarly overwhelming prior against saving? If long term saving works with exponential growth then we're again more important than virtually everyone who will ever live, by being in the first n billion people who had any options for such long term saving. The prior for 'most important century to invest' and 'most important century to donate/act directly' shouldn't be radically uncoupled.

kbog

I think this argument implicitly assumes a moral objectivist point of view.

I'd say that most people in history have been a lot closer to the hinge of history when you recognize that the HoH depends on someone's values.

If you were a hunter-gatherer living in 20,000 BC then you cared about raising your family and building your weir and you lived at the hinge of history for that.

If you were a philosopher living in 400 BC then you cared about the intellectual progress of the Western world and you lived at the hinge of history for that.

If you were a theologian living in 1550 then you cared about the struggle of Catholic and Protestant doctrines and you lived at the hinge of history for that.

If you're an Effective Altruist living in 2020 then you care about global welfare and existential risk, and you live at the hinge of history for that.

If you're a gay space luxury communist living in 2100 then you care about seizing the moons of production to have their raw materials redistributed to the masses, and you live at the hinge of history for that.

This isn't a necessary relationship. We may say that some of these historical hinges actually were really important in our minds, and maybe a future hinge will be more important. But generally speaking, the rise and fall of motivations and ideologies is correlated with the sociopolitical opportunity for them to matter. So most people throughout history have lived in hingy times. 

There were a couple of recurring questions, so I’ve addressed them here.

What’s the point of this discussion — isn’t passing on resources to the future too hard to be worth considering? Won’t the money be stolen, or used by people with worse values?

In brief: Yes, losing what you’ve invested is a risk, but (at least for relatively small donors) it’s outweighed by investment returns. 

Longer: The concept of ‘influentialness of a time’ is the same as the cost-effectiveness (from a longtermist perspective) of the best opportunities accessible to longtermists at a time.  Suppose I think that the best opportunities in, say, 100 years, are as good as the best opportunities now. Then, if I have a small amount of money, I can get (say) at least a 2% return per year on those funds. But I shouldn’t think that the chance of my funds being appropriated (or otherwise lost) is as high as 2% per year. So the expected amount of good I do is greater by saving. 

So if you think that hingeyness (as I’ve defined it) is about the same in 100 years as it is now, or greater, then there’s a strong case for investing for 100 years before spending the money.
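As a rough sketch of that comparison: the 2% return is the figure from the comment above, while the annual loss rates are illustrative ('loss' here includes bad value drift, as clarified later in the thread).

```python
# Rough sketch of the invest-vs-spend comparison above.

def expected_multiplier(r: float, p: float, years: int) -> float:
    """Expected resources delivered at the end, per unit invested now."""
    return ((1 + r) * (1 - p)) ** years

for p in [0.005, 0.01, 0.02, 0.03]:
    m = expected_multiplier(r=0.02, p=p, years=100)
    print(f"annual loss risk {p:.1%}: expected multiplier over 100 years = {m:.2f}")

# The multiplier exceeds 1 roughly when p < r / (1 + r), i.e. just under 2% here;
# above that, spending now beats investing (holding hingeyness constant).
```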

(Caveat that once we consider larger amounts of m... (read more)

Kit

I was very surprised to see that 'funds being appropriated (or otherwise lost)' is the main concern with attempting to move resources 100 years into the future. Before seeing this comment, I would have been confident that the primary difficulty is in building an institution which maintains acceptable values† for 100 years.

Some of the very limited data we have on value drift within individual people suggests losses of 11% and 18% per year for two groups over 5 years. I think these numbers are a reasonable estimate for people who have held certain values for 1-6 years, with long-run drop-off for individuals being lower.

A more relevant but less precise outside view is my intuitions about how long charities which have clear founding values tend to stick to those values after their founders leave. I think of this as ballpark a decade on average, though hopefully we could do better if investing time and money in increasing this.

Perhaps yet more relevant and yet less precise is the history of institutions through the eras which have built themselves around some values which they thought of as non-negotiable (in the same way that we might see impartiality as non-negotiable). For ... (read more)
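For a sense of scale, here is what the drift rates quoted above would imply if (implausibly) they compounded unchanged for a century; the 2% rate is added only for comparison with the investment-return discussion elsewhere in the thread.

```python
# Compounding the quoted value-drift rates over 100 years.

for annual_drift in [0.11, 0.18, 0.02]:
    surviving = (1 - annual_drift) ** 100
    print(f"{annual_drift:.0%} annual drift -> {surviving:.1e} of value intact after 100 years")

# 11%/yr leaves ~9e-6 of the original value; even 2%/yr leaves only ~13%.
```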

Sorry - 'or otherwise lost' qualifier was meant to be a catch-all for any way of the investment losing its value, including (bad) value-drift.

I think there's a decent case for (some) EAs doing better at avoiding this than e.g. typical foundations:

  • If you have precise values (e.g. classical utilitarianism) then it's easier to transmit those values across time - you can write your values down clearly as part of the constitution of the foundation, and it's easier to find and identify younger people to take over the fund who also endorse those values. In contrast, for other foundations, the ultimate aims of the foundation are often not clear, and too dependent on a particular empirical situation (e.g. Benjamin Franklin's funds were 'to provide loans for apprentices to start their businesses' (!!)).
  • If you take a lot of time carefully choosing who your successors are (and those people take a lot of time over who their successors are).

Then to reduce appropriation, one could spread the funds across many different countries and different people who share your values. (Again, easier if you endorse a set of values that are legible and non-idiosyncrat... (read more)

Kit
Got it. Given the inclusion of (bad) value drift in 'appropriated (or otherwise lost)', my previous comment should just be interpreted as providing evidence to counter this claim: [Recap of my previous comment] It seems that this quote predicts a lower rate than there has ever† been before. Such predictions can be correct! However, a plan for making the prediction come true is needed. It seems that the plan should be different to what essentially all†† the people with higher rates of (bad) value drift did.

These particular suggestions (succession planning and including an institution's objectives in its charter) seem qualitatively similar to significant minority practices in the past. (e.g. one of my outside views uses the reference class of 'charities with clear founding values'. For the 'institutions through the eras' one, religious groups with explicit creeds and explicit succession planning were prominent examples I had in mind.) The open question then seems to be whether EAs will tend to achieve sufficient improvement in such practices to bring (bad) value drift down by around an order of magnitude relative to what has been achieved historically. This seems unlikely to me, but not implausible. In particular, the idea that it is easier to design a constitution based on classical utilitarianism than for other goals people have had is very interesting.

Aside: investing heavily in these practices seems easier for larger donors. The quote seems very hard to defend for donors too small to attract a highly dedicated successor.

This discussion has made me think that insofar as one does punt to the future, making progress on how to reduce institutional value drift would be a very valuable project, even if I'm doubtful about how much progress is possible.

† It seems appropriate to exclude all groups coordinating for mutual self-interest, such as governments. (This is broader than my initial carving out of for-profits.)

†† However, it seems useful to think about a mu
Max_Daniel
Just to make sure I understand - you're saying that, historically, the chance of funds (that were not intended just to advance mutual self-interest) being appropriated has always been higher than 2% per year? If so, I'm curious what this is based on. - Do you have specific cases of appropriation in mind? Are you mostly appealing to charities with clear founding values and religious groups, both of which you mention later? [Asking because I feel like I don't have a good grasp on the probability we're trying to assess here.]
Kit
Not appropriated: lost to value drift. (Hence, yes, the historical cases I draw on are the same as in my comment 3 up in this thread.) I'm thinking of this quantity as something like the proportion of resources which will in expectation be dedicated 100 years later to the original mission as envisaged by the founders, annualised.
Max_Daniel
I think you make good points, and overall I feel quite sympathetic to the view you expressed. Just one quick thought pushing a bit in the other direction: perhaps the for-profit ('maximize profits') example is quite relevant after all? To put it crudely, perhaps we can get away with keeping the value "do the most good" stable. This seems more analogous to "maximize profits" than to any specification of value that refers to a specific content of "doing good" (e.g., food aid to country X, or "abolish factory farming", or "reduce existential risk"). More generally, the crucial point seems to be: the content and specifics of values might change, but some of this change might be something we endorse. And perhaps there's a positive correlation between the likelihood of a change in values and how likely we'd be to agree with it upon reflection. [Exploring this fully seems quite complex both in terms of metaethics and empirical considerations.]
Kit
Thanks. I agree that we might endorse some (or many) changes. Hidden away in my first footnote is a link to a pretty broad set of values. To expand: I would be excited to give (and have in the past given) resources to people smarter than me who are outcome-oriented, maximizing, cause-impartial and egalitarian, as defined by Will here, even (or especially) if they plan to use them differently to how I would. Similarly, keeping the value 'do the most good' stable maybe means something like keeping the outcome-oriented, maximizing, cause-impartial and egalitarian values stable. For clarity, I excluded profit maximisation because incentives to pursue this goal seem powerful in a way that might never apply to effective altruism, however broadly it is construed. (The 'impartial' part seems especially hard to keep stable.) In particular, profit maximisation does not even need to be propagated: e.g. if a company does some random other stuff for a while, its stakeholders will still have a moderate incentive to maximise profits, so will typically return to doing this. A similar statement is that 'maximise profits' is the default state of things. No matter how broad our conception of 'do the most good' can be made, it seems likely to lack this property (except for lock-in scenarios).

The concept of ‘influentialness of a time’ is the same as the cost-effectiveness (from a longtermist perspective) of the best opportunities accessible to longtermists at a time. [...] So if you think that hingeyness (as I’ve defined it) is about the same in 100 years as it is now, or greater, then there’s a strong case for investing for 100 years before spending the money.

Are you referring to average or marginal cost-effectiveness here? If "average", then this seems wrong. From the perspective of making a decision on whether to spend on longtermist causes now or later, what matters is the marginal cost-effectiveness of the best opportunities available now versus later. For example, it could well be the case that the next century is more influential than this century (has higher average cost-effectiveness) but because longtermism has gained a lot more ground in terms of popularity, all the highly cost-effective interventions are already done so the money I've invested will have to be spent on marginal interventions that are less cost-effective than the marginal opportunities available today.

If you're referring to marginal cost-effectiveness instead, then your conception of "influ

... (read more)
My view is that, in the aggregate, these outside-view arguments should substantially update one from one’s prior towards HoH, but not all the way to significant credence in HoH.
[3] Quantitatively: These considerations push me to put my posterior on HoH into something like the [0.1%, 1%] interval. But this credence interval feels very made-up and very unstable.

What credence do you give to 'this century is the most HoH-ish there will ever be henceforth'? That claim soaks up credence from trends towards diminishing influence over time, and our time is among the very first to benefit from longtermist altruists actually existing to get non-zero returns from longtermist strategies, while also facing plausible x-risks. The combination of those two factors seems to give a good shot at 'most HoH century', but a substantially better one at 'most HoH century remaining.'


Here are two distinct views:
Strong Longtermism := The primary determinant of the value of our actions is the effects of those actions on the very long-run future.
The Hinge of History Hypothesis (HoH) :=  We are living at the most influential time ever. 
It seems that, in the effective altruism community as it currently stands, those who believe longtermism generally also assign significant credence to HoH; I’ll precisify ‘significant’ as >10% when ‘time’ is used to refer to a period of a century, but my impression is that many longtermists I know would assign >30% credence to this view.  It’s a pretty striking fact that these two views are so often held together — they are very different claims, and it’s not obvious why they should so often be jointly endorsed.

Two clear and common channels I have seen are:

  • Longtermism leads to looking around for things that would have lasting impacts (e.g. Parfit and Singer attending to existential risk, and noticing that a large portion of all technological advances have been in the last few centuries, and a large portion of the remainder look likely to come in the next few centuries, including the technologies for much higher existential risk
... (read more)
MichaelA🔸
Interesting comment. I think personally I had a sort of amplifying feedback loop between longtermism and assigning a "significant" credence to HoH (not actually sure what credence I assign to it, but it probably at least sometimes feels >10%). Something very roughly like the following:

1. I had a general inclination towards utilitarianism and a large moral circle, which got me into EA.
2. EA introduced me to arguments about longtermism and existential risks this century being high enough to be a global priority (which could perhaps be quite "low" by usual standards).
3. I started becoming convinced by those arguments, and thus learning more about them, and beginning to switch my focus to x-risk reduction.
4. Learning and thinking more about x-risks made the potential scale and quality of the future if we avoid them more salient, which made longtermism more emotionally resonant. This then feeds back into 2 and 3.
5. Learning and thinking more about x-risks and longtermism also exposed me to more arguments against concerns about x-risks, and meant I was positioned to respond to them not with "Ok, let's shift some probability mass towards the best thing to work on being global poverty and/or animal welfare" but instead "Ok, let's shift some probability mass towards the best thing to work on being longtermist efforts other than current work on x-risks." This led me to think more about various ways longtermism could be acted on, and thus more ways the future could be excellent or terrible, and thus more reasons why longtermism feels important.

I'm not saying this is an ideal reasoning process. Some of it arguably looks a little like motivated reasoning or entering something of an echo chamber. But I think that's roughly the process I went through.

This is an unusual comment for me, since I will talk about religion. The Baha'i Faith claims, at least as it would be expressed in the terminology used here, that something very close to the following are both true:
-Strong Longtermism and
-The Hinge of History Hypothesis (HOH).
Conjoining/conflating these two claims is the position criticized by Will in this blog post, a position which is at least to a certain degree defended by Toby in his comments.

My sense is that the Baha'i Faith strongly agrees with Toby (and probably goes much farther than he would) in claiming that both these hypotheses are true, and in addition that certain other hypotheses mentioned below are true. I won't back all this up with quotes now, as I have no idea if anyone here is interested in that level of discussion, and it would anyway require some research time to get right. So what I am stating here amounts to my opinions about Baha'i views.

My views are that, at least at surface level, there is a strong coincidence, one well worth noting, between the views of the Baha'i Faith (in the domain under consideration) and the common views in Effective Altruism that Will intended to criticize... (read more)

Kelsey Piper has just published a Vox article, 'Is this the most important century in human history?', discussing this post.

[anonymous]

Important post!

I like your simulation update against HoH. I was meaning to write a post about this. Brian Tomasik has a great paper that quantitatively models the ratio of our influence on the short vs long-term. Though you've linked it, I think it's worth highlighting it more.

How the Simulation Argument Dampens Future Fanaticism

The paper cleverly argues that the simulation argument combined with anthropics either strongly dampens the expected impact of far-future altruism or strongly increases the impact of short-term altruism. That conclusion seems fairly robust to the choice of decision- and anthropic theory and uncertainty over some empirical parameters. He doesn't directly discuss how the "seems like HoH" observation affects his conclusions, but I think it makes them stronger. (I recommend Brian's simplified calculations here.)

I assume this paper didn't get as much discussion as it deserves because Brian posted it in the dark days of LW.

[anonymous]
2. For me, the HoH update is big enough to make the simulation hypothesis a pretty likely explanation. It also makes it less likely that there are alternative explanations for "HoH seems likely". See my old post here (probably better to read this comment though).

Imagine a Bayesian model with a variable S = "HoH seems likely" (to us) and 3 variables pointing towards it: "HoH" (prior: 0.001), "simulation" (prior: 0.1), and "other wrong but convincing arguments" (prior: 0.01). Note that it seems pretty unlikely there will be convincing but wrong arguments a priori (I used 0.01) because we haven't updated on the outside view yet. Further, assume that all three causes, if true, are equally likely to cause "HoH seems likely" (say with probability 1, but the probability doesn't affect the posterior).

Apply Bayes' rule: We've observed "HoH seems likely". The denominator in Bayes' rule is P(HoH seems likely) ≈ 0.111 (roughly the sum of the three priors because the priors are small). The numerator for each hypothesis H equals 1 * P(H). Bayes' rule gives an equal update (ca. 1/0.111x = 9x) in favor of every hypothesis, bringing up the probability of "simulation" to nearly 90%.

Note that this probability decreases if we find, or think there are, better explanations for "HoH seems likely". This is plausible but not overwhelmingly likely because we already have a decent explanation with prior 0.1. If we didn't have one, we would still have a lot of pressure to explain "HoH seems likely". The existence of the plausible explanation "simulation" with prior 0.1 "explains away" the need for other explanations such as those falling under "wrong but convincing argument". This is just an example, feel free to plug in your numbers, or critique the model.
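The commenter's toy model can be written out directly; this is just their calculation with their stated priors, not an endorsement of the numbers:

```python
# The commenter's toy Bayes update. All three hypotheses are assumed to produce
# S = "HoH seems likely" with probability 1 if true, and are treated as the
# only routes to S.

priors = {
    "HoH": 0.001,
    "simulation": 0.1,
    "other wrong but convincing arguments": 0.01,
}
likelihood = 1.0  # P(S | hypothesis), assumed equal for all three

p_S = sum(likelihood * p for p in priors.values())   # ~0.111

posteriors = {h: likelihood * p / p_S for h, p in priors.items()}
for h, post in posteriors.items():
    print(f"P({h} | S) = {post:.3f}")
# "simulation" comes out around 0.90, matching the ~9x update described above.
```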

I liked this post. One comment:

Or perhaps extinction risk is high, but will stay high indefinitely, in which case the future is not huge in expectation, and the grounds for strong longtermism fall away.

I don't think this necessarily follows. If the present generation can reduce the risk of extinction for all future generations, the present value of extinction reduction may still be high enough to vindicate strong longtermism. For example, suppose that each century will be exposed to a 2% constant risk of extinction, and that we can bring that risk down to 1% by devoting sufficient resources to extinction risk reduction. Assuming a stable population of 10 billion, then thanks to our efforts an additional 500 billion lives will exist in expectation, and most of these lives will exist more than 10,000 years from now. Relaxing the stable population assumption strengthens this conclusion.
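The arithmetic behind this example, using the numbers as given (constant per-century risk, stable population of 10 billion):

```python
# With a constant per-century extinction risk x, the expected number of future
# centuries is 1/x (mean of a geometric distribution), so halving the risk from
# 2% to 1% adds 50 expected centuries; at 10 billion people per century, that
# is 500 billion additional expected lives.

population_per_century = 10**10

def expected_centuries(per_century_risk: float) -> float:
    return 1 / per_century_risk

extra_centuries = expected_centuries(0.01) - expected_centuries(0.02)   # 100 - 50 = 50
extra_lives = extra_centuries * population_per_century
print(f"Additional expected lives: {extra_lives:.0e}")   # 5e+11
```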

William_MacAskill
Agreed, good point; I was thinking just of the case where you reduce extinction risk in one period but not in others.  I’ll note, though, that reducing extinction risk at all future times seems very hard to do. I can imagine, if we’re close to a values lock-in point, we could shift societal values such that they care about future extinction risk much more than they would otherwise have done. But if that's the pathway, then the Time of Perils view wouldn’t provide an argument for HoH independent of the Value Lock-In view.

Thanks, I think this was very good.

Re movement-building as a buck-passing strategy, I guess that the formation of the major world religions can be seen as movement-building, in a sense. Yet my interpretation is that you don't see that as an example of buck-passing, but as a more direct change of world history (you mention it as an example of a particularly influential time). Thus some forms of movement-building are, on this view, seen as buck-passing, and not others (size of the movement is probably a relevant factor here, but no doubt there are others).

Maybe that serves to show that the distinction between directly changing world history and passing the buck for later isn't sharp (maybe it could be seen as a matter of degree). It would be good to see some further analysis of this distinction.

William_MacAskill
Thanks - I agree that this distinction is not as crisp as would be ideal. I’d see religion-spreading, and movement-building, as in practice almost always a mixed strategy: in part one is giving resources to future people, and in part one is also directly altering how the future goes. But it's more like buck-passing than it is like direct work, so I think I should just not include the Axial age in the list of particularly influential times (given my definition of 'influential').
there are an expected 1 million centuries to come, and the natural prior on the claim that we’re in the most influential century ever is 1 in 1 million. This would be too low in one important way, namely that the number of future people is decreasing every century, so it’s much less likely that the final century will be more influential than the first century. But even if we restricted ourselves to a uniform prior over the first 10% of civilisation’s history, the prior would still be as low as 1 in 100,000. 

Half-baked thought: you might think that the very very long futures will mostly have been locked in very close to their start—i.e. that timescales for locking in the best futures are much much shorter than the maximum lifespan for civilisation. This would push you towards a prior over an even smaller chunk of the expected future.

Something like this view seems implicit in some ways of talking about the future, and feels plausible to me, though I’m not sure what the best arguments are.

Great post! It's great to see more thought going into these issues. Personally, I'm quite sceptical about claims that our time is especially influential, and I don't have a strong view on whether our time is more or less hingy than other times. Some additional thoughts:

I got the impression that you assume that some time (or times) are particularly hingy (and then go on to ask whether it's our time). But it is also perfectly possible that no time is hingy, so I feel that this assumption needs to be justified. Of course, there is some variation and therefore there is inevitably a most influential time, but the crux of the matter is whether there are differences by a large factor (not just 1.5x). And that is not obvious; for instance, if we look at how people in the past could have shaped 21st century societies, it is not clear to me whether any time was especially important.

I think a key question for longtermism is whether the evolution of values and power will eventually settle in some steady state (i.e. the end of history). It is plausible that hinginess increases as one gets closer to this point. (But it's not obvious, e.g. there could just be a slow conv... (read more)

Ofer

Interesting post!

But even if we restricted ourselves to a uniform prior over the first 10% of civilisation’s history, the prior would still be as low as 1 in 100,000.

Why should we use a uniform distribution as a prior? If I had to bet on which century would be the most influential for a random alien civilization, my prior distribution for "most influential century" would be a monotonically decreasing function.

Thanks for writing this all up! A few small comments:

And, for the Time of Perils view to really support HoH, it’s not quite enough to show that extinction risk is unusually high; what’s needed is that extinction risk mitigation efforts are unusually cost-effective. So part of the view must be not only that extinction risk is unusually high at this time, but also that longtermist altruists are unusually well-placed to decrease those risks — perhaps because extinction risk reduction is unusually neglected.

It could even be the case that extinction ris... (read more)

Ramiro
I agree with your reasoning concerning uncertainty. In the arguments against HoH, there’s an appeal to the uncertainty of our evaluations of "influence". However, the definition of the most influential time depends on an evaluation of the opportunity costs of investing in one time vs. another (such as the short term vs. the long term).

Uncertainty is a double-edged sword: I get confused when someone argues for “give later” mostly on the grounds of our current uncertainty about impact (actually, uncertainty often induces risk-aversion and presentist bias). Suppose that I currently have a credence of 0.7 in the statement “AMF saves at least a life (30 QALY) for every U$3,000”; if I wait ten years, I can hope my confidence in such statements will increase to something like 0.8. However, my confidence in such an increase is just 0.9 – so, when I aggregate all of this uncertainty, it’s almost a draw – 0.72. (Sorry about using point estimates, but I’m no statistician, and I guess we'd better keep it simple.) Something similar applies to “start a movement”, and I didn’t even mention cluelessness and value shift.

So, if I donate to a Fund that promises to invest in the best actions in the long-term future, instead of the short term, I have to trust:
a) that the world is not going to end first (so I have to discount extinction rates);
b) that the Fund and the underlying financial structure will not end first (or significantly lose their value);
c) that the Fund will correctly identify a more influential moment; and
d) that its investment will be aligned with my impartial preferences (as I would decide if I had the same info).

Interesting piece. One challenge in extending it to decision making is "resources". It's not clear if you mean financial instruments or some kind of stockpiling. There appears to be some not fully considered vacillation on that topic.

Financial instruments are probably the default, but as we move into more and more long-term views, the meaning of these becomes more vague. Does stockpiling financial instruments really pass "resources" to a future generation? While at a micro-economic level these are very translatable... (read more)

I agree with most of your reasoning, but disagree significantly about this:

>The case for focusing on AI safety and existential risk reduction is much weaker if you live in a simulation than if you don’t.

It's true that a pure utilitarian would expect about an order of magnitude less utility from x-risk reduction if we have a 90% chance of being in a simulation compared to a zero chance of being in a simulation. But the pure utilitarian case for x-risk reduction isn't very sensitive to an order of magnitude change in utility, since the expected ... (read more)

This post introduced the "hinge of history hypothesis" to the broader EA community, and that has been a very valuable contribution. (Although note that the author states that they are mostly summarizing existing work, rather than creating novel insights.)

The definitions are clear, and time has proven that the terms "strong longtermism" and "hinge of history" are valuable when considering a wide variety of questions.

Will has since published an updated article, which he links to in this post, and the topic has received input from others, e.g. this critique f... (read more)

Claim: The most influential time in the future must occur before civilization goes extinct.

Thoughts on whether this is true or not?

SiebeRozendal
"Must" is a strong word, so that's one reason I don't think it's true. What do you mean by "civilization goes extinct"? Because:
1) There might be complex societies beyond Earth.
2) New complex societies made up of intelligent beings can arise even after Homo sapiens goes extinct.

Typo corrections:

Lots of things are a priori extremely [unlikely] yet we should have high credence in them

and

so I should update towards the cards having [not] been shuffled.

and

All other things being equal, this gives us reason to give resources to future people than to use rather than to use those resources now.

This doesn't show up on the sidebar Table of Contents:

#3: The simulation update argument against HoH
William_MacAskill
Thanks! :)
JP Addison🔸
I believe the #3 not showing up is due to it having non-bold text on that line. (the [5] footnote). This is kinda awkwardly unexpected behavior, sorry about that. But I'm not sure what I'd rather the behavior be. The simple rule of "lines with only bold text are counted as h4, otherwise it's treated as a paragraph" probably leads to less surprise than some attempt to do a threshold.

Cool, thanks for getting all these ideas out there!

Possible correction: You write "P(simulation | seems like HoH ) >> P(not-simulation | seems like HoH)". Shouldn't the term on the right just be "P(simulation | doesn't seem like HoH)"?

[anonymous]
Both seem true and relevant. You could in fact write P(seems like HoH | simulation) >> P(seems like HoH | not simulation), which leads to the other two via Bayes theorem.
Lukas Finnveden
Not necessarily.

P(simulation | seems like HoH) = P(seems like HoH | simulation) * P(simulation) / (P(seems like HoH | simulation) * P(simulation) + P(seems like HoH | not simulation) * P(not simulation))

Even if P(seems like HoH | simulation) >> P(seems like HoH | not simulation), P(simulation | seems like HoH) could be much less than 50% if we have a low prior for P(simulation). That's why the term on the right might be wrong - the present text is claiming that our prior probability of being in a simulation should be large enough that HoH should make us assign a lot more than 50% to being in a simulation, which is a stronger claim than HoH just being strong evidence for us being in a simulation.
[anonymous]
Agreed, I was assuming that the prior for the simulation hypothesis isn't very low because people seem to put credence in it even before Will's argument. But I found it worth noting that Will's inequality only follows from mine (the likelihood ratio) plus having a reasonably even prior odds ratio.
Lukas Finnveden
Ok, I see. This is kind of tangential, but some of the reasons that people put credence in it before Will's argument are very similar to Will's argument, so one has to make sure to not update on the same argument twice. Most of the force from the original simulation argument comes from the intuition that ancestor simulations are particularly interesting. (Bostrom's trilemma isn't nearly as interesting for a randomly chosen time-and-space chunk of the universe, because the most likely solution is that nobody ever had any reason to simulate it.) Why would simulations of early humans be particularly interesting? I'd guess that this bottoms out in them having disproportionately much influence over the universe relative to how cheap they are to simulate, which is very close to the argument that Will is making.
trammell
Also, even if one could say P(simulation | seems like HoH) >> P(not-simulation | seems like HoH), that wouldn’t be decision-relevant, since it could just be that P(simulation) >> P(not-simulation) in either case. What matters is which observation (seems like HoH or not) renders it more likely that the observer is being simulated.
trammell
We have no idea if simulations are even possible! We can’t just casually assert “P(seems like HoH | simulation) > P(seems like HoH | not simulation)”! All that we can reasonably speculate is that, if simulations are made, they’re more likely to be of special times than of boring times.
Lukas Finnveden
Did you make a typo here? "if simulations are made, they're more likely to be of special times than of boring times" is almost exactly what “P(seems like HoH | simulation) > P(seems like HoH | not simulation)” is saying. The only assumptions you need to go between them are that the world is more likely to seem like HoH for people living in special times than for people living in boring times, and that the statement "more likely to be of special times than of boring times" is meant relative to the rate at which special times and boring times appear outside of simulations.
trammell
And that P(simulation) > 0.
[anonymous]
Yep, see reply to Lukas.

Would someone be willing to translate these sentences from philosophy/maths into English? Or let me know how I can work it out for myself?

That is: P(cards not shuffled)P(cards in perfect order | cards not shuffled) >> P(cards shuffled)P(cards in perfect order | cards shuffled), even if my prior credence was that P(cards shuffled) > P(cards not shuffled), so I should update towards the cards having not been shuffled.
Similarly, if it seems to me that I’m living in the most influential time ever, this gives me good reason to suspect that the r
... (read more)
wuschel
Imagine you play cards with your friends. You have the deck in your hand. You are pretty confident that you have shuffled the deck. Then you deal the deck, and give yourself the first 13 cards. And what a surprise: you happen to find all the clubs in your hand! What is more reasonable to assume? That you just happened to draw all the clubs, or that you were wrong about having shuffled the cards? Rather the latter. Compare this to thinking about the HoH hypothesis: you are pretty confident that you are good at long-term forecasting, and you predict that the most influential time in history is: NOW?! Here too, so the argument goes, it is more reasonable to assume that your assumption of being good at forecasting the future is flawed.
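For anyone who wants the numbers spelled out, here is a minimal sketch of the inequality from the quoted passage; the priors and the "unshuffled deck" likelihood are made up, and only the 1/52! term really matters:

```python
# Numbers for the card example: P(perfect order | shuffled) = 1/52! is so small
# that it swamps any reasonable prior that the deck was shuffled.

from math import factorial

p_shuffled = 0.99                              # prior: I'm pretty sure I shuffled
p_not_shuffled = 0.01
p_order_given_shuffled = 1 / factorial(52)     # ~1.2e-68
p_order_given_not_shuffled = 0.5               # assumed: unshuffled decks are often ordered

lhs = p_not_shuffled * p_order_given_not_shuffled   # ~5e-3
rhs = p_shuffled * p_order_given_shuffled           # ~1.2e-68
posterior_not_shuffled = lhs / (lhs + rhs)
print(f"P(not shuffled | perfect order) ~ {posterior_not_shuffled:.6f}")   # ~1.0
```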

Very interesting post and discussion in the comments.

I said at the start that it’s non-obvious what follows, for the purposes of action, from outside-view longtermism. The most obvious course of action that might seem comparatively more promising is investment, such as saving in a long-term foundation, or movement-building, with the aim of increasing the amount of resources longtermist altruists have at a future, more hingey time.

Throughout a lot of this post, I was wondering if the sort of reasoning given in that quote would generalise to an upda... (read more)

P(simulation | seems like HoH ) >> P(not-simulation | seems like HoH)

Disagree: as a software engineer, my prior for the simulation hypothesis is extraordinarily low because common sense and the laws of physics indicate convincingly that we don't live in a simulation. (The only plausible exception is if I am the only person in the simulation.)

I like Toby's point—seems like the prior about "one person's influence over the future" should decrease over time, and the point about how a significant fraction of all cognitively modern humans ever are alive t... (read more)

As currently defined, longtermists have two possible choices.

  1. Direct work to reduce X-risk
  2. Investing for the future (by saving or movement building) to then spend on reduction of x-risk at a later date

There are however other actions that may be more beneficial.

Let us look again at the definition of 'influential':

a time ti is more influential (from a longtermist perspective) than a time tj iff you would prefer to give an additional unit of resources,[1] that has to be spent doing direct work (rather than investment), to a longtermist altruist living at t
... (read more)

Thanks for this post. However, HoH still seems ambiguous to me, particularly when we take uncertainty seriously. For example, what kind of comparison is happening in “T is the most influential time ever” - and, consequently, what kind of probability function does one use to model credence in it?

1) Weak-HoH: “the sentence ‘t is hingey’ is more likely to be true for now (or for the next n years) than for any other similar set t in the future”

If you interpret hingey events as produced by stochastic processes modeled by an exponential distribution, then weak-H... (read more)
