All of William_MacAskill's Comments + Replies

Towards a Weaker Longtermism

There are definitely some people who are fanatical strong longtermists, but a lot of people who are made out to be such treat it as an important consideration, though not one held with certainty or with overwhelming dominance over all other moral frames and considerations. In my experience, one cause of this is that if you write about implications within a particular worldview, people assume you place 100% weight on it, when the correlation is a lot less than 1. 

 

I agree with this, and the example of Astronomical Waste is particularly notable. (As I u... (read more)

Towards a Weaker Longtermism

I'm also not defending or promoting strong longtermism in my next book.  I defend (non-strong) longtermism, and the  definition I use is: "longtermism is the view that positively influencing the longterm future is among the key moral priorities of our time." I agree with Toby on the analogy to environmentalism.

(The definition I use of strong longtermism is that it's the view that positively influencing the longterm future is the moral priority of our time.)

Davidmanheim: Thanks Will - I apologize for mischaracterizing your views, and am very happy to see that I was misunderstanding your actual position. I have edited the post to clarify. I'm especially happy about the clarification because I think there was at least a perception in the community that you and/or others do, in fact, endorse this position, and therefore that it is the "mainstream EA view," albeit one which almost everyone I have spoken to about the issue in detail seems to disagree with.
Gordon Irlam: an effective altruist ahead of his time

I agree that Gordon deserves great praise and recognition! 

One clarification: My discussion of Zhdanov was based on Gordon's work: he volunteered for GWWC in the early days, and cross-posted about Zhdanov on the 80k blog. In DGB, I failed to  cite him, which was a major oversight on my part, and I feel really bad about that. (I've apologized to him about this.)  So that discussion shouldn't be seen as independent convergence. 

Thoughts on whether we're living at the most influential time in history

Thanks Greg  - I asked and it turned out I had one remaining day to make edits to the paper, so I've made some minor ones in a direction you'd like, though I'm sure they won't be sufficient to satisfy you. 

Going to have to get back on with other work at this point, but I think your  arguments are important, though the 'bait and switch' doesn't seem totally fair - e.g. the update towards living in a simulation only works when you appreciate the improbability of living on a single planet.

Thoughts on whether we're living at the most influential time in history

Thanks for this, Greg.

"But what is your posterior? Like Buck, I'm unclear whether your view is the central estimate should be (e.g.) 0.1% or 1 / 1 million."

I'm surprised this wasn't clear to you, which has made me think I've done a bad job of expressing myself.  

It's the former, and for the reason given in your explanation (2): us being early, being on a single planet, and being at such a high rate of economic growth should collectively give us an enormous update. In the blog post I describe what I call the outside-view arguments, including t... (read more)

For my part, I'm more partial to 'blaming the reader', but (evidently) better people mete out better measure than I in turn.

Insofar as it goes, I think the challenge (at least for me) is that qualitative terms can cover multitudes (or orders of magnitude) of precision. I'd take ~0.3% to be 'significant' credence for some values of 'significant'. 'Strong', 'compelling', or 'good' arguments could be a likelihood ratio (LR) of 2 (after all, RCT confirmation can be ~3) or 200. 

I also think quantitative articulation would help the reader (or at least this reader) better benchmark t... (read more)

How much of that 0.1% comes from worlds where your outside view argument is right vs worlds where your outside view argument is wrong? 

This kind of stuff is pretty complicated so I might not be making sense here, but here's what I mean: I have some distribution over what model to be using to answer the "are we at HoH" question, and each model has some probability that we're at HoH, and I derive my overall belief by adding up the credence in HoH that I get from each model (weighted by my credence in it).  It seems like your outside view model assi... (read more)

Thoughts on whether we're living at the most influential time in history

Actually, rereading my post I realize I had already made an edit similar to the one you suggest  (though not linking to the article which hadn't been finished) back in March 2020:

"[Later Edit (Mar 2020): The way I state the choice of prior in the text above was mistaken, and therefore caused some confusion. The way I should have stated the prior choice, to represent what I was thinking of, is as follows:

The prior probability of us living in the most influential century, conditional on Earth-originating civilization lasting for n centuries, is 1/n.

The ... (read more)

Oh man, I'm so sorry, you're totally right that this edit fixes the problem I was complaining about. When I read this edit, I initially misunderstood it in such a way that it didn't address my concern. My apologies.

Thoughts on whether we're living at the most influential time in history

Thanks, Greg.  I really wasn't meaning to come across as super confident in a particular posterior (rather than giving an indicative number for a central estimate), so I'm sorry if I did.


"It seems more reasonable to say 'our' prior is rather some mixed gestalt on considering the issue as a whole, and the concern about base-rates etc. should be seen as an argument for updating this downwards, rather than a bid to set the terms of the discussion."

I agree with this (though see the discussion with Lukas for some clarification about what we're talking ... (read more)

But what is your posterior? Like Buck, I'm unclear whether your view is that the central estimate should be (e.g.) 0.1% or 1 / 1 million. I want to push on this because if your own credences are inconsistent with your argument, the reasons why seem both important to explore and to make clear to readers, who may be misled into taking this at 'face value'. 

From this, on page 13, I guess a generous estimate (/upper bound) is something like 1 / 1 million for 'among the most important million people':

[W]e can assess the quality of the arguments given in favour of

... (read more)
Thoughts on whether we're living at the most influential time in history

Richard’s response is about right. My prior with respect to influentialness is such that either: x-risk is almost surely zero; or we are almost surely not going to have a long future; or x-risk is higher now than it will be in the future, but harder to prevent than it will be in the future; or in the future there will be non-x-risk-mediated ways of affecting similarly enormous amounts of value in the future; or the idea that most of the value is in the future is false.

I do think we should update away from those priors, and I think that update is sufficient ... (read more)

Hmm, interesting. It seems to me that your priors cause you to think that the "naive longtermist" story, where we're in a time of perils and if we can get through it, x-risk goes basically to zero and there are no more good ways to affect similarly enormous amounts of value, has a probability which is basically zero. (This is just me musing.)

Thoughts on whether we're living at the most influential time in history

"Only using a single, simple function for something so complicated seems overconfident to me. And any mix of functions where one of them assigns decent probability to early people being the most influential is enough that it's not super unlikely that early people are the most influential."

I strongly agree with this. The fact that, under a mix of distributions, it becomes not super unlikely that early people are the most influential is really important, and was somewhat buried in the original comments-discussion. 

And then we're also very distinctive in other ways: being on one planet, being at such a high-growth period, etc. 

Thoughts on whether we're living at the most influential time in history

Thanks, I agree that this is  key. My thoughts: 

  • I agree that our earliness gives a dramatic update in favor of us being influential. I don't have a stable view on the magnitude of that. 
  • I'm not convinced that the negative exponential form of Toby's distribution is the right one, but I don't have any better suggestions 
  • Like Lukas, I think that Toby's distribution gives too much weight to early people, so the update I would make is less dramatic than Toby's
  • Seeing as Toby's prior is quite sensitive to choice of reference-class, I would wan
... (read more)
Thoughts on whether we're living at the most influential time in history

"If we're doing things right, it shouldn't matter whether we're building earliness into our prior or updating on the basis of earliness."

Thanks, Lukas, I thought this was very clear and exactly right. 

"So now we've switched over to instead making a guess about P(X in E | X in H), i.e. the probability that one of the 1e10 most influential people also is one of the 1e11 earliest people, and dividing by 10. That doesn't seem much easier than making a guess about P(X in H | X in E), and it's not obvious whether our intuitions here would lead us to expect ... (read more)

This seems important to me because, for someone claiming that we should think that we're at the HoH, the update on the basis of earliness is doing much more work than updates on the basis of, say, familiar arguments about when AGI is coming and what will happen when it does.  To me at least, that's a striking fact and wouldn't have been obvious before I started thinking about these things.

It seems to me the object level is where the action is, and the non-simulation Doomsday Arguments mostly raise a phantom consideration that cancels out (in particula... (read more)

Thoughts on whether we're living at the most influential time in history

This comment of mine in particular seems to have been downvoted. If anyone were willing, I'd be interested to understand why: is that because (i) the tone is off (seemed too combative?); (ii) the arguments themselves are weak; (iii) it wasn't clear what I'm saying; (iv) it wasn't engaging with Buck's argument; (v) other?

djbinder: I can't speak for why other people down-voted the comment, but I down-voted it because the arguments you make are overly simplistic. The model you have of philanthropy is that an agent in each time period has the choice to either (1) invest or (2) spend their resources, and then gets a payoff depending on how 'influential' the time is. You argue that the agent should then save until they reach the most 'influential' time, before spending all of their resources at this most influential time. I think this model is misleading for a couple of reasons. First, in the real world we don't know when the most influential time is. In this case the agent may find it optimal to spend some of their resources at each time step. For instance, direct philanthropic donations may give them a better understanding in the future of how influentialness varies (i.e., if you don't invest in AI safety researchers now, how will you ever know whether/when AI safety will be a problem?). You may also worry about "going bust": if, while you are being patient, an existential catastrophe (or value lock-in) happens, then the patient long-termist loses their entire investment. Perhaps one way to phrase how important this knowledge problem is to finding the optimal strategy is to think about it as analogous to owning stocks in a bubble. Your strategy is that we should sell at the market peak, but we can't do that if we don't know when that will be. Second, there are very plausible reasons why now may be the best time to donate. If we can spend money today to permanently reduce existential risk, or to permanently improve the welfare of the global poor, then it is always more valuable to do that action ASAP rather than wait. Likewise, we plausibly get more value by working on biorisk, AI safety, or climate change today than we will in 20 years. Third, the assumption of no diminishing marginal returns is illogical. We should be thinking about how EAs as a whole should spend their money as a whole. As
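To make the stylised model djbinder describes concrete, here's a minimal sketch; all numbers and the perfect-foresight assumption are hypothetical, and the objections above are precisely that this idealisation fails in practice.

```python
# Minimal sketch (hypothetical numbers) of the stylised model described above:
# each period the philanthropist either lets resources compound or spends
# everything, for a payoff of (compounded resources) * influentialness.
influentialness = [1.0, 2.0, 5.0, 3.0, 1.5]  # assumed known in advance - the key idealisation
growth = 1.05                                # assumed per-period investment return

# With perfect foresight, the plan the model recommends is: invest until the
# period that maximises compounded resources times influentialness, then spend.
payoffs = [growth ** t * h for t, h in enumerate(influentialness)]
best_period = max(range(len(payoffs)), key=payoffs.__getitem__)
print(best_period, payoffs[best_period])  # period 2, payoff ~5.51
```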
Thoughts on whether we're living at the most influential time in history

Yeah, I do think the priors-based argument given in the post  was  poorly stated, and therefore led to  unnecessary confusion. Your suggestion  is very reasonable, and I've now edited the post.

Actually, rereading my post I realize I had already made an edit similar to the one you suggest  (though not linking to the article which hadn't been finished) back in March 2020:

"[Later Edit (Mar 2020): The way I state the choice of prior in the text above was mistaken, and therefore caused some confusion. The way I should have stated the prior choice, to represent what I was thinking of, is as follows:

The prior probability of us living in the most influential century, conditional on Earth-originating civilization lasting for n centuries, is 1/n.

The ... (read more)

Thoughts on whether we're living at the most influential time in history

Comment (5/5)

Smaller comments 

  • I agree that one way you can avoid thinking we’re astronomically influential is by believing the future is short, such as by believing you’re in a simulation, and I discuss that in the blog post at some length. But, given that there are quite a number of ways in which we could fail to be at the most influential time (perhaps right now we can do comparatively little to influence the long-term, perhaps we’re too lacking in knowledge to pick the right interventions wisely, perhaps our values are misguided, perhaps longtermis
... (read more)

“It’s not clear why you’d think that the evidence for x-risk is strong enough to think we’re one-in-a-million, but not stronger than that.” This seems pretty strange as an argument to me. Being one-in-a-thousand is a thousand times less likely than being one-in-a-million, so of course if you think the evidence pushes you to thinking that you’re one-in-a-million, it needn’t push you all the way to thinking that you’re one-in-a-thousand. This seems important to me. Yes, you can give me arguments for thinking that we’re (in expectation at least) at an enormou

... (read more)
Buck: So you are saying that you do think that the evidence for longtermism/x-risk is enough to push you to thinking you're at a one-in-a-million time? EDIT: Actually I think maybe you misunderstood me? When I say "you're one-in-a-million", I mean "your x-risk is higher than 99.9999% of other centuries' x-risk"; "one in a thousand" means "higher than 99.9% of other centuries' x-risk". So one-in-a-million is a stronger claim which means higher x-risk. What I'm saying is that if you believe that x-risk is 0.1%, then you think we're at least one in a million. I don't understand why you're willing to accept that we're one-in-a-million; this seems to me to force you to have absurdly low x-risk estimates.
Thoughts on whether we're living at the most influential time in history

(Comment 4/5) 

The argument against patient philanthropy

“I sometimes hear the outside view argument used as an argument for patient philanthropy, which it in fact is not.”

I don’t think this works quite in the way you think it does.

It is true that, in a similar vein to the arguments I give against being at the most influential time (where ‘influential’ is a technical term, excluding investing opportunities), you can give an outside-view argument against now being the time at which you can do the most good tout court. As a matter of fact, I believe that’... (read more)

William_MacAskill: This comment of mine in particular seems to have been downvoted. If anyone were willing, I'd be interested to understand why: is that because (i) the tone is off (seemed too combative?); (ii) the arguments themselves are weak; (iii) it wasn't clear what I'm saying; (iv) it wasn't engaging with Buck's argument; (v) other?
vaniver: Tho I note that the only way one would ever take such opportunities, if offered, is by developing a view of what sorts of opportunities are good that is sufficiently motivating to actually take action at least once every few decades. For example, when the most attractive opportunity so far appears in year 19 of investing and assessing opportunities, will our patient philanthropist direct all their money towards it, and then start saving again? Will they reason that they don't have sufficient evidence to overcome their prior that year 19 is not more attractive than the years to come? Will they say "well, I'm following the Secretary Problem solution, and 19 is less than 70/e, so I'm still in info-gathering mode"? They won't, of course, know which path had higher value in their particular world until they die, but it seems to me like most of the information content of a strategy that waits to pull the trigger is in when it decides to pull the trigger, and this feels like the least explicit part of your argument. Compare to investing, where some people are fans of timing the market, and some people are fans of dollar-cost-averaging [https://www.investopedia.com/terms/d/dollarcostaveraging.asp]. If you think the attractiveness of giving opportunities is going to be unpredictably volatile, then doing direct work or philanthropy every year is the optimal approach. If instead you think the attractiveness of giving opportunities is predictably volatile, or predictably stable, then doing patient philanthropy makes more sense. What seems odd to me is simultaneously holding the outside view sense that we have insufficient evidence to think that we're correctly assessing a promising opportunity now, and having the sense that we should expect that we will correctly assess the promising opportunities in the future when they do happen.
Buck: My claim is that patient philanthropy is automatically making the claim that now is the time where patient philanthropy does wildly unusually much expected good, because we're so early in history that the best giving opportunities are almost surely after us.
Thoughts on whether we're living at the most influential time in history

(Comment 3/5) 

Earliness

“Will’s resolution is to say that in fact, we shouldn’t expect early times in human history to be hingey, because that would violate his strong prior that any time in human history is equally likely to be hingey.”

I don’t see why you think I think this. (I also don’t know what “violating” a prior would mean.)

The situation is: I have a prior over how influential I’m likely to be. Then I wake up, find myself in the early 21st century, and make a whole bunch of updates. These include updates on the facts that: I’m on one planet, I’m ... (read more)

Tobias_Baumann: I think it's not clear whether higher economic growth or technological progress implies more influence. This claim seems plausible, but you could also argue that it might be easier to have an influence in a stable society (with little economic or technological change), e.g. simply because of higher predictability. I'm very sympathetic to patient philanthropy, but this seems to overstate the required amount of evidence. Taking into account that each time has donors (and other resources) of their own, and that there are diminishing returns to spending, you don't need to have extreme beliefs about your elevated influentialness to think that spending now is better. However, the arguments you gave are not very specific to 2020; presumably they still hold in 2100, so it stands to reason that we should invest at least over those timeframes (until we expect the period of elevated influentialness to end). A bag of oats is presumably much more relative wealth in those other times than now. The current price of oats is GBP 120 per ton [https://www.statista.com/statistics/524185/oats-market-price-per-tonne-scotland/#:~:text=The%20price%20per%20ton%20of,pounds%20per%20tonnes%20in%202017.], so if the bag contains 50 kg, it's worth just GBP 6. People in earlier times also have less 'competition'. Presumably the medieval person could have been the first to write up arguments for antispeciesism or animal welfare; or perhaps they could have a significant impact on establishing science, increasing rationality, improving governance, etc. (All things considered, I think it's not clear if earlier times are more or less influential.)

I'm confused as to what your core outside-view argument is, Will. My initial understanding of it was the following:
(A1) We are in a potentially large future with many trillions of trillions of humans
(A2) Our prior should be that we are randomly chosen amongst all living humans
then we conclude that  
(C) We should have extremely low a priori odds of being amongst the most influential
To be very crudely quantitative about this, multiplying the number of humans on earth by the number of stars in the visible universe and the lifetime of the Earth, we qu... (read more)

Thoughts on whether we're living at the most influential time in history

(Comment 2/5)

The outside-view argument (in response to your first argument)

In the blog post, I stated the priors-based argument quite poorly - I thought this bit wouldn’t be where the disagreement was, so I didn’t spend much time on it. How wrong I was about that! For the article version (link), I tidied it up.

The key thing is that the way I’m setting priors is as a function from populations to credences: for any property F, your prior should be such that if there are n people in a population, the probability that you are in the m most F people in that pop... (read more)
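As a concrete illustration of that prior-setting rule (the population and cutoff figures below are made up for illustration, not taken from the article):

```python
# The rule quoted above: for any property F, the prior that you are among the
# m most-F people in an n-person population is m/n.
def prior_top_m_of_n(m, n):
    return m / n

# Hypothetical illustration: prior of being among the million most influential
# people ever, if 100 trillion people were ever to live.
print(prior_top_m_of_n(1e6, 1e14))  # 1e-08
```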

On this set-up of the argument (which is what was in my head but I hadn’t worked through), I don’t make any claims about how likely it is that we are part of a very long future.

 

This does make a lot more sense than what you wrote in your post. 

Do you agree that, as written, the argument in your EA Forum post is quite flawed? If so, I think you should edit it to more clearly indicate that it was a mistake, given that people are still linking to it.

ESRogs: Do you have some other way of updating on the arrow of time? (It seems like the fact that we can influence future generations, but they can't influence us, is pretty significant, and should be factored into the argument somewhere.) I wouldn't call that an update on finding ourselves early, but more like just an update on the structure of the population being sampled from.

The key thing is that the way I’m setting priors is as a function from populations to credences: for any property F, your prior should be such that if there are n people in a population, the probability that you are in the m most F people in that population is m/n

The fact that I consider a  certain property F should update me, though. This already demonstrates that F is something that I am particularly interested in, or that F is salient to me, which presumably makes it more likely that I am an outlier on F. 

Also, this principle can have p... (read more)

I still don’t see the case for building earliness into our priors, rather than updating on the basis of finding oneself seemingly-early.

If we're doing things right, it shouldn't matter whether we're building earliness into our prior or updating on the basis of earliness.

Let the set H="the 1e10 (i.e. 10 billion) most influential people who will ever live"  and let E="the 1e11 (i.e. 100 billion) earliest people who will ever live". Assume that the future will contain 1e100 people. Let X be a randomly sampled person.

For our unconditional prior P(X in... (read more)
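A small numeric check of the identity Lukas is relying on; the 1e10 / 1e11 / 1e100 figures come from his comment, while the guesses for P(X in E | X in H) are arbitrary placeholders:

```python
# Bayes' rule applied to Lukas's sets: H = the 1e10 most influential people,
# E = the 1e11 earliest people, out of an assumed 1e100 people ever.
total_people = 1e100
p_H = 1e10 / total_people  # unconditional prior P(X in H)
p_E = 1e11 / total_people  # unconditional prior P(X in E)

# P(X in H | X in E) = P(X in E | X in H) * P(X in H) / P(X in E).
# Since P(X in H) / P(X in E) = 1/10, any guess for P(E|H) just gets divided by 10.
for p_E_given_H in (0.1, 0.5, 1.0):  # arbitrary illustrative guesses
    print(p_E_given_H, p_E_given_H * p_H / p_E)
```

This is the "guess P(X in E | X in H) and divide by 10" step mentioned in the quoted passage.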

I don’t make any claims about how likely it is that we are part of a very long future. Only that, a priori, the probability that we’re *both* in a very large future *and* one of the most influential people ever is very low. For that reason, there aren’t any implications from that argument to claims about the magnitude of extinction risk this century.

I don't understand why there are implications from that argument to claims about the magnitude of our influentialness either.

As an analogy, suppose Alice bought a lottery ticket that will win her $100,000,0... (read more)

for any property F, your prior should be such that if there are n people in a population, the probability that you are in the m most F people in that population is m/n.

I want to dig into this a little, because it feels like there might be some sort of selection effect going on here. Suppose I make a claim X, and it has a number of implications X1, X2, X3 and so on. Each of these might apply to a different population, and have a different prior probability as a standalone claim. But if a critic chooses the one which has the lowest prior probability (call it... (read more)

Thoughts on whether we're living at the most influential time in history

(Comment 1/5)

Thanks so much for engaging with this, Buck! :)

I revised the argument of the blog post into a forthcoming article, available at my website (link). I’d encourage people to read that version rather than the blog post, if you’re only going to read one. The broad thrust is the same, but the presentation is better. 

I’ll discuss the improved form of the discussion about priors in another comment. Some other changes in the article version:

  • I frame the argument in terms of the most influential people, rather than the most influential times. It’s t
... (read more)

The comment I'd be most interested in from you is whether you agree that your argument forces you to believe that x-risk is almost surely zero, or that we are almost surely not going to have a long future.

Buck: I've added a link to the article to the top of my post. Those changes seem reasonable.

I like the terms 'influential' and 'influentialness'. I think they are very clear and automatically lead pretty much to the definition you give them.

How hot will it get?

Something I forgot to mention in my comments before: Peter Watson suggested to me that it's reasonably likely that estimates of climate sensitivity will be revised upwards in the next IPCC report, as the latest generation of models are running hotter. (e.g. https://www.carbonbrief.org/guest-post-why-results-from-the-next-generation-of-climate-models-matter, https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2019GL085782 - "The range of ECS values across models has widened in CMIP6, particularly on the high end, and now includes nine models with values... (read more)

Pagw: Peter here - so actually I'd say this isn't clear now. Here's some recent work, for example, suggesting that estimates of future warming won't change much compared to those from the previous set of models once recent observed warming is used as a constraint, i.e. those newer models with higher sensitivity seem to warm too fast compared to observations: https://advances.sciencemag.org/content/6/12/eaaz9549. Well, the models are only one piece of evidence going into the overall estimate anyway. I don't follow the literature on this closely enough to be confident about what the IPCC will actually conclude.
Halstead: Ah, I didn't know that, thanks, I haven't followed the literature that closely over the last year. I'll put that into the model. On a side note, that does seem high, and doesn't seem like it would fit with the observational data for the last 200 years very well.
I'm Buck Shlegeris, I do research and outreach at MIRI, AMA

How much do you worry that MIRI's default non-disclosure policy is going to hinder MIRI's ability to do good research, because it won't be able to get as much external criticism?

I worry very little about losing the opportunity to get external criticism from people who wouldn't engage very deeply with our work if they did have access to it. I worry more about us doing worse research because it's harder for extremely engaged outsiders to contribute to our work.

A few years ago, Holden had a great post where he wrote:


For nearly a decade now, we've been putting a huge amount of work into putting the details of our reasoning out in public, and yet I am hard-pressed to think of cases (especially in more recent years) where
... (read more)
I'm Buck Shlegeris, I do research and outreach at MIRI, AMA

Suppose you find out that Buck-in-2040 thinks that the work you're currently doing is a big mistake (which should have been clear to you, now). What are your best guesses about what his reasons are?

I think of myself as making a lot of gambles with my career choices. And I suspect that regardless of which way the propositions turn out, I'll have an inclination to think that I was an idiot for not realizing them sooner. For example, I often have both the following thoughts:

  • "I have a bunch of comparative advantage at helping MIRI with their stuff, and I'm not going to be able to quickly reduce my confidence in their research directions. So I should stop worrying about it and just do as much as I can."
  • "I am not sure whether the M
... (read more)
I'm Buck Shlegeris, I do research and outreach at MIRI, AMA

What's the biggest misconception people have about current technical AI alignment work? What's the biggest misconception people have about MIRI?

Reality is often underpowered

Thanks Greg - I really enjoyed this post.

I don't think that this is what you're saying, but I think if someone drew the lesson from your post that, when reality is underpowered, there's no point in doing research into the question, that would be a mistake.

When I look at tiny-n sample sizes for important questions (e.g.: "How have new ideas made major changes to the focus of academic economics?" or "Why have social movements collapsed in the past?"), I generally don't feel at all like I'm trying to get a p<0... (read more)

Thanks, Will!

I definitely agree we can look at qualitative data for hypothesis generation (after all, n=1 is still an existence proof). But I'd generally recommend breadth-first rather than depth-first if we're trying to adduce considerations.

For many/most sorts of policy decisions, although we may find a case of X (some factor) --> Y (some desirable outcome), we can probably also find cases of ¬X --> Y and X --> ¬Y. E.g., contrasting with what happened with prospect theory, there are also cases where someone happened on an important breakthrough w

... (read more)
Are we living at the most influential time in history?

Sorry - the 'or otherwise lost' qualifier was meant to be a catch-all for any way of the investment losing its value, including (bad) value-drift.

I think there's a decent case for (some) EAs doing better at avoiding this than e.g. typical foundations:

  • If you have precise values (e.g. classical utilitarianism) then it's easier to transmit those values across time - you can write your values down clearly as part of the constitution of the foundation, and it's easier to find and identify younger people to take over the fund who also endor
... (read more)
Kit: Got it. Given the inclusion of (bad) value drift in 'appropriated (or otherwise lost)', my previous comment should just be interpreted as providing evidence to counter this claim: [Recap of my previous comment] It seems that this quote predicts a lower rate than there has ever† been before. Such predictions can be correct! However, a plan for making the prediction come true is needed. It seems that the plan should be different to what essentially all†† the people with higher rates of (bad) value drift did. These particular suggestions (succession planning and including an institution's objectives in its charter) seem qualitatively similar to significant minority practices in the past. (e.g. one of my outside views uses the reference class of 'charities with clear founding values'. For the 'institutions through the eras' one, religious groups with explicit creeds and explicit succession planning were prominent examples I had in mind.) The open question then seems to be whether EAs will tend to achieve sufficient improvement in such practices to bring (bad) value drift down by around an order of magnitude relative to what has been achieved historically. This seems unlikely to me, but not implausible. In particular, the idea that it is easier to design a constitution based on classical utilitarianism than for other goals people have had is very interesting. Aside: investing heavily in these practices seems easier for larger donors. The quote seems very hard to defend for donors too small to attract a highly dedicated successor. This discussion has made me think that insofar as one does punt to the future, making progress on how to reduce institutional value drift would be a very valuable project, even if I'm doubtful about how much progress is possible. † It seems appropriate to exclude all groups coordinating for mutual self-interest, such as governments. (This is broader than my initial carving out of for-profits.) †† However, it seems useful to think about a mu
Ask Me Anything!

I think you might be misunderstanding what I was referring to. An example of what I mean: Suppose Jane is deciding whether to work for Deepmind on the AI safety team. She’s unsure whether this speeds up or slows down AI development; her credence is imprecise, represented by the interval [0.4, 0.6]. She’s confident, let’s say, that speeding up AI development is bad. Because there’s some precisification of her credences on which taking the job is good, and some on which taking the job is bad, then if she uses a Liberal decision rule (= it is permissible for ... (read more)

She’s unsure whether this speeds up or slows down AI development; her credence is imprecise, represented by the interval [0.4, 0.6]. She’s confident, let’s say, that speeding up AI development is bad.

That's an awfully (in)convenient interval to have! That is the unique position for an interval of that length with no distinguishing views about any parts of the interval, such that integrating over it gives you a probability of 0.5 and expected impact of 0.

The standard response to that is that you should weigh all these and do what is in expectation be
... (read more)
Are we living at the most influential time in history?

Thanks, William! 

Yeah, I think I messed up this bit. I should have used the harmonic mean rather than the arithmetic mean when averaging over possibilities of how many people will be in the future. Doing this brings the chance of being among the most influential people ever close to the chance of being the most influential person ever in a small-population universe. But then we get the issue that being the most influential person ever in a small-population universe is much less important than being the most influential person in a big-population universe.... (read more)
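A minimal sketch of the harmonic- vs arithmetic-mean point, with made-up probabilities and population sizes for the two scenarios:

```python
# Harmonic vs arithmetic mean over possible future population sizes.
# Hypothetical numbers, purely for illustration.
scenarios = [
    (0.5, 1e11),  # (probability, total number of people ever) - a "small" future
    (0.5, 1e30),  # a "big" future
]
m = 1e6  # being among the m most influential people ever

arith_mean_N = sum(p * N for p, N in scenarios)
harm_mean_N = 1 / sum(p / N for p, N in scenarios)
chance = sum(p * (m / N) for p, N in scenarios)  # the correct average of m/N

print(m / arith_mean_N)  # ~2e-24: using the arithmetic mean, the big future dominates
print(m / harm_mean_N)   # ~5e-06: equals `chance`; the small future dominates
print(chance)
```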

WilliamKiely: Thanks for the reply, Will. I go by Will too by the way. This assumption seems dubious to me because it seems to ignore the nontrivial possibility that there is something like a Great Filter in our future that requires direct work to overcome (or could benefit from direct work). That is, maybe if we solve one challenge in our near-term future right (e.g. hand off the future to benevolent AGI) then it will be more or less inevitable that life will flourish for billions of years, and if we fail to overcome that challenge then we will go extinct fairly soon. As long as you put a nontrivial probability on such a challenge existing in the short-term future and it being tractable, then even longtermist altruists in the small-population worlds (possibly ours) who try punting to the future / passing the buck instead of doing direct work and thus fail to make it past the Great-Filter-like challenge can (I claim, contrary to you by my understanding) be said to be living in an action-relevant world despite living in a small-population universe. This is because they had the power (even though they didn't exercise it) to make the future a big-population universe.
Are we living at the most influential time in history?

Thanks for these links. I’m not sure if your comment was meant to be a criticism of the argument, though? If so: I’m saying “prior is low, and there is a healthy false positive rate, so don’t have high posterior.” You’re pointing out that there’s a healthy false negative rate too — but that won’t cause me to have a high posterior?

And, if you think that every generation is increasing in influentialness, that’s a good argument for thinking that future generations will be more influential and we should therefore save.

Are we living at the most influential time in history?

There were a couple of recurring questions, so I’ve addressed them here.

What’s the point of this discussion — isn’t passing on resources to the future too hard to be worth considering? Won’t the money be stolen, or used by people with worse values?

In brief: Yes, losing what you’ve invested is a risk, but (at least for relatively small donors) it’s outweighed by investment returns. 

Longer: The concept of ‘influentialness of a time’ is the same as the cost-effectiveness (from a longtermist perspective) of the best opportunities accessible to longtermists at ... (read more)

I was very surprised to see that 'funds being appropriated (or otherwise lost)' is the main concern with attempting to move resources 100 years into the future. Before seeing this comment, I would have been confident that the primary difficulty is in building an institution which maintains acceptable values† for 100 years.

Some of the very limited data we have on value drift within individual people suggests losses of 11% and 18% per year for two groups over 5 years. I think these numbers are a reasonable estimate for people who have held certain ... (read more)
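For a rough sense of how loss rates of that magnitude interact with investment returns (all rates here are assumptions for illustration, not estimates):

```python
# Rough sketch, not a claim about actual rates: compare compounding investment
# returns against an annual probability that the funds are lost or drift away.
annual_return = 0.05      # assumed real rate of return
annual_loss_prob = 0.15   # roughly the 11-18%/yr value-drift figures cited above
years = 100

print(((1 + annual_return) * (1 - annual_loss_prob)) ** years)  # ~1e-5: losses swamp returns

# With a much lower loss rate (say 0.5%/yr), returns dominate instead:
print(((1 + annual_return) * (1 - 0.005)) ** years)  # ~80x
```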

The concept of ‘influentialness of a time’ is the same as the cost-effectiveness (from a longtermist perspective) of the best opportunities accessible to longtermists at a time. [...] So if you think that hingeyness (as I’ve defined it) is about the same in 100 years as it is now, or greater, then there’s a strong case for investing for 100 years before spending the money.

Are you referring to average or marginal cost-effectiveness here? If "average", then this seems wrong. From the perspective of making a decision on whether to spend on longtermist caus

... (read more)
Are we living at the most influential time in history?

I don't think I agree with this, unless one is able to make a comparative claim about the importance (from a longtermist perspective) of these events relative to future events' importance - which is exactly what I'm questioning.

I do think that weighting earlier generations more heavily is correct, though; I don't feel that much turns on whether one construes this as prior choice or an update from one's prior.

Are we living at the most influential time in history?

Given this, if one had a hyperprior over different possible Beta distributions, shouldn't 2000 centuries of no event occurring cause one to update quite hard against the (0.5, 0.5) or (1, 1) hyperparameters, and in favour of a prior that was massively skewed towards the per-century probability of a lock-in event being very low?

(And noting that, depending exactly on how the proposition is specified, I think we can be very confident that it hasn't happened yet. E.g. if the proposition under consideration was 'a values lock-in event occurs such that everyone after this point has the same values'.)

Toby_Ord: That's interesting. Earlier I suggested that a mixture of different priors that included some like mine would give a result very different to your result. But you are right to say that we can interpret this in two ways: as a mixture of ur-priors or as a mixture of priors we get after updating on the length of time so far. I was implicitly assuming the latter, but maybe the former is better and it would indeed lessen or eliminate the effect I mentioned. Your suggestion is also interesting as a general approach, choosing a distribution over these Beta distributions instead of debating between certainty in (0,0), (0.5, 0.5), and (1,1). For some distributions over Beta parameters the maths is probably quite tractable. That might be an answer to the right meta-rational approach rather than an answer to the right rational approach, or something, but it does seem nicely robust.
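A minimal sketch of the update Will asks about, showing only the basic Beta-Binomial calculation for fixed hyperparameters rather than the full hyperprior mixture Toby mentions:

```python
# Minimal sketch of the Beta-Binomial update being discussed (fixed
# hyperparameters only, not the full hyperprior mixture).
# Prior on the per-century probability p of a lock-in event: Beta(a, b).
# After k centuries with no event, the posterior is Beta(a, b + k).
def posterior_mean_event_prob(a, b, centuries_without_event):
    return a / (a + b + centuries_without_event)

for a, b in [(0.5, 0.5), (1.0, 1.0)]:  # the Jeffreys and uniform choices mentioned above
    print((a, b), posterior_mean_event_prob(a, b, 2000))
# Roughly 1/4000 and 1/2000 respectively: the data pull hard towards the
# per-century probability of a lock-in event being very low.
```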
Are we living at the most influential time in history?

Hi Toby,

Thanks so much for this very clear response, it was a very satisfying read, and there’s a lot for me to chew on. And thanks for locating the point of disagreement — prior to this post, I would have guessed that the biggest difference between me and some others was on the weight placed on the arguments for the Time of Perils and Value Lock-In views, rather than on the choice of prior. But it seems that that’s not true, and that’s very helpful to know. If so, it suggests (advertisement to the Forum!) that further work on prior-setting in EA contexts ... (read more)

Thanks for this very thorough reply. There are so many strands here that I can't really hope to do justice to them all, but I'll make a few observations.

1) There are two versions of my argument. The weak/vague one is that a uniform prior is wrong and the real prior should decay over time, such that you can't make your extreme claim from priors. The strong/precise one is that it should decay as 1/n^2, in line with a version of LLS (Laplace's law of succession). The latter is more meant as an illustration. It is my go-to default for things like this, but my main point here ... (read more)

I appreciate your explicitly laying out issues with the Laplace prior! I found this helpful.

The approach to picking a prior here which I feel least uneasy about is something like: "take a simplicity-weighted average over different generating processes for distributions of hinginess over time". This gives a mixture with some weight on uniform (very simple), some weight on monotonically-increasing and monotonically-decreasing functions (also quite simple), some weight on single-peaked and single-troughed functions (disproportionately with the peak ... (read more)

Are we living at the most influential time in history?

The way I'd think about it is that we should be uncertain about how justifiably confident people can be that they're at the HoH. If our current credence in HoH is low, then the chance that it might be justifiably much higher in the future should be the significant consideration. At least if we put aside simulation worries, I can imagine evidence which would lead me to have high confidence that I'm at the HoH.

E.g., the prior is (say) 1/million this decade, but if the evidence suggests it is 1%, perhaps we should drop everything to work on
... (read more)
Are we living at the most influential time in history?
So I would say both the population and pre-emption (by earlier stabilization) factors intensely favor earlier eras in per-resource hingeyness, constrained by the era having any significant lock-in opportunities and the presence of longtermists.

I think this is a really important comment; I see I didn't put these considerations into the outside-view arguments, but I should have done, as they make for powerful arguments.

The factors you mention are analogous to the parameters that go into the Ramsey model for discounting: (i) a pure rate of time prefe... (read more)


> Then given uncertainty about these parameters, in the long run the scenarios that dominate the EV calculation are where there’s been no pre-emption and the future population is not that high. e.g. There's been some great societal catastrophe and we're rebuilding civilization from just a few million people. If we think the inverse relationship between population size and hingeyness is very strong, then maybe we should be saving for such a possible scenario; that's the hinge moment.

I agree (and have used in calculations about optimal dis... (read more)

Tobias_Baumann: Maybe it's a nitpick but I don't think this is always right. For instance, suppose that from now on, population size declines by 20% each century (indefinitely). I don't think that would mean that later generations are more hingy? Or, imagine a counterfactual where population levels are divided by 10 across all generations – that would mean that one controls a larger fraction of resources but can also affect fewer beings, which prima facie cancels out. It seems to me that the relevant question is whether the present population size is small compared to the future, i.e. whether the present generation is a "population bottleneck". (Cf. Max Daniel's comment.) That's arguably true for our time (especially if space colonisation becomes feasible at some point) and also in the rebuilding scenario you mentioned.
CarlShulman: In expectation, just as a result of combining comparability within a few OOM on likelihood of a hinge in the era/transition, but far more in population. I was not ruling out specific scenarios, in the sense that it is possible that a random lottery ticket is the winner and worth tens of millions of dollars, but not an option for best investment. Generally, I'm thinking in expectations since they're more action-guiding.
Are we living at the most influential time in history?
I think this overstates the case. Diminishing returns to expenditures in a particular time favor a nonzero disbursement rate (e.g. with logarithmic returns to spending at a given time, 10x HoH levels would drive a 10x expenditure for a given period)
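A quick check of the parenthetical claim about logarithmic returns, with hypothetical hingeyness values:

```python
# Quick check of the parenthetical claim: with logarithmic returns to spending
# within each period, the optimal split of a fixed budget across periods is
# proportional to each period's hingeyness. Numbers are hypothetical.
# (Maximising sum_i h_i * log(x_i) subject to sum_i x_i = B gives x_i = B * h_i / sum_j h_j.)
hingeyness = [1.0, 1.0, 10.0]  # last period assumed 10x as hingey
budget = 100.0

total_h = sum(hingeyness)
spend = [budget * h / total_h for h in hingeyness]
print(spend)  # [8.33..., 8.33..., 83.33...]: the 10x-hingey period gets 10x the spend

# First-order condition check: marginal utility h_i / x_i is equal across periods.
print([h / x for h, x in zip(hingeyness, spend)])
```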

Sorry, I wasn’t meaning we should be entirely punting to the future, and in case it’s not clear from my post my actual all-things-considered views is that longtermist EAs should be endorsing a mixed strategy of some significant proportion of effort spent on near-term longtermist activities and some proportion of ... (read more)

I do agree that, at the moment, EA is mainly investing (e.g. because of Open Phil and because of human capital and because much actual expenditure is field-building-y, as you say). But it seems like at the moment that’s primarily because of management constraints and weirdness of borrowing-to-give (etc), rather than a principled plan to spread giving out over some (possibly very long) time period.

I agree that many small donors do not have a principled plan and are trying to shift the overall portfolio towards more donation soon (which can have the effect... (read more)

Are we living at the most influential time in history?
I would note that the creation of numerous simulations of HoH-type periods doesn't reduce the total impact of the actual HoH folk

Agree that it might well be that even though one has a very low credence in HoH, one should still act in the same way. (e.g. because if one is not at HoH, one is a sim, and your actions don’t have much impact).

The sim-arg could still cause you to change your actions, though. It’s somewhat plausible to me, for example, that the chance of being a sim if you’re at the very most momentous time is 1000x higher than the chance of ... (read more)

Your argument seems to combine SSA style anthropic reasoning with CDT. I believe this is a questionable combination as it gives different answers from an ex-ante rational policy or from updateless decision theory (see e.g. https://www.umsu.de/papers/driver-2011.pdf). The combination is probably also dutch-bookable.

Consider the different hingeynesses of times as the different possible worlds and your different real or simulated versions as your possible locations in that world. Say both worlds are equally likely a priori and there is one real version of you

... (read more)
Olle Häggström: Is this slightly off? The factor that goes into the expected impact is the chance of being a non-sim (not the chance of being a sim), so for the argument to make sense, you might wish to replace "the chance of being a sim [...] is 1000x higher than..." by "the chance of being a non-sim is just 1/1000 of..."?
Are we living at the most influential time in history?
I agree we are learning more about how to effectively exert resources to affect the future, but if your definition is concerned with the effect of a marginal increment of resources (rather than the total capacity of an era), then you need to wrestle with the issue of diminishing returns.

I agree with this, though if we’re unsure about how many resources will be put towards longtermist causes in the future, then the expected value of saving will come to be dominated by the scenario where very few resources are devoted to it. (As happens in the Ramsey model f... (read more)

You might think the counterfactual is unfair here, but I wouldn’t regard it as accessible to someone in 1600 to know that they could make contributions to science and the Enlightenment as a good way of influencing the long-run future. 

Is longtermism accessible today? That's a philosophy of a narrow circle, as Baconian science and the beginnings of the culture of progress were in 1600. If you are a specialist focused on moral reform and progress today with unusual knowledge, you might want to consider a counterpart in the past in a similar position for their time.

Are we living at the most influential time in history?
To talk about what they would have been one needs to consider a counterfactual in which we anachronistically introduce at least some minimal version of longtermist altruism, and what one includes in that intervention will affect the result one extracts from the exercise.

I agree there’s a tricky issue of how exactly one constructs the counterfactual. The definition I’m using is trying to get it as close as possible to a counterfactual we really face: how much to spend now vs how much to pass resources onto future altruists. I’d be interested if others thoug... (read more)

I’d be interested if others thought of very different approaches. It’s possible that I’m trying to pack too much into the concept of ‘most influential’, or that this concept should be kept separate from the idea of moving resources around to different times.

I tried engaging with the post for 2-3 hours and was working on a response, but ended up kind of bouncing off at least in part because the definition of hingyness didn't seem particularly action-relevant to me, mostly for the reasons that Gregory Lewis and Kit outlined in their comments.

I also thi... (read more)

Are we living at the most influential time in history?
I would dispute this. Possibilities of AGI and global disaster were discussed by pioneers like Turing, von Neumann, Good, Minsky and others from the founding of the field of AI.

Thanks, I’ve updated on this since writing the post and think my original claim was at least too strong, and probably just wrong. I don’t currently have a good sense of, say, if I were living in the 1950s, how likely I would be to figure out AI as the thing, rather than focus on something else that turned out not to be as important (e.g. the focus on nanotech by the Foresight Institute (a group of idealistic futurists) in the late 80s could be a relevant example).

I'd guess a longtermist altruist movement would have wound up with a flatter GCR portfolio at the time. It might have researched nuclear winter and dirty bombs earlier than in OTL (and would probably invest more in nukes than today's EA movement), and would have expedited the (already pretty good) reaction to the discovery of asteroid risk. I'd also guess it would have put a lot of attention on the possibility of stable totalitarianism as lock-in.

Are we living at the most influential time in history?

Hi Carl,

Thanks so much for taking the time to write this excellent response, I really appreciate it, and you make a lot of great points.  I’ll divide up my reactions into different comments; hopefully that helps ease of reading. 

I'd like to flag that I would really like to see a more elegant term than 'hingeyness' become standard for referring to the ease of influence in different periods.

This is a good idea. Some options: influentialness; criticality; momentousness; importance; pivotality; significance. 

I’ve created a straw poll here to see... (read more)

Now it's officially on BBC: https://www.bbc.com/future/article/20200923-the-hinge-of-history-long-termism-and-existential-risk

But here’s another adjective for our times that you may not have heard before: “hingey”.

Although it also says:

(though MacAskill now prefers the term “influentialness”, as it sounds less flippant)

Thinking further, I would go with importance among those options for 'total influence of an era' but none of those terms capture the 'per capita/resource' element, and so all would tend to be misleading in that way. I think you would need an explicit additional qualifier to mean not 'this is the century when things will be decided' but 'this is the century when marginal influence is highest, largely because ~no one tried or will try.'

Criticality is confusing because it describes the point when nuclear reaction becomes self-sustaining, and relates to "critical points" in the related area of dynamical systems, which is somewhat different from what we're talking about.

I think hingeyness should have a simple name because it is not a complicated concept - it's how much actions affect long-run outcomes. In RL, in discussion of prioritized experience replay, we would just use something like "importance". I would generally use "(long-run) importance" or ... (read more)

Are we living at the most influential time in history?

Thanks - I agree that this distinction is not as crisp as would be ideal. I’d see religion-spreading, and movement-building, as in practice almost always a mixed strategy: in part one is giving resources to future people, and in part one is also directly altering how the future goes.

But it's more like buck-passing than it is like direct work, so I think I should just not include the Axial age in the list of particularly influential times (given my definition of 'influential').

Are we living at the most influential time in history?

Huh, thanks for the great link! I hadn’t seen that before, and had been under the impression that though some people (e.g. Good, Turing) had suggested the intelligence explosion, no-one really worried about the risks. Looks like I was just wrong about that.

Are we living at the most influential time in history?

Agreed, good point; I was thinking just of the case where you reduce extinction risk in one period but not in others. 

I’ll note, though, that reducing extinction risk at all future times seems very hard to do. I can imagine, if we’re close to a values lock-in point, we could shift societal values such that they care about future extinction risk much more than they would otherwise have done. But if that's the pathway, then the Time of Perils view wouldn’t provide an argument for HoH independent of the Value Lock-In view.

Are we living at the most influential time in history?

Thanks, Pablo! Yeah, the reference was deliberate — I’m actually aiming to turn a revised version of this post into a book chapter in a Festschrift for Parfit. But I should have given the great man his due! And I didn’t know he’d made the ‘most important centuries’ claim in Reasons and Persons, that’s very helpful!

Ask Me Anything!

I agree re value-drift and societal trajectory worries, and do think that work on AI is plausibly a good lever to positively affect them.

Ask Me Anything!

One thing that moves me towards placing a lot of importance on culture and institutions: We've actually had the technology and knowledge to produce greater-than-human intelligence for thousands of years, via selective breeding programs. But it's never happened, because of taboos and incentives not working out.

CarlShulman: People didn't quite have the relevant knowledge [https://www.gwern.net/Bakewell], since they didn't have sound plant and animal breeding programs or predictions of inheritance.
Ask Me Anything!

Population ethics; moral uncertainty.

I wonder if someone could go through Conceptually and make sure that all the wikipedia entries on those topics are really good?

Ask Me Anything!

I think cluelessness-ish worries. From the perspective of longtermism, for any particular action, there are thousands of considerations/ scenarios that point in the direction of the action being good, and thousands of considerations/ scenarios that point in the direction of the action being bad. The standard response to that is that you should weigh all these and do what is in expectation best, according to your best-guess credences. But maybe we just don’t have sufficiently fine-grained credences for this to work, and there’s some principled grounds for s... (read more)

Milan_Griffes: Shameless plug for my essay on cluelessness: 1 [https://forum.effectivealtruism.org/posts/LPMtTvfZvhZqy25Jw/what-consequences], 2 [https://forum.effectivealtruism.org/posts/MWquqEMMZ4WXCrsug/just-take-the-expected-value-a-possible-reply-to-concerns], 3 [https://forum.effectivealtruism.org/posts/Q8isNAMsFxny5N37Y/how-tractable-is-cluelessness], 4 [https://forum.effectivealtruism.org/posts/X2n6pt3uzZtxGT9Lm/doing-good-while-clueless]

> From the perspective of longtermism, for any particular action, there are thousands of considerations/ scenarios that point in the direction of the action being good, and thousands of considerations/ scenarios that point in the direction of the action being bad.

I worry that this type of problem is often exaggerated, e.g. with the suggestion that 'proposed x-risk A has some arguments going for it, but one could make arguments for thousands of other things' when the thousands of other candidates are never produced and could not be produced an... (read more)

Ask Me Anything!

It depends on who we point to as the experts, which I think there could be disagreement about. If we’re talking about, say, FHI folks, then I’m very clearly in the optimistic tail - others would put much higher x-risk, takeoff scenario, and chance of being superinfluential. But note I think there’s a strong selection effect with respect to who becomes an FHI person, so I don’t simply peer-update to their views. I’d expect that, say, a panel of superforecasters, after being exposed to all the arguments, would be closer to my view than to the median FHI view... (read more)

Linch: I think there's some evidence that Metaculus, while a group of fairly smart and well-informed people, are nowhere near as knowledgeable as a fairly informed EA (perhaps including a typical user of this forum?) for the specific questions around existential and global catastrophic risks. One example I can point to is that for this question on climate change [https://www.metaculus.com/questions/1500] and GCR before 2100 (that has been around since October 2018), a single not-very-informative comment [https://www.metaculus.com/accounts/profile/112057/#comment-24208] from me was enough to change the community median from 24% to 10%. This suggests to me that Metaculus users did not previously have strong evidence or careful reasoning on this question, or perhaps GCR-related thinking in general. Now you might think that actual superforecasters are better, but based on the comments given so far [https://goodjudgment.io/covid/dashboard/] for COVID-19, I'm unimpressed. In particular the selected comments point to use of reference classes that EAs and avid Metaculus users have known to be flawed for over a week before the report came out (e.g., using China's low deaths as evidence that this can be easily replicated in other countries as the default scenario). Now COVID-19 is not an existential risk or GCR, but it is an "out of distribution" problem showing clear and fast exponential growth that seems unusual for most questions superforecasters are known to excel at.
Ask Me Anything!

Thanks! I’ve read and enjoyed a number of your blog posts, and often found myself in agreement. 

If you think that extinction risk this century is less than 1%, then in particular, you think that extinction risk from transformative AI is less than 1%. So, for this to be consistent, you have to believe either
a) that it's unlikely that transformative AI will be developed at all this century,
b) that transformative AI is unlikely to lead to extinction when it is developed, e.g. because it will very likely be aligned in at least a narrow sense. (I wrote up
... (read more)
Tobias_Baumann: Strongly agree. I think it's helpful to think about it in terms of the degree to which social and economic structures optimise for growth and innovation. Our modern systems (capitalism, liberal democracy) do reward innovation - and maybe that's what caused the growth mode change - but we're far away from strongly optimising for it. We care about lots of other things, and whenever there are constraints, we don't sacrifice everything on the altar of productivity / growth / innovation. And, while you can make money by innovating, the incentive is more about innovations that are marketable in the near term, rather than maximising long-term technological progress. (Compare e.g. an app that lets you book taxis in a more convenient way vs. foundational neuroscience research.) So, a growth mode could be triggered by any social change (culture, governance, or something else) resulting in significantly stronger optimisation pressures for long-term innovation. That said, I don't really see concrete ways in which this could happen and current trends do not seem to point in this direction. (I'm also not saying this would necessarily be a good thing.)