By 'this site' do you mean the forum or all the other resources on effectivealtruism.org? In either case, if the 80,000 Hours site counts as an EA site then I highly doubt that! My guess is that the answer is going to depend on how wide the catchment is for 'EA site', but most construals are going to put 80K right out in front. Maybe GiveWell is up there, plus the GWWC site and The Life You Can Save. I also think that Nick Bostrom's personal site gets a surprising number of hits. I would guess the forum is middling to top around these sites? Very interested in being proved wrong about that!
Obviously all these sites have their own numbers, but I haven't seen them pooled together in some publicly available resource (nor am I sure that would be useful). I do know of some exact numbers but don't think it would be sensible to share them without permission. Unfortunately, in my experience it's also not totally straightforward to glean those stats from the outside, although search engine rankings etc. are a good proxy.
Oops, forgot to make this a linkpost! Have updated :)
Thanks so much for taking the time to post this!
I recently made a very hard decision, having drawn it out far too long and become really anxious and low about the whole thing in the end. I think I did a few things wrong, so would be happy to speak to anyone having a hard time weighing between really good but hard to compare options. Also, this may not be a great idea, but I think it might be nice to speak to someone I don't know to get an outside view on whether I made a good call, since I'm still in a weird / unresolved place about it. If anyone is up for chatting, I can share the doc I made detailing the two options and my thought process. Cheers!
Thanks so much for both these comments! I definitely missed some important detail there.
I think it can be useful to motivate longtermism by drawing an analogy to the prudential case — swapping out the entire future for your future, and only considering what would make your life go best.
Suppose that one day you learned that your ageing process had stopped. Maybe scientists identified the gene for ageing, and found that your ageing gene was missing. This amounts to learning that you now have much more control over how long you live than previously, because there's no longer a process imposed on you from outside that puts a guaranteed ceiling on your lifespan. If you die in the next few centuries, it'll most likely be due to an avoidable, and likely self-imposed, accident. What should you do?
To begin with, you might try a bit harder to avoid those avoidable risks to your life. If previously you had adopted a laissez-faire attitude to wearing seatbelts and helmets, now could be the time to reconsider. You might also begin to spend more time and resources on things which compound their benefits over the long run. If you'd been putting off investing because of the hassle, you now have a much stronger reason to get round to it. 5% returns for 30 years multiplies your original investment just over fourfold. 5% returns for 1,000 years works out at a significantly more attractive multiplier of more than 1,000,000,000,000,000,000,000. If keeping up your smoking habit is likely to lead to lingering lung problems which are very hard or costly to cure, you might care much more about kicking that habit soon. And you might begin to care more about 'meta' skills, like learning how to learn. While previously such skills seemed frivolous, now it's clear there's time for them to pay dividends. Finally, you might want to set up checks against some slide into madness, boredom, or destructive behaviour which living so long could make more likely. So you think carefully about your closest-held values, and write them down as a guide. You draw up plans for quickly kicking an addiction before it's too late.
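(For anyone who wants to check the compounding arithmetic, here's a quick sketch; the 5% rate and the 30- and 1,000-year horizons are just the illustrative figures from the paragraph above.)

```python
# Growth multiplier for a fixed annual return compounded over n years.
def multiplier(rate: float, years: int) -> float:
    return (1 + rate) ** years

print(multiplier(0.05, 30))    # roughly 4.3 — "just over fourfold"
print(multiplier(0.05, 1000))  # roughly 1.5e21 — more than a sextillion
```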
When you learned that your future could contain far more value than you originally thought, certain behaviours and actions became far more important than you thought. Yet, most of those behaviours were sensible things to do anyway. Far from diminishing their importance, this fact should only underline them. The analogy to our collective predicament should be clear.
Curious to hear people's thoughts, and also whether this might make a nice (if short) post.
Pessimistically, my guess is that the current low-res impression of EA is something like: charity for nerds. 'Charity' still gets taken to mean 'global health charities'. Earning to give too often gets taken to be the main goal, and maybe there's also an overemphasis on EA's confidence in what can be measured / compared / predicted (a kind of naïve utilitarianism).
(Incidentally, I'm not sure effective altruism is an idea — maybe it's more like (i) a bunch of motivating arguments and concepts; (ii) the intellectual project of building on them; (iii) the practical project of 'following through' on those ideas; and (iv) the community of people engaged in those projects. Will MacAskill's 'The Definition of Effective Altruism' is really good.)
Thanks for replying Ben, good stuff! A few thoughts.
I don't think so. There's no data on the problem, so there's nothing to adjudicate between our disagreements. We can honestly try this if you want. What's your credence?
I'll concede that point!
Now, even if we could converge on some number, what's the reason for thinking that number captures any aspect of reality?
I think a better response than the one I originally gave would be to point out that the case for strong longtermism relies on establishing a sensible lower(ish) bound for total future population. Greaves and MacAskill want to convince you that (say) at least a quadrillion lives could plausibly lie in the future. I'm curious if you have an issue with that weaker claim?
I think your point about space exploration is absolutely right, and more than a nitpick. I would say two things: one is that I can imagine a world in which we could be confident that we would never colonise the stars (e.g. if the earth were more massive and we had 5 decades before the sun scorched us or something). Second, voicing support for the 'anything permitted by physics can become practically possible' camp indirectly supports an expectation of a large number of future lives, no?
But then we need to be clear that these estimates aren't saying "anything precise about the actual world." They should be treated completely differently than estimates based on actual data. But they're not. When Greaves and MacAskill compare how many lives are saved by donating to AI safety versus the AMF, they compare these numbers as if they were equally as reliable and equally as capable of capturing something about reality.
Hmm — to my lights Greaves and MacAskill are fairly clear about the differences between the two kinds of estimate. If your reply is that doing any kind of (toy) EV calculation with both estimates just implies that they're somehow "equally as capable of capturing something about reality", then it feels like you're begging the question.
There should be no daylight. Whatever daylight there is would have to be a result of purely subjective beliefs, and we shouldn't lend this any credibility. It doesn't belong alongside an actual statistical estimate.
I don't understand what you mean here, which is partly my fault for being unclear in my original comment. Here's what I had in mind: suppose you've run a small-scale experiment and collected your data. You can generate a bunch of statistical scores indicating e.g. the effect size, plus the chance of getting the results you got assuming the null hypothesis was true (p-value). Crucially (and unsurprisingly) none of those scores directly give you the likelihood of an effect (or the 'true' anything else). If you have reason to expect a bias in the direction of positive results (e.g. publication bias), then your guess about how likely it is that you're picked up on a real effect may in fact be very different from any statistic, because it makes use of information from beyond those statistics (i.e. your prior). For instance, in certain social psych journals, you might pick a paper at random, see that p < 0.05, and nonetheless be fairly confident that you're looking at a false positive. So subjective credences (incorporating info from beyond the raw stats) do seem useful here. My guess is that I'm misunderstanding you, yell at me if I am.
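To make the 'false positive despite p < 0.05' point concrete, here's a toy calculation (all the numbers — the base rate of true hypotheses, the power — are made up for illustration, not estimates of any real literature):

```python
# Posterior probability that a statistically significant result reflects
# a real effect, given a base rate of true hypotheses, statistical power,
# and a significance threshold alpha.
def p_real_given_significant(base_rate: float, power: float, alpha: float = 0.05) -> float:
    true_positives = base_rate * power          # real effects that reach significance
    false_positives = (1 - base_rate) * alpha   # null effects that reach significance
    return true_positives / (true_positives + false_positives)

# With a low base rate and modest power, most "p < 0.05" findings are
# false positives, so a credence well below 0.5 is reasonable:
print(p_real_given_significant(base_rate=0.05, power=0.3))  # ≈ 0.24
```

The point being that the prior (base rate) does real work here, which is exactly the extra information a subjective credence can incorporate and the raw p-value can't.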
Subjective credences aren't applicable to short term situations. (Again, when I say "subjective" there's an implied "and based on no data").
By 'subjective credence' I just mean degree of belief. It feels important that everyone's on the same terminological page here, and I'm not sure any card-carrying Bayesians imply "based on no data" by "subjective"! Can you point me towards someone who has argued that subjective credences in this broader sense aren't applicable even to straightforward 'short-term' situations?
Fair point about strong longtermism plausibly recommending slowing certain kinds of progress. I'm also not convinced — David Deutsch was an influence here (as I'm guessing he was for you). But the 'wisdom outrunning technological capacity' thing still rings true to me.
I take the implication here to be that we should stop developing technology and wait for our wisdom to catch up.
There's two ways to close the gap, of course, and isn't the obvious conclusion just to speed up the 'wisdom' side?
Which ties in to your last point. Correct me if I'm wrong, but I'm taking you as saying: to the extent that strong longtermism implies significant changes in global priorities, those changes are really worrying: the logic can justify almost any present sacrifices, there's no closed feedback loop or error-correction mechanism, and it may imply a slowing down of technological progress in some cases. To the extent that strong longtermism doesn't imply significant changes in global priorities, then it hardly adds any new or compelling reasons for existing priorities. So it's either dangerous or useless or somewhere between the two.
I won't stick up for strong longtermism, because I'm unsure about it, but I will stick up for semi-skimmed longtermism. My tentative response is that there are some recommendations that (i) are more-or-less uniquely recommended by this kind of longtermism, and (ii) not dangerous or silly in the ways you suggest. One example is establishing kinds of political representation for future generations. Or funding international bodies like the BWC, spreading long-term thinking through journalism, getting fair legislative frameworks in place for when transformative / general AI arrives, or indeed for space governance.
Anyway, a crossover podcast on this would be amazing! I'll send you a message.
Thanks so much for writing this Ben! I think it's great that strong longtermism is being properly scrutinised, and I loved your recent podcast episode on this (as well as Vaden's piece).
I don't have a view of my own yet; but I do have some questions about a few of your points, and I think I can guess at how a proponent of strong longtermism might respond to others.
For clarity, I'm understanding part of your argument as saying something like the following. First, "[E]xpected value calculations, Bayes theorem, and mathematical models" are tools — often useful, often totally inappropriate or inapplicable. Second, 'Bayesian epistemology' (BE) makes inviolable laws out of these tools, running into all kinds of paradoxes and failing to represent how scientific knowledge advances. This makes BE silly at best and downright 'refuted' at worst. Third, the case for strong longtermism relies essentially on BE, which is bad news for strong longtermism.
I can imagine that a fan of BE would just object that Bayesianism in particular is just not a tool which can be swapped out for something else when it's convenient. This feels like an important but tangential argument — this LW post might be relevant. Also, briefly, I'm not 100% convinced by Popper's argument against Bayesianism which you're indirectly referencing, and I haven't read the paper Vaden wrote but it looks interesting. In any case: declaring that BE "has been refuted" seems unfairly rash.
You suggest at a few points that longtermists are just pulling numbers out of nowhere in order to take an expectation over, for instance, the number of people who will live in the long-run future. In other words, I'm reading you as saying that these numbers are totally arbitrary. You also mention that they're problematically unfalsifiable.
On the first point, it feels more accurate to say that these numbers are highly uncertain rather than totally arbitrary. I can imagine someone saying "I wouldn't be surprised if my estimate were off by several orders of magnitude"; but not "I have literally no reason to believe that this estimate is any better than a wildly different one". That's because it is possible to begin reasoning about these numbers. For instance, I was reminded of Nick Beckstead's preliminary review of the feasibility of space colonisation. If it turned out that space colonisation was practically impossible, the ceiling would fall down on estimates for the size of humanity's future. So there's some information to go on — just very little.
You make the same point in the context of estimating existential risks:
My credence could be that working on AI safety will reduce existential risk by 5% and yours could be 10−19%, and there’s no way to discriminate between them.
Really? If you're a rationalist (in the broad Popperian sense and the internet-cult sense), and we share common knowledge of each other's beliefs, then shouldn't we be able to argue towards closer agreement? Not if our estimates were totally arbitrary — but clearly they're not. Again, they're just especially uncertain.
[I]t abolishes the means by which one can disagree with its conclusion, because it can always simply use bigger numbers.
You can use bigger numbers in the sense that you can type extra zeroes on your keyboard, but you can't use bigger numbers if you care about making sure your numbers fall reasonably in line with the available facts, right? I could try turning "donating to Fin's retirement fund" into an EA cause area by just lying about its impact, but there are norms of honesty and criticism (and common sense) which would prevent the plot succeeding. Because I don't think you're suggesting that proponents of strong longtermism are being dishonest in this way, I'm confused about what you are suggesting.
Plus, as James Aung mentioned, I don't think it works to criticise subjective probabilities (and estimates derived from them) as too precise. The response is presumably: "sure, this guess is hugely uncertain. But better to give some number rather than none, and any number I pick is going to seem too precise to you. Crucially, I'm trying to represent something about my own beliefs — not that I know something precise about the actual world."
On the falsifiability point, estimates about the size of humanity's future clearly are falsifiable — it's just going to take a long time to find out. But plenty of sensible scientific claims are like this — e.g. predictions about the future of stars including our Sun. So the criticism can't be that predictions about the size of humanity's future are somehow unscientific because not immediately falsifiable.
I think this paragraph is key:
Thus, subjective credences tend to be compared side-by-side with statistics derived from actual data, and treated as if they were equivalent. But prophecies about when AGI will take over the world — even when cloaked in advanced mathematics — are of an entirely different nature than, say, impact evaluations from randomized controlled trials. They should not be treated as equivalent.
My reaction is something like this: even if other interpretations of probability are available, it seems at least harmless to form subjective credences about the effectiveness of, say, global health interventions backed by a bunch of RCTs. Where there's lots of empirical evidence, there should be little daylight between your subjective credences and the probabilities that fall straight out of the 'actual data'. In fact, using subjective credences begins to look positively useful when you venture into otherwise comparable but more speculative interventions. That's because whether you want to fund that intervention is going to depend on your best guess about its likely effects and what you might learn from them, and that guess should be sensitive to all kinds of information — a job Bayesian methods were built for. However, if you agree that subjective credences are applicable to innocuous 'short-term' situations with plenty of 'data', then you can imagine gradually pushing the time horizon (or some other source of uncertainty) all the way to questions about the very long-run future. At this extreme, you've said that there's something qualitatively wrong with subjective credences about such murky questions. But I want to say: given that you can join up the two kinds of subject matter by a series of intermediate questions, that there wasn't originally anything wrong with using credences, and that there's no qualitative step-change along the way, why think that the two ends of the scale end up being "of an entirely different nature"? I think this applies to Vaden's point that the maths of taking an expectation over the long-run future is somehow literally unworkable, because you can't have a measure over infinite possibilities (or something). Does that mean we can't take an expectation over what happens next year? The next decade?
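The 'little daylight where there's lots of data' claim can be illustrated with a standard Beta-Binomial update (the priors and the data here are entirely made up, just to show the shape of the argument): two people with quite different subjective priors end up with nearly identical credences once the evidence piles up, while with sparse data the prior still dominates.

```python
# Posterior mean of a Beta(a, b) prior after observing `successes`
# out of `n` binary trials: (a + successes) / (a + b + n).
def posterior_mean(a: float, b: float, successes: int, n: int) -> float:
    return (a + successes) / (a + b + n)

# Plenty of data: two different priors barely matter.
print(posterior_mean(1, 1, 700, 1000))   # uniform prior   -> ~0.700
print(posterior_mean(20, 5, 700, 1000))  # optimistic prior -> ~0.702

# Sparse data: the subjective prior dominates.
print(posterior_mean(1, 1, 7, 10))       # -> ~0.667
print(posterior_mean(20, 5, 7, 10))      # -> ~0.771
```

Nothing qualitative changes as the data thins out — the credences just become more prior-driven and more uncertain, which is the continuity point above.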
I hope that makes sense! Happy to say more.
My last worry is that you're painting an unrealistically grim picture of what strong longtermism practically entails. For starters, you say "[l]ongtermism asks us to ignore problems now", and Hilary and Will say we can "often" ignore short-term effects. Two points here: first, in situations where we can have a large effect on the present / immediate future without risking something comparably bad in the future, it's presumably still just as good to do that thing. Second, it seems reasonable to expect considerable overlap between solving present problems and making the long-run future go best, for obvious reasons. For example, investing in renewables or clean meat R&D just seem robustly good from short-term and long-term perspectives.
I'm interested in the comparison to totalitarian regimes, and it reminded me of something Isaiah Berlin wrote:
[T]o make mankind just and happy and creative and harmonious forever - what could be too high a price to pay for that? To make such an omelette, there is surely no limit to the number of eggs that should be broken[.]
However, my guess is that there are too few similarities for the comparison to be instructive. I would want to say that the totalitarian regimes of the past failed so horrendously not because they used expected utility theory or Bayesian epistemology correctly but inappropriately, but because they were just wrong — wrong that revolutionary violence and totalitarianism make the world remotely better in the short or long term. Also, note that a vein of longtermist thinking discusses reducing the likelihood of a great power conflict, improving institutional decision-making, and spreading good (viz. liberal) political norms in general — in other words, how to secure an open society for our descendants.
Longtermism asks us to ignore problems now, and focus on what we believe will be the biggest problems many generations from now. Abiding by this logic would result in the stagnation of knowledge creation and progress.
Isn't it the case that strong longtermism makes knowledge creation and accelerating progress seem more valuable, if anything? And would the world really generate less knowledge, or progress at a slower rate, if the EA community shifted priorities in a longtermist direction?
Finally, a minor point: my impression is that 'longtermism' is generally taken to mean something a little less controversial than 'strong longtermism'. I appreciate you make the distinction early on, but using the 'longtermism' shorthand seems borderline misleading when some of your arguments only apply to a specific version.
For what it's worth, I'm most convinced by the practical problems with strong longtermism. I especially liked your point about longtermism being less permeable to error correction, and generally I'm curious to know more about reasons for thinking that influencing the long-run future is really tractable. Thanks again for starting this conversation along with Vaden!
Got it. The tricky thing seems to be that sensitivity to stakes is an obvious virtue in some circumstances; and (intuitively) a mistake in others. Not clear to me what marks that difference, though. Note also that maximising expected utility allows for decisions to be dictated by low-credence/likelihood states/events. That's normally intuitively fine, but sometimes leads to 'unfairness' — e.g. St. Petersburg Paradox and Pascal's wager / mugging.
I'm not entirely sure what you're getting at re the envelopes, but that's probably me missing something obvious. To make the analogy clearer: swap out monetary payouts with morally relevant outcomes, such that holding A at the end of the game causes outcome O1 and holding B causes O2. Suppose you're uncertain between T1 and T2. T1 says O1 is morally bad but O2 is permissible, and vice-versa. Instead of paying to switch, you can choose to do something which is slightly wrong on both T1 and T2, but wrong enough that doing it >10 times is worse than O1 and O2 on both theories. Again, it looks like the sortition model is virtually guaranteed to recommend taking a course of action which is far worse than sticking to either envelope on either T1 or T2 — by constantly switching and causing a large number of minor wrongs.
But agreed that we should be uncertain about the best approach to moral uncertainty!