Personally I think equating strong longtermism with longtermism is not really correct.
Agree! While I do have problems with (weak?) longtermism, this post is a criticism of strong longtermism :)
If you are agnostic about that, then you must also be agnostic about the value of GiveWell-type stuff
Why? GiveWell charities have developed theories about the effects of various interventions. The theories have been tested and, typically, found to be relatively robust. Of course, there is always more to know, and always ways we could improve the theories.
I don't see how this relates to not being able to develop a statistical estimate of the probability we go extinct tomorrow. (Of course, I can just give you a number and call it "my belief...
Agree with almost all of this. This is why it was tricky to argue against, and also why I say (somewhere? podcast maybe?) that I'm not particularly worried about the current instantiation of longtermism, but what this kind of logic could justify.
I totally agree that most of the existential threats currently tackled by the EA community are real problems (nuclear threats, pandemics, climate change, etc.).
...I would note that the Greaves and MacAskill paper actually has a section putting forward 'advancing progress' as a plausible longtermist...
I think I agree, but there's a lot smuggled into the phrase "perfect information on expected value". So much in fact that I'm not sure I can quite follow the thought experiment.
When I think of "perfect information on expected value", my first thought is something like a game of roulette. There's no uncertainty (about what can affect the system), only chance. We understand all the parameters of the system and can write down a model. To say something like this about the future means we would be basically omniscient - we would know wha...
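To make the roulette example concrete, here's the standard calculation for a single-number bet in European roulette (37 pockets, 35-to-1 payout):

$$\mathbb{E}[\text{payoff}] = 35 \cdot \frac{1}{37} - 1 \cdot \frac{36}{37} = -\frac{1}{37} \approx -0.027.$$

Every quantity in that formula is known exactly - that's what it means to have a fully specified model. Nothing remotely analogous is available for the set of all possible futures.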
There are non-measurable sets (unless you discard the axiom of choice, but then you'll run into some significant problems). Indeed, the existence of non-measurable sets is the reason for so much of the measure-theoretic formalism.
If you're not taking a measure-theoretic approach, and are instead using propositions (which I assume you are, since that approach grounds Bayesianism), then using infinite sets (which one clearly has to do when reasoning about all possible futures) leads to paradoxes. As E.T. Jaynes ...
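To give one concrete version of the infinite-set problem (a standard example, in the spirit of Jaynes' warning): there is no uniform probability distribution over a countably infinite set. If every outcome $n \in \mathbb{N}$ received the same probability $c$, countable additivity would force

$$1 = P(\mathbb{N}) = \sum_{n=1}^{\infty} c = \begin{cases} 0, & \text{if } c = 0 \\ \infty, & \text{if } c > 0, \end{cases}$$

a contradiction either way. So "a uniform prior over all possible futures" isn't even well defined once the set of futures is infinite.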
What I meant by this was that I think you and Ben both seem to assume that strong longtermists don't want to work on near-term problems. I don't think this is a given (although it is of course fair to say that they're unlikely to want to work only on near-term problems).
Mostly agree here - this was the reason for some of the (perhaps cryptic) paragraphs in the Section "the Antithesis of Moral Progress." Longtermism erodes our ability to make progress to whatever extent it has us not working on real problems. And, to the extent that it does have us working ...
Thanks AGB, this is helpful.
I agree that longtermism is a core part of the movement, and probably commands a larger share of adherents than I imply. However, I'm not sure to what extent strong longtermism is supported. My sense is that while most people agree with the general thrust of the philosophy, many would be uncomfortable with "ignoring the effects" of the near term, and remain focused on near-term problems. I didn't want to claim that a majority of EAs supported longtermism broadly defined, but then only criticize a subset of those views.
I hadn't seen the results of the EA Survey - fascinating.
Thanks for the engagement!
I think you're conflating Bayesian epistemology with Bayesian mathematics. Of course, no one denies Bayes' theorem. The question is: to what should it be applied? Bayesian epistemology holds that rationality consists in updating your beliefs in accordance with Bayes' theorem. As this LW post puts it:
...Core tenet 3: We can use the concept of probability to measure our subjective belief in something. Furthermore, we can apply the mathematical laws regarding probability to choosing between different beliefs. If we want our b...
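For reference, the update rule the tenet refers to is just Bayes' theorem,

$$P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)},$$

which is mathematically uncontroversial. The epistemological tenet is the further claim that all rational belief revision must take this form, for any hypothesis $H$ whatsoever - and that is what I'm disputing.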
I don't think the question makes sense. I agree with Vaden's argument that there's no well-defined measure over all possible futures.
But we also have to make similar (although less strong) assumptions and have generalization error even with RCTs. Doesn't GiveWell make similar assumptions about the impacts of most of their recommended charities?
Yes, we do! And the strength of those assumptions is key. Our skepticism should rise in proportion to the number and strength of the assumptions. So you're definitely right, we should always be skeptical of social science research - indeed, any empirical research. We should be looking for hasty generalizations, gaps in the analysi...
Why are probabilities prior to action - why are they so fundamental? Could Andrew Wiles "rationally put probabilities" on him solving Fermat's Last Theorem? Does this mean he shouldn't have worked on it? Arguments do not have to be in number form.
If you refuse to claim that the chance of nuclear war up to 2100 is greater than 0.000000000001%, then I don't see how you could make a good case to work on it over some other possible intuitively trivial action, such as painting my wall blue. What would the argument be if you are completely agnostic as to whether it is a serious risk?
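To spell out the expected-value comparison I have in mind (a rough formalization; $p$ is the probability of nuclear war by 2100 and $V$ is the value of reducing that risk):

$$\mathbb{E}[\text{work on nuclear risk}] \approx p \cdot V, \qquad \mathbb{E}[\text{paint the wall blue}] = v_{\text{wall}}.$$

If you won't commit to any lower bound on $p$, then nothing rules out $p \cdot V < v_{\text{wall}}$, and the ranking between the two actions is undetermined.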
Sure - nukes exist. They've been deployed before, and we know they have incredible destructive power. We know that many countries have them and have threatened to use them. We know that protocols are in place for their use.
To me this seems like you're making a rough model with a bunch of assumptions - e.g., that past use, threats, and protocols increase the risk - but without saying by how much, or putting confidences or estimates (even ranges) on anything. Why not think the risks are too low to matter despite past use, threats, and protocols?
Hi Michael!
It seems like you're acting as if you're confident that the number of people in the future is not huge, or that the interventions are otherwise not so impactful (or they do more harm than good), but I'm not sure you actually believe this. Do you?
I have no idea about the number of future people. And I think this is the only defensible position. Which interventions do you mean? My argument is that longtermism enables reasoning that de-prioritizes current problems in favour of possible, highly uncertain, future problems. Focusing on such ...
You refuse to commit to a belief about x, but commit to one about y and that's inconsistent.
I would rephrase as "You say you refuse to commit to a belief about x, but seem to act as if you've committed to a belief about x". Specifically, you say you have no idea about the number of future people, but it seems like you're saying we should act as if we believe it's not huge (in expectation). The argument for strong longtermism you're trying to undermine (assuming we get the chance of success and sign roughly accurate, which to me is more doubtful) goes throu...
Oh interesting. Did you read my critique as saying that the philosophy is wrong? (Not sarcastic; serious question.) I don't really even know what "wrong" would mean here, honestly. I think the reasoning is flawed and if taken seriously leads to bad consequences.
Yeah, I suppose I would still be skeptical of using ranges in the absence of data (you could just apply all my objections to the upper and lower bounds of the range). But I'm definitely all for sensitivity analysis when there are data backing up the estimates!
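To illustrate the worry, here's a minimal sketch (entirely hypothetical numbers) of what propagating a range through a simple expected-value model looks like:

```python
# Minimal sketch of interval-based sensitivity analysis (hypothetical numbers).
# The objections to a point estimate apply equally to each endpoint of a range.

def ev_range(p_low, p_high, value_low, value_high):
    """Propagate interval bounds through a simple expected-value model."""
    # With non-negative values, EV is monotone in both inputs, so the
    # extremes of the output come from the extremes of the inputs.
    return p_low * value_low, p_high * value_high

# Hypothetical inputs: probability of some far-future outcome and its value.
low, high = ev_range(p_low=1e-12, p_high=1e-2, value_low=1e9, value_high=1e15)
print(f"EV range: [{low:.3g}, {high:.3g}]")  # spans ~16 orders of magnitude
```

When the endpoints aren't pinned down by data, the output interval spans so many orders of magnitude that it can't guide action - the objection just gets applied twice.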
I have read about (complex) cluelessness. I have a lot of respect for Hilary Greaves, but I don't think cluelessness is a particularly illuminating concept. I view it as a variant of "we can't predict the future." So, naturally, if you ground your ethics in expected value calculations over the long-term future then, well, there are going to be problems.
I would propose to resolve cluelessness as follows: Let's admit we can't predict the future. Our focus should instead be on error-correction. Our actions will have consequences - both intended and unintend...
Hey Fin! Nice - lots here. I'll respond to what I can. If I miss anything crucial just yell at me :) (BTW, also enjoying your podcast. Maybe we should have a podcast battle at some point ... you can defend longtermism's honour).
In any case: declaring that BE "has been refuted" seems unfairly rash.
Yep, this is fair. I'm imagining myself in the position of some random stranger outside of a fancy EA-gala, and trying to get people's attention. So yes - the language might be a little strong (although I do really think Bayesianism doesn't stand up t...
I'm tempted to just concede this because we're very close to agreement here.
For example, we need to wrestle with the problems we face today to give ourselves good enough feedback loops to make substantial progress, but by taking the long-term perspective we can improve our judgement about which of the nearer-term problems should be highest priority.
If this turns out to be true (i.e., people end up working on actual problems and not, say, defunding the AMF to worry about "AI controlled police and armies"), then I have much less of a problem with longtermism...
Well, far be it from me to tell others how to spend their time, but I guess it depends on what the goal is. If the goal is to literally put a precise number (or range) on the probability of nuclear war before 2100, then yes, I think that's a fruitless and impossible endeavour. History is not an iid sequence of events. If there is such a war, it will be the result of complex geopolitical factors based on human beliefs, desires, and knowledge at the time. We cannot pretend to know what these will be. Even if you were to gather all the available ev...
You say that "there are good arguments for working on the threat of nuclear war". As I understand your argument, you also say we cannot rationally distinguish between the claim "the chance of nuclear war in the next 100 years is 0.00000001%" and the claim "the chance of nuclear war in the next 100 years is 1%". If you can't rationally put probabilities on the risk of nuclear war, why would you work on it?
Hi Owen!
Re: inoculation against criticism. Agreed that it doesn't make criticism impossible in every sense (otherwise my post wouldn't exist). But if one reasons with numbers only (i.e., EV reasoning), then longtermism becomes unavoidable. As soon as one adopts what I'm calling "Bayesian epistemology", there's very little room to argue with it. One can retort: Well, yes, but there's very little room to argue with General Relativity, and that is a strength of the theory, not a weakness. But the difference is that GR is very precise: It's h...
Cool. I do think that, in trying to translate your position into the ontology used by Greaves+MacAskill, it's sounding less like "longtermism is wrong" and more like "maybe longtermism is technically correct; who cares?; the practical advice people are hearing sucks".
I think that's a pretty interestingly different objection and if it's what you actually want to say it could be important to make sure that people don't hear it as "longtermism is wrong" (because that could lead them to looking at the wrong type of thing to try to refute you).
Hi Jack,
I think you're right, the comparison to astrology isn't entirely fair. But sometimes one has to stretch a little bit to make a point. And the point, I think, is important. Namely, that these estimates can be manipulated and changed all too easily to fit a narrative. Why not half a quadrillion, or 10 quadrillion people in the future?
On the falsifiability point - I agree that the claims are technically falsifiable. I struggled with the language for this reason while writing it (and Max Heitmann helpfully tried to make th...
As a major aside - there's a little joke Vaden and I tell on the podcast sometimes when talking about Bayesianism vs Critical Rationalism (an alternative philosophy first developed by Karl Popper). The joke is most certainly a strawman of Bayesianism, but I think it gets the point across.
Bob and Alice are at the bar, being served by Carol. Bob is trying to estimate whether Carol has children. He starts with a prior of 1/2. He then looks up the base rate of adults with children, and updates on that. Then he updates based on Carol's age. And wha...
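To make the mechanics of the joke concrete, here's a minimal sketch of Bob's procedure (with toy likelihood ratios of my own invention):

```python
# Toy sketch of Bob's sequential Bayesian updating (illustrative numbers only).
# Each piece of evidence multiplies his current odds by a likelihood ratio:
#   odds(H | E) = odds(H) * P(E | H) / P(E | not-H)

prior_odds = 1.0  # a prior of 1/2 means even odds

evidence = [
    ("base rate of adults with children", 1.5),
    ("Carol's age", 2.0),
]

odds = prior_odds
for name, likelihood_ratio in evidence:
    odds *= likelihood_ratio
    prob = odds / (1 + odds)
    print(f"after updating on {name}: P(children) = {prob:.2f}")

# Note what never happens anywhere in this loop: asking Carol.
```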
Hey James!
Answering this in its entirety would take a few more essays, but my short answer is: When there are no data available, I think subjective probability estimates are basically useless, and do not help in generating knowledge.
I emphasize the condition when there are no data available because data is what allows us to discriminate between different models. And when data is available, well, estimates become less subjective.
Now, I should say that I don't really care what's "reasonable" for someone to do - I definitely don't want...
Hi Elliott, just a few side comments from someone sympathetic to Vaden's critique:
I largely agree with your take on time preference. One thing I'd like to emphasize is that thought experiments used to justify a zero discount factor are typically conditional on knowing that future people will exist, and what the consequences will be. This is useful for sorting out our values, but less so when it comes to action, because we never have such guarantees. I think there's often a move made where people say "in theory we should have a zero discount factor, s...
Hi Owen! Really appreciate you engaging with this post. (In the interest of full disclosure, I should say that I'm the Ben acknowledged in the piece, and I'm in no way unbiased. Also, unrelatedly, your story of switching from pure maths to EA-related areas has had a big influence over my current trajectory, so thank you for that :) )
I'm confused about the claim
...I don't think they're saying (and I certainly don't think) that we can ignore the effects of our actions over the next century; rather I think those effects matter much more for their ins...
...The "immeasurability" of the future that Vaden has highlighted has nothing to do with the literal finiteness of the timeline of the universe. It has to do, rather, with the set of all possible futures (which is provably infinite). This set is immeasurable in the mathematical sense of lacking sufficient structure to be operated upon with a well-defined probability measure. Let me turn the question around on you: Suppose we knew that the time-horizon of the universe was finite, can you write out the sample space, $\sigma$-algebra, and measure which allows us
Anyway, I'm a huge fan of 95% of EA's work, but really think it has gone down the wrong path with longtermism. Sorry for the sass -- much love to all :)
It's all good! Seriously, I really appreciate the engagement from you and Vaden: it's obvious that you both care a lot and are offering the criticism precisely because of that. I currently think you're mistaken about some of the substance, but this kind of dialogue is the type of thing which can help to keep EA intellectually healthy.
Hi Linch!
I'd rather not rely on the authority of past performance to gauge whether someone's arguments are good. I think we should evaluate the arguments directly. If the arguments are good, they'll stand on their own, regardless of someone's prior luck/circumstance/personality.