All of ben_chugg's Comments + Replies

Hi Linch! 

We can look at their track record on other questions, and see how reliably (or otherwise) different people's predictions track reality.

I'd rather not rely on the authority of past performance to gauge whether someone's arguments are good. I think we should evaluate the arguments directly. If the arguments are good, they'll stand on their own regardless of someone's prior luck, circumstance, or personality.

In general I'm not a fan of this particular form of epistemic anarchy where people say that they can't know anything with enough precision under uncert

... (read more)
MichaelA · 3y
You or other readers might find this post of mine from last year of interest: Potential downsides of using explicit probabilities. The potential downsides I cover include causing overconfidence, underestimating the value of information, and anchoring, among other things that are less directly related to your point. That said, I ultimately conclude that: Relatedly, I think it's not at all obvious that putting numbers on things, forecasting, etc. would tend to get in the way of "Fostering an environment of criticism and error-correction becomes paramount". (It definitely could get in the way sometimes; it depends on the details.) There are various reasons why putting numbers on things and making forecasts can be actively helpful in fostering such an environment (some of which I discuss in my post).
MichaelA · 3y
[Disclaimer that I haven't actually read your post yet - sorry! - though I may do so soon :)] I agree that we should often/usually evaluate arguments directly. But:
* We have nowhere near enough time to properly evaluate all arguments relevant to our decisions. And in some cases, we also lack the relevant capabilities. So in effect, it's often necessary and/or wise to base certain beliefs mostly on what certain other people seem to believe.
* For example, I don't actually know that much about how climate science works, and my object-level understanding of the arguments for climate change being real, substantial, and anthropogenic is too shallow for me to be confident - on that basis alone - that those conclusions are correct. (I think a clever person could've made false claims about climate science sound similarly believable to me, if they'd been motivated to do so and I'd only looked into it to the extent that I have.)
* The same is more strongly true for people with less education and intellectual curiosity than me.
* But it's good for us to default to being fairly confident that things most relevant scientists agree are true are indeed true.
* The same basic point is even more clearly true when it comes to things like the big bang or the fact that dinosaurs existed and when they did so.
* See also epistemic humility.
* We can both evaluate arguments directly and consider people's track records.
* We could also evaluate the "meta argument" that "people who have been shown to be decent forecasters (or better forecasters than other people are) on relatively short time horizons will also be at least slightly ok forecasters (or at least slightly better forecasters than other people are) on relatively long time horizons".
* Evaluating that argument directly, I think we should land on "This seems more likely to be true than not, though there's still room for uncertainty".
* See also How Feasible Is Long-range Forecasting?, and particularly foot

Personally I think equating strong longtermism with longtermism is not really correct.

 

Agree! While I do have problems with (weak?) longtermism, this post is a criticism of strong longtermism :)

If you are agnostic about that, then you must also be agnostic about the value of GiveWell-type stuff

Why? GiveWell charities have developed theories about the effects of various interventions. The theories have been tested and, typically, found to be relatively robust. Of course, there is always more to know, and there are always ways we could improve the theories.

I don't see how this relates to not being able to develop a statistical estimate of the probability we go extinct tomorrow. (Of course, I can just give you a number and call it "my belief... (read more)

[anonymous] · 3y
The benefits of GiveWell's charities are worked out as health or economic benefits which are realised in the future. e.g. AMF is meant to be good because it allows people who would have otherwise died to live for a few more years. If you are agnostic about whether everyone will go extinct tomorrow, then you must be agnostic about whether people will actually get these extra years of life. 

Agree with almost all of this. This is why it was tricky to argue against, and also why I say (somewhere? podcast maybe?) that I'm not particularly worried about the current instantiation of longtermism, but about what this kind of logic could justify.

I totally agree that most of the existential threats currently tackled by the EA community are real problems (nuclear threats, pandemics, climate change, etc).

I would note that the Greaves and MacAskill paper actually has a section putting forward 'advancing progress' as a plausible longter

... (read more)
JackM · 3y
Maybe (just maybe) we're getting somewhere here. I have no interest in adopting a 'problem/knowledge focused ethic'. That would seem to presuppose the intrinsic value of knowledge. I only think knowledge is instrumentally valuable insofar as it promotes welfare. Instead most EAs want to adopt an ethic that prioritises 'maximising welfare over the long-run'. Longtermism claims that the best way to do so is to actually focus on long-term effects, which may or may not require a focus on near-term knowledge creation - whether it does or not is essentially an empirical question. If it doesn't require it, then a strong longtermist shouldn’t consider a lack of knowledge creation to be a significant drawback.

I think I agree, but there's a lot smuggled into the phrase "perfect information on expected value". So much, in fact, that I'm not sure I can quite follow the thought experiment.

When I think of "perfect information on expected value", my first thought is something like a game of roulette. There's no uncertainty (about what can affect the system), only chance. We understand all the parameters of the system and can write down a model. To say something like this about the future means we would be basically omniscient - we would know wha... (read more)

weeatquince · 3y
Hi Ben. I agree with you. Yes I think roulette is a good analogy. And yes I think the "perfect information on expected value" is a strange claim to make. But I do think it is useful to think about what could be said and justified. I do think a claim along these lines could be made, and it would not be wholly unfalsifiable and it would not require completely preferencing Bayesian expected value calculations.

To give another analogy, I think there is a reasonable long-termist equivalent of statements like: Because of differences in wealth and purchasing power, we expect that a donor in the developed west can have a much bigger impact overseas than in their home country. So in practice looking towards those kinds of international development options is a useful tool to apply when we are deciding what to do. This does not completely exclude the probability that we can have impact locally with donations, but it does direct our searching.

Being charitable to Will+Hilary, maybe that is all they are saying. And maybe it is so confusing because they have dressed it up in philosophical language – but this is because, as per GPI's goals, this paper is about engaging philosophy academics rather than producing any novel insight. (If being more critical, I am not convinced that Will+Hilary successfully give sufficient evidence to make such a claim in this paper; also see my list of things their paper could improve above.)

There are non-measurable sets (unless you discard the axiom of choice, but then you'll run into some significant problems). Indeed, the existence of non-measurable sets is the reason for so much of the measure-theoretic formalism.

If you're not taking a measure-theoretic approach, and instead using propositions (which, I guess, it should be assumed that you are, because this approach grounds Bayesianism), then using infinite sets (which clearly one would have to do if reasoning about all possible futures) leads to paradoxes. As E.T. Jaynes ... (read more)

MichaelStJules · 3y
This depends on the space. It's at least true for real-valued intervals with continuous measures, of course, but I think you're never going to ask for the measure of a non-measurable set in real-world applications, precisely because they require the axiom of choice to construct (at least for the real numbers, and I'd assume, by extension, any subset of any ℝⁿ), and no natural set you'll be interested in that comes up in an application will require the axiom of choice (more than dependent choice) to construct. I don't think the existence of non-measurable sets is viewed as a serious issue for applications.

It is not true in a countable measure space (or, at least, you could always extend the measure to get this to hold), since assuming each singleton ({x}, x ∈ X) is measurable, every union of countably many singletons is measurable, and hence every subset is measurable (A = ∪_{x ∈ A} {x} is a countable union of singletons, for A ⊆ X with X countable). In particular, if you're just interested in the number of future people, assuming there are at most countably infinitely many (so setting aside the many-worlds interpretation of quantum mechanics for now), then your space is just the set of non-negative integers, which is countable. You could group outcomes to represent them with finite sets.

Bayesians get to choose the measure spaces/propositions they're interested in. But again, I don't think dealing with infinite sets is so bad in applications.

What I meant by this was that I think you and Ben both seem to assume that strong longtermists don't want to work on near-term problems. I don't think this is a given (although it is of course fair to say that they're unlikely to only want to work on near-term problems).

Mostly agree here - this was the reason for some of the (perhaps cryptic) paragraphs in the section "The Antithesis of Moral Progress." Longtermism erodes our ability to make progress to whatever extent it has us not working on real problems. And, to the extent that it does have us working ... (read more)

JackM · 3y
I don't necessarily see working on reducing extinction risk as wildly speculating about the far future. In many cases these extinction risks are actually thought to be current risks. The point is that if they happen they necessarily curtail the far future. I would note that the Greaves and MacAskill paper actually has a section putting forward 'advancing progress' as a plausible longtermist intervention! As I have mentioned, this is only insofar as it will make the long-run future go well.

Thanks AGB, this is helpful. 

I agree that longtermism is a core part of the movement, and probably commands a larger share of adherents than I imply. However, I'm not sure to what extent strong longtermism is supported. My sense is that while most people agree with the general thrust of the philosophy, many would be uncomfortable with "ignoring the effects" of the near term, and would remain focused on near-term problems. I didn't want to claim that a majority of EAs supported longtermism broadly-defined, but then only criticize a subset of those views.

I hadn't seen the results of the EA Survey - fascinating. 

lukeprog · 3y
I know I'm late to the discussion, but… I agree with AGB's comment, but I would also like to add that strong longtermism seems like a moral perspective with much less "natural" appeal, and thus much less ultimate growth potential, than neartermist EA causes such as global poverty reduction or even animal welfare.

For example, I'm a Program Officer in the longtermist part of Open Philanthropy, but >80% of my grantmaking dollars go to people who are not longtermists (who are nevertheless doing work I think is helpful for certain longtermist goals). Why? Because there are almost no longtermists anywhere in the world, and even fewer who happen to have the skills and interests that make them a fit for my particular grantmaking remit. Meanwhile, Open Philanthropy makes far more grants in neartermist causes (though this might change in the future), in part because there are tons of people who are excited about doing cost-effective things to help humans and animals who are alive and visibly suffering today, and not so many people who are excited about trying to help hypothetical people living millions of years in the future. Of course to some degree this is because longtermism is fairly new, though I would date it at least as far back as Bostrom's "Astronomical Waste" paper from 2003.

I would also like to note that many people I speak to who identify (like me) as "primarily longtermist" have sympathy (like me) for something like "worldview diversification," given the deep uncertainties involved in the quest to help others as much as possible. So e.g. while I spend most of my own time on longtermism-motivated efforts, I also help out with other EA causes in various ways (e.g. this giant project on animal sentience), and I link to or talk positively about GiveWell top charities a lot, and I mostly avoid eating non-AWA meat, and so on… rather than treating these non-longtermist priorities as a rounding error. Of course some longtermists take a different approach than I do,

Thanks for the engagement! 

I think you're mistaking Bayesian epistemology for Bayesian mathematics. Of course, no one denies Bayes' theorem. The question is: to what should it be applied? Bayesian epistemology holds that rationality consists in updating your beliefs in accordance with Bayes' theorem. As this LW post puts it:

Core tenet 3: We can use the concept of probability to measure our subjective belief in something. Furthermore, we can apply the mathematical laws regarding probability to choosing between different beliefs. If we want our b

... (read more)

I don't think the question makes sense. I agree with Vaden's argument that there's no well-defined measure over all possible futures.

MichaelStJules · 3y
There are definitely well-defined measures on any set (e.g. pick one atomic outcome to have probability 1 and the rest 0); there's just not only one, and picking exactly one would be arbitrary. But the same is true for any set of outcomes with at least two outcomes, including finite ones (or it's at least often arbitrary when there's not enough symmetry for equiprobability). For the question of how many people will exist in the future, you could use a Poisson distribution. That's well-defined, whether or not it's a reasonable distribution to use. Of course, trying to make your space more and more specific will run into feasibility issues.
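The Poisson suggestion above can be sketched in a few lines: a perfectly well-defined probability measure on the countable space {0, 1, 2, ...} of possible future population counts. (The rate parameter below is an arbitrary illustrative choice, not a defended estimate, and the recurrence form is just one convenient way to compute the pmf.)

```python
import math

def poisson_pmf(n: int, lam: float) -> list[float]:
    """P(N = k) for k = 0..n-1 under Poisson(lam), computed via the
    recurrence p(k) = p(k-1) * lam / k (avoids huge factorials)."""
    probs = [math.exp(-lam)]
    for k in range(1, n):
        probs.append(probs[-1] * lam / k)
    return probs

LAM = 50.0  # arbitrary illustrative rate; nothing hangs on this number
pmf = poisson_pmf(500, LAM)

# The measure is well-defined: probabilities over the countable space
# {0, 1, 2, ...} sum to 1 (numerically, to high precision here).
print(sum(pmf))

# Every subset of a countable space is measurable: e.g. the event
# "N is even" is just a countable union of singletons.
print(sum(pmf[::2]))
```

Whether Poisson is a *reasonable* distribution for future population is, as the comment says, a separate question; the point is only that "no well-defined measure exists" and "no uniquely privileged measure exists" are different claims.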
[anonymous] · 3y
Do you for example think there is a more than 50% chance that it is greater than 10 billion?

But we also have to make similar (although less strong) assumptions and have generalization error even with RCTs. Doesn't GiveWell make similar assumptions about the impacts of most of their recommended charities?

 

Yes, we do! And the strength of those assumptions is key. Our skepticism should rise in proportion to the number/feasibility of the assumptions. So you're definitely right, we should always be skeptical of social science research - indeed, any empirical research. We should be looking for hasty generalizations, gaps in the analysi... (read more)

Why are probabilities prior to action - why are they so fundamental? Could Andrew Wiles "rationally put probabilities" on him solving Fermat's Last Theorem? Does this mean he shouldn't have worked on it? Arguments do not have to be in number form. 

Neel Nanda · 3y
To me, the fundamental point isn't probabilities, it's that you need to make a choice about what you do. If I have the option to give a $1mn grant to preventing nuclear war or give the grant to something else, then no matter what I do, I have made a choice. And so, I need to have a decision theory for making a choice here. And to me, subjective probabilities and Bayesian epistemology more generally, are by far the best decision theory I've come across for making choices under uncertainty. If there's a 1% chance of nuclear war, the grant is worth making, if there's a 10^-15 chance of nuclear war, the grant is not worth making. I need to make a decision, and so probabilities are fundamental, because they are my tool for making a decision. And there are a bunch of important question where we don't have data, and there's no reasonable way to get data (eg, nuclear war!). And any approach which rejects the ability to reason under uncertainty in situations like this, is essentially the decision theory of "never make speculative grants like this". And I think this is a clearly terrible decision theory (though I don't think you're actually arguing for this policy?)
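The decision-theoretic point above can be made concrete with a toy expected-value threshold (every number here is invented for illustration; nothing in the thread commits to these values):

```python
# Toy EV comparison for a speculative grant (illustrative numbers only).

GRANT_COST = 1_000_000      # the $1mn grant from the example
VALUE_IF_AVERTED = 10**12   # assumed dollar value of averting nuclear war
RISK_REDUCTION = 0.001      # assumed fraction of the risk the grant removes

def grant_is_worth_making(p_war: float) -> bool:
    """Make the grant iff the expected value averted exceeds its cost."""
    expected_benefit = p_war * RISK_REDUCTION * VALUE_IF_AVERTED
    return expected_benefit > GRANT_COST

print(grant_is_worth_making(0.01))   # at a 1% chance, EV clears the cost
print(grant_is_worth_making(1e-15))  # at 10^-15, it does not
```

This is exactly why the probability estimate does work in the Bayesian framing: the same action flips from worthwhile to not worthwhile purely as the subjective probability moves.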
[anonymous] · 3y

If you refuse to claim that the chance of nuclear war up to 2100 is greater than 0.000000000001%, then I don't see how you could make a good case to work on it over some other possible intuitively trivial action, such as painting my wall blue. What would the argument be if you are completely agnostic as to whether it is a serious risk?

Sure - Nukes exist. They've been deployed before, and we know they have incredible destructive power. We know that many countries have them, and have threatened to use them. We know the protocols are in place for their use.

To me this seems like you're making a rough model with a bunch of assumptions - like that past use, threats, and protocols increase the risks - but not saying by how much or putting confidences or estimates on anything (even ranges). Why not think the risks are too low to matter despite past use, threats and protocols?

Hi Michael! 

It seems like you're acting as if you're confident that the number of people in the future is not huge, or that the interventions are otherwise not so impactful (or they do more harm than good), but I'm not sure you actually believe this. Do you? 

I have no idea about the number of future people. And I think this is the only defensible position. Which interventions do you mean? My argument is that longtermism enables reasoning that de-prioritizes current problems in favour of possible, highly uncertain, future problems. Focusing on such ... (read more)

You refuse to commit to a belief about x, but commit to one about y and that's inconsistent.

I would rephrase as "You say you refuse to commit to a belief about x, but seem to act as if you've committed to a belief about x". Specifically, you say you have no idea about the number of future people, but it seems like you're saying we should act as if we believe it's not huge (in expectation). The argument for strong longtermism you're trying to undermine (assuming we get the chance of success and sign roughly accurate, which to me is more doubtful) goes throu... (read more)

Oh interesting. Did you read my critique as saying that the philosophy is wrong? (Not sarcastic; serious question.) I don't really even know what "wrong" would mean here, honestly. I think the reasoning is flawed and, if taken seriously, leads to bad consequences.

Owen Cotton-Barratt · 3y
I read your second critique as implicitly saying "there must be a mistake in the argument", whereas I'd have preferred it to say "the things that might be thought to follow from this argument are wrong (which could mean a mistake in the argument that's been laid out, or in how its consequences are being interpreted)".

Yeah, I suppose I would still be skeptical of using ranges in the absence of data (you could just apply all my objections to the upper and lower bounds of the range). But I'm definitely all for sensitivity analysis when there are data backing up the estimates!

I have read about (complex) cluelessness. I have a lot of respect for Hilary Greaves, but I don't think cluelessness is a particularly illuminating concept. I view it as a variant of "we can't predict the future." So, naturally, if you ground your ethics in expected value calculations over the long-term future then, well, there are going to be problems.

I would propose to resolve cluelessness as follows: Let's admit we can't predict the future. Our focus should instead be on error-correction. Our actions will have consequences - both intended and unintend... (read more)

JackM · 3y
I do think it's far more illuminating than "we can't predict the future". Really, complex cluelessness is saying: OK, great, you've carried out a CBA/CEA, but you've omitted/ignored effects from the analysis that we:

1. Have good reason to expect will occur
2. Have good reason to suspect are sufficiently important such that they could change the sign of your final number if properly included in your analysis

If the above factors are in fact true in the case of GiveWell (I think they probably are) then I don't think GiveWell CBAs are all that useful, and the original point you were trying to make - that GiveWell analysis is obviously superior because it makes use of data - sort of breaks down because, quite simply, the data has a massive, gaping hole in it. This is not to criticise GiveWell in the slightest; it's just to acknowledge the monstrous task they're up against.

Correct me if I'm wrong, but what you seem to be arguing is that we're actually complexly clueless about everything, so we may as well just ignore the problem. I actually don't think this is true - we may be clueless about everything but not necessarily in a complex way.

Consider the promotion of philosophy in schools, a class of interventions that I have written about. I'm not sure if these are definitely the best interventions (reception to my post was fairly lukewarm), but I also don't think we are complexly clueless about their effects in the same way that we are about the effects of distributing bednets. This is because it's just quite hard to think up reasons why it might be bad to promote philosophy in schools. Sure, it could be the case that promoting philosophy in schools makes something bad happen, but I don't really have much of a reason to entertain that possibility if I can't think of a specific effect that fulfils the two factors I listed above. In the case of distributing bednets we are pretty certain there will be population effects, we are pretty certain these will be very significant

Hey Fin! Nice - lots here. I'll respond to what I can. If I miss anything crucial just yell at me :) (BTW, also enjoying your podcast. Maybe we should have a podcast battle at some point ... you can defend longtermism's honour).

In any case: declaring that BE "has been refuted" seems unfairly rash.

Yep, this is fair. I'm imagining myself in the position of some random stranger outside of a fancy EA-gala, and trying to get people's attention. So yes - the language might be a little strong (although I do really think Bayesianism doesn't stand up t... (read more)

finm · 3y
Thanks for replying Ben, good stuff! Few thoughts.

I'll concede that point! I think a better response than the one I originally gave is to point out that the case for strong longtermism relies on establishing a sensible lower(ish) bound for total future population. Greaves and MacAskill want to convince you that (say) at least a quadrillion lives could plausibly lie in the future. I'm curious if you have an issue with that weaker claim?

I think your point about space exploration is absolutely right, and more than a nitpick. I would say two things: one is that I can imagine a world in which we could be confident that we would never colonise the stars (e.g. if the earth were more massive and we had 5 decades before the sun scorched us or something). Second, voicing support for the 'anything permitted by physics can become practically possible' camp indirectly supports an expectation of a large number of future lives, no?

Hmm — to my lights Greaves and MacAskill are fairly clear about the differences between the two kinds of estimate. If your reply is that doing any kind of (toy) EV calculation with both estimates just implies that they're somehow "equally as capable of capturing something about reality", then it feels like you're begging the question.

I don't understand what you mean here, which is partly my fault for being unclear in my original comment. Here's what I had in mind: suppose you've run a small-scale experiment and collected your data. You can generate a bunch of statistical scores indicating e.g. the effect size, plus the chance of getting the results you got assuming the null hypothesis was true (p-value). Crucially (and unsurprisingly) none of those scores directly give you the likelihood of an effect (or the 'true' anything else). If you have reason to expect a bias in the direction of positive results (e.g. publication bias), then your guess about how likely it is that you've picked up on a real effect may in fact be very different from any sta
JackM · 3y
I don't think this is true. Whenever Greaves and MacAskill carry out a longtermist EV calculation in the paper it seems clear to me that their aim is to illustrate a point rather than calculate a reliable EV of a longtermist intervention. Their world government EV calculation starts with the words "suppose that...". They also go on to say: This is the point they are trying to get across by doing the EV calculations.

I'm tempted to just concede this because we're very close to agreement here. 

For example we need to wrestle with problems we face today to give us good enough feedback loops to make substantial progress, but by taking the long-term perspective we can improve our judgement about which of the nearer-term problems should be highest-priority.

If this turns out to be true (i.e., people end up working on actual problems and not, say, defunding the AMF to worry about "AI controlled police and armies"), then I have much less of a problem with longtermism... (read more)

Owen Cotton-Barratt · 3y
I think this might be a case of the-devil-is-in-the-details. I'm in favour of people scanning the horizon for major problems whose negative impacts are not yet being felt, and letting that have some significant impact on which nearer-term problems they wrestle with. I think that a large proportion of things that longtermists are working on are problems that are at least partially or potentially within our foresight horizons. It sounds like maybe you think there is current work happening which is foreseeably of little value: if so I think it could be productive to debate the details of that.

Well, far be it from me to tell others how to spend their time, but I guess it depends on what the goal is. If the goal is to literally put a precise number (or range) on the probability of nuclear war before 2100, then yes, I think that's a fruitless and impossible endeavour. History is not an iid sequence of events. If there is such a war, it will be the result of complex geopolitical factors based on human beliefs, desires, and knowledge at the time. We cannot pretend to know what these will be. Even if you were to gather all the available ev... (read more)

[anonymous] · 3y

You say that "there are good arguments for working on the threat of nuclear war". As I understand your argument, you also say we cannot rationally distinguish between the claim "the chance of nuclear war in the next 100 years is 0.00000001%" and the claim "the chance of nuclear war in the next 100 years is 1%". If you can't rationally put probabilities on the risk of nuclear war, why would you work on it?

MichaelStJules · 3y
But we also have to make similar (although less strong) assumptions and have generalization error even with RCTs. Doesn't GiveWell make similar assumptions about the impacts of most of their recommended charities? As far as I know, there are recent studies of GiveDirectly's effects, but the "recent" studies of the effects of the interventions of the other charities have probably had their samples chosen years ago, so their effects might not generalize to new locations. Where's the cutoff for your skepticism? Should we boycott the GiveWell-recommended charities whose ongoing intervention impacts of terminal value  (lives saved, quality of life improvements) are not being measured rigorously in their new target areas, in favour of GiveDirectly? To illustrate the issue of generalization, GiveWell did a pretty arbitrary adjustment for El Niño for deworming, although I think this is the most suspect assumption I've seen them make. See Eva Vivalt's research on generalization (in the Causal Inference section) or her talk here.
MichaelStJules · 3y
Can you give some examples? I expect that someone could respond "That could be too unlikely to matter enough" to each of them, since we won't have good enough data.

Hi Owen! 

Re: inoculation of criticism. Agreed that it doesn't make criticism impossible in every sense (otherwise my post wouldn't exist). But if one reasons with numbers only (i.e., EV reasoning), then longtermism becomes unavoidable. As soon as one adopts what I'm calling "Bayesian epistemology", then there's very little room to argue with it. One can retort: Well, yes, but there's very little room to argue with General Relativity, and that is a strength of the theory, not a weakness. But the difference is that GR is very precise: It's h... (read more)

Cool. I do think that when I try to translate your position into the ontology used by Greaves+MacAskill, it sounds less like "longtermism is wrong" and more like "maybe longtermism is technically correct; who cares?; the practical advice people are hearing sucks".

I think that's a pretty interestingly different objection and if it's what you actually want to say it could be important to make sure that people don't hear it as "longtermism is wrong" (because that could lead them to looking at the wrong type of thing to try to refute you).

Hi Jack, 

I think you're right, the comparison to astrology isn't entirely fair. But sometimes one has to stretch a little bit to make a point. And the point, I think, is important. Namely, that these estimates can be manipulated and changed all too easily to fit a narrative. Why not half a quadrillion, or 10 quadrillion people in the future?

On the falsifiability point - I agree that the claims are technically falsifiable. I struggled with the language for this reason while writing it (and Max Heitmann helpfully tried to make th... (read more)

JackM · 3y
Yes if you're happy to let your calculations be driven by very small probabilities of enormous value I suppose you're right that the great filter would never be conclusive. Whether or not it is reasonable to allow this is an open question in decision theory and I don't think it's something that all longtermists accept. The authors themselves don't appear to be all that comfortable with accepting it: This implies if they think a credence is miniscule or a long-lasting influence negligible that they might throw away the calculation.
JackM · 3y
The fact that they can be manipulated and changed doesn't strike me as much of a criticism. The more relevant question is whether people actually do manipulate and change the estimates to fit their narrative. If they do we should call out these particular people, but even in this case I don't think it would be an argument against longtermism generally, just against the particular arguments these 'manipulators' would put forward.

The authors do at least set out their assumptions for the one quadrillion, which they call their conservative estimate. For example, one input into the figure is an estimate that earth will likely be habitable for another 1 billion years, which is cited from another academic text. Now I'm not saying that their one quadrillion estimate is brilliantly thought through (I'm not saying it isn't either), I'm just countering a claim I think you're making: that Greaves and MacAskill would likely add zeros or inflate this number if required to protect strong longtermism, e.g. to maintain that their conservative longtermist EV calculation continues to beat GiveWell's cost-effectiveness calculation for AMF. I don't see evidence to suggest they would and I personally don't think they would manipulate in such a way. That's not to say that the one quadrillion figure may not change, but I would hope and would expect this to be for a better reason than "to save longtermism".

To sum up, I don't think your "amenable to drastic change" point is particularly relevant. What I do think is more relevant is that the one quadrillion estimate is slightly arbitrary, and I see this as a subtly different point. I may address this in a different comment.

As a major aside - there's a little joke Vaden and I tell on the podcast sometimes when talking about Bayesianism vs Critical Rationalism (an alternative philosophy first developed by Karl Popper). The joke is most certainly a strawman of Bayesianism, but I think it gets the point across.

Bob and Alice are at the bar, being served by Carol. Bob is trying to estimate whether Carol has children. He starts with a prior of 1/2. He then looks up the base rate of adults with children, and updates on that. Then he updates based on Carol's age. And wha... (read more)

7
james
3y
Thanks for the reply and taking the time to explain your view to me :) I'm curious: my friend has been trying to estimate the likelihood of nuclear war before 2100. It seems like this is a question that is hard to get data on, or to run tests on. I'd be interested to know what you'd recommend they do. Is there a way I can tell them to approach the question such that it relies less on 'subjective estimates' and more on 'estimates derived from actual data'? Or is it that you think they should drop the research question and do something else with their time, since any approach to the question would rely on subjective probability estimates that are basically useless?

Hey James! 

Answering this in its entirety would take a few more essays, but my short answer is: When there are no data available, I think subjective probability estimates are basically useless, and do not help in generating knowledge. 

I emphasize the condition when there are no data available because data is what allows us to discriminate between different models. And when data is available, well, estimates become less subjective.

Now, I should say that I don't really care what's "reasonable" for someone to do - I definitely don't want... (read more)

3
ben_chugg
3y
As a major aside - there's a little joke Vaden and I tell on the podcast sometimes when talking about Bayesianism vs Critical Rationalism (an alternative philosophy first developed by Karl Popper). The joke is most certainly a strawman of Bayesianism, but I think it gets the point across.

Bob and Alice are at the bar, being served by Carol. Bob is trying to estimate whether Carol has children. He starts with a prior of 1/2. He then looks up the base rate of adults with children, and updates on that. Then he updates based on Carol's age. And what car she drives. And the fact that she's married. And so on. He pulls out a napkin, does some complex math, and arrives at the following conclusion: it's 64.745% likely that Carol has children. Bob is proud of his achievement and shows the napkin to Alice.

Alice leans over the bar and asks, "Hey Carol - do you have kids?"

Now, obviously this is not how the Bayesian acts in real life. But it demonstrates the care the Bayesian takes in having correct beliefs; about having the optimal brain state. I think this is the wrong target. Instead, we should be seeking to falsify as many conjectures as possible, regardless of where the conjectures came from. I don't care what Alice thought the probability was before she asked the question, only about the result of the test.
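For readers curious about the mechanics the joke is poking fun at, here is a minimal sketch of the kind of calculation Bob is doing: sequential Bayesian updating in odds form, where each piece of evidence multiplies the prior odds by a likelihood ratio. The prior and all the likelihood ratios below are made up purely for illustration; they aren't taken from any real base rates.

```python
def update_odds(prior_odds, likelihood_ratios):
    """Multiply prior odds by each likelihood ratio in turn.

    Each ratio is P(evidence | has kids) / P(evidence | no kids).
    """
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds

def odds_to_prob(odds):
    """Convert odds back to a probability."""
    return odds / (1 + odds)

# Bob starts at even odds (prior probability 1/2), then "updates" on the
# base rate, Carol's age, her car, and her marital status -- each a
# hypothetical likelihood ratio chosen only for illustration.
posterior_odds = update_odds(1.0, [1.5, 1.2, 0.9, 1.1])
print(round(odds_to_prob(posterior_odds), 3))  # → 0.641
```

Alice's move, of course, is to skip all of this and run the test directly.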

Hi Elliott, just a few side comments from someone sympathetic to Vaden's critique: 

I largely agree with your take on time preference. One thing I'd like to emphasize is that thought experiments used to justify a zero discount factor are typically conditional on knowing that future people will exist, and what the consequences will be. This is useful for sorting out our values, but less so when it comes to action, because we never have such guarantees. I think there's often a move made where people say "in theory we should have a zero discount factor, s... (read more)

2
EJT
3y
Thanks! Your point about time preference is an important one, and I think you're right that people sometimes make too quick an inference from a zero rate of pure time preference to a future-focus, without properly heeding just how difficult it is to predict the long-term consequences of our actions. But in my experience, longtermists are very aware of the difficulty. They recognise that the long-term consequences of almost all of our actions are so difficult to predict that their expected long-term value is roughly 0. Nevertheless, they think that the long-term consequences of some very small subset of actions are predictable enough to justify undertaking those actions.

On the dice example, you say that the infinite set of things that could happen while the die is in the air is not the outcome space about which we're concerned. But can't the longtermist make the same response? Imagine they said: 'For the purpose of calculating a lower bound on the expected value of reducing x-risk, the infinite set of futures is not the outcome space about which we're concerned. The outcome space about which we're concerned consists of the following two outcomes: (1) humanity goes extinct before 2100, (2) humanity does not go extinct before 2100.'

And, in any case, it seems like Vaden's point about future expectations being undefined still proves too much. Consider instead the following two hypotheses and suppose you have to bet on one of them: (1) the human population will be at least 8 billion next year, (2) the human population will be at least 7 billion next year. If the probabilities of both hypotheses are undefined, then it would seem permissible to bet on either. But clearly you ought to bet on (2). So it seems like these probabilities are not undefined after all.

Hi Owen! Really appreciate you engaging with this post. (In the interest of full disclosure, I should say that I'm the Ben acknowledged in the piece, and I'm in no way unbiased. Also, unrelatedly, your story of switching from pure maths to EA-related areas has had a big influence over my current trajectory, so thank you for that :) ) 

I'm confused about the claim 

I don't think they're saying (and I certainly don't think) that we can ignore the effects of our actions over the next century; rather I think those effects matter much more for their ins

... (read more)
4
axioman
3y
"The "immeasurability" of the future that Vaden has highlighted has nothing to do with the literal finiteness of the timeline of the universe. It has to do, rather, with the set of all possible futures (which is provably infinite). This set is immeasurable in the mathematical sense of lacking sufficient structure to be operated upon with a well-defined probability measure. " This claim seems confused, as every nonempty set allows for the definition of a probability measure on it  and measures on function spaces exist ( https://en.wikipedia.org/wiki/Dirac_measure , https://encyclopediaofmath.org/wiki/Wiener_measure ). To obtain non-existence, further properties of the measure such as translation-invariance need to be required (https://aalexan3.math.ncsu.edu/articles/infdim_meas.pdf) and it is not obvious to me that we would necessarily require such properties. 

The "immeasurability" of the future that Vaden has highlighted has nothing to do with the literal finiteness of the timeline of the universe. It has to do, rather, with the set of all possible futures (which is provably infinite). This set is immeasurable in the mathematical sense of lacking sufficient structure to be operated upon with a well-defined probability measure. Let me turn the question around on you: Suppose we knew that the time-horizon of the universe was finite, can you write out the sample space, $\sigma$-algebra, and measure which allows us

... (read more)
4
djbinder
3y
It is certainly not obvious that the universe is infinite in the sense that you suggest. Certainly nothing is "provably infinite" with our current knowledge. Furthermore, although we may not be certain about the properties of our own universe, we can easily imagine worlds rich enough to contain moral agents yet which remain completely finite. For instance, you could imagine a cellular automaton with a finite grid size which only lasted for a finite duration.

However, perhaps the more important consideration is the in-principle set of possible futures that we must consider when doing EV calculations, rather than the universe we actually inhabit, since even if our universe is finite we would never be able to convince ourselves of this with certainty. Is it this set of possible futures that you think suffers from "immeasurability"?

Anyway, I'm a huge fan of 95% of EA's work, but really think it has gone down the wrong path with longtermism. Sorry for the sass -- much love to all :) 

It's all good! Seriously, I really appreciate the engagement from you and Vaden: it's obvious that you both care a lot and are offering the criticism precisely because of that. I currently think you're mistaken about some of the substance, but this kind of dialogue is the type of thing which can help to keep EA intellectually healthy.

I'm confused about the claim 

>I don't think they're saying (

... (read more)