
ben_chugg

136 karma · Joined Dec 2020

Comments (26)

Hi Linch! 

We can look at their track record on other questions, and see how reliably (or otherwise) different people's predictions track reality.

I'd rather not rely on the authority of past performance to gauge whether someone's arguments are good. I think we should evaluate the arguments directly. If they're good, they'll stand on their own regardless of someone's prior luck/circumstance/personality.

In general I'm not a fan of this particular form of epistemic anarchy where people say that they can't know anything with enough precision under uncertainty to give numbers, and then act as if their verbal non-numeric intuitions are enough to carry them through consistently making accurate decisions. 

I would actually argue that it's the opposite of epistemic anarchy. Admitting that we can't know the unknowable changes our decision calculus: instead of focusing on making the optimal decision, we recognize that all decisions will have unintended negative consequences which we'll have to correct. Fostering an environment of criticism and error-correction becomes paramount.

It's easy to lie (including to yourself) with numbers, but it's even easier to lie without them.

I'd disagree. I think trying to place probabilities on inherently unknowable events lends us a false sense of security. 

(All said with a smile of course :) ) 

Personally I think equating strong longtermism with longtermism is not really correct.

 

Agree! While I do have problems with (weak?) longtermism, this post is a criticism of strong longtermism :)

If you are agnostic about that, then you must also be agnostic about the value of GiveWell-type stuff

Why? GiveWell charities have developed theories about the effects of various interventions. The theories have been tested and, typically, found to be relatively robust. Of course, there is always more to know, and always ways we could improve the theories.

I don't see how this relates to not being able to develop a statistical estimate of the probability we go extinct tomorrow. (Of course, I can just give you a number and call it "my belief that we'll go extinct tomorrow," but this doesn't get us anywhere. The question is whether it's accurate - and what accuracy means in this case.) What would be the parameters of such a model? There are uncountably many things - most of them unknowable - which could affect such an outcome.

Agree with almost all of this. This is why it was tricky to argue against, and also why I say (somewhere? podcast maybe?) that I'm not particularly worried about the current instantiation of longtermism, but about what this kind of logic could justify.

I totally agree that most of the existential threats currently tackled by the EA community are real problems (nuclear threats, pandemics, climate change, etc).

I would note that the Greaves and MacAskill paper actually has a section putting forward 'advancing progress' as a plausible longtermist intervention!

Yeah - but I found this puzzling. You don't need longtermism to think this is a priority - so why adopt it? If you instead adopt a problem- and knowledge-focused ethics, then you get to keep all the good aspects of longtermism (promoting progress, etc.), but don't open yourself up to what (in my view) are its drawbacks. I try to say this in the "Antithesis of Moral Progress" section, but obviously did a terrible job haha :)

I think I agree, but there's a lot smuggled into the phrase "perfect information on expected value". So much, in fact, that I'm not sure I can quite follow the thought experiment.

When I think of "perfect information on expected value", my first thought is something like a game of roulette. There's no uncertainty (about what can affect the system), only chance. We understand all the parameters of the system and can write down a model. To say something like this about the future means we would be basically omniscient - we would know what sort of future knowledge will be developed, etc. Is this also what you had in mind?

(To complicate matters, the roulette analogy is imperfect. For a typical game of roulette we can write down a pretty robust probabilistic model. But it's only a model. We could also study the precise physics of that particular roulette board, model the hand spinning the wheel (is that how roulette works? I don't even know), take into account the initial position, the toss of the white ball, and so on and so forth. If we spent a long time doing this, we could come up with a model which was more accurate than our basic probabilistic model. This is all to say that models are tools suited for a particular purpose. So it's unclear to me what sort of model of the future would let us write down the precise probabilities implicitly required for EV calculations.)
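To spell out that implicit requirement (a sketch of the standard definition, not something taken from the paper): an expected value calculation over futures presupposes

$$\mathbb{E}[V] = \int_{\Omega} V(\omega)\,\mathrm{d}P(\omega),$$

where $\Omega$ is the space of possible futures, $P$ is a probability measure on it, and $V$ assigns a value to each future. All three objects have to be well defined before the number on the left means anything - and that is exactly what seems unavailable here.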

There are non-measurable sets (unless you discard the axiom of choice, but then you'll run into some significant problems). Indeed, the existence of non-measurable sets is the reason for so much of the measure-theoretic formalism.
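To make "non-measurable" concrete, here's the textbook Vitali construction (my sketch, not something from the thread): partition $[0,1)$ by the relation $x \sim y \iff x - y \in \mathbb{Q}$, and use the axiom of choice to pick one representative from each equivalence class, collecting them into a set $V$. The rational translates of $V$ (mod 1) are countably many disjoint sets covering $[0,1)$, so countable additivity would demand

$$1 = \sum_{q \in \mathbb{Q} \cap [0,1)} \mu(V),$$

which is impossible whether $\mu(V) = 0$ or $\mu(V) > 0$. So no translation-invariant, countably additive measure can assign $V$ a value at all.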

If you're not taking a measure-theoretic approach, and are instead using propositions (which I guess should be assumed, since this approach grounds Bayesianism), then using infinite sets of propositions (which one clearly has to do when reasoning about all possible futures) leads to paradoxes. As E.T. Jaynes writes in Probability Theory: The Logic of Science:

It is very important to note that our consistency theorems have been established only for probabilities assigned on finite sets of propositions ... In laying down this rule of conduct, we are only following the policy that mathematicians from Archimedes to Gauss have considered clearly necessary for nonsense avoidance in all of mathematics. (pg. 43-44). 

(Vaden makes this point in the podcast.) 

What I meant by this was that I think you and Ben both seem to assume that strong longtermists don't want to work on near-term problems. I don't think this is a given (although it is of course fair to say that they're unlikely to only want to work on near-term problems).

Mostly agree here - this was the reason for some of the (perhaps cryptic) paragraphs in the section "The Antithesis of Moral Progress." Longtermism erodes our ability to make progress to whatever extent it has us not working on real problems. And to the extent that it does have us working on real problems, I'm not sure what longtermism is actually adding.

Also, just a nitpick on terminology - I dislike the term "near-term" problems, because it seems to imply that there is a well-defined class of "future" problems we can choose to work on, as if there were a set of problems which could each be classified as either short-term or long-term. But the fact is that the only problems are near-term problems; everything else is just a guess about what the future might hold. So I see it less as a choice about what kinds of problems to work on, and more as a choice between working on real problems and conjecturing about future ones - and I think the latter is actively harmful.

Thanks AGB, this is helpful. 

I agree that longtermism is a core part of the movement, and probably commands a larger share of adherents than I imply. However, I'm not sure to what extent strong longtermism is supported. My sense is that while most people agree with the general thrust of the philosophy, many would be uncomfortable with "ignoring the effects" of the near term, and would remain focused on near-term problems. I didn't want to claim that a majority of EAs supported longtermism broadly defined, but then only criticize a subset of those views.

I hadn't seen the results of the EA Survey - fascinating. 

Thanks for the engagement! 

I think you're mistaking Bayesian epistemology for Bayesian mathematics. Of course, no one denies Bayes' theorem. The question is: to what should it be applied? Bayesian epistemology holds that rationality consists in updating your beliefs in accordance with Bayes' theorem. As this LW post puts it:

Core tenet 3: We can use the concept of probability to measure our subjective belief in something. Furthermore, we can apply the mathematical laws regarding probability to choosing between different beliefs. If we want our beliefs to be correct, we must do so. 
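For reference, the uncontested mathematical part is just the theorem itself:

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}.$$

The epistemological thesis is the further claim that rational belief consists in maintaining credences $P(H)$ over hypotheses and revising them by this rule on each new piece of evidence $E$ - and that's the claim at issue.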

Next, it's not that "Bayesianism is the right approach in these fields" (I'm not sure what that means); it's that Bayesian methods are useful for some problems. But Bayesianism falls short when it comes to explaining how we actually create knowledge. (No amount of updating on evidence + Newtonian mechanics gives you relativity.)

Despite his popularity among scientists who get given one philosophy of science class. 

 Love the ad hominem attack. 

If you deny that observations confirm scientific theories, then you would have no reason to believe scientific theories which are supported by observational evidence, such as that smoking causes lung cancer. 

"Smoking causes lung cancer" is one hypothesis; "smoking does not cause lung cancer" is another. We then discriminate between the hypotheses based on evidence (we falsify incorrect hypotheses). We slowly develop more and more sophisticated explanatory theories of how smoking causes lung cancer, always seeking to falsify them. At any time, we are left with the best explanation of a given phenomenon. This is how falsification works. (I can't comment on your claim about Popper's beliefs - but I would be surprised if it were true. His books are filled with examples of scientific progress.)

 If you deny the rationality of induction, then you must be sceptical about all scientific theories that purport to be confirmed by observational evidence.

Yes. Theories are not confirmed by evidence; they are falsified by it. (There's no number of white swans you can see which confirms that all swans are white: "all swans are white" is a hypothesis, which can be refuted by seeing a single black swan.) Evidence plays the role of discrimination, not confirmation.

Inductive sceptics must hold that if you jumped out of a tenth floor balcony, you would be just as likely to float upwards as fall downwards.

No - because we have explanatory theories telling us why we'll fall downwards (general relativity). These theories are the only ones which have survived scrutiny, which is why we abide by them. Confirmationism, on the other hand, purports to explain phenomena by appealing to previous evidence: "Why do we fall downwards? Because we fell downwards before." The sun rising tomorrow morning does not confirm the hypothesis that the sun rises every day. We should not increase our confidence in the sun rising tomorrow because it rose yesterday. Instead, we have a theory about why and when the sun rises (heliocentric model + axis-tilt theory).

Observing additional evidence in favour of a theory should not increase our "credence" in it. Finding confirming evidence of a theory is easy, as evidenced by astrology and ghost stories. The amount of confirmatory evidence for these theories is irrelevant; what matters is whether, and by what, they can be falsified. There are more accounts of people seeing UFOs than there are of people witnessing gamma-ray bursts. According to confirmationism, we should thus have higher credence in the former, and almost none in the existence of the latter.

If you haven't read this piece on the failure of probabilistic induction to favour one generalization over another, I highly encourage you to do so. 

Anyway, happy to continue this debate if you'd like, but that was my primer. 

I don't think the question makes sense. I agree with Vaden's argument that there's no well-defined measure over all possible futures.
