Hello! My name is Vaden Masrani and I'm a grad student at UBC in machine learning. I'm a friend of the community and have been very impressed with all the excellent work done here, but I've become very worried about the new longtermist trend developing recently.

I've written a critical review of longtermism here in hopes that bringing an 'outsider's' perspective might help stimulate some new conversation in this space. I'm posting the piece in the forum hoping that William MacAskill and Hilary Greaves might see and respond to it. There's also a small reddit discussion forming that might be of interest to some.

Cheers!

Comments (79)

Thanks! I think that there's quite a lot of good content in your critical review, including some issues that really should be discussed more. In my view there are a number of things to be careful of, but ultimately not enough to undermine the longtermist position. (I'm not an author on the piece you're critiquing, but I agree with enough of its content to want to respond to you.)

Overall I feel like a lot of your critique is not engaging directly with the case for strong longtermism; rather you're pointing out apparently unpalatable implications. I think this is a useful type of criticism, but one that often leads me to suspect that neither side is simply incorrect, and to look instead for a good synthesis position which understands all of the important points. (Your argument against expected value is a direct rebuttal of the argument for, but in my eyes this is one of your weaker criticisms.)

The point I most appreciate you making is that it seems like strong longtermism could be used to justify ignoring all sorts of pressing present problems. I think that this is justifiably concerning, and deserves attention. However my view is more like "beware naive longtermism" (rather like "bew... (read more)

In response to the plea at the end (and quoting of Popper) to focus on the now over the utopian future: I find myself sceptical and ultimately wanting to disagree with the literal content, and yet feeling that there is a good deal of helpful practical advice there:

  • I don't think that we must focus on the suffering now over thinking about how to help the further-removed future
    • I do think that if all people across time were united in working for the good, then our comparative advantage, as the only people who can address current issues (for both their intrinsic and instrumental value), would mean that a large share of our effort would be allocated to this
  • I do think that attempts to focus on hard-to-envision futures risk coming to nothing (or worse) because of poor feedback loops
    • In contrast tackling issues that are within our foresight horizon allows us to develop experience and better judgement about how to address important issues (while also providing value along the way!)
    • I don't think this means we should never attempt such work; rather we should do so carefully, and in connection with what we can learn from wrestling with more imminent challenges

Regarding the point about the expectation of the future being undefined: I think this is correct and there are a number of unresolved issues around exactly when we should apply expectations, how we should treat them, etc.

Nonetheless I think that we can say that they're a useful tool on lots of scales, and many of the arguments about the future being large seem to bite without relying on getting far out into the tails of our hypothesis space. I would welcome more work on understanding the limits of this kind of reasoning, but I'm wary of throwing the baby out with the bathwater if we say we must throw our hands up rather than reason at all about things affecting the future.

To see more discussion of this topic, I particularly recommend Daniel Kokotajlo's series of posts on tiny probabilities of vast utilities.

As a minor point, I don't think that discounting the future really saves you from undefined expectations, as you're implying. I think that on simple models of future growth -- such as are often used in practice -- it does, but if you give some credence to wild futures with crazy growth rates, then it's easy to make the entire thing undefined even with a positive discount rate for pure time preference.
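For concreteness, a toy construction of that failure mode (all numbers made up for illustration): suppose we give scenario n credence 2^{-n}, and in scenario n a value of v_n = (2(1+\delta))^n is realised at time n, where \delta > 0 is the discount rate for pure time preference. Then

\[
\sum_{n=1}^{\infty} \Pr(\text{scenario } n)\,\frac{v_n}{(1+\delta)^{n}}
\;=\; \sum_{n=1}^{\infty} 2^{-n}\,\frac{\bigl(2(1+\delta)\bigr)^{n}}{(1+\delta)^{n}}
\;=\; \sum_{n=1}^{\infty} 1 \;=\; \infty ,
\]

so the discounted expectation still diverges; whether it is defined depends on how quickly the credence assigned to ever-wilder growth scenarios falls off.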

Hey Owen - thanks for your feedback! Just to respond to a few points - 

>Your argument against expected value is a direct rebuttal of the argument for, but in my eyes this is one of your weaker criticisms.

Would you be able to elaborate a bit on where the weaknesses are? I see in the thread you agree the argument is correct (and from googling your name I see you have a pure math background! Glad it passes your sniff-test :) ). If we agree EVs are undefined over possible futures, then in the Shivani example, this is like comparing 3 lives to NaN. Does this not refute at least one of the two assumptions longtermism needs to 'get off the ground'?

> Overall I feel like a lot of your critique is not engaging directly with the case for strong longtermism; rather you're pointing out apparently unpalatable implications.

Just to comment here - yup I intentionally didn't address the philosophical arguments in favor of longtermism, just because I felt that criticizing the incorrect use of expected values was a "deeper" critique and one which I hadn't seen made on the forum before.  What would the argument for strong longtermism look like without the expected val... (read more)

>Your argument against expected value is a direct rebuttal of the argument for, but in my eyes this is one of your weaker criticisms.

>Would you be able to elaborate a bit on where the weaknesses are? I see in the thread you agree the argument is correct (and from googling your name I see you have a pure math background! Glad it passes your sniff-test :) ).

I think it proves both too little and too much.

Too little, in the sense that it's contingent on things which don't seem that related to the heart of the objections you're making. If we were certain that the accessible universe were finite (as is suggested by (my lay understanding of) current physical theories), and we had certainty in some finite time horizon (however large), then all of the EVs would become defined again and this technical objection would disappear.

In that world, would you be happy to drop your complaints? I don't really think you should, so it would be good to understand what the real heart of the issue is.

Too much, in the sense that if we apply the argument naively then it appears to rule out using EVs as a decision-making tool in many practical situations (where subjective probabilities are fed... (read more)

Hi Owen! Really appreciate you engaging with this post. (In the interest of full disclosure, I should say that I'm the Ben acknowledged in the piece, and I'm in no way unbiased. Also, unrelatedly, your story of switching from pure maths to EA-related areas has had a big influence over my current trajectory, so thank you for that :) ) 

I'm confused about the claim 

I don't think they're saying (and I certainly don't think) that we can ignore the effects of our actions over the next century; rather I think those effects matter much more for their instrumental value than intrinsic value.

This seems in direct opposition to what the authors say (and what Vaden quoted above), namely that:

The idea, then, is that for the purposes of evaluating actions, we can in the first instance often simply ignore all the effects contained in the first 100 (or even 1000) years

I understand that they may not feel this way, but it is what they argued for and is, consequently, the idea that deserves to be criticized. Next, you write that if

we had certainty in some finite time horizon (however large), then all of the EVs would become defined again and this technical objection would disappear.

I don't t... (read more)

The "immeasurability" of the future that Vaden has highlighted has nothing to do with the literal finiteness of the timeline of the universe. It has to do, rather, with the set of all possible futures (which is provably infinite). This set is immeasurable in the mathematical sense of lacking sufficient structure to be operated upon with a well-defined probability measure. Let me turn the question around on you: Suppose we knew that the time-horizon of the universe was finite, can you write out the sample space, $\sigma$-algebra, and measure which allows us to compute over possible futures?  

I can see two possible types of arguments here, which are importantly different.

  1. Arguments aiming to show that there can be no probability measure - or at least no "non-trivial" one - on some relevant set such as the set of all possible futures.
  2. Arguments aiming to show that, among the many probability measures that can be defined on some relevant set, there is no, or no non-arbitrary way to identify a particular one.

[ETA: In this comment, which I hadn't seen before writing mine, Vaden seems to confirm that they were trying to make an argument of the second rather than the first kind.]

In this... (read more)

Technical comments on type-1 arguments (those aiming to show there can be no probability measure). [Refer to the parent comment for the distinction between type 1 and type 2 arguments.]

I basically don't see how such an argument could work. Apologies if that's totally clear to you and you were just trying to make a type-2 argument. However, I worry that some readers might come away with the impression that there is a viable argument of type 1 since Vaden and you mention issues of measurability and infinite cardinality. These relate to actual mathematical results showing that for certain sets, measures with certain properties can't exist at all.

However, I don't think this is relevant to the case you describe. And I also don't think it can be salvaged for an argument against longtermism. 

First, in what sense can sets be "immeasurable"? The issue can arise in the following situation. Suppose we have some set (in this context the "sample space" - think of its elements as all possible instances of things that can happen at the most fine-grained level), and some measure (in this context "probability" - but it could also refer to something we'd intuitively call length or volume) we ... (read more)

2
Max_Daniel
3y
As even more of an aside, type 1 arguments would also be vulnerable to a variant of Owen's objection that they "prove too little". However, rather than the argument depending too much on contingent properties of the world (e.g. whether it's spatially infinite), the issue here is that they would depend on the axiomatization of mathematics.

The situation is roughly as follows: there are two different axiomatizations of mathematics with the following properties:

  • In both of them all maths that any of us are likely to ever "use in practice" works basically the same way.
  • For parallel situations (i.e. assignments of measure to some subsets of some set, which we'd like to extend to a measure on all subsets) there are immeasurable subsets in exactly one of the axiomatizations. Specifically, for example, for our intuitive notion of "length" there are immeasurable subsets of the real numbers in the standard axiomatization of mathematics (called ZFC here). However, if we omit a single axiom - the axiom of choice - and replace it with an axiom that loosely says that there are weirdly large sets, then every subset of the real numbers is measurable. [ETA: Actually it's a bit more complicated, but I don't think in a way that matters here. It doesn't follow directly from these other axioms that everything is measurable, but using these axioms it's possible to construct a "model of mathematics" in which that holds. Even less importantly, we don't totally omit the axiom of choice but replace it with a weaker version.]

I think it would be pretty strange if the viability of longtermism depended on such considerations. E.g. imagine writing a letter to people in 1 million years explaining why you didn't choose to try to help more rather than fewer of them. Or imagine getting such a letter from the distant past. I think I'd be pretty annoyed if I read "we considered helping you, but then we couldn't decide between the axiom of choice and inaccessible cardinals ...".
5
Max_Daniel
3y
Technical comments on type-2 arguments (i.e. those that aim to show there is no, or no non-arbitrary, way for us to identify a particular probability measure). [Refer to the parent comment for the distinction between type 1 and type 2 arguments.]

I think this is closer to the argument Vaden was aiming to make despite the somewhat nonstandard use of "measurable" (cf. my comment on type 1 arguments for what measurable vs. immeasurable usually refers to in maths), largely because of this part (emphasis mine) [ETA: Vaden also confirms this in this comment, which I hadn't seen before writing my comments]:

Some comments:

  • Yes, we need to be more careful when reasoning about infinite sets since some of our intuitions only apply to finite sets. Vaden's ball reshuffling example and the "Hilbert's hotel" thought experiment they mention are two good examples for this.
  • However, the ball example only shows that one way of specifying a measure no longer works for infinite sample spaces: we can no longer get a measure by counting how many instances a subset (think "event") consists of and dividing this by the number of all possible samples, because doing so might amount to dividing infinity by infinity.
  • (We can still get a measure by simply setting the measure of any infinite subset to infinity, which is permitted for general measures, and treating something finite divided by infinity as 0. However, that way the full infinite sample space has measure infinity rather than 1, and thus we can't interpret this measure as probability.)
  • But this need not be problematic. There are a lot of other ways for specifying measures, for both finite and infinite sets. In particular, we don't have to rely on some 'mathematical structure' on the set we're considering (as in the examples of real numbers that Vaden is giving) or other a priori considerations; when using probabilities for practical purposes, our reasons for using a particular measure will often be tied to empirical inf
3
brekels
3y
Hi Max! Again, I agree the longtermist and garden-variety cases may not actually differ regarding the measure-theoretic features in Vaden's post, but some additional comments here.

Although "probability of 60%" may be less meaningful than we'd like / expect, you are certainly allowed to enter such bets. In fact, someone willing to take the other side suggests that he/she disagrees. This highlights the difficulty of converging on objective probabilities for future outcomes which aren't directly subject to domain-specific science (e.g. laws of planetary motion). Closer in time, we might converge reasonably closely on an unambiguous measure, or an appropriate parametric statistical model.

Regarding the "60% probability" for future outcomes, a useful thought experiment for me was how I might reason about the risk profile of bets made on open-ended future outcomes. I quickly become less convinced I'm estimating meaningful risk the further out I go. Further, we only run the future once, so it's hard to actually confirm our probability is meaningful (as for repeated coin flips). We could make longtermist bets by transferring $ between our far-future offspring, but we can't tell who comes out on top "in expectation" beyond simple arbitrages.

Honest question, being new to EA: is it not problematic to restrict our attention to possible futures or aspects of futures which are relevant to a single issue at a time? Shouldn't we calculate Expected Utility over billion-year futures for all current interventions, and set our relative propensity for actions = exp{α * EU} / normalizer? For example, the downstream effects of donating to Anti-Malaria would be difficult to reason about, but we are clueless as to whether its EU would be dwarfed by AI safety on the billion-year timescale (e.g. bringing the entire world out of poverty, limiting political risk leading to totalitarian government).
4
Max_Daniel
3y
Yes, I agree that it's problematic. We "should" do the full calculation if we could, but in fact we can't because of our limited capacity for computation/thinking.

But note that in principle this situation is familiar. E.g. a CEO might try to maximize the long-run profits of her company, or a member of government might try to design a healthcare policy that maximizes wellbeing. In none of these cases are we able to do the "full calculation", albeit by a less dramatic margin than for longtermism. And we don't think that the CEO's or the politician's efforts are meaningless or doomed or anything like that. We know that they'll use heuristics, simplified models, or other computational shortcuts; we might disagree with them about which heuristics and models to use, and if repeatedly queried with "why?" both they and we would come to a place where we'd struggle to justify some judgment call or choice of prior or whatever. But that's life - a familiar situation and one we can't get out of.

Anyway, I'm a huge fan of 95% of EA's work, but really think it has gone down the wrong path with longtermism. Sorry for the sass -- much love to all :) 

It's all good! Seriously, I really appreciate the engagement from you and Vaden: it's obvious that you both care a lot and are offering the criticism precisely because of that. I currently think you're mistaken about some of the substance, but this kind of dialogue is the type of thing which can help to keep EA intellectually healthy.

I'm confused about the claim 

>I don't think they're saying (and I certainly don't think) that we can ignore the effects of our actions over the next century; rather I think those effects matter much more for their instrumental value than intrinsic value.

This seems in direct opposition to what the authors say (and what Vaden quoted above), that 

>The idea, then, is that for the purposes of evaluating actions, we can in the first instance often simply ignore all the effects contained in the first 100 (or even 1000) years

I understand that they may not feel this way, but it is what they argued for and is, consequently, the idea that deserves to be criticized.

So my interpretation had bee... (read more)

4
axioman
3y
"The "immeasurability" of the future that Vaden has highlighted has nothing to do with the literal finiteness of the timeline of the universe. It has to do, rather, with the set of all possible futures (which is provably infinite). This set is immeasurable in the mathematical sense of lacking sufficient structure to be operated upon with a well-defined probability measure. " This claim seems confused, as every nonempty set allows for the definition of a probability measure on it  and measures on function spaces exist ( https://en.wikipedia.org/wiki/Dirac_measure , https://encyclopediaofmath.org/wiki/Wiener_measure ). To obtain non-existence, further properties of the measure such as translation-invariance need to be required (https://aalexan3.math.ncsu.edu/articles/infdim_meas.pdf) and it is not obvious to me that we would necessarily require such properties. 
1
vadmas
3y
See discussion below w/ Flodorner on this point :)  You are Flodorner! 
4
djbinder
3y
It is certainly not obvious that the universe is infinite in the sense that you suggest. Certainly nothing is "provably infinite" with our current knowledge. Furthermore, although we may not be certain about the properties of our own universe, we can easily imagine worlds rich enough to contain moral agents yet which remain completely finite. For instance, you could imagine a cellular automaton with a finite grid size which only lasted for a finite duration.

However, perhaps the more important consideration is the in-principle set of possible futures that we must consider when doing EV calculations, rather than the universe we actually inhabit, since even if our universe is finite we would never be able to convince ourselves of this with certainty. Is it this set of possible futures that you think suffers from "immeasurability"?
3
vadmas
3y
Aarrrgggggg, was trying to resist weighing in again ... but I think there's some misunderstanding of my argument here. I wrote:

A few comments:

  • We're talking about possible universes, not actual ones, so cast-iron guarantees about the size and future lifespan of the universe are irrelevant (and impossible anyway).
  • I intentionally framed it as someone shouting a natural number in order to circumvent any counterargument based on physical limits of the universe. If someone can think it, they can shout it.
  • The set of possible futures is provably infinite because the "shouting a natural number" argument established a one-to-one correspondence between the set of *possible* (triple emphasis on the word possible) futures and the set of natural numbers, which are provably infinite (see proof here).
  • I'm not using fancy or exotic mathematics here, as Owen can verify. Putting sets in one-to-one correspondence with the natural numbers is the standard way one proves a set is countably infinite. (See https://en.wikipedia.org/wiki/Countable_set.)
  • Physical limitations regarding the largest number that can be physically instantiated are irrelevant to answering the question "is this set finite or infinite?" Mathematicians do not say the set of natural numbers is finite because there are a finite number of particles in the universe. We're approaching numerology territory here...

Okay, this will hopefully be my last comment, because I'm really not trying to be a troll in the forum or anything. But please represent my argument accurately!

You really don't seem like a troll! I think the discussion in the comments on this post is a very valuable conversation and I've been following it closely. I think it would be helpful for quite a few people for you to keep responding to comments

Of course, it's probably a lot of effort to keep replying carefully to things, so understandable if you don't have time :)

I second what Alex has said about this discussion being very valuable pushback against ideas that have got some traction - at the moment I think that strong longtermism seems right, but it's important to know if I'm mistaken! So thank you for writing the post & taking some time to engage in the comments.

On this specific question, I have either misunderstood your argument or think it might be mistaken. I think your argument is "even if we assume that the life of the universe is finite, there are still infinitely many possible futures - for example, the infinitely many different possible universes where someone shouts a different natural number".

But I think this is mistaken, because the universe will end before you finish shouting most natural numbers. In fact, there would only be finitely many natural numbers you could finish shouting before the universe ends, so this doesn't show there are infinitely many possible universes. (Of course, there might be other arguments for infinite possible futures.)
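To make the finiteness concrete (a back-of-the-envelope bound, assuming only that each spoken digit takes at least some minimum time $\tau$ and the universe lasts $T$ seconds):

\[
\#\{\text{distinct numbers shoutable in time } T\}
\;\le\; \sum_{d=1}^{\lfloor T/\tau \rfloor} 10^{d}
\;<\; 10^{\lfloor T/\tau \rfloor + 1},
\]

which is astronomically large but still finite.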

More generally, I think I agree with Owen's point that if we make the (strong) assumption the universe is finite in duration and finite in possible states, and can quantise time, then it fol... (read more)

7
vadmas
3y
Hey Isaac,

Yup, you've misunderstood the argument. When we talk about the set of all future possibilities, we don't line up all the possible futures and iterate through them sequentially. For example, if we say it's possible tomorrow might either rain, snow, or hail, we *aren't* saying that it will first rain, then snow, then hail. Only one of them will actually happen. Rather we are discussing the set of possibilities {rain, snow, hail}, which has no intrinsic order, and in this case has a cardinality of 3.

Similarly with the set of all possible futures. If we let f_i represent a possible future where someone shouts the number i, then the set of all possible futures is {f_1, f_2, f_3, ...}, which has cardinality ∞ and again no intrinsic ordering. We aren't saying here that a single person will shout all numbers between 1 and ∞, because as with the weather example, we're talking about what might possibly happen, not what actually happens.

No, this is wrong. We don't consider physical constraints when constructing the set of future possibilities - physical constraints come into the picture later. So in the weather example, we could include in our set of future possibilities something absurd, which violates known laws of physics. For example we are free to construct a set like {rain, snow, hail, rains_frogs}. Then we factor in physical constraints by assigning probability 0 to the absurd scenario. For example our probabilities might be {0.2, 0.4, 0.4, 0}.

But no laws of physics are being violated with the scenario "someone shouts the natural number i". This is why this establishes a one-to-one correspondence between the set of future possibilities and the natural numbers, and why we can say the set of future possibilities is (at least) countably infinite. (You could establish that the set of future possibilities is uncountably infinite as well by having someone shout a single digit in Cantor's diagonal argument, but that's beyond what is necessary to
15
Mau
3y

Hi Vaden, thanks again for posting this! Great to see this discussion. I wanted to get further along C&R before replying, but:

no laws of physics are being violated with the scenario "someone shouts the natural number i".  This is why this establishes a one-to-one correspondence between the set of future possibilities and the natural numbers

If we're assuming that time is finite and quantized, then wouldn't these assumptions (or, alternatively, finite time + the speed of light) imply a finite upper bound on how many syllables someone can shout before the end of the universe (and therefore a finite upper bound on the size of the set of shoutable numbers)? I thought Isaac was making this point; not that it's physically impossible to shout all natural numbers sequentially, but that it's physically impossible to shout any of the natural numbers (except for a finite subset).

(Although this may not be crucial, since I think you can still validly make the point that Bayesians don't have the option of, say, totally ruling out faster-than-light number-pronunciation as absurd.)

Note also that EV style reasoning is only really popular in this community. No other community of researchers

... (read more)
9
Owen Cotton-Barratt
3y
I meant if everyone were actively engaged in this project. (I think there are plenty of people in the world who are just getting on with their thing, and some of them make the world a bit worse rather than a bit better.) Overall though I think that longtermism is going to end up with practical advice which looks quite a lot like "it is the duty of each generation to do what it can to make the world a little bit better for its descendants"; there will be some interesting content in which dimensions of betterness we pay most attention to (e.g. I think that the longtermist lens on things makes some dimension like "how much does the world have its act together on dealing with possible world-ending catastrophes?" seem really important).
3
vadmas
3y
Goodness, I really hope so. As it stands, Greaves and MacAskill are telling people that they can “simply ignore all the effects [of their actions] contained in the first 100 (or even 1000) years”, which seems rather far from the practical advice both you and I hope they arrive at. Anyway, I appreciate all your thoughtful feedback - it seems like we agree much more than we disagree, so I’m going to leave it here :)

I think the crucial point of outstanding disagreement is that I agree with Greaves and MacAskill that by far the most important effects of our actions are likely to be temporally distant. 

I don't think they're saying (and I certainly don't think) that we can ignore the effects of our actions over the next century; rather I think those effects matter much more for their instrumental value than intrinsic value. Of course, there are also important instrumental reasons to attend to the intrinsic value of various effects, so I don't think intrinsic value should be ignored either.

13
AGB
3y

In their article vadmas writes:

Strong longtermism goes beyond its weaker counterpart in a significant way. While longtermism says we should be thinking primarily about the far-future consequences of our actions (which is generally taken to be on the scale of millions or billions of years), strong longtermism says this is the only thing we should think about.

Some of your comments, including this one, seem to me to be defending simple or weak longtermism ('by far the most important effects are likely to be temporally distant'), rather than strong longtermism as defined above. I can imagine a few reasons for this:

  1. You don't actually agree with strong longtermism
  2. You do agree with strong longtermism, but I (and presumably vadmas) am misunderstanding what you/MacAskill/Greaves mean by strong longtermism; the above quote, presumably unintentionally, misunderstands their views. In this case I think it would be good to hear what you think the 'strong' in 'strong longtermism' actually means.
  3. You think the above quote is compatible with what you've written above.

At the moment, I don't have a great sense of which one is the case, and think clarity on this point would be useful. I could also have missed another way to reconcile these.

I think it's a combination of a couple of things.

  1. I'm not fully bought into strong longtermism (nor, I suspect, are Greaves or MacAskill), but on my inside view it seems probably-correct.

When I said "likely", that was covering the fact that I'm not fully bought in.

  1. I'm taking "strong longtermism" to be a concept in the vicinity of what they said (and meaningfully distinct from "weak longtermism", for which I would not have said "by far"), that I think is a natural category they are imperfectly gesturing at. I don't agree with with a literal reading of their quote, because it's missing two qualifiers: (i) it's overwhelmingly what matters rather than the only thing; & (ii) of course we need to think about shorter term consequences in order to make the best decisions for the long term.

Both (i) and (ii) are arguably technicalities (and I guess that the authors would cede the points to me), but (ii) in particular feels very important.

5
Adam Binks
3y
I think this is a good point, I'm really enjoying all your comments in this thread:) It strikes me that one way that the next century effects of our actions might be instrumentally useful is that they might give some (weak) evidence as to what the longer term effects might be. All else equal, if some action causes a stable, steady positive effect each year for the next century, then I think that action is more likely to have a positive long term effect than some other action which has a negative effect in the next century. However this might be easily outweighed by specific reasons to think that the action's longer run effects will differ.
9
Owen Cotton-Barratt
3y
I'm sympathetic to something in the vicinity of your complaint here, striving to compare like with like, and being cognizant of the weaknesses of the comparison when that's impossible (e.g. if someone tried the reasoning from the Shivani example in earnest rather than as a toy example in a philosophy paper I think it would rightly get a lot of criticism). (I don't think that "subjective" and "objective" are quite the right categories here, btw; e.g. even the GiveWell estimates of cost-to-save-a-life include some subjective components.)

In terms of your general sympathy with longtermism -- it makes sense to me that the behaviour of its proponents should affect your sympathy with those proponents. And if you're thinking of the position as a political stance (who you're allying yourself with etc.) then it makes sense that it could affect your sympathy with the position. But if you're engaged in the business of truth-seeking, why does it matter what the proponents do? You should ignore the bad arguments and pay attention to the best ones you can see -- whether or not anyone actually made them. (Of course I'm expressing a super idealistic position here, and there are practical reasons not to be all the way there, but I still think it's worth thinking about.)
15
AGB
3y

But if you're engaged in the business of truth-seeking, why does it matter what the proponents do? You should ignore the bad arguments and pay attention to the best ones you can see

If someone who I have trusted with working out the answer to a complicated question makes an error that I can see and verify, I should also downgrade my assessment of all their work which might be much harder for me to see and verify. 

Related: Gell-Mann Amnesia
(Edit: Also related, Epistemic Learned Helplessness)

Briefly stated, the Gell-Mann Amnesia effect is as follows. You open the newspaper to an article on some subject you know well. In Murray’s case, physics. In mine, show business. You read the article and see the journalist has absolutely no understanding of either the facts or the issues. Often, the article is so wrong it actually presents the story backward—reversing cause and effect. I call these the “wet streets cause rain” stories. Paper’s full of them.

In any case, you read with exasperation or amusement the multiple errors in a story, and then turn the page to national or international affairs, and read as if the rest of the newspaper was somehow more accurate about Palestine than the ba

... (read more)
6
Owen Cotton-Barratt
3y
I appreciate the points here. I think I might be slightly less pessimistic than you about the ability to evaluate arguments in foreign domains, but the thrust of why I was making that point was: I think for pushing out the boundaries of collective knowledge it's roughly correct to adopt the idealistic stance I was recommending; and I think that Vaden is engaging in earnest and noticing enough important things that there's a nontrivial chance they could contribute to pushing such boundaries (and that this is valuable enough to be encouraged, rather than just encouraging activity that is likely to lead to the most-correct beliefs among the convex hull of things people already understand).
4
AGB
3y
Ah, gotcha. I agree that the process of scientific enquiry/discovery works best when people do as you said. I think it’s worth distinguishing between that case where taking the less accurate path in the short-term has longer-term benefits, and more typical decisions like ‘what should I work on’, or even just truth-seeking that doesn’t have a decision directly attached but you want to get the right answer. There are definitely people who still believe what you wrote literally in those cases and ironically I think it’s a good example of an argument that sounds compelling but is largely incorrect, for reasons above.
3
MichaelA
3y
Just wanted to quickly hop in to say that I think this little sub-thread contains interesting points on both sides, and that people who stumble upon it later may also be interested in Forum posts tagged “epistemic humility”.

Thanks for writing this. I think it's very valuable to be having this discussion. Longtermism is a novel, strange, and highly demanding idea, so it merits a great deal of scrutiny. That said, I agree with the thesis and don't currently find your objections against longtermism persuasive (although in one case I think they suggest a specific set of approaches to longtermism).

I'll start with the expected value argument, specifically the note that probabilities here are uncertain and therefore random variables, whereas in traditional EU they're constant. To me a charitable version of Greaves and MacAskill's argument is that, taking the expectation over the probabilities times the outcomes, you have a large future in expectation. (What you need for the randomness of probabilities to sink longtermism is for the probabilities to correlate inversely and strongly with the size of the future.) I don't think they'd claim the probabilities are certain.

Maybe the claim you want to make, then, is that we should treat random probabilities differently from certain probabilities, i.e. you should not "take expectations" over probabilities in the way I've described. The problem with this is that (a) a... (read more)

2
MichaelStJules
3y
This might also be of interest: The Sequential Dominance Argument for the Independence Axiom of Expected Utility Theory by Johan E. Gustafsson, which argues for the Independence Axiom with stochastic dominance, a minimal rationality requirement, and also against the Allais paradox and Ellsberg paradox (ambiguity aversion).

However, I think a weakness in the argument is that it assumes the probabilities exist and are constant throughout, but they aren't defined by assumption in the Ellsberg paradox. In particular, looking at the figure for case 1, the argument assumes p is the same when you start at the first random node as it is looking forward when you're at one of the two choice nodes, 1 or 2. In some sense, this is true, since the colours of the balls don't change in between, but you don't have a subjective estimate of p by assumption and "unknown probability" is a contradiction in terms for a Bayesian. (These are notes I took when I read the paper a while ago, so I hope they make sense! :P)

Another weakness is that I think these kinds of sequential lotteries are usually only relevant in choices where an agent is working against you or trying to get something from you (e.g. money for their charity!), which also happen to be the cases where ambiguity aversion is most useful. You can't set up such a sequential lottery for something like the degree of insect consciousness, P vs NP, or whether the sun will rise tomorrow. See my discussion with Owen Cotton-Barratt.
2
MichaelStJules
3y
On the expected value argument, are you referring to this? Based on the link to the wiki page for random variables, I think Vaden didn't mean that the probabilities themselves follow some distributions, but was rather just identifying probability distributions with the random variables they represent, i.e., given any probability distribution, there's a random variable distributed according to it. However, I do think his point does lead us to want to entertain multiple probability distributions.

If you did have probabilities over your outcome probabilities or aggregate utilities, I'd think you could just take iterated expectations. If U is the aggregate utility, U ∼ p and p ∼ q, then you'd just take the expected value of p with respect to q first, and calculate E[U] = E_{p∼q}[ E_{U∼p}[U] ]. If the dependence is more complicated (you talk about correlations), you might use (something similar to) the law of total expectation.

And you'd use Gilboa and Schmeidler's maxmin expected value approach if you don't even have a joint probability distribution over all of the probabilities. A more recent alternative to maxmin is the maximality rule, which is to rule out any choices whose expected utilities are weakly dominated by the expected utilities of another specific choice.

https://academic.oup.com/pq/article-abstract/71/1/141/5828678
https://globalprioritiesinstitute.org/andreas-mogensen-maximal-cluelessness/
https://forum.effectivealtruism.org/posts/WSytm4XG9DrxCYEwg/andreas-mogensen-s-maximal-cluelessness

Mogensen comes out against this rule in the end for being too permissive, though. However, I'm not convinced that's true, since that depends on your particular probabilities. I think you can get further with hedging.
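A minimal numerical sketch of that iterated expectation (all numbers are hypothetical placeholders):

```python
# Two candidate distributions p over the same outcomes, and a credence q over
# which candidate is correct; all numbers are hypothetical placeholders.
outcomes = [0.0, 1.0, 10.0]            # aggregate utilities U of three possible futures
p_candidates = [
    [0.80, 0.15, 0.05],                # candidate distribution p1
    [0.20, 0.30, 0.50],                # candidate distribution p2
]
q = [0.7, 0.3]                         # credence over which candidate is right

def expectation(probs, values):
    return sum(pr * v for pr, v in zip(probs, values))

inner = [expectation(p, outcomes) for p in p_candidates]   # E_p[U] for each candidate
iterated = expectation(q, inner)                           # E_q[ E_p[U] ]

# Equivalently, mix the candidates first and take a single expectation.
mixture = [sum(qi * p[j] for qi, p in zip(q, p_candidates)) for j in range(len(outcomes))]
assert abs(iterated - expectation(mixture, outcomes)) < 1e-12

print(inner, iterated)   # inner ~ [0.65, 5.3]; iterated = 0.7*0.65 + 0.3*5.3 ~ 2.05
```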
2
zdgroff
3y
Yeah, that's the part I'm referring to. I take his comment that expectations are not random variables to be criticizing taking expectations over expected utility with respect to uncertain probabilities. I think the critical review of ambiguity aversion I linked to is sufficiently general that any alternatives to taking expectations with respect to uncertain probabilities will have seriously undesirable features.
1
Mau
3y
Hi Zach, thanks for this! I have two doubts about the Al-Najjar and Weinstein paper--I'd be curious to hear your (or others') thoughts on these.

First, I'm having trouble seeing where the information aversion comes in. A simpler example than the one used in the paper seems to be enough to communicate what I'm confused about: let's say an urn has 100 balls that are each red or yellow, and you don't know their distribution. Someone averse to ambiguity would (I think) be willing to pay up to $1 for a bet that pays off $1 if a randomly selected ball is red or yellow. But if they're offered that bet as two separate decisions (first betting on a ball being red, and then betting on the same ball being yellow), then they'd be willing to pay less than $0.50 for each bet. So it looks like preference inconsistency comes from the choice being spread out over time, rather than from information (which would mean there's no incentive to avoid information). What am I missing here?

(Maybe the following is how the authors were thinking about this? If you (as a hypothetical ambiguity-averse person) know that you'll get a chance to take both bets separately, then you'll take them both as long as you're not immediately informed of the outcome of the first bet, because you evaluate acts, not by their own uncertainty, but by the uncertainty of your sequence of acts as a whole (considering all acts whose outcomes you remain unaware of). This seems like an odd interpretation, so I don't think this is it.)

[edit: I now think the previous paragraph's interpretation was correct, because otherwise agents would have no way to make ambiguity averse choices that are spread out over time and consistent, in situations like the ones presented in the paper. The 'oddness' of the interpretation seems to reflect the oddness of ambiguity aversion: rather than only paying attention to what might happen differently if you choose one action or another, ambiguity aversion involves paying attention to poss
3
zdgroff
3y
Thanks! Helpful follow-ups. On the first point, I think your intuition does capture the information aversion here, but I still think information aversion is an accurate description. Offered a bet that pays $X if I pick a color and then see if a random ball matches that color, you'll pay more than for a bet that pays $X if a random ball is red. The only difference between these situations is that you have more information in the latter: you know the color to match is red. That makes you less willing to pay. And there's no obvious reason why this information aversion would be something like a useful heuristic. I don't quite get the second point. Commitment doesn't seem very relevant here since it's really just a difference in what you would pay for each situation. If one comes first, I don't see any reason why it would make sense to commit, so I don't think that strengthens the case for ambiguity aversion in any way. But I think I might be confused here.
1
Mau
3y
Thanks! I'm not sure I follow. If I were to take this bet, it seems that the prior according to which my utility would be lowest is: you'll pick a color to match that gives me a 0% chance of winning. So if I'm ambiguity averse in this way, wouldn't I think this bet is worthless? (The second point you bring up would make sense to me if this first point did, although then I'd also be confused about the papers' emphasis on commitment.)
2
zdgroff
3y
Sorry—you're right that this doesn't work. To clarify, I was thinking that the method of picking the color should be fixed ex-ante (e.g. "I pick red as the color with 50% probability"), but that doesn't do the trick because you need to pool the colors for ambiguity to arise. The issue is that the problem the paper identifies does not come up in your example. If I'm offered the two bets simultaneously, then an ambiguity averse decision maker, like an EU decision maker, will take both bets. If I'm offered the bets sequentially without knowing I'll be offered both when I'm offered the first one, then neither an ambiguity-averse nor a risk-averse EU decision-maker will take them.  The reason is that the first one offers the EU decision-maker a 50% chance of winning, so given risk-aversion its value is less than 50% of $1. So your example doesn't distinguish a risk-averse EU decision-maker from an ambiguity-averse one. So I think unfortunately we need to go with the more complicated examples in the paper. They are obviously very theoretical. I think it could be a valuable project for someone to translate these into more practical settings to show how these problems can come up in a real-world sense.
21
EJT
3y

Hi Vaden,

Cool post! I think you make a lot of good points. Nevertheless, I think longtermism is important and defensible, so I’ll offer some defence here.

First, your point about future expectations being undefined seems to prove too much. There are infinitely many ways of rolling a fair die (someone shouts ‘1!’ while the die is in the air, someone shouts ‘2!’, etc.). But there is clearly some sense in which I ought to assign a probability of 1/6 to the hypothesis that the die lands on 1.

Suppose, for example, that I am offered a choice: either bet on a six-sided die landing on 1 or bet on a twenty-sided die landing on 1. If both probabilities are undefined, then it seems I can permissibly bet on either. But clearly I ought to bet on the six-sided die.

Now you may say that we have a measure over the set of outcomes when we’re rolling a die and we don’t have a measure over the set of futures. But it’s unclear to me what measure could apply to die rolls but not to futures.

And, in any case, there are arguments for the claim that we must assign probabilities to hypotheses like ‘The die lands on 1’ and ‘There will exist at least 10^16 people in the future.’ If we don’t assign probabilities... (read more)

5
ben_chugg
3y
Hi Elliott, just a few side comments from someone sympathetic to Vaden's critique:

I largely agree with your take on time preference. One thing I'd like to emphasize is that thought experiments used to justify a zero discount factor are typically conditional on knowing that future people will exist, and what the consequences will be. This is useful for sorting out our values, but less so when it comes to action, because we never have such guarantees. I think there's often a move made where people say "in theory we should have a zero discount factor, so let's focus on the future!". But the conclusion ignores that in practice we never have such unconditional knowledge of the future.

Re: the dice example: True - there are infinitely many things that can happen while the die is in the air, but that's not the outcome space about which we're concerned. We're concerned about the result of the roll, which is a finite space with six outcomes. So of course probabilities are defined in that case (and in the 6 vs 20 sided die case). Moreover, they're defined by us, because we've chosen that a particular mathematical technique applies relatively well to the situation at hand. When reasoning about all possible futures however, we're trying to shoehorn in some mathematics that is not appropriate to the problem (math is a tool - sometimes it's useful, sometimes it's not). We can't even write out the outcome space in this scenario, let alone define a probability measure over it.

Once you buy into the idea that you must quantify all your beliefs with numbers, then yes - you have to start assigning probabilities to all eventualities, and they must obey certain equations. But you can drop that framework completely. Numbers are not primary - again, they are just a tool. I know this community is deeply steeped in Bayesian epistemology, so this is going to be an uphill battle, but assigning credences to beliefs is not the way to generate knowledge. (I recently wrote about this
2
EJT
3y
Thanks! Your point about time preference is an important one, and I think you're right that people sometimes make too quick an inference from a zero rate of pure time preference to a future-focus, without properly heeding just how difficult it is to predict the long-term consequences of our actions. But in my experience, longtermists are very aware of the difficulty. They recognise that the long-term consequences of almost all of our actions are so difficult to predict that their expected long-term value is roughly 0. Nevertheless, they think that the long-term consequences of some very small subset of actions are predictable enough to justify undertaking those actions.

On the dice example, you say that the infinite set of things that could happen while the die is in the air is not the outcome space about which we're concerned. But can't the longtermist make the same response? Imagine they said: 'For the purpose of calculating a lower bound on the expected value of reducing x-risk, the infinite set of futures is not the outcome space about which we're concerned. The outcome space about which we're concerned consists of the following two outcomes: (1) Humanity goes extinct before 2100, (2) Humanity does not go extinct before 2100.'

And, in any case, it seems like Vaden's point about future expectations being undefined still proves too much. Consider instead the following two hypotheses and suppose you have to bet on one of them: (1) The human population will be at least 8 billion next year, (2) The human population will be at least 7 billion next year. If the probabilities of both hypotheses are undefined, then it would seem permissible to bet on either. But clearly you ought to bet on (2). So it seems like these probabilities are not undefined after all.
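For illustration, the kind of two-outcome lower-bound calculation being imagined (every number here is a hypothetical placeholder; the 10^16 figure is the hypothesis mentioned earlier):

```python
# Toy lower-bound EV over the two-outcome space {extinct before 2100, not extinct}.
# Every input here is a hypothetical placeholder, not anyone's actual estimate.

p_extinction = 0.01      # credence in extinction before 2100 absent the intervention
risk_reduction = 0.001   # fraction of that risk the intervention is credited with removing
future_people = 10**16   # people who exist if extinction is avoided (the figure above)

# Count only the futures in which the intervention is what averts extinction.
expected_future_lives = p_extinction * risk_reduction * future_people
print(f"{expected_future_lives:,.0f} expected future lives")   # 100,000,000,000
```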
8
Owen Cotton-Barratt
3y
Just want to register strong disagreement with this. (That is, disagreement with the position you report, not disagreement that you know people holding this position.) I think there are enough variables in the world that have some nonzero expected impact on the long term future that for very many actions we can usually hazard guesses about their impact on at least some such variables, and hence about the expected impact of the individual actions (of course in fact one will be wrong in a good fraction of cases, but we're talking about in expectation). Note I feel fine about people saying of lots of activities "gee I haven't thought about that one enough, I really don't know which way it will come out", but I think it's a sign that longtermism is still meaningfully under development and we should be wary of rolling it out too fast.
6
brekels
3y
The Dutch-book argument relies on your willingness to take both sides of a bet at given odds or probability (see Sec. 1.2 of your link). It doesn't tell you that you must assign probabilities, but if you do and are willing to bet on them, they must be consistent with the probability axioms.

It may be an interesting shift in focus to consider where you would be ambivalent between betting for or against the proposition that ">= 10^24 people exist in the future", since, above, you reason only about taking and not laying a billion-to-one odds. An inability to find such a value might cast doubt on the usefulness of probability values here.

I don't believe this relies on any probabilistic argument, or assignment of probabilities, since the superiority of bet (2) follows from logic. Similarly, regardless of your beliefs about the future population, you can win now by arbitrage (e.g. betting against (1) and for (2)) if I'm willing to take both sides of both bets at the same odds.

Correct me if I'm wrong, but I understand a Dutch book to be taking advantage of my own inconsistent credences (which don't obey the laws of probability, as above). So once I build my set of assumptions about future worlds, I should reason probabilistically within that worldview, or else you can arbitrage me subject to my willingness to take both sides. If you set your own set of self-consistent assumptions for reasoning about future worlds, I'm not sure how to bridge the gap. We might debate the reasonableness of assumptions or priors that go into our thinking. We might negotiate odds at which we would bet on ">= 10^24 people exist in the future", with our far-future progeny transferring $ based on the outcome, but I see no way of objectively resolving who is making a "better bet" at the moment.
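A minimal sketch of the arbitrage being described (made-up odds): if someone quotes the same probability for (1) "population >= 8 billion next year" and (2) "population >= 7 billion next year", and will take both sides of both bets, then selling (1) and buying (2) can never lose:

```python
p = 0.9  # the single probability quoted for BOTH propositions (hypothetical)

def net_profit(population_billions: float) -> float:
    """Sell a $1-payout claim on (1) 'pop >= 8bn' and buy one on (2) 'pop >= 7bn'."""
    event_1 = population_billions >= 8
    event_2 = population_billions >= 7
    sell_1 = p - (1.0 if event_1 else 0.0)   # collect p now, pay $1 if (1) occurs
    buy_2 = (1.0 if event_2 else 0.0) - p    # pay p now, collect $1 if (2) occurs
    return sell_1 + buy_2

for pop in [6.5, 7.5, 8.5]:
    print(f"population {pop}bn -> profit {net_profit(pop):+.2f}")
# Never negative, and strictly positive if the population lands between 7 and 8 billion:
# since (1) implies (2), coherent credences must satisfy P(1) <= P(2).
```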
4
MichaelStJules
3y
I think the probability of these events regardless of our influence is not what matters; it's our causal effect that does. Longtermism rests on the claim that we can predictably affect the longterm future positively.

You say that it would be overconfident to assign probabilities too low in certain cases, but that argument also applies to the risk of well-intentioned longtermist interventions backfiring, e.g. by accelerating AI development faster than we align it, an intervention leading to a false sense of security and complacency, or the possibility that the future could be worse if we don't go extinct. Any intervention can backfire. Most will accomplish little. With longtermist interventions, we may never know, since the feedback is not good enough.

I also disagree that we should have sharp probabilities, since this means making fairly arbitrary but potentially hugely influential commitments. That's what sensitivity analysis and robust decision-making under deep uncertainty are for. The requirement that we should have sharp probabilities doesn't rule out the possibility that we could come to vastly different conclusions based on exactly the same evidence, just because we have different priors or weight the evidence differently.
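As a toy illustration of the kind of sensitivity analysis being pointed to (every number here is a hypothetical placeholder):

```python
# Sweep the subjective inputs of a toy "near-term vs. speculative longtermist" comparison
# and see where the ranking flips; all numbers are hypothetical placeholders.

near_term_value = 100.0   # e.g. expected lives saved by a well-evidenced intervention

def longtermist_value(p_success: float, future_value: float) -> float:
    """Expected value of a speculative intervention: chance of success times payoff."""
    return p_success * future_value

for p_success in [1e-12, 1e-10, 1e-8, 1e-6, 1e-4]:
    for future_value in [1e6, 1e9, 1e12, 1e15]:
        better = longtermist_value(p_success, future_value) > near_term_value
        preferred = "longtermist" if better else "near-term"
        print(f"p={p_success:.0e}, payoff={future_value:.0e} -> {preferred}")
# The preferred option flips across input ranges we have no robust way to pin down,
# which is why one might report the sweep rather than a single sharp estimate.
```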

I will primarily focus on The case for strong longtermism, listed as “draft status” on both Greaves and MacAskill’s personal websites as of November 23rd, 2020. It has generated quite a lot of conversation within the effective altruism (EA) community despite its status, including multiple episodes of the 80,000 Hours podcast (one, two, three), a dedicated multi-million dollar fund listed on the EA website, numerous blog posts, and an active forum discussion.

"The Case for Strong Longtermism" is subtitled "GPI Working Paper No. 7-2019," which leads me to believe that it was originally published in 2019. Many of the things you listed (two of the podcast episodes, the fund, and several of the blog and forum posts) are from before 2019. My impression is that the paper (which I haven't read) is more a formalization and extension of various existing ideas than a totally new direction for effective alturism.

The word "longtermism" is new, which may contribute to the impression that the ideas it describes are too. This is true in some cases, but many people involved with effective altruism have long been concerned about the very long run.

2
vadmas
3y
Oops good catch, updated the post with a link to your comment. 

Hi all! Really great to see all the engagement with the post! I'm going to write a follow up piece responding to many of the objections raised in this thread. I'll post it in the forum in a few weeks once it's complete - please reply to this comment if you have any other questions and I'll do my best to address all of them in the next piece :)

Thanks for writing this, I'm reading a lot of critiques of longtermism at the moment and this is a very interesting one.

Apart from the problems that you raise with expected value reasoning about future events, you also question the lack of pure time preference in the Greaves-MacAskill paper. You make a few different points here, some of which could co-exist with longtermism and some couldn't. I was wondering how much of your disagreement might be meaningfully recast as a differing opinion on how large your impartial altruistic budget should be, as an indiv... (read more)

This [The ergodicity problem in economics] seems like it could be important, and might fit in somewhere with the discussions of expected utility. I haven't really got my head around it though.

Starting with $100, your bankroll increases 50% every time you flip heads. But if the coin lands on tails, you lose 40% of your total. Since you’re just as likely to flip heads as tails, it would appear that you should, on average, come out ahead if you played enough times because your potential payoff each time is greater than your potential loss. In economics jargon

... (read more)
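For anyone who wants to see the quoted coin-flip game in action, here is a minimal simulation sketch (parameters chosen only for illustration): the ensemble average grows at 5% per flip, yet a typical individual trajectory shrinks, which is the ergodicity point.

```python
# Simulate the quoted game: heads multiplies your bankroll by 1.5, tails by 0.6.
import numpy as np

rng = np.random.default_rng(0)
n_players, n_flips, start = 100_000, 25, 100.0

# Each flip multiplies wealth by 1.5 (heads) or 0.6 (tails) with equal probability.
factors = rng.choice([1.5, 0.6], size=(n_players, n_flips))
wealth = start * factors.prod(axis=1)

print("expected growth per flip:", 0.5 * 1.5 + 0.5 * 0.6)   # 1.05 > 1 (ensemble average)
print("typical growth per flip: ", (1.5 * 0.6) ** 0.5)      # ~0.95 < 1 (time average)
print("mean final wealth:  ", wealth.mean())      # grows, driven by rare lucky runs
print("median final wealth:", np.median(wealth))  # shrinks well below the initial $100
```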

Greaves and MacAskill do discuss risk aversion, uncertainty/ambiguity aversion and the issue of seemingly arbitrary probabilities in sections 4.2 and 4.5. They admit that risk aversion with respect to the difference one makes does undermine strong longtermism (and I think ambiguity aversion with respect to the difference one makes would, too, although it might also lead you to do as little as possible to avoid backfiring), although they cited (Snowden, 2015) claiming that aversion with respect to the difference one makes is too agent-relative and therefo... (read more)
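To illustrate the difference-making risk aversion point with a toy model (hypothetical numbers, not from the paper): apply a concave value function to the difference an intervention makes before taking expectations, and a low-probability, astronomical-payoff option can drop from best to worst.

```python
# Toy model of risk aversion with respect to the difference one makes:
# value the *difference made* with a concave function before taking expectations.
import math

def expected_value(prob, payoff):
    return prob * payoff

def risk_averse_value(prob, payoff, v=math.sqrt):
    # Concave v discounts very large differences more heavily.
    return prob * v(payoff)

sure_thing = (1.0, 100)    # certainly helps 100 people
long_shot = (1e-6, 1e9)    # one-in-a-million chance of helping a billion people

print("plain EV:    ", expected_value(*sure_thing), "vs", expected_value(*long_shot))
# 100 vs 1000 -> the long shot wins on plain expected value
print("risk-averse: ", risk_averse_value(*sure_thing), "vs", risk_averse_value(*long_shot))
# 10 vs ~0.03 -> the ranking flips once we're risk-averse about the difference made
```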

I wrote up my understanding of Popper's argument on the impossibility of predicting one's own knowledge (Chapter 22 of The Open Universe) that came up in one of the comment threads. I am still a bit confused about it and would appreciate people pointing out my misunderstandings.

Consider a predictor:

A1: Given a sufficiently explicit prediction task, the predictor predicts correctly

A2: Given any such prediction task, the predictor takes time to predict and issue its reply (the task is only completed once the reply is issued).

T1: A1, A2 => Given a self-predi... (read more)

4
vadmas
3y
Haha, just gonna keep pointing you to places where Popper writes about this stuff b/c it's far more comprehensive than anything I could write here :) This question (and the questions re climate change Max asked in another thread) is the focus of Popper's book The Poverty of Historicism, where "historicism" here means "any philosophy that tries to make long-term predictions about human society" (e.g. Marxism, fascism, Malthusianism). I've attached a screenshot for proof-of-relevance. (Ben and I discuss historicism here fwiw.) I have a pdf of this one, dm me if you want a copy :)
2
Max_Daniel
3y
Yeah, I was also vaguely reminded of e.g. logical induction when I read the summary of Popper's argument in the text Vaden linked elsewhere in this discussion.
3
vadmas
3y
Yes! Exactly! Hence why I keep bringing him up :) 
1
vadmas
3y
Impressive write-up! Fun historical note - in a footnote Popper says he got the idea of formulating the proof using prediction machines from personal communication with the "late Dr A. M. Turing". 

I am confused about the precise claim made regarding the Hilbert Hotel and measure theory.  When you say "we have no  measure over the set of all possible futures",  do you mean that no such measures exist (which would be incorrect without further requirements:  https://en.wikipedia.org/wiki/Dirac_measure , https://encyclopediaofmath.org/wiki/Wiener_measure ), or that we don't have a way of choosing the right measure?  If it is the latter,  I agree that this is an important challenge, but I'd like to highlight that the situati... (read more)
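For reference, the existence half of that point is cheap to establish: on any measurable space, including a set of possible futures, the Dirac measure below is a perfectly well-defined probability measure, so the substantive question is how to justify choosing one measure rather than another. A minimal statement:

```latex
% On any measurable space (\Omega, \mathcal{F}) and any fixed future
% \omega_0 \in \Omega, the Dirac measure
\[
  \delta_{\omega_0}(A) =
  \begin{cases}
    1 & \text{if } \omega_0 \in A,\\
    0 & \text{otherwise,}
  \end{cases}
  \qquad A \in \mathcal{F},
\]
% is countably additive with \delta_{\omega_0}(\Omega) = 1, hence a genuine
% probability measure. Existence is not the obstacle; justified choice is.
```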

2
Max_Daniel
3y
(I was also confused by this, and wrote a couple of comments in response. I actually think they don't add much to the overall discussion, especially now that Vaden has clarified below what kind of argument they were trying to make. But maybe you're interested given we've had similar initial confusions.)
1
vadmas
3y
Yup, the latter. This is why the lack-of-data problem is the other core part of my argument. Once data is in the picture, we can start to get traction: there is something to fit the measure to, something to be wrong about, and a means of adjudicating which choice of measure is better than another. Without data, all this probability talk is just idle speculation painted with a quantitative veneer. 
1
axioman
3y
Ok, makes sense. I think that our ability to make predictions about the future steeply declines with increasing time horizons, but I find it somewhat implausible that it would become entirely uncorrelated with what is actually going to happen in finite time. And it does not seem to be the case that data supporting long-term predictions is impossible to come by: while it might be pretty hard to predict whether AI risk is going to be a big deal by whatever measure, I can still be fairly certain that the sun will exist in 1000 years, in part due to a lot of data collection and hypothesis testing done by physicists. 
7
Greg_Colbourn
3y
"while it might be pretty hard to predict whether AI risk is going to be a big deal by whatever measure, I can still be fairly certain that the sun will exist in a 1000 years" These two things are correlated.
1
axioman
3y
They are, but I don't think that the correlation is strong enough to invalidate my statement. P(sun will exist|AI risk is a big deal) seems quite large to me. Obviously, this is not operationalized very well...
3
vadmas
3y
Yes, there are certain rare cases where longterm prediction is possible. Usually these involve astronomical systems, which are unique because they are cyclical in nature and unusually unperturbed by the outside environment. Human society unfortunately shares none of these properties, and longterm historical prediction runs into the impossibility proof in epistemology anyway.  
3
axioman
3y
I don't think I buy the impossibility proof, as predicting future knowledge in a probabilistic manner is possible (most simply, I can predict that if I flip a coin now, there's a 50/50 chance that in a minute I'll know it landed on heads rather than tails). I think there is some important true point behind your intuition about how knowledge (especially of a more complex form than a coin flip) is hard to predict, but I am almost certain you won't be able to find any rigorous mathematical proof for this intuition, because reality is very fuzzy (in a mathematical sense, what exactly is the difference between the coin flip and knowledge about future technology?), so I'd be a lot more excited about other types of arguments (which will likely only support weaker claims). 
1
vadmas
3y
In this example you aren't predicting future knowledge, you're predicting that you'll have knowledge in the future - that is, in one minute, you will know the outcome of the coin flip. I too think we'll gain knowledge in the future, but that's very different from predicting the content of that future knowledge today. It's the difference between saying "sometime in the future we will have a theory that unifies quantum mechanics and general relativity" and describing the details of the future theory itself. The proof is here: https://vmasrani.github.io/assets/pdf/poverty_historicism_quote.pdf. (And who said proofs have to be mathematical? Proofs have to be logical - that is, concerned with deducing true conclusions from true premises - not mathematical, although they often take mathematical form.)

The proof [for the impossibility of certain kinds of long-term prediction] is here: https://vmasrani.github.io/assets/pdf/poverty_historicism_quote.pdf

Note that in that text Popper says:

The argument does not, of course, refute the possibility of every kind of social prediction; on the contrary, it is perfectly compatible with the possibility of testing social theories - for example economic theories - by way of predicting that certain developments will take place under certain conditions. It only refutes the possibility of predicting historical developments to the extent to which they may be influenced by the growth of our knowledge.

And that he rejects only

the possibility of a theoretical history; that is to say, of a historical social science that would correspond to theoretical physics.

My guess is that everyone in this discussion (including MacAskill and Greaves) agrees with this, at least as a claim about what's currently possible in practice. On the other hand, it seems uncontroversial that some forms of long-run prediction are possible (e.g. above you've conceded they're possible for some astronomical systems).

Thus it seems to me that the key question is whether longterm... (read more)

3
Max_Daniel
3y
Regarding Popper's claim that it's impossible to "predict historical developments to the extent to which they may be influenced by the growth of our knowledge": I can see how there might be a certain technical sense in which this is true, though I'm not sufficiently familiar with Popper's formal arguments to comment in detail. However, I don't think the claim can be true in the everyday sense (rather than just for a certain technical sense of "predicting") that arguably is relevant when making plans for the future.

For example, consider climate change. It seems clear that between now and, say, 2100 our knowledge will grow in various ways that are relevant: we'll better understand the climate system, but perhaps even more crucially we'll know more about the social and economic aspects (e.g. how people will adapt to a warmer climate, how much emissions reduction countries will pursue, ...) and about how much progress we've made with developing various relevant technologies (e.g. renewable energy, batteries, carbon capture and storage, geoengineering, ...). The latter two seem like paradigm examples of things that would be "impossible to predict" in Popper's sense. But does it follow that regarding climate change we should throw our hands up in the air and do nothing because it's "impossible to predict the future"? Or that climate change policy faces some deep technical challenge?

Maybe all we are doing when choosing between climate change policies in Popper's terms is "predicting that certain developments will take place under certain conditions" rather than "predicting historical developments" simpliciter. But as I said, this to me just suggests that as longtermists we will be just fine using "predictions of certain developments under certain conditions". I find it hard to see why there would be a qualitative difference between longtermism (as a practical project) and climate change mitigation which implies that the former is infeasible while the latter i
2
Max_Daniel
3y
If we're giving a specific probability distribution for the outcome of the coin flip, it seems like we're doing more than that: consider that we would predict that we'll know the outcome of the coin flip in one minute no matter what we think the odds of heads are. Therefore, if we do give specific odds (such as 50%), we're doing more than just saying we'll know the outcome in the future.
3
brekels
3y
Hi Max_Daniel! I'm sympathetic to both your and Vaden's arguments, so I may try to bridge the gap on climate change vs. your Christmas party vs. longtermism.

Climate change is a problem now, and we have past data to support projecting already-observed effects into the future. So we get statements of the sort "if current data projected forward with no notable intervention, the Earth would be uninhabitable in x years." This statement relies on some assumptions about future data vs. past data, but we can be reasonably clear about them and debate them. Future knowledge will undoubtedly help things and reframe certain problems, but a key point is that we know where to start gathering data on some of the aspects you raise ("how people will adapt", "how can we develop renewable energy or batteries", etc.), because climate change is already a well-defined problem. We have current knowledge that will help us get off the ground.

I agree the measure-theoretic arguments may prove too much, but the number of people at your Christmas party is an unambiguously posed question for which you have data on how many people you invited, how flaky your friends are, etc. In both cases, you may use probabilistic predictions, based on a set of assumptions, to compel others to act on climate change or compel yourself to invite more people.

At the risk of oversimplification by using the AI safety example as a representative longtermist argument, the key difference is that we haven't created or observed human-level AI, or even AIs which can adaptively set their own goals. There are meaningful arguments we can use to compel others to discuss issues of safety (in algorithm development, government regulation, etc.). After all, it will be a human process to develop and deploy these AI, and we can set guardrails by focused discussion today. Vaden's point seems to be that arguments that rely on expected values or probabilities are of significantly less value in this case. We are no
2
Max_Daniel
3y
Hi brekels, I think these are fair points. In particular, I think we may be able to agree on the following statement, as well as more precise versions of it: in my view, the key point is that, say, climate change and AI safety differ in degree but not in kind regarding whether we can make probabilistic predictions, should take action now, etc. In particular, consider the following similarities:

* I agree that for climate change we utilize extrapolations of current trends such as "if current data projected forward with no notable intervention, the Earth would be uninhabitable in x years." But in principle we can do the same for AI safety, e.g. "if Moore's Law continued, we could buy a brain-equivalent of compute for $X in Y years."
* Yes, it's not straightforward to say what a "brain-equivalent of compute" is, or why this matters. But neither is it straightforward to e.g. determine when the Earth becomes "uninhabitable". (Again, I might concede that the latter notion is in some sense easier to define; my point is just that I don't see a qualitative difference.)
* You say we haven't yet observed human-level AI. But neither have we observed (at least not directly and on a planetary scale), say, +6 degrees of global warming compared to pre-industrial times. Yes, we have observed anthropogenic climate change, but we've also observed AI systems developed by humans, including specific failure modes (e.g. misspecified rewards, biased training data, or lack of desired generalization in response to distributional shift).
* In various ways it sounds right to me that we have "more data" on climate change, or that the problem of more severe climate change is "more similar" to current climate change than the problem of misaligned transformative AI is to current AI failure modes. But again, to me this seems like "merely" a difference in degree.

Separately, I think that if we try hard to find the most effective intervention to avoid some distant harm (say, one we thin
1
axioman
3y
It seems like the proof critically hinges on assertion 2), which is not proven in your link. Can you point me to the pages of the book that contain the proof? I agree that proofs are logical, but since we're talking about probabilistic predictions, I'd be very skeptical of the relevance of a proof that does not involve mathematical reasoning.
2
vadmas
3y
Yep it's Chapter 22 of The Open Universe (don't have a pdf copy unfortunately) 

Great and interesting theme!

(I've just written a bunch of thoughts on this post in a new EA Forum post.)

Just saw this, which sounds relevant to some of the comment discussion here:

We are excited to announce that @anderssandberg will give a talk at the OKPS about which kinds of historical predictions are possible and impossible, and where Popper's critique of 'historicism' overshoots its goals.

https://twitter.com/OxfordPopper/status/1343989971552776192?s=20

1
vadmas
3y
Nice yeah Ben and I will be there! 

Hi Vaden, 

I'm a bit late to the party here, I know. But I really enjoyed this post. I thought I'd add my two cents' worth. Although I have a long-term perspective on risk and mitigation, and have long-term sympathies, I don't consider myself a strong longtermist. That said, I wouldn't like to see anyone (e.g. from policy circles) walk away from this debate with the view that it is not worth investing resources in existential risk mitigation. I'm not saying that's what necessarily comes through, but I think there is important middle ground (and this middl... (read more)
