Hey! Can't respond to most of your points right now unfortunately, but just a few quick things :)
(I'm working on a followup piece at the moment and will try to respond to some of your criticisms there)
My central point is the 'inconsequential in the grand scheme of things' one you highlight here. This is why I end the essay with this quote:
> If among our aims and ends there is anything conceived in terms of human happiness and misery, then we are bound to judge our actions in terms not only of possible contributions to the happiness of man in a ...
Hey Vaden!
Yeah, I didn't read your other posts (including Proving Too Much), so it's possible they counter some of my points, clarify your argument more, or the like.
(The reason I didn't read them is that I read your first post, read most comments on it, listened to the 3 hour podcast, and have read a bunch of other stuff on related topics (e.g., Greaves & MacAskill's paper), so it seems relatively unlikely that reading your other posts would change my mind.)
---
Hmm, something that strikes me about that quote is that it seems to really be ab...
Oops sorry haha neither did I! "this" just meant low-engagement, not your excellent advice about title choice. Updated :)
Hehe taking this as a sign I'm overstaying my welcome. Will finish the last post of the series though and move on :)
You're correct, in practice you wouldn't - that's the 'instrumentalist' point made in the latter half of the post
Both actually! See section 6 in Making Ado Without Expectations - unmeasurable sets are one kind of expectation gap (6.2.1) and 'single-hit' infinities are another (6.1.2)
Worth highlighting the passage that the "mere ripples" in the title refers to for those skimming the comments:
...Referring to events like “Chernobyl, Bhopal, volcano eruptions, earthquakes, draughts [sic], World War I, World War II, epidemics of influenza, smallpox, black plague, and AIDS" Bostrom writes that
> these types of disasters have occurred many times and our cultural attitudes towards risk have been shaped by trial-and-error in managing such hazards. But tragic as such events are to the people immediately affected,
What is your probability distribution across the size of the future population, provided there is not an existential catastrophe?
Do you for example think there is a more than 50% chance that it is greater than 10 billion?
I don't have a probability distribution across the size of the future population. That said, I'm happy to interpret the question in the colloquial, non-formal sense, and just take >50% to mean "likely". In that case, sure, I think it's likely that the population will exceed 10 billion. Credences shouldn't be taken any more s...
Granted, any focus on AI work necessarily reduces the amount of attention going towards near-term issues, which I suppose is your point.
Yep :)
(as far as I can tell their entire point is that you can always do an expected value calculation and "ignore all the effects contained in the first 100" years)
Yes, exactly. One can always find some expected value calculation that allows one to ignore present-day suffering. And worse, one can keep doing this between now and eternity, to ignore all suffering forever. We can describe this using the language of "falsifiability" or "irrefutability" or whatever - the word choice doesn't really matter here. What matters is that this is a very dangerous game to be playing.
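To make this concrete, here's a toy sketch of the swamping dynamic (the numbers are entirely illustrative, chosen by me to make the point vivid, and aren't anyone's actual estimates):

```python
# Toy illustration: a tiny probability attached to an astronomically large
# far-future payoff swamps any near-term benefit in a naive expected-value
# comparison. All numbers below are made up for illustration.

near_term_lives_saved = 3          # certain, observable benefit today
far_future_payoff = 1e24           # hypothetical far-future lives
p_far_future = 1e-10               # arbitrarily small probability

ev_near = 1.0 * near_term_lives_saved        # EV of the near-term option
ev_far = p_far_future * far_future_payoff    # EV of the speculative option

print(ev_far > ev_near)  # True: the speculative term dominates
```

Since neither the payoff nor the probability is constrained by any data, the speculative term can always be made to win.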
Firstly, you and vadmas seem to assume number 2 is the case.
Oops nope, the exact opposite! Couldn't possibly agree more strongly with:

> Working on current problems allows us to create moral and scientific knowledge that will help us make the long-run future go well
Perfect, love it, spot on. I'd be 100% on board with longtermism if this is what it's about - hopefully conversations like these can move it there. (Ben makes this point near the end of our podcast conversation fwiw)
...Do you in fact think that knowledge creation has strong intrinsic value?
I don't see how that gets you out of facing the question
Check out chapter 13 in Beginning of Infinity when you can - everything I was saying in that post is much better explained there :)
Hey Mauricio! Two brief comments -
> Some others are focused on making decisions. From this angle, EV maximization and Bayesian epistemology were never supposed to be frameworks for creating knowledge--they're frameworks for turning knowledge into decisions, and your arguments don't seem to be enough for refuting them as such.
Yes agreed, but these two things become intertwined when a philosophy makes people decide to stop creating knowledge. In this case, it's longtermism preventing the creation of moral and scientific knowledge by grinding ...
I don't see how we could predict anything in the future at all (like the sun's existence or the coin flips that were discussed in other comments). Where is the qualitative difference between short- and long-term predictions?
Haha just gonna keep pointing you to places where Popper writes about this stuff b/c it's far more comprehensive than anything I could write here :)
This question (and the questions re. climate change Max asked in another thread) are the focus of Popper's book The Poverty of Historicism, where "historicism" ...
Impressive write-up! Fun historical note - in a footnote Popper says he got the idea of formulating the proof using prediction machines from personal communication with the "late Dr A. M. Turing".
I don't think I buy the impossibility proof as predicting future knowledge in a probabilistic manner is possible (most simply, I can predict that if I flip a coin now, that there's a 50/50 chance I'll know the coin landed on heads/tails in a minute).
In this example you aren't predicting future knowledge, you're predicting that you'll have knowledge in the future - that is, in one minute, you will know the outcome of the coin flip. I too think we'll gain knowledge in the future, but that's very different from predicting the content of that future know...
The proof [for the impossibility of certain kinds of long-term prediction] is here: https://vmasrani.github.io/assets/pdf/poverty_historicism_quote.pdf .
Note that in that text Popper says:
> ...The argument does not, of course, refute the possibility of every kind of social prediction; on the contrary, it is perfectly compatible with the possibility of testing social theories - for example economic theories - by way of predicting that certain developments will take place under certain conditions. It only refutes the possibility of predicting historical dev
Hi all! Really great to see all the engagement with the post! I'm going to write a follow up piece responding to many of the objections raised in this thread. I'll post it in the forum in a few weeks once it's complete - please reply to this comment if you have any other questions and I'll do my best to address all of them in the next piece :)
Yes, there are certain rare cases where long-term prediction is possible. Usually these involve astronomical systems, which are unique because they are cyclical in nature and unusually unperturbed by the outside environment. Human society unfortunately shares none of these properties, and long-term historical prediction runs into the impossibility proof in epistemology anyway.
Yup, the latter. This is why the lack-of-data problem is the other core part of my argument. Once data is in the picture, we can start to get traction. There is something to fit the measure to, something to be wrong about, and a means of adjudicating between which choice of measure is better than which other choice. Without data, all this probability talk is just idle speculation painted with a quantitative veneer.
Hey Issac,
...On this specific question, I have either misunderstood your argument or think it might be mistaken. I think your argument is "even if we assume that the life of the universe is finite, there are still infinitely many possible futures - for example, the infinite different possible universes where someone shouts a different natural number".
But I think this is mistaken, because the universe will end before you finish shouting most natural numbers. In fact, there would only be finitely many natural numbers you could finish shouting before the univers
Hi Vaden, thanks again for posting this! Great to see this discussion. I wanted to get further along C&R before replying, but:
> no laws of physics are being violated with the scenario "someone shouts the natural number i". This is why this establishes a one-to-one correspondence between the set of future possibilities and the natural numbers
If we're assuming that time is finite and quantized, then wouldn't these assumptions (or, alternatively, finite time + the speed of light) imply a finite upper bound on how many syllables someone can shout befor...
if we helped ourselves to some cast-iron guarantees about the size and future lifespan of the universe (and made some assumptions about quantization) then we'd know that the set of possible futures was smaller than a particular finite number (since there would only be a finite number of time steps and a finite number of ways of arranging all particles at each time step). Then even if I can't write it down, in principle someone could write it down, and the mathematical worries about undefined expectations go away.
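The counting argument above can be sketched numerically (with purely illustrative numbers, not physical estimates):

```python
# Toy sketch of the counting argument: with finitely many discrete time
# steps and finitely many particle arrangements per step, the number of
# possible futures is bounded by arrangements ** steps -- astronomically
# large, but finite. Both numbers below are hypothetical placeholders.
import math

arrangements_per_step = 10**80   # hypothetical configurations per time step
time_steps = 10**6               # hypothetical number of discrete steps

# Work with the base-10 logarithm of the bound rather than constructing
# the (finite but enormous) integer directly.
log10_bound = time_steps * math.log10(arrangements_per_step)
print(log10_bound)  # a finite number of digits, hence a finite set
```

The point is only that the bound exists in principle; nobody could write the number itself down, but the set of futures it bounds is finite, so the measure-theoretic worries about undefined expectations don't arise.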
...It is certainly not obvious that the univ
I second what Alex has said about this discussion being very valuable pushback against ideas that have got some traction - at the moment I think that strong longtermism seems right, but it's important to know if I'm mistaken! So thank you for writing the post & taking some time to engage in the comments.
On this specific question, I have either misunderstood your argument or think it might be mistaken. I think your argument is "even if we assume that the life of the universe is finite, there are still infinitely many possible futures - for example, the ...
You really don't seem like a troll! I think the discussion in the comments on this post is a very valuable conversation and I've been following it closely. I think it would be helpful for quite a few people for you to keep responding to comments.
Of course, it's probably a lot of effort to keep replying carefully to things, so understandable if you don't have time :)
Overall though I think that longtermism is going to end up with practical advice which looks quite a lot like "it is the duty of each generation to do what it can to make the world a little bit better for its descendants."
Goodness, I really hope so. As it stands, Greaves and MacAskill are telling people that they can “simply ignore all the effects [of their actions] contained in the first 100 (or even 1000) years”, which seems rather far from the practical advice both you and I hope they arrive at.
Anyway, I appreciate all your thoughtful feedback - it seems like we agree much more than we disagree, so I’m going to leave it here :)
I think the crucial point of outstanding disagreement is that I agree with Greaves and MacAskill that by far the most important effects of our actions are likely to be temporally distant.
I don't think they're saying (and I certainly don't think) that we can ignore the effects of our actions over the next century; rather I think those effects matter much more for their instrumental value than intrinsic value. Of course, there are also important instrumental reasons to attend to the intrinsic value of various effects, so I don't think intrinsic value should be ignored either.
Hey Owen - thanks for your feedback! Just to respond to a few points -
>Your argument against expected value is a direct rebuttal of the argument for, but in my eyes this is one of your weaker criticisms.
Would you be able to elaborate a bit on where the weaknesses are? I see in the thread you agree the argument is correct (and from googling your name I see you have a pure math background! Glad it passes your sniff-test :) ). If we agree EVs are undefined over possible futures, then in the Shivani example, this is like comparing 3 lives to N...
>Your argument against expected value is a direct rebuttal of the argument for, but in my eyes this is one of your weaker criticisms.
> Would you be able to elaborate a bit on where the weaknesses are? I see in the thread you agree the argument is correct (and from googling your name I see you have a pure math background! Glad it passes your sniff-test :) ).
I think it proves both too little and too much.
Too little, in the sense that it's contingent on things which don't seem that related to the heart of the objections you're making. If we wer...
It does and we should. I wrote a post about this you might find useful: https://vmasrani.github.io/blog/2021/the_credence_assumption/