All of vadmas's Comments + Replies

Hey! Can't respond to most of your points now unfortunately, but just a few quick things :)

(I'm working on a followup piece at the moment and will try to respond to some of your criticisms there) 

My central point is the 'inconsequential in the grand scheme of things' one you highlight here. This is why I end the essay with this quote:

> If among our aims and ends there is anything conceived in terms of human happiness and misery, then we are bound to judge our actions in terms not only of possible contributions to the happiness of man in a ... (read more)

Hey Vaden! 

Yeah, I didn't read your other posts (including Proving Too Much), so it's possible they counter some of my points, clarify your argument more, or the like. 

(The reason I didn't read them is that I read your first post, read most comments on it, listened to the 3 hour podcast, and have read a bunch of other stuff on related topics (e.g., Greaves & MacAskill's paper), so it seems relatively unlikely that reading your other posts would change my mind.)

---

Hmm, something that strikes me about that quote is that it seems to really be ab... (read more)

Very balanced assessment! Nicely done :) 

JackM · 3y
Thanks!

Oops sorry haha neither did I! "this" just meant low-engagement, not  your excellent advice about title choice. Updated :) 

Hehe taking this as a sign I'm overstaying my welcome. Will finish the last post of the series though and move on :) 

JackM · 3y
No I didn't mean that! It's interesting content. I just note that your first post got more engagement, and maybe that was because it was more clearly an attack on longtermism, which is obviously a philosophy close to many EAs' hearts. I'm not endorsing outrageously clickbaity titles, but I think title choice is still something worth thinking about.

You're correct, in practice you wouldn't - that's the 'instrumentalist' point made in the latter half of the post 

Both actually! See section 6 in Making Ado Without Expectations - unmeasurable sets are one kind of expectation gap (6.2.1) and 'single-hit' infinities are another (6.1.2)

MichaelStJules · 3y
When would you need to deal with unmeasurable sets in practice? They can't be constructed explicitly, i.e. with just ZF without the axiom of choice, at least for the Lebesgue measure on the real numbers (and I assume this extends to R^n, but I don't know about infinite-dimensional spaces). I don't think they're a problem.
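For readers who want the background on that claim: the standard example of a non-measurable set is the Vitali set, and its construction is exactly where the axiom of choice enters (this is textbook measure theory, not something argued in the thread). A sketch:

```latex
% Vitali construction (sketch). Define an equivalence relation on [0,1]:
%   x ~ y  iff  x - y is rational.
% Let V pick exactly one point from each equivalence class (axiom of choice).
% The rational translates of V are pairwise disjoint and satisfy
\[
[0,1] \;\subseteq\; \bigcup_{q \,\in\, \mathbb{Q} \cap [-1,1]} (V + q) \;\subseteq\; [-1,2],
\]
% so if V had Lebesgue measure m, countable additivity and translation
% invariance would force
\[
1 \;\le\; \sum_{q \,\in\, \mathbb{Q} \cap [-1,1]} m \;\le\; 3,
\]
% which fails both when m = 0 (the sum is 0) and when m > 0 (the sum is infinite).
```

Solovay's model shows that without full choice (ZF plus dependent choice, granting an inaccessible cardinal) it is consistent that every set of reals is Lebesgue measurable, which is the sense in which such sets can't be constructed explicitly.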

Worth highlighting the passage that the "mere ripples" in the title  refers to  for those  skimming the comments:

 Referring to events like “Chernobyl, Bhopal, volcano eruptions, earthquakes, draughts [sic], World War I, World War II, epidemics of influenza, smallpox, black plague, and AIDS" Bostrom writes that
 
  these types of disasters have occurred many times and our cultural attitudes towards risk have been shaped by trial-and-error in managing such hazards. But tragic as such events are to the people immediately affected,

... (read more)

Nice yeah Ben and I will be there! 

What is your probability distribution across the size of the future population, provided there is not an existential catastrophe? 

Do you for example think there is a more than 50% chance that it is greater than 10 billion?

 

I don't have a probability distribution across the size of the future population. That said, I'm happy to interpret the question in the colloquial, non-formal sense, and just take >50% to mean "likely". In that case, sure, I think it's likely that the population will exceed 10 billion. Credences shouldn't be taken any more s... (read more)

Granted any focus on AI work necessarily reduces the amount of attention going towards near-term issues, which I suppose is your point. 

 Yep :) 

I don't consider human extermination by AI to be a 'current problem' - I think that's where the disagreement lies.  (See my blogpost for further comments on this point) 

Neel Nanda · 3y
I feel a bit confused reading that. I'd thought your case was framed around a values disagreement about the worth of the long-term future. But this feels like a purely empirical disagreement about how dangerous AI is, and how tractable working on it is. And possibly a deeper epistemological disagreement about how to reason under uncertainty.

How do you feel about the case for biosecurity? That might help disentangle whether the core disagreement is about valuing the longterm future/x-risk reduction, vs concerns about epistemology and empirical beliefs, since I think the evidence base is noticeably stronger than for AI. I think there's a pretty strong evidence base that pandemics can happen and, e.g., dangerous pathogens can get developed in labs and released from labs. And I think there's good reason to believe that future biotechnology will be able to make dangerous pathogens, that might be able to cause human extinction, or something close to that. And that human extinction is clearly bad for both the present day, and the longterm future.

If a strong longtermist looks at this evidence, and concludes that biosecurity is a really important problem because it risks causing human extinction and thus destroying the value of the longterm future, and is thus a really high priority, would you object to that reasoning?
MichaelStJules · 3y
Either way, the problems to work on would be chosen based on their longterm potential. It's not clear that say global health and poverty would be among those chosen. Institutional decision-making and improving the scientific process might be better candidates.
JackM · 3y
Apologies, I do still need to read your blogpost! It’s true existential risk from AI isn’t generally considered a ‘near-term’ or ‘current problem’. I guess the point I was trying to make is that a strong longtermist’s view that it is important to reduce the existential threat of AI doesn’t preclude the possibility that they may also think it’s important to work on near-term issues e.g. for the knowledge creation it would afford. Granted any focus on AI work necessarily reduces the amount of attention going towards near-term issues, which I suppose is your point.

(as far as I can tell their entire point is that you can always do an expected value calculation and "ignore all the effects contained in the first 100" years)

 

Yes, exactly. One can always find some  expected value calculation that allows one to ignore present-day suffering. And worse, one can keep doing this between now and eternity, to ignore all suffering forever. We can describe this using the language of "falsifiability" or "irrefutability" or  whatever - the word choice doesn't really matter here. What matters is that this is a very dangerous game to be playing.
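To make the worry concrete, here is the shape of the calculation being objected to, with numbers invented purely for illustration (they are not from Greaves and MacAskill's paper):

```latex
\[
\underbrace{10^{-10}}_{\substack{\text{tiny probability that the}\\ \text{speculative intervention works}}}
\times
\underbrace{10^{18}\ \text{future lives}}_{\text{astronomical payoff}}
\;=\; 10^{8}\ \text{expected lives}
\;\gg\;
\underbrace{10^{3}\ \text{lives}}_{\substack{\text{near-term intervention,}\\ \text{near-certain}}}
\]
```

And if the probability is pushed lower, one can always posit a still larger payoff to keep the speculative option on top.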

weeatquince · 3y
I think it is worth trying to judge the paper / case for longtermism charitably. I do not honestly think that Will means that we can literally ignore everything in the first 100 years – for a start, just because the short term affects the long term. If you want to evaluate interventions, even those designed for long-term impact, you need to look at the short-term impacts.

But that is where I get stuck trying to work out what Will + Hilary mean. I think they are saying more than just that you should look at the long- and short-term effects of interventions (trivially true under most ethical views). They seem to be making empirical, not philosophical, claims about the current state of the world. They appear to argue that if you use expected value calculations for decision making then you will arrive at conclusions that suggest you should care about highly speculative long-term effects over clear short-term effects. They combine this with an assumption that expected value calculations are the correct decision-making tool to conclude that long-term interventions are most likely to be the best interventions.

I think:
* the logic of the argument is roughly correct.
* the empirical claims made are dubious and ideally need more than a few examples to justify, but it is plausible they are correct. I think there is at least a decent case for marginal extra resources being directed to x-risk prevention in the world today.
* the assumption that expected value calculations are the correct decision-making tool is incorrect (as per others at GPI like Owen's work and Andreas' work, bounded rationality, the entire field of risk management, economists like Taleb, Knightian uncertainty, etc. etc.). A charitable reading would say that they recognise this is an assumption but choose not to address it.

Hmmm... I now feel I have a slightly better grasp of what the arguments are after having written that. (Ben I think this counts as disentangling some of the claims made

Yikes... now I'm even more worried ... :| 

Firstly, you and vadmas seem to assume number 2 is the case.

 

Oops nope the exact opposite! Couldn't possibly agree more strongly with

Working on current problems allows us to create moral and scientific knowledge that will help us make the long-run future go well

Perfect, love it, spot on. I'd be 100%  on board with longtermism if this is what it's about - hopefully conversations like these can move it there. (Ben makes this point near the end of our podcast conversation fwiw)

Do you in fact think that knowledge creation has strong intrinsic value?

... (read more)
JackM · 3y
This wasn't clearly worded in hindsight. What I meant by this was that I think you and Ben both seem to assume that strong longtermists don't want to work on near-term problems. I don't think this is a given (although it is of course fair to say that they're unlikely to only want to work on near-term problems).
JackM · 3y
I have to admit that I'm slightly confused as to where the point of contention actually is. If you believe that working on current problems allows us to create moral and scientific knowledge that will help us make the long-run future go well, then you just need to argue this case and if your argument is convincing enough you will have strong longtermists on your side. More importantly though I'm not sure people actually do in fact disagree with this. I haven't come across anyone who has publicly disagreed with this. Have you? It may be the case that both you and strong longtermists are actually on the exact same page without even realising it.

I don't see how that gets you out of facing the question

 

Check out chapter 13 in Beginning of Infinity when you can - everything I was saying in that post is much better explained there :) 

Hey Mauricio! Two brief comments - 

Some others are focused on making decisions. From this angle,  EV maximization and Bayesian epistemology were never supposed to be frameworks for creating knowledge--they're frameworks for turning knowledge into decisions, and your arguments don't seem to be enough for refuting them as such.

Yes agreed, but these two things become intertwined when a philosophy  makes people decide to stop creating knowledge. In this case, it's longtermism preventing the creation of moral and scientific knowledge by grinding ... (read more)

Mau · 3y
Hey Vaden, thanks! Yeah, fair. (Although less relevant to less naive applications of this philosophy, which as Ben puts it draw some rather than all of our attention away from knowledge creation.)

I'm not sure I see where you're coming from here. EV does pass the buck on plenty of things (on how to generate options, utilities, probabilities), but as I put it, I thought it directly answered the question (rather than passing the buck) about what kinds of bets to make/how to act under uncertainty:

Also, regarding this: I don't see how that gets you out of facing the question. If criticism uses premises about how we should act under uncertainty (which it must do, to have bearing on our choices), then a discussion will remain badly unfinished until it's scrutinized those premises. We could scrutinize them on a case-by-case basis, but that's wasting time if some kinds of premises can be refuted in general.

Yes! Exactly! Hence why I keep bringing him up :) 

I don't see how we could predict anything in the future at all (like the sun's existence or the coin flips that were discussed in other comments). Where is the qualitative difference between short- and long-term predictions? 

 

Haha just gonna keep pointing you to places where Popper writes about this stuff b/c it's far more comprehensive than anything I could write here :) 

This question (and the questions re. climate change Max asked in another thread)  are the focus of Popper's book The Poverty of Historicism, where  "historicism" ... (read more)

Impressive write up! Fun historical note - in a footnote Popper says he got the idea of formulating the proof using prediction machines from personal communication with the "late Dr A. M. Turing". 

Oops good catch, updated the post with a link to your comment. 

Yep it's Chapter 22 of The Open Universe (don't have a pdf copy unfortunately) 

I don't think I buy the impossibility proof as predicting future knowledge in a probabilistic manner is possible (most simply, I can predict that if I flip a coin now, that there's a 50/50 chance I'll know the coin landed on heads/tails in a minute).

 

In this example you aren't predicting future knowledge, you're predicting that you'll have knowledge in the future - that is, in one minute, you will know the outcome of the coin flip. I too think we'll gain knowledge in the future, but that's very different from predicting the content of that future know... (read more)

The proof [for the impossibility of certain kinds of long-term prediction] is here: https://vmasrani.github.io/assets/pdf/poverty_historicism_quote.pdf

Note that in that text Popper says:

The argument does not, of course, refute the possibility of every kind of social prediction; on the contrary, it is perfectly compatible with the possibility of testing social theories - for example economic theories - by way of predicting that certain developments will take place under certain conditions. It only refutes the possibility of predicting historical dev

... (read more)
Max_Daniel · 3y
If we're giving a specific probability distribution for the outcome of the coin flip, it seems like we're doing more than that:  Consider that we would predict to know the outcome of the coin flip in one minute no matter what we think the odds of heads are. Therefore, if we do give specific odds (such as 50%), we're doing more than just saying we'll know the outcome in the future.
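A minimal sketch of the distinction being drawn here, with made-up numbers and hypothetical names purely for illustration:

```python
import random

# Toy illustration (not from the thread): two different questions about the
# same coin flip.
#   (1) "Will I know the outcome a minute from now?"  -> essentially certain.
#   (2) "Which outcome will it be?"                    -> 50/50.
# Quoting specific odds for (2) is a substantive prediction about the content
# of the future knowledge, not just the claim (1) that knowledge will arrive.

def run_trials(n=10_000):
    knew_outcome = 0  # trials in which an outcome was observed at all
    heads = 0         # trials in which that outcome was heads
    for _ in range(n):
        outcome = random.choice(["heads", "tails"])  # the flip happens...
        knew_outcome += 1                            # ...so (1) is settled every time
        if outcome == "heads":
            heads += 1
    return knew_outcome / n, heads / n

p_know, p_heads = run_trials()
print(f"P(I will know the outcome) ~ {p_know:.2f}")   # ~1.00
print(f"P(outcome is heads)        ~ {p_heads:.2f}")  # ~0.50
```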
axioman · 3y
It seems like the proof critically hinges on assertion 2), which is not proven in your link. Can you point me to the pages of the book that contain the proof? I agree that proofs are logical, but since we're talking about probabilistic predictions, I'd be very skeptical of the relevance of a proof that does not involve mathematical reasoning.

Hi all! Really great to see all the engagement with the post! I'm going to write a follow up piece responding to many of the objections raised in this thread. I'll post it in the forum in a few weeks once it's complete - please reply to this comment if you have any other questions and I'll do my best to address all of them in the next piece :)

See discussion below w/ Flodorner on this point :) 

You are Flodorner! 

Yes, there are certain rare cases where longterm prediction is possible. Usually these involve astronomical systems, which are unique because they are cyclical in nature and unusually unperturbed by the outside environment. Human society doesn't share any of these properties unfortunately, and long term historical prediction runs into the impossibility proof in epistemology anyway.  

axioman · 3y
I don't think I buy the impossibility proof as predicting future knowledge in a probabilistic manner is possible (most simply, I can predict that if I flip a coin now, that there's a 50/50 chance I'll know the coin landed on heads/tails in a minute). I think there is some important true point behind your intuition about how knowledge (especially of more complex form than about a coin flip) is hard to predict, but I am almost certain you  won't be able to find any rigorous mathematical proof for  this intuition because reality is very fuzzy (in a mathematical sense, what exactly is the difference between the coin flip and knowledge about future technology?) so I'd be a lot more excited about other types of arguments (which will likely only support weaker claims). 

Yup, the latter. This is why the lack-of-data problem is the other core part of my argument. Once data is in the picture, we can start to get traction. There is something to fit the measure to, something to be wrong about, and a means of adjudicating between which choice of measure is better than which other. Without data, all this probability talk is just idle speculation painted with a quantitative veneer.
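As a toy illustration of the "traction" point (invented priors and a simulated coin, nothing from the original argument): with shared data, even wildly different priors get pulled toward the same answer, so there is something to be wrong about; without data, the disagreement just sits there.

```python
import random

# Two people start with very different Beta priors over a coin's bias.
# Without data their disagreement is permanent; with shared data the
# posteriors converge toward the truth.

random.seed(0)
true_bias = 0.7
flips = [random.random() < true_bias for _ in range(1000)]
heads = sum(flips)
tails = len(flips) - heads

# Prior pseudo-counts (alpha, beta) for two very different priors.
prior_a = (1, 9)   # expects the coin to be heavily tails-biased
prior_b = (9, 1)   # expects the coin to be heavily heads-biased

def posterior_mean(alpha, beta, h, t):
    """Mean of the Beta posterior after observing h heads and t tails."""
    return (alpha + h) / (alpha + beta + h + t)

print("No data:   ", prior_a[0] / sum(prior_a), "vs", prior_b[0] / sum(prior_b))
print("After data:", round(posterior_mean(*prior_a, heads, tails), 3),
      "vs", round(posterior_mean(*prior_b, heads, tails), 3))
```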

axioman · 3y
Ok, makes sense. I think that our ability to make predictions about the future steeply declines with increasing time horizons, but find it somewhat implausible that it would become entirely uncorrelated with what is actually going to happen in finite time. And it does not seem to be the case that data supporting long-term predictions is impossible to come by: while it might be pretty hard to predict whether AI risk is going to be a big deal by whatever measure, I can still be fairly certain that the sun will exist in 1000 years; in part due to a lot of data collection and hypothesis testing done by physicists.

Hey Isaac,

On this specific question, I have either misunderstood your argument or think it might be mistaken. I think your argument is "even if we assume that the life of the universe is finite, there are still infinitely many possible futures - for example, the infinite different possible universes where someone shouts a different natural number".

But I think this is mistaken, because the universe will end before you finish shouting most natural numbers. In fact, there would only be finitely many natural numbers you could finish shouting before the univers

... (read more)
Mau · 3y

Hi Vaden, thanks again for posting this! Great to see this discussion. I wanted to get further along C&R before replying, but:

no laws of physics are being violated with the scenario "someone shouts the natural number i".  This is why this establishes a one-to-one correspondence between the set of future possibilities and the natural numbers

If we're assuming that time is finite and quantized, then wouldn't these assumptions (or, alternatively, finite time + the speed of light) imply a finite upper bound on how many syllables someone can shout befor... (read more)

if we helped ourselves to some cast-iron guarantees about the size and future lifespan of the universe (and made some assumptions about quantization) then we'd know that the set of possible futures was smaller than a particular finite number (since there would only be a finite number of time steps and a finite number of ways of arranging all particles at each time step). Then even if I can't write it down, in principle someone could write it down, and the mathematical worries about undefined expectations go away.

 

It's certainly not obvious that the univ

... (read more)
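For reference, the counting argument in the passage quoted above can be written out as a simple bound; this is only a sketch under the quoted assumptions (finite lifespan, quantization), not a claim that those assumptions hold:

```latex
\[
\left|\,\text{possible futures}\,\right| \;\le\; C^{\,T},
\]
% where T is the (finite) number of time steps and C the (finite) number of
% particle configurations available at each step: an enormous but finite
% number, so expectations over this space are at least well defined.
```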

I second what Alex has said about this discussion being very valuable pushback against ideas that have got some traction - at the moment I think that strong longtermism seems right, but it's important to know if I'm mistaken! So thank you for writing the post & taking some time to engage in the comments.

On this specific question, I have either misunderstood your argument or think it might be mistaken. I think your argument is "even if we assume that the life of the universe is finite, there are still infinitely many possible futures - for example, the ... (read more)

You really don't seem like a troll! I think the discussion in the comments on this post is a very valuable conversation and I've been following it closely. I think it would be helpful for quite a few people for you to keep responding to comments

Of course, it's probably a lot of effort to keep replying carefully to things, so understandable if you don't have time :)

Overall though I think that longtermism is going to end up with practical advice which looks quite a lot like "it is the duty of each generation to do what it can to make the world a little bit better for its descendants."

Goodness, I really hope so. As it stands, Greaves and MacAskill are telling people that they can “simply ignore all the effects [of their actions] contained in the first 100 (or even 1000) years”, which seems rather far from the practical advice both you and I hope they arrive at.

Anyway, I appreciate all your thoughtful feedback - it seems like we agree much more than we disagree, so I’m going to leave it here :)

I think the crucial point of outstanding disagreement is that I agree with Greaves and MacAskill that by far the most important effects of our actions are likely to be temporally distant. 

I don't think they're saying (and I certainly don't think) that we can ignore the effects of our actions over the next century; rather I think those effects matter much more for their instrumental value than intrinsic value. Of course, there are also important instrumental reasons to attend to the intrinsic value of various effects, so I don't think intrinsic value should be ignored either.

Hey Owen - thanks for your feedback! Just to respond to a few points - 

>Your argument against expected value is a direct rebuttal of the argument for, but in my eyes this is one of your weaker criticisms.

Would you be able to elaborate a bit on where the weaknesses are? I see in the thread you agree the argument is correct (and from googling your name I see you have a pure math background! Glad it passes your sniff-test :) ). If we agree EVs are undefined over possible futures, then in the Shivani example, this is like comparing 3 lives to N... (read more)

Owen Cotton-Barratt · 3y
I meant if everyone were actively engaged in this project. (I think there are plenty of people in the world who are just getting on with their thing, and some of them make the world a bit worse rather than a bit better.) Overall though I think that longtermism is going to end up with practical advice which looks quite a lot like "it is the duty of each generation to do what it can to make the world a little bit better for its descendants"; there will be some interesting content in which dimensions of betterness we pay most attention to (e.g. I think that the longtermist lens on things makes some dimension like "how much does the world have its act together on dealing with possible world-ending catastrophes?" seem really important).
Owen Cotton-Barratt · 3y
I'm sympathetic to something in the vicinity of your complaint here, striving to compare like with like, and being cognizant of the weaknesses of the comparison when that's impossible (e.g. if someone tried the reasoning from the Shivani example in earnest rather than as a toy example in a philosophy paper I think it would rightly get a lot of criticism).

(I don't think that "subjective" and "objective" are quite the right categories here, btw; e.g. even the GiveWell estimates of cost-to-save-a-life include some subjective components.)

In terms of your general sympathy with longtermism -- it makes sense to me that the behaviour of its proponents should affect your sympathy with those proponents. And if you're thinking of the position as a political stance (who you're allying yourself with, etc.) then it makes sense that it could affect your sympathy with the position. But if you're engaged in the business of truth-seeking, why does it matter what the proponents do? You should ignore the bad arguments and pay attention to the best ones you can see -- whether or not anyone actually made them. (Of course I'm expressing a super idealistic position here, and there are practical reasons not to be all the way there, but I still think it's worth thinking about.)

>Your argument against expected value is a direct rebuttal of the argument for, but in my eyes this is one of your weaker criticisms.

Would you be able to elaborate a bit on where the weaknesses are? I see in the thread you agree the argument is correct (and from googling your name I see you have a pure math background! Glad it passes your sniff-test :) ).

I think it proves both too little and too much.

Too little, in the sense that it's contingent on things which don't seem that related to the heart of the objections you're making. If we wer... (read more)