Hello! My name is Vaden Masrani and I'm a grad student at UBC in machine learning. I'm a friend of the community and have been very impressed with all the excellent work done here, but I've become very worried about the longtermist trend that has been developing recently.
I've written a critical review of longtermism here in the hope that bringing an 'outsider's' perspective might help stimulate some new conversation in this space. I'm posting the piece on the forum hoping that William MacAskill and Hilary Greaves might see and respond to it. There's also a little reddit discussion forming that might be of interest to some.
Cheers!
Hi Elliott, just a few side comments from someone sympathetic to Vaden's critique:
I largely agree with your take on time preference. One thing I'd like to emphasize is that the thought experiments used to justify a zero discount rate are typically conditional on knowing that future people will exist, and what the consequences will be. This is useful for sorting out our values, but less so when it comes to action, because we never have such guarantees. I think there's often a move made where people say "in theory we should have a zero discount rate, so let's focus on the future!" But the conclusion ignores that in practice we never have the knowledge of the future that those thought experiments assume.
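To make the discounting point concrete, here's a minimal sketch (toy numbers, purely illustrative - the utility stream and rates are made up) of how a pure time-preference rate reshapes the weight given to future generations. With a zero rate, every generation counts equally, which is what drives the "focus on the future" conclusion:

```python
# Toy illustration of pure time-preference discounting.
# All numbers are hypothetical.

def discounted_welfare(utilities, rate):
    """Present value of a stream of per-generation utilities.

    utilities: list of utility values, one per generation
    rate: pure time-preference discount rate (0 means no discounting)
    """
    return sum(u / (1 + rate) ** t for t, u in enumerate(utilities))

stream = [100] * 50  # 50 generations, each with utility 100

print(discounted_welfare(stream, rate=0.0))   # 5000.0 -- every generation counts equally
print(discounted_welfare(stream, rate=0.05))  # ~1917  -- distant generations barely register
```

Note that the calculation only goes through if you already know the stream of future utilities - which is exactly the conditional knowledge we never have in practice.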
Re: the dice example:
True - there are infinitely many things that can happen while the die is in the air, but that's not the outcome space we're concerned with. We're concerned with the result of the roll, which is a finite space with six outcomes. So of course probabilities are defined in that case (and in the 6- vs 20-sided die case). Moreover, they're defined by us, because we've chosen a particular mathematical technique that applies relatively well to the situation at hand. When reasoning about all possible futures, however, we're trying to shoehorn in mathematics that isn't appropriate to the problem (math is a tool - sometimes it's useful, sometimes it's not). We can't even write out the outcome space in this scenario, let alone define a probability measure over it.
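To make the contrast explicit, here's a trivial sketch: for a die roll we can write down the entire outcome space and a measure satisfying the probability axioms, whereas for "all possible futures" step one already fails, because the outcome space can't be enumerated.

```python
from fractions import Fraction

# For a die roll, the outcome space is finite and fully enumerable,
# so defining a probability measure is trivial.
outcomes = range(1, 7)                      # Omega = {1, 2, 3, 4, 5, 6}
P = {o: Fraction(1, 6) for o in outcomes}   # the uniform measure we *chose*

# The axioms are easy to check: non-negative, sums to 1.
assert all(p >= 0 for p in P.values())
assert sum(P.values()) == 1

# P(even) = P({2, 4, 6}) = 1/2, by additivity over disjoint outcomes.
print(sum(P[o] for o in outcomes if o % 2 == 0))  # 1/2

# For "all possible futures" there is no analogue of `outcomes` to
# write down, so there is nothing to define a measure over.
```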
Once you buy into the idea that you must quantify all your beliefs with numbers, then yes - you have to start assigning probabilities to all eventualities, and those probabilities must obey certain equations. But you can drop that framework completely. Numbers are not primary - again, they are just a tool. I know this community is deeply steeped in Bayesian epistemology, so this is going to be an uphill battle, but assigning credences to beliefs is not the way to generate knowledge. (I recently wrote about this briefly here.) Anyway, the Bayesianism debate is a much longer one (one that I think the community needs to have, however), so I won't yell about it any longer, but I do want to emphasize that Bayesianism is only one way to reason about the world (and it leads to many paradoxes and inconsistencies, as you all know).
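For what it's worth, the "certain equations" I have in mind are just the coherence constraints of the probability calculus. A toy sketch of the bookkeeping the framework commits you to (the credences and likelihoods here are hypothetical, chosen only for illustration):

```python
# Toy illustration of the constraints Bayesianism imposes.
# All numbers are hypothetical.

p_A = 0.7            # credence in some eventuality A
p_not_A = 1 - p_A    # forced: credences in A and not-A must sum to 1

# Having fixed priors and likelihoods, the posterior is no longer
# up to you -- Bayes' rule dictates it.
p_E_given_A = 0.9
p_E_given_not_A = 0.2
p_E = p_E_given_A * p_A + p_E_given_not_A * p_not_A
print(p_E_given_A * p_A / p_E)   # P(A|E) ~= 0.913
```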
Appreciate your engagement :)