Hello! My name is Vaden Masrani and I'm a grad student in machine learning at UBC. I'm a friend of the community and have been very impressed with all the excellent work done here, but I've become quite worried about the longtermist trend that has been developing recently.
I've written a critical review of longtermism here in the hope that bringing an 'outsider's' perspective might help stimulate some new conversation in this space. I'm posting the piece on the forum in the hope that William MacAskill and Hilary Greaves might see and respond to it. There's also a little reddit discussion forming that might be of interest to some.
Cheers!
I think the argument proves both too little and too much.
Too little, in the sense that it's contingent on things which don't seem that related to the heart of the objections you're making. If we were certain that the accessible universe were finite (as my lay understanding of current physical theories suggests), and we had certainty about some finite time horizon (however large), then all of the EVs would become defined again and this technical objection would disappear.
In that world, would you be happy to drop your complaints? I don't really think you should, so it would be good to understand what the real heart of the issue is.
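To spell out the technical point about why finiteness rescues the expectations, here is a sketch of the standard argument (the symbols v, B, and p are just illustrative notation, not anything from the post): with countably many possible futures whose values are all bounded, any expectation is automatically finite and well defined, so the "undefined EV" problem needs unbounded value.

```latex
% Sketch: if every possible future \omega has value bounded by a finite B,
% then for any probability distribution p over futures the expectation converges.
\[
  |v(\omega)| \le B \ \text{for all } \omega
  \quad\Longrightarrow\quad
  \bigl|\mathbb{E}_p[v]\bigr|
  = \Bigl| \sum_{\omega} p(\omega)\, v(\omega) \Bigr|
  \le \sum_{\omega} p(\omega)\, B
  = B .
\]
```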
Too much, in the sense that if we apply the argument naively then it appears to rule out using EVs as a decision-making tool in many practical situations (where subjective probabilities are fed into the process), including many where we have real experience of using it and it has a good track record.
Overall, my take is something like:
[Mostly an aside] I think the example has been artificially simplified to make the point cleaner for an audience of academic philosophers, and if you take into account indirect effects from giving to AMF, then properly we should be comparing NaN to NaN. But I agree that we should not be trying to make any longtermist decisions by literally taking expectations of the number of future lives saved.
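As a toy illustration of the "NaN to NaN" point (this is just standard floating-point behaviour, nothing specific to the AMF example): when an expectation reduces to something like infinity minus infinity it is undefined, and no comparison between two undefined quantities gives you anything to decide with.

```python
# Toy illustration: an "expected value" that reduces to inf - inf is undefined (NaN),
# and comparing two NaNs yields no ordering to base a decision on.
inf = float("inf")

ev_intervention_a = inf - inf   # huge positive and huge negative terms -> nan
ev_intervention_b = inf - inf

print(ev_intervention_a)                       # nan
print(ev_intervention_a > ev_intervention_b)   # False
print(ev_intervention_a < ev_intervention_b)   # False
print(ev_intervention_a == ev_intervention_b)  # False -- no comparison holds
```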
Not in my view. I don't think we should be using expectations over future lives as a fundamental decision-making tool, but I do think that thinking in terms of expectations can be helpful for understanding possible future paths. I think it's a moderately robust point that the long-term impacts of our actions are predictably a bigger deal than the short-term impacts, and this point would survive, for example, artificially capping the size of possible futures we could reach.
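Here is a toy sketch of that "survives capping" claim (all of the probabilities and sizes below are invented purely for illustration, not numbers from the post or from this comment): once a finite cap is placed on the value of possible futures the expectations are well defined, and even with modest caps the long-term term can still dominate a sure short-term benefit.

```python
# Toy sketch (all numbers invented purely for illustration): even if we artificially
# cap the size of the future, a small probability of affecting it can still carry a
# larger expectation than a certain short-term benefit.
short_term_value = 1.0          # e.g. one life saved now, with certainty
p_long_term = 1e-6              # tiny subjective probability of a long-term effect
uncapped_future_size = 1e30     # "astronomical" number of future lives

for cap in [1e9, 1e12, 1e15]:   # various artificial caps on how big the future can be
    ev_long_term = p_long_term * min(uncapped_future_size, cap)
    print(f"cap={cap:.0e}: EV(long-term)={ev_long_term:.0e} vs EV(short-term)={short_term_value}")
```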
(I think it's a super important question how longtermists should make decisions; I'll write up some more of my thoughts on this sometime.)