Hello! My name is Vaden Masrani and I'm a grad student in machine learning at UBC. I'm a friend of the community and have been very impressed with all the excellent work done here, but I've become quite worried about the longtermist trend that has been developing recently.
I've written a critical review of longtermism here in the hope that bringing an 'outsider's' perspective might help stimulate some new conversation in this space. I'm posting the piece on the forum hoping that William MacAskill and Hilary Greaves might see and respond to it. There's also a little reddit discussion forming that might be of interest to some.
Cheers!
I can see two possible types of arguments here, which are importantly different.
[ETA: In this comment, which I hadn't seen before writing mine, Vaden seems to confirm that they were trying to make an argument of the second rather than the first kind.]
In this comment I'll explain why I think both types of arguments would prove too much and thus are non-starters. In other comments I'll make some more technical points about type 1 and type 2 arguments, respectively.
(I split my points between comments so the discussion can be organized better and people can use up-/downvotes in a more fine-grained way.)
I'm doing this largely because I'm worried that the technical language in Vaden's post and your comment will suggest to some readers that longtermism specifically faces deep challenges rooted in advanced mathematics. In fact I think that characterization would be seriously mistaken (at least regarding the issues you point to). Instead, I think the challenges either have little to do with the technical results you mention, or are technical but not specific to longtermism.
[After writing I realized that the below has a lot of overlap with what Owen and Elliot have written earlier. I'm still posting it because there are slight differences and there is no harm in doing so, but people who read the previous discussions may not want to read this.]
Both types of arguments prove too much because they (at least based on the justifications you've given in the post and discussion here) are not specific to longtermism at all. They would e.g. imply that I can't have a probability distribution over how many guests will come to my Christmas party next week, which is absurd.
To see this, note that everything you say would apply in a world that ends in two weeks, or to deliberations that ignore any effects after that time. In particular, it is still true that the set of these possible 'short futures' is infinite (my housemate could enter the room any minute and shout any natural number), and that the set of possible futures contains things that, like your example of a sequence of black and white balls, have no unique 'natural' structure or measure (e.g. the collection of atoms in a certain part of my table, or the types of possible items on that table).
So these arguments seem to show that we can never meaningfully talk about the probability of any future event, whether it happens in a minute or in a trillion years. Clearly, this is absurd.
Now, there is a defence against this argument, but I think this defence is just as available to the longtermist as it is to (e.g.) me when thinking about the number of guests at my Christmas party next week.
This defence is that for any instance of probabilistic reasoning about the future we can simply ignore most possible futures, and in fact only need to reason over specific properties of the future. For instance, when thinking about the number of guests at my Christmas party, I can ignore people shouting natural numbers or the collection of objects on my table - I don't need to reason about anything close to a complete or "low-level" (e.g. in terms of physics) description of the future. All I care about is a single natural number - the number of guests - and each number corresponds to a huge set of futures at the level of physics.
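To sketch what I mean in slightly more formal terms (this framing is my own gloss, not something spelled out in Vaden's post): let $\Omega$ be the set of all detailed possible futures and let $G \colon \Omega \to \mathbb{N}$ map each future to the number of guests who show up. All my reasoning about the party needs is a distribution over the values of $G$,

$$p_k \ge 0, \qquad \sum_{k \in \mathbb{N}} p_k = 1,$$

where $p_k$ is implicitly the probability of the coarse event $G^{-1}(\{k\}) \subseteq \Omega$ ("exactly $k$ guests come"). I never have to assign probabilities to arbitrary subsets of $\Omega$, only to the events picked out by the property I actually care about.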
But this works for many if not all longtermist cases as well! The number of people in one trillion years is a natural number, as is the year in which transformative AI is developed, etc. Whether identifying the relevant properties, or the probability measure we're adopting, is harder than in typical short-term cases - and maybe prohibitively hard - is an interesting and important question. But it's an empirical question, not one we should expect to answer by appealing to mathematical considerations around the cardinality or measurability of certain sets.
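(As a side remark, and again my own gloss rather than anything in the post: once we're reasoning about a property that takes values in a countable set like $\mathbb{N}$, the measurability worries essentially disappear. With the usual choice of letting every subset of $\mathbb{N}$ be measurable, any sequence $p_1, p_2, \ldots$ with $p_n \ge 0$ and $\sum_n p_n = 1$ - for instance $p_n = 2^{-n}$ - defines a perfectly respectable probability measure. The hard part is choosing a sequence that is well-calibrated, which is exactly the empirical question above.)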
Separately, there may be an interesting question about how I'm able to identify the high-level properties I'm reasoning about - whether that property is the number of people coming to my party or the number of people living in a trillion years. How do I know I "should pay attention" only to the number of party guests and not to which natural numbers they may be shouting? And how am I able to "bridge" between such high-level properties and more low-level descriptions of futures (e.g. a list of the specific people coming to the party, or a video of the party, or even a set of initial conditions plus laws of motion for all relevant elementary particles)? There may be interesting questions here, but I think they are questions for philosophy or psychology which, in my view, aren't particularly illuminated by appealing to concepts from measure theory. (And again, they aren't specific to longtermism.)