Hi Max_Daniel! I'm sympathetic to both your and Vaden's arguments, so let me try to bridge the gap between climate change, your Christmas party, and longtermism.
Climate change is a problem now, and we have past data to support projecting already-observed effects into the future. So we can make statements of the sort "if current trends continue with no notable intervention, the Earth will be uninhabitable in x years." Such a statement relies on some assumptions about how future data will resemble past data, but we can be reasonably explicit about those assumptions and debate them.
Future knowledge will undoubtedly help and reframe certain problems, but a key point is that we know where to start gathering data on some of the aspects you raise ("how will people adapt?", "how can we develop renewable energy or batteries?", etc.) because climate change is already a well-defined problem. We have current knowledge that helps us get off the ground.
I agree the measure-theoretic arguments may prove too much, but the number of people at your Christmas party is an unambiguously posed question, and you have data on how many people you invited, how flaky your friends are, etc.
In both cases, you may use probabilistic predictions, based on a set of assumptions, to compel others to act on climate change or compel yourself to invite more people.
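To make that contrast concrete, here's a minimal sketch of the kind of estimate available in the party case. The invite count and per-guest show-up probability are invented, but they're exactly the sort of numbers you could calibrate from past parties:

```python
import random

# Hypothetical numbers: 20 invitees, each assumed to show up independently
# with probability 0.7 (a figure you could calibrate from past parties).
invited = 20
p_show = 0.7

def simulate_attendance(n_trials=100_000):
    """Monte Carlo estimate of the attendance distribution under the assumptions above."""
    counts = [sum(random.random() < p_show for _ in range(invited))
              for _ in range(n_trials)]
    mean = sum(counts) / n_trials
    p_at_least_15 = sum(c >= 15 for c in counts) / n_trials
    return mean, p_at_least_15

mean, p15 = simulate_attendance()
print(f"expected guests ~ {mean:.1f}, P(at least 15 guests) ~ {p15:.2f}")
```

Nothing deep is happening here; the point is just that every input to the model is something we can already observe or argue about directly.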
> the key question is whether longtermism requires the kind of predictions that aren’t feasible
At the risk of oversimplifying by treating the AI safety example as representative of longtermist arguments, the key difference is that we haven't created or observed human-level AI, or even systems that can adaptively set their own goals.
There are meaningful arguments we can use to compel others to discuss issues of safety (in algorithm development, government regulation, etc.). After all, developing and deploying these AIs will be a human process, and we can set guardrails through focused discussion today.
Vaden's point seems to be that arguments relying on expected values or probabilities are of significantly less value in this case. We are not operating on a well-defined problem with already-available or easily collectable data, because we haven't even created the AI.
This seems to be the key point about "predicting future knowledge" being fundamentally infeasible (just as people in 1900 couldn't meaningfully reason about the internet, let alone make expected utility calculations about it). Again, we're not as ignorant as people in 1900 and may have a sense that this problem is important, but can we actually make concrete progress on killer robots today?
Everyone on this forum may have their own assumptions about future AI, or climate change for that matter. We may never be able to align our priors or sufficiently agree on the future, but for the purposes of planning and allocating resources, the discussion around climate change seems significantly more grounded.
> And, in any case, there are arguments for the claim that we must assign probabilities to hypotheses like ‘The die lands on 1’ and ‘There will exist at least 10^16 people in the future.’ If we don’t assign probabilities, we are vulnerable to getting Dutch-booked
The Dutch-book argument relies on your willingness to take both sides of a bet at given odds or probabilities (see Sec. 1.2 of your link). It doesn't tell you that you must assign probabilities, but if you do and are willing to bet on them, they must be consistent with the probability axioms.
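For concreteness, here is a minimal sketch (with invented numbers) of the standard Dutch-book construction against credences that violate the axioms, assuming the agent will buy either contract at their stated price:

```python
# Hypothetical agent whose credences violate the axioms: they price "A" at 0.6
# and "not A" at 0.6 (the prices sum to more than 1), and they will buy either
# contract (which pays 1 if its proposition turns out true) at those prices.
price_A, price_not_A = 0.6, 0.6

for a_is_true in (True, False):
    total_payout = (1 if a_is_true else 0) + (0 if a_is_true else 1)  # exactly one contract pays
    agent_net = total_payout - (price_A + price_not_A)
    print(f"A is {a_is_true}: agent's net = {agent_net:+.1f}")
# The agent loses 0.2 in every state of the world -- a Dutch book.
```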
It may be an interesting shift in focus to consider where you would be indifferent between betting for or against the proposition that ">= 10^24 people exist in the future", since, above, you reason only about taking and not laying billion-to-one odds. An inability to find such a value might cast doubt on the usefulness of probability values here.
> (1) The human population will be at least 8 billion next year, (2) The human population will be at least 7 billion next year. If the probabilities of both hypotheses are undefined, then it would seem permissible to bet on either. But clearly you ought to bet on (2). So it seems like these probabilities are not undefined after all.
I don't believe this relies on any probabilistic argument or assignment of probabilities, since the superiority of bet (2) follows from logic alone: any world in which (1) is true is a world in which (2) is true. Similarly, regardless of your beliefs about the future population, I can lock in an arbitrage position that never loses (e.g. betting against (1) and for (2)) if you're willing to take both sides of both bets at the same odds.
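Here's a minimal sketch of that arbitrage, assuming you quote the same probability p for both propositions and let me take either side of either contract (the numbers are hypothetical):

```python
# Assume you quote the same probability p for both propositions and let me
# take either side of either contract (each contract pays 1 if its
# proposition turns out true). The value of p doesn't matter.
p = 0.5

def my_net(population_billions: float) -> float:
    prop1 = population_billions >= 8   # proposition (1)
    prop2 = population_billions >= 7   # proposition (2)
    sold_1 = p - (1.0 if prop1 else 0.0)    # I sold you a contract on (1) at price p
    bought_2 = (1.0 if prop2 else 0.0) - p  # I bought a contract on (2) at price p
    return sold_1 + bought_2

for pop in (6.5, 7.5, 8.5):
    print(f"population {pop}B: my net = {my_net(pop):+.2f}")
# The position never loses and profits if the population lands between 7 and 8
# billion; a *guaranteed* profit requires you to quote (1) strictly above (2).
```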
Correct me if I'm wrong, but I understand a Dutch book to be taking advantage of my own inconsistent credences (ones that don't obey the laws of probability, as above). So once I build my set of assumptions about future worlds, I should reason probabilistically within that worldview, or else you can arbitrage me, subject to my willingness to take both sides.
If you set your own self-consistent assumptions for reasoning about future worlds, I'm not sure how to bridge the gap. We might debate the reasonableness of the assumptions or priors that go into our thinking. We might negotiate odds at which we would bet on ">= 10^24 people exist in the future", with our far-future progeny transferring money based on the outcome, but I see no way of objectively resolving who is making the "better bet" at the moment.
Hi Max! Again, I agree the longtermist and garden-variety cases may not actually differ with respect to the measure-theoretic features in Vaden's post, but here are some additional comments.
Although "probability of 60%" may be less meaningful than we'd like / expect, you are certainly allowed to enter such bets. In fact, someone willing to take the other side suggests that he/she disagrees. This highlights the difficulty of converging on objective probabilities for future outcomes which aren't directly subject to domain-specific science (e.g. laws of planetary motion). Closer in time, we might converge reasonably closely on an unambiguous measure, or appropriate parametric statistical model.
Regarding the "60% probability" for future outcomes, a useful thought experiment for me was how I might reason about the risk profile of bets made on open-ended future outcomes. I quickly become less convinced I'm estimating meaningful risk the further out I go. Further, we only run the future once, so it's hard to actually confirm our probability is meaningful (as for repeated coin flips). We could make longtermist bets by transferring $ btwn our far-future offspring, but can't tell who comes out on top "in expectation" beyond simple arbitrages.
Honest question, being new to EA: is it not problematic to restrict our attention to possible futures, or aspects of futures, that are relevant to a single issue at a time? Shouldn't we calculate expected utility over billion-year futures for all current interventions, and set our relative propensity for each action to exp{α * EU} / normalizer?
For example, the downstream effects of donating to Anti-Malaria would be difficult to reason about, but we are clueless as to whether its EU would be dwarfed by that of AI safety on the billion-year timescale, e.g. if it helped bring the entire world out of poverty, limiting the political risk of sliding into totalitarian government.
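To be clear about what I mean by that propensity rule, here's a minimal sketch of it as a softmax over expected utilities, with invented EU numbers; the whole question is whether such numbers could ever be filled in non-arbitrarily:

```python
import math

# Invented expected utilities over billion-year futures, purely to illustrate
# the exp{alpha * EU} / normalizer rule.
expected_utility = {
    "AI safety": 3.0,
    "anti-malaria": 2.5,
    "climate": 2.8,
}
alpha = 1.0  # how sharply the propensity concentrates on the highest-EU action

def propensities(eu: dict, alpha: float) -> dict:
    """Softmax over expected utilities: exp(alpha * EU) / normalizer."""
    weights = {action: math.exp(alpha * u) for action, u in eu.items()}
    normalizer = sum(weights.values())
    return {action: w / normalizer for action, w in weights.items()}

print(propensities(expected_utility, alpha))
```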