I'm an undergrad at Stanford, where I study a mix of political science, philosophy, and economics, while helping to organize Stanford's EA group and Existential Risks Initiative. I'm especially interested in improving the long-term future.
Hi, thanks for your comment!
Good points--many cultural establishments are valuable in ways that calculations of lives saved miss, and the situation you describe would be worse if people didn't donate to museums. I'm still worried by this: if we don't (for example) donate to the purchase of bednets that protect people from malaria, then more kids will die of preventable diseases, which would also make the situation worse. So I'm not sure I understand where you're coming from here--it seems to me that any good cause will be worse off if we don't donate to it, so noticing this about some cause won't go far in helping us find the best opportunities to help others.
Hey Vaden, thanks!
these two things become intertwined when a philosophy makes people decide to stop creating knowledge
Yeah, fair. (Although this is less relevant to less naive applications of longtermism, which, as Ben puts it, draw some rather than all of our attention away from knowledge creation.)
Both approaches pass on the buck
I'm not sure I see where you're coming from here. EV does pass the buck on plenty of things (on how to generate options, utilities, probabilities), but as I put it, I thought it directly answered the question (rather than passing the buck) about what kinds of bets to make/how to act under uncertainty:
we should be willing to bet on X happening in proportion to our best guess about the strength of the evidence for the claim that X will happen.
Also, regarding this:
And one doesn't necessarily need to answer your question, because there's no requirement that the criticism take EV form
I don't see how that gets you out of facing the question. If criticism uses premises about how we should act under uncertainty (which it must, to have bearing on our choices), then a discussion will remain badly unfinished until those premises are scrutinized. We could scrutinize them on a case-by-case basis, but that wastes time if some kinds of premises can be refuted in general.
Another worry is that probabilities are so useful that we won't find a better alternative.
I think of probabilities as language for answering the earlier basic question of "What bets should I make?" For example, "There's a 25% chance (i.e. 1:3 odds) that X will happen" is (as I see it) shorthand for "My potential payoff better be at least 3 times bigger than my potential loss for betting on X to be worth it." So probabilities express thresholds in your answers to the question "What bets on event X should I take?" That is, from a pragmatic angle, subjective probabilities aren't supposed to be deep truths about the world; they're expressions of our best guesses about how willing we should be to bet on various events. (Other considerations also make probabilities particularly well-fitting tools for describing our preferences about bets.)
So rejecting the use of probabilities (as I understand them) under severe uncertainty seems to have an unacceptable, maybe even absurd, conclusion: the rejection of consistent thresholds for deciding whether to bet on uncertain events. That's a mistake--if we accept or reject bets on some event without a consistent threshold for what reward:loss ratios are worth taking, then we'll end up doing silly things like refusing a bet on an event, and then accepting a bet on the same event at a less favorable reward:loss ratio.
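The threshold arithmetic above can be made concrete with a short sketch (my own illustration, not from the original discussion; the helper names are made up):

```python
def breakeven_payoff_ratio(p):
    """A subjective probability p implies a betting threshold:
    a bet on X is worth taking only if payoff/loss >= (1 - p) / p."""
    return (1 - p) / p

# A 25% credence corresponds to 1:3 odds in favor, so the potential
# payoff must be at least 3 times the potential loss.
assert breakeven_payoff_ratio(0.25) == 3.0

def expected_value(p, payoff, loss):
    """EV of betting on X: win `payoff` with probability p,
    lose `loss` with probability 1 - p."""
    return p * payoff - (1 - p) * loss

# Consistent with the threshold: at 25% credence, a 3:1 bet breaks even.
assert expected_value(0.25, payoff=3, loss=1) == 0.0

# The inconsistency described above: refusing a 4:1 bet on an event and
# then accepting a 2:1 bet on the same event means turning down a bet
# that strictly dominates the one accepted.
assert expected_value(0.25, 4, 1) > expected_value(0.25, 2, 1)
```

The point of the sketch is only that a probability and a betting threshold are interchangeable descriptions of the same disposition.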
You might be thinking something like "ok, so you can always describe an action as endorsing some betting threshold, but that doesn't mean it's useful to think about this explicitly." I'd disagree, because not recognizing our betting threshold makes it harder to notice and avoid mistakes like the one above. It also takes away clarity and precision of thought that's helpful for criticizing our choices, e.g. it makes an extremely high betting threshold about the value of x-risk reduction look like agnosticism.
Thanks again for your thoughtful post!
Hey Ben, thanks a lot for posting this! And props for having the energy to respond to all these comments :)
I'll try to reframe points that others have made in the comments (and which I tried to make earlier, but less well): I suspect that part of why these conversations sometimes feel like we're talking past one another is that we're focusing on different things.
You and Vaden seem focused on creating knowledge. You (I'd say) correctly note that, as frameworks for creating knowledge, EV maximization and Bayesian epistemology aren't just useless--they're actively harmful, because they distract us from the empirical studies, data analysis, feedback loops, and argumentative criticism that actually create knowledge.
Some others are focused on making decisions. From this angle, EV maximization and Bayesian epistemology aren't supposed to be frameworks for creating knowledge--they're frameworks for turning knowledge into decisions, and your arguments don't seem to be enough for refuting them as such.
To back up a bit, I think probabilities aren't fundamental to decision making. But bets are. Every decision we make is effectively taking or refusing to take a bet (e.g. going outside is betting that I won't be hit in the head by a meteor if I go outside). So it's pretty useful to have a good answer to the question: "What bets should I take?"
In this context, your post isn't convincing me because I don't have a good alternative to my current answer to that question (roughly, "take bets that maximize EV"), and because I think that in an important way there can't be a good alternative.
One of the questions your post leaves me with is: what kinds of bets do you think I should take when I'm uncertain about what will happen? That is, how do you think I should make decisions under uncertainty?
Maximizing EV under a Bayesian framework offers one answer, as you know, roughly that: we should be willing to bet on X happening in proportion to our best guess about the strength of the evidence for the claim that X will happen.
I think you're right in pointing out that this approach has significant weaknesses: it has counterintuitive results when used with some very low probabilities, it's very sensitive to arbitrary judgements and bias, and our best guesses about whether far-future events will happen might be totally uncorrelated with whether they actually happen. (I'm not as compelled by some of your other criticisms, largely for the reasons others' comments discuss.)
Despite these downsides, it seems like a bad idea to drop my current best guess about "what kinds of bets should I take?" until I see a better answer. (Vaden offers a promising approach to making decisions, but it just passes the buck on this--we'll still need an answer to my question when we get to his step 2.) As your familiarity with catastrophic dictatorships suggests, dumping a flawed status quo is a mistake if we don't have a better alternative.
Offered a bet that pays $X if I pick a color and then see if a random ball matches that color, you'll pay more
I'm not sure I follow. If I were to take this bet, it seems that the prior according to which my utility would be lowest is: you'll pick a color to match that gives me a 0% chance of winning. So if I'm ambiguity averse in this way, wouldn't I think this bet is worthless?
(The second point you bring up would make sense to me if this first point did, although then I'd also be confused about the papers' emphasis on commitment.)
Hi Zach, thanks for this!
I have two doubts about the Al-Najjar and Weinstein paper--I'd be curious to hear your (or others') thoughts on these.
First, I'm having trouble seeing where the information aversion comes in. A simpler example than the one used in the paper seems to be enough to communicate what I'm confused about: let's say an urn has 100 balls that are each red or yellow, and you don't know their distribution. Someone averse to ambiguity would (I think) be willing to pay up to $1 for a bet that pays off $1 if a randomly selected ball is red or yellow. But if they're offered that bet as two separate decisions (first betting on a ball being red, and then betting on the same ball being yellow), then they'd be willing to pay less than $0.50 for each bet. So it looks like preference inconsistency comes from the choice being spread out over time, rather than from information (which would mean there's no incentive to avoid information). What am I missing here?
(Maybe the following is how the authors were thinking about this? If you (as a hypothetical ambiguity-averse person) know that you'll get a chance to take both bets separately, then you'll take them both as long as you're not immediately informed of the outcome of the first bet, because you evaluate acts, not by their own uncertainty, but by the uncertainty of your sequence of acts as a whole (considering all acts whose outcomes you remain unaware of). This seems like an odd interpretation, so I don't think this is it.)
[edit: I now think the previous paragraph's interpretation was correct, because otherwise agents would have no way to make ambiguity averse choices that are spread out over time and consistent, in situations like the ones presented in the paper. The 'oddness' of the interpretation seems to reflect the oddness of ambiguity aversion: rather than only paying attention to what might happen differently if you choose one action or another, ambiguity aversion involves paying attention to possible outcomes that will not be affected by your action, since they might influence the uncertainty of your action.]
Second, assuming that ambiguity aversion does lead to information aversion, what do you think of the response that "this phenomenon simply reflects a [rational] trade-off between the intrinsic value of information, which is positive even in the presence of ambiguity, and the value of commitment"?
Hi Vaden, thanks again for posting this! Great to see this discussion. I wanted to get further along C&R before replying, but:
no laws of physics are being violated with the scenario "someone shouts the natural number i". This is why this establishes a one-to-one correspondence between the set of future possibilities and the natural numbers
If we're assuming that time is finite and quantized, then wouldn't these assumptions (or, alternatively, finite time + the speed of light) imply a finite upper bound on how many syllables someone can shout before the end of the universe (and therefore a finite upper bound on the size of the set of shoutable numbers)? I thought Isaac was making this point; not that it's physically impossible to shout all natural numbers sequentially, but that it's physically impossible to shout any of the natural numbers (except for a finite subset).
(Although this may not be crucial, since I think you can still validly make the point that Bayesians don't have the option of, say, totally ruling out faster-than-light number-pronunciation as absurd.)
Note also that EV style reasoning is only really popular in this community. No other community of researchers reasons in this way, and they're able to make decisions just fine.
Are they? I had the impression that most communities of researchers are more interested in finding interesting truths than in making decisions, while most communities of decision makers severely neglect large-scale problems (e.g. pre-2020 pandemic preparedness, farmed animal welfare). (Maybe there are better ways to account for scope than EV, but I'd hesitate to look for them in conventional decision making.)
Thanks for your comment!
hope I wasn't too annoying!
Are you counting cases where there are intra-elite battles for power [...] Not sure how broad "strategic alliances" are referring to.
What I have in mind is: cases when elite group A included group B because group A thought that group B would use its new influence in ways beneficial for group A. I wouldn't count the example you mention, because there the benefit seems to come from the exploiters being weakened (not being able to charge such low prices), rather than from the new influence of the formerly excluded. (I'm trying to distinguish between inclusion that comes from the influence of the excluded, and inclusion that doesn't, because only the latter could help groups like future generations.)

The dynamic you bring up does seem important. I'd currently put it in the miscellaneous bucket of "costs of inclusion" (as a negative cost--a benefit for elites). I wonder if there's a better way to think about it?
From Larissa MacFarquhar's Strangers Drowning:
"What do-gooders lack is not happiness but innocence. They lack that happy blindness that allows most people, most of the time, to shut their minds to what is unbearable. Do-gooders have forced themselves to know, and keep on knowing, that everything they do affects other people, and that sometimes (though not always) their joy is purchased with other people’s joy. And, remembering that, they open themselves to a sense of unlimited, crushing responsibility.”
"This is the difference between do-gooders and ordinary people: for do-gooders, it is always wartime. They always feel themselves responsible for strangers — they always feel that strangers, like compatriots in war, are their own people. They know that there are always those as urgently in need as the victims of battle, and they consider themselves conscripted by duty.”
“Do-gooders learn to codify their horror into a routine and a set of habits they can live with. They know they must do this in order to stay sane. But this partial blindness is chosen and forced and never quite convincing.”
"Have you ever experienced a moment of bliss? On the rapids of inspiration maybe, your mind tracing the shapes of truth and beauty? Or in the pulsing ecstasy of love? Or in a glorious triumph achieved with true friends? Or in a conversation on a vine-overhung terrace one star-appointed night? Or perhaps a melody smuggled itself into your heart, charming it and setting it alight with kaleidoscopic emotions? Or when you prayed, and felt heard?
... you may have discovered inside it a certain idle but sincere thought: 'Heaven, yes! I didn’t realize it could be like this. This is so right, on a whole different level of right; so real, on a whole different level of real. Why can’t it be like this always? Before I was sleeping; now I am awake.'
Quick, stop that door from closing! Shove your foot in so it does not slam shut.
And let the faint draught of the beyond continue to whisper... the tender words of what could be!"
- Nick Bostrom, Letter from Utopia