
Why the standard EV Maximization technique is a problem

EV Maximization, despite being an idealization (and thus unable to represent bounded agents like us), does have useful lessons for us, though there has recently been criticism of expected value calculations and EV maximization. I argue that the critiques of EV Maximization, to the extent they are real, derive from a problem with the standard way EV Maximization is taught and framed.

Specifically, EV Maximization as usually taught assumes logical omniscience: we assume that the agent already knows all the mathematics there is, and that they know all outcomes of bets. Especially if mathematics is reality, which is very plausible, this amounts to asking them to know everything about every reality. That is impossible for finite reasoners like us, and essentially amounts to assuming that our own internal reasoning is perfect, when it isn't. For details, see page 3 of the Vingean Reflection paper:

https://intelligence.org/files/VingeanReflection.pdf

So what should we modify EV calculations with?

A noise floor, below which our reasoning process fails to deliver resilient conclusions. Specifically, if below a certain probability we can't reach a conclusion that is robust to the specific numbers of the problem (that is, the conclusion flips when we perturb the inputs), then we should treat the result as noise and disregard it.

The biggest reason to do this is that, below a certain probability, you must start relying on non-resilient reasoning, where the conclusions are highly sensitive to small changes in the parameters.
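To make the idea concrete, here is a minimal Python sketch of what a noise-floored, resilience-checked EV comparison might look like. The 10^-4 threshold, the ±50% jitter, and the function names are all illustrative assumptions on my part, not a canonical procedure:

```python
import random

NOISE_FLOOR = 1e-4  # assumed threshold; matches the 10^-4 used later in the post

def noise_floored_ev(prob: float, payoff: float) -> float:
    """Expected value, except probabilities below the noise floor are
    treated as indistinguishable from zero rather than multiplied out."""
    if prob < NOISE_FLOOR:
        return 0.0
    return prob * payoff

def ranking_is_resilient(options, jitter=0.5, trials=1_000) -> bool:
    """Crude resilience check: does the best (prob, payoff) option stay
    best when every probability is perturbed by up to +/-50% (relative)?
    If the winner flips across trials, the conclusion depends on the
    exact numbers and should be treated as noise."""
    best = max(range(len(options)),
               key=lambda i: noise_floored_ev(*options[i]))
    for _ in range(trials):
        perturbed = [(p * random.uniform(1 - jitter, 1 + jitter), v)
                     for p, v in options]
        winner = max(range(len(perturbed)),
                     key=lambda i: noise_floored_ev(*perturbed[i]))
        if winner != best:
            return False
    return True
```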

How this solves Pascal's Mugging and Pascal's Wager

Now, while Pascal's Mugging has multiple problems (including assuming a random draw when there are adversarial forces), we will create a version of Pascal's Mugging without adversarial forces. Let's say there are 3 interventions, as follows:

1 is an intervention by GiveWell that has a chance to save 50-50,000 lives this year with 30-90% probability.

2 is an intervention by an AI Alignment group that can help create a flourishing future for 10^40 humans within the next hundred years with 10-60% probability.

3 is an intervention by a fanatical longtermist group that has a chance to create a flourishing future for 3^^^3 humans; it takes 10^10 years to complete and has a 1 in 3^^^3 chance, or maybe a 1 in 3^^3 chance, of succeeding.

Which is the best option, given bounded rationality constraints and potential time limits? Think about it.

The answer is 2, by far. Option 3 requires more-precise knowledge than your brain can support, so any conclusion about it is not resilient; once we apply a noise floor of 10^-4 probability, option 3 drops out, and option 2's much larger stakes beat option 1.
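Reusing noise_floored_ev and ranking_is_resilient from the sketch above, the comparison might be encoded as follows. The numbers are illustrative midpoints I've chosen; 3^^^3 is far too large to represent as a float, so a stand-in value is used, which is harmless because its probability sits below the noise floor and gets zeroed out regardless:

```python
# (prob, payoff) pairs with illustrative midpoint numbers.
givewell  = (0.60, 25_000)    # ~30-90% chance to save 50-50,000 lives
alignment = (0.35, 1e40)      # 10-60% chance of a future of 10^40 humans
fanatical = (1e-300, 1e300)   # "1 in 3^^^3" stand-in: below any noise floor

options = [givewell, alignment, fanatical]
for name, opt in zip(["GiveWell", "Alignment", "Fanatical"], options):
    print(name, noise_floored_ev(*opt))
# Fanatical scores 0 under the noise floor; Alignment dominates,
# and the ranking survives perturbation of the probability estimates.
print("resilient?", ranking_is_resilient(options))
```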

Pascal's Wager can be handled the same way. Even if we accept that infinity is something real, we may not be able to reason reliably about it at arbitrarily small probabilities, and an infinite payoff is only one state out of infinitely many possible outcomes, so again the conclusions are non-resilient. Thus we should only take arguments over infinities seriously if their probability rises above the noise floor.

Basically, the entire problem of Pascal's Mugging and Pascal's Wager exists because standard EV calculations don't know what to do with uncertainty, and because they privilege a specific number of utilons without any prior reason to think we can privilege it. Hence the need for noise floors.

I will also quote Holden Karnofsky and Eliezer Yudkowsky, from the link below, for additional reasons to be wary of Pascalian probabilities of huge results:

https://www.lesswrong.com/posts/RdpqsQ6xbHzyckW9m/why-we-can-t-take-expected-value-estimates-literally-even#comments

From Holden Karnofsky:

In such a world, when people decided that a particular endeavor/action had outstandingly high EEV, there would (too often) be no justification for costly skeptical inquiry of this endeavor/action. For example, say that people were trying to manipulate the weather; that someone hypothesized that they had no power for such manipulation; and that the EEV of trying to manipulate the weather was much higher than the EEV of other things that could be done with the same resources. It would be difficult to justify a costly investigation of the "trying to manipulate the weather is a waste of time" hypothesis in this framework. Yet it seems that when people are valuing one action far above others, based on thin information, this is the time when skeptical inquiry is needed most. And more generally, it seems that challenging and investigating our most firmly held, "high-estimated-probability" beliefs - even when doing so has been costly - has been quite beneficial to society.

Related: giving based on EEV seems to create bad incentives. EEV doesn't seem to allow rewarding charities for transparency or penalizing them for opacity: it simply recommends giving to the charity with the highest estimated expected value, regardless of how well-grounded the estimate is. Therefore, in a world in which most donors used EEV to give, charities would have every incentive to announce that they were focusing on the highest expected-value programs, without disclosing any details of their operations that might show they were achieving less value than theoretical estimates said they ought to be.

From Eliezer Yudkowsky:

I'm a major fan of Down-To-Earthness as a virtue of rationality, and I have told other SIAI people over and over that I really think they should stop using "small probability of large impact" arguments. I've told cryonics people the same. If you can't argue for a medium probability of a large impact, you shouldn't bother.

Part of my reason for saying this is, indeed, that trying to multiply a large utility interval by a small probability is an argument-stopper, an attempt to shut down further debate, and someone is justified in having a strong prior, when they see an attempt to shut down further debate, that further argument if explored would result in further negative shifts from the perspective of the side trying to shut down the debate.

So what is EV good for, anyway?

There are several purposes you can use EV distributions for (not point estimates, which often give false precision and thus don't represent uncertainty well; a sketch of the distribution approach follows this list):

  1. As a check on whether something is actually doing what it claims. Here, a negative result (less impact than claimed) matters more than a positive one. If a negative result occurs, there's a reasonable probability that something is wrong somewhere, no matter how good it looks, and that should be taken seriously. Passing an EV check raises the probability that they're doing well, though by less than a negative result lowers it.

  2. As a way to upper-bound different cause areas. So long as the probabilities of success don't fall below the noise floor, EV is a reasonable guide to where you should direct your efforts. The uncertainty means you should probably be a little risk-averse, or update somewhat downward on the actual distribution of EV (since you don't fully know whether the intervention will work, and the top of the range is just one of many outcomes), but you shouldn't be wildly risk-averse: you should still take some risk in trying out interventions.

  3. As a way to detect nonsensical reasoning. While EV in its pure form has issues like Pascal's Mugging and Wager, it does show what types of reasoning to avoid. This is why critiques that target EA's and LW's focus on, say, longtermism or AI Alignment are nonsensical, given both the large stakes and the non-Pascalian probabilities of affecting each of these areas. The arguments tend to claim that the probabilities may be lower than EA thinks, and that we should therefore deprioritize these areas. But unless they show that the probabilities are so small as to be essentially Pascalian, that is not enough of an argument to deprioritize a cause area.
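As a rough sketch of what "EV distributions, not point estimates" could look like in practice, one can Monte-Carlo the uncertain inputs and report percentiles instead of a single number. The uniform draws and the percentile summary below are placeholder assumptions, not a recommended methodology:

```python
import random

def ev_distribution(prob_range, payoff_range, samples=10_000):
    """Sample an EV distribution by drawing the uncertain probability
    and payoff from their estimated ranges.  Uniform draws are a
    placeholder; real estimates would use whatever distribution the
    evidence supports."""
    return sorted(random.uniform(*prob_range) * random.uniform(*payoff_range)
                  for _ in range(samples))

# Intervention 1 from earlier: 30-90% chance of saving 50-50,000 lives.
dist = ev_distribution((0.30, 0.90), (50, 50_000))
print("median EV:", dist[len(dist) // 2])
print("5th-95th percentile:",
      dist[int(0.05 * len(dist))], "to", dist[int(0.95 * len(dist))])
```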
