Michael_Wiebe

Comments

Expected value theory is fanatical, but that's a good thing

Yes, I'm saying that it happens to be the case that, in practice, fanatical tradeoffs never come up.

Furthermore, you'd have to assign p = 0 when V = ∞, which means perfect certainty in an empirical claim, which seems wrong.

Hm, doesn't claiming V = ∞ also require perfect certainty? Ie, to know that V is literally infinite rather than some large number.

Michael_Wiebe's Shortform

What is that parameter? It seems all the work is being done by having it in the exponent.

Expected value theory is fanatical, but that's a good thing

How about this: fanaticism is fine in principle, but in practice we never face any actual fanatical choices. For any actions with extremely large value V, we estimate p < 1/V, so that the expected value is <1, and we ignore these actions based on standard EV reasoning.
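To illustrate with made-up numbers: if an action would produce V = 10^15 units of value but our best estimate of its probability of success is p = 10^-18, then the expected value is pV = 10^-3 < 1, so ordinary EV maximization already sets the action aside; no separate anti-fanaticism principle is needed.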

Michael_Wiebe's Shortform

Will says:

in order to assess the value (or normative status) of a particular action we can in the first instance just look at the long-run effects of that action (that is, those after 1000 years), and then look at the short-run effects just to decide among those actions whose long-run effects are among the very best.

Is this not laughable? How could anyone think that "looking at the 1000+ year effects of an action" is workable?

Michael_Wiebe's Shortform

What are the comparative statics for how uncertainty affects decisionmaking? How does a decisionmaker's behavior differ under some uncertainty compared to no uncertainty?

Consider a social planner problem where we make transfers to maximize total utility, given idiosyncratic shocks to endowments. There are two agents, A and B, with endowments e_A = 5 (with probability 1) and e_B = 10 with probability p, 0 with probability 1 − p. So B either gets nothing or twice as much as A.

We choose a transfer t from A to B (with 0 ≤ t ≤ 5) to solve:

max_t  u(5 − t) + p·u(10 + t) + (1 − p)·u(t)

where the first term is A's utility and the remaining terms are B's expected utility. For a baseline, consider log utility, u = log, and (say) p = 0.5. Then we get an optimal transfer of t* ≈ 1.83. Intuitively, as p → 1, t* → 0 (if B gets 10 for sure, don't make any transfer from A to B), and as p → 0, t* → 2.5 (if B gets 0 for sure, split A's endowment equally).
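Here's a minimal numerical sketch of this baseline, assuming my reconstruction above (e_A = 5, e_B ∈ {0, 10}, log utility); the scipy-based solver and the function names are just one illustrative way to compute t*.

```python
# Minimal sketch of the baseline planner problem, assuming e_A = 5,
# e_B in {0, 10} with probability p of the high state, and log utility.
import numpy as np
from scipy.optimize import minimize_scalar

E_A = 5.0    # A's certain endowment
E_B = 10.0   # B's endowment in the good state (probability p)

def objective(t, p):
    """Expected total utility from a transfer t (from A to B) under log utility."""
    return np.log(E_A - t) + p * np.log(E_B + t) + (1 - p) * np.log(t)

def optimal_transfer(p):
    # Maximize over t in (0, E_A) by bounded scalar minimization of the negative.
    res = minimize_scalar(lambda t: -objective(t, p),
                          bounds=(1e-9, E_A - 1e-9), method="bounded")
    return res.x

print(optimal_transfer(0.5))    # ~1.83: interior transfer at p = 0.5
print(optimal_transfer(0.999))  # ~0:    B almost surely rich, so no transfer
print(optimal_transfer(0.001))  # ~2.5:  B almost surely poor, so split E_A equally
```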

So that's a scenario with risk (known probabilities), but not uncertainty (unknown probabilities). What if we're uncertain about the value of p?

Suppose we think p ~ F, for some distribution F over [0, 1]. If we maximize expected utility, the problem becomes:

max_t  E_{p~F}[ u(5 − t) + p·u(10 + t) + (1 − p)·u(t) ]

Since the objective function is linear in probabilities, we end up with the same problem as before, except with E[p] instead of p. If we know the mean of F, we plug it in and solve as before.

So it turns out that this form of uncertainty doesn't change the problem very much.
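To make the linearity point concrete, here's a quick numerical check under the same reconstructed setup; the Beta(2, 2) prior for F is purely illustrative.

```python
# Sketch: with p ~ F, the objective is linear in p, so maximizing the expectation
# under F gives the same transfer as plugging in E[p]. Beta(2, 2) is illustrative.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import beta

E_A, E_B = 5.0, 10.0
obj = lambda t, p: np.log(E_A - t) + p * np.log(E_B + t) + (1 - p) * np.log(t)
argmax = lambda f: minimize_scalar(lambda t: -f(t),
                                   bounds=(1e-9, E_A - 1e-9), method="bounded").x

F = beta(2, 2)                                  # prior over p, with mean 0.5
p_draws = F.rvs(size=50_000, random_state=0)

t_plugin  = argmax(lambda t: obj(t, F.mean()))          # plug in E[p]
t_under_F = argmax(lambda t: np.mean(obj(t, p_draws)))  # full expectation over F
print(t_plugin, t_under_F)  # agree up to Monte Carlo noise
```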

Questions:
- if we don't know the mean of F, is the problem simply intractable? Should we resort to maxmin utility?
- what if we have a hyperprior over the mean of F? Do we just take another level of expectations, and end up with the same solution? (See the sketch after this list.)
- how does a stochastic dominance decision theory work here?
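On the second question, a quick numerical check (same reconstructed setup; the uniform hyperprior and the Beta form of F are illustrative) suggests the answer is yes: since the objective is linear in p, another level of expectations just plugs in the overall mean.

```python
# Sketch: hyperprior over the mean of F. By linearity in p, taking expectations
# over the hyperprior and over F collapses to plugging in the overall mean of p.
import numpy as np
from scipy.optimize import minimize_scalar

E_A, E_B = 5.0, 10.0
obj = lambda t, p: np.log(E_A - t) + p * np.log(E_B + t) + (1 - p) * np.log(t)
argmax = lambda f: minimize_scalar(lambda t: -f(t),
                                   bounds=(1e-9, E_A - 1e-9), method="bounded").x

rng = np.random.default_rng(0)
means = rng.uniform(0.3, 0.7, size=20_000)        # hyperprior over the mean of F
p_draws = rng.beta(10 * means, 10 * (1 - means))  # p ~ F(m), a Beta with mean m

t_hier = argmax(lambda t: np.mean(obj(t, p_draws)))  # expectation over both layers
t_plug = argmax(lambda t: obj(t, means.mean()))      # plug in the overall mean (~0.5)
print(t_hier, t_plug)  # same answer, up to sampling noise
```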

Formalizing longtermism

Do you think Will's three criteria are inconsistent with the informal definition I used in the OP ("what most matters about our actions is their very long term effects")?

Formalizing longtermism

In my setup, I could say ∑_{t ≤ T} U_t ≈ 0 for some large T; ie, generations up to T contribute basically nothing to total social utility U. But I don't think this captures longtermism, because this is consistent with the social planner allocating no resources to safety work (and all resources to consumption of the current generation); the condition puts no constraints on the planner's allocation. In other words, this condition only matches the first of three criteria that Will lists:

(i) Those who live at future times matter just as much, morally, as those who live today;

(ii) Society currently privileges those who live today above those who will live in the future; and

(iii) We should take action to rectify that, and help ensure the long-run future goes well.

Modelling the odds of recovery from civilizational collapse

I'm a bit skeptical about the value of formal modelling here. The parameter estimates would be almost entirely determined by your assumptions, and I'd expect the confidence intervals to be massive.

I think a toy model would be helpful for framing the issue, but going beyond that (to structural estimation) seems not worth it.

Formalizing longtermism

and also a world where shorttermism is true

On Will's definition, longtermism and shorttermism are mutually exclusive.

Formalizing longtermism

Suppose you're taking a one-off action a, and then you get a (discounted) reward r_t at each future time t

I'm a bit confused by this setup. Do you mean that a is analogous to x_1, the allocation for t = 1? If so, what are you assuming about x_2, ..., x_T? In my setup, I can compare x_1 to x_1*, so we're comparing against the optimal allocation, holding fixed x_2, ..., x_T.

the long-run rewards sum to more than N, where N is some large number.

I'm not sure this works. Consider: this condition would also be satisfied in a world with no x-risk, where each generation becomes successively richer and happier, and there's no need for present generations to care about improving the future. (Or are you defining r_t as the marginal utility of a on generation t, as opposed to the utility level of generation t under a?)
