
Derek Shiller

Philosophy Researcher @ Rethink Priorities
882 karma · Joined Mar 2019 · Derekshiller.com

Comments (93)

Is there any reason why you decided to assign non-null probabilities to the interventions having no effect?

A zero effect reflects no difference in the value targeted by the intervention. For xrisk interventions, this means that no disaster was averted (even if the probability was changed). For animal welfare interventions, it means the intervention didn't change welfare. Each intervention will have side effects that do matter, but those side effects are hard to predict or occur on a much smaller scale: non-profits pay salaries, projects represent missed opportunity costs, etc. Including them would add noise without meaningfully changing the results. We could use some scheme to flesh out these marginal effects, as you suggest, but it would take some care to do so in a way that wasn't arbitrary and potentially misleading. Do you see ways for this sort of change to be decision-relevant?

It is also worth noting that assigning a large number of results to a single exact value makes certain computational shortcuts possible. More fine-grained assessments would only be feasible with fewer samples.
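To make the shortcut concrete, here's a minimal sketch (in Python, not the model's actual code) of sampling from a mixture with a point mass at zero: only the samples that have an effect need a draw from the conditional effect distribution, and everything else is exactly zero.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_effects(n, p_effect=0.3, mu=2.0, sigma=1.0):
    """Mixture of a point mass at zero and a lognormal effect distribution.

    Only the fraction of samples with a real effect needs a draw from
    the conditional distribution; the rest are exactly zero.
    """
    has_effect = rng.random(n) < p_effect
    effects = np.zeros(n)
    effects[has_effect] = rng.lognormal(mu, sigma, size=has_effect.sum())
    return effects

samples = sample_effects(100_000)
print(samples.mean())  # approx. p_effect * E[lognormal(mu, sigma)]
```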

Less importantly, I also think the negative part of the effects distribution may have a different shape than the positive part. So the model would ideally allow one to specify not only the probability of the intervention being negative, but also the effects distribution conditional on the effect being negative (in the same way one can for the positive part).

Fair point. I agree that having separate settings would be more realistic. I'm not sure it would make a significant difference to the results to allow different shapes for the positive and negative distributions, given the way these effects are sampled for an all-or-nothing verdict on whether the intervention makes a difference. However, greater configurability has costs, and we opted for simplicity here. Though I could see a reasonable person having gone the other way.

Based on the numbers, I'm guessing that this is a bug in which we're showing the median DALYs averted per $1000 but describing it as the median cost per DALY. We're planning to get rid of the cost per DALY averted and just stick with DALYs per $1000 to avoid future confusion.

Thanks for this insightful comment. We've focused on capturing the sorts of value traditionally ascribed to each kind of intervention. For existential risk mitigation, this is additional life years lived. For animal welfare interventions, this is suffering averted. You're right that there are surely other effects of these interventions. Existential risk mitigation and GHD interventions will have an effect on animals, for instance. Animal welfare interventions might contribute to moral circle expansion. Including these side effects is not just difficult; it adds a significant amount of uncertainty. Which side effects we choose to model may determine the ultimate value we get out, and how we choose to model them adds noise that makes the upshots of the model much more sensitive to our particular choices. This doesn't mean that we think it's okay to ignore these possible effects. Instead, we conceive of the model as a starting point for further thought, not a conclusive verdict on relative value assessments.

Similarly, for an animal welfare intervention such as a corporate cage-free campaign, the long-term effects would depend on how long the cage-free policy is expected to last, how well it's enforced, etc. This would be undoubtedly complicated to model, but would help make these interventions easier to compare with traditionally "longtermist" interventions.

To some extent, these sorts of considerations can be included via existing parameters. There is a parameter to determine how long the intervention's effects will last. I've been thinking of this as the length of time before the same policies would have been adopted, but you might think of this as the time at which companies renege on their commitments. We can also set a range of percentages of the population affected that represents the failure to follow through.
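As a rough sketch of how those parameters might combine (the names and ranges here are hypothetical, not the model's actual settings):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical parameter ranges, not the model's defaults.
years_before_counterfactual_adoption = rng.uniform(5, 15, n)
follow_through_rate = rng.uniform(0.5, 0.9, n)   # share of commitments honored
annual_welfare_gain = rng.lognormal(0.0, 0.5, n)

total_effect = (annual_welfare_gain
                * years_before_counterfactual_adoption
                * follow_through_rate)
print(total_effect.mean())
```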

I don't have any special insight. I would be surprised if there were aspects of donation that made the surgery especially likely to result in post-operative pain, so I would imagine that the prevalence of post-operative pain in general would give you some clue about how reliable this study is. That said, given what I've read, if there were subtle ways in which donation significantly reduced quality of life, I wouldn't be surprised if they weren't well publicized. It seems to me a good sign that the doctor mentioned the possibility of post-operative pain to you.

The issue is that our parameters can lead to different rates of cubic population growth, and a 1% difference in that rate can lead to huge differences over 50,000 years. Ultimately, this means that if extreme population-dictating parameter values happen to be sampled in a case where the intervention backfires, the intervention might have a negative average value across all the samples. With high enough variance, the sign of the average will be determined by the sign of the most extreme value. So if xrisk mitigation work backfires in 1/4 of cases, we might expect 1/4 of collections of samples to have a negative mean.
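A quick simulation (a sketch with made-up numbers, not our parameters) shows the effect: with heavy-tailed cubic-growth values and a 1/4 chance of backfire, roughly a quarter of batch means come out negative, because each batch mean is dominated by its most extreme draw.

```python
import numpy as np

rng = np.random.default_rng(0)

def batch_mean(n=1_000):
    # Value scales with (growth rate * years)^3, so small rate differences
    # compound into orders-of-magnitude differences in value.
    rate = rng.lognormal(-2.0, 1.5, n)
    value = (rate * 50_000) ** 3
    # Suppose the intervention backfires in 1/4 of samples.
    sign = np.where(rng.random(n) < 0.25, -1.0, 1.0)
    return (sign * value).mean()

means = [batch_mean() for _ in range(100)]
print(sum(m < 0 for m in means), "of 100 batch means are negative")
```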

You're right! That wasn't particularly surprising in light of our moral weights. Thanks for clarifying: I did a poor job of separating the confirmations from the surprising results.

Thanks for your engagement and these insightful questions.

I consistently get an error message when I try to set the CI to 50% in the OpenPhil bar (and the URL is crazy long!)

That sounds like a bug. Thanks for reporting!

(The URL packs in all the settings, so you can send it to someone else -- though I'm not sure this is working on the main page. To do this, it needs to be quite long.)

Why do we have probability distributions over values that are themselves probabilities? I feel like this still just boils down to a single probability in the end.

You're right, it does. Generally, the aim here is just conceptual clarity. It can be harder to assess the combination of two probability assignments than those assignments individually.
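A toy demonstration of the collapse (a sketch, assuming a Beta distribution over the probability): the compound process behaves exactly like a single coin flip with probability equal to the mean of the distribution.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# A Beta distribution over the probability p of success...
p = rng.beta(2, 8, n)
# ...followed by a flip with that p for each sample.
outcomes = rng.random(n) < p

# Both are ~0.2: the compound reduces to a single probability, E[p].
print(outcomes.mean(), p.mean())
```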

Why do we sometimes use cost per DALY? It seems unnecessarily confusing.

Yeah. It has been a point of confusion within the team too. The reason for cost per DALY is that it is a metric often used by people making allocation decisions. However, it isn't a great representation for Monte Carlo simulations in which a lot of outcomes involve no effect, because the cost per DALY is then effectively infinite. This has some odd implications. For our purposes, DALYs per $1000 is a better representation. To try to accommodate both considerations, we include both values in different places.
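To see the problem, here's a small sketch with made-up numbers: when most outcomes avert zero DALYs, cost per DALY is infinite in most samples, while DALYs per $1000 stays well-behaved.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Suppose 70% of simulated outcomes avert zero DALYs per $1000.
dalys_per_1000 = np.where(rng.random(n) < 0.7, 0.0, rng.lognormal(2, 1, n))

print(np.median(dalys_per_1000))             # well-defined (here 0)
with np.errstate(divide="ignore"):
    cost_per_daly = 1000 / dalys_per_1000    # infinite whenever the effect is 0
print(np.isinf(cost_per_daly).mean())        # ~0.7 of samples are infinite
```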

OK, but what if life is worse than 0, surely we need a way to represent this as well? My vague memory from the moral weights series was that you assumed valence is symmetric about 0, so perhaps the more sensible unit would be the negative of the value of a fully content life.

The issue here is that interventions can affect different levels of suffering. For instance, a corporate campaign might include multiple asks that affect animals in different ways. We could have made the model more complicated by incorporating its effect on each level separately. Instead, we simplified by 'summarizing' the impact at one level, calibrated against research on the impact of similar afflictions in humans. You can represent a negative value just by choosing a higher number of hours than were actually suffered: think of it as the amount of normal life that the suffering would balance out. If it is really bad, one hour of suffering might be as bad as weeks of normal life would be good.
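As a worked example (with made-up numbers) of how inflating hours can represent very bad suffering:

```python
# Hypothetical: one hour of severe suffering is as bad as two weeks
# of normal life are good, so enter (14 * 24) summary-level hours.
hours_actually_suffered = 1
severity_multiplier = 14 * 24          # hours of normal life balanced out
equivalent_hours = hours_actually_suffered * severity_multiplier
print(equivalent_hours)                # 336 hours entered into the model
```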

"The intervention is assumed to produce between 160 and 3.6K suffering-years per dollar (unweighted) conditional on chickens being sentient." This input seems unhelpfully coarse-grained: it hides a lot of the interesting steps, doesn't tell me anything about how these numbers are estimated, and isn't the sort of thing I can intelligently choose my own numbers for.

There is a balance between accuracy and model configurability. In some places, we wanted to include numbers based on other research that we thought was likely to be accurate, but that we couldn't directly translate into the parameters of the model. I would like to convert those assessments into the model's terms, perhaps by backtracking to see what parameters produce a similar answer, but this wasn't a priority.

In the small-scale biorisk project, I never seem to get more than about 1000 DALYs per $1000, even when I crank expansion speed to 0.9c and length of future to 1e8, and the annual extinction risk in era 4 to 1e-8. Why is this? Yes 150,000 is too few, but I thought I should at least see some large effect when I change key parameters by several OOMs. Not really sure what is going on here, I'll be interested if you replicate this, and whether there is a bug or I am just misunderstanding something.

Our estimates include calculations of both catastrophic and extinction events. For the small-scale biorisk project, the chance of a catastrophic event is relatively high, but the chance of extinction is low. I think you're seeing the results of catastrophic events with no extinction events. When I raise the probability of extinction and include the far future, I see very large numbers. (E.g. https://bit.ly/ccm-bio-high-risk.)

(1) Unfortunately, we didn't record any predictions beforehand. It would be interesting to compare. That said, the process of constructing the model is instructive in thinking about how to frame the main cruxes, and I'm not sure what questions we would have thought were most important in advance.

(2) Monte Carlo methods have the advantage of flexibility. A direct analytic approach will work until it doesn't, and then it won't work at all. Running a lot of simulations is slower and has more variance, but it doesn't constrain the kind of models you can develop. Models change over time, and we didn't want to limit ourselves at the outset.

As for whether such an approach would work with the model we ended up with: perhaps, but I think it would have been very complicated. There are some aspects of the model that seem to me like they would be difficult to assess analytically -- such as the breakdown of time until extinction across risk eras with and without the intervention, or the distinction between catastrophic and extinction-level risks.
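To illustrate the trade-off with a toy case (not drawn from the model): where an analytic identity applies, it's exact and instant; the Monte Carlo estimate converges to the same answer but is slower and noisier, in exchange for working on models with no closed form.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Analytic: for independent X and Y, E[XY] = E[X] * E[Y].
ex, ey = 3.0, 0.25
print(ex * ey)                 # exact and instant

# Monte Carlo: slower and noisier, but indifferent to the model's shape.
x = rng.normal(ex, 1.0, n)
y = rng.binomial(1, ey, n)
print((x * y).mean())          # converges to the same value
```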

We are currently working on incorporating some more direct approaches into our model where possible in order to make it more efficient.

I believe Marcus and Peter will release something before long discussing how they actually think about prioritization decisions.

I think you're right that we don't provide a really detailed model of the far future and we underestimate* expected value as a result. It's hard to know how to model the hypothetical technologies we've thought of, let alone the technologies that we haven't. These are the kinds of things you have to take into consideration when applying the model, and we don't endorse the outputs as definitive, even once you've tailored the parameters to your own views.

That said, I do think the model has greater flexibility than you suggest. Some of these options are hidden by default because they aren't relevant given the cutoff year of 3023 we default to; you can see them by extending that year far out. Our model uses parameters for expansion speed and population per star, and it also lets you set the density of stars. If you think that we'll expand at near the speed of light and colonize every brown dwarf, you can set that. If you think each star will host a quintillion minds, you can set that too. We don't try to handle relative welfare levels for future beings; we just assume their welfare is the same as ours. This is probably pessimistic. We considered changing it, but it doesn't make a huge difference to the overall shape of the results, so we didn't consider it a priority. The same goes for clock speed differences: if you want to represent them within the model as written, you can just inflate the population per star. What the model can't do is capture non-cubic (and non-static) population growth rates. It also breaks down in the real far future, and we don't model the end of the universe.
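For a sense of how those parameters interact, here's a back-of-the-envelope sketch (all settings hypothetical, not the model's defaults) of cubic growth from expansion speed, star density, and population per star:

```python
import math

# Hypothetical settings, not the model's defaults.
expansion_speed = 0.8        # fraction of the speed of light
years = 50_000
star_density = 0.004         # stars per cubic light-year (assumed)
population_per_star = 1e9    # minds per star; inflate to proxy clock speed

radius_ly = expansion_speed * years
stars_reached = (4 / 3) * math.pi * radius_ly**3 * star_density
print(f"{stars_reached * population_per_star:.3e} total minds")
```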

Perhaps you object to parameter settings we chose as defaults. Whatever defaults we picked would be controversial. In response, let me just stress that they're not intended as our answers to these questions. They are just a flexible starting point for people to explore.

* My guess is that the EV of surviving to the far future is infinite, if it isn't undefined.
