Toby_Ord


Comments (62)

While I do suggest a 0.1% probability of existential catastrophe from climate change, note that this is on my more restricted definition, where that is roughly the chance that humanity loses almost all its longterm potential due to a climate catastrophe. On Beard et al.'s looser definition, I might put that quite a bit higher (e.g. I think there is something more like a 1% chance of a deep civilisation collapse from climate change, but that in most such outcomes we would eventually recover). And I'd put the risk factor from climate change quite a bit higher than 0.1% too — I think it is more of a risk factor than a direct risk.

Thanks so much for posting this, Gideon. I like your way of framing this into these two loose clusters, and especially your claim that it is good to have both. I completely agree. While my work is indeed more within the simple cluster, I feel that a fight over which approach is right would be misguided.

All phenomena can be modelled at lesser or greater degrees of precision, with different advantages and disadvantages of each. Often there are some sweet spots where there is an especially good tradeoff between accuracy and ability to actually use the model. We should try to find those and use them all to illuminate the issue.

There is a lot to be said for simple and for complex approaches. In general, my way forward with all kinds of topics is to start as simple as possible and only add complexity when it is clearly needed to address a glaring fault. We all know the truth is as complex as the universe, so the question is not whether the more complex model is more accurate, but whether it adds sufficient accuracy to justify the problems it introduces, such as reduced usability, reduced clarity, and overfitting. Sometimes it clearly is. Other times I don't see that it is and am happy to wait for those who favour the complex model to point to important results it produces.

One virtue of a simple model that I think is often overlooked is its ability to produce crisp insights that, once found, can be clearly explained and communicated to others. This makes knowledge sharing easier and makes it easier to build up a field's understanding from these crisp insights. I think the kind of understanding you gain from more complex models is often more a matter of improved intuitions: it is harder to communicate, and doesn't typically come with a simple explanation that the other person can check without themselves spending a similar amount of time with the model.

Though note that infinite sequences of choices are a well-known, paradox-ridden corner of decision theory, so proving that a theory falls down there is not conclusive.

I feel that exotic cases like this are interesting and help build up a picture of difficult cases for theories to cover, but don't count strongly against the particular theories shown to fail them. This is because it isn't clear whether (1) any rival theories can deal with the exotic case, or (2) the usual conditions (or theories) need to be slightly modified in the exotic setting. In other words, it may be another area where the central idea of Richard's post ('Puzzles for Everyone') applies.

To be clear, he is factually incorrect about that claim. I never seriously considered calling it that.

One of the major points of effective altruism in my mind was that it isn't only utilitarians who should care about doing more good rather than less, and not only consequentialists either. All theories that agree saving 10 lives is substantially more important than saving 1 life should care about effectiveness in our moral actions and could benefit from quantifying such things. I thought it was a great shame that effectiveness was usually only discussed re utilitarianism and I wanted to change that.

Thanks so much for writing this. There are a lot of new leads on understanding the risks here, and it is great to see people working on what I think is the largest of all the natural risks. 

I'd be very interested to see what estimates of the overall existential risk from supervolcanic eruptions you come up with. As direct extinction appears particularly difficult, much of the question comes down to estimates of the chance that an eruption of a given magnitude would cause the collapse of civilisation and the chance of recovering from that collapse. Factoring the risk in this way may be helpful for both showing where disagreements between researchers lie and for understanding where the current uncertainty is.
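To make that factoring concrete, here is a minimal sketch of the decomposition; every number in it is a placeholder invented purely for illustration, not an estimate from this comment or from the book.

```python
# Illustrative decomposition of supervolcanic existential risk into three factors.
# All probabilities below are invented placeholders, not actual estimates.

p_eruption = 1 / 200     # chance of a sufficiently large eruption over the period considered (placeholder)
p_collapse = 0.05        # chance such an eruption causes the collapse of civilisation (placeholder)
p_no_recovery = 0.10     # chance civilisation never recovers from that collapse (placeholder)

existential_risk = p_eruption * p_collapse * p_no_recovery
print(f"Implied existential risk: {existential_risk:.1e}")  # 2.5e-05 with these placeholders
```

Writing estimates in this factored form makes it easy to see which of the three factors a disagreement between researchers actually sits in.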

Note also that there was a numerical mistake in my supervolcanic eruption section which I explain on the book's Errata page: https://theprecipice.com/errata 

Thanks for writing this Sam. I think a large part of the disconnect you are perceiving can be explained as follows:

The longtermist community are primarily using an unusual (or unusually explicit) concern about what happens over the long term as a way of doing cause prioritisation. For example, it picks out issues like existential risk as being especially important, and then with some more empirical information, one can pick out particular risks or risk factors to work on. The idea is that these are far more important areas to work on than the typical area, so you already have a big win from this style of thinking. In contrast, you seem to be looking at something that could perhaps be called 'long-term thinking', which takes any area of policy and tries to work out ways to better achieve its longterm goals using longterm plans.

These are quite different approaches to having impact. A useful analogy would be the difference between using cost-effectiveness as a tool for selecting a top cause or intervention to work on, versus using it to work out the most cost-effective way to do what you are already doing. I think a lot of the advantages of both cost-effectiveness and longtermist thinking come from this first step, the contribution to cause prioritisation, rather than from improving the area you are already working on.

That said, there are certainly cases of overlap. For example, while one could use longtermist cause-prioritisation to select nuclear disarmament as an area and then focus on the short term goal of re-establishing the INF treaty, which lapsed under Trump, one could also aim higher, for the best ways to completely eliminate nuclear weapons over the rest of the century, which would require longterm planning. I expect that longtermists could benefit from advances in longterm planning more than the average person, but it is not always required in order to get large gains from a longtermist approach.

Thanks — I hadn't heard of f-means before and it is a useful concept, and relevant here.
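For other readers who hadn't come across the term either, here is a quick sketch of what an f-mean (quasi-arithmetic mean) is, applied to some made-up probabilities; the choice of f determines which familiar average you recover.

```python
import math

def f_mean(xs, f, f_inv):
    """Quasi-arithmetic (f-)mean: apply f to each value, average, then map back with f's inverse."""
    return f_inv(sum(f(x) for x in xs) / len(xs))

probs = [1e-6, 1e-4, 1e-2]  # made-up expert probabilities, purely for illustration

arithmetic = f_mean(probs, lambda x: x, lambda y: y)            # f(x) = x
geometric = f_mean(probs, math.log, math.exp)                   # f(x) = log x
mean_log_odds = f_mean(probs,
                       lambda p: math.log(p / (1 - p)),         # f = logit
                       lambda y: 1 / (1 + math.exp(-y)))        # f^-1 = inverse logit

print(arithmetic)     # ~3.4e-3: dominated by the largest estimate
print(geometric)      # 1.0e-4: the middle estimate on a log scale
print(mean_log_odds)  # ~1.0e-4: close to the geometric mean when probabilities are small
```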

I think we are roughly in agreement on this; it is just hard to talk about. I think that compression of the set of expert estimates down to a single measure of central tendency (e.g. the arithmetic mean) loses information about the distribution that is needed to give the right answer in each of a variety of situations. So in this sense, we shouldn't aggregate first.

The ideal system would neither aggregate first into a single number, nor use each estimate independently and then aggregate from there (I suggested doing so as a contrast to aggregation first, but agree that it is not ideal). Instead, the ideal system would use the whole distribution of estimates (perhaps transformed based on some underlying model about where expert judgments come from, such as assuming that numbers between the point estimates are also plausible) and then do some kind of EV calculation based on that. But this is so general an approach as to not offer much guidance without further development.
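As a toy illustration of the contrast, with an invented decision problem, invented numbers, and the simplifying assumption that each expert's estimate is equally likely to be the true probability:

```python
# Contrast "aggregate first" with "use the whole distribution of estimates".
# The payoff function and all numbers are invented for illustration only.

expert_probs = [0.001, 0.01, 0.2]   # hypothetical expert estimates of some risk

def value_of_intervention(p):
    """Made-up nonlinear payoff: the intervention only pays off if the risk exceeds 5%."""
    return 100.0 if p > 0.05 else -1.0

# Aggregate first: compress to the arithmetic mean, then evaluate once.
mean_p = sum(expert_probs) / len(expert_probs)           # ~0.070
value_aggregate_first = value_of_intervention(mean_p)    # 100.0

# Use the whole distribution: evaluate under each estimate, then take the expectation.
value_whole_distribution = sum(value_of_intervention(p) for p in expert_probs) / len(expert_probs)  # ~32.7

print(mean_p, value_aggregate_first, value_whole_distribution)
```

Because the payoff is nonlinear in the probability, the two procedures give quite different answers, which is the sense in which compressing to a single central tendency loses decision-relevant information.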

I agree with a lot of this. In particular, that the best approach for practical rationality involves calculating things out according to each of the probabilities and then aggregating from there (or something like that), rather than aggregating first. That was part of what I was trying to show with the institution example. And it was part of what I was getting at by suggesting that the problem is ill-posed — there are a number of different assumptions we are all making about what these probabilities are going to be used for, and about whether we can assume the experts are themselves careful reasoners, and this discussion has found various places where the best form of aggregation depends crucially on these kinds of matters. I've certainly learned quite a bit from the discussion.

I think if you wanted to take things further, then teasing out how different combinations of assumptions lead to different aggregation methods would be a good next step.

I see what you mean, though you will find that scientific experts often end up endorsing probabilities like these. They model the situation, run the calculation and end up with 10^-12 and then say the probability is 10^-12. You are right that if you knew the experts were Bayesian and calibrated and aware of all the ways the model or calculation could be flawed, and had a good dose of humility, then you could read more into such small claimed probabilities — i.e. that they must have a mass of evidence they have not yet shared. But we are very rarely in a situation like that. Averaging a selection of Metaculus forecasters may be close, but is quite a special case when you think more broadly about the question of how to aggregate expert predictions.
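As a tiny worked contrast (with invented numbers), how much weight such a claimed 10^-12 ends up carrying depends enormously on the aggregation rule:

```python
import math

# Two hypothetical expert estimates: one is the kind of ~1e-12 figure a model-driven
# expert might report, the other a more ordinary 1e-3. Both numbers are invented.
p1, p2 = 1e-12, 1e-3

arithmetic = (p1 + p2) / 2       # ~5.0e-4: the tiny estimate is almost irrelevant
geometric = math.sqrt(p1 * p2)   # ~3.2e-8: the tiny estimate pulls the result down by orders of magnitude

print(arithmetic, geometric)
```

So the question of how much to read into extreme claimed probabilities and the question of how to aggregate them are closely linked.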
