I'm a senior research scholar at FHI, with a background in theoretical physics.

Topic Contributions


Request for proposals: Help Open Philanthropy quantify biological risk

Thanks for pointing this out, but unfortunately we cannot shift the submission deadline.

On infinite ethics

With respect to your first question, I agree: the utilitarian needs a measure (they don't need a utility function separate from their measure, but there may be other natural measures to consider, in which case you do need a utility function).

With respect to your second question, I think you can either give up on the infinite cases (because you think they are "metaphysically" impossible, perhaps) or demand that a regularization must exist (because without one the problem is "metaphysically" underspecified). I'm not sure what the correct approach is here, and I think it is an interesting question to try to understand in more detail. In the latter case you have to give up impartiality, but only in a fairly benign way; our intuitions about impartiality are probably wrong here (analogous situations occur in physics with charge conservation, as I noted in another comment).

With respect to your third question, I think it is likely that problems with no regularization are nonsensical. This is not to say that all problems involving infinities are themselves nonsense, nor that the correct choice of regularization is obvious.

As an intuition pump, maybe we can consider cases that don't involve infinities. Say we are in a (rather contrived) world in which utility is literally a function of space-time, and we integrate to get the total utility. How should I assign utility to a function which has support on a non-measurable set? Should I even think such a thing is possible? After all, the existence of non-measurable sets follows not from ZF alone, but requires the axiom of choice as well. As another example, maybe my utility function depends on whether the continuum hypothesis is true or false. How should I act in this case?

My own guess is that such questions likely have no meaningful answer, and I think the same is true for questions involving infinities without specified ways to operationalize the infinities. I think it would be odd to give up on the utilitarian dream due to non-measurable sets, and that the same is true for ill-defined infinities.

On infinite ethics

I think you are right about infinite sets (most of the mathematicians I've talked to have had distinctly negative views about set theory, in part due to the infinities, but my guess is that such views are more common amongst those working on physics-adjacent areas of research). I was thinking about infinities in analysis (such as continuous functions, summing infinite series, integration, differentiation, and so on), which bottom out in some sort of limiting process.

On the spatially unbounded universe example, this seems rather analogous to me to the question of how to integrate functions over such a space. There are a number of different sets of functions which are integrable over an unbounded space, and even for some functions which are not integrable there are natural regularization schemes which allow the integral to be defined. In some cases these regularizations may even allow a notion of comparing different "infinities": in cases where the integral diverges as the regularizer is taken to zero, one integral may strictly dominate the other. When dealing with situations in ethics, perhaps we should always be restricting to these cases? There are a lot of different choices here, and it isn't clear to me what the correct restriction is, but it seems plausible to me that some form of restriction is needed. Note that such restrictions include ultrafinitism as an extreme case, but in general allow a much richer set of possibilities.
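As a concrete (and entirely toy) illustration of regularizing a non-convergent integral, consider ∫₀^∞ sin(x) dx, which does not converge, but becomes well defined once damped by e^(-εx). The sketch below (function name and parameters are my own, not from the discussion) checks numerically that the regularized value tends to a finite limit as ε → 0:

```python
import math

def regularized_integral(eps, x_max=200.0, dx=1e-3):
    """Trapezoidal approximation of the regularized integral
    ∫₀^∞ e^(-eps*x) * sin(x) dx.  The damping e^(-eps*x) is the
    regulator: without it the integral of sin(x) does not converge."""
    n = int(x_max / dx)
    total = 0.0
    prev = 0.0  # integrand at x = 0 is e^0 * sin(0) = 0
    for i in range(1, n + 1):
        x = i * dx
        cur = math.exp(-eps * x) * math.sin(x)
        total += 0.5 * (prev + cur) * dx
        prev = cur
    return total

# The exact regularized value is 1 / (1 + eps^2), which tends to 1
# as eps -> 0, even though the unregularized integral is undefined.
for eps in (0.5, 0.2, 0.1):
    print(eps, regularized_integral(eps), 1 / (1 + eps**2))
```

The point of the sketch is only that a divergence-free answer exists once a regulator is specified, and that the answer stabilizes as the regulator is removed.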

Expansionism is necessarily incomplete: it assumes that the world has a specific causal structure (i.e., one that is locally that of special relativity), which is an empirical observation about our universe rather than a logically necessary fact. I think it is plausible that, given the right causal assumptions, expansionism follows (at least for individual observers making decisions that respect causality).

On infinite ethics

As an aside, while neutrality-violations are a necessary consequence of regularization, a weaker form of neutrality is preserved. If we regularize with some discounting factor e^(-εt) so that everything remains finite, it is easy to see that "small rearrangements" (where the amount that a person can move in time is finite) do not change the answer, because the difference goes to zero as ε → 0. But "big rearrangements" can cause differences that grow as ε → 0. Such situations do arise in various physical situations, and are interpreted as changes to boundary conditions, whereas the "small rearrangements" manifestly preserve boundary conditions and manifestly do not cause problems with the limit. (The boundary is most easily seen by mapping the infinite time interval onto a compact interval, so that "infinity" is mapped to a finite point. "Small rearrangements" leave infinity unchanged, whereas "large" ones will cause a flow of utility across infinity, which is how the two situations are able to give different answers.)
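To make the small-versus-big rearrangement point concrete, here is a toy numerical sketch (the population of alternating +1/−1 utilities, the discount e^(-εt), and all function names are my own illustration): swapping two people leaves the ε → 0 limit unchanged, while a rearrangement that pulls people in from arbitrarily far away changes the answer.

```python
import math

def reg_sum(utility, eps, n_terms=100_000):
    """Regularized total utility: sum over t of utility(t) * e^(-eps*t)."""
    return sum(utility(t) * math.exp(-eps * t) for t in range(n_terms))

def alternating(t):
    """Original population: utilities +1, -1, +1, -1, ..."""
    return 1 if t % 2 == 0 else -1

def swapped(t):
    """A 'small rearrangement': persons 0 and 1 trade places."""
    if t in (0, 1):
        return alternating(1 - t)
    return alternating(t)

def two_plus_one_minus(t):
    """A 'big rearrangement': +1s are pulled in from arbitrarily far
    away, giving the repeating pattern +1, +1, -1."""
    return 1 if t % 3 < 2 else -1

for eps in (0.1, 0.01, 0.001):
    print(eps, reg_sum(alternating, eps), reg_sum(swapped, eps),
          reg_sum(two_plus_one_minus, eps))
# The first two columns converge to the same limit (1/2) as eps -> 0;
# the big rearrangement instead grows like 1/(3*eps).
```

The divergence of the last column as ε → 0 is the regularized counterpart of "utility flowing across infinity".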

On infinite ethics

I think what is true is probably something like "neverending processes don't exist, but arbitrarily long ones do", but I'm not confident. My more general claim is that there are intermediate positions between ultrafinitism ("there is a biggest number") and a laissez-faire "anything goes" attitude, where infinities appear without care or scrutiny. I would furthermore claim (on less solid ground) that the views of practicing mathematicians and physicists fall somewhere in between.

As to the infinite series examples you give, they are mathematically ill-defined without a regularization. There is a large literature in mathematics and physics on the question of regularizing infinite series. Regularization and renormalization are used throughout physics (particularly in QFT), and while poorly written textbooks (particularly older ones) can make this look like voodoo magic, the correct answers can always be rigorously obtained by making everything finite.

For the situation you are considering, a natural regularization would be to replace your sum with a regularized sum in which you discount each time step t by a factor e^(-εt). Physically speaking, this is what would happen if we thought the universe had some chance of being destroyed at each timestep; that is, it can be arbitrarily long-lived, yet with probability 1 it is finite. You can sum the series and then take ε → 0, and thus derive a finite answer.

There may be many other ways to regularize the series, and it often turns out that the choice of regularization doesn't matter. In this way, it might make sense to talk about this infinite universe without reference to a specific limiting process, requiring only that some admissible limiting process exists. This is what happens, for instance, in QFT: the regularizations don't matter; all we care about are the things that are independent of the regularization, and so we tend to think of the theories as existing without a need for regularization. However, when doing calculations it is often wise to use a specific (if arbitrary) regularization, because it guarantees that you will get the right answer. Without a regularization it is very easy to make mistakes.
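The claim that the answer is often independent of the regulator can be checked in a toy case (my own illustration, not from the original discussion): Grandi's series 1 − 1 + 1 − 1 + … gives the same value, 1/2, under both an exponential and a Gaussian damping as the regulator is removed.

```python
import math

def damped_grandi(damping, eps, n_terms=20_000):
    """Grandi's series 1 - 1 + 1 - 1 + ... with a smooth regulator:
    sum over n of (-1)^n * damping(eps * n), studied as eps -> 0."""
    return sum((-1) ** n * damping(eps * n) for n in range(n_terms))

def exponential(u):
    return math.exp(-u)       # e^(-eps*n) cutoff

def gaussian(u):
    return math.exp(-u * u)   # e^(-(eps*n)^2) cutoff

for eps in (0.1, 0.01):
    print(eps, damped_grandi(exponential, eps), damped_grandi(gaussian, eps))
# Both regularizations approach the same value, 1/2, as eps -> 0.
```

Two quite different cutoffs agreeing in the limit is a miniature version of the regulator-independence that makes physicists comfortable dropping the regulator from the notation.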

This is all a very long-winded way of saying that there are at least two intermediate views one could have about these infinite sequence examples, between the "ultrafinitist" view and the "anything goes" view:

  1. The real world (or your priors) demands some definitive regularization, which determines the right answer. This would be the case if the real world had some probability of being destroyed, even if it is arbitrarily small.

  2. Maybe infinite situations like the one you described are allowed, but require some "equivalence class of regularizations" to be specified in order to be completely defined. Otherwise the answer is as indeterminate as if you'd given me the situation without specifying the numbers. I think this view is a little weirder, but it is also the one that seems to be adopted in practice by physicists.

On infinite ethics

I think Section XIII is too dismissive of the view that infinities are not "real", conflating it with ultrafinitism. But the sophisticated version of this view is that infinities should only be treated as "idealized limits" of finite processes. This is, as far as I understand, the default view amongst practicing mathematicians and physicists. If you stray from it and use infinities without specifying the limiting process, it is very easy to produce paradoxes, or at least indeterminacy in the problem. The sophisticated view, then, is not that infinities don't exist, but that, since they only exist as limiting cases of finite processes, one must always specify the limiting process; in doing so, any paradoxes or indeterminacies will disappear.

As Jaynes summarizes in Chapter 15 of Probability Theory: The Logic of Science:

[P]aradoxes caused by careless dealing with infinite sets or limits can be mass-produced by the following simple procedure:

(1) Start from a mathematically well-defined situation, such as a finite set, a normalized probability distribution, or a convergent integral, where everything is well-behaved and there is no question about what is the correct solution.

(2) Pass to a limit – infinite magnitude, infinite set, zero measure, improper pdf, or some other kind – without specifying how the limit is approached.

(3) Ask a question whose answer depends on how the limit was approached.
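A minimal numerical illustration of Jaynes's recipe (my own toy example): the finite quantity m/(m+n) is perfectly well defined, but "its value as m and n go to infinity" depends entirely on how the joint limit is approached.

```python
# f(m, n) = m / (m + n) is a perfectly well-defined finite quantity,
# but "its limit as m and n go to infinity" is not, until we specify
# how the limit is approached -- exactly Jaynes's steps (2) and (3).
def f(m, n):
    return m / (m + n)

big = 10**9
print(f(big, 10))    # take m large first:  close to 1
print(f(10, big))    # take n large first:  close to 0
print(f(big, big))   # keep m = n:          exactly 0.5
```

Three different "paths to infinity" give three different answers, which is why a question like "what is f at infinity?" is ill-posed until the limiting process is named.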

Pascal's Mugging and abandoning credences

In principle I agree, although in practice there are other mitigating factors which mean it doesn't seem to be that relevant.

This is partly because the 10^52 number is not very robust. In particular, once you start postulating such large numbers of future people I think you have to take the simulation hypothesis much more seriously, so that the large size of the far future may in fact be illusory. But even on a more mundane level we should probably worry that achieving 10^52 happy lives might be much harder than it looks.

It is also partly because, at a practical level, the interventions long-termists consider don't rely on the possibility of 10^52 future lives, but are good even over just the next few hundred years. I am not aware of many things that have smaller impacts and yet remain robustly positive, such that we would pursue them only because of the 10^52 future lives. This is essentially for the reasons that asolomonr gives in their comment.

Pascal's Mugging and abandoning credences

Attempts to reject fanaticism necessarily lead to major theoretical problems, as described for instance here and here.

However, questions about fanaticism are not that relevant for most questions about x-risk. The x-risks of greatest concern to most long-termists (AI risk, bioweapons, nuclear weapons, climate change) all have reasonable odds of occurring within the next century or so, and even if we cared only about humans living in the next century or so we would find these valuable to prevent. This is mostly a consequence of the huge number of people alive today.

What are some moral catastrophes events in history?

The Great Big Book of Horrible Things is a list of the 100 worst man-made events in history, many of which fit your definition of moral catastrophe.

Practices (rather than events) that might fit your definition include

What complexity science and simulation have to offer effective altruism

Thanks for the reply Rory! I think at this point it is fairly clear where we agree (quantitative methods and ideas from maths and physics can be helpful in other disciplines) and where we disagree (whether complexity science has new insights to offer, and whether there is a need for an interdisciplinary field doing this work separate from the ones that already exist), and don't have any more to offer here past my previous comments. And I appreciate your candidness noting that most complexity scientists don't mention complexity or emergence much in their published research; as is probably clear I think this suggests that, despite their rhetoric, they haven't managed to make these concepts useful.

I do not think the SFI, at least judging from their website and from the book Scale, which I read a few years ago, is a good model of public relations that EAs should try to emulate. They make grand claims about what they have achieved which seem to me to be out of proportion to their actual accomplishments. I'm curious to hear what you think the great success stories of SFI are. The one I know the most about, the scaling laws, I'm pretty skeptical of for the reasons outlined previously. I had a look at their "Evolution of Human Languages" program, and it seems to be fringe research by the standards of mainstream comparative linguistics. But there could well be success stories that I am unfamiliar with, particularly in economics.
