djbinder

I'm a senior research scholar at FHI, with a background in theoretical physics.

Pascal's Mugging and abandoning credences

In principle I agree, although in practice there are other mitigating factors which mean this doesn't seem to be that relevant.

This is partly because the 10^52 number is not very robust. In particular, once you start postulating such large numbers of future people I think you have to take the simulation hypothesis much more seriously, so that the large size of the far future may in fact be illusory. But even on a more mundane level we should probably worry that achieving 10^52 happy lives might be much harder than it looks.

It is also partly because, at a practical level, the interventions long-termists consider don't rely on the possibility of 10^52 future lives, but are good even over just the next few hundred years. I am not aware of many interventions that remain robustly positive and yet have impacts small enough that we would pursue them only because of the 10^52 future lives. This is essentially for the reasons that asolomonr gives in their comment.

Pascal's Mugging and abandoning credences

Attempts to reject fanaticism necessarily lead to major theoretical problems, as described for instance here and here.

However, questions about fanaticism are not that relevant for most questions about x-risk. The x-risks of greatest concern to most long-termists (AI risk, bioweapons, nuclear weapons, climate change) all have reasonable odds of occurring within the next century or so, and even if we care only about humans living over that period, we would find these risks valuable to prevent. This is mostly a consequence of the huge number of people alive today.

What are some moral catastrophes events in history?

The Great Big Book of Horrible Things is a list of the 100 worst man-made events in history, many of which fit your definition of moral catastrophe.

Practices (rather than events) that might fit your definition include

What complexity science and simulation have to offer effective altruism

Thanks for the reply Rory! I think at this point it is fairly clear where we agree (quantitative methods and ideas from maths and physics can be helpful in other disciplines) and where we disagree (whether complexity science has new insights to offer, and whether there is a need for an interdisciplinary field doing this work separate from the ones that already exist), and I don't have anything more to offer here beyond my previous comments. And I appreciate your candidness in noting that most complexity scientists don't mention complexity or emergence much in their published research; as is probably clear, I think this suggests that, despite their rhetoric, they haven't managed to make these concepts useful.

I do not think the SFI, at least judging from their website, and from the book Scale which I read a few years ago, is a good model of public relations that EAs should try to emulate. They make grand claims about what they have achieved which seem to me to be out of proportion to their actual accomplishments. I'm curious to hear what you think the great success stories of SFI are. The one I know the most about, the scaling laws, I'm pretty skeptical of for the reasons outlined previously. I had a look at their "Evolution of Human Languages" program, and it seems to be fringe research by the standards of mainstream comparative linguistics. But there could well be success stories that I am unfamiliar with, particularly in economics.

What complexity science and simulation have to offer effective altruism

If the OP wants to discuss agent-based modeling, then I think they should discuss agent-based modeling. I don't think there is anything to be gained by calling agent-based models "complex systems", or that taking a complexity science viewpoint adds any value.

Likewise, if you want to study networks, why not study networks? Again, adding the word "complex" doesn't buy you anything.

As I said in my original comment, part of complexity science is good: this is the idea that we can use maths and physics to model other systems. But this is hardly a new insight. Economists, biophysicists, mathematical biologists, computer scientists, statisticians, and applied mathematicians have been doing this for centuries. While sometimes siloing can be a problem, for the most part ideas flow fairly freely between these disciplines and there is a lot of cross-pollination. When ideas don't flow it is usually because they aren't useful in the new field. (Maybe they rely on inappropriate assumptions, or are useful in the wrong regime, or answer the wrong questions, or are trivial and/or intractable in situations the new field cares about, or don't give empirically testable results, or are already used by the new field in a slightly different way.) The "problem" of "siloing" that complexity science claims to want to solve is largely a mirage.

But of course, complexity science makes greater claims than just this. It claims to be developing some general insights into the workings of complex systems. As I've noted in my previous comment, these claims are at best just false and at worst completely vacuous. I think it is dangerous to support the kind of sophistry spouted by complexity scientists, for the same reason it is dangerous to support sophistry anywhere. At best it draws attention away from scientists who are making progress on real problems, and at worst it leads to piles of misleading and overblown hype.

My criticism is not analogous to the claim that "ML is just a rebranding of statistics". After all, ML largely studies different topics and different questions to statistics. No, it would be as if we lived in a world without computers, and ML consisted of people waxing lyrical about how "computation" would solve learning, but then, when asked how, would just say basic (and sometimes incorrect) things about statistics.

What complexity science and simulation have to offer effective altruism

As someone with a background in theoretical physics, I am very skeptical of the claims made by complexity science. At a meta-level I dislike being overly negative, and I don't want to discourage people posting things that they think might be interesting or relevant on the forum. But I have seen complexity science discussed now by quite a few EAs rather credulously, and I think it is important to set the record straight.

On to the issues with complexity science. Broadly speaking, the problem with "complexity science" is that it is trying to study "complex systems". But the only meaningful definition of "complex system" is a system that is not currently amenable to mathematical analysis. (Note this is not always the definition that "complexity scientists" seem to actually use, since they like to talk about things like the Ising model, which is not only well understood and long studied by physicists, but was actually exactly solved in 1944!) Trying to study the set of all "complex systems" is a bit like trying to study the set of animals that aren't jellyfish, snails, lemurs or sting rays.

The concepts developed by "complexity scientists" are usually either well-known and understood concepts from physics and mathematics (such as "phase transition", "non-linear", "non-equilibrium", "non-ergodicity", "criticality", "self-similarity") or else so hopelessly vague as to be useless ("complexity", "emergence", "non-reducibility", "self-organization"). If you want to learn about the former topics I would just recommend reading actual textbooks written by and aimed at physicists and mathematicians. For instance, I particularly like Nonlinear Dynamics and Chaos by Strogatz if you want to understand dynamical systems, and David Tong's lecture notes on Statistical Physics and Statistical Field Theory if you want to understand phase transitions and critical phenomena.

Note that none of these concepts are new. Even the idea of applying these concepts to the social sciences is hardly novel, see this review for example. Note the lack of hype, and lack of buzz words.

Unfortunately, the research that I've seen under the moniker of "complexity science" uses these (precise, limited in scope) concepts both liberally and in a facile way. As a single example, let's have a look at "scaling laws". Scaling laws are symptoms of critical behavior, and, as already mentioned, such critical phenomena have long been studied by physicists. If you look at empirical datasets (such as those of city sizes, or how various biological features scale with the size of an animal) sometimes you also find power-laws, and so naturally we might try to claim that these are also "critical systems". But this plausible idea doesn't seem to work in reality, for both theoretical and empirical reasons.

The theoretical problem is that pretty much all critical systems in physics require fine-tuning. For instance, you might have to dial the temperature and pressure of your gas to really specific values in order to see the behavior. There have been attempts to find models where we don't need to fine-tune, and this is known as "self-organized criticality", but these have basically all failed. Models which are often claimed to possess "self-organized criticality", such as the forest-fire model, do not actually have this behavior. On the empirical side, most purported "power-laws" are, in practice, not obviously power-laws. A long discussion of this can be found here, but essentially the difficulty is that it is hard in practice to distinguish power-laws from other plausible distributions, such as log-normals.
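
To illustrate the empirical difficulty, here is a minimal sketch in Python (the distribution parameters and the tail cutoff are arbitrary choices of mine, not taken from any real dataset). It draws samples from a log-normal, which has no power-law tail by construction, and shows that a standard maximum-likelihood power-law fit to the tail can nevertheless look reasonable:

```python
import numpy as np

rng = np.random.default_rng(0)

# Log-normal samples: by construction there is no power-law tail here.
data = rng.lognormal(mean=0.0, sigma=2.0, size=10_000)

# Fit a power law p(x) ~ x^(-alpha) to the tail above x_min, using the
# standard continuous maximum-likelihood (Hill-type) estimator.
x_min = np.quantile(data, 0.9)  # arbitrary cutoff: fit the top 10%
tail = data[data >= x_min]
alpha = 1.0 + len(tail) / np.sum(np.log(tail / x_min))

# Kolmogorov-Smirnov distance between the empirical tail and the fit.
tail_sorted = np.sort(tail)
empirical_cdf = np.arange(1, len(tail) + 1) / len(tail)
fitted_cdf = 1.0 - (tail_sorted / x_min) ** (1.0 - alpha)
ks = np.max(np.abs(empirical_cdf - fitted_cdf))

print(f"fitted exponent alpha = {alpha:.2f}, KS distance = {ks:.3f}")
# If the KS distance is small (as it typically is for parameters like these),
# the log-normal tail is passing for a power law over the fitted range.
```

Telling the two apart properly requires the kind of careful likelihood-ratio analysis discussed in the link above, which is rarely done in the papers claiming power-laws.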

If we want to talk about the hopelessly vague topics, well, there is really nothing much to be said about them, either by complexity scientists or by anyone else. To pick on "emergence" for the moment, I think this post from The Sequences sums up nicely the emptiness of this word. There is a notion of "emergence" that does appear in physics, known as "effective field theory", which is very central to our current understanding of both particle and condensed matter physics. You can find this discussed in any quantum field theory textbook (I particularly like Peskin & Schroeder). For some reason I've never seen complexity scientists discuss it, which is strange, since this is the precise mathematical language physicists use to describe the emergence of large-scale behavior in physical systems.

TLDR: There is no secret sauce to studying complicated systems, and "complexity science" has not made any progress on this front. To paraphrase a famous quote, "the part that is good is not original, and the part that is original is not correct (and is also misapplied)."

A case against strong longtermism

I don't think so. The "immeasurability" of the future that Vaden has highlighted has nothing to do with the literal finiteness of the timeline of the universe. It has to do, rather, with the set of all possible futures (which is provably infinite). This set is immeasurable in the mathematical sense of lacking sufficient structure to be operated upon with a well-defined probability measure. Let me turn the question around on you: Suppose we knew that the time-horizon of the universe was finite, can you write out the sample space, $\sigma$-algebra, and measure which allows us to compute over possible futures?  


It is certainly not obvious that the universe is infinite in the sense that you suggest. Certainly nothing is "provably infinite" with our current knowledge. Furthermore, although we may not be certain about the properties of our own universe, we can easily imagine worlds rich enough to contain moral agents yet which remain completely finite. For instance, you could imagine a cellular automaton with a finite grid size which only lasts for a finite duration.
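
To make this concrete, here is a minimal sketch of the requested triple for such a finite world (the grid size, number of states, and duration are placeholders): for an automaton with $G$ cells, $k$ states per cell, and $T$ time steps, a possible future is a complete history, so we can take the sample space to be $\Omega = (\{1,\dots,k\}^G)^T$, which is finite with $|\Omega| = k^{GT}$ elements. The $\sigma$-algebra can then be the full power set $\mathcal{F} = 2^\Omega$, and any assignment $P(\omega) \geq 0$ with $\sum_{\omega \in \Omega} P(\omega) = 1$ (for instance the uniform measure $P(A) = |A|/|\Omega|$) gives a perfectly well-defined probability measure over possible futures.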

However, perhaps the more important consideration is the set of possible futures that we must in principle consider when doing EV calculations, rather than the universe we actually inhabit, since even if our universe is finite we would never be able to convince ourselves of this with certainty. Is it this set of possible futures that you think suffers from "immeasurability"?

Thoughts on whether we're living at the most influential time in history

I agree with your criticism of my second argument. What I should have instead said is a bit different. There are actions whose value decreases over time. For instance, all else being equal it is better to implement a policy which reduces existential risk sooner rather than later. Patient philanthropy makes sense only if either (a) you expect the growth of your resources to outpace the value lost by failing to act now, or (b) you expect cheaper opportunities to arise in the future. I don't think there are great reasons to believe either of these is true (or indeed false; I'm not very certain on the issue).
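
As a toy version of condition (a), with made-up functional forms: suppose invested resources grow at rate $r$ while the cost-effectiveness of the best available intervention decays at rate $d$ (because, say, the cheap existential-risk reductions get used up). Then waiting $t$ years multiplies the value of your eventual spending by $e^{rt} \cdot e^{-dt} = e^{(r-d)t}$, so patience pays off iff $r > d$; my claim is that we have little evidence either way about the sign of $r - d$.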

There are two issues with knowledge, and I probably should have separated them more clearly. The more important one is that the kind of decision-relevant information Will is asking for, that is, knowing when and how to spend your money optimally, may well just be unattainable. Optimal strategies with imperfect information probably look very different from optimal strategies with perfect information.

A secondary issue is that you actually need to generate the knowledge. I agree it is unclear whether Will is considering the knowledge problem as part of "direct" or "patient" philanthropy. But since knowledge production might eat up a large chunk of your resources, and since some types of knowledge may be best produced by trying to do direct work, plausibly the "patient philanthropist" ends up spending a lot of resources over time. This is not the image of patient philanthropy I originally had, but maybe I've been misunderstanding what Will was envisaging.

Thoughts on whether we're living at the most influential time in history

I can't speak for why other people down-voted the comment but I down-voted it because the arguments you make are overly simplistic.

The model you have of philanthropy is that an agent in each time period has the choice either to (1) invest or (2) spend their resources, receiving a payoff depending on how "influential" the time is. You argue that the agent should save until they reach the "most influential" time, and then spend all of their resources at that time.

I think this model is misleading for a couple of reasons. First, in the real world we don't know when the most influential time is. In this case the agent may find it optimal to spend some of their resources at each time step. For instance, direct philanthropic donations may give them a better understanding of how influentialness varies over time (i.e., if you don't invest in AI safety researchers now, how will you ever know whether/when AI safety will be a problem?). You may also worry about "going bust": if, while you are being patient, an existential catastrophe (or value lock-in) happens, then the patient long-termist loses their entire investment.

Perhaps one way to phrase how important this knowledge problem is to finding the optimal strategy is to think about it as analogous to owning stocks in a bubble. Your strategy is to sell at the market peak, but we can't do that if we don't know when that will be.
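
A toy simulation of this point (every number here is a placeholder of mine, not something from Will's post): an agent who bets everything on one guessed peak period is compared with one who spreads spending out, in a world where the truly most influential period is unknown and there is a small per-period chance of going bust.

```python
import numpy as np

rng = np.random.default_rng(0)

T = 100      # number of time periods
h = 0.03     # per-period chance of catastrophe / value lock-in ("going bust")
r = 0.01     # per-period growth rate of invested resources (note: h > r here)
RUNS = 20_000

def average_value(strategy):
    """Mean value achieved by a spending rule strategy(t, wealth) -> amount."""
    total = 0.0
    for _ in range(RUNS):
        peak = rng.integers(0, T)      # most influential period, unknown ex ante
        influence = np.ones(T)
        influence[peak] = 10.0         # value per unit spent is 10x at the peak
        wealth, value = 1.0, 0.0
        for t in range(T):
            if rng.random() < h:       # bust: all unspent wealth is lost
                break
            spend = min(strategy(t, wealth), wealth)
            value += spend * influence[t]
            wealth = (wealth - spend) * (1.0 + r)
        total += value
    return total / RUNS

# Strategy 1: wait, then spend everything at the period you *guess* is the peak.
bet_on_guess = lambda t, w: w if t == T // 2 else 0.0
# Strategy 2: spread spending evenly over the remaining periods.
spread_out = lambda t, w: w / (T - t)

print("bet everything on guessed peak:", round(average_value(bet_on_guess), 3))
print("spread spending over time:     ", round(average_value(spread_out), 3))
```

With the hazard rate above the investment return, spreading out comes out ahead in this toy model; the broader point is just that once the peak is unknown and going bust is possible, "save everything for the most influential time" is no longer obviously optimal.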

Second, there are very plausible reasons why now may be the best time to donate. If we can spend money today to permanently reduce existential risk, or to permanently improve the welfare of the global poor, then it is always more valuable to do that action ASAP rather than wait. Likewise, we plausibly get more value by working on biorisk, AI safety, or climate change today than we will in 20 years.

Third, the assumption of no diminishing marginal returns is illogical. We should be thinking about how EAs as a community should spend their money. As an individual, I would not want to hold out for the most influential time if I thought everyone else was doing the same, and of course as a community we can coordinate.

Thoughts on whether we're living at the most influential time in history

I should also point out that, if I've understood your position correctly, Carl, I agree with you. Given my second argument, that a priori we have something like 1 in a trillion odds of being the most influential, I don't think we should end up concluding much about this.

Most importantly, this is because whether or not I am the most influential person is not actually the relevant decision-making question.

But even aside from this, I have a lot more information about the world than just prior odds. For instance, any long-termist has information about their wealth and education which makes them fairly exceptional compared to the average human who has ever lived. They also have reasonable evidence about existential risk this century and plausible (for some loose definition of plausible) ways to influence it. At the end of the day each of us still has low odds of being the most influential person ever, but perhaps with odds more in the 1 in 10 million range, rather than 1 in a trillion.
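
To make the arithmetic explicit (the likelihood ratio is an illustrative guess on my part, not a careful estimate): updating prior odds of $10^{-12}$ on evidence that is, say, $10^5$ times more likely if you are in fact the most influential person gives posterior odds of $10^{-12} \times 10^{5} = 10^{-7}$, i.e. roughly 1 in 10 million.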
