All of djbinder's Comments + Replies

I think this is too bearish on the economic modeling. If you want to argue that climate change could pose some risk of civilizational collapse, you have to argue that some pathway exists from climate to a direct impact on society that prevents the society from functioning. When discussing collapse scenarios from climate, most people (I think) are envisaging food, water, or energy production becoming so difficult that this causes further societal failures. But the economic models strongly suggest that the perturbations on these fronts are only "small", so that... (read more)

6
cwa
2y
Thanks for the perspective! I agree in part with your point about trusting the models while the perturbations they predict are small, but even then I'd say that there are two very different possibilities:

1. We can safely ignore real-world nonlinearities, cascading effects, etc., because the economic models suggest the perturbations are small.
2. The predicted perturbations are small because the economic models neglect key real-world nonlinearities and cascading effects.

As long as we think the second option is plausible enough, strong skepticism of the models remains justified. I don't claim to know what's actually the case here --- this seems like a pretty important thing to work on understanding better.

We have decided to extend the deadline to June 5th; if you'd still be able to advertise this in your forecasting newsletter, that would be helpful!

Thanks for pointing this out, but unfortunately we cannot shift the submission deadline.

[This comment is no longer endorsed by its author]

I agree with your first question: the utilitarian needs a measure (they don't need a separate utility function from their measure, but there may be other natural measures to consider, in which case you do need a utility function).

With respect to your second question, I think you can either give up on the infinite cases (because you think they are "metaphysically" impossible, perhaps) or you demand that a regularization must exist (because with this the problem is "metaphysically" underspecified). I'm not sure what the correct approach is here, and I th... (read more)

I think you are right about infinite sets (most of the mathematicians I've talked to have had distinctly negative views about set theory, in part due to the infinities, but my guess is that such views are more common amongst those working on physics-adjacent areas of research). I was thinking about infinities in analysis (such as continuous functions, summing infinite series, integration, differentiation, and so on), which bottom out in some sort of limiting process.

On the spatially unbounded universe example, this seems rather analogous to me to the quest... (read more)

So you're saying a utilitarian needs both a utility function, and a measure with which to integrate over any sets of interest (OK)? And also some transformations to regularise infinite sets (giving up the dream of impartiality)? And still there are some that cannot be regularised, so utilitarian ethics can't order them (but isn't that the problem we were trying to solve)?

As an aside, while neutrality-violations are a necessary consequence of regularization, a weaker form of neutrality is preserved. If we regularize with some discounting factor so that everything remains finite, it is easy to see that "small rearrangements" (where the amount that a person can move in time is finite) do not change the answer, because the difference goes to zero as . But "big rearrangements" can cause differences that grow with . Such situations do arise in various physical situations, and are interpreted as changes to boundary conditions... (read more)
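The small-versus-big rearrangement point can be illustrated numerically. A minimal sketch, assuming an exponential discount factor exp(-eps*t) as the regularizer and a world with one person of utility +1 or -1 at each integer time (the specific worlds and the truncation horizon are my own illustrative choices, not from the comment):

```python
import math

def reg_sum(u, eps, horizon=20000):
    """Regularized total utility: sum of u(t) * exp(-eps * t), truncated at a horizon."""
    return sum(u(t) * math.exp(-eps * t) for t in range(horizon))

alternating = lambda t: 1 if t % 2 == 0 else -1                    # +1, -1, +1, -1, ...
swapped = lambda t: -alternating(t) if t < 2 else alternating(t)   # swap only the first two people
paired = lambda t: 1 if t % 4 in (0, 1) else -1                    # +1, +1, -1, -1, ... (same people, globally rearranged)

for eps in (0.1, 0.01, 0.001):
    vals = [reg_sum(u, eps) for u in (alternating, swapped, paired)]
    print(f"eps={eps}: " + "  ".join(f"{v:.3f}" for v in vals))
```

As eps shrinks, the alternating world and the finitely-rearranged world converge to the same regularized value (about 1/2), while the globally rearranged world, containing exactly the same people, converges to a different value (about 1).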

I think what is true is probably something like "never-ending processes don't exist, but arbitrarily long ones do", but I'm not confident. My more general claim is that there can be intermediate positions between ultrafinitism ("there is a biggest number") and any laissez-faire "anything goes" attitude, where infinities appear without care or scrutiny. I would furthermore claim (but on less solid ground) that the views of practicing mathematicians and physicists fall somewhere in here.

As to the infinite series examples you give, they are mathematically ill... (read more)

I think Section XIII is too dismissive of the view that infinities are not "real", conflating it with ultrafinitism. But the sophisticated version of this view is that infinities should only be treated as "idealized limits" of finite processes. This is, as far as I understand, the default view amongst practicing mathematicians and physicists. If you stray from it and use infinities without specifying the limiting process, it is very easy to produce paradoxes, or at least indeterminacy in the problem. The sophisticated view, then, is not that infinities do... (read more)
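The claim that an answer can depend on the limiting process has a standard textbook illustration: a conditionally convergent series converges to different values under different orderings of the same terms (the Riemann rearrangement theorem). A sketch using the alternating harmonic series, with the classic two-positives-one-negative reordering:

```python
import math

def standard(n):
    """Partial sum of 1 - 1/2 + 1/3 - 1/4 + ...; converges to ln 2."""
    return sum((-1) ** (k + 1) / k for k in range(1, n + 1))

def rearranged(blocks):
    """Exactly the same terms, reordered as 1 + 1/3 - 1/2 + 1/5 + 1/7 - 1/4 + ...;
    converges to (3/2) ln 2 instead."""
    total, odd, even = 0.0, 1, 2
    for _ in range(blocks):
        total += 1 / odd + 1 / (odd + 2) - 1 / even
        odd += 4
        even += 2
    return total

print(standard(100000), math.log(2))          # ~0.6931 either way
print(rearranged(100000), 1.5 * math.log(2))  # ~1.0397 either way
```

Same terms, different limiting processes, different answers: exactly the kind of indeterminacy that specifying the limiting process is meant to remove.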

Agree with djbinder on this, that "infinities should only be treated as 'idealized limits' of finite processes". 


To explain what I mean:

Infinities outside of limiting sequences are not well defined (at least, that is how I would describe it). Sure, you can do some funky set-theoretic maths on them, but from the point of view of physics they don't work and cannot be used.

(My favorite example (HT Jacob Hilton): a man throws tennis balls into a room once every second, numbered 1, 2, 3, 4, ..., and you throw them out once every 2 seconds. How many balls are in the ro... (read more)

But the sophisticated version of this view is that infinities should only be treated as "idealized limits" of finite processes. This is, as far as I understand, the default view amongst practicing mathematicians and physicists.


I think this is false in general (at least for mathematicians), but true for many specific applications. Mathematicians frequently deal with infinite sets, and they don't usually treat them like limits of finite processes, especially if they're uncountable.

How would you handle the possibility of a spatially unbounded universe, e.... (read more)

9
Joe_Carlsmith
2y
A few questions about this:

1. Does this view imply that it is actually not possible to have a world where e.g. a machine creates one immortal happy person per day, forever, who then form an ever-growing line?
2. How does this view interpret cosmological hypotheses on which the universe is infinite? Is the claim that actually, on those hypotheses, the universe is finite after all?
3. It seems like lots of the (countable) worlds and cases discussed in the post can simply be reframed as never-ending processes, no? And then similar (identical?) questions will arise? Thus, for example, w5 is equivalent to a machine that creates a1 at -1, then a3 at -1, then a5 at -1, etc. w6 is equivalent to a machine that creates a1 at -1, then a2 at -1, a3 at -1, etc. What would this view say about which of these machines we should create, given the opportunity? How should we compare these to a w8 machine that creates b1 at -1, b2 at -1, b3 at -1, b4 at -1, etc.?

Re: the Jaynes quote: I'm not sure I've understood the full picture here, but in general, to me it doesn't feel like the central issues here have to do with dependencies on "how the limit is approached," such that requiring that each scenario pin down an "order" solves the problems. For example, I think that a lot of what seems strange about Neutrality-violations in these cases is that even if we pin down an order for each case, the fact that you can re-arrange one into the other makes it seem like they ought to be ethically equivalent. Maybe we deny that, and maybe we do so for reasons related to what you're talking about - but it seems like the same bullet.

In principle I agree, although in practice there are other mitigating factors which mean it doesn't seem to be that relevant.

This is partly because the 10^52 number is not very robust. In particular, once you start postulating such large numbers of future people I think you have to take the simulation hypothesis much more seriously, so that the large size of the far future may in fact be illusory. But even on a more mundane level we should probably worry that achieving 10^52 happy lives might be much harder than it looks.

It is partly also because at a pr... (read more)

Attempts to reject fanaticism necessarily lead to major theoretical problems, as described for instance here and here.

However, questions about fanaticism are not that relevant for most questions about x-risk. The x-risks of greatest concern to most long-termists (AI risk, bioweapons, nuclear weapons, climate change) all have reasonable odds of occurring within the next century or so, and even if we care only about humans living in the next century or so, we would find that these are valuable to prevent. This is mostly a consequence of the huge number of peo... (read more)

2
MichaelStJules
3y
I think timidity, as described in your first link, e.g. with a bounded social welfare function, is basically okay, but it's a matter of intuition (similarly, discomfort with Pascalian problems is a matter of intuition). However, it does mean giving up separability in probabilistic cases, and it may instead support x-risks reduction (depending on the details). I would also recommend https://globalprioritiesinstitute.org/christian-tarsney-the-epistemic-challenge-to-longtermism/ https://globalprioritiesinstitute.org/christian-tarsney-exceeding-expectations-stochastic-dominance-as-a-general-decision-theory/ Also, questions of fanaticism may be relevant for these x-risks, since it's not the probability of the risks that matter, but the difference you can make. There's also ambiguity, since it's possible to do more harm than good, by increasing the risk instead or increasing other risks (e.g. reducing extinction risks may increase s-risks, and you may be morally uncertain about how to weigh these).
1
AndreaSR
3y
Thanks for your answer. I don't think I understand what you're saying, though. As I understand it, it makes a huge difference to the resource distribution that longtermism recommends, because if you allow for e.g. Bostrom's 10^52 happy lives to be the baseline utility, avoiding x-risk becomes vastly more important than if you just consider the 10^10 people alive today. Right?
Answer by djbinder · Jun 22, 2021
0
0

The Great Big Book of Horrible Things is a list of the 100 worst man-made events in history, many of which fit your definition of moral catastrophe.

Practices (rather than events) that might fit your definition include

2
Question Mark
3y
Male genital mutilation is far more widespread and is arguably just as horrible as female genital mutilation.
4
[anonymous]
3y
Adding:

* corporal punishment (especially of minors, by parents, in schools, etc.)
* marital rape
* "marry your rapist" laws
* human trafficking
* witch-hunts (though I am not aware of the number of affected individuals)

I also want to note that the things I have added, and many others added, are still ongoing. It would be naive to say that these are only moral catastrophes of the past.

A few more controversial moral catastrophes:

* religion (arguably counterfactually responsible for at least a few wars and a few really unhealthy cultural traditions, though it is hard to say whether the counterfactual would have been much better)
* no education available for everyone - under certain moral philosophies (I am thinking of Mill, in particular), access to academic thought, literary texts, and critical thinking is of critical importance. Billions were barred from it.

Thanks for the reply Rory! I think at this point it is fairly clear where we agree (quantitative methods and ideas from maths and physics can be helpful in other disciplines) and where we disagree (whether complexity science has new insights to offer, and whether there is a need for an interdisciplinary field doing this work separate from the ones that already exist), and I don't have any more to offer here past my previous comments. And I appreciate your candidness in noting that most complexity scientists don't mention complexity or emergence much in their pu... (read more)

If the OP wants to discuss agent-based modeling, then I think they should discuss agent-based modeling. I don't think there is anything to be gained by calling agent-based models "complex systems", or that taking a complexity science viewpoint adds any value.

Likewise, if you want to study networks, why not study networks? Again, adding the word "complex" doesn't buy you anything.

As I said in my original comment, part of complexity science is good: this is the idea that we can use maths and physics to model other systems. But this is hardly a new insight. Eco... (read more)

@djbinder Thanks for taking the time to write these comments. No need to worry about being negative, this is exactly the sort of healthy debate that I want to see around this subject.

I think you make a lot of fair points, and it's great to have these insights from someone with a background in theoretical physics. However, I would still disagree slightly on some of them; I will try to explain myself below.

I don’t think the only meaningful definition of complex systems is that they aren’t amenable to mathematical analysis, that is perhaps a feature of them, b... (read more)

As someone with a background in theoretical physics, I am very skeptical of the claims made by complexity science. At a meta-level I dislike being overly negative, and I don't want to discourage people posting things that they think might be interesting or relevant on the forum. But I have seen complexity science discussed now by quite a few EAs rather credulously, and I think it is important to set the record straight.

On to the issues with complexity science. Broadly speaking, the problem with "complexity science" is that it is trying to study "complex sy... (read more)

Overall, this seems like a weak criticism worded strongly. It looks like the opposition here is more to the moniker of Complexity Science and its false claims of novelty, but not actually to the study of the phenomena that fall within the Complexity Science umbrella. This is analogous to a critique of Machine Learning that reads "ML is just a rebranding of Statistics". Although I agree that it is not novel and there is quite a bit of vagueness in the field, I disagree on the point that Complexity Science has not made progress.

I think the biggest utility of... (read more)

I don't think so. The "immeasurability" of the future that Vaden has highlighted has nothing to do with the literal finiteness of the timeline of the universe. It has to do, rather, with the set of all possible futures (which is provably infinite). This set is immeasurable in the mathematical sense of lacking sufficient structure to be operated upon with a well-defined probability measure. Let me turn the question around on you: Suppose we knew that the time-horizon of the universe was finite, can you write out the sample space, $\sigma$-algebra, and measu

... (read more)
3
vadmas
3y
Aarrrgggggg, was trying to resist weighing in again... but I think there's some misunderstanding of my argument here. I wrote: A few comments:

* We're talking about possible universes, not actual ones, so cast-iron guarantees about the size and future lifespan of the universe are irrelevant (and impossible anyway).
* I intentionally framed it as someone shouting a natural number in order to circumvent any counterargument based on physical limits of the universe. If someone can think it, they can shout it.
* The set of possible futures is provably infinite because the "shouting a natural number" argument establishes a one-to-one correspondence between the set of *possible* (triple emphasis on the word) futures and the set of natural numbers, which is provably infinite (see proof here).
* I'm not using fancy or exotic mathematics here, as Owen can verify. Putting sets in one-to-one correspondence with the natural numbers is the standard way one proves a set is countably infinite. (See https://en.wikipedia.org/wiki/Countable_set.)
* Physical limitations regarding the largest number that can be physically instantiated are irrelevant to answering the question "is this set finite or infinite?" Mathematicians do not say the set of natural numbers is finite because there are a finite number of particles in the universe. We're approaching numerology territory here...

Okay, this will hopefully be my last comment, because I'm really not trying to be a troll in the forum or anything. But please represent my argument accurately!

I agree with your criticism of my second argument. What I should have instead said is a bit different. There are actions whose value decreases over time. For instance, all else being equal it is better to implement a policy which reduces existential risk sooner rather than later. Patient philanthropy makes sense only if either (a) you expect the growth of your resources to outpace the value lost by failing to act now, or (b) you expect cheaper opportunities to arise in the future. I don't think there are great reasons to believe either of these is true (or... (read more)

I can't speak for why other people down-voted the comment but I down-voted it because the arguments you make are overly simplistic.

The model you have of philanthropy is that an agent in each time period has the choice to either (1) invest or (2) spend their resources, and then gets a payoff depending on how "influential" the time is. You argue that the agent should then save until they reach the "most influential" time, before spending all of their resources at this most influential time.

I think this model is misleading for a couple of reasons. First... (read more)

If we can spend money today to permanently reduce existential risk, or to permanently improve the welfare of the global poor, then it is always more valuable to do that action ASAP rather than wait.

This seems straightforwardly untrue, because you may be able to permanently reduce existential risk more cheaply in the future.

I also think (but am not sure) that Will doesn't include solving the knowledge problem as part of "direct action", and so your first critique is not very relevant to the choice between patient philanthropy and direct action, because probably you'll want to gain knowledge in either case.

I should also point out that, if I've understood your position correctly Carl, I agree with you. Given my second argument, that a priori we have something like 1-in-a-trillion odds of being the most influential, I don't think we should end up concluding much about this.

Most importantly, this is because whether or not I am the most influential person is not actually the relevant decision-making question.

But even aside from this, I have a lot more information about the world than just the prior odds. For instance, any long-termist has information about their ... (read more)

In his first comment Will says he prefers to frame it as "influential people" rather than "influential times". In particular, if you read his article (rather than the blog post), at the end of section 5 he says he thinks it is plausible that the most influential people may live within the next few thousand years, so I don't think his odds that this century is the most influential can be very low (at a guess, one in a thousand?). I might be wrong though; I'd be very curious to know what Will's prior is that the most influential person will be alive this century.

7
CarlShulman
3y
It's the time when people are most influential per person or per resource.

I'm confused as to what your core outside-view argument is Will. My initial understanding of it was the following:
(A1) We are in a potentially large future with many trillions of trillions of humans
(A2) Our prior should be that we are randomly chosen amongst all living humans
then we conclude that  
(C) We should have extremely low a priori odds of being amongst the most influential
To be very crudely quantitative about this, multiplying the number of humans on earth by the number of stars in the visible universe and the lifetime of the Earth, we qu... (read more)
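The crude multiplication can be written out explicitly. All three inputs below are rough, assumed round numbers chosen only to illustrate the order of magnitude, not figures from the comment:

```python
import math

# Rough, assumed round numbers for illustration only.
humans_on_earth = 1e10            # ~ current Earth population
stars_visible_universe = 1e22     # rough astronomical estimate
earth_lifetime_generations = 1e8  # ~ Earth's lifetime divided by a human generation

potential_people = humans_on_earth * stars_visible_universe * earth_lifetime_generations
print(f"prior odds of being the single most influential: ~1 in 10^{math.log10(potential_people):.0f}")
```

With these inputs the prior lands around 1 in 10^40; the exact exponent is not the point, only that any such product is astronomically large.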

2
Vasco Grilo
8mo
Hi Damon, I do not think your calculation is correct:
9
CarlShulman
3y
The argument is not about whether Will is the most influential person ever, but about whether our century has the best per-person influence. With a population of 10 billion+ this century (7.8 billion alive now, plus growth and turnover for the rest of the century), it's more like 1 in 13 of all people so far who are alive today, if you buy the 100-billion-humans-thus-far population figure (I have qualms about other hominids, etc., but still the prior gets quite high given A1, and A1 is too low).
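The "1 in 13" figure follows directly from the round numbers in the comment:

```python
humans_ever = 100e9  # the "100 billion humans thus far" estimate
alive_today = 7.8e9  # roughly the population alive now

print(f"about 1 in {humans_ever / alive_today:.0f} people ever born is alive today")
```

100 billion divided by 7.8 billion is about 12.8, hence "roughly 1 in 13" — a vastly higher base rate than the 1-in-a-trillion style priors discussed above.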

I'm curious what numbers you are using for Europe's growth between 1000-1700; I didn't think European growth over that period was particularly unusual. It is worth remembering that Europe in 1000 (particularly northern Europe) was a backwater and so benefitted from catch-up growth relative to (say) China. I also don't know how much of European growth was driven by extensive growth in eastern Europe, which doesn't seem to be relevant to the great divergence.

Arguments against the idea that Europe c1700 was technologically ahead o... (read more)

7
Paul_Christiano
4y
I took numbers from Wikipedia, but have seen different numbers that seem to tell the same story, although their quantitative estimates disagree a ton.

* https://en.wikipedia.org/wiki/Medieval_demography gives +60% growth from 1000-1500
* https://en.wikipedia.org/wiki/Demographics_of_Europe gives +60% growth from 1500-1700
* (I also think 0-1000 growth is in the ballpark of +60%?)

The first two numbers are both higher than growth rates could plausibly have been in a sustained way during any previous part of history (and the 0-1000AD one probably is as well), and they seem to be accelerating rather than returning to a lower mean (as must have happened during any historical period of similar growth). My current view is that China was also historically unprecedented at that time and probably would have had an IR shortly after Europe. I totally agree that there is going to be some mechanistic explanation for why Europe caught up with and then overtook China, but from the perspective of the kind of modeling we are discussing, I feel super comfortable calling it noise (and expecting similar "random" fluctuations going forward that also have super messy contingent explanations).
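For reference, the quoted totals imply small but clearly accelerating annualized rates. A quick sketch, using the +60% figures from the comment (the conversion formula is standard; nothing else is assumed):

```python
def annual_rate(total_growth, years):
    """Annualized growth rate implied by a total growth factor over a period."""
    return (1 + total_growth) ** (1 / years) - 1

print(f"1000-1500 (+60% over 500y): {annual_rate(0.60, 500):.3%} per year")
print(f"1500-1700 (+60% over 200y): {annual_rate(0.60, 200):.3%} per year")
```

The same total growth compressed into 200 years instead of 500 means the annualized rate roughly 2.5x'd, which is the acceleration the comment is pointing at.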

There seems to be a major disconnect between the Hyperbolic Growth Hypothesis and the great divergence literature. If we take the Hyperbolic Growth Hypothesis seriously, it seems that there is really little to explain about the industrial revolution. It is just an inevitable consequence of hyperbolic growth and is not qualitatively distinct from what occurred before. Although I'm not an economic historian, I have read a number of books on the great divergence and none of them seem to agree with that analysis. There may be disagreement about the causes a... (read more)

6
bgarfinkel
4y
I also pretty strongly have this intuition: the Kremer model, and the explanation it gives for the Industrial Revolution, is in tension with the impressions I've formed from reading the great divergence literature. Although, to echo Max's comment, you can 'believe' the Kremer model without also thinking that an 18th/19th century Industrial Revolution was inevitable. It depends on how much noise you allow. One of the main contributions in David Roodman's recent report is to improve our understanding of how noise/stochasticity can result in pretty different-looking growth trajectories, if you roll out the same hyperbolic growth model multiple times. For example, he fits a stochastic model to data from 10000BC to the present, then reruns the model using the fitted parameters. In something like a quarter of the cases, the model spits out a growth takeoff before 1AD. I believe the implied confidence interval, for when the Industrial Revolution will happen, gets smaller and smaller as you move forward through history. I'm actually not sure, then, how inevitable the model says the IR would be by (e.g.) 1000AD. If it suggests a high level of inevitability in the timing, for instance implying the IR basically had to happen by 2000, then that would be cause for suspicion; the model would likely be substantially understating contingency. (As one particular contingency you mention: It seems super plausible to me, especially, that if the Americas didn't turn out to exist, then the Industrial Revolution would have happened much later. But this seems like a pretty random/out-of-model fact about the world.)
4
Max_Daniel
4y
I agree this is puzzling, and I'd love to see more discussion of this. However, it seems that, at least in principle, there could be a pretty boring explanation: the HGH is correct about the fundamental trend, and the literature on the Industrial Revolution has correctly identified (and maybe explained) a major instance of noise. Note also that the phenomenon of socially contingent individual behavior nevertheless being governed by simple macro-laws with few parameters is relatively ubiquitous. E.g. the exact timing of all major innovations since the Industrial Revolution (electricity, chemical engineering, computers, ...) seems fairly contingent, and yet overall the growth rate is remarkably close to constant. Similarly for the rest of Kaldor's facts.