WilliamKiely

Towards a Weaker Longtermism

I think that's an accurate restatement of my view, with the caveat that I do have some moral uncertainty, i.e. I give some weight to the possibility that my true moral values may be different. Additionally, I wouldn't necessarily endorse the claim that people are morally required to endure personal pain; personal pain would just be necessary to do greater amounts of good.

I think the important takeaway is that doing good for future generations via reducing existential risk is probably incredibly important, i.e. much more than half of expected future value exists in the long-term future (beyond a few centuries or millennia from now).

Towards a Weaker Longtermism

I'm not sure I know what you mean by "moral objectivism" here. To try to clarify my view, I'm a moral anti-realist (though I don't think that's relevant to my point) and I'm fairly confident that the following is true about my values: the intrinsic value of my enjoyment of ice cream is no greater than the intrinsic value of other individuals' enjoyment of ice cream (assuming their minds are like mine and can enjoy it in the same way), including future individuals. I think we live at a time in history where our expected effect on the number of individuals that ultimately come into existence and enjoy ice cream is enormous. As such, the instrumental value of my actions (such as my action to eat or not eat ice cream) generally dwarfs the intrinsic value of my conscious experience that results from my actions. So it's not that there's zero intrinsic value to my enjoyment of ice cream; it's just that that intrinsic value is quite trivial in comparison to the net difference in value of the future conscious experiences that come into existence as a result of my decision to eat ice cream.

The fact that I have to spend some resources on making myself happy in order to do the best job at maximizing value overall (which mostly looks like productively contributing to longtermist goals in my view) is just a fact about my nature. I don't see it as a criticism or shortcoming of my nature or of human nature; it's just a thing that is true. So our preferences do matter also; it just happens that when trying to do the most good we find that it's much easier to do good for future generations in expectation than it is to do good for ourselves. So the best thing to do ends up being to help ourselves to the degree that helps us help future generations the most (such that helping ourselves any more or less causes us to do less for longtermism). I think human nature is such that that optimal balance looks like us making ourselves happy, as opposed to us making great sacrifices and living lives of misery for the greater good.

Let me know if you're still unsure why I take the view that I do.

Towards a Weaker Longtermism

Adding to this what's relevant to this thread, re Eliezer's model:

it makes a lot more sense to just admit that part of you cares about galaxies and part of you cares about ice cream and say that neither of these parts are going to be suppressed and beaten down inside you.

The way I think about the 'we can't suppress and beat down our desire for ice cream' point is that it's part of our nature to want ice cream, meaning that we literally can't just stop having ice cream, at least not without it harming our ability to pursue longtermist goals. (This is what I was referring to when I said above that the longtermist part of you would not be able to fulfill its end of the bargain in the world in which it turns out that the universe can support 3^^^3 ops.)

And we should not deny this fact about ourselves. Rather, we should accept it and go about eating ice cream, caring for ourselves, and working on short-termist goals that are important to us (e.g. reducing global poverty even in cases when it makes no difference to the long term future, to use David's example from the OP).

To do otherwise is to try to suppress and beat something out of you that cannot be taken out of you without harming your ability to productively pursue longtermist goals. (What I'm saying is similar to Julia's Cheerfully post.)

I don't think this is a rationalization in general, though it can be in some cases. Rather, in general, I think it is the correct attitude to take (given a "strong longtermist" view) in response to certain facts about our human nature.

The easiest way to see this is just to look at other people in the world who have done a lot of good or who are doing a lot of good currently. They have not beaten the part of themselves that likes ice cream out of themselves. As such, it is not a rationalization for you to make peace with the fact that you like ice cream and fulfill those wants of yours. Rather, that is the smart thing to do to allow you to have more cheer and motivation to productively work on longtermist goals.

So I don't have any problem with the conclusion that the overwhelming majority of expected value lies in the long term future. I don't feel any need to reject this conclusion and tell myself that I should accept a different bottom line that reads that 50% of the value is in the long term future and 50% in the short term. Perhaps the behavioral policy I ought to follow is one in which I devote 50% of my time and effort to myself and my personal goals and 50% of my time and effort to longtermist goals, but that's not because the satisfaction I get from eating ice cream has great intrinsic value relative to future lives; it's because trying to devote much more of my time and effort to longtermist goals is counterproductive to the goal of advancing those longtermist goals. We know it's generally counterproductive because the other people in the world doing the most longtermist good are not actively trying to deny the part of themselves that cares about things like ice cream.

Towards a Weaker Longtermism

I just commented on your linked astronomical waste post:

Wei, insofar as you are making the deal with yourself, consider that in the world in which it turns out that the universe could support doing at least 3^^^3 ops you may not be physically capable of changing yourself to work more toward longtermist goals than you would otherwise. (I.e. human nature is such that making huge sacrifices to your standard of living and quality of life negatively affects your ability to work productively on longtermist goals for years.) If this is the case, then the deal won't work since one part of you can't uphold the bargain. So in the world in which it turns out that the universe can support only 10^120 ops you should not devote less effort to longtermism than you would otherwise, despite being physically capable of devoting less effort.

In a related kind of deal, both parts of you may be capable of upholding the deal, in which case I think such deals may be valid. But it seems to me that you don't need UDT-like reasoning and the hypothetical deal to believe that your future self with better knowledge of the size of the cosmic endowment ought to change his behavior in the same way as implied by the deal argument. Example: Suppose you're a philanthropist with a plan to spend $X of your wealth on short-termist philanthropy and $X on longtermist philanthropy when you're initially uncertain about the size of the cosmic endowment, because you think this is optimal given your current beliefs and uncertainty. Then when you later find out that the universe can support 3^^^3 ops, I think this should cause you to shift how you spend your $2X to give more toward longtermist philanthropy, simply because the longtermist philanthropic opportunities now seem more valuable. Similarly, if you find out that the universe can only support 10^120 ops, then you ought to update toward giving more to short-termist philanthropy.

So is there really a case for UDT-like reasoning plus hypothetical deals our past selves could have made with themselves suggesting that we ought to behave differently than more common reasoning suggests we ought to behave when we learn new things about the world? I don't see it.

All Possible Views About Humanity's Future Are Wild

The other thing is that I'm not sure the "observation selection effect" does much to make this less "wild": anthropically, it seems much more likely that we'd be in a later-in-time, higher-population civilization than an early-in-time, low-population one.

That's a good point: my hypothesis doesn't help to make reality seem any less wild.

This Can't Go On

An important point that I don't think we've said yet is that information density is of course not the same as economic productivity.

What would the Gross Galactic Product be of a maximally-efficient galaxy economy that had reached the 4×10^106 bit information density limit? Would it necessarily be close to $10^106, or close to 10^106 times greater than the size of today's GWP?

Similarly, if annual GWP increases at 2%/year, that does not necessarily mean that the economy's information density (or perhaps more accurately, the information density of the system the economy is enclosed in) is increasing at close to 2%/year, does it?

This Can't Go On

I've never heard of the Bekenstein bound; thanks for sharing this additional way to estimate the limits of economic efficiency.

An upper bound for information density is given by https://en.wikipedia.org/wiki/Bekenstein_bound and it is exceedingly large, so large that there isn't a fundamental limit on the time frames considered here.

This doesn't seem right. Specifically it seems like the Bekenstein bound might be larger than the limit Holden discusses in his post, but not so large as to not be reachable with Business As Usual exponential growth in a short timeframe.

Can we quantify what the Bekenstein bound actually is here to check? Here's my attempt:

Using the Earth as the system and the equation from Wikipedia, it looks like the Bekenstein bound is 9.82*10^74 bits. (Let me know if this is incorrect.) There are 10^50 atoms in Earth, so that's 10^25 bits/atom.

For our galaxy, the corresponding number looks like about 8*10^36 bits per atom, assuming again that I set up the math right. Holden's claim is that the limit to economic efficiency is likely less than the efficiency of an economy that is a factor of 10^70 times larger than today's world economy and that uses less than 10^70 atoms.
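
Here's a rough sketch of that arithmetic as a sanity check (a minimal Python calculation of my own; the Milky Way mass, radius, and atom count are assumed round figures, and the galaxy numbers are sensitive to them):

```python
# Bekenstein bound: I <= 2*pi*R*E / (hbar*c*ln 2), with E = M*c^2, so
# I <= 2*pi*R*M*c / (hbar*ln 2). My own check of the figures above.
import math

HBAR = 1.054571817e-34  # J*s
C = 2.99792458e8        # m/s

def bekenstein_bits(radius_m, mass_kg):
    """Upper bound on the bits storable in a sphere of this radius and mass."""
    return 2 * math.pi * radius_m * mass_kg * C / (HBAR * math.log(2))

# Earth (standard values)
earth_bits = bekenstein_bits(6.371e6, 5.972e24)
print(f"{earth_bits:.2e} bits")                 # ~9.8e74, close to the 9.82*10^74 above
print(f"{earth_bits / 1.33e50:.1e} bits/atom")  # ~7e24, i.e. roughly 10^25 bits/atom

# Milky Way (assumed: ~1.5e12 solar masses incl. dark matter, ~50 kpc radius)
galaxy_bits = bekenstein_bits(1.5e21, 3.0e42)
galaxy_atoms = 3.0e42 / 1.67e-27                      # assuming mostly hydrogen
print(f"{galaxy_bits:.1e} bits")                      # ~1e107, same order as 4*10^106
print(f"{galaxy_bits / galaxy_atoms:.1e} bits/atom")  # ~6e37, within ~10x of 8*10^36
```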

Note however that (a) there's not enough time to colonize the galaxy in 8,200 years even at the speed of light and (b) economic growth during colonization of the galaxy is quadratic, not exponential, since it is limited by the speed of light expansion of civilization. So given these two considerations, I think it makes more sense to look at the Earth-system rather than the Galaxy-system.

Using the Earth as the system instead, the similar claim would be that the maximum size economy that could be sustained by the Earth would be an economy less than 10^50 times as large as today's world economy.

At 2% annual economic growth, it would take 5,813 years for the economy to grow by a factor of 10^50.

If we make the very conservative claim that today's economy only uses 1 bit of information [1], then to grow the economy by a factor of 10^75 would presumably mean that the resulting economy would have at least 10^75 bits (I think this is a reasonable assumption; let me know if it's not), i.e. the Bekenstein bound for the Earth.

At 2% annual growth, it would take 8,720 years to grow the economy by this factor, i.e. definitely still in the range of time frames considered in this post.
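
For reference, the compound-growth arithmetic behind these year counts is just ln(F)/ln(1+r); here's a minimal check (the figures above round slightly differently):

```python
# Years needed to grow by a factor F at a constant annual growth rate r.
import math

def years_to_grow(factor, rate=0.02):
    return math.log(factor) / math.log(1 + rate)

print(years_to_grow(1e50))  # ~5,814 years for a factor of 10^50
print(years_to_grow(1e70))  # ~8,139 years for 10^70 (the ~8,200-year galaxy figure)
print(years_to_grow(1e75))  # ~8,721 years for 10^75 (the Earth Bekenstein bound)
```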


  1. In reality, our current world economy uses more than 1 bit, and the Bekenstein bound would presumably thus be reached before the economy is able to grow by a factor of 10^75. For example, the actual maximum growth factor allowed by the Bekenstein bound might be closer (on a log scale) to 10^50 than to 10^75. I have no idea and would be interested in hearing from anyone who thinks they do have a way to estimate this. ↩︎

How Do AI Timelines Affect Giving Now vs. Later?

Thanks, I only read through Appendix A.

It seems to me that your concern "that the older model trivialized the question by assuming we could not spend our money on anything but AI safety research" could be addressed by dividing existing longtermist or EA capital into one portion to be spent on AI safety and one portion to be spent on other causes. Each capital stock can then be spent at independent rates according to the value of available giving opportunities in their respective cause areas.

Your model already makes the assumption:

Prior to the emergence of AGI, we don't want to spend money on anything other than AI safety research.

And then:

The new model allows for spending money on other things [but only after AGI]

It just seems like a weird constraint to say that with one stock of capital you only want to spend it on one cause (AI safety) before some event but will spend it on any cause after the event.

I'm not sure that I can articulate a specific reason this doesn't make sense right now, but intuitively I think your older model is more reasonable.

How Do AI Timelines Affect Giving Now vs. Later?

Model assumption:

  1. After AGI is developed, we get an amount of utility equal to the logarithm of our remaining capital.

This doesn't seem to me like an appropriate assumption to make if analyzing this from an altruistic perspective.

If friendly AGI is developed, and assuming it can handle all future x-risks for us, then don't we just get utility equal to our cosmic endowment? We get a future of astronomical value. The amount of capital we have left over slightly affects how quickly we can begin colonization, but isn't that a pretty trivial effect compared to the probability of actually getting friendly AGI?

It seems to me then that we should roughly be following Bostrom's maxipok rule to "Maximise the probability of an ‘OK outcome’, where an OK outcome is any outcome that avoids existential catastrophe." In toy model terms, this would be maximizing the probability of friendly AGI without regard for how much capital is left over after AGI.

Am I correct that that's not what your model is doing? If so, why do you think doing what your model is doing (with the 5th assumption quoted above) is more appropriate?

(With your assumption, it seems the model will say that we should spend less on AGI than we would with the assumption I'm saying is more appropriate to make (maximize probability of friendly AGI), since your model will accept a marginally higher probability of x-risk in exchange for a sufficiently higher amount of capital remaining after friendly AGI.)
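
To make that contrast concrete, here's a toy sketch of my own (made-up numbers and a hypothetical p_friendly function, not the post's actual model) showing how a log-of-remaining-capital term pulls the optimum away from simply maximizing the probability of a good outcome:

```python
# Toy illustration only: compare the spending level that maximizes
# P(friendly AGI) alone ("maxipok") with the one that maximizes
# P(friendly AGI) * log(remaining capital). All numbers are invented.
import math

TOTAL = 100.0  # total capital, arbitrary units

def p_friendly(spend):
    # Hypothetical saturating returns to safety spending.
    return 0.5 + 0.4 * (1 - math.exp(-spend / 30.0))

def u_log_capital(spend):
    return p_friendly(spend) * math.log(TOTAL - spend)

spend_grid = [s / 10 for s in range(0, 991)]  # 0.0 .. 99.0
best_log = max(spend_grid, key=u_log_capital)
best_maxipok = max(spend_grid, key=p_friendly)
print(f"log-capital model spends:  {best_log:.1f}")      # interior optimum (~42 here)
print(f"maxipok-style rule spends: {best_maxipok:.1f}")  # essentially everything (99.0)
```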

This Can't Go On

One way I thought of to try to better identify what the physical limits on the size of the economy are likely to be is to ask on Metaculus: "What will real Gross World Product be in 2200, in trillions of 2020 US$?"

Currently I have the high end of the range set to 10^15 trillion 2020 US$, which is 10^13 times as large as the economy is today. Metaculus currently gives 20% credence to the economy being larger than that in 2200. (My forecast is more pessimistic.)

For reference, if we turned the entire mass of the Earth into human brains, there would be about 5*10^14 times more human brains than there are human brains today. (I'm using this as a reference for a large economy. My assumption is that a solar-system-size economy that produces as much value as that many brains is quite efficient, though not necessarily at or near the limit of what's physically possible.)
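
A quick back-of-the-envelope check of that ratio (my own arithmetic, assuming ~1.4 kg per brain and ~7.9 billion people):

```python
# How many human-brain masses fit in the Earth's mass, relative to today's brains?
earth_mass_kg = 5.97e24
brain_mass_kg = 1.4      # rough adult brain mass
current_brains = 7.9e9   # roughly today's world population

max_brains = earth_mass_kg / brain_mass_kg
print(f"{max_brains:.1e} brains")                   # ~4.3e24
print(f"{max_brains / current_brains:.1e}x today")  # ~5e14, matching the figure above
```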

Additionally, note that with 50 years of near-speed-of-light galaxy colonization we'll be able to reach an additional 1,000 stars, and with 150 years of that we'll be able to reach 7,594 stars (I think WolframAlpha undercounts the number of stars that actually exist, but not by an order of magnitude until you get out to further distances). So that means the economy could potentially be ~3-4 orders of magnitude larger by 2200 than the largest economy our solar system can support.

(Note: I've asked a moderator on Metaculus to adjust the high end of the range up from 10^15 to 10^30 trillion 2020$ so that we can capture the high end of peoples' estimates for how large the economy can get.)

(Note that at 30% annual economic growth (the threshold for what Open Phil calls explosive economic growth), the economy could reach 10^15 trillion USD in a mere 115 years starting from its current size. Again, Metaculus gives a 20% chance that this size economy or greater will exist in 2200. Metaculus gives a 30% chance that the economy will be greater than 10^15 trillion USD in 2200. From this I'd speculate that Metaculus would give roughly 10% to >10^20, and roughly 5% to >10^25. Hopefully we'll be able to see this more precisely once the high end of the range is increased to 10^30.)
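
For reference, the growth arithmetic in that note (a minimal sketch assuming today's GWP is roughly $100 trillion; the figures round slightly differently):

```python
# Time to grow from ~$100 trillion to 10^15 trillion (a factor of 10^13)
# at 30%/year vs 2%/year.
import math

today_gwp_trillion = 1e2    # assumed ~$100 trillion (2020 US$)
target_trillion = 1e15      # the question's original upper bound
factor = target_trillion / today_gwp_trillion  # 10^13

for rate in (0.30, 0.02):
    years = math.log(factor) / math.log(1 + rate)
    print(f"{rate:.0%}/yr: {years:.0f} years")  # ~114 years at 30%, ~1,512 years at 2%
```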

(Final note: Only about a dozen unique people have made forecasts on the 2200 GWP Metaculus question so far, so the forecasts are likely very speculative and could probably be improved a lot with more research.)

(UPDATE: Changing ranges on Metaculus questions after forecasts have been made apparently isn't possible, so instead I've created a second version of the question with a range going up to 10^29 trillion 2020 US$ (it should be approved by moderators and visible within a couple days). Hopefully this is high enough to capture >98% of Metaculus' probability mass.)
