Carl_Shulman

Comments

Can I have impact if I’m average?

Here are two posts from Wei Dai, discussing the case for some things in this vicinity (renormalizing in light of the opportunities):

https://www.lesswrong.com/posts/Ea8pt2dsrS6D4P54F/shut-up-and-divide

https://www.lesswrong.com/posts/BNbxueXEcm6dCkDuk/is-the-potential-astronomical-waste-in-our-universe-too

What is the likelihood that civilizational collapse would directly lead to human extinction (within decades)?

Thanks for this detailed post on an underdiscussed topic! I agree with the broad conclusion that extinction via partial population collapse and infrastructure loss, rather than via a catastrophe potent enough to leave no or almost no survivors (or indirectly enabling some later extinction-level event), has very low probability. Some comments:

  • Regarding case 1, with a pandemic leaving 50% of the population dead but no major infrastructure damage, I think you can make much stronger claims about there not being 'civilization collapse', meaning near-total failure of industrial food, water, and power systems. Indeed, collapse so defined from that stimulus seems implausible to me for straightforward quantitative reasons:
    • There is no WMD war here, otherwise there would be major infrastructure damage.
    • If half of people are dead, that cuts the need for food and water by half (doubling per capita stockpiles), while already planted calorie-rich crops can easily be harvested with a half-size workforce.
    • Today agriculture makes up closer to 5% than 10% of the world economy, and most of that effort is expended on luxuries such as animal agriculture, expensive fruits, avoidable food waste, and other things that aren't efficient ways to produce nutrition. Adding all energy (again, most of which goes to luxuries rather than basic survival) brings the total to ~15%, with perhaps 5% going to necessities (2.5% to produce for half the population); see the arithmetic sketch after this list. That leaves a vast surplus workforce.
    • The catastrophe doubles easily accessible fossil fuels and high-quality agricultural land per surviving person, so just continuing to run the best 50% of farmland and the best 50% of oil wells would mean an increase in food and fossil fuels per person.
    • Likewise, there is a surplus of agricultural equipment, power plants, and water treatment plants, and operating the better half of them with the surviving half of the population could improve per capita availability. These plants are parallel and independent enough that running half of them would not collapse productivity, which we can confirm by looking back to the time when there were only half as many.
    • Average hours worked per capita are already at historic lows, leaving plenty of room for trained survivors to work longer shifts while others switch over from other fields and retrain.
    • Historical plagues such as the Black Death or smallpox in the Americas did not cause a breakdown of food production per capita for the survivors.
    • Historical wartime production changes show enormous and adequate flexibility in production.
  • Re the likelihood of survival without industrial agriculture systems, the benchmark should be something closer to preindustrial European agriculture, not hunter-gatherers. You discuss this but it would be helpful to put more specific credences on those alternatives.
    • The productivity of organic agriculture is still enormously high relative to hunting and gathering.
    • Basic knowledge of crop rotation, access to improved and globally spread crop varieties such as potatoes, ploughs, etc. permitted very high population density before industrial agriculture, with very localized supply chains. One can see this in colonial agricultural communities, which could be largely self-sustaining (mines for metal tools being one of the worst supply constraints, but not a problem in a world where so much metal has already been mined and is sitting around for reuse).
    • By the same token, talking about 'at least 10%' of 1-2 billion subsistence farmers continuing agriculture is a very low figure.  I assume it is a fairly extreme lower bound, but it would be helpful to put credences on lower bounds and to help distinguish them from more likely possibilities.
  • Re food stockpiles:
    • "I’m ignoring animal agriculture and cannibalism, in part because without a functioning agriculture system, it’s not clear to me whether enough people would be able to consume living beings."
      • Existing herds of farmed animals would likely be killed and eaten/preserved.
        • If transport networks are crippled, then this could be for local consumption, but that would increase food inequality while still raising the likelihood of survival in dire situations.
      • There are about 1 billion cattle alone, with several hundred kg of edible mass each, plus about a billion sheep, ~700 million pigs, and 450 million goats.
      • In combination these could account for hundreds of billions of human-days of nutritional requirements (I think these make up a large share of 'global food stocks' in your table of supplies); rough arithmetic is sketched after this list.
    • Already planted crops ready to harvest constitute a huge stockpile for the scenarios without infrastructure damage.
    • Particularly for severe population declines, fishing is limited by fish stocks rather than by fishing capacity: existing fishing boats capture and kill vast quantities of fish within days when short fishing seasons open. If the oceans are not damaged, this provides immense food resources to any survivors with modern fishing knowledge and some surviving fishing equipment.
  • "But if it did, I expect that the ~4 billion survivors would shrink to a group of 10–100 million survivors during a period of violent competition for surviving goods in grocery stores/distribution centers, food stocks, and fresh water sources."
  • "So what, concretely, do I think would happen in the event of a catastrophe like a “moderate” pandemic — one that killed 50% of people, but didn’t cause infrastructure damage or climate change? My best guess is that civilization wouldn’t actually collapse everywhere. But if it did, I expect that the ~4 billion survivors would shrink to a group of 10–100 million survivors during a period of violent competition for surviving goods in grocery stores/distribution centers, food stocks, and fresh water sources."
    • For the reasons discussed above I strongly disagree with the claim after "I expect."
  • "All this in mind, I think it is very likely that the survivors would be able to learn enough during the grace period to be able to feed and shelter themselves ~indefinitely."
    • I would say the probability should be higher here.
  • Regarding radioactive fallout, an additional factor not discussed is the decline of fallout danger over time: lethal areas are quite different over the first week vs the first year, etc.
  • Re Scenario 2: "Given all of this, my subjective judgment is that it’s very unlikely that this scenario would more or less directly lead to human extinction." I would again say this is even less likely.
  • In general I think extinction probability from WMD war is going to be concentrated in the plausible future case of greatly increased/deadlier arsenals: millions of nuclear weapons rather than thousands, enormous and varied bioweapons arsenals, and billions of anti-population hunter-killer robotic drones slaughtering survivors including those in bunkers, all released in the same conflict.
  • "Given this, I think it’s fairly likely, though far from guaranteed, that a catastrophe that caused 99.99% population loss, infrastructure damage, and climate change (e.g. a megacatastrohe, like a global war where biological weapons and nuclear weapons were used) would more or less directly cause human extinction."
    • This seems like a sign error, differing from your earlier and later conclusions?
    • "I think it’s fairly unlikely that humanity would go extinct as a direct result of a catastrophe that caused the deaths of 99.99% of people (leaving 800 thousand survivors), extensive infrastructure damage, and temporary climate change (e.g. a more severe nuclear winter/asteroid impact, plus the use of biological weapons)."



 

Longtermism which doesn't care about Extinction - Implications of Benatar's asymmetry between pain and pleasure

It sounds like you're assuming a common scale between the theories (maximizing expected choice-worthiness).

A common scale isn't necessary for my conclusion (I think you're substituting it for a stronger claim?), and I didn't invoke it. As I wrote in my comment, on negative utilitarianism, s-risks that are many orders of magnitude smaller than worse ones, without correspondingly huge differences in probability, get ignored in favor of the latter. On variance normalization, or bargaining solutions, or a variety of other methods that don't amount to the dictatorship of one theory, the weight for an NU view is not going to spend its decision-influence on the former rather than the latter when both are non-vanishing possibilities.

I would think something more like your hellish example + billions of times more happy people would be more illustrative. Some EAs working on s-risks do hold lexical views.

Sure (which will make the s-risk definition even more inapt for those people), and on a lexical view those scenarios will be approximately ignored relative to scenarios where something more like 1/100 or 1/1000 of people are tortured, so there will still be the same problem of s-risk not tracking what's action-guiding or a big deal in the history of suffering.

Longtermism which doesn't care about Extinction - Implications of Benatar's asymmetry between pain and pleasure

Just a clarification: s-risks (risks of astronomical suffering) are existential risks. 

This is not true by the definitions given in the original works that defined these terms. Existential risk is defined to only refer to things that are drastic relative to the potential of Earth-originating intelligent life:

where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.

Any x-risk is going to be in the same ballpark of importance if it occurs, and immensely important to the history of Earth-originating life: a big deal relative to that future potential.

S-risk is defined as just any case where there's vastly more total suffering than Earth history heretofore, not one where suffering is substantial relative to the downside potential of the future.

S-risks are events that would bring about suffering on an astronomical scale, vastly exceeding all suffering that has existed on Earth so far.

In an intergalactic civilization making heavy use of most stars, that definition would be met both by a situation where things are largely utopian but 1 in 100 billion people per year get a headache, and by a hell where everyone is tortured all the time. These are both defined as s-risks, but the bad elements of the former are microscopic compared to the latter, or to the expected value of suffering.
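To make the contrast concrete, here is a toy comparison of the two scenarios; the civilization size and the Earth-history baseline are assumed figures chosen only for illustration.

```python
# Toy comparison of the headache world and the hell world above. The population
# of the intergalactic civilization and the count of humans who have ever lived
# are illustrative assumptions, not estimates.

civ_population = 1e30       # assumed population of an intergalactic civilization
headache_rate = 1e-11       # 1 in 100 billion people per year get a headache
earth_humans_ever = 1e11    # ~100 billion humans have ever lived (rough figure)

headache_person_years = civ_population * headache_rate    # mild suffering per year
hell_person_years = civ_population                         # everyone tortured, constantly

print(f"Headache world vs all humans ever: {headache_person_years / earth_humans_ever:.0e}x")
print(f"Hell world vs headache world (ignoring severity): {hell_person_years / headache_person_years:.0e}x")
# Both comparisons come out 'astronomical', so both count as s-risks by the definition,
# but the second scenario involves ~1e11 times more suffering even before weighting
# torture as far worse than a headache.
```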

With even a tiny weight on views valuing good parts of future civilization the former could be an extremely good world, while the latter would be a disaster by any reasonable mixture of views. Even with a fanatical restriction to only consider suffering and not any other moral concerns, the badness  of the former should be almost completely ignored relative to the latter if there is non-negligible credence assigned to both.

So while x-risks are all critical for civilization's upside potential if they occur, almost all s-risks will be incredibly small relative to the potential for suffering, and something being an s-risk doesn't mean its occurrence would be an important part of the history of suffering when far larger possibilities also have non-vanishing credence.

From the s-risk paper:

We should differentiate between existential risks (i.e., risks of “mere” extinction or failed potential) and risks of astronomical suffering (“suffering risks” or “s-risks”). S-risks are events that would bring about suffering on an astronomical scale, vastly exceeding all suffering that has existed on Earth so far.

The above distinctions are all the more important because the term “existential risk” has often been used interchangeably with “risks of extinction”, omitting any reference to the future’s quality. Finally, some futures may contain both vast amounts of happiness and vast amounts of suffering, which constitutes an s-risk but not necessarily a (severe) x-risk. For instance, an event that would create 10^25 unhappy beings in a future that already contains 10^35 happy individuals constitutes an s-risk, but not an x-risk.

If one were to make an analog to the definition of s-risk for loss of civilization's potential it would be something like risks of loss of potential welfare or goods much larger than seen on Earth so far. So it would be a risk of this type to delay interstellar colonization by a few minutes and colonize one less  star system. But such 'nano-x-risks' would have almost none of the claim to importance and attention that comes with the original definition of x-risk. Going from 10^20 star systems to 10^20 star systems less one should not be put in the same bucket as premature extinction or going from 10^20 to 10^9. So long as one does not have a completely fanatical view and gives some weight to different perspectives, longtermist views concerned with realizing civilization's potential should give way on such minor proportional differences to satisfy other moral concerns, even though the absolute scales are larger.

Bostrom's Astronomical Waste paper specifically discusses such things, but argues that, since their impact would be so small relative to existential risk, they should not be a priority (at least in utilitarian-ish terms) relative to the latter.

This disanalogy between the x-risk and s-risk definitions is a source of ongoing frustration to me, as s-risk discourse thus often conflates hellish futures (which are existential risks, and especially bad ones), or possibilities of suffering on a scale significant relative to the potential for suffering (or what we might expect), with bad events many orders of magnitude smaller or futures that are utopian by common sense standards and compared to our world or the downside potential.

I wish people interested in s-risks that are actually near worst-case scenarios, or that are large relative to the background potential or expectation of downside, would use a different word or definition, one that would make it possible to say things like 'people broadly agree that a future constituting an s-risk is a bad one, and not a utopia' or at least 'the occurrence of an s-risk is of the highest importance for the history of suffering.'

We're Lincoln Quirk & Ben Kuhn from Wave, AMA!

The $1B commitment attributed to Musk early on is different from the later Microsoft investment. The former went away despite the media hoopla.

CEA's 2020 Annual Review

It's invested in unleveraged index funds, but was out of the market for the pandemic crash and bought in at the bottom. Because it's held with Vanguard as a charity account, it's not easy to invest it as aggressively as I invest my personal funds earmarked for donation (altruistic investors can afford lower risk-aversion than those investing for personal consumption), although I am exploring options in that area.

The fund has been used to finance the CEA donor lottery, and to make grants to ALLFED and Rethink Charity (for nuclear war research). However, it should be noted that I only recommend grants for the fund that I think aren't a better fit for other funding sources I can make recommendations to, often with special circumstances or restricted funding, so grants it has made should not be taken as recommendations from me for other donors to donate to the same things at the margin. [That applies to the object-level grants; using donor lotteries is generally sensible for a wide variety of donation views.]

If Causes Differ Astronomically in Cost-Effectiveness, Then Personal Fit In Career Choice Is Unimportant

Longtermists sometimes argue that some causes matter extraordinarily more than others—not just thousands of times more, but 10^30 or 10^40 times more. 

I don't think any major EA or longtermist institution believes this about expected impact for 10^30 differences. There are too many spillovers for that: e.g. if doubling the world economy of $100 trillion/yr would modestly shift x-risk or the fate of wild animals, then interventions that affect economic activity have to have an expected absolute value of impact much greater than 10^-30 times that of the most impactful interventions.
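As a minimal illustration, with an arbitrarily small assumed effect of economic growth on x-risk and an arbitrary cost-effectiveness for a direct x-risk program (both placeholders, not estimates), generic economic activity still lands many orders of magnitude above 10^-30 of the best interventions per dollar:

```python
# Illustrative spillover arithmetic. The x-risk sensitivity of economic growth and
# the cost-effectiveness of a hypothetical direct x-risk program are arbitrary
# assumptions chosen only to show the orders of magnitude involved.

world_economy = 100e12             # ~$100 trillion/yr world economy
xrisk_shift_from_doubling = 1e-6   # assume doubling the economy shifts x-risk by only 0.0001%
per_dollar_spillover = xrisk_shift_from_doubling / world_economy   # ~1e-20 x-risk reduction per $

direct_program_cost = 1e9          # assume a focused program spends $1B...
direct_program_shift = 1e-4        # ...to reduce x-risk by 0.01%
per_dollar_direct = direct_program_shift / direct_program_cost     # ~1e-13 x-risk reduction per $

ratio = per_dollar_direct / per_dollar_spillover
print(f"Direct x-risk work vs generic economic activity, per dollar: {ratio:.0e}x")  # ~1e7, far from 1e30
```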



This argument requires that causes differ astronomically in relative cost-effectiveness. If cause A is astronomically better than cause B in absolute terms, but cause B is 50% as good in relative terms, then it makes sense for me to take a job in cause B if I can be at least twice as productive.

I suspect that causes don't differ astronomically in cost-effectiveness. Therefore, people should pay attention to personal fit when choosing an altruistic career, and not just the importance of the cause.
 

The premises and conclusion don't seem to match here. A difference of 10^30x is crazy, but rejecting that doesn't mean you don't have huge practical differences in impact like 100x or 1000x. Those would be plenty to come close to maxing out the possible effect of differences between causes (since if you're 1000x as good at rich-country homelessness relief as at preventing pandemics, then if nothing else your fame for rich-country poverty relief would be a powerful resource to help out in other areas, like public endorsements of good anti-pandemic efforts).

The argument seems sort of like "some people say if you go into careers like quant trading you'll make 10^30 dollars and can spend over a million dollars to help each animal with a nervous system. But actually you can't make that much money even as a quant trader, so people should pay attention to fit with different careers in the world when trying to make money, since you can make more money in a field with half the compensation per unit productivity if you are twice as productive there." The range for realistic large differences in compensation between fields (e.g. fast food cashier vs quant trading) is missing from the discussion.

You define astronomical differences at the start as 'not just thousands of times more', but the range up to thousands of times more is where all the action is.

Thoughts on whether we're living at the most influential time in history

It's the time when people are most influential per person or per resource.

Thoughts on whether we're living at the most influential time in history

This seems important to me because, for someone claiming that we should think that we're at the HoH, the update on the basis of earliness is doing much more work than updates on the basis of, say, familiar arguments about when AGI is coming and what will happen when it does.  To me at least, that's a striking fact and wouldn't have been obvious before I started thinking about these things.

It seems to me the object level is where the action is, and the non-simulation Doomsday Arguments mostly raise a phantom consideration that cancels out (in particular, cancelling out re whether there is an influenceable lock-in event this century).

You could say a similar thing about our being humans rather than bacteria, which cumulatively outnumber us more than 1,000,000,000,000,000,000,000,000 times over on Earth thus far, according to the paleontologists.

Or you could go further and ask why we aren't neutrinos? There are more than 1,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000 of them in the observable universe.

However extravagant the class you pick, it's cancelled out by the knowledge that we find ourselves in our current situation. I think it's more confusing than helpful to say that our being humans rather than neutrinos is doing more than 10^70 times as much work as object-level analysis of AI in the case for attending to x-risk/lock-in with AI. You didn't need to think about that in the first place to understand AI or bioweapons; it was an irrelevant distraction.

The same is true for future populations that know they're living in intergalactic societies and the like. If we compare possible world A, where future Dyson spheres can support a population of P (who know they're in that era), and possible world B, where future Dyson spheres can support a population of 2P, they don't give us much different expectations of the number of people finding themselves in our circumstances, and so the update cancels out.
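A toy Bayesian version of this cancellation, following the framing above that what matters is the number of observers who find themselves in our apparent circumstances; all numbers are illustrative assumptions.

```python
# Toy model: worlds A and B predict the same number of observers in our apparent
# circumstances (early, pre-Dyson-sphere people), so conditioning on being such an
# observer leaves the prior between them unchanged, however large their futures are.

early_observers_A = 1e11   # people in our apparent circumstances in world A (assumption)
early_observers_B = 1e11   # the same number in world B, by assumption
future_pop_A = 1e30        # world A: Dyson spheres support P people who know they're in that era
future_pop_B = 2e30        # world B: Dyson spheres support 2P such people

prior_A, prior_B = 0.5, 0.5

# The likelihood of finding yourself in our circumstances tracks how many such
# observers each world contains; future_pop_A and future_pop_B deliberately do not
# enter the calculation, which is the point of the cancellation.
posterior_A = prior_A * early_observers_A / (prior_A * early_observers_A + prior_B * early_observers_B)
print(f"Posterior on world A after the anthropic update: {posterior_A:.2f}")  # 0.50, i.e. no change
```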

The simulation argument (or a brain-in-vats story or the like) is different and doesn't automatically  cancel out  because it's a way to make our observations more likely and common. However, for policy it does still largely cancel out, as long as the total influence of people genuinely in our apparent circumstances is a lot greater than that of all simulations with apparent circumstances like ours: a bigger future world means more influence for genuine inhabitants of important early times and also more simulations. [But our valuation winds up being bounded by our belief  about  the portion of all-time resources allocated to sims in apparent positions like ours.]

Another way of thinking about this is that, prior to getting confused by any anthropic updating, if you were going to set a policy for humans who find themselves in our apparent situation across non-anthropic possibilities assessed at the object level (humanity doomed, Time of Perils, early lock-in, no lock-in), you would just want to add up the consequences of the policy across genuine early humans and sims in each (non-anthropically assessed) possible world.

A vast future gives more chances for influence on  lock-in later, which might win out as even bigger than this century (although this gets rapidly less likely with time and expansion), but it shouldn't change our assessment of lock-in this century, and a substantial chance of that gives us a good chance of HoH (or simulation-adjusted HoH).

Nuclear war is unlikely to cause human extinction

I agree it's very unlikely that a nuclear war discharging current arsenals could directly cause human extinction. But the conditional probability of extinction given all-out nuclear war can go much higher if the problem gets worse. Some aspects of this:

-at the peak of the Cold War arsenals there were over 70,000 nuclear weapons, not 14,000
-this Brookings estimate puts the spending on building the US nuclear arsenal at several trillion current dollars, with lower marginal costs per weapon, e.g. $20M per weapon and $50-100M all-in for ICBMs
-economic growth since then means the world could already afford far larger arsenals in a renewed arms race
-current US military expenditure is over $700B annually, about 1/30th of GDP; at the peak of the Cold War in the 50s and 60s it was about 1/10th; Soviet expenditure was proportionally higher
-so with 1950s proportional military expenditures, half going to nukes, the US and China could each produce 20,000+ ICBMs annually at the above costs, each of which could be fitted with MIRVs and several warheads, building up to millions of warheads over a decade or so (rough arithmetic sketched below); the numbers could be higher for cheaper delivery systems
-economies of scale and improvements in technology would likely bring down the per warhead cost
-if AI and robotics greatly increase economic growth the above numbers could be increased by orders of magnitude
-radiation effects could be intentionally greatly increased with alternative warhead composition
-all-out discharge of strategic nuclear arsenals is also much more likely to be accompanied by simultaneous deployment of other WMD, including pandemic bioweapons (which the Soviets pursued as a strategic weapon for such circumstances) and drone swarms (which might kill survivors in bunkers); the combined effects of future versions of all of these WMD at once may synergistically cause extinction
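A rough sketch of the arsenal-scale arithmetic above, using the GDP share, nuclear-spending split, and per-ICBM cost from the list; the warheads-per-missile figure is an illustrative assumption.

```python
# Rough arsenal-scale arithmetic using the figures in the list above; the MIRV
# loading per missile is an illustrative assumption.

us_gdp = 21e12                   # ~$21T, implied by $700B being about 1/30th of GDP
cold_war_military_share = 0.10   # ~1/10th of GDP on the military at the 1950s-60s peak
nuclear_share = 0.5              # assume half of military spending goes to nuclear forces
cost_per_icbm = 50e6             # $50-100M all-in per ICBM; using the low end
warheads_per_icbm = 5            # illustrative MIRV loading (assumption)
buildup_years = 10

annual_nuclear_budget = us_gdp * cold_war_military_share * nuclear_share   # ~$1T/yr
icbms_per_year = annual_nuclear_budget / cost_per_icbm                     # ~21,000/yr
warheads_after_buildup = icbms_per_year * buildup_years * warheads_per_icbm

print(f"ICBMs produced per year: {icbms_per_year:,.0f}")
print(f"Warheads after a decade of buildup: {warheads_after_buildup:,.0f}")  # on the order of a million
```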
