CarlShulman

3980 · Joined Aug 2014

Comments: 336

This is much more of a problem (and an overwhelming one) for risks/opportunities that are microscopic compared to others. Baseline asteroid/comet risk is more like 1 in a billion. There is much less opportunity for that with 1% or 10% risks.

They're wildly quantitatively off. Straight 40% returns are way beyond equities, let alone the risk-free rate. And it's inconsistent with all sorts of normal planning: it would argue against any saving in available investments, against much concern for long-term health, against building a house, and against not borrowing everything you could on credit cards.
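A quick sketch of how large that gap is. Only the 40% figure comes from the text above; the ~7% real equity return and the 30-year horizon are illustrative assumptions of mine:

```python
# Sketch: what a ~40%/year pure time discount implies over 30 years,
# versus a roughly 7%/year real equity return (illustrative figures).
discount_rate = 0.40
equity_return = 0.07
years = 30

# Present value of $1 received in 30 years under the survey-implied rate:
pv_under_discount = (1 + discount_rate) ** -years   # ~0.00004

# What $1 invested at equity-like returns grows to over the same period:
fv_under_equities = (1 + equity_return) ** years    # ~7.6

print(pv_under_discount, fv_under_equities)
```

Anyone genuinely discounting at 40%/year would treat a dollar three decades out as nearly worthless, which is flatly inconsistent with saving, mortgages, or long-term health investments.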

Similarly, the risk aversion needed to reject a 15% chance of $1M in exchange for a sure $1000 would require a bizarre situation (like needing just $500 more to avoid short-term death), and would prevent dealing with the normal uncertainty integral to life: going on dates with new people, trying to sell products to many customers with occasional big hits, and so on.
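To put a number on "bizarre," here is a minimal sketch assuming CRRA utility over total wealth and a hypothetical $20,000 baseline (both assumptions mine, not from the text). Typical empirical estimates of the relative risk aversion coefficient are roughly 1-3:

```python
# Hypothetical illustration: how risk-averse must a CRRA agent be to
# prefer a sure $1,000 over a 15% chance of $1,000,000?
# The $20,000 baseline wealth is an assumed figure for this sketch.

def crra(wealth, g):
    """CRRA utility u(w) = w**(1-g) / (1-g), for g != 1."""
    return wealth ** (1.0 - g) / (1.0 - g)

def rejects_gamble(g, wealth=20_000):
    sure = crra(wealth + 1_000, g)
    gamble = 0.15 * crra(wealth + 1_000_000, g) + 0.85 * crra(wealth, g)
    return sure > gamble

# Smallest coefficient (searched in 0.1 steps) at which the sure $1,000
# is preferred -- i.e. the risk aversion the rejection implies:
g_star = next(g / 10 for g in range(11, 100) if rejects_gamble(g / 10))
print(g_star)  # between 4 and 5, well above typical estimates of 1-3
```

Rejecting the gamble requires a coefficient far outside the range that describes ordinary behavior, which is the sense in which the choice pattern is quantitatively bizarre rather than merely cautious.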

Hi Brian,

I agree that preferences at different times and in different subsystems can conflict. In particular, high discounting of the future can lead to forgoing a ton of positive reward, or accepting lots of negative reward in the future, in exchange for some short-term change. This is one reason to pay extra attention to cases of near-simultaneous comparisons, or at least to look at different arrangements of temporal ordering. But still, the tradeoffs people make for themselves with a lot of experience under good conditions look better than what they tend to impose on others casually. [Also, we can better trust people's self-benevolence than their benevolence towards others, e.g. factory farming as you mention.]

And the brain machinery for processing stimuli into decisions and preferences does seem very relevant to me at least, since that's a primary source of intuitive assessments of these psychological states as having value, and for comparisons where we can make them. Strong rejection of interpersonal comparisons is also used to argue that relieving one or more pains can't compensate for losses to another individual.

I agree the hardest cases for making any kind of interpersonal comparison will be minds with different architectural setups and conflicting univocal viewpoints, e.g. two minds with equally passionate complete enthusiasm (with no contrary psychological processes or internal currencies to provide reference points) respectively for and against their own experience, or gratitude and anger for their birth (past or future). They can respectively consider a world with and without their existence completely unbearable and beyond compensation. But if we're in the business of helping others for their own sakes rather than ours, I don't see the case for excluding either one's concern from our moral circle.

Now, one can take a more nihilistic/personal-aesthetics view of morality, and say that one doesn't personally care about the gratitude of minds happy to exist. I take it this is more your meta-ethical stance around these things? There are good arguments for moral irrealism and nihilism, but it seems to me that going too far down this route can lose a lot of the point of the altruistic project. If it's not mainly about others and their perspectives, why care so much about shaping (some of) their lives and attending to (some of) their concerns?

David Pearce sometimes uses the Holocaust to argue for negative utilitarianism, saying that no amount of good could offset the pain people suffered there. But this view dismisses (or accidentally valorizes) most of the evil of the Holocaust. The death camps were centrally about destroying lives and attempting to destroy future generations of peoples; the people inside them wanted to live free, and being killed sooner was not a close substitute. Killing them (or willfully letting them die when it would be easy to prevent) if they would otherwise escape with a delay would not be helping them for their own sakes, but choosing to be their enemy by only selectively attending to their concerns. That holds even though some did choose death. Likewise for genocide by sterilization (in my Jewish household growing up, the Holocaust was cited as a reason to have children).

Future generations, whether they would enthusiastically endorse or oppose their existence, don't have an immediate voice (or conventional power) here and now, and their existence isn't counterfactually robust. But when I'm in a mindset of trying to do impartial good, I don't see the appeal of ignoring those who would desperately, passionately want to exist, and their gratitude in worlds where they do.

I see demandingness and contractarian/game-theory/cooperation reasons that bound sacrifice to realize impartial uncompensated help to others, and inevitable moral dilemmas (almost all beings that could exist in a particular location won't; wild animals are desperately poor and might on average wish they didn't exist; people have conflicting desires; advanced civilizations I expect will have far more profoundly self-endorsing good lives than unbearably bad lives, but on average across the cosmos will have many of the latter by sheer scope). But being an enemy of all the countless beings that would like to exist, or do exist and would like to exist more (or more of something), even if they're the vast supermajority, seems at odds with my idea of impartial benevolence, which I would identify more with trying to be a friend to all, or at least as much as one can given conflicts.

In the surveys they know it's all hypothetical.

You do see a bunch of crazy financial behavior in the world, but it decreases as people get more experience individually and especially socially (and with better cognitive understanding).

People do engage in rounding to zero in a lot of cases, but with lots of experience will also take on pain and injury with high cumulative or instantaneous probability (e.g. electric shocks to get rewards, labor pains, war, and jobs that involve daily exposure to choking fumes or risk of injury).

Re lexical views that still make probabilistic tradeoffs: I don't really see the appeal of contorting lexical views that will still be crazy with respect to real-world cases just so one can say they assign infinitesimal value to good things in impossible hypotheticals (but effectively 0 in real life). Real-world cases like labor pain and risking severe injury doing stuff aren't about infinitesimal value too small for us to even perceive, but macroscopic value that we are motivated by. Is there a parameterization you would suggest as plausible that addresses that?

I'd use reasoning like this, so simulation concerns don't have to be ~certain to drastically reduce EV gaps between local and future-oriented actions.

Today there is room for an intelligence explosion and explosive reproduction of AGI/robots (the Solar System can support trillions of AGIs for every human alive today). If aligned AGI undergoes such intelligence explosion and reproduction there is no longer free energy for rogue AGI to grow explosively. A single rogue AGI introduced to such a society would be vastly outnumbered and would lack special advantages, while superabundant AGI law enforcement would be well positioned to detect or prevent such an introduction in any case.


Already today, states have reasonably strong monopolies on the use of force. If all military equipment (and the AI/robotic infrastructure that supports it and most of the economy) is trustworthy (e.g. can be relied on not to engage in a military coup, and to comply with and enforce international treaties via AIs verified by all states), then there could be trillions of aligned AGIs per human, plenty to block violent crime or WMD terrorism.

For war between states, that's point #7. States can make binding treaties to renounce WMD war or protect human rights or the like, enforced by AGI systems jointly constructed/inspected by the parties.

Barring simulation shutdown sorts of things or divine intervention I think more like 1 in 1 million per century, on the order of magnitude of encounters with alien civilizations. Simulation shutdown is a hole in the argument that we could attain such a state, and I think a good reason not to say things like 'the future is in expectation 50 orders of magnitude more important than the present.'

It's quite likely the extinction/existential catastrophe rate approaches zero within a few centuries if civilization survives, because:

  1. Riches and technology make us comprehensively immune to natural disasters.
  2. Cheap, ubiquitous detection, barriers, and sterilization make civilization immune to biothreats.
  3. Advanced tech makes neutral parties immune to the effects of nuclear winter.
  4. Local cheap production makes for small supply chains that can regrow from disruption as industry becomes more like information goods.
  5. Space colonization creates robustness against local disruption.
  6. Aligned AI blocks threats from misaligned AI (and many other things).
  7. Advanced technology enables stable policies (e.g. the same AI police systems enforce treaties banning WMD war for billions of years), and the world is likely to wind up in some stable situation (bouncing around until it does).

If we're more than 50% likely to get to that kind of robust state, which I think is true, and I believe Toby does as well, then the life expectancy of civilization is very long, almost as long on a log scale as with 100%.

Your argument depends on 99%+++ credence that such safe stable states won't be attained; even 50% credence there would be doubtful, and 99%+++ is quite implausible. A classic paper by the climate economist Martin Weitzman shows that the average discount rate over long periods is set by the lowest plausible rate (the possibilities of high rates drop out after a short period, and you pay only a constant-factor penalty for the probability of low discount rates, not exponential decay).
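Weitzman's point can be sketched numerically. The three candidate rates and their probabilities below are made-up illustrative values, not figures from his paper:

```python
import math

# Certainty-equivalent discount rate over horizon t, given uncertainty
# about the true constant rate r:  R(t) = -ln(E[exp(-r*t)]) / t.
# Rates and probabilities are illustrative assumptions.
rates = [0.001, 0.02, 0.07]   # possible per-year discount rates
probs = [0.2, 0.3, 0.5]       # credence assigned to each

def effective_rate(t):
    expected_factor = sum(p * math.exp(-r * t) for p, r in zip(probs, rates))
    return -math.log(expected_factor) / t

# Short horizons: close to the probability-weighted mean rate (~4.1%).
# Long horizons: the high-rate branches discount themselves away, leaving
# the lowest rate plus a constant-factor penalty of ln(1/0.2) spread over t.
print(effective_rate(1))       # ~0.041
print(effective_rate(10_000))  # ~0.0012, approaching the 0.1% floor
```

The same mechanism drives the civilization-lifespan point above: so long as the low-rate (safe, stable) branch has non-trivial probability, it dominates the long-run expectation, and ruling that out requires extreme credence against it.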

An example of an argument matching the OP's thesis: Bryan Caplan rejecting animal rights (disagreeing with his favorite philosopher, Michael Huemer) based on the demands of applying a right to life to wild insects.
