
Abstract

This is an argument for establishing a defined set of long-term goals for humanity's development beyond an energy scale: we should use ethics, not technology alone, to measure how advanced our civilization becomes.

Disclaimer: this article was inspired by content from an exurb1a video. I was hesitant to use this material because of a troubling story that has come to light about the creator exurb1a, but felt it could make for a valuable conversation in the EA sphere. If you’re curious, you can find that story here.

Section 1: Nice to Meet You, Nikolai

You might already be familiar with the Kardashev Scale, which measures a civilization's level of advancement by how much energy it can harness, divided into three types. A Type I civilization has harnessed all available energy on its planet, a Type II has done the same with all available energy in its star and planetary system (hello, Dyson Sphere!), and a Type III has mastered all available energy in its galaxy.

Nikolai Kardashev, the man behind the concept, was a Soviet radio astronomer searching for signals from extraterrestrial life. He caused one of the most culturally influential false alarms about alien existence when he proposed that the newly discovered CTA-102, a blazar-type quasar, might be evidence of a Type II civilization attempting to communicate. The public announcement of this possibility caused a sensation; The Byrds even wrote a song about it in 1967.

However, there's something I feel should have been more strongly considered. What would a highly advanced civilization, one with mastery over an entire planetary system, think of us and our ethical atrocities? If we advance to a Type I, will we still be letting our own die when we could prevent it, or causing horrific amounts of suffering in factory farms? Would this other civilization be equally cruel, even worse, or view our inadequacies with shock and disgust? And most notably for this topic: on what other metric could we measure how “civilized” our civilization has become? The exurb1a video mentioned above proposes a different scale, a Kardashev scale of wisdom. To make it more EA-oriented, I like to view it as a kinder way to measure our own advancement.

Section 2: Kardashev for Kindness

Type I

In a Type I Wisdom civilization, no one goes hungry, no one dies of curable diseases, and everyone is guaranteed a universal baseline at which their basic needs are met. Humanity no longer exploits the suffering of non-human animals for unnecessary pleasure. (exurb1a placed concern for non-human animals under a Type II civilization, but I chose to weigh it with more immediate urgency.)

Type II

In a Type II Wisdom civilization, no human involuntarily experiences maladies like mental illness or chronic pain; extreme human suffering that cannot be consented to (see Vinding and Tomasik) is wiped out entirely. Unnecessary non-human suffering is also largely eliminated, including unnecessary wildlife suffering: for example, 999 out of every 1,000 baby sea turtles would no longer die horribly from exposure and predation.

Type III

In a Type III Wisdom civilization, nothing and no one has to experience suffering at all, whether human, non-human animal, or sentient AI. 

Section 3: So, why does this matter?

In Suffering-Focused Ethics: Defense and Implications, Magnus Vinding makes a tour de force argument for why we should care about the elimination of suffering. I won't belabor it here beyond restating one of its main points: there is a severe asymmetry between pleasure and suffering, and the mere existence of extreme suffering calls out for redress. This isn't a hard argument to make here on the EA Forum. If there's one thing we have in common, it's a strong capacity for actually caring: caring about real-world impact, and about making humanity just a bit more morally conscious than it is now.

When we fix our eyes on the future, we have well-established concepts like the Kardashev Scale waiting for us, time-honored milestones to measure how far we've come in scientific progress. Yet when it comes to our ethical goals for the future, all we've got is a blur of good intentions: maybe one day we'll stop torturing and killing animals, suffering from diseases, starving to death, and shooting missiles at each other. Establishing set goals and measuring our progress against them could be key to ensuring that we advance towards a future that isn't just impressive on the traditional Kardashev scale, but kind, compassionate, and free of unnecessary suffering.


Comments

So I like this idea, but I think the exclusively suffering-focused viewpoint is misguided. In particular:
"In a Type III Wisdom civilization, nothing and no one has to experience suffering at all, whether human, non-human animal, or sentient AI"

^this would be achieved if we had a "society" entirely of sentient AI that were always at hedonic neutral. Such lives would involve experiencing zero joy, wonder, meaning, friendship, love, etc. – just totally apathetic perception of the outside world and meaningless pursuit of activity. It's hard to imagine this would be the absolute pinnacle of civilizational existence.

Edit: to be clear, I'm not arguing "for" suffering (or that suffering is necessary for joy), just "for" pleasure in addition to the elimination of suffering.

I would agree that pleasure is important too, but I think I'd place a higher disvalue on suffering than I place value on pleasure. I definitely don't think that a world without suffering would necessarily be a state of hedonic neutral, or result in meaninglessness. However, I'd also bite the bullet and say that a Melba toast world with general pleasantness but no true joy or wonder would be preferable to a world with widespread extreme suffering (at least on the scale it exists today), if that trade-off were necessary. I'd also say the ideal version of a Type III wouldn't have to make that trade-off, since I agree that pleasure doesn't depend on suffering to exist. I think the strongest drawback is the one mentioned in the comment below: the risk of forgetting suffering too soon. Empathy isn't our strong point when it comes to that sort of thing. Thanks for the response!

"I definitely don't think that a world without suffering would necessarily be a state of hedonic neutral, or result in meaninglessness"

Right, it wouldn't necessarily be neutral – my point was that your definition of Type III allows for a neutral world, not that it requires one. I think it makes more sense for the highest classification to be reserved for a very positive world, as opposed to something that could be anywhere from neutral to very positive.

The slow (if not reverse) progress towards a world without intense suffering is depressing, to say the least. So thank you for writing this inspiring piece.

It also reminded me of David Pearce's essay "High-tech Jainism". It outlines a path towards a civilization that has abolished suffering, while also warning about potential pitfalls like forgetting about suffering too soon, before it's prevented for all sentient beings. (In Suffering-Focused Ethics: Defense and Implications (ch. 13), mentioned in the post, Vinding even argues that, given the irreducible uncertainty about suffering re-emerging in the future, there's always risk in disconnecting from suffering completely.)

I'd definitely like to write more on the concept, since I truly believe it could be useful, at the very least as a source of hope. It's all too easy to feel depressed diving into suffering-focused ethics, and that despair probably saps motivation that could otherwise be put to effective use. The possibility of forgetting suffering too soon is a good point to remember; I'll take a look at the essay you linked. Thanks for the response!
