
This is the first time I’ve posted on the EA Forum, so here is a quick introduction. My name is Gaetan Selle. I’m a French video producer and member of EA France, I run a French YouTube channel and a podcast on futurism called The Flares, and I consider myself a longtermist.

I recently had the opportunity to talk with David Pearce on my podcast. He is a British philosopher known for co-founding the World Transhumanist Association (now Humanity+) with Nick Bostrom and for writing a manifesto called The Hedonistic Imperative.

He made me realize that some of his ideas are relevant within a longtermist framework, which motivated me to write this post and put the idea out there, as I haven’t seen anything similar (though maybe something exists).

It seems to me that once we are on board with the idea that the long-term future is a key moral priority of our time, we need to ask what to do about it.

I usually see 3 options put forward by the community:

  • Reducing existential risks
  • Trajectory change
  • Value lock-in

In short, David Pearce suggests that our moral imperative should be to phase out the biological substrate of suffering using biotechnologies.

Why is this a longtermist action?

Reducing existential risks

One trend seems to be that technological progress leads to an increased capacity for a small group of people to cause a lot of harm: from a sword capable of killing a handful of people to an engineered pathogen, made in a basement by one individual, that could kill billions.

This is something Nick Bostrom explores in his paper “The Vulnerable World Hypothesis”.

Now, let’s think about the psychology of someone who is willing to kill as many people as possible. We can assume this would not be a lover of life experiencing positive states of mind. We picture someone, either delusional or in psychological pain, with a very depressive and bleak view of the world. At least, in expectation.

Recall the case of the German co-pilot Andreas Lubitz. After having suffered major depressive episodes throughout his life, he deliberately crashed an airliner into the French Alps in 2015, killing himself and 149 other people. Sadly, some people with suicidal tendencies might want to take the world down with them if they could.

It’s very difficult to imagine solutions that prevent individuals from causing large-scale harm. High-tech mass surveillance and turnkey totalitarianism are sometimes suggested, but these come with their share of problems, to say the least.

Another solution could be to remove the motivation to cause harm altogether.

A predisposition to low mood can be at least as devastating to quality of life as a genetic disease. Like cystic fibrosis, the genetics of low mood could potentially be purged from the human germline. According to David Pearce, two tempting targets are pain thresholds and hedonic recalibration. The hedonic treadmill is a concept in psychology describing the tendency of humans to quickly return to a relatively stable level of happiness despite major positive or negative events or life changes. If we could ratchet up our hedonic baseline, we could experience a much better quality of life, similar to the genetic outliers living today who benefit from high hedonic set points (hyperthymic temperament).

Is this possible? After all, hundreds of genes are probably responsible for our predisposition to happiness and satisfaction, and a few candidates have already been discovered: the SCN9A, FAAH, FAAH-OUT, and COMT genes. More research is worth pursuing. But the development of CRISPR-Cas9 and other gene-editing techniques strongly suggests that we might very well end up in a world where these interventions are routinely performed in vivo or on embryos.

If future generations experience much happier states of mind, free from the worst forms of mental suffering, the percentage of people willing to end the world should fall. At least, more than if we just let the genetic lottery play its cards.

It is an intervention focused not on the technologies themselves, nor on laws and regulations, but on the motivations of their potential users. It would simply be unthinkable for a person to cause human extinction or civilizational collapse.

Therefore, a genetic strategy to phase out suffering might lead to a reduction in existential risk. Needless to say, it would also mean less suffering in the world, but one doesn’t need to be a negative utilitarian or a suffering-focused ethicist to see this as a candidate for a longtermist intervention. The fewer people willing to blow up the world, the better.

That being said, it is unclear whether people will be on board with recalibrating the hedonic treadmill. And even if they are, it might not happen during the time of perils we are living through right now.

Trajectory change

Enhancing the quality of life of our future children and grandchildren with gene editing can also be viewed through the lens of trajectory change. Indeed, any modification to the human germline can, by definition, last for a very long time.

It is unlikely that our descendants living happy lives will want to reintroduce genetic traits which cause more suffering. So if successful, the fraction of malicious lone actors should stay low for a very long time.

Now, there are obvious pitfalls with this kind of intervention. Gene editing and designer babies all too often raise the fear of the E word (eugenics), and all sorts of dystopian scenarios come to mind, tinted by Huxley’s Brave New World. Things could go wrong!

Value lock-in

The Hedonistic Imperative outlined by David Pearce could also be an idea or goal that our generation, or the next, decides to lock in. It would thereby become a civilizational value maintained for thousands of years, which is, in some sense, already an ideal we pursue. After all, if we look at the official constitution of the World Health Organization, we find an impressive definition of health:

"Health is a state of complete physical, mental and social well-being and not merely the absence of disease or infirmity."

The word "complete" here is pretty bold. No one in history has ever been healthy! If we want to live up to that definition, the only way seems to be to phase out the biological substrate of pain and suffering.

Conclusion

This idea is very likely deeply flawed, but in all the content on longtermism I have read so far, this is the first time I have come across this approach of phasing out suffering as a way of shaping the far future.

So at the very least, it offers a new angle on the topic.

David Pearce doesn’t argue for longtermism per se, but his work points to some interesting avenues in that regard, and I encourage anyone who wants to know more to read his website.

There are also interesting ideas in the academic book “Unfit for the Future” by Ingmar Persson and Julian Savulescu, who argue that our species is not hardwired to face the future and that we should pursue moral enhancement if we want to survive into the long-term future.

It is very much in the realm of transhumanism, but I think a non-negligible fraction of EAs/longtermists are sympathetic to transhumanist ideas (and vice versa), which makes sense considering that both movements contemplate the long-term future.

You can also find my conversation with David Pearce here (Introduction in French).
 


Comments

Thanks for the post and the interview, Gaetan!

For anyone interested, David Pearce's own written response on EA's longtermism can be found on his website.

I'm glad the post generated some interest. Thank you for the links and further reading!

Brian Tomasik's essay "Why I Don't Focus on the Hedonistic Imperative" is worth reading. Since biological life will almost certainly be phased out in the long run and replaced with machine intelligence, AI safety probably has far more longtermist impact than biotech-related suffering reduction. Still, it could be argued that a better understanding of valence and consciousness could make future AIs safer.

To me this sort of extrapolation seems like a “reductio ad absurdum” that demonstrates that suffering is not the correct metric to minimize.

Here’s a thought experiment. Let’s say that all sentient beings were converted to algorithms, and suffering was a single number stored in memory. Various actions are chosen to minimize suffering. Now, let’s say you replaced everyone’s algorithm with a new one. In the new algorithm, whenever you would previously get suffering=x, you instead get suffering=x/2.

The total amount of global suffering is cut in half. However, nothing else about the algorithm changes, and nobody’s behavior changes.

Have you done a great thing for the world, or is it a meaningless change of units?
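
To make the "change of units" point concrete, here is a minimal sketch in Python. Everything in it is hypothetical (the actions, the `suffering` table, and the `choose_action` rule); the one assumption is that the algorithm's behavior depends only on comparisons between stored suffering values, not on their absolute scale.

```python
# Hypothetical sketch of the thought experiment: an agent whose choices
# depend only on comparing stored suffering values.

def choose_action(suffering_by_action):
    """Pick the action whose outcome has the lowest stored suffering value."""
    return min(suffering_by_action, key=suffering_by_action.get)

# Original algorithm: suffering is a single number stored per outcome.
suffering = {"help": 1.0, "ignore": 4.0, "harm": 9.0}

# New algorithm: wherever you would get suffering = x, you get x / 2.
rescaled = {action: x / 2 for action, x in suffering.items()}

# Total "global suffering" is cut in half...
assert sum(rescaled.values()) == sum(suffering.values()) / 2

# ...yet every decision comes out identical, so nobody's behavior changes.
assert choose_action(suffering) == choose_action(rescaled)
```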

"and suffering was a single number stored in memory"

I think it's extraordinarily unlikely suffering could just be this. Some discussion here.

If your interpretation of the thought experiment is that suffering cannot be mapped onto a single number, then the logical corollary is that it is meaningless to "minimize suffering": any ordering you can place on the different possible amounts of suffering an organism experiences implies that they can be mapped onto a single number.

I'm saying the amount of suffering is not just the output of some algorithm or something written in memory. I would define it functionally/behaviourally, if at all, although possibly at the level of internal behaviour, not external behaviour. But it would be more complex than your hypothesis makes it out to be.

"The total amount of global suffering is cut in half. However, nothing else about the algorithm changes, and nobody's behavior changes."

This probably doesn't apply to Pearce's qualia realist view, but it's possible to have a functionalist notion of suffering where eliminating suffering would change people's behavior. 

For instance, I think of suffering as an experienced need to change something about one's current experience, something that by definition carries urgency to bring about change. If you get rid of that, it has behavioral consequences. If a person experiences pain asymbolia where they don't consider their "pain" bothersome in any way, I would no longer call it suffering. 
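
A contrasting sketch, assuming the functionalist reading just described: if the stored number is treated as an urgency signal that drives behavior, halving it is no longer a mere change of units. The `act` function and its threshold are hypothetical illustrations, not a claim about how suffering is actually implemented.

```python
# Hypothetical functionalist sketch: suffering as an experienced need to
# change something about one's current state, i.e. an urgency signal.

def act(urgency, threshold=2.0):
    """The agent interrupts what it is doing only if the urge is strong enough."""
    return "change something" if urgency >= threshold else "carry on"

assert act(3.0) == "change something"  # original signal: strong enough to act
assert act(3.0 / 2) == "carry on"      # halved signal: behavior changes
```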
