In late 2014, I ate lunch with an EA who prefers to remain anonymous. I had originally been of the opinion that, should humans survive, the future is likely to be bad. He convinced me to change my mind about this.
I haven’t seen this argument written up anywhere and so, with his permission, I'm attempting to put it online for discussion.
A sketch of the argument is:

1. Humans are generally not evil, just lazy.
2. Therefore, we should expect suffering to exist in the future only if that suffering enables people to be lazier.
3. The most efficient solutions to problems don't seem to involve suffering.
4. Therefore, as technology progresses, we will move toward solutions which don't involve suffering.
5. Furthermore, people are generally willing to exert some (small) amount of effort to reduce suffering.
6. As technology progresses, the amount of effort required to reduce suffering will go down.
7. Therefore, the future will contain less net suffering.
8. Therefore, the future will be good.
My Original Theory for Why the Future Might Be Bad
There are about ten billion farmed land animals killed for food every year in the US, which has a population of ~320 million humans.
The farmed animals overwhelmingly live in factory farming conditions, which result in enormous cruelties, and they probably have lives which are not worth living. Since (a) farmed animals so completely outnumber humans, (b) humans are the cause of their cruelty, and (c) humans haven't caused an equal or greater number of beings to lead happy lives, human existence is plausibly bad on net.
Furthermore, technology seems to have instigated this problem. Animal agriculture has never been great for the animals being slaughtered, but there was historically some modicum of welfare. For example: chickens had to be let outside at least some of the time, because otherwise they would develop vitamin D deficiencies. But with the discovery of vitamins and methods for synthesizing them, chickens could be kept indoors for their entire lives. Other scientific advancements like antibiotics enabled them to be packed densely, so that now the average chicken has 67 square inches of space (about two thirds the area of a sheet of paper).
It's very hard to predict the future, but one reasonable thing you can do is guess that current trends will continue. Even if you don't believe society is currently net negative, it seems fairly clear that the trend has been getting worse (e.g. the number of suffering farmed animals grew much more rapidly than the [presumably happy] human population over the last century), and therefore we should predict that the future will be bad.
Technology is neither good nor bad; it's merely a tool which enables the people who use it to do good or bad things. In the case of factory farming, it seemed to me (Ben) that people overwhelmingly wanted to do bad things, and therefore technological progress was bad. Technological progress will presumably continue, and therefore we might expect this ethical trend to continue and the future to be even worse than today.
He pointed out that this wasn't an entirely accurate way of viewing things: people don't actively want to cause suffering, they are just lazy, and it turns out that the lazy solution in this case causes more suffering.
So the key question is: when we look at problems that the future will have, will the lazy solution be the morally worse one?
It seems like the answer is plausibly “no”. To give some examples:
Factory farming exists because the easiest way to get food which tastes good and meets various social goals people have causes cruelty. Once we get more scientifically advanced, though, it will presumably become even more efficient to produce foods involving no conscious experience by animals at all (i.e. clean meat); at that point, the lazy solution is the more ethical one.
(This arguably is what happened with domestic work animals on farms: we now have cars and trucks which replaced horses and mules, making even the phrase “beat like a rented mule” seem appalling.)
Slavery exists because there is currently no way to get certain kinds of labor without conscious workers. Again though, this is due to a lack of scientific knowledge: there is no obvious reason why conscious experience is required for plowing a field or harvesting cocoa, and therefore the more efficient solution is to simply have nonconscious robots do these tasks.
(This arguably is what happened with human slavery in the US: industrialization meant that slavery wasn’t required to create wealth in a large chunk of the US, and therefore slavery was outlawed.)
Of course, this is not a definitive proof that the future will be good. One can imagine the anti-GMO lobby morphing into an anti-clean meat lobby as part of some misguided appeal to nature, for example.
But this does give us hope that the lazy – and therefore default – position on issues will generally be the more ethical one, and therefore people would need to actively work against the grain in order to make the world less ethical.
If anything, we might have some hope toward the opposite: a small but nontrivial fraction of people are currently vegan, and a larger number of people spend extra money to buy animal products which (they believe) are less inhumane. I am not aware of any large group which does the opposite (goes out of its way to cause more cruelty to farmed animals). Therefore, we might guess that the average person is slightly ethical: people would not only be vegan if that were the cheaper option, but would also be willing to pay a small amount of money to live more ethically.
The same thing goes for slavery: a small fraction of consumers go out of their way to buy slave-free chocolate, with no corresponding group of people who go out of their way to buy chocolate produced with slavery. Once machines come close to human cocoa-growing abilities, we would expect chocolate industry slavery to die off.
If the default course of humanity is to be ethical, our prior should be that the future will be good, and the burden of proof shifts to those who believe that the future will be bad.
I do not believe this argument provides a knockdown counterargument to concerns about s-risks, but I hope its publication encourages more discussion of the topic and offers a viewpoint some readers have not considered before.
This post represents a combination of my and the anonymous EA’s views. Any errors are mine. I would like to thank Gina Stuessy and this EA for proofreading a draft of this post, and for talking about this and many other important ideas about the far future with me.