
Partially adapted from an old Discord rant

On this past April’s eve, I was visited by the ghosts of several superintelligent beings from throughout spacetime[1], to help me answer a question of great import and difficulty. Even if we know what can harm and benefit a being, how can we know when a being we could benefit or harm even belongs in our moral circle to begin with? Excitingly, I feel that I got much cleaner explanations for why animals shouldn’t be of concern to morality than I’ve heard before! Below are samples of some explanations provided by these wise minds:

Supervenium the Self-Aware:

“How can the humans have any worth, they can’t even conceive of the operation of their own minds fully, they rely on words like “reasons” and “beliefs” and “intuitions”, they don’t even understand the full underlying process of their own neurons, how can they claim to have interests they don’t comprehend?”

Omnispawn the Infosphere:

“How can the humans have any worth, they don’t understand reality as it really is, why, without aids they can’t even tell what’s happening right behind them at any given moment. Their view of reality is entirely self-contained, so how can they hold any sincere values?”

Pinea the Singulasingularity:

“How can the humans have any worth, their “selves” are conceivably divisible, you could replace every neuron in their brains one by one and they would never notice the difference, there is no single replacement or removal that makes them altogether someone else. If you have no absolute underlying self, how can you have a legitimate sense of dignity?”

Prebangor the Elder:

“How can the humans have any worth, they are mortal, one day they will just be gone forever and there was a time before they were born. Anything that is done to them is done to a single blip in the history of everything, how can it have any importance?”

AaaaaaackOoooooooh the Torment Nexus:

“How can the humans have any worth, their capacity to feel has physical limits, they are incapable of the highest forms of superhappiness and the worst forms of supersadness. Even if we can benefit them more than one of our own, they would never appreciate the larger context of this benefit, so how can they truly value it?”

Binklx the Intelligently Designed:

"How can humans have any worth, you can follow their lineage by degrees all the way back to weird molecules in pools of water. Sure humans are cute, but if we give them consideration, where does it end? We would have to give moral consideration to everything!"

Janveson the Definer:

"How can humans have any worth, morality is about social contracts. They can't manipulate their brains to adopt the equilibrium utility function of the social unit, they can only make a few unreliable trades and agreements at a time, and anything else is at best hypothetical. By definition they aren't part of "ethics"!"

Having recorded the wisdom of the superior minds, I prepared to go to sleep. But then, lying in bed, I heard another voice.

Jala the Glittering:

"I am all of the parts of the universe, confused and lost in myself, which fleeting joys and fleeting horrors pass over, lighting little bits up out of the dark. I care, and I care about my caring, every fleck of it, no matter how little and fragile and confused, every bit I see as it is lit up in its own light. I love myself."

And in the cold and the dark of my bedroom, I for a moment felt an unexplainable urge to whisper back “I love you”, as though I could never be warm again until I said it.

But forumeers! I’ve written this up to get your feedback: what do you make of the overminds’ advice?


    1. Something something acausal something. ↩︎

Comments (3)



I'm a Definooooor! I'm gonna Defiiiiiiine! AAAAAAAAAAAAAAAA

I like circles, though my favorites are (of course) boxes and arrows.

Pinea did complain about how many dimensions I wanted in my ethics...
