Elias Au-Yeung

Comments
I think the numbers in the right column are supposed to be the 'totals' of the biomass of the domains/kingdoms/viruses as entire groups.

The original table is on page 89 of the supplementary materials document of the Bar-On, Phillips, and Milo paper. 

Regarding whether to focus on numbers or biomass, I think the following articles could be relevant:

Brian Tomasik's Is Brain Size Morally Relevant? explores arguments for brain-size/complexity weighting and for 'equality weighting' (equal consideration of systems). Despite the title, the article discusses not just brain size but also mental complexity ("Note: In this piece, I unfortunately conflate "brain size" with "mental complexity" in a messy way. Some of the arguments I discuss apply primarily to size, while some apply primarily to complexity.").

The question of whether complexity matters is related to the intensity of experiences. Jason Schukraft's Differences in the Intensity of Valenced Experience across Species explores this and seems to conclude that, for less familiar systems (well, animals -- I think only animals are explored in that piece), it's really unclear whether the "intensity range" is larger or smaller. They could potentially suffer more intensely. They could potentially suffer less intensely.

Another article by Brian Tomasik, Fuzzy, Nested Minds Problematize Utilitarian Aggregation, raises the question of how to subset/divide physical reality into minds. Like Brian, I find the most intuitive approach is to "Sum over all sufficiently individuated "objects"" and to count subsystems (e.g., suffering subroutines) too. That said, I think the question is still super confusing. The article also notes that "[p]erforming any exact computations seem intractable for now", although I don't know enough math to say.

I think it's ultimately all quite subjective, though (which doesn't mean it's not important!). I don't see how we can definitively show that some approach is correct. (Brian Tomasik describes this problem as a "moral question" and frames it as having "[n]o objective answer", but I find his language slightly confusing. Rather, I think there is an answer regarding suffering/sentience; it's just that we might never have the tools to know that we've reached it. In the absence of those tools, a lot of what we're doing when saying "there's more suffering here" or "there's more suffering there" might be, or might be compared to, a discussion of morality.) We're also using a bunch of intuitions that were shaped by factors that don't necessarily guide us to truths about suffering/sentience (I explore this in the Factors affecting how we attribute suffering section of my recent microorganism post).

Bear with me, I can only reply piecemeal.

Sure, that's alright :)

If, as seems likely, scientists conclusively show that either brains or neural architecture are the necessary conditions for conscious experience, then that would heavily downgrade the likelihood that some alternative mechanism exists to provide bacteria with consciousness.

One of the things I mention in the post is that whenever we're looking at scientific findings, we're imposing certain standards on what counts as evidence. But it's actually not all that clear how we're supposed to construct these standards in the case of first-person experiences we don't have access to. Brains/neural architectures are categories we invent to put particular instances of "brains" and "neural architectures" in. They're useful in science and medicine, but that doesn't mean referring to those categories, with those boundaries, automatically tells us everything there is to know about conscious experience/suffering.

What we're really interested in is the category containing systems capable of suffering, and there are a number of different views on what sort of criteria identify elements in this category: some people follow criteria that suggest only other humans are similar enough to be capable of suffering[1], some people follow criteria that suggest mammals are also similar enough, and some people follow criteria that suggest insects are also similar enough. Each of these views in turn lowers our threshold for acceptable similarity.[2] One next step might be to extend criteria from the cell-to-cell signaling of nervous systems to the intracellular signaling of microorganisms. If we're confident in/accept some of the other criteria, can we really rule out similar adjacent criteria? This seems difficult given how much uncertainty we have about our reasoning regarding consciousness and suffering.[3] We only need to assign some credence to the views that count some things we know of microbes as evidence of suffering (e.g., chemical-reactive movement, cellular stress responses, how microbes react to predators and the associated mechanisms) in order to think that microorganism suffering is at least a possibility.

There's a lot of subjective judgment in this, and scientists can't escape it either. The evolution of human intuitions was guided by what's needed for social relationships etc., not by fundamental truths about what is conscious/what can suffer. Scientists also impose their standards on what counts as evidence of consciousness/suffering -- but those standards are constructed with those intuitions! I discuss this more in the section linked in the footnote.[3]

[...] discover the mechanistic link between biochemistry and consciousness. It seems to me that this missing link is the main factor that leaves room for possible bacterial suffering.

(I actually hold a physicalist (monist) view. I don't think consciousness is something additional to physics. Dualism seems unlikely, mostly given how science hasn't found additional "consciousness stuff" over and above physical reality. My (very rough) approach to all this is that (some) physical processes (e.g., in the brain) can be said to be conscious experiences, under some interpretation/criteria. I don't mention this stuff in the post, though, in order to be more theory-neutral (hence my describing things as physical 'evidence of suffering'), which is okay since I think the post doesn't really rely on physicalism being true.)

  1. ^

    Similar enough to oneself -- the only system that we're absolutely sure can suffer.

  2. ^

    What about more "fundamental" differences? I respond to that in the Counterarguments subsection.

  3. ^

    See the Factors affecting how we attribute suffering section in the post.

We already have a robust scientific understanding of the biochemical causes of bacterial behavior. Why would we posit that some form of cognitive processing of suffering is also involved in controlling their actions?

I think even if we have a robust scientific understanding of, e.g., the human brain, we would still think that human suffering exists. I don't think understanding the physical mechanisms behind a particular system means that it can't be associated with a first-person experience.

Endorsing a program of study purely on the basis of "important, if true" seems like it would also lead you to endorse studying things like astrology.

So I'm not looking at the topic only because it's "important, if true". As I explain in the piece, I think there are signs pointing toward "there's at least a small chance", based on reflecting on our concept of what we take to be evidence of suffering, and on aspects of microbes that fit possible extended criteria.

I think there's a lot of scientific evidence against astrology. On the other hand, how we should attribute mental experiences to physical systems seems to be an open question because of the problem of (not being) other minds.

Since you're mainly taking the fact that bacteria "look like they might be suffering," it seems like you should also be concerned that nonliving structures are potentially suffering. Wouldn't it be painful to be as hot as a star for up to trillions of years?

I am actually concerned that there's some chance various nonliving structures do suffer. Digital minds are one example. I don't know much about astronomy/astrophysics, but I doubt that stars have functioning that strongly matches, e.g., the damage responses exhibited by animals. This isn't to say there's absolutely zero chance we can attribute negative experiences to such a system, but I'd guess it's much more unlikely than for microorganisms, which better match the damage responses of animals. It might be valuable for someone to look into whether some astronomical objects match some interpretations of what counts as evidence of suffering, and to what extent. I think there's quite a lot of uncertainty surrounding questions in the philosophy of mind (e.g., some philosophers endorse panpsychism while others might be more skeptical about the sentience of non-human animals).