David Mathers🔸

On the 7-15% figure: I don't actually see where the idea comes from that it is commonsense that smaller, less intelligent animals suffer less when they are in physical pain. People almost never cite a source for its being commonsense, and I don't recall having had any opinion about it before I encountered academic philosophy. I think it is almost certainly true that people don't care very much about small dumb animals, but there are a variety of reasons why that is only moderate evidence for the claim that ordinary people think such animals experience less intense pain:

-They might never have thought about it, since most people don't feel much need to give philosophical justifications for banal, normal opinions like not caring much about animals.

-Hedonistic utilitarianism is not itself part of commonsense, but without assuming it, you can't quickly and easily move from "what happens to bees isn't important" to "bees have low capacity for pain."

-We know there are cases where people downgrade the importance of what happens to subjects whom they see as outside their community, even when they definitely don't believe those subjects have a diminished capacity for pain. Many ordinary people are nationalists who don't care that much about foreigners, but they don't think foreigners feel less pain!

-They might just assume that it is unlikely that small simple animals can feel pain at all. This doesn't necessarily mean they also think that, conditional on small simple animals being able to feel pain, those animals only feel it a little bit.

Independently of what the commonsense prior is here, I'd also say that I have a PhD in the philosophy of consciousness, and I don't think the claim that fewer neurons = less capacity for pain is commonly defended in the academic literature. At most, some people might defend the more general idea that how conscious a state is comes in degrees, and some theories that allow for that might predict that bee pains are not very conscious. But I've never seen any sign that this is a consensus view. In general, "more neurons = more intense pains" seems to play badly with the standard functionalist picture that what makes a particular mental state the mental state it is, is its typical causes and effects, not its intrinsic properties. Not to mention that it seems plausible there could be aliens without neurons who nonetheless felt pain.

The risk is non-zero, but you made a stronger claim that it was "the most probable extinction risk around". 

EDIT: As for reasons to think the trends will reverse: they seem to be a product of liberal modernity, but currently we need a population way, way above the minimum viable number to keep modernity going long term. Maybe AI could change that, I guess, but it's very hard to make predictions about demographic trends if AI does all the work.

"Below-replacement fertility is perhaps the simplest and most probable extinction risk around"

For it to present a significant extinction risk, you'd need current demographic trends to persist way past the point where population decline has so completely transformed society that there's no reason to think those trends will still hold.

This is very bad news for longtermism if correct, since it suggests that value in the far future gained by preventing extinction now is much lower than it would otherwise be. 

If you think there might well be forms of naturalism that are true but trivial, is your credence in anti-realism really well over 99%?

This forum probably isn't the place for really getting into the weeds of this, but I'm also a bit worried about accounts of triviality that conflate apriority, or even analyticity, with triviality: maths is not trivial in any sense of "trivial" on which "trivial" means "not worth bothering with". Maybe you can get out of this by saying maths isn't analytic and it's only being analytic that trivializes things, but I don't think it is particularly obvious that there is a sense-making concept of analyticity that doesn't apply to maths. Apparently neo-Fregeans think that lots of maths is analytic, and as far as I know that is a respected option in the philosophy of maths: https://plato.stanford.edu/entries/logicism/#NeoFre

I also wonder about exactly what is being claimed to be trivial: individual identifications of moral properties with naturalistic properties, if they are explicitly claimed to be analytic? Or the claim that moral naturalism is true and there are some analytic truths of this sort? Or both?

Also, do you think semantic claims in general are trivial? 

Finally, do you think the naturalists whose claims you consider "trivial" mostly agree with you that their views have the features you think make for triviality, but disagree that having those features means their views are of no interest? Or do most of them think their claims lack the features you think make for triviality? Or do you think most of them just haven't thought about it/don't have a good-faith substantive response?

So your claim is that naturalists are just stipulating a particular meaning of their own for moral terms? Can you say why you think this? Don't some naturalists just defend the idea that moral properties could be identical with complex sociological properties without even saying *which* properties? How could those naturalists be engaging in stipulative definition, even accidentally?

I'd also say that this only bears on the truth/falsity of naturalism fairly indirectly. There's no particular connection between whether naturalism is actually true and whether some group of naturalist thinkers happen to have stipulatively defined a moral term, although I guess if most defenses of naturalism did this, that would be evidence that naturalism couldn't be defended in other ways, which is evidence against its truth.

Is being trivial and of low interest evidence that naturalist forms of realism are *false*? "Red things are red" is boring and trivial, but my credence in it is way above 0.99. 

Yeah, I think I recall David Thorstad complaining that Ord's estimate was far too high also.

Be careful not to conflate "existential risk" in the special Bostrom-derived definition that I think Ord, and probably Will as well, are using with "extinction risk" though. X-risk from climate *can* be far higher than extinction risk, because regressing to a pre-industrial state and then not succeeding in reindustrialising (perhaps because easily accessible coal has been used up) counts as an existential risk, even though it doesn't involve literal extinction. (Though from memory, I think Ord is quite dismissive of the possibility that there won't be enough accessible coal to reindustrialise, but I think Will is a bit more concerned about this?)

Is there actually an official IPCC position on how likely degrowth from climate impacts is? I had a vague sense that they were projecting a higher world GDP in 2100 than now, but when I tried to find evidence of this for 15 minutes or so, I couldn't actually find any. (I'm aware that even if that is the official IPCC best-guess position, it does not necessarily mean that climate experts are less worried about X-risk from climate than AI experts are about X-risk from AI.)