This is really awesome and helpful! Thanks Saulius!
One group that is probably pretty small but isn't listed here - animals in wildlife rehabilitation clinics: this page says 8k to 9k animals (I'm guessing mostly vertebrates?) enter clinics in Minnesota every year. If that scales by land area to the contiguous United States, that would be 270k - 305k animals per year in the US, so maybe a few million globally? But that's just a guess from the first good source I saw.
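For what it's worth, the scaling above can be reproduced with rough area figures (scaling intake by land area is obviously a crude assumption, and the numbers below are approximations, not exact):

```python
# Back-of-envelope scaling of wildlife-rehab intake from Minnesota to the
# contiguous US. Areas in square miles, approximate.
MN_AREA = 87_000          # Minnesota, approx.
CONUS_AREA = 2_960_000    # contiguous US land area, approx.

ratio = CONUS_AREA / MN_AREA        # roughly 34x
low, high = 8_000 * ratio, 9_000 * ratio

print(f"~{low:,.0f} to ~{high:,.0f} animals per year in the US")
```

That gives something in the ~270k-310k range, consistent with the figure in the comment.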
On pet shelters - I used to work at one, and every month we reported our current animal population (along with a lot of other stats) to this organization - https://shelteranimalscount.org/ - I think their data could probably be used to get a very accurate estimate of animals currently in shelters in the US.
Yeah, I think that is right that it is a conservative scenario - my point was more that the proposed future scenarios don't come close to imagining as much welfare / mind-stuff as might exist right now.
Hmm, I think my point might be something slightly different - more to pose a challenge to explore how taking animal welfare seriously might change conclusions about the long-term future. Right now, there seems to be almost no consideration of it. I guess I think it is likely that many longtermists already think animals matter morally (given the popularity of such a view in EA). But I take your point that for general longtermist outreach, this might be a less appealing discussion topic.
Thanks for the thoughts Brian!
Yeah, the idea of looking into longtermism for nonutilitarians is interesting to me. Thanks for the suggestion!
I think regardless, this helped clarify a lot of things for me about particular beliefs longtermists might hold (to various degrees). Thanks!
That makes sense!
Is the deadline at a specific time on February 6th, or before the 6th (i.e. EOD the 5th)? The wording is just slightly vague.
Thanks for all you do!
Thanks for the feedback - that's a good rule of thumb!
Thanks for laying out this response! It was really interesting, and probably a good reason not to take animals as seriously as I suggest you ought to, if you hold these beliefs.
I think something interesting that this and the other objections to my piece have brought out is that, to avoid focusing exclusively on animals in longtermist projects, you have to have some level of faith in these science-fiction scenarios happening. I don't necessarily think that is a bad thing, but it isn't something that's been made explicit in past discussions of longtermism (at least, in the academic literature), and perhaps ought to be?
A few comments on your two arguments:
Claim: Our descendants may wish to optimize for positive moral goods.
I think this is a precondition for EAs and do-gooders in general "winning", so I almost treat the possibility of this as a tautology.
This isn't usually assumed in the longtermist literature. It seems more like the argument is made on the basis of future human lives being net-positive, and therefore good that there will be many of them. I think the expected value of your argument (A) hinges on this claim, so accepting it as a tautology, or something similar, is actually really risky. If you think this is basically 100% likely to be true, of course your conclusion might be true. But if you don't, it seems plausible that, like you mention, priority possibly ought to be on s-risks.
In general, a way to summarize this argument, and others given here could be something like, "there is a non-zero chance that we can make loads and loads of digital welfare in the future (more than exists now), so we should focus on reducing existential risk in order to ensure that future can happen". This raises a question - when will that claim not be true / the argument you're making not be relevant? It seems plausible that this kind of argument is a justification to work on existential risk reduction until basically the end of the universe (unless we somehow solve it with 100% certainty, etc.), because we might always assume future people will be better at producing welfare than us.
I assume people have discussed the above, and I'm not well read in the area, but it strikes me as odd that the primary justification in these sci-fi scenarios for working on the future is a claim that can always be made, instead of working directly on making lives with good welfare (but maybe this is a consideration for longtermism in general, and not just this argument).
I guess part of the issue here is you could have an incredibly tiny credence in a very specific number of things being true (the present being at the hinge of history, various things about future sci-fi scenarios), and having those credences would always justify deferral of action.
I'm not totally sure what to make of this, but I do think it gives me pause. But, I admit I haven't really thought about any of the above much, and don't read in this area at all.
Thanks again for the response!
Yeah, I think it probably depends on your specific credence that artificial minds will dominate in the future. I assume that most people don't place a value of 100% on that (especially if they think x-risks are possible prior to the invention of self-replicating digital minds, because necessarily that decreases your credence that artificial minds will dominate). I think if your credence in this claim is relatively low, which seems reasonable, it is really unclear to me that the expected value of working on human-focused x-risks is higher than that of working on animal-focused ones. There hasn't been any attempt that I know of to compare the two, so I can't say this with confidence though. But it is clear that saying "there might be tons of digital minds" isn't a strong enough claim on its own, without specific credences in specific numbers of digital minds.
That's a good point!
I think something to note is that while I think animal welfare over the long term is important, I didn't really spend much time thinking about possible implications of this conclusion in this piece, as I was mostly focused on the justification. I think that a lot of value could be added if some research went into these kinds of considerations, or alternative implications of a longtermist view of animal welfare.