Summary: Whatever your beliefs about the expected value of the average welfare of animals in the wild under constant conditions and at equilibrium, I think you should weakly expect, a priori (i.e., without further evidence), their average welfare to be lower after conditions change. If you believe that, under constant conditions and at equilibrium, the expected value of average welfare in the wild is at most 0 (or, perhaps by resource efficiency and symmetry, exactly 0), then you should believe, a priori, that under changing conditions it is negative, and with it the expected total welfare as well. With these beliefs, since conditions are constantly changing, you should expect, a priori, the net welfare in the wild to be negative.
The same argument should apply a priori to farmed animals, as well as to sentient AIs developed without welfare in mind and optimized for a specific purpose by some optimization algorithm, if their affective (reward and punishment) systems are also optimized for that purpose.
Disclaimers: I have no formal background in evolution or ecology past high school, so there's a good chance I'm wrong or even very wrong here. My formal background is in math and computer science. I also have suffering-focused views, so this might bias me towards making wild animal welfare look worse to those with more symmetric views, and that was one of my motivations for writing this post.
This is effectively the Anna Karenina principle (or The Principle of Fragility of Good Things).
I think this conclusion is already fairly obvious, but it's worth expanding on.
The a priori analyses of the sign of the total/net welfare of animals in the wild that I'm aware of implicitly assume that populations are in equilibrium and unchanging, and that the conditions in which they live aren't changing either. If our prior beliefs about the net welfare were captured by a distribution with expected value 0 (by symmetry) or negative, could we not have reason to believe that under changing conditions (perhaps including cycles, although you'd expect some adaptation in many cases, e.g. to weather cycles), animal welfare will tend to be lower on average? Of course, this will depend in each case on the particular changes (many changes can indeed be good for welfare), but we might expect them to usually be bad: changes often move away from the conditions to which the population is adapted, and populations are adapted to specific conditions that may have "sweet spots".
Here is my expanded argument, which depends on some intuitive (but not necessarily technical) understanding of the basics of continuous optimization:
Evolution is an optimization algorithm with a possibly moving target: the ultimate target is the proliferation of genes, but we can use the evolutionary fitness of a population as a proxy and as a function of its genetics, and this goal is specified under particular conditions, so as conditions change, which solutions are better can change, too. Because evolution is an optimization algorithm, you should expect it to prefer small changes to a population's genetics that increase fitness over small changes that decrease fitness. You would expect it to be more likely to pass through saddle points towards increased fitness than towards decreased fitness, and to avoid producing local minima of fitness for the current conditions (except possibly in very flat regions of the function).
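To make the optimization framing concrete, here is a minimal sketch (not a model of real evolution): a hill-climbing loop on a hypothetical one-dimensional trait, where random mutations are kept only if they don't decrease fitness. The fitness function, peak location, and step sizes are all illustrative assumptions.

```python
import random

# Toy fitness function with a single peak at the (hypothetical) trait value 2.0.
def fitness(g):
    return -(g - 2.0) ** 2

def hill_climb(g, steps=2000, step_size=0.05, seed=0):
    """Accept a small random mutation only if it doesn't decrease fitness,
    a crude stand-in for selection preferring fitness-increasing changes."""
    rng = random.Random(seed)
    for _ in range(steps):
        candidate = g + rng.uniform(-step_size, step_size)
        if fitness(candidate) >= fitness(g):
            g = candidate
    return g

g_final = hill_climb(g=0.0)
print(round(g_final, 2))  # ends near the peak at 2.0
```

The point of the sketch is only that such a process settles near local maxima of fitness under the current conditions; if the peak then moves (conditions change), the population is left on the slope.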
Now, suppose evolution had produced a solution (not necessarily optimal) for a population's genes at equilibrium under a given fixed set of conditions, and suppose the conditions then change slightly. I expect these changing conditions, a priori, to be bad for the average welfare of that population. I'll illustrate with two examples, in which I consider what's good or bad for the welfare of the individuals who live through the changes in conditions (not their offspring, who may be better adapted), and I assume that average fitness and average welfare for a given population under different conditions correlate locally (enough to use them interchangeably):
a. Change in nutrition quality/abundance. If it increases, this is good. If it decreases, this is bad.
b. Change in temperature. If it increases, this is bad, since it increases the risk of hyperthermia. If it decreases, this is bad, since it increases the risk of hypothermia. This is because evolution will aim to tune the body temperatures and heat regulation of a population to the current conditions (or a given range of conditions, given cyclical weather patterns), and under different conditions, the solution may do too little (e.g. not enough fur) or too much (e.g. too much fur). Later generations may be better or worse off, though, since they may need to use fewer or more resources for temperature regulation.
Notice that the change in the first case can be either good or bad, but in the second it looks bad in both directions. Of course, other considerations might lead us to believe that a change in temperature is actually good in one direction but bad in the other, e.g. it might affect nutrition quality/abundance or allow animals to use their energy more efficiently. For cases like a., we should a priori expect the possible good to match the possible bad on average, though this will depend on specifics and perhaps even on parametrization and scales. Note that a "dimension" and its "directions" can be combinations of different factors, e.g. temperature and food abundance together.
However, and this is the crucial claim: for a given stable solution under a fixed set of conditions, we should expect small changes in the conditions along one dimension that are good in both directions to be less likely than small changes along one dimension that are bad in both directions (like b.). This breaks the symmetry between good and bad changes, and implies that changes should a priori be bad in expectation.
Personally, I have not even been able to think of a dimension of conditions along which a change in either direction would be good, but this could just be my own ignorance. Please comment with examples if you think of any. I also suspect that such a solution would be less stable from a population genetics standpoint, if fitness-improving changes in genetics can align with changes in conditions. I suspect it's possible to make this claim more formal and prove a form of it mathematically.
1. If conditions decrease population sizes, even if the average welfare decreases, the total welfare may increase or decrease.
2. I think we can do much better than relying on a symmetric prior and making judgements about the net balance of pleasure and suffering in the world (or in particular cases) by appealing to it and little else (when pleasure and pain are not used to guide action but are simply induced in artificial minds, we might think this kind of symmetry in energy efficiency could hold). Life history classification can better inform our judgements; see:
Also, (subjectively) aggregating different welfare indicators as in:
3. An opposite principle might be antifragility. I think this would only hold for small changes, and if conditions continue to change, they can outpace population adaptation, and this would still be bad.
4. "How Much Do Wild Animals Suffer? A Foundational Result on the Question is Wrong." by Zach Groff (EA Forum post, EA Global talk, transcripts) for discussion of the two papers:
The correction to the original: "Does suffering dominate enjoyment in the animal kingdom? An update to welfare biology" (2019) by Zach Groff and Yew‑Kwang Ng
5. I spent a bit of time thinking about this in terms of a fitness function of conditions and population genetics, small changes in conditions and genetics, partial derivatives and directional derivatives. This might still be a promising approach, and I might try again eventually, but it's not a priority for me now, and it would probably be better off in the hands of someone with more background in ecology or evolution.
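A minimal numerical sketch of that derivative framing, under strong simplifying assumptions (a made-up fitness function of one condition variable and one genotype variable, with the adapted genotype exactly matching conditions):

```python
# Fitness F(c, g) of conditions c and genotype g, peaked where the genotype
# matches the conditions; the adapted genotype is g*(c) = c. All illustrative.
def F(c, g):
    return -(g - c) ** 2

c0 = 0.0
g_star = c0  # genotype adapted to the current conditions c0
eps = 0.1

# Central finite-difference estimate of the partial derivative of F in c
# at the adapted point:
dF_dc = (F(c0 + eps, g_star) - F(c0 - eps, g_star)) / (2 * eps)
print(dF_dc)  # 0.0: first-order effects vanish at the peak

# Second-order effect: fitness drops whichever way conditions move.
print(F(c0 + eps, g_star) < F(c0, g_star))  # True
print(F(c0 - eps, g_star) < F(c0, g_star))  # True
```

In this toy setup the first derivative in conditions vanishes at the adapted point and the second-order term is negative, so small condition changes in either direction reduce fitness; a real treatment would need to justify those curvature assumptions for actual fitness landscapes.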