Summary: Desire theories of welfare hold that our welfare consists in the degrees to which our desires or preferences are satisfied or frustrated. Developing a satisfactory desire theory that counts the suffering of non-reflective animals raises some issues and subtleties: such a theory would use reflective preferences for humans but revealed preferences for non-reflective animals, while making continuous tradeoffs between the two. In this post, I briefly describe possible solutions and discuss how to make such tradeoffs in practice.
(Disclaimer: This is a short post I wrote in about 1-2 hours based on accumulated background knowledge, but no targeted literature review on the specific topic, so it plausibly contains errors or oversights.)
Weighing reflective and revealed preferences in theory
Reflective preferences/desires are preferences that individuals endorse after thought. When you ask someone their preferences about an issue, you generally expect to get their reflective preferences. Only beings capable of a certain degree of thought have reflective preferences, and this probably excludes many nonhuman animals (at least without training with language).
Revealed preferences are preferences we infer from individuals' actual choices in real-world situations. Basically all living animals have revealed preferences, and even non-conscious beings may have them, although I assume only the preferences of conscious beings actually matter.
If only reflective preferences count ethically, then the interests of many nonhuman animals capable of suffering (and plausibly of some humans with limited cognitive capacities) wouldn't matter in themselves at all. If reflective preferences lexically dominate revealed preferences, i.e. reflective preferences are always prioritized over revealed preferences whenever the two disagree, then the result is practically the same, and we may as well ignore non-reflective beings.
However, if we instead allow continuous tradeoffs between reflective preferences and revealed preferences, optionally ignoring revealed preferences in an individual when their reflective preferences are available, then we can get continuous tradeoffs between human and nonhuman animal preferences. I'd guess this could be justified by an account on which both revealed and reflective preferences are measures of some true underlying weights of desires, but reflective preferences happen to be more accurate in individuals capable of reflection. Alternatively, a moral anti-realist or moral pluralist might just be inclined to make such continuous tradeoffs, without any need for an underlying moral construct that explains both in a unified manner.
(Edited to add) Reflective and revealed preferences might come from different kinds of conscious evaluations or judgements of circumstances: life satisfaction is a reflective evaluation, while revealed preferences may (often but not always) be motivated by pleasure and suffering or result from reinforcement with pleasure and suffering, and pleasure and suffering are themselves or involve "felt evaluations" or "felt judgements". Suffering involves judging circumstances negatively, while pleasure involves judging them positively, where these judgements are felt, not reasoned. The distinction between reflective judgements and felt ones can be blurry, too, because reflective judgements will, I think, necessarily be based at least in part on evaluative impressions or intuitions, themselves felt judgements, although not necessarily hedonistic in nature. I see reflective preferences as coming from pulling out felt judgements, reasoning about them, and weighing them and inferences based on them against one another. Without felt judgements to start, there's nothing to reason about and weigh together to come to any overall reflective judgement other than indifference. In some cases, when there's no reflection to do, a reflective judgement just is a felt judgement. So, it would be odd to discount nonhuman animals' felt judgements, including their pleasure and suffering.
(Another approach could be to consider what animals' reflective preferences would be were they capable of reflection, and deal only with idealized reflective preferences (credits to M.B.), but I'll set that possibility aside here.)
The next section assumes we can make such continuous tradeoffs between reflective and revealed preferences, and discusses how to actually do so between humans and nonhuman animals, through first isolating the reflective desire-based weights of physical pain in humans.
Weighing preferences in practice
Physical pain typically also results in, or is otherwise associated with, functional impairments that limit activities. People tend to avoid activities that cause them physical pain, even when those activities serve desires they judge, on reflection, to be more important than avoiding the pain. Or, the cause of their physical pain, like an injury, may limit their capacities and activities directly, not through the pain itself.
So, in measuring the effects of a pain-reducing intervention on life satisfaction, QALYs, DALYs or some other reflective desire-based measure, we'd capture not just the reduction in the desire-based harm of the suffering from the pain itself, but also the effects of allowing people to pursue activities they otherwise wouldn't, and the effects of both the pain reduction and the functional improvements on overall long-term mood, which further impacts desire satisfaction. However, there's lots of data on the EQ-5D dimensions of health-related welfare, and we could estimate the effect of the pain/discomfort dimension on life satisfaction, QALYs, or DALYs, holding constant the other EQ-5D dimensions of mobility, self-care and usual activities (either including or excluding anxiety/depression) and baseline demographic info. There may be practical issues in actually doing so with EQ-5D data, say because the data is not sufficiently precise about the intensities, frequencies and durations of suffering, but this illustrates what we could do: just control for other factors. This should give us a desire-based weight for the suffering pain causes in humans itself, not (or not primarily) for its effects in limiting activities or frustrating other desires.
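To make the "control for other factors" idea concrete, here's a minimal sketch with entirely synthetic data. The variable names gesture at the EQ-5D dimensions, but nothing here uses real EQ-5D data or its actual coding, and the "true" coefficients are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Synthetic EQ-5D-style dimension scores (higher = worse problems).
pain = rng.integers(1, 6, n).astype(float)              # pain/discomfort
mobility = np.clip(pain + rng.normal(0, 1.0, n), 1, 5)  # correlated with pain
self_care = rng.integers(1, 6, n).astype(float)
activities = rng.integers(1, 6, n).astype(float)

# Assumed "true" model: pain lowers life satisfaction directly (-0.5)
# and also indirectly via impaired mobility, plus noise.
life_sat = (8.0 - 0.5 * pain - 0.3 * mobility - 0.2 * self_care
            - 0.4 * activities + rng.normal(0, 0.5, n))

# Naive regression on pain alone mixes the direct harm of the suffering
# with the downstream effects of the correlated functional impairment.
X_naive = np.column_stack([np.ones(n), pain])
naive_weight = np.linalg.lstsq(X_naive, life_sat, rcond=None)[0][1]

# Controlling for the other dimensions isolates the direct, desire-based
# weight of the pain itself.
X = np.column_stack([np.ones(n), pain, mobility, self_care, activities])
pain_weight = np.linalg.lstsq(X, life_sat, rcond=None)[0][1]

print(round(naive_weight, 2), round(pain_weight, 2))
# The naive estimate overstates the direct harm; the controlled estimate
# recovers something near the assumed -0.5.
```

The same logic carries over to real data, where the practical issues noted above (coarse intensity/frequency/duration coding) would limit precision.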
Then, fixing another animal species, with
- (1) a multiplier between humans' average absolute/cardinal reflective desire-based weight to some of their own individual suffering and the other species' average absolute/cardinal revealed preference-based weight to some of their own individual suffering,
- (2) humans' reflective desire-based weights between various desires, and intensities, frequencies and durations of suffering in our own individual tradeoffs, and
- (3) the other species' revealed preference-based weights between various desires, and intensities, frequencies and durations of suffering in their own individual tradeoffs,
we can make tradeoffs between revealed preferences in that other species and reflective desires in humans.
In other words, we have two separate welfare scales, (2) humans' reflective preferences to measure our own welfare and (3) a nonhuman animal species' revealed preferences to measure their own welfare, and we use (1), the multiplier, to put them on a common scale and make tradeoffs between them.
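The three-part scheme above can be sketched in a few lines of arithmetic. All the numbers and state labels below are made up for illustration; in particular the multiplier of 0.3 is an arbitrary placeholder, not an estimate.

```python
# (2) Human reflective desire-based weights, in human welfare units.
human_weights = {"mild_pain_hour": -1.0, "severe_pain_hour": -10.0}

# (3) Animal revealed-preference-based weights, in the animal's own units,
#     e.g. inferred from how much the animal will forgo to avoid each state.
animal_weights = {"mild_pain_hour": -1.0, "severe_pain_hour": -6.0}

# (1) Multiplier converting the animal's cardinal weight on its own
#     suffering into human units; possibly fairly arbitrary.
multiplier = 0.3

def to_common_scale(weights, multiplier):
    """Convert a species' own-scale weights into human welfare units."""
    return {state: multiplier * w for state, w in weights.items()}

animal_common = to_common_scale(animal_weights, multiplier)

# Example tradeoff: sparing one animal an hour of severe pain vs. sparing
# one human an hour of mild pain.
animal_benefit = -animal_common["severe_pain_hour"]  # about 1.8 human units
human_benefit = -human_weights["mild_pain_hour"]     # 1.0 human units
print(animal_benefit > human_benefit)  # True under these illustrative numbers
```

Note that rescaling all of a species' own-scale weights leaves its within-species tradeoffs unchanged; only the cross-species comparisons depend on (1).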
(1) may unfortunately turn out to be fairly arbitrary, and there are issues to consider related to the two envelopes problem for moral uncertainty.