Is anyone familiar with H.R. 485? It has been introduced in the House, but it is not yet law.
According to the Congressional Research Service (CRS), "This bill prohibits all federal health care programs, including the Federal Employees Health Benefits Program, and federally funded state health care programs (e.g., Medicaid) from using prices that are based on quality-adjusted life years (i.e., measures that discount the value of a life based on disability) to determine relevant thresholds for coverage, reimbursements, or incentive programs".
I think the motivation might be to prevent discrimination against people with disabilities, but it seems to me like it goes too far.
It seems to me it would prevent the use of QALYs for making decisions such as whether a particular cure for blindness is worthwhile, and how it compares to treatments for other diseases and conditions.
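To make the stakes concrete, here is a minimal sketch of the kind of cost-per-QALY comparison the bill would rule out. All the quality weights, costs, and durations below are numbers I made up for illustration, not real estimates:

```python
# Hypothetical cost-effectiveness comparison using QALYs.
# All quality weights, costs, and durations are made up for illustration.

def qalys_gained(years: float, weight_with: float, weight_without: float) -> float:
    """QALYs gained = years affected * (quality weight with treatment
    - quality weight without), on a 0 (dead) to 1 (full health) scale."""
    return years * (weight_with - weight_without)

def cost_per_qaly(cost: float, qalys: float) -> float:
    return cost / qalys

# Hypothetical cure for blindness: raises quality weight from 0.60 to 0.95
# over 30 remaining life-years, at a one-off cost of $50,000.
blindness = cost_per_qaly(50_000, qalys_gained(30, 0.95, 0.60))

# Hypothetical chronic-disease treatment: raises quality weight from 0.70
# to 0.80 over 10 years, at a total cost of $20,000.
chronic = cost_per_qaly(20_000, qalys_gained(10, 0.80, 0.70))

print(f"Blindness cure:    ${blindness:,.0f} per QALY")   # ~$4,762 per QALY
print(f"Chronic treatment: ${chronic:,.0f} per QALY")     # $20,000 per QALY

# A payer applying a threshold (say, $100,000 per QALY) would cover both,
# and would prioritize the blindness cure; as I read it, H.R. 485 would
# bar federal programs from using this kind of QALY-based threshold.
```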
Is anyone familiar with this bill and able to shed more light on it?
Thanks for sharing, gordoni.
I am not familiar with the bill. However, I think wellbeing-adjusted life years are a better measure than quality-adjusted life years (see here).
WELLBYs are proposed in the doc you link as a measure specifically for non-health and non-pecuniary benefits. QALYs already take subjective well-being into account through the psychological component of HRQoL, along with physical health metrics, so a shift to WELLBYs in this context would just exclude the physical health component of QALYs when pricing physical health interventions.
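For concreteness, here is a rough sketch of how the two metrics can price the same physical-health intervention differently. The two-component HRQoL, the weights, and the scores are simplified assumptions of mine, not any official formula:

```python
# Simplified contrast between a QALY-style and a WELLBY-style valuation.
# The two-component HRQoL and all numbers are illustrative assumptions
# (real instruments like the EQ-5D are richer).

def qaly_value(years: float, physical: float, psychological: float) -> float:
    """Toy HRQoL: average of a physical and a psychological component,
    each on a 0-1 scale, multiplied by years."""
    return years * (physical + psychological) / 2

def wellby_value(years: float, life_satisfaction: float) -> float:
    """WELLBYs: life satisfaction on a 0-10 scale, multiplied by years."""
    return years * life_satisfaction

# A physical-health intervention that mostly moves the physical component:
qaly_gain = qaly_value(10, 0.9, 0.8) - qaly_value(10, 0.5, 0.8)
print(f"QALYs gained: {qaly_gain:.1f}")  # 2.0, i.e. 0.2/year on a 0-1 scale

# If the same intervention barely moves reported life satisfaction
# (say, from 7.0 to 7.1), a WELLBY valuation registers only that shift:
wellby_gain = wellby_value(10, 7.1) - wellby_value(10, 7.0)
print(f"WELLBYs gained: {wellby_gain:.1f}")  # 1.0, i.e. 0.1/year on a 0-10 scale

# Relative to each scale, the QALY gain is 20% of the range per year while
# the WELLBY gain is 1%: the physical-health component largely drops out.
```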
Hi there,
Thanks for clarifying. In any case, I think we should only care about health and pecuniary benefits to the extent they affect wellbeing, so using WELLBYs still seems better than QALYs for assessing those too. In addition, I would prefer a wellbeing metric that measured happiness instead of life satisfaction.
This is getting into philosophical territory, so here’s a thought experiment. Let’s say you’d lost your legs and had to choose between a $10 pill that instantly regrew your legs and restored your subjective well-being, and a $0 pill that only corrected any loss in subjective well-being from having lost your legs. Do you really choose the well-being-only pill in this case?
Thanks for the thought experiment!
So, in the example you described, I would pay the $10 to get my legs back. However, this is just because I am altruistic, and with my legs (minus $10) I would have a greater positive impact on the world than without my legs (not having legs surely implies a loss in productivity due to e.g. having to spend more time moving from one place to another).
If the expected total hedonistic utility (ETHU) for all the moral patients excluding me were the same in both scenarios, I would be totally indifferent between the two options.
Interesting! Do you think that is a common view? And do you think that federal healthcare policy should be made by somehow tapping into commonsense moral intuitions? Or should a winning, even if unpopular, argument determine policy options?
Edit: perhaps we can value QALYs on the principle that we’re unlikely to be able to accurately track all contributors to total ETHU in practice, but having people maintain physical health is probably an important contributor to it. Physical health has positive externalities that go beyond subjective well-being, and therefore we should value it when setting healthcare policy.
In the general population, no. It is hard to imagine wellbeing being the same without one’s legs, so people would answer the question ignoring the stipulation that wellbeing would be the same.
I think commonsense moral intuitions should absolutely be taken into account. However, our intuitions can easily be misleading, so we should check whether they are consistent. For example, humans find it much more intuitive to compare mean levels of wellbeing, which often results in people rejecting the Repugnant Conclusion even though it follows from pretty indisputable premises.
Personally, I find expectational total hedonistic utilitarianism being true as intuitive as 1 = 1 being true. So, when asked about my preference between two situations in which ETHU is constant, I am always indifferent between them.
I also believe most disagreements about these thought experiments come from different interpretations of the meaning of wellbeing. For example, it is often said that wellbeing does not allow for intrinsically valuing relationships, beauty, and freedom. However, all of these are words we use to describe conscious states, i.e. wellbeing. Another common argument is that people value unconscious objects for their own sake (not just for the sake of the observer). However, all things we call unconscious are actually conscious in expectation, because they have a non-null chance of being conscious, so they also relate to wellbeing in expectation.
Great point! I agree there is a positive correlation between QALYs and ETHU, but I guess the correlation between WELLBYs and ETHU is stronger. Anyway, I am not confident about this. I am mainly in favour of a more widespread usage of WELLBYs in order to shift the focus to what actually matters, wellbeing. Even if WELLBYs are not a great measure of it, adopting them would hopefully lead to the adoption of better metrics in the future.
I think ETHU is all that matters (see this related episode of The 80,000 Hours Podcast), and in that sense there are no positive/negative externalities which go beyond it. I suppose you are alluding to e.g. better health leading to economic growth, which tends to increase wellbeing even if it does not impact it immediately. However, I am generally quite uncertain about whether economic growth is good or bad. While subjective wellbeing has been increasing with greater consumption (say, since at least the Industrial Revolution), existential risk has increased too. In other words, improving health does not look to me like a robust way of achieving differential progress.
So maybe the focus should not be on QALYs or WELLBYs, but on good metrics for achieving differential progress. Maybe ones about rationality? Being more rational means being better at achieving goals, so, to the extent that high existential risk is not aligned with our goals, greater rationality will tend to decrease it. I guess this is part of the motivation for 80,000 Hours listing epistemics and institutional decision-making as one of its most pressing problems.
In addition, I believe attention should shift (on the current margin) from gross domestic product to things like the total amount of compute, or the cost of DNA screening, which are much more informative about the greatest x-risks: advanced AI and engineered pandemics.