I often see media coverage of effective altruism that says "effective altruists want to maximise the number of QALYs in the world." (e.g. London Review of Books).

This is wrong. QALYs only measure health, and health is not all that matters. Most effective altruists care about increasing the number of "WALYs" or well-being adjusted life years, where health is just one component of wellbeing.

(Some effective altruists also care about goods besides welfare, such as the environment and justice. Depending on your view of population ethics, you might also distinguish between achieving WALYs by improving lives vs. adding to the number of people alive).

This is a bad misconception since it makes it look like we have a laughably narrow view of what's good. Even a hardcore hedonistic utilitarian would at least care about *happiness* rather than just health, and very few people are hedonistic utilitarians.

Why does effective altruism get misinterpreted as only caring about QALYs?

1) Sometimes community members actually say "we want to maximise the number of QALYs". I think we should stop doing this. Instead say something more like: "we want to maximise the number of people who have good lives", or just "maximise the good we do" and then if someone asks what "good" means, you can say it's people having happy or flourishing lives.

2) Sometimes when people ask us "how do you measure 'good'?" we talk about QALYs as an example. This is what happens in Will's book. I think this is a reasonable move - the idea of QALYs is really important to introduce to new people - but it can create the impression that you only care about QALYs. The QALY bit will be the most memorable, and people won't remember your disclaimers. This means if you're explaining QALYs you need to put a lot of emphasis on how that's not all that matters. You can do this by instead leading with "you can measure your impact by choosing good proxies within the cause you're working in e.g. in health there's QALYs, in education you can look at improvements in test scores and income, in economic empowerment programs you can look at income change, and so on. Use the best proxies you have available." Alternatively you can introduce the idea of QALYs, but then point out what we ultimately care about is welfare; it's just that right now health is the cause where quantification is easiest. 

Edit: I don't propose publicly promoting and using the term "WALYs" - just bear in mind "WALYs not QALYs" to help you remember. (My suggestions about what to say publicly are just above in 1 and 2).

Comments (18)



Hmm. I sort of thought "Quality Adjusted Life Year" effectively conveyed the thing I wanted (as opposed to Disability Adjusted Life Year, which definitely didn't).

In any case, if people are getting confused on that point, WALY seems like a good term to hold in reserve, to explain to people who think we're all about health. (I think if we just popularized it as a new term, especially if we hadn't worked out any way to actually measure things other than health in a robust fashion, it'd just end up with the same problems as QALY.)

Agree - added a clarification at the bottom of the main post.

Thanks for the post.

It might be worth saying, when making clear that QALYs aren't the only thing EAs care about, that even welfare maximisation doesn't have to be the only thing EAs care about; this might vary based on one's conception of EA, but given that the movement at least currently accommodates non-utilitarians (and I hope it continues to do so!), we don't want to fall into a WALY-maximisation trap any more than a QALY-maximisation trap.

That is to say: this post tells us, "look, specifically in the realm of health, there do seem to be ways of measuring things, but we actually care about measuring welfare". I'd suggest we say instead: "look, specifically in the realm of health, there do seem to be ways of measuring things, but we might actually want to measure any given value we might care about".

Agree - I mention that in brackets, and think it's also good to clarify if you have the opportunity.

Or, rather than using a different term, could we just expand the scope of the existing QALY term to include all forms of wellbeing? I was under the impression that QALYs can be used for mental health already. It's good that emotional wellbeing is being considered important by EAs -- according to the WHO, depression is the biggest cause of disability in the world and is on track to becoming the "second cause of the global disease burden." Furthermore, treating depression in the developing world is more economically feasible than you would think.

Can 'we' actually do that? If someone hears us talking about QALYs, they are either already going to know what they are (a health measure, as far as the rest of the world is concerned) or they're going to look it up and get a Wikipedia definition, not our definition.

And yes QALYs can be and are used for mental health, but that's because it's still health, which isn't quite the same as wellbeing.

This is a mega-important point.

Especially re 2, whenever I use QALYs as an example I immediately follow it up by talking about the difficulty of comparing QALYs to other things that are really good to increase, like improved education or better access to political institutions for marginalised people. This helps undermine both the 'you only care about QALYs' attack and the 'you don't care about systemic change' attack. It makes it clear we do care about those things, even if we don't have great ways to assess effectiveness there yet.

Generally: agree with the point.

Terminology: I think we should not try to push the term WALY; it sounds silly enough as an acronym that it is hard to build momentum around. I've heard health economists discuss this issue and complain about 'WALY' as a term, and I don't think there's an ideal solution. It would have been nice if QALYs had been called HALYs ("health-adjusted life year"), leaving QALY as the more general term. "Wellbeing QALY" is one alternative, but it's quite clunky.

Agree - added a clarification at the bottom of the main post.

Great post! I think the WALY / QALY distinction - and how to explain it carefully - should be taught to anyone running a giving game. As someone who organised a game recently, I found it tricky to navigate the trade-off between emphasising the basics ( "QALYs are a really important metric!") and the caveats ("It's only one of many important metrics"). This article by NICE helped me think the issue through.

I'm pleased and surprised to hear you say this. I'd thought that the default EA metric was QALYs: I've had more conversations than I can count in which I've told EAs that QALYs are probably not the best metric overall, or even for health itself. A few points.

  1. There's no common-sense understanding of the term, unlike 'health', and there are also three different and incompatible accounts of well-being (hedonism, desire-satisfaction, objective list). Saying "we're in favour of well-being" can be equally mysterious.

  2. I think there's something a bit weird about saying "this is what we actually mean, but don't tell anyone in case they think we're weird". I worry it's getting a bit, well, Scientological, if you're at the stage where you have various versions of the truth: one for the public, another for those in the know.

  3. My suggestion is to talk about "HALYs" - happiness-adjusted life years. Not only is this what we actually care about (if we're utilitarians, though I presume any view should value happiness, all other things being equal), happiness is much more intuitive than well-being and it sounds less silly.

Hi Michael,

On 2, I don't mean to suggest having two versions of the truth - I mean we should just say "we want to maximise the number of people who have flourishing lives" or something broader like that, rather than "WALYs", which sounds like an official metric, but isn't, and is a bit goofy.

On 1 and 3, I still lean towards talking about wellbeing or flourishing, because happiness sounds too narrow. It depends a bit on who you're talking to - many people don't notice the distinction.

Ah, I see. I do think that makes sense: we stress the value of things besides health but shy away from using terms which make us look silly.

And yet whilst 'well-being' and 'flourishing' are good names, they seem problematically vague to my ears: I imagine a conversation where I say "I want to help people lead flourishing lives" and they pause before saying "I agree, QALYs sound too narrow ... but what exactly do you mean by leading a flourishing life? How are you defining/measuring that?" I think there's an advantage, if you want to do cost-effectiveness analysis, to having a clear, if slightly wrong, measure. QALYs have the virtue of providing a uniform score sheet.

On your last point, I think that reveals a problem about word use. I don't see 'happiness' being used narrowly in ordinary language at all. It describes a whole host of things: the good life, well-being, life satisfaction, emotions, etc.

In defence of WALYs, and in reply to your specific points:

  1. I don't share your intuition here. Well-being is what we're talking about when we say "I'm not sure he's doing so well at the moment", or when we say "I want to help people as much as possible". It's a general term for how well someone is doing, overall. It's an advantage, in my eyes, that it's not committed to any specific account of well-being, for any such account might have its drawbacks.

  2. I worry that, in adopting HALYs, EA would tie its aims to a narrow view of what human well-being and flourishing consists of. This is unnecessary, for EA is just about helping people as much as possible. Even if we were convinced that the only component of well-being was happiness, it would still be an additional claim to the core of EA.

Thanks for the comments Tom.

On 1. I agree that the broadness of leaving 'well-being' unspecified looks like an advantage, but I think that's somewhat illusory. If I ask you "okay, so if you want to help people do better, what do you mean by 'better'?" then you've got to specify an account of well-being unless you want to give a circular answer. If you just say "well, I want to do what's good for them", that wouldn't tell me what you meant.

This might seem picky, but depending on your view of well-being you get quite sharply different policy/EA decisions. I'm doing some research on this now and hope to write it up soon.

On 2. I should probably reveal my cards and say I'm a hedonist about well-being. I'm not interested in any intervention which doesn't make people experience more joy and less suffering. To make the point by contrast, lots of things which make people richer do nothing to increase happiness. I'm very happy for other EAs to choose their own accounts of well-being, of course. As it happens, lots of EAs seem to be implicit or explicit hedonists too.

Excellent point!

Especially if we consider the layman's definition of health: "being free from illness or injury". This definition opens up the promotion of health to the same kind of objection that fells Negative Utilitarianism, namely: if all that mattered were reducing suffering, wouldn't the easiest way to do so be simply to get rid of all sentient beings?

We should be open to the possibility that making the world less ill might also make it worse off, if there were a simultaneous loss of other valuable things besides health that offset the reduction in illness.

We should be clear that when we focus on QALYs, it is only because health is (arguably) one of the most tractable domains for promoting well-being.

I often see media coverage of effective altruism that says "effective altruists want to maximise the number of QALYs in the world." (e.g. London Review of Books).

The specific example you mention is particularly puzzling: it is a review of Doing Good Better, which makes this point very clearly (pp. 39-40):

the same methods that were used to create the QALY could be used to measure the costs and benefits of pretty much anything. We could use these methods to estimate the degree to which your well-being is affected by stubbing your toe, or by going through a divorce, or by losing your job. We could call them well-being-adjusted life years instead. The idea would be that being dead is at 0 percent well-being; being as well off as you realistically can be is at 100 percent well-being. You can compare the impact of different activities in terms of how much and for how long they increase people's well-being. In chapter one we saw that doubling someone's income gives a 5 percentage point increase in reported subjective well-being. On this measure, doubling someone's income for twenty years would provide one WALY.

Thinking in terms of well-being improvements allows us to compare very different outcomes, at least in principle. For example, suppose you were unsure about whether to donate to the United Way of New York City or to Guide Dogs of America. You find out that it costs Guide Dogs of America approximately $50,000 to train and provide one guide dog for one blind person. Which is a better use of fifty dollars: providing five books, or a 1/1,000th contribution to a guide dog? It might initially seem like such a comparison is impossible, but if we knew the impact of each of these activities on people’s well-being, then we could compare them.

Suppose, hypothetically, that we found out that providing one guide dog (at a cost of $50,000) would give a 10 percentage point increase in reported well-being for one person's life over nine years (the working life of the dog). That would be 0.9 WALYs. And suppose that providing five thousand books (at a cost of $50,000) provided a 0.01 percentage point increase in quality of life for five hundred people for forty years. That would be two WALYs. If we knew this, then we'd know that spending $50,000 on schoolbooks provided a greater benefit than spending $50,000 on one guide dog.

The difficulty of comparing different sorts of altruistic activity is therefore ultimately due to a lack of knowledge about what will happen as a result of that activity, or a lack of knowledge about how different activities translate into improvements to people’s lives. It’s not that different sorts of benefits are in principle incomparable.
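To make the quoted arithmetic concrete, here's a minimal sketch (the `walys` helper is my own illustration, not from the book), assuming one WALY equals 100 percentage-point-years of well-being, i.e. 100% well-being sustained for one year:

```python
# One WALY = 100 percentage-point-years of well-being
# (100% well-being sustained for one year). Hypothetical helper.

def walys(pct_point_gain: float, people: int, years: float) -> float:
    """WALYs from a uniform well-being gain across some people and years."""
    return pct_point_gain * people * years / 100

# Doubling one person's income: +5 points for 20 years
print(walys(5, 1, 20))       # 1.0 WALY

# One guide dog: +10 points for 1 person over 9 years
print(walys(10, 1, 9))       # 0.9 WALYs

# Books: +0.01 points for 500 people over 40 years
print(walys(0.01, 500, 40))  # 2.0 WALYs
```

The same helper works for any intervention once you have an estimate of the well-being gain, which is exactly the book's point: the comparison is hard only because those estimates are hard to get, not because the goods are incomparable in principle.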

This happens a lot - the most striking part of your message is what's most memorable. Disclaimers and qualifications normally get forgotten.
