This is a cross-posted comment from the Clearer Thinking regranting competition on Manifold Markets (with a couple of minor edits and typo corrections).
What is it about?
The article describes my take on why I think the Happier Lives Institute should receive funding through the Clearer Thinking regranting round (Clearer Thinking organized a tournament on Manifold Markets to help them crowd-evaluate which projects should receive funding).
This is by no means a comprehensive review of the Happier Lives Institute (HLI). I have been exposed to HLI work relatively recently. Think of it as an interface to quickly understand what the Happier Lives Institute does and a subjective assessment of the potential value threads they are creating.
In the following text, I argue that HLI brings value in two dimensions. One is their work increasing well-being directly – they evaluate and support the most cost-effective organizations globally. The other is their work applying and stress-testing the Subjective Well-Being framework (SWB). I think having an alternative, thoroughly researched framework like SWB has a high expected value for the EA community. Most EA orgs rely on the QALY+ framework, so this work can help diversify worldviews and calibrate the judgments of the main EA organizations.
Why am I writing this?
I am posting this on the Forum because some ideas may apply more broadly, e.g. examining what is behind the relatively low engagement within the EA community with projects that aim to increase well-being directly. I would also love to hear feedback. Notes on my reasoning or on the Happier Lives Institute's approach to increasing global well-being are welcome.
I must be biased because I was voting "yes" on this market during the Clearer Thinking tournament on Manifold Markets. I had prior exposure to HLI, and when I saw the chances of HLI receiving a grant at 40%, I thought the prediction was way off.
Since then, I have spent a couple of days researching the topic. I watched a couple of HLI YouTube lectures and read a couple of their EA Forum posts. I am not very knowledgeable about the internal mechanics of frameworks like SWB and QALY+. I do have decent knowledge of, and long exposure to, topics like psychotherapy, well-being, and evidence-based therapies.
Who may be interested in reading this?
- People interested in the well-being discourse.
- People who are skeptical about EA organizations that aim to increase well-being directly.
- People who don't know HLI or don't understand the value they bring.
How to navigate through this document?
All the sections (marked by their titles) stand on their own and can be understood without reading the previous ones. Feel free to skip around.
Abbreviations that are used in the text:
- HLI – Happier Lives Institute
- SWB – subjective well-being framework
- QALY – Quality-adjusted life years framework
Utilitarianism and wellbeing
In many definitions of utilitarianism, well-being is the central, defining term. Take some generic one from Wikipedia: “Utilitarianism is a family of normative ethical theories that prescribe actions that maximize happiness and well-being for all individuals”
Well-being, however, is notoriously hard to define and measure. Perhaps that’s why this area is relatively neglected within the EA community. Also, in the past, established frameworks like QALY+ didn’t render opportunities in the space particularly impactful. Still, it seems bizarre that EA couldn’t identify interventions that attempt to increase well-being directly; intuitively, there should be projects out there with a high expected value tackling the problem head-on.
Speculating, there may be one more reason for the lack of interest in the community. People within EA seem highly analytical – the majority are engineers, economists, and mathematicians. Could demographics like this mean that people, on average, score lower on emotional intelligence skills, making the community less interested in projects optimizing this space?
Happier Lives Institute as an organization
In the simplest terms, the Happier Lives Institute is like a GiveWell that specializes in well-being. They identify and support the most cost-effective opportunities to increase global well-being.
Michael Plant, its founder, has been an active member of the EA Forum since 2015. He has written 26 posts gathering more than 5.6k karma. He has been interested in the subject at least since 2016, when he wrote his first post on the Forum, asking Is effective altruism overlooking human happiness and mental health? I argue it is. His lectures on the subject seem clear and methodical, and follow the best epistemological practices in the community. He was Peter Singer’s research assistant for two years, and Singer is an advisor to the institute.
The Clearer Thinking regrant is sponsoring the salary of Dr. Lily Yu. She seems to have relevant experience at the intersection of science, health, entrepreneurship, and grant-making.
The cause area seems to be neglected within EA. Besides HLI, I am aware of the EA Psychology Lab and Effective Self-Help – but neither of these organizations does work as comprehensive as HLI's.
Subjective well-being framework
Even if the only value proposed by HLI was to research and donate to the most cost-effective opportunities to increase global well-being, I think it would be an outstanding organization to support.
However, HLI also develops and stress-tests the Subjective Well-Being framework (SWB) – work that the whole EA community can benefit from. Michael Plant describes the SWB methodology in this article and this lecture. Most leading EA orgs, like Open Philanthropy and GiveWell, use a different approach – the QALY+ framework.
I think a big chunk of HLI's value lies in running an alternative to the QALY+ framework and challenging its assumptions. Michael Plant does this in the essay A philosophical review of Open Philanthropy’s Cause Prioritisation Framework. I won't attempt to summarize the topic here (please see the links above for details), but I will highlight a couple of the most interesting threads.
“It’s worth pointing out that QALYs and DALYs, the standard health metrics that OP, GiveWell, and others have relied on in their cause prioritization framework, are likely to be misleading because they rely on individuals' assessments of how bad they expect various health conditions would be, not on observations of how much those health conditions alter the subjective wellbeing of those who have them (Dolan and Metcalfe, 2012) … our affective forecasts (predictions of how others, or our later selves, feel) are subject to focusing illusions, where we overweight the importance of easy-to-visualise details, and immune neglect, where we forget that we will adapt to some things and not others, amongst other biases (Gilbert and Wilson 2007).” Link
Also worth noting is that the SWB framework demonstrates a lot of potential in areas previously ignored by EA organizations:
“[We] at the Happier Lives Institute conducted two meta-analyses to compare the cost-effectiveness, in low-income countries, of providing psychotherapy to those diagnosed with depression compared to giving cash transfers to very poor families. We did this in terms of subjective measures of wellbeing and found that therapy is 9x more cost-effective” Link
HLI also looked at interventions backed by Open Philanthropy and GiveWell to compare QALY+ with SWB results.
“I show that, if we understand good in terms of maximising self-reported LS [Life satisfaction], alleviating poverty is surprisingly unpromising whereas mental health interventions, which have so far been overlooked, seem more effective” Link
But how can reasoning of this type influence organizations like Open Philanthropy or GiveWell? Here, Michael Plant describes how grant-making decisions can vary based on the weight given to different frameworks. The example concerns assessing the value lost based on age at death.
“Perhaps the standard view of the badness of death is deprivationism, which states that the badness of death consists in the wellbeing the person would have had, had they lived. On this view, it’s more important to save children than adults, all else equal, because children have more wellbeing to lose.
Some people have an alternative view that saving adults is more valuable than saving children. Children are not fully developed, they do not have a strong psychological connection to their future selves, nor do they have as many interests that will be frustrated if they die. The view in the philosophical literature that captures this intuition is called the time-relative interest account (TRIA).
A third view is Epicureanism, named after the ancient Greek philosopher Epicurus, on which death is not bad for us and so there is no value in living longer rather than shorter.” Link
Prioritizing each of these approaches means different grant-making decisions (do we value children's or adults' lives more?). Plant thinks that GiveWell does an insufficient job in its modeling:
“On what grounds are the donor preferences [60% of their weight on this marker] is the most plausible weights … The philosophical literature is rich with arguments for and against each of the views on the badness of death (again, Gamlund and Solberg, 2019 is a good overview). We should engage with those arguments, rather than simply polling people… [Open Philanthropy] do not need to go ‘all-in’ on a single philosophical view. Instead, they could divide up their resources across deprivationism, TRIA, and Epicureanism in accordance with their credence in each view.” Link
I also see value in promoting evidence-based approaches to therapy because of my personal background. I grew up in Poland, a country that had rough 19th and 20th centuries: partitions, uprisings, wars, the Holocaust, changes of borders, communism, transformation. Generational trauma is still present in my country.
I went through four types of therapy and only later stumbled upon evidence-based approaches. From my experience, it seems critical to pick the right therapies, as their effectiveness varies widely. Approaches like cognitive behavioral therapy (CBT) or third-wave therapies tend to be more effective. (Third-wave therapies are evidence-based approaches built on the CBT foundation, but developed using human rather than animal models.)
In my country, but also in many others, ineffective and unscientific approaches remain widespread, often dominant. It seems valuable to have an organization with a high epistemic culture that assesses and promotes evidence-based interventions.
I think the work of HLI would be compromised if the SWB framework had major flaws. Reading Michael Plant’s article on the subject makes me think that SWB is a well-researched and heavily discussed approach; however, I don’t know much about its internal mechanics and haven't investigated its potential flaws.
I see the value of HLI supporting interventions that increase global well-being directly. But I also see value in their work on the SWB framework. I think having an alternative, thoroughly researched framework like SWB has a high expected value for the whole community. The regrant will help HLI stress-test their assumptions and apply the framework to more organizations. HLI's work could influence leading EA organizations like Open Philanthropy or GiveWell – potentially helping recalibrate their recommendations and assessments.
A lot of this post reads as an intro to HLI: what they do, why wellbeing matters. And this is important and, I agree, neglected.
At the same time, you write that this post is about why HLI should receive a grant for a specific proposal of theirs: https://docs.google.com/document/d/1zANITg1HuKAn5uEe7nzepTZXxyMDy44vowsdVcMFiHo/edit
And it seems to me you do not really address the value or specifics of this proposal? Your post reads to me more as 'we should fund HLI's research', but the proposal asks for funding for a grants specialist and seed money. And it's strange to me that you mostly recommend funding them based on prior work (which, again, I also see as work of quality and importance) rather than also evaluating the proposal at hand.
For instance, HLI are requesting $100,000 as a seed fund to e.g. 'make some early-stage grants'. This would effectively be a regrant of a regrant. People in the comments have expressed skepticism of this (e.g. Nuño's comment: "FTX which chooses regrantors which give money to Clearthinking which gives money to HLI which gives money to their projects. It's possible I'm adding or forgetting a level, but it seems like too many levels of recursion, regardless of whether the grant is good or bad.") There's a lot of dilution and I wonder what you think of this?
Other people on Manifold (John and Rina) have pointed out how non-specific this proposal is, how lacking in a plan it appears as currently written, and that there might be harm risks that aren't considered at all. I understand there might have been word limits, but other proposals are much more concrete.
It would be great if Clearer Thinking published more information on how they evaluated all of these final proposals.
Thanks for highlighting these concerns! Here is what I think about these topics:
I focused on giving an overview of HLI and the problem area because, compared to other teams, it seemed to be one of the most established and highest-quality orgs within the Clearer Thinking regranting round. I thought this might be missed by some, and it is a good predictor of outcomes.
I focused on the big-picture lens because the project they are looking for funding for is pretty open-ended.
I think their prior performance and the quality of the methodology they are using are good predictors of the expected value of this grant.
I didn’t get the impression that the application lacks specific examples, though perhaps it could be improved. They listed three specific projects whose impact they want to investigate:
That said, I wish they had listed a couple more organizations/projects/policies they would like to investigate, or communicated something along the lines of: "We don’t have more specifics this time, as the nature of this project is to task Dr Lily Yu with identifying potential interventions worth funding. We therefore focus more on describing methodology, direction, and our relevant experience."
I am not sure how much support HLI gets from the whole EA ecosystem. It may be low. Their EA Forum profile suggests as much: “As of July 2022, HLI has received $55,000 in funding from Effective Altruism Funds”. Because of that, I thought discussing this topic at a higher level might be helpful.
I also think the SWB framework aspect wasn’t highlighted enough in the application. I focused on it because I see a very high expected value in supporting this grant application: it will help HLI stress-test the SWB methodology further.
As for Nuño's comment: I don't see a problem with money being passed along through a number of orgs. I sympathize with this fragment of Austin's comment (please read the whole comment, as this fragment alone is a little misleading about what Austin meant).
Initially, FTX decided on the regrant dynamic – perhaps to distribute intelligence and responsibility across more actors. What if adding more steps actually improves the quality of the grants? I think the main question here is whether this particular step adds value.