This is for the Effective Altruism red-teaming contest. I’ll be honest: I still don’t quite know what that means. But I hope everyone can get some value out of this, contest or no!
TL;DR: Effective Altruism and effective altruists ought to think more about meaning, rather than stopping at things like suffering and pleasure or settling for the vagueness of wellbeing. An emphasis on meaning can help EAs in their daily activities, can help them choose and align themselves with their causes, and can transform their overarching hopes and goals for world-saving.
Meaning, fast and slow
While it’s correct to say that we care about freedom, health, and being alive, it’s even more correct to say that we care about meaning: we want our lives, and the lives of those we care about, to be experienced as being as meaningful as possible. This underlying motivation drives our actions, whether we pursue it consciously or unconsciously.
Perhaps most recognizably called to attention in Viktor Frankl’s Man’s Search for Meaning, each of us has a felt sense of purpose (meaning) that drives our actions. The qualities and quantities of this purpose affect our behavior. We do not merely feel that life has purpose; we actively strive to live in accordance with our purpose. This narrative sense of meaning allows us to care about more than the here and now, more than the child drowning in front of us. Through this narrative, imaginative purpose, we do much more good than we would were we bound to the near and present. In my work, I refer to this narrative meaning as the slow kind of meaning, the meaning that forms the background of our lives.
The other kind of meaning is no less important, though it is rarely acknowledged or recognized in scientific literature. This is the meaning present in our direct, perceptually driven experience. I refer to it as immediate or fast meaning. Because immediate meaning is experiential, it can be difficult to describe with words, but it can be done with some participation from your own experience. Immediate meaning is similar to the experience of color. When you see the color “blue”, you do not perceive the physical properties of blue (e.g., electromagnetic radiation of a certain frequency) but instead have an experience of blue, of blueness. If you look around you and notice the brightest and clearest colors (for me, there’s a very red hat nearby), you find that they stand out in a way that feels a certain way. You find it easy to pick out objects that are especially colorful and easy to ignore colors that are dull or overly similar to others in their surrounding space. So too do we perceive objects as being more or less meaningful. A pretty rock I collected and placed on my shelf at home feels different from that same rock out in the wild, and it would feel yet another kind of different if it were on someone else’s shelf. It’s my pretty rock—it’s not just another rock. There are many reasons why the immediate meaning that objects present to us through experience might be adaptive and useful, but we can sidestep that conversation here—what matters is that we find ourselves at the center of an experienced world full of meaningful objects. These objects need not be physically in front of us. We can find ideas and people, real or imagined, immediately meaningful.
Immediate (fast) and narrative (slow) meaning together participate in a lifelong process that shapes our behavior. Narrative meaning is informed by what we find immediately meaningful: we orient our lives around the objects with which we have had meaningful experiences, ensuring further interactions with those objects. Immediate meaning is in turn bounded by our narrative meaning: our narrative ends place us in situations that generate immediately meaningful experiences, which lead us to keep pursuing those ends. If I am never or rarely able to act in accordance with my felt purpose, this will undermine my immediate feelings of meaning. Thus, meaning begets meaning and its absence begets further absence. I act according to my purpose, and my resulting experiences lead me to adjust my purpose accordingly. This process is cyclical and self-reinforcing. Or, when it fails, it is self-undermining, as happens when we fail to act in accordance with our felt purpose (e.g., when anxiety or another obstacle paralyzes us) or when we do not receive the immediate experience we had expected (expectations vs. reality). In this case, lack of meaning begets further lack of meaning (sometimes characterized as an absence of connectedness).
This meaning dynamic is not reducible to hedonic pleasure/pain dimensions: something meaningful (e.g., a sad song in our playlist) might feel bad, yet be significant in a way that is not explained by its hedonic properties. Meaning is spread out across time and space in a way that more traditional measures like happiness (almost exclusively assessed by simple questionnaires that ask about it straightforwardly: “Are you happy?”) and awe (only immediate) cannot hope to capture. We certainly think that we want to be happy and that we want other experiencing beings to be happy, but that’s not an accurate or practical intention—instead, we want our lives and the lives of others to be replete with meaning. This sidesteps issues like the hedonic treadmill, which brings us the sad tidings that pleasure costs more and more the more we get it. In other words, meaning can be maximized without concern for increasing tolerance. Meaning is also more directly applicable to non-human agents, as we can make efforts to understand it without language-bound data collection, without asking for and expecting a true and accurate statement—or so I claim. Admittedly, demonstrating this in a rigorous way is something that I’m actively pursuing; it’s very much a work in progress (please reach out to me if you are interested in helping!). Very briefly, I assert that meaning can be understood as a natural process through which agents set and pursue goals across space and time, and that the presence of certain features ought to tell us whether an agent experiences and pursues meaning (as opposed to being an automaton or mere symbol-shuffler that only looks like it’s living out a meaningful existence). In other words, meaning is a fundamental part of life at all scales, and this meaning-making dynamic can be observed in natural environments.
Important aside: Isn’t meaning just wellbeing?
There’s already a great deal of talk around wellbeing, eudaimonia, and the “good life”. Why introduce another term, especially one as overloaded as meaning? This merits its own deeper treatment, but the short version is that wellbeing is too high-level to be turned into the kind of fundamental, predictive, mathematical, rigorous understanding that we need. Starting with just the kind of experience that we most value, and targeting that rather than all the complexity we get when we look at wellbeing (which begins with the messiness of self-report, reasoning, culture, and context), lets us keep what we care about at the forefront. This isn’t to say that we ought not to pursue wellbeing et al.—but that we can and should be comfortable going deeper, to the heart of our cares.
Whatever the case may be, a meaning-first approach to effective altruism, to doing good better, could lead to several advantages.
The way things are right now
Effective Altruism (EA) was founded on a straightforward philosophy: if you’re going to do good, do it well. And do as much of it as you can—do not exclude those who are more physically and mentally distant from you. The first set of obvious projects this philosophy led to involved coming up with ways to more efficiently save lives with the resources at our disposal. Saving lives is good, but that’s clearly not an end goal so much as a high-level hope that preventing death can lead to something of value. It is therefore assumed that being alive is on balance a good thing. Yet, it is also well understood that life can be very bad, experientially speaking. Experience is what ultimately grounds all of EA’s fundamental cares, even today. Saving the world from meteoric extinction, making the lives of creatures better, aligning ourselves with artificially intelligent superchildren—all are grounded in a single assumed fact: that we care about the experience of ourselves and others.
What then is the experience we want to protect and shape? In science and philosophy, this is nebulously called “consciousness”. We all have it, most of the time (after all, we turn it off almost completely for about a third of the time we’re alive), and we want it to continue indefinitely. A universe without consciousness is fundamentally not aligned with any of the things we care about.
Consciousness is a bit of a fraught term, as it refers both to our experience and to our awareness of experience. For the sake of this work, I will simply call it experience—experience is the felt quality of being, the what-it-is-like-ness of existence. The canonical example from Thomas Nagel asks us to imagine what it is like to be a bat: to echolocate, to have little bat wings and finger-like stubs on those wings, to sleep upside-down, to want bugs and the company of other bats—to feel as a bat (presumably) feels. Here we have another assumption. We assume that it is like anything at all to be a bat, that there’s an inner experience to these creatures that are similar to but different from ourselves. This means that they can feel good and bad, feel pleasure and pain, be happy or suffer. It’s this last dichotomy that drove me to write this.
Many effective altruists have very pragmatically agreed that happiness and suffering are more accurate reflections of their cares than merely keeping as many people alive as possible (though it’s clear that both happiness and suffering are contingent on our continued being alive). Though the earliest EA-inspired organizations like GiveWell did and still do care almost exclusively about reducing preventable death, other organizations quickly sprouted up. Among these is GiveDirectly, an org that is hellbent on getting as much money as possible to people who need it, both for fulfilling their basic needs and for making purchases that improve the quality of their lives. There’s a bit of (amicable) tension between these cause areas, as they compete for funding and attention. If a rational donor wants to keep as many people alive today as possible, GiveWell is a much more rational channel. But if a rational donor cares about quality of life, the math might come out differently. The problem is that the math here can’t be done.
This isn’t to say that we couldn’t or haven’t tried to do the math and quantify quality of life. Quality-Adjusted Life Years (QALYs) were invented to this end, as were their younger, more ambitious siblings, Well-Being-Adjusted Life Years (WELLBYs). QALYs quite audaciously try to put a number on how bad bad things are. If—all else being equal—there were two of me and one of me were blind, QALYs would quantify my blind self’s life as being of lower quality. It’s a troubling calculation, but these kinds of troubling calculations are everywhere for effective altruists. In the end, decisions affecting countless lives are going to be made, so we may as well make them as effectively as possible. Or so the argument goes. The only problem is that we simply do not have a rigorous and systematic way of quantifying these things, because we don’t have a good way of evaluating the quality of experience.
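To make the QALY arithmetic concrete, here is a minimal sketch. The function and the quality weights are illustrative assumptions for this example, not published valuations:

```python
# Minimal QALY sketch. A year lived in full health counts as 1.0 QALY;
# a year lived at a reduced quality weight counts proportionally less.
# The 0.6 weight for blindness below is an illustrative assumption,
# not a published figure.

def qalys(years: float, quality_weight: float) -> float:
    """Quality-adjusted life years for a span lived at a constant quality weight."""
    return years * quality_weight

sighted_me = qalys(years=40, quality_weight=1.0)  # 40.0 QALYs
blind_me = qalys(years=40, quality_weight=0.6)    # 24.0 QALYs

# The troubling part: the framework values otherwise-identical lives differently.
print(sighted_me - blind_me)  # 16.0
```

The audacity (and the discomfort) lives entirely in choosing that weight—the multiplication itself is trivial.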
We do have some prototypical methods. Thanks to language, we can ask people to tell us how they are feeling and correlate their reports with their situations, behavior, and neurophysiological responses. This method does get us somewhere. On average, people who are suffering will report feeling less happy than those who are not. And those who are enjoying their lives will show up as less anxious than those who are not. Psychology is replete with studies that show these kinds of correlations. But, as has been droned into our heads for years, correlation is not causation. Nor is the report of how one feels the same as the feeling itself.
These things are frustratingly sensitive to priming—if I gifted you a well-used clown nose and then asked you about how happy you were with your life, you might be more likely to report being unhappy than if I had gifted you nothing at all. Or, if you come from a place where they idolize clowns, you might just come off as euphoric in your happiness reports. Within the field, this is treated as both a feature and a bug. On the one hand, we are collecting many reports and gathering trends, which lets us ignore all the noisiness of individual variation. For the sake of an effective intervention, this could be good enough to give us an actionable set of solutions. If we found that gifting potted plants reliably produces an uptick in measured happiness across a diverse and representative sample, we can go out and buy and distribute these, thus increasing happiness. On the other hand, this method of data collection is riddled with bugs:
- Non-representative (e.g., too young, too smart, too culturally homogenous, etc.)
- Contextually unrealistic (e.g., gathered in a laboratory)
- Sparse/misleading (i.e., limited to a snapshot of processes that unfold over time—we’re measuring a single point, as opposed to the slope (derivative) or the area (integral) of the phenomenon)
- Semantically unaligned (respondents interpret questions differently—where one thinks of happiness one way, another thinks about it another way)
- Meta-awareness sensitive (e.g., respondents knew they were being asked about happiness and tailored their answers according to what they thought our expectations were)
- Culturally sensitive (e.g., in a religious culture, happiness may be attached to how one’s children are doing as opposed to one’s own felt happiness)
- Does not scale (e.g., if everyone were to get a potted plant, the gift would no longer be considered special or freely given)
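The “sparse/misleading” bullet can be sketched in a few lines. The weekly scores below are made up for illustration; the point is only that a single snapshot cannot distinguish a rising trajectory from a falling one:

```python
# Two hypothetical weekly well-being series: one improving, one declining.
improving = [2, 3, 4, 5, 6]
declining = [6, 5, 4, 3, 2]

def snapshot(series, week):
    """A single-point measurement: all that a one-off questionnaire captures."""
    return series[week]

def slope(series):
    """Average week-over-week change (the 'derivative' of the trajectory)."""
    return (series[-1] - series[0]) / (len(series) - 1)

def total(series):
    """Accumulated score over the whole period (the 'integral')."""
    return sum(series)

# A snapshot at week 2 sees the same value in both cases...
assert snapshot(improving, 2) == snapshot(declining, 2) == 4
# ...while the slopes tell opposite stories.
print(slope(improving), slope(declining))  # 1.0 -1.0
```

Note that the totals also happen to coincide here (both sum to 20), which is exactly why any single summary number, snapshot or otherwise, can mislead about a life in motion.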
Even with these issues, psychology (as well as anthropology and sociology) boldly continues designing and running these kinds of correlation studies in hopes that the results can lead to practical upshots. But recall that what we care about are not the reports of happiness or the reports of suffering but the experience of happiness or suffering. It is only practicality that has led us to accept fundamentally weak methods for capturing them. Experience collection via questionnaires works (a bit) for some humans if we are careful and are willing to put in the substantial resources to run these studies, but it is a poor fit for much of what EA cares about:
- Sentience: Non-human sentiences do not have language and are therefore unable to respond to our questions about their happiness or suffering. Any assertion of affect is based on behavioral and evolutionary similarity.
- Longtermism: Future people, especially the mere potentially existing ones, cannot be tested. We must instead assume their interests based on our own (insofar as we can even hope to understand those).
- AI Alignment: Artificial agents will, at least in the near-term, be behaviorally unique entities—asking them to report anything qualitative will be undermined by their unique position as being both highly competent and highly unique. In short, they could report having qualitative experiences but we would have no standing to validate or reject these claims.
Even without any ability to measure experience, we pursue these and other causes. On what grounds do we do this? On the grounds that we need to do something—doing nothing is itself a decision. So, we ignore our gaps in understanding and work with what we can. Sentience researchers go ahead and assume that things that are like humans have experience and can suffer and be happy. Longtermists recognize that they don’t know exactly what future people want, but that it probably shares some DNA with what we want today (e.g., existence, happiness, freedom, good vibes)—and in order to hedge against misalignment, they refrain from prescriptive ideologies (e.g., X form of government is optimal; Y language is ideal). AI alignment researchers are actively trying to figure out our preferences from the ground up, as it has become immediately and practically clear that humans are not good at specifying our preferences, though we are okay at recognizing when things are misaligned. All causes have experience as a grounding motivation and a directional influence—but to my knowledge, there’s very little work on trying to understand the nature of experience and the kinds of experiences that we are targeting as we go about our business of saving the world. This is the gap that meaning research can try to fill.
Practical insights from meaning research
What are some ways that thinking about and understanding meaning as foundational and fundamental can lead EAs to do good a little bit better? Most directly, understanding our own place as creatures of meaning. Recognizing that each of us has our own narrative and immediate meanings is directly valuable.
Career choice: Many EAs are young and full of ambition and hope, and many lean on services like 80,000 Hours to direct their desire to do good better. They learn that EA needs ops specialists and economists, so they go to school and build up career capital to fill this role. Sometimes this works out beautifully—they find themselves surrounded by people they care about and work that motivates them, feeling that what they are doing is improving the lives of others. Other times, this doesn’t work out so well. Since I’m the “meaning guy”, young EAs have found me and have asked me for advice on finding meaning in their work—the advice I give (usually much less elaborately) is something like this: “Make sure that you actually find the day-to-day work you do meaningful—having an overall purpose and goal is great, but that passion can’t sustain itself if it’s not founded on meaningful moments in-between”.
I didn’t learn this lesson until much later than I might have wished. My first career as a software engineer was stillborn because I really didn’t like the way that sitting and coding made me feel. Even though I was able to find work at decent companies doing meaningful kinds of work, this misalignment was slowly eating away at my save-the-world ambitions. I got out, but it was a near thing.
In short, understanding your own meaning is one of the most pressing problems you can solve, because doing this will free you to expand your meaning to encompass others.
In [your cause] area: You are in a better position to understand your own cause area than I am. However, I ask that you consider at a high level whether and in what ways you are pursuing meaning within your space. For example, I did some work in AI alignment and quickly ran into this: I didn’t know what it was that we even wanted AI to do for us, aside from helping to get rid of the bad things. There was no positive direction that I could see—what does a truly good AI do for us? What is our purpose as a species? Survival? Feeling good? Being free? Wellbeing? These questions eventually led me to go do philosophy and then cognitive science (which is arguably what I do today). These questions still matter and I’m still not in a position to answer them (not that they are even answerable in that form), but thinking about understanding and maximizing meaning has at least allowed me to tackle them indirectly: the meaning-making process requires existence, feeling, and freedom, and these goods in turn emerge from it.
Your area is likely different. If you work with sentience and suffering, you might think about how a different sentience has a different or similar meaning space. Maybe this can lead you to target sentiences that seem more capable of experiencing meaning versus sentiences that can only suffer/feel pleasure. This is up to you to consider as an expert in the field. If your area is existential risk, you might try to incorporate more measures and models that consider meaning (think something like WELLBYs).
From a movement standpoint, Effective Altruism could also benefit from promoting the meaning it brings, both within and without the movement. Public perception has at times been cool towards EA, as it is presented as a hyper-rational and judgmental group of smart and well-intentioned nerds—this is not inaccurate but it fails to include the fact that EA’s nerds are also really really good people who care and feel very passionately about doing good. Meaning drives EA.
Unfortunately and excitingly, we still have a lot of work to do in understanding meaning—my intention here isn’t to offer it as an answer to anything but to make sure it’s being raised as a question across effective altruism. The bottom line is that you are a creature that lives according to meaning, of purpose and care. You want other creatures to have the things you value and you want to protect them from the things that you believe will cause them to suffer. In short, you mean well.
End note: If you read this, thank you! Even better, if you read this and liked it and happen to think similarly and want to do something that I think is important, I’m looking for other people to help me build out Meaningful Minds, an organization I started to help collect and coordinate research around meaning. For much more detail on meaning, I’ve published a more extensive work on Meaningful Minds, as well as on my personal writing site.
Feel free to reach out to me through the website or here or elsewhere—for better or for worse, I’m easy to find!
This is of course a nod to Daniel Kahneman’s Thinking, Fast and Slow title ↩︎
Meaning is rooted in care. When we care deeply, what we care about is always grounded by potential changes to experience. We do not care what atoms on the opposite end of the universe are doing, insofar as they are not and will not interact with anything that we care about. We do not care about the distant past, as it has no way to affect us here and now. And we find it difficult to care about the distant and potential future, as most of those cares will not amount to anything of value. Caring is critical for survival, but it is constrained by our limited resources. ↩︎
Meaning is also similar to emotion—Damasio’s Somatic Marker Hypothesis asserts that emotions are shortcuts that let us react quickly and assertively in complex situations that don’t lend themselves to long bouts of intense pondering. Meaning might well be similar. Things seem meaningful because they help us make quick and assertive decisions. ↩︎
Depression infamously strips the world of its color. Where once our pretty rock meant something to us, we begin seeing it as “just” a rock again. So too does the rest of the world and its possibilities become flat and dull. ↩︎
Recall that “objects” are not limited to things like rocks and trees, but people and ideas—the world as we experience it is a world full of objects and their relations ↩︎
Somehow, it has managed to be controversial to people who are very talented controversy-hunters ↩︎
For the David Benatars out there, I get where you’re coming from but you’re confused ↩︎
I’d add that being a bat might let you know how it feels to be the harbinger of a global pandemic, but we humans already know how that feels, many times over! ↩︎
My own position is that these things are not all or nothing—it’s probably something that it’s like to be a bat but that something is almost certainly less qualitatively deep or rich than what it is like to be a human. Take this as you will when it comes to making priority judgements. ↩︎
Today, all four of GiveWell’s top charities are exclusively about preventing death ↩︎
This is independent of self-report. Were my blind twin to report being super happy and my non-blind self equally happy, the tiebreaker is a functional difference we might not care about. But if my blind twin were to report being much happier than my non-blind self, we are now left with a genuine problem. ↩︎
(where we could consider adding an extra 50 cent topping at the vegan ice cream shop tantamount to ending 1/10000 of a life were the money donated effectively, according to the current Against Malaria Foundation conversion rate) ↩︎
This correlative line of inquiry has led to the field’s “replication crisis”, where running the same study in a different context has failed to produce similar results ↩︎
(at least in short timescales—our judgement breaks down as time and space are expanded) ↩︎