
This is my contribution to the animal welfare vs. global health debate week. Thank you to Felix Werdermann, Helene Kortschak, and Vasco Grilo for their feedback.

TL;DR

I think there's significant uncertainty about whether most sentient lives, human or non-human, are truly "net positive" - meaning their positive experiences outweigh their negative ones. This uncertainty has important implications when comparing animal welfare (AW) and global health and development (GHD) interventions. While GHD often focuses on extending human lives, AW interventions have a stronger tendency to reduce the number of sentient lives lived. Factoring in uncertainty about whether most lives are worth living might justify a stronger focus on AW interventions over GHD, especially in areas with extreme suffering.

Context and epistemic status

I currently strongly prioritize AW over GHD, both in my giving and in my active work for Animal Advocacy Africa. My perspective is reasonably summarized by existing work from Rethink Priorities (e.g., their Moral Weight Project and Cross-Cause Cost-Effectiveness Model) and arguments made in this or this post. Since these posts are excellent and I don’t think I can add much substantial value to them, I decided to contribute a different perspective and explore a more uncertain avenue. I hope others find this fruitful.

For clarity, I want to note that my preference for AW over GHD is much more informed by empirical facts and studies than by the more theoretical and tentative arguments I make in this post. Here, I simply want to explore an argument that has been on my mind lately, but which I am highly uncertain about.

I have not researched these topics in depth and am only outlining tentative thoughts below. My goal is to raise a consideration that I think is relevant to how we should allocate resources between AW and GHD. If others find this relevant and helpful, I am happy to explore this topic further. I am equally open to changing my mind and thinking in different directions.

It is uncertain whether most sentient lives are worth living

At the heart of my argument is a question I’ve struggled with for a while: Are most sentient lives - human and non-human - worth living? In other words, do the positive experiences outweigh the negative ones for most sentient beings (i.e., are they “net positive” from a utilitarian perspective)?

I don’t think we have a clear answer, and I think it’s important to properly acknowledge this. For non-human animals, reasoning about whether their lives are net positive or negative faces extreme uncertainties because we know very little about their subjective experiences. For humans, we seem to know more. It is my impression that most people want to live longer and generally consider their lives worth living. But even here, I have my doubts. I will unpack these doubts in the following two subsections, before explaining their implications for this debate.

How rationally can we think about the net value of our own lives?

First, I think it’s extremely difficult for anyone to assess their life satisfaction in terms of net positives or negatives. When I’ve tried to do this myself, I found that I naturally fall into comparing my life to past experiences, future hopes, or the lives of others around me, rather than objectively evaluating my overall well-being. In other words, I think we’re often not asking, “Is my life overall worth living?” but rather, “How does my life compare to what it could be?” I think this is the way most people think about life satisfaction and it’s hard to do otherwise.[1]

Some people in the EA community have tried to take an objective look at global life satisfaction. In 2022, Vasco Grilo estimated that around 6% of people globally lead net negative lives, based on a neutral point informed by two surveys and assuming comparability of scores across countries. This matches with Will MacAskill’s tentative suggestion that around 10% of the global population lead net negative lives.[2] I am unsure how much weight to place on these estimates (also considering the factors I outline in the next subsection), but I think these are very important efforts, and I would love to see more work in this area.

Evolutionary Perspectives on Happiness and Dissatisfaction

Second, I have some doubts about the reliability of people’s (including my own!) self-reports about life satisfaction. I find the argument compelling that we are not wired to be consistently happy or satisfied. Evolution has shaped us to continually strive for more, because the ultimate goal from an evolutionary standpoint is survival and reproduction, not happiness. Even when we achieve our goals, we often quickly adapt to our new circumstances and find ourselves pursuing the next goal.

Some perspectives that have pushed me in this direction:

  • Buddhist philosophy and meditation practices, such as those discussed by Robert Wright in Why Buddhism is True, emphasize that we are often trapped in cycles of desire and dissatisfaction.
  • Daniel Kahneman’s distinction between the experiencing self and the remembering self highlights how we think about our lives in fragmented ways, often placing disproportionate weight on remembered experiences rather than lived ones. From my experience, there is a tendency to think nostalgically about the past, omitting certain hardships or justifying them in terms of the achieved outcomes. Such a positive view of the remembering self often does not match with what the experiencing self would report.
  • Arthur Schopenhauer’s philosophy also resonates with me. His concept of the “Will to live” describes a blind, irrational drive that keeps us striving, but never fully satisfied.

These views suggest that, when evaluating our lives as a whole, we form an inaccurate picture, overestimating the positive and underestimating the negative moment-to-moment experiences. I am aware that this is a tenuous position to take, denying that people (can) accurately evaluate the goodness of their own lives. As I wrote above, I am very uncertain about this view, but I think there are certain pieces of evidence that point in this direction.

I want to be clear: I’m not arguing that most lives are net negative or not worth living. But I think it’s reasonable to be deeply uncertain about whether most sentient lives, including human lives, are truly net positive, an assumption that seems to be implicit in many interventions and cause areas driven by the EA community.

(Farmed) animal welfare interventions have a stronger tendency to reduce the number of sentient lives lived than global health interventions

Given this uncertainty about whether most lives are net positive, I think there’s an important consideration when comparing GHD and AW interventions. While GHD interventions mostly aim to extend or save human lives, AW interventions more often aim to reduce the number of sentient lives - for example, by reducing the number of animals bred and raised for factory farming.

I realize that this is a gross simplification, and that not all GHD and AW interventions fall neatly into these categories. For instance, some GHD as well as AW interventions are simply aimed at improving lives but do not have clear effects on the number or duration of lives lived, and are therefore not as relevant to the question I am discussing here. StrongMinds’ community-based therapy intervention or the Open Wing Alliance’s work to move laying hens out of battery cages seem to fall into this category. But at a high level, it seems that AW interventions are more focused on reducing the number of potentially net-negative lives (like those of factory-farmed animals), while GHD interventions are focused on increasing the number of human life-years. Think about GiveWell’s current estimate that the Against Malaria Foundation averts a human death for USD 3,000 to 8,000 versus Animal Ask’s estimate that digital and mass media meat reduction campaigns spare 3.7 animals per USD.

This difference likely stems from the belief that human lives are typically net positive, while farmed animal lives are net negative. However, it may also reflect taboos and cultural sensitivities. There is significant societal discomfort around the idea that some human lives may not be worth living or may be net negative. Topics such as antinatalism and euthanasia remain highly controversial (ask Peter Singer), and interventions aimed at preventing lives in the human context are often considered morally fraught or politically untenable. By contrast, the intense suffering of factory-farmed animals and the stark reality of their existence seem to make it more palatable to focus on preventing their lives rather than improving their living conditions. The difference in prioritization between human and animal lives may therefore not only be a matter of cost-effectiveness but also a reflection of cultural and ethical considerations that make it easier to justify preventing negative lives in the animal context than in the human one. How to navigate such issues is a critical consideration, but it is a more practical one that I am not focusing on in this post.

Uncertainty around whether most sentient lives are worth living may warrant a stronger focus on animal welfare interventions

If we accept that it is less clear than commonly thought that most sentient lives are worth living, I think this could lead us to put a stronger focus on AW interventions. Generally, we should of course rely on estimates of where we can improve well-being most effectively - whether a life is net positive or net negative, what matters is the absolute change in well-being. It is very plausible to me that certain interventions that improve human or non-human lives are more effective than interventions that simply reduce the number of really bad lives. But clear numeric estimates and comparisons can be extremely hard to make. On a high strategic level with so much uncertainty, I think that considerations like the one I am outlining here could carry some weight. They could even be included in existing quantitative estimates, and I would appreciate readers bringing such efforts to my attention if they exist.

Factoring in my uncertainty around the directionality of the net value of sentient lives, interventions that increase the number or duration of lives lived could look worse, all else being equal. People may underestimate the uncertainty surrounding whether the lives saved through GHD interventions in low-income countries are net positive or net negative. This uncertainty, if properly accounted for, could decrease the perceived cost-effectiveness of GHD interventions. Considering the above-mentioned tentative estimates that up to 10% of the global population may lead net negative lives (which may be a conservative estimate, given the uncertainties I have outlined), it’s plausible that some GHD interventions may prolong such lives and therefore have net negative welfare effects.
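To make this adjustment concrete, here is a minimal back-of-envelope sketch of how a probability that a saved life is net negative could discount the expected welfare of a life-extending intervention. All numbers are illustrative placeholders I have chosen for the example, not estimates from this post or from GiveWell:

```python
# Back-of-envelope sketch: discounting the expected welfare of extending one
# life by the probability that the life in question is net negative.
# The welfare magnitudes (+1 / -1 units) are hypothetical placeholders.

def expected_welfare_per_life_saved(p_net_negative,
                                    avg_positive=1.0,
                                    avg_negative=-1.0):
    """Expected welfare (arbitrary units) of extending one life, given a
    probability p_net_negative that the life is net negative overall."""
    return (1 - p_net_negative) * avg_positive + p_net_negative * avg_negative

# If every saved life were confidently net positive, each adds +1 unit.
baseline = expected_welfare_per_life_saved(0.0)   # -> 1.0

# With a 10% chance the life is net negative (MacAskill's tentative figure),
# the expected value per life saved drops by 20%, all else being equal.
adjusted = expected_welfare_per_life_saved(0.10)  # -> 0.8

print(baseline, adjusted)
```

The point of the sketch is only directional: any nonzero probability of net-negative lives shrinks the expected value of interventions that add life-years, while leaving interventions that improve lives (without population effects) untouched.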

Considering the reverse case, I think that interventions that reduce the number of farmed animals run a very small risk of preventing potential net positive lives. In my view, the suffering of factory-farmed animals likely represents one of the most extreme examples of negative experiences and I expect such animals to largely lead net-negative lives. A counterpoint is that reducing the number of animals may be less favorable if future welfare reforms continue to improve the lives of farmed animals. Acknowledging uncertainty around when (if ever) populations of animals will have positive lives may mean we should place more weight on improving animal conditions rather than reducing their numbers.

Of course, other interventions could also look more appealing when factoring in the considerations outlined above, such as efforts to reduce s-risks (large-scale suffering in the future), but I’m not aware of any case where the impact seems as clear-cut as with factory-farmed animals. I would welcome readers pointing me toward such cases and challenging my view that AW interventions benefit the most from incorporating this uncertainty.

Conclusion

To summarize, I think that there is huge uncertainty around whether most sentient lives are worth living and suggest that we may not be adequately hedging our bets against this uncertainty in our prioritization decisions. Acknowledging that we don’t know whether most sentient lives are net positive could, I think, lead to a greater focus on AW interventions.

While I’ve mostly focused on farmed animals in this post, I think similar reasoning applies to certain wild animal suffering interventions as well (e.g., the use of contraceptives in wild animal populations).

I hope this post encourages others to reflect on these uncertainties and to contribute their own thoughts. I would love to be challenged on this perspective and think more deeply about this.

  1. ^

    For instance, consider the measure used by the World Happiness Report, one of the leading publications on subjective well-being. They ask respondents:

    “Please imagine a ladder, with steps numbered from 0 at the bottom to 10 at the top. The top of the ladder represents the best possible life for you, and the bottom of the ladder represents the worst possible life for you. On which step of the ladder would you say you personally feel you stand at this time?”

    If someone rates their life as a 7/10 on that scale, does that imply their life is net positive? I think this is far from clear and cannot be decided based on this measurement. The question encourages relative rather than absolute judgments about life’s value and I think that this captures an intuitive way of thinking for most people.

  2. ^
Comments



Hi Moritz :) Thanks for articulating this take, which I think is novel and quite daring in arguing that we should be deeply uncertain whether most sentient lives are worth living. I agree with quite a lot of object-level stuff you say. I also have scattered half-thoughts that unfortunately haven't cohered, hopefully you find at least some of them useful or interesting:

1) re: your first argument for doubting whether most sentient lives are worth living, I'm reminded of the neutral point debate, and in particular footnote 31: "Some people I’ve spoken to have suggested it’s bad to save lives solely on the grounds those in the developing world lead lives below the neutral point." I intuitively emphatically disagree with the strong version of this, but couldn't justify it on purely SWB grounds unless the neutral point could justifiably be set lower than (say) that of Afghanistan, the lowest country in the WHRs. And there is some justification. That said...

2) I used to be ~all in on happiness & SWB for altruistic decision guidance, persuaded by arguments such as HLI's, but find myself putting more weight on valuism & capabilitarianism recently, after much introspection on my own pursuits ("100% generalizable to others!") coupled with arguments such as Jason Crawford's contra HLI. I think this reduces the force of your second argument even though I agree with your perspectives supporting that, and (because value-fulfillment is necessarily objective, as Crawford explains) lets us be a bit more rational in evaluating value of life vs subjective assessments, which partly reduces the force of your first one. (As an aside, animals arguably have central capabilities too.) That said, I don't know how to calculate cost-effectiveness from a valuism / capabilitarianism perspective... 

Thank you Mo, you are a well of great resources, as always!

1) The neutral point debate is fascinating and something I should have been aware of. I will dig deeper into this! The IDInsight study is also very interesting and relevant. However, I think it doesn't fully address my skepticism about how rationally we as humans can think about the net value of our own and other lives. I realise that this kind of skepticism is hard to address via studies, but I think there are better ways than surveying people due to the reasons I mentioned.

2) I agree that we should put weight on different moral theories and that those will favour saving lives over not doing so to a very large extent (except for maybe antinatalism and a few others). This is a reason why I am very uncertain about the view I outlined.

Overall, these kinds of considerations lead me to think that it is probably better to save lives than not, and this is why I am NOT saying that the number of sentient lives should be reduced across the board. But I have significant uncertainty around this, which somewhat moves the needle towards (1) welfare-improving interventions that do not have strong population effects (e.g., cage-free egg campaigns or mental health interventions) and (2) interventions that reduce the number of some of the worst lives lived (e.g., diet change campaigns which lead to fewer animals being farmed, mostly in factory farms).

A few thoughts related to wild animals

  • General: Thanks so much for mentioning wild animals! They make up such a large majority of animals that I think it's a really good practice to allude to that part of the picture even when you don't want to cover it in depth.
  • Dodging the question: There are many interventions that would only be beneficial if you knew whether animals in the affected population(s) had net-positive or net-negative lives on average. But I've had so many conversations with people who seem to think we can't do anything until we answer that question. In fact, there are lots of things we can do. Anything that reduces suffering without changing population levels is worthwhile, because between two lives not worth living, I'd still prefer the one that has less pain.
  • Evolution: Here's a great talk by my colleague that goes in depth on fitness and happiness in the context of wild animal welfare. It's totally consistent with the arguments you make on the subject. https://www.wildanimalinitiative.org/blog/what-is-fitness
  • Contraception: You're right that wildlife contraception could be a great tool for humanely reducing populations, and that that would be in animals' interest if those animals were living net-negative lives. I want to add that one of the reasons contraception has so much potential is you can also use it to reduce suffering even if animals are living net-positive lives. Here are two strategies you could use to improve net welfare without knowing the net value of animals' experiences.
    • Strategy 1: Same population, less suffering. You could use contraception as an alternative to painful lethal methods of population control. For example, cities, farms, factories, etc. often manage rat populations with anticoagulant poisons that cause the animals to bleed out over hours or days. You could eliminate the need for that by using contraception (it's not quite clear whether the tech is there yet, but it's close -- this is one of the things we [@Wild_Animal_Initiative] would like more funding to research). People would probably want to bring the population down as much as possible, but they might decide to keep it at the same level as it was when they were using poison, and you would still expect a net benefit to welfare.
    • Strategy 2: Optimal population density (see this white paper with neat charts). Even if you think more good lives are a good thing, all else equal, in reality not all else is equal when a population grows. Many wild animal populations are constrained by resource (e.g. food) availability: the population grows and competition intensifies until deaths from competition (e.g. starvation) increase the total mortality rate to the point where it equals the birth rate. So by the time the population stabilizes, resource competition is intense, and quality of life is likely affected. Even if these animals were having good lives, it might be better for there to be slightly fewer of them if that meant they'd be having much better lives. And if it turns out the animals were having net-negative lives, then reducing their population would be good for the usual reasons.

Thanks for your comment Cameron and the work you're doing! Wild animal suffering is an area that I think is highly important but I struggle to think very clearly about. That's why I only mentioned it briefly. Thank you for elaborating further on this and linking some resources.

I strongly agree with your point about reducing suffering without affecting population size (significantly). As I wrote in reply to Mo's comment, I think that welfare-improving interventions that do not have strong population effects seem more promising, if we are uncertain about the net value of affected lives (as I am).

Great and thought-provoking post, thank you very much Moritz!
I especially liked the Evolutionary Perspectives on Happiness and Dissatisfaction part.

You might be interested in this talk on what exactly fitness means in an evolutionary context and why we shouldn't expect it to reliably select for traits that give wild animals happy lives: https://www.wildanimalinitiative.org/blog/what-is-fitness

Thank you very much Cameron!
Funnily enough, I watched this video and took some notes last week.
Very interesting presentation with some eye-opening facts and thoughts!

Thank you Johannes!

I am adding here in anonymized form some feedback I received privately and my responses, since I think this may be helpful to others.

First, here is the feedback:

I was recently reminded of how important these considerations around moral uncertainty are when we had a discussion about this with X. Although all of us were interested in global health/development, we ended up having quite different moral intuitions on saving lives vs. the value of improving them. These differences led us to choose to work on very different projects with very different people than we would have chosen without these conversations.

I wanted to quickly share some reflections that I had on the article based on the thoughts that I have collected on this issue over the last couple of weeks (apologies for this being not super well laid out):

  • Before vs. after birth: in the article you seem to be making a distinction between GHD extending lives and AW reducing the number of animals suffering. In theory there are two ways to reduce the number of animals suffering, (a) by killing more of them faster and (b) by preventing them from coming into existence in the first place. Your argument seems mainly based on scenario (a). (a) also feels morally very different from (b) although they end up with the same number of sentient beings alive and suffering. I am wondering if this factor also makes up part of your preference for one over the other in addition to the arguments you are making? Translating this into the GHD space, family planning vs. letting very sick people die faster feels morally very different.
  • Non/somewhat-utilitarian perspectives: I would argue that there is quite a bit of moral uncertainty on measuring the value of life in utilitarian terms and what the unit of measurement should be. It also seems like most value systems/religions place quite a large value on saving a life and this seems to push against the general norm which might be an indicator that we are crossing some moral guardrails. I personally place quite a bit of value in my moral parliament on preference utilitarianism. This in turn leads me to believe that in theory people who perceive their lives as net negative could take their own life if they choose to do so (although there is quite a bit of complexity around that of course).
  • Letting the same argument lead us to GHD as the answer: One could argue that the unit of suffering could be made up by the following factors: [Sentience of experience x level of suffering of that experience]. The level of suffering for most animals seems a lot higher than the level of suffering for most people. Having said that, regarding sentience, the certainty is higher in humans than in most animals. Potentially, these factors could somewhat cancel each other out? If you then want to apply the same framework (being uncertain if lives are net positive) to humans, that could lead to cause areas such as family planning and mental health.

Implications/ questions/ takeaways:

  • FAW implications: It seems like we are roughly working on the following categories in animal advocacy: (1) bringing less of them into existence and (2) making their lives better while they are in existence. If we have high certainty that their lives are net negative, should we look more into approaches that reduce their lifespan/killing large proportions of them faster?
  • GHD implications: If one would apply the same arguments to GHD, maybe focusing on family planning or mental health could be a good place to land? One could also argue that family planning would be the ultimate FAW intervention if it counterfactually reduces the number of people born.

And here is my response:

Thanks for sharing those reflections. Some thoughts/responses:

  • In the context of farmed animals, I think that "killing more of them faster" could have quite a few negative flow-through effects. For instance, broiler chickens are bred exactly in a way that makes them grow and gain mass very quickly, which leads to a lot of welfare problems because their bodies cannot handle it. Also, I fear that just making this process more efficient will simply lead to higher production capacity and even more animals being farmed (rebound effect - if something becomes more efficient, you don't necessarily reduce the effort spent on it (effort in this case being animal life years) but rather increase output). In theory, yes, killing animals faster could be an option. But I think it is better to pursue options that either lead to fewer animals being farmed or lead to better life quality. There might be some kind of intervention though that avoids the negative externalities I mentioned, and I'd be super interested to hear about such ideas.
  • Yes, 100% agreed on the non-utilitarian perspectives. I also wrote a post about this a while ago, that we should put some weight on "common sense morality". I think "saving a life is good" is about as common sense as it gets.
  • On the point about "people who perceive their lives as net negative could take their own life", I would refer to the sections where I outline that I am unsure whether we can accurately evaluate the value of our own lives. I think we have very basic instincts that lead us to strongly avoid the suicide option, whether that is rational or not. It's also very important to think about the probably extremely negative effects of someone taking their own life on their environment (family, friends, etc.). So I think there are very good reasons that there are strong social norms against this and I wouldn't want to change that.
  • Yes, I agree that we should also apply uncertainty to the sentience of animals. But most AW interventions simply have such strong welfare effects (setting aside the uncertainty around sentience) that you would have to be extremely uncertain about animal sentience for this to change the picture. I don't think that level of uncertainty is warranted. I think Rethink Priorities' work on this quite clearly favours AW interventions, even if you factor in significant uncertainty about animal sentience (see their post for the debate week here). As I wrote under "Context and epistemic status", this is closer to the actual reason why I prioritise AW over GHD.
  • I strongly agree with you about family planning. I think that these interventions often have positive impacts on the lives already lived, they address the meat eater problem, and they also hedge our bets against the uncertainty that we may be unintentionally increasing the amount of net negative lives on the planet. I love the work Family Empowerment Media is doing, for example.

Hope this is helpful!
