
Background: I'm a CS student in university, in tech circles, introduced to EA a year ago. I've read most of the core books (Doing Good Better, What We Owe the Future, The Precipice) and done the intro fellowship at my uni. I'm very interested in philosophy and morality. I find myself EA-aligned but don't really take action (right now I'm looking into alignment research).

This post is more a collection of my thoughts (so sorry if it doesn't fit the norms of posts here or isn't as cleanly put together). I also may misinterpret EA beliefs, so please correct me if I do.

I also understand how difficult it is to measure utility, so this isn't an attack on EA; I'm just trying to make more sense of how to live life meaningfully.

My beliefs

I do believe we need some sort of utility function to maximize. First, do I believe we should do good in our lives? Yes. And if we buy into that idea, then we should maximize it. This seems logical to me.

Where this becomes difficult is doing good while retaining the elements of being human. I've thought about it like this: if you knew at every moment the action that would return the highest expected value for doing good, you would be a robot. But we're not robots. Our lives are filled with non-maximized moments that don't promote human welfare. We enjoy nature, sing, do random projects, read epic lore like The Lord of the Rings, Star Wars, etc.

So this is the constraint we have to deal with. How do we maximize the good we do in our lives while also keeping what makes life worth living?

The second issue is the difficulty of even creating a utility function that makes sense. Now, I do applaud EA for making its best attempts at calculating utility through expected value (EV) and research. It is a difficult task because some things are not quantifiable, and over the long run, EV calculations become extremely difficult.

This is the Incommensurability Problem. How do you compare the "utility" of a surgeon who performs life-saving surgeries to a teacher who inspires students to be their best selves? How do you bring ideas like love, kindness, and joy into the equation? If someone forgives someone, does that increase utility?

It seems to me that EA (when applied) often ignores these qualitative goods in favor of more quantifiable ones like lives saved, QALYs, etc. That is fair if you don't want to jump to conclusions and want a concrete, proven method of increasing welfare. But at the same time, QALYs are directly tied to things like the quality of our relationships, and qualitative experiences appear to be the bulk of what a good life even consists of. We remember our lovers, our friendships, and our experiences as the highlights of our lives.

And to this point, I believe EA has over-optimized for quantitative measures and not enough for qualitative ones.

Being kind goes a long way. Fostering deep relationships improves our lives. Helping someone during a time of need could transform their life. These are things that you can't directly quantify but I'm sure we are all aware of. 

It's also a harder path, I'd argue. Once you know which nonprofit is optimal to donate to, you can easily donate. But becoming a person who is kind, who fosters deep relationships, who helps others: that's harder. And that's perhaps a missing element of EA.

So EA should not solely aim to improve quantitative measures, but qualitative ones too. Yet you will not find many EA posts about kindness or improving qualitative measures. (I do recognize that people like Benjamin Todd have advocated for WALYs (welfare-adjusted life years), but in practice EA does not seem to do much in this regard.)

Think about our society today. We are quantitatively living great lives, but when you ask people how they are doing, they aren't doing well. It's a mix of fewer friendships and relationships, economic troubles, social media and algorithmic harms, etc. And the thing is, by all the numbers, Americans are doing vastly better than most of the world (top 1% globally). But suicide has increased, depression has increased, along with other harms created by qualitative issues.

This is deeply concerning. So how do we begin to address these issues if we always default to solving quantitative harms? And if we default to the highest-magnitude issue, we won't address these problems until they themselves become even worse. So how do you balance this?

The second part of this difficulty is: how do you calculate utility over the long run?
 

The primary challenge in moral optimization is the Calculation Problem. We cannot accurately measure the long-term utility of a career because the causal chains are too complex and non-linear.

Consider the 'Invisibles': a teacher whose influence reshapes a student's trajectory, or an author whose words provide the psychological resilience for a future leader to persevere. These impacts are 'seeped into the soul'—they are unquantifiable yet foundational. Furthermore, history shows that the highest EV moments for human welfare often emerge from Serendipity rather than rigid planning. Figures who revolutionized the world often did so by following deep curiosity and being 'at the right place at the right time.'

Consequently, rigid career 'bucketing' is a flawed strategy. Because life is a dynamic 'river'—a non-static environment where opportunities shift constantly—long-term calculation is an exercise in false precision. Instead of treating impact as a target variable to be hit through a fixed 40-year plan, we should treat it as a heuristic. We should optimize for 'Option Value' by staying curious and agile, ensuring that we are positioned to act when the 'random' high-impact moment arrives.

Maybe it depends on the person. Maybe someone wants to just take the calculations from EA and find a career using that logic. But for me, I'd rather take life in a more free-flowing way while thinking about impact, passion, and leverage.

EA loves to talk about Norman Borlaug, known as the "Father of the Green Revolution" for developing the high-yield, disease-resistant wheat varieties that dramatically increased food production, a contribution credited with saving over a billion people from starvation and earning him the Nobel Peace Prize.

But the chance of him succeeding was low and incalculable. 

As a thought experiment: if Borlaug and other high-impact individuals had been taught EA ideas, they probably would not have continued their work, precisely because EA's logic is so persuasive. Borlaug might instead have worked to prevent nuclear war (having lived through World War II). He might not have realized that famine was such a large issue, for example, or might have judged it less important than preventing nuclear war.

But you can take a look at his life yourself. He might not have gone to college if not for his grandfather's advice (and thus never sparked the Green Revolution). He failed the college entrance exam. He only learned about rust, the plant disease he would go on to fight, in the last months of his undergrad from a guest lecturer. He almost studied forest pathology instead of plant pathology. He was surrounded by plants during childhood (an argument that people should follow their interests, not welfare maximization), etc.

It was very, very unlikely that Borlaug could have known his path would lead him to save so many lives. It is also something that you simply cannot plan for or will into existence. You cannot use EA ideas to somehow kindle this sort of impact in others or replicate it.

The highest asymmetric welfare creation comes from black swan events, not predictable EA actions. The same goes for other people EA loves to talk about, like Stanislav Petrov and Vasili Arkhipov (the Soviet officers credited with averting nuclear war). These were unpredictable moments.

The Argument: The Paradox of Asymmetric Impact


There is a fundamental incongruity in how we conceptualize career impact. EA often cites historical 'Black Swans'—Norman Borlaug, Vasili Arkhipov—as the gold standard of impact. Yet, these individuals did not follow a calculated EA framework; they achieved asymmetric results through a combination of deep domain immersion, curiosity, and situational timing.

By overfitting career advice toward 'visible' high-impact paths, we create a systemic risk: we may be diminishing the Exploration Phase of human progress. If every talented mind migrates to a handful of 'validated' industries, we reduce our collective 'Luck Surface Area.' We lose the specialists who are positioned in the 'invisible' niches where the next great asymmetric delta will actually occur.

While calculated EA actions improve the global mean of impact, the greatest advancements in human welfare are often found in the tails of the distribution. True impact optimization requires a synthesis: using impact evaluation as a compass, but relying on passion and long-term persistence in a specific industry to develop the 'readiness' required to capture a Black Swan event when it arises.

Thus, we potentially risk losing the greatest asymmetric human welfare deltas in the future. These are the people who often increase their surface area for impact, dive deep into one thing, and return much greater asymmetry by doing so. 

One last thing I'll note is that we also cannot simply rely on black swan events to create change. Imagine you decided to become a nuclear submarine commander so that, on the off chance WWIII almost breaks out, you have the ability to not fire the nukes. That seems like a pretty bad way to live life, in my opinion: huge global utility, but terrible for your own life.

This is why I believe the best method is to follow your interests + follow impact + dive deep: you will be satisfied because you are following your interests, you can still optimize for impact, and you leave room for impact through black swan events.

The Better Model: Multi-Objective Optimization

Now, I don't doubt that EAs have already thought of this. I just want to put it concretely, and I don't think these ideas are mentioned in any of the mainstream EA books.

In engineering, you rarely optimize for just one thing. You optimize along a Pareto frontier, balancing multiple objectives. Life is similar, and I bet most of you agree: we need to optimize for and care about more than just impact. Otherwise, we lose the very core of what it means to be human. We must balance doing good (utility), remaining ethical (Kant), and leaving room for personal flourishing (leisure).
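To make the Pareto idea concrete, here is a minimal sketch in Python. The "life options" and their scores are entirely made up for illustration; the point is only that an option falls off the frontier when some other option beats it on every objective at once.

```python
# Minimal sketch: finding the Pareto frontier over hypothetical "life options",
# each scored (roughly and subjectively) on impact, integrity, and flourishing.
# The options and scores below are purely illustrative, not real estimates.

from typing import Dict, List

options: Dict[str, Dict[str, float]] = {
    "earn-to-give grind":  {"impact": 9, "integrity": 7, "flourishing": 3},
    "alignment research":  {"impact": 8, "integrity": 8, "flourishing": 6},
    "teaching":            {"impact": 6, "integrity": 9, "flourishing": 8},
    "ad-tech job":         {"impact": 2, "integrity": 5, "flourishing": 7},
}

def dominates(a: Dict[str, float], b: Dict[str, float]) -> bool:
    """True if option a is at least as good as b on every objective
    and strictly better on at least one."""
    at_least_as_good = all(a[k] >= b[k] for k in a)
    strictly_better = any(a[k] > b[k] for k in a)
    return at_least_as_good and strictly_better

def pareto_frontier(opts: Dict[str, Dict[str, float]]) -> List[str]:
    """Keep only the options that no other option dominates."""
    return [
        name for name, scores in opts.items()
        if not any(dominates(other, scores)
                   for other_name, other in opts.items() if other_name != name)
    ]

print(pareto_frontier(options))
# The "ad-tech job" is dominated (another option beats it on every axis);
# the remaining options are genuine trade-offs you weigh with your own values.
```

The frontier doesn't tell you which trade-off to pick; it only rules out options that are worse on every axis. Choosing among the rest is exactly the values question this post is about.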

This framework makes sense to me because there is no single moral theory that allows you to live perfectly. If you are always utilitarian, that means sacrificing your mom for two strangers. If you are always Kantian, that means you would not sacrifice one person to save the whole world. We need a way to know when to maximize utility and when to listen to Kant.

And unfortunately, EA appears to be free-flowing here - it tells people that they should be utilitarian but does not say when Kant should matter more. Maybe moral frameworks should be personal, but I believe we can approach a universal-ish framework that each person can adapt.

You should prioritize Utility when making Large-Scale, Impersonal Decisions.

  • Example: Choosing between two software projects. One helps a luxury fashion brand optimize ads; the other helps distribute vaccines in the tropics. Use the "Utility" calculation here. The stakes are high, the impact is measurable, and no one's fundamental rights are being violated.
  • The Logic: When dealing with "the world," math is your best tool for avoiding bias.

This most often shows up in our careers, our donations, etc. And this is where EA ideas make a lot of sense.

You should prioritize Kantian ethics when making Immediate, Personal, and Interpersonal Decisions.

  • Example: You could make $10k by lying to a client or betraying a friend’s trust. Even if you argue that "I will donate that $10k to save lives," Kant says No.
  • The Logic: Humans are terrible at "Calculating Utility" in the heat of the moment. We usually use it to justify being selfish. Kant's "Categorical Imperative" (Act only according to that maxim whereby you can, at the same time, will that it should become a universal law) acts as a hard constraint or a "safety interlock" to prevent you from becoming a monster in the name of the "Greater Good."

This is like SBF taking his crypto money and spending it on effective charities. Good in the moment, but terrible in the long run, as it destroys trust and long-term impact. I think some sort of utility function that could perfectly map into the future would perhaps encapsulate Kantian ethics, but it's hard to say. In general, we should promote Kantian ideas along with utilitarianism. Otherwise, if EA strictly talks about utility, we might end up with a whole generation of EAs who act like SBF.

Lastly, just a caution about maximization: we never have perfect information. If you try to "maximize," you often end up "overfitting" your life to a specific moral theory that might be wrong.

How to view your life then?

View your life as a Multi-Objective Optimization where "Doing Good" is the primary goal, but "Personal Flourishing" and "Integrity" are the required constraints that keep the system running.

The Goal: To find the Pareto Optimal life—the life where you cannot do any more good for the world without fundamentally breaking yourself as a human being.
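As a rough sketch of this framing (with made-up options, scores, and thresholds), you can think of it as constrained maximization: among the options that don't break the flourishing and integrity floors, pick the one that does the most good.

```python
# Minimal sketch of the "primary goal + hard constraints" framing:
# maximize expected good, subject to floors on flourishing and integrity.
# Thresholds, options, and scores are invented purely for illustration.

FLOURISHING_FLOOR = 5   # below this, the system (you) breaks down
INTEGRITY_FLOOR = 7     # hard constraint: no "ends justify the means" moves

candidates = [
    {"name": "maximal grind",      "good": 10, "flourishing": 2, "integrity": 8},
    {"name": "sustainable impact", "good": 8,  "flourishing": 7, "integrity": 9},
    {"name": "comfortable drift",  "good": 3,  "flourishing": 9, "integrity": 9},
]

# Keep only the options that satisfy both constraints...
feasible = [c for c in candidates
            if c["flourishing"] >= FLOURISHING_FLOOR
            and c["integrity"] >= INTEGRITY_FLOOR]

# ...then maximize good within that feasible set.
best = max(feasible, key=lambda c: c["good"])
print(best["name"])  # -> "sustainable impact": the most good you can do
                     #    without violating the constraints that keep you running
```

The design choice here is that flourishing and integrity are constraints, not terms in the objective: you don't get to trade a little integrity for a lot of impact.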

Techno Optimism

As a thought experiment, imagine that you were born in the 1600s and were guided by effective altruist beliefs. Living in that time period, you would probably optimize for the wrong things. That is to say, you would probably focus on problems that seemed big but were, in the grand scheme of things, simply the wrong ones, because the prevailing worldview and models were wrong. And obviously, today we have much better information.

In the 1600s, the "Drowning Child" was often a child dying of plague or "bad air" (miasma).

  • The EA 1650 Action: You would optimize for building better-ventilated hospitals to keep "foul air" away from the sick. You might fund the distribution of pomanders (scented oranges) to the poor to ward off disease smells.
  • The Error: You would be optimizing for a false causal model. Because germ theory didn't exist, you would be spending massive resources on "odors" while the real killers (bacteria/viruses) went ignored.

In the 1600s, an EA might have seen a scientist like Isaac Newton or a tinkerer like Robert Boyle as a "distraction."

  • The EA 1650 Critique: "Why are you looking at the stars or vacuum pumps when there is a famine in Ireland? Your physics research has 0 utility today."
  • The Error: You would have optimized for Redistribution (giving grain) instead of Innovation (the scientific method). By discouraging "frivolous" curiosity, you would have delayed the Industrial and Green Revolutions, which eventually saved billions.

What was actually most important during this period was kick-starting the Industrial Revolution, which would bring humanity the greatest quantitative improvement in quality of life.

And I'd argue that today we probably have much better information about what the best thing to do is. But at the same time, perhaps 100 years from now we will look back and see that our actions in the 21st century were incorrect, and that what we needed to do was optimize for technological growth, as has been the narrative of human progress throughout history.

Innovation is what lets us do more with less input.  

So at the macro level, it appears that EA is too focused on the problems of today and not enough on the ideas of tomorrow: startups, technological improvement, and applied AI.

 
