When my dad was in his 20s, he took a trip to Central America and witnessed the poverty there. Determined to do something about it, he returned to America, became a nurse, and worked on a tribal reservation that was as poor as the places he'd visited in Central America. There, he managed the clinic, and that early experience in healthcare administration led to an opportunity, years later and off the reservation, to become the executive director of a nonprofit health clinic and help it recover from a financial crisis. He spent the rest of his career managing and extending the services his clinic provided. He was widely admired by his colleagues and left the clinic in good shape when he retired.

Let's look at his career through an EA lens. He identified a cause that seemed important to him, and perhaps neglected. He found a way of working on it that was tangible and tractable. This afforded him the chance to prove his ability to wield responsibility and demonstrate leadership, leading to higher positions of authority over time where his decisions could have a higher impact. He spent his career working in American healthcare, where the cost of saving a life is high, but he also worked with low-income populations.

I've asked him about the degree to which he thinks he made a unique impact in his career. For example, what would have happened had he not taken his first job helping salvage a distressed clinic? Would the board have been able to find an alternative candidate? Would that candidate have had his level of success? To what degree would the resources of that clinic - providers, equipment, money, and relationships with patients - have been wasted, and to what degree would they have been recycled by some other healthcare system?

He's not entirely sure about the answers to these questions, of course. But one way of looking at it is that the nonprofit healthcare system he joined was bottlenecked by a lack of competent administrators willing to run failing clinics. Even if he had no reason to think that he was an above-average administrator, he was likely to be at least average. If average is good enough to turn around a failing clinic, and without him it would have failed, then by taking the job, he can take credit for preventing whatever degree of waste would have occurred had the clinic dissolved.

As I navigate a career change inspired by this community, I think about my dad often and try to derive lessons from his life as a competent and successful altruist doing effective direct work.

One lesson I take from his life is that being average is good enough, as long as you show up to the right cause and are willing to take on the jobs that nobody else wants: risky, relatively poorly compensated, and demanding. If you do this, you don't need to fear that you'll do just as badly as the last person, because of the "outside view": the last person was unusually bad, and you're likely to do much better as long as you stick it out.

Another lesson is that producing a convincing plan, achieving concrete success, and convincing the unconvinced to make real change is different from an isolated exercise in intelligence. I am writing this blog post for my own sake, because I enjoy writing, and because this is the best audience I can think of for it. I don't expect it to move anybody much, change or improve any decisions, or be useful on my CV or in my future work.

The advantage of working in conventional roles, as opposed to blogging about altruism from the outside, is that you have more opportunities to show up every day and achieve tangible outcomes, signal your competence, and discover opportunities to do the undesirable but important work that makes an absolute difference in the world. You're expanding the number of average people trying to do altruistic work by one, and that matters a lot if you're also willing to do the undesirable but important jobs when they're offered to you. The work of the average but motivated person is cleaning up the messes left by the most incompetent 1% in whatever field you enter.

My new heuristic for individual EAs is this:

Find a cause that seems like a good fit and also to be altruistically important. Expect that it will take you a long time to be credible and experienced enough to know what needs doing and how to do it. Just get your foot in the door in that cause area. Show up and do a good-enough job. Look for the obvious jobs that nobody wants to do, and offer to take them over. Keep doing this until you get somewhere. If you can't get your foot in the door in your top cause area, just keep looking for something one or two or three jumps away. Be patient. Think in terms of decades about your long-term impact, but focus on doing a great job right now, even if you're not sure about the effectiveness of your work.

For myself, I decided a couple years ago to go back to school, and to pursue work as a biomedical researcher. Maybe someday I'll work on pandemic prevention, a cure for Alzheimer's or chronic severe pain, or on technology that slows the aging process.

Students willing to pursue STEM are still relatively rare compared to their expected value to society, so I feel that even though I don't know precisely how I'll make my impact, just by showing up to school and doing well, I'm moving closer to that goal. Even if I only end up displacing another student vying for a position in graduate school, hopefully I will be average and will displace a far-below-average candidate, who will in turn find something else useful to do.

And once I am in graduate school, that's when it will be especially important for me to focus on doing a consistently good job, being willing to take on tasks that are undesirable but necessary, and demonstrating my competence to more experienced people. Hopefully I'll keep being a plug in the leaky holes of whatever meaningful institution I join, on and on until I retire.

I think this is an excellent vision for a life outcome, and I think it should be the default vision for just about everyone who's interested in the EA movement.

Comments (5)



Nothing terribly original for me to add but this is a beautifully written article and your dad sounds like an amazing person.

Thank you :)

Thank you for this. Assuming that your kind heart, contemplative insight, and outstanding dedication make you someone who contributes greatly to any area you focus on, please just do not forget to focus on causes that are neglected by the for-profit sector (e.g. researching cost-effective prevention or cures for any of the 19 of 20 neglected tropical diseases not yet covered by EA charities), as opposed to researching something like baldness (which receives more funding than malaria research), or cancer, or Alzheimer's disease, which burdens predominantly rich people who live long lives and thus has perhaps 1000x more funding and focus. It is a structural issue that those who are privileged and kind focus on helping their own, similarly privileged communities, in consequence hurting the others whom these kind people counterfactually neglect.

Please do not research Alzheimer's disease to make your father proud (unless there is a sound case that it is better for the world than researching some of the NTDs); continue his work by researching a neglected cause that makes the world a better place, truly counterfactually and cost-effectively.

I think these issues are extremely complex, and I think you bring up a good point, one with underlying values that I agree with. Nevertheless, many of my research interests are in Alzheimer's, chronic severe pain, and life extension. I think that people in poor countries ultimately are going to improve their length and quality of life, and there's a strong trend in that direction already. I am long on malaria being eradicated within the next 30 years. We mostly know what to do; what's holding us back is a combination of environmental caution and the challenges of culturally sensitive governance.

I'm most concerned with the despair and suffering of the elderly and chronically ill, from a sheer "loss of utility" perspective. These problems are incredibly complex: we still have just one Alzheimer's drug, and it buys you maybe an extra year. We don't understand how pain works. Most of the utility of the investment in R&D lies at the end of the research process, so the non-neglected nature of these problems is irrelevant from the perspective of utility. Of course, it's quite relevant from the perspective of basic fairness. That's just less of a motivator for me.

Beyond that, I'm sort of an immortalist. I think that the best way to get people to broaden their moral horizons and think long-term is to help them live longer, happier, healthier lives. I honestly do think it's an emergency that even in the industrialized world, life expectancy only reaches the late 70s, and our declines come with lots of suffering. You spend your best years trying to save up to afford your worst years. Preaching about animals and the poor and our descendants doesn't work on a scale big enough to change the world. The only way I see to change the situation is to dramatically improve the experience of old age and reduce chronic suffering. My intuition is that happy and relaxed people are more compassionate, and that it's fear or the experience of pain and dementia that undermines our happiness and contemplative ability.

Very clear argument, thank you. While I do not believe that I can change your mind, judging from your tone, I also think that I do not need to: happier and more relaxed people may truly be in a better position to share their privileges with others, who will then also be happier and more relaxed. I hope you succeed in your research, while reminding your peers about the cost-effective, EA ways to share happiness with people around the world.
