
TL;DR: Longpost. I’m starting a blog on “Good Living” (by my lights), EA, non-EA ways of doing good, heroic media that inspires me, my own journey, and whatever other topics where writing about them seems likely to lead to more good than rewatching Andor again.

Epistemic status: This post leans hard into Draft Amnesty Week; it introduces ideas I've been developing for almost twenty years, and it's long because I haven't had time to make it short. I haven’t had time to get feedback and intend to edit/update it once I do. Could probably use some images. It was written in close collaboration with Claude[1], which felt both fitting and uncomfortable given my AI takes (LF[2]). I am not a philosopher and I’m making some big claims here and in future posts; I’m willing to invoke Cunningham’s Law and get thrashed so long as I get these ideas out of my head and take the chance that they make some good happen in the world. 

Why now

Several things converged recently that made me feel like I had to scrap my original launch plan and rush an introduction that smashes together several posts' worth of content.

The obvious one is that I'm posting this during Draft Amnesty Week, which Toby and the forum team graciously and brilliantly designed for exactly this kind of logjam-breaking moment.

Another: the announcement of the release of Doing Good Better’s tenth anniversary edition. I'll have more to say about the book below, but the short version is that it shaped the community I work in, I'm glad it exists, and there are things I've wanted to say about it for years.

Another: Oscar also used this week to launch a blog that bridges his personal passion — Arsenal, analytics, the long arc of a project done right — with EA thinking. He and I talked about our blogging goals, but I didn’t know he was pulling the trigger this week. His post is laden with references I don’t understand, displays writing talent I aspire to reach one day, and, most of all, has a clear voice describing (and thereby doing) something fun and good. I’m inspired to come out of the woodwork and do the same.

Another: a couple weeks ago EAG SF overlapped with the Project for Awesome (and the launch of the Maternal Center of Excellence in Sierra Leone). I know I wasn’t the only person who felt tension about which to prioritize participating in.

Finally, it’s been an abnormally eventful week in the EA-world and outside it. I consider it a privilege and a burden to be professionally able to work on things and try to make them better. But my job isn’t the appropriate way for me to approach all things. Practical Heroism (PH) is the lens through which I look at the world, and it would have me do more.

I work for the Centre for Effective Altruism [all content here and on the blog is my own and does not represent my employer], I've been following EA since the beginning, and started contributing via volunteer and professional work in 2015. I wrote a post in 2023 in defense of General EA that I’m proud of but no longer wholly endorse. I still absolutely think EA is a good movement doing important work, I want it to succeed, and any criticisms here come from that position.

I've also been a Nerdfighter since roughly the beginning — I'll come back to why that matters — and a person whose identity has been substantially organized around heroic narrative and fantastical fiction for as long as I can remember. In my experience, superhero nerds are far more likely to be unknowingly EA-adjacent than sports fans, so if Oscar can make a case for the sports-to-EA pipeline, I should have an even easier case to make to the Comic Con crowd. They've spent years imagining what it would mean to have great power and actually use it for good. They just haven't been told that's something adults are allowed to take seriously.

This blog, Practical Heroism, is the place where those things come together. It will include posts on philosophy and ethics, on the state of the world, on EA and adjacent movements, and on my own life — the costs and rewards of trying to take heroism seriously as a daily practice. It will also be where I discuss heroic media at length, with more enthusiasm than is probably warranted. Some posts will be rigorous. Some will be overwrought. I'm told this is fine.

On Doing Good Better (DGB)

DGB was published in 2015. Not entirely coincidentally, 2015 was the year I (temporarily) gave up on trying to publish my own set of ideas on what it means to live ethically, what a community built around those ideas might look like, and what we could accomplish together by building that community deliberately. I had been working on them since 2008 and reached a breaking point where it seemed clear that my mental health couldn’t withstand continued focus on trying to articulate my thinking. I wasn’t capable of publicly commenting on DGB then, but I am now.

What Doing Good Better got right

A lot, genuinely. The wins MacAskill documents in the new foreword are real: GiveWell moving over a billion dollars, over ten thousand GWWC pledges, thousands of career changes toward high-impact work. These are not small things. The rigor of cause prioritization, the normalization of serious giving, the pipeline of talented people into work that matters — these are enormous accomplishments, the movement should be proud of them, and I believe I’ve contributed to some of them by dedicating my professional life to them for years.

I wrote my 2023 post partly because I was frustrated at how much criticism was circulating after the FTX collapse without enough people making the affirmative case, and that case still holds. EA took a serious institutional hit and has had a genuinely difficult few years, including failures it needs to own. But the underlying project — using evidence and reason to do as much good as possible — is not discredited by those failures, and the community that rallied around it did remarkable things.

So what follows is not a repudiation. It's a description of what I think EA didn't build, and couldn't easily build, given the frameworks it chose.

Moral licensing is the wrong side of the spectrum to focus on

As I understand it (since I haven’t read it in full), the tenth-anniversary edition of Doing Good Better is largely a re-release that doesn’t chronicle the impacts that the book has had, or MacAskill’s changed view of the world since its original writing.

The new edition does include a new foreword, and it comments on one of my longstanding gripes with DGB, albeit in a very unsatisfying way. The foreword acknowledges that the moral licensing study discussed in the text — the finding that doing one good act can make people feel licensed to skip the next one — did not replicate, though a recent meta-analysis found a small genuine effect.

I don't want to make too much of this, because it's an honest and appropriate acknowledgement, and I don’t know what goes into a book re-release. But I’d hoped that the new edition would use the lack of replication to reckon with the idea of moral licensing. Since it didn’t, I’ll ask the question: was moral licensing the right psychological frame for EA in the first place?

I think moral licensing effects probably do happen, but I believe they fall on a spectrum. At the other end is a less-discussed pattern sometimes called moral momentum or ethical consistency: good acts reinforce identity, which motivates more good acts. The person who gives 10% and takes an effective career path isn't secretly relieved to be off the hook. They're more likely to be wondering whether there's more they could do, thinking about what giving more would look like, and finding meaning in the fact that doing good is who they are. I think moral momentum describes something real that is foundational to how serious ethical practice actually works.

My evidence for this is thin in the scientific sense (though possibly no less thin than that for moral licensing). I have anecdata from years of being around EAs who exhibit exceptional all-around character — people whose duty to do good does not appear to be discharged by their work, even when that work is among the most impactful in the world. And I have a case study of one: myself (LF: list of blog posts tagged ‘autobiographical’), including the ways that moral momentum can tip into something less healthy, and how it can be better applied to bring about a sustainable mindset of feeling good about doing good (sharing a lot with what Kuhan describes as “Taking ethics seriously, and enjoying the process”).

An especially important note this week: the fact that EA attracts individuals who exhibit exceptionally moral behavior does not mean that all EAs are upstanding characters, or that EA as a community doesn’t have serious issues to contend with on both an individual and a structural level. There are real moral failures in EA, and I don't want to minimize them. The community that produces exceptional people also produces specific failure modes, and I think some of those are downstream of a culture built around maximizing within the weird and neglected — a point I'll return to.

But the question I want to sit with is: if you were designing a movement for people who exhibit moral momentum rather than moral licensing, what would you build? I think it would look quite different from what EA became. And I think the frame chosen at the beginning — individual optimization, marginal impact, treating each good act as a discrete transaction — shaped everything that followed in ways that were both appropriate and limiting.

Neglectedness framing and what it foreclosed

Doing Good Better made neglectedness a central pillar of cause prioritization: all else equal, neglected problems offer more room for additional impact. This is sensible. It's part of why EA has been unusually good at finding high-value opportunities at the frontier — pandemic preparedness before COVID, AI risk before it was mainstream, animal welfare at a scale most people weren't considering, not to mention the global health and wellbeing work that has likely saved hundreds of thousands of lives. But I don’t think it’s the only, or best, way of looking for ways to do the most good. To my mind, neglectedness as a governing logic has underappreciated costs.

The first is that while the other elements of the ITN framework, importance and tractability, directly correspond to whether working on a problem is likely to cause good outcomes, neglectedness is more of a heuristic for one way of finding high-leverage problems. Imagine an alternative heuristic: look for problems nearing a threshold. Sufficient electoral support to win an election is an obvious example, but many others exist, including accumulating enough capital to build infrastructure rather than merely addressing the lack of it, or approaching a tipping point in legal and cultural norms. Under this “importance, tractability, threshold” framework, the leverage and positive EV of working on a problem increase as more attention and resources are given to it. One of the reasons I’m happy to work in EA is that I think there are more neglected problems than there are (easily identifiable) threshold problems, but I nevertheless think that treating neglectedness as fundamental, rather than as one useful tool for identifying “how to do the most good”, is upstream of many EA failures and lost opportunities[3]. This is the core of my departure from my earlier, more full-throated defense of EA: I now think EA is valuable because it has leaned into neglectedness, but it can’t do this and claim to be a full answer to its central question.

The second cost of focusing on neglectedness is that it makes EA harder to understand and easier to attack. The "keep EA weird" refrain reflects something important about EA's comparative advantage and also captures an orientation that can push good people away. Many ethical and motivated people are working on morally salient problems that are saturated with attention because of how important and tractable they are, and EA can be seen to say, “your work isn’t part of our movement”. The weird and neglected is one place where you find exceptional leverage. But it's also where you lose most people, and where the movement becomes easy to caricature.

The third cost is more structural, and it's what I want to focus on here. Neglectedness-first thinking creates a strong pull toward individual, marginal, tractable actions. It asks: given limited resources, where can one person make the most difference? This is a good question, and EA has become excellent at answering it. But it's not the only good question. And optimizing for it systematically crowds out a different question: what could a community — a movement, or eventually a full civic society — accomplish through coordinated action?

These two questions have different answers. Sometimes very different answers.

The ethical consumption example

Chapter eight of Doing Good Better is where moral licensing is discussed, and where that framing and neglectedness-focus intersect to create a viewpoint that is directly oppositional to my own. DGB addresses sweatshops, fair trade, and carbon offsets in sequence, and the conclusions reached reveal something important about the frameworks that launched EA.

On sweatshops, DGB begins with the important and true observation that many laborers choose, and benefit from, the option to work in sweatshops because their alternatives are even worse, and that naive boycotts may simply remove the economic opportunities available to desperate workers. The right response is to end the extreme poverty that makes sweatshop wages the best available option. So far, so good.

It then turns to fair trade as the obvious candidate for ethical purchasing — if we can't boycott bad conditions, can we at least premium-pay for good ones? It cites evidence that Fairtrade (a specific institution issuing certifications, not to be confused with the overall concept of fair trade) has had limited success reaching the poorest workers and limited measurable impact on producer welfare, and draws the reasonable conclusion that consumer certification schemes have significant problems.

Then it turns to offsets. It notes the obvious objection — it's hard to know whether offset providers are doing anything real or counterfactual — and concludes, correctly, that this isn't an argument against offsets in general. With proper vetting, supporting carbon-reduction efforts is good. DGB provides an example of a well-regarded offset provider as proof this exists.

The structure of these two arguments is internally consistent. One study of relative failure, one example of success. But I think the framework that generates both conclusions quietly smuggles in a problematic assumption: that the relevant unit of analysis is always the individual acting on the margin.

From that perspective, the Fairtrade case and the offset case look similar: both involve imperfect mechanisms for channeling individual consumer choices into distant good outcomes. Both require evidence of effectiveness. Fairtrade doesn't pass that test; a vetted offset provider does.

But the moment you zoom out to ask what coordinated behavior at scale would produce, the cases look quite different. I think DGB and PH target the same audience: ideally, all ethically-minded people who have questions about how to do good in the world. DGB offers advice to the individual: buy cheap goods so you can donate more; offset your carbon output, since offsets do much more to improve the environment than personal restriction does. But what happens if lots of people follow that advice? (For the record: despite the state of the world, and the fact that there are only five to fifty thousand EAs depending on how you count, I think there are tens or hundreds of millions of people receptive and eager for advice on living ethically.)

A world in which a significant portion of consumers are told that buying the cheapest sweatshop goods is ethical is a world of more demand for, and further entrenchment of, the existing exploitative practices. I believe a world with much more attention on fair trade — and a commensurate increase in investment in using evidence and reason to improve its systems — can solve the problems that existing systems don’t have the resources to address.

Similarly, a community committed to reducing its consumption and supporting carbon-reduction efforts is doing something categorically different from individuals offsetting their continuing high-carbon consumption in order to feel absolved of it. Supporting carbon-reduction efforts is genuinely good. But framing offsets as personal culpability-management, which the individual-marginal framing tends to encourage, produces different behavior than framing them as one part of a coordinated effort to address a collective problem.

I have basically the reverse intuitions from DGB on these two cases. I think coordinated ethical consumption is undersold as a lever, and I think offsets-as-absolution is a corrosive framing in the environmental discourse. These aren't arbitrary disagreements — they follow from starting with collective action rather than individual optimization as the primary frame.

And this brings me to the biggest cost of neglectedness-first, individual-marginal EA: it is structurally incapable of targeting the class of problems that are only solvable through coordination. Not because EA people can't think about coordination — obviously they can — but because a framework built around "what can one person do on the margin in an underattended space" has no natural language for "what would we need to collectively commit to in order to change the rules of the game." EA is excellent at finding places to push. It doesn't have a good answer for when the problem is that everyone is pushing in incompatible directions, or that the direction everyone is pushing in is determined by an incentive structure that nobody chose and that everyone would prefer to change (LF: posts tagged “Moloch”).

The vegan advocacy exception is interesting here. EA culture has been relatively comfortable with lifestyle advocacy around animal welfare — encouraging people to reduce or eliminate meat consumption as a moral practice, not just a personal preference. This is the one domain where something like coordinated consumption norms has been embraced within the EA ecosystem. It's not a coincidence that it's also the cause area with the clearest individual moral salience: you are, with each meal, making a specific choice about something with direct consequences. I think EA can and should apply this logic more broadly. But the neglectedness frame tends to work against it — advocating that people reduce their meat consumption is not a neglected intervention.

What I mean by Practical Heroism

In response to my critique of DGB, and as a complement to EA, I want to introduce the idea that organizes this blog, and which I've been developing for long enough that it feels embarrassing to still be introducing it.

“Practical” implies several concepts that are central to PH philosophy. It emphasizes the need for grounded and achievable action. It also reminds us that those actions should be consistently taken: that they should be one’s regular practice, as one would practice meditation or a martial art.

The "heroic" part is not decorative. Heroic narrative — superhero fiction, myth, stories of people who chose something harder than necessary because it was right — is the primary cultural technology humans have for transmitting the intuition that self-transcendence is possible and worth attempting. I don't think this is naive. I think it's ancient and important, and I think the people who are most drawn to EA are disproportionately people who were shaped by these narratives and have never quite been given permission to take them seriously in public. Part of my hope is to use this blog to marry the fun and fantastical with the real.

The other part is honesty about costs. Practicing heroism consistently is not costless, and a framework that doesn't account for that will break its practitioners. Moral momentum is what builds and sustains the practice — good acts reinforcing identity, which motivates more good acts — but momentum can tip into rigidity, into scrupulosity, into burnout. I've experienced this, and I'll write about it specifically. I'll write about what recovery looked like (LF) and what I think makes the practice sustainable (LF). Any honest account of what I'm proposing has to include this.

That said, what PH asks is, straightforwardly, for you to be a hero. That’s the tagline. What I hope to provide with it is guidance and support in the process.

The core of Practical Heroism started as a set of questions about what I should do to live up to the real and fictional heroes that inspire me, and why it seemed that I was surrounded by fans of similar stories who did not seem to outwardly exhibit any attempts to be the heroes of their own story. Part of the answer felt like people didn’t know how, so I wanted to build a system of normative ethics to govern myself and to promote to others. The first draft was: “Think of the most genuinely heroic figure that inspires you. Ask yourself what they would do in your shoes, then just go do that.” This isn’t a bad first step, but it’s rife with potential failure modes and, most importantly, doesn’t actually provide real answers.

[Jumping into a very layperson's discussion of philosophy] Because I’ve felt the “be a hero” north star my whole life, I’ve never felt a visceral distinction between deontology, consequentialism, or virtue ethics, because they all seem to point towards the same answers.

My current working definition of “what would a hero do” is to follow a meta-decision rule: make your choices by taking your best guess at “if everyone else made choices using this rule, we would create the best possible world”.

Practical Heroism is universalizability as a daily practice. It seems to be in the family of Kantian ethics, but it's motivated by consequentialism — I use it because I believe it's actually the best way to produce good outcomes when applied consistently, not because of any prior commitment to duty or rules. It's also in the family of virtue ethics in that it asks what kind of person you'd need to be, consistently, rather than optimizing each decision in isolation.
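For readers who think in code, the meta-decision rule above can be sketched as a toy procedure: enumerate candidate decision rules, estimate how good the world would be if everyone adopted each one, and act on the winner. Everything in this sketch is illustrative and hypothetical — the rule names and value estimates stand in for judgments no program can actually make.

```python
# Toy sketch of the Practical Heroism meta-decision rule.
# All candidate rules and value estimates are hypothetical placeholders.

def estimate_world_value(rule_name: str) -> float:
    """Your best guess at how good the world would be if *everyone*
    made choices using this rule. Hardcoded toy numbers."""
    guesses = {
        "maximize_own_comfort": 2.0,   # generalized defection: a poor world
        "follow_local_norms": 5.0,     # roughly the status quo
        "universalizable_good": 9.0,   # the PH candidate
    }
    return guesses[rule_name]

def choose_rule(candidate_rules):
    """Meta-decision: pick the rule that scores best under the
    question 'what if everyone used it?'"""
    return max(candidate_rules, key=estimate_world_value)

rules = ["maximize_own_comfort", "follow_local_norms", "universalizable_good"]
print(choose_rule(rules))  # prints "universalizable_good"
```

The point of the sketch is only the shape of the reasoning: the unit being evaluated is the rule, not the individual act, which is what distinguishes this from per-decision optimization.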

Future posts will discuss what I think “the good” is that heroes should strive for and how to balance doing good and being happy, a combined effort I refer to as “Good Living”.

CGL: the coordination layer

Practical Heroism is a personal practice. What I've now come to call CGL — Cooperative Good Living — is the answer to what happens when people practicing PH try to coordinate[4].

The intuition is this: EA succeeded at building a community of individuals who independently made choices that were better on the margin. That's real and important. But a community of independent individual optimizers is not the same as a coordinated movement, and it can't do what a coordinated movement can do. CGL is the attempt to build the coordination layer that EA's framework precluded, and that the EA movement of the last 15 years hasn't developed, even as the need for it has become clearer.

The specific mechanics of CGL — how it handles collective decisions, how it avoids coercion while achieving coordination, how it relates to existing institutions — are for future posts (LF: posts tagged “CGL”). What I want to establish here is the ambition: CGL aims to be a voluntarist, non-coercive coordination mechanism for people who want to live well in all facets of their lives and who recognize that the biggest problems facing humanity are collective action problems that individual optimization can't solve.

I think EA, as it currently exists, is a subset — an important, rigorous, high-impact subset — of what CGL would ideally encompass. Not the other way around (even if, as I hope it plays out, the infrastructure of the EA forum does some of the lifting to create CGL and CGL employs EAs to deliberately use the founder effect to establish a culture that maintains epistemic rigor and spreadsheet compassion as it grows to a much larger size than makes sense for EA). EA has done the work of demonstrating that it's possible to take doing good seriously and be rigorous about it. CGL builds on that foundation to ask what happens next, when enough people are doing that.

It is my hope that CGL can become the answer to some of the biggest issues I see with EA: people critiquing EA as ignoring systemic problems, and the mismatch between “EA is the way to do the most good in the world” and “EA focuses on weird and neglected issues and doesn’t have a place for most people.” CGL aspires to actually be the place that (eventually) tackles all problems, big and small, and provides actions and a place for everyone to be a part of the collective solutions.

Two communities, one gap

I said I'd come back to Nerdfighteria — the community that grew around the Vlogbrothers, organized around the loose commitment to "decrease world suck" and the identity of being a nerd who cares — and I will, briefly. I've been a Nerdfighter since halfway through Brotherhood 2.0, so I “missed” the first 6 months of a 19-year journey. I've also been an EA since roughly the time it meant anything to “be an EA”. For most of the intervening time, these have felt like separate lives: the nerdy, earnest, emotionally expressive community on one side, and the rigorous, analytical, sometimes alienating EA community on the other.

What strikes me now is that Nerdfighteria succeeded at something EA didn't aim for but probably needs: it made feeling like you belonged to a community of do-gooders easy. The ask for identity membership is simple and clean — do you like the Vlogbrothers, do you want to decrease world suck, do you consider yourself a nerd? You're in. The community is genuinely delighted to have you.

EA is muddled as an identity, in a way that's both inevitable and costly. Is it the question of how to do the most good? A set of answers to that question? The community of orgs and people working in the space? All three? Different people mean different things, which creates constant confusion about membership and criticism and who gets to speak for what. It just makes sense that we ended up with EA-adjacent and EA-adjacent-adjacent. The only times I’ve encountered “Nerdfighter-adjacent” are from EA-types steeped in that memespace.

I've written before about why I think the name and the big-tent approach are worth defending despite this, and I still claim the identity “EA”. But I notice that despite spending all my professional time, and much more of my social life and brainspace on EA things, and relatively little time engaging in Nerdfighteria, it’s easier to identify wholly as a Nerdfighter. Internally, and among other EAs, I will add qualifiers to my EA identity.

Practical Heroism and CGL aim for clarity on this. You're practicing Practical Heroism[5] if you're using universalizability as a daily decision framework and trying to live up to it. You're participating in CGL[6] if you're committed to living well across all facets of your life and using the CGL platform to work with others to address collective action problems. These are clear enough that they can serve as genuine identity anchors. They're also demanding enough that they don't become meaningless.

I want to be honest that this ambition makes me nervous. There's a version of "I'm defining the ultimate source of good and making it available to everyone" that is grandiose and dangerous, and I am aware of it. I hold the whole project with a great deal of epistemic humility. But the movement I've described — optimistic, fantastical when it needs to be, rigorous when it needs to be, holistic, fun, genuinely welcoming — doesn't seem to exist, and I've been waiting for someone to build it for a long time. The School of Moral Ambition seemed for a moment like it might fill the gap; its direction so far suggests it's doing valuable work, but work that remains closer to EA[7] than to what I'm imagining.

Maybe I'm wrong about the gap. I'd genuinely love to be wrong, and for someone to make the case that there are already existing answers (maybe an EA reformation, or SMA is actually doing different things than I understand from the outside).

But it seems like the need is real, and what more prototypical motivation for heroism exists than "this thing should exist, it doesn't, and someone has to try"?

What comes next

This post is an introduction. Future posts will go into:

  • More about Practical Heroism theory: my conception of what “The Good” is that we should be working towards, aspects of moral momentum
  • Cooperative Good Living: mechanics, principles, why I believe voluntarist coordination can address problems that governments and markets can't
  • My take on communicating AI risk (the post I originally planned to lead with)
  • Hero-nerd stuff: my relationship to heroic media, the specific stories that shaped me and what it looks like to try to take them seriously as an adult, fan analysis through a PH lens, maybe some fanfic?
  • EA: questions of EA culture around elitism and weirdness, personal cause-prio, what I think the community gets right and wrong
  • Good Living: personal and abstract discussions of how to navigate through a world on fire, unlicensed self-help advice of all kinds (informed by years of my own chronic-injury recovery, becoming a personal trainer and CFAR mentor, and a different counterfactual life where instead of publishing an alternative to DGB, I finished my alternative to NVC)

I am posting this now, in this form, because doing so is the way to live up to PH as I’ve defined it for myself. Waiting for the perfect version is a way of not writing the imperfect true thing. And the imperfect true thing is: I've been working on this for a long time, I think it matters, the moment feels right, and I'm scared of posting it. All of those things are true simultaneously, and a heroic framework says that's not a reason to wait.

  1. ^

     The amount of em dashes would make this obvious to ~anyone reading this. Historically, I wrote with way too many parenthetical asides, and I think the em dash is just better in most cases. So rather than trying to hide the most obvious LLM indicator, I’m actually trying to use it to train myself to use more of them.

  2. ^

     “Link forthcoming,” this one to a list of blog posts tagged ‘AI.’ There will be a lot of LF’s, and I want to track where to come back to. I think and write in hyperlink-dense fashion and have many of these posts drafted, so hopefully they’ll be published soon. If you see an “LF” that you particularly want to see addressed first, please say so!

  3. ^

    I’m far from the only one calling out ITN as harmful when applied prescriptively, but it nevertheless seems to remain the predominant metric against which problems and interventions are evaluated. I think it’s fair to question whether “EA qua EA” can exist in its current form without ITN as a bedrock.

  4. ^

     I previously called the project "Phorg" — short for Practical Heroism ORGanized. For the record: "Phorg" was meant to suggest "forge" — as in, forging something new, forge as in blacksmith, using concentrated pressure to bind together weak elements into something strong and useful. I'm (probably) retiring the name in favor of CGL, which seems cleaner, more professional, more straightforwardly self-descriptive, and more likely to be embraced by a broad audience. I’m leaving this footnote as a small memorial to an awkwardly-spelled, fantastical name that felt romantic and motivating for years.

  5. ^

     What to call someone who practices PH is an unresolved issue. I'm apparently the only person who thinks "effective altruismist" makes sense; it would have avoided a lot of the issues of people taking "effective altruist" as a descriptive claim about how good someone is in the world, rather than as a label for a supporter of an ideology and community. Calling myself and others "Practical Heroes" seems like an obvious way to bring out that EA criticism times ten. “Practical Heroist” might have to do.

  6. ^

     Ditto here, but I might have a better answer for CGL than PH (LF: “Cogood”)

  7. ^

     SMA’s SSS framework calling out “sorely neglected” problems in particular indicates that SMA does not differentiate from EA principles in the way that CGL can.
