
Brad West🔸

Founder & CEO @ Profit for Good Initiative
2137 karma · Joined · Roselle, IL, USA · Profit4good.org/

Bio


Looking to advance businesses that have charities in the vast-majority shareholder position. Check out my TEDx talk for why I believe Profit for Good businesses could be a profound force for good in the world.

 

Comments (327)

I think this creates a false dichotomy between growth and impact. If 1% of the global middle class gave effectively, that would dwarf all current EA funding - even at 1/100th the per-person impact.
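To make the arithmetic explicit, here's a minimal back-of-envelope sketch; the population, participation, and community-size figures are all illustrative assumptions, not sourced statistics:

```python
# Rough check of the "1% of the global middle class" claim.
# Every figure below is an illustrative assumption.

middle_class = 3.5e9      # assumed size of the global middle class
participation = 0.01      # suppose 1% start giving effectively
impact_ratio = 1 / 100    # each at 1/100th the impact of a committed EA

engaged_eas = 20_000      # assumed number of highly engaged EAs today

new_donors = middle_class * participation   # 35 million people
ea_equivalents = new_donors * impact_ratio  # 350,000 "full-EA" units of impact

print(f"New donors:                    {new_donors:,.0f}")
print(f"EA-equivalent impact:          {ea_equivalents:,.0f}")
print(f"Multiple of current community: {ea_equivalents / engaged_eas:.1f}x")
```

Even under these deliberately heavy discounts, the broad movement comes out more than an order of magnitude ahead.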

More crucially, broad movements create the conditions for high-impact work to succeed. Try getting AI safety regulation or pandemic prevention funding in a world where altruism remains niche. The abolitionists needed both William Wilberforce and a mass movement.

Your prediction may be right - perhaps SMA will have numbers and EA will have impact per person. That's precisely why both are valuable. SMA normalizes caring about important problems; EA ensures the most dedicated people are optimally deployed.

It has always been somewhat odd that EA never seemed to go on the offensive against the normies... Everyday people have the power to save multiple people's lives and prevent obscene amounts of nonhuman animal suffering, yet choose not to do so... and EA is on the defensive?

Your observation about the criticism tradeoffs is spot-on. EA has traditionally directed its criticism inward - endless debates about cause prioritization, effectiveness metrics, and optimization - while being remarkably gentle with those outside the movement who aren't trying at all. Meanwhile, SMA seems to flip this: they're saying "quit your bullshit job" to the broader public while maintaining more internal harmony through their Radical Kindness principle.

There's something refreshing about Bregman's willingness to say what many EAs think but rarely voice: that choosing prestige over impact when you have the resources to help is a moral failure. The average professional in a developed country could prevent multiple deaths through effective giving, yet spends that money on lifestyle upgrades. We've somehow normalized this as acceptable while agonizing over whether we're supporting the 3rd or 5th most effective intervention.

I wonder if EA's reluctance to criticize outward stems from: (1) a desire to seem welcoming rather than judgmental, (2) an intellectual culture that prizes nuance over bold claims, or (3) a strategic calculation that gentle persuasion works better than confrontation. But maybe SMA is showing us that there's room for both approaches - and that being morally ambitious means being willing to challenge societal norms more directly.

The real test will be which approach ultimately creates more impact. Does EA's internal rigor and external diplomacy attract more committed effective altruists? Or does SMA's external boldness and internal supportiveness mobilize more people to action? Perhaps we need both spiritual siblings playing different roles in expanding humanity's moral ambition.

Thanks for your thoughts on this. 

I would note that Moral Ambition did mention catastrophic risk, specifically citing risks from Artificial General Intelligence as a potentially promising area for morally ambitious people to make an impact.

Also, work on systemic change is consistent with core EA principles (doing the most good we can with our resources). Some areas could be strong speculative bets, similar to the reasoning supporting some projects associated with longtermism.

I think there's a very high degree of complementarity and compatibility with core EA philosophy, even if actual SMA conclusions about cause areas differ in some ways from those EA tends to focus on. Core EA philosophy, however, is about the fundamental principles, not the downstream cause areas; if different people's epistemologies, proceeding from those principles, lead them to different places than the current EA community, I don't think they are any less EAs.

Incredible resolve, Steven. It’s rare to see someone—let alone a 14-year-old—grapple so squarely with how much good a single salary can buy. You’re right: a single $5k donation to AMF plausibly saves a life, so every extra dollar you push toward the margin matters enormously.

I sometimes feel that EA discussions lean too far toward “Careful, you’ll burn out—dial it back.” Burnout is real, but the analysis often weights personal discomfort as if it were on par with someone else’s entire life. Even if an austere lifestyle shortens a career a bit, the extra years of near-maximal giving you do manage could still dominate the equation. The stakes are that high.
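To put rough numbers on that (a minimal sketch; the giving levels, career lengths, and the $5k cost-per-life figure are all assumptions):

```python
# Illustrative comparison: an austere, near-maximal giving plan that
# shortens a career versus a gentler, longer one. All numbers are assumptions.

cost_per_life = 5_000   # assumed cost to save a life via AMF, USD

# Plan A: austere, gives $40k/yr, burns out ten years early.
austere_total = 40_000 * 30

# Plan B: sustainable, gives $25k/yr over a full 40-year career.
sustainable_total = 25_000 * 40

print(f"Austere plan:     ${austere_total:,} -> ~{austere_total // cost_per_life} lives")
print(f"Sustainable plan: ${sustainable_total:,} -> ~{sustainable_total // cost_per_life} lives")
```

Under these assumptions the austere plan still comes out ahead; the point is not the particular numbers but that the comparison has to actually be run, rather than assuming burnout risk automatically settles it.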

Keep refining the plan, of course—experiment with smaller pledges first, protect your health so you can give longer—but never lose sight of the basic arithmetic that inspired you. The world needs more people willing to do what you’re contemplating. I’m cheering you on.

Worth noting: Peter Singer and Rutger Bregman’s School for Moral Ambition are co-hosting a Profit for Good conference in Amsterdam on 11 June—a concrete EA-adjacent collaboration that channels Bregman’s “moral ambition” into effective-charity business models. Another good touchpoint for anyone looking to ride this wave.

https://www.moralambition.org/profit-for-good-conference-live-stream

I think another aspect to consider in starting a new organization, non-profit or for-profit, is how many of your deficits (organizational, research, mathematical, etc.) can be addressed or alleviated by today's AI tools. Historically, there were a number of qualities whose absence made starting new things very difficult; AI tools could dramatically lower the bar if you have a good idea for a business or a nonprofit.

I appreciate your exploration of the strategic complexity inherent in prioritizing effectiveness. A crucial aspect involves recognizing that impact often occurs in significant "chunks." Identifying key thresholds and accurately assessing their likelihood of being pivotal is essential for effective resource allocation. For instance, in farmed animal advocacy, securing cage-free commitments from major corporations can lead to disproportionate industry-wide improvements, making precise strategic targeting crucial. In these contexts, there might appear to be little impact until the critical moment.

However, openly communicating these threshold calculations might inadvertently strengthen adversaries' resistance. Drawing from game theory's "madman" approach, an actor sometimes gains strategic advantage if adversaries believe it may irrationally commit excessive resources or accept high risks to achieve its goals, thus deterring aggressive opposition.
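As a stylized illustration of the threshold point above (all figures are assumptions I've picked purely for the arithmetic):

```python
# "Chunky" impact: a steady incremental campaign versus a threshold campaign
# that pays off only if it proves pivotal. All numbers are illustrative.

incremental_value = 1_000  # welfare units gained with certainty

p_pivotal = 0.05           # assumed chance the threshold campaign is decisive
threshold_value = 50_000   # industry-wide welfare units if the threshold tips

expected_threshold = p_pivotal * threshold_value

print(f"Incremental campaign EV: {incremental_value}")
print(f"Threshold campaign EV:   {expected_threshold:.0f}")
# 0.05 * 50,000 = 2,500 > 1,000: the lumpy bet dominates in expectation,
# despite showing no visible impact until the critical moment.
```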

On a related semantic note, describing strategic resilience or integrating adversarial responses as "less effective" could oversimplify this nuanced issue. I would think that when people say "effective," they are talking about what best achieves one's goals, and integrating adversarial responses would help do exactly that.

I resonate deeply with your sadness. What helps me stay anchored is identifying EA primarily as a personal commitment and life philosophy rather than merely as a movement. This perspective keeps my dedication resilient, rooted in the core EA value of boundless determination to better the world, regardless of external disruptions or individual mistakes.

 Movements inevitably face setbacks and crises, but the philosophical essence of EA—its unwavering commitment to improving the world—remains solid. The movement serves as a practical tool for amplifying these core values, even if it occasionally falters. 

Controversies offer opportunities to recommit individually and collectively to fundamental EA principles such as transparency, humility, and rigorous inquiry. Rather than depending solely on central figures, these moments encourage broader ownership and individual agency. 

Ultimately, the enduring strength of EA lies not in flawless execution but in the earnest pursuit of doing the most good we can with the resources available. This foundational ideal, characterized by thoughtful compassion and pragmatic action, is deeply worth preserving.

Would a potential cure for the sycophancy be to reverse the framing to Claude, so that it perceives the comment as coming from your opponent and you as the one looking for flaws in it? I realize that this would not get quite what you are looking for, but getting strong arguments for the other side could be helpful.
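For concreteness, here is one way the reversed framing might look with the Anthropic Python SDK; the prompt wording and model choice are illustrative, not a tested recipe:

```python
# A minimal sketch of the reversed framing. The prompt wording and model
# name are illustrative assumptions, not a tested recipe.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

draft_comment = "..."  # the comment you want stress-tested

response = client.messages.create(
    model="claude-3-5-sonnet-latest",  # example model alias; adjust as needed
    max_tokens=1024,
    messages=[{
        "role": "user",
        # Cast the author as the user's opponent so the model hunts for
        # flaws instead of flattering the writer.
        "content": (
            "My opponent posted the comment below. Identify its weakest "
            "points and make the strongest case against it:\n\n"
            + draft_comment
        ),
    }],
)

print(response.content[0].text)
```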

Because we face substantial uncertainty around the eventual moral value of AIs, any small reduction in p(doom) or catastrophic outcomes—including S-risks—carries enormous expected utility. Even if delaying AI costs us a few extra years before reaping its benefits (whether enjoyed by humans, other organic species, or digital minds), that near-term loss pales in comparison to the potentially astronomical impact of preventing (or mitigating) disastrous futures or enabling far higher-value ones.

From a purely utilitarian viewpoint, the harm of a short delay is utterly dominated by the scale of possible misalignment risks and missed opportunities for ensuring the best long-term trajectory. Consequently, it’s prudent to err on the side of delay if doing so meaningfully improves our chance of securing a safe and maximally valuable future, regardless of the substrate of consciousness.
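A stylized version of that comparison (every parameter below is an assumption chosen only to show the structure of the argument):

```python
# Toy expected-value comparison for delaying AI deployment.
# Every parameter is an illustrative assumption.

value_of_future = 1e6    # normalized value of a good long-term trajectory
annual_benefit = 1.0     # near-term benefit of AI per year, same units
delay_years = 5          # length of the proposed delay
risk_reduction = 0.001   # assumed drop in p(catastrophe) bought by the delay

cost_of_delay = annual_benefit * delay_years      # 5 units forgone
expected_gain = risk_reduction * value_of_future  # 1,000 units in expectation

print(f"Cost of delay: {cost_of_delay:.0f}")
print(f"Expected gain: {expected_gain:.0f}")
# Even a 0.1% reduction in catastrophe risk dominates years of forgone
# near-term benefit once the long-term stakes are large enough.
```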
