A transcript of Ben Hoffman in conversation with a friend.

Relevant excerpts:

Mack: Do you consider yourself an Effective Altruist (capital letters, aligned with at least some of the cause areas of the current movement, participating, etc)?
Ben: I consider myself strongly aligned with the things Effective Altruism says it's trying to do, but don't consider the movement and its methods a good way to achieve those ends, so I don't feel comfortable identifying as an EA anymore.
Consider the position of a communist who was never a Leninist, during the Brezhnev regime.
...
Ben: Yeah, it's kind of funny in the way Book II (IIRC) of Plato's Republic is funny. "I don't know what I want, so maybe I should just add up what everyone in the world wants and do that instead..."
"I don't know what a single just soul looks like, so let's figure out what an ENTIRE PERFECTLY JUST CITY looks like, and then assume a soul is just a microcosm of that."

Comments
I don't understand the meaning of this post/excerpt. It reads like a critique, but I can't tell what Ben wants to change or how he wants to change it, as the Republic summary has no clear bearing on the actions of any EA organization I'm aware of.

Also, if I weren't someone with a lot of context on who Ben is, this would be even more confusing (in my case, I can at least link his concerns here to other things I know he's written).

I recommend that people creating linkposts include a brief summary of what they took away from what they've linked to (or an actual abstract if you've linked to a scientific paper).

It can also be helpful to include information about the author if they don't have a position that makes their expertise obvious. Even if someone doesn't have a relevant background, just knowing about things they've written before can help; for example, you might note that Ben has been writing about EA for a while, partly from a perspective of being critical about issues X and Y.

On the plus side, I like the way you always include archive.org links! It's important to avoid reference rot, and you're the only poster I know of who takes this clearly positive step toward doing so.

> It reads like a critique, but I can't tell what Ben wants to change or how he wants to change it, as the Republic summary has no clear bearing on the actions of any EA organization I'm aware of.

I'm not sure what Ben wants to change (or if he even has policy recommendations).

I think the Republic parallel is interesting. "Figure out how the entire system should be ordered, then align your own life such that it accords with that ordering" is a plausible algorithm for doing ethics, but it's not clear that it dominates alternative algorithms.

I appreciated the parallel because I hadn't made the connection before, and I think something like this algorithm is operating latently underneath a lot of EA reasoning.

I think it's odd that we've spent so much time burying old chestnuts like "I don't want to be an EA because I'm a socialist" or "I don't want to be an EA because I don't want to earn to give," and yet now we have people abandoning the community over an amateur personal theory of how to do cause prioritization better than everyone else.

The idea that EAs use a single metric measuring all global welfare in cause prioritization is incorrect, and raises questions about this guy's familiarity with reports from sources like GiveWell, ACE, and the amateur stuff that gets posted around here. And that's odd, because I'm pretty sure I've seen this guy around the discourse for a while.

> Only if you go all the way to the extreme of total central planning do you really need a single totalizing metric

This is incorrect anyway. First, even total central planners don't really need a totalizing metric; actual totalitarian governments have existed and they have not used such a metric (AFAIK).

Second, as long as your actions impact everything, a totalizing metric might be useful. There are non-totalitarian agents whose actions impact everything. In practice, though, it's just not worth the effort to quantify so many things.

> so to some extent proposing such a metric is proposing a totalitarian central planner, or at least a notional one like a god

LOL, yes, if we agree and disagree with him in just the right combination of ways to give him an easy counterpunch. Wow, he really got us there!

> The idea that EAs use a single metric measuring all global welfare in cause prioritization is incorrect, and raises questions about this guy's familiarity with reports from sources like GiveWell, ACE, and amateur stuff that gets posted around here.

Some claim to, others don't.

I worked at GiveWell / Open Philanthropy Project for a year, and I wrote up some of those reports. GiveWell explicitly does not score all recommendations on a unified metric; I linked to the "Sequence vs Cluster Thinking" post, which makes this quite clear. But at the time, there were four paintings on the wall of the GiveWell office illustrating the four core GiveWell values, and one was titled "Utilitarianism" - the moral philosophy distinguished from others (and in particular from the broader class "consequentialism") by the claim that you should use a single totalizing metric to assess right action.

OK, the issue here is that you're assuming metrics have to be the same in moral philosophy and in cause prioritization. But there's just no need for that. Cause prioritization metrics need to have validity with respect to moral philosophy, but that doesn't mean they need to be identical.

> actual totalitarian governments have existed and they have not used such a metric (AFAIK).

Linear programming was invented in the Soviet Union to centrally plan production with a single computational optimization.
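As an aside on the mechanics: linear programming reduces a planning problem to maximizing one scalar objective under resource constraints, which is what makes it a "single computational optimization." A minimal sketch with a made-up two-good production plan (the goods, values, and budgets here are invented for illustration; this is not the Soviet planners' actual model):

```python
# Toy illustration (invented numbers, not a historical model): an LP
# picks one production plan by maximizing a single scalar objective.
# Maximize value = 3x + 2y   (output value of two goods)
# subject to  x + y  <= 10   (labor budget)
#             2x + y <= 15   (steel budget)
#             x, y   >= 0
from itertools import combinations

# Constraints as a*x + b*y <= c, including the non-negativity bounds.
constraints = [(1, 1, 10), (2, 1, 15), (-1, 0, 0), (0, -1, 0)]

def intersect(c1, c2):
    """Corner point where two constraint boundary lines meet (None if parallel)."""
    a1, b1, r1 = c1
    a2, b2, r2 = c2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        return None
    return ((r1 * b2 - r2 * b1) / det, (a1 * r2 - a2 * r1) / det)

def feasible(pt):
    return all(a * pt[0] + b * pt[1] <= c + 1e-9 for a, b, c in constraints)

# An optimum of an LP lies at a vertex of the feasible region, so for a
# tiny 2-variable problem we can just enumerate pairwise intersections
# and keep the best feasible one (a real solver would use simplex etc.).
vertices = [p for c1, c2 in combinations(constraints, 2)
            if (p := intersect(c1, c2)) and feasible(p)]
best = max(vertices, key=lambda p: 3 * p[0] + 2 * p[1])
print(best, 3 * best[0] + 2 * best[1])  # plan (5, 5), total value 25
```

The relevant point for the thread: everything the planner cares about has to be folded into that one objective row before the optimization can run, which is exactly the "single totalizing metric" being debated above.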

Still sounds like their metric was just economic utility from production, which does not encompass many other policy goals (like security, criminal justice, etc.).

> Second, as long as your actions impact everything, a totalizing metric might be useful.

Wait, is your argument seriously "no one does this so it's a strawman, and also it makes total sense to do for many practical purposes"? What's really going on here?

It's conceptually sensible, but not practically sensible given the level of effort that EAs typically put into cause prioritization. Actually measuring Total Utils would require a lot more work.
