A transcript (a) of Ben Hoffman in conversation with a friend.

Relevant excerpts:

Mack: Do you consider yourself an Effective Altruist (capital letters, aligned with at least some of the cause areas of the current movement, participating, etc)?
Ben: I consider myself strongly aligned with the things Effective Altruism says it's trying to do, but don't consider the movement and its methods a good way to achieve those ends, so I don't feel comfortable identifying as an EA anymore.
Consider the position of a communist who was never a Leninist, during the Brezhnev regime.
...
Ben: Yeah, it's kind of funny in the way Book II (IIRC) of Plato's Republic is funny. "I don't know what I want, so maybe I should just add up what everyone in the world wants and do that instead..."
"I don't know what a single just soul looks like, so let's figure out what an ENTIRE PERFECTLY JUST CITY looks like, and then assume a soul is just a microcosm of that."


I don't understand the meaning of this post/excerpt. It reads like a critique, but I can't tell what Ben wants to change or how he wants to change it, as the Republic summary has no clear bearing on the actions of any EA organization I'm aware of.

Also, if I weren't someone with a lot of context on who Ben is, this would be even more confusing (in my case, I can at least link his concerns here to other things I know he's written).

I recommend that people creating linkposts include a brief summary of what they took away from the linked content (or the actual abstract, if the link is to a scientific paper).

It can also be helpful to include information about the author if they don't hold a position that makes their expertise obvious. Even if someone doesn't have a relevant background, just knowing what else they've written can help; for example, you might note that Ben has been writing about EA for a while, partly from a critical perspective on issues X and Y.

On the plus side, I like the way you always include archive.org links! It's important to avoid reference rot, and you're the only poster I know of who takes this clearly positive step toward doing so.

> It reads like a critique, but I can't tell what Ben wants to change or how he wants to change it, as the Republic summary has no clear bearing on the actions of any EA organization I'm aware of.

I'm not sure what Ben wants to change (or if he even has policy recommendations).

I think the Republic parallel is interesting. "Figure out how the entire system should be ordered, then align your own life such that it accords with that ordering" is a plausible algorithm for doing ethics, but it's not clear that it dominates alternative algorithms.
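To make the contrast concrete, here's a minimal sketch of the two decision procedures as code. Everything in it (the function names, the `resulting_world`, `score_world`, and `score_action` callables) is a hypothetical placeholder of mine, not anything from the transcript:

```python
# A toy rendering of the two "algorithms for doing ethics" contrasted
# above. All names are hypothetical placeholders.

def republic_style(options, resulting_world, score_world):
    """Work out how the entire system should be ordered, then pick the
    action whose resulting world best accords with that ordering."""
    return max(options, key=lambda action: score_world(resulting_world(action)))

def direct_style(options, score_action):
    """The alternative: evaluate your own options directly, with no
    model of the perfectly just city required."""
    return max(options, key=score_action)
```

The point isn't that either dominates; it's that they are genuinely different procedures, and the first one quietly assumes you can score whole worlds.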

I appreciated the parallel because I hadn't made the connection before, and I think something like this algorithm is operating latently underneath a lot of EA reasoning.

I think it's odd that we spent so much time burying old chestnuts like "I don't want to be an EA because I'm a socialist" or "I don't want to be an EA because I don't want to earn to give", and yet now we have people saying they're abandoning the community because of some amateur personal theory of how they can do cause prioritization better than other people.

The idea that EAs use a single metric measuring all global welfare in cause prioritization is incorrect, and it raises questions about this guy's familiarity with reports from sources like GiveWell, ACE, and the amateur analyses that get posted around here. That's odd, because I'm pretty sure I've seen this guy around the discourse for a while.

> Only if you go all the way to the extreme of total central planning do you really need a single totalizing metric

This is incorrect anyway. First, even total central planners don't really need a totalizing metric; actual totalitarian governments have existed and they have not used such a metric (AFAIK).

Second, as long as your actions impact everything, a totalizing metric might be useful. There are non-totalitarian agents whose actions impact everything. In practice, though, it's just not worth the effort to quantify so many things.

> so to some extent proposing such a metric is proposing a totalitarian central planner, or at least a notional one like a god

LOL, yes, if we agree and disagree with him in just the right combination of ways to give him an easy counterpunch. Wow, he really got us there!

> The idea that EAs use a single metric measuring all global welfare in cause prioritization is incorrect, and it raises questions about this guy's familiarity with reports from sources like GiveWell, ACE, and the amateur analyses that get posted around here.

Some claim to, others don't.

I worked at GiveWell / Open Philanthropy Project for a year, and I wrote up some of those reports. The process explicitly does not score all recommendations on a unified metric - I linked to the "Sequence vs Cluster Thinking" post, which makes this quite clear. But at the time, there were four paintings on the wall of the GiveWell office illustrating the four core GiveWell values, and one was titled "Utilitarianism" - the moral philosophy distinguished from others (and in particular from the broader class "consequentialism") by the claim that you should use a single totalizing metric to assess right action.

OK, the issue here is that you're assuming metrics have to be the same in moral philosophy and in cause prioritization. But there's just no need for that. Cause prioritization metrics need to be valid with respect to moral philosophy, but that doesn't mean they need to be identical to it.
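To illustrate the distinction with invented names and numbers (nothing here reflects any real organization's figures): a totalizing approach puts every intervention on one scale, while cause prioritization can use per-domain metrics in different units that merely need to cohere with the underlying moral view.

```python
# Invented figures for illustration only.

# Totalizing approach: one util scale, every option directly comparable.
total_utils = {"bednets": 120.0, "corporate_campaigns": 95.0}
best_overall = max(total_utils, key=total_utils.get)

# Per-domain metrics: different units per cause area. Each can be valid
# with respect to the moral philosophy, but comparing across domains
# takes a separate judgment call rather than falling out of one number.
dollars_per_death_averted = {"bednets": 4500}           # lower is better
animals_helped_per_dollar = {"corporate_campaigns": 9}  # higher is better

best_health = min(dollars_per_death_averted, key=dollars_per_death_averted.get)
best_animals = max(animals_helped_per_dollar, key=animals_helped_per_dollar.get)

print(best_overall, best_health, best_animals)
```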

> actual totalitarian governments have existed and they have not used such a metric (AFAIK).

Linear programming was invented in the Soviet Union to centrally plan production with a single computational optimization.
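For the curious, here's a minimal sketch of what "plan production with a single computational optimization" means, using scipy's linear programming solver; the goods, coefficients, and resource limits are all invented:

```python
# Central planning as one linear program: maximize the value of total
# production subject to shared resource limits. All numbers invented.
from scipy.optimize import linprog

# Value per unit of goods A and B; linprog minimizes, so negate to maximize.
objective = [-3.0, -2.0]

# Rows: labor-hours and steel consumed per unit of each good.
resource_use = [[1.0, 2.0],
                [3.0, 1.0]]
resource_limits = [100.0, 90.0]  # total labor-hours and steel available

plan = linprog(c=objective, A_ub=resource_use, b_ub=resource_limits,
               bounds=[(0, None), (0, None)])
print(plan.x)  # quantities of each good under the single planning metric
```

Note that the whole plan falls out of a single objective function, which is exactly the "one totalizing metric" structure at issue.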

Still, it sounds like their metric was just economic utility from production, which does not encompass many other policy goals (security, criminal justice, etc.).

> Second, as long as your actions impact everything, a totalizing metric might be useful.

Wait, is your argument seriously "no one does this so it's a strawman, and also it makes total sense to do for many practical purposes"? What's really going on here?

It's conceptually sensible, but not practically sensible given the level of effort that EAs typically put into cause prioritization. Actually measuring Total Utils would require a lot more work.
