tc

24 karma · Joined Jul 2019

Comments: 3

tc · 1y

"Some poor people have plenty of fish. No poor people have plenty of money."

tc · 2y

I think this is a strong post. It's been obvious for a long time that the skills and inclinations that make a good philosophy professor or forum poster are not precisely the same as the skills and inclinations that make a good CEO or project manager.

Solving this problem by reducing the influence of analytical discussion in EA, however, would come at the cost of reducing the distinctiveness of EA as a movement.

What is EA? EA is 1) an existing network of human relationships, 2) a large pot of money, 3) a specific set of cultural norms about how weird philosophy nerds talk to each other, and, downstream of 3), 4) a specific set of current ideas about how to do the most good.

The world has a very large number of altruistic ecosystems trying to do good. There are literally millions of civic organisations around the globe filled with worthy Haitian pastors. The Catholic Church is a single organisation with 1.3bn members and explicitly altruistic goals. In the EU alone, $13tn is invested in "ESG" funds with explicitly altruistic goals.

My concern is that a "big tent" approach which attempts to unite people around altruistic goals while jettisoning EA's culture and methods will simply collapse into existing efforts to do good. EA's unusual leverage comes from the fact that it is a relatively tightly connected group of quite unusual individuals with extremely unusual beliefs.

Underlying the OP is an implied discomfort with the existing distribution of views within EA. If spending resources on averting nuclear war, or global health and wellbeing, or AI, is not in fact the best way to make the world a better place, I would prefer to see a post arguing this explicitly. Peter seems to be imagining that it's enough simply to build a large enough network of willing and capable volunteers, analogous to starting a company with the idea that once you have hired enough of the very best people across all continents, the need to come up with a product will solve itself.

tc · 5y

This is an interesting idea that sands off some of the unfortunate Pareto-suboptimal edges of prioritarianism. But it has some problems.

Ex-ante prioritarianism looks good in the example cases given, where it gives an answer that disagrees with regular prioritarianism but agrees with utilitarianism. However, the cases where ex-ante prioritarianism disagrees with both are much less appealing.

For instance, consider an extension of your experiment:

Suppose there are two people who are equally well off, and you are considering benefitting exactly one of them by a fixed amount (the benefit would be the same regardless of who receives it).

Suppose there are two people, A and B, who are equally well off with utility 100, and we have a choice between two lotteries. In Lottery 1, A gets a benefit of 100 with certainty, while B gets nothing. In Lottery 2, either A gets 50 (probability 0.4), or B gets 50 (probability 0.4), or no one gets anything (probability 0.2).

Prioritarianism prefers Lottery 1 to Lottery 2, since a certain outcome of (200, 100) is preferred to an 80% chance of (150, 100) and a 20% chance of (100, 100); this holds for any increasing concave weighting of welfare.

Utilitarianism of course prefers Lottery 1, with expected total utility 300, to Lottery 2, with expected total utility 240.
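To make the arithmetic concrete, here is a quick sketch; g(x) = sqrt(x) is just one illustrative concave weighting, and as noted above any increasing concave g gives the same ranking here:

```python
import math

# Illustrative concave weighting; the ranking below holds for any
# increasing concave g, since g(200) >= g(140) >= 0.8*g(150) + 0.2*g(100)
# (monotonicity, then concavity at 140 = 0.8*150 + 0.2*100).
def g(x):
    return math.sqrt(x)

# Each lottery is a list of (probability, (welfare_A, welfare_B)) outcomes.
lottery_1 = [(1.0, (200, 100))]
lottery_2 = [(0.4, (150, 100)), (0.4, (100, 150)), (0.2, (100, 100))]

def ex_post_prioritarian(lottery):
    # Expected value, over outcomes, of the sum of weighted welfares.
    return sum(p * (g(a) + g(b)) for p, (a, b) in lottery)

def utilitarian(lottery):
    # Expected total welfare.
    return sum(p * (a + b) for p, (a, b) in lottery)

print(ex_post_prioritarian(lottery_1))  # ~24.14
print(ex_post_prioritarian(lottery_2))  # ~21.80 -> prioritarianism prefers Lottery 1
print(utilitarian(lottery_1))           # 300.0
print(utilitarian(lottery_2))           # 240.0  -> utilitarianism prefers Lottery 1
```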

But a sufficiently concave ex-ante prioritarianism prefers Lottery 2, because B's lower expected welfare in Lottery 1 (100, versus A's 200) receives a high marginal weight: raising B's expectation to 120 outweighs lowering A's to 120.
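A sketch of the ex-ante calculation; g(x) = -x^-4 is one assumed example of a "sufficiently concave" weighting (milder choices like sqrt or log still prefer Lottery 1 here):

```python
# Ex-ante prioritarianism applies g to each person's *expected* welfare.
# g(x) = -x**-4 is an assumed example of a sufficiently concave weighting.
def g_steep(x):
    return -x ** -4

# Lottery 1: E[A] = 200, E[B] = 100.  Lottery 2: E[A] = E[B] = 120.
value_1 = g_steep(200) + g_steep(100)  # ~ -1.06e-8
value_2 = g_steep(120) + g_steep(120)  # ~ -9.65e-9
print(value_2 > value_1)  # True -> ex-ante prioritarianism prefers Lottery 2
```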

It seems perverse to prefer a lottery every possible outcome of which is worse, on both utilitarian and prioritarian grounds, than the alternative's certain outcome, just to give B a chance to be the one who ends up on top.