Stijn

PhD in physics (thermodynamics of ecosystems) and in moral philosophy (animal rights), master's in economics, researcher in health and welfare economics at KU Leuven, president of EA Belgium, environmental footprint analyst at Ecolife

Comments

The problem of possible populations: animal farming, sustainability, extinction and the repugnant conclusion

Yes, my theory favours B, assuming that those 100 billion additional people have an expected welfare higher than the threshold, that the higher X-risk in world A does not in expectation decrease the welfare of existing people, and that the negative welfare (in absolute terms) of having a miserable life is less than ten times the positive welfare of currently existing people in world A. In that case, the welfare added by those additional people outweighs the welfare lost by the current people. In other words: if there are so many extra future people who are so happy, we really should sacrifice a lot in order to generate that outcome.
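A back-of-the-envelope version of that comparison, with purely illustrative numbers (nothing in the thread fixes them), just to make the inequality explicit:

```python
# Illustrative sketch only: all numbers are hypothetical, and the simple
# inequality is my reading of the condition, not a formal statement of it.
current_people = 10e9        # people existing in both worlds (assumed)
welfare_loss_each = 5        # expected welfare each of them loses in B (assumed)
extra_people = 100e9         # additional people in world B
welfare_each = 1             # their expected welfare, assumed above the threshold

# World B is favoured when the welfare added by the extra people
# exceeds the welfare lost by the people who exist either way.
prefer_B = extra_people * welfare_each > current_people * welfare_loss_each
print(prefer_B)  # True with these numbers: 100e9 > 50e9
```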

However, the question is whether we would set the threshold lower than the welfare of those future people. It is possible that most current people are die-hard person-affecting utilitarians who care only about making people happy rather than making happy people. In that case, when facing a choice between worlds A and B, people may democratically decide to set a very high threshold, which means they prefer world A.

The problem of possible populations: animal farming, sustainability, extinction and the repugnant conclusion

Hi Kevin,

Thanks for the comment. My theory mostly violates that neutrality principle: all else equal, adding a person to the world who has negative welfare is bad, adding a person who has welfare higher than threshold T is good, and, in its lexical extension, adding a person with welfare between 0 and threshold T is good (the lexical extension says that if two states are equally good in terms of total welfare excluding the welfare of possible people between 0 and T, then the state with the highest total welfare, including that of all possible people, is the best).
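As a rough sketch of how that lexical rule could be operationalised (the encoding and names are my own, not anything from the post itself): rank states by total welfare excluding possible people with welfare between 0 and T, and break ties by total welfare including everyone.

```python
# Minimal sketch, assuming each person is a (welfare, is_necessary) pair
# and the threshold T is given; function names are mine.
def lexical_value(people, T):
    # Primary value: total welfare, excluding possible people whose
    # welfare lies between 0 and T.
    primary = sum(w for w, necessary in people if necessary or w < 0 or w > T)
    # Tiebreaker: total welfare including all possible people.
    total = sum(w for w, _ in people)
    return (primary, total)  # tuples compare lexically in Python

# Choosing the best state from a choice set:
# best = max(choice_set, key=lambda people: lexical_value(people, T))
```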

There is indeed an apparent intransitivity in my theory, but it is not a real or serious intransitivity, as it is avoided in the same way that dynamic inconsistency is avoided: by considering the choice sets. Worlds A, B and C are equally good when you consider the full choice set {A,B,C}, but once that extra person is added, the choice set reduces to {B,C}, and then C is better than B (the extra person becomes a necessary person in choice set {B,C}). The crucial thing is that the 'better than' relation depends on the choice set, the set of all available states. This excludes the serious 'money pump' intransitivities. In the full choice set {A,B,C}, I am indifferent between A and B, so I'm willing to switch from A to B. Now I prefer C over B (because that extra person has a higher welfare in C), and hence I'm willing to pay to switch from B to C. But as the choice set is now reduced to {B,C}, after choosing C, I can no longer switch back to A, even if I was initially indifferent between C and A. In the lexical extension of my theory, I would end up with world C.

The problem of possible populations: animal farming, sustainability, extinction and the repugnant conclusion

My theory would be like critical level utilitarianism, where necessary people, possible people with negative welfare, and possible people with high positive welfare have a critical level of zero, and possible people with low positive welfare have a critical level equal to their own welfare. So people can have different critical levels, and the critical level might depend on the welfare of the person.
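A small sketch of that variable-critical-level rule as I would formalise it (the encoding and the example numbers are my own assumptions):

```python
# Sketch: critical levels per the description above. Necessary people and
# possible people with negative or above-threshold welfare get level 0;
# possible people with low positive welfare get a level equal to their welfare.
def critical_level(welfare, necessary, T):
    if necessary or welfare <= 0 or welfare > T:
        return 0
    return welfare  # low positive welfare: the person adds nothing on net

def state_value(people, T):
    return sum(w - critical_level(w, necessary, T) for w, necessary in people)

# Hypothetical example with T = 50: a possible person at welfare 30 adds
# nothing, while a possible person at welfare 70 adds their full welfare.
print(state_value([(80, True), (30, False), (70, False)], T=50))  # 150
```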

The problem of identity could become difficult if we consider identity as something fluid or vague. If, for example, copying a person were possible (a kind of teleportation, but without destroying the source person): which of the two copies is the necessary person and which is the possible person? I guess the two copies have to fight that out between themselves. In general: only when person A in state X identifies herself with a unique person B in state Y, and B identifies herself with A, are persons A and B considered identical. A necessary person is a person who is able to identify herself with a unique person in every other available state.

The problem of possible populations: animal farming, sustainability, extinction and the repugnant conclusion

That's a good summary, except that the threshold is chosen democratically by those who definitely exist. If these people choose not to ignore the people who don't definitely exist and have welfare between 0 and T, then it reduces to total utilitarianism.

The most successful EA podcast of all time: Sam Harris and Will MacAskill (2020)

Yep, in my new EA Fellowship group, one participant also mentioned that podcast as basic inspiration to join EA. Proof by anecdote.

Teruji Thomas, 'The Asymmetry, Uncertainty, and the Long Term'

I think the beatpath method to avoid intransitivity still results in a sadistic repugnant conclusion. Consider three situations. In situation 1, one person exists with high welfare 100. In situation 2, that person gets welfare 400, and 1000 additional people are added with welfare 0. In situation 3, those thousand people have welfare 1, i.e. small but positive (lives barely worth living), and the first person now gets a negative welfare of -100. Total utilitarianism says that situation 3 is best, with total welfare 900. But comparing situations 1 and 3, I would strongly prefer situation 1, with one happy person. Choosing situation 3 is both sadistic (the one person gets negative welfare) and repugnant (this welfare loss is compensated by a huge number of lives barely worth living). Looking at harms: in situation 1, the one person has 300 units of harm (welfare 400 in situation 2 compared to 100 in situation 1). In situation 2, the 1000 additional people each have one unit of harm, which totals 1000 units. In situation 3, the first person has 200 units of harm (-100 in situation 3 compared to +100 in situation 1). According to person-affecting views, we have an intransitivity. But Schulze's beatpath method, Tideman's ranked pairs method, the minimax Condorcet method, and other selection methods to avoid intransitivity select situation 3 if situation 2 is an available option (and would select situation 1 if situation 2 were not available, violating independence of irrelevant alternatives).
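To make the claim checkable, here is a rough sketch of the whole setup in code. The welfare profiles are exactly the ones above; the person-affecting harm counts and the use of harm differences as defeat strengths are my assumptions about how the beatpath (Schulze) comparison would be run:

```python
# Sketch, not a definitive implementation: Schulze beatpath over pairwise
# person-affecting harm comparisons. The 'complain' hook (used further below)
# lets a person count less than their full harm; by default everyone
# complains up to their full harm.
def schulze_winner(situations, complain=lambda situation, person, harm: harm):
    names = list(situations)

    def counted_harm(a, b):
        # Total (complaint-weighted) harm in situation a relative to b,
        # counting only people who exist in both (None = does not exist).
        return sum(complain(a, i, max(0, wb - wa))
                   for i, (wa, wb) in enumerate(zip(situations[a], situations[b]))
                   if wa is not None and wb is not None)

    # d[a][b]: strength of a's defeat of b; assumed to be the harm difference.
    d = {a: {b: 0 for b in names} for a in names}
    for a in names:
        for b in names:
            if a != b:
                ha, hb = counted_harm(a, b), counted_harm(b, a)
                if ha < hb:  # less harm in a => a defeats b
                    d[a][b] = hb - ha

    # Strongest beatpaths via Floyd-Warshall widest paths.
    p = {a: dict(d[a]) for a in names}
    for k in names:
        for i in names:
            for j in names:
                if i != j and j != k and k != i:
                    p[i][j] = max(p[i][j], min(p[i][k], p[k][j]))

    # Winners: situations not beaten along any beatpath.
    return [a for a in names if all(p[a][b] >= p[b][a] for b in names if b != a)]

situations = {
    1: [100] + [None] * 1000,  # one person at welfare 100; the rest don't exist
    2: [400] + [0] * 1000,     # same person at 400, plus 1000 people at welfare 0
    3: [-100] + [1] * 1000,    # that person at -100, the 1000 people at welfare 1
}
print(schulze_winner(situations))                          # [3]
print(schulze_winner({n: situations[n] for n in (1, 3)}))  # [1] without option 2
```

With these numbers the defeats form the cycle 2 beats 1 (strength 300), 3 beats 2 (strength 500), 1 beats 3 (strength 200), and the strongest-beatpath winner is situation 3; remove situation 2 and situation 1 wins instead, which is the violation of independence of irrelevant alternatives mentioned above.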

Perhaps we can solve this issue by considering complaints instead of harms. In each situation X, a person can complain against choosing that situation X over another situation Y. The complaint is a value between zero and the harm that the person suffers in situation X compared to situation Y; each person can choose how much to complain. For example, if the first person were to complain fully in situation 1, then situation 3 would be selected, and in that situation the first person is worse off. Hence, learning about this sadistic repugnant conclusion, the first person can decide not to complain in situation 1, as if not harmed in that situation. Without the complaint, situation 1 will be selected. We have to let people freely choose how much they want to complain in the different situations.
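Continuing the sketch above, withdrawing the first person's complaint in situation 1 (a hypothetical choice, encoded via the 'complain' hook) flips the winner:

```python
# The first person (index 0) withdraws their complaint in situation 1;
# everyone else still complains up to their full harm.
def complain(situation, person, harm):
    return 0 if situation == 1 and person == 0 else harm

print(schulze_winner(situations, complain))  # [1]: situation 1 is now selected
```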

What is the argument against a Thanos-ing all humanity to save the lives of other sentient beings?

I wrote some counter-arguments explaining why we could prefer human lives from an impartial (antispeciesist) perspective: https://stijnbruers.wordpress.com/2020/02/25/arguments-for-an-impartial-preference-for-human-lives/

Why EA groups should not use “Effective Altruism” in their name.

Good points, but I'm a tiny bit skeptical. Consider the people who join the group under the name PISE but would not have joined when it was called Effective Altruism Erasmus: I wonder whether that is really due to the reasons mentioned (that the -ism suffix sounds religious, makes the name too unfamiliar or too difficult, is associated with elitism...). If it is, I would be surprised if those people turn out to be potentially high-impact effective altruists. To put it overly simplistically: suppose someone would not join because of the word altruism in the name. The person dislikes the word or does not even know what it means (like I don't know what "Marnaism" means). How can such a person (who has such a cognitive bias towards words, is so hypersensitive to the use of a single word, thinks an -ism word is too difficult, makes strange associations with religion, or does not even know what altruism means) be expected to become a rational, intelligent, self-critical, scientifically literate, high-impact effective altruist? Are there members of the PISE group who would conclude that they would not have joined if the name had been different? Do the group members realize that?

Differences in the Intensity of Valenced Experience across Species

About split brains: those studies are about cognition (having beliefs about what is being seen). Does anyone know whether the same happens with affect (valenced experience)? For example: the left hemisphere sees a horrible picture, the right hemisphere sees a picture of the most joyful vacation memory. Now ask the left and right hemispheres how they feel. I imagine such experiments have already been done? My expectation is that if you ask the hemisphere that sees the vacation picture, it will respond that the picture, strangely enough, gives the subject a weird, unexplainable, kind of horrible feeling instead of pure joy. As if feelings are still unified. Does anyone know of such studies?

Differences in the Intensity of Valenced Experience across Species

That anti-proportionality argument seems tricky to me. It sounds comparable to the following example. You see a grey picture, composed of small black and white pixels (the white pixels correspond to the neuron firings in your example). The greyness depends on the proportion of white pixels. Now, what happens when you remove the black pixels? That is undefined. It could be that only white pixels are left and you now see 100% whiteness. Or the absent black pixels are still seen as black, which means the same greyness as before. Or removing the black pixels corresponds to making them transparent, and then who knows what you'll see?
