Frank_R


Comments

Magnitude of uncertainty with longtermism

I think that the case for longtermism gets stronger if you consider truly irreversible catastrophic risks, for example human extinction. Let's say that there is a 10% chance of human extinction. Suppose you propose a policy that reduces this risk by two percentage points but introduces a new extinction risk with a probability of 1%. Then it would be wise to enact this policy.

This kind of reasoning would probably be wrong if you had a 2% chance of a very good outcome, such as unlimited cheap energy, but an additional extinction risk of 1%.
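To spell out the arithmetic behind the comparison (a rough sketch of my own, treating the risks as additive percentage points):

$$P_{\text{extinction}} \approx 10\% - 2\% + 1\% = 9\% < 10\%,$$

so the first policy lowers the total extinction probability, whereas in the second case the extinction probability rises to roughly 11%, and the 2% chance of a very good outcome cannot compensate for that, because extinction forecloses all future value.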

Moreover, you cannot argue that everything will be OK several thousand years in the future if humankind is eradicated instead of "just" reduced to a much smaller population size. 

Your forum post and blog post contain many interesting thoughts, and I think that the role of high variance in longtermism is indeed underexplored. Nevertheless, I think that even if everything you have written is correct, it would still be sensible to limit global warming and care about extinction risks.

Anti-Aging and EA (Recorded Talk)

Thank you for your detailed answer. I expect that other people here have similar questions in mind. Therefore, it is nice to see your arguments written up.

Anti-Aging and EA (Recorded Talk)

Thank you for your answer and for the links to the other forum posts.

Anti-Aging and EA (Recorded Talk)

How would you answer the following arguments?

  1. Existential risk reduction is much more important than life extension, since it is possible to solve aging a few generations later, whereas humankind's potential, which could be enormous, is lost after an extinction event.

  2. From a utilitarian perspective, it does not matter whether there are ten generations of people living 70 years or one generation of people living 700 years, as long as they are happy. Therefore, the moral value of life extension is neutral.

I am not wholly convinced of the second argument myself, but I do not see where exactly the logic goes wrong. Moreover, I want to play the devil's advocate, and I am curious about your answer.

How to assign numerical values to individual welfare?

My question was mainly the first one. (Are 20 insects happier than one human?) Of course similar problems arise if you compare the welfare of humans. (Are 20 people whose living standard is slightly above subsistence happier than one millionaire?)

The reason why I have chosen interspecies comparison as an example is that it is much harder to compare the welfare of members of different species. At least you can ask humans to rate their happiness on a scale from 1 to 10. Moreover, the moral consequences of different choices for the function f are potentially greater.

The forum post seems to be what I have asked for, but I need some time to read through the literature. Thank you very much! 

Digital People Would Be An Even Bigger Deal

You mention that the ability to create digital people could lead to dystopian outcomes or a Malthusian race to the bottom. In my humble opinion, bad outcomes could only be avoided if there were a world government that monitors what happens on every computer that is capable of running digital people. Of course, such a powerful government is a risk of its own.

Moreover, I think that a benevolent world government can be realised only several centuries in the future, while mind uploading could be possible by the end of this century. Therefore, I believe that bad outcomes are much more likely than good ones. I would be glad to hear any arguments for why this line of reasoning could be wrong.

‘High-hanging Fruits’ and Coordination

I had similar thoughts, too. My scenario was that at a certain point in the future all technologies that are easy to build will have been discovered, and that you need multi-generational projects to develop further technologies. Just to name an example, you can think of a Dyson sphere. If the sun were enclosed by a Dyson sphere, each individual would have a lot more energy available, or there would be enough room for many additional individuals. Obviously, you need a lot of money before you get the first non-zero payoff, and the potential payoff could be large.

Does this mean that effective altruists should prioritise building a Dyson sphere? There are at least three objections:

  1. According to some ethical theories (person-affecting views, certain brands of suffering-focused ethics) it may not be desirable to build a Dyson sphere.
  2. It is not clear whether it is possible to improve existing technologies piece by piece such that you obtain a Dyson sphere in the end. Maybe you start with space tourism, then hotels in orbit, then giant solar plants in space, etc. It could even be the case that each intermediate step is profitable, such that market forces lead to a Dyson sphere without the EA movement spending resources.
  3. If effective altruism becomes too closely associated with speculative ideas, it could harm the growth of the movement.

Please do not misunderstand me. I am very sympathetic towards your proposal, but the difficulties should not be underestimated, and much more research is necessary before you can say with sufficient certainty that the EA movement as a whole should prioritise some kind of high-hanging fruit.

The problem of possible populations: animal farming, sustainability, extinction and the repugnant conclusion

Thank you for sharing your thoughts. What do you think of the following scenario?

In world A the risk of an existential catastrophe is fairly low and most currently existing people are happy.

In world B the existential risk is slightly lower. In expectation, there will be 100 billion additional people (compared to A) living in the far future whose lives are better than those of people today. However, this reduction of risk is so costly that most of the currently existing people have miserable lives.

Your theory probably favours option B. Is this intended?

Open Thread: July 2021

Hi,

maybe you will find this overview of longtermism interesting, if you have not already come across it:

Intro to Longtermism | Fin Moorhouse
