Frank_R

175 karma · Joined May 2021

Comments (42)

Frank_R
7mo · 41

In my opinion, it is not clear whether space colonization increases or decreases x-risk. See "Dark Skies" by Daniel Deudney or the article "Space colonization and suffering risks: Reassessing the 'maxipok rule'" by Torres for a negative view. Therefore, it is hard to say whether SpaceX or Bezos' Blue Origin are net positive or net negative.

Moreover, Google founded the life extension company Calico, and Bezos invested in Unity Biotechnology. Although life extension is not a classic EA cause area, it would be strange if the moral value of indefinite life extension were only a small positive or negative number.

Frank_R
8mo · 40

I want to add that sleep training is a hot-button issue among parents. There is some evidence that starting to sleep-train your baby too early can be traumatic. My advice is simply to gather evidence from different sources before making a choice.

Otherwise, I agree with Geoffrey Miller's reply. Your working hours as a parent are usually shorter, but you learn how to set priorities and work more effectively.

Frank_R
9mo · 40

Thank you for writing this post. I agree with many of your arguments, and criticisms like yours deserve more attention. Nevertheless, I still call myself a longtermist, mainly for the following reasons:

  • There exist longtermist interventions that are good with respect to a broad range of ethical theories and views about the far future, e.g. screening wastewater for unknown pathogens.
  • Sometimes it is possible to gather further evidence for counter-intuitive claims. For example, you could experiment with existing large language models and search for signs of misaligned behaviour.
  • There may exist as-yet-unknown longtermist interventions that satisfy all of our criteria. Therefore, a certain amount of speculative thinking is OK, as long as you keep in mind that most speculative theories will die.

All in all, you should strike a balance between overly conservative and overly speculative thinking.

Frank_R
9mo · 40

In my opinion, the philosophy that you have outlined should not simply be dismissed, since it contains several important points. Many people in EA, including me, want to avoid the repugnant conclusion and do not think that wireheading is a valuable thing. Moreover, more holistic ethical theories may also lead to important insights. Sometimes an entity has emergent properties that are not shared by its parts.

I agree that it is hard to reconcile animal suffering with a Nietzschean worldview. What's even worse is that it may lead to opinions like "It does not matter if there is a global catastrophe as long as the elite survives".

It could be possible to develop a more balanced philosophy with the help of moral uncertainty, or by simply stating that avoiding suffering and striving for excellence are both important values. Finally, you could point out that it is not plausible that humankind can flourish while many humans suffer. After all, you cannot be healthy if most of your organs are sick.

Frank_R
10mo · 60

I have thought about issues similar to those in your article, and my conclusions are broadly the same. Unfortunately, I have not written anything down, since thinking about longtermism is something I do alongside my job and family. I have some quick remarks:

  • Your conclusions in Section 6 are, in my opinion, pretty robust, even if you use a more general mathematical framework.
  • It is very unclear if space colonization increases or decreases existential risk. The main reason is that it is probably technologically feasible to send advanced weapons across astronomical distances, while building trust across such distances is hard.  
  • Solving the AI alignment problem helps, but you need an exceptionally well-aligned AI to realize the "time of perils" scenario. If an AI does not "kill everyone immediately", it is not clear whether it can stick to positive human values for several million years and coordinate with AIs in space colonies that may have different values.

Since I have seen so many positive reactions to your article, I am wondering whether it would have some impact if I found the time to write more about my thoughts.

Frank_R
1y · 100

In my opinion, there is a probability of >10% that you are right, which means that AGI will be developed soon and some of the hard problems mentioned above will have to be solved. Do you have any reading suggestions for people who want to find out whether they are able to make progress on these questions? There is a lot of material on the MIRI website. Something like "You should read this first", "This is important intermediate material", and "This is cutting-edge research" would be nice.

Thank you for the link to the paper. I find Alexander Vilenkin's theoretical work very interesting.

Let us assume that a typical large but finite volume contains some number of happy simulations of you and some number of suffering copies of you, perhaps Boltzmann brains or simulations made by a malevolent agent. If the universe is infinite, you have infinitely many happy and infinitely many suffering copies of you, and it is unclear how to interpret this result.
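
To make the difficulty concrete, here is a minimal sketch of why the naive totals break down (the notation $n_h(V)$ and $n_s(V)$ for the number of happy and suffering copies within a volume $V$ is my own, not from the paper):

\[
W(V) = n_h(V) - n_s(V) \ \text{ is well defined for finite } V, \qquad \lim_{V \to \infty} \big[\, n_h(V) - n_s(V) \,\big] = \text{``}\infty - \infty\text{''}
\]

The limit is an indeterminate form, and even the density ratio $\lim_{V \to \infty} n_h(V)/n_s(V)$ depends on the order in which the regions of space are enumerated, much as a conditionally convergent series can be rearranged to give any sum.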

I see two problems with your proposal:

  1. It is not clear whether a simulation of you in a patch of spacetime that is not causally connected to our part of the universe is the same as you. If you care only about the total amount of happy experience, this would not matter, but if you care about personal identity, it becomes a non-trivial problem.
  2. You probably assume that the multiverse is infinite. If this is the case, you can simply assume that for every copy of you that lives for N years, another copy of you that lives for N+1 years appears somewhere by chance. In that case, there would be no need to perform any action.

I am not against your ideas, but I am afraid that there are many conceptual and physical problems that have to be solved first. What is even worse is that there is no universally accepted method for resolving these issues. So a lot of further research is necessary.

Thank you for your answers. With better brain preservation and a more detailed understanding of the mind, it may be possible to resurrect recently deceased persons. I am more skeptical about the possibility of resurrecting a peasant from the Middle Ages by simulating the universe backwards, but of course these are different issues.
