Unfortunately, I have not found time to listen to the whole podcast, so I may be repeating things you have already said. The reason why everyone assumes that utility can be measured by a real number is the von Neumann-Morgenstern utility theorem: if you have a relation of the kind "outcome x is worse than outcome y" that satisfies certain axioms, you can construct a utility function. One of the axioms is called continuity:

"If x is worse than y and y is worse than z, then there exists a probability p, such that a lottery where you receive x with a probability of p and z with a probability of (1-p), has the same preference as y."

If x is a state of extreme suffering and you believe in suffering-focused ethics, you might disagree with the above axiom, and thus there may be no utility function. A loophole could be to replace the real numbers with another ordered field that contains infinite numbers. Then you could assign to x a utility of -Omega, where Omega is infinitely large.
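
To make the loophole concrete, here is a minimal sketch in Python (my own illustration, not something from the podcast; the names `Utility` and `lottery` are invented for this example). Utilities are written as a*Omega + b and compared lexicographically, so an outcome assigned -Omega is worse than every purely finite outcome, and no lottery that puts positive probability on it can be indifferent to a finite outcome, which is exactly where the continuity axiom breaks down.

```python
# Toy model of utilities in an ordered field with an infinite element Omega.
# A utility is written as a*Omega + b and compared lexicographically:
# the Omega coefficient dominates, the finite part only breaks ties.

class Utility:
    def __init__(self, omega_coeff: float, finite: float):
        self.omega_coeff = omega_coeff  # coefficient of the infinite unit Omega
        self.finite = finite            # ordinary real-valued part

    def __lt__(self, other: "Utility") -> bool:
        return (self.omega_coeff, self.finite) < (other.omega_coeff, other.finite)

    def __repr__(self) -> str:
        return f"{self.omega_coeff}*Omega + {self.finite}"


def lottery(p: float, a: Utility, b: Utility) -> Utility:
    """Expected utility of receiving a with probability p and b with probability 1-p."""
    return Utility(p * a.omega_coeff + (1 - p) * b.omega_coeff,
                   p * a.finite + (1 - p) * b.finite)


extreme_suffering = Utility(-1, 0)   # utility -Omega
ordinary_bad_day = Utility(0, -10)   # a merely finite bad outcome
great_outcome = Utility(0, 1e9)      # a very good but finite outcome

assert extreme_suffering < ordinary_bad_day < great_outcome

# Continuity fails: for every p > 0 the lottery keeps a negative Omega term,
# so it stays strictly worse than the finite outcome ordinary_bad_day.
for p in (0.5, 0.01, 1e-9):
    assert lottery(p, extreme_suffering, great_outcome) < ordinary_bad_day
```

The same idea is sometimes described in terms of lexicographic or non-Archimedean preferences; the sketch only illustrates why the usual real-valued representation theorem no longer applies.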

Unfortunately, I do not have time for a long answer, but I can understand very well how you feel. Things that I find helpful are practising mindfulness and/or stoicism and taking breaks from the internet. You said that you find it difficult to make future plans. In my experience, it can calm you down to focus on your career / family / retirement even if it is possible that AI timelines are short. If it turns out that the fear of AI is like the fear of grey goo in the 90s, making future plans is better anyway.

You may find this list of mental health suggestions helpful:

https://www.lesswrong.com/posts/pLLeGA7aGaJpgCkof/mental-health-and-the-alignment-problem-a-compilation-of

Do not be afraid to seek help if you develop serious mental health issues.

I have switched from academia to software development and I can confirm most of what you have written from my own experience. Although I am not very involved in the AI alignment community, I think that it may have problems similar to academia's, mostly because the people interested in AI alignment are geographically scattered and there are too few senior researchers to advise all the new people entering the field.

In my opinion, it is not clear whether space colonization increases or decreases x-risk. See "Dark Skies" by Daniel Deudney or the article "Space colonization and suffering risks: Reassessing the 'maxipok rule'" by Torres for a negative view. Therefore, it is hard to say whether SpaceX or Bezos's Blue Origin are net positive or negative.

Moreover, Google founded the life extension company Calico and Bezos invested in Unity Biotechnology. Although life extension is not a classical EA cause area, it would be strange if the moral value of indefinite life extension were only a small positive or negative number.

I want to add that sleep training is a hot-button issue among parents. There is some evidence that starting to sleep-train your baby too early can be traumatic. My advice is simply to gather evidence from different sources before making a choice.

Otherwise, I agree with Geoffrey Miller's reply. Your working hours as a parent are usually shorter, but you learn how to set priorities and work more effectively.

Thank you for writing this post. I agree with many of your arguments, and criticisms like yours deserve more attention. Nevertheless, I still call myself a longtermist, mainly for the following reasons:

  • There exist longtermist interventions that are good with respect to a broad range of ethical theories and views about the far future, e.g. screening wastewater for unknown pathogens.
  • Sometimes it is possible to gather further evidence for counter-intuitive claims. For example, you could experiment with existing large language models and search for signs of misaligned behaviour.
  • There may exist unknown longtermist interventions that satisfy all of our criteria. Therefore, a certain amount of speculative thinking is OK as long as you keep in mind that most speculative theories will die.

All in all, you should keep a balance between overly conservative and overly speculative thinking.

In my opinion, the philosophy that you have outlined should not be dismissed out of hand, since it contains several important points. Many people in EA, including me, want to avoid the repugnant conclusion and do not think that wireheading is valuable. Moreover, more holistic ethical theories may also lead to important insights: sometimes an entity has emergent properties that are not shared by its parts.

I agree that it is hard to reconcile animal suffering with a Nietzschean world view. What is even worse is that it may lead to opinions like "It does not matter if there is a global catastrophe as long as the elite survives".

It could be possible to develop a more balanced philosophy with the help of moral uncertainty, or by simply stating that avoiding suffering and pursuing excellence are both important values. Finally, you could point out that it is not plausible that humankind can flourish while many humans suffer. After all, you cannot be healthy if most of your organs are sick.

I have thought about issues similar to those in your article, and my conclusions are broadly the same. Unfortunately, I have not written anything down, since thinking about longtermism is something I do besides my job and family. I have some quick remarks:

  • Your conclusions in Section 6 are, in my opinion, pretty robust, even if you use a more general mathematical framework.
  • It is very unclear whether space colonization increases or decreases existential risk. The main reason is that it is probably technologically feasible to send advanced weapons across astronomical distances, while building trust across such distances is hard.
  • Solving the AI alignment problem helps, but you need an exceptionally well-aligned AI to realize the "time of perils" scenario. Even if an AI does not kill everyone immediately, it is not clear whether it can stick to positive human values for several million years and coordinate with AIs in space colonies that may have different values.

Since I have seen so many positive reactions to your article, I am wondering whether it would have some impact if I found the time to write down more of my thoughts.

In my opinion, there is a probability of >10% that you are right, which means that AGI will be developed soon and some of the hard problems mentioned above have to be solved. Do you have any reading suggestions for people who want to find out whether they are able to make progress on these questions? There is a lot of material on the MIRI website; a guide along the lines of "You should read this first", "This is intermediate material", and "This is cutting-edge research" would be nice.

Thank you for the link to the paper. I find Alexander Vilenkin's theoretical work very interesting.
