Unfortunately, I do not have time for a long answer, but I can understand very well how you feel. Things I find helpful are practising mindfulness and/or Stoicism and taking breaks from the internet. You said that you find it difficult to make future plans. In my experience, it can calm you down to focus on your career / family / retirement even if it is possible that AI timelines are short. And if fear of AI turns out to be like the fear of grey goo in the 90s, making future plans is the better choice anyway.
You may find this list of mental health suggestions hel...
I have switched from academia to software development and can confirm most of what you have written from my own experience. Although I am not very involved in the AI alignment community, I think that there may be problems similar to those in academia, mostly because the people interested in AI alignment are geographically scattered and there are too few senior researchers to advise all the new people entering the field.
In my opinion, it is not clear whether space colonization increases or decreases x-risk. See "Dark Skies" by Daniel Deudney or the article "Space colonization and suffering risks: Reassessing the 'maxipok rule'" by Torres for a negative view. Therefore, it is hard to say whether SpaceX or Bezos's Blue Origin are net positive or net negative.
Moreover, Google founded the life extension company Calico, and Bezos invested in Unity Biotechnology. Although life extension is not a classical EA cause area, it would be strange if the moral value of indefinite life extension were only a small positive or negative number.
I want to add that sleep training is a hot-button issue among parents. There is some evidence that starting to sleep-train your baby too early can be traumatic. My advice is simply to gather evidence from different sources before making a choice.
Otherwise, I agree with Geoffrey Miller's reply. Your working hours as a parent are usually shorter, but you learn how to set priorities and work more effectively.
Thank you for writing this post. I agree with many of your arguments, and criticisms like yours deserve more attention. Nevertheless, I still call myself a longtermist, mainly for the following reasons:
In my opinion, the philosophy that you have outlined should not simply be dismissed, since it contains several important points. Many people in EA, including me, want to avoid the repugnant conclusion and do not think that wireheading is valuable. Moreover, more holistic ethical theories may also lead to important insights. Sometimes an entity has emergent properties that are not shared by its parts.
I agree that it is hard to reconcile animal suffering with a Nietzschean world view. What's even worse is that it may lead to opinions like "It do...
I have thought about issues similar to those in your article, and my conclusions are broadly the same. Unfortunately, I have not written anything down, since thinking about longtermism is something I do besides my job and family. I have some quick remarks:
In my opinion, there is a probability of >10% that you are right, which would mean that AGI will be developed soon and that some of the hard problems mentioned above have to be solved. Do you have any reading suggestions for people who want to find out whether they are able to make progress on these questions? There is a lot of material on the MIRI website. Something like "You should read this first.", "This is important intermediate material." and "This is cutting-edge research." would be nice.
Thank you for the link to the paper. I find Alexander Vilenkin's theoretical work very interesting.
Let us assume that a typical large but finite volume contains happy simulations of you and suffering copies of you, maybe Boltzmann brains or simulations made by a malevolent agent. If the universe is infinite, you have infinitely many happy and infinitely many suffering copies of yourself, and it is hard to say how to interpret this result.
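As a toy illustration of why interpretation is hard (the ordering here is purely illustrative): the fraction of happy copies depends on the order in which you enumerate them. Counting the copies as $H, S, H, S, \dots$ gives a limiting fraction of happy copies of

$$\lim_{n \to \infty} \frac{n}{2n} = \frac{1}{2},$$

while counting the very same copies as $H, H, S, H, H, S, \dots$ gives $\lim_{n \to \infty} \frac{2n}{3n} = \frac{2}{3}$. Both enumerations contain exactly the same infinite sets, so the "ratio of happy to suffering copies" has no ordering-independent value.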
I see two problems with your proposal:
Thank you for your answers. With better brain preservation and a more detailed understanding of the mind, it may be possible to resurrect recently deceased persons. I am more skeptical about the possibility of resurrecting a peasant from the Middle Ages by simulating the universe backwards, but of course these are different issues.
Could you elaborate on why we have to make choices before space colonisation if we want to survive beyond the end of the last stars? Until now, my opinion has been that we can "start solving heat death" a billion years in the future, while we have to solve AI alignment in the next 50–1,000 years.
Another thought of mine is that it is probably impossible to resurrect the dead by computing what the state of each neuron of a deceased person was at the time of their death. I think you need to measure the state of each particle in the present with a very high preci...
It should be mentioned that all (or at least most) ideas for surviving the heat death of the universe involve speculative physics. Moreover, you have to deal with infinities. If everyone is suffering but there is one sentient being that experiences a happy moment every million years, does this mean that there is an infinite amount of suffering and an infinite amount of happiness, and that the future is of neutral value? If any future with an infinite amount of suffering is bad, does this mean that it is good if sentient life does not exist forever? There is no obvious answer to these questions.
Other s-risks, which may or may not sound more plausible, are suffering simulations (maybe an AI comes to the conclusion that a good way to study humans is to simulate Earth at the time of the Black Death) or suffering subroutines (maybe reinforcement learners that are able to suffer enable faster or more efficient algorithms).
I have noticed that there are two similar websites for mathematical jobs. www.mathjobs.org is operated by the American Mathematical Society and is mostly for positions at universities, although they also list jobs at other research institutions. www.math-jobs.com redirects you to www.acad.jobs, which has a broader focus: they also advertise government and industry jobs, as well as positions in computer science and other academic disciplines.
You have to register on both websites as an employer for several hundred dollars before you can po...
How much knowledge of AI alignment, apart from the right mathematical background, is necessary for this position? If the job is suitable for candidates without prior involvement in x-risks / longtermism / Effective Altruism, it may be a good idea to advertise it on sites such as mathjobs.org.
I forgot to mention that you should be careful about whether brain preservation increases or decreases the probability of suffering or existential risks. On the one hand, many patients waiting for whole brain emulation (WBE) could be a reason to push WBE forward without thinking deeply enough about the possible negative effects. On the other hand, if there are reasons to believe that some people alive today could live for millennia, this may encourage long-term thinking. Since I cannot determine the sign of the risk, I am cautiously in favour of brain preservation because of its positive near-term effects.
I don't disagree with you. Although I think that existential and global catastrophic risks are the most important cause area, there are good project ideas in the life extension community without easy access to venture capital. Since biological aging is a major source of suffering, life extension and brain preservation are worthwhile cause areas.
I have a few questions on the more practical side of brain preservation. Are there any organisations working on this problem with more room for funding? I know about the Brain Preservation Foundation and Nectome, but as an outsider it is hard to tell how active they are and what they could do with extra money.
In my opinion, it is very difficult for a company offering brain preservation to enter the market. At the beginning, there are possibly only a few customers scattered throughout the world. You will probably need a standby team at the bed of the te...
I think this discussion will become important in the future. On the one hand, I struggle a little bit to notice every post that is interesting to me. On the other hand, there is the danger that the EA movement starts to fragment if the forum is split. Longtermists could read only longtermist stuff, people interested in animal suffering only posts on animal advocacy, etc.
I agree strongly with what you have written, especially since, in my opinion, it is unlikely that there will be a liberal and/or pro-Western government in Russia even if Putin is replaced.
Do you have any suggestions for what an average person in a Western country can do? Of course, you can write to your representative that the borders should be opened for Russian emigrants. Unfortunately, I do not know whether this is really effective, since politicians probably get tons of mail.
In my opinion "the most controversial billionaire" is either Peter Thiel or Donald Trump. Otherwise, I agree with what you have written.
Estimates of Trump's wealth vary. He is certainly controversial, but I don't think his detractors view him as a billionaire.
Thank you for writing this post. I want to point out that your conclusions are highly dependent on your ethical and empirical assumptions. Here are some thoughts about what could change your conclusion:
Also, if you combine $1/ton with the estimated lives per ton from Bressler's paper, then you get $4,400 per life saved.
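As a sanity check of that arithmetic (using what I believe is Bressler's central estimate of about $2.26 \times 10^{-4}$ excess deaths per tonne of CO2, i.e. roughly one death per 4,400 tonnes):

$$\frac{\$1 \text{ per tonne}}{2.26 \times 10^{-4} \text{ deaths per tonne}} \approx \$4{,}400 \text{ per life saved.}$$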
Thank you for writing this piece! I think there should be a serious discussion about whether crypto is net positive or net negative for the world.
In my opinion, there are a few more ways in which crypto could contribute to existential risk. Since you can accept donations in Monero, it is much easier to make a living by spreading dangerous ideologies (human extinction is a worthy goal, political measures against existential risk are totalitarian, etc.). Of course, you can also support an atheist blogger in Iran or a whistleblower in China with crypto, but it is very hard to ...
I suggest the following thought experiment. Imagine wild animal suffering can be solved. Then it would be possible to populate a square mile with millions of happy insects instead of a few happy human beings. If the repugnant conclusion were accepted, the best world would be populated with as many insects as possible and only a few human beings who make sure that there is no wild animal suffering.
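To make this concrete with purely made-up numbers: if an insect life is worth $w_i = 0.01$ welfare units and a human life $w_h = 10$, then $10^6$ insects contribute $10^6 \times 0.01 = 10{,}000$ units while ten humans contribute only $10 \times 10 = 100$, so a total view prefers the insects by a factor of 100.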
Even more radically, the best thing to do would be to fill as much of the future light cone as possible with hedonium. Both scenarios do not match the moral intui...
An important factor is how many people in the EA movement are actively searching for EA jobs and how many applications they write per year. Maybe this would be a good question for the next EA survey.
Genomic mass screening of wastewater for unknown pathogens, as described here:
[2108.02678] A Global Nucleic Acid Observatory for Biodefense and Planetary Health (arxiv.org)
A few test sites can already help detect a new (natural or man-made) pandemic at an early stage. Nevertheless, there is room for a few billion dollars if you want to build a global screening network.
Unfortunately, I do not know whether there is any organisation working on this that needs funding.
I agree with Linch's comment, but I want to mention a further point. Let us suppose that the well-being of all non-human animals between now and the death of the sun is the most important value. This idea can be justified, since there are many more animals than humans.
Let us suppose furthermore that the future of human civilization has no impact on the lives of animals in the far future. [I disagree with this point, since it might be possible that future humans abolish wild animal suffering or, in the bad case, take wild animals with them when they coloniz...
There is a short piece on longtermism in Spiegel Online, which is probably the biggest news site in Germany:
Longtermism: Was ist das - Rettung oder Gefahr? ("Longtermism: What is it - salvation or danger?") - Kolumne - DER SPIEGEL
Google Translate:
As far as I know, this is the first time that longtermism has been mentioned in a major German news outlet. The author mentions some key ideas and acknowledges that short-term thinking is a big problem in society, but he is rather critical of the longtermist movement. F...
I think that it is not possible to delay technological progress if there are strong near-term and/or self-interested reasons to accelerate the development of new technologies.
As an example, let us assume that it is possible to stop biological aging within a timeframe of 100 years. Of course, you can argue that this is an irreversible change, which may or may not be good for humankind's long-term future. But I do not think that it is realistic to say "Let's fund Alzheimer's research and senolytics, but everything that prolongs life expectancy beyond 120 years will...
I think that it is possible that whole brain emulation (WBE) will be developed before AGI and that there are s-risks associated with WBE. It seems to me that most people in the s-risk community work on AI risks.
Do you know of any research that deals specifically with the prevention of s-risks from WBE? Since an emulated mind should resemble the original person, it should be difficult to tweak the code of the emulation such that extreme suffering is impossible. Although this may work for AGI, you probably need a different strategy for emulated minds.
Thank you very much for sharing your paper. I have heard somewhere that thorium reactors could be a big deal against climate change. The advantages would be that there are greater thorium reserves than uranium reserves and that you cannot use thorium to build nuclear weapons. Do you have an opinion on whether the technology can be developed fast enough and deployed worldwide?
I think that the case for longtermism gets stronger if you consider truly irreversible catastrophic risks, for example human extinction. Let's say that there is a 10% chance of the extinction of humankind. Suppose you suggest some policy that reduces this risk by 2 percentage points but introduces a new extinction risk with a probability of 1%. Then it would be wise to enact this policy.
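In numbers, treating the two risks as independent: the policy lowers the baseline risk from 10% to 8% and adds a new 1% risk, so the total risk becomes

$$1 - (1 - 0.08)(1 - 0.01) \approx 8.9\% < 10\%,$$

a net reduction of about 1.1 percentage points.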
This kind of reasoning would probably be wrong if you had a 2% chance of a very good outcome, such as unlimited cheap energy, but an additional extinction risk of 1%.
Moreover, you c...
Thank you for your detailed answer. I expect that other people here have similar questions in mind. Therefore, it is nice to see your arguments written up.
How would you answer the following arguments?
Existential risk reduction is much more important than life extension, since it is possible to solve aging a few generations later, whereas humankind's potential, which could be enormous, is lost after an extinction event.
From a utilitarian perspective, it does not matter whether there are ten generations of people living 70 years each or one generation of people living 700 years, as long as they are happy. Therefore, the moral value of life extension is neutral.
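The arithmetic behind the second argument is simply $10 \times 70 = 1 \times 700 = 700$ happy life-years in both cases.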
I am not wholly convinced of the second argument myself, but I do not see where exactly the logic goes wrong. Moreover, I want to play the devil's advocate, and I am curious about your answer.
Maybe you are interested in the following paper, which deals with similar questions as yours:
My question was mainly the first one. (Are 20 insects happier than one human?) Of course, similar problems arise if you compare the welfare of humans. (Are 20 people whose living standard is slightly above subsistence happier than one millionaire?)
The reason why I have chosen interspecies comparison as an example is that it is much harder to compare the welfare of members of different species. At least you can ask humans to rate their happiness on a scale from 1 to 10. Moreover, the moral consequences of different choices for the function f are potentially greater.
The forum post seems to be what I have asked for, but I need some time to read through the literature. Thank you very much!
You mention that the ability to create digital people could lead to dystopian outcomes or a Malthusian race to the bottom. In my humble opinion, bad outcomes could only be avoided if there were a world government that monitors what happens on every computer capable of running digital people. Of course, such a powerful government is a risk of its own.
Moreover, I think that a benevolent world government can be realised only several centuries in the future, while mind uploading could be possible by the end of this century. Therefore, I believe that bad outcomes are much more likely than good ones. I would be glad to hear whether you have some arguments why this line of reasoning could be wrong.
I have had similar thoughts, too. My scenario was that at a certain point in the future all technologies that are easy to build will have been discovered, and that you need multi-generational projects to develop further technologies. Just to name an example, you can think of a Dyson sphere. If the sun were enclosed by a Dyson sphere, each individual would have a lot more energy available, or there would be enough room for many additional individuals. Obviously, you need a lot of money before you get the first non-zero payoff, and the potential payoff could be...
Thank you for sharing your thoughts. What do you think of the following scenario?
In world A, the risk of an existential catastrophe is fairly low and most currently existing people are happy.
In world B, the existential risk is slightly lower. In expectation, 100 billion additional people (compared to A) will live in the far future, whose lives are better than those of the people today. However, this reduction of risk is so costly that most of the currently existing people have miserable lives.
Your theory probably favours option B. Is this intended?
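To see why B seems favoured, with purely illustrative numbers: if the $10^{11}$ additional future people each have welfare $w_f$ and the roughly $10^{10}$ currently existing people each lose $\Delta w$ by being made miserable, a total view prefers B whenever $10^{11} w_f > 10^{10} \, \Delta w$, i.e. whenever the per-person future gain exceeds a tenth of the per-person present loss.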
Hi,
maybe you will find this overview of longtermism interesting if you have not already come across it:
Hello! For as long as I can remember, I have been interested in the long-term future and have asked myself whether there is any way to direct the future of humankind in a positive direction. Every once in a while I searched the internet for a community of like-minded people. A few months ago I discovered that many effective altruists are interested in longtermism.
Since then, I often take a look at this forum and have read 'The Precipice' by Toby Ord. I am not quite sure if I agree with every belief that is common among EAs. Nevertheless, I think that w...
Unfortunately, I have not found time to listen to the whole podcast, so maybe I am writing stuff that you have already said. The reason why everyone assumes that utility can be measured by a real number is the von Neumann-Morgenstern utility theorem. If you have a relation of the kind "outcome x is worse than outcome y" that satisfies certain axioms, you can construct a utility function. One of the axioms is called continuity:
"If x is worse than y and y is worse than z, then there exists a probability p, such that a lottery where you receive x with a proba... (read more)