All of Frank_R's Comments + Replies

Unfortunately, I have not found time to listen to the whole podcast, so maybe I am writing stuff that you have already said. The reason why everyone assumes that utility can be measured by a real number is the von Neumann-Morgenstern utility theorem. If you have a relation of the kind "outcome x is worse than outcome y" that satisfies certain axioms, you can construct a utility function. One of the axioms is called continuity:

"If x is worse than y and y is worse than z, then there exists a probability p, such that a lottery where you receive x with a proba... (read more)

Unfortunately, I do not have time for a long answer, but I can understand very well how you feel. Stuff that I find helpful is practising mindfulness and/or stoicism and taking breaks from the internet. You said that you find it difficult to make future plans. In my experience, it can calm you down to focus on your career / family / retirement even if it is possible that AI timelines are short. If it turns out that fear of AI is the same as fear of grey goo in the 90s, making future plans is better anyway.

You may find this list of mental health suggestions hel... (read more)

I have switched from academia to software development and I can confirm most of what you have written from my own experience. Although I am not very involved in the AI alignment community, I think that there may be similar problems as in academia, mostly because the people interested in AI alignment are geographically scattered and there are too few senior researchers to advise all the new people entering the field.

In my opinion, it is not clear if space colonization increases or decreases x-risk. See "Dark Skies" by Daniel Deudney or the article "Space colonization and suffering risks: Reassessing the 'maxipok rule'" by Torres for a negative view. Therefore, it is hard to say if SpaceX or Bezos' Blue Origin are net positive or net negative.

Moreover, Google founded the life extension company Calico and Bezos invested in Unity Biotechnology. Although life extension is not a classical EA cause area, it would be strange if the moral value of indefinite life extension were only a small positive or negative number.

1
[anonymous]
2y
I linkposted a review of "Dark skies" on this forum for any interested readers: https://forum.effectivealtruism.org/posts/gcPp2bPin3wywjnGH/is-space-colonization-desirable-review-of-dark-skies-space

I want to add that sleep training is a hot-button issue among parents. There is some evidence that starting to sleep-train your baby too early can be traumatic. My advice is simply to gather evidence from different sources before making a choice.

Otherwise, I agree with Geoffrey Miller's reply. Your working hours as a parent are usually shorter, but you learn how to set priorities and work more effectively.

3
Geoffrey Miller
2y
Frank -- thanks for your reply.  It's true that sleep training is quite controversial. If you look at Reddit parenting forums, it's one of the most viciously debated topics.  There's a strong taboo against explicitly training humans of any age using behaviorist reinforcement methods (which my wife Diana Fleischman is writing about in her forthcoming book). And there's a naturalistic bias in favor of kids co-sleeping with parents, frequent night-time nursing, etc. -- some of which may have an evolutionary rationale, but some of which may be parents virtue-signaling their dedication, empathy, etc.  Maybe sleep training too early can be traumatic, but it's not clear what 'too early' means, and I haven't seen good data either way. I'm open to updating on this issue -- with the caveat that a lot of parents throw around the term 'traumatic' in a rather alarmist way, without a very clear idea of what that actually means, or how it could be measured in a randomized controlled trial. (There's an analogy to dog training here -- a lot of dog owners do very little training, very badly, on the view that training is manipulative, oppressive, and mean, and doesn't allow their dogs to 'be themselves'. Whereas owners of well-trained dogs understand that the short-term frustrations of training can have big long-term benefits.) Regarding what prehistoric, hunter-gatherer, and traditional humans do in terms of parenting, it's useful and fascinating to look at the book 'Mothers and others' (2011) by anthropologist Sarah Blaffer Hrdy. 

Thank you for writing this post. I agree with many of your arguments, and criticisms like yours deserve more attention. Nevertheless, I still call myself a longtermist; mainly for the following reasons:

  • There exist longtermist interventions that are good with respect to a broad range of ethical theories and views about the far future, e.g. searching wastewater for unknown pathogens.
  • Sometimes it is possible to gather further evidence for counter-intuitive claims. For example, you could experiment with existing large language models and search for
... (read more)
5
A. Wolff
2y
Thanks for the thoughts. I basically agree with you. I'd consider myself a "longtermist," too, for similar reasons. I mainly want to reject the comparatively extreme implications of "strong longtermism" as defended by Greaves and MacAskill: that extremely speculative and epistemically fragile longtermist interventions are more cost-effective than even the most robust and impactful near-termist ones. I think there are likely a lot of steps we could and should be taking that could quite reasonably be expected to reduce real and pressing risks. I would add to your last bullet, though, that speculative theories will only die if there's some way to falsify them or at least seriously call them into question. Strong longtermism is particularly worrying because it is an unfalsifiable theory. For one thing, too much weight is placed on one fundamentally untestable contention: the size and goodness of the far future. Also, it's basically impossible to actually test whether speculative interventions intended to very slightly reduce existential risk actually are successful (how could we possibly tell if risk was actually reduced by 0.00001%? or increased by 0.00000001%?). As a result, it could survive forever, no matter how poor a job it's doing. Longtermist interventions (even speculative ones) supported by "cluster thinking" styles that put more weight on more testable assumptions (e.g. about the neglectedness or tractability of some particular issue, about the effect an intervention could have on certain "signposts" like international coordination, rate of near misses, etc.) or are intended to lead to more significant reductions in existential risk (which could be somewhat easier to measure than very small ones) are likely easier to reject if they prove ineffective.

In my opinion, the philosophy that you have outlined should not be simply dismissed, since it contains several important points. Many people in EA, including me, want to avoid the repugnant conclusion and do not think that wireheading is a valuable thing. Moreover, more holistic ethical theories may also lead to important insights. Sometimes an entity has emergent properties that are not shared by its parts.

I agree that it is hard to reconcile animal suffering with a Nietzschean world view. What's even worse is that it may lead to opinions like "It do... (read more)

I have thought about issues similar to those in your article, and my conclusions are broadly the same. Unfortunately, I have not written anything down, since thinking about longtermism is something I do alongside my job and family. I have some quick remarks:

  • Your conclusions in Section 6 are in my opinion pretty robust, even if you use a more general mathematical framework.
  • It is very unclear if space colonization increases or decreases existential risk. The main reason is that it is probably technologically feasible to send advanced weapons across astronomical dista
... (read more)

In my opinion there is a probability of >10% that you are right, which means that AGI will be developed soon and you have to solve some of the hard problems mentioned above. Do you have any reading suggestions for people who want to find out if they are able to make progress on these questions? On the MIRI website there is a lot of material. Something like "You should read this first.", "This is intermediate important stuff." and "This is cutting-edge research." would be nice.

5
RobBensinger
2y
I'd mainly point to relatively introductory / high-level resources like Alignment research field guide and Risks from learned optimization, if you haven't read them. I'm more confident in the relevance of methodology and problem statements than of existing attempts to make inroads on the problem. There's a lot of good high-level content on Arbital (https://arbital.com/explore/ai_alignment/), but it's not very organized and a decent amount of it is in draft form.

Thank you for the link to the paper. I find Alexander Vilenkin's theoretical work very interesting.

Let us assume that a typical large but finite volume contains some number of happy simulations of you and some number of suffering copies of you, maybe Boltzmann brains or simulations made by a malevolent agent. If the universe is infinite, you have infinitely many happy and infinitely many suffering copies of yourself, and it is hard to know how to interpret this result.

2
turchin
2y
I think that there is a way to calculate relative probabilities even in the infinite case, and it will converge to 1 : 1·10^-100. For example, there is an article, "The watchers of multiverse", which suggests a plausible way to do so.
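One way to make sense of such a ratio in an infinite universe is to take the limit of counts inside ever larger finite volumes. The sketch below is my gloss on that idea, not necessarily the construction used in "The watchers of multiverse", and the limit may depend on how the volumes are chosen (the usual measure problem):

```latex
% Count happy and suffering copies inside a finite comoving volume V,
% then let V grow; the relative probability of being a suffering copy is
P(\text{suffering copy}) \;=\; \lim_{V\to\infty}
  \frac{N_{\text{suffer}}(V)}{\,N_{\text{happy}}(V)+N_{\text{suffer}}(V)\,}
```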

I see two problems with your proposal:

  1. It is not clear if a simulation of you in a patch of spacetime that is not causally connected to our part of the universe is the same as you. If you care only about the total amount of happy experiences, this would not matter, but if you care about personal identity, it becomes a non-trivial problem. 
  2. You probably assume that the multiverse is infinite. If this is the case, you can simply assume that for every copy of you that lives for N years another copy of you that lives for N+1 years appears somewhere by chanc
... (read more)
1
turchin
2y
1. The identity problem is known to be difficult, but here I assume that continuity of consciousness is not needed for it; informational identity alone is enough. 2. The difference from quantum (or big-world) immortality is that we can select which minds to create and exclude N+1 moments which are damaged or suffering.

Thank you for your answers. With better brain preservation and a more detailed understanding of the mind it may be possible to resurrect recently deceased persons. I am more skeptical about the possibility of resurrecting a peasant from the Middle Ages by simulating the universe backwards, but of course these are different issues.

4
turchin
2y
If we simulate all possible universes, we can do it. It is an enormous computational task, but it can be done via acausal cooperation between different branches of the multiverse, where each of them simulates only one history.

Could you elaborate why we have to make choices before space colonisation if we want to survive beyond the end of the last stars? Until now, my opinion is that we can "start solving heat death" a billion years in the future, while we have to solve AI alignment in the next 50 - 1000 years.

Another thought of mine is that it is probably impossible to resurrect the dead by computing what the state of each neuron of a deceased person was at the time of their death. I think you need to measure the state of each particle in the present with a very high preci... (read more)

1
turchin
2y
If we start space colonisation, we may not be able to change the goal-system of the spaceships that we send to the stars, as they will move away at near-light speed. So we need to specify what we will do with the universe before starting space colonisation: either we spend all resources to build as many simulations with happy minds as possible, or we reorganise matter in ways that will help us survive the end of the universe, e.g. building Tipler's Omega Point or building a wormhole into another universe. --- Very high precision of brain detail is not needed for resurrection, as we forget our mind state every second. Only a core of long-term memory is sufficient to preserve what I call "information identity", which is a necessary condition for a person to regard himself as the same person, say, the next day. But the whole problem of identity is not solved yet, and it would be a strong EA cause to solve it: we want to help people in ways which will not destroy their personal identity, if that identity really matters.

It should be mentioned that all (or at least most) ideas for surviving the heat death of the universe involve speculative physics. Moreover, you have to deal with infinities. If everyone is suffering but there is one sentient being that experiences a happy moment every million years, does this mean that there is an infinite amount of suffering and an infinite amount of happiness, so that the future is of neutral value? If any future with an infinite amount of suffering is bad, does this mean that it is good if sentient life does not exist forever? There is no obvious answer to these questions.
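To see why these questions have no obvious answer, note that both totals diverge, so the net value of such a future is of the indeterminate form ∞ − ∞; a minimal sketch:

```latex
% h(t), s(t): happiness and suffering produced in year t of an unending future.
H(T)=\sum_{t\le T} h(t) \to \infty
\quad\text{and}\quad
S(T)=\sum_{t\le T} s(t) \to \infty
\quad\text{as } T\to\infty
% The "net value" H - S is then of the form \infty - \infty: the truncated
% difference H(T) - S(T) can still diverge to -\infty if suffering dominates at
% every finite time, and different regularisations can disagree about the sign.
```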

Other S-risks that may or may not sound more plausible are suffering simulations (maybe an AI comes to the conclusion that a good way to study humans is to simulate earth at the time of the Black Death) or suffering subroutines (maybe reinforcement learners that are able to suffer enable faster or more efficient algorithms). 

I have noticed that there are two similar websites for mathematical jobs. www.mathjobs.org is operated by the American Mathematical Society and is mostly for positions at universities, although it lists jobs at other research institutions, too. www.math-jobs.com redirects you to www.acad.jobs, which has a broader focus: it also advertises government and industry jobs as well as positions in computer science and other academic disciplines.

You have to register on both websites as an employer for several hundred dollars before you can po... (read more)

1
Vanessa
2y
Thank you Frank, that's very useful to know!

How much knowledge about AI alignment, apart from the right mathematical background, is necessary for this position? If the job is suitable for candidates without prior involvement in x-risks / longtermism / Effective Altruism, it may be a good idea to announce it in places such as mathjobs.org.

2
Vanessa
2y
Thank you for this comment! Knowledge about AI alignment is beneficial but not strictly necessary. Casting a wider net is something I planned to do in the future, but not right now. Among other reasons, because I don't understand the academic job ecosystem and don't want to spend a huge effort studying it in the near term. However, if it's as easy as posting the job on mathjobs.org, maybe I should do it. How popular is that website among applicants, as far as you know? Is there something similar for computer scientists? Is there any way to post a job without specifying a geographic location s.t. applicants from different places would be likely to find it?

I forgot to mention that you should be careful about whether brain preservation increases or decreases suffering risks or existential risks. On the one hand, many patients waiting for whole brain emulation (WBE) could be a reason to push WBE forward without thinking deeply enough about the possible negative effects. On the other hand, if there are reasons to believe that some people alive today could live for millennia, this may encourage long-term thinking. Since I cannot determine the sign of the risk, I am cautiously in favour of brain preservation because of the positive near-term effects.

I don't disagree with you. Although I think that existential and global catastrophic risks are the most important cause area, there are good project ideas in the life extension community without easy access to venture capital. Since biological aging is a major source of suffering, life extension and brain preservation are worthwhile cause areas.   

I have a few questions on the more practical side of brain preservation. Are there any organisations working on this problem with more room for funding? I know about the Brain Preservation Foundation and Nectome, but as an outsider it is hard to tell how active they are and what they could do with extra money. 

In my opinion, it is very difficult for a company offering brain preservation to hit the market. At the beginning, there are possibly only a few customers scattered throughout the world. You will probably need a standby team at the bed of the te... (read more)

3
AndyMcKenzie
2y
Thanks for your interest in this topic! I agree with you that it is hard as an outsider to tell what the current scope of the situation is regarding the need for more funding. This post was more of a high-level overview of the problem to see whether people agreed with me that this was a reasonable cause area for effective altruism. Since it seems that a good number of people do agree (please tell me if you don't!), I am hoping to work on the practical area more in the future. For now, I don't think I know enough to publicly say with any confidence whether I think that any particular organization could benefit from more EA-level funding. If pressed, my guess is that the most important thing would be to get more researchers and people in general interested in the field. I also agree with you about the chicken-and-egg problem of lack of interest and lack of quality of the service. One approach is to start locally, rather than trying to achieve high-quality preservation all over the world. This makes things much cheaper. An obvious problem with the local approach is that any local area may not have enough people interested to get the level of practical feedback needed, although this also can be addressed.

I think this discussion will become important in the future. On the one hand, I struggle a little bit to notice every post that is interesting to me. On the other hand, there is the danger that the EA movement starts to fragment if the forum is split: longtermists could read only longtermist stuff, people interested in animal suffering read only posts on animal advocacy, etc.

I agree strongly with what you have written, especially since, in my opinion, it is unlikely that there will be a liberal and/or pro-Western government in Russia even if Putin is replaced.

Do you have any suggestion for what an average person in a Western country can do? Of course, you can write to your representative that the borders should be opened for Russian emigrants. Unfortunately, I do not know if this is really effective, since politicians probably get tons of mail.

In my opinion "the most controversial billionaire" is either Peter Thiel or Donald Trump. Otherwise, I agree with what you have written.

Estimates of Trump's wealth vary. He is certainly controversial, but I don't think his detractors view him as a billionaire. 

8
Nathan Young
2y
Agreed. Imagine I had said, "top billionaire".

Thank you for writing this post. I want to point out that your conclusions are highly dependent on your ethical and empirical assumptions. Here are some thoughts about what could change your conclusion:

  • If you donate to the top charities that are recommended by Founders Pledge, you can probably do much better than $30/ton. I have not been able to find the precise numbers quickly, but if I remember correctly, $1/ton is possible under reasonable assumptions. This would change your average estimate to $25,000 per life saved.
  • Let us assume that the maximal
... (read more)
1
JBPDavies
2y
'Climate change could also increase other existential risks. For example, there could be a war about resources that is fought by nuclear weapons, synthetic pathogens or malevolent AIs.' To add to this - solar geoengineering could be a major risk (and risk factor for inter-state conflict) that becomes increasingly likely under severe AGW scenarios (people accept more drastic measures in desperate circumstances).
10
jh
2y

Also, if you combine $1/ton with the estimated lives per ton from Bressler's paper, then you get $4,400 per life saved.
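A minimal sketch of that arithmetic, assuming Bressler's central "mortality cost of carbon" estimate of roughly 4,434 tonnes of CO2 per excess death, and taking the $1/ton cost from the comments above as an optimistic assumption rather than a verified charity figure:

```python
# Back-of-the-envelope: cost per life saved from cost per tonne of CO2 averted.
TONNES_PER_EXCESS_DEATH = 4434  # Bressler (2021) central estimate (assumption)
cost_per_tonne = 1.0            # USD per tonne averted (optimistic assumption)

cost_per_life = cost_per_tonne * TONNES_PER_EXCESS_DEATH
print(f"${cost_per_life:,.0f} per life saved")  # about $4,434, i.e. roughly $4,400
```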

1
andrew_richardson
2y
I'm glad they're looking for charities in the sub $10/ton range! I suspect there is limited room for funding at that value, but it's still marginally good. Finding cheaper climate interventions is really the only part of this equation we can control.  I disagree with your 10^12 QALYs analysis. First, I need a citation on the assumption that livable space will be reduced by 1 billion. Second, the earth isn't at maximum capacity, and I'm not sure population trends are expected to peak above capacity. Third, you shouldn't project out 100,000 years without temporal discounting because our ability to predict the far future is bad and we should use temporal discounting to avoid overconfidence there. For example, it's hard to predict what technology will arise in the future, and assuming a 1% chance that we'll never develop geo-engineering over such a long timespan is a bad assumption.  I agree about existential risks. If climate change causes geopolitical stress that increases the chance of nuclear war by even a small amount, that's obviously bad. I included an x-risk model where we assume climate change kills all humans, but I understand that x-risk would be bad above and beyond the tragic loss of all currently living individuals, so cashing that risk out into dollars per life is maybe incorrect.  About longtermism in general, I basically think EAs are super overconfident about long term predictions, and don't apply exponential discounting nearly enough. Even this analysis going out 100 years is probably overconfident because so much is going to change over that time. 
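To illustrate how strongly even mild temporal discounting suppresses far-future value, here is a small sketch; the 0.1% annual rate is purely illustrative and not taken from the comment:

```python
# Present value of one QALY delivered t years from now under exponential discounting.
rate = 0.001  # 0.1% annual discount rate (illustrative assumption)
for years in (100, 1_000, 100_000):
    present_value = (1.0 - rate) ** years
    print(f"{years:>7} years out: {present_value:.3e}")
# Roughly 0.905 at 100 years, 0.368 at 1,000 years, and ~3.5e-44 at 100,000 years,
# which is why projections over 100,000 years are so sensitive to the discounting choice.
```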
2
[anonymous]
2y
I think this might be the article from Founders Pledge that you are thinking of 💚

Thank you for writing this piece! I think there should be a serious discussion about whether crypto is net positive or negative for the world.

In my opinion, there are a few more ways in which crypto could contribute to existential risk. Since you can accept donations in Monero, it is much easier to make a living by spreading dangerous ideologies (human extinction is a worthy goal, political measures against existential risk are totalitarian, etc.). Of course, you can also support an atheist blogger in Iran or a whistleblower in China with crypto, but it is very hard to ... (read more)

Answer by Frank_R, Dec 18, 2021

I suggest the following thought experiment. Imagine wild animal suffering can be solved. Then it would be possible to populate a square mile with millions of happy insects instead of a few happy human beings. If the repugnant conclusion were true, the best world would be populated with as many insects as possible and only a few human beings who make sure that there is no wild animal suffering.

Even more radically, the best thing to do would be to fill as much of the future light cone as possible with hedonium. Neither scenario matches the moral intui... (read more)
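A minimal numerical version of the thought experiment under simple total utilitarianism; every welfare number below is invented purely for illustration:

```python
# Total welfare of one square mile under two population choices.
insects, insect_welfare = 5_000_000, 0.001  # millions of barely-happy insects (assumed)
humans, human_welfare = 100, 40.0           # a few very happy humans (assumed)

print("insect-filled world:", insects * insect_welfare)  # 5000.0
print("human-filled world:", humans * human_welfare)     # 4000.0
# Naive welfare totals favour the insect-filled world, which is exactly the
# counter-intuitive implication the comment is pointing at.
```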

An important factor is how many people in the EA movement are actively searching for EA jobs and how many applications they write per year. Maybe this would be a good question for the next EA survey.

7
David_Moss
2y
We have a sense of this from questions we asked before (though only as recently as 2019, so they don't tell us whether there's been a change since then). At that point 36.6% of respondents included EA non-profit work (i.e. working for an EA org) in their career plans. It was multiple select, so their plans could include multiple things, but it seems plausible that often EA org work is people's most preferred career and other things are backups. At that time 32% of respondents cited too few job opportunities as a barrier to their involvement in EA. This was the most commonly cited barrier (and the third most cited was it being too hard to get an EA job!). These numbers were higher among more engaged respondents. I think these numbers speak to EA jobs being very hard to get (at least in 2019). The number of applications people are writing could be interesting to some degree, though I think there are a couple of limitations. Firstly, if people find that it is too hard to get a job and drop out of applying, this may make the numbers look better without the number of people who want a job and can't get one decreasing, and even without it becoming appreciably easier for those still applying for jobs. Secondly, if there are fewer (more) jobs for people to apply to, this may reduce (increase) the number of applications, but this would actually be making it harder (easier) for people to get jobs. To assess the main thing that I think these numbers would be useful for (how competitive jobs actually are), I think hiring data from orgs would be most useful (i.e. how many applicants to how many roles). The data could also be useful to assess how much time EAs are spending applying (since this is presumably at some counterfactual cost to the community), but for that we might simply ask about time spent on applications directly.
Answer by Frank_R, Nov 16, 2021

Genomic mass screening of wastewater for unknown pathogens, as described here:

[2108.02678] A Global Nucleic Acid Observatory for Biodefense and Planetary Health (arxiv.org)

A few test sites can already help to detect a new (natural or manmade) pandemic at an early stage. Nevertheless, there is room for a few billion dollars if you want to build a global screening network.

Unfortunately, I do not know if there is any organisation working on this that has room for more funding.

6
Hauke Hillebrandt
2y
Also see Carl Shulman's 'Envisioning a world immune to global catastrophic biological risks'

I agree with Linch's comment, but I want to mention a further point. Let us suppose that the well-being of all non-human animals between now and the death of the sun is the most important value. This idea can be justified since there are many more animals than humans.

Let us suppose furthermore that the future of human civilization has no impact on the lives of animals in the far future. [I disagree with this point since it might be possible that future humans abolish wild animal suffering or in the bad case they take wild animals with them when they coloniz... (read more)

There is a short piece on longtermism in Spiegel Online, which is probably the biggest news site in Germany:

Longtermism: Was ist das - Rettung oder Gefahr? ("Longtermism: what is it - salvation or danger?") - Kolumne - DER SPIEGEL

Google Translate:

Longtermism: Was ist das - Rettung oder Gefahr? - Kolumne - DER SPIEGEL (www-spiegel-de.translate.goog)

As far as I know, this is the first time that longtermism has been mentioned in a major German news outlet. The author mentions some key ideas and acknowledges that short-term thinking is a big problem in society, but he is rather critical of the longtermist movement. F... (read more)

I think that it is not possible to delay technological progress if there are strong near-term and/or egoistical reasons to accelerate the development of new technologies.

As an example, let us assume that it is possible to stop biological aging within a timeframe of 100 years. Of course, you can argue that this is an irreversible change, which may or may not be good for humankind's long-term future. But I do not think that it is realistic to say "Let's fund Alzheimer's research and senolytics, but everything that prolongs life expectancy beyond 120 years will... (read more)

I think that it is possible that whole brain emulation (WBE) will be developed before AGI and that there are s-risks associated with WBE. It seems to me that most people in the s-risk community work on AI risks. 

Do you know of any research that deals specifically with the prevention of s-risks from WBE? Since an emulated mind should resemble the original person, it should be difficult to tweak the code of the emulation such that extreme suffering is impossible. Although this may work for AGI, you probably need a different strategy for emulated minds.

5
mlsbt
3y
Yea, WBE risk seems relatively neglected, maybe because of the really high expectations for AI research in this community. The only article I know talking about it is this paper by Anders Sandberg from FHI. He makes the interesting point that similar incentives that allow animal testing in today's world could easily lead to WBE suffering. In terms of preventing suffering his main takeaway is:  The other best practices he mentions, like perfectly blocking pain receptors, would be helpful but only become a real solution with a better theory of suffering.

Thank you very much for sharing your paper. I have heard somewhere that thorium reactors could be a big deal against climate change. The advantages would be that there are greater thorium reserves than uranium reserves and that thorium is much less suitable for building nuclear weapons. Do you have an opinion on whether the technology can be developed fast enough and deployed worldwide?

1
policy_nerd
3y
Hi Frank, my pleasure! This is really interesting, I actually didn't know about Thorium reactors - thank you for pointing that out(: Having just read the Wikipedia page it appears that Thorium offers some promising advantages. In hindsight I definitely would've touched on this in the paper. I think regardless, getting to net-zero in the next several decades will require all the technologies and innovation we can muster, so this definitely sounds like something we should be investigating and dedicating resources to. As far as an opinion on the development timeline: hard to say without further research I think. All new tech investments are obviously accompanied by a certain level of risk; I would be hesitant to attempt to replace one nuclear source with another for the same reason I wouldn't replace nuclear with renewables, but as far as the potential to replace CO2-based energy sources in new regions or in places where the political situation favors the advantages of Thorium, it sounds like there's a lot of promise here!

I think that the case for longtermism gets stronger if you consider truly irreversible catastrophic risks, for example human extinction. Let's say that there is a 10% chance of the extinction of humankind. Suppose you propose some policy that reduces this risk by two percentage points but introduces a new extinction risk with a probability of 1%. Then it would be wise to enact this policy.

This kind of reasoning would probably be wrong if you had a 2% chance of a very good outcome, such as unlimited cheap energy, but an additional extinction risk of 1%.
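The arithmetic behind that asymmetry, as a small sketch using the illustrative probabilities from the comment:

```python
# Net change in extinction probability from the proposed policy.
baseline_risk = 0.10    # 10% chance of extinction
risk_removed = 0.02     # policy removes 2 percentage points of that risk
new_risk_added = 0.01   # but introduces a new 1-point extinction risk

print(f"{baseline_risk - risk_removed + new_risk_added:.2f}")  # 0.09 < 0.10, so worth enacting
# A 2% chance of a very good outcome is not symmetric with this, because an extra
# 1% extinction risk destroys all future value rather than merely forgoing a gain.
```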

Moreover, you c... (read more)

Thank you for your detailed answer. I expect that other people here have similar questions in mind. Therefore, it is nice to see your arguments written up.

Thank you for your answer and for the links to the other forum posts.

How would you answer the following arguments?

  1. Existential risk reduction is much more important than life extension, since it is possible to solve aging a few generations later, whereas humankind's potential, which could be enormous, is lost after an extinction event.

  2. From a utilitarian perspective it does not matter if there are ten generations of people living 70 years or one generation of people living 700 years as long as they are happy. Therefore the moral value of life extension is neutral.

I am not wholly convinced of the second argument myself, but I do not see where exactly the logic goes wrong. Moreover, I want to play the devil's advocate, and I am curious about your answer.

2
Emanuele_Ascani
3y
I answered you here:
6
Jack_H
3y
Thanks for the questions. These two lines of argumentation are quite common responses, and I would address them as follows: 1. It is entirely possible that existential risk mitigation (e.g. AI safety, nuclear, biosecurity) is by far the most effective cause area in EA due to the 'Pascal's mugging'-style argument you put forth (i.e. the potential for trillions of future lives to be saved). If you believe that 100% of EA funding should support ex-risk causes then you will be unlikely to be persuaded to donate to anti-aging research. However, if you think there is also value in short-term cause areas (e.g. global poverty), given they have the advantage of direct, immediate and sometimes quantifiable return on investment (i.e. guaranteed 'bang for buck'), instead of only a possible chance of impacting the long-term future, and you support a more 'diversified' portfolio in EA, then there is a case to be made for anti-aging. There is a trade-off here between potential impact (high for ex-risk, low for short-term cause areas) versus the probability that donations actually make a difference (potentially low for ex-risk, very high for short-term cause areas). Now, anti-aging falls between short and long-term cause areas on this spectrum - it is potentially much higher impact than short-term cause areas, but the feedback cycles and return on investment are slightly less quantifiable. That said, based on the preliminary models that I cite in my talk, even in a conservative case in which it costs one trillion dollars to bring the anti-aging technology forward one year in time (irrespective of whether this occurs tomorrow or in a hundred years), it is still better to donate to an aging charity than a GiveWell one, given the QALYs saved. Remember, the model assumes that this technology will arrive at some point in the future (if we are not wiped out due to an ex-risk, of course), so the benefit of donating in QALYs is based on the difference in how much sooner this technolo
1
Yassin Alaya
3y
Thanks. Actually, I know the paper, but maybe I could have referenced it in my thesis...

My question was mainly the first one. (Are 20 insects happier than one human?) Of course, similar problems arise if you compare the welfare of humans. (Are 20 people whose living standard is slightly above subsistence happier than one millionaire?)

The reason why I have chosen interspecies comparison as an example is that it is much harder to compare the welfare of members of different species. At least you can ask humans to rate their happiness on a scale from 1 to 10. Moreover, the moral consequences of different choices for the function f are potentially greater.
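To make the dependence on the function f concrete, here is a hypothetical sketch in which f is a per-species moral weight; the species weights and welfare scores are invented purely for illustration:

```python
# Aggregate welfare depends heavily on the interspecies weighting function f.
def total_welfare(populations, f):
    """populations maps species -> (count, average welfare on its own scale)."""
    return sum(f(species) * count * welfare
               for species, (count, welfare) in populations.items())

populations = {"human": (1, 7.0), "insect": (20, 0.5)}  # invented numbers

print(total_welfare(populations, lambda s: 1.0))                             # 17.0: the insects dominate
print(total_welfare(populations, lambda s: 0.01 if s == "insect" else 1.0))  # 7.1: the human dominates
```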

The forum post seems to be what I have asked for, but I need some time to read through the literature. Thank you very much! 

You mention that the ability to create digital people could lead to dystopian outcomes or a Malthusian race to the bottom. In my humble opinion, bad outcomes could be avoided only if there is a world government that monitors what happens on every computer capable of running digital people. Of course, such a powerful government is a risk of its own.

Moreover, I think that a benevolent world government can be realised only several centuries in the future, while mind uploading could be possible at the end of this century. Therefore I believe that bad outcomes are much more likely than good ones. I would be glad to hear if you have some arguments why this line of reasoning could be wrong.

5
Holden Karnofsky
3y
It seems very non-obvious to me whether we should think bad outcomes are more likely than good ones. You asked about arguments for why things might go well; a couple that occur to me are (a) as long as large numbers of digital people are committed to protecting human rights and other important values, it seems like there is a good chance they will broadly succeed (even if they don't manage to stop every case of abuse); (b) increased wealth and improved social science might cause human rights and other important values to be prioritized more highly, and might help people coordinate more effectively.

I had similar thoughts, too. My scenario was that at a certain point in the future all technologies that are easy to build will have been discovered and that you need multi-generational projects to develop further technologies. Just to name an example, you can think of a Dyson sphere. If the sun were enclosed by a Dyson sphere, each individual would have a lot more energy available or there would be enough room for many additional individuals. Obviously you need a lot of money before you get the first non-zero payoff, and the potential payoff could be... (read more)

Thank you for sharing your thoughts. What do you think of the following scenario?

In world A the risk for an existential catastrophe is fairly low and most currently existing people are happy.

In world B the existential risk is slightly lower. In expectation there will be 100 billion additional people (compared to A) living in the far future whose lives are better than those of the people today. However, this reduction of risk is so costly that most of the currently existing people have miserable lives.

Your theory probably favours option B. Is this intended?

1
Stijn
3y
Yes, my theory favours B, assuming that those 100 billion additional people have on expectation a welfare higher than the threshold, that the higher X-risk in world A does not on expectation decrease the welfare of existing people, and that  the negative welfare in absolute terms of having a miserable life is less than ten times higher than the positive welfare of currently existing people in world A. In that case, the added welfare of those additional people is higher than  the loss of welfare of the current people. In other words: if there are so many extra future people who are so happy, we really should sacrifice a lot in order to generate that outcome.  However, the question is whether we would set the threshold lower than the welfare of those future people. It is possible that most current people are die-hard person-affecting utilitarians who care only about making people happy instead of making happy people. In that case, when facing a choice between worlds A and B, people may democratically decide to set a very high threshold, which means they prefer world A
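A small sketch of the comparison Stijn describes, using a critical-level threshold; every number here is an invented illustration, not a claim about actual welfare levels:

```python
# Critical-level comparison: value = sum over people of (welfare - threshold).
threshold = 1.0                  # critical welfare level (assumed)
current_people = 8_000_000_000

# World A: current people are happy (welfare 5), no extra future people.
value_A = current_people * (5.0 - threshold)

# World B: current people are miserable (welfare -10), plus 100 billion happy future people (welfare 6).
extra_future_people = 100_000_000_000
value_B = current_people * (-10.0 - threshold) + extra_future_people * (6.0 - threshold)

print(f"{value_A:.2e} vs {value_B:.2e} ->", "B wins" if value_B > value_A else "A wins")
# With these numbers B wins, matching the reply; raising the threshold above the
# future people's welfare level flips the result towards A.
```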

Hi,

maybe you will find this overview of longtermism interesting if you have not already come across it:

Intro to Longtermism | Fin Moorhouse

Hello! As long as I can remember, I have been interested in the long-term future and have asked myself if there is any possibility to direct the future of humankind in a positive direction. Every once in a while I searched the internet for a community of like-minded people. A few months ago I discovered that many effective altruists are interested in longtermism.

Since then, I have often taken a look at this forum and have read 'The Precipice' by Toby Ord. I am not quite sure if I agree with every belief that is common among EAs. Nevertheless, I think that w... (read more)