
The Effective Altruism community has been an unexpected and pleasant surprise. I remember wishing there was a group out there that shared at least one of my ideals. Instead, I found one that shares three: global reduction of suffering, rationality, and longtermism. However, with each conference I attend, post I read on the forum, and new organization I see created, I notice that most work falls into a few distinct categories: global development/health, animal welfare, biosecurity, climate change, nuclear risk/global conflict, and AI Safety. Don't get me wrong, these are some of the most important areas one could possibly be working on (I'm currently focusing 90% of my energy on AI Safety myself). But I think there are at least five other areas that could benefit substantially from a small growth in interest.
 

Interplanetary Species Expansion

This might be the biggest surprise on the list. After all, space exploration is expensive and difficult. But very few people are actually working on how to change humanity from being a single point of failure. If we are serious about longtermism and truly decreasing x-risk, this might be one of the most crucial achievements needed. Nearly every x-risk is greatly reduced by it, perhaps even AGI*. The sooner this process begins, the greater the reduction in risk, since settlement will be a very slow process. One comparatively low-cost research area is studying biospheres: how a separate ecosystem and climate could be created in complete isolation. And this can be studied on Earth. It has been decades since anyone attempted to create a closed ecological system, and advances here could even improve our chances of surviving on Earth if the climate proves inhospitable.

 

Life Extension

~100,000 people die from age-related diseases every day. ~100 billion people have died over the course of human history. (Read that again.) Aging causes an immense amount of suffering, both to those who endure it for years and to those who must grieve. It also causes irrecoverable loss, and is perhaps the greatest tragedy that is treated as normal. If every person who dies of a preventable disease like malaria is a tragedy, I do not see why those dying of other causes are not also a tragedy. Even if you do believe extending the human lifespan is not important, consider the alternative case where you're wrong. If your perspective is incorrect, then ~100,000 more tragedies happen for every day we delay solving it.
 

Cryonics

This is related to Life Extension, but even more neglected, and probably even more impactful. The number of people actually working on cryonics to preserve human minds is easily below 100. A key advancement from one individual in research, technology, or organizational improvement could likely have an enormous impact. The reason for doing this goes back to the idea of irrecoverable loss of sentient minds. As with life extension, if you do not believe cryonics to be important or even possible, consider the alternative where you're wrong. If one day we do manage to bring people back from suspended animation, I believe humanity will weep for all those who were needlessly thrown in the dirt or the fire: for they are the ones there is no hope for, an irreversible tragedy. The main reason I think this isn't being worked on more is that it is even "weirder" than most EA causes, despite making a good deal of sense.

 

Nanotechnology

A survey provided on the 80k website** places nanotechnology at a 5% chance of causing human extinction, the same as artificial superintelligence***, and 4 percentage points higher than nuclear war. Few seem to dispute the possible danger of nanoweapons. Many agree that nanoweapons are possible. Many agree that nanotechnology is advancing, even if it's no longer in the news. So, where are all the EAs tackling nanotech? Where are the organizations devoted to it? Where are the research institutions?**** Despite so many seeming to agree that this cause is important, there is a perplexing lack of pursuit.

 

Coordination Failures

Most of humanity's problems come from coordination failures. Nuclear war and proliferation is a coordination failure: everyone would be safer if there were no nukes in the world, and very few people (with some obvious current exceptions) actually benefit from many entities having them. Climate change is partially a coordination failure: everyone wants the benefits of reducing it, but no one wants to be the only one footing the bill. A large amount of AGI risk will likely come from coordination failures: everyone will be so concerned about others building dangerous AGI that they will be incentivized to build dangerous AGI first.

Finding fundamental ways to solve coordination failures could not only radically decrease x-risk, but would probably make everyone's lives unbelievably better. This is a big ask, though. Most attempts will likely fail, but even a 1-5% chance of success is, I think, worth putting far more effort into. We have already seen some achievements. As Eliezer Yudkowsky notes in Inadequate Equilibria, Kickstarter created a way for people to contribute to a project only if the project received enough funding to actually happen, so that no one ended up wasting their money. Satoshi Nakamoto's consensus mechanism created a way for transactions to be verified without the need for a trusted central authority. These were insights from a few individuals, drawing inspiration from a wide variety of domains. It is likely there are many others waiting to be discovered.
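To make the Kickstarter example concrete, here is a minimal sketch of an assurance contract in Python. The class, names, and numbers are all illustrative, not any real platform's API; the point is just the mechanism: pledges are only collected if the total reaches the funding goal, so no backer risks paying for a project that never happens.

```python
# Minimal sketch of an assurance contract (the Kickstarter mechanism).
# Pledges are held, not charged; they are only collected if the total
# reaches the goal, otherwise everyone is refunded in full.
class AssuranceContract:
    def __init__(self, goal: float):
        self.goal = goal
        self.pledges: dict[str, float] = {}

    def pledge(self, backer: str, amount: float) -> None:
        """Record a pledge; nothing is charged yet."""
        self.pledges[backer] = self.pledges.get(backer, 0.0) + amount

    def settle(self) -> dict[str, float]:
        """Charge pledges if the goal is met; refund everyone otherwise."""
        if sum(self.pledges.values()) >= self.goal:
            return dict(self.pledges)  # amounts actually charged
        return {backer: 0.0 for backer in self.pledges}  # full refunds


contract = AssuranceContract(goal=1000.0)
contract.pledge("alice", 600.0)
contract.pledge("bob", 500.0)
print(contract.settle())  # goal met: both pledges are collected
```

The design makes pledging individually rational: if too few others contribute, you lose nothing, which removes the "I don't want to be the only one footing the bill" failure mode described above.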





*I do not think AGI risk is prevented by having multiple human bases, but the high uncertainty around how an AGI might kill us all does leave a chance that other home worlds would be safe from it. This is contingent on (1) the AGI not wishing to expand exponentially, and (2) the AGI not being specifically interested in our extinction. All other x-risks I know of (nuclear war, climate change, bioweapons, etc.) are substantially reduced by having other bases.

**80k actually places AI risk closer to 10%, and nanoweapons much lower.

***I believe this is far too low for AGI.

****There are a few. But institutions such as the Center for Responsible Nanotechnology don't seem to have many people or much funding, and haven't published anything in years.


 


Comments (14)

Arden here from 80k -- just wanted to note the figures you cite are from a survey and were not 80k's overall views.

Our articles put AI risk closer to 10% (https://80000hours.org/problem-profiles/artificial-intelligence/) and nano much lower, though we don't try to estimate it numerically (we have a mini writeup here: https://80000hours.org/problem-profiles/atomically-precise-manufacturing/)

Seems like we should update that article anyway, though. Thanks for drawing my attention to it.

Thanks! I'll update to correct this.

This is such a breath of fresh air. Make EA weirder!

This might be the biggest surprise on the list. After all, space exploration is expensive and difficult. But very few people are actually working on how to change humanity from being a single point of failure. If we are serious about longtermism and truly decreasing x-risk, this might be one of the most crucial achievements needed. Nearly every x-risk is greatly reduced by it, perhaps even AGI*. The sooner this process begins, the greater the reduction in risk, since settlement will be a very slow process.

Seems not everyone agrees with you that this is hard. See also the paper if video is not your thing.

I was also confused about why no one has written something more extensive on nanotech. My guess would be that it might be rather hard to cause a catastrophe 'by accident', as the gray goo failure mode is rather obviously undesirable. From the Wikipedia article on gray goo, I gathered that Eric Drexler thinks it's totally possible to develop safe nanotechnology, which distinguishes it from AI, the field he seems to have shifted his focus to. See also this report, which I found through this question.

My guess is that a big reason is there doesn't really seem to be any framework for working on it, except perhaps on the policy side. Testing out various forms of nanotechnology to see if they're dangerous might be very bad; even hypothetically doing so might create information hazards. I imagine we would have to see a few daring EAs blaze the trail for others to follow. There's also the obvious skill and knowledge gap: you can't easily jump into something like nanotech the way you can for something like animal welfare.

"The main reason why I think this isn’t being worked on more is because it is even "weirder" than most EA causes, despite making a good deal of sense."

I don't think that is true. This was previously discussed here. It's very hard to argue that preserving existing lives through cryonics is more cost-effective than creating new lives if you have a totalist view of population ethics. And even if you have a person-affecting view of population ethics, it's not clear that cryonics is more cost-effective than AI safety.

Is it actually more cost-effective, though? Someone in suspended animation does not eat or consume resources. Unless you mean sometime in the future; but in that future we don't know what the resource constraints will actually be, and we don't know what we will value most. Preventing irrecoverable loss of sentient minds still seems like the wiser thing to do, given this uncertainty. As for AI Safety, I think we're facing a talent deficit much more than a financial deficit right now. I'm not sure how much adding, say, $5 million more to the cause will really change at this time.

For a successful cryopreservation, you need facilities for storage, liquid nitrogen and staff overseeing the operation. All that costs money. Plastination alleviates some of those costs, and economies of scale would also apply.

And cryonics is expensive: the cost of the cryoprotectants alone is nothing to sneer at.

  1. The perfusate has a shelf life of several years when stored in an ordinary refrigerator. Alcor’s purchase price for the ingredients in all 10 2-liter bags of perfusate, including M22, is ~$1,500. The concentration of M22 increases by a factor of 1.67 between bags, except that the last 3 bags have the same terminal concentration. While 10 bags is sufficient to achieve the desired terminal jugular cryoprotectant concentration, 16 bags were prepared for the initial trial (the final 9 bags having the same terminal concentration) to ensure that enough bags were available to achieve terminal jugular cryoprotectant concentration.

A Big Hairy Audacious Goal for Cryonics, Ralph Merkle, 2014
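For intuition on the ramp described in the quote, here is a back-of-the-envelope sketch (in Python, and in relative units rather than Alcor's actual protocol values) of the per-bag concentrations implied by a 1.67x step with the last three bags held at the terminal concentration.

```python
# Back-of-the-envelope sketch of the cryoprotectant ramp quoted above:
# concentration rises by ~1.67x from bag to bag, with the final three bags
# held at the terminal M22 concentration. Values are relative (terminal = 1.0),
# not Alcor's actual protocol numbers.
FACTOR = 1.67
N_BAGS = 10
N_TERMINAL = 3  # final bags at full concentration

ramp = N_BAGS - N_TERMINAL  # bags 1..7 ramp upward; bags 8..10 are terminal
concentrations = [1.0 / FACTOR ** (ramp - i) for i in range(ramp)] + [1.0] * N_TERMINAL

for bag, c in enumerate(concentrations, start=1):
    print(f"bag {bag:2d}: {c:.3f} x terminal concentration")
```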

These areas all seem well-identified, but the essential problem is that EA doesn't have nearly sufficient talent for its top-priority causes as it is.

There's an interesting tension showing itself with the potential areas you point to: two out of the five are mostly not public goods, and EA has traditionally focused on supplying public goods.

Both life extension and cryonics should be very much in everybody's interest, assuming that most people want to live longer and healthier. So if people aren't interested, they are either very irrational (possible) or it's just not in their interest.

I do not see why those dying of other causes are not also a tragedy. Even if you do believe extending the human lifespan is not important, consider the alternative case where you're wrong.

I think most EAs would agree that death is bad. The important question would be how tractable life extension is.

This is related to Life Extension, but even more neglected, and probably even more impactful. The number of people actually working on cryonics to preserve human minds is easily below 100. A key advancement from one individual in research, technology, or organizational improvement could likely have an enormous impact. The reason for doing this goes back to the idea of irrecoverable loss of sentient minds. As with life extension, if you do not believe cryonics to be important or even possible, consider the alternative where you're wrong. If one day we do manage to bring people back from suspended animation, I believe humanity will weep for all those who were needlessly thrown in the dirt or the fire: for they are the ones there is no hope for, an irreversible tragedy. The main reason I think this isn't being worked on more is that it is even "weirder" than most EA causes, despite making a good deal of sense.

Not sure if I can speak for everyone here, but personally I am not optimistic enough about AI to focus on cryo. Also, the limiting factor for cryonics seems to be more its weirdness than research? The technology exists. The costs probably only go down if more people sign up, right?

"Also, the limiting factor for cryonics seems to be more it's weirdness rather than research?"

 

Not really. The perfusion techniques haven't really been updated in decades. And the standby teams that actually perform preservation in the event of an accident are extremely spread out and limited. I think some new organizations need to breathe life back into cryonics, with clear benchmarks for the standards they hope to achieve over a certain timeline. I think Tomorrow Biostasis is doing the kind of thing I'm speaking of, but would love to see more organizations like them.

Not really. The perfusion techniques haven't really been updated in decades.

Honestly, I'm not knowledgeable enough to know how much of a qualitative difference that makes (e.g., how much does it increase the expected value of your future self?).

They might also improve the process somewhat, but at the current scope the impact is very limited as long as fewer than ~10,000 people (just a ballpark, not looking up the actual number) are signed up, and the whole thing costs ~$50,000 if you are getting it real cheap. I also have extended family members who are close to dying, but I am not close enough to them to convince them that this could be a good idea, and the cost is a real issue for 99% of the population (maybe more could save the money in their bank account, but can they reason through the decision?). I honestly think there are lots of people who would sign up for cryonics if given more capacity/time to think for themselves, but it's hard enough to get people to invest money for their future self in the same lifetime.

I think Tomorrow Biostasis is doing the kind of thing I'm speaking of, but would love to see more organizations like them.

Yeah, I had a call with them as I was not sure whether I would want to sign up or not, and it seems they are doing a great job at making the process way less painful and weird. I'm not sure about the exact numbers anymore, but I remember that they expect to outgrow Alcor in a few years (or have they already?). I would really question whether there is room for many more cryo organizations if the public perception of them does not change, and I would definitely question whether it would be the best thing to pursue on longtermist grounds (rather than selfish (totally reasonable) grounds). I still recommend that friends of mine look into cryonics, but only because I care about them; if they care about helping other people, I'd recommend other things.