
Morpheus

24 karma · Joined July 2020

Posts: 1


Comments: 13

Not really. The perfusion techniques haven't really been updated in decades.

Honestly, I'm not knowledgeable enough to know how much of a qualitative difference that makes (e.g., how much does it increase the expected value of your future self?).

They might also improve the process somewhat, but at the current scale the impact is very limited as long as fewer than ~10,000 people (just a ballpark, I haven't looked up the actual number) are signed up, and the whole thing costs ~$50,000 if you're getting it really cheap. I also have extended family members who are close to dying, but I am not close enough to them to convince them that this could be a good idea, and the cost is a real issue for 99% of the population (maybe more could save the money in their bank account, but can they reason through the decision?). I honestly think there are lots of people who would sign up for cryonics if given more capacity/time to think for themselves, but it's hard enough to get people to invest their money for their future self within the same lifetime.

> I think Tomorrow Biostasis is doing the kind of thing I'm speaking of, but would love to see more organizations like them.

Yeah, I had a call with them as I was not sure whether I would want to sign up or not, and it seems they are doing a great job at making the process way less painful and weird. I'm not sure about the exact numbers anymore, but I remember that they expect to outgrow Alcor in a few years (or have they already?). I would really question whether there is room for many more cryo organizations if the public perception of them does not change, and I would definitely question whether it would be the best thing to pursue on longtermist grounds (rather than on selfish (totally reasonable) grounds). I still recommend that friends of mine look into cryonics, but only because I care about them; if they care about helping other people, I'd recommend other things.

> I do not see the difference; those dying of other causes is also a tragedy. Even if you do believe extending the human lifespan is not important, consider the alternative case where you're wrong.

I think most EAs would agree that death is bad. The important question would be how tractable life extension is.

> This is related to Life Extension, but even more neglected, and probably even more impactful. The number of people actually working on cryonics to preserve human minds is easily below 100. A key advancement from one individual in research, technology, or organizational improvement could likely have an enormous impact. The reason for doing this goes back to the idea of irrecoverable loss of sentient minds. As with life extension, if you do not believe cryonics to be important or even possible, consider the alternative where you're wrong. If one day we do manage to bring people back from suspended animation, I believe humanity will weep for all those who were needlessly thrown in the dirt or the fire: for they are the ones there is no hope for, an irreversible tragedy. The main reason why I think this isn't being worked on more is because it is even "weirder" than most EA causes, despite making a good deal of sense.

Not sure if I can speak for everyone here, but personally I am not optimistic enough about AI to focus on cryo. Also, the limiting factor for cryonics seems to be its weirdness rather than research? The technology exists. The costs will probably only go down if more people sign up, right?

> This might come as the biggest surprise on the list. After all, space exploration is expensive and difficult. But there are very few people out there who are actually working on how to change humanity from being a Single Point of Failure System. If we are serious about longtermism and about truly decreasing x-risk, this might be one of the most crucial achievements needed. Any x-risk is most likely greatly reduced by this, perhaps even AGI*. The sooner this process begins, the greater the reduction in risk, since this will be a very slow process.

Seems like not everyone agrees with you that this is hard. See also the paper if video is not your thing.

I was also confused about why no one has written something more extensive on nanotech. My guess would be that it might be rather hard to have a catastrophe 'by accident', as the gray goo failure mode is rather obviously undesirable. From the Wikipedia article on gray goo, I gathered that Eric Drexler thinks it's totally possible to develop safe nanotechnology. That distinguishes it from AI, which he seems to have shifted his focus to. See also this report, which I found through this question.

In AI alignment contexts, I sort of view AGI as a stand-in for powerful optimization capable of killing us.

Yeah, I think I would count these as unambiguous in hindsight, though Siren Worlds might be an exception.

Do you also think this yourself? I don't clearly see what worlds would look like where P(doom | AGI) is ambiguous in hindsight. Some major accident because everything is going too fast?

Just in case this has something to do with the link: I got an error when trying to join the group with my Google account. (Might try with email later.)

I like your comparisons with other historical cases where people thought they had inevitable theories about society; it's something I think about too.

I do have a pet peeve, though, about the following claim:

> Expected values were being used by the authors inappropriately (that is, without data to inform the probability estimates).

Let's consider a very short argument for strong longtermism (and a tractable way to influence the distant future by reducing x-risk):
- There is a lot of future ahead of us.
- The universe is large.
- Humans are fragile and the universe is harsh (most planets are not habitable for us (yet); we don't survive in most of space by default).

⇒ Therefore, the expected outcomes of your actions for the near future become rounding errors compared to the expected outcomes of making sure humanity survives.

All three of these points (while more might be necessary for a convincing case for longtermism) are very much informed by physical theories, which in turn have been informed by data about the world we live in (observing through a telescope, going to the moon)!
To illustrate:

- Had I been born in a universe where physicists were predicting with high degrees of certainty (through well-established theories, like thermodynamics in our world) that the universe (all of which is already inhabited) would face an inevitable heat death 1,000 years from now, then I would think the arguments for longtermism were weak, since they would not apply to the universe we live in.
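To make the "rounding error" point concrete, here is a toy back-of-the-envelope comparison (all numbers below are invented purely for illustration, not estimates from the post or from me):

```python
# Toy expected-value comparison for the "rounding error" claim.
# All numbers are made up purely for illustration.

near_term_people_helped = 1e4    # hypothetical payoff of a near-term intervention
future_people_at_stake = 1e16    # hypothetical number of future people if humanity survives
delta_p_survival = 1e-6          # hypothetical tiny increase in survival probability from your action

ev_near_term = near_term_people_helped
ev_longtermist = future_people_at_stake * delta_p_survival

print(f"near-term EV:   {ev_near_term:.0e} people helped")
print(f"longtermist EV: {ev_longtermist:.0e} expected people helped")
# With these inputs the longtermist term is a million times larger,
# which is the sense in which near-term outcomes become rounding errors.
```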


I am not convinced by your arguments around epistemology, and I don't understand your fascination with Popper. Popper's philosophy seems more like an informal way to make Bayesian updates, and you did not provide sufficient evidence to convince me otherwise. While I agree that rigid Bayesianism has flaws, my current best guess points toward more subjectivism, not less.
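A minimal sketch of what I mean (my own gloss, not something from the post): Popperian falsification can be read as the limiting case of Bayes' rule in which the evidence is impossible under the hypothesis:

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}, \qquad P(E \mid H) = 0 \;\Rightarrow\; P(H \mid E) = 0 \quad (\text{assuming } P(E) > 0).$$

Everything short of outright refutation then corresponds to the ordinary case where $P(E \mid H)$ is small but nonzero, so the hypothesis is merely down-weighted rather than discarded.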
 
