All of oge's Comments + Replies

A case for developing Aldehyde Stabilized Cryopreservation into a medical procedure (1/4)

That's about the total annual cost of preserving a brain and spinal cord under an Alcor cryonics contract. I'm assuming that the price paid while the patient is alive is roughly the same as the cost of preservation after death.

A case for developing Aldehyde Stabilized Cryopreservation into a medical procedure (1/4)

I estimate it'll cost at least $1,000/yr to preserve a brain. That's about the cost of maintaining a family at global poverty levels.
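
(One way to ballpark this, using assumed numbers rather than figures from the contract itself: Alcor reportedly sets aside roughly $25,000 of its ~$80,000 neuropreservation fee for long-term patient care; at a sustainable draw of about 4%/yr, that's $25,000 × 0.04 ≈ $1,000/yr.)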

I should have posted such calculations before posting the excerpts. Thanks for your comments.

1 Emanuele_Ascani, 4y
Interesting! How did you arrive at the $1,000/yr figure?
A case for developing Aldehyde Stabilized Cryopreservation into a medical procedure (1/4)

I posted the story to let folks know of a possible altruistic target: letting people live as long as they want by vitrifying their nervous systems for eventual resuscitation.

8 Ben Millwood, 4y
There are many, many possible altruistic targets. I think to be suitable for the EA forum, a presentation of an altruistic goal should include some analysis of how it compares with existing goals, or what heuristics lead you to believe it's worthy of particular attention.
A case for developing Aldehyde Stabilized Cryopreservation into a medical procedure (1/4)

Pure chemical fixation without cooling would be ideal. The extra cryopreservation step is necessary since glutaraldehyde only fixes tissue for months rather than centuries.

0 turchin, 4y
I think an actually good step in the EA direction would be to find a relatively cheap combination of chemicals that provides fixation for a longer term, or perhaps to preserve brain slices (as Lenin's brain was preserved). I am interested in writing something about cryonics as a form of EA, but the main problem here is price. The starting price of a funeral in the UK is £4,000, and funerals are not much cheaper in poor countries. Cryonics needs to be cheaper to be successful and affordable.
Prioritization Consequences of "Formally Stating the AI Alignment Problem"

Thanks, Gordon.

"Make nice AI people we can believe are nice" makes sense to me; I hadn't been aware of the "...we can believe are nice" requirement.

Why I prioritize moral circle expansion over artificial intelligence alignment

Thank you for providing an abstract for your article. I found it very helpful.

(and I wish more authors here would do so as well)

Prioritization Consequences of "Formally Stating the AI Alignment Problem"

Hi Gordon, I don't have accounts on LW or Medium so I'll comment on your original post here.

If possible, could you explain like I'm five what your working definition of the AI alignment problem is?

I find it hard to prioritize causes that I don't understand in simple terms.

1 G Gordon Worley III, 4y
I think the ELI5 on AI alignment is the same as it has been: make nice AI. Being a little more specific, I like Russell's slightly more precise formulation, "align AI with human values", and being even more specific (without jumping to mathematical notation), I'd say we want to design AIs that value what humans value, and for us to be able to believe that these AIs share our values. Maybe the key thing I'm trying to get at, though, is that alignable AI will be phenomenally conscious, or in ELI5 terms, as much a person as anything else (humans, animals, etc.). So my position is not just "make nice AI" but "make nice AI people we can believe are nice".
A generalized strategy of ‘mission hedging’: investing in 'evil' to do more good

Small nit: the links in the table of contents lead to a Google Doc, rather than to the body of the article.

Other than that, I love the article. Thanks for the giant disclaimer ;)

Talent gaps from the perspective of a talent limited organization.

Hi Joey, how can one apply for Charity Science's tech lead position? The link on your jobs page just goes to a GitHub repo.

1 Joey, 5y
Great to see you're so keen. The job ad wasn't yet finished or public; it is now public and attached to that page.
Is the community short of software engineers after all?

FYI I applied to New Incentives with ~10 yrs experience on the 1st of September. Haven't heard back.

3 Benjamin_Todd, 6y
Startups value fit with the team very highly, so it's always very hard to predict who they'll be interested in. Programming skill / experience is only one component.
Is there a hedonistic utilitarian case for Cryonics? (Discuss)

I find the argument "I'm so afraid of dying and believe in cryonics so much that signing up for cryonics would end many of my worries and let me be far more productive" kind of humorous.

Hey Ozzie, could you explain why you find it humorous? Full disclosure: I'm in the cryo camp and I'd like to learn how to explain my beliefs to others in future.

0 WilliamKiely, 6y
(Note: I found this old thread after Eliezer recently shared this Wait But Why post on his Facebook: Why Cryonics Makes Sense [http://waitbutwhy.com/2016/03/cryonics.html])

I don't find this argument humorous, but I do see it as perhaps the most plausible defense of cryonics from an EA perspective. That said, I don't think the argument succeeds for me or, I would presume, a large majority of other people. The exceptions would tend to be very high producers, for whom even a very small percentage increase in the good they produce would outweigh the cost of signing up. They would not tend to be people so exceptionally afraid of death, and so taken with the idea of waking up in the distant future and living longer, that forgoing cryonics would be debilitating, i.e. a hindrance on their productivity (say, from feeling depressed and being unable to concentrate on EA work while knowing the option exists) large enough to outweigh the cost of signing up. So I don't see cryonics as being very defensible from an EA perspective.
EA Ventures Request for Projects + Update

I'd like to help with the evaluations. My e-mail is oge@nnadi.org

BTW, in Section 2 of the RFP page, the headings should use the singular "criterion 1" rather than "criteria 1" (see http://dictionary.reference.com/browse/criteria?s=t).