

In the context of a misaligned AI takeover, negotiating and making contracts with a misaligned AI in order to allow it to take over does not seem useful to me at all.

A misaligned AI that is in power could simply walk back any promises and ignore any contracts it agreed to. Humans could not do anything about it, because at that point they would have lost all their power.

I have seen very little discussion of these things in EA circles, and I don't know of a thorough investigation. Maybe some EAs have briefly thought about it but concluded that the cause is not as important/tractable/neglected as other causes, without doing a long writeup.

As for the extinction of a single species, I imagine that moral factors are also at play here. Many people (including me) consider the extinction of Homo sapiens to be much worse than the extinction of the wandering albatross (which I pulled from your linked list).

Could you give some examples of animal x-risks, and of what I could do about them? How much to prioritize an issue depends on these more concrete things, not just abstract considerations.

Also, do you have in mind extinction scenarios for a single species, or for all mammals, or for all non-human animal life?

It sounds like you think the other 19 employees of Nonlinear had the same arrangement (travel with them and be paid $12k/year). I doubt this is true. Probably many of the 19 are employed remotely.

"They got to pocket $12k/year into savings and live like a king."

Many people spend money on things besides rent, food, and travel, so this sounds exaggerated.

Do you read the "First they came for one EA leader" poem as ironic? When I read it, I saw it as an argument against "EA leader lynching" and as a request for people to speak up to protect EA leaders.

I think it is generally fine to use this poem in a joking manner (see the comment by Guy Raveh below), and I don't expect John G. Halstead to be against all repurposing of the Holocaust poem.

I haven't checked your sources on Twitter, because your link doesn't work for people without an account. But I don't consider random tweets a reliable source for what counts as insensitive anyway.

How much do you think a technical researcher should be paid during an AI safety fellowship? €3,000 per month does not sound like a lot to me.

(Actually, I think many AI safety researchers are paid a lot more than €3,000. My guess is that some at Anthropic might earn six times as much.)

You are replying to John's first comment on this article.

I think it is totally fine to comment on some of the things in a very long article without reading the whole article and its appendix.

What do you think of the Carrick Flynn campaign? It seems to be a case of "EA in politics". Would you like to see more attempts like it?

"she told me not to talk to Ben about it" still can be true (but misleading) under this hypothesis. In a section written as true but misleading, this does not seem to me like evidence against "she" referring to Kat in that sentence.
