A single data point: At a party at EAG, I met a developer who worked at Anthropic. I asked for his p(DOOM), and he said 50%. He told me he was working on AI capabilities.
I inquired politely about his views on AI safety, and frankly he did not seem to have given the subject much thought. I do not recall making any joke about "selling out", but I may have asked what effect he thought his actions would have on X-risk.
I don't recall anyone else listening, so this was probably not the situation the OP is referring to.
The current norm is that people have a right not to engage with a subject. It looks to me like this post disagrees with that norm, based on the following quotes:
Bostrom: It is not my area of expertise, and I don’t have any particular interest in the question. I would leave to others...
pseudonym: ...this reflects terribly on Nick...
The last link is broken; it should probably point to:
https://forum.effectivealtruism.org/posts/ZQPig66wteqwbNHGh/ea-focusmate-group-announcement
Hi Fods12,
We read and discussed your critique over two sessions of the AISafety.com reading group. You raise many interesting points, and I found it worthwhile to give a paragraph-by-paragraph response to your critique. The recordings can be found here:
https://youtu.be/Xl5SMS9eKD4
https://youtu.be/lCKc_eDXebM
Best regards
Søren Elverlin
The Budapest Memorandum provided security assurances, not security guarantees. And I believe this war has already caused enough damage to Russia that we can't talk about it "getting away with" the invasion.
The destruction of the Russian military should be expected to make the world safer primarily because it will prevent future Russian aggression.