withTheWind813
My answer might be useless, but I went through something similar. In my case it passed after a week; I was just temporarily overwhelmed.

I also hold the opinion that Eliezer Yudkowsky might be doing important work (I am not educated enough to know what that valuable work is), but I think he should stay away from the spotlight. He might seem like one of the key people within EA — he certainly seemed that way to me when I discovered EA — but he is awful at PR. There are plenty of skeptics around, and he is not the only person who represents EA.

On the other hand, maybe EY is a blessing in disguise: the fear he creates draws attention to AI safety and may actually help the field. I don't think there is any harm in exploring that area, after all.

However, also bear in mind that even the "senior" EAs are still people, and their opinions might mean nothing. As an example, Will MacAskill wrote: "I think existential risk this century is much lower than I used to think — I used to put total risk this century at something like 20%; now I’d put it at less than 1%" https://forum.effectivealtruism.org/posts/oPGJrqohDqT8GZieA/ask-me-anything?commentId=HcRNG4yhB4RsDtYit. Well, thanks a lot, Will, because that fear of a 20% chance of death almost pushed me into changing careers — a change that would have been pointless now, and I don't think Will would do anything to help me undo such a mistake once I had switched.