How do you feel retrospectively about the change to lower community engagement on the EA forum? Do you have any numbers about activity/engagement since then?
Thanks! It’s okay. This is a very touchy subject and I wrote a strongly opinionated piece so I’m not surprised. I appreciate it.
I’m going to go against the grain here, and explain how I truly feel about this sort of AI safety messaging.
As others have pointed out, fearmongering on this scale is absolutely insane to those who don't assign a high probability to doom. Worse, Eliezer is calling for literal nuclear strikes and great-power war to stop a threat that isn't even provably real! Most AI researchers do not share his views, and neither do I.
I want to publicly state that pushing this maximized narrative about AI x-risk will lead to terrorist actions against GPU clusters or individuals involved in AI. These kinds of acts follow naturally from the intense beliefs of those who agree with Eliezer and share a doomsday-cult style of thought.
Not only will that sort of behavior discredit AI safety and potentially EA entirely, it could hand the future to other actors or cause governments to lock down AI for themselves, making outcomes far worse.
I strongly disagree with sharing this outside rationalist/EA circles, especially with people who don't know much about AI safety or x-risk. I think it could drastically shift someone's opinion of Effective Altruism if they're new to the idea.
Hey man, I respect that. Clearly people like your post so keep it up, just my personal preference.
Like I said I absolutely agree with your points here.
Strongly agree with the premise but not a fan of your writing style here. If you could define “smart” and “wise” better, and maybe rely less on personal anecdotes, I think this post might be more persuasive overall.
Thanks for the thought-out response! I suppose the main difference is that we have very divergent ideas of what the EA community is and what it will/should become.
I've been on the fringe of EA for years, just learning about concepts and donating, but I've never been part of the tighter group, so to speak. I see EA as a question: how do we do the most good with the resources available?
Poly is definitely something historically tied to the early movement, but I just disagree that the trade-off (reputational damage and attacks over sexual harassment issues, etc.) is positive when weighed against vague notions of fun.
Also, if the EA community creates massive burnout, maybe we should change the way we approach our communications and epistemics instead of accepting that and saying we'll cope by having casual sex. That doesn't seem like a good road to go down, especially long term.
Then again I don’t have short AI timelines.
I’m concerned that less than 90% of the AI safety community would agree. I have heard some disturbing anecdotes.
Thanks for posting this retrospective! I’m curious about the quiz after fellowships - is that available anywhere?
True… but as soon as the wrong group catches wind of this, it could become a powerful meme for demonizing this sort of thinking.
In the spirit of weirdness points, it may be better not to be too blatant about the fringes of animal welfare arguments until public consensus has shifted further. Perhaps I'm being too pessimistic; full disclosure, I do not find insect welfare a compelling line of reasoning.