
Wil Perkins

872 karma · Joined Oct 2022

Comments (114)

Yeah, these criticisms are fair; my comment was made hastily and in poor taste. I've deleted it.

Thanks for the clarification! I probably should've read in more depth before commenting; I was just viscerally shocked to see all of these social-media-style pictures featured so prominently at the beginning of the post.

Deleting this comment after some fair criticism. 

[This comment is no longer endorsed by its author]

How do you feel, retrospectively, about the change to lower community engagement on the EA Forum? Do you have any numbers on activity/engagement since then?

Thanks! It’s okay. This is a very touchy subject and I wrote a strongly opinionated piece so I’m not surprised. I appreciate it.

I’m going to go against the grain here, and explain how I truly feel about this sort of AI safety messaging.

As others have pointed out, fearmongering on this scale looks absolutely insane to anyone who doesn’t already assign a high probability to doom. Worse, Eliezer is calling for literal nuclear strikes and great-power war to stop a threat that isn’t even provably real. Most AI researchers do not share his views, and neither do I.

I want to publicly state that pushing this maximalist narrative about AI x-risk will lead to terrorist actions against GPU clusters or against individuals involved in AI. Acts like these follow naturally from the intense beliefs of those who agree with Eliezer and share a doomsday-cult style of thought.

Not only would that sort of behavior discredit AI safety, and potentially EA entirely, it could hand the future to other actors or push governments to lock down AI for themselves, making outcomes far worse.

I strongly disagree with sharing this outside rationalist/EA circles, especially with people who don’t know much about AI safety or x-risk. I think it could drastically shift someone’s opinion of Effective Altruism if they’re new to the idea.

Hey man, I respect that. Clearly people like your post, so keep it up; this is just my personal preference.

Like I said, I absolutely agree with your points here.

Strongly agree with the premise, but I’m not a fan of your writing style here. If you could define “smart” and “wise” more precisely, and maybe rely less on personal anecdotes, I think this post would be more persuasive overall.

Thanks for the thought-out response! I suppose the main difference is that we have very divergent ideas of what the EA community is and what it will/should become.

I’ve been on the fringe of EA for years, learning about the concepts and donating, but I’ve never been part of the tighter group, so to speak. I see EA as a question: how do we do the most good with the resources available?

Polyamory is definitely something historically tied to the early movement, but I just disagree that the trade-off is positive: reputational damage and attacks over sexual-harassment issues don’t seem worth vague notions of fun.

Also, if the EA community creates massive burnout, maybe we should change the way we approach our communications and epistemics instead of accepting it and saying we’ll cope by having casual sex. That doesn’t seem like a good road to go down, especially long term.

Then again, I don’t have short AI timelines.
