Blogging and editing at [/nickai/](https://www.thinkingmuchbetter.com/nickai/). PM me your fluid-g-increasing ideas. (Formerly President/VP of the EA Club at RIT (NY, USA)).
Genderfluid (varies on an hour/day-ish timescale). It's not a multiple-personality thing.
Looking for opportunities to do technical and/or governance work in AI alignment/safety.
Can help with some level of technical analysis (and I'm improving!), strategic planning, operations management, social media marketing, and graphic design.
For more details on (the business side of) a potential AI crash, see recent articles on the blog Where's Your Ed At, home of the sorta-well-known post "The Man Who Killed Google Search".
For his AI-crash posts, start here and here, then click through to his other posts. Sadly, the author falls into the trap of "LLMs will never get to reasoning because they don't, like, know stuff, man", but his core competencies (business analysis, scrutinizing reporting) still show why an AI crash could very much happen.
Agreed. IMHO the only legitimate reason to make a list like this is to prep for researching and writing one or more response pieces.
(There's a question of who would actually read those responses, and correspondingly where they'd be published, but that's something all persuasive-media creators should be answering anyway.)
Is any EA group funding adult human intelligence augmentation? It seems broadly useful for lots of cause areas, especially research-bottlenecked ones like AI alignment.
Why hasn't e.g. OpenPhil funded this project? https://www.lesswrong.com/posts/JEhW3HDMKzekDShva/significantly-enhancing-adult-intelligence-with-gene-editing
EDIT: Due to the incoming administration's ties to tech investors, I no longer think an AI crash is so likely. Several signs IMHO point to "they're gonna go all-in on racing for AI, regardless of how 'needed' it actually is".