EDIT: Due to the incoming administration's ties to tech investors, I no longer think an AI crash is so likely. Several signs IMHO point to "they're gonna go all-in on racing for AI, regardless of how 'needed' it actually is".
For more details on (the business side of) a potential AI crash, see recent articles on the blog Where's Your Ed At, whose author wrote the sorta-well-known post "The Man Who Killed Google Search".
For his AI-crash posts, start here and here and click on links to his other posts. Sadly, the author falls into the trap of "LLMs will never get to reasoning because they don't, like, know stuff, man", but luckily his core competencies (the business side, analyzing reporting) show why an AI crash could still very much happen.
Agreed. IMHO the only legitimate reason to make a list like this is to prep for researching and writing one or more response pieces.
(There's a question of who would actually read those responses, and correspondingly where they'd be published, but that's a key question that all persuasive-media-creators should be answering anyway.)
Is any EA group funding adult human intelligence augmentation? It seems broadly useful for lots of cause areas, especially research-bottlenecked ones like AI alignment.
Why hasn't e.g. OpenPhil funded this project? https://www.lesswrong.com/posts/JEhW3HDMKzekDShva/significantly-enhancing-adult-intelligence-with-gene-editing
Much cheaper, though still hokey, ideas that you should have already thought of at some point:
Maybe! I'm most interested in math because of its utility for AI alignment and because math (especially advanced math) is notoriously considered "hard" or "impenetrable" by many people (even people who otherwise consider themselves smart/competent). Part of that is probably lack of good math-intuitions (grokking-by-playing-with-concept, maths-is-about-abstract-objects, law-thinking, etc.).
I have a question.
IF:
THEN, would you prefer if I:
(Assuming this is for answering one question. Presumably, since multiple entries are allowed, I could duplicate this strategy for the other question, or even use a different one for each. But if I'm wrong about this, I'd also like to know that!)
I was almost too lazy to even write my post this year, please TLDR this setup and explain how I can receive money and social status and other personal gains thank you
I hereby request funding for more overwrought posts about the community's social life, as they are a cost-effective way to do this.
Agreed, with the caveat that people (especially those inexperienced with the media and/or the specific sub-issue they're being asked about) go in with decent prep. This is not the same as being cagey or reserved, which would probably lower the "momentum" of this whole thing and make change less likely. Yudkowsky, at some points, has been good at balancing "this is urgent and serious" with "don't froth at the mouth", and plenty of political activists work on this too. Ask for help from others!
Personal feelings: I thought Karnofsky was one of the good ones! He has opinions on AI safety, and I agree with most of them! Nooooooooooo!
Object-level: My mental model of the rationality community (and, thus, some of EA) is "lots of us are mentally weird people, which helps us do unusually good things like increasing our rationality, comprehending big problems, etc., but which also has predictable downsides."
Given this, I'm pessimistic that, in our current setup, we're able to attract the absolute "best and brightest and also most ethical and also most e...
Ah, thank you!
paraphrased: "morality is about the interactions that we have with each other, not about our effects on future people, because future people don't even exist!"
If that's really the core of what she said about that... yeah maybe I won't watch this video. (She does good subtitles for her videos, though, so I am more likely to download and read those!)
Agreed, I don't see many "top-ranking" or "core" EAs writing exhaustive critiques (posts, not just comments!) of these critiques. (OK, they would likely complain that they have better things to do with their time, and they often do, but I have trouble recalling any such pieces aside from (debatably) some of the responses to AGI Ruin / Death With Dignity.)
This also could've helped with other orgs over the years, where the "culture" stuff turned out to carry important signal. E.g. FTX, Leverage Research.