Reading this quickly on my lunch break: it seems accurate to most of my core points. Not how I'd phrase them, but maybe that's to be expected(?)
Agreed. IMHO the only legitimate reason to make a list like this is to prep for researching and writing one or more response pieces.
(There's a question of who would actually read those responses, and correspondingly where they'd be published, but that's a key question that all persuasive-media-creators should be answering anyway.)
Yeah, I get that; I mean specifically the weird, risky, hardcore projects. (Hence specifying "adult", since that's both harder and potentially more necessary under e.g. short/medium AI timelines.)
Is any EA group funding adult human intelligence augmentation? It seems broadly useful for lots of cause areas, especially research-bottlenecked ones like AI alignment.
Why hasn't e.g. OpenPhil funded this project? https://www.lesswrong.com/posts/JEhW3HDMKzekDShva/significantly-enhancing-adult-intelligence-with-gene-editing
Much cheaper, though still hokey, ideas that you should have already thought of at some point:
Maybe! I'm most interested in math because of its utility for AI alignment and because math (especially advanced math) is notoriously considered "hard" or "impenetrable" by many people (even people who otherwise consider themselves smart/competent). Part of that is probably lack of good math-intuitions (grokking-by-playing-with-concept, maths-is-about-abstract-objects, law-thinking, etc.).
Yeah, we'd hope there's a good bit of existing pedagogy that applies to this. Not much stood out to me, but maybe I haven't looked hard enough at the field.
We ought to have a new word, besides "steelmanning", for "I think this idea is bad, but it made me think of another, much stronger idea that sounds similar, and I want to look at that idea now and ignore the first idea and probably whoever was advocating it".
This post/cause seems sorely underrated; e.g. what org exists that someone can donate to for mass case detection? It has such a high potential lives-saved-per-$1,000!
OK, thanks! Also, after more consideration and object-level thinking about the questions, I will probably write a good bit of prose anyway.
I have a question.
IF:
THEN, would you prefer if I:
(Assuming this is for answering one question. Presumably, since multiple entries are allowed, I could duplicate this strategy for the other question, or even use a different one for each. But if I'm wrong about this, I'd also like to know that!)
I hereby request funding for more overwrought posts about the community's social life, as they are a cost-effective way to do this.
Way ahead of you, but 6 months of stimulants cost less than a catered dinner--only a few hundred thousand dollars.
And League is impossible! It is so hard! How do people work hard to accomplish things the normal way?
This is interesting, but I'm not sure I'll have the time to listen to it. Maybe make transcripts of these audio versions?
I want to ask for a source, but I'm not sure how to source this (maybe like an FLI tax form?). Where did that news outlet's document come from? Did they make it up? EDIT: nvm, found their actual statement.
Agreed, with the caveat that people (especially those inexperienced with the media and/or the specific sub-issue they're being asked about) should go in with decent prep. This is not the same as being cagey or reserved, which would probably lower the "momentum" of this whole thing and make change less likely. Yudkowsky, at some points, has been good at balancing "this is urgent and serious" with "don't froth at the mouth", and plenty of political activists work on this too. Ask for help from others!
Personal feelings: I thought Karnofsky was one of the good ones! He has opinions on AI safety, and I agree with most of them! Nooooooooooo!
Object-level: My mental model of the rationality community (and, thus, some of EA) is "lots of us are mentally weird people, which helps us do unusually good things like increasing our rationality, comprehending big problems, etc., but which also has predictable downsides."
Given this, I'm pessimistic that, in our current setup, we're able to attract the absolute "best and brightest and also most ethical and also most e...
I think a common maladaptive pattern is to assume that the rationality community and/or EA is unusually good at "increasing our rationality, comprehending big problems", and I really, really, really doubt that "the most 'epistemically rigorous' people are writing blog posts".
I think I agree with both of these, actually: EA needs unusually good leaders, possibly better than we can even expect to attract.
(Compare EA with, say, being an elite businessperson or politician or something.)
Ah, thank you!
paraphrased: "morality is about the interactions that we have with each other, not about our effects on future people, because future people don't even exist!"
If that's really the core of what she said about that... yeah maybe I won't watch this video. (She does good subtitles for her videos, though, so I am more likely to download and read those!)
Agreed: I don't see many "top-ranking" or "core" EAs writing exhaustive critiques (posts, not just comments!) of these critiques. (OK, they would likely complain that they have better things to do with their time, and they often do, but I have trouble recalling any aside from (debatably) some of the responses to AGI Ruin / Death With Dignity.)
Agreed. When people require literally everything to be written in the same place by the same author/small group, it disincentivizes writing potentially important posts.
Strong agree with most of these points; the OP seems to not... engage with the object level of some of its proposed changes. Like, not proportionally to how big each change is or how good the authors think it is or anything?
Reminder for many people in this thread:
"Having a small clique of young white STEM grads creates tons of obvious blindspots and groupthink in EA, which is bad."
is not the same belief as
"The STEM/techie/quantitative/utilitarian/Pareto's-rule/Bayesian/"cold" cluster-of-approaches to EA, is bad."
You can believe both. You can believe neither. You can believe just the first one. You can believe just the second one. They're not the same belief.
I think the first one is probably true, but the second one is probably false.
Thinking the first belief is true is nowhere ne...
Who should do the audit? Here are some criteria I think could help:
I'm a Definooooor! I'm gonna Defiiiiiiine! AAAAAAAAAAAAAAAA