Milan Griffes (maɪ-lɪn) is a community member who comes and goes - and so he's a reliable source of very new ideas. He used to work at GiveWell, but left to explore ultra-neglected causes (psychedelics for mental health and speculative moral enhancement) and, afaict, also because he takes cluelessness unusually seriously, which makes it hard to be a simple analyst.
He's closer to EA-adjacent thinkers like David Pearce, Ben Hoffman, Andres Gomez Emilsson, and Tyler Alterman, who don't glom with EA for a bunch of reasons - chiefly weirdness or principles or both.
Unlike most critics, he has detailed first-hand experience of the EA heartlands. For years he has tried to explain his disagreements, but they haven't landed - mostly (I conjecture) because of his style, but plausibly also because of an inferential distance it's important for us to bridge.
He just put up a very clear list of possible blindspots on Twitter:
I think EA takes some flavors of important feedback very well but it basically can't hear other flavors of important feedback [such as:]
1. basically all of @algekalipso's stuff [Gavin: the ahem direct-action approach to consciousness studies]
2. mental health gains far above baseline as an important x-risk reduction factor via improved decision-making
3. understanding psychological valence as an input toward aligning AI
4. @ben_r_hoffman's point about seeking more responsibility implying seeking greater control of others / harming ability to genuinely cooperate
5. relatedly, how paths towards realizing the Long Reflection are most likely totalitarian
6. embodied virtue ethics and neo-Taoism as credible alternatives to consequentialism that deserve seats in the moral congress
7. metaphysical implications of the psychedelic experience, esp. N,N-DMT and 5-MeO-DMT
8. general importance of making progress on our understanding of reality, a la Dave. (Though EA is probably reasonably sympathetic to a lot of this tbh)
9. consequentialist cluelessness being a severe challenge to longtermism
10. nuclear safety being as important as AI alignment and plausibly contributing to AI risk via overhang
11. EA correctly identifies improving institutional decision-making as important but hasn't yet grappled with the radical political implications of doing that
12. generally too sympathetic to whatever it is we're pointing to when we talk about "neoliberalism"
13. burnout & lack of robust community institutions actually being severe problems with big knock-on effects; @ben_r_hoffman has written some on this
14. declining fertility rate being concerning globally and also a concern within EA (its implications for long-run movement health)
15. @MacaesBruno's virtualism stuff about LARPing being what America is doing now, and the implications of that for effective (political) action
16. taking dharma seriously a la @RomeoStevens76's current research direction
17. on the burnout & institution stuff, way more investment in the direction of @utotranslucence's psych crisis stuff, and also investment in institutions further up the stack
18. bifurcation of the experience of elite EAs housed in well-funded orgs and plebeian EAs on the outside being real and concerning
19. worldview drift of elite EA orgs (e.g. @CSETGeorgetown, @open_phil) via mimesis being real and concerning
20. psychological homogeneity of folks attracted to EA (on the Big 5, Myers-Briggs, etc.) being curious and perhaps concerning re: EA's generalizability
21. relatedly, the "walking wounded" phenomenon of those attracted to rationality being a severe adverse selection problem
22. tendency towards image management (e.g. by @80000Hours, @open_phil) cutting against robust internal criticism of the movement; generally low support of internal critics (Future Fund grant-making could help with this but I'm skeptical)
[Gavin editorial: I disagree that most of these are not improving at all / are being wrongly ignored. But most should be thought about more, on the margin.
I think #5, #10, #13, #20 are important and neglected. I'm curious about #2, #14, #18. I think #6, #7, #12, #15 are wrong / correctly ignored. So a great hits-based list.]
Thanks for this - a good mix of ideas that are:
(a) well-taken, important, and indeed neglected by other EAs IMO (though I wouldn't say they're literally unhearable) -- #5, #13, #18, #20
(b) intriguing IMO and I want to hear more -- #10, #11, #16, #19
(c) actually taken into account just fine by EAs, I think -- #12, #14, #22
(d) just wrong / correctly ignored IMO -- #2, #3, #6, #7, #9
(e) nonsensical... at least to me -- #4, #15, #21
(f) not something I know enough about to comment on, but also not something I think I have reason to prioritize looking into further (I can't look into everything) -- #1, #8, #17
Though I guess any good list would include a combination of all six. And of course I could be the wrong one!
I'd particularly like to hear more about "nuclear safety being as important as AI alignment and plausibly contributing to AI risk via overhang", as I think this could change my priorities if true. Is the idea just that nuclear weapons are a particularly viable attack vector for a hostile AI?
(b) intriguing IMO and I want to hear more -- #10, #11, #16, #19
10. nuclear safety being as important as AI alignment and plausibly contributing to AI risk via overhang
See discussion in this thread.
11. EA correctly identifies improving institutional decision-making as important but hasn't yet grappled with the radical political implications of doing that
This one feels like it requires substantial unpacking; I'll probably expand on it further at some point.
Essentially the existing power structure is composed of organizations (mostly l...