https://applieddivinitystudies.com/2020/09/05/rationality-winning
Excerpt:
So where are all the winners?
The people that jump to mind are Nick Bostrom (Oxford Professor of Philosophy, author), Holden Karnofsky and Elie Hassenfeld (run OpenPhil and GiveWell, directing ~300M in annual donations) and Will MacAskill (Oxford Professor of Philosophy, author).
But somehow that feels like cheating. We know rationalism is a good meme, so it doesn’t seem fair to cite people whose accomplishments are largely built off of convincing someone else that rationalism is important. They’re successful, but at a meta-level, only in the same way Steve Bannon is successful, and to a much lesser extent.
And this, from near the end:
The primary impacts of reading rationalist blogs are that 1) I have been frequently distracted at work, and 2) my conversations have gotten much worse. Talking to non-rationalists, I am perpetually holding myself back from saying "oh yes, that’s just the thing where no one has coherent meta-principles" or "that’s the thing where facts are purpose-dependent". Talking to rationalists is not much better, since it feels less like a free exchange of ideas, and more like an exchange of "have you read post?"
There are some specific areas where rationality might help, like using Yudkowsky’s Inadequate Equilibria to know when it’s plausible to think I have an original insight that is not already "priced into the market", but even here, I’m not convinced these beat out specific knowledge. If you want to start a defensible monopoly, reading about business strategy or startup-specific strategy will probably be more useful than trying to reason about "efficiency" in a totally abstract sense.
And yet, I will continue reading these blogs, and if Slate Star Codex ever releases a new post, I will likely drop whatever I am doing to read it. This has nothing to do with self-improvement or "systematized winning".
It’s solely because weird blogs on the internet make me feel less alone.
[I only read the excerpts quoted here, so apologies if this remark is addressed in the full post.]
I think there's likely something to the author's observation, and I appreciate their frankness about why they think they engage with rationalist content. (I'd also guess they're far from alone in acting partly on this motivation.)
However, if we believe (as I think we should) that there is a non-negligible existential risk from AI this century, then the excerpt sounds too negative to me.
(Actually, maybe you don't need to believe in AI risk, as similar remarks apply to EA in general: While the momentum from GiveWell and the Oxford community may well have sufficed to get some sort of EA movement off the ground, it seems clear to me that the rationality community had a significant impact on EA's trajectory. Again, it's not obvious, but at least plausible, that there are some big wins hidden in that story.)
Are these 'winners' rare? Yes, but big wins are rare in general. Are 'rationalist winners' rarer than we'd predict based on some prior distribution of success for a reference population? I don't know. Are there ways the rationality community could improve to increase its chances of producing winners? Very likely yes, but again I think that's the answer you should expect in general. My intuitive guess is that the rationality community tends to be worse than typical at some winning-relevant things (e.g. perhaps modeling and engaging in 'political'/power dynamics) and better at others (e.g. perhaps anticipating low-probability catastrophes), and I feel fairly unsure how this comes out on net.
(For disclosure, I say all of this as someone who, I suspect, is more skeptical/negative about the rationality community than the typical EA, and who is certainly somewhat personally alienated and sometimes annoyed by parts of it.)
I like this comment. To respond to just a small part of it:
I've also only read the excerpt, not the full post. There, the author seems to exclude/discount as 'winning' only the act of convincing others of rationalism, not of AI risk worries.
I had interpreted this exclusion/discounting as motivated by something like a worry about pyramid schemes. If the only way rationalism made one systematically more li…