https://applieddivinitystudies.com/2020/09/05/rationality-winning (a)
Excerpt:
So where are all the winners?
The people that jump to mind are Nick Bostrom (Oxford Professor of Philosophy, author), Holden Karnofsky and Elie Hassenfeld (run OpenPhil and GiveWell, directing ~300M in annual donations) and Will MacAskill (Oxford Professor of Philosophy, author).
But somehow that feels like cheating. We know rationalism is a good meme, so it doesn’t seem fair to cite people whose accomplishments are largely built off of convincing someone else that rationalism is important. They’re successful, but at a meta-level, only in the same way Steve Bannon is successful, and to a much lesser extent.
And this, from near the end:
The primary impacts of reading rationalist blogs are that 1) I have been frequently distracted at work, and 2) my conversations have gotten much worse. Talking to non-rationalists, I am perpetually holding myself back from saying "oh yes, that’s just the thing where no one has coherent meta-principles" or "that’s the thing where facts are purpose-dependent". Talking to rationalists is not much better, since it feels less like a free exchange of ideas, and more like an exchange of "have you read post?"
There are some specific areas where rationality might help, like using Yudkowsky’s Inadequate Equilibria to know when it’s plausible to think I have an original insight that is not already "priced into the market", but even here, I’m not convinced these beat out specific knowledge. If you want to start a defensible monopoly, reading about business strategy or startup-specific strategy will probably be more useful than trying to reason about "efficiency" in a totally abstract sense.
And yet, I will continue reading these blogs, and if Slate Star Codex ever releases a new post, I will likely drop whatever I am doing to read it. This has nothing to do with self-improvement or "systematized winning".
It’s solely because weird blogs on the internet make me feel less alone.
I like this comment. To respond to just a small part of it:
I've also only read the excerpt, not the full post. There, the author seems to exclude/discount as 'winning' only the act of convincing others of rationalism, not of AI risk worries.
I had interpreted this exclusion/discounting as motivated by something like a worry about pyramid schemes. If the only way rationalism made one systematically more likely to 'win' was by making one better at convincing others of rationalism, then that 'win' wouldn't provide any real value to the world; it could make the convincers rich and high-status, but only by profiting off of something like a pyramid scheme.
This would be similar to a person writing a book or teaching a course on how to get rich quick, when that person seems to have gotten rich quick only via those books or courses.
(I think the same thing would maybe be relevant with regard to convincing people of AI risk worries, if those worries were unfounded. But my view is that the worries are well-founded enough to warrant attention.)
But I think that, if rationalism makes people systematically more likely to 'win' in other ways as well, then convincing others of rationalism: