The Rationality and Effective Altruism communities have experienced wildly different trajectories in recent years. While EA has meetups at most major universities, is backed by a multi-billion-dollar foundation, has proliferated organisations to the point of confusion, and now even has its own media outlet, the rationality community had to struggle just to resurrect Less Wrong. LW finally seems to be on a positive trajectory, but the rationality community is still much less than it could have been. Its ideas have barely penetrated academia, there isn't a rationality conference, and there isn't even an organisation dedicated to growing the movement.
A large part of the reason is that the kinds of people who would actually do something and run these kinds of projects have been drawn into either EA or AI risk. While those communities benefit from this talent, I suspect that this effect has occurred to the point of killing the goose that lays the golden eggs (this is analogous to concerns about immigration to the Bay Area hollowing out local communities).
I find this concerning for the following reasons:
- The Less Wrong community has traditionally been a fantastic recruiting ground for EA. Companies often utilise multiple brands to target different audience segments, and this principle still applies even though LW and EA are separate.
- Many of the most prominent EAs consider AI safety the highest-priority cause area. LW has been especially effective as a source of AI safety researchers, and many of the initial ideas about AI safety originated here.
- EA has managed to hold unusually high epistemic standards and has been much more successful than the average movement at updating based on new evidence and avoiding ideological capture. LW has produced much of the common knowledge that has allowed this to occur. The rationality community also provides a venue for the development of advice related to health, productivity, personal development and social dynamics.
The failure of LW to fulfil its potential has made these gains much smaller than they could have been. I suspect that, as per the Pareto Principle, a small organisation promoting rationality might be far better than no organisation trying to promote it (CFAR focuses on individuals, not broader groups within society or society as a whole). At the very least, a small-scale experiment seems worthwhile. Even though there is a high chance that the intervention would have no discernible effect, as per Owen's Prospecting for Gold talk, the impacts in the tail could be extremely large, so the gamble seems worthwhile. I don't know exactly what such an organisation should do, but I imagine that there are a number of different approaches it could experiment with, at least some of which might plausibly be effective.
I do see a few potential risks with this project:
- This project wouldn't succeed without buy-in from the LW community. This requires people with sufficient credibility to pursue it at the expense of other opportunities, which incurs an opportunity cost if they do.
- Increasing the prominence of LW means that people less aligned with the community have access to more of its insights, so perhaps this would make it easier for someone unaligned to develop an AGI which turns out poorly.
Nonetheless, funding wouldn't have to be committed until it could be confirmed that suitable parties were interested, and the potential gains seem like they could justify the opportunity cost. Regarding the second point, I suspect that far more good actors would be created than bad ones, such that the net effect would be positive.
This post was written with the support of the EA Hotel.
I couldn't agree more.
I believe that rationality (incl. emotional intelligence etc.) is the key to a better version of mankind.
I expressed this in several LW posts / comments, e.g.:
https://www.lesswrong.com/posts/7maCtYTsrFhq4D3gK/what-is-being-done
https://www.lesswrong.com/posts/Qwi3zMnfduGHztWSu/rationalism-for-the-masses
I am looking for people to assist me in creating an online step-by-step guide to rationality.
Such a guide should start from zero and should be somewhat easier to access than LW.
More details in above LW posts.
I have many ideas / concepts around such a project and want to discuss them in some kind of workgroup or forum, whatever works best.
I will start my own thread about this here later, depending on feedback on this comment.
Thanks, Marcus.
Haha! Bullseye!
It actually was around October that I found clearerthinking.org by googling reason vs. emotion. I friended Spencer Greenberg on FB and asked him if there was some movement/community around this.
He advised me to check out RATIONALISM, LW and EA.
Just check my above posts if you please. I hope I find the time to post a new version of RATIONALITY FOR THE MASSES here soon...
What is your background? (i.e. why are you not like those LW folks?)
I mean: I am so relieved to get some positive feedback here, while LW gave me only ignorance and dismissal...