The Rationality and Effective Altruism communities have experienced wildly different trajectories in recent years. While EA has meetups at most major universities, is backed by a multi-billion-dollar foundation, has proliferated organisations to the point of confusion and now even has its own media outlet, the rationality community had to struggle just to resurrect Less Wrong. LW finally seems to be on a positive trajectory, but the rationality community is still much less than what it could have been. Its ideas have barely penetrated academia, there isn't a rationality conference and there isn't even an organisation dedicated to growing the movement.

A large part of the reason is that the kinds of people who would actually do something and run these kinds of projects have been drawn into either EA or AI risk. While these communities benefit from this talent, I suspect that this effect has occurred to the point of killing the goose that lays the golden eggs (this is analogous to concerns about immigration to the Bay Area hollowing out local communities).

I find this concerning for the following reasons:

  • The Less Wrong community has traditionally been a fantastic recruiting ground for EA. Companies often utilise multiple brands to target different audience segments, and this principle still applies even though LW and EA are separate.
  • Many of the most prominent EAs consider AI safety the highest-priority cause area. LW has been an especially effective source of AI safety researchers, and many of the initial ideas about AI safety were developed there.
  • EA has managed to hold unusually high epistemic standards and has been much more successful than the average movement at updating based on new evidence and avoiding ideological capture. LW has produced much of the common knowledge that has allowed this to occur. The rationality community also provides a venue for the development of advice related to health, productivity, personal development and social dynamics.

The failure of LW to fulfil its potential has made these gains much smaller than they could have been. I suspect that, as per the Pareto Principle, a small organisation promoting rationality might be far better than no organisation trying to promote it (CFAR focuses on individuals, not broader groups within society or society as a whole). At the very least, a small-scale experiment seems worthwhile. Even though there is a high chance that the intervention would have no discernible effect, as per Owen's Prospecting for Gold talk, the impacts in the tail could be extremely large, so the gamble seems worthwhile. I don't know exactly what such an organisation should do, but I imagine that there are a number of different approaches it could experiment with, at least some of which might plausibly be effective.

I do see a few potential risks with this project:

  • This project wouldn't succeed without buy-in from the LW community. This requires people with sufficient credibility to pursue it at the expense of other opportunities, which incurs an opportunity cost if they do.
  • Increasing the prominence of LW means that people less aligned with the community have access to more of its insights, so perhaps this would make it easier for someone unaligned to develop an AGI which turns out poorly.

Nonetheless, funding wouldn't have to be committed until it could be confirmed that suitable parties were interested, and the potential gains seem like they could justify the opportunity cost. Regarding the second point, I suspect that far more good actors will be created than bad actors, such that the net effect is positive.

This post was written with the support of the EA Hotel.

Comments

there isn't even an organisation dedicated to growing the movement

Things that are not movements:

  • Academic physics
  • Successful startups
  • The rationality community

They all need to grow to some extent, but they have a particular goal that is not generic 'growth'. Most 'movements' are primarily looking for something like political power, and I think that's a pretty bad goal to optimise for. It's the perennial offer to all communities that scale: "try to grab political power". I'm quite happy to continue being for something other than that.

Regarding the size of the rationality and EA communities right now, this doesn't really seem to me like a key metric? A more important variable is whether you have infrastructure that sustains quality at the scale the community is at.

  • The standard YC advice says the best companies stay small for a long time. An example of Paul Graham saying this is here; search for "I may be an extremist, but I think hiring people is the worst thing a company can do."
  • There are many startups that have 500 million dollars and 100 more employees than your startup, but don't actually have product-market fit and are going to crash next year. Whereas you might work for 5-10 years and then have a product that can scale to several billions of dollars of value. Again, scaling right now seems shiny and appealing, but it is often something you should fight against.
  • Regarding growth in the rationality community, I think a scientific field is a useful analogue. If I told you I'd started some new field and in the first 20 years had gotten a research group in every university, would that necessarily be good? Am I machine learning? Am I bioethics? I bet all the fields that hit the worst of the replication crisis experienced fast growth at some point in the past 50 years. Regardless of intentions, the infrastructure matters, and it's not hard to simply make the world worse.

Other thoughts: I agree that the rationality project has resulted in a number of top people working on AI x-risk, effective altruism, and related projects, and that the ideas produced a lot of the epistemic bedrock for the community to be successful at noticing important and new ideas. I am also sad there hasn't been better internal infrastructure built in the past few years. As Oli Habryka said downthread (amongst some other important points), the org I work at that built the new LessWrong (and AI Alignment Forum and EA Forum, which is evidence for your 'rationalists work on AI and EA claim' ;) ) is primarily trying to build community infrastructure.

Meta thoughts: I really liked the OP; it concisely brought up a relevant proposal and placed it clearly in the EA frame (Pareto principle, heavy-tailed outcomes, etc.).

The size of the rationality community hasn't been limited so much by quality concerns, as by lack of effort expended in growth.

I think it is easy to grow too early, and I think that many of the naive ways of putting effort into growth would be net negative compared to the counterfactual (somewhat analogous to a company that quickly makes 1 million when it might've made 1 billion).

Focusing on actually making more progress with the existing people, by building more tools for them to coordinate and collaborate, seems to me the current marginal best use of resources for the community.

(I agree that effort should be spent improving the community, I just think 'size' isn't the right dimension to improve.)

Added: I suppose I should link back to my own post on the costs of coordinating at scale.

How would you define a rationality project? I am working on psychological impediments to effective giving and how they can be overcome with Lucius Caviola at Oxford. I guess that can also be seen as a rationality project, though I am not quite sure how you would define that notion.

Previously, I ran several other projects which could be seen as rationality projects - I started a network for evidence-based policy, created a political bias test, and did work on argument-checking.

I am generally interested in doing more work in this space. In particular, I would be interested in doing work that relates to academic psychology and philosophy, which is rigorous, and which has a reasonably clear path to impact.

I think one sort of diffuse "project" that one can work on, alongside one's main project, is working to maintain and improve the EA community's epistemics, e.g. by arguing well and in good faith oneself, and by rewarding others who do that as well. I do agree that good epistemics are vital for the EA community.

Stefan linked to a Forum piece about a tool built by Clearer Thinking, but I wanted to use this post to link that organization specifically. They demonstrate one model for what a "rationality advocacy" organization could do. Julia Galef's Update Project is another, very different model (working closely with a few groups of high-impact people, rather than building tools for a public audience).

Given that the Update Project is semi-sponsored by the Open Philanthropy Project, and that the Open Philanthropy Project has also made grants to rationality-aligned orgs like CFAR, SPARC, and even the Center for Election Science (which I'd classify as an organization working to improve institutional decision-making, if the institution is "democracy"), it seems like EA already has quite an investment in this area.

casebash (and other commenters): What kind of rationality organization would you like to see funded which either does not exist or exists but has little to no EA funding? Alternatively, what would a world look like that was slightly more rational in the ways you think are most important?

I was referring specifically to growing the rationality community as a cause area.

Then I would suggest changing the title of the post. 'Rationality as a cause area' can mean many things besides 'growing the rationality community'.

Furthermore, some of the considerations you list in support of the claim that rationality is a promising cause area do not clearly support, and may even undermine, the claim that one should grow the rationality community. Your remarks about epistemic standards, in particular, suggest that one should approach growth very carefully, and that one may want to deprioritize growth in favour of other forms of community building.

Replace "growing" the rationality community with "developing" the rationality community. But that's a good point. It is worthwhile keeping in mind that the two are seperate. I imagine one of the first tasks of such a group would be figuring out what this actually means.

I also feel similarly. Thanks for writing this.

Points I would add:

-This organisation could focus on supporting local LessWrong groups (which CFAR isn't doing).

-This organisation could focus on biases whose reduction shifts people in a better direction, rather than just helping them go in the same direction faster. For example, reducing scope insensitivity seems like a robust way to make people more altruistic, whereas improving people's ability to make Trigger-Action Plans might simply accelerate the economy as a whole (which could be bad if you think that crunches are more likely than bangs and shrieks, as per Bostrom's terminology).

-The organisation might want to focus on theories with more evidence (ie. be less experimental than CFAR) to avoid spreading false memes that could be difficult to correct, as well as being careful about idea inoculations.

I think the whole thing has to go way beyond biases and the like.

You have to know how to pick up folks and make them stick.

All that LW stuff, as true as it may be, is well suited to chasing folks away.

Even the word "rationalism" (just like any other term ending in 'ism') has to be largely avoided, even if you are only aiming at innovators, let alone early adopters.

This marketing strategy is probably more critical than the content itself...

Maybe an alternative way to look at this is, why is rationality not more a part of EA community building? Rationality as a project likely can't stand on its own because it's not trying to do anything; it's just a bunch of like-minded folks with a similar interest in improving their ability to apply epistemology. The cases where the rationality "project" has done well, like building up resources to address AI risk, were more like cases where the project needed rationality for an instrumental purpose and then built up LW and CFAR in the service of that project. Perhaps EA can more strongly include rationality in that role as part of what it considers essential for training/recruiting in EA and building a strong community that is able to do the things it wants to do. This wouldn't really mean rationality is a cause area, more an aspect of effective EA community building.

Its ideas have barely penetrated academia, there isn't a rationality conference and there isn't even an organisation dedicated to growing the movement.

I think you can think of the new LessWrong organization as doing roughly that (though I don't think the top priority should be growth, but rather building infrastructure to make sure the community can grow productively and be productive). We are currently focusing on the online community, but we also did some things to improve the meetup system, are starting to run more in-person events, and might run a conference in the next year (right now we have the Secular Solstice, which I actually think complements existing conferences like EA Global quite well, and does a good job at a lot of the things you would want a conference to achieve).

I agree that it's sad that there hasn't been an org focusing on this for the last few years.

On the note of whether the ideas of the rationality community have failed to penetrate academia, I think that's mostly false. I think the ideas have probably penetrated academia more than the basics of Effective Altruism have. In terms of web-traffic and general intellectual influence among the intellectual elite, the sequences as well as HPMOR and Scott Alexander's writing have attracted significant attention and readership, and mostly continue doing so (as a Fermi estimate, I expect about 10x more people have read the LW sequences/Rationality: A-Z than have read Doing Good Better, and about 100x have read HPMOR). Obviously, I think we can do better, and I do think there is a lot of value in distilling/developing core ideas in rationality further and helping them penetrate academia and other intellectual hubs.

I do think that in terms of community-building, there has been a bunch of neglect, though I think overall in terms of active meetups and local communities, the rationality community is still pretty healthy. I do agree that on some dimensions there has been a decline, and would be excited about more people trying to put more resources into building the rationality community, and would be excited about collaborating and coordinating with them.

To give a bit of background in terms of funding, the new LessWrong org was initially funded by an EA Grant, and is currently being funded by a grant from BERI, Nick Beckstead and Matt Wage. In general, EA funders have been supportive of the project and I am glad for their support.

"In terms of web-traffic and general intellectual influence among the intellectual elite, the sequences as well as HPMOR and Scott Alexander's writing have attracted significant attention and readership, and mostly continue doing so" - I was talking more about academia than the blogosphere. Here, only AI safety has had reasonable penetration. EA has had several heavyweights in philosophy, plus FHI for a while and also now GPI.

Whether you count FHI as rationality or EA is pretty ambiguous. I think memetically FHI is closer to the transhumanist community, and a lot of the ideas that FHI publishes about are ideas that were discussed on SL4 and LessWrong before FHI published them in a more proper format.

Scott Alexander has actually gotten academic citations, e.g. in Paul Bloom's book Against Empathy (sadly I don't remember which article of his Bloom cites), and I get the impression a fair few academics read him.

Bostrom has also cited him in his papers.

I think it's really easy to make a case that funding the rationality community is a good thing to do. It's much harder to make a case that it's better to fund the rationality community than competing organizations. I'm sympathetic to your concerns, but I'm surprised that the reaction to this post is so much less critical than other "new cause area" posts. What have I missed?

I don't think you've missed anything in particular. But there is a difference between reaction to a post being "not critical" and being "enthusiastic".

My read on the comments so far is that people are generally "not critical" because the post makes few specific claims that could be proven wrong, but that people aren't very "enthusiastic" about the post itself; instead, people are using the comments to make their own suggestions on the original topic.

That said, it seems perfectly reasonable if the main result of a post is to kick off a discussion between people who have more information/more concrete suggestions!

I agree that LW has been a big part of keeping EA epistemically strong, but I think most of that is selection rather than education. It's not that reading LW makes you a much clearer thinker or more focused on truth; it's that only people who are that way to begin with decide to read LW, and they then get channeled to EA.

If that's true, it doesn't necessarily discredit rationality as an EA cause area, it just changes the mechanism and the focus: maybe the goal shouldn't be making everybody LW-rational, it should be finding the people that already fit the mold, hopefully teaching them some LW-rationality, and then channeling them to EA.

the rationality community is still much less than what it could have been

I couldn't agree more.

I believe that rationality (incl. emotional intelligence etc.) is the key to a better version of mankind.

I expressed this in several LW posts / comments, e.g.:

https://www.lesswrong.com/posts/7maCtYTsrFhq4D3gK/what-is-being-done

https://www.lesswrong.com/posts/Qwi3zMnfduGHztWSu/rationalism-for-the-masses

I am looking for people to assist me in creating an online step by step guide to

  • rationality
  • self reflection / empathy
  • emotional intelligence
  • brain debugging
  • reason vs. emotion
  • (low) self esteem

Such a guide should start from zero and should be somewhat easier to access than LW.

More details in above LW posts.

I have many ideas / concepts around such project and want to discuss them in some kind of workgroup or forum, whatever works best.

I will start a thread of my own about this here later, depending on feedback on this comment.

Thanks, Marcus.

I would be surprised to see much activity on a comment on a three month old thread. If you want to pursue this, I'd suggest writing a new post. Good luck, I'd love to see someone pursuing this project!

You can bet I will be pursuing this vision.

I only heard about LW / EA etc a few months ago.

I was VERY surprised no one has done it before. I basically only asked around to

(Now that I got a taste of the LW community I am a little less surprised, though... :-) )

The closest NGO I could find so far is InIn, but they still have a different focus.

And even this forum here was rather hidden...

Anyway:

Your response is the first/only ENCOURAGING one I got so far.

If you happen to remember anyone who was even only writing about this somewhere, let me know.

Yeah, InIn was the main attempt at this. Gleb was able to get a large number of articles published in news sources, but at the cost of quality. And some people felt that this would make people perceive rationality negatively, as well as drawing in people from the wrong demographic. I think he was improving over time, but perhaps too slowly?

PS. Have you seen this? https://www.clearerthinking.org

Haha! Bullseye!

It was actually around October that I found clearerthinking.org by googling "reason vs. emotion". I friended Spencer Greenberg on FB and asked him if there was some movement/community around this.

He advised me to check out RATIONALISM, LW and EA.

Just check my above posts if you please. I hope I find the time to post a new version of RATIONALITY FOR THE MASSES here soon...

What is your background? (ie. why are you not like those LW folks?)

I mean: I am so relieved to get some positive feedback here, while LW only gave me ignorance and disqualification...

I think rationality should not be considered a separate cause area, but perhaps deserves to be a sub-cause area of EA movement building and AI safety.

  1. It seems very unlikely that promoting rationality (and hoping some of those folks would be attracted to EA) is more effective than promoting EA in the first place.
  2. I am unsure whether it is more effective to grow the number of people interested in AI safety by promoting rationality or by directly reaching out to AI researchers (or other things one might do to grow the AI safety community).

Also, the post title is misleading since an interpretation of it could be that making people more rational is intrinsically valuable (or that due to increased rationality they would live happier lives). While this is likely true, this would probably be an ineffective intervention.

Part of my model is that there is decreasing marginal utility as you invest more effort in one form of outreach, so there can be significant benefit in investing small amounts of resources in alternative forms of outreach.

Thanks for explaining your views further! This seems about right to me, and I think this is an interesting direction that should be explored further.

Promoting effective altruism promotes rationality in certain domains. And, improving institutional decision making is related to improving rationality. But yeah, these don't cover everything in improving rationality.

perhaps this would make it easier for someone unaligned to develop an AGI which turns out poorly

Not the way I have figured out.

Again you seem to be too focussed on LW.

Of course, because there hardly is anything else out there.

But I started unbiasing in 1983 when most of those folks weren't even born yet.

It took me 30 years, but living rationality is a totally different thing from reading and writing about it!

Jeeez, can't wait to make this post...

This project wouldn't succeed without buy-in from the LW community.

I don't think LW will even be directly involved or of much support.

I want to buy-in / talk-in these guys:

https://www.youtube.com/watch?v=nKd2QVrQVIM

I guess you have heard of Simon Sinek, Denzel Washington... :-)

This video has 14M views and it is neither well produced nor really streamlined or dedicated!

But it dwarfs LW or anything around it.