Value of movement growth

This tag is meant for posts that discuss questions about how large the movement should grow, how quickly, and why, rather than posts that only cover strategies to bolster movement growth (for such posts, see the building effective altruism tag). Posts that discuss growth strategies in light of potential downsides of growth could fit this tag.

It may seem that, in order for the effective altruism movement to do as much good as possible, the movement should aim to grow as much as possible. However, there are risks to rapid growth that may be avoidable if the movement grows more slowly and deliberately.

building effective altruism | community building strategy | global outreach | movement collapse | network building

It could be good for this entry to summarise or draw on some of the following points Max Daniel made in a comment about key uncertainties relevant to the grantmaking of the EA Infrastructure Fund:

  • How can we structure the EA community in such a way that it can 'absorb' very large numbers of people while also improving the allocation of talent or other resources?
    • I am personally quite unsatisfied with many discussions and standard arguments around "how much should EA grow?" etc. In particular, I think the way to mitigate potential negative effects of too rapid or indiscriminate growth might not be "grow more slowly" or "have a community of uniformly extremely high capability levels" but instead: "structure the community in such a way that selection/screening and self-selection push toward a good allocation of people to different groups, careers, discussions, etc.".
    • I find it instructive to compare the EA community to pure maths academia, and to large political parties.
      • Making research contributions to mature fields of pure maths is extremely hard and requires highly unusual levels of fluid intelligence compared to the general population. Academic careers in pure maths are extremely competitive (in terms of, e.g., the fraction of PhDs who'll become tenured professors). A majority of mathematicians will never make a breakthrough research contribution, and will never teach anyone who makes a breakthrough research contribution. But in my experience mathematicians put much less emphasis on only recruiting the very best students, or on only teaching maths to people who could make large contributions, or on worrying about diluting the discipline by growing too fast or ... And while perhaps in a sense they put "too little" weight on this, I also think they don't need to put as much weight on this because they can rely more on selection and self-selection: a large number of undergraduates start, but a significant fraction will just realize that maths isn't for them and drop out, ditto at later stages; conversely, the overall system has mechanisms to identify top talent and allocate it to the top schools etc.
        • Example: Srinivasa Ramanujan was, by some criteria, probably the most talented mathematician of the 20th century, if not more. It seems fairly clear that his short career was only possible because (1) he went to a school that taught everyone the basics of mathematics and (2) later he had access to (albeit perhaps 'mediocre') books on advanced mathematics: "In 1903, when he was 16, Ramanujan obtained from a friend a library copy of A Synopsis of Elementary Results in Pure and Applied Mathematics, G. S. Carr's collection of 5,000 theorems. Ramanujan reportedly studied the contents of the book in detail. The book is generally acknowledged as a key element in awakening his genius."
        • I'm not familiar with Carr, but the brevity of his Wikipedia article suggests that, while he taught at Cambridge, probably the only reason we remember Carr today is that he happened to write a book which happened to be available in some library in India.
        • Would someone like Carr have existed, and would he have written his Synopsis, if academic mathematics had had an EA-style culture of fixating on the small fraction of top contributors while neglecting to build a system that can absorb people with Carr-levels of talent, and that consequently can cast a 'wide net' that exposes very large numbers of people to mathematics and an opportunity to 'rise through its ranks'?
      • Similarly, only a very small number of people have even a shot at, say, becoming the next US president. But it would probably still be a mistake if all local branches of the Democratic and Republican parties adopted an 'elitist' approach to recruitment and obsessed about only recruiting people with unusually good ex-ante chances of becoming the next president.
      • So it seems that even though these other 'communities' also face, along some metrics, very heavy-tailed ex-post impacts, they adopt a fairly different approach to growth, how large they should be, etc. - and are generally less uniformly and less overtly "elitist". Why is that? Maybe there are differences between these communities that mean their approaches can't work for EA.
        • E.g., perhaps maths relies crucially on there being a consensus of what important research questions are plus it being easy to verify what counts as their solution, as well as generally a better ability to identify talent and good work that is predictive of later potential. Maybe EA is just too 'preparadigmatic' to allow for something like that.
        • Perhaps the key difference for political parties is that they have higher demand for 'non-elite' talent - e.g., people doing politics at a local level and the general structural feature that in democracies there are incentives to popularize one's views to large fractions of the general population.
        • But is that it? I'm worried that we gave up too early, and that if we tried harder we'd find a way to create structures that can both accommodate higher growth and improve the allocation of talent (which doesn't seem great anyway) within the community, despite these structural challenges.

(I also provided some reflections/counterpoints in a reply.)

I think it would be good to integrate something like the following points from a Robin Hanson interview:

Robin Hanson:       There’s the crying wolf effect, and I’m particularly worried about it. For example, space colonization is a thing that could happen eventually. And for the last 50 years, there have been enthusiasts who have been saying, “It’s now. It’s now. Now is the time for space colonization.” They’ve been consistently wrong. For the next 50 years, they’ll probably continue to be consistently wrong, but everybody knows there’s these people out there who say, “Space colonization. That’s it. That’s it.”

Whenever they hear somebody say, “Hey, it’s time for space colonization,” they go, “Aren’t you one of those fan people who always says that?” The field of AI risk kind of has that same problem where again today, but for the last 70 years or even longer, there have been a subset of people who say, “The robots are coming, and it’s all going to be a mess, and it’s now. It’s about to be now, and we better deal with it now.” That creates sort of a skepticism in the wider world that you must be one of those crazies who keep saying that.

That can be worse for when there really is, when we really do have the possibility of space colonization, when it is really the right time, we might well wait too long after that, because people just can’t believe it, because they’ve been hearing this for so long. That makes me worried that this isn’t a positive effect. Calling attention to a problem, like a lot of attention to a problem, and then having people experience it as not a problem, when it looks like you didn’t realize that.

Now, if you just say, “Hey, this nuclear power plant type could break. I’m not saying it will, but it could, and you ought to fix that,” that’s different than saying, “This pipe will break, and that’ll happen soon, and better do something.” Because then you lose credibility when the pipe doesn’t usually break.

Robert Long:        Just as a follow-up, I suppose the official line for most people working on AI safety is, as it ought to be, there’s some small chance that this could matter a lot, and so we better work on it. Do you have thoughts on ways of communicating that that’s what you actually think so that you don’t have this crying wolf effect?

Robin Hanson:       Well, if there are only the 100 experts, and not the 100,000 fans, this would be much easier. That does happen in other areas. There are areas in the world where there are only 100 experts and there aren’t 100,000 fans screaming about it. Then the experts can be reasonable and people can say, “Okay,” and take their word seriously, although they might not feel too much pressure to listen and do anything. You can say that about computer security today, for example: the public doesn’t scream a bunch about computer security.

The experts say, “Hey, this stuff. You’ve got real computer security problems.” They say it cautiously and with the right degree of caveats that they’re roughly right. Computer security experts are roughly right about those computer security concerns that they warn you about. Most firms say, “Yeah, but I’ve got these business concerns immediately, so I’m just going to ignore you.” So we continue to have computer security problems. But at least from a computer security expert’s point of view, they aren’t suffering from the perception of hyperbole or actual hyperbole.

But that’s because there aren’t 100,000 fans of computer security out there yelling with them. But AI risk isn’t like that. AI risk, I mean, it’s got the advantage of all these people pushing and talking which has helped produce money and attention and effort, but it also means you can’t control the message.

Robert Long:        Are you worried that this reputation effect or this impression of hyperbole could bleed over and harm other EA causes or EA’s reputation in general, and if so are there ways of mitigating that effect?

Robin Hanson:       Well again, the more popular anything is, the harder it is for any center to mitigate whatever effects there are of popular periphery doing whatever they say and do. For example, I think there are really quite reasonable conservatives in the world who are at the moment quite tainted with the alt-right label, and there is an eager population of people who are eager to taint them with that, and they’re kind of stuck.

All they can do is use different vocabularies, have a different style and tone when they talk to each other, but they are still at risk for that tainting. A lot depends on the degree to which AI risk is seen as central to EA. The more it’s perceived as a core part of EA, then later on when it’s perceived as having been overblown and exaggerated, then that will taint EA. Not much way around that. I’m not sure that matters that much for EA though.

I mean I don’t see EA as driven by popularity or popular attention. It seems it’s more a group of people who– it’s driven by the internal dynamics of the group and what they think about each other and whether they’re willing to be part of it. Obviously in the last century or so, we just had these cycles of hype about AI, so that’s … I expect that’s how this AI cycle will be framed– in the context of all the other concern about AI. I doubt most people care enough about EA for that to be part of the story.

I mean, EA has just a little, low presence in people’s minds in general, that unless it got a lot bigger, it just would not be a very attractive element to put in the story to blame those people. They’re nobody. They don’t exist to most people. The computer people exaggerate. That’s a story that sticks better. That has stuck in the past.

Pablo: Thanks. Do you mean that we should incorporate what Hanson says about the effects of the AI field on the reputation of EA, or what he says about the size of the AI field, as it applies to EA? It looks like the latter is the more relevant point, though it's not a point Hanson makes explicitly about EA. Hanson says that having lots of "fans" in addition to a few "experts" can be bad for the AI field, so presumably it could be similarly bad for EA to grow, and thus risk attracting lots of fans and few experts. Is that the point you think we should incorporate?

MichaelA: What I had in mind was roughly "what he says about the size of the AI field, as it applies to EA". It seems that point might be most relevant to existential risks (where "alarmism" and "crying wolf" are most relevant). But a broadly similar point of "lots of fans saying kinda dumb (or dumb-sounding) things makes it harder for other people to be taken seriously on similar topics" seems more widely applicable. So this could maybe be like making that point in an abstract way, then giving the AI example and citing Hanson there.

Pablo: Thanks. I made a note to incorporate this into the article.

There could also be downsides to growing larger even if that's done slowly and deliberately. For example, it may increase the difficulties of some aspects of cooperation and coordination.

Bibliography

Cotton-Barratt, Owen (2015) How valuable is movement growth?, Effective Altruism Forum, May 14.
