Value of movement growth

Discuss the topic on this page. Here is the place to ask questions and propose changes.

It could be good for this entry to summarise or draw on some of the following points Max Daniel made in a comment about key uncertainties relevant to the grantmaking of the EA Infrastructure Fund:

  • How can we structure the EA community in such a way that it can 'absorb' very large numbers of people while also improving the allocation of talent or other resources?
    • I am personally quite dissatisfied with many discussions and standard arguments around "how much should EA grow?" etc. In particular, I think the way to mitigate potential negative effects of too rapid or indiscriminate growth might not be "grow more slowly" or "have a community of uniformly extremely high capability levels" but instead: "structure the community in such a way that selection/screening and self-selection push toward a good allocation of people to different groups, careers, discussions, etc.".
    • I find it instructive to compare the EA community to pure maths academia, and to large political parties.
      • Making research contributions to mature fields of pure maths is extremely hard and requires highly unusual levels of fluid intelligence compared to the general population. Academic careers in pure maths are extremely competitive (in terms of, e.g., the fraction of PhDs who'll become tenured professors). A majority of mathematicians will never make a breakthrough research contribution, and will never teach anyone who makes a breakthrough research contribution. But in my experience mathematicians put much less emphasis on only recruiting the very best students, or on only teaching maths to people who could make large contributions, or on worrying about diluting the discipline by growing too fast or ... And while perhaps in a sense they put "too little" weight on this, I also think they don't need to put as much weight on this because they can rely more on selection and self-selection: a large number of undergraduates start, but a significant fraction will just realize that maths isn't for them and drop out, ditto at later stages; conversely, the overall system has mechanisms to identify top talent and allocate it to the top schools etc.
        • Example: Srinivasa Ramanujan was, by some criteria, probably the most talented mathematician of the 20th century, if not more. It seems fairly clear that his short career was only possible because (1) he went to a school that taught everyone the basics of mathematics and (2) later he had access to (albeit perhaps 'mediocre') books on advanced mathematics: "In 1903, when he was 16, Ramanujan obtained from a friend a library copy of A Synopsis of Elementary Results in Pure and Applied Mathematics, G. S. Carr's collection of 5,000 theorems. Ramanujan reportedly studied the contents of the book in detail. The book is generally acknowledged as a key element in awakening his genius."
        • I'm not familiar with Carr, but the brevity of his Wikipedia article suggests that, while he taught at Cambridge, probably the only reason we remember Carr today is that he happened to write a book which happened to be available in some library in India.
        • Would someone like Carr have existed, and would he have written his Synopsis, if academic mathematics had had an EA-style culture of fixating on the small fraction of top contributors while neglecting to build a system that can absorb people with Carr-levels of talent, and that consequently can cast a 'wide net' that exposes very large numbers of people to mathematics and an opportunity to 'rise through its ranks'?
      • Similarly, only a very small number of people have even a shot at, say, becoming the next US president. But it would probably still be a mistake if all local branches of the Democratic and Republican parties adopted an 'elitist' approach to recruitment and obsessed about only recruiting people with unusually good ex-ante chances of becoming the next president.
      • So it seems that even though these other 'communities' also face, along some metrics, very heavy-tailed ex-post impacts, they adopt a fairly different approach to growth, how large they should be, etc. - and are generally less uniformly and less overtly "elitist". Why is that? Maybe there are differences between these communities that mean their approaches can't work for EA.
        • E.g., perhaps maths relies crucially on there being a consensus on what the important research questions are, plus it being easy to verify what counts as a solution, as well as generally a better ability to identify talent and good work that is predictive of later potential. Maybe EA is just too 'preparadigmatic' to allow for something like that.
        • Perhaps the key difference for political parties is that they have higher demand for 'non-elite' talent - e.g., people doing politics at a local level and the general structural feature that in democracies there are incentives to popularize one's views to large fractions of the general population.
        • But is that it? And is that all? I'm worried that we gave up too early, and that if we tried harder we'd find a way to create structures that can both accommodate higher growth and improve the allocation of talent (which doesn't seem great anyway) within the community, despite these structural challenges.

(I also provided some reflections/counterpoints in a reply.)

I think it would be good to integrate something like the following points from a Robin Hanson interview:

Robin Hanson:       There’s the crying wolf effect, and I’m particularly worried about it. For example, space colonization is a thing that could happen eventually. And for the last 50 years, there have been enthusiasts who have been saying, “It’s now. It’s now. Now is the time for space colonization.” They’ve been consistently wrong. For the next 50 years, they’ll probably continue to be consistently wrong, but everybody knows there’s these people out there who say, “Space colonization. That’s it. That’s it.”

Whenever they hear somebody say, “Hey, it’s time for space colonization,” they go, “Aren’t you one of those fan people who always says that?” The field of AI risk kind of has that same problem, where again today, but for the last 70 years or even longer, there has been a subset of people who say, “The robots are coming, and it’s all going to be a mess, and it’s now. It’s about to be now, and we better deal with it now.” That creates sort of a skepticism in the wider world that you must be one of those crazies who keep saying that.

That can make things worse: when we really do have the possibility of space colonization, when it really is the right time, we might well wait too long, because people just can’t believe it, because they’ve been hearing this for so long. That makes me worried that this isn’t a positive effect: calling a lot of attention to a problem, and then having people experience it as not a problem, when it looks like you didn’t realize that.

Now, if you just say, “Hey, this nuclear power plant pipe could break. I’m not saying it will, but it could, and you ought to fix that,” that’s different than saying, “This pipe will break, and that’ll happen soon, and you’d better do something.” Because then you lose credibility when the pipe usually doesn’t break.

Robert Long:        Just as a follow-up, I suppose the official line for most people working on AI safety is, as it ought to be, that there’s some small chance that this could matter a lot, and so we’d better work on it. Do you have thoughts on ways of communicating that that’s what you actually think, so that you don’t have this crying wolf effect?

Robin Hanson:       Well, if there were only the 100 experts, and not the 100,000 fans, this would be much easier. That does happen in other areas. There are areas in the world where there are only 100 experts and there aren’t 100,000 fans screaming about it. Then the experts can be reasonable and people can say, “Okay,” and take their word seriously, although they might not feel too much pressure to listen and do anything. You could say that about computer security today, for example: the public doesn’t scream a bunch about computer security.

The experts say, “Hey, this stuff. You’ve got real computer security problems.” They say it cautiously and with the right degree of caveats, and they’re roughly right about those computer security concerns that they warn you about. Most firms say, “Yeah, but I’ve got these business concerns immediately, so I’m just going to ignore you.” So we continue to have computer security problems. But at least from a computer security expert’s point of view, they aren’t suffering from the perception of hyperbole or actual hyperbole.

But that’s because there aren’t 100,000 fans of computer security out there yelling with them. But AI risk isn’t like that. AI risk, I mean, it’s got the advantage of all these people pushing and talking, which has helped produce money and attention and effort, but it also means you can’t control the message.

Robert Long:        Are you worried that this reputation effect or this impression of hyperbole could bleed over and harm other EA causes or EA’s reputation in general, and if so are there ways of mitigating that effect?

Robin Hanson:       Well again, the more popular anything is, the harder it is for any center to mitigate whatever effects there are of the popular periphery doing whatever they say and do. For example, I think there are really quite reasonable conservatives in the world who are at the moment quite tainted with the alt-right label, and there is a population of people who are eager to taint them with that, and they’re kind of stuck.

All they can do is use different vocabularies and have a different style and tone when they talk to each other, but they are still at risk of that tainting. A lot depends on the degree to which AI risk is seen as central to EA. The more it’s perceived as a core part of EA, the more it will taint EA later on, when AI risk is perceived as having been overblown and exaggerated. Not much way around that. I’m not sure that matters that much for EA though.

I mean, I don’t see EA as driven by popularity or popular attention. It seems it’s more driven by the internal dynamics of the group and what they think about each other and whether they’re willing to be part of it. Obviously, in the last century or so, we’ve just had these cycles of hype about AI, so that’s … I expect that’s how this AI cycle will be framed, in the context of all the other concern about AI. I doubt most people care enough about EA for that to be part of the story.

I mean, EA has just a low presence in people’s minds in general; unless it got a lot bigger, it just would not be a very attractive element to put in the story, to blame those people. They’re nobody. They don’t exist to most people. “The computer people exaggerate.” That’s a story that sticks better. That has stuck in the past.

Thanks.  Do you mean that we should incorporate what Hanson says about the effects of the AI field on the reputation of EA, or what he says about the size of the AI field, as it applies to EA? It looks like the latter is the more relevant point, though it's not a point Hanson makes explicitly about EA. Hanson says that having lots of "fans" in addition to a few "experts" can be bad for the AI field, so presumably it could be similarly bad for EA to grow, and thus risk attracting lots of fans and few experts. Is that the point you think we should incorporate?

What I had in mind was roughly "what he says about the size of the AI field, as it applies to EA". It seems that point might be most relevant to existential risks (where "alarmism" and "crying wolf" are most relevant). But a broadly similar point, that lots of fans saying kinda dumb (or dumb-sounding) things makes it harder for other people to be taken seriously on similar topics, seems more widely applicable.

So this could maybe be like making that point in an abstract way, then giving the AI example and citing Hanson there.

Thanks. I made a note to incorporate this into the article.

I would suggest renaming the article 'Movement growth' (omitting 'debate').

Hmm, I think I prefer the current name, or something like it. I think "movement growth" sounds like it'd cover posts about how to grow the movement and how much it's grown. But I think it's useful for there to be an entry/tag specifically on how much it should grow, how fast, and why (i.e., the scope Aaron proposed here). 

(But I imagine one could also come up with other names that also highlight that narrower scope, so it's not like I'm wedded to this specific one.)

Ah, I see. I hadn't considered that distinction. In that case, how about merging this article and promoting effective altruism?

Hmm. I think I'm actually confused about precisely how promoting effective altruism is distinct from movement-building (though maybe it's about whether the EA label is explicitly used?), so: 

  • I'd see it as more natural to merge those two
  • I'd have the same (tentative) objection to merging "movement growth debate" with "promoting effective altruism" as the objection I mentioned above

Or to make the discussion less abstract we could consider a concrete post that you feel is a fit candidate for the "growth debate" tag but not so much for the "building/promoting" tag. I may not have a clear enough idea of the types of debate you have in mind.

I don't have a preference between 'promoting' and 'building'. 80k originally had a page called promoting effective altruism, which they later replaced with one called building effective altruism.

Your objection above was that 'movement growth' was specifically about how much EA should grow, but 'promoting effective altruism' considers growth (as well as other forms of promotion) as an intervention or cause area, so it seems like a natural place to address that normative question, i.e. how much efforts to promote or build EA should focus on growth vs. e.g. quality of outreach. Am I misunderstanding your objection?

(Responding to both of your comments in one place)

I'm likewise neutral on promoting vs building. One part of what I was saying was roughly that I see "promoting effective altruism" and "movement building" and "movement growth" as roughly equivalent, except that perhaps:

  • A term with effective altruism in the name weakly implies the EA label is explicitly used in the discussed efforts
    • But that's not necessarily the case
  • "growth" is a subset of movement-building, since movement-building can also include improving coordination, helping with skill-building, etc., not just more people

Another part of what I'm saying is that:

  • I think there's an important debate to be had about how much the EA movement (or related/subsidiary movements) should grow, how fast, and why
  • I think this would be a subset of a movement building or promoting EA tag, but an important subset worth having its own tag for
    • So all posts with this "movement growth debate" tag would probably also have the other tag(s)
    • Analogous to how estimation of existential risk is a subset of the topic of existential risk, but in my view warrants its own tag
  • The Cotton-Barratt post that has this tag and the 80k post that has this tag are examples of the sort of thing I'd want to have this tag in order to specifically collect

Does that clarify my position?

(But of course, other people are free to disagree with all of that.)

Thanks. Okay, I'm not sure I agree we should have a separate entry, but I'm happy to defer to your view.

I think the more general tag should be called 'building effective altruism', given that this is what 80k calls it (and that they considered alternative names in the past and rejected them), so it seems a suitable name for the narrower tag is 'growing effective altruism'. However, I think this will create confusion, since the difference between 'building' and 'growing' is not immediately clear. I really don't like 'Movement growth debate': there are lots of debates within EA, and we typically cover those by having articles on the topic that is the subject of the debate, not articles on that topic followed by the word 'debate'. So if we are going to keep the article, I think we should try to find an alternative name for it.

I think the more general tag should be called 'building effective altruism', given that this is what 80k calls it (and that they considered alternative names in the past and rejected them), so it seems a suitable name for the narrower tag is 'growing effective altruism'.

This sounds good to me.

And then I think we can just explicitly note in the text that this can include efforts that don't explicitly use the EA label (since maybe that's why people thought there should be a tag for movement building and another for promoting effective altruism).

So if we are going to keep the article, I think we should try to find an alternative name for it.

Some ideas: value of movement growth; pros and cons of movement growth.

I prefer the former to the latter.

I think both are a bit better than "movement growth debate", though a downside is that they don't make it obvious that they cover the question of how fast to grow (as distinct from how big to ultimately grow). But it seems acceptable to just have the text of the entry make it clear that that's in-scope.

Cool. I updated the title to 'value of movement growth'.