
Epistemic status: not certain— a hand-wavy ramble.

Disclaimer: all opinions here are my own, not my employer’s.

Summary

  1. Language barriers can cause people to dismiss writing or speech because of its aesthetics or communicative clumsiness rather than its content.

  2. This manifests in the EA community in a number of ways. One is that the language we speak is shaped by our STEM-leaning community and our corresponding tendency to think quantitatively, which creates a discursive context that is foreign to people who are less STEM-y.[1]

  3. Harm 1: Good ideas and useful knowledge from these groups get discounted as a result.

  4. Harm 2: Talented non-STEM people who are into EA get misunderstood when they try to communicate, and don’t get noticed as “promising” (and we do in fact really need these folks).

  5. Harm 3: non-STEM folks have a worse experience in the EA community.

  6. Some suggestions

Preamble

My university had a lot of international students. In class discussions and group interactions, I would often notice that the contributions of international students (those whose first language wasn’t English) tended to be dismissed faster and interrupted more frequently than others’, even when the others’ thoughts seemed less insightful. My guess was that this was due to the smoother and more familiar presentation of the native speakers’ thoughts.

This situation was not new to me; I had spent a total of around three years in school in France, experiencing something similar first-hand (as a nerdy American kid). I would frequently have to suppress a mix of anger, annoyance, and shame at being patronized or ignored because my French was halting and awkward.

The point of these stories is that not being fluent (or being less-than-native) in the language of the community with which you are conversing makes everything harder. And it can be unpleasant and discouraging.

I think a version of this happens in the EA community with people who are less STEM-y than the average. (It also happens with other underrepresented groups in EA, but I’m focusing on this one for this post.)

Harm 1: We lose out on ideas and knowledge

I have most frequently seen this phenomenon in live conversations. These folks’ natural speech or writing follows different norms, and they contort their thoughts to make the EA community hear them. They misuse jargon like “updating” and “outside view” in an attempt to get their point across, and their interlocutors decide that talking with them is not worth their time.[2]

More generally, I think experienced (and assimilated) members of the community tend to interpret a lack of fluency in the “language of EA” as a general lack of knowledge or skill. This, together with our tendency to miss writing that comes from outside the community,[3] leads to the community systematically deafening itself to communication by non-STEM folks.

Harm 2: EA is less actively welcoming towards non-STEM people, so we lose out on some of those people

This also factors into how the EA community identifies promising people (e.g. students) who should be mentored and helped on their way to impact. My impression is that currently, this often happens through connections or random conversations in which someone’s potential gets identified by someone well-placed in the community. I’m worried that we’re ending up with self-sustaining mechanisms by which interested and talented people who don’t speak numerically (or who misuse “outside view”) are considered less promising and are not supported in the earlier stages of their careers.

(Notice that this can happen both ways: if EA always speaks the language of the STEM-y, less STEM-y people will potentially discount the EA community and think the theories it presents are stupid. This is somewhat related to the idea of inferential distances and this blog post about “Alike minds.”)

Of course, it’s true that a nuanced quantitative model (or even a simple Fermi estimate) of some phenomenon is often helpful, and can be a reasonable signal of quality.[4] But our focus on such quantitative elements misses other signals. Consider, for instance, the effect of illustrations or other visualizations, clarity in exposition, historical anecdotes, moving speech, etc.[5] Moreover, some aspects of the way the EA community talks are due to the community’s history rather than the inherent usefulness of those aspects. (The archetypal EA Forum post has an epistemic status, some technical terms, and maybe a calculation— which are all arguably useful. But it’s also got some rationalist jargon or a reference to hpmor.) (More on jargon in this post.)

I also suspect that talented less STEM-y people tend to get fewer chances to find out about EA and get involved than talented STEM-y people do, which would exacerbate the problem if true. So I think we should try to notice if we’re unusually likely to be the only point of contact someone has with EA. In particular, if you’re talking to a math major, you’re probably not the only person who can notice that this student should probably join a group and apply to an alignment bootcamp or something. But if you’re talking to a creative writing major who seems interested, you may be the only EA for a while who will get the chance to notice that this person is brilliant and altruistic and should join a group, become a comms person, write EA-inspired fiction, or take on a research project.[6]

I’m not claiming that numerical literacy is not important for EA. I absolutely think it is. But so are other skills.[7]

I think people who are more comfortable writing than modeling, or people who are better at managing interpersonal relations than at establishing a base rate or unraveling some technical issue (people who are not very STEM-y), can significantly help the world in the way EA wants to, and are overlooked by the processes we use to notice and support promising people. In fact, all else equal, someone joining from an underrepresented field (like comparative religion) may be able to pick up more low-hanging fruit than someone coming in from a typical field for the community (like math or economics).

(Possible objections to this section. 1. We need to grow the community urgently, and STEM-y people are easier to reach. 2. Non-STEM people don’t have some skills that are fundamentally necessary for doing EA stuff. (As discussed, I don’t think this is the case.) 3. It’s currently too costly to fight Harm 2 for some reason.)

Harm 3: non-STEM folks have a worse experience in the EA community

I would guess that non-STEM people tend to have a worse experience in the community for reasons like the ones sketched out in the preamble above. I don’t think that their experience tends to be actively hostile, but I do think that it’s harder than it should be, and that we can improve it.

My suggestions

  1. Actively try to notice if you’re subconsciously dismissing someone because they speak a different language, not because the content of their thinking is unhelpful.
  2. Try a bit harder to understand people whose background is different from yours and be aware of the curse of knowledge when communicating.
  3. Create opportunities for talented-but-less-quantitative junior people who are into EA.
    1. Hire copywriters! Notice potentially awesome community organizers! Fund creative outreach projects! Etc.
    2. Some existing/past projects: Humanities Ideas for Longtermists, Social science projects on AI/x-risks, the Creative Writing Contest.
  4. Promote translation of EA concepts into less STEM-heavy writing or communication. (Conversely, import good foreign-to-EA ideas, e.g. by posting summaries).
  5. Use less jargon when possible, and help with overuse of jargon in other ways.
  6. If you’re involved with community building at a university, consider trying to reach out to non-STEM majors (and check that you aren’t accidentally excluding these people with your outreach process).

I would be very excited to hear more ideas on this front.

(Thanks to my brother and to Jonathan Michel for giving feedback on drafts of this post.)

Notes


  1. Note: I use “STEM” here as a shorthand for the dominant field and culture in EA, which is related to STEM-iness, but isn’t exactly it. ↩︎

  2. I use the pronoun “they” for this less STEM-y group, but to a certain extent I identify with them and have made the mistakes I listed (although I was also a math major). ↩︎

  3. Relatedly, it would be great if people posted more summaries and collections. ↩︎

  4. And it’s also truly easier to talk to people who speak like us. ↩︎

  5. Additionally, some good ideas and concepts are simply hard to put into quantitative language, so if that’s our primary mode of signaling quality, we’ll miss out on those. ↩︎

  6. And even if they do get involved, it may be harder for them to identify their next steps. ↩︎

  7. Semi-relevant post that talks about different aptitudes people may have: Modelers and Indexers ↩︎

Comments (12)

(Warning: rambly hand-wavy comment incoming!)

Why is any field different from any other?  For example, physicists and engineers learn about fermi-estimation and first-principles reasoning, while ecologists and economists build thinking tools based on an assumption of competitive equilibrium, while lawyers and historians learn to compare the trustworthiness of conflicting sources by corroborating multiple lines of circumstantial evidence.  Different fields develop different styles of thinking presumably because those styles are helpful for solving the problems the field exists to make progress on.

But of course, fields also have incidental cultural differences -- maybe economists are disproportionately New Yorkers who like bagels and the big city, while ecologists are disproportionately Coloradans who like granola and mountain hiking trails.  It would be a shame if someone who could be a brilliant economist got turned off of that career track just because they didn't like the idea of bagel brunches.

I mention this because it seems like there are a few different things you could be saying, and I am confused about which ones you mean:

  1. The "core" thinking tools of EA need to be improved by an infusion of humanities-ish thinking.  Right now, the thinking style of EA is on the whole too STEM-ish, and this impairment is preventing EA from achieving its fundamental mission of doing the most good.
  2. The "core" thinking tools of EA are great and don't need to change, but STEM style is only weakly correlated with those core thinking tools.  We're letting great potential EAs slip through the cracks because we're stereotyping too hard on an easily-observed surface variable, thus getting lots of false positives and false negatives when we try to detect who really has the potential to be great at the "core" skills.  STEM style is more like an incidental cultural difference than a reliable indicator of "core" EA mindset.
  3. The "core" thinking tools of EA are great, and STEM style is a good-enough approximation for them, but nevertheless every large organization/movement needs a little bit of everything -- even pure engineering firms like Lockheed Martin also need legions of managers, event planners, CGI artists, copyeditors, HR staff, etc.  For this and other reasons (like the movement's wider reputation), EA should try to make itself accommodating and approachable for non-STEM thinkers even if we believe that the core mission is inherently STEM-ish.
  4. In general, it never hurts for individual people to try harder to listen and understand other people's diverse perspectives, since that is often the best way to learn something really new.

Personally, I am a pretty strong believer that the unique thinking style of effective altruism has been essential for its success so far, and that this thinking style is very closely related to certain skills & virtues common in STEM fields.  So I am skeptical that there is much substance behind claims #1 or #2 in general.  Of course I'd be very open to considering particular examples of ways that the "core" EA ideas should be changed, or ways that STEM-ishness makes a poor proxy for EA promisingness.

I am totally on board with #3 and #4, although they don't imply as much of a course-correction for the movement as a whole.

Despite feeling defensive about STEM values in EA, I also see that the growth of any movement is necessarily a tightrope walk between conserving too many arbitrary values, versus throwing the baby out with the bathwater.  If EA had stuck to its very early days of being a movement composed mostly of academic moral philosophers who donated 50%+ of their income to global health charities, it would never have grown and done as much good as it has IRL.  But it would also be pretty pointless for EA to dilute itself to the point of becoming indistinguishable from present-day mainstream thinking about charity.

Some bad visions of ways that I'd hate to see us move away from STEM norms (albeit not all of these are moving towards humanities):

  • If EA sank back towards the general mess of scope neglect, aversion to hits-based-giving, blame-avoidance / "copenhagen interpretation of ethics", and unconcern about impact that it was originally founded to rise above.
  • If EA was drawn into conflict-oriented political ideologies like wokeism.
  • If EA stopped being "weird" and thinking from first principles about what seems important (for instance, recognizing the huge potential danger of unaligned AI), instead placing more weight on following whatever official Very Serious Issues are popular among journalists, celebrities, etc.
  • If EA drifted into the elaborate, obfuscated writing style of some academic fields, like postmodern literary theory or continental philosophy.
  • [Losing some crucial aspects of rationality that I find hard to put into words.]

Some areas where I feel EA has drawn inspiration from humanities in a very positive way:

  • The fact that EA is an ambitious big-tent moral & social movement at all, rather than a narrower technical field of "evaluating charities like investments" as perhaps envisioned when Givewell was originally founded.
  • The interest in big philosophical questions about humanity and civilization, and the fact that imaginative speculation is acceptable & encouraged.
  • The ideas of effective altruism are often developed through writing and debate and communication, rather than primarily through experiment or observation in a formal sense.  The overall style of being "interdisciplinary" and generalist in its approach to many problems.

(My personal background: I am an aerospace engineer by trade, although I have a naturally generalist personality and most of my posts on the Forum to date have been weird creative-writing type things rather than hard-nosed technical analyses.  My gut reaction to this post was negative, but on rereading the post I think my reaction was unjustified; I was just projecting the fears that I listed above onto areas of the post that were vague.)

I guess my conclusion is that I am psyched about #3 and #4 -- it's always good to be welcoming and creative in helping people find ways to contribute their unique talents.  And then, past that... there are a bunch of tricky fundamental movement-growth issues that I am really confused about!

(Meta: I am afraid that I am strawmanning your position because I do not understand it correctly, so please let me know if that is the case.)

Personally, I am a pretty strong believer that the unique thinking style of effective altruism has been essential for its success so far, and that this thinking style is very closely related to certain skills & virtues common in STEM fields.  So I am skeptical that there is much substance behind claims #1 or #2 in general.  

I agree with you that it seems plausible that the unique thinking style of EA has been essential to a lot of the successes achieved by EA, and that this style is closely related to skills common in STEM fields.

  1. The "core" thinking tools of EA need to be improved by an infusion of humanities-ish thinking.  Right now, the thinking style of EA is on the whole too STEM-ish, and this impairment is preventing EA from achieving its fundamental mission of doing the most good.

But it is unclear to me why this should imply that #1 is wrong. EA wants to achieve this massive goal of doing the most good. This makes it very important to get a highly accurate map of the territory we are operating in. Taking that into account, it is a very strong claim to say that the “core” thinking tools we have used so far are the best we could be using, and that we do not need to look at the tools other fields use before deciding that ours are actually the best. This is especially true since we lack a bunch of academic disciplines in EA. Most EA ideas and thinking tools come from western analytic philosophy and STEM research. That does not mean they are wrong - it could be that they all turn out to be correct - but they encompass only a small portion of all the knowledge out there. I dare you to chat with a philosopher who researches non-western epistemology - your mind will be blown by how different it is.

More generally: The fact that it is sometimes hard to understand people from very different fields is why it is so incredibly important and valuable to try to get those people into EA. They usually view the world through a very different lens and can check whether they see an aspect of the territory we do not see that we should incorporate into EA. 

I am afraid that we are so confident in the tools we have that we do not spend enough time trying to understand how other fields think and therefore miss out on an important part of reality. 

To be clear: I think that a big chunk of what makes EA special is related to STEM style reasoning and we should probably try hard to hold onto it. 

2. The "core" thinking tools of EA are great and don't need to change, but STEM style is only weakly correlated with those core thinking tools.  We're letting great potential EAs slip through the cracks because we're stereotyping too hard on an easily-observed surface variable, thus getting lots of false positives and false negatives when we try to detect who really has the potential to be great at the "core" skills.  STEM style is more like an incidental cultural difference than a reliable indicator of "core" EA mindset.

Small thing: It is unclear to me whether we get a lot of false positives, and this was also not the claim of the post, if I understand it correctly.


 

I think I strongly agree with all the ideas in this post. It is very well written and thoughtful, and it focuses on the value of non-STEM communities and on improving communication and access for these valuable people. It has a lot of thoughtful suggestions, such as reducing jargon, more intentional outreach, and more intentional listening to valuable people from other backgrounds.

 

Random thoughts that aren't fully related:

  • There is a recent case where a leader got "knocked on" for some phrasing in an EA Forum post. This person was a mid-career Harvard lawyer. It's worth pointing out that this person is not only personally capable of writing in "EA code/speak", but (in the alternate world where she went corporate instead of being an EA starting nonprofits) could have teams of underlings writing in EA-speak for her too. Instead, she herself got on the horn here to write something that she thought was valuable. (Note that there was a reason her wording got criticized; it's unclear if this is a "defect" and it's unclear how to solve this.)
     
  • I could see a similar issue with the above where extremely valuable policy makers, biologists, lawyers or economists (who have their own shibboleths) could be bounced off EA for related reasons. 
     
  • I don't know of a good way to solve this. What comes to mind quickly is deliberately creating discussion spaces for different groups, or providing some sort of "ambassadors" for these disciplines (but "ambassadors" folds into the gatekeeping and field building that senior EAs are involved in, so it's even harder than it sounds).
     
  • My guess is that if we wanted to move to a good compromise for jargon and tone, Holden's blog "Cold Takes" gives a good model. It takes more effort than it looks like, but Holden's lack of jargon, short articles, and expression of uncertainty seem ideal for communicating to both EAs and regular folks.

I think there are deeper comments here that basically get at EA's theory of change and the heart of EA. I think the below is a stronger argument than I would normally write, but it is what came out quickly:

 

In short, the tools of Bayesian reasoning and STEM are often merely cultural signals or simulacra for thought. They are naively and wrongly implied to provide a lot more utility than they really have in analysis and meta-thought. They don't have that much utility because the people who are breaking trail in EA already take the underlying issues into account appropriately. They are rarely helped or hindered by a population of people fluent in running Guesstimates, applying control theory, or doing "EV calculations". Instead, the crux of knowledge and decisions comes from other information. To get a sense of this, and to tie it back to EA discourse, this seems related to at least the top paragraphs of this post (but I haven't gone through the pretty deep examples in it).

Instead, these cultural signals provide cohesion for the movement, which is valuable. They might hinder growth too. I don't know what the net value is. Answering this probably involves some full, deep model of EA. 

But an aggressive, strong version of why we would be concerned might be that, in addition to filtering out promising young people, we might be harming our acquisition of extremely valuable talent: people who don't want to read 1,800 articles about something they effectively learned in middle school, or have to walk around shibboleths when communicating with EAs.

To get a sense for this:

  • If you have experience in academic circles, you often learn from experience that it can be really unpromising to parse thoughts from others who built their careers on alternative worldviews, and this can easily lead to bouncing off them (e.g. a mainstream economist having to make dinner-party talk with someone versed in heterodox, Keynesian, or Marxist thinking).
  • In tech, if you are on the "A team" and someone comes in to ask you to fix or refactor code that has weird patterns or many code smells, and that someone doesn't seem to be aware of this.
  • If you have built businesses and worked with execs, and then encounter consultants who seem to have pretty simplistic and didactic views of business theory (they start mansplaining "Agile" or "First Principles" to you).

People who are extremely talented aren't going to engage because the opportunity costs are extremely high.

 

As a caveat, the above is a story which may be entirely wrong, and I am pretty sure that even when fully fleshed out, it is only partly true. It also seems unnecessarily disagreeable to me, but I don't have the ability to fix it in a short amount of time, maybe because I am dumb. Note that this caveat you are reading is not "please don't make fun of me"—it's genuinely saying I don't know, and I want these ideas to be ruthlessly stomped on if they're wrong.

Buck

I have some sympathy to this perspective, and suspect you’re totally right about some parts of this.

They misuse jargon like “updating” and “outside view” in an attempt to get their point across, and their interlocutors decide that talking with them is not worth their time.

However, I totally don’t buy this. IMO the concepts of “updating” and “outside view” are important enough and non-quantitative enough that if someone can’t use that jargon correctly after learning it, I’m very skeptical of their ability to contribute intellectually to EA. (Of course, we should explain what those terms mean the first time they run across them.)

For many non-native speakers, having a conversation in English is quite cognitively demanding – especially when talking about intellectual topics they just learned about. Even reasonably proficient speakers often struggle to express themselves as clearly as they could in their native language; there is a trade-off between fluent speech and optimal word choice/sentence construction. If given 2x more time, or the chance to write down their thoughts, they would possibly not misuse the jargon to the same degree.

Many people get excited about EA when they first hear about it and read a lot of materials. At this speed of learning, retention of specific concepts is often not very good at first – but it gets a lot better after a few repetitions.

It's possible that they would be better off learning and using the concepts in a slower yet more accurate way. Misuse of concepts might be some evidence for them not being the most promising candidates for intellectual contributions. But there seem to be other characteristics that could easily compensate for a sub-optimal-but-good rate of learning (e.g. open-mindedness, good judgment, persistence, creativity).
 

I think there is a disagreement that gets at the core of the issue.

IMO the concepts of “updating” and “outside view” are important enough and non-quantitative enough that if someone can’t use that jargon correctly after learning it, I’m very skeptical of their ability to contribute intellectually to EA. 

The examples you mention are well chosen and get at the core of the issue, which is unnecessary in-group speak.

 

Updating: this basically means proportionately changing your opinions/worldview with new information. 

It's a neologism, and we're bastardizing its formal use in Bayesian updating, where it is a term of art for creating a new statistical distribution. 
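(For concreteness, here is a minimal sketch of the formal operation the term is borrowed from: a single Bayes'-rule update, with entirely made-up numbers. Nobody is suggesting EAs literally run this calculation in their heads.)

```python
# A toy Bayes'-rule update, purely for illustration (all numbers invented).
# "Updating" in the informal EA sense gestures at this operation: revise a
# prior credence in proportion to how strongly new evidence favors the claim.

prior = 0.30                # P(H): initial credence in some claim
p_e_given_h = 0.80          # P(E | H): chance of seeing the evidence if the claim is true
p_e_given_not_h = 0.20      # P(E | not H): chance of seeing it anyway if the claim is false

# Bayes' rule: P(H | E) = P(E | H) * P(H) / P(E)
p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
posterior = p_e_given_h * prior / p_e

print(f"prior = {prior:.2f}, posterior = {posterior:.2f}")
# prior = 0.30, posterior = 0.63
```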

So imagine you're in France, trying to vibe with some 200 IQ woman who has training in stats. Spouting off a few of these words in a row might annoy or confuse her. She might turn up her high-IQ Gallic nose and walk away.

If you're talking to some 120 IQ dude in China who is really credulous, wants to get into EA, but doesn't know these words and doesn't have a background in stats, he might go home and look up what Bayesian updating means, and then think EAs are literally calculating the posterior for their beliefs, and wonder what prior they are using. The next day, that dude is going to look really "dumb" because he spent 50x more effort than needed and will ask weird questions about how people are doing algebra in their heads.

 

Outside View: This is another neologism. 

This time, it's not really clear what this word means. This is a problem.

I've used it various times in different situations to mean different things. No one ever calls me out on this abuse. Maybe that's because I speak fast, use big words, or know math stuff, or maybe I just use the word well, but it's a luxury not everyone has.

Once again, that somewhat smug misuse of language could really annoy or disadvantage a new person to EA, even someone perfectly intelligent. 

I really enjoyed this post, thank you! As a non-STEM-y person in EA, I relate to lots of this.  I've picked up a lot of the 'language of EA' - and indeed one of the things I like about EA is that I've learnt lots of STEM-y concepts from it! - but I did in fact initially 'bounce off' EA, and may never have got involved if I hadn't continued to hear about it. I've also worried about people unfairly dismissing me because the 'point' of my field (ancient philosophy) is not obvious to STEM-y EAs.

A note on 'assessing promisingness': a recent forum post on Introductory Fellowships mentioned that at some universities, organizers sort fellows into cohorts according to perceived 'promisingness'. This bothered me. I think part of what bothered me was egalitarian intuitions, but part of it was a consciousness that I might be unfairly assessed as 'unpromising' because my capacities and background are less legibly useful to EAs than others. 


 

not being fluent (or being less-than-native) in the language of the community with which you are conversing makes everything harder

I've felt this a lot. This is nothing shocking or revolutionary, but communicating well in a foreign language is hard. I'm an American who grew up speaking English, and I've spent most of my adult life outside of the US in places where English is not widely spoken. I was a monolingual anglophone at age 18, and I've learned three non-English languages as an adult. I've often wanted to express "college level ideas" but I've had "toddler grammar." I remember talking about an app ecosystem in Spanish, but I didn't express the idea well and the other person thought I was talking about ecology. I have made dozens of attempts to express nuance in Chinese, but I simply didn't know how to express it in that language. To be clear, I am comfortably conversational in both of these languages,[1] but being able to talk about your weekend plans or explain why you liked a movie is far simpler than describing a pipeline of community building or being able to talk about how universal basic income would work.

While I haven't attended in-person EA events, I could easily see people being evaluated heavily on accent and word choice, to the detriment of the content of their message.

Shamefully, I also find myself regularly judging other people based on this exact thing: they don't express their ideas well, so I assume that they aren't particularly intelligent. Shame on me. I am trying to do this less. It is an obvious bias, but we often aren't aware of it. It is kind of a halo effect, where we judge one thing based on another thing being impressive/unimpressive.

  1. ^

    Or at least I was when I was at my peak. It has been about 8 years since I lived in Spain, and because I've used Spanish very little since then, I assume that it has degraded a lot.

+1 to being able to speak about these important topics and concepts to a broad, general audience.

+1 to striving to be inclusive.

+1 to making me laugh at the irony of how this post has some technical (jargon) words both in the title (what is "promisingness"? hehe) and in the post. (But to be fair: you know your audience here on the forum.)

+1 to your suggestions. Great suggestions :)

I'm reminded of the successes of science educators that this EA community could learn from. For example, educators like Neil deGrasse Tyson or Kurzgesagt demonstrate how taking some of the most complex and difficult-to-understand topics and explaining them in simple, everyday language can actually get a huge, broad, and diverse audience to care about science and genuinely get excited about it (to the point of wanting to participate).

This is something I feel I have yet to see happen at a similar scale of success with EA. I tend to believe effective communication with the use of everyday language is one of the most important keys to "building effective altruism".
