
Richard Ren gets points for the part of diversity's value prop that has to do with frame disruption; I also realized a previously neglected (by me) merit of standpoint epistemology while I was ranting about something to him.

Written quickly rather than not at all, based on my assessment of the importance of not letting the chase after that next karma hit prevent me from doing actual work, but also wanting to permanently communicate something that I've ranted about on discord and in meatspace numerous times.

Ozy really crushed it recently, and in this post I'm kind of expanding on their takedown of Doing EA Better's homogeneity takes. 

A premise of this post is that diversity is separable into demographic and intellectual parts. Demographic diversity enumerates sources of variance in things like race, gender, faith, birthplace, et al., and intellectual diversity covers sources of variance in ideas, philosophy, ideology, priors, et al. In this post, I'm going to celebrate intellectual exclusion, then explore an objection to see how much it weakens my argument. I'm going to be mostly boring and agreeable about EA's room for improvement on demographic diversity, but I'm going to distinguish between demographic diversity's true value prop and its false value prop. The false value prop will lead me to highlight standpoint epistemology; I'll talk about why I think rejecting it in general is deeply EA, but also defend keeping it in the overton window, and outline what I think are the right and wrong ways to use it. I will distinguish solidarity from altruism as the two basic strategies for improving the world, and claim that EA oughtn't try too hard to cultivate solidarity.

Definition 1: standpoint epistemology

Twice, philosophers have informed me that standpoint epistemology isn't a real tradition: it has signaling functions on a kind of ideology level in and between philosophy departments, ready to be used as a cudgel or flag, but there's no granular characterization of what it is and how it works that epistemologists agree on. This is probably cultural: critical theorists and analytic philosophers have different standards of what it means to "characterize" an "epistemology". But I don't in fact care about philosophy department politics; I care about a coherent phenomenon that I've observed in the things people say. So, without any appeal to the flowery language of the credentialed, and assuming the anticipation-constraint definition of "belief" as a premise, I will define standpoint epistemology as expecting people with skin in a game to know more about that game than someone who isn't directly exposed to it.

Take 1: poor people are cool and smart...

I'm singling out poor people instead of some other niche group because I spent several years under $10k/year. (I was a leftist who cared a lot about art, which means I was what leftists and artists affectionately call "downwardly mobile": not earning anything close to what my parents did.) I think those years directly cultivated a model of how the world works, and separately gave me a permanent ability to see between the staples of society all the resilience, bravery, and pain that exists out there.

Here's a factoid that some of you might not know, in spite of eating in restaurants a lot: back-of-house is actually operationally marvelous. Kitchens are like orchestras. I sometimes tell people that they should pick up some back-of-house shifts just so they can be a part of it.

There's kind of a scope sensitivity thing going on: you can abstractly communicate about single mothers relying on the bus to get to minimum wage jobs, but it really hits different when you've worked with them. Here's another, more personal factoid: I didn't have the bravery or resilience or pain tolerance to not join the IT industry. I did this mostly by leveraging birth lotteries: developmental nutrition, IQ, a baseline of computer literacy from growing up wealthy enough to have computers around as a kid, US citizenship, or whatever. But it was done partially out of fear and exhaustion, not because I wanted to roll dark mode and be a l33t hacker and solve logic puzzles.

Take 2: ..., but why should EA care? 

It doesn't necessarily matter that restaurants are operationally marvelous. By "models of the world" above, I meant knowledge that enriches your life without necessarily helping you make more accurate predictions. Community builder Steve, a guy I made up, who thinks EA should only recruit at ivy league institutions because lots of rigor and curiosity is cultivated therein, might outperform (in utilons per unit effort) community builder Joe, another guy I made up, who thinks EA should only recruit at restaurants because lots of bravery and resilience and operational excellence is cultivated therein. This isn't fair, or aesthetically appealing, but it's plausible. 

Let's generalize to any minority-to-EA group now, beyond the proles: 

Claim: if you'd like to advocate for an increase in the EA proportion of some group, forecast with specificity the value they'd bring

High performance, in terms of fixing things that are broken, should remain the EA priority. Not fairness, or being a movement people feel good to be a part of. The instrumental value of fairness, or of the movement being a fun place to hang out, is a separate argument, one that should probably stay in the overton window, because I think it's probably true! But what I want is for diversity advocates to make their claims falsifiable. I want calls for diversity (both kinds) to be judged by ITN forecasts/evaluations and then by reality, in that order, like anything else.
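To make "judged by ITN forecasts" concrete, here's a toy sketch of the multiplicative importance/tractability/neglectedness heuristic applied to the two made-up community builders from Take 2. Every number is invented for illustration; nothing here is a real evaluation.

```python
# Hypothetical ITN scores on a 1-10 scale. The point is the shape of the
# argument (make the claim falsifiable, then compare), not the numbers.

def itn_score(importance, tractability, neglectedness):
    """Multiplicative ITN heuristic: each factor scales expected impact."""
    return importance * tractability * neglectedness

pitches = {
    "Steve: recruit at ivy league campuses": itn_score(importance=7, tractability=8, neglectedness=3),
    "Joe: recruit at restaurants": itn_score(importance=6, tractability=4, neglectedness=6),
}

# Rank the pitches by their (entirely illustrative) ITN product.
for name, score in sorted(pitches.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score}")
```

The output here would have Steve outperforming Joe, which matches the "plausible but not fair" scenario above; with different invented inputs it could just as easily flip, which is exactly the falsifiability being asked for.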

Take 3: intellectual homogeneity is often a feature, not a bug

You could literally frame EA as exclusion, if you wanted. You can't include spreadsheets without excluding people who are too obsessed with intangibles to crack open a copy of How to Measure Anything for at least a few pages. Including analytic philosophy and excluding continental philosophy is awesome. Including the numerate might end up excluding the innumerate, but that's a part of how we understand then try fixing broken stuff with finite resources, which I remind you is what brought us together in the first place (there are tons of opportunities for people who struggle with numbers to have impact, but those opportunities have more epistemic deference built in compared to someone with an appetite for doing the math themselves). You can't include true beliefs about the decline of poverty without excluding degrowthers; or, to put it less controversially, you can't include lukeprog's excellent industrial revolution post without excluding the Unabomber.

All of this sounds great to me! If the value prop of intellectual diversity doesn't fundamentally root itself in the fact that reality is not fairly doling out participation trophies, and that some people are less wrong than others, then it's going to be a false value prop.

Take 4: (hat tip for this objection to Richard Ren) outsiders help you reframe debates, which is good from time to time

A spectrum with an overly whiggish "if capitalism isn't working, try more capitalism" view of poverty abolition on one side and the Unabomber on the other is a frame. What if it's not a polarity thing? What if it's a triangle, because of a third thing that we dismiss as anomalous when we put on that frame's blinders? What if the best path forward isn't splitting the difference between the extremes somehow, but has nothing to do with those extremes? Perusing non-EAs from time to time, to see if your frame is still the most productive frame or if it needs iterating, is completely defensible: people who are too obsessed with intangibles to do philanthropy as well as GiveWell, or woke twitter slacktivists, or degrowthers, or the National Rifle Association, or the union agitator trying to organize the EAG caterers, or "the x-risk community is displacing anxiety about white men losing status in a changing world" critical theorists, or people who really like a lot of presidential candidate signs on their yards. This does not mean including them or their arguments "into EA", and it does not for-free tell you, from an explore/exploit perspective, how to prioritize frame reassessment. But it's a salient value prop of intellectual diversity.
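One toy way you might start prioritizing frame reassessment, sketched under the assumption that a simple epsilon-greedy rule is a reasonable stand-in for explore/exploit (the epsilon value and the list of outside frames are illustrative, not a recommendation):

```python
import random

# Epsilon-greedy frame reassessment: mostly exploit your current frame, but
# with small probability epsilon, spend a cycle perusing an outside one.

OUTSIDE_FRAMES = ["degrowthers", "union agitators", "critical theorists"]

def pick_activity(epsilon=0.05, rng=random):
    """Return ("exploit", ...) most of the time, ("reassess", frame) occasionally."""
    if rng.random() < epsilon:
        return ("reassess", rng.choice(OUTSIDE_FRAMES))
    return ("exploit", "current frame")

random.seed(0)
print(pick_activity())
```

This doesn't answer how big epsilon should be, which is the real open question the paragraph above gestures at, but it at least names the knob.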

Take 5: demographic diversity does not cause intellectual diversity

Standpoint epistemology does not outperform expertise, study, and measurement. Standpoint epistemology is the false value prop of demographic diversity. When you run into a black person at EAG, you don't assume that they know a lot about OpenPhil's CJR portfolio just because incarceration disproportionately affects black people; black EAs are free to care about whatever cause areas they want. The alternative (assuming that a minority at EAG isn't as impartial in their prioritization as a majority) is equal parts rude and absurd, but I feel like I've seen weak versions of this insinuation around, kind of a lot.

Expertise, study, and measurement are not only great, but their greatness is what brought us together. There's a Milton Friedman video on YouTube where a student asks him if he'd ever been poor. To be fair, Friedman did flex his standpoint epistemology credentials briefly, but then he said "you don't look for cancer doctors who've had cancer, do you?"

Take 6: the true value prop of demographic diversity has to do with homophily 

Regardless of the evidence or any replication failure I'm not aware of, I find it anecdotally plausible that people skew a little toward hanging out with people who look like them. This is unfortunate, and if it's true then we should improve demographic diversity for reasons that have to do directly with talent. Talent which, I remind you, we need to maximize if we're going to understand then try to fix broken stuff. I think the EA overton window should admit "we're gonna beat homophily by being the kindest and most interesting social movement in history", but I actually think my money is on "beating homophily with kindness and interestingness is intractable", just not a lot of money.

Definition 2: solidarity

Without doing a literature review of any kind, but drawing on my sordid past as a leftist, I define solidarity as improving the world by focusing on your own problems, joining with people in the same boat as you to improve the state of that boat in particular.

Definition 3: altruism

Altruism is simply fixing stuff when the stuff's brokenness doesn't affect you personally.

Take 7: EA's comparative advantage is altruism, not solidarity

Suppose an EA had to choose between organizing dialysis patients to lobby the government for more R&D spending on xenotransplantation, and organizing kidney donors to lobby their friends to do undirected donation, where both interventions would put an equal dent in the kidney crisis. I expect the EA to do a better job at the latter. Putting privilege to work on understanding then trying to fix broken stuff is where an EA shines. Having the resilience and bravery to survive or thrive on $2/day and mosquito bites is not the value EA provides to the world. EAs have no idea how to convert struggle into utilons; we're best at converting cash into utilons, and conveniently, we tend to have cash.

You should notice: the EA rejection of standpoint epistemology, which is a feature not a bug, is exactly what leads our comparative advantage to be what it is here. 

To be clear: it's absolutely possible for standpoint epistemology to be great, and for solidarity strategies to outperform altruism strategies; I am not saying we should literally wave at an opportunity as it passes by, on the margin. I'm only speaking toward the average case, toward the expectation that I think pays the most rent all else equal.






but then he said "you don't look for cancer doctors who've had cancer, do you?"

Is this supposed to be some sort of knockdown argument? It obviously doesn’t drown out all other considerations, but I’d absolutely consider it a major pro if my oncologist was herself a cancer survivor.

Once I have conceded the basic point that no, standpoint doesn't add infinite value such that nothing else matters, and you've conceded the basic point that yes, standpoint does add more than literally 0 value all else equal, then where does that leave us? We still need some recipe to weigh standpoint against expertise, study, and all the rest of the considerations.

I do agree that if I want to give a lot of weight to standpoint, I should be able to tell you a specific story about the added value I expect to get from it. E.g., my cancer-survivor oncologist will have a better understanding of what it is like to navigate the medical system from the patient’s perspective, which will make her more compassionate; she’ll be better able to advise me on intangible quality-of-life tradeoffs associated with different treatment decisions; etc.

But I worry that this heuristic could lead to a sort of Diversity Dunning–Kruger effect, where the same Standpoint X that would help a project succeed is also required in order to even identify that Standpoint X would be helpful to have. It’s kind of paradoxical to require that I have a specific story in advance about how filling in a blindspot will help me; if I knew that, then it wouldn’t be a blindspot.

It’s kind of paradoxical to require that I have a specific story in advance about how filling in a blindspot will help me; if I knew that, then it wouldn’t be a blindspot.

This is an excellent objection. Known unknown vs. unknown unknown sort of thing. Let me think about this a hot second. 

Thanks for being patient. I decided that I think you can probably do something like a value of information calculation: weigh the probability that you're missing something critical, times the value of catching it, against the opportunity cost of exploring unknown unknowns.
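A minimal sketch of the kind of calculation I mean, with entirely made-up numbers (the probabilities, payoffs, and costs here are placeholders, not estimates):

```python
# Toy value-of-information calculation for deciding whether to spend effort
# exploring outside frames / filling potential blindspots.

def expected_value_of_exploration(p_critical_blindspot, value_if_found, exploration_cost):
    """EV of exploring = chance you're missing something critical, times the
    value of catching it, minus what the exploration costs you."""
    return p_critical_blindspot * value_if_found - exploration_cost

# Explore only when the expected gain beats the cost.
ev = expected_value_of_exploration(
    p_critical_blindspot=0.05,  # small chance the frame hides something critical
    value_if_found=1000,        # big payoff if it does (arbitrary utilon units)
    exploration_cost=20,        # a weekend of reading degrowthers, say
)
print(ev)  # 0.05 * 1000 - 20 = 30.0 > 0, so exploring pays in expectation
```

The point is that you don't need to know *which* blindspot you have in advance; you only need a prior over whether one exists and a rough sense of the stakes, which routes around the paradox.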

This is a good paragraph: 

Once I have conceded the basic point that no, standpoint doesn't add infinite value such that nothing else matters, and you've conceded the basic point that yes, standpoint does add more than literally 0 value all else equal, then where does that leave us? We still need some recipe to weigh standpoint against expertise, study, and all the rest of the considerations.

Moreover, what's the principled line between standpoint in the sense of immutable demographic information (facts of birth) and standpoint in the sense of what someone chose to study, what they've cultivated downstream of their passions, etc.? Unclear to me. 

But still, I'm guessing that intangible gains from demographic diversity improvement, as motivated by the homophily objection, would probably overlap with any well-understood recipe.

One value prop of viewpoint diversity in an organization that I neglected in OP is that when skillsets are uncorrelated, each team member's replaceability goes down. This plausibly creates a more potent Pareto frontier of what the organization can do, especially what it can do that no one else can.

I totally screwed up by not actually reading this post until today, even though I spoke a bunch with Xuan in meatspace about it and related topics. 

I want to highlight it as a positive example of how I think epistemic diversity claims should be made! It's literally like "I found a specific thing in your blindspots that I expect to provide value in these particular ways to help you accomplish your high level goals", which is great. Making a positive case for "contracts are the primitive units we want to study in alignment, not preferences" really hit hard and gave the author license to have earlier described their unease with the literature's blindspots. 
