This post summarizes some of my conclusions on things that can make EA outreach to progressives hard, as well as some tentative recommendations on techniques for making such outreach easier.
To be clear, this post does not argue or assume that outreach to progressives is harder than outreach to other political ideologies. Rather, the point of this post is to highlight identifiable, recurring memes/thought patterns that cause progressives to reject or remain skeptical of EA.
My Background (Or, Why I am Qualified to Talk About This)
Nothing in here is based on systematic empirical analysis. It should therefore be treated as highly uncertain. My analysis here draws on two sources:
- Reflecting on my personal journey as someone who transitioned from a very social-justice-y worldview to a more EA-aligned one (and therefore understands the former well), who is still solidly left-of-center, and who still retains contacts in the social justice (SJ) world; and
- My largely failed attempts as former head of Harvard Law School Effective Altruism to get progressive law students to make very modest giving commitments to GiveWell charities.
Given that the above all took place in America, this post is most relevant to American political dynamics (especially at elite universities), and may very well be inapplicable elsewhere.
Readers may worry that I am being a bit uncharitable here. However, I am not trying to present the best progressive objections to EA (so as to discover the truth), but rather the most common ones (so as to persuade people better). In other words, this post is about marketing and communications, not intellectual criticisms. Since I think many of the common progressive objections to EA are bad, I will attempt to explain them in (what I take to be) their modal or undifferentiated form, not steelman them.
Relatedly, when I say "progressives" through the rest of this post, I am mainly referring to the type of progressive who is skeptical of EA, not all progressives. There are many amazing progressive EAs, who see no conflict whatsoever between these two ideologies. And many non-EA progressives will believe few of these things. Nevertheless, I do think I am pointing to a real set of memes that are common—but definitely not universal—among the American progressive left as of 2021. This is sufficient for understanding the messaging challenges facing EAs within progressive institutions.
Reasons Progressives May Not Like EA
Legacy of Paternalistic International Aid
Many progressives have a strong prior against international aid, especially private international aid. Progressives are steeped in—and react to—stories of paternalistic international aid, much in the way that EAs are steeped in stories of ineffective aid (e.g., Playpumps).
Interestingly, EAs and progressives will often (in fact, almost always) agree on what types of aid are objectionable. However, we tend to take very different lessons away from this.
EAs will generally take away the lesson that we have to be super careful about which interventions to fund, because funding the wrong intervention can be ineffective or actively harmful. We put the interests of our intended beneficiaries first by demanding that charities demonstrably advance their beneficiaries' interests as cost-effectively as possible.
Progressives tend to take a very different lesson from this. They tend to see this legacy as objectionable due to the very nature of the relationship between aid donors and recipients. Roughly, they may believe that the power differential between wealthy donors from the Global North and aid recipients in developing countries makes unobjectionable foreign aid either impossible or, at the very least, extremely difficult. They may therefore prefer aid frameworks in which parties approach each other more as equals or in which there is high-context transfer of feedback from recipients to donors. Of course, these heuristics will tend to privilege interventions within existing communities, and be harder to deploy internationally—hence progressives' skepticism of foreign aid. The fact that this in effect cuts the world's poorest people off from aid entirely counts for very little in the progressive worldview, probably as a result of the act-omission distinction: the bad to be avoided is paternalistic international aid, and simply abstaining from international aid is an easy way to do that.
The Oppression Worldview
Modern progressivism focuses a lot on oppression, which may be defined (from their perspective) as social systems under which some groups receive preferential treatment or disparate rewards despite all groups being equally worthy.
For reasons that elude my comprehension, many progressives do not seem to conceptualize the current assortment of economic and legal policies that cause some countries to be ~100x richer than others as a relevant form of oppression. If they do, they are unlikely to give it as high a priority as, e.g., within-country racial disparities or within-country economic inequality.
A full analysis of why, exactly, global poverty is often not treated as a leading form of injustice by many progressives (as evidenced by the comparatively few progressive resources that go towards it) would be very valuable, but I cannot yet provide one. I do feel confident in saying, however, that to many progressives, global poverty is apparently a non-central example of oppression, or a lower-priority one.
Many progressives are skeptical of the tools of modern economics, believing them (inaccurately, in my view) to play a central role in legitimating domestic income inequality and other maladies. This is probably due to domestic political tendencies for the right to emphasize the value of markets and economic growth more than liberals (who tend to focus more on economic equality). Thus, they may tend to have a negative reaction to EAs relying on economic concepts and tools, including things like cost-benefit analyses, marginal thinking, and QALYs. They may also distrust interventions that leverage market forces or promote economic growth as such. They may tend to believe, despite evidence from economic history, that extreme poverty is solely the result of past injustices, which may have implications for how we ought to understand our moral obligations to the global poor. They are also very hesitant to accept that global poverty is much worse than domestic poverty in extent and severity, which leads to a larger focus on the latter.
Diversity, Equity, and Inclusion (DEI) Issues
Progressives see shared identity as very important to understanding and advocating for the interests of a group. If a group claims to be advocating for some group X, but lacks a member of X in its leadership, this will make progressives very suspicious. Specifically, when EAs purport to advocate for members of the global poor, but our leadership lacks people from the world's poorest countries, they are immediately skeptical that we actually do have their best interests in mind, or can effectively advocate for them. This and the Legacy of Paternalistic International Aid (see above) reinforce each other.
Incompatibility Between Intersectionality and Prioritization
Intersectionality is one of the dominant frameworks on the progressive left for understanding and advocating for social change. The academic and popular uses of intersectionality differ, but the slogan "[t]here is no such thing as a single-issue struggle, because we do not live single-issue lives" captures much of how this is currently understood and used in progressive spaces.
Intersectionality thus implies a strong anti-prioritization framework—or at least a hesitancy to engage in prioritization. Intersectionality implies that narrow prioritization (e.g., an AIDS charity prioritizing education and condom distribution over ART) is pro tanto objectionable insofar as it fails to consider, and allocate equal resources to, the differing needs of all members of a population.
Systemic Change and a Preference for State Action
Seasoned EAs will no doubt be aware that our critics on the left are some of the biggest proponents of the systemic change objection to EA. Progressives seem more likely to believe that major problems can or should only be solved through dramatic restructuring of society, in ways that EAs may be skeptical of by default for a variety of reasons. And when both agree on the need for some form of systemic change, they may often disagree on what that should look like.
Ordinal Speciesism
Some people on the progressive left (especially in the US, it seems) are averse to animal advocacy due to what I will call "ordinal speciesism": the belief that prioritizing animal welfare over human welfare is objectionable. Consider the following quotes from this article (which I selected arbitrarily because it seemed pretty representative of views I see):
White vegans’ priority is the top layer of veganism–animal exploitation, but they ignore the socio-economic impact that comes from the movement becoming more popularized. Some white vegans even go as far to compare historical genocides that have affected BIPOC to the workings of the meat and dairy industries. . . . Veganism can only be about the liberation of animals when it also stops the oppression of people.
The idea that the two can be meaningfully analyzed separately, and that it may be appropriate to prioritize animal welfare over human welfare, is anathema to this worldview, apparently.
Population Ethics
EAs tend to reject person-affecting views of population ethics. This, however, has uncomfortable implications for some hot-button issues on the left, like reproductive rights and environmental ethics.
Guesses at How To Improve Messaging to Progressives
I am not a messaging expert, and have not had any overwhelming success at getting progressives more interested in EA. With that said, here are some of my guesses at what a more progressive-friendly approach to EA messaging could look like. Of course, this does not consider important tradeoffs, such as the potential for alienating other audiences. This will therefore be most useful to people whose primary audience is progressives.
I would consider progressive-friendly messaging from the outset of any public-facing communications, not as a band-aid to be deployed in response to criticisms from the left. First impressions are really important, and so starting messaging with things that are, at the very least, not off-putting to progressives should help advance conversation without as much negative reaction.
Develop and Highlight Community Feedback Mechanisms
We are a fairly welfarist and quantitative bunch, so internal EA discussion on charity evaluation focuses a lot on cost-benefit analyses and much less on qualitative factors that either inform or complement such analyses when making final recommendations. I don't think this is substantively wrong, but I do think it can give the impression that EAs care a lot about quantified spreadsheet inputs and not human factors like recipients' assessments of charities. The latter should not simply be treated as a "nice to have": if CEAs and users' assessments of a program differ dramatically, we can suspect that something has gone wrong, and end-users/recipients can be extremely valuable sources of feedback and suggestions for improvement.
I am not an expert on GiveWell's evaluation process, and am aware that they do do some of this already, but I still think EA as a community could benefit from maybe roughly doubling(?) our cultural attention to the existence and performance of community feedback mechanisms for human-facing charities. This has been a philosophical commitment since the early days of EA, yet information on how we (or the charities we prioritize) actually confirm with recipients that our programs are having the predicted positive impact on them receives, AFAICT, little attention in EA. It may also be that many top charities simply don't have good user feedback mechanisms because donors don't demand them, in which case we should probably encourage more charities to develop them anyway. Mechanisms like accessible feedback hotlines and recipient ombuds may be worth exploring further.
A Digression on GiveDirectly
GiveDirectly is often highlighted as a standout charity on this point, for good reason: features like GDLive and their customer support centers (and, of course, their general model), generally make clear that they care deeply about trusting and receiving honest feedback from end-users. But to the extent that EAs point to GD when this objection is raised without caring about whether other GiveWell charities (which generally receive more funding) have similar mechanisms in place, it feels like a bit of a motte-and-bailey.
Use the Right Words/Framings
Many EA actions can be accurately framed in ways that are more palatable to a progressive worldview. I often remember this quote from a Yale EA as an example:
For me, taking the Giving What We Can pledge was an expression of my commitment to using my class privilege to contribute to a movement towards a more equitable world for current and future generations
Note how this isn't framed in terms of maximizing QALYs/dollar or generalized impact, but rather as "using class privilege" to achieve "a more equitable world." Not only is this still quite faithful to EA principles, but it's also much more palatable to a progressive audience. Similarly, EAs can consider reframing global health/development work as working on "global health justice," "global income inequality," or "global healthcare access," while also highlighting the tools we use to prioritize between interventions in those cause areas.
While I think it's very easy to focus too much on DEI efforts at the expense of impact, I also think that improving DEI in leadership at global health charities—and especially inclusion of people from the recipient countries—can send a good signal about the relationship between the charity and the populations it intends to serve. Such leaders can probably also provide valuable perspective about the communities in which the charity is operating. At the very least, I think it poses a huge communications liability for a lot of these charities among Western progressive audiences.
Bring Policy In Earlier
A common way to communicate about EA is to first talk about "finding the most cost-effective charities" or something similar, then explain the true scope of our ambitions (including policy goals) only later. This mirrors EA's internal evolution from global health prioritization to the inclusion of animals and, ultimately, future generations. Policy interventions came pretty late in this evolution.
But as policy becomes an ever-larger part of the EA portfolio, this message makes less and less sense, and reinforces the perception of EA as averse to enacting systemic change. EA should figure out catchy messages about the types of policy work we support, as we have done for our charitable work.
There are a lot of topics on which EA will have shared interests with typical progressive causes, like environmentalism, climate change, tax justice, welfare spending, immigrants' rights, incarceration reform, and pacifism. Where possible, EA groups should consider showing up for and helping to promote and organize events with common interests. This should enhance our credibility in those spaces.
Things We Shouldn't Do: Reduce Intellectual Rigor
I think there are serious problems with a lack of intellectual rigor and openness in many progressive spaces today. Though I am quite liberal myself, this is one reason I prefer EA spaces to typical progressive ones. I think intellectual rigor remains vitally important to the project of EA, and nothing in this post should be taken to suggest that we should reduce our emphasis on it.
Indeed, EAs tend to be more progressive/left-of-center than the general population. See this post ↩︎
Christian missionary work is often an archetypal example of this. The ABC approach to AIDS prevention may be another. ↩︎
The rise of "mutual aid" as a framework for aid in leftist circles is an example of this. ↩︎
As measured by revealed preferences in the form of comparative resource allocation. ↩︎
Audre Lorde, Learning from the 60s, in Sister Outsider: Essays & Speeches by Audre Lorde 138 (2007). ↩︎
As an example, after ten minutes of searching I could not find information on GiveWell's overall view on this subject on their website. ↩︎
E.g., as far as I can tell, there's not a single person from sub-Saharan Africa on AMF's current staff, trustees, or Malaria Advisory Group. I think this is a pretty big optics liability for them among progressive audiences, independent of its substantive importance. ↩︎
One explanation based on Haidt's Moral foundations theory would be:
I don't know much about how solid moral foundations theory is, and haven't thought much about how plausible I find this explanation or how much of the effect I'd guess it explains.
I honestly think that the progressive movement increasingly values Loyalty (i.e. you're not a real minority if you're politically conservative) and Sanctity (saying the N-word or wearing blackface makes white people "unclean" in a way that cannot fully be explained by the Care/Harm framework), so if anything I think Haidt's moral foundations theory is more right than even Haidt suspected; the taboos and tribes of the Left are simply still being defined.
Thank you for this summary!
One thought that struck me is that most of the objections seem most likely to come up in response to 'GiveWell style EA'.
I expect the objections that would be raised to a longtermist-first EA would be pretty different, though with some overlap. I'd be interested in any thoughts on what they would be.
I also (speculatively) wonder if a longtermist-first EA might ultimately do better with this audience. You can do a presentation that starts with climate change, and then point out that the lack of political representation for future generations is a much more general problem.
In addition, longtermist EAs favour hits-based giving, which makes it clear that policy change is among the best interventions while acknowledging it's very hard to measure effects; this seems more palatable than an approach highly focused on measurement of narrow metrics.
There might be a risk that some view the (very) long-run future as a "luxury problem", and that focusing on that, rather than short-term problems in your own country, reveals your privilege. (That attitude may be particularly common concerning causes like AI risk.) My guess is that people are less likely to have such an attitude towards someone who is focusing on global poverty.
Longtermism isn't just AI risk, but concern with AI risk is associated with an Elon Musk-technofuturist-technolibertarian-Silicon Valley idea cluster. Many progressives dislike some or all of those things and will judge AI alignment negatively as a result.
I wonder if it's a good or bad thing that AI alignment (of existing algorithms) is increasingly being framed as a social justice issue. Once you've talked about algorithmic bias, it seems less privileged to then say "I'm very concerned about a future in which AI is given even more power".
In talking to many Brown University students about EA (most of whom are very progressive), I have noticed that longtermist-first and careers-first EA outreach does better, seemingly because of the objections that come up in response to 'GiveWell style EA'.
The 2019 EA Survey says:
I think the survey is fairly strong evidence that EA has a comparative advantage in terms of recruiting left and center left people, and should lean into that.
The other side though is that the numbers show that there are a lot of libertarians (around 8 percent), and more 'center left' people responded to the survey than 'left' people. There are substantial parts of SJ politics that are extremely disliked by most libertarians and lots of 'center left' people. So while it might be okay from a recruiting and community stability POV to not really pay attention to right wing ideas, it is likely essential for avoiding community breakdown to maintain the current situation where this isn't a politicized space vis-a-vis left v center left arguments.
Probably the ideal approach is some sort of marketing segmentation where the people in Yale or Harvard EA communities use a different recruiting pitch and message that emphasizes the way that EA is a way to fulfill the broader aim of attacking global oppression, inequity and systemic issues, while people who are talking to Silicon Valley inspired earn-to-give tech bros should keep with the current messages that seem to strongly …
Great post! I think this is an issue worth a lot of exploration. My sense though—both from reacting to your post and from my own reflection—is that there is probably a pretty low ceiling in terms of how much is possible here. I'll speak from my own experience as both a fan of EA and as a leftist.
1. It seems to me that EA, right now, has two areas of congregation (very broadly speaking): university/city groups and professional networking circles. So if you're involved in EA you're probably one of the following: a student, someone with a pretty niche expertise, or someone in between. You might have a graduate degree from a top university, and you might be a serious contender for some pretty "big" jobs at important institutions. Pretty much, you might be (obviously this is a generalization) a member of the "professional-managerial class" (PMC). This class status—which is distinct from working-class and capital-owner status—is, I think, what EA will always have, and EA will therefore always appear (understandably) as "elitist" to leftists who are sensitive to working-class politics. To many leftists, EA will always appear like a niche intellectual exercise that is being done by members of the PMC…
This is consistent with my experience. But also, I think a lot of people that end up at HLS don't think in those sort of Marxist/socialist class terms, but rather just have a sort of strong Rawlsian egalitarian commitment.
I also think many people at HLS are hilariously unaware of their class privilege. In fact, many of them think of themselves as victims of unfair power structures vis-a-vis being students. This is how you get HLS grads advocating for their student loans to be forgiven by the federal government (this was truly a fashionable position when I was there) or generally spending their time advocating for HLS students getting better treatment. For example, there were at least two student groups advocating for HLS students to get better financial treatment. The second one explicitly focuses on how large law firms (starting salary: $180-190k) treat early-career lawyers.
FWIW, I strongly agree with both of these statements for Oxbridge in the UK as well.
The latter I think is a combination of a common dynamic where most people think they are closer to the middle of the income spectrum than they are, plus a natural human tendency to focus on the areas where you are being treated poorly or unfairly over the areas where you are being treated well.
Thanks for writing this, I think this topic is worthy of more discussion.
I wonder how much we should even recommend leaning into the progressive/social justice framing when the audience primarily comes from this ideological bent.
If I'd read this testimonial on the local EA website, there'd be a solid chance I'd have been significantly less interested, because it doesn't connect to my altruistic motivations and (in my head) strongly signals a political ideology…
I think "The Privilege of Earning to Give" by Jeff Kaufman (who I'm married to) helped bridge a gap between us and our non-EA friends, who tend to have much more standard leftist views than we do.
[I was sympathetic to common progressive/left-wing/social justice views before encountering EA. I'm from Germany, so my experience might not apply as much to the US.]
I'm wondering if another reason why some progressives may not like EA is a much more cynical prior about the intentions of powerful people and institutions, plus an unwillingness to update away from it or an inability to identify evidence that would allow for such updates.
E.g. it strikes me that before I encountered EA the only context in which I ever had heard about the Gates Foundation was in contexts where it was at least implied that obviously we should expect its activities to serve Gates's private interests rather than the common good. It requires some knowledge about the particulars of the Foundation's activities, and context to understand how it differs from the activities of other foundations, to come to a more sympathetic view.
I think of the intersectionality/social justice/anti-oppression cluster as being a bit more specific than just 'progressive' so I will only discuss the specific cluster. Through activism, I met many people in this cluster. I myself am quite sympathetic to the ideology.
But I have to ask: How do you hold this ideology while attending Harvard Law? From this perspective, Harvard Law is a seat of the existing oppressive power structure and you are choosing to become part of this power structure by attending. The privileges that come from attending Harvard…
Ironically, the situation in which I have most frequently been asked about whether EA is elitist is while giving intro talks about EA at MIT, Yale, etc.
Based on my experiences as a Yale undergraduate, I've come away with the perhaps overly pessimistic conclusion that a lot of class-privileged leftists at Ivy+ schools don't actually resolve that contradiction, and are unfortunately not that interested in interrogating and addressing their class privilege, or in thinking about redistributing what familial or future wealth/resources they may have access to. I say this as a former organizer of Yale EA and as someone who started a Resource Generation chapter there and found it difficult to get people to engage. By way of comparison, it was considerably easier to find people interested in the local DSA chapter.
(For context, Resource Generation is a movement that organizes young (USAmerican) people with wealth or class privilege to redistribute their wealth, land, and power, and I see it as perhaps the most viable movement for class-privileged US leftists who are really interested in addressing the contradiction of being both leftist and wealthy. See for example their giving pledge guidelines, which are considerably more ambitious than GWWC's, and have as their goal for the "top 10% to develop plans to redistribute all or almost…
I don't think xuan's main point was about being charitable, although they had a few thoughts in that direction. More generally, trying to be charitable is usually good. Of course it's going to miss a point (what finite comment isn't), but maybe it's making another?
I appreciate you trying to bring the discussion towards what you see as the real reason for lefty positions being held by privileged students (subconscious social status jockeying), but I wonder if there's a more constructive way to speculate about this?
Maybe one prompt is: how would you approach a conversation with such a lefty friend to discover if that is their reason, or not?
You could be direct, put your cards on the table, and say you think they are just interested in the social status stuff, and let them defend themselves (that's usually what happens when you attack someone's subconscious motivation, regardless of what's true). Or you could start by asking yourself: what if I was wrong here? Is there another reason they might hold this position on this topic? That might lead you to ask questions about their reasons. You could test how load-bearing their explanations are, by asking hypotheticals, or for them to be co…
Discussion of progressive ordinal speciesism on the latest 80,000 Hours podcast:
What makes you say rejecting person-affecting views has uncomfortable (for progressives) implications for reproductive rights and environmental ethics, out of curiosity? I would have thought the opposite: person-affecting views struggle not to treat environmental collapse as morally neutral if it leads to a different set of people existing than would have otherwise.
I can see why left wing views on abortion would bias people against totalist views, because they do not want to accept the implication that someone's desire to abort their child could be 'outweighed' by the interests of a possible-person. And I guess totalism would also imply we should have more children, in contradiction to the idea that we should have…
Another thought I meant to include with my original post:
These reflections/experiences have also led me to believe that, all else equal, EA groups at colleges are more valuable than ones at grad schools. Anecdotally, One For The World college chapters were much more successful on average than HLS's, despite HLS grads' higher earning potential. My model is that many people adopt the sort of EA-skeptical progressive worldview described here in college, which makes outreach in grad schools harder.
I think making EA a viable alternative or complement that college students are exposed to during their formative years would be very valuable for this.
Thanks for writing! This definitely helped clarify some of the push-back I often get when trying to explain these ideas to friends.
This will definitely stick with me. It seems the only way to get around this contradiction is to just not think about it, but maybe I'm missing something?
I think it's because they know women/poc/trans ppl/ppl on whatever fashionable domestic axis of inequality you want to look at, but don't know anyone who lives in Burundi, and because the experience of oppressed people in America is still close enough to their own to actually empathize with. Lot easier to empathize with your friend who got called a slur than with someone dying of malaria in Africa. Both because they are your friend, and because you've probably been called mean names, maybe even by the same type of asshole tossing slurs at them, whereas deadly diseases that affect young healthy people are hard to even imagine.
This is a useful point but I would add a little bit to it. People on the left often think about racism, transphobia, and homophobia as quite a bit more than a POC friend of theirs being called a slur. Leftists often think of these as fundamentally systemic issues with very real, often physical, consequences. Like, racism in the US can manifest as, say, an entire generation of poor Black families being poisoned by a local CAFO, or an inability to develop intergenerational wealth due to explicitly racist economic policy.
I think sometimes EAs can offer a rather uncharitable take on the left, like that the left's concern with racism is just "SJW Safe Space" stuff or whatever. Not saying that's what's happening in this thread, but I would just say that if EA wants to be more open to progressives and leftists, it has to take very seriously what they actually believe.
As an example, I was pleased to see that the broad EA take during the BLM summer protests didn't seem to be just "well people should donate to AMF instead of buying markers and signs," which may have been the take of 2015 EA. Whether EAs agree with them or not, ideas like socialism, progressivism, social justice, and so on are serious ideas and shouldn't be dismissed in the way that I sometimes have seen them dismissed.
Arguments like this certainly don't help win over leftists to EA.
A self-described democratic socialist nearly won the Democratic nomination for president. The Democratic Socialists of America (DSA) helped get dozens of candidates elected to national, state, and city offices over the last 4 years. Polling shows millennials being more sympathetic to socialism than capitalism.
The Warren example would also quickly get your political analysis dismissed by almost anyone on the left. Warren's lifetime voting record is slightly more left than Bernie's according to this site (https://progressivepunch.org/scores.htm?house=senate), but she took office in 2012, while Bernie became a senator in 2006 after serving 16 years in the House. The Democratic Party has moved to the left over the last 30 years, so more recently elected officials will have a more left-leaning record, all else equal. Bernie also received far more support from left-wing organizations than Warren did. The last point, that Warren is trying to get actual legislation done while Bernie and other socialists aren't, is just wrong. Bernie played a huge role in shaping the recently passed American Rescue Plan, which is estimated to halve child pover…
I'm a self-described socialist. I also work at an EA-aligned nonprofit and co-organize one of the largest EA groups in the world. I know plenty of other EAs who do great work and identify as socialists or leftists.
But maybe EA would be better off without us because our political contributions are objectively wrong according to your analysis.
Your analysis assumes that the goal of anyone with left-of-center politics is to flip seats from red to blue, but this is not the goal of the DSA. Obviously, winning majorities is essential to enacting legislation, but the composition of those majorities will change what the legislation looks like. In the example I linked above, Bernie was able to significantly influence the American Rescue Plan to get more unconditional cash to people who need it, among other things. In New York State, Democrats hold supermajorities in the Assembly and Senate. All five of the DSA's endorsed candidates won their primaries (the actually competitive elections). One of them was the lead sponsor of the HALT Solitary Confinement Act, which significantly restricts the use of solitary confinement (i.e., torture) in New York's corrections facilities and just passed t…
I am not ideologically opposed to anything. I am opposed on empirical grounds to Marxism, and approximately indifferent on the merits between centrist Democrats and what most Americans refer to as "socialism". I am also empirically opposed to anyone referring to themselves as a "socialist" in American politics, because it's a bad tactic in the elections that actually affect people's lives. Even in Dem-supermajority legislatures, self-described socialists don't make up enough of the caucus to be the deciding vote on an issue that has a clean left-right divide.
I voted for Sanders in 2016 because my uneducated instinct was "progressive good" and because I thought Clinton was a particularly weak candidate. Then I learned how bad his record is on immigration (well to the right of Joe Biden's, for example), and have deeply regretted that vote ever since. EA has moved me somewhat toward the Dem establishment and away from the Left because it has given me the tools to prioritize effectively among the issues I care about, which was something I always knew I should be doing but didn't know how to do before. I had always noticed a strain of America-only-ism in some quarters of the…
I worry that there's a danger in taking the ideas of the left too seriously. If I take an idea like "abolish the police" seriously, I want to respond with the best arguments against it in order to have a productive discussion of criminal justice policy, and I end up denying people's lived experience. I think it would be a very bad idea for EA to take the ideas of the Left seriously in any way that risks seeming critical of them.
Whereas if I don't take the idea seriously and understand it merely as an expression of distaste for modern American policing, I can be much more compassionate and understanding. It's probably better to take the sentiment more seriously than the slogans.
I haven't read this whole thread, so forgive me if I'm re-stating someone else's point.
I think there's another explanation: they have a hypothesis about you/EAs/us that we are not disproving.
My experience has been that people in any numerical or social minority group (e.g. Black Americans, people with disabilities, someone who is the "only" person from a given group at their workplace, etc.) are used to being met with disappointing responses if they try to share their experiences with people who don't have them (e.g. members of the numerical or social majority group that they are different from). Most of us have had this experience at least some of the time, maybe as EAs! People get blank stares, unwanted pity or admiration, or outright dismissal and invalidation (e.g. "it can't be all that bad" or "you're just playing the [race/poverty/privilege/whatever] card"). This is definitely the kind of conversation people see over and over again on the internet. So, until proven otherwise, that's what people expect. Majority group members are expected to be ignorant of what life is really like for people who experience it differently. I think this is a ration…
I found this helpful. I'm in a similar situation, having moved from "social justice" (mainly concerned with homelessness in my own city) to effective altruism, and so am trying to think of good ways to engage people. I'm also slightly concerned that if we don't phrase things in the correct way, the left may try to destroy us.
I wonder if talking about the causes of international economic inequality makes it seem more like an issue of injustice to be addressed from a progressive/social justice framework? That's one way I'd frame the issue when talking about EA principles t…
FWIW, the most closely related GiveWell article I'm aware of is How not to be a "white in shining armor". Relevant excerpts (emphasis in original): …
Great discussion of Econ Aversion from Julia Wise here: https://juliawise.net/economics-not-as-bad-as-i-thought/
Thanks for this great post. I'm closer to a left-libertarian or classical liberal myself, but I have many friends and family (mostly in the US) who are more traditional progressives and much more sympathetic to typical social justice concerns than to EA. I agree with many of the issues identified here (including in the comments); my own experience has been that it is largely that they want to be able to "walk and chew gum at the same time". As an economist, I'm imbued with notions like opportunity cost and only being able to optimize one goal at a time (pote…
Thanks for the article; it's interesting and well-written. I'm sure it will be useful as a reference for me in some future conversations.
With reference to your section titled Incompatibility Between Intersectionality and Prioritization - how do you see worldview diversification fitting in?
To me, this perspective incorporates the value of diversifying across causes (which intersectionality protects) while still being realistic about actually getting things done (which prioritization protects). Under a worldview diversification lens, prioritization is less abo…
I feel like invoking worldview diversification here is discussing things at the wrong level.
It's like saying "oh, it's OK that you believe in intersectionality, because from a worldview diversification perspective we want to work on many causes anyway," and failing to address the fundamental disagreement: within their worldview, an intersectionalist does not find cause prioritization useful.
Like, I feel the crux of intersectionality is that different problems are interwoven in complex, hard-to-understand ways. So, as OP pointed out, if you believe this you'll need to address all problems at once by radically restructuring society.
Meanwhile, the crux of worldview diversification is that we are not certain of our own values and how they will change, so it is better to hedge our bets by compromising among many views.
That wasn't really what I was saying, and I don't think you're steelmanning the intersectionalist perspective, although I agree with your description of the crux. I think many (maybe most?) people who like intersectionality would agree that prioritization is sometimes necessary and useful.
An attempt to steelman intersectionality for a moment:
- problems are usually interwoven and complex
- separating problems from their contexts can cause more problems
- saying one problem is more important than another has negative side effects, because we would be trying to fix a broken hammer with a broken hammer (many progressives believe that comparison culture is itself a cause of many problems)
I am unsure this is incompatible with prioritization, which in my view is simply a practical consequence of not having infinite resources. I think they'd agree, and would not take issue with, for example, someone dedicating their life to climate change alone, as long as that person did not go around saying climate change is more important than all the other important issues, and also saw how climate change is related to, for example, improving international governance or reducing corruption and work…
Whether this is a ‘good’ answer would depend on your audience, but I think one true answer from a typical EA would be ‘I care about those things too, but I think that the global poor/nonhuman animals/future generations are even more excluded from decision-making (and therefore ignored) than POC/women/LGBT groups are, so that’s where I focus my limited time and money’.
I don’t actually think the cause-area challenge is quite what is going on here; I can easily imagine advancing those things being considered cause areas if the case for them were stronger.
Perhaps an overlooked take? (Somewhat echoing other commenters, though.)
US politics is supremely polarised. Those self-identifying as progressive are mainly motivated by opposing the right, and vice versa.
EA doesn’t particularly focus on labelling the biggest issues as problems that are the “fault” of the right (or left).
Thus our reasoned and goal-driven approach will leave people on each side cold.
But conversely, I think some of the steps suggested could, at the very least, make outreach to people to the right of center (or ‘anti-woke’) more difficult.
E.g., “commitment to using my class privilege…” narratives.
Also, I wanted to ask for more evidence for “EAs tend to reject person-affecting views of population ethics”.
I am personally fairly sympathetic to this view, at least in a common-sense moderated version as suggested by Michael St. Jules. I know some prominent voices (Wiblin) seem to favor the totalist view, but my impression is that they typically moderate this with “but even if you have other views of population ethics, we think the recommendations are similar”.
I don't know where you would go to get more quantitative evidence of this (the EA survey?) but the headline claim matches my experience. Many longtermist EAs seem to be highly motivated by "creating huge numbers of future happy people" sorts of arguments that fall flat from a (strongly) person-affecting view.
From this perspective, a corporate lawyer who went to Harvard is not a class traitor. They are just acting in their own class interests.