I strongly believe this article touches on the most important question in the formation and continuation of Effective Altruism and should be discussed here. I think it stands on its own, provides abundant examples, and is well-reasoned in highlighting what I view as current and future limitations of EA as applied rationality. Hanania, as always, is disagreeable and phrases issues provocatively, but he is as evidence-based as any post on this forum. I will highlight some key passages I found most valuable as a starting point for discussion:
In the end, EA will need something like the Darwin-Jesus synthesis of American conservatism. In this case it would of course be much more Darwin than Jesus, and find Aella more a source of amusement or scientific curiosity than a sign of the apocalypse. But taking Darwin too seriously puts you on a collision course with the left, and not just because it prevents you from achieving gender parity in leadership roles. Be the kind of movement that takes an evolutionary view of sex differences, and you’ll attract individuals able to think freely about the causes of other kinds of disparities. Group differences in IQ is right around the corner, and if you’re going to maintain any kind of commitment to rationalism you’re going to have to either stop yourself before getting on that train or take it to its logical destination.
This was obviously highlighted by the Bostrom scandal, which made me very aware that in EA we still have sacred cows of our own, and many are unable to distance themselves from their own sacred beliefs and take evidence at face value.
EA has thus far avoided falling into either category on account of it being new and marginal. But it’s now entering the real world. One path it can take is to be folded into the Democratic coalition. It’ll have to temper its rougher edges, which means purging individuals for magic words, knowing when not to take an argument to its logical conclusion, compromising on free speech, more peer review and fewer disagreeable autodidacts, and being unwilling to engage with other individuals and communities that are too non-conformist to avoid having any heretical strains. A woke EA means noticing that the FDA might move too slow on approving certain kinds of drugs, while ignoring that the fields of biology and medicine are in the name of sensitivity being transformed to increasingly select for a kind of cultish conformity, pushing brilliant and independent thinkers into other kinds of work.
This has been touched on recently by Tyler Cowen and a variety of forum posts. EA is best able to fulfill its mission of improving well-being when it resists these polite temptations.
We already have a movement that is able to reason carefully, or at least have a rational discussion, on most things while being beyond hopeless on anything related to identity issues. It’s called liberalism! Accept its views on the need for diversity and the causes of group disparities, and you’re just debating technocratic questions about the best way to address global poverty. Which is fine, but makes EA a movement of extremely limited ambitions.
Putting aside political realities, an EA freed from the shackles of wokeness will be better able to live up to its highest ideals by taking seriously important threats to human well-being that the movement currently ignores for purely political reasons. What does it mean that birthrates are decreasing at the same time there is a negative relationship between IQ and fertility across much of the developed world? And, speaking from a strictly utilitarian perspective, why exactly do we let a tiny minority of violent criminals make large swaths of what are potentially some of our most economically productive urban areas uninhabitable, instead of simply getting rid of them in full confidence that we’re doing the greatest good for the largest number of people? These are the kinds of questions an honest movement either has to ignore or become obsessed with.
Hesitance on gene-editing, crime-as-a-cause-area, and yes, so-called "HBD" highlight this. EA should be willing to explore all potentially fruitful avenues of mission fulfillment without regard to taboo. I think this topic is well explored in this Scott Alexander excerpt on Jewish achievement:
People act like genetic engineering would be some sort of horrifying mad science project to create freakish mutant supermen who can shoot acid out of their eyes. But I would be pretty happy if it could just make everyone do as well as Ashkenazi Jews. The Ashkenazim I know are mostly well-off, well-educated, and live decent lives. If genetic engineering could give those advantages to everyone, it would easily qualify as the most important piece of social progress in history, even before we started giving people the ability to shoot acid out of their eyes.
But maybe the Jewish advantage will turn out to be cultural. If that's true, I think it would be even more interesting - it would mean there's some set of beliefs and norms which can double your income and dectuple your chance of making an important scientific discovery. I was raised by Ashkenazi Jews and I cannot even begin to imagine what those beliefs would be - as far as I can tell, the cultural payload I received as a child was totally normal, just a completely average American worldview. But if I'm wrong, figuring out exactly what was the active ingredient of that payload would be the most important task in social science, far outstripping lesser problems like crime or education or welfare (nobody expects good policy in these areas to double average income!). Far from trying to make this sound "less interesting", we should be recognizing it as one of the most interesting (and potentially socially useful) problems in the world.
EA's existing taboos are preventing it from answering questions like these, and as new taboos are accepted, the effectiveness of the movement will continue to wane.
Anon Rationalist - thanks for posting some excerpts from this Richard Hanania piece. I would encourage EAs to read the entire original piece for full context.
I'll just add two comments for now:
First, Hanania is writing as a trollish edgelord. He uses crude, sweeping generalizations, deliberately provocative language, and emotionally inflammatory criticisms of wokeness. Brace yourselves for that. He's not using the usual rhetorical style and rationalist epistemics of EA Forum posts. He's writing to get your attention, and you probably won't like at least half of what he says.
Second, despite all that, I agree with Hanania that wokeness is an existential threat to EA, to utilitarianism, and to rationalism, and we should take the threat more seriously.
I've noticed a tendency in EA to show exactly the kind of gutless, cringing, defensive deference to woke critiques of EA that Hanania talks about. Whenever a critic of EA (whether inside or outside the movement) uses woke argument styles, deploys woke terms, and invokes woke values (e.g. 'diversity, equity, inclusion'), most EAs seem too frightened to challenge them, for fear of suffering the usual fate of the anti-woke: dogpiling, brigading, ostracism, and cancellation.
If we don't develop robust defenses against wokeness, we will suffer the same fate as every other organization that values emotional victimization rhetoric over scope-sensitive rationality.
In general, where it doesn't directly relate to cause areas of principal concern to effective altruists, I think EAs should strive to respect others' sacred cows as much as possible. Effective Altruism is a philosophy promoting practical action. It would be harder to find allies who will help us achieve our goals if we are careless about the things other people care a lot about.
I generally agree that being palatable and well-funded are beneficial to effective altruism, and that palatability and effectiveness exist on a utility curve. But I do not know how we can accurately assess which cause areas should be of principal concern if certain avenues are closed out of respect for others' sacred cows. I think the quote from Scott Alexander addresses this nicely: if you could replicate Jewish achievement, whether culturally or genetically, doing so would be the single most significant development for human welfare in history. Regardless of taboos, that should be a cause area of principal concern, and it would be if EA subjected such ideas to cost-benefit analysis instead of sacred belief. And as the list of sacred beliefs grows, it further hampers other cause areas that would benefit from a rationalist mindset.
The relevant part of the Cowen talk:
"Group differences in IQ is right around the corner, and if you’re going to maintain any kind of commitment to rationalism you’re going to have to either stop yourself before getting on that train or take it to its logical destination."
Eugenics or "human biodiversity" isn't a new idea, and it is incredibly toxic to most people. It has no place in the EA movement. If you let it anywhere near the movement, the only people who will remain are contrarian right-wingers who care more about being edgy and provocative than about helping people or other animals. And also those who enjoy hanging around contrarian right-wingers (maybe they find them endearing or something? idk).
Speaking as a group leader for a local EA group, if someone tried to start a conversation on HBD and its assumed "logical destination" at a group meetup, I would immediately and permanently ban them from the group. It is incredibly hard to bring in different perspectives that lead to good decisions and better normative ethics. One conversation like that is enough to permanently lose many people I want in my group. If banning a few disagreeable right-wing "rationalists" who like to talk about HBD means lots of other people stay, I'll gladly take that trade-off.
For those who think a ban is too harsh: go read the article linked to in this post, read several other top posts from the author, and read the comments from the blog's followers.
There are plenty of contributions right-wing ideas and disagreeable rationalists can make to the EA movement. Many of the movement's best ideas come from those who identify that way. Just not the kind demonstrated in this post.
[EDIT] I'm puzzled by the disagreement votes, so adding some more context: In the linked blog post, Richard Hanania writes "A free market in ideas is like a free market in any other good or service. It ends up with Asian and white men on top who are there because they’re simply better than everyone else. Movements uncomfortable with this naturally get swallowed by wokeness."
I think it's pretty clear that what Richard Hanania and the post author mean when they say "anti-woke" is that they think EA should entertain operationalized racism and sexism. Anti-racism and anti-sexism are commitments widely shared by EA community builders. If being anti-racist and anti-sexist is "woke", the majority of EA has been "woke" for a long time and does better because of it.
"Eugenics or 'human biodiversity' isn't a new idea and is incredibly toxic to most people."
Right, and calling an idea "toxic" is literally the same thing as calling it "taboo." Hanania argues rationalism is the belief that "fewer topics...should be considered taboo...and not subject to cost-benefit analysis."
It sounds like your argument isn't explicitly saying that you consider this topic off-limits personally, but rather that too many others view it as taboo, so as a practical matter you will lose more people than you'll gain (or lose the right people and gain the wrong people).
This sounds like a cop-out to me. Do you feel these ideas, in and of themselves, are too "toxic" to justify a cost-benefit analysis, or is your argument simply that the ideas are currently too unpopular to consider for practical reasons?
Ideas that you talk about don't stand on their own. They exist within a historical and social context. You can't look at the idea without also considering how it affects people. I imagine Matthew personally finds the idea toxic too, as do I - but that's not really the point.
Perhaps Rationalism really argues that fewer ideas should be taboo, or perhaps that's just Hanania's version of it. But EA isn't synonymous with Rationalism, and you don't need to adopt one (certainly not completely) to accept the other.
I didn't understand (1).
Why would it not be fine for topics to be off-limits for discussion?
The first principle of EA discusses the need for a "'scout mindset' - seeking the truth, rather than to defend our current ideas."
You may be aware that at one point the idea the earth revolves around the Sun was taboo.
What is taboo varies widely over time and by culture. Even the idea that having an open, honest discussion about anything could ever be construed as "causing harm" (aside from being a terrible one imo) is a very new concept, one that would have been universally dismissed maybe even 15 years ago.
At any rate, it sounds like you are fine with topics being absolutely off limits to discuss. This is a bit of a surprising admission to me considering the core principles of EA but you are, apparently, certainly not alone in this belief.
Traditionally, thought leaders in EA have been careful not to define any "core principles" besides the basic idea of "we want to find out using evidence and reason how to do as much good as possible, and to apply that knowledge in practice". While it's true that various perceptions and beliefs have crept in over the years, none of them is sacred.
In any case, as far as I understand the "scout mindset" (which I admit isn't much), it doesn't rule out recognising areas which would be better left alone (for real, practical reasons - not because the church said so).
How can we “find out using evidence and reason how to do as much good as possible, and to apply that knowledge in practice" if some avenues to well-being are forbidden? The idea that no potential area is off limits is inherent in the mission. We must be open to doing whatever does the most good possible regardless of how it interacts with our pre-existing biases or taboos.
This would not have been a remotely controversial statement in a community like this 20 years ago.
The fact that this was downvoted several times without any counter argument is a pretty clear signal that we've reached the end of rational discussion here.
To me, "better left alone" and "sacred" are two sides of the same coin.
As a concrete example, suppose that 100 years ago, a bunch of racist politicians passed a minimum wage law in order to price a local ethnic minority out of the labor market. The minimum wage exists within that historical and social context. However, if more recent research shows definitively that the minimum wage is now improving employment outcomes for that same ethnic minority, the historical and social context would appear to be irrelevant.
I think you have misread and misused the quote. It does not suggest following HBD to its "logical destination" (which in my mind evokes things like genocide and forced sterilization), it suggests that if EA were to accept different biological bases for observed phenomena, such as men disproportionately occupying positions of leadership, EA would then naturally progress to the idea that other observed phenomena (Asian and Jewish overrepresentation in cognitively demanding fields, for example) could be reflections of biological reality.
I think Jgray's comment addresses my other point: your position that a topic is too toxic to even discuss in good faith fits Hanania's framework of the taboo perfectly.
This post quotes Scott Alexander on a tangent about as much as it quotes Richard Hanania himself, bolstering minor points in Hanania's post by appealing to effective altruists' bias in favour of Scott Alexander.
By linking to and so prominently quoting Hanania, you're trying to create an impression that the post should be trustworthy to effective altruists in spite of its errors and falsehoods, about effective altruism in particular and in general. Assuming you've made this post in service of a truth-seeking agenda, you've failed by propagating an abysmal perspective.
There are anti-woke viewpoints that have been well-received on the EA Forum but this isn't one of them. Some of them haven't been anonymous, so the fact that you had no reason to worry more about your reputation than 'truth-seeking' isn't an excuse.
You would, could, and should have done better had you shared an original viewpoint from someone more familiar with effective altruism than Hanania is. May you take heed of this lesson the next time you try to resolve disputes.
Considering an EA group leader said that they would permanently ban anyone who brought up some of these topics, I am content in my choice of anonymity.
This doesn't appear to be a good faith response. It doesn't address any of the ideas presented; it just takes issue with OP "selectively quoting" and being anonymous, which, based on your contemptuous and dismissive response, seems totally understandable to me.
You would, could, and should have done better with an honest response of an upvote and a comment, "yep, these ideas are way too taboo for us to touch."
How do you define "wokeness"? The term is often used very broadly as a placeholder for vaguely culturally left things the writer dislikes, broad enough that anyone in the audience can feel like it's referring to specifically the things they dislike. And there's often a degree of strategic ambiguity/motte-and-bailey in how it's used.
Almost every "bad" thing said here about "Woke EA" sounds good to me, while the "good" things EA would otherwise be able to achieve sound absolutely horrible.
dspeyer brought up an interesting example in another thread:
It seems likely to me that if EA were "folded into the Democratic coalition" in this way, we would've been much slower to recognize the importance of COVID-19.
It's only with the benefit of hindsight that we know COVID-19 was going to be a huge pandemic. A question to think about: What's the COVID-19 pandemic probability such that we should be indifferent between discussing the virus and staying silent -- the pandemic probability such that the expected downside from anti-Chinese racism equals the expected upside from pandemic preparedness?
IMO the benefit of EA getting folded into the Democratic coalition is limited, because people who think it's racist to discuss low-probability pandemics are already well-served by existing groups in the Democratic coalition. EA shouldn't try to be all things to all people. It's OK to leave EA and switch to antiracism advocacy if that's the cause that really speaks to you.
I don't think EA should be folded into the Democratic coalition either, but this comment is a massive strawman.
Being in a coalition does not mean agreeing with the majority of the coalition at all times. There is in fact a huge degree of disagreement within said coalition, as becomes obvious around primary time.
Nobody was kicked out of the Democratic Party for being concerned about COVID-19. At worst, there was mild social pressure within the coalition on the subject, which I hope EA would have been able to resist.
So I quoted a particular "bad" thing that Hanania brought up, and explained why I thought it was, in fact, bad.
I'm unclear on who you believe to be strawmanning whom. My best guess is that you believe Hanania to be strawmanning the idea of "being folded into the Democratic coalition". However, that's not much of a strawman; it's a phrase he invented, and he immediately explains what he means by it.
I'm not reassured by your observation that disagreements within the Democratic coalition only tend to be apparent around primary time.
Nor am I reassured by your "Nobody was kicked out" observation -- being kicked out is often the result of ignoring accumulated "mild social pressure".
Again, benefit of hindsight -- suppose COVID-19 turned out to be a dud, and EA suffers the racism accusation without any corresponding vindication. How many dud pandemics before enough "mild social pressure" has accumulated that we are de facto no longer part of the coalition? My suspicion is less than ten -- we'd get an image as "those racists who are always warning about pandemics from other countries". So if EAs were significantly motivated by staying in the coalition, I think we could easily end up paying too little attention to pandemics.
I certainly hope EA would've been able to resist such social pressure if we were part of the coalition. But pro-woke EAs make me nervous, because they aren't providing a blueprint for when and how such social pressure should be resisted -- and I observe that social pressure has a tendency to create self-reinforcing spirals.
According to my model, being part of a political coalition means giving something up. I think anyone who wants EA to join a political coalition should explain what, specifically, EA should give up relative to a pure focus on doing the most good, and why this is a worthwhile sacrifice. I found your comment a bit frustrating because you seem to imply that joining a coalition is cost-free, and I don't think that's true.
Okay, now you're strawmanning me. Disagreements within the Democratic coalition are continuous; they are simply most fervent and visible during primary season, when the stakes are greatest.
If you're in the Democratic coalition, being called racist on a flimsy basis by people on Twitter is actually fairly inevitable. I can't think of a single politician or faction this hasn't happened to. And yet somehow, they keep on trucking.
The actual response to warning about the pandemic would be a handful of Twitter weirdos calling you racist, most people going "that seems unreasonable", and everyone continuing on with their lives. This is mainly because warning about pandemics isn't actually racist.
I still don't think being in the coalition is a good idea, but the portrayal here makes it seem like being loosely affiliated with a political movement makes you a dogmatic zombie.
Do you think this is an incentive that people don't respond to?
See this search of pandemic news articles prior to March 2020. You can see lots of news outlets downplaying the virus in favor of racism concerns.
I'm curious just how many people reacted to these articles at the time by saying "that seems unreasonable". I don't remember much of anyone publicly reacting that way. This would be a good test of the degree to which "being called a racist" is an incentive people respond to, if you can find a number of prominent examples of people saying "that seems unreasonable" within the Democratic coalition.
My model is that if the coronavirus caused just as much damage, but in some complicated semi-hidden way that wasn't directly attributable to a pandemic, people would still be just as focused on the racism aspect of coronavirus discussion.
As far as I can tell, Peter Thiel went from being an interesting and intelligent person I had a ton of respect for (he donated lots to MIRI and gave a couple of EA summit keynotes) to a dogmatic zombie, primarily due to loose affiliation with a couple of political movements (neoreaction and the Republican party).
If someone who's famously contrarian and independently wealthy can't resist the pull of polarization, I'm not betting on anybody.
Edit: Here's something from Bryan Caplan
I'll only answer with a small point: I'm from a different country, and we don't have a "Democratic coalition", nor do we have racism against Chinese people, because there are barely any Chinese people here (hence, we didn't have this pressure against making a big deal of COVID). I don't see EA through an American perspective, and I mostly ignore phrases like that.
Still, generally speaking, I would side with US Democrats on many things, and I am sure the mild disagreements required wouldn't be an actual problem. Progressivism is perceived by conservatives as something that creates extreme homogeneity of thought, but that doesn't really seem to be the case to me.
You say you happen to already agree on most things, perhaps you therefore wouldn't experience much pressure.
Could you expand on this? What do you find horrible about the ability to recreate the success of Ashkenazi Jews among different populations, for example?