I am not a card-carrying member of EA. I am not particularly A, much less E, in that context. But the past few months have been exhausting: watching a community I like in turmoil, repeatedly, while it clearly fumbles basic aspects of how it's seen in the wider world. I like having EA in the world; I think it does a lot of good. And I think you guys are throwing it away over the aesthetics of misguided epistemic virtue signaling. But it's late, I've read more than a few articles, and this post is me begging you to please just stop.
The specific push here is of course the Bostrom incident, in which he clearly and highly legibly wrote that black people have lower intelligence than other races. And his apology was, to put it mildly, mealy-mouthed and without much substance. If anything, in the intervening 25 years since the offending email, all he seems to have learnt is to forget the one thing he said he wanted to do: speak plainly.
I'm not here to litigate race science. There's plenty of well-reviewed science in the field demonstrating, variously, issues with the measurement of both race and intelligence, to say nothing of how gaps evolve over time, catch-up speeds, and a truly dizzying array of confounders. I can easily imagine that if you're young and not particularly interested in this space you'd hold a variety of views. What is silly is seeing someone so clearly in a position of authority, with a reputation for careful consideration and truth-seeking, maintain this kind of view.
And not only is this just wrong, it's counterproductive.
If EA wants to work on the most important problems in the world and make progress on them, it would be useful to have the world look upon you with trust. For anything more than turning money into malaria nets, you need people to trust you. And that includes trusting your intentions and your character.
If you believe there are racial differences in intelligence, and your work requires you to tackle the hard problems of resource allocation or longtermist societal evolution, nobody will trust you to make the right tradeoffs. History is filled with optimisation experiments gone horribly wrong when beliefs like these sat at the foundation. The base rate of horrible outcomes is uncomfortably high.
This is human values misalignment. Unless you have overwhelming evidence (or any real evidence), this is just a dumb prior to hold and publicise if you're working on actively changing people's lives. I don't care what you think about the ethics of sentient digital life in the future if you can't figure this out today.
Again, all of which is individually fine. I'm an advocate of people holding crazy opinions should they want to. But when something like a third of the community seems to support him, and the defences require contortions that simultaneously agree with him, dismiss the issue, and whine about the drama, that's ridiculous. While I appreciate posts like this, which speak about the importance of epistemic integrity, they seem to miss the fact that applauding someone for not lying is great, but not if the belief they're holding is bad. And even if this blows over, it will remain a drag on EA unless it's addressed unequivocally.
Or this type of comment, which uses a lot of words but effectively seems to support the same thought: that no, our job is to differentiate QALYs, and therefore differences are part of life.

But guess what: epistemic integrity on something like this (I believe something pretty reprehensible and am not kowtowing to people telling me so) isn't going to help with shrimp welfare or AI risk prevention. Or even malaria net provision. Do not mistake "sticking with your beliefs" for an overriding good, above believing what's true, acting kindly towards the world, or acting like serious members of a civilisation where we all need to work together. EA writes regularly about burnout from the sheer sense of feeling burdened with a duty to do good; well, here's a good chance to do some.
In fact, if you can't see why sticking with the theory that "race X is inferior in Y" and "we unequivocally are in favour of QALY differentiation" together constitute a clear and dangerous problem, I don't know what to say. If you want to be a successful organisation that does good in the world, you have to stop confusing sophomoric philosophical arguments with actual lived concerns in the real world.
You can't sweep this under the rug as "drama of the day". I'm sorry, but if you want to be anything more than yet another NGO that takes itself a tad too seriously, this is actively harmful.
This isn't a PR problem, it's an actual problem. If one of the most influential philosophers and leaders of your movement is saying things that are just wrong, it hurts the credibility of any other framework you might create. Not to mention the actual flesh-and-blood people living in the year 2023.
It's one thing to play with esoteric thought experiments about the wellbeing of people in the year 20000. It's quite another to live in the year 2023. Everyone is free to analyse and experiment and explore any question they choose, including this one. But this is not that. This starts from professing a belief, and defends doing so on the grounds that there isn't any contrary evidence. That's not how science works, and that's not how a public-facing organisation should work.
If he'd said, for instance, "Hey, I was an idiot for thinking and saying that. We still have IQ gaps between races, which doesn't make sense. They're closing, but not fast enough. We should work harder on fixing this," that would have been more sensible. Same for the community itself disavowing the explicit racism.
By the way, it's insane that the Forum seems to hide this whole thread as if it were a minor annoyance instead of a death knell. The SBF issue I can understand: you were fooled like everyone else, and it's a black eye for the organisation. But this isn't that. And the level of condemnation that brought was a good way to react; this is much more serious.

I should say, I don't have a particular agenda here; this stream of consciousness is already quite long. I'm a little annoyed, perhaps, that this is flooding the timeline, and that the responses from folks I'd considered thoughtful tend towards debating weird theoretical corner cases, doing mental jiu-jitsu just to keep holding the faith a little longer. But mostly it's just frustration bubbling out as cope.
I just wish y'all could regain the moral high ground here. There are important causes that could use the energy. It's not even that hard.
Rohit: if you don't believe in epistemic integrity regarding controversial views that are socially stigmatized, you don't actually believe in epistemic integrity.
You threw in some empirical claims about intelligence research, e.g. 'There's plenty of well-reviewed science in the field demonstrating, variously, issues with the measurement of both race and intelligence, to say nothing of how gaps evolve over time, catch-up speeds, and a truly dizzying array of confounders.'
OK. Ask yourself the standard epistemic integrity checks: What evidence would convince you to change your mind about these claims? Can you steel-man the opposite position? Are you applying the scout mindset to this issue? What were your Bayesian priors about this issue, and why did you have those priors, and what would update you?
It's OK for EAs to see a highly controversial area (like intelligence research), to acknowledge that learning more about it might be a socially handicapping infohazard, and to make a strategic decision not to touch the issue with a 10-foot pole -- i.e. to learn nothing more about it, to say nothing about it, and, if asked about it, to respond 'I haven't studied this issue in enough depth to offer an informed judgment about it.'
What's not OK is for EAs to suddenly abandon all rationality principles and epistemic integrity principles, and to offer empirically unsupported claims and third-hand critiques of a research area (critiques that were debunked decades ago), just because there are high social costs to holding the opposite position.
It's honestly not that hard to adopt the 10-foot-pole strategy regarding intelligence research controversies -- and maybe that would be appropriate for most EAs, most of the time.
You just have to explain to people: 'Look, I'm not an intelligence research expert. But I know enough to understand that any informed view on this matter would require learning all about psychometric measurement theory, item response theory, hierarchical factor analysis, the g factor, factorial invariance across groups, evolutionary cognitive psychology, evolutionary neurogenetics, multivariate behavior genetics, molecular behavior genetics, genome-wide association studies for cognitive abilities, extended family twin designs, transracial adoption studies, and several other fields. I just haven't put in the time. Have you?'
That kind of response signals that you're epistemically humble enough not to pretend to any expertise, but that you know enough about what you don't know that whoever you're talking to can't really pretend to expertise they don't have either.
And, by the way, for any EAs to comment on intelligence research without actually understanding the majority of the topics I mentioned above would be pretty silly: analogous to someone commenting on technical AI alignment issues without knowing the difference between an expert system and a deep neural network, or the difference between supervised and reinforcement learning.
David Reich claims that whilst we don't currently have any evidence to suggest that one particular population group is genetically more intelligent than another, the claim that such a thing is impossible, or even unlikely, is also incorrect. There's currently not much evidence either way, and no theoretical basis on which to conclude that no such differences exist.
At the same time, he highlights the importance of respecting all people as individuals when dealing with them, irrespective of the distribution of various characteristics among their population groups.