Preamble
This is an extract from a post called "Doing EA Better", which argued that EA's new-found power and influence obligate us to solve our movement's significant problems with respect to epistemics, rigour, expertise, governance, and power.
We are splitting DEAB up into a sequence to facilitate object-level discussion.
Each post will include the relevant parts of the list of suggested reforms. There isn't a perfect correspondence between the subheadings of the post and the reforms list, so not all reforms listed will be 100% relevant to the section in question.
Finally, we have tried (imperfectly) to be reasonably precise in our wording, and we ask that before criticising an argument of ours, commenters ensure that it is an argument that we are in fact making.
Main
Summary: The Collective Intelligence literature suggests epistemic communities should be diverse, egalitarian, and open to a wide variety of information sources. EA, in contrast, is relatively homogeneous, hierarchical, and insular. This puts EA at serious risk of epistemic blind-spots.
EA highly values epistemics and has a stated ambition of predicting existential risk scenarios. We have a reputation for assuming that we are the “smartest people in the room”.
Yet, we appear to have been blindsided by the FTX crash. As Tyler Cowen puts it:
Hardly anyone associated with Future Fund saw the existential risk to… Future Fund, even though they were as close to it as one could possibly be.
I am thus skeptical about their ability to predict existential risk more generally, and for systems that are far more complex and also far more distant. And, it turns out, many of the real sources of existential risk boil down to hubris and human frailty and imperfections (the humanities remain underrated). When it comes to existential risk, I generally prefer to invest in talent and good institutions, rather than trying to fine-tune predictions about existential risk itself.
If EA is going to do some lesson-taking, I would not want this point to be neglected.
So, what’s the problem?
EA’s focus on epistemics is almost exclusively directed towards individualistic issues like minimising the impact of cognitive biases and cultivating a Scout Mindset. The movement strongly emphasises intelligence, both in general and especially that of particular “thought-leaders”. The implicit assumption seems to be that an epistemically healthy community is created by assembling maximally rational, intelligent, and knowledgeable individuals, with social considerations given second place. Unfortunately, the science does not bear this out. The quality of an epistemic community does not boil down to the de-biasing and training of individuals;[3] more important factors appear to be the community’s composition, its socio-economic structure, and its cultural norms.[4]
The field of Collective Intelligence provides guidance on the traits to nurture if one wishes to build a collectively intelligent community (a toy numerical illustration of the diversity point follows the list). For example:
- Diversity
- Along essentially all dimensions, from cultural background to disciplinary/professional training to cognition style to age
- Egalitarianism
- People must feel able to speak up (and must be listened to if they do)
- Dominance dynamics amplify biases and steer groups into suboptimal path dependencies
- Leadership is typically best employed on a rotating basis for discussion-facilitation purposes rather than top-down decision-making
- Avoid appeals and deference to community authority
- Openness to a wide variety of sources of information
- Generally high levels of social/emotional intelligence
- This is often more important than individuals’ skill levels at the task in question
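To make the diversity point concrete, one formal result from this literature is Scott Page's "diversity prediction theorem": for any set of numerical predictions, the squared error of the group's average prediction equals the average individual squared error minus the variance ("diversity") of the predictions. Below is a minimal Python sketch verifying the identity on invented numbers; the forecasting scenario is purely hypothetical.

```python
import statistics

def collective_error(predictions, truth):
    """Squared error of the group's mean prediction."""
    return (statistics.fmean(predictions) - truth) ** 2

def avg_individual_error(predictions, truth):
    """Mean of the individual squared errors."""
    return statistics.fmean((p - truth) ** 2 for p in predictions)

def prediction_diversity(predictions):
    """Variance of the predictions around their own mean."""
    mean_pred = statistics.fmean(predictions)
    return statistics.fmean((p - mean_pred) ** 2 for p in predictions)

# Invented example: five forecasters estimate a quantity whose true value is 100.
preds = [80.0, 95.0, 105.0, 110.0, 130.0]
truth = 100.0

# The identity holds exactly for any set of predictions:
# collective error = average individual error - diversity
print(collective_error(preds, truth))                                    # 16.0
print(avg_individual_error(preds, truth) - prediction_diversity(preds))  # 16.0
```

The practical upshot is that, holding individual accuracy fixed, a group whose members err in different directions is collectively more accurate than one whose members share the same blind spots.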
However, the social epistemics of EA leave much to be desired. As we will elaborate on below, EA:
- Is mostly comprised of people with very similar demographic, cultural, and educational backgrounds
- Places too much trust in (powerful) leadership figures
- Is remarkably intellectually insular
- Confuses value-alignment and seniority with expertise
- Is vulnerable to motivated reasoning
- Is susceptible to conflicts of interest
- Has powerful structural barriers to raising important categories of critique
- Is susceptible to groupthink
Decision-making structures and intellectual norms within EA must therefore be improved upon.[5]
Suggested reforms
Below is a preliminary, non-exhaustive list of relevant suggestions for structural and cultural reform that we think may be good ideas and should certainly be discussed further.
It is of course plausible that some of them would not work; if you think so for a particular reform, please explain why! We would like input from a range of people, and we certainly do not claim to have all the answers!
In fact, we believe it important to open up a conversation about plausible reforms not because we have all the answers, but precisely because we don’t.
Italics indicate reforms strongly inspired by or outright stolen from Zoe Cremer’s list of structural reform ideas. Some are edited or merely related to her ideas; they should not be taken to represent Zoe’s views.
Asterisks (*) indicate that we are less sure about a suggestion, but sure enough that we think it is worth considering seriously, e.g. through deliberation or research. Otherwise, we have been developing or advocating for most of these reforms for a long time and have a reasonable degree of confidence that they should be implemented in some form or another.
Timelines are suggested to ensure that reforms can become concrete. If stated, they are rough estimates, and if there are structural barriers to a particular reform being implemented within the timespan we suggest, let us know!
Categorisations are somewhat arbitrary; we just needed to break up the text for ease of reading.
Critique
General
- EAs must be more willing to make deep critiques, both in private and in public
- You are not alone, you are not crazy!
- There is a much greater diversity of opinion in this community than you might think
- Don’t assume that the people in charge must be smarter than you, and that you must be missing something if you disagree – even most of them don’t think that!
- EA must be open to deep critiques as well as shallow critiques
- We must temper our knee-jerk reactions against deep critiques, and be curious about our emotional reactions to arguments – “Why does this person disagree with me? Why am I so instinctively dismissive about what they have to say?”
- We must be willing to accept the possibility that “big” things may need to be fixed and that some of our closely-held beliefs are misguided
- Our willingness to consider a critique should be orthogonal to the seniority of its author(s) or subject(s)
- When we reject critiques, we should present our reasons for doing so
- EAs should read more deep critiques of EA, especially external ones
- EA should cut down its overall level of tone/language policing
- Norms should still be strongly in favour of civility and good-faith discourse, but anger or frustration cannot be grounds for dismissal, and deep critique must not be misinterpreted as aggression or “signalling”
- Civility must not be confused with EA ingroup signalling
- Norms must be enforced consistently, applying to senior EAs just as much as newcomers
- EAs should make a conscious effort to avoid (subconsciously/inadvertently) using rhetoric about how “EA loves criticism” as a shield against criticism
- Red-teaming contests, for instance, are very valuable, but we should avoid using them to claim that “something is being done” about criticism and thus we have nothing to worry about
- “If we are so open to critique, shouldn’t we be open to this one?”
- EAs should avoid delaying reforms by professing to take critiques very seriously without actually acting on them
- EAs should state their reasons when dismissing critiques, and should be willing to call out other EAs if they use the rhetoric of rigour and even-handedness without its content
- EAs, especially those in community-building roles, should send credible/costly signals that EAs can make or agree with deep critiques without being excluded from or disadvantaged within the community
- EAs should be cautious of knee-jerk dismissals of attempts to challenge concentrations of power, and seriously engage with critiques of capitalist modernity
- EAs, especially prominent EAs, should be willing to cooperate with people writing critiques of their ideas and participate in adversarial collaborations
- EA institutions and community groups should run discussion groups and/or event programmes on how to do EA better
Epistemics
General
- EA should study social epistemics and collective intelligence more, and epistemic efforts should focus on creating good community epistemics rather than merely good individual epistemics
- As a preliminary programme, we should explore how to increase EA’s overall levels of diversity, egalitarianism, and openness
- EAs should practise epistemic modesty
- We should read much more, and more widely, including authors who have no association with (or even open opposition to) the EA community
- We should avoid assuming that EA/Rationalist ways of thinking are the only or best ways
- We should actively seek out not only critiques of EA, but critiques of and alternatives to the underlying premises/assumptions/characteristics of EA (high modernism, elite philanthropy, quasi-positivism, etc.)
- We should stop assuming that we are smarter than everybody else
- EAs should consciously separate:
- An individual’s suitability for a particular project, job, or role
- Their expertise and skill in the relevant area(s)
- The degree to which they are perceived to be “highly intelligent”
- Their perceived level of value-alignment with EA orthodoxy
- Their seniority within the EA community
- Their personal wealth and/or power
- EAs should make a point of engaging with and listening to EAs from underrepresented disciplines and backgrounds, as well as those with heterodox/“heretical” views
- When EA organisations commission research on a given question, they should publicly pre-register their responses to a range of possible conclusions (a minimal sketch of one way to make this verifiable follows)
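As one illustration of how such pre-registration could be made tamper-evident, here is a minimal Python sketch assuming a hypothetical record format: the organisation publishes a cryptographic digest of its committed responses before the research is run, so outsiders can later verify that the responses were not rewritten after the results came in. Simply publishing the full document in advance would serve the same purpose.

```python
import hashlib
import json

# Hypothetical pre-registration record, written before the research is run.
preregistration = {
    "question": "Does intervention X outperform intervention Y?",
    "responses": {
        "X outperforms Y": "Shift marginal funding towards X over two years.",
        "no significant difference": "Maintain current allocation; commission a replication.",
        "Y outperforms X": "Shift marginal funding towards Y over two years.",
    },
}

# Publish this digest ahead of time; after the results come in, publishing the
# full record lets anyone re-hash it and confirm the responses were unchanged.
digest = hashlib.sha256(
    json.dumps(preregistration, sort_keys=True).encode("utf-8")
).hexdigest()
print(digest)
```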
Diversity
- EA institutions should select for diversity
- With respect to:
- Hiring (especially grantmakers and other positions of power)
- Funding sources and recipients
- Community outreach/recruitment
- Along lines of:
- Academic discipline
- Educational & professional background
- Personal background (class, race, nationality, gender, etc.)
- Philosophical and political beliefs
- Naturally, this should not be unlimited – some degree of mutual similarity of beliefs is needed for people to work together – but we do not appear to be in any immediate danger of becoming too diverse
- Previous EA involvement should not be a necessary condition to apply for specific roles, and the job postings should not assume that all applicants will identify with the label “EA”
- EA institutions should hire more people who have had little to no involvement with the EA community, provided that they care about doing the most good
- People with heterodox/“heretical” views should be actively selected for when hiring to ensure that teams include people able to play “devil’s advocate” authentically, reducing the need to rely on highly orthodox people accurately steel-manning alternative points of view
- Community-building efforts should be broadened, e.g. involving a wider range of universities, and group funding should be less contingent on the perceived prestige of the university in question and more focused on the quality of the proposal being made
- EA institutions and community-builders should promote diversity and inclusion more, including funding projects targeted at traditionally underrepresented groups
- A greater range of people should be invited to EA events and retreats, rather than limiting e.g. key networking events to similar groups of people each time
- There should be a survey on cognitive/intellectual diversity within EA
- EAs should not make EA the centre of their lives, and should actively build social networks and career capital outside of EA
Openness
- Most challenges, competitions, and calls for contributions (e.g. cause area exploration prizes) should be posted where people not directly involved within EA are likely to see them (e.g. Facebook groups of people interested in charities, academic mailing lists, etc.)
- Speaker invitations for EA events should be broadened away from (high-ranking) EA insiders and towards, for instance:
- Subject-matter experts from outside EA
- Researchers, practitioners, and stakeholders from outside of our elite communities
- For instance, we need far greater input from people from Indigenous communities and the Global South
- External speakers/academics who disagree with EA should be invited to give keynotes and talks, and to participate in debates with prominent EAs
- EAs should make a conscious effort to seek out and listen to the views of non-EA thinkers
- Not just to respond!
- EAs should remember that EA covers one very small part of the huge body of human knowledge, and that the vast majority of interesting and useful insights about the world have come, and will continue to come, from outside of EA
Expertise & Rigour
Rigour
- Work should be judged on its quality, rather than the perceived intelligence, seniority, or value-alignment of its author
- EAs should avoid assuming that research by EAs will be better than research by non-EAs by default
Reading
- Insofar as a “canon” is created, it should be of the best-quality works on a given topic, not the best works by (orthodox) EAs about (orthodox) EA approaches to the topic
- Reading lists, fellowship curricula, and bibliographies should be radically diversified
- We should search everywhere for pertinent content, not just the EA Forum, LessWrong, and the websites of EA orgs
- We should not be afraid of consulting outside experts, both to improve content/framing and to discover blind-spots
- EAs should see fellowships as educational activities first and foremost, not just recruitment tools
- EAs should continue creating original fellowship ideas for university groups
- EAs should be more willing to read books and academic papers
Good Science
- EAs should be curious about why communities with decades of experience studying problems similar to the ones we study do things the way that they do
Experts & Expertise
- EAs should deliberately broaden their social/professional circles to include external domain-experts with differing views
Funding & Employment
Grantmaking
- Grantmakers should be radically diversified to incorporate EAs with a much wider variety of views, including those with heterodox/“heretical” views
Governance & Hierarchy
Leadership
- EAs should avoid hero-worshipping prominent EAs, and be willing to call it out among our peers
- We should be able to openly critique senior members of the community, and avoid knee-jerk defence/deference when they are criticised
- EA leaders should take active steps to minimise the degree of hero-worship they might face
- For instance, when EA books or sections of books are co-written by several authors, co-authors should be given appropriate attribution
- EAs should deliberately platform less well-known EAs in media work
- EAs should assume that power corrupts, and EAs in positions of power should take active steps to:
- Distribute and constrain their own power as a costly signal of commitment to EA ideas rather than their position
- Minimise the corrupting influence of the power they retain and send significant costly signals to this effect
- Fireside chats with leaders at EAG events should be replaced with:
- Panels/discussions/double-cruxing discussions involving a mix of:
- Prominent EAs
- Representatives of different EA organisations
- Less well-known EAs
- External domain-experts
- Discussions between leaders and unknown EAs
Decentralisation
- EA institutions should see EA ideas as things to be co-created with the membership and the wider world, rather than transmitted and controlled from the top down
Contact Us
If you have any questions or suggestions about this article, EA, or anything else, feel free to email us at concernedEAs@proton.me
Yes, I gave David my wish list of topics he could discuss, in a comment when he announced his blog. So far he hasn't done that, but I expect he's busy with his chosen topics. I wrote quite a lot in those comments, but he did see the list.
In an answer to Elliot Temple's question "Does EA Have An Alternative To Rational Written Debate", I proposed a few ideas, including one about voting on and tracking an EA canon of arguments. Nobody dunked on me for it, though Elliot's question wasn't that popular, so I suppose few people actually read it. I appreciated Elliot's focus on argumentation and procedure. Procedural tools to systematize debates are useful.
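Purely to illustrate the kind of procedural tool gestured at here, below is a toy Python sketch of an argument-tracking structure with voting. The structure and names are invented for illustration; this is not the proposal actually made in that answer.

```python
from __future__ import annotations
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Argument:
    """One node in a tracked canon of arguments (names invented for illustration)."""
    claim: str
    author: str
    votes: int = 0
    replies: list[Argument] = field(default_factory=list)

    def reply(self, claim: str, author: str) -> Argument:
        """Attach a counter-argument or supporting point beneath this one."""
        node = Argument(claim, author)
        self.replies.append(node)
        return node

    def strongest_reply(self) -> Optional[Argument]:
        """The highest-voted direct reply, if any."""
        return max(self.replies, key=lambda a: a.votes, default=None)

# Usage: build a tiny debate tree and vote on a counter-argument.
root = Argument("EA should adopt structured written debate.", "proposer")
counter = root.reply("Structured debate is too slow for urgent decisions.", "critic")
counter.votes += 3
print(root.strongest_reply().claim)
```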
I'm not at all familiar with the literature on the impacts of diversity on decision-making. I'll follow up on your suggestions of what to read, as much as I can. There are different kinds of diversity (worldview, race, ideology, background, expertise, ...), but from the classes I took in communication studies and informal argumentation, I know that models are available that help improve group discussion, and that best practices exist in several areas relevant to group communication and epistemics.
I watched Cremer discuss her ideas and read her Vox article about distributing power and changing group decision strategies. Her proposals seem serious, exciting, and somewhat technical, as do yours, ConcernedEAs. That implies a learning curve, but one whose results I expect would typically be worth it for EA folks. Any proposal that combines serious, exciting, and technical is one I expect will be worth it for those involved, if the proposal is accepted. However, that is as seen from your perspective, one that intends to preserve the community.
As someone on the outside observing your community grapple with its issues, I still hope for a positive outcome for you all. Your community pulls together many threads in different areas, and does have an impact on the rest of the world.
I've already identified elsewhere just what I think EA should do, and I still believe the same. EA can preserve its value as a research community and supporter of charitable works without many aspects of the "community-building" it now does. Any support of personal connections beyond research conferences and knowledge-sharing could end. Research would translate into support of charitable work or nonprofits explicitly tied to obviously charitable missions. I suppose that could include work on existential risk, but in limited contexts.
I have tried to make the point that vices (the traditional ones, ok? Like drugs, alcohol, betting, ...) and the more general problem of selfishness are what to focus on. I'm not singling out your community as particularly vice-filled (well, betting is plausibly a strong vice in your community); my point is just that vices are in the background everywhere, and if you're looking for change, make positive changes there.
And what do I mean by the "general problem of selfishness"? Not what you might expect, that I think you're all too selfish. No. Selfishness matters because self-interest matters if altruism is your goal. Every altruistic effort is intended to serve someone else's self-interest. Meanwhile, selfishness vs altruism is the classic conflict in most ethical decisions. Not the only one, but the typical one. The one to check for first: are you being self-serving? Are your "ethical goals" ethical at all? Yet your community has not grappled with the implications. Furthermore, no one here seems to think it matters. In your minds, you have put these old-fashioned ways of thinking behind you.
You seem to have put Peter Singer's work behind you as well, or some of you have; I think that is a mistake too. I don't know what embarrassing personal statements Peter Singer might have ever made; everyone seems hyper-alert to that kind of thing. But his work in ethics is foundational and should have a prominent place in your thinking and debates.
Furthermore, if you stick with your work on AGI, Bostrom's work in Superintelligence showed insight and creativity in understanding and assessing AGI and ASI. I can't say I agree with his thinking in further work he's produced, but if I were in your shoes, I wouldn't stop mentioning his professional work just because he wrote some shameful stuff online, once, 20 years ago, and recently acknowledged it. Like Peter Singer, MacAskill, and many others associated with EA, Bostrom has done impressive and foundational work (in Bostrom's case, in AI), and it deserves consideration on its merits.
But back to writing about what I think, which has a much less impressive source.
Me.
Problems that plague humanity don't really change. Vices are always going to be vices if they're practiced. And selfishness? It plays such a large role in everything we do that if you ignore it, or focus solely on how to serve others' self-interests, you won't grapple with selfishness well when its role is primary, for example in contexts of existential harm. This will have two results:
My heuristics about a positive community are totally satisfied if your EA community focuses on giving what you can, saving the lives that you can, effective charity, effective altruism. That EA is inspiring, even when it inspires guilt, but in a good way. Sure, vices are typically in the background, and corruption, plausibly, but that's not the point. Are your goals self-contradicting? Are you co-opted by special interests already? Are you structurally incapable of providing effective charity? No; well, with caveats, but no. Overall, the mission and approach of the giving side of EA is and has been awesome and inspiring.
When EA folks go further, with your second and third waves, first existential-risk prevention, now longtermism, you make me think hard about your effectiveness. You need to rock selfishness well just to do charity well (that's my hunch). But existential risk and longtermism and community-building... the demands on you are much, much higher, and you aren't meeting them. You need to stop all your vices, rid your community of them, prohibition-style. You need to intensively study selfishness and perform original academic research about it. I'm not joking. You really need to think past current work in evolutionary psychology and utilitarianism and cognitive science. You may need to look into the past at failed research efforts and pick them up again, with new tools or ideas. Not so that you succeed with all your goals, but just so that you can stop yourself from being a significant net harm. Scout mindset was a step in the right direction, not an endpoint, in improving your epistemics. Meanwhile, with your vices intact, your epistemics will suffer. Or so I believe.
If I had all the answers about selfishness vs altruism, and how to understand and navigate one's own, I would share them. It's a century-long research project, a multidisciplinary one with plausibly unexpected results, involving many people, experiments, different directions, and some good luck.
I don't want to associate Singer, Cremer, Bostrom, Galef, MacAskill, or any other EA person or person I might have referenced with my admittedly extreme and alienating beliefs about betting and other vices, or with my personal declarations about what the EA community needs to do. I imagine most folks' beliefs about vices and selfishness reflect modern norms, and that none of them would take the position that I am taking. And that's OK with me.
However, register my standards for the EA community as extreme, given the goals you have chosen for yourselves. The EA community's trifecta of ambitions is extreme. So are the standards that should be set for your behavior in your everyday life.
I wrote:
"You need to rock selfishness well just to do charity well (that's my hunch)."
Selfishness, so designated, is neither a public health issue nor a private mental health issue, but it does stand in contrast to altruism. To the extent that society allows your actualization of something you could call selfishness, that seems to be your option to manifest, and by modern standards, without judgement of your selfishness. Your altruism might be judged, but not your selfishness, like, "Oh, that's some effective selfishness" vs "Oh, that's a poser's selfishness righ...