
Preamble

This is an extract from a post called "Doing EA Better", which argued that EA's new-found power and influence obligate us to solve our movement's significant problems with respect to epistemics, rigour, expertise, governance, and power.

We are splitting DEAB up into a sequence to facilitate object-level discussion.

Each post will include the relevant parts of the list of suggested reforms. There isn't a perfect correspondence between the subheadings of the post and the reforms list, so not all reforms listed will be 100% relevant to the section in question.

Finally, we have tried (imperfectly) to be reasonably precise in our wording, and we ask that before criticising an argument of ours, commenters ensure that it is an argument that we are in fact making.

Main

Summary: The Collective Intelligence literature suggests epistemic communities should be diverse, egalitarian, and open to a wide variety of information sources. EA, in contrast, is relatively homogeneous, hierarchical, and insular. This puts EA at serious risk of epistemic blind-spots.

EA highly values epistemics and has a stated ambition of predicting existential risk scenarios. We have a reputation for assuming that we are the “smartest people in the room”.

Yet, we appear to have been blindsided by the FTX crash. As Tyler Cowen puts it:

Hardly anyone associated with Future Fund saw the existential risk to… Future Fund, even though they were as close to it as one could possibly be.

I am thus skeptical about their ability to predict existential risk more generally, and for systems that are far more complex and also far more distant. And, it turns out, many of the real sources of existential risk boil down to hubris and human frailty and imperfections (the humanities remain underrated). When it comes to existential risk, I generally prefer to invest in talent and good institutions, rather than trying to fine-tune predictions about existential risk itself.

If EA is going to do some lesson-taking, I would not want this point to be neglected.

So, what’s the problem?

EA’s focus on epistemics is almost exclusively directed towards individualistic issues like minimising the impact of cognitive biases and cultivating a Scout Mindset. The movement strongly emphasises intelligence, both in general and especially that of particular “thought-leaders”. The implicit assumption is that an epistemically healthy community is created by acquiring maximally rational, intelligent, and knowledgeable individuals, with social considerations given second place. Unfortunately, the science does not bear this out. The quality of an epistemic community does not boil down to the de-biasing and training of individuals;[3] more important factors appear to be the community’s composition, its socio-economic structure, and its cultural norms.[4]

The field of Collective Intelligence provides guidance on the traits to nurture if one wishes to build a collectively intelligent community. For example:

  • Diversity
    • Along a wide variety of dimensions, from cultural background to disciplinary/professional training to cognitive style to age (a statistical illustration of diversity’s value follows this list)
  • Egalitarianism
    • People must feel able to speak up (and must be listened to if they do)
    • Dominance dynamics amplify biases and steer groups into suboptimal path dependencies
    • Leadership is typically best employed on a rotating basis for discussion-facilitation purposes rather than top-down decision-making
    • Avoid appeals and deference to community authority
  • Openness to a wide variety of sources of information
  • Generally high levels of social/emotional intelligence
    • This is often more important than individuals’ skill levels at the task in question
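One reason diversity earns its place on this list is purely statistical. The Collective Intelligence literature contains an exact identity, often called the “diversity prediction theorem” (associated with Scott Page): for any set of point estimates, the squared error of the group’s average equals the average individual squared error minus the variance of the estimates. Holding individual accuracy fixed, a group whose members disagree more is collectively more accurate. The Python sketch below is our illustration, not part of the original post, and its numbers are invented purely for demonstration:

```python
import random

random.seed(0)

# Diversity prediction theorem (an exact algebraic identity, not a statistical claim):
#   (crowd's squared error) = (average individual squared error) - (prediction diversity)
# "Diversity" here is simply the variance of the individual estimates.

truth = 100.0  # the quantity being estimated (invented for illustration)
predictions = [random.gauss(truth, 15.0) for _ in range(1000)]

crowd = sum(predictions) / len(predictions)
collective_error = (crowd - truth) ** 2
avg_individual_error = sum((p - truth) ** 2 for p in predictions) / len(predictions)
diversity = sum((p - crowd) ** 2 for p in predictions) / len(predictions)

print(f"collective error     = {collective_error:8.3f}")
print(f"avg individual error = {avg_individual_error:8.3f}")
print(f"prediction diversity = {diversity:8.3f}")

# The identity holds for any set of predictions whatsoever, up to floating point:
assert abs(collective_error - (avg_individual_error - diversity)) < 1e-6
```

The identity says nothing about where diverse estimates come from, but it makes precise why a group of individually accurate but highly correlated forecasters can be beaten by a more heterogeneous group whose members err in different directions.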

However, the social epistemics of EA leave much to be desired. As we elaborate below, EA:

  • Is mostly composed of people with very similar demographic, cultural, and educational backgrounds
  • Places too much trust in (powerful) leadership figures
  • Is remarkably intellectually insular
  • Confuses value-alignment and seniority with expertise
  • Is vulnerable to motivated reasoning
  • Is susceptible to conflicts of interest
  • Has powerful structural barriers to raising important categories of critique
  • Is susceptible to groupthink

Decision-making structures and intellectual norms within EA must therefore be improved upon.[5]

Suggested reforms

Below is a preliminary, non-exhaustive list of relevant suggestions for structural and cultural reform that we think may be good ideas and should certainly be discussed further.

It is of course plausible that some of them would not work; if you think so for a particular reform, please explain why! We would like input from a range of people, and we certainly do not claim to have all the answers!

In fact, we believe it important to open up a conversation about plausible reforms not because we have all the answers, but precisely because we don’t.

Italics indicate reforms strongly inspired by or outright stolen from Zoe Cremer’s list of structural reform ideas. Some are edited or merely related to her ideas; they should not be taken to represent Zoe’s views.

Asterisks (*) indicate suggestions we are less sure about, but sure enough that we think they are worth considering seriously, e.g. through deliberation or research. Otherwise, we have been developing or advocating for most of these reforms for a long time and have a reasonable degree of confidence that they should be implemented in some form or another.

Timelines are suggested to ensure that reforms can become concrete. If stated, they are rough estimates, and if there are structural barriers to a particular reform being implemented within the timespan we suggest, let us know!

Categorisations are somewhat arbitrary; we just needed to break up the text for ease of reading.

Critique

General

  • EAs must be more willing to make deep critiques, both in private and in public
    • You are not alone, you are not crazy!
    • There is a much greater diversity of opinion in this community than you might think
    • Don’t assume that the people in charge must be smarter than you, and that you must be missing something if you disagree – even most of them don’t think that!
  • EA must be open to deep critiques as well as shallow critiques
    • We must temper our knee-jerk reactions against deep critiques, and be curious about our emotional reactions to arguments – “Why does this person disagree with me? Why am I so instinctively dismissive about what they have to say?”
    • We must be willing to accept the possibility that “big” things may need to be fixed and that some of our closely-held beliefs are misguided
    • Our willingness to consider a critique should be independent of the seniority of its authors or of its subject(s)
    • When we reject critiques, we should present our reasons for doing so
  • EAs should read more deep critiques of EA, especially external ones
    • For instance this blog and this forthcoming book
  • EA should cut down its overall level of tone/language policing
    • Norms should still be strongly in favour of civility and good-faith discourse, but anger or frustration cannot be grounds for dismissal, and deep critique must not be misinterpreted as aggression or “signalling”
    • Civility must not be confused with EA ingroup signalling
    • Norms must be enforced consistently, applying to senior EAs just as much as newcomers
  • EAs should make a conscious effort to avoid (subconsciously/inadvertently) using rhetoric about how “EA loves criticism” as a shield against criticism
    • Red-teaming contests, for instance, are very valuable, but we should avoid using them to claim that “something is being done” about criticism and that we therefore have nothing to worry about
    • “If we are so open to critique, shouldn’t we be open to this one?”
    • EAs should avoid delaying reforms by professing to take critiques very seriously without actually acting on them
  • EAs should state their reasons when dismissing critiques, and should be willing to call out other EAs if they use the rhetoric of rigour and even-handedness without its content
  • EAs, especially those in community-building roles, should send credible/costly signals that EAs can make or agree with deep critiques without being excluded from or disadvantaged within the community
  • EAs should be cautious of knee-jerk dismissals of attempts to challenge concentrations of power, and seriously engage with critiques of capitalist modernity
  • EAs, especially prominent EAs, should be willing to cooperate with people writing critiques of their ideas and participate in adversarial collaborations
  • EA institutions and community groups should run discussion groups and/or event programmes on how to do EA better

Epistemics

General

  • EA should study social epistemics and collective intelligence more, and epistemic efforts should focus on creating good community epistemics rather than merely good individual epistemics
    • As a preliminary programme, we should explore how to increase EA’s overall levels of diversity, egalitarianism, and openness
  • EAs should practise epistemic modesty
    • We should read much more, and more widely, including authors who have no association with (or even open opposition to) the EA community
    • We should avoid assuming that EA/Rationalist ways of thinking are the only or best ways
    • We should actively seek out not only critiques of EA, but critiques of and alternatives to the underlying premises/assumptions/characteristics of EA (high modernism, elite philanthropy, quasi-positivism, etc.)
    • We should stop assuming that we are smarter than everybody else
  • EAs should consciously separate:
    • An individual’s suitability for a particular project, job, or role
    • Their expertise and skill in the relevant area(s)
    • The degree to which they are perceived to be “highly intelligent”
    • Their perceived level of value-alignment with EA orthodoxy
    • Their seniority within the EA community
    • Their personal wealth and/or power
  • EAs should make a point of engaging with and listening to EAs from underrepresented disciplines and backgrounds, as well as those with heterodox/“heretical” views
  • When EA organisations commission research on a given question, they should publicly pre-register their responses to a range of possible conclusions

Diversity

  • EA institutions should select for diversity
    • With respect to:
      • Hiring (especially grantmakers and other positions of power)
      • Funding sources and recipients
      • Community outreach/recruitment
    • Along lines of:
      • Academic discipline
      • Educational & professional background
      • Personal background (class, race, nationality, gender, etc.)
      • Philosophical and political beliefs
    • Naturally, this should not be unlimited – some degree of mutual similarity of beliefs is needed for people to work together – but we do not appear to be in any immediate danger of becoming too diverse
  • Previous EA involvement should not be a necessary condition for applying to specific roles, and job postings should not assume that all applicants will identify with the label “EA”
  • EA institutions should hire more people who have had little to no involvement with the EA community, provided that they care about doing the most good
  • People with heterodox/“heretical” views should be actively selected for when hiring to ensure that teams include people able to play “devil’s advocate” authentically, reducing the need to rely on highly orthodox people accurately steel-manning alternative points of view
  • Community-building efforts should be broadened, e.g. involving a wider range of universities, and group funding should be less contingent on the perceived prestige of the university in question and more focused on the quality of the proposal being made
  • EA institutions and community-builders should promote diversity and inclusion more, including funding projects targeted at traditionally underrepresented groups
  • A greater range of people should be invited to EA events and retreats, rather than limiting e.g. key networking events to similar groups of people each time
  • There should be a survey on cognitive/intellectual diversity within EA
  • EAs should not make EA the centre of their lives, and should actively build social networks and career capital outside of EA

Openness

  • Most challenges, competitions, and calls for contributions (e.g. cause area exploration prizes) should be posted where people not directly involved within EA are likely to see them (e.g. Facebook groups of people interested in charities, academic mailing lists, etc.)
  • Speaker invitations for EA events should be broadened away from (high-ranking) EA insiders and towards, for instance:
    • Subject-matter experts from outside EA
    • Researchers, practitioners, and stakeholders from outside of our elite communities
      • For instance, we need a far greater input from people from Indigenous communities and the Global South
  • External speakers/academics who disagree with EA should be invited to give keynotes and talks, and to participate in debates with prominent EAs
  • EAs should make a conscious effort to seek out and listen to the views of non-EA thinkers
    • Not just to respond!
  • EAs should remember that EA covers one very small part of the huge body of human knowledge, and that the vast majority of interesting and useful insights about the world have come, and will continue to come, from outside of EA

Expertise & Rigour

Rigour

  • Work should be judged on its quality, rather than the perceived intelligence, seniority or value-alignment of its author
    • EAs should avoid assuming that research by EAs will be better than research by non-EAs by default

Reading

  • Insofar as a “canon” is created, it should be of the best-quality works on a given topic, not the best works by (orthodox) EAs about (orthodox) EA approaches to the topic
    • Reading lists, fellowship curricula, and bibliographies should be radically diversified
    • We should search everywhere for pertinent content, not just the EA Forum, LessWrong, and the websites of EA orgs
    • We should not be afraid of consulting outside experts, both to improve content/framing and to discover blind-spots
  • EAs should see fellowships as educational activities first and foremost, not just recruitment tools
  • EAs should continue creating original fellowship ideas for university groups
  • EAs should be more willing to read books and academic papers

Good Science

  • EAs should be curious about why communities with decades of experience studying problems similar to the ones we study do things the way they do

Experts & Expertise

  • EAs should deliberately broaden their social/professional circles to include external domain-experts with differing views

Funding & Employment

Grantmaking

  • Grantmakers should be radically diversified to incorporate EAs with a much wider variety of views, including those with heterodox/“heretical” views

Governance & Hierarchy

Leadership

  • EAs should avoid hero-worshipping prominent EAs, and be willing to call it out among our peers
    • We should be able to openly critique senior members of the community, and avoid knee-jerk defence/deference when they are criticised
  • EA leaders should take active steps to minimise the degree of hero-worship they might face
    • For instance, when EA books or sections of books are co-written by several authors, co-authors should be given appropriate attribution
  • EAs should deliberately platform less well-known EAs in media work
  • EAs should assume that power corrupts, and EAs in positions of power should take active steps to:
    • Distribute and constrain their own power as a costly signal of commitment to EA ideas rather than their position
    • Minimise the corrupting influence of the power they retain and send significant costly signals to this effect
  • Fireside chats with leaders at EAG events should be replaced with:
    • Panels/discussions/double-cruxing discussions involving a mix of:
      • Prominent EAs
      • Representatives of different EA organisations
      • Less well-known EAs
      • External domain-experts
    • Discussions between leaders and unknown EAs

Decentralisation

  • EA institutions should see EA ideas as things to be co-created with the membership and the wider world, rather than transmitted and controlled from the top down

Contact Us

If you have any questions or suggestions about this article, EA, or anything else, feel free to email us at concernedEAs@proton.me

Comments

slg

Flagging that I'm only about 1/3 in.

Regarding this paragraph:

" An epistemically healthy community seems to be created by acquiring maximally-rational, intelligent, and knowledgeable individuals, with social considerations given second place. Unfortunately, the science does not bear this out. The quality of an epistemic community does not boil down to the de-biasing and training of individuals;[3] more important factors appear to be the community’s composition, its socio-economic structure, and its cultural norms.[4]"

When saying that the science doesn't bear this out, you go on to cite footnotes in your original article. If you want to make the case for this, it might be better to either i) point to very specific ways in which the current qualities of EA lead to flawed conclusions, or ii) point to research that makes a similar claim.

EAs should read more deep critiques of EA, especially external ones

For instance this blog and this forthcoming book

 

Yes, I gave David my wish list of stuff he could discuss  in a comment when he announced his blog.  So far he hasn't done that, but he's busy with his chosen topics, I expect.  I wrote quite a lot in those comments, but he did see the list.

In an answer to Elliot Temple's question "Does EA Have An Alternative To Rational Written Debate", I proposed a few ideas, including one on voting and tracking of an EA canon of arguments. Nobody dunked on me for it, though Elliot's question wasn't that popular, so I suppose few people actually read it. I appreciated Elliot's focus on argumentation and procedure. Procedural tools to systematize debates are useful.

I'm not at all familiar with literature on impacts of diversity on decision-making. I'll follow up on your suggestions of what to read, as much as I can. There are different kinds of diversity (worldview, race, ideology, background, expertise, ...), but from what classes I took in communications studies and informal argumentation, I know that models are available and helpful to improve group discussion, and that best practices exist in several areas relevant to group communications and epistemics. 

I was watching Cremer discuss ideas and read her Vox article about distributing power and changing group decision strategies. Her proposals seem serious, exciting, and somewhat technical, as do yours, ConcernedEAs'. That implies a learning curve to follow, but with results that I expect are typically worth it for EA folks. Any proposal that combines serious + exciting + technical is one that I expect will be worth it for those involved, if the proposal is accepted. However, that is as seen through your perspective, one intending to preserve the community.

 As someone on the outside observing your community grapple with its issues, I still hope for a positive outcome for you all. Your community pulls together many threads in different areas, and does have an impact on the rest of the world. 

I've already identified elsewhere just what I think EA should do, and still believe the same. EA can preserve its value as a research community and supporter of charitable works without many aspects of the "community-building" it now does.  Any support of personal connections outside research conferences and knowledge-sharing could end.  Research would translate to support of charitable work or nonprofits explicitly tied to obviously charitable missions. I suppose that could include work on existential risk, but in limited contexts.  

I have tried to make the point that vices (the traditional ones, ok? Like drugs, alcohol, betting, ...) and the more general problem of selfishness are what to focus on. I'm not singling out your community as particularly vice-filled (well, betting is plausibly a strong  vice in your community) but just that vices are in the background everywhere, and if you're looking for change, make positive changes there. 

And what do I mean by the "general problem of selfishness"? Not what you could expect, that I think you're all too selfish. No. Selfishness matters because self-interest matters if altruism is your goal. Every altruistic effort is intended to serve someone else's self-interest. Meanwhile, selfishness vs altruism is the classic conflict in most ethical decisions.  Not the only one, but the typical one. The one to check for first, like, when you're being self-serving, or when your "ethical goals" aren't ethical at all. Yet your community has not grappled with the implications. Furthermore, no one here seems to think it matters. In your minds, you  put these old-fashioned ways of thinking behind you. 

You seem to have put Peter Singer's work behind you as well, or some of you have; I think that is a mistake. I don't know what kind of embarrassing personal statements Peter Singer might have made; everyone seems hyper-alert to that kind of thing. But his work in ethics is foundational and should have a prominent place in your thinking and debates.

Furthermore, if you stick with your work on AGI, Bostrom's work in Superintelligence showed insight and creativity in understanding and assessing AGI and ASI. I can't say I agree with his thinking in further work that he's produced, but if I were in your shoes, I wouldn't stop mentioning his professional work just because he wrote some shameful stuff online, once, 20 years ago, and recently acknowledged it. Like Peter Singer, MacAskill, and many others associated with EA, Bostrom has done impressive and foundational work (in Bostrom's case, in AI), and it deserves consideration on its merits.

But back to writing about what I think, which has a much less impressive source. 

Me. 

Problems that plague humanity don't really change. Vices are always going to be vices if they're practiced. And selfishness? It plays such a large role in everything that we do that if you ignore it, or focus solely on how to serve others' self-interests, you won't grapple with selfishness well when its role is primary, for example in contexts of existential harm. This will have two results:

  • your ostensible altruistic goals in those contexts will be abandoned
  • your further goals won't be altruistic at all

My heuristics about a positive community are totally satisfied if your EA community focuses on giving what you can, saving the lives that you can, effective charity, effective altruism. That EA is inspiring, even inspiring guilt, but in a good way. Sure, vices are typically in the background, and corruption, plausibly, but that's not the point. Are your goals self-contradicting? Are you co-opted by special interests already? Are you structurally incapable of providing effective charity? No, well, with caveats, but no. Overall, the mission and approach of the giving side of EA is and has been awesome and inspiring.

When EA folks go further, with your second and third waves, first existential risk prevention, now longtermism, you make me think hard about your effectiveness. You need to rock selfishness well just to do charity well (that's my hunch). But existential risk and longtermism and community-building... The demands on you are much, much higher, and you aren't meeting them. You need to stop all your vices, rid your community of them, prohibition-style. You need to intensively study selfishness and perform original academic research about it. I'm not joking. You really need to think past current work in evolutionary psychology and utilitarianism and cognitive science. You may need to look into the past at failed research efforts and pick them up again, with new tools or ideas. Not so that you succeed with all your goals, but just so that you can stop yourself from being a significant net harm. Scout mindset was a step in the right direction and not an endpoint in improving your epistemics. Meanwhile, with your vices intact, your epistemics will suffer. Or so I believe.

If I had all the answers about selfishness vs altruism, and how to understand and navigate one's own, I would share them. It's a century's research project, a multidisciplinary one with plausibly unexpected results, involving many people, experiments, different directions, and  some good luck. 

I don't want to associate Singer, Cremer, Bostrom, Galef, MacAskill, or any other EA person or person who I might have referenced with my admittedly extreme and alienating beliefs about betting and other vices, or with my personal declarations about what the EA community needs to do. I imagine most folks' beliefs about vices and selfishness reflect modern norms, and that none would take the position that I am taking. And that's OK with me.

However, register my standards for the EA community as extreme given the goals you have chosen for yourself. The EA community's trifecta of ambitions is extreme. So are the standards that should be set for your behavior in your everyday life. 

 


I wrote:

"You need to rock selfishness well just to do charity well (that's my hunch)."

Selfishness, so designated, is neither a public health issue nor a private mental health issue, but it does stand in contrast to altruism. To the extent that society allows your actualization of something you could call selfishness, that seems to be your option to manifest, and by modern standards, without judgement of your selfishness. Your altruism might be judged, but not your selfishness, like, "Oh, that's some effective selfishness" vs "Oh, that's a poser's selfishness right there" or "That selfishness there is a waste of money".

Everyone thinks they understand selfishness, but there don't seem to be many theories of selfishness, not competing theories, nor ones tested for coherence, nor puzzles of selfishness. You spend a great deal of time on debates about ethics, quantifying altruism, etc, but somehow selfishness is too well-understood to bother?

The only argument over selfishness that has come up here is over self-care with money. Should you spend your money on a restaurant meal, or on charity? There was plenty of "Oh, take care of yourself, you deserve it" stuff going around, and "Don't be guilty, that's not helpful", but no theory of how self-interest works. It all seems relegated to an ethereal realm of psychological forces that anyone wanting to help you must acknowledge.

Your feelings of guilt, and so on, are all tentatively taken as subjectively impactful and necessarily relevant just by the fact of your having them. If they're there, they matter. There's pop psychology, methods of various therapy schools, and different kinds of talk, really, or maybe drugs, if you're into psychiatric cures, but nothing too academic or well thought out as far as what self-interest is, how to perform it effectively, how or whether to measure it, and its proper role in your life. I can't just look at the problem, so described, and say, "Oh, well, you're not using a helpful selfishness theory to make your decisions there, you need to..." and be sure I'm accomplishing anything positive for you. I might come up with some clever reframe or shift your attention successfully, but that says nothing about a normative standard of selfishness that I could advocate.

I understand rationalization and being self-serving, but only in well-defined domains where I've seen it before, in what some people call "patterns of behavior." Vices do create pathological patterns of behavior, and ending them is clarifying and helpful to many self-interested efforts. A 100-year effort to study selfishness is about more than vices. Or, well, at least on the surface, depending on what researchers discover. I have my own suspicions.

Anyway, we don't have the shared vocabulary to discuss vices well. What do you think I mean by them? Is Adderall a vice? Lite beer? Using pornography? The occasional cigarette? Donuts? Let's say I have a vice or two, and indulge them regularly, and other people support me in doing that, but we end up doing stuff together that I don't really like, aside from the vice. Is it correct then to say that I'm not serving myself by keeping my vice going? Or do we just call that a reframe because somebody's trying to manipulate me into giving up my habits? What if the vice gets me through a workday?

Well, there are no theories of self-interest that people study in school to help us understand those contexts, or if there are, they don't get much attention. I don't mean theories from psychology that tend to fail in practice. It's a century's effort to develop and distribute the knowledge to fill that need for good theories.

Galef took steps to understand selfish behavior. She decided that epistemic rationality served humanity and individuals, and decided to argue for it. That took some evaluation of behavior in an environment. It motivated pursuit of rationality in a particular way.

Interestingly, her tests, such as the selective critic test or the double standard test, reveal information that shifts subjective experience. Why do we need those tests (not "do we need them?", but "why do we need them?")? What can we do about the contexts that seem to require them? Right now, your community's culture encourages an appetite for risk, particularly financial risk, that looks like a vice. Vices seem to attract more vices.

You're talking about epistemics. A lot of lessons in decision-making are culturally inherited. For various reasons, modern society could lose that inheritance. Part of that inheritance is a common-sense understanding of vices. Without that common sense there is only a naivete that could mean our extinction. Or that's how I see it.

For example, in 2020, one of the US's most popular talk show hosts (Stephen Colbert) encouraged viewers to drink, and my governor (Gavin Newsom) gave a speech about loosening rules for food deliveries so that we could all get our wine delivered to our doors while we were in lockdown. I'm not part of the Christian right, but I think they still have the culture to understand that kind of behavior as showing decadence and inappropriateness. I would hope so. Overall, though, my country, America, didn't see it that way. Not when, at least in people's minds, there was an existential threat present. A good time to drink, stuck at home, that's apparently what people thought.

I'm really not interested in making people have a less fun time. That is not my point at all.

I've also been unsuccessful in persuading people to act in their own self-interest. I already know it doesn't work.

If you don't believe in "vices", you don't believe in them. That's fine. My point here was that it's not safe to ignore them, and I would like to add, there's nothing stronger than a vice to make sure you practice self-serving rationalization.

If, for the next 40-60 years, humanity faces a drawn out, painful coping with increasing harms from climate change, as I believe, and our hope for policy and recommendations is communities like yours, and what we get is depressed panicky people indulging whatever vices they can and becoming corrupt as f**k? Well, things will go badly.

To understand why homogeneity can degrade group performance, it might help to take it to the extreme. Suppose Bob is your best-performing employee. If you were asked to do a brainstorming task for new ideas, would you prefer a team composed of:

a) Bob and two other high-performing (but not as good as Bob) employees

or

b) Three identical clones of Bob.

Team (a) will reliably outperform team (b) in creative tasks, because in team (b) all the Bobs will be thinking along the same lines and carrying with them similar false assumptions and beliefs, whereas in team (a) you'll get new ideas, and the Bob assumptions will be challenged (the toy simulation below makes this concrete).

(I should note that homogeneity does have some advantages: if you check the paper I linked, diversity can lead to socio-emotional or value conflict that can hurt productivity and group cohesion if not handled properly.)
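The clone thought experiment can be made concrete with a toy search model in the spirit of Hong and Page's "diversity trumps ability" simulations. The sketch below is our illustration, not the commenter's; the random landscape, the step-size heuristics, and every parameter are arbitrary choices made only to exhibit the mechanism: clones of the best agent all get stuck at the same local optimum, while a weaker agent with a different heuristic can sometimes escape it.

```python
import itertools
import random

random.seed(0)
N = 200
# A random "solution landscape": the value of each of N candidate solutions.
landscape = [random.random() for _ in range(N)]

def climb(pos, heuristic):
    """Greedy search: keep trying the agent's step sizes, moving on any improvement."""
    improved = True
    while improved:
        improved = False
        for step in heuristic:
            if landscape[(pos + step) % N] > landscape[pos]:
                pos = (pos + step) % N
                improved = True
    return pos

def team_search(start, team):
    """Agents pass the best-so-far solution around until nobody can improve it."""
    pos, improved = start, True
    while improved:
        improved = False
        for h in team:
            new = climb(pos, h)
            if landscape[new] > landscape[pos]:
                pos, improved = new, True
    return landscape[pos]

# An agent's "perspective" is a set of three step sizes drawn from 1..12.
heuristics = list(itertools.combinations(range(1, 13), 3))
solo = {h: sum(landscape[climb(s, h)] for s in range(N)) / N for h in heuristics}
ranked = sorted(heuristics, key=solo.get, reverse=True)

bob = ranked[0]                           # the single best-performing agent
clones = [bob, bob, bob]                  # team (b): three identical Bobs
diverse = [bob, ranked[80], ranked[160]]  # team (a): Bob plus two weaker, different agents

for name, team in [("clones ", clones), ("diverse", diverse)]:
    score = sum(team_search(s, team) for s in range(N)) / N
    print(name, round(score, 4))
```

By construction the clone team can never do better than Bob alone, while the diverse team matches or beats it from every starting point: a teammate with different step sizes can only ever improve on Bob's local optimum, never worsen it.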

CG

I could be wrong but I didn't see political conflict mentioned specifically in that article, at least not explicitly. Not saying it can't reasonably be inferred but given the political centrist majority within EA, I just wanted to clarify this observation as it could be misleading (?).

From what I briefly read (and gleaned from asking Ghostreader [GPT-3] in Readwise Reader), the studies found that when there is a lot of different knowledge and experience, increased task conflict (e.g. viewpoint diversity over the content of a task) can override other forms of conflict and actually lead to improved performance. More work here is needed, of course, but thanks for sharing this.

Yeah, I found the linked review quite interesting. The theory is that diversity increases different types of conflict, but that some types of conflict are good for performance (for the reasons I outlined), while others cause problems.

It separates conflict into (1) task conflict, (2) socio-emotional conflict, and (3) value conflict, with task conflict increasing performance and the other two decreasing it. (I summarised these as "political" initially; I've edited it slightly to be more precise.) In other words, a diverse group will have better ideas overall, but may be dragged down in productivity by interpersonal conflict.

In a field like EA where good ideas and correctness are incredibly important, I think the best performing strategy would be to foster a diverse community, while also taking steps to reduce conflict by practicing principles of empathy, compassion, and understanding. 

CG

Agreed. I've also seen other studies that suggest that the rate and quality of knowledge production increases from that kind of good faith dialectical feedback. Makes a lot of sense that some forms of conflict could be quite synergistic.

I will definitely give the piece a more thorough review when I get a chance. 

Arepo

Nothing serious can change until the whole "all important decisions are made by about ten people who don't see any need to get community buy-in" issue is solved.