
Preamble

It’s been a rough few months, hasn’t it?

Recent events, including the FTX collapse and the Bostrom email/apology scandal, have led a sizeable portion of EAs to become disillusioned with or at least much more critical of the Effective Altruism movement.

While the current crises have made some of our movement’s problems more visible and acute, many EAs have become increasingly worried about the direction of EA over the last few years. We are some of them.

This document was written collaboratively, with contributions from ~10 EAs in total. Each of us arrived at most of the critiques below independently before realising through conversation that we were not “the only one”. In fact, many EAs thought similarly to us, or at least were very easily convinced once thoughts were (privately) shared.

Some of us started to become concerned as early as 2017, but the discussions that triggered the creation of this post happened in the summer of 2022. Most of this post was written by the time of the FTX crash, and the final draft was completed the very day that the Bostrom email scandal broke.[1] Thus, a separate post will be made about the Bostrom/FLI issues in around a week.

A lot of what we say is relevant to the FTX situation, and some of it isn’t, at least directly. In any case, it seems clear to us that the FTX crisis significantly strengthened our arguments.

We reached the point some time ago where we would feel collectively irresponsible if we did not voice our concerns, and now seems like the time when those concerns are most likely to be taken seriously. We voice them in the hope that we can change our movement for the better, and have taken pains to avoid coming off as “hostile” in any way.

Experience indicates that many EAs are likely to agree with significant proportions of what we say, but have not said as much publicly due to the significant risk doing so would pose to their careers, access to EA spaces, and likelihood of ever getting funded again.

Naturally the above considerations also apply to us: we are anonymous for a reason.

This post is also very long, so each section has a summary at the top for ease of scanning, and we’ll break this post up into a sequence to facilitate object-level discussion.

Finally, we ask that people upvote or downvote this post on the basis of whether they believe it to have made a useful contribution to the conversation, rather than whether they agree with all of our critiques.

Summary

  • The Effective Altruism movement has rapidly grown in size and power, and we have a responsibility to ensure that it lives up to its goals
  • EA is too homogenous, hierarchical, and intellectually insular, with a hard core of “orthodox” thought and powerful barriers to “deep” critiques
  • Many beliefs accepted in EA are surprisingly poorly supported, and we ignore entire disciplines with extremely relevant and valuable insights
  • Some EA beliefs and practices align suspiciously well with the interests of our donors, and some of our practices render us susceptible to conflicts of interest
  • EA decision-making is highly centralised, opaque, and unaccountable, but there are several evidence-based methods for improving the situation

Introduction

As committed Effective Altruists, we have found meaning and value in the frameworks and pragmatism of the Effective Altruism movement. We believe it is one of the most effective broadly-focused social movements, with the potential for world-historical impact.

Already, the impact of many EA projects has been considerable and inspiring. We appreciate the openness to criticism found in various parts of the EA community, and believe that EA has the potential to avoid the pitfalls faced by many other movements by updating effectively in response to new information.

We have become increasingly concerned with significant aspects of the movement over our collective decades here, and while the FTX crisis was a shock to all of us, we had for some time been unable to escape the feeling that something was going to go horribly wrong.

To ensure that EA has a robustly positive impact, we feel the need to identify the aspects of our movement that we find concerning, and suggest directions for reform that we believe have been neglected. These fall into three major categories:

  1. Epistemics
  2. Expertise & Rigour
  3. Governance & Power

We do not believe that the critiques apply to everyone and to all parts of EA, but to certain – often influential – subparts of the movement. Most of us work on existential risk, so the majority of our examples will come from there.[2]

Not all of the ~10 people that helped to write this post agree with all the points made within, both in terms of “goes too far” and “doesn’t go far enough”. It is entirely possible to strongly reject one or more of our critiques while accepting others.

In the same vein, we request that commenters focus on the high-level critiques we make, rather than diving into hyper-specific debates about one thing or another that we cited as an example.

Finally, this report started as a dozen or so bullet points, and currently stands at over 20,000 words. We wrote it out of love for the community, and we were not paid for any of its writing or research despite most of us either holding precarious grant-dependent gig jobs or living on savings while applying for funding. We had to stop somewhere. This means that many of the critiques we make could be explored in far, far more detail than their rendition here contains.

If you think a point is underdeveloped, we probably agree; we would love to see others take the points we make and explore them in greater depth, and indeed to do so ourselves, if we can manage it while still paying rent.

We believe that the points we make are vital for the epistemic health of the movement, that they will make it more accessible and effective, and that they will enhance the ability of EA as a whole to do the most good.

Two Notes:

  1. Some of the issues we describe are based on personal experience and thus cannot be backed by citations. If you doubt something we assert, let us know and we’ll give as much detail as we can without compromising our anonymity or that of others. You can also just ask around: we witnessed most of the things we mention on multiple independent occasions, so they’re probably not rare.
  2. This post ties a lot of issues together and is thus necessarily broad, so we will have to make some generalisations, to which there will be exceptions.

Epistemics

Epistemic health is a community issue

Summary: The Collective Intelligence literature suggests epistemic communities should be diverse, egalitarian, and open to a wide variety of information sources. EA, in contrast, is relatively homogenous, hierarchical, and insular. This puts EA at serious risk of epistemic blind-spots.

EA highly values epistemics and has a stated ambition of predicting existential risk scenarios. We have a reputation for assuming that we are the “smartest people in the room”.

Yet, we appear to have been blindsided by the FTX crash. As Tyler Cowen puts it:

Hardly anyone associated with Future Fund saw the existential risk to… Future Fund, even though they were as close to it as one could possibly be.

I am thus skeptical about their ability to predict existential risk more generally, and for systems that are far more complex and also far more distant. And, it turns out, many of the real sources of existential risk boil down to hubris and human frailty and imperfections (the humanities remain underrated). When it comes to existential risk, I generally prefer to invest in talent and good institutions, rather than trying to fine-tune predictions about existential risk itself.

If EA is going to do some lesson-taking, I would not want this point to be neglected.

So, what’s the problem?

EA’s focus on epistemics is almost exclusively directed towards individualistic issues like minimising the impact of cognitive biases and cultivating a Scout Mindset. The movement strongly emphasises intelligence, both in general and especially that of particular “thought-leaders”. The implicit assumption seems to be that an epistemically healthy community is created by acquiring maximally-rational, intelligent, and knowledgeable individuals, with social considerations given second place. Unfortunately, the science does not bear this out. The quality of an epistemic community does not boil down to the de-biasing and training of individuals;[3] more important factors appear to be the community’s composition, its socio-economic structure, and its cultural norms.[4]

The field of Collective Intelligence provides guidance on the traits to nurture if one wishes to build a collectively intelligent community. For example:

  • Diversity
    • Along essentially all dimensions, from cultural background to disciplinary/professional training to cognition style to age
  • Egalitarianism
    • People must feel able to speak up (and must be listened to if they do)
    • Dominance dynamics amplify biases and steer groups into suboptimal path dependencies
    • Leadership is typically best employed on a rotating basis for discussion-facilitation purposes rather than top-down decision-making
    • Avoid appeals and deference to community authority
  • Openness to a wide variety of sources of information
  • Generally high levels of social/emotional intelligence
    • This is often more important than individuals’ skill levels at the task in question

However, the social epistemics of EA leave much to be desired. As we will elaborate on below, EA:

  • Is mostly comprised of people with very similar demographic, cultural, and educational backgrounds
  • Places too much trust in (powerful) leadership figures
  • Is remarkably intellectually insular
  • Confuses value-alignment and seniority with expertise
  • Is vulnerable to motivated reasoning
  • Is susceptible to conflicts of interest
  • Has powerful structural barriers to raising important categories of critique
  • Is susceptible to groupthink

Decision-making structures and intellectual norms within EA must therefore be improved upon.[5]

What actually is “value-alignment”?

Summary: The use of the term “value-alignment” in the EA community hides an implicit community orthodoxy. When people say “value-aligned” they typically do not mean a neutral “alignment of values”, nor even “agreement with the goal of doing the most good possible”, but a commitment to a particular package of views. This package, termed “EA orthodoxy”, includes effective altruism, longtermism, utilitarianism, Rationalist-derived epistemics, liberal-technocratic philanthropy, Whig historiography, the ITN framework, and the Techno-Utopian Approach to existential risk.

The term “value-alignment” gets thrown around a lot in EA, but is rarely actually defined. When asked, people typically say something about similarity or complementarity of values or worldviews, and this makes sense: “value-alignment” is of course a term defined in reference to what values the subject is (un)aligned with. You could just as easily speak of alignment with the values of a political party or a homeowner’s association.[6]

However, the term’s usage in EA spaces typically has an implicit component: value-alignment with a set of views shared and promoted by the most established and powerful components of the EA community. Thus:

  • Value-alignment = the degree to which one subscribes to EA orthodoxy

  • EA orthodoxy = the package of beliefs and sensibilities generally shared and promoted by EA’s core institutions (the CEA, FHI, OpenPhil, etc.)[7]

    • These include, but are not limited to:
      • Effective Altruism
        • i.e. trying to “do the most good possible”
      • Longtermism
        • i.e. believing that positively influencing the long-term future is a (or even the) key moral priority of our time
      • Utilitarianism, usually Total Utilitarianism
      • Rationalist-derived epistemics
        • Most notably subjective Bayesian “updating” of personal beliefs
      • Liberal-technocratic philanthropy
      • A broadly Whiggish/progressivist view of history
      • Cause-prioritisation according to the ITN framework
      • The Techno-Utopian Approach to existential risk, which includes for instance, and in addition to several of the above:
        • Defining “existential risk” in reference to humanity’s “long-term potential” to generate immense amounts of (utilitarian) value by populating the cosmos with vast numbers of extremely technologically advanced beings

        • A methodological framework based on categorising individual “risks”[8], estimating for each a probability of causing an “existential catastrophe” within a given timeframe, and attempting to reduce the overall level of existential risk largely by working on particular “risks” in isolation (usually via technical or at least technocratic means)

        • Technological determinism, or at least a “military-economic adaptationism” that is often underpinned by an implicit commitment to neorealist international relations theory

        • A willingness to seriously consider extreme or otherwise exceptional actions to protect astronomically large amounts of perceived future value

    • There will naturally be exceptions here – institutions employ many people, whose views can change over time – but there are nonetheless clear regularities

Note that few, if any, of the components of orthodoxy are necessary aspects, conditions, or implications of the overall goal of “doing the most good possible”. It is possible to be an effective altruist without subscribing to all, or even any, of them, with the obvious exception of “effective altruism” itself.

However, when EAs say “value-aligned” they rarely seem to mean that one is simply “dedicated to doing the most good possible”, but that one subscribes to the particular philosophical, political, and methodological views packaged under the umbrella of orthodoxy.

We are incredibly homogenous

Summary: Diverse communities are typically much better at accurately analysing the world and solving problems, but EA is extremely homogenous along essentially all dimensions. EA institutions and norms actively and strongly select against diversity. This provides short-term efficiency at the expense of long-term epistemic health.

The EA community is notoriously homogenous, and the “average EA” is extremely easy to imagine: he is a white male[9] in his twenties or thirties from an upper-middle class family in North America or Western Europe. He is ethically utilitarian and politically centrist; an atheist, but culturally protestant. He studied analytic philosophy, mathematics, computer science, or economics at an elite university in the US or UK. He is neurodivergent. He thinks space is really cool. He highly values intelligence, and believes that his own is significantly above average. He hung around LessWrong for a while as a teenager, and now wears EA-branded shirts and hoodies, drinks Huel, and consumes a narrow range of blogs, podcasts, and vegan ready-meals. He moves in particular ways, talks in particular ways, and thinks in particular ways. Let us name him “Sam”, if only because there’s a solid chance he already is.[10]

Even leaving aside the ethical and political issues surrounding major decisions about humanity’s future being made by such a small and homogenous group of people, especially given the fact that the poor of the Global South will suffer most in almost any conceivable catastrophe, having the EA community overwhelmingly populated by Sams or near-Sams is decidedly Not Good for our collective epistemic health.

As noted above, diversity is one of the main predictors of the collective intelligence of a group. If EA wants to optimise its ability to solve big, complex problems like the ones we focus on, we need people with different disciplinary backgrounds[11], different kinds of professional training, different kinds of talent/intelligence[12], different ethical and political viewpoints, different temperaments, and different life experiences. That’s where new ideas tend to come from.[13]

Worryingly, EA institutions seem to select against diversity. Hiring and funding practices often select for highly value-aligned yet inexperienced individuals over outgroup experts, university recruitment drives are deliberately targeted at the Sam Demographic (at least by proxy), and EA organisations are advised to maintain a high level of internal value-alignment to maximise operational efficiency. The 80,000 Hours website seems purpose-written for Sam, and is noticeably uninterested in people with humanities or social sciences backgrounds,[14] or those without university education. Unconscious bias is also likely to play a role here – it does everywhere else.

The vast majority of EAs will, when asked, say that we should have a more diverse community, but in that case, why is only a very narrow spectrum of people given access to EA funding or EA platforms? There are exceptions, of course, but the trend is clear.

It’s worth mentioning that senior EAs have done some interesting work on moral uncertainty and value-pluralism, and we think several of their recommendations are well-taken. However, the focus is firmly on individual rather than collective factors. The point remains that an overwhelmingly utilitarian community in which everyone individually tries to keep all possible viewpoints in mind is no substitute for a genuinely philosophically diverse one. None of us are so rational as to obviate true diversity through our own thoughts.[15]

EA is very open to some kinds of critique and very not open to others

Summary: EA is very open to shallow critiques, but not deep critiques. Shallow critiques are small technical adjustments written in ingroup language, whereas deep critiques hint at the need for significant change, criticise prominent figures or their ideas, and can suggest outgroup membership. This means EA is very good at optimising along a very narrow and not necessarily optimal path.

EA prides itself on its openness to criticism, and in many areas this is entirely justified. However, willingness to engage with critique varies widely depending on the type of critique being made, and powerful structures exist within the community that reduce the likelihood that people will speak up and be heard.

Within EA, criticism is acceptable, even encouraged, if it lies within particular boundaries, and when it is expressed in suitable terms. Here we distinguish informally between “shallow critiques” and “deep critiques”.[16]

Shallow critiques are often:

  • Technical adjustments to generally-accepted structures
    • “We should rate intervention X 12% higher than we currently do.”
    • Changes of emphasis or minor structural/methodological adjustments
    • Easily conceptualised as “optimising” “updates” rather than cognitively difficult qualitative switches
  • Written in EA-language and sprinkled liberally with EA buzzwords
  • Not critical of capitalism

Whereas deep critiques are often:

  • Suggestive that one or more of the fundamental ways we do things are wrong

    • i.e. are critical of EA orthodoxy
    • Thereby implying that people may have invested considerable amounts of time/effort/identity in something when they perhaps shouldn’t have[17]
  • Critical of prominent or powerful figures within EA

  • Written in a way suggestive of outgroup membership

    • And thus much more likely to be read as hostile and/or received with hostility
  • Political

    • Or more precisely: of a different politics to the broadly liberal[18]-technocratic approach popular in EA

EA is very open to shallow critiques, which is something we absolutely love about the movement. As a community, however, we remain remarkably resistant to deep critiques. The distinction is likely present in most epistemic communities, but EA appears to have a particularly large problem. Again, there will be exceptions, but the trend is clear.

The problem is illustrated well by the example of an entry to the recent Red-Teaming Contest: “The Effective Altruism movement is not above conflicts of interest”. It warned us of the political and ethical risks associated with taking money from cryptocurrency billionaires like Sam Bankman-Fried, and suggested that EA has a serious blind spot when it comes to (financial) conflicts of interest.[19]

The article (which did not win anything in the contest) was written under a pseudonym, as the author feared that making such a critique publicly would incur a risk of repercussions to their career. A related comment provided several well-evidenced reasons to be morally and pragmatically wary of Bankman-Fried, got downvoted heavily, and was eventually deleted by its author.

Elsewhere, critical EAs report[20] having to develop specific rhetorical strategies to be taken seriously. Making deep critiques or contradicting orthodox positions outright gets you labelled as a “non-value-aligned” individual with “poor epistemics”, so you need to pretend to be extremely deferential and/or stupid and ask questions in such a way that critiques are raised without actually being stated.[21]

At the very least, critics have learned to watch their tone at all costs, and provide a constant stream of unnecessary caveats and reassurances in order to not be labelled “emotional” or “overconfident”.

These are not good signs.

Why do critical EAs have to use pseudonyms?

Summary: Working in EA usually involves receiving money from a small number of densely connected funding bodies/individuals. Contextual evidence is strongly suggestive that raising deep critiques will drastically reduce one’s odds of being funded, so many important projects and criticisms are lost to the community.

There are several reasons people may not want to publicly make deep critiques, but the one that has been most impactful in our experience has been the role of funding.[22]

EA work generally relies on funding from EA sources: we need to pay the bills, and the kinds of work EA values are often very difficult to fund via non-EA sources. Open Philanthropy, and previously FTX, has/had an almost hegemonic funding role in many areas of existential risk reduction, as well as several other domains. This makes EA funding organisations and even individual grantmakers extremely powerful.

Prominent funders have said that they value moderation and pluralism, and thus people (like the writers of this post) should feel comfortable sharing their real views when they apply for funding, no matter how critical they are of orthodoxy.

This is admirable, and we are sure that they are being truthful about their beliefs. Regardless, it is difficult to trust that the promise will be kept when one, for instance:

  • Observes the types of projects (and people) that succeed (or fail) at acquiring funding

    • i.e. few, if any, deep critiques or otherwise heterodox/“heretical” works
  • Looks into the backgrounds of grantmakers and sees that they appear to have very similar histories and opinions (i.e. they are highly orthodox)

  • Experiences the generally claustrophobic epistemic atmosphere of EA

  • Hears of people facing (soft) censorship from their superiors because they wrote deep critiques of the ideas of prominent EAs

    • Zoe Cremer and Luke Kemp lost “sleep, time, friends, collaborators, and mentors” as a result of writing Democratising Risk, a paper which was critical of some EA approaches to existential risk.[23] Multiple senior figures in the field attempted to prevent the paper from being published, largely out of fear that it would offend powerful funders. This saga caused significant conflict within CSER throughout much of 2021.
  • Sees the revolving door and close social connections between key donors and main scholars in the field

  • Witnesses grantmakers dismiss scientific work on the grounds that the people doing it are insufficiently value-aligned

    • If this is what is said in public (which we have witnessed multiple times), what is said in private?
  • Etc.

Thus, it is reasonable to conclude that if you want to get funding from an EA body, you must not only try to propose a good project, but one that could not be interpreted as insufficiently “value-aligned”, however the grantmakers might define it. If you have an idea for a project that seems very important, but could be read as a “deep critique”, it is rational for you to put it aside.

The risk to one’s career is especially important given the centralisation of funding bodies as well as the dense internal social network of EA’s upper echelons.[24]

Given this level of clustering, it is reasonable to believe that if you admit to holding heretical views on your funding application, word will spread, and thus you will quite possibly never be funded by any other funder in the EA space, never mind any other consequences (e.g. gatekeeping of EA events/spaces) you might face. For a sizeable portion of EAs, the community forms a very large segment of one’s career trajectory, social life, and identity; not things to be risked easily.[25] For most, the only robust strategy is to keep your mouth shut.[26]

Grantmakers: You are missing out on exciting, high potential impact projects due to these processes. When the stakes are as high as they are, verbal assurances are unfortunately insufficient. The problems are structural, so the solutions must be structural as well.

We can’t put numbers on everything…

Summary: EA is highly culturally quantitative, which is optimal for some problem categories but not others. Trying to put numbers on everything causes information loss and triggers anchoring and certainty biases. Individual Bayesian Thinking, prized in EA, has significant methodological issues. Thinking in numbers, especially when those numbers are subjective “rough estimates”, allows one to justify anything comparatively easily, and can lead to wasteful and immoral decisions.

EA places an extremely high value on quantitative thinking, mostly focusing on two key concepts: expected value (EV) calculations and Bayesian probability estimates.

From the EA Forum wiki: “The expected value of an act is the sum of the value of each of its possible outcomes multiplied by their probability of occurring.” Bayes’ theorem is a simple mathematical tool for updating our estimate of the likelihood of an event in response to new information.

Individual Bayesian Thinking (IBT) is a technique inherited by EA from the Rationalist subculture, where one attempts to use Bayes’ theorem on an everyday basis. You assign each of your beliefs a numerical probability of being true and attempt to mentally apply Bayes’ theorem, increasing or decreasing the probability in question in response to new evidence. This is sometimes called “Bayesian epistemology” in EA, but to avoid confusing it with the broader approach to formal epistemology with the same name we will stick with IBT.
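To make concrete what IBT asks of its practitioners, the update can be written out explicitly. This is a minimal illustrative sketch with invented numbers, not a description of anyone’s actual practice:

```python
# A minimal sketch of the update that "Individual Bayesian Thinking" asks one
# to perform mentally. All numbers are invented for illustration.

def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Return P(H | E) given P(H) and the probability of the evidence
    under the hypothesis and under its negation."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

belief = 0.30  # prior credence in some hypothesis H
# A piece of evidence judged twice as likely if H is true than if it is false:
belief = bayes_update(belief, 0.8, 0.4)
print(round(belief, 3))  # 0.462
```

Even this single exact step requires holding three conditional probabilities in mind at once; the critique below concerns chaining many such steps, mentally, with guessed inputs.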

There is nothing wrong with quantitative thinking, and much of the power of EA grows from its dedication to the numerical. However, this is often taken to the extreme, where people try to think almost exclusively along numerical lines, causing them to neglect important qualitative factors or else attempt to replace them with doubtful or even meaningless numbers because “something is better than nothing”. These numbers are often subjective “best guesses” with little empirical basis.[27]

For instance, Bayesian estimates are heavily influenced by one’s initial figure (one’s “prior”), which, especially when dealing with complex, poorly-defined, and highly uncertain and speculative phenomena, can become subjective (based on unspecified values, worldviews, and assumptions) to the point of arbitrariness.[28] This is particularly true in existential risk studies where one may not have good evidence to update on.

We assume that, with enough updating in response to evidence, our estimates will eventually converge on an accurate figure. However, this is dependent on several conditions, notably well-formulated questions, representative sampling of (accurate) evidence, and a rigorous and consistent method of translating real-world observations into conditional likelihoods.[29] This process is very difficult even when performed as part of careful and rigorous scientific study; attempting to do it all in your head, using rough-guess or even purely intuitional priors and likelihoods, is likely to lead to more confidence than accuracy.
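The sensitivity to priors is easy to demonstrate. In this sketch (all likelihood ratios are hypothetical), two people update on the same handful of weakly diagnostic observations but start from different priors:

```python
# Illustrative sketch: two people update on the same weak evidence
# but start from different priors. All numbers are hypothetical.

def update(prior, lr):
    """One Bayes update in odds form, where lr = P(E|H) / P(E|not-H)."""
    odds = prior / (1 - prior) * lr
    return odds / (1 + odds)

evidence = [1.5, 0.8, 1.5, 1.2, 1.5]  # weakly diagnostic, mostly favouring H

for prior in (0.01, 0.50):
    p = prior
    for lr in evidence:
        p = update(p, lr)
    print(f"prior {prior:.2f} -> posterior {p:.2f}")
```

With these made-up numbers the posteriors come out around 0.03 and 0.76: five observations have done little to bring the two views together, and for speculative phenomena one may never see much more evidence than this.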

This is further complicated by the fact that probabilities are typically distributions rather than point values – often very messy distributions that we don’t have nice neat formulae for. Thus, “updating” properly would involve manipulating big and/or ugly matrices in your head. Perhaps this is possible for some people.

A common response to these arguments is that Bayesianism is “how the mind really works”, and that the brain already assigns probabilities to hypotheses and updates them similarly or identically to Bayes’ rule. There are good reasons to believe that this may be true. However, the fact that we may intuitively and subconsciously work along Bayesian lines does not mean that our attempts to consciously “do the maths” will work.

In addition, there seems to have been little empirical study of whether Individual Bayesian Thinking actually outperforms other modes of thought, never mind how this varies by domain. It seems risky to place so much confidence in a relatively unproven technique.

Individual Bayesian Thinking can thus be critiqued on scientific grounds, but there is also another issue with it, and with hyper-quantitative thinking more generally: motivated reasoning. With no hard qualitative boundaries and little constraining empirical data, the combination of expected value calculations and Individual Bayesian Thinking in EA allows one to justify and/or rationalise essentially anything by generating suitable numbers.

Inflated EV estimates can be used to justify immoral or wasteful actions, and methodologically questionable subjective probability estimates translate psychological, cultural, and historical biases into truthy “rough estimates” that are plugged into scientific-looking graphs and used as the basis for important decisions.
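A toy calculation (every number here is invented to make the structural point, not to describe any real intervention) shows how easily an expected-value comparison can be dominated by a subjective guess about astronomical stakes:

```python
# Toy expected-value comparison; all numbers are invented for illustration.

certain_good = 1.0 * 10_000   # save 10,000 lives with certainty
speculative = 1e-10 * 1e35    # a 1-in-10-billion chance of 10^35 future lives

print(speculative / certain_good)  # the speculative bet "wins" by ~10^21
```

Note that the ranking is almost insensitive to the guessed probability: cut it by a factor of a million and the speculative option still dominates, which is why a subjective “rough estimate” in this position can justify almost anything.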

We then try to optimise our activities using the numbers we have. Attempting to fine-tune estimates of the maximally impactful strategy is a great approach when operating within fairly predictable, well-described domains, but is a fragile and risky strategy when operating in complex and uncertain domains (like existential risk) even when you have solid reasons for believing that your numbers are good – what if you’re wrong? Robustness to a wide variety of possibilities is typically the objective of professionals in such areas, not optimality; we should ask ourselves why.

Such estimates can also trigger the anchoring bias, and imply to lay readers that, for example, while unaligned artificial intelligence may not be responsible for almost twice as much existential risk as all other factors combined, the ratio is presumably somewhere in that ballpark. In fact, it is debatable whether such estimates have any validity at all, especially when not applied to simple, short-term (i.e. within a year),[30] theoretically well-defined questions. Indeed, they do not seem to be taken seriously by existential risk scholars outside of EA.[31] The apparent scientific-ness of numbers can fool us into thinking we know much more about certain problems than we actually do.

This isn’t to say that quantification is inherently bad, just that it needs to be combined with other modes of thought. When a narrow range of thought is prized above all others, blind spots are bound to emerge, especially when untested and controversial techniques like Individual Bayesian Thinking are conflated (as they sometimes are by EAs) with “transparent reasoning” and even applied “rationality” itself.

Numbers are great, but they’re not the whole story.

…and trying to weakens our collective epistemics

Summary: Overly-numerical thinking lends itself to homogeneity and hierarchy. This encourages undue deference and opaque/unaccountable power structures. EAs assume they are smarter/more rational than non-EAs, which allows us to dismiss opposing views from outsiders even when they know far more than we do. This generates more homogeneity, hierarchy, and insularity.

Under number-centric thinking, everything is operationalised as (or is assigned) some value unless there is an overwhelming need or deliberate effort to think otherwise. A given value X is either bigger or smaller than another value Y, but not qualitatively different to it; ranking X with respect to Y is the only possible type of comparison. Thus, the default conceptualisation of a given entity is a point on a (homogenous) number line. In a culture strongly focused on maximising value (that “line goes up”), one comes to assume that this model fits everything: put a number on something, then make the number bigger.

For instance, (intellectual) ability is implicitly assumed within much of EA to be a single variable[32], which is simply higher or lower for different people. Therefore, there is no need for diversity, and it feels natural to implicitly trust and defer to the assessments of prominent figures (“thought leaders”) perceived as highly intelligent. This in turn encourages one to accept opaque and unaccountable hierarchies.[33]

This assumption of cognitive hierarchy contributes to EA’s unusually low opinion of diversity and democracy, which reduces the input of diverse perspectives, which naturalises orthodox positions, which strengthens norms against diversity and democracy, and so on.

Moreover, just as prominent EAs are assumed to be authoritative, the EA community’s focus on individual epistemics leads us to think that we, with our powers of rationality and Bayesian reasoning, must be epistemically superior to non-EAs. Therefore, we can place overwhelming weight on the views of EAs and more easily dismiss the views of the outgroup, or even disregard democracy in favour of an “epistocracy” in which we are the obvious rulers.[34]

This is a generator function for hierarchy, homogeneity, and insularity. It is antithetical to the aim of a healthy epistemic community.

In fact, work on the philosophy of science in existential risk has convincingly argued that in a field with so few independent evidential feedback loops, homogeneity and “conservatism” are particularly problematic. Unlike in fields where we have a good idea of the epistemic landscape, the inherently uncertain and speculative nature of Existential Risk Studies (ERS) means that we are uncertain not only of whether we have discovered an epistemic peak, but of what the topography of the epistemic landscape even looks like. Thus, we should focus on creating the conditions for creative science, rather than the conservative science that we (i.e. the EAs within ERS) are moving towards through our extreme focus on a narrow range of disciplines and methodologies.

The EA Forum structurally discourages deep critique

Summary: The EA Forum gives more “senior” users far more votes and hides unpopular comments. This combines with cultural factors to silence critics and encourage convergence on orthodox views.

The EA Forum is interesting because it formalises some of the maladaptive trends we have mentioned. The Forum’s karma system ranks comments on the basis of user popularity, and comments below a certain threshold are hidden from view. This greatly reduces the visibility of unpopular comments, i.e. those that received a negative reaction from their readers.

Furthermore, the greater a user’s overall karma score, the more impactful their votes, to the point where some users can have 10x or even 16x the voting power of others. Thus, more established, popular, engaged users are able to give their preferred comments a significant boost, and in some cases unilaterally drop comments they dislike (e.g. from their critics) below the threshold past which a comment is hidden from view and thus not seen by most people.

Due to popularity feedback loops present on internet fora (where low-karma comments are likely to be further downvoted[35] and vice versa) as well as the related issues of trust and deference, these problems are likely to be magnified over the course of a discussion.

New users, who reasonably expect voting to be one-person-one-vote, can mistakenly believe that a comment with -5 karma from 15 votes represents 10 downvotes and 5 upvotes, when it could just as easily be a result of 13 upvotes being overruled by strong downvotes from a couple of members of the EA core network.
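The arithmetic behind this is easy to verify. The sketch below is purely illustrative: the vote weights are our assumptions for the sake of the example, not the Forum’s actual karma formula.

```python
# Illustrative sketch of karma-weighted voting. Each vote is
# (direction, weight); weights here are assumed, not the Forum's real values.

def karma_total(votes):
    """Sum the signed weights of a list of (direction, weight) votes."""
    return sum(direction * weight for direction, weight in votes)

# What a new user imagines produced "-5 karma from 15 votes":
# one person, one vote -> 5 upvotes and 10 downvotes.
naive = [(+1, 1)] * 5 + [(-1, 1)] * 10

# What may actually have happened: 13 ordinary upvotes overruled
# by two high-karma users strong-downvoting at an assumed 9x weight.
weighted = [(+1, 1)] * 13 + [(-1, 9)] * 2

print(karma_total(naive), karma_total(weighted))  # both give -5
```

Both tallies display identically to a reader, which is exactly why the raw karma number under-informs about the breadth of disagreement.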

These arrangements give more orthodox individuals and groups disproportionate power over online discourse, and can make people feel less comfortable sharing critical views. It is a generator function for groupthink.

Some admirable work has been done to improve the situation, for instance the excellent step to separate karma and agreement ratings, but this is not enough to solve the problem.

The most important solution is simple: one person, one vote. Beyond that, it seems worth offering a “sort by controversial” option, not hiding low-karma comments, adding separate agreement karma for posts as well as comments, and perhaps occasionally surfacing a low-ranking comment nearer the top of the “top scoring” sort, so that critical comments don’t get buried.

Expertise & Rigour

We need to value expertise and rigour more

Summary: EA mistakes value-alignment and seniority for expertise and neglects the value of impartial peer-review. Many EA positions perceived as “solid” are derived from informal shallow-dive blogposts by prominent EAs with little to no relevant training, and clash with consensus positions in the relevant scientific communities. Expertise is appealed to unevenly to justify pre-decided positions.

There are many very enthusiastic EAs who have started studying existential risk recently, which includes several of the authors of this post. This is a massive asset to the movement. However, given the (obviously understandable) inexperience of many newcomers, we must be wary of the power of received wisdom. Under the wrong conditions, newcomers can rapidly update in line with the EA “canon”, then speak with significant confidence about fields containing much more internal disagreement and complexity than they are aware of, or even where orthodox EA positions are misaligned with consensus positions within the relevant expert communities.

More specifically, EA shows a pattern of prioritising non-peer-reviewed publications – often shallow-dive blogposts[36] – by prominent EAs with little to no relevant expertise. These are then accepted into the “canon” of highly-cited works to populate bibliographies and fellowship curricula, while we view the topic as somewhat “dealt with”; “someone is handling it”. It should also be noted that the authors of these works often do not regard them as publications that should be accepted as “canon”, but they are frequently accepted as such regardless.[37]

This is a worrying tendency, given that these works commonly fail to engage with major areas of scholarship on the topics they focus on, ignore work attempting to answer similar questions, and do not consult relevant experts, and in many instances use methods and/or come to conclusions that would be considered fringe within the relevant fields. These works do not face adequate scrutiny due to the aforementioned issues with raising critique, as well as the (usually) extreme lack of relevant expertise in the EA community caused by its disciplinary homogeneity.

Elsewhere, ideas like the ITN framework and differential technological development are taken as core parts of EA orthodoxy, even being used to make highly consequential funding and policy decisions. This is worrying given that both are problematic (the ITN framework, for example, neglects co-benefits, response risks, and tipping points) and neither has been subjected to significant amounts of rigorous peer review and academic discussion[38].

This is not at all to say that Google Docs and blogposts are inherently “bad”: they are very good for opening discussions and providing preliminary thoughts before in-depth studies. In fact, one thing EA does much better than academia is its lower barrier to entry to important conversations, which is facilitated by things like EA Forum posts. This is a wonderful force for scientific creativity. However, the fact remains that these posts are simply no substitute for rigorous studies subject to peer review (or genuinely equivalent processes) by domain-experts external to the EA community.

Moreover, there seem to be rather inconsistent attitudes to expertise in the EA community. When Stuart Russell argues that AI could pose an existential threat to humanity, he is held up as someone worth listening to – “He wrote the book on AI, you know!” However, if someone of comparable standing in Climatology or Earth-Systems Science, e.g. Tim Lenton or Johan Rockström, says the same for their field, they are ignored, or even pilloried.[39] Moderate statements from the IPCC are used to argue that climate change is “not an existential risk”, but given significant disagreement among experts on e.g. deep learning capabilities, it seems very unlikely that a consensus-based “Intergovernmental Panel on Artificial Intelligence” would take a stance anything like as extreme as that of most prominent EAs. This seems like a straightforward example of confirmation bias to us. To the extent that we defer to experts, we should be consistent and rigorous about how it is done.

Finally, we sometimes assume that somebody holding significant power must mean that their opinions are particularly valuable. Sam Bankman-Fried, for instance, was given a huge platform to speak both on behalf of and to the EA movement, e.g. a 3.5-hour interview on 80,000 Hours. He was frequently asked to share his beliefs on a range of complex topics, from AI safety to theories of history, even though his only distinction was (as we know now, fraudulently) making lots of money in crypto. To interview someone or to invite them to speak about a given topic implies that they are someone whose views are particularly worth listening to compared to others. We should be critical about how we make that judgement, especially given the seniority-expertise confusion we discussed above.

Neither value-alignment nor seniority are equivalent to expertise or skill, and our assessments of the quality of research works should be independent of the perceived value-alignment and name-recognition of their authors. We’re dealing with really big problems: let’s make sure we get it right.

We should probably read more widely

Summary: EA reading lists are typically narrow, homogenous, and biased, and EA has unusual social norms against reading more than a handful of specific books. Reading lists often heavily rely on EA Forum posts and shallow dives over peer-reviewed literature. EA thus remains intellectually insular, and the resulting overconfidence makes many attempts by external experts and newcomers to engage with the community exhausting and futile. This gives the false impression that orthodox positions are well-supported and/or difficult to critique.

EA reading lists are notorious for being homogenous, being populated overwhelmingly by the output of a few highly value-aligned thinkers (i.e. MacAskill, Ord, Bostrom, etc.), and paying little attention to alternative perspectives. Whilst these thinkers are highly impactful, they aren’t (and don’t claim to be) the singular authorities on the issues EAs are interested in.

This, plus our community’s general intellectual insularity, can cause new EAs to assume that little of worth has been said on some problems outside of EA. For instance, it is not uncommon for an EA who is very interested in existential risk to have never heard of many of the key papers, concepts, or authors in Existential Risk Studies, which is unsurprising when most of our reading lists ignore almost all academic papers not written by senior members of the Future of Humanity Institute.[40]

Conversely, our insularity plus the resistance to “deep critiques” causes people with expertise in neglected fields to either burn out and give up in exhaustion after a while, or avoid engaging in the first place. We are personally familiar with innumerable examples of this, from senior academics to 18-year-old undergraduates. Since they either avoid EA after brief exposure or have their contributions ignored or downvoted into the ground, we don’t even notice how much we are losing out on and how many opportunities we are missing.

Our problems concerning outside expertise and knowledge are compounded by EA’s odd cultural relationship to books: student groups are given money to bulk-order EA-friendly books and hand them out for free, but otherwise there seems to be a general feeling that reading books is rarely a good use of time in comparison to reading (EA-aligned) blogposts. This issue reached its extreme in Sam Bankman-Fried. From a now-deleted article in Sequoia:

“Oh, yeah?” says SBF. “I would never read a book.”

I’m not sure what to say. I’ve read a book a week for my entire adult life and have written three of my own.

“I’m very skeptical of books. I don’t want to say no book is ever worth reading, but I actually do believe something pretty close to that,” explains SBF. “I think, if you wrote a book, you fucked up, and it should have been a six-paragraph blog post.”

Most people don’t write blogposts, and some (most?) arguments are too complex and detailed to fit into blogposts. However, blogposts are very popular in the tech/rationalist spheres EA emerged from, and are extremely popular within EA. Thus, cultural forces once again push people away from potentially valuable outside ideas.

When those from outside EA have engaged with existential risk discussions, their contributions have often been useful and insightful. Thus, it is probably a good idea to assume that a lot of work outside of EA may have useful applications in an EA context. We are trying to deal with some of the most important issues in the world. We can't afford to assume that our little ecosystem has all the answers, because we don’t!

Luckily, there are expert opinion aggregation tools to rigorously combine the positions of many scholars. For instance, under the Delphi method, rounds of estimation and explanation are iterated in order to produce more reliable predictions. Participants are kept anonymous, and each participant does not know who made any given estimate or argument. This can counteract the negative impacts of experts’ personal or public stakes in certain ideas, and encourage participants to update their views freely. If we want to find the best available answers for our questions, we should look into the best-supported methods for generating bases of knowledge.
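The core mechanic of a Delphi exercise can be sketched as a toy model: anonymous participants see a summary of the group’s estimates, then revise toward it at some rate. The revision strength, the initial estimates, and the use of the median as the summary statistic are all illustrative assumptions here, not a prescription for how a real Delphi panel works.

```python
import statistics

def delphi_round(estimates, pull=0.5):
    """One toy Delphi iteration: each anonymous participant revises
    their estimate partway toward the group median after seeing the
    anonymised summary. `pull` is an assumed revision strength."""
    med = statistics.median(estimates)
    return [e + pull * (med - e) for e in estimates]

# Assumed initial probability estimates from five anonymous experts.
estimates = [0.05, 0.10, 0.20, 0.40, 0.90]

for _ in range(3):  # iterate a few rounds of estimation and revision
    estimates = delphi_round(estimates)

# The spread of estimates shrinks while the central view is preserved.
print(round(statistics.median(estimates), 3),
      round(max(estimates) - min(estimates), 3))
```

Even this toy version shows the appeal: disagreement narrows through iterated, anonymous feedback rather than through deference to whoever speaks loudest.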

Other communities have been working on problems like the ones we focus on for decades: let’s hear what they have to say.

We need to stop reinventing the wheel

Summary: EA ignores highly relevant disciplines to its main area of focus, notably Disaster Risk Reduction, Futures Studies, and Science & Technology Studies, and in their place attempts to derive methodological frameworks from first principles. As a result, many orthodox EA positions would be considered decades out of date by domain-experts, and important decisions are being made using unsuitable tools.

EA is known, even within the EA community itself, for reinventing the wheel. This poses a significant problem given the stakes and urgency of problems like existential risk.

There are entire disciplines, such as Disaster Risk Reduction, Futures Studies, and Science and Technology Studies, that are profoundly relevant to existential risk reduction yet which have been almost entirely ignored by the EA community. The consequences of this are unsurprising: we have started near to the beginning of the history of each discipline and are slowly learning each of their lessons the hard way.

For instance, the approach to existential risk most prominent in EA, what Cremer and Kemp call the “Techno-Utopian Approach” (TUA), focuses on categorising individual hazards (called “risks” in the TUA),[41] attempting to estimate the likelihood that they will cause an existential catastrophe within a given timeframe, and trying to work on each risk separately by default, with a homogenous category of underlying “risk factors” given secondary importance.

However, such a hazard-centric approach was abandoned within Disaster Risk Reduction decades ago and replaced with one that places a heavy emphasis on the vulnerability of humans to potentially hazardous phenomena.[42] Indeed, differentiating between “risk” (the potential for harm), “hazards” (specific potential causes of harm) and “vulnerabilities” (aspects of humans and human systems that render them susceptible to the impacts of hazards) is one of the first points made on any disaster risk course. Reducing human vulnerability and exposure is generally a far more effective method of reducing risk posed by a wide variety of hazards, and far better accounts for “unknown unknowns” or “Black Swans”.[43]

Disaster risk scholarship is also revealing the growing importance of complex patterns of causation, the interactions between threats, and the potential for cascading failures. This area is largely ignored by EA existential risk work, and has been dismissed out of hand by prominent EAs.

As another example, Futures & Foresight scholars noted the deep limitations of numerical/probabilistic forecasting of specific trends/events in the 1960s-70s, especially with respect to long timescales as well as domains of high complexity and deep uncertainty[44], and low-probability high-impact events (i.e. characteristics of existential risk). Practitioners now combine or replace forecasts with qualitative foresight methods like scenario planning, wargaming, and Causal Layered Analysis, which explore the shape of possible futures rather than making hard-and-fast predictions. Yet, EA’s existential risk work places a massive emphasis on forecasting and pays little attention to foresight. Few EAs seem aware that “Futures Studies” as a discipline exists at all, and EA discussions of the (long-term) future often imply that little of note has been said on the topic outside of EA.[45]

These are just two brief examples.[46] There is a wealth of valuable insights and data available to us if we would only go out and read about them: this should be a cause for celebration!

But why have they been so neglected? Regrettably, it is not because EAs read these literatures and provided robust arguments against them; we simply never engaged with them in the first place. We tried to create the field of existential risk almost from first principles using the methods and assumptions that were already popular within our movement, regardless of whether they were suitable for the task.[47]

We believe there could be several disciplines or theoretical perspectives that EA, had it developed a little differently earlier on, would recognise as fellow travellers or allies. Instead, we threw ourselves wholeheartedly into the Founder Effect, and in our over-dependence on a few early canonical thinkers (i.e. MacAskill, Ord, Bostrom, Yudkowsky, etc.), we have thus far lost out on all that these fields have to offer.

This expands to a broader question: if we were to reinvent (EA approaches to) the field of Existential Risk Studies from the ground up, how confident are we that we would settle on our current way of doing things?

The above is not to say that all views within EA ought to always reflect mainstream academic views; there are genuine shortcomings to traditional academia. However, the sometimes hostile attitude EA has to academia has hurt our ability to listen to its contributions as well as those of experts in general.

Some ideas we should probably pay more attention to

Summary: Taster menu of topics directly applicable to existential risk work that EA pays little attention to: Vulnerability & Resilience, Complex (Adaptive) Systems, Futures & Foresight, Decision-Making under Deep Uncertainty/Robust Decision-Making, Psychology & Neuroscience, Science & Technology Studies, and the Humanities & Social Sciences in general.

So what are some areas that EA should take greater notice of? Our list is far from exhaustive and is heavily focused on global catastrophic risk, but it seems like a good starting point. Naturally we welcome both suggestions for and constructive debates on the below.

Vulnerability and Resilience

Most communities trying to reduce risk focus on reducing human vulnerability and increasing societal resilience, rather than trying to fine-tune predictions of individual hazards, especially in areas full of unknown unknowns. It is possible that reducing the likelihood and magnitude of particular hazards may sometimes be the most effective way of reducing overall risk, but this should only be concluded after detailed assessment, rather than assumed a priori. In fact, our priors should be strongly against this claim given that it would imply that hazard-centric approaches are most suitable for existential risk scenarios (i.e. the areas of deepest uncertainty and highest complexity) which is the opposite of the trend seen in disaster/catastrophe risk more generally.

The principles of resilience include maintaining redundancy, diversity, and modularity, and ensuring that excessive connectivity doesn’t allow failures to cascade through a system (“systemic risk”). This is often achieved through self-organisation (as seen everywhere from ecosystems to democratic success stories) and institutional learning. Resilience is typically enhanced by popular participation in decision-making (consistent with collective intelligence, skin in the game, and the wisdom of the crowd), and enabling subsidiarity[48] (making decisions closest to where their impacts are, and where local knowledge can be effectively utilised).[49] Such multilevel governance, taking local knowledge into account, may be particularly valuable given our aforementioned problems around insularity and the dominance of the Global North. Consulting more widely, especially in areas of real vulnerability, may improve mitigation and adaptation strategies with respect to existential risk.

It is worth noting that the FTX crash is a perfect example of the fulfilment of a systemic, cascading risk that was unforeseen by EA and which EA was highly vulnerable to.

Complex (Adaptive) Systems

The siloed approach to existential risk, where the overwhelming majority of work focuses on reducing risk from one of the Big 4 Hazards[50] in isolation, neglects emergent behaviour, feedback loops, interactions between cause areas, cascade/contagion effects, and the properties of complex adaptive systems more broadly. This is concerning because the likelihoods, magnitudes, and qualities of global catastrophic scenarios are determined by the structure of the current world-system, which is usefully conceptualised as a (staggeringly) complex adaptive system. Recent work from Len Fisher and Anders Sandberg, for example, highlights the advantages of analysing catastrophic threats as complex adaptive networks, work by Lara Mani, Asaf Tzachor, and Paul Cole has shown the issues with neglecting cascading catastrophic risk from volcanoes, and systems approaches are on the rise in Existential Risk Studies generally.[51]

There is a huge body of research on how to model and act within complex adaptive systems, including systems-dynamics, network-dynamics, and agent-based simulations, as well as qualitative approaches.
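A minimal cascade model makes the underlying point concrete: total damage depends on how systems are coupled, not just on the initial hazard, so hazard-by-hazard analysis can badly underestimate risk. Everything in this sketch – the node names, dependency structure, and failure threshold – is an illustrative assumption, not a calibrated model.

```python
# Toy cascading-failure model: a node fails once the fraction of its
# failed dependencies crosses a threshold. All parameters are assumptions.

def cascade(dependencies, initially_failed, threshold=0.5):
    """Propagate failures until no further node crosses the threshold."""
    failed = set(initially_failed)
    changed = True
    while changed:
        changed = False
        for node, deps in dependencies.items():
            if node not in failed and deps:
                if sum(d in failed for d in deps) / len(deps) >= threshold:
                    failed.add(node)
                    changed = True
    return failed

# Hypothetical couplings between global systems.
graph = {
    "climate": [],
    "trade": ["climate"],
    "food": ["climate", "trade"],
    "governance": ["food", "trade"],
}

# A single perturbation propagates through every coupled system.
print(sorted(cascade(graph, {"climate"})))
```

In this toy network, one initial failure takes down all four systems; an analysis that scored each node’s “standalone” risk would have missed this entirely.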

Complexity science has seen particularly extensive application in ecology and earth system science, where inherently interconnected systems vulnerable to tipping points, cascades, and collapse are common. Here, phenomena like temperature increases are best analysed as perturbations to the overall system state; one cannot simply add up the individual impacts of a predefined set of hazards.

For instance, recent work on catastrophic climate risk highlights the key role of cascading effects like societal collapses and resource conflicts. With as many as half of climate tipping points in play at 2.7–3.4°C of warming and several at as low as 1.5°C, large areas of the Earth are likely to face prolonged lethal heat conditions, with innumerable knock-on effects. These could include increased interstate conflict, a far greater number of omnicidal actors, food-system strain or failure triggering societal collapses, and long-term degradation of the biosphere carrying unforeseen damage, e.g. through keystone species loss.[52]

Futures & Foresight

As mentioned above, foresight exercises – especially those conducted in groups – are the bread and butter of futures professionals. The emphasis is generally on qualitative or even narrative explorations of what the future might hold, with quantitative forecasting playing an important but not central role. You can’t put a probability on something if you don’t think of it in the first place,[53] and qualitative analysis often reveals that the something you were going to put a probability on isn’t a single distinct “something” at all.[54] In most cases, putting meaningful probabilities on events is impractical, and unnecessary for decision-making.

Futures Studies also includes large bodies of work on utopianism and socio-technical imaginaries, which seem vital given how much of EA’s existential risk work is premised on longtermism, a broadly utopian philosophy based on a particular image of the future.

Decision-Making under Deep Uncertainty/Robust Decision-Making

Robust decisions are designed to succeed largely independently of how the future plays out; this is achieved by preparing for things we cannot predict. When futures and risk professionals try to plan for an uncertain future, they typically do not try to perform fine-grained expected value calculations and optimise accordingly – “Predict and Act” – but construct plans that are robust to a wide variety of possible futures – “Explore and Adapt” – using simulations to explore the full parameter space and seeking agreement among stakeholders on particular decisions rather than particular models of the world. This approach vastly improves one’s performance when faced with Black Swans and unknown unknowns, and is much better at taking into account the positions of multiple stakeholders with differing value systems. These approaches are policy-proven (see the Colorado River Basin and Dutch “Room for the River” examples) and there is a wealth of literature on the subject, starting here and here.

Psychology and Neuroscience

By understanding the psychological processes that drive people’s behaviour, effective altruists and existential risk researchers can better predict how people will respond to various interventions and develop strategies that will be more likely to succeed. Additionally, psychology can provide valuable insights into how people perceive and respond to risk, which can help us better understand our audience and create effective strategies to reduce risk.

Elsewhere, neuroscientific studies have revealed the value of holistic/anti-reductionist thinking and embodied cognition, as well as significant areas in which EA’s Kahneman-derived emphasis on cognitive biases and dismissal of intuitive decision-making is misplaced.

Science and Technology Studies

Science and Technology Studies (STS) investigates the creation, development, and consequences of technologies with respect to history, society, and culture. Particularly relevant concepts include the “Risk Society”, which addresses how society organises itself in response to risk, and “Normal Accidents”, which contends that failures are inherent features of complex technical systems. Elsewhere, constructivist or co-productionist approaches to technology would provide valuable counterpoints to the implicit technological determinism of a large fraction of longtermist work.

The Humanities and Social Sciences

They exist! And are valuable!

Understanding how social change occurs will naturally be key to reducing risk, both in general (e.g. how do we build towards social tipping points, or communicate effectively?) and from ourselves (what risks are associated with utopian high-modernist movements? How do socio-economic conditions affect ideas about what counts as “rational” or “scientific”?).

Understanding how people have historically failed at the task of profoundly improving the world is vital if we want to avoid replicating those failures at larger scales.

Elsewhere, philosophies like critical realism may provide different epistemological and ontological bases for studying existential risk, and Kuhn-descended discussions of scientific paradigms helpfully highlight the contingent, cultural, and sometimes limited nature of science.

Studies of subjectivity, positionality, and postcolonialism provide useful insights about, for instance, how ideas of objectivity can be defined in terms that advantage those in power.

Also, much of the existential risk we face appears to arise from social phenomena, and thus it only seems rational to use the tools developed for such things.

Using the right (grantmaking) tools for the right (grantmaking) jobs

Summary: EA grantmaking methods have many advantages when applied to “classic” cause areas like endemic disease. However, current methods have significant methodological issues, and over-optimise in complex and uncertain environments like global catastrophic risk where robustness should be the primary objective. EA grantmaking should thus be decentralised and pluralised. Different methods should be trialled and rigorously evaluated.

Funding has a central role within EA, and a large proportion of EA institutions and projects would collapse if they were unable to secure funding from EA sources.

Open Philanthropy (OpenPhil) is by far the most powerful funding organisation in EA, so its cause prioritisations and decision-making frameworks have an extremely large influence on the direction of the movement.

We applaud essentially all of the cause areas OpenPhil funds,[55] and the people we know at OpenPhil are typically intelligent, altruistic, and diligent.

Regardless of this, we will be using OpenPhil as a case study to explore two major problems with EA funding, both because of OpenPhil’s centrality, and because OpenPhil’s perspectives and practices are common across much of the rest of our movement, e.g. EA Funds.

The problems are:

  1. Our funding frameworks sometimes use inappropriate goals and tools
  2. It is socially and epistemically unhealthy for a movement to cultivate such a huge concentration of (unaccountable, opaque) power

We will discuss the former here, and explore the latter in subsequent sections.

The focus will be on the cause area of global catastrophic risk/existential risk/longtermism for two reasons: it’s the area most of us know the most about, and it’s where the issues we describe are most visible & impactful.

OpenPhil’s global catastrophic risk/longtermism funding stream is dominated by two hazard-clusters – artificial intelligence and engineered pandemics[56] – with little affordance given to other aspects of the risk landscape. Even within this, AI seems to be seen as “the main issue” by a wide margin, both within OpenPhil and throughout the EA community.

This is a problematic practice, given that, for instance:

  • The prioritisation relies on questionable forecasting practices, which themselves sometimes take contestable positions as assumptions and inputs

  • There is significant second-order uncertainty around the relevant risk estimates

  • The ITN framework has major issues, especially when applied to existential risk

    • It is extremely sensitive to how a problem is framed, and often relies on rough and/or subjective estimates of ambiguous and variable quantities

      • This poses serious issues when working under conditions of deep uncertainty, and can allow implicit assumptions and subconscious biases to pre-determine the result
      • Climate change, for example, is typically considered low-neglectedness within EA, but extreme/existential risk-related climate work is surprisingly neglected
      • What exactly makes a problem “tractable”, and how do you rigorously put a number on it?
    • It ignores co-benefits, response risks, and tipping points

    • It penalises projects that seek to challenge concentrations of power, since this appears “intractable” until social tipping points are reached[57]

    • It is extremely difficult and often impossible to meaningfully estimate the relevant quantities in complex, uncertain, changing, and low-information environments

    • It focuses on evaluating actions as they are presented, and struggles to sufficiently value exploring the potential action space and increasing future optionality

  • Creativity can be limited by the need to appeal to a narrow range of grantmaker views[58]

  • The current model neglects areas that do not fit [neatly] into the two main “cause areas”, and indeed it is arguable whether global catastrophic risk can be meaningfully chopped up into individual “cause areas” at all

  • A large proportion (plausibly a sizeable majority, depending on where you draw the line) of catastrophic risk researchers would, and if you ask, do, reject[59]:

    • The particular prioritisations made
    • The methods used to arrive at those prioritisations, and/or
    • The very conceptualisation of individual “risks” itself
  • It is the product of a small homogenous group of people with very similar views

There are important efforts to mitigate some of these issues, e.g. cause area exploration prizes, but the central issue remains.

The core of the problem here seems to be one of objectives: optimality vs robustness. Some quick definitions (in terms of funding allocation):

  • Optimality = the best possible allocation of funds
    • In EA this is usually synonymous with “the allocation with the highest possible expected value”
    • This typically has an unstated second component: “assuming that our information and our assumptions are accurate”
  • Robustness = capacity of an allocation to maintain near-optimality given conditions of uncertainty and change

In seeking to do the most good possible, EAs naturally seek optimality, and developed grantmaking tools to this end. We identify potential strategies, gather data, predict outcomes, and take the actions that our models tell us will work the best.[60] This works great when you’re dealing with relatively stable and predictable phenomena, for instance endemic malaria, as well as most of the other cause areas EA started out with.

However, now that much of EA’s focus has turned to global catastrophic risk, existential risk, and the long-term future, we have entered areas where optimality becomes fragility. We don’t want most of our eggs in one or two of the most speculative baskets, especially when those eggs contain billions of people. We should also adjust for the fact that we may over-rate the importance of things like AI, for reasons discussed in other sections.

Given the fragility of optimality, robustness is extremely important. Existential risk is a domain of high complexity and deep uncertainty, dealing with poorly-defined low-probability high-impact phenomena, sometimes covering extremely long timescales, with a huge amount of disagreement among both experts and stakeholders along theoretical, empirical, and normative lines. Ask any risk analyst, disaster researcher, foresight practitioner, or policy strategist: this is not where you optimise, this is where you maintain epistemic humility and cover all your bases. Innumerable people have learned this the hard way so we don’t have to.
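The optimality/robustness trade-off above can be made concrete with a toy Monte Carlo sketch. Every number here is hypothetical and purely illustrative: intervention A has a much higher expected value than intervention B, but its estimate carries heavy-tailed second-order uncertainty, while B is well-understood and stable.

```python
import random

random.seed(0)

# Toy model -- all numbers hypothetical. A is "speculative, high expected
# value with heavy-tailed uncertainty"; B is "modest but well-understood".
def true_value_A():
    return random.lognormvariate(3, 2)   # median ~20, mean ~148, 5th pct ~0.8

def true_value_B():
    return random.gauss(10, 1)           # mean 10, 5th pct ~8.4

def simulate(share_A, trials=100_000):
    """Mean and 5th-percentile outcome when share_A of the budget goes to A."""
    outcomes = sorted(
        share_A * true_value_A() + (1 - share_A) * true_value_B()
        for _ in range(trials)
    )
    return sum(outcomes) / trials, outcomes[int(0.05 * trials)]

for share in (1.0, 0.5, 0.0):
    mean, p5 = simulate(share)
    print(f"{share:.0%} to A: mean outcome {mean:7.1f}, 5th percentile {p5:5.1f}")
```

The exact numbers are irrelevant; the qualitative pattern is the point: the expected-value-maximising allocation (everything to A) is also the one with by far the worst plausible outcomes, while a mixed allocation gives up some expected value in exchange for a much higher floor.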

Thus, we argue that, even if you strongly agree with the current prioritisations / methods, it is still rational for you to support a more pluralist and robustness-focused approach given the uncertainty, expert disagreement, and risk management best-practices involved.

As well as a general diversification of the grantmaking community and a deliberate effort to value critical and community-external projects, a larger number and variety of funding sources and methods would likely be a good idea, especially if this was used as an opportunity to evaluate a range of different options.

There have been laudable efforts to decentralise grantmaking, e.g. the FTX Future Fund’s re-granting scheme. However, regrantors were picked by the central organisation (and tended to subscribe to all or most of EA orthodoxy), and even then grants still required approval from the central organisation. An admirable step in the right direction, to be sure, but in our view there is room to take several more.

One interesting route for us to explore might be lottery funding, where projects are chosen at random after an initial pass to remove bad-faith and otherwise obviously low-quality proposals. This addresses a surprisingly large number of problems in grantmaking and science funding (reducing bias and scientific conservatism, for example), and has been supported by multiple philosophers of science working on existential risk.
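A minimal sketch of the two-stage lottery described above (the proposal data and the quality bar are entirely hypothetical): screen out proposals that fail a light-touch review, then draw winners at random until the budget is spent.

```python
import random

def lottery_fund(proposals, passes_quality_bar, budget):
    """Two-stage funding lottery: screen out obviously low-quality or
    bad-faith proposals, then draw at random until the budget runs out.
    Chance, not fine-grained scoring, decides among eligible proposals,
    so rankings cannot smuggle in grantmaker bias or conservatism."""
    eligible = [p for p in proposals if passes_quality_bar(p)]
    random.shuffle(eligible)
    funded, remaining = [], budget
    for proposal in eligible:
        if proposal["cost"] <= remaining:
            funded.append(proposal)
            remaining -= proposal["cost"]
    return funded

# Hypothetical proposals; the "serious" flag stands in for an initial
# good-faith/quality review, not a full evaluation.
proposals = [
    {"name": "Biosecurity fieldwork", "cost": 50, "serious": True},
    {"name": "Perpetual motion machine", "cost": 30, "serious": False},
    {"name": "Forecasting tooling", "cost": 40, "serious": True},
    {"name": "Community survey", "cost": 20, "serious": True},
]

winners = lottery_fund(proposals, lambda p: p["serious"], budget=70)
print([p["name"] for p in winners])
```

A real lottery would need rules for partial funding and budget remainders; this greedy draw is just the skeleton.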

OpenPhil’s and wider EA’s funding practices have many advantages: for instance, they require far less admin than conventional scientific funding, which accelerates progress and maximises the time researchers spend researching rather than applying for the opportunity to do so. This is great, but there is room for improvement. Much of it boils down to our aforementioned problems with intellectual openness and wheel-reinventing: we instinctively use the (grantmaking) tools we have lying around when we enter a field, rather than stepping back and asking what the best way forward is in our new environment.

On another note, there does not seem to be any good information on whether grantmakers are effective or improving at forecasting the success of projects. Given that this is an extremely difficult and impactful task, it seems reasonable that there should be a significant level of oversight and transparency.

Intermission

The councillor comes with his battered old suit
And his head all filled with plans
Says "It's not for myself, nor the fame or wealth
But to help my fellow man."

Fist in the air and the first to stand
When the Internationale plays
Says "We'll break down the walls of the old Town Hall,
Fight all the life-long day!"

Ten years later, where is he now?
He's ditched all the old ideas
Milked all the life from the old cash cow
Now he's got a fine career
Now he's got a fine career.

A Fine Career – Chumbawamba

Governance & Power

We align suspiciously well with the interests of tech billionaires (and ourselves)

Summary: [61] EA is largely reliant on the goodwill of a small number of tech billionaires, and as a result fails to question the practice of elite philanthropy as well as the ways by which these billionaires acquired their wealth. Our cause prioritisations align suspiciously well with the interests and desires of both tech billionaires and ourselves. We are not above motivated reasoning.

EA is reliant on funding, and the vast majority of funds come from a handful of tech billionaires: Dustin Moskovitz and Cari Tuna got most of their wealth through Facebook (now Meta) and Asana, Vitalik Buterin has Ethereum, and Sam Bankman-Fried had FTX.

Elite philanthropy has faced numerous criticisms, from how it boosts and solidifies the economic and political power of the ultra-wealthy to the ways in which it undermines democracy and academic freedom. This issue has been studied and discussed at extreme length, so we will not expand further on the basic point, but recent events strongly suggest that EA should re-examine its relationship to the practice and seriously consider other sources of funding.

Furthermore, becoming a billionaire often involves a lot of unethical or risk-seeking behaviour, and according to some ethical codes the very act of being a billionaire is immoral in itself. The sources of EA funds in particular can sometimes be morally questionable. Cryptocurrency is of debatable social value, is full of money laundering, fraud and scams, and has been created and promoted as a deliberate political project to dodge taxes, concentrate power in the hands of the ultra-wealthy, and financialise an ever-growing proportion of human life.[62] As for Facebook, there is unfortunately an abundance of evidence that its impact on the world is likely to be net-negative.

The Effective Altruism movement is not above conflicts of interest. Relying on a small number of ultra-wealthy members of the tech sector incentivises us to accept or even promote their political, philosophical, and cultural beliefs, at the expense of the rigorous critical examination EA prides itself on. This may undermine even the most virtuous movement over the long term. Indeed, EA institutions and leaders rarely if ever interrogate the processes and structures that donors rely upon (digital surveillance, “Web 3.0”, neoliberal capitalism, and so on). The question of whether, for instance, making large quantities of money in the tech industry should give somebody the right to exercise significant control over the future of humanity is answered with an implicit but resounding “Yes.”

Our models sometimes even assume that (an American corporation) creating an “aligned” AGI, the fulfilment of Silicon Valley’s (not to mention much of the Pentagon’s…) collective dreams, will solve all other major problems.[63]

Indeed, it is possible that certain members of the EA leadership were aware of Sam Bankman-Fried’s unethical practices some time ago and were seemingly unable or unwilling to do anything about it. Additionally, Bankman-Fried is not the only morally questionable billionaire to have been courted by EA (e.g. Ben Delo).

It is worth noting that the areas EA focuses on most intensely (the long-term future and existential risk, and especially AI risk within that) align remarkably well with the sorts of things tech billionaires are most concerned about: longtermism is the closest thing to “doing sci-fi in real life”[64], existential catastrophes are one of the few ways in which wealthy people[65] could come to harm, and AI is the threat most interesting to people who made their fortunes in computing.

Fears about technological stagnation and slowed population growth receive pride of place in key EA texts, which strikingly parallel elite worries about increased labour costs.

Most of the proposed interventions also reflect the interests of Silicon Valley. Differential technological development, energy innovation, and high-tech solutions to pandemics are all favoured a priori. There is little to no support for bans on AGI projects, nor moratoria on Lethal Autonomous Weapons Systems, facial recognition, or new fossil fuel infrastructure. Similarly, priority concerns for the long-term future focus on economic elite interest areas like technological progress and GDP growth over other issues that are at least as critical but would undermine the power and/or status of wealthy philanthropists, like workplace democratisation or wealth redistribution. Again, it is not that any of these positions are inherently wrong because they align with elite interests, just that this is a bias we really need to be aware of.

Contrast the AI situation to climate change, routinely dismissed in EA, where the problems are messy, often mundane, predominantly political, and put the very concept of economic growth under debate, and where the greatest risk is posed to poor people from the Global South. Compare also with issues like global poverty, which very few people within EA are directly affected by (and which the funders, by definition, are not!) and which has come to be deemed “lower impact” within some of EA.[66]

Interestingly, a huge proportion of EA’s intellectual infrastructure can be traced back to the academic climate of the USA during the Cold War, where left-wing thinkers were eradicated from (analytic) philosophy by McCarthyist purges, Robert McNamara pushed for “rationalisation” and quantification throughout the US establishment, and the RAND Corporation developed concepts like Rational Choice Theory, Operations Research, and Game Theory. Indeed, the current President and CEO of RAND, Jason Matheny, is a CSET founder and former FHI researcher. Aside from the Silicon Valley influences (from which we get the blogposts, Californian Ideology, and most of the technofetishism), EA’s intellectual heritage is largely one of philosophy and economics intentionally stripped of their ability to challenge the status quo. As ever, that’s not to say that things like analytic philosophy or Game Theory are inherently evil or anything – they’re really quite good for some things – just that they are the tools we have for specific historical and political reasons, they are not the only ones available, and we should be critical of how and where we employ them.

The relative prioritisations we describe also fit rather well with the disciplinary and cultural backgrounds of us EAs. It seems that our subjectively-generated quantifications just so happened to have led us to conclude that the best way to improve (or even save) the world is to pay analytic philosophers and computer scientists like us large sums of money to work on the problems we read about in our favourite sci-fi novels.

It is possible that this truly is a coincidence and that our current prioritisations are correct,[67] but we should seriously consider what other factors might have been at play, especially given the potential for motivated reasoning embedded in our shared methods of thought.

On the topic of motivated reasoning, EA has been criticised in the past for being wasteful with its funds. Examples include buying Wytham Abbey[68] (which was on the market for £15,000,000), networking retreats taking place in the Bahamas, and funding for undergraduates to get their laundry done for them because their time is too valuable for them to do it themselves. A focus on frugality in service of others has evolved to incorporate generous expense accounts and all-expenses-paid trips to international EAGs.

Community builders are paid extremely high salaries despite these often being undergraduate students (or recent graduates) running student societies – something students generally do for free. To our knowledge there has not been a public explanation for how these numbers were reached, nor one for whether this is the most effective use of money.

There is also the problem of financial robustness. EA projects are highly dependent on the fortunes of a handful of people in two closely intertwined industries (tech and crypto). Such a small number of points of failure create serious resilience issues, as we saw during the FTX collapse. It is easier said than done, of course, but we strongly suggest that EA makes an effort to diversify its funding sources.

We are members of a movement dedicated to altruism, and we want to do what’s best for the world. That doesn’t mean that we are immune to (unconscious) bias, cultural influence, or motivated reasoning. If we want to do the most good, we need to closely examine why our authentic beliefs about “doing the most good” are so similar to the ones we and our financiers would like us to have, and which just so happen to involve ourselves living very comfortable and interesting lives.

Decentralised in theory, centralised in practice

Summary: The EA movement does not have a formal “CEO”, but the vast majority of power is held by a small number of unaccountable individuals. The movement is also centralised around a tight cluster of social and professional ties, creating issues around conflicts of interest.

The Effective Altruism movement is formally decentralised but informally centralised. We have no official “leader” nor a movement-wide formal hierarchical structure, but:

  • The vast majority of funding in EA is controlled by a very small number of people

    • Specifically Open Philanthropy, which is led by Holden Karnofsky and Alexander Berger, and overwhelmingly funded by tech billionaire couple Dustin Moskovitz and Cari Tuna
      • Almost all EA organisations began with or were scaled by funding from OpenPhil (CEA, 80k, CSET, GPI, Longview, MIRI, etc.), and many (likely most) other EA grantmaking bodies themselves receive a significant proportion of their funding from OpenPhil
    • Formerly, we also had Sam Bankman-Fried and the FTX Foundation, which was run by a small team led by Nick Beckstead
  • Access to the main EA Global event (a key networking opportunity) is also controlled by a very small number of people

    • Admission to locally-organised EAGx events is more decentralised
  • Media engagement and community health/training is mostly handled by a small number of people at the CEA, and almost all media appearances are made by a smaller number still (Will MacAskill, to a lesser degree Toby Ord, and until recently Sam Bankman-Fried)

    • Keynotes and “fireside chats” at EA events are disproportionately filled by MacAskill, Ord, and senior grantmakers/funders
  • EA’s two major book projects of recent times (i.e. _The Precipice_ and _What We Owe the Future_) were written by members of the top leadership (i.e. Toby Ord and Will MacAskill)

    • The (truly formidable and well-funded) press push for the latter book tightly focused on Will as a personality[69]
  • A very, very small number of people are on the boards of a massive proportion of major EA institutions (most notably Will MacAskill)

The social circles of EA’s upper rungs are incredibly tight, and many of the most powerful people within EA are current or former close friends, flatmates, or romantic partners. This phenomenon is replicated to a lesser extent at the lower rungs, as many community groups serve as social hubs for their members.

Relatedly, EA organisations are very tightly interconnected, both by funding (notably via OpenPhil) and by people. Luke Muehlhauser, for example, left his role as Director of MIRI to join OpenPhil in May 2015. Less than two years later, MIRI received a $500,000 grant from OpenPhil, with donations to date totalling over $14 million. Helen Toner worked at GiveWell, then OpenPhil, then GovAI, then CSET. Both GovAI and CSET receive OpenPhil funding, with CSET having received close to $100 million. Six OpenPhil staff (approximately 10%) previously worked at the FHI, and many others have either worked at other EA orgs or have close friends who do. To be clear, we are not accusing Luke, Helen, or anyone else of any kind of malpractice; they simply illustrate a revolving door that is not healthy in any social system.

Such a closely wound social-professional network is bound to create issues, especially conflicts of interest, as the probability increases that a grantmaker will be friends or otherwise connected with potential grantees. This came to a head in August 2019, when conflict of interest statements revealed that several grants made by the Long-Term Future Fund were made to housemates and personal friends of grantmakers. Later posts indicated that there would be stricter rules around conflicts of interest in future, but the LTFF appears to have discontinued public conflict of interest reporting after August 2019.[70]

The Effective Altruism movement gained size, funding, and influence very quickly, and it shows the signs of that experience. We still act like a new movement or a startup in many ways, with (often informal) decisions being heavily reliant on social ties and personal trust.[71] There are strengths to this, but ultimately large movements and organisations must necessarily create formal structures to ensure that they operate effectively and ethically. EA should not excise all its social qualities – we are a movement, not a corporation, and we are not arguing that EA should become any kind of anonymous bureaucracy[72] – but public reporting of important information, formal and transparent governance structures, and stringent conflict of interest regulations seem like reasonable suggestions for a movement containing thousands of people and billions of dollars.

Deciding together better

Summary: Decision-making within EA is currently oligarchic, opaque, and unaccountable. Empirical and theoretical research as well as numerous practical examples indicate that deliberative mechanisms can measurably improve on important aspects of decision-making, but even small experiments have been rejected by the leadership without explanation.

Despite exercising significant power over the direction of a large and influential movement, none of the people or groups we listed in the previous section are in any meaningful sense accountable to the EA membership, and the decisions they make are overwhelmingly made behind closed doors. The rank-and-file is welcome to contribute to discussions, e.g. through EA Forum posts, but decision-making is essentially oligarchic, that is, “rule by a few”. The leaders do not have to justify any decisions or answer any questions they don’t want to.

This centralisation of power reached a flashpoint when Will MacAskill tried to broker a deal between Sam Bankman-Fried and Elon Musk over the purchase of Twitter. Given that Bankman-Fried had committed his wealth to EA, this action, had it succeeded, would have taken large amounts of money from other EA projects. However, it is unclear how or why buying a stake in Twitter would be an optimal (or even good) use of money, and the decision was seemingly made by Will and Sam alone.

Will’s intentions here were undoubtedly good, but that is not enough to justify one or two men taking what could have been the most consequential decision ever made in the name of EA with little, if any, discussion or consent.

No matter how altruistic or intelligent one is, no single person is objective or immune to the corrupting influence of power. In fact, there is good evidence that power makes you both overconfident and less empathetic, which poses obvious issues when making highly impactful decisions about altruism. To make a perhaps unnecessary statement, opaque and unaccountable decision-making by a small unelected elite does not have a good historical track record.

EA has passed its startup phase and grown into a mature movement with considerable influence: we have a duty to be responsible with how our movement evolves, and take care not to lock in suboptimal or dangerous values. Power pools when left on its own, if for no other reason than the process of preferential attachment,[73] and organisations need active and powerful countermeasures to avoid gradually concentrating more and more power in the hands of fewer and fewer people.[74] Insofar as people are given power, a system of transparency and accountability is vital to ensure that actions taken on behalf of a movement are indeed the actions of that movement.[75]
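The preferential-attachment dynamic mentioned above can be shown with a toy simulation (purely illustrative; no claim is made about actual EA funding flows). Each unit of resource or attention goes to an actor with probability proportional to what they already hold, and concentration emerges even though every actor starts out identical.

```python
import random

random.seed(1)

def preferential_attachment(n_actors=100, n_rounds=10_000):
    """Each round, one new unit of funding/attention arrives and goes to an
    actor with probability proportional to their current holdings. No actor
    is more talented than any other; concentration emerges from the
    dynamics alone."""
    holdings = [1] * n_actors  # everyone starts equal
    for _ in range(n_rounds):
        winner = random.choices(range(n_actors), weights=holdings)[0]
        holdings[winner] += 1
    return sorted(holdings, reverse=True)

holdings = preferential_attachment()
top_10_share = sum(holdings[:10]) / sum(holdings)
print(f"Top 10% of actors hold {top_10_share:.0%} of resources")
```

Under perfect equality the top decile would hold 10%; under these rich-get-richer dynamics it reliably ends up holding several times that, which is why the text argues that countermeasures must be active rather than assumed.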

The issues of transparency and accountability become especially problematic when dealing with tasks as huge as eradicating poverty or preventing human extinction: these are communal projects, with stakeholders numbering in the billions. We cannot be so arrogant as to assume that we, the “epistemically superior” elite of wealthy white dudes, should simply impose our preferred solutions from the top down. Projects with the aim of doing the most good should be embarked upon in cooperation and consultation with the people affected.[76] We should be transparent about what our interests are, how our decisions are made, and where our money comes from.

Even beyond ethical considerations, as long as decisions are made behind closed doors the community is only able to criticise them after they have been made. This is an inefficient and generally ineffective process that does not allow errors to be corrected before their negative consequences materialise. Inclusive, transparent decisions will naturally be epistemically superior because they receive greater, more diverse input from the start. In fact, we have good reasons to believe that democratic decisions outperform other kinds, in large part due to the collective intelligence properties we mentioned in previous sections. If the question of the Twitter purchase had been put to the membership or a representatively-sampled assembly of members, what would the outcome have been?

There are plenty of methods for us to choose from in crafting better decision-making structures, often supported by a wealth of research and real-world success stories.

Consensus building tools gather the views of many people, identify cruxes, and help build consensus. Pol.is, for instance, has seen significant success when implemented in Taiwan, even on deeply polarised issues. EA could easily employ tools such as these to discover what the membership really believes about certain issues, create better-informed consensus on key issues, and rigorously update our views. Indeed, certain community members have already started doing this.

Elsewhere, sortition assemblies (also known as “citizens’ juries”) have shown promise. Here, a representative random sample of a population is presented with the best-quality evidence[77] on a topic, given time to discuss and deliberate, and asked to produce collective decisions or recommendations. Such methods have an excellent track record: from Ireland to Mongolia, they have allowed major political decisions to be made in a consensual and evidence-led way. We believe that these hold great potential for EA, especially with regard to major strategic decisions, big-picture funding-allocation questions, and navigating the crises/soul-searching we are currently embroiled in.
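Mechanically, drawing such an assembly is essentially stratified random sampling. A rough sketch (the membership data and strata below are invented for illustration, not real EA demographics):

```python
import random

random.seed(2)

def sortition_sample(members, strata_key, sample_size):
    """Draw a stratified random sample: each stratum (e.g. cause area,
    career stage, region) gets seats in proportion to its share of the
    membership, and seat-holders within a stratum are chosen by lot."""
    strata = {}
    for m in members:
        strata.setdefault(strata_key(m), []).append(m)
    total = len(members)
    assembly = []
    for group in strata.values():
        seats = round(sample_size * len(group) / total)
        assembly.extend(random.sample(group, min(seats, len(group))))
    return assembly

# Hypothetical membership: 60% global health, 30% x-risk, 10% animal welfare.
members = (
    [{"id": i, "area": "global health"} for i in range(60)]
    + [{"id": i, "area": "x-risk"} for i in range(60, 90)]
    + [{"id": i, "area": "animal welfare"} for i in range(90, 100)]
)

assembly = sortition_sample(members, lambda m: m["area"], sample_size=20)
print({a: sum(m["area"] == a for m in assembly)
       for a in ("global health", "x-risk", "animal welfare")})
```

With uneven percentages, rounding can shift the assembly size by a seat or two; real implementations handle this with largest-remainder methods, and real assemblies stratify on several dimensions at once.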

Furthermore, there is no particular reason why EA institutions shouldn’t be run by their members. Worker self-management has been shown to be effective, durable, and naturally better suited to collaborative, mission-oriented work than traditional top-down rule. We are not suggesting that everyone becomes part-time managers – there is certainly a role for operations and coordination specialists – but big-picture decisions about the strategy and funding of an organisation should be made by the people that create and maintain it.[78]

Ultimately, what fits our specific context will likely be determined by experimentation. Zoe Cremer provides an excellent plan of action for funding decisions:

  • Within 5 years: EA funding decisions are made collectively
    • First set up experiments for a safe cause area with small funding pots that are distributed according to different collective decision-making mechanisms
      • Subject matter experts are always used and weighed appropriately in this decision mechanism
    • Experiment in parallel with: randomly selected samples of EAs are to evaluate the decisions of one existing funding committee - existing decision-mechanisms are thus ‘passed through’ an accountability layer
    • All decision mechanisms have a deliberation phase (arguments are collected and weighed publicly) and a voting phase (majority voting, quadratic voting...)
    • Depending on the cause area and the type of choice, either fewer (experts + randomised sample of EAs) or more people (any EA or beyond) will take part in the funding decision.
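For readers unfamiliar with the quadratic voting mentioned in Cremer’s plan, a minimal sketch (the ballot format, issues, and 100-credit budget are our own assumptions for illustration, not details of the proposal): casting v votes on an issue costs v² voice credits, so voters can express intensity of preference, but doing so gets quadratically expensive.

```python
def quadratic_tally(ballots, credit_budget=100):
    """Quadratic voting: casting v votes on an issue costs v**2 voice
    credits out of a fixed per-voter budget. Ballots that overspend
    are rejected; valid votes are summed per issue."""
    totals = {}
    for ballot in ballots:
        cost = sum(v * v for v in ballot.values())
        if cost > credit_budget:
            continue  # invalid ballot: spent more credits than allowed
        for issue, votes in ballot.items():
            totals[issue] = totals.get(issue, 0) + votes
    return totals

# Hypothetical ballots: positive = for, negative = against.
ballots = [
    {"fund X": 5, "fund Y": -3},  # cost 25 + 9 = 34 credits
    {"fund X": -2, "fund Y": 7},  # cost 4 + 49 = 53 credits
    {"fund X": 9, "fund Y": 9},   # cost 81 + 81 = 162: over budget, rejected
]

print(quadratic_tally(ballots))
```

The quadratic cost is the design choice doing the work: a voter can dominate one issue they care deeply about, but only by giving up influence nearly everywhere else.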

Some of the benefits of deliberation and democracy have been noted in EA’s Improving Institutional Decision-Making community, and indeed deliberative democracy itself has roots in attempts to avoid “strategic” reasoning that have profound similarities to EA’s preferred epistemic approaches.

However, deliberative reforms, and even small experiments in different types of collective decision-making, have been rejected by the leadership with little explanation.

If EA wants to improve its ability to identify, prioritise, and solve problems, it should arrange itself optimally for that task. EA is full of incredible people with diverse expertise; we ought to harness that.

We are not suggesting that everyone in EA votes on every decision any EA institution makes. That would be silly. We are suggesting that the decision-making process itself is democratised, with individual decisions being made on the appropriate level. For instance, how a particular organisation is run should be up to the members of that organisation, and larger movement-wide decisions should be decided by assemblies or polls of members.

This isn’t just democracy for democracy’s sake: democratic structures would play an important epistemic (and thus instrumental) role in improving our impact on the world. Democratic reforms would also help protect against conflicts of interest as well as stemming the tide of disillusionment in the movement, helping EA to retain talent.

This is not far-flung utopianism or ivory-tower theory, it is how millions of people have successfully lived and worked across the world for hundreds of years, plausibly for as long as humans have existed.

Even movements that are (at best) agnostic on the subject of democracy, for instance Marxist-Leninist political parties, frequently have votes on constitutional and strategic issues, as well as leaders that are both elected and recallable by the membership. It is possible to have doubts about substantial democratising reforms without wishing to retain the overly top-down status quo.

Thus, the final point. We can talk all we like, but at the moment we have the system that we have, and solving structural problems will require the consent of those most empowered by those structures.

Therefore, the final part of this section is addressed directly to them.

Most of the people with power in the EA movement have been pivotal in building it. They have expanded EA from a basement in Oxford to the vibrant global community we see today. They are genuine inspirations to many of us (even when we disagree with some of their decisions) and some of us joined the movement as a direct result of the examples they set. But what worked before doesn’t work now, and we have tools at our disposal that are better-suited to the situation EA now finds itself in.

We now have a movement of thousands of smart, passionate, and dedicated people who often make considerable personal sacrifices in order to do as much good as possible. Our views matter as well.

We need to take full advantage of our greatest source of judgement and insight: ourselves. If we don’t, we risk condemning our movement to a slow calcification and decline. This is how we build a sustainable, dynamic, and mature movement.

If you believe in this community, you should believe in its ability to make its own decisions.

Conclusion

The Effective Altruism movement has already contributed to major improvements in the world and to humanity’s trajectory going forward. However, our current impact pales in comparison to the enormous potential EA has to change the world for the better over the coming years. To do this, EA must be able to accurately analyse the world and act accordingly. Important steps have been made towards this goal already, but they are not enough.

As it stands, EA neglects the importance of collective epistemics and overemphasises individual rationality, and as a result cultivates a community that is homogenous, hierarchical, and intellectually insular. EA is overwhelmingly white, male, upper-middle-class, and of a narrow range of (typically quantitative) academic backgrounds. EA reading lists and curricula over-emphasise a very narrow range of authors, who are assumed to be highly intelligent and thus deferred to. This narrows the scope of acceptable thought, and generates path dependencies and intellectual blind-spots.

The term “value-alignment” often hides an implicit community orthodoxy: a commitment to a particular package of views, including not just effective altruism but also longtermism, utilitarianism, Rationalist-derived epistemics, liberal-technocratic philanthropy, Whig historiography, the ITN framework, and the Techno-Utopian Approach to existential risk. Subscription to this package is a very different thing to being committed to “doing the most good”, but the two are treated as interchangeable. Hiring and funding practices that select for “value-aligned” individuals thus cultivate a homogenous and orthodox community.

EA is very open to “shallow” technical critiques. “Deep” critiques, in contrast, can dispute core EA beliefs and practices, criticise prominent EAs, and even question capitalism. These are much more likely to be rejected out of hand or treated as hostile, and EA has a suite of rhetorical and structural methods for dismissing them. These problems are magnified by the structure of the EA Forum, which gives some (typically quite senior and/or orthodox) community-members far more voting power than others.

The power of a small number of comparatively orthodox grantmakers makes raising important concerns dangerous for one’s career and community membership. Given how EA dominates many members’ career trajectories, social lives, and identities, making such “deep critiques” is simply not worth the risk. EA loses out on many valuable projects and updating opportunities due to the consequent chilling effect.

EA’s focus on the quantitative is powerful when addressing problems suitable for quantification, but can cause serious inaccuracy and overconfidence when applied to problems that are not. This is particularly visible in grantmaking, where problems range from the blindspots generated by siloed thinking to multiple methodological issues associated with the ITN framework.

The major issue, however, is how we over-optimise interventions on the basis of doubtful numerical estimates made in inappropriate information environments, which is highly concerning due to the stakes involved. Characteristics of existential risk – deep uncertainty, high complexity, long timelines, poorly-defined phenomena, and low-probability high-impact events – are just those in which robustness-focused strategies outperform optimising ones.

There is a wealth of available material on how to act under such circumstances, from foresight methodologies to vulnerability reduction practices to robust decision-making tools, but these are neglected because of EA’s intellectual insularity as well as the Founder Effect. There are many other disciplines and practices that would be valuable to EA, but the above social-epistemic problems as well as narrow and homogenous reading lists and media diets cause them to be unknown to or ignored by much of the community.

EA can confuse value-alignment and seniority with expertise. Orthodox EA positions on some highly consequential issues are derived from unrigorous blogposts by prominent EAs with little or no relevant training. They use methods and come to conclusions that would be considered fringe by the relevant expert communities, but this is not adequately questioned because of (1) EA’s disciplinary homogeneity and intellectual insularity preventing EAs from coming across opposing perspectives, and (2) inappropriate deference and unwarranted assumptions about the superiority of EA rationality (and thus EA competence) causing external expert perspectives to be dismissed. Elsewhere, expertise is appealed to inconsistently to justify pre-decided positions, and powerful people are treated as authorities on topics for which they have no relevant qualifications or experience.

Our intellectual insularity, narrow conception of “good thinking”, and overconfidence can make engagement with EA difficult and exhausting for domain-experts, and they often withdraw quickly, seeing EAs as “weird” people “doing their own thing, I guess”, or burning themselves out trying to be heard.[79] This encourages further overconfidence and allows us to believe orthodox views are better-substantiated than they actually are.

Several viewpoints common within EA, including liberal-technocratic politics, a preference for speculative technofixes, and a belief in the overwhelming importance of AI alignment, align suspiciously well with the interests and desires of both our tech-billionaire donors and ourselves. EA institutions fail to critique the ethical and political implications of our donors’ wealth and power, and what used to be a movement based on frugality has evolved into one in which we receive very healthy salaries as well as enviable benefits. This raises the spectre of motivated reasoning, which we are particularly vulnerable to as a result of our heavy reliance on sometimes controversial or untested quantitative tools like Individual Bayesian Thinking. Subjectively-generated, empirically un-bounded quantifications make it easy to rationalise and/or justify anything by coming up with appropriate “rough estimates” of incredibly uncertain values.

Our movement, while not formally hierarchical, vests the vast majority of power in the hands of a small number of individuals within a tight cluster of social and professional networks. This makes us particularly susceptible to revolving-door dynamics and conflicts of interest.

Decision-making is opaque, unaccountable to the membership, and almost invariably top-down. This pattern of decision-making is associated with a wealth of ethical, psychological, and historical problems, and has already incurred serious risks to the movement.

There are several techniques for increasing deliberation and democracy within the movement, including consensus-building tools, sortition assemblies, and employee self-management. These are very well supported by empirical and theoretical research as well as numerous practical examples, and are likely to instrumentally improve decision-making outcomes.

As long as the problems we describe remain in place, EA will continue to alienate newcomers and limit its impact. Feedback loops may cause EA to become ever-more homogenous, hierarchical, insular, and narrow, locking us onto an ever-more rigid trajectory. Solving many of these problems will require the consent of those most empowered by them.

History holds many examples of organised groups of intelligent, well-educated, well-intentioned people causing considerable amounts of harm, from liberal eugenics to Marxism-Leninism. EA has already been involved in a number of scandals, and we have the potential to cause tremendous harm given our growing power, from playing down the importance of climate change to speeding up AGI development, from legitimising, empowering, and funding “Agents of Doom” to undercutting movements for social change.

If we are to hold power, we need to be able to wield it wisely.

The FTX crash was a shock to all of us, and we have to use this painful but valuable opportunity to change our movement for the better. We may not get a chance like this again.

Coda

There are thousands of people alive today who wouldn’t be if it wasn’t for EA. There are millions of animals in better living conditions because of our community. Risks that threaten the very existence of our species are on the global agenda thanks in part to our movement.

A few thousand people, dedicated, energetic and caring, have done this. If we play our cards well and choose the right path now, this may only be the beginning.

We need to choose carefully, though: countless people, innumerable animals, and perhaps even the future of our species may depend on it.

We have grown and gained power before we have gained wisdom. It is now time for us to mature as we grow and age. Things won’t be easy, but change is not just possible, but necessary.

One step, then another, then another.

Suggested reforms

Below, we have a preliminary, non-exhaustive list of suggestions for structural and cultural reform that we think may be good ideas and should certainly be discussed further.

It is of course plausible that some of them would not work; if you think so for a particular reform, please explain why! We would like input from a range of people, and we certainly do not claim to have all the answers!

In fact, we believe it important to open up a conversation about plausible reforms not because we have all the answers, but precisely because we don’t.

Italics indicate reforms strongly inspired by or outright stolen from Zoe Cremer’s list of structural reform ideas. Some are edited or merely related to her ideas; they should not be taken to represent Zoe’s views.

Asterisks (*) indicate that we are less sure about a suggestion, but sure enough that we think it is worth considering seriously, e.g. through deliberation or research. Otherwise, we have been developing or advocating for most of these reforms for a long time and have a reasonable degree of confidence that they should be implemented in some form or another.

Timelines are suggested to ensure that reforms can become concrete. If stated, they are rough estimates, and if there are structural barriers to a particular reform being implemented within the timespan we suggest, let us know!

Categorisations are somewhat arbitrary; we just needed to break up the text for ease of reading.

Critique

General

  • EAs must be more willing to make deep critiques, both in private and in public
    • You are not alone, you are not crazy!
    • There is a much greater diversity of opinion in this community than you might think
    • Don’t assume that the people in charge must be smarter than you, and that you must be missing something if you disagree – even most of them don’t think that!
  • EA must be open to deep critiques as well as shallow critiques
    • We must temper our knee-jerk reactions against deep critiques, and be curious about our emotional reactions to arguments – “Why does this person disagree with me? Why am I so instinctively dismissive about what they have to say?”
    • We must be willing to accept the possibility that “big” things may need to be fixed and that some of our closely-held beliefs are misguided
    • Our willingness to consider a critique should be independent of the seniority of the authors or subject(s) of that critique
    • When we reject critiques, we should present our reasons for doing so
  • EAs should read more deep critiques of EA, especially external ones
    • For instance this blog and this forthcoming book
  • EA should cut down its overall level of tone/language policing
    • Norms should still be strongly in favour of civility and good-faith discourse, but anger or frustration cannot be grounds for dismissal, and deep critique must not be misinterpreted as aggression or “signalling”
    • Civility must not be confused with EA ingroup signalling
    • Norms must be enforced consistently, applying to senior EAs just as much as newcomers
  • EAs should make a conscious effort to avoid (subconsciously/inadvertently) using rhetoric about how “EA loves criticism” as a shield against criticism
    • Red-teaming contests, for instance, are very valuable, but we should avoid using them to claim that “something is being done” about criticism and thus we have nothing to worry about
    • “If we are so open to critique, shouldn’t we be open to this one?”
    • EAs should avoid delaying reforms by professing to take critiques very seriously without actually acting on them
  • EAs should state their reasons when dismissing critiques, and should be willing to call out other EAs if they use the rhetoric of rigour and even-handedness without its content
  • EAs, especially those in community-building roles, should send credible/costly signals that EAs can make or agree with deep critiques without being excluded from or disadvantaged within the community
  • EAs should be cautious of knee-jerk dismissals of attempts to challenge concentrations of power, and seriously engage with critiques of capitalist modernity
  • EAs, especially prominent EAs, should be willing to cooperate with people writing critiques of their ideas and participate in adversarial collaborations
  • EA institutions and community groups should run discussion groups and/or event programmes on how to do EA better

Institutions

  • Employees of EA organisations should not be pressured by their superiors to not publish critical work
  • Funding bodies should enthusiastically fund deep critiques and other heterodox/“heretical” work
  • EA institutions should commission or be willing to fund large numbers of zero-trust investigations by domain-experts, especially into the components of EA orthodoxy
  • EA should set up a counter foundation that has as its main goal critical reporting, investigative journalism and “counter research” about EA and other philanthropic institutions [within 12 months]*
    • This body should be run by independent people and funded by its own donations, with a “floor” proportional to other EA funding decisions (e.g. at least one researcher/community manager/grant programme, with admin fees at a proportionate level)
    • If this foundation is established, EA institutions should cooperate with it
  • EA institutions should recruit known critics of EA and offer them e.g. a year of funding to write up long-form deep critiques
  • EA should establish public conference(s) or assemblies for discussing reforms within 6 months, with open invitations for EAs to attend without a selection process. For example, an “online forum of concerns”:
    • Every year invite all EAs to raise any worries they have about EA central organisations
    • These organisations declare beforehand that they will address the top concerns and worries, as voted by the attendees
    • Establish voting mechanism, e.g. upvotes on worries that seem most pressing

Red Teams

  • EA institutions should establish clear mechanisms for feeding the results of red-teaming into decision-making processes within 6 months
  • Red teams should be paid, composed of people with a variety of views, and former- or non-EAs should be actively recruited for red-teaming
    • Interesting critiques often come from dissidents/exiles who left EA in disappointment or were pushed out due to their heterodox/“heretical” views (yes, this category includes a couple of us)
  • The judging panels of criticism contests should include people with a wide variety of views, including heterodox/“heretical” views
  • EA should use criticism contests as one tool among many, particularly well-suited to eliciting highly specific shallow critiques

Epistemics

General

  • EAs should see EA as a set of intentions and questions (“What does it mean to ‘do the most good’, and how can I do it?”) rather than a set of answers (“AI is the highest-impact cause area, then maybe biorisk.”)
  • EA should study social epistemics and collective intelligence more, and epistemic efforts should focus on creating good community epistemics rather than merely good individual epistemics
    • As a preliminary programme, we should explore how to increase EA’s overall levels of diversity, egalitarianism, and openness
  • EAs should practise epistemic modesty
    • We should read much more, and more widely, including authors who have no association with (or even open opposition to) the EA community
    • We should avoid assuming that EA/Rationalist ways of thinking are the only or best ways
    • We should actively seek out not only critiques of EA, but critiques of and alternatives to the underlying premises/assumptions/characteristics of EA (high modernism, elite philanthropy, quasi-positivism, etc.)
    • We should stop assuming that we are smarter than everybody else
  • When EAs say “value-aligned”, we should be clear about what we mean
    • Aligned with what values in particular?
    • We should avoid conflating the possession of the general goal of “doing the most good” with subscription to the full package of orthodox views
  • EAs should consciously separate:
    • An individual’s suitability for a particular project, job, or role
    • Their expertise and skill in the relevant area(s)
    • The degree to which they are perceived to be “highly intelligent”
    • Their perceived level of value-alignment with EA orthodoxy
    • Their seniority within the EA community
    • Their personal wealth and/or power
  • EAs should make a point of engaging with and listening to EAs from underrepresented disciplines and backgrounds, as well as those with heterodox/“heretical” views
  • The EA Forum should have its karma/commenting system reworked to remove structural forces towards groupthink within 3 months. Suggested specific reforms include, in gently descending order of credence:
    • Each user should have equal voting weight
    • Separate agreement karma should be implemented for posts as well as comments
    • A “sort by controversial” option should be implemented
    • Low-karma comments should not be hidden
    • Low-karma comments should be occasionally shunted to the top
  • EA should embark on a large-scale exploration of “theories of change”: what are they, how do other communities conceptualise and use them, and what constitutes a “good” one? This could include:*
    • Debates
    • Lectures from domain-experts
    • Panel discussions
    • Series of forum posts
    • Hosting of experts by EA institutions
    • Competitions
    • EAG framed around these questions
    • Etc.
  • When EA organisations commission research on a given question, they should publicly pre-register their responses to a range of possible conclusions
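To make the “sort by controversial” reform suggested above concrete, here is a minimal sketch of one possible scoring rule, loosely inspired by Reddit’s controversy sort: a post scores highly only when it attracts many votes *and* those votes are evenly split. The post titles and vote counts are invented for illustration.

```python
def controversy_score(upvotes: int, downvotes: int) -> float:
    """Highest when votes are both numerous and evenly split.

    Loosely follows Reddit's 'controversial' sort: magnitude ** balance,
    where balance is the minority/majority vote ratio.
    """
    if upvotes <= 0 or downvotes <= 0:
        return 0.0  # unanimous posts are not controversial
    magnitude = upvotes + downvotes
    balance = min(upvotes, downvotes) / max(upvotes, downvotes)
    return magnitude ** balance

# Hypothetical posts as (title, upvotes, downvotes); note that under the
# equal-voting-weight reform, every user's vote counts once here.
posts = [
    ("broadly popular", 90, 5),
    ("evenly split", 40, 38),
    ("small disagreement", 6, 5),
]
ranked = sorted(posts, key=lambda p: controversy_score(p[1], p[2]),
                reverse=True)
print([title for title, _, _ in ranked])
# "evenly split" ranks first despite having fewer total upvotes
```

A karma-weighted system would instead multiply each vote by the voter’s weight before tallying, which is precisely the structural force towards groupthink the reform above aims to remove.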

Ways of Knowing

  • EAs should consider how our shared modes of thought may subconsciously affect our views of the world – what blindspots and biases might we have created for ourselves?
  • EAs should increase their awareness of their own positionality and subjectivity, and pay far more attention to e.g. postcolonial critiques of western academia
    • History is full of people who thought they were very rational saying very silly and/or unpleasant things: let’s make sure that doesn’t include us
  • EAs should study other ways of knowing, taking inspiration from a range of academic and professional communities as well as indigenous worldviews

Quantification

  • EAs should not assume that we must attach a number to everything, and should be curious about why most academic and professional communities do not
    • We should study cost-benefit trade-offs of quantification (e.g. ease of comparison/analysis vs information loss & category errors) and learn from other communities about how to best integrate numerical data with other kinds of information
  • Bayes’ theorem should be applied where it works
    • EA institutions should commission studies (preferably by independent statisticians, psychologists, philosophers of probability, etc.) into the circumstances under which individual subjective Bayesian reasoning actually outperforms other modes of thought, by what criteria, and how this varies by subject/domain
    • Until (and indeed after) the conclusions of these studies are published, EAs should be aware of the criticisms of such techniques, and should not confuse the use of a particular (heavily contested) thinking-tool with “transparent reasoning”, and certainly not with the mere application of “rationality” itself
    • Important decisions should not be based upon controversial yet-to-be-proven techniques, especially under conditions that current evidence suggests they are ill-suited to
  • EAs should explore decision theories beyond expected value reasoning, as well as other ways of acting optimally in different environments
    • The goal of maximising value is orthogonal to scientific claims about our ability to accurately predict levels of value under particular conditions
  • When EAs make numerical estimates or forecasts, we should be wholly transparent about the reasoning processes, data, and assumptions we used to generate them
    • We should avoid uncritically repeating the estimates of senior EAs, especially when such transparency of reasoning is not present
    • We should be very specific about the events our probabilities actually refer to, given sensitivities to problem-framing
    • Where phenomena remain poorly-defined, they should be clarified (e.g. through foresight exercises) as a matter of priority, rather than generating a probability anyway and labelling it a “rough estimate”
  • EAs should be wary of the potential for highly quantitative forms of reasoning to (comparatively easily) justify anything
    • We should be extremely cautious about e.g. high expected value estimates, very low probabilities being assigned to heterodox/“heretical” views, and ruin risks
    • We should look into methods of putting hard ethical and theoretical boundaries on numbers, e.g. refusing to undertake actions with a ruin risk above x%, regardless of the results of expected value calculations
    • We should use Bayesian reasoning where it works (see above)

Diversity

  • EA institutions should select for diversity
    • With respect to:
      • Hiring (especially grantmakers and other positions of power)
      • Funding sources and recipients
      • Community outreach/recruitment
    • Along lines of:
      • Academic discipline
      • Educational & professional background
      • Personal background (class, race, nationality, gender, etc.)
      • Philosophical and political beliefs
    • Naturally, this should not be unlimited – some degree of mutual similarity of beliefs is needed for people to work together – but we do not appear to be in any immediate danger of becoming too diverse
  • Previous EA involvement should not be a necessary condition to apply for specific roles, and job postings should not assume that all applicants will identify with the label “EA”
  • EA institutions should hire more people who have had little to no involvement with the EA community providing that they care about doing the most good
  • People with heterodox/“heretical” views should be actively selected for when hiring to ensure that teams include people able to play “devil’s advocate” authentically, reducing the need to rely on highly orthodox people accurately steel-manning alternative points of view
  • Community-building efforts should be broadened, e.g. involving a wider range of universities, and group funding should be less contingent on the perceived prestige of the university in question and more focused on the quality of the proposal being made
  • EA institutions and community-builders should promote diversity and inclusion more, including funding projects targeted at traditionally underrepresented groups
  • A greater range of people should be invited to EA events and retreats, rather than limiting e.g. key networking events to similar groups of people each time
  • There should be a survey on cognitive/intellectual diversity within EA
  • EAs should not make EA the centre of their lives, and should actively build social networks and career capital outside of EA

Openness

  • Most challenges, competitions, and calls for contributions (e.g. cause area exploration prizes) should be posted where people not directly involved within EA are likely to see them (e.g. Facebook groups of people interested in charities, academic mailing lists, etc.)
  • Speaker invitations for EA events should be broadened away from (high-ranking) EA insiders and towards, for instance:
    • Subject-matter experts from outside EA
    • Researchers, practitioners, and stakeholders from outside of our elite communities
      • For instance, we need a far greater input from people from Indigenous communities and the Global South
  • External speakers/academics who disagree with EA should be invited to give keynotes and talks, and to participate in debates with prominent EAs
  • EAs should make a conscious effort to seek out and listen to the views of non-EA thinkers
    • Not just to respond!
  • EAs should remember that EA covers one very small part of the huge body of human knowledge, and that the vast majority of interesting and useful insights about the world have and will come from outside of EA

Expertise & Rigour

Rigour

  • Work should be judged on its quality, rather than the perceived intelligence, seniority or value-alignment of its author
    • EAs should avoid assuming that research by EAs will be better than research by non-EAs by default
  • EAs should place a greater value on scientific rigour
    • We should use blogposts, Google Docs, and similar works as accessible ways of opening discussions and providing preliminary thoughts, but rely on peer-reviewed research when making important decisions, creating educational materials, and communicating to the public
    • When citing a blogpost, we should be clear about its scope, be careful to not overstate its claims, and not cite it as if it is comparable to a piece of peer-reviewed research
  • EAs should perform proper literature reviews, situate our claims within pre-existing literature, and when we make claims that deviate from expert consensus/norms we should explicitly state and justify this
    • The most valuable fringe ideas are extremely high-impact, but the mean fringe idea is likely to be net-negative
  • EA institutions should commission peer-reviewed research far more often, and be very cautious of basing decisions on shallow-dives by non-experts
    • For important questions, commission a person/team with relevant expertise to do a study and subject it to peer review
    • For the most important/central questions, commission a structured expert elicitation

Reading

  • Insofar as a “canon” is created, it should be of the best-quality works on a given topic, not the best works by (orthodox) EAs about (orthodox) EA approaches to the topic
    • Reading lists, fellowship curricula, and bibliographies should be radically diversified
    • We should search everywhere for pertinent content, not just the EA Forum, LessWrong, and the websites of EA orgs
    • We should not be afraid of consulting outside experts, both to improve content/framing and to discover blind-spots
  • EAs should see fellowships as educational activities first and foremost, not just recruitment tools
  • EAs should continue creating original fellowship ideas for university groups
  • EAs should be more willing to read books and academic papers

Good Science

  • EAs should consider the impact of EA’s cultural, historical, and disciplinary roots on its paradigmatic methods, assumptions, and prioritisations
    • What are the historical roots of our current cause prioritisations and preferred methodologies?
    • Why are we, for instance, so instinctively reductionist?
    • If existential risk and/or EA were to be reinvented from the ground up, what methods, disciplines, prioritisations, etc. would we choose?
  • EAs should value empiricism more, and be cautious of assuming that all important aspects of a topic can be derived from first principles through the proper application of rationality
  • EAs should be curious about why communities with decades of experience studying problems similar to the ones we study do things the ways that they do
  • EAs, especially those working in existential risk, should draw from disciplines such as:
    • Disaster Risk Reduction
    • Resilience Theory
    • Complex Adaptive Systems
    • Futures & Foresight
    • Decision-Making under Deep Uncertainty and Robust Decision-Making
    • Psychology & Neuroscience
    • Science & Technology Studies
    • The Humanities and Social Sciences in general
  • EAs should re-examine the siloing of issues under specific “cause areas”, and avoid relegating non-specific-hazard-focused existential risk work to a homogenous and de-valued “misc and meta” category
    • Often separation of causes is warranted (shrimp welfare advocacy is unlikely to have a major impact on AI risk), but our desire to categorise and understand the world can lead us to create artificial boundaries

Experts & Expertise

  • EAs should deliberately broaden their social/professional circles to include external domain-experts with differing views
  • EAs should be consistent when appealing to expertise, and be cautious of subconsciously using it selectively to confirm our biases
  • EA institutions should have their policy recommendations vetted by external experts and/or panels of randomly-selected EAs before they are promoted by the Centre for Long-Term Resilience, Simon Institute, etc.*
  • When hiring for research roles at medium to high levels, EA institutions should select in favour of domain-experts, even when that means passing over a highly “value-aligned” or prominent EA

Funding & Employment

Finance

  • EAs should take care not to confuse the total net worth of EA donors with the actual resources of the EA community, especially given how much net worth can vary with e.g. share values
  • Donors should commit a large proportion of their wealth to EA bodies or trusts controlled by EA bodies to provide EA with financial stability and as a costly signal of their support for EA ideas
  • Funding bodies should be far more selective of donors, based on:
    • Their personal ethics records
    • The ethical consequences and implications of their work
    • Their personal trustworthiness and reliability
    • The likely stability of their wealth
  • Funding bodies should within 6 months publish lists of sources they will not accept money from, regardless of legality
    • Tobacco?
    • Gambling?
    • Mass surveillance?
    • Arms manufacturing?
    • Cryptocurrency?
    • Fossil fuels?
  • Funding bodies should take advice on how to avoid inadvertently participating in “ethics-washing”, and publish the policies that result
  • The big funding bodies (OpenPhil, EA Funds, etc.) should be disaggregated into smaller independent funding bodies within 3 years
  • EA institutions should each reduce their reliance on EA funding sources and tech billionaires by 50% within the next 5 years
    • This ensures institutions need to convince non-members that their work is of sufficient quality and relevance
    • This also greatly increases the resilience of the EA movement, as institutions would no longer all be dependent on the same small number of funding sources

Grantmaking

  • Grantmakers should be radically diversified to incorporate EAs with a much wider variety of views, including those with heterodox/“heretical” views
  • Funding frameworks should be reoriented towards using the “right tool for the right job”
    • Optimisation appears entirely appropriate in well-understood, predictable domains, e.g. public health interventions against epidemic diseases[80]
    • But robustness is far superior when addressing domains of deep uncertainty, areas of high complexity, low-probability high-impact events, long timescales, poorly-defined phenomena, and significant expert disagreement, e.g. existential risk
    • Optimising actions should be taken on the basis of high-quality evidence, e.g. meta-reviews or structured expert elicitations, rather than being used as the default or even the only mode of operation
  • Grantmaking organisations should commission independent external evaluations of the efficacy of their work (e.g. the success rates of grantmakers in forecasting the impact or success of projects) within 6 months, and release the results of any internal work they have done to this end
  • Within 5 years, EA funding decisions should be made collectively
    • First set up experiments for a safe cause area with small funding pots that are distributed according to different collective decision-making mechanisms
      • For example, rotating panels or various forms of lottocracy
      • Subject matter experts are always used and weighed appropriately
    • Experiment in parallel with randomly selected samples of EAs evaluating the decisions of one existing funding committee
      • Existing decision-mechanisms are thus ‘passed through’ an accountability layer
    • All decision mechanisms should have a deliberation phase (arguments are collected and weighed publicly) and a voting phase (majority voting, quadratic voting, etc.)
    • Depending on the cause area and the type of choice, either fewer (experts + randomised sample of EAs) or more people (any EA or beyond) should take part in the funding decision
  • A certain proportion of EA funds should be allocated by lottery after a longlisting process to filter out the worst/bad-faith proposals*
    • The outcomes of this process should be evaluated in comparison to EA’s standard grantmaking methods as well as other alternatives
  • Grantmaking should require detailed and comprehensive conflict of interest reporting
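Purely as an illustration of the mechanism-design space sketched above (a voting phase using e.g. quadratic voting, plus a lottery over a longlist), not a concrete proposal: the credit budget, ballot format, and filter function below are all hypothetical.

```python
import random

CREDIT_BUDGET = 100  # hypothetical per-voter credit budget


def quadratic_tally(ballots):
    """Tally quadratic-voting ballots (each ballot maps proposal -> votes).

    Under quadratic voting a voter pays votes**2 credits per proposal,
    so ballots that exceed the credit budget are discarded.
    """
    totals = {}
    for ballot in ballots:
        if sum(v * v for v in ballot.values()) > CREDIT_BUDGET:
            continue  # over budget: invalid ballot
        for proposal, votes in ballot.items():
            totals[proposal] = totals.get(proposal, 0) + votes
    return totals


def longlist_lottery(proposals, passes_filter, k, seed=None):
    """Draw k grants at random from the proposals that survive a
    longlisting filter (screening out bad-faith/unworkable bids)."""
    rng = random.Random(seed)
    longlist = [p for p in proposals if passes_filter(p)]
    return rng.sample(longlist, min(k, len(longlist)))


# Two voters spend 34 and 40 credits respectively -- both within budget.
print(quadratic_tally([{"A": 5, "B": 3}, {"A": 2, "B": 6}]))  # {'A': 7, 'B': 9}
```

Evaluating such a lottery against standard grantmaking, as proposed above, would then simply mean comparing the realised impact of the randomly drawn winners with that of conventionally selected grants.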

Employment

  • Funding bodies should not be able to hire researchers who have been grant recipients within the last e.g. 5 years, nor should funders be able to join recipient organisations within e.g. 5 years of leaving their post
  • More people working within EA should be employees, with the associated legal rights and stability of work, rather than e.g. grant-dependent “independent researchers”
  • EA funders should explore the possibility of funding more stable, safe, and permanent positions, such as professorships

Governance & Hierarchy

Leadership

  • EAs should avoid hero-worshipping prominent EAs, and be willing to call it out among our peers
    • We should be able to openly critique senior members of the community, and avoid knee-jerk defence/deference when they are criticised
  • EA leaders should take active steps to minimise the degree of hero-worship they might face
    • For instance, when EA books or sections of books are co-written by several authors, co-authors should be given appropriate attribution
  • EAs should deliberately platform less well-known EAs in media work
  • EAs should assume that power corrupts, and EAs in positions of power should take active steps to:
    • Distribute and constrain their own power as a costly signal of commitment to EA ideas rather than their position
    • Minimise the corrupting influence of the power they retain and send significant costly signals to this effect
  • Fireside chats with leaders at EAG events should be replaced with:
    • Panels/discussions/double-cruxing discussions involving a mix of:
      • Prominent EAs
      • Representatives of different EA organisations
      • Less well-known EAs
      • External domain-experts
    • Discussions between leaders and unknown EAs

Decentralisation

  • EA institutions should see EA ideas as things to be co-created with the membership and the wider world, rather than transmitted and controlled from the top down
  • The community health team and grantmakers should offer community groups more autonomy, independence, and financial stability
    • Community-builders should not worry about their funding being cut if they disagree with the community health team or appear somewhat “non-value-aligned”
  • EA media engagement should be decentralised
    • Community-builders and researchers should be offered media training, rather than being told to never speak to the press and always forward journalists to the CEA

Democratisation

  • EA institutions should implement clear and accessible democratic mechanisms for constitutional change within 12 months
  • EA leadership figures should be democratically accountable to the membership, including mechanisms enabling the membership to elect them on a regular basis and to recall them if they underperform
  • EA institutions should be democratised within 3 years, with strategic, funding, and hiring policy decisions being made via democratic processes rather than by the institute director or CEO
  • EAs should be more open to institutions and community groups being run democratically or non-hierarchically
    • One experienced person and three comparatively inexperienced people will probably produce better answers together than the experienced person would alone
  • EA institutions should consider running referenda for the most consequential decisions (e.g. a large fraction of EA funds being used to help buy a social networking site)
  • EA institutions should consider having AGMs where the broader EA community can input into decision making

Transparency & Ethics

Community

  • EAs should expect their institutions to be transparent and open
  • EAs should make an effort to become more aware of EA’s cultural links to eugenic, reactionary and right-wing accelerationist politics, and take steps to identify areas of overlap or inheritance in order to avoid indirectly supporting such views or inadvertently accepting their framings

Institutions

  • EA institutions should make their decision-making structures transparent within 6 months, and be willing to publicly justify their decisions
  • EA institutions should list all of their funding sources (past and present) on their websites, including how much was received from each source, within 6 months
  • The minutes of grantmaker meetings should be made public*
  • There should be a full public mapping of all EA institutions
    • Who works or has worked at which organisations
    • Which organisations fund or have funded which others, when, and how much
    • Who is or has been on which boards of directors
    • Which organisations are or have been subsidiaries of other organisations
    • Etc.
  • EA institutions should increase transparency over:
    • Who gets accepted/rejected to EAG and similar events and why
    • Leaders/coordination forums
  • EA institutions should set up regular independent audits and assessments within 12 months
    • We’re a movement that grew out of evaluating charities; it’s only fair that we hold ourselves to the same standards
  • Quality journalists should be given full access to EA institutions to investigate*
  • EA institutions should set up whistleblower protection schemes for members of EA organisations within 6 months
    • Legal, financial and social support for those who want to come forward to make information public that is in the public interest
    • EA should explore the pros and cons of appointing an independent ombudsman, and the results of this exploration should be published within 12 months*
  • EA organisations should enable vetting and oversight by people external to the EA community, and be accountable to the wider public more generally. This could be achieved through, for instance:
    • Providing clear statements about how decisions/funding allocations were/are made
    • Taking advice on how this is done outside the EA community, e.g. in academia, industry, and NGOs

Moral Uncertainty

  • EAs should practise moral uncertainty/pluralism as well as talking about it
  • EAs who advocate using ethical safeguards such as “integrity” and “common-sense morality” should publicly specify what they mean by this, how it should be operationalised, and where the boundaries lie in their view
  • EA institutions that subscribe to moral uncertainty/pluralism should publish their policies for weighting different ethical views within 12 months

Let us know anything that needs explaining or clarifying, and especially which high-impact changes we have missed![81]

Contact Us

If you have any questions or suggestions about this article, EA, or anything else, feel free to email us at concernedEAs@proton.me


Notes


  1. At least, it was supposed to be the final draft. ↩︎

  2. In general, we think most of our points apply to most of EA, but far more to the longtermist side than the global poverty/animal welfare communities. ↩︎

  3. Indeed, there is significant doubt about whether awareness of cognitive biases actually reduces one’s susceptibility to them. ↩︎

  4. This imbalance is interesting given the popularity of “Superforecasting” within EA, where the vital importance of collaboration and the social organisation of that collaboration is well known, as well as the value assigned to improving institutional decision-making. ↩︎

  5. There is also a lot of relevant work in social epistemics and the philosophy of science. See, for instance, Longino’s (1990) criteria for objectivity in scientific communities: (1) recognised avenues of criticism, (2) shared standards, (3) community responsiveness to criticism, and (4) equality of intellectual authority. We contend that EA is much better at (1) and (2) than (3) and (4). ↩︎

  6. For a deeper engagement with the term, see here. ↩︎

  7. We’ve seen the term “heretical” used to describe beliefs (held by EAs) that significantly deviate from EA orthodoxy. ↩︎

  8. Defined by Toby Ord in The Precipice as a “mechanism for destroying humanity or our potential”, and including artificial intelligence, engineered pathogens, climate change, nuclear war, and so on. The closest term in Disaster Risk Studies would be “hazard”, but the usage of the word in The Precipice and beyond also seems to cover clusters of hazards, threats, drivers of vulnerability, indirect causes of hazard occurrence (artificial intelligence can’t kill you by itself, it needs “hands” as well as a “brain”), and several other concepts, few of which could be considered “mechanisms” in and of themselves. ↩︎

  9. We should remember that EA is sometimes worryingly close to racist, misogynistic, and even fascist ideas. For instance, Scott Alexander, a blogger who is very popular within EA, and Caroline Ellison, a close associate of Sam Bankman-Fried, speak favourably about “human biodiversity”, which is the latest euphemism for “scientific” racism. [Editor’s note: we left this in a footnote out of fear that a full section would cause enough uproar to distract from all our other points/suggestions. A full-length post exploring EA’s historical links to reactionary thought will be published soon]. ↩︎

  10. Several of the authors of this post fit this description eerily well. “Dan”, “Tom”, and “Chris” were other close contenders. ↩︎

  11. Many of the authors have backgrounds in the humanities and social sciences, and we see it as no coincidence that the issues we identify were noticed by people trained in modelling socio-cultural systems, critiquing arbitrary categorisations, and analysing structures of power. ↩︎

  12. It has been suggested that success in work or life may depend far more on emotional intelligence than “intellect”. ↩︎

  13. From Zoe Cremer’s Objections to Value-Alignment Among Effective Altruists: “Intellectual homogeneity is efficient in the short-term, but counter-productive in the long-run. Value-alignment allows for short-term efficiency, but the true goal of EA – to be effective in producing value in the long term – might not be met.“ ↩︎

  14. With the exception of (orthodox) economics and analytic philosophy. Note also that certain STEM areas have historically been neglected, even including (hardware) engineering until very recently. The “core” EA subjects are at once highly formal (i.e. mathematical/pure-logical), relatively un-empirical, and (typically) reductionist. There do not, for instance, seem to be very many EA anthropologists, historians, or social theorists, especially within the leadership. Perhaps if we had a few then the issues we describe would have been raised a long time ago. ↩︎

  15. Furthermore, diversity has a limited impact on decision-making if it is not combined with democracy; if EA was diverse but the leadership remained homogenous, there would still be problematic dynamics. ↩︎

  16. There is no one “EA response to critique” as each person is different, and nor is there one perfect classification scheme. This is simply a useful tool for thinking with. Alternatives welcome. ↩︎

  17. “It is difficult to get a man to understand something when his salary depends on his not understanding it.” - Upton Sinclair ↩︎

  18. In the broad political sense, rather than the American sense of “left of conservative”. ↩︎

  19. If you are new to the community or are reading this in the future: they were right. ↩︎

  20. To, and including, us. ↩︎

  21. Beyond simple downvoting, EA has developed its own rhetoric for subtly brushing off criticism: deep critiques are “poorly argued” or “likely net-negative” proposals made by people with “bad epistemics”. These and similar utterances, often simply asserted without any supporting argumentation, make dismissals seem intelligent and even-handed, even in cases where they are used as little more than EA code for “I disagree with this argument and I don’t like the author very much.” Elsewhere, critiques from outgroup writers are “bad optics”; PR problems to be solved rather than arguments to be engaged with. None of this is to say that the phrases are bad in themselves or that they are always used inappropriately, just that they should be used within logical arguments rather than as substitutes for them. ↩︎

  22. We have no idea how much of an impact this might have had on Sven Rone (the pseudonymous author of “The Effective Altruism movement is not above conflicts of interest”): theirs is an illustrative example, not a pillar of our argument. ↩︎

  23. The paper went through 27 revisions and almost as many reviewers to make sure it was written in a sufficiently conciliatory fashion to be taken seriously by EAs, but the authors faced accusations of being too “combative”, “uncharitable”, and “harsh” regardless, and were accused by some of “courage-signalling” or otherwise acting in bad faith. ↩︎

  24. Old Boy’s Network (British): “An exclusive informal network linking alumni of a particular (generally elite) school, or members of a social class or profession or organisation, in order to provide connections, information, and favours.” ↩︎

  25. Potential but speculative feedback loop: the longer you are in EA, the higher you are likely to climb, and the higher the risk of questioning orthodoxy, thus the more you tend to the EA mean, thus the more narrow and orthodox EA becomes. ↩︎

  26. The expected impact of deep critiques is further reduced by the fact that the leadership seems to rarely engage with them. In the case of Democratising Risk, leaders made a point of publicly stating that such critical work was valuable, but since then have not appeared to consider or discuss the content of the paper in any detail. Criticism can be de facto neutralised if those with power simply ignore it. ↩︎

  27. The vast majority of researchers, professionals, etc. do not try to quantitatively reason from first principles in this way. There seems relatively little consideration within EA of why this might be. ↩︎

  28. This is known in the philosophy of probability as the Problem of Priors. ↩︎

  29. That is, into the probability of making a given observation assuming that the hypothesis in question is true: P(E|H). ↩︎

  30. “There is no evidence that geopolitical or economic forecasters can predict anything ten years out." – Philip Tetlock ↩︎

  31. The conclusions of what is by far the most comprehensive and rigorous study of quantification in existential risk (Beard, Rowe, and Fox, 2020) is that all the methods we have at the moment are rather limited or flawed in one way or another, that the most popular methods are also the least rigorous, and that the best route forward is to learn from other fields by transparently laying out our reasoning processes for others to evaluate. ↩︎

  32. There’s probably a link to the Rationalist community’s emphasis on IQ here. [Editor’s note: see Bostrom]. ↩︎

  33. As Noah Scale puts it, “EAs [can] defer when they claim to argue.” ↩︎

  34. To clarify, we’re not saying that this sort of hierarchical sensibility is purely due to number-centric thinking: other cultural and especially class-political factors are likely to play a very significant role. ↩︎

  35. Informal observation also strongly suggests selective employment of civility norms: it seems you can get away with much more if you are well-known and your arguments conform to EA orthodoxy. ↩︎

  36. Sometimes performed in 30 hours of research or less. ↩︎

  37. This seems to fit a wider theme of many non-leadership EAs being more orthodox about EA ideas than the originators of those ideas themselves. ↩︎

  38. This is not to say that we are wholly opposed to these ideas, just that there is surprisingly little academic scrutiny and discussion of them given their importance to our movement. ↩︎

  39. EA also seems to have a general hostility toward the Planetary Boundaries framework that is rarely explained or justified, and climate risk claims in general are subjected to a far higher burden of proof than claims about e.g. AI risk. We do not all agree with Lenton or Rockström, but are rather highlighting inconsistencies. ↩︎

  40. Many members of Existential Risk Studies are not EAs, or are somewhat heterodox/”heretical” EAs. Given our occasional tendency to conflate the value-alignment of an author with the value of their work, it is unfortunately not surprising that the outputs of less ideologically selective institutes like the Centre for the Study of Existential Risk or the Global Catastrophic Risk Institute (never mind those of authors not working at EA-linked bodies at all) can be ignored or dismissed at times. ↩︎

  41. See footnote [8]. ↩︎

  42. We are aware of one young EA with a background in Disaster Risk Reduction who, after expressing an interest in existential risk, was repeatedly told by EAs to leave DRR and go into AI alignment. ↩︎

  43. An “...unpredictable event that is beyond what is normally expected of a situation and has potentially severe consequences.” – Investopedia ↩︎

  44. Defined by the DMDU Society as when “parties to a decision do not know, or cannot agree on, the system model that relates action[s] to consequences, the probability distributions to place over the inputs to these models, which consequences to consider[,] and their relative importances.” EA has developed a remarkably similar concept called “cluelessness”. ↩︎

  45. This is particularly problematic given that current discussions of the importance of future generations are due in significant part to the tireless work of indigenous communities and climate justice activists – i.e. groups almost entirely excluded and/or devalued by EA. ↩︎

  46. Another: technological determinism, implicit in most longtermist work, is largely dismissed and derided within Science & Technology Studies. It’s not completely fringe, but it’s definitely a minority position, yet EA seems to have never explored any alternatives, e.g. constructivist or co-productionist approaches to technology. ↩︎

  47. Note the probable link to Silicon Valley “disruptor” culture. ↩︎

  48. This also counteracts the flaws found in “High Modernist” approaches popular in EA, also known as “seeing like a state”. ↩︎

  49. For example, the IPCC has argued that “[limiting warming below 1.5°C] would require … building the capability to utilise indigenous and local knowledge.” ↩︎

  50. AI, engineered biology, climate change, and nuclear war, in steeply descending order of (perceived) importance. ↩︎

  51. Though still far outnumbered by TUA-aligned works due to the hegemonic power of EA orthodoxy and funding. ↩︎

  52. Given that the ecological crisis is a wicked problem requiring complex systems analysis it is unsurprising that the IPCC has outlined the necessity for systemic changes across a huge variety of domains (or what we might call “cause areas”), from land use, food production, and energy, to carbon capture technologies and institutional adaptations. ↩︎

  53. We do not know of any EA that put a probability on an FTX collapse, never mind one with anything like the consequences EA faced as a result of the one we witnessed. ↩︎

  54. An interesting exercise: consider an extinction scenario, work back through the chain of causation in as detailed a fashion as you can, consider all the factors at play and their interrelation, and ask yourself how productive it is to label the initial scenario as e.g. “extinction from synthetic biology”. ↩︎

  55. Though it is potentially problematic that OpenPhil’s list of focus areas is fairly constrained compared to e.g. that of the FTX Fund. ↩︎

  56. As well as growing the EA community itself. ↩︎

  57. cf. EA’s implicit commitment to liberal technocracy. ↩︎

  58. Only a narrow possibility space can be explored if one needs to roughly align with preferences of funders, e.g. for particular methods. In general, monist, hegemonic funding structures promote scientific conservativism. ↩︎

  59. Note that this is despite the fact that the field of global catastrophic risk is rather small and homogenous in itself, though by definition less homogenous than global catastrophic risk within EA. ↩︎

  60. “Predict and Act” vs “Explore and Adapt” again. ↩︎

  61. Inclusive “we”: the authors of this text are not immune to this either. ↩︎

  62. There’s an antisemitic element to this as well: crypto’s history is intimately bound up in far-right desires to create digital “sound money” to undermine the power of central banks, because in their eyes, (central) banks = Jews. Peter Thiel is also in the mix, as always. ↩︎

  63. The reticence of EAs to consider (political!) actions that might slow down AI progress is well-known, though this has begun to change recently. ↩︎

  64. This doesn’t discredit longtermism, and many of the authors are sympathetic to longtermism. ↩︎

  65. Like most EAs – including us! ↩︎

  66. At a minimum, poverty reduction has been dismissed as being “near-termist”, despite the descendants of people currently in poverty being far more likely to live in poverty themselves, and the fact that there is no guarantee that AI or other future technologies will actually reduce poverty (particularly as existing AI typically perpetuates or increases inequality). Several of us also wonder what evaluations of global poverty work would look like if they considered interventions that targeted the underlying causes of poverty rather than treating the symptoms. ↩︎

  67. We’re not trying to dismiss AI risk here – several of us work on AI risk – we just question why it is given such a huge emphasis. ↩︎

  68. Since this was initially written, there has been a lot of discussion about Wytham Abbey on the EA Forum. The purchase has been justified by the project leader, Owen Cotton-Barratt, who says that calculations were made which, depending on the numbers and analysis, may mean that this was a wise investment, as external conferences are expensive, and Effective Ventures could sell the abbey further down the line and potentially recoup a significant portion of the initial investment. However, we just don’t know: we have not seen the original numbers or a cost-effectiveness analysis. Given the response, it is clear that many people believe Wytham Abbey to be a frivolous purchase, which is not surprising. There should have been a more transparent and proactive justification of the benefits of the purchase and why those benefits justified the high cost. ↩︎

  69. MacAskill has expressed concern about hero-worship within EA, but we have not been able to find any instances where he has made a concerted effort to reduce it. ↩︎

  70. There seems to be one exception to this, explained by the anonymity of the grantee. ↩︎

  71. We have lost track of how often we have made or been asked to make significant purchases using our personal accounts on the verbal assurance that we will be reimbursed at some point in the future. ↩︎

  72. Which clearly have extensive problems of their own. ↩︎

  73. See also the “Iron Law of Institutions”, where “people who control institutions care first and foremost about their power within the institution rather than the power of the institution itself.” ↩︎

  74. Or reverse it after it has already happened. ↩︎

  75. Our problem isn’t with the leaders, but rather the structures that give them large amounts of unaccountable power. If we were in the same position, we are sure we would need just as much accountability and transparency to ensure we were doing a good job. ↩︎

  76. Postcolonial perspectives within the fields of Public Health and Development Studies will hold most of the answers to these questions as far as Global Health & Wellbeing is concerned. Existential risk is another problem entirely, and figuring out how to make the task of existential risk reduction a democratic one sounds like a good project if anyone is looking for ideas. There’s already been some work on “participatory futures”, for example the list at the bottom of this page. ↩︎

  77. See our discussion of expert opinion aggregation tools at the end of this section. ↩︎

  78. At the very least, Annual General Meetings that allow for broad community input would be a step in the right direction. ↩︎

  79. This in particular has been the experience of certain authors of this post. Being confidently dismissed by people you know to have negligible knowledge of your area of expertise gets tiring very quickly. ↩︎

  80. At least, as far as we know: few of us have much expertise in this domain. ↩︎

  81. We may update the list of reforms in response to suggestions from others. ↩︎

Comments (394)

[anonymous]

I appreciate you taking the effort to write this. However, like other commentators I feel that if these proposals were implemented, EA would just become the same as many other left wing social movements, and, as far as I can tell, would basically become the same as standard forms of left wing environmentalism which are already a live option for people with this type of outlook, and get far more resources than EA ever has. I also think many of the proposals here have been rejected for good reason, and that some of the key arguments are weak.

  1. You begin by citing the Cowen quote that "EAs couldn't see the existential risk to FTX even though they focus on existential risk". I think this is one of the more daft points made by a serious person on the FTX crash. Although the words 'existential risk' are the same here, they have completely different meanings, one being about the extinction of all humanity or things roughly as bad as that, and the other being about risks to a particular organisation. The problem with FTX is that there wasn't enough attention to existential risks to FTX and the implications this would have for EA. In contrast, EAs have put umpteen pers
…

I agree with most of your points, but strongly disagree with number 1, and am surprised to have heard over time that so many people thought this point was daft.

I don't disagree that "existential risk" is being employed in two very different senses in the two instances – so we agree there – but the broader point, which I think is valid, is this:

There is a certain hubris in claiming you are going to "build a flourishing future" and "support ambitious projects to improve humanity's long-term prospects" (as the FFF did on its website) only to not exist 6 months later and for reasons of fraud to boot. 

Of course, the people who sank untold hours into existential risk research aren't to blame, and it isn't an argument against x-risk/longtermist work, but it does show that EA, as a community missed something dire and critical and importantly something that couldn't be closer to home for the community.  And in my opinion that does shed light on how successful one should expect the longer term endeavours of the community to be.

Scott Alexander, from "If The Media Reported On Other Things Like It Does Effective Altruism":

Leading UN climatologist Dr. John Scholtz is in serious condition after being wounded in the mass shooting at Smithfield Park. Scholtz claims that his models can predict the temperature of the Earth from now until 2200 - but he couldn’t even predict a mass shooting in his own neighborhood. Why should we trust climatologists to protect us from some future catastrophe, when they can’t even protect themselves in the present?

The difference in that example is that Scholtz is one person so the analogy doesn't hold. EA is a movement comprised of many, many people with different strengths, roles, motives, etc and CERTAINLY there are some people in the movement whose job it was (or at a minimum there are some people who thought long and hard) to mitigate PR/longterm risks to the movement. 

I picture the criticism more like EA being a pyramid set in the ground, but upside down. At the top of the upside-down pyramid, where things are wide, there are people working to ensure the longterm future goes well on the object level, and perhaps would include Scholtz in your example. 

At the bottom of the pyramid things come to a point, and that represents people on the lookout for x-risks to the endeavour itself, which is so small that it turned out to be the reason why things toppled, at least with respect to FTX. And that was indeed a problem. It says nothing about the value of doing x-risk work.

I think that is a charitable interpretation of Cowen's statement: "Hardly anyone associated with Future Fund saw the existential risk to…Future Fund, even though they were as close to it as one could possibly be."
 

I think charitably, he isn't saying that any given x-risk researcher should have seen an x-risk to the FTX project coming. Do you?

I think I just don't agree with your charitable reading. The very next paragraph makes it very clear that Cowen means this to suggest that we should think less well of actual existential risk research:

Hardly anyone associated with Future Fund saw the existential risk to…Future Fund, even though they were as close to it as one could possibly be.

I am thus skeptical about their ability to predict existential risk more generally, and for systems that are far more complex and also far more distant.

I think that's plain wrong, and Cowen actually is doing the cheap rhetorical trick of "existential risk in one context equals existential risk in another context". I like Cowen normally, but IMO Scott's parody is dead on.

"EA didn't spot the risk of FTX and so they need better PR/management/whatever" would be fine, but I don't think he was saying that.

Yeah, I suppose we just disagree then. I think such a big error and hit to the community should lead any rational person to downgrade their confidence in what EA has to offer, and their trust that EA is getting things right.

Another side point: Many EAs like Cowen and think he is right most of the time. I think it is suspicious that when Cowen says something negative about EA, his point gets labelled things like "daft".

Hi Devon, FWIW I agree with John Halstead and Michael PJ re John's point 1.

If you're open to considering this question further, you may be interested in knowing my reasoning (note that I arrived at this opinion independently of John and Michael), which I share below.

Last November I commented on Tyler Cowen's post to explain why I disagreed with his point:

I don't find Tyler's point very persuasive: Despite the fact that the common sense interpretation of the phrase "existential risk" makes it applicable to the sudden downfall of FTX, in actuality I think forecasting existential risks (e.g. the probability of AI takeover this century) is a very different kind of forecasting question than forecasting whether FTX would suddenly collapse, so performance at one doesn't necessarily tell us much about performance on the other.

Additionally, and more importantly, the failure to anticipate the collapse of FTX seems to not so much be an example of making a bad forecast, but an example of failure to even consider the hypothesis. If an EA researcher had made it their job to try to forecast the probability that FTX collapses and assigned a very low probability to it after much effort, that probab

... (read more)
Cornelis Dirk Haupt
I disagreed with the Scott analogy, but after thinking it through it made me change my mind. Simply make the following modification:

"Leading UN climatologists are in serious condition after all being wounded in the hurricane Smithfield, which killed as many people as were harmed by the FTX scandal. These climatologists claim that their models can predict the temperature of the Earth from now until 2200 - but they couldn't even predict a hurricane in their own neighborhood. Why should we trust climatologists to protect us from some future catastrophe, when they can't even protect themselves or those nearby in the present?"

Now we are talking about a group rather than one person, and what they missed is much more directly within their domain expertise. I.e., like the FTX Future Fund team's domain expertise on EA money, it feels like something they shouldn't have been able to miss. Would you say any rational person should downgrade their opinion of the climatology community and any output it has to offer, and downgrade their trust that it is getting its 2200 climate change models right?

I shared the modification with an EA who - like me - at first agreed with Cowen. Their response was something like: "OK, so the climatologists not seeing the existential neartermist threat to themselves still appears to be a serious failure on their part that needs to be addressed (people they know died!) - but I agree it would be a mistake on my part to downgrade my confidence in their 2100 climate change model because of it."

However, we conceded that there is a catch: if the climatology community persistently finds its top UN climatologists wounded in hurricanes to the point that they can't work on their models, then rationally we ought to update that their productive output will be lower than expected, because they seem to have this neartermist blindspot to their own wellbeing and that of those nearby. This concession comes with asterisks, though. If we, for sake of argument, as
Michael_PJ
Tbh I took the Gell-Mann amnesia interpretation and just concluded that he's probably being daft more often in areas I don't know so much about.
peterhartree
This is what Cowen was doing with his original remark.
Linch
This feels wrong to me? Gell-Mann amnesia is more about general competency, whereas I thought Cowen was referring specifically to the category of "existential risk" (which I think is a semantics game, but others disagree)?
Greg_Colbourn
Cowen is saying that he thinks EA is less generally competent because of not seeing the x-risk to the Future Fund.
Linch
Again, if this were true, he would not specifically phrase it as existential risk (unless maybe he was actively trying to mislead).
Greg_Colbourn
Fair enough. The implication is there though.

Imagine a forecaster that you haven't previously heard of told you that there's a high probability of a new novel pandemic ("pigeon flu") next month, and their technical arguments are too complicated for you to follow.[1]

Suppose you want to figure out how much you want to defer to them, and you dug through to find out the following facts:

a) The forecaster previously made consistently and egregiously bad forecasts about monkeypox, covid-19, Ebola, SARS, and 2009 H1N1.

b) The forecaster made several elementary mistakes in a theoretical paper on Bayesian statistics.

c) The forecaster has a really bad record at videogames, like bronze tier at League of Legends.

I claim that the general competency argument technically goes through for a), b), and c). However, for a practical answer on deference, a) is much more damning than b) or especially c), as you might expect domain-specific ability on predicting pandemics to be much stronger evidence for whether the prediction of pigeon flu is reasonable than general competence as revealed by mathematical ability/conscientiousness or videogame ability.
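This deference point can be made concrete with a toy Bayesian sketch. All the numbers below are hypothetical and purely illustrative (nothing in the thread specifies them); the only thing distinguishing the three pieces of evidence is the likelihood ratio we assign them, which is plausibly much larger for domain-specific evidence like a) than for general-competence evidence like b) or c):

```python
def posterior(prior: float, likelihood_ratio: float) -> float:
    """Posterior P(reliable) after seeing evidence, where likelihood_ratio is
    P(evidence | unreliable forecaster) / P(evidence | reliable forecaster)."""
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds / likelihood_ratio
    return post_odds / (1 + post_odds)

prior = 0.5  # agnostic starting point about the pigeon-flu forecaster
# Hypothetical likelihood ratios for each observation:
evidence = {
    "a) bad past pandemic forecasts": 20.0,  # strongly domain-relevant
    "b) errors in a stats paper": 3.0,       # weakly relevant
    "c) bronze tier at League": 1.2,         # barely relevant
}
for label, lr in evidence.items():
    print(f"{label}: P(reliable) {prior:.2f} -> {posterior(prior, lr):.2f}")
```

Under these made-up ratios, a) drops P(reliable) to roughly 0.05 while c) barely moves it from 0.5, which is the sense in which all three arguments "technically go through" but only a) is practically damning.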

With a quote like 

Hardly anyone associated with Future Fund saw the existential risk to… Future

... (read more)
Greg_Colbourn
I agree that the focus on competency on existential risk research specifically is misplaced. But I still think the general competency argument goes through. And as I say elsewhere in the thread - tabooing "existential risk" and instead looking at Longtermism, it looks (and is) pretty bad that a flagship org branded as "longtermist" didn't last a year!
Linch
Funnily enough, the "pigeon flu" example may cease to be a hypothetical. Pretty soon, we may need to look at the track records of various agencies and individuals to assess their predictions on H5N1.
Devon Fritz 🔸
I agree that is the other way out of the puzzle. I wonder whom to even trust if everyone is susceptible to this problem...
Anthony Repetto
Thank you! I remember hearing about Bayesian updates, but rationalizations can wipe those away quickly. From the perspective of Popper, EAs should try "taking the hypothesis that EA..." and then try proving themselves wrong, instead of using a handful of data points to reach their preferred, statistically irrelevant conclusion, all the while feeling confident.
Jason
I don't think the parody works in its current form. The climate scientist claims expertise on climate-science x-risk through being a climate-science expert, not through being an expert on x-risk more generally. So him being wrong on other x-risks doesn't update my assessment of his views on climate x-risk that much. In contrast, if the climate scientist's organization built its headquarters in a flood plain and didn't buy insurance, the resulting flood which destroyed the HQ would reduce my confidence in their ability to assess climate x-risk, because they have shown themselves incompetent at least once at assessing climate risks close to them.

In contrast, EA (and the FF in particular) asserts/ed expertise in x-risk more generally. For someone claiming this kind of expertise, the events that would cause me to downgrade are different than for a subject-matter expert. Missing an x-risk under one's nose would count. While I don't think "existential risk in one context equals existential risk in another context," I don't think past performance has no bearing on estimates of future performance either.

I think assessing the extent to which the "miss" on FTX should cause a reasonable observer to downgrade EA's x-risk credentials has been made difficult by the silence-on-advice-of-legal-counsel approach. To the extent that the possibility of FTX drying up wasn't even on the radar of top leadership people, that would be a very serious downgrade for me. (Actually, it would be a significant downgrade in general confidence for any similarly-sized movement that lacked awareness that promised billions from a three-year-old crypto company had a good chance of not materializing.) A failure to specifically recognize the risk of very shady business practices (even if not Madoff 2.0) would be a significant demerit in light of the well-known history of such things in the crypto space. To the extent that there was clear awareness and the probabilities were just wrong in hinds
Michael_PJ
To perhaps make it clearer: I think EA is trying to be expert in "existential risks to humanity", and that really does have almost no overlap with "existential risks to individual firms or organizations". Or to sharpen the parody: if it was a climate-risk org that had got in trouble because it was funded by FTX, would that downgrade your expectation of their ability to assess climate risks?
Jason
But on mainstream EA assumptions about x-risk, the failure of the Future Fund materially increased existential risk to humanity. You'd need to find a similar event that materially changed the risk of catastrophic climate change for the analogy to potentially hold -- the death of a single researcher or the loss of a non-critical funding source for climate-mitigation efforts doesn't work for me.

More generally, I think it's probably reasonable to downgrade for missing FTX on "general competence" and "ability to predict and manage risk" as well. I think both of those attributes are correlated with "ability to predict and manage existential risk," the latter more so than the former. Given that existential-risk expertise is a difficult attribute to measure, it's reasonable to downgrade when downgrading one's assessment of more measurable attributes. Although that effect would also apply to the climate-mitigation movement if it suffered an FTX-level setback event involving insiders, the justification for listening to climate scientists isn't nearly as heavily loaded on "ability to predict and manage existential risk." It's primarily loaded on domain-specific expertise in climate science, and missing FTX wouldn't make me think materially less of the relevant people as scientists.

To be clear, I'm not endorsing the narrative that EA is near-useless on x-risk because it missed FTX. My own assumption is that people recognized a risk that FTX funding wouldn't come through, and that the leaders recognized a risk that SBF was doing shady stuff (cf. the leaked leader chat), although perhaps not a Madoff 2.0. I think those risks were likely underestimated, which leads me to a downgrade but not a massive one.
harfe
Alternatively, one could have said something like: [...] This, too, would not have been a good argument.
Robi Rahman
Scott's analogy is correct, in that the problem with the criticism is that the thing someone failed to predict was on a different topic. It's not reasonable to conclude that a climate scientist is bad at predicting the climate because they are bad at predicting mass shootings. If it were a thousand climate scientists predicting the climate a hundred years from now, and they all died in an earthquake yesterday, it's not reasonable to conclude that their climate models were wrong because they failed to predict something outside the scope of their models.
Greg_Colbourn
This. We can taboo the words "existential risk" and focus instead on Longtermism. It's damning that the largest philanthropy focused on Longtermism -- the very long term future of humanity -- didn't even last a year. A necessary part of any organisation focused on the long term is a security mindset. It seems that this was lacking in the Future Fund. In particular, nothing was done to secure funding.
Ian Turner
Perhaps, you know, they were focused more on the long term and not the short term?
Greg_Colbourn
You can't build a temple that lasts 1000 years without first ensuring that it's on solid ground and has secure foundations. (Or even a house that lasts 10 years for that matter.)
Ian Turner
Are we trying to build a temple? My understanding of the thinking behind most longtermist causes and interventions is that they are mostly about slightly decreasing the probability of a catastrophic event; or to put it differently, the idea is that there is a high probability that the intervention does nothing and a small probability that it does something incredibly important. From that perspective, I'm not sure that institutional longevity is really a priority, and I certainly don't think we can infer that longtermists aren't indeed focused on the long term.
Greg_Colbourn
Longtermism is wider than catastrophic risk reduction - e.g. it also encompasses "trajectory changes". It's about building a flourishing future over the very long term. (Personally I think x-risk from AGI is a short-term issue and should be prioritised, and Longtermism hasn't done great as a brand so far.)

Hi John,


Thank you for your response, and more generally thank you for having been consistently willing to engage with criticism on the forum.


We’re going to respond to your points in the same format that you made them in for ease of comparison.
 

Should EA be distinctive for its own sake or should it seek to be as good as possible? If EA became more structurally similar to e.g. some environmentalist movements in some ways, e.g. democratic decision-making, would that actually be a bad thing in itself? What about standard-practice transparency measures? To what extent would you prefer EA to be suboptimal in exchange for retaining aspects that would otherwise make it distinctive?
 

In any case, we’re honestly a little unsure how you reached the conclusion that our reforms would lead EA to be “basically the same as standard forms of left-wing environmentalism”, and would be interested in you spelling this out a bit. We assume there are aspects of EA you value beyond what we have criticised, such as an obsessive focus on impact, our commitment to cause-prioritisation, and our willingness to quantify (which is often a good thing, as we say in the post), etc., all of which are freque... (read more)

[anonymous]

Thanks for the detailed response. 

I agree that we don't want EA to be distinctive just for the sake of it. My view is that many of the elements of EA that make it distinctive have good reasons behind them. I agree that some changes in governance of EA orgs, moving more in the direction of standard organisational governance, would be good, though probably I think they would be quite different to what you propose and certainly wouldn't be 'democratic' in any meaningful sense. 

  1. I don't have much to add to my first point and to the discussion below my comment by Michael PJ. Boiled down, I think Cowen's point, stripped of the rhetoric, is just that EAs did a bad job on the governance and management of the risks involved in working with SBF and FTX, which is very obvious and everyone already agrees with. It simply has no bearing on whether EAs are assessing existential risk correctly, and enormous equivocation on the word 'existential risk' doesn't change that fact.
  2. Since you don't want diversity along essentially all dimensions, what sort of diversity would you like? You don't want Trump supporters; do you want more Marxists? You apparently don't want more right win
... (read more)
Ian Turner
I don't disagree with what is written here but the tone feels a bit aggressive/adversarial/non-collegial IMHO.

A simple back-casting or systems-mapping exercise (foresight/systems-theoretical techniques) would easily have revealed EA’s significant exposure and vulnerability (disaster risk concepts) to a potential FTX crash. The overall level of x-risk is presumably tied to how much research it gets, and the FTX crash clearly reduced the amount of research that will get done on x-risk any time soon. 

This is not the first time I've heard this sentiment and I don't really understand it. If SBF had planned more carefully, if he'd been less risk-neutral, things could have been better. But it sounds like you think other people in EA should have somehow reduced EA's exposure to FTX. In hindsight, that would have been good, for normative deontological reasons, but I don't see how it would have preserved the amount of x-risk research EA can do. If EA didn't get FTX money, it would simply have had no FTX money ever, instead of having FTX money for a very short time.

David Mathers🔸
'It is career suicide to criticise diversity.' This seems seriously hyperbolic to me, though I agree that if you're down on diversity, a non-negligible number of people will disapprove and assume you are right-wing/racist, and that could have career consequences. What's your best guess as to the proportion of academics who have had their careers seriously damaged for criticizing diversity in the fairly mild way you suggest here (i.e. that, as a very generic thing, it does not improve the accuracy of group decision-making), relative to those who have made such criticisms?

What percentage of Chinese people have ever been arrested for subversion?

Nicholas / Heather Kross
Strong agree with most of these points; the OP seems to not... engage on the object-level of some of its changes. Like, not proportionally to how big the change is or how good the authors think it is or anything?
Noah Scales
EDIT: Oh! It was Rockström, but the actual quote is: "The richest one percent must reduce emissions by a factor [of] 30, while the poorest 50% can actually increase emissions by a factor [of] 3", from Johan Rockström at #COP26: 10 New Insights in Climate Science | UN Climate Change. There he is talking about fair and just carbon emissions adjustments. The other insights he listed have economic implications as well, if you're interested. The accompanying report is available here. The quote is:

"Action on climate change is a matter of intra- and intergenerational justice, because climate change impacts already have affected and continue to affect vulnerable people and countries who have least contributed to the problem (Taconet et al., 2020). Contribution to climate change is vastly skewed in terms of wealth: the richest 10% of the world population was responsible for 52% of cumulative carbon emissions based on all of the goods and services they consumed through the 1990–2015 period, while the poorest 50% accounted for only 7% (Gore, 2020; Oswald et al., 2020). A just distribution of the global carbon budget (a conceptual tool used to guide policy) (Matthews et al., 2020) would require the richest 1% to reduce their current emissions by at least a factor of 30, while per capita emissions of the poorest 50% could increase by around three times their current levels on average (UNEP, 2020). Rich countries' current and promised action does not adequately respond to the climate crisis in general, and, in particular, does not take responsibility for the disparity of emissions and impacts (Zimm & Nakicenovic, 2020). For instanc
[comment deleted]

Overall this post seems like a grab-bag of not very closely connected suggestions. Many of them directly contradict each other. For example, you suggest that EA organizations should prefer to hire domain experts over EA-aligned individuals. And you also suggest that EA orgs should be run democratically. But if you hire a load of non-EAs and then you let them control the org... you don't have an EA org any more. Similarly, you bemoan that people feel the need to use pseudonyms to express their opinions and a lack of diversity of political beliefs ... and then criticize named individuals for being 'worryingly close to racist, misogynistic, and even fascist ideas' in essentially a classic example of the cancel culture that causes people to choose pseudonyms and causes the movement to be monolithically left wing. 

I think this is in fact a common feature of many of the proposals: they generally seek to reduce what is differentiated about EA. If we adopted all these proposals, I am not sure there would be anything very distinctive remaining. We would simply be a tiny and interchangeable part of the amorphous blob of left wing organizations.

It is true this does not apply to all of th... (read more)

Well stated. This post's heart is in the right place, and I think some of its proposals are non-accidentally correct. However, it seems that many of the post's suggestions boil down to "dilute what it means to be EA to just being part of common left-wing thought". Here's a sampling of the post's recommendations which provoke this:

  • EAs should increase their awareness of their own positionality and subjectivity, and pay far more attention to e.g. postcolonial critiques of western academia
  • EAs should study other ways of knowing, taking inspiration from a range of academic and professional communities as well as indigenous worldviews
  • EAs should not assume that we must attach a number to everything, and should be curious about why most academic and professional communities do not
  • EA institutions should select for diversity
  • Previous EA involvement should not be a necessary condition to apply for specific roles, and the job postings should not assume that all applicants will identify with the label “EA”
  • EA institutions should hire more people who have had little to no involvement with the EA community providing that they care about doing the most good
  • EA institutions and community-builders shoul
... (read more)

I don't think the point is that all of the proposals are inherently correct or should be implemented. I don't agree with all of the suggestions (agree with quite a few, don't agree with some others), but in the introduction to the 'Suggested Reforms' section they literally say:

Below, we have a preliminary non-exhaustive list of suggestions for structural and cultural reform that we think may be a good idea and should certainly be discussed further.

It is of course plausible that some of them would not work; if you think so for a particular reform, please explain why! We would like input from a range of people, and we certainly do not claim to have all the answers!

In fact, we believe it important to open up a conversation about plausible reforms not because we have all the answers, but precisely because we don’t.

Picking out in particular the parts you don't agree with may seem almost like strawmanning in this case, and people might be reading the comments, not the full post (I was very surprised by how long this was when I clicked on it; I don't think I've seen an 84-minute forum post before). But I'm not claiming this was intentional on either of your parts.

If taking positions that are perceived as left-wing makes EA more correct and more effective, then EA should still take up those positions. The OP has made a great effort to justify these points from a practical position of pursuing truth, effectiveness, and altruism, and they should not be dismissed just because they happen to fall on one side of the political spectrum. Similarly, just because an action makes EA less distinct, it doesn't mean it's not the correct thing to do.

This is true, but to the extent that these changes would make EA look/act like already-existing actors, I think it is fair to consider (1) how effective the similiar actors are, and (2) the marginal benefit of having more muscle in or adjacent to the space those actors occupy.

Also, because I think a clear leftward drift would have significant costs, I also think identifying the drift and those costs is a fair critique. As you move closer to a political pole, the range of people who will want to engage with your movement is likely to dwindle. Most people don't want to work in, or donate to, a movement that doesn't feel respectful toward them -- which I think is a strong tendency of almost all political poles.

At present, I think you can be moderately conservative or at least centrist by US standards and find a role and a place where you feel like you fit in. I think giving that range up has significant costs.

Also, because I think a clear leftward drift would have significant costs, I also think identifying the drift and those costs is a fair critique. As you move closer to a political pole, the range of people who will want to engage with your movement is likely to dwindle.

 

I think a moderate leftward shift on certain issues would actually increase popularity. The current dominant politics of EA seems to be a kind of Steven Pinker-style techno-liberalism, with a free speech absolutist stance and a vague unease about social justice activism. Whether or not you agree with this position, I think its popularity among the general public is fairly low, and a shift to mainstream liberal (or mainstream conservative) opinions would make EA more appealing overall. For example, a policy of banning all discussion of "race science" would in the long term probably bring in many more people than it deterred, because almost everybody finds discussing that topic unpleasant.

If your response to this is "wait, there are other principles at play that we need to take into consideration here, not just chasing what is popular", then you understand the reasons why I don't find "these positions would make EA more left-wing" to be a very strong argument against them. If following principles pushes EA one way or the other, then so be it.

Fwiw, I think your view that a leftward shift in EA would increase its popularity is probably Americocentric. I doubt it is true if you consider EA as a global movement rather than just a Western one.

Also, fwiw, I've lost track of how many people I've seen dismiss EA as "dumb left-wing social justice". EAs also tend to think the consequences of saying something are what matter. So we tend to be disliked both by free speech absolutists and by people who will never concede that properly discussing some controversial topics might be net positive despite the harm caused by talking about them. Some also see EA as tech-phobic; Steven Pinker famously dismissed EA concerns about AI alignment. If you spend time outside of EA in tech-optimism-liberal circles you see a clear divide: it isn't culturally the same. Despite this, I think I've also lost count of how many people I've seen dismiss EA as "right-leaning libertarian tech-utopia make-billionaires-rich nonsense".

We can't please everyone and it is a fool's errand to try.

One person's "Steven Pinker-style techno-liberalism, with a free speech absolutist stance and a vague unease about social justice activism" is another person's "Luddite free-speech-blocking SJW".

If following principles does not clearly push EA one way or the other, then also so be it.

titotal
My point was more that there's a larger audience for picking one side of the political spectrum than there is for awkwardly positioning yourself in the middle in a way that annoys both sides. I think this holds for other countries as well, but of course the political battles are different. If you wanted to appeal more to Western Europe you'd go left, to Eastern Europe you'd go right, to China you'd go some weird combination of left and right, etc. Really, I'm making the same point as you: chasing popularity at the expense of principles is a fool's errand.

I think there's a difference between "people in EA tend to have X, Y, and Z views" and those views being actively promoted by major orgs (which is the most natural reading of the proposal to me). Also, although free speech absolutism may not be popular in toto, most points on the US political spectrum at least find some common ground with that stance (they will agree on the outcome for certain forms of controversial speech).

I also think it likely that EA will need significant cooperation from the political system on certain things, particularly involving x-risk, and that becoming strongly left-identified sharply increases the risk you'll be summarily dismissed by a house of Congress, the White House, or non-US equivalents.

I don't think "race science" has any place in EA spaces, by the way.

Agree with this. We should de-politicize issues, if anything. Take climate change, for example: heavily politicized. But EA is not left-wing because 80,000 Hours acknowledges the severity and reality of climate change; that position is simply very likely to be true. And if truth happens to be more frequent in left-wing perspectives, then so be it.

Ariel Simnegar 🔸
I agree with you that EA shouldn't be prevented from adopting effective positions just because of a perception of partisanship. However, there's a nontrivial cost to doing so: the encouragement of political sameness within EA, and the discouragement of individuals or policymakers with political differences from joining EA or supporting EA objectives. This cost, if realized, could fall against many of this post's objectives:

  • We must temper our knee-jerk reactions against deep critiques, and be curious about our emotional reactions to arguments – "Why does this person disagree with me? Why am I so instinctively dismissive about what they have to say?"
  • We must be willing to accept the possibility that "big" things may need to be fixed and that some of our closely-held beliefs are misguided
  • EAs should make a point of engaging with and listening to EAs from underrepresented disciplines and backgrounds, as well as those with heterodox/"heretical" views
  • EAs should consider how our shared modes of thought may subconsciously affect our views of the world – what blindspots and biases might we have created for ourselves?
  • EA institutions should select for diversity, along lines of philosophical and political beliefs

It also plausibly increases x-risk. If EA becomes known as an effectiveness-oriented wing of a particular political party, the perception of EA policies as partisan could embolden strong resistance from the other political party. Imagine how much progress we could have had on climate change if it wasn't a partisan issue. Now imagine it's 2040, the political party EA affiliates with is urgently pleading for AI safety legislation and a framework for working with China on reducing x-risk, and the other party stands firmly opposed because "these out-of-touch elitist San Francisco liberals think the world's gonna end, and want to collaborate with the Chinese!"

I agree that EA should be accepting of a wide range of political opinions (although highly extreme and hateful views should still be shunned). 

I don't think the suggestions there are necessarily at odds with that, though. For example, increasing demographic diversity will probably increase political diversity as well, because people from extremely similar backgrounds have fairly similar politics. If you expand to people from rural backgrounds, you're more likely to get a country conservative; if you encourage more women, you're more likely to get feminists; if you encourage people from Ghana, you'll get whole new political ideologies nobody in Silicon Valley has even heard of. The politics of nerdy white men like me represent a very tiny fraction of the overall political beliefs that exist in the world.

When it comes to extreme views, it's worth noting that what's extreme depends a lot on the context.

A view like "homosexuality should be criminalized" is extreme in Silicon Valley but not in Uganda, where it's a mainstream political opinion. In my time as a forum moderator, I had to deal with a user from Uganda voicing those views, and in cases like that you have to make a choice about how inclusive you want to be of people expressing very different political ideologies.

In many cases where the political views of people in Ghana or Uganda substantially differ from those common in the US, those views are going to be perceived as highly extreme.

The idea that you can be accepting of the political ideologies of a place like Ghana, where the political discussion runs from "Yes, we have already forbidden homosexuality, but the punishment seems too low to discourage that behavior" to "The current laws against homosexuality are enough", while at the same time shunning highly extreme views, seems very unrealistic to me.

You might find people who are from Ghana and who adopted woke values, but those aren't giving you deep diversity in political viewpoints. 

For all the talk about decolonization, Silicon Valley liberals always seem very eager to deny people from Ghana or Uganda the ability to express mainstream political opinions from their home countries.

While on its face increasing demographic diversity seems like it would result in an increase in political diversity, I don't think that is actually true.

This rests on several assumptions:

  1. I am looking through the lens of U.S. domestic politics, and identifying political diversity by having representation of America's two largest political parties.
  2. Increases in diversity will not be evenly distributed across the American population. (White Evangelicals are not being targeted in a diversity push, and we would expect the addition of college grad+ women and BIPOC.)

Of all demographic groups, white college grad+ men, "Sams," are the most politically diverse group, at 48 D, 46 R. By contrast, the groups typically understood to be represented by increased diversity lean much further Democratic:

  • College Grad+ Women: 65 D, 30 R

One difficulty is the lack of a BIPOC breakdown by education level, but assuming the trend that increased education widens the Democratic disparity, these are useful lower bounds:

  • Black: 83 D, 10 R
  • Hispanic: 63 D, 29 R
  • Asian American: 72 D, 17 R

While I would caution against partisanship in the evaluation of ideas and programs, I don't think there's anything inherently wrong in ... (read more)

Reducing "political diversity" down to the 2 bit question of "which american political party do they vote for" is a gross simplification. For example, while black people are more likely to vote democrat, a black democrat is half as likely as a white democrat to identify as "liberal".  This is because there are multiple political axes, and multiple political issues to consider, starting with the standard economic vs social political compass model.  

This definitely becomes clearest when we escape from a narrow focus on elite college graduates in the US and look at people from different nations entirely. You will have an easier time finding a Maoist in China than in Texas, for example. They might vote D in the US as a result of perceiving the party as less anti-immigrant, but they're not the same as a white D voter from the suburbs.

As for your experiences where political and ethnic diversity were anti-correlated: did the organisation make any effort on other aspects of diversity, other than skin colour, or did they just, say, swap out a couple of  MIT grads of one race for a couple of MIT grads of a different race? Given that you say the culture didn't change either, the latter seems likely.

3
Ariel Simnegar 🔸
I agree with you that many of the broad suggestions can be read that way. However, when the post suggests which concrete groups EA should target for the sake of philosophical and political diversity, they all seem to line up on one particular side of the aisle: What politics are postcolonial critics of Western academia likely to have? What politics are academics, professional communities, or Indigenous Americans likely to have? When the term "traditionally underrepresented groups" is used, does it typically refer to rural conservatives, or to other groups? What politics are those other groups likely to have? As you pointed out, this post's suggestions could be read as encouraging universal diversity, and I agree that the authors would likely endorse your explanation of the consequences of that. But I also don't think it's unreasonable to say that this post is coded with a political lean, and that many of its suggestions can reasonably be read as nudging EA towards that lean.
3
Dzoldzaya
Hmmm, a few of these don't sound like common left-wing thought (I hope democracy isn't a left-wing value now), but I agree with the sentiment of your point. I guess some of the co-writers lean towards identitarian left politics and they want EA to be more in line with this (edit: although this political leaning shouldn't invalidate the criticisms in the piece). One of the footnotes would seem to signal their politics clearly, by linking to pieces with what I'd call a left-wing 'hit piece' framing: "We should remember that EA is sometimes worryingly close to racist, misogynistic, and even fascist ideas. For instance, Scott Alexander, a blogger that is very popular within EA, and Caroline Ellison, a close associate of Sam Bankman-Fried, speak favourably about “human biodiversity”, which is the latest euphemism for “scientific” racism. "
5
ChristianKleineidam
Believing that democracy is a good way to run a country is a different view than believing that it's an effective way to run an NGO. The idea that NGOs whose main funding comes from donors as opposed to membership dues should be run democratically seems like a fringe political idea and one that's found in certain left-wing circles. 
1[comment deleted]

(Edited.)

This seems to border on strawmanning. We should try to steelman their suggestions. It seems fine that some may be incompatible, or that implementing all of them would make us indistinguishable from the left (which I wouldn't expect to happen anyway; we'd probably still care far more about impact than the left does, on average), since we wouldn't necessarily implement them all, or all in the same places, and there can be other ways to prevent issues.

Furthermore, overly focusing on specific suggestions can derail conversations too much into the details of those suggestions and issues with them over the problems in EA highlighted in the post. It can also discourage others from generating and exploring other proposals. It may be better to separate these discussions, and this one seems the more natural one to start with. This is similar to early broad cause area research for a cause (like 80,000 Hours profiles), which can then be followed by narrow intervention (and crucial consideration) research in various directions.

As a more specific example where I think your response borders on a strawman: in hiring non-EA experts and democratizing orgs, non-EAs won't necessarily make up most of the org, and they... (read more)

In a post this long, most people are probably going to find at least one thing they don't like about it. I'm trying to approach this post as constructively as I can, i.e. "what I do find helpful here" rather than "how I can most effectively poke holes in this?" I think there's enough merit in this post that the constructive approach will likely yield something positive for most people as well.

I like this comment.

I feel that EAs often have isolated demands for rigour (https://slatestarcodex.com/2014/08/14/beware-isolated-demands-for-rigor/) when it comes to criticisms.

I think the ideal way to read criticisms is to steelman as you read.

I think this post would have been significantly better as a series, partly so people could focus on/vote on the parts independently.

I don't think it's very surprising that 80% of the value comes from 20% of the proposed solutions.

4
Stone
There's a fairly even mix of good-faith and bad-faith criticism here. A lot of the good-faith criticism is almost a carbon copy of the winners of last year's EA criticism contest.
2[comment deleted]

First off, thank you to everyone who worked on this post. Although I don't agree with everything in it, I really admire the passion and dedication that went into this work -- and I regret that the authors feel the need to remain anonymous for fear of adverse consequences. 

For background: I consider myself a moderate EA reformer -- I actually have a draft post I've been working on that argues that the community should democratically hire people to write moderately concrete reform proposals. I don't have a ton of the "Sam" characteristics, and the only thing of value I've accepted from EA is one free book (so I feel free to say whatever I think). I am not a longtermist and know very little about AI alignment (there, I've made sure I'd never get hired if I wanted to leave my non-EA law career?). 

Even though I agree with some of the suggested reforms here, my main reaction to this post is to affirm that my views are toward incremental/moderate -- and not more rapid/extensive -- reform. I'm firmly in the Global Health camp myself, and that probably colors my reaction to a proposal that may have been designed more with longtermism in mind. There is too much ... (read more)

Portions of this reform package sound to my ears like the dismantling of EA and its replacement with a new movement, Democratic Altruism ("DA").

 

I like the choice to distill this into a specific cluster.

I think this full post definitely portrays a very different vision of EA than what we have, and than what I think many current EAs want. It seems like some particular cluster of this community might be in one camp, in favor of this vision.

If that were the case, I would also be interested in this being experimented with, by some cluster. Maybe even make a distinct tag, "Democratic Altruism" to help organize conversation on it. People in this camp might be most encouraged to directly try some of these proposals themselves. 

I imagine there would be a lot of work to really put forward a strong idea of what a larger "Democratic Altruism" would look like, and also, there would be a lengthy debate on its strengths and weaknesses.

Right now I feel like I keep on seeing similar ideas here being argued again and again, without much organization.

(That said, I imagine any name should come from the group advocating this vision)

Yeah, I would love to see people go out and try this experiment and I like the tag "democratic altruism".  There's a chance that if people with this vision were to have their own space, then these tensions might ultimately dissipate.

[For context, I'm definitely in the social cluster of powerful EAs, though don't have much actual power myself except inside my org (and my ability to try to persuade other EAs by writing posts etc); I had more power when I was actively grantmaking on the EAIF but I no longer do this. In this comment I’m not speaking for anyone but myself.]

This post contains many suggestions for ways EA could be different. The fundamental reason that these things haven't happened, and probably won't happen, is that no-one who would be able to make them happen has decided to make them happen. I personally think this is because these proposals aren't very good. And so:

  • people in EA roles where they could adopt these suggestions choose not to
  • and people who are capable/motivated enough that they could start new projects to execute on these ideas (including e.g. making competitors to core EA orgs) end up deciding not to.

And so I wish that posts like this were clearer about their theory of change (henceforth abbreviated ToC). You've laid out a long list of ways that you wish EA orgs behaved differently. You've also made the (IMO broadly correct) point that a lot of EA organizations are led and influenced ... (read more)

I strongly downvoted this response.

The response says that EA will not change "people in EA roles [will] ... choose not to", that making constructive critiques is a waste of time "[not a] productive ways to channel your energy" and that the critique should have been better "I wish that posts like this were clearer" "you should try harder" "[maybe try] politely suggesting".

This response seems to be putting all the burden of making progress in EA onto those  trying to constructively critique the movement, those who are putting their limited spare time into trying to be helpful, and removing the burden away from those who are actively paid to work on improving this movement. I don’t think you understand how hard it is to write something like this, how much effort must have gone into making each of these critiques readable and understandable to the EA community. It is not their job to try harder, or to be more polite. It is our job, your job, my job, as people in EA orgs, to listen and to learn and to consider and if we can to do better.

Rather than saying the original post should be better maybe the response should be that those reading the original post should be better at conside... (read more)

I think this is a weird response to what Buck wrote. Buck isn't paid either to reform the EA movement or to respond to criticism on the EA Forum, and he decided to spend his limited time expressing how things realistically look from his perspective.

I think it is good if people write responses like that, and such responses should be upvoted, even if you disagree with the claims. Downvotes should not express 'I disagree', but 'I don't want to read this'.

Even if you believe EA orgs are horrible and should be completely reformed, in my view you should be glad that Buck wrote his comment, as you now have a better idea of what people like him may think.

It's important to understand that the alternative to this comment is not Buck writing a 30-page detailed response. The alternative, in my guess, is just silence.

Thank you for the reply Jan. My comment was not about whether I disagree with any of the content of what Buck said. My comment was objecting to what came across to me as a dismissive, try harder, tone policing attitude (see the quotes I pulled out) that is ultimately antithetical to the kind, considerate and open to criticism community that I want to see in EA. Hopefully that explains where I'm coming from.

-34
Buck

One thing I think this comment ignores is just how many of the suggestions are cultural, and thus do need broad communal buy in, which I assume is why they sent this publicly. Whilst they are busy, I'd be pretty disappointed if the core EAs didn't read this and take the ideas seriously (I've tried tagging some on Twitter), and if you're correct that presenting such a detailed set of ideas on the forum is not enough to get core EAs to take them seriously, I'd be concerned about whether there are places for people to get their ideas taken seriously at all. I'm lucky: I can walk into Trajan House and knock on people's doors, but others presumably aren't so lucky, and you would hope that a forum post that generated a lot of discussion would be taken seriously. Moreover, if you are concerned with the ideas presented here not getting a fair hearing, maybe you could try raising the salient ones with core EAs in your social circles?

I think that the class of arguments in this post deserve to be considered carefully, but I'm personally fine with having considered them in the past and decided that I'm unpersuaded by them, and I don't think that "there is an EA Forum post with a lot of discussion" is a strong enough signal that I should take the time to re-evaluate a bunch--the EA Forum is full of posts with huge numbers of upvotes and lots of discussion which are extremely uninteresting to me.

(In contrast, e.g. the FTX collapse did prompt me to take the time to re-evaluate a bunch of what I thought about e.g. what qualities we should encourage vs discourage in EAs.)

I'd be pretty interested in you laying out in depth why you have basically decided to dismiss this very varied and large set of arguments. (Full disclosure: I don't agree with all of them, but in general I think they're pretty good.) A self-admitted EA leader posting a response poo-pooing a long, thought-out criticism with very little argumentation, and mostly criticising it on tangential ToC grounds (which you don't think, or want, to succeed anyway?), seems like it could be construed as pretty bad faith and problematic. I don't normally reply like this, but I think your original reply essentially tried to play the man and not the ball, and I would expect better from a self-identified 'central EA' (not saying this is some massive failing, and I'm sure I've done similar myself a few times).

I interpreted Buck's comment differently. His comment reads to me, not so much like "playing the man," and more like "telling the man that he might be better off playing a different game." If someone doesn't have the time to write out an in-depth response to a post that takes 84 minutes to read, but they take the time to (I'd guess largely correctly) suggest to the authors how they might better succeed at accomplishing their own goals, that seems to me like a helpful form of engagement.

6
Gideon Futerman
Maybe you're correct, and that's definitely how I interpreted it initially, but Buck's response to me gave a different impression. Maybe I'm wrong, but it just strikes me as a little strange: if Buck feels they have considered these ideas and basically rejects them, why they would want to suggest to these bunch of concerned EAs how to go about trying to push for the ideas that Buck disagrees with better? Maybe I'm wrong or have misinterpreted something though; I wouldn't be surprised.

why they would want to suggest to these bunch of concerned EAs how to go about trying to push for the ideas that Buck disagrees with better

My guess was that Buck was hopeful that, if the post authors focus their criticisms on the cruxes of disagreement, that would help reveal flaws in his and others' thinking ("inasmuch as I'm wrong it would be great if you proved me wrong"). In other words, I'd guess he was like, "I think you're probably mistaken, but in case you're right, it'd be in both of our interests for you to convince me of that, and you'll only be able to do that if you take a different approach."

[Edit: This is less clear to me now - see Gideon's reply pointing out a more recent comment.]

I guess I'm a bit skeptical of this, given that Buck has said this to weeatquince "I would prefer an EA Forum without your critical writing on it, because I think your critical writing has similar problems to this post (for similar reasons to the comment Rohin made here), and I think that posts like this/yours are fairly unhelpful, distracting, and unpleasant. In my opinion, it is fair game for me to make truthful comments that cause people to feel less incentivized to write posts like this one (or yours) in future". 

9
Aaron_Scher
This evidence doesn't update me very much.  I interpret this quote to be saying, "this style of criticism — which seems to lack a ToC and especially fails to engage with the cruxes its critics have, which feels much closer to shouting into the void than making progress on existing disagreements — is bad for the forum discourse by my lights. And it's fine for me to dissuade people from writing content which hurts discourse" Buck's top-level comment is gesturing at a "How to productively criticize EA via a forum post, according to Buck", and I think it's noble to explain this to somebody even if you don't think their proposals are good. I think the discourse around the EA community and criticisms would be significantly better if everybody read Buck's top level comment, and I plan on making it the reference I send to people on the topic.  Personally I disagree with many of the proposals in this post and I also wish the people writing it had a better ToC, especially one that helps make progress on the disagreement, e.g., by commissioning a research project to better understand a relevant consideration, or by steelmanning existing positions held by people like me, with the intent to identify the best arguments for both sides. 

My interpretation of Buck's comment is that he's saying that, insofar as he's read the post, he sees that it's largely full of ideas that he's specifically considered and dismissed in the past, although he is not confident that he's correct in every particular.

I think that the class of arguments in this post deserve to be considered carefully, but I'm personally fine with having considered them in the past and decided that I'm unpersuaded by them...

there are probably some suggestions made in this post that I would overall agree should be prioritized if I spent more time thinking about them

You want him to explain why he dismissed them in the past

I'd be pretty interested in you laying out in depth why you have basically decided to dismiss this very varied and large set of arguments.

And are confused about why he'd encourage other people to champion the ideas he disagrees with

why they would want to suggest to these bunch of concerned EAs how to go about trying to push for the ideas that Buck disagrees with better

I think the explanation is that Buck is pretty pessimistic that these are by and large good ideas, enough not to commit more of his time to considering each one individually m... (read more)

I agree with the text of your comment but think it'd be better if you chose your analogy to be about things that are more contested (rather than clearly false like creationism or AGW denial or whatever). 

This avoids the connotation that Buck is clearly right to dismiss such criticisms. 

One better analogy that comes to mind is asking Catholic theologians about the implausibility  of a virgin birth, but unfortunately, I think religious connotations have their own problems.

2
DirectedEvolution
I agree that this would have been better, but it was the example that came to mind and I'm going to trust readers to take it as a loose analogy, not a claim about which side is correct in the debate.
3
Linch
Fair! I think having maximally accurate analogies that helps people be truth-seeking is hard, and of course the opportunity costs of maximally cooperative writing is high.

A self-admitted EA leader posting a response poo-pooing a long, thought-out criticism with very little argumentation

I'm sympathetic to the position that it's bad for me to just post meta-level takes without defending my object-level position.

Thanks for this, and on reading other comments etc, I was probably overly harsh on you for doing so.

I took the time to read through and post where I agree and disagree. However, I understand why people might not have wanted to spend the time, given that the document didn't really try to engage very hard with the reasons for not implementing these proposals. I feel bad saying that because the authors clearly put a lot of time and effort into it, but I honestly think it would have been better if the group had chosen a narrower scope and focused on making a persuasive argument for that, and then maybe worked on the next section after that.

But who knows? There seems to be a bit of energy around this post, so maybe something comes out of this regardless.

One thing I think this comment ignores is just how many of the suggestions are cultural, and thus do need broad communal buy in, which I assume is why they sent this publicly.

I think you're right about this and that my comment was kind of unclearly equivocating between the suggestions that aimed at the community and the suggestions that aimed at orgs. (Though the suggestions aimed at the community also give me a vibe of "please, core EA orgs, start telling people that they should be different in these ways" rather than "here is my argument for why people should be different in these ways").

I think the criticism of the theory of change here is a good example of an isolated demand for rigour (https://slatestarcodex.com/2014/08/14/beware-isolated-demands-for-rigor/), which I feel EAs often apply when it comes to criticisms.

It’s entirely reasonable to express your views on an issue on the EA forum for discussion and consideration, rather than immediately going directly to relevant stakeholders and lobbying for change. I think this is what almost every EA Forum post does and I have never before seen these posts criticised as ‘complaining’.

I thought Buck’s comment contained useful information, but was also impolite. I can see why people in favour of these proposals would find it frustrating to read.

I think all your specific points are correct, and I also think you totally miss the point of the post.

You say you thought about these things a lot. Maybe lots of core EAs have thought about these things a lot. But what core EAs have considered or not is completely opaque to us. Not so much because of secrecy, but because opaqueness is the natural state of things. So lots of non-core EAs are frustrated about lots of things. We don't know how our community is run, or why.

On top of that, there are actually consequences for complaining or disagreeing too much. Funders like to give money to people who think like them. Again, this requires no explanation. This is just the natural state of things. 

So as non-core EAs, we notice things that seem wrong, and we're afraid to speak up against them, and it sucks. That's what this post is about.

And of course it's naive and shallow and not adding much for anyone who has already thought about this for years. For the authors this is the start of the conversation, because they, and most of the rest of us, were not invited to all the previous conversations like this.

I don't agree with everything in the post. Lots of the suggestions seem nonse... (read more)

I think this comment reads as though it’s almost entirely the authors’ responsibility to convince other EAs and EA orgs that certain interventions would help maximise impact, and that it is barely the responsibility of EAs and EA orgs to actively seek out and consider interventions which might help them maximise impact. I disagree with this kind of view.

3
Buck
  Obviously it's the responsibility of EAs and EA orgs to actively seek out ways that they could do things better. But I'm just noting that it seems unlikely to me that this post will actually persuade EA orgs to do  things differently, and so if the authors had hoped to have impact via that route, they should try another plan instead.

If that's your goal, I think you should try harder to understand why core org EAs currently don't agree with your suggestions, and try to address their cruxes. For this ToC, "upvotes on the EA Forum" is a useless metric--all you should care about is persuading a few people who have already thought about this all a lot. I don't think that your post here is very well optimized for this ToC. 

... I think the arguments it makes are weak (and I've been thinking about these arguments for years, so it would be a bit surprising if there was a big update from thinking about them more.)

If you and other core org EAs have thoroughly considered many of the issues the post raises, why isn't there more reasoning transparency on this? Besides being a good practice in general (especially when the topic is how the EA ecosystem fundamentally operates), it would make it a lot easier for the authors and others on the forum to deliver more constructive critiques that target cruxes. 

As far as I know, the cruxes of core org EAs are nowhere to be found for many of the topics this post covers.

I think Lark's response is reasonably close to my object-level position.

My quick summary of a big part of my disagreement: a major theme of this post suggests that various powerful EAs hand over a bunch of power to people who disagree with them. The advantage of doing that is that it mitigates various echo chamber failure modes. The disadvantage of doing that is that now, people who you disagree with have a lot of your resources, and they might do stuff that you disagree with. For example, consider the proposal "OpenPhil should diversify its grantmaking by giving half its money to a randomly chosen Frenchman". This probably reduces echo chamber problems in EA, but it also seems to me like a terrible idea.

I don't think the post properly engages with the question "how ought various powerful people weigh the pros and cons of transferring their power to people they disagree with". I think this question is very important, and I think about it a fair bit, but I think that this post is a pretty shallow discussion of it that doesn't contribute much novel insight.

I encourage people to write posts on the topic of "how ought various powerful people weigh the pros and cons of transferring thei... (read more)

Some things that I might come to regret about my comment:

  • I think it's plausible that it's bad for me to refer to disagreeing with arguments without explaining why.
  • I've realized that some commenters might not have seen these arguments before, which makes me think that there is more of an opportunity for me to explain why I think these arguments are wrong. (EDIT I'm less worried about this now, because other commenters have weighed in making most of the object-level criticisms I would have made.)
  • I was not very transparent about my goal with this comment, which is generally a bad sign. My main goal was to argue that posts like this are a kind of unhealthy way of engaging with EA, and that readers should be more inclined to respond with "so why aren't you doing anything" when they read such criticisms.
5
ChanaMessinger
Fwiw I think there was an acknowledgement of soft power missing.

I strongly disagree with this response, and find it bizarre.  

I think assessing this post according to a limited number of possible theories of change is incorrect, as influence is often diffuse and hard to predict or measure.  

I agree with freedomandutility's description of this as an "isolated demand for [something like] rigor".
 

There seem to have been a lot of responses to your comment, but there are some points which I don’t see being addressed yet.

I would be very interested in seeing another similarly detailed response from an ‘EA leader’ whose work focusses on community building/community health. (Put on top as this got quite long; rationale below, but first:)

I think at least one goal of the post is to get community input (as I’ve seen in many previous forum posts) to determine the best suggestions, without claiming to have all the answers. Quoted from the original post (intro to 'Suggested Reforms'):

Below, we have a preliminary non-exhaustive list of suggestions for structural and cultural reform that we think may be a good idea and should certainly be discussed further.

It is of course plausible that some of them would not work; if you think so for a particular reform, please explain why! We would like input from a range of people, and we certainly do not claim to have all the answers!

In fact, we believe it important to open up a conversation about plausible reforms not because we have all the answers, but precisely because we don’t.

This suggests to me that instead of trying to convince the ‘EA leadership... (read more)

Thanks for your sincere reply (I'm not trying to say other people aren't sincere, I just particularly felt like mentioning it here).

Here are my thoughts on the takeaways you thought people might have.

  • There is an EA leadership (you saying so, as a self-confessed EA leader, is likely more convincing in confirming something like this than some anonymous people saying it), which runs counter to a lot of the other messaging within EA. This sounds very in-groupy (particularly as you refer to it as a ‘social cluster’ rather than e.g. a professional cluster).

As I said in my comment, I think that it's true that the actions of EA-branded orgs are largely influenced by a relatively small number of people who consider each other allies and (in many cases) friends. (Though these people don't necessarily get along or agree on things--for example, I think William MacAskill is a well-intentioned guy but I disagree with him a bunch on important questions about the future and various short-term strategy things.) 

  • If the authors of this post are asking for community opinion on which changes are good after giving concerns, the top (for a while at least) comment being criticising this for a lack of
... (read more)

I think that's particularly true of some of the calls for democratization. The Cynic's Golden Rule ("He who has the gold, makes the rules") has substantial truth both in the EA world and in almost all charitable movements. In the end, if the people with the money aren't happy with the idea of random EAs spending their money, it just isn't going to happen. And to the extent there is a hint of cutting off or rejecting donors, that would lead to a much smaller EA to the extent it was followed. In actuality, it wouldn't be -- someone is going to take the donor's money in almost all cases, and there's no EA High Council to somehow cast the rebel grantee from the movement.

Speaking as a moderate reform advocate, the flipside of this is that the EA community has to acknowledge the origin of power and not assume that the ecosystem is somehow immune to the Cynic's Golden Rule. The people with power and influence in 2023 may (or may not) be wise and virtuous, but they are not in power (directly) because they are wise and virtuous. They have power and influence in large part because it has been granted to them by Moskovitz and Tuna (or their delegates, or by others with power to move funding and other resources). If Moskovitz and Tuna decided to fire Open Phil tomorrow and make all their spending decisions based on my personal recommendations, I would become immensely powerful and influential within EA irrespective of how wise and virtuous I may be. (If they are reading, this would be a terrible idea!!)

"If elites haven't already thought of/decided to implement these ideas, they're probably not very good. I won't explain why. " 

"Posting your thoughts on the EA Forum is complaining, but I think you will fail if you try to do anything different. I won't explain why, but I will be patronising." 

"Meaningful organisational change comes from the top down, and you should be more polite in requesting it. I doubt it'll do anything, though." 

Do you see any similarities between your response here and the problems highlighted by the original post, Buck? 

The tone policing, dismissing criticism out of hand, lack of any real object-level engagement, pretending community responsibility doesn't exist, and patronisingly trying to shut down others is exactly the kind of chilling effect that this post is drawing attention to. 

The fact that a comment from a senior community member has led to deference from other community members, making it the top-voted comment, is not a surprise. But such weak critiques (vague dismissals that things are 'likely net-negative', or his own opinion stated with little to no justification) deserve very little support.

And the wording is so patronising and impolite, too. What a perfect case study in the kinds of behaviours EA should no longer tolerate.

Interesting that another commenter has the opposite view, and criticises this post for being persuasive instead of explanatory!

It may just be disagreement, but I think it might result from readers' bias towards focusing on framing rather than engaging with object-level views when it comes to criticisms.

One irony is that it's often not that hard to change EA orgs' minds. E.g. on the forum suggestion, which is the one that most directly applies to me: you could look at the posts people found most valuable and see if a more democratic voting system better correlates with what people marked as valuable than our current system. I think you could probably do this in a weekend, it might even be faster than writing this article, and it would be substantially more compelling.[1]

(CEA is actually doing basically this experiment soon, and I'm >2/3 chance the results will change the front page somehow, though obviously it's hard to predict the results of experiments in advance.)

 

  1. ^

    If anyone reading this actually wants to do this experiment please DM me – I have various ideas for what might be useful and it's probably good to coordinate so we don't duplicate work
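The experiment Buck sketches above can be expressed quite simply. The following is a toy illustration, not a real analysis: it computes a pure-Python Spearman rank correlation between each scoring scheme and readers' "most valuable" markings. The variable names and all numbers are invented for illustration; a real version would pull actual forum data.

```python
# Toy sketch of the proposed experiment: for a set of posts, check which
# scoring scheme -- the current karma-weighted one or a one-person-one-vote
# alternative -- better tracks which posts readers marked as most valuable.
# All data below is made up for illustration.

def ranks(xs):
    """1-based average ranks, with ties sharing the mean of their positions."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        # Extend j over any run of tied values.
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of the 1-based positions i+1 .. j+1
        for k in order[i:j + 1]:
            r[k] = avg
        i = j + 1
    return r

def spearman(xs, ys):
    """Spearman rank correlation: Pearson correlation computed on the ranks."""
    rx, ry = ranks(xs), ranks(ys)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical data for five posts:
karma_weighted = [120, 85, 60, 40, 15]  # scores under the current system
one_vote_each = [30, 45, 25, 50, 10]    # scores under a democratic scheme
value_marks = [8, 14, 6, 16, 2]         # readers marking the post "most valuable"

print(round(spearman(karma_weighted, value_marks), 3))  # 0.3
print(round(spearman(one_vote_each, value_marks), 3))   # 1.0
```

In this made-up example the democratic scheme correlates better with value markings; with real data the comparison could of course go either way, which is the point of running the experiment.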

Relatedly, I think a short follow-up piece listing 5-10 proposed specific action items tailored to people in different roles in the community would be helpful. For example, I have the roles of (1) low-five-figure donor, and (2) active forum participant. Other people have roles like student, worker in an object-level organization, worker in a meta organization, object-level org leader, meta org leader, larger donor, etc.  People in different roles have different abilities (and limitations) in moving a reform effort forward.

I think "I didn't walk away with a clear sense of what someone like me should do if I agree with much/all of your critique" is helpful/friendly feedback. I was hesitant to even mention it because the authors have put so much (unpaid!) work into this post already, and I don't want to burden them with what could feel like the expectation of even more work. But I think it's still worth making the point, if for no other reason than future reference.

I think it’s fairly easy for readers to place ideas on a spectrum and identify trade offs when reading criticisms, if they choose to engage properly.

I think the best way to read criticisms is to steelman as you read, particularly via asking whether you’d sympathise with a weaker version of the claim, and via the reversal test.

[anonymous]
Can you clarify this statement? I'm confused about a couple of things:
  • Why is it only "arguable" that you had more power when you were an active grantmaker?
  • Do you mean you don't have much power, or that you don't use much power?
Buck
I removed "arguable" from my comment. I intended to communicate that even when I was an EAIF grantmaker, that didn't clearly mean I had "that much" power--e.g. other fund managers reviewed my recommended grant decisions, and I moved less than a million dollars, which is a very small fraction of total EA spending. I mean that I don't have much discretionary power (except inside Redwood). I can't unilaterally make many choices about e.g. EA resource allocation. Most of my influence comes via arguing that other people should do things with discretionary power that they have. If other people decided to stop listening to me or funding me, I wouldn't have much recourse.
[anonymous]
I appreciate the clarification!  It sounds to me that what you're saying is that you don't have any formal power over non-Redwood decisions, and most of your power comes from your ability to influence people. Furthermore, this power can be taken away from you without you having any choice in the matter. That seems fair enough. But then you seem to believe that this means you don't actually have much power? That seems wrong to me. Am I misunderstanding something? 

I agree that we ignore experts in favour of people who are more value-aligned. Seems like a mistake.

As a weak counter-point to this, I have found in the past that experts who are not value-aligned can find EA ways of thinking almost incomprehensible, such that it can be very difficult to extract useful information from them. I have experienced talking to a whole series of non-EA experts and really struggling to get them to even engage with the questions I was asking (a lot of "I just haven't thought about that"), whereas I got a ton of value very quickly from talking to an EA grad student in the area.

I empathise with this from my own experience. Having been quite actively involved in EA for 10 years, and within my own areas of expertise -- finance and investment, risk management, and to a lesser extent governance (as a senior partner and risk committee member of one of the largest hedge funds in Europe) -- I have seen that we sometimes ignore ‘experts’ in favour of people who are more value-aligned.

It doesn’t mean I believe we should always defer to ‘experts’. Sometimes a fresh perspective is useful for exploring and maximising potential upside, but sometimes ‘experts’ are useful in minimising downside risks that people with less experience may not be aware of, and in saving the time and effort of reinventing existing best practices upon which improvements could be made.

I guess it is a balance between the two which varies with the context, but deferring is more likely to make sense in areas such as operations, legal and compliance, financial risk management, and probably others.

Gideon Futerman

More broadly, I often think a good way to test whether we are right is whether we can convince others. If we can't, that's kind of a red flag in itself.

This is valuable, but at a certain point the market of ideas relies on people actually engaging in object-level reasoning. There's an obvious failure mode in rejecting new ideas on the sole meta-level basis that if they were good they would already be popular. Kind of like the old joke of the economist who refuses to pick up hundred-dollar bills off the ground because of the Efficient Market Hypothesis.

EA & Aspiring Rationalism have grown fairly rapidly, all told! But they're also fairly new. "Experts in related fields haven't thought much about EA approaches" is more promising than "experts in related fields have thought a lot about EA approaches and have standard reasons to reject them."

(Although "most experts have clear reasons to reject EA thinking on their subject matter" is closer to being the case in AI ... but that's probably also the field with the most support for longtermist & x-risk type thinking & where it's seen the fastest growth, IDK.)

We sort of seem to be doing the opposite to me - see for example some of the logic behind this post and some of the comments on it (though I like the post and think it's useful).

Chris Leong
Agree that it is a red flag. However, I also think that sometimes we have to bite the bullet on this.
Sharmake
Only a small red flag, IMO, because it's rather easy to convince people of alluring falsehoods, and not so easy to convince people of uncomfortable truths.

This seems quite hand-wavy and I’m skeptical of it. Could you give an example where “we” have ignored the experts? And when you say experts, you probably refer to expert reasoning or scientific consensus and not appeals to authority.

Your statement gained a lot of upvotes, but “EA ignores experts” just fits the prevailing narrative too well, and I haven’t seen any examples of it. Happy to update if I find one.

Tsunayoshi
For some related context: In the past GiveWell used to solicit external reviews by experts of their work, but has since discontinued the practice. Some of their reasons are (I can imagine similar reasons applying to other orgs): "There is a question around who counts as a “qualified” individual for conducting such an evaluation, since we believe that there are no other organizations whose work is highly similar to GiveWell’s." "Given the time investment these sorts of activities require on our part, we’re hesitant to go forward with one until we feel confident that we are working with the right person in the right way and that the research they’re evaluating will be representative of our work for some time to come."
Marcel D
I’ve hypothesized that one potential failure mode is that experts are not used to communicating with EA audiences, and EA audiences tend to be more critical/skeptical of ideas (on a rational level). Thus, it may be the case that experts aren’t always as explicit about some of their concerns, perhaps because they expect their audiences to defer to them, or because they have a model of which things people will be skeptical of and thus need defending/explaining, and that audience model doesn’t apply well to EA. I think there may be a case/example to highlight with regard to nuclear weapons or international relations, but then again it is also possible that the EA skepticism in some of these cases is valid due to a higher emphasis on existential risks rather than smaller risks.
Josh Jacobson
I generally/directionally agree, and also wrote about a closely related concern previously: https://forum.effectivealtruism.org/posts/tdaoybbjvEAXukiaW/what-are-your-main-reservations-about-identifying-as-an?commentId=GB8yfzi8ztvr3c6DC

Where I agree:

  • Experimentation with decentralised funding is good. I feel it's a real shame that EA may not end up learning very much from the FTX regrant program because all the staff at the foundation quit (for extremely good reasons!) before many of the grants were evaluated.
  • More engagement with experts. Obviously, this trades off against other things and it's easier to engage with experts when you have money to pay them for consultations, but I'm sure there are opportunities to engage with them more. I suspect that a lot of the time the limiting factor may simply be people not knowing who to reach out to, so perhaps one way to make progress on this would be to make a list of experts who are willing for people at EA orgs to reach out to them, subject to availability?
  • I would love to see more engagement from Disaster Risk Reduction, Futures Studies, Science and Technology Studies, etc. I would encourage anyone with such experience to consider posting on the EA forum. You may want to consider extracting this section into a separate forum post for greater visibility.
  • I would be keen to see experiments where people vote on funding decisions (although I would be surprised if this were
... (read more)
Gideon Futerman
I think I basically agree here, and I think it's mostly about a balance; criticism should, I think, be seen as pulling in a direction rather than wanting to go all the way to an extreme (although there definitely are people who want that extreme who I strongly disagree with!).

On AI safety, I think in a few years it will look like EA overinvested in the wrong approaches (i.e. helping OpenAI and not opposing capabilities).

I think I agree the post sees voting/epistemic democracy in too rosy-eyed a way. On the other hand, I was told by a philosopher of science I know that x-risk was the most hierarchical field they'd seen. Moreover, I think democracy can come in gradations, and I don't think EA will ever be perfect.

On your point about youth, I think that's interesting. I'm not sure the current culture would necessarily allow this though, with many critical EAs I know essentially scared to share criticism, or having been sidelined by people with more power who disagree, or having had the credit for their achievements taken by people more senior, making it harder for them to have the legitimacy to push for change, etc. This is why I like the cultural points this post makes, as it does seem we need a better culture to achieve our ideals.
Chris Leong
"On AI safety, I think in a few years it will look like EA overinvested in the wrong approaches (ie helping OpenAI and not opposing capabilities)" - I agree that this was a mistake. "I'm not sure the current culture would necessarily allow this though, with many critical EAs I know essentially scared to share criticism" - that's worrying. Hopefully seeing this post be highly upvoted makes people feel less scared.
ChristianKleineidam
Given that EA Global already has an application process that does some filtering, you could likely use the attendance lists.
Tom Gardiner
Lots of good points here. One slight critique and one suggestion to build on the above. If I seem at all confrontational in tone, please note that this is not my aim - I think you made a solid comment.

Critique: I have a great sense of caution around the belief that "smart, young EAs", and giving them grants to think about stuff, are the best solution to anything, no matter how well they understand the community. In my mind, one of the most powerful messages of the OP is the one regarding a preference for orthodox yet inexperienced people over those with demonstrable experience but little value alignment. Youth breaking from tradition doesn't seem a promising hope when a very large portion of this community is, and always has been, in their youth. Indeed, EA was built from the ground up by almost the same people as in your proposed teams. I'm sure there are smart, young EAs readily available in our labour force to accept these grants, far more readily than people who also deeply understand the community but do not consider themselves EAs (whose takes should be most challenging) or have substantial experience in setting good norms and cultural traits (whose insights will surely be wiser than ours). I worry the availability and/or orthodoxy of the former is making them seem more ideal than the latter.

Suggestion: I absolutely share your concerns about how the EA electorate would be decided upon. As an initial starting point, I would suggest that voting power be given to people who take the Giving What We Can pledge and uphold it for a stated minimum time. It serves the costly-signalling function without expecting people to simply buy "membership". My suggestion has very significant problems, which many will see at first glance, but I share it in case others can find a way to make it work.

Edit: It seems others have thought about this a lot more than I have, and it seems intractable.
Chris Leong
I don’t see my suggestion of getting a few groups of smart, young EAs as mutually exclusive with engaging with experts. Obviously they trade off in terms of funds and organiser effort, but it wouldn’t actually be that expensive to pay the basic living expenses of a few young people.

Nice. Thanks. Really well written, very clear language, and I think this is pointed in a pretty good direction. Overall I learned a lot.

I do have the sense it maybe proves too much -- i.e. if these critiques are all correct then I think it's surprising that EA is as successful as it is, and that raises alarm bells for me about the overall writeup.

I don't see you doing much acknowledging what might be good about the stuff that you critique -- for example, you critique the focus on individual rationality over e.g. deferring to external consensus. But it seems possible to me that the movement's early focus on individual rationality was the cause of attracting great people into the movement, and that without that focus EA might not be anything at all! If I'm right about that then are we ready to give up on whatever power we gained from making that choice early on?

Or, as a metaphor, you might be saying something like "EA needs to 'grow up' now" but I am wondering if EA's childlike nature is part of its success and 'growing up' would actually have a chance to kill the movement.

“I don't see you doing much acknowledging what might be good about the stuff that you critique”

I don’t think it’s important for criticisms to do this.

I think it’s fair to expect readers to view things on a spectrum, and interpret critiques as an argument in favour of moving in a certain direction along a spectrum, rather than going to the other extreme.

Criticisms don't have to do this, but they would be more persuasive if they did.

I agree but having written long criticisms of EA, doing this consistently can make the writing annoyingly long-winded.

I think it’s better for EAs to be steelmanning criticisms as they read, especially via “would I agree with a weaker version of this claim” and via the reversal test, than for writers to explore trade-offs for every proposed imperfection in EA.

Nicholas / Heather Kross
Agreed. When people require literally everything to be written in the same place by the same author/small group, it disincentivises writing potentially important posts.
Chris Leong
"I do have the sense it maybe proves too much -- i.e. if these critiques are all correct then I think it's surprising that EA is as successful as it is, and that raises alarm bells for me about the overall writeup" Agreed. Chesterton's fence applies here.
Guy Raveh
In what ways is EA very successful? Especially if you go outside the area of global health?

Hm, at a minimum: moving lots of money, making a big impact on the discussion around AI risk, and probably also making a pretty big impact on animal welfare advocacy.

My loose understanding of farmed animal advocacy is that something like half the money, and most of the leaders, are EA-aligned or EA-adjacent. And the moral value of their $s is very high. Like you just see wins after wins every year, on a total budget across the entire field on the order of tens of millions.

Guy Raveh
I'm glad to hear that. I've been very happy about the successes of animal advocacy, but hadn't imagined EA had such a counterfactual impact in it.
Linch
To be clear, from my perspective what I said is moderate but not strong evidence that EA is counterfactual for said wins. I don't know enough about the details to be particularly confident.

A lot of organisations with totally awful ideas and norms have nonetheless ended up moving lots of money and persuading a lot of people. You can insert your favourite punching-bag pseudoscience movement or bad political party here. The OP is not saying that the norms of EA are worse than those organisations', just that they're not as good as they could be.

Guy Raveh
Are we at all sure that these have had, or will have, a positive impact?

We should absolutely not be sure, for example because the discussion around AI risk to date has probably accelerated rather than decelerated AI timelines. I'm most keen on seeing empirical work around figuring out whether longtermist EA has been net positive so far (and a bird's-eye, outside-view analysis of whether we're expected to be positive in the future). Most of the procedural criticisms and scandals are less important in comparison.

Relevant thoughts here include self-effacing ethical theories and Nuño's comment here.

If 100% of these suggestions were implemented, I would expect EA in 5 years' time to look significantly worse (less effective, helping fewer people/animals, and possibly having more FTX-type scandals).

If the best 10% were implemented I could imagine that being an improvement.

Possibly high effort, but what do you see as the best 10% (and worst 10%)?

I like this comment and I think this is the best way to be reading EA criticisms - essentially steelmanning as you read and not rejecting the whole critique because parts seem wrong.

Stone
Especially because bad-faith actors in EA have a documented history of spending large amounts of time and effort posing as good-faith actors, including heavy use of anonymous sockpuppet accounts.

I appreciate the large effort put into this post! But I wanted to flag one small part that made me distrust it as a whole. I'm a US PhD in cognitive science, and I think it'd be hard to find a top cognitive scientist in the country (e.g., one who regularly gets large science grants from governmental science funding, gives keynote talks at top conferences, publishes in top journals, etc.) who takes Iain McGilchrist seriously as a scientist, at least in the "The Master & His Emissary" book. So citing him as an example of an expert whose findings are not being taken seriously makes me worry that you handpicked a person you like, without evaluating the science behind his claims (or without checking "expert consensus"). Which I think reflects the problems that arise when you start trying to say "we need to weigh together different perspectives". There are no easy heuristics for differentiating good science/reasoning from pseudoscience without incisive, personal inquiry -- which is, as far as I've seen, what EA culture earnestly tries to do. (Like, do we give weight to the perspective of ESP people? If not, how do we differentiate them from the types of "domain experts" we should take seriously?)

I know this was only one small part of the post, and doesn't necessarily reflect the other parts -- but to avoid a kind of Gell-Mann Amnesia, I wanted to comment on the one part I could contribute to.

I think to some extent this is fair. This strikes me as a post put together by non-experts, so I wouldn't be surprised if there are aspects of the post that are wrong. The approach I've taken is to treat it as a list of possible criticisms, probably containing a number of issues: steelman the important ones and reject the ones we have reason to reject, rather than rejecting the whole. I think it's fair to have more scepticism though, and I certainly would have liked a fuller bibliography, with experts on every area weighing in, but I suspect that the 'ConcernedEAs' probably didn't have the capacity for this.

I agree with all that! I think my worry is that this one issue reflects the deep, general problem that it's extremely hard to figure out what's true, and relatively simple and commonly-suggested approaches like 'read more people who have studied this issue', 'defer more to domain-experts', 'be more intellectually humble and incorporate a broader range of perspectives' don't actually solve this deep problem (all those approaches will lead you to cite people like McGilchrist).

Gideon Futerman
Yes, I think this is somewhat true, but I think it is still better than the status quo of EA at the moment. One thing to do, which I am trying to do, is actually get more domain experts involved in things around EA, and talk to them more about how this stuff works -- rather than deferring to anonymous ConcernedEAs or to a small group of very powerful EAs, actually try to build a diverse epistemic community with many perspectives involved, which is what I interpret as the core claim of this manifesto.

I did a close read of "Epistemic health is a community issue." The part I think is most important that you're underemphasizing is that, according to the source you cite, "The diversity referred to here is diversity in knowledge and cognitive models," not, as you have written, diversity "across essentially all dimensions." In other words, for collective intelligence, we need to pick people with diverse knowledge and cognitive models relevant to the task at hand, such as having relevant but distinct professional backgrounds. For example, if you're designing a better malaria net, you might want both a materials scientist and an epidemiologist, not two materials scientists.

Age and cultural background might be relevant in some cases, but that really depends on what you're working on and why these demographic categories seem especially pertinent to the task at hand. If I was designing a nursing home in a team comprised of young entrepreneurs, I would want old people either to be on the team, or to be consulted with routinely as the project evolved, because adding that component of diversity would be relevant to the project. If I was developing a team to deploy bed nets in Africa, I might... (read more)

If I was designing a nursing home in a team comprised of young entrepreneurs, I would want old people either to be on the team, or to be consulted with routinely as the project evolved, because adding that component of diversity would be relevant to the project. If I was developing a team to deploy bed nets in Africa, I might want to work with people from the specific villages where they will be distributed.

And if you're trying to run a movement dedicated to improving the entire world? Which is what we are doing?

DirectedEvolution
That is a fair rebuttal. I would come back to the model of a value-aligned group with a specific set of tasks seeking to maximize its effectiveness at achieving the objective. This is the basis for the collective intelligence research that is cited here as the basis for their recommendations for greater diversity.

If you frame EA as a single group trying to achieve the task of "make the entire world better for all human beings by implementing high-leverage interventions", then it does seem relevant to get input from a diverse cross-section of humanity about what they consider to be their biggest problems and how proposed solutions would play out. One way to get that feedback is to directly include a demographically representative sample of humanity in EA as active participants. I have no problem with that outcome. I just think we can 80/20 it by seeking feedback on specific proposals.

I also think that basing our decisions about what to pursue on the personal opinions of a representative sample of humanity will lead us to prioritize the selfish small issues of a powerful majority over the enormous issues faced by underrepresented minorities, such as animals, the global poor, and the denizens of the far future. I think this because I think that the vast majority of humanity is not value-aligned with the principle of altruistic utility maximization.

For these two main reasons -- the ability to seek feedback from relevant demographics when necessary, and the value mismatch between EA and humanity in general -- I do not see the case for us being unable to operate effectively given our current demographic makeup. I do think that additional diversity might help. I just think that it is one of a range of interventions, it's not obvious to me that it's the most pressing priority, and broadening EA to pursue diversity purely for its own sake risks value misalignment with newcomers. Please interpret this in a moderate stance along the lines of "I invit
ConcernedEAs
Hi AllAmericanBreakfast, The other points (age, cultural background, etc.) are in the Critchlow book, linked just after the paper you mention.
DirectedEvolution
Where exactly is that link? I looked at the rest of the links in the section and don’t see it.
ConcernedEAs
The word before the Yang & Sandberg link
DirectedEvolution
This is the phrase where you introduce the Yang & Sandberg link: The word before the link is "community," which does not contain a link.
ConcernedEAs
"For"
Guy Raveh
Yeah, this kind of multiple-links approach doesn't work well in this forum, since there's no way to see that the links are separate.
Cornelis Dirk Haupt
I'd recommend separating links that are in neighbouring words (e.g. see here and here).

Thanks a lot for writing this detailed and thoughtful post, I really appreciate the time you spent on putting this information and thinking together.

So, let's assume I am a 'leader' in the EA community being involved in some of the centralised decision-making you are talking about (which might or might not be true). I'm very busy but came across this post; it seemed relevant enough and I spent maybe a bit less than an hour skim-ish reading it. I agree with the vast majority of the object-level points you make. I didn't really have time to think about any of the concrete proposals you are making, and there are a lot of them, so it seems unlikely I will be able to find the time. However, since - as I said - I broadly agree with lots of what you're saying, I might be interested in supporting your ideas. What, concretely, do you want me to do tomorrow? Next week?

ConcernedEAs
Thank you so much for your response, DM'd!

It would have been nice to see a public response here!

Especially given all the stuff you just wrote about how EA is too opaque, insular, unaccountable etc. But mainly just because I, as a random observer, am extremely curious what your object-level answer to the question they posed is.

ConcernedEAs
Very fair: DMing for the sake of the anonymity of both parties.

As other comments have noted, a lot of the proposals seem to be bottlenecked by funding and/or people leading them.

I would recommend that people interested in these things strongly consider Earning to Give or fundraising, or even just donating much more of their existing income or wealth.
If the 10 authors of this post can find another 10 people sympathetic to their causes, and each donates or fundraises on average $50k/year, they would have $1M/year of funding for causes that they think are even better than the ones currently funded by EA! If they get better results than existing EA funds, people and resources would flock to them!

If you think the current funding allocation is bad, the value of extra funding that you would be able to allocate better becomes much higher.

Especially if you want to work on climate change, I suspect fundraising would be easier than for any other global cause area. Instead of asking Moskovitz/Openphil for funding, it might be even higher EV to ask Gates, Bezos, or other billionaires. Anecdotally, when I talk to high net-worth people about EA, the first comment is almost always "but what about climate change, which clearly is the most important thing to f... (read more)

This piece is pretty long and I didn't think I'd like it, but I put that aside. I think it's pretty good with many suggestions I agree with. 

Thanks for writing it. I guess it wasn't easy, I know it's hard to wrestle with communities you both love and disagree with. Thanks for taking the time and energy to write this.

On democratic control:

Any kind of democratic control that tries to have "EAs at large" make decisions will need to decide on who will get to vote. None of the ways I can think of for deciding seem very good to me (donating a certain amount? having engaged a certain amount in a visible way?). I think they're both bad as methods to choose a group of decisionmakers and more broadly harmful. "You have done X so now you are A Real EA" is the message that will be sent to some and "Sorry, you haven't done X, so you're not A Real EA" to others, regardless of the method used for voter selection. I expect that it will become a distraction or discouragement from the actual real work of altruism.

I also worry that this discussion is importing too much of our intuitions about political control of countries.  Like most people who live in democracies, I have a lot of intuitions about why democracy is good for me. I'd put them into two categories:

  1. Democracy is good for me because I am a better decisionmaker about myself than other people are about me
    1. Most of this is a feeling that I know best about myself: I have the local knowledge that I need to make decisions about how I am ruled
    2. But other par
... (read more)
-2
Stone
Democratization changes the relative power distribution within EA. The people proposing it are usually power-seeking in some way and already have plans to capitalize off of a democratic shift.

Strongly agree with the idea that we should stop saying “EA loves criticism”.

I think everyone should have a very strong prior that they are bad at accepting criticism, and that they overestimate how good they are at accepting it.

I think a better way of looking at this is that EA is very inviting of criticism but not necessarily that responsive to it. There are like 10 million critiques on the EA Forum, most with serious discussion and replies. Probably very few elicit actual change in EA. (I am of the opinion that most criticism just isn’t very good, and that there is a reason it hasn’t been adopted, but obviously this is debatable).

5
freedomandutility
I don’t think I like this framing: being responsive to criticism isn’t inherently good, since criticism isn’t always correct. I think EA is bad at the important middle step between inviting criticism and being responsive to it, which is seriously engaging with criticism.
3
Nicholas / Heather Kross
Agree, I don't see many "top-ranking" or "core" EAs writing exhaustive critiques (posts, not just comments!) of these critiques. (OK, they would likely complain that they have better things to do with their time, and they often do, but I have trouble recalling any aside from (debatably) some of the responses to AGI Ruin / Death With Dignity.)
4
Denkenberger🔸
As was said elsewhere, I think Holden’s is an example. And I think Will questioning the "hinge of history" would qualify as a deep critique of the prevailing view in x-risk. There are also examples of the orthodoxy changing due to core EAs changing their minds: switching to the high-fidelity model, away from earning to give, towards longtermism, towards more policy.
-12
Nathan Young

In fact, we have good reasons to believe that democratic decisions outperform other kinds, in large part due to the collective intelligence properties we mentioned in previous sections. If the question of the Twitter purchase had been put to the membership or a representatively-sampled assembly of members, what would the outcome have been?

Uh, do we? My sense is that democracies are often slow and that EA expert consensus has led rather than followed the democratic consensus over time. I might instead say "democratic decisions avoid awful outcomes". I'm not a big reader of papers, but my sense is that democracies avoid wars and famines but also waste a lot of time debating tax policy. I might suggest that EA should feel obliged to explain things to members, but that members shouldn't vote.

Consensus building tools gather the views of many people, identify cruxes, and help build consensus. Pol.is, for instance, has seen significant success when implemented in Taiwan, even on deeply polarised issues. EA could easily employ tools such as these to discover what the membership really believes about certain issues, create better-informed consensus on key issues, and rigorously update our

... (read more)

On Democratic Proposals - I think that more "Decision making based on democratic principles" is a good way of managing situations where power is distributed. In general, I think of democracy as "how to distribute power among a bunch of people".

I'm much less convinced about it as a straightforward tool of better decision making. 

I think things like Deliberative Democracy are interesting, but I don't feel like I've seen many successes. 

I know of very little use of these methods in startups, hedge funds, and other organizations that are generally incentivized to use the best decision-making techniques.

To be clear, I'd still be interested in more experimentation around Deliberative Democracy methods for decision quality, it's just that the area still seems very young and experimental to me.

8
andrewpei
Hi Ozzie, while I agree that there aren't many high-performing organizations which use democratic decision-making, I believe Bridgewater Associates, the largest hedge fund in the world, does use such a system. They use a tool called the Dot Collector to gather real-time input from a wide base of employees and use that to come up with a 'believability-weighted majority'. The founder of the company, Ray Dalio, has said that he will generally defer to this vote even when he himself does not agree with the result. https://www.principles.com/principles/3290232e-6bca-4585-a4f6-66874aefce30/ So not as democratic as one person, one vote, but far more egalitarian than the average company (or EA, for that matter).
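(Dalio's actual aggregation method isn't public in full detail, but as a toy sketch, with made-up voter names and scores, the core idea of weighting each vote by a believability score rather than counting heads might look like this:)

```python
from collections import defaultdict

def believability_weighted_vote(votes, weights):
    """Pick the option with the highest believability-weighted support.

    votes:   voter -> chosen option
    weights: voter -> believability score (e.g. domain track record);
             one-person-one-vote is the special case where all weights are 1.
    """
    tally = defaultdict(float)
    for voter, option in votes.items():
        tally[option] += weights.get(voter, 0.0)
    return max(tally, key=tally.get)

# Hypothetical example: one highly "believable" voter outweighs two others.
votes = {"alice": "fund", "bob": "reject", "carol": "reject"}
weights = {"alice": 0.9, "bob": 0.3, "carol": 0.4}

print(believability_weighted_vote(votes, weights))                 # fund
print(believability_weighted_vote(votes, {v: 1 for v in votes}))   # reject
```

Note how the weighted outcome can differ from the simple majority, which is exactly the property Dalio describes deferring to.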
3
ConcernedEAs
Hi Ozzie, Participedia is a great starting point for examples/success stories, as well as the RSA speech we linked. Also this: https://direct.mit.edu/daed/article/146/3/28/27148/Twelve-Key-Findings-in-Deliberative-Democracy And this: https://forum.effectivealtruism.org/posts/kCkd9Mia2EmbZ3A9c/deliberation-may-improve-decision-making
4
Ozzie Gooen
Thanks!
1
ConcernedEAs
Hi Nathan, If you're interested in the performance of democratic decision-making methods then the Democratic Reason book is probably the best place to start!

Without wanting to play this entire post out in miniature: you're telling me something I think probably isn't true, and then suggesting I read an entire book. I doubt I'm gonna do that.

-8
WatchYourProfanity

Just a small point here - quite a few of the links/citations in this post are to academic texts which are very expensive [1] (or cumulatively expensive, if you want to read more than a couple) unless you have access through a university/institution. While blogposts/Google Docs may have less rigour and review than academic papers, their comparative advantage is the speed with which they can be produced and iterated on.

If anything, developing some of the critiques above in more accessible blogposts would probably give more 'social proof' that EA views are more heterodox than it might seem at first (David Thorstad's blog is a great example you link in this post). Though I do accept that current community culture may mean many people are, sadly but understandably, reluctant to do so openly.

  1. ^

    This is just my impression after a quick first read, and could be unrepresentative. I definitely intend to read this post again in a lot more detail, and thanks again for the effort that you put into this.

Putting numbers on things is good. 

We already do it internally, and surfacing that allows us to see what we already think. Though I agree that we can treat made-up numbers too seriously. A number isn't more accurate than "highly likely"; it's more precise. They can both just as easily be mistaken.

I have time for people who say that quantifying feels exhausting or arrogant, but I think those are the costs, to be weighed against the precision of using numbers.

-7
Apples to Apples

Thank you so much for writing this! I don't have much of substance to add, but this is a great post and I agree with pretty much everything. 

48
[anonymous]

You should probably take out the claim that FLI offered $100k to a neo-Nazi group, as it doesn't seem to be true.

We align suspiciously well with the interests of tech billionaires (and ourselves)
 

 

So I think this is true of Sam, and of ourselves, but I'm really convinced that Dustin defers to OpenPhil more than the other way around (see below). I guess I like Dustin, so I feel a loyalty to him that biases me.

I guess I am wary that I work on things I think are cool and interesting to me. Seems convenient.

I guess my main disagreement with this piece is that I think core EAs do a pretty good job:

  • Decisions generally seem pretty good to me (though I guess I am pretty close to the white-male-etc. archetype, but still). I don't think community decision-making would have fixed FTX, the FLI issue, or the Bostrom email. In fact, the criticism of the CEA statement came more from the white-male EA side, I guess.
  • You want people empowered to take quick decisions who can be trusted based on their track record. 
  • I wish there was more understanding of community opinion and discussing with the community. I have long argued for a kind of virtual copy of many EA orgs that the community can discuss and criticise. 
9
freedomandutility
For the FLI issue, I think we can confidently say more democratic decision-making would have helped. Most EAs would probably have thought we should avoid touching a neo-Nazi newspaper with a ten-foot pole.

Oh okay, but grant level democratisation is a huge step.

9
MichaelStJules
On the other hand, more decentralized grantmaking, i.e. giving money to more individuals to regrant, increases the risk of individuals funding really bad things (the unilateralist's curse). I suppose we could give regranters money to regrant and allow a group of people (maybe all grantmakers and regranters together) to vote to veto grants, say with a limited number of veto votes per round, possibly after flagging specific grants and debating them. However, this would increase the required grantmaking-related work, possibly significantly.
5
Jason
I think some sort of community-involved decision-making could have reduced the risk of the FLI situation. The community involvement could be on the process side instead of, or in addition to, the substantive side. Although there hasn't been any answer on how much of a role the family member of FLI's president played in the grant, the community could have pushed for the adoption of strong rules surrounding conflicts of interest. Another model would be a veto jury empowered to nix tentatively approved grants that seemed poorly justified, too risky for the expected benefit, or otherwise problematic. Even if the veto jury had missed the whole neo-Nazi business, it very likely would have thrown this grant out for being terrible for other reasons.
6
Nathan Young
I think that would hugely slow down the process. 
1
Jason
I don't think that is necessarily correct. We know the FLI grant was "approved" by September 7 and walked back sometime in November, so there was time for a veto-jury process without delaying FLI's normal business process. I am ordinarily envisioning a fairly limited role for a veto jury -- the review would basically be for what US lawyers call "abuse of discretion" and might be called in less technical jargon a somewhat enhanced sanity check. Cf. Harman v. Apfel, 211 F.3d 1172, 1175 (9th Cir. 2000) (noting reversal under abuse of discretion standard is possible only “when the appellate court is convinced firmly that the reviewed decision lies beyond the pale of reasonable justification under the circumstances”). It would not be an opportunity for the jury to comprehensively re-balance the case for and against funding the grant, or merely substitute its judgment for that of the grantmaker. Perhaps there would be defined circumstances in which veto juries would work with a less deferential standard of review, but it ordinarily should not take much time to determine that there was no abuse of discretion.
-4
Guy Raveh
Why's that bad?
8
Pablo
Is it really that hard to think of reasons why a faster process may be better, ceteris paribus, than a slower process?
-2
Guy Raveh
The "ceteris paribus" is the key part here, and I think in real life fast processes for deciding on huge sums of money tend to do much worse than slower ones.

If someone says that A is worse than B because it has a certain property C, you shouldn't ask "Why is C bad?" if you are not disputing the badness of C. It would be much clearer to say, "I agree C is bad, but A has other properties that make it better than B on balance."

2
Guy Raveh
Re: FTX - I'm not sure what would have "fixed FTX", but I do think there were decisions here that made an impact. I don't think EA is to blame for the criminal activities, but we did platform SBF a lot (and listened to him talk about things he had no idea about), and sort of had a personality cult around him (like we still do around Will, for example). You can see this from comments of people saying how disappointed they were and how much they had looked up to him. So there are some cultural changes that would have had a chance of making things better: putting less emphasis on individuals and more on communities; leaving hard domain-knowledge questions to experts; being wary of crypto, as a technology that in practice involves a lot of immoral action from all the big actors.
-16
Guy Raveh

It will take a while to break all of this down, but in the meantime, thank you so much for posting this. This level of introspection is much appreciated.

1
Stone
Most of the content here is amalgamated from winning entries in the EA criticism contest last year, and it rarely cites the original authors.

I strongly agree with this particular statement from the post, but have refrained from stating it publicly before, out of concern that it would reduce my access to EA funding and spaces.

EAs should consciously separate:

  • An individual’s suitability for a particular project, job, or role
  • Their expertise and skill in the relevant area(s)
  • The degree to which they are perceived to be “highly intelligent”
  • Their perceived level of value-alignment with EA orthodoxy
  • Their seniority within the EA community
  • Their personal wealth and/or power

I've been surprised how many researchers, grant-makers, and community organizers around me do seem to interchange these things. For example, I recently was surprised to hear someone who controls relevant funding and community space access remark to a group "I rank [Researcher X] as an A-Tier researcher. I don't actually know what they work on, but they just seem really smart." I found this very epistemically concerning, but other people didn't seem to.

I'd like to understand this reasoning better. Is there anyone who disagrees with the statement (i.e., disagrees that these factors should be consciously separated) who could help me understand their position?

I agree that it's important to separate out all of these factors, but I think it's totally reasonable for your assessment of some of these factors to update your assessment of others.

For example:

  • People who are "highly intelligent" are generally more suitable for projects/jobs/roles.
  • People who agree with the foundational claims underlying a theory of change are more suitable for projects/jobs/roles that are based on that theory of change.

For example, I recently was surprised to hear someone who controls relevant funding and community space access remark to a group "I rank [Researcher X] as an A-Tier researcher. I don't actually know what they work on, but they just seem really smart." I found this very epistemically concerning, but other people didn't seem to.

I agree that this feels somewhat concerning; I'm not sure it's an example of people failing to consciously separate these things though. Here's how I feel about this kind of thing:

  • It's totally reasonable to be more optimistic about someone's research because they seem smart (even if you don't know anything about the research).
  • In my experience, smart people have a pretty high rate of failing to do useful research (by researching
... (read more)

Thanks for the nuanced response. FWIW, this seems reasonable to me as well:

I agree that it's important to separate out all of these factors, but I think it's totally reasonable for your assessment of some of these factors to update your assessment of others.

Separately, I think that people are sometimes overconfident in their assessment of some of these factors (e.g. intelligence), because they over-update on signals that seem particularly legible to them (e.g. math accolades), and that this can cause cascading issues with this line of reasoning. But that's a distinct concern from the one I quoted from the post.

In my experience, smart people have a pretty high rate of failing to do useful research (by researching in an IMO useless direction, or being unproductive), so I'd never be that confident in someone's research direction just based on them seeming really smart, even if they were famously smart.

I've personally observed this as well; I'm glad to hear that other people have also come to this conclusion.

I think the key distinction here is between necessity and sufficiency. Intelligence is (at least with a certain threshold) necessary to do good technical research, but it ... (read more)

9
Guy Raveh
To strengthen your point - as an IMO medalist: IMO participation signifies some kind of intelligence for sure, and maybe even ability to do research in math (although I've had a professor in my math degree, also an IMO medalist, who disagreed), but I'm not convinced a lot of it transfers to any other kind of research.
4
Buck
Yeah, IMO medals definitely don't suffice for me to think it's extremely likely someone will be good at doing research.

Thanks for posting this. I have lots of thoughts about lots of things, which will take longer to think about. So I'll start with one of the easier questions.

 

Regarding peer review, you suggest:

  • EAs should place a greater value on scientific rigour
    • We should use blogposts, Google Docs, and similar works as accessible ways of opening discussions and providing preliminary thoughts, but rely on peer-reviewed research when making important decisions, creating educational materials, and communicating to the public
    • When citing a blogpost, we should be clear about its scope, be careful to not overstate its claims, and not cite it as if it is comparable to a piece of peer-reviewed research

Have you had any interaction with the academic peer review system? Have you seen some of the stuff that passes through peer review? I'm in favour of scientific rigour, but I don't think peer review provides it. In reality, my impression is that academia relies as much on name recognition and informal consensus mechanisms as we (the blogpost community) do. The only reason academia has higher standards (in some fields) is that these fields are older and have developed a consensus around what methods ar... (read more)

I did a PhD in theoretical physics, and I was not impressed by the peer review responses I got on my papers. They were almost always very shallow comments. Which is not surprising, given that, at least in that corner of physics, peer review is unpaid and unrecognised work.

 

If you've already done the work to make a high-quality paper, then peer review probably won't add much. But the point is actually to prevent poor-quality, incorrect research from getting through, and to raise the quality of publications as a whole.

My PhD was in computational physics, and yeah, the peer review didn't add much to my papers. But because I knew it was there, I put a ton of work into making sure each paper was error-free and every part of it was high quality. If I knew I could get the same reward of publication by being sloppy or lazy, I might be tempted to do that. I certainly put orders of magnitude more effort into my papers than I do into my blog posts.

I certainly don't think peer review is perfect, or that every post should be peer reviewed or anything. But I think that research that has to pass that bar tends to be superior to work that doesn't. 

This comment is pretty long, but TLDR: peer review and academia have their own problems, some similar to EA, some not. Maybe a hybrid approach works, and maybe we should consult with people with expertise in social organisation of science. 

To some extent I agree with this. Whilst I've been wanting more academic rigour in X-Risk for a while, peer review is certainly no panacea, though I think it is probably better than the current culture of deferring to blog posts as much as we do.

I think you are right that traditional academia really has its problems, and name recognition is still an issue there too (e.g. Nobel Prize winners are 70% more likely to get through peer review). Nonetheless, certainly in the field I have been in (solar geoengineering), name recognition and agreement with the 'thought leaders' are definitely less incentivised than in EA.

One potential response is to think that a balance between peer review, the current EA culture, and commissioned reports would be healthy. We could set up an X-Risk journal with some editors and reviewers who are a) dedicated to pluralism and b) willing to publish things that are methodologically sound irrespective of r... (read more)

Rethink Priorities occasionally pays non-EA subject matter experts (usually academics that are formally recognized by other academics as the relevant and authoritative subject-matter experts, but not always) to review some of our work. I think this is a good way of creating a peer review process without having to publish formally in journals. Though Rethink Priorities also occasionally publishes formally in journals.

Maybe more orgs should try to do that? (I think Open Phil and GiveWell do this as well.)

4
Linda Linsefors
Since you liked that thought, let me think out loud a bit more.

I think it's practically impossible to be rigorous without a paradigm. Old sciences have paradigms and mostly work well, but the culture is not kind to people trying to form ideas outside the paradigm, because that work is necessarily less rigorous. I remember an academic complaining about this on a podcast: they were taking a different approach within cognitive science and had trouble with peer review because they were not focused enough on measuring the standard things.

On the other hand, there is EA/LW-style AI safety research, where everyone talks about how pre-paradigmatic we are. Vague, speculative ideas without inferential depth get more appreciation and attention. By now there are a few paradigms, the clearest case being Vanessa's research, which almost no one understands. I think part of the reason her work is hard to understand is exactly because it is rigorous within-paradigm research: a specific proof within a specific framework, with both more details and more prerequisites. Reading pre-paradigmatic blogposts is like reading the first intro chapter of a textbook (which is always less technical), while the within-paradigm stuff is more like reading chapter 11, where you really have to have read the previous chapters. That makes it less accessible, especially since no one has collected the previous chapters for you, and the person writing it was not selected for their pedagogical skills.

Research has to start as pre-paradigmatic, but I think the dynamic described above makes it hard to move on: to pick some paradigm to explore and start working out the details. Maybe a field at some point needs to develop a culture of looking down on less rigorous work for any rigorous work to really take hold? I'm really not sure, and I don't want to lose the explorative part of EA/LW-style AI safety research either. Possibly rigour will just develop naturally over time?

End of speculation.
2
Gideon Futerman
I think this is pretty interesting, and thanks for sharing your thoughts! There are things here I agree with, things I disagree with, and I might say more when I'm on my computer rather than my phone! However, I'd love to call about this to talk more, and see
1
Linda Linsefors
Is there a recording? I'm always happy to offer my opinions. Here's my email: linda.linsefors@gmail.com
2
Gideon Futerman
There is; it should be on the CEA YouTube channel at some point. It is also a forum post: https://forum.effectivealtruism.org/posts/cXH2sG3taM5hKbiva/beyond-simple-existential-risk-survival-in-a-complex#:~:text=It sees the future as,perhaps at least as important.
3
freedomandutility
FWIW I don’t think it’s usually very hard to find people with the right expertise. You’d just need to look for author names on a peer reviewed paper / look at who they cite / look at a university’s website / email a prof and ask them to refer you to an expert.

I liked this post, but I'm interested in hearing from someone who disagrees with the authors: Do you think it would be a bad idea to try these ideas, period? Or do you just object to overhauling existing EA institutions?

There were a few points at which the authors said "EA should..." and I'm wondering if it would be productive to replace that with "We, the authors of this post, are planning to... (and we want your feedback on our plans)"

I suppose the place to start would be with some sort of giving circle that operates according to the decisionmaking processes the authors advocate. I think this could generate a friendly and productive rivalry with EA Funds.

Implementing a lot of these ideas comes down to funding. The suggestions are either about distributing money (in which case you need money to hand out) or about things that will take a lot of work, in which case someone needs to be paid a salary. 

I also noticed that one of the suggestions was to get funding from outside EA. I have no idea how to fundraise, but anyone who knows how can just do that, and then use that money to start working down the list.

I don't think any suggestion to democratise OpenPhil's money will have any traction. 

9
Gideon Futerman
I think this hits the nail on the head. Funding is the issue; it always is. One thing I've been thinking about recently is that maybe we should break up OpenPhil, particularly the XRisk side (as they are basically the sole XRisk funder). This is not because I think OpenPhil is not great (afaik they are one of the best philanthropic funders out there), but because having essentially a single funder dictate everything that gets funded in a field isn't good, whether that funder is good or not. I wouldn't trust myself to run such a funding body either.
4
Linda Linsefors
What would this mean exactly? I assume OpenPhil has already split up different types of funding between different teams of people. So what would it mean in practice to split up OpenPhil itself? Making it into two legal entities? I don't think the number of legal entities matters. Moving the teams working on different problems to different offices?
7
Gideon Futerman
So OpenPhil is split into different teams, but I'll focus specifically on their grants in XRisk/longtermism. OpenPhil, either directly or indirectly, is essentially the only major funder of XRisk; most other funders essentially follow OpenPhil. Even though I think they are very competent, the fact that the field has one monolithic funder isn't great for diversity and creativity; indeed, I've heard a philosopher of science describe XRisk as one of the most hierarchical fields they have seen, in large part due to this. OpenPhil/Dustin Moskovitz have assets. They could break up into a number of legal entities with their own assets, some overlapping on cause area (e.g. 2 or 3 XRisk funders). You would want them to be culturally different: working from different offices, with people taking different approaches to XRisk, etc. This could really help reduce the hierarchy and lack of creativity in this field. Some other funding ideas/structures are discussed here: https://www.sciencedirect.com/science/article/abs/pii/S0039368117303278
5
Aptdell
Yep, that's why I suggested starting with a giving circle :-) Lots of people upvoted this post. Presumably some of them would be interested in joining. My guess would be that if the authors start a giving circle and it acquires a strong reputation within the community for giving good grants, OpenPhil/Dustin Moskovitz will become interested.
5
Aptdell
Along the same lines: The authors recommend giving every user equal voting weight on the EA Forum. There is a subreddit for Effective Altruism which has this property. I'll bet some of the authors of this post could become mods there if they wanted. Also, people could make posts on the subreddit and cross-post them here.
4
MugaSofer
I agree that I would be massively more in favour of basically all of these proposals if they were proposed to be tried in parallel with, rather than instead of or as a "fix" for, current EA approaches. Even the worst of them I'd very much welcome seeing tried.

Thanks for the time you’ve put into trying to improve EA, and it’s unfortunate that you feel the need to do so anonymously!

Below are some reactions, focused on points that you highlighted to me over email as sections you’d particularly appreciate my thoughts on.

On anonymity - as a funder, we need to make judgments about potential grantees, but want to do so in a way that doesn't create perverse incentives. This section of an old Forum post summarizes how I try to reconcile these goals, and how I encourage others to. When evaluating potential grantees, we try to focus on what they've accomplished and what they're proposing, without penalizing them for holding beliefs we don't agree with.

  • I understand that it’s hard to trust someone to operate this way and not hold your beliefs against you; generally, if one wants to do work that’s only a fit for one source of funds (even if those funds run through a variety of mechanisms!), I’m (regretfully) sympathetic to feeling like the situation is quite fragile and calls for a lot of carefulness.
  • That said, for whatever it’s worth, I believe this sort of thing shouldn’t be a major concern w/r/t Open Philanthropy funding; “lack of ou
... (read more)
1
Noah Scales
Do you have specific concerns about how the capital is spent? That is, are you dissatisfied and looking to address concerns that you have or to solve problems that you have identified? I'm wondering about any overlap between your concerns and the OP's. I'd be glad for an answer or just a link to something written, if you have time.
1[anonymous]
What's best for spending Cari and Dustin's financial capital may not be what's best for the human community made up of EAs. One could even argue that the human capital in the EA community is roughly on par with or even exceeds the value of Good Ventures' capital. Just something to think about. 
-5
A.C.Skraeling

This post has convinced me to stay in the EA community. If I could give all the votes I have given my own writings to this post, I would. Many of the things in this post I've been saying for a long time (and have been downvoted for), so I'm happy to see that it has received at least a somewhat positive reaction.

To add to what this post outlines: while the social sciences are often ignored in the EA community, one notable exception is (orthodox) economics. I find it ironic that one of the few fields where EAs are willing to look outside their own insular culture is itself extremely insular. Other disciplines like philosophy, political science, history, sociology, and gender studies make serious attempts to integrate with one another, so that learning about one discipline also teaches you about the others. Economists, meanwhile, have a tendency to see their discipline as better than the rest, starting papers with things like:

Economics is not only a social science, it is a genuine science. Like the physical sciences, economics uses a methodology that produces refutable implications and tests these implications using

... (read more)

Personally I think this problem boils down to how effective altruism represents itself, and how it is actually governed.

For instance in my own case, I became interested in effective altruism and started getting more involved in the space with the idea that it was a loose collection of aligned, intelligent people who want to take on the world’s most pressing problems. Over time, I’ve realized that, like the post mentions, effective altruism is in fact quite hierarchical and not disposed to giving people a voice based solely on the amount of time and effort they put into the movement.

Admittedly, this is a pretty naive view to take when going into any social movement.

While I am sympathetic to the arguments that a small tightly knit group can get things done more quickly and more efficiently, there is a bit of a motte and bailey going on between the leadership of effective altruism and the recruiting efforts. From my perspective a lot of new folks that join the movement are implicitly sold on a dream, commit time and energy, then are not given the voice they feel they’ve earned.

Whether or not a more democratic system would be more effective, I still think many of the internal problems that have been surfacing recently would be fixed with better communication within effective altruism about how we make decisions and who has influence.

9
Chris Leong
I can see why investing time and effort and then not receiving as much influence as you would like could be frustrating. At the same time, I guess I've always taken it for granted that I would need to be persuasive too. Sometimes I write things and people like it; other times I write things and people really don't. I sometimes feel that my ideas don't get as much attention as they should, but I imagine that most people think their ideas are great as well, so I guess I accept that if I had a biased view of how good my ideas are, it wouldn't necessarily feel that way. So I guess I'm suggesting that it might make sense to temper your expectations somewhat. I definitely think that we should experiment with more ways of ensuring that the best ideas float to the top. I really appreciate the recent red-teaming competition and cause exploration competition; I think the AI worldviews competition is great as well. Obviously, these aren't perfect, but we're doing better than we were before, and I expect we'll do better still as we iterate on these.
9
Wil Perkins
> I guess I've always taken it for granted that I would need to be persuasive too

I don't mind having to be persuasive; my problem is that EA leadership is not available or open to hearing arguments. It doesn't matter how persuasive one is if you can't get into EAG, or break into the narrow social circles that the high-powered EAs hang out in. Looking at Buck's comment above, he makes it clear that the leadership doesn't take EA Forum comments or arguments here seriously, which is fair, as they are busy. I think we need better mechanisms to surface criticisms to decision makers overall.
6
Michael_PJ
I'm not sure I agree with this? I think "EA leadership" probably isn't that open to arguments from unknown people. But if you show up and say sensible things you pretty quickly get listened to. I think that's about as good as we can hope for: we can't expect busy people to listen to every random piece of input; and it's not that unreasonable to expect people to show up and do some good work before they get listened to.
2
Chris Leong
I didn’t quite read him as saying that he didn’t take forum posts seriously, just that this one wasn’t really written to engage people who disagreed with its ideas. But we should definitely figure out whether there are any better mechanisms for floating ideas to the top.
-1
Jonathan Claybrough
My take on Buck's comment is that he didn't update from this post because it's too high level and doesn't actually argue for most of its object-level proposals. I have a similar reaction to Buck: I evaluate a lot of the proposals to be pretty bad, and since they haven't been argued for much, I don't feel much like arguing against them. I think Buck was pretty helpful in saying (what I interpret to mean) "I would be able to reply more if you argued for object-level suggestions and engaged more deeply with the suggestions you're bringing."

EA seems to have a bit of a "not invented here" problem, of not taking onboard tried and tested mechanisms from other areas. E.g. with the boring standard conflict of interest and transparency mechanisms that are used by charitable organisations in developed countries. 

Part of this seems to come from only accepting ideas framed in certain ways, and fitting cultural norms of existing members. (To frame it flippantly, if you proposed a decentralised blockchain based system for judging the competence of EA leaders you'd get lots of interest, but not if you suggested appointing non-EA external people to audit.)

There might be some value to posts taking existing good practices in other domains and presenting them in ways that are more palatable to the EA audience, though ideally you wouldn't need to.

3
Nathan Young
I agree, but I think this is a hard problem for everyone, right? I don't know that any community can just fix it.

Background
First, I want to say that I really like seeing criticism that's well organized and presented like this. It's often not fun to be criticized, but the much scarier thing is for no one to care in the first place. 

This  post was clearly a great deal of work, and I'm happy to see so many points organized and cited. 

I obviously feel pretty bad about this situation, where several people felt they had to do this in secret in order to feel safe. I think tensions around these issues feel much more heated than I'd like them to. Most of the specific points and proposals seem like things that, in a slightly different world, all sides could feel much more chill discussing.

I'm personally in a weird position, where I don't feel like one of the main EAs who make decisions (outside of maybe RP), but I've been around for a while and know some of them. I did some grantmaking, and now am working on an org that tries to help figure out how to improve community epistemics (QURI).

Some Quick Impressions
I think one big division I see in discussions like this, is that between:

  1. What's in the best interest of EA leadership/funding, conditional on them not dramatically changing t
... (read more)
6
dan.pandori
Do you think that group bargaining/voting in EA would be a good thing for funding/prioritization? I personally like the current approach that has individual EAs and orgs make their own decisions on what is the best thing to do in the world. For example, I would be unlikely to fund an organization that the majority of EAs in a vote believed should be funded, but I personally believed to be net harmful. Although if this situation were to occur, I would try to have some conversations about where the wild disagreement was stemming from.

I think there's probably a bunch of different ways to incorporate voting. Many would be bad, some good. 

Some types of things I could see being interesting:

  • Many EAs vote on "Community delegates" that have certain privileges around EA community decisions.
  • There could be certain funding groups that incorporate voting, roughly in proportion to the amounts donated. This would probably need some inside group to clear funding targets (making sure they don't have any confidential baggage/risks) before getting proposed.
  • EAs vote directly on new potential EA Forum features / changes.
  • We focus more on community polling, and EA leaders pay attention to these. This is very soft, but could still be useful.
  • EAs vote on questions for EA leaders to answer, in yearly/regular events.
3
dan.pandori
I'd be interested to see some of those tried for sure! I imagine you'd also likely agree that these proposals tradeoff against everything else that the EA orgs could be doing, and it's not super clear any are the best option to pursue relative to other goals right now.
3
Ozzie Gooen
Of course. Very few proposals I come up with are a good idea for myself, let alone others, to really pursue. 

I think I'm probably sympathetic to your claims in "EA is open to some kinds of critique, but not to others", but I think it would be helpful for there to be some discussion around Scott Alexander's post on EA criticism. In it, he argued that "EA is open to some kinds of critique, but not to others" was an inevitable "narrative beat", and that "shallow" criticisms which actually focus on the more actionable implications hit closer to home and are more valuable.

I was primed to dismiss your claims on the basis of Scott Alexander's arguments, but on closer consideration I suspect that might be too quick. 

I feel it would be easier for me to judge this if someone (not necessarily the authors of this post) provided some examples of the sorts of deep critiques (e.g. by pointing to examples of deep critiques made of things other than EA). The examples of deep critiques given in the post did help with this, but it's easier to triangulate what's really meant when there are more examples.

7
Linda Linsefors
I also remember Scott's post, and already when reading it I thought that the "next narrative beat" argument was bad. The reason it is the next narrative beat is that it is almost always true. If I say that the sun will rise tomorrow, and you respond, "but you expect the sun to rise every day, you have to give a specific argument for this day in particular", that doesn't make sense.

I think it's more or less true that "EA is open to some kinds of critique, but not to others", but I don't think the two categories exactly line up with deep vs. shallow critique.

My current model is that powerful EAs are mostly not open to critique at all; they pretend to welcome it for PR reasons but mainly ignore it. As long as your critique is polite enough, everyone involved will pretend to appreciate it, but if you cross the line into hurting anyone's feelings (a line which is individual and hard to predict), there will be social and professional consequences.

My model might be completely wrong. It's hard to know, given the opaqueness around EA power. I have offered critique, and there has never been any dialogue or noticeable effect.

5
projectionconfusion
My own observation has been that people are open to intellectual discussion (your discounting formula is off for x reasons) but not to more concrete practical criticism, or criticism that talks about specific individuals. 
7
Linch
That was also Scott Alexander's point if I understood it correctly.
2
Denkenberger🔸
I don't think that is correct, because the orthodoxy has changed due to powerful EAs changing their minds: switching to the high-fidelity model, away from earning to give, towards longtermism, and towards more policy.
5
DirectedEvolution
I think he's arguing that you should have a little "fire alarm" in your head for when you're regurgitating a narrative. Even if it's 95% correct, that act of regurgitation is a time when you're thinking less critically and it's a perfect opportunity for error to slip through. Catching those errors has sufficiently high value that it's worth taking the time to stop and assess, even if 19 out of 20 times you decide your first thought was correct.
4
Denkenberger🔸
As I and another said elsewhere, I think Holden’s is an example. And I think Will questioning the hinge of history would qualify as a deep critique of the prevailing view in X risk. 

I think one of the reasons I loved this post is that my experience of reading it echoed, in an odd way, my own personal journey within EA. I remember thinking even at the start of EA that there was a lack of diversity and a struggle to accept "deep critiques". Mostly this did not affect me – until I moved into an EA longtermist role a few years ago. Finding existing longtermist research to be lacking for the kind of work I was doing, I turned to the existing disciplines on risk (risk management, deep uncertainty, futures tools, etc.). Next thing I knew, a disproportionately large amount of my time was being sunk into trying, and failing, to get EA thinkers and funders to take seriously governance issues and those aforementioned risk disciplines. Ultimately I gave up and ended up partly switching away from that kind of work. Yet despite all this, I still find the EA community to be the best place for helping me mend the world.

 

I loved your post, but I want to push back on one thing – these problems are not only on the longtermist side of EA. Yes, neartermist EA is epistemically healthier (or at minimum currently having fewer scandals), but there are still problems and we should still ... (read more)

Thanks for this thoughtful and excellently written post. I agree with the large majority of what you had to say, especially regarding collective vs. individual epistemics (and more generally on the importance of good institutions vs. individual behavior), as well as concerns about insularity, conflicts of interest, and underrating expertise and overrating "value alignment". I have similarly been concerned about these issues for a long time, but especially concerned over the past year.

I am personally fairly disappointed by the extent to which many commenters seem to be dismissing the claims or disagreeing with them in broad strokes, as they generally seem true and important to me. I would value the opportunity to convince anyone in a position of authority in EA that these critiques are both correct and critical to address. I don't read this forum often (was linked to this thread by a friend), but feel free to e-mail me (jacob.steinhardt@gmail.com) if you're in this position and want to chat.

Also, to the anonymous authors, if there is some way I can support you please feel free to reach out (also via e-mail). I promise to preserve your anonymity.
 

5
Jason
Without defending all of the comments, I think some amount of "disagreeing  . . . in broad strokes" is an inevitable consequence of publishing all of this at once. The post was the careful work of ten people over an extended period of time (most was written pre-FTX collapse). For individuals seeking to write something timely in response, broad strokes are unavoidable if one wants to address key themes instead of just one or two specific subsections. I hope that, when ConcernedEAs re-post this in smaller chunks, there will be more specific responses from the community in at least some places.

We should probably read more widely

Summary: EA reading lists are typically narrow, homogenous, and biased, and EA has unusual social norms against reading more than a handful of specific books. Reading lists often heavily rely on EA Forum posts and shallow dives over peer-reviewed literature. EA thus remains intellectually insular, and the resulting overconfidence makes many attempts by external experts and newcomers to engage with the community exhausting and futile. This gives the false impression that orthodox positions are well-supported and/or difficult to critique.

Another option I like is to have the wiki include pages on "feminism",  "psychology" etc with summaries written in EA language of things people have found most valuable. I would read those.

You argue that funding is centralised much more than it appears. I find myself learning that this is the case more and more over time. 

I suspect it probably is good to decentralise to some degree, however there is a very real downside to this:

  • some projects are dangerous and probably shouldn't happen
  • the most dangerous of those are ones run by a charismatic leader and appear very good
  • if we have multiple funders who are not "informally centralised" (i.e. talking to each other) then there's a risk that dangerous projects will have multiple bites at the cherry, and with enough different funders, someone will fund them

I appreciate that there are counters to this, and I'm not saying this is a slam-dunk argument against decentralisation.

8
Gideon Futerman
One thing I think is that decentralised funding will probably make things like the FLI affair more likely. On the other hand, if this is happening already, and there are systematic biases anyway, and there is a reduction in creativity, it's a risk I'm willing to take. Lottery funding and breaking up funders into a few more bodies (e.g. 5-10 rather than roughly the same 2 or so) is what I'm most excited about, as these seem to reduce some of the risk whilst keeping a lot of the benefits.
3
freedomandutility
As always, I’d say we should view things on a spectrum, and criticism of centralisation should be viewed as advocacy for less centralisation rather than rejecting centralisation entirely.
2
Jason
It seems that the weight of that downside would vary significantly by cause area.
1
Linda Linsefors
I think this is a real problem, and I think the solution is more open discussion. Encourage people to publicise what projects they plan to do, and let anyone critique them in an open discussion. This will catch more problems and help improve projects. Over-centralised funding has too many bad side effects. It's not worth it.

I appreciate the extent of thoughtful consideration that has been put into this post. I looked at the list of proposed reforms to consider which ones I should implement in my (much smaller) organisation.

I currently find it very difficult to weigh the benefits and costs of this entire list. I understand that a post shouldn't be expected to do everything at once. But I would really appreciate it if someone explained which of these specific policies are standard practices in other contexts like universities/political parties/other NGOs.

Small point:
> Finally, we ask that people upvote or downvote this post on the basis of whether they believe it to have made a useful contribution to the conversation, rather than whether they agree with all of our critiques.

I think this presents a false dilemma, and recommends what seems like an unusual standard that other posts aren't held to.

"believe it to have made a useful contribution to the conversation" -> This seems like arguably a really low bar to me. I think that many posts, even bad ones, did something useful to the conversation. 

"whether they agree with all of our critiques." -> I never agree with all of basically any post. 

I think that more fair standards of voting would be things more like:
"Do I generally agree with these arguments?"
"Do I think that this post, as a whole, is something I want community members to pay attention to, relative to other posts?"

Sadly we don't yet have separate "vote vs. agreement" markers for posts, but I think those would be really useful here.

More specifically, EA shows a pattern of prioritising non-peer-reviewed publications – often shallow-dive blogposts[36] – by prominent EAs with little to no relevant expertise.

This is my first time seeing the "climate change and longtermism" report at that last link. Before having read it, I imagined the point of having a non-expert "value-aligned" longtermist applying their framework to climate change would be things like

  • a focus on the long-run effects of climate change
  • a focus on catastrophic scenarios that may be very unlikely but difficult to model or quantify

Instead, the report spends a lot of time on

  • recapitulation of consensus modeling (to be clear, this is a good thing that's surprisingly hard to come by), which mainly goes out to 2100
  • plausible reasons models may be biased towards negative outcomes, particularly in the most likely scenarios

The two are interwoven, which weakens the report even as a critical literature review. When it comes to particular avenues for catastrophe, the analysis is often perfunctory and dismissive. It comes off less as a longtermist perspective on climate change than as having an insider evaluate the literature because only "we" can be trusted to r... (read more)

Context: I've worked in various roles at 80,000 Hours since 2014, and continue to support the team in a fairly minimal advisory role.

Views my own.

I agree that the heavy use of a poorly defined concept of "value alignment" has some major costs.

I've been moderately on the receiving end of this one. I think it's due to some combination of:

  1. I take Nietzsche seriously (as Derek Parfit did).
  2. I have a strong intellectual immune system. This means it took me several years to get enthusiastically on board with utilitarianism, longtermism and AI safety as focus areas. There's quite some variance on the speed with which key figures decide to take an argument at face value and deeply integrate it into their decision-making. I think variance on this dimension is good—as in any complex ecosystem, pace layers are important.
  3. I insisted on working mostly remotely.
  4. I've made a big effort to maintain an "FU money" relationship to EA community, including a mostly non-EA friendship group.
  5. I am more interested in "deep" criticism of EA than some of my peers. E.g. I tweet about Peter Thiel on death with dignity, Nietzsche on EA, and I think Derek Parfit made valuable contributions but was not one of the Great
... (read more)

Reminder for many people in this thread:

"Having a small clique of young white STEM grads creates tons of obvious blindspots and groupthink in EA, which is bad."

is not the same belief as

"The STEM/techie/quantitative/utilitarian/Pareto's-rule/Bayesian/"cold" cluster-of-approaches to EA, is bad."

You can believe both. You can believe neither. You can believe just the first one. You can believe the second one. They're not the same belief.

I think the first one is probably true, but the second one is probably false.

Thinking the first belief is true, is nowhere near strong enough evidence to think the second one is also true.

(I responded to... a couple similar ideas here.)

This post is much too long and we're all going to have trouble following the comments.

It would be much better to split this up and post as a series. Maybe do that, and replace this post with links to the series?

Yup, we're going to split it into a sequence (I think it should be mentioned in the preamble?)

Thanks for all the care and effort which went into writing this!

At the same time, while reading, my reactions were most of the time "this seems a bit confused", "this likely won't help" or "this seems to miss the fact that there is someone somewhere close to the core EA orgs who understands the topic pretty well, and has a different opinion".

Unfortunately, illustrating this in detail for the whole post would be a project for... multiple weeks.

At the same time, I thought it could be useful to discuss at least one small part in detail, to illustrate what the actual in-the-detail disagreement could look like.

I've decided to write a detailed response to a few paragraphs about rationality and Bayesianism. This is, from my perspective, not a cherry-picked part of the original text that is particularly wrong, but a part that seems representatively wrong/confused. I picked it for convenience, because I can argue and reference this part particularly easily.
 

Individual Bayesian Thinking (IBT) is a technique inherited by EA from the Rationalist subculture, where one attempts to use Bayes’ theorem on an everyday basis. You assign each of your beliefs a numerical probability of being

... (read more)
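For readers unfamiliar with the technique the quoted passage describes, a minimal sketch of a single Bayesian update may help. The numbers here are invented purely for illustration (they are not taken from the post or the quoted report); the function simply applies Bayes' theorem, P(H|E) = P(E|H)·P(H) / P(E):

```python
def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Return the posterior P(H | E) via Bayes' theorem.

    prior                 -- P(H), belief before seeing the evidence
    p_evidence_given_h    -- P(E | H), likelihood if the hypothesis is true
    p_evidence_given_not_h -- P(E | not H), likelihood if it is false
    """
    # Total probability of the evidence: P(E) = P(E|H)P(H) + P(E|¬H)P(¬H)
    p_evidence = (p_evidence_given_h * prior
                  + p_evidence_given_not_h * (1 - prior))
    return p_evidence_given_h * prior / p_evidence

# Illustrative numbers: prior belief of 0.30, and evidence that is
# four times likelier if the hypothesis is true (0.8 vs. 0.2).
posterior = bayes_update(0.30, 0.8, 0.2)
print(round(posterior, 3))  # 0.632
```

Whether practitioners of "Individual Bayesian Thinking" apply this calculation explicitly or only approximate it in their heads is precisely what the surrounding discussion disputes; the sketch is only meant to pin down the mechanics being referred to.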

I agree that we shouldn't pretend to be particularly good at self-criticism. I don't think we are. We are good at updating numbers, but I have given criticism to orgs that wasn't acted on for years before someone said it was a great idea. Honestly, I'd have preferred if they had just told me they didn't want criticism, rather than saying they did and then ignoring it.

I think EA is better than most movements at self criticism and engaging with criticism.

I think many EAs mistake this for meaning that EA is “good” at engaging with criticism.

I think EA is still very bad at engaging with criticism, but other movements are just worse.

6
freedomandutility
I’ll add that EAs seem particularly bad at steelmanning criticisms. (eg - if a criticism doesn’t explicitly frame ideas on a spectrum and discuss trade offs, the comments tend to view the ideas as black and white and reject the criticisms because they don’t like the other extreme of the spectrum)
8
Ozzie Gooen
By chance, can you suggest any communities that you think do a good job here?  I'm curious who we could learn from. Or is it like, "EAs are bad, but so are most communities." (This is my current guess at what I believe)
4
freedomandutility
Good question. The only other communities I know well are socialist + centre left political communities, who I think are worse than EA at engaging with criticism. So I’d say EA is better than all communities that I know of at engaging with criticism, and is still pretty bad at it. In terms of actionable suggestions, I’d say tone police a bit less, make sure you’re not making isolated demands for rigour, and make sure you’re steelmanning criticisms as you read, particularly via asking whether you’d sympathise with a weaker version of the claim, and via the reversal test. Sorry yes, essentially “EAs are bad, but so are most communities." But importantly we shouldn’t just settle for being bad, if we want to approximately do the most good possible, we should aim to be approximately perfect at things, not just better than others.
6
Ozzie Gooen
Thanks! I definitely agree that improvement would be really great. If others reading this have suggestions of other community examples, that would also be appreciated!

It's a shame that you feel you weren't listened to.

However, in general I think we should be wary of concluding that the criticism was "ignored" just because people didn't immediately do what you suggested.

If you ask for criticism, you'll probably get dozens of pieces of criticism. It's not possible to act on all of it, especially since some pieces of criticism contradict each other. Furthermore, it often takes time to process criticism. For example, someone criticises you because they think the world is like X while you think the world is like Y. Since your model of the world is Y, you continue on that path, but over time you start to realise that the evidence points more towards the world being like X, and eventually you update to that model. The update might well have taken longer if you hadn't received that earlier feedback that someone thought the world looked like X.

I think criticism is really complicated and multifaceted, and we have yet to develop nuanced takes on how it works and how to best use it. (I've been doing some thinking here).

I know that orgs do take some criticism/feedback very seriously (some get a lot of this!), and also get annoyed or ignore a lot of other criticism. (There's a lot of bad stuff, and it's hard to tell what has truth behind it).

One big challenge is that it's pretty hard to do things. Like, it's easy to suggest, "This org should do this neat project", but orgs are often very limited in what things they could do at all, let alone what unusual things or things they aren't already thinking about and good at they could do.

There's definitely more learning to do here.

I find it very concerning/disappointing that this post has so many downvotes. Like the authors said, don't use upvotes/downvotes to indicate agreement/disagreement!

I strongly upvoted this post because I think it's very valuable and I highly appreciate the months of effort put into it. Thanks for writing it! I don't know whether I agree or not on most proposals, but I think they're interesting nonetheless.

Personally I'm most skeptical of some of the democratization proposals. Like how would you decide who can vote? And I think it would drastically slow down grant making etc., making us less effective. Others have already better worded these concerns elsewhere.

In general I would love to see most of these ideas being tried out, either incrementally (so we can easily revert) or as small experiments – even ideas I'm only, say, 30% sure will work. If the ideas fail, then at least we'll have that information.

I very much agree that we should rely more on other experts and try to reinvent the wheel less.
I had indeed never heard of the fields you mentioned before, which is sad.

I didn't know your votes on the forum had more power the more karma you had! I'm really surprised. I was ... (read more)

7
Chris Leong
I didn't downvote this post (nor did I upvote it), but I can understand why people might have downvoted. I can also understand people upvoting it given the clear effort involved. I imagine that the people who downvoted it dislike proposals that don't engage very strongly with the reasons why the proposals could actually be bad. I see this as reasonable, although I do expect most people have some bias here where they apply this expectation more strongly to posts they disagree with. I expect that I probably have this bias as well.

This reads more like a list of suggestions than an argument. You're making twenty or thirty points, but as far as I've read (I admit I didn't get through the whole thing), not giving any one of them the level of argumentation that would unpack the issue for a skeptic. There's a lot that I could dispute, but I feel like arguing things here is a waste of time; it's just not a good format to discuss so many issues in a single comment section.

I will mention one thing:

The EA community is notoriously homogenous, and the “average EA” is extremely easy to imagine: he is a white male[9] in his twenties or thirties from an upper-middle class family in North America or Western Europe. He is ethically utilitarian and politically centrist; an atheist, but culturally protestant. He studied analytic philosophy, mathematics, computer science, or economics at an elite university in the US or UK. He is neurodivergent. He thinks space is really cool. He highly values intelligence, and believes that his own is significantly above average. He hung around LessWrong for a while as a teenager, and now wears EA-branded shirts and hoodies, drinks Huel, and consumes a narrow range of blogs, podcasts, and v

... (read more)

I would appreciate a TL;DR of this article, and I am sure many others would too! It helps me to decide if it's worth spending more time digging into the content. 

It was even too long for chatGPT to summarize 🫠

0
WilliamKiely
I second this. FWIW I read from the beginning through What actually is "value-alignment"? then decided it wasn't worth reading further and just skimmed a few more points and the conclusion section. I then read some comments. IMO the parts of the post I did read weren't worth reading for me, and I doubt they're worth reading for most other Forum users as well. (I strong-downvoted the post to reflect this, though I'm late to the party, so my vote probably won't have the same effect on readership as it would have if I had voted on it 13 days ago).

Thank you so much for writing this, it feels like a breath of fresh air. There are a huge number of points in here that I strongly agree with. A lot of them I was thinking of writing up into full length posts eventually (I still might for a few), but I was unsure if anyone would even listen. This must have taken an immense amount of effort and even emotional stress, and I think you should be proud. 

I think if EA ever does break out of its current mess, and fulfills its full potential as a broad, diverse movement, then posts like this are going to be the reason why.

(It feels important to disclaim before commenting that I'm not an EA, but am very interested in EA's goal of doing good well.)

Thank you!! This post is a breath of fresh air and makes me feel hopeful about the movement (and I'm not just saying that because of the Chumbawamba break in the middle). In particular I appreciated the confirmation that I'm not the only person who has read some of the official EA writings regarding climate change and thought (paraphrasing here) "?!!!!!!!!!"

I know this cannot have been trivial or relaxing to write. A huge thank you to the authors. I really hope that your suggestions are engaged with by the community with the respect that they deserve.

We should use blogposts, Google Docs, and similar works as accessible ways of opening discussions and providing preliminary thoughts, but rely on peer-reviewed research when making important decisions, creating educational materials, and communicating to the public

Bold type mine. No: I think peer review is cumbersome, and using it would slow work down a lot. Are there never mistakes in peer reviewed science? No, there are. I think we should aim to build better systems of peer review.

My take is different. As a working scientist/engineer in the hard sciences, I use peer-reviewed research when possible, but I temper that with phone calls to companies, emails to other labs, posts on ResearchGate, informal conversations with colleagues, and of course my own experiments, mechanistic models, and critical thinking skills. Peer-reviewed research is nearly always my starting point because it's typically more information and data-rich and specific than blog posts, and because the information I need is more often contained within peer-reviewed research than in blog posts.

That said, there are a lot of issues and concerns raised when blog posts are leaned on too heavily as a source (although here that's very much the pot calling the kettle black, with most of the footnotes being the author's own unsourced personal opinions). When people lean too heavily on blog posts, it may indicate that they're unfamiliar with the scientific literature relevant to the issue, and that they themselves have mostly learned about the topic by consuming other blog posts. Also, a compelling post that's full of blog post links (or worse, unsourced claims) gives the interested reader little opportunity to check the underpinnings of the argument or get connected with working scientists in the field.

I'm fine with using the medium of blog posts to convey an idea, or of citing blog posts in specific circumstances. Where a peer-reviewed source is available, I think it's better to either use that, or to cite it and give the blog post as an accessible alternative.

Are there never mistakes in peer reviewed science? No, there are

The question isn't "are there zero mistakes", the question is, "is peer reviewed research generally of higher quality than blogposts?". To which the answer is obviously yes (at least in my opinion), although the peer review process is cumbersome and slow, and so will have less output and cover less area. 

When there are both peer reviewed research and blogposts on a subject matter, I think the peer-reviewed research will be of higher quality and more correct a vast majority of the time. 

-3
Nathan Young
Compared to EA blog posts weighted by karma? The answer is not obviously yes in my opinion. I think we'll fare better in the replication crisis. 
5
titotal
Upvotes on an internet forum are not a good replacement for peer review. I'm surprised I even have to argue for this, but here goes:

* The vast majority of people upvoting/downvoting are not experts in the topic of the blog post.
* The vast majority of upvoting/downvoting occurs before a blogpost has been thoroughly checked for accuracy. If there's a serious mistake in a blogpost, and it's not caught right away, almost no-one will see it.
* Upvoting/downvoting is mostly a response to the perceived effort of a post and to whether people personally agree with it.

Yes, peer review is flawed, but the response isn't to revert to blogposts, it's to build a better system.
2
Nathan Young
And yet argue it you shall. I think that peer review is so poor that probably just the forum alone produces work that is less in need of replication. I guess that's not really about the system. And yes, we should build a better system, but still. Peer review vs upvotes on journal sites, I would pick the latter.  Maybe we could discuss it in the comments of https://forum.effectivealtruism.org/topics/peer-review 

Epistemic status: Hot take

networking retreats taking place in the Bahamas

To me, FTX hosting EA events and fellowships in the Bahamas reeked of neocolonialism (which is a word I don't like to bandy about willy-nilly). 90.6% of The Bahamas' population are Black,[1] whereas the majority of EAs are White.[2] Relative to many countries in the Americas, The Bahamas has a superficially strong economy with a GDP (PPP) per capita of $40,274, but a lot of this economic activity is due to tourism and offshore companies using it as a tax haven,[1] and it's unclear to me how much of this prosperity actually trickles down to the majority of Bahamians. (According to UN data, The Bahamas has a Gini coefficient of 0.57, the highest in the Caribbean.[3]) Also, I've never heard anyone talk about recruiting Bahamians into the EA movement or EA orgs.

  1. ^ The Bahamas - Wikipedia

  2. ^ Every EA community survey ever, e.g. EA Survey 2020

  3. ^

I think it can sometimes feel a bit brutal to be downvoted with no explanation. I might say the Bahamas was glad to have FTX there, and it's kind of patronising to deny them that opportunity because of their poverty and make it worse, right?

I get the sense FTX was actually giving quite a lot to the Bahamas, though clearly not now, and it's also unclear how much of that was corruption.

5
freedomandutility
I disagree because I would only count something as neocolonialism if there was a strong argument that it was doing net harm to the local population in the interest of the ‘colonisers’.

I mean, it plausibly did cause net harm to the Bahamas in this case, even if that wasn't what people expected.

4
Aptdell
It seems to me that you'd be better off arguing that an event in the Bahamas causes harms to Bahamians directly, instead of drawing an analogy with colonialism. See The noncentral fallacy - the worst argument in the world? (I'm not trying to be dismissive -- I think there are ways to make this argument, perhaps something like: "observing a retreat full of foreigners will cause Bahamians to experience resentment and a reduced sense of self-determination; those are unpleasant things to experience, and could also cause backlash against the EA movement". My claim is just that talking about harms directly is a better starting point for discussion.)

I have lots of things I want to say, but I will not say them publicly, because I'm currently working for a project that is dependent on EA funding. I can't risk that. Even though I think this conversation is important, I think it is even more important that I can continue the work that I'm doing with that org, and similar orgs in similar situations. And I can't post anonymously, because I can't explain what I want to say without referring to specific anecdotes that will identify me. 

My sense (from the votes on this post) is that most of these reforms are not broadly popular. Which, while it doesn't undermine them in my opinion, does create a contradiction for the authors. 

9
Linch
The authors believe that the EA Forum is profoundly antidemocratic (e.g. because of the karma weighting and selection effects of who is on the forum), so I don't think they would consider upvotes to be particularly strong evidence of democratic will.

As it stands, EA neglects the importance of collective epistemics and overemphasises individual rationality, and as a result cultivates a community that is homogenous, hierarchical, and intellectually insular. EA is overwhelmingly white, male, upper-middle-class, and of a narrow range of (typically quantitative) academic backgrounds. EA reading lists and curricula over-emphasise a very narrow range of authors, which are assumed to be highly intelligent and thus to be deferred to. This narrows the scope of acceptable thought, and generates path dependencies and intellectual blind-spots.

 

Yeah seems pretty accurate. I guess I agree with this like 70%. 

EA is overwhelmingly white, male, upper-middle-class, and of a narrow range of (typically quantitative) academic backgrounds.

 

Though these characteristics are over represented in EA, I think one should be careful about claiming overall majorities. According to the 2020 EA survey, EA is 71% male and 76% white. I couldn’t quickly find the actual distribution of EA income, but eyeballing some graphs here and using $100,000 household income as a threshold (say $60,000 individual income) and $600k household upper bound (upper class is roughly the 1% top earners), I would estimate around one third of EAs would be upper middle class now. But I think your point was that they came from an upper-middle-class background, which I have not seen data on. I would still doubt it would be more than half of EAs, so let’s be generous and use that. Using your list above of analytic philosophy, mathematics, computer science, or economics, that is about 53% of EAs (2017 data, so probably lower now). If these characteristics were all independent, that would indicate the product of about 14% of EAs would have all these characteristics. Now there is likely positive correlation between these characteri... (read more)
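The back-of-the-envelope independence calculation above can be sketched as follows. The survey percentages are those quoted in the comment; the 50% upper-middle-class-background figure is the commenter's own generous assumption, not survey data:

```python
# Sketch of the commenter's independence calculation: if the four traits
# were statistically independent, the share of EAs with all of them
# would be the product of the individual shares.
shares = {
    "male": 0.71,                 # EA Survey 2020
    "white": 0.76,                # EA Survey 2020
    "upper_middle_class": 0.50,   # commenter's assumed upper bound
    "quant_degree": 0.53,         # 2017 data
}

product = 1.0
for share in shares.values():
    product *= share

print(f"Share with all four traits, if independent: {product:.0%}")  # 14%
# Positive correlation between the traits (which the commenter notes is
# likely) would push the true figure above this independence baseline.
```

This only establishes a baseline: the commenter's point is that even generous inputs give roughly 14% under independence, so the "average EA" stereotype describes a minority unless the traits are strongly correlated.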

Quick thoughts on Effective Altruism and a response to Dr. Sarah Taber's good thread calling EA a cult (which it functionally is).

I have believed EA thinking for about 8 years, and continue to do so despite the scandals. HOWEVER, I agree fully with this part of "Doing EA Better" [2], which I'll call "the critique" from now on. : "Many beliefs accepted in EA are surprisingly poorly supported, and we ignore entire disciplines with extremely relevant and valuable insights".

As a leftist who wants to see the end of capitalism, cops, climate change, wealth inequality, and the dooming of entire nations to death and despair to uphold a white supremacist order, I am not particularly attached to EA or its specific GiveWell top charities. I think EA works best when it's only an attempt to be "less wrong", and always iterating through a SCIENTIFIC process.

Without having spent much time in the weeds, I think there's a strong moral case that a GiveWell cause (say, mosquito nets) is superior to St. Jude's or the Red Cross. Keep in mind that EA is a small minority of all charitable giving, and all charitable mindshare (you've never seen a bell ringer for EA). The charities you see in public and ar... (read more)

3
Jeff Kaufman 🔸
If you do get to writing the post you probably want to include that mass incarceration was something Open Phil looked into in detail and spent $130M on before deciding in 2021 that money to GiveWell top charities went farther. I'd be very interested to read the post!

Making money to donate hasn't been a top recommendation within EA for about five years: it still makes sense for some people, but not most.

When you say "donating to EA" that's ambiguous between "donating it to building the EA movement" and "donating it to charities that EAs think are doing a lot of good". If you mean the latter I agree with you (ex: see what donation opportunities GWWC marks as "top rated").

When people go into this full time we tend to say they work in community building. But that implies more of an "our goal is to get people to become EAs" than is quite right -- things like the 80k podcast are often more about spreading ideas than about growing the movement. And a lot of EAs do this individually as well: I've written hundreds of posts on EA that are mostly read by my friends, and had a lot of in-person conversations about the ideas.

Effectively addressing risk from future pandemics wouldn't look like "spend a lot more money on the things we are already doing". Instead it would be things like the projects listed in Concrete Biosecurity Projects (some of which could be big) or Delay, Detect, Defend: Preparing for a Future in which Thousands Can Release New Pandemics. (Disclosure: I work for a project that's on both those lists.)

Personally my donations to deworming haven't been guided by my emotional reaction to parasites. My emotions are just not a very good guide to what most needs doing! I'm emotionally similarly affected by harm to a thousand people as a million: my emotions just aren't able to handle scale. Emotions matter for motivation, but they're not much help (and can hurt) in prioritization.

You also write both: And then later on: These seem like they're in conflict?

I think the best way to discuss these criticisms is on a case by case basis, so I've put a number of them in a list here:

https://forum.effectivealtruism.org/posts/fjaLYSNTggztom29c/possible-changes-to-ea-a-big-upvoted-list 

“EA should cut down its overall level of tone/language policing”.

Strongly agree.

EAs should be more attentive to how motivated reasoning might affect tone / language policing.

You’re probably more likely to tone / language police criticism of EA rather than praise, and you’re probably less likely to seriously engage with the ideas in the criticism if you are tone / language policing.

I would like to see no change or a slight increase in our amount of overall tone policing. There might be specific forms of tone policing we should have less of, but in general I think civil discourse is one of the main things that make EA functional as a movement.

I agree. I think that it's incredibly difficult to have civil conversations on the internet, especially about emotionally laden issues like morality/charity.

I feel bad when I write a snotty comment and that gets downvoted, and that has a real impact on me being more likely to write a kind argument in one direction rather than a quick zinger. I am honestly thankful for this feedback on not being a jerk.

I think the point regarding epistemics, and how EA excessively focuses on the individual aspects of good epistemics rather than the group aspect, is a really good one which I have surprisingly never heard before.

In thinking about the democratization: what role would the real people impacted by an organization's decisions play? This is probably mostly relevant to global health & development, but it seems very strange to democratize to "effective altruists" without mentioning involving the people actually impacted by an EA organization's work. I don't think "democratizing" per se will be the right way to involve them, but finding some way of gathering the insights and perspectives of the people impacted by each organization's work would help with the expertise, epistemics, and power problems mentioned. 

The most important solution is simple: one person, one vote.

I disagree with this: I may have missed a section where you seriously engaged with the arguments in favor of the current karma-weighted vote system, but I think there are pretty strong benefits of a system that puts value on reputation. For example, it seems fairly reasonable that the views of someone who has >1000 karma are considered with more weight than someone who just created an account yesterday or who is a known troll with -300 karma.

I think there are some valid downsides to this app... (read more)

Great fun post!

I read the whole post. Thanks for your work. It is extensive. I will revisit it. More than once. You cite a comment of mine, a listing of my cringy ideas. That's fine, but my last name is spelled "Scales" not "Scale". :)

About scout mindset and group epistemics in EA

No. Scout mindset is not an EA problem. Scout mindset and soldier mindset partition the space of mindsets, and prioritize truth-seeking differently. To reject scout mindset is to accept soldier mindset.

Scout mindset is intellectual honesty. Soldier mindset is not. Intellectual honesty aids epistem... (read more)

I noticed this sentence in a footnote:

A full-length post exploring EA’s historical links to reactionary thought will be published soon

I don't think this post would be very useful -- see genetic fallacy.

“EAs should assume that power corrupts” - strongly agree.

6
Nathan Young
Or even just that power doesn't redeem. I think sometimes I've assumed a level of perfection from leaders that I wouldn't expect from friends.

I perceive this as a very good and thoughtful collection of criticism and good ideas for reform. It's also very long and dense and I'm not sure how to best interact with it.

As a general note, when evaluating the goodness of a pro-democratic reform in a non-governmental context, it’s important to have a good appreciation of why one has positive feelings/priors towards democracy. One really important aspect of democracy’s appeal in governmental contexts is that for most people, government is not really a thing you consent to, so it’s important that the governmental structure be fair and representative.

The EA community, in contrast, is something you have much more agency to choose to be a part of. This is not to say “if you don’... (read more)

We should not be afraid of consulting outside experts, both to improve content/framing and to discover blind-spots

If anything, we should be afraid of any tendency to stigmatize consulting outside experts; it would be preferable for all effective altruists to be wary of discouraging consultation with outside experts.

If you’re also reading the “diversify funding sources” suggestion and thinking BUT HOW? In a post where I make some similar suggestions, I propose doing it via encouraging entrepreneurship-to-give:

https://forum.effectivealtruism.org/posts/SBSC8ZiTNwTM8Azue/a-libertarian-socialist-s-view-on-how-ea-can-improve

4
Jason
Of course, that would probably be quite a few years out even if 100 people left tomorrow to do Ent-TG, so the timelines in the proposal would have to be moved back considerably.
4
freedomandutility
Yep I think the timeline in the proposal is unrealistic

Diversity is always a very interesting word, and it's notable that the call for more of it comes after two of the three scandals mentioned in the opening post are about EA being diverse along an axis that many EAs disagree with. 

Similarly, it's very strange that a post that talks a lot about the problems of EAs caring too much about other people being value-aligned afterwards talks in the recommendations about how there should be more scrutiny of whether funders are aligned with certain ethical values.

This gives me the impression that the mai... (read more)

EA reading lists are typically narrow, homogenous, and biased, and EA has unusual social norms against reading more than a handful of specific books. Reading lists often heavily rely on EA Forum posts and shallow dives over peer-reviewed literature.

Apparently, the problem of reading too narrowly also applies to many scientific research fields. Derek Thompson writes:

Chu and the University of Chicago scholar James Evans found that progress has slowed in many fields because scientists are so overwhelmed by the glut of information in their domain that they’re

... (read more)

Vis-à-vis peer review:

Here’s a simple question: does peer review actually do the thing it’s supposed to do? Does it catch bad research and prevent it from being published?

It doesn’t. Scientists have run studies where they deliberately add errors to papers, send them out to reviewers, and simply count how many errors the reviewers catch. Reviewers are pretty awful at this. In this study reviewers caught 30% of the major flaws, in this study they caught 25%, and in this study they caught 29%. These were critical issues, like “the paper claims to be a randomize

... (read more)

I appreciated "Some ideas we should probably pay more attention to".  I'd be pretty happy to see some more discussion about the specific disciplines mentioned in that section, and also suggestions of other disciplines which might have something to add. 

Speaking as someone with an actuarial background, I'm very aware of the Solvency 2 regime, which makes insurers think about extreme/tail events which have a probability of 1-in-200 of occurring within the next year.  Solvency 2 probably isn't the most valuable item to add to that list; I'm sure there are many others.

3
Gideon Futerman
Hi Sanjay, I'm actually working on a project on pluralism in XRisk, and what fields may have something to add to the discussion. Would you be up for a chat/put me in contact with people who would be up for a chat with me about lessons that can be learn from actuarial studies / Solvency 2?
2
Sanjay
Yes, we can arrange via DM

Nathan commenting atomistically is frustrating, and I wish he'd put his points all in one comment.

7
dan.pandori
[aside, made me chuckle] This is an inevitable issue with the post being 70 pages long. I think online discussions are more productive when it's clear exactly what is being proposed as good/bad, so I appreciate you separately commenting on small segments (which can be addressed individually) rather than on the post as a whole.
3
Nathan Young
Okay, but like they are all separate points. Putting them all into one comment means it's much harder to signal that you like some but not others.
2
Indra Gesink 🔸
And likely yields less karma overall!
-1
Nathan Young
Sorry, you think this is a play to get more karma?
3
Indra Gesink 🔸
No. I do think that combining the comments would yield less karma, which could be a bad thing and - in the spirit of this post - in need of being done better, thereby not saying anything about your intentions. And I agree with your reply to your comment: therefore the "and". And I think what you say there is actually a very good reason, which also answers why I was reading all these distinct comments by you, which is in turn why I appreciated this one amongst them and responded. I'm sorry if it came across as an ad hominem attack instead! Best!
2
Nathan Young
Thanks for the clarification :)

I was inspired by your post, and I wrote a post about one way I think grant-making could be less centralized and draw more on expertise. One commenter told me grant-making already makes use of more expert peer reviewers than I thought, but it sounds like there is much more room to move in that direction if grant-makers decide it is helpful.

 

https://forum.effectivealtruism.org/posts/fNuuzCLGr6BdiWH25/doing-ea-better-grant-makers-should-consider-grant-app-peer 

I mostly agree with Buck’s comment and I think we should probably dedicate more time to this at EAGs than we have in the past (and probably some other events). I’m not sure what is the best format, but I think having conversations about it would allow us to feel it much more in our bones rather than just discussing it on the EA forum for a few days and mostly forget about it.

I believe that most of these in-person discussions have mostly happened at the EA Leaders Forum, so we should probably change that.

That said, I’m concerned that a lot of people... (read more)

The right way to handle the suggested reforms section is to put them all as comments.

I will not be taking questions at this time. Sarcasm.

I’ll add that EAs seem particularly bad at steelmanning criticisms (e.g. if a criticism doesn’t explicitly frame ideas on a spectrum and discuss trade-offs, the comments tend to view the ideas as black and white and reject the criticisms because they don’t like the other extreme of the spectrum)

In the interests of taking your words to heart, I agree that EAs (and literally everyone) are bad at steelmanning criticisms.

 

However, I think that saying the 'and literally everyone' part out loud is important.  Usually when people say 'X is bad at Y' they mean that X is worse than typical at Y. If I said, 'Detroit-style pizza is unhealthy,' then there is a Gricean implicature that Detroit-style pizza is less healthy than other pizzas. Otherwise, I should just say 'pizza is unhealthy'.

Likewise, when you say 'EAs seem particularly bad at steelmanning criticisms,' the Gricean implication is that EAs are worse at this than average. In another thread above, you seemed to imply that you aren't familiar with communities that are better at incorporating and steelmanning criticism (please correct me if I'm mistaken here).

There is an important difference between 'everyone is bad at taking criticism'/'EAs and everyone else are bad at taking criticism'/'EAs are bad at taking criticism'. The first two statements imply that this is a widespread problem that we'll have to work hard to address, as the default is getting it wrong. The last statement implies that we are making a ... (read more)

3
freedomandutility
Apologies, I don’t mean to imply that EA is unique in getting things wrong / being bad at steelmanning. Agree that the “and everyone else” part is important for clarity. I think whether steelmanning makes sense depends on your immediate goal when reading things. If the immediate goal is to improve the accuracy of your beliefs and work out how you can have more impact, then I think steelmanning makes sense. If the immediate goal is to offer useful feedback to the author and better understand the author’s view, steelmanning isn’t a good idea. There is a place for both of these goals, and importantly the second goal can be a means to achieving the first goal, but generally I think it makes sense for EAs to prioritise the first goal over the second.
1
dan.pandori
Thanks, I think this is an excellent response and I agree both are important goals. I'm curious to learn more about why you think that steelmanning is good for improving one's beliefs/impact. It seems to me that that would be true if you believe yourself to be much more likely to be correct than the author of a post. Otherwise, it seems that trying to understand their original argument is better than trying to steelman it. I could see that perhaps you should try to do both (ie, both the author's literal intent and whether they are directionally correct)? [EDIT: I'm particularly curious because I think that my current understanding seems to imply that steelmanning like this would be hubristic, and I think that probably that's not what you're going for. So almost certainly I'm missing some piece of what you're saying!]

I find writing pretty hard and I imagine it was quite a task to compile all of these thoughts, thanks for doing that.

 I only read the very first section (on epistemic health) but I found it pretty confusing. I did try and find explanations in the rest of the epistemics section.



EA’s focus on epistemics is almost exclusively directed towards individualistic issues like minimising the impact of cognitive biases and cultivating a Scout Mindset. The movement strongly emphasises intelligence, both in general and especially that of particular “thought-leader

... (read more)
8
ConcernedEAs
Hi Caleb, Our two main references are the Yang & Sandberg paper and the Critchlow book, both of which act as accessible summaries of the collective intelligence literature. They're linked just a little after the paragraph you quoted.
6
calebp
I think my issues with this response and linking to that paper are better explained by looking at this post from SSC (beware the man of one study). To be clear I think we can learn things from the sources you linked - my issue is with the (imo) overconfidence and claims about what "the science" says.    

I haven't been around here for long, but is this the record for most comments on a post? Must be close....

6
Lizka
It's a lot of comments, but there's the collection of comments on this post, for instance, and likely others. 

This is not at all to say that Google Docs and blogposts are inherently “bad”

Indeed, only the Sith deal in absolutes.

Never in all my life have I seen someone go to so much trouble to disguise bad-faith criticism as good-faith criticism.

One of the things that this post has most clearly demonstrated is that EA has a terrible habit of liking, praising, and selectively reading really long posts to compensate for not reading most of it.

And as a result of not actually reading the entire thing, they never actually see how whacked the thing is that they just upvoted.

EA institutions should select for diversity

  • With respect to:
    • Hiring (especially grantmakers and other positions of power)
    • Funding sources and recipients
    • Community outreach/recruitment

 

I think that we should expect diversity if our hiring practices are fair, so the fact that there isn't much suggests we need to do more work. I sense that diversity leads to better decision-making, and that's what we should want. I am somewhat against "selecting for diversity", though I'm open to the discussion.

3
JackM
What do you mean by "fair" in this context? If we adopt some sort of meritocratic system based on knowledge/ability (which many would see as "fair"), I don't think this would lead to very diverse hiring. I think it would lead to the hiring we currently see and which the OP doesn't like. I tentatively think the only way to get diversity is to select for diversity. Diversity does have some instrumental benefit, so I'm not saying we shouldn't do this, at least to some extent.

Having some expertise in complex systems (several certifications from the Santa Fe Institute) and also in deliberative democracy/collective intelligence, I can fully support what the authors of this post say about EA's shortcomings in these areas. (I agree with most of the other points also.) The EA community would do well to put its most epistemically humble hat on and try to take these well-meant, highly articulate criticisms on board.

The Effective Altruism movement is not above conflicts of interest

If anyone still thinks effective altruism is above conflicts of interest, I have an NFT of the Brooklyn  Bridge to sell u, hmu on FTX if u r interested. 

EAs should see fellowships as educational activities first and foremost, not just recruitment tools

Yeah, seems right.

The EA community is notoriously homogenous, and the “average EA” is extremely easy to imagine: he is a white male[9] in his twenties or thirties from an upper-middle class family in North America or Western Europe. He is ethically utilitarian and politically centrist; an atheist, but culturally protestant. He studied analytic philosophy, mathematics, computer science, or economics at an elite university in the US or UK. He is neurodivergent. He thinks space is really cool. He highly values intelligence, and believes that his own is significantly above aver

... (read more)

The list of proposed solutions here are pretty illustrative of the Pareto Principle:  80% of the value comes from 20% of the proposed solutions.

Just saw the karma drop from 50 to 37 in one vote.

This section seems particularly important.

That doesn't seem possible from one person; the max strong-upvote amount at the moment is 9. There's no downvote-upvote combination that leads to a difference of 13 from the change of one vote (though it would be possible for someone to strongly upvote and then change their vote to a strong downvote, for a karma drop of 14, 16, or 18). A more likely explanation is that multiple people downvoted while (multiple people - 1) retracted their upvotes within the time you were refreshing, or some kind of bug on the forum.
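A minimal sketch of that arithmetic, assuming (purely for illustration, not as the Forum's documented mechanics) that a single voter's vote magnitude is an integer from 1 to 9, and that a voter can cast a downvote, retract an upvote, or flip an upvote into a downvote:

```python
# Hypothetical sketch: which karma drops can a SINGLE voter cause?
# Assumed mechanics: vote magnitudes are integers 1..9 (9 = max strong
# vote); one voter may add a downvote, retract an upvote, or flip an
# upvote of weight w into a downvote of weight w (a drop of 2w).
possible_drops = set()
for w in range(1, 10):
    possible_drops.add(w)       # new downvote, or retracted upvote
    possible_drops.add(2 * w)   # flipped vote: upvote +w becomes downvote -w

print(13 in possible_drops)                             # False
print(sorted(d for d in possible_drops if d >= 13))     # [14, 16, 18]
```

Under these assumptions, an odd drop larger than 9 (like 13) can't come from one voter, while 14, 16, and 18 are exactly the flipped-strong-vote drops the comment mentions.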
