
Preamble to the Preamble

Last week we published a post called "Doing EA Better", which argued that EA's new-found power and influence obligates us to solve our movement's significant problems with respect to epistemics, rigour, expertise, governance, and power.

We have since received (and responded to) emails from very high-profile members of the movement, and we commend their willingness to discuss concrete actions.

As mentioned in that post's preamble, we are splitting DEAB up into a sequence to facilitate object-level discussion. This is the first of these, covering the introductory sections.

Each post will include the relevant parts of the list of suggested reforms. There isn't a perfect correspondence between the subheadings of the post and the reforms list, so not all reforms listed will be 100% relevant to the section in question.

In addition to the sequence, there will be a post about EA's indirect but non-negligible links to reactionary thought, along with a list of recommended readings.

Finally, we have tried (imperfectly) to be reasonably precise in our wording, and we ask that before criticising an argument of ours, commenters ensure that it is an argument that we are in fact making.

Preamble

It’s been a rough few months, hasn’t it?

Recent events, including the FTX collapse and the Bostrom email/apology scandal, have led a sizeable portion of EAs to become disillusioned with or at least much more critical of the Effective Altruism movement.

While the current crises have made some of our movement’s problems more visible and acute, many EAs have become increasingly worried about the direction of EA over the last few years. We are some of them.

This document was written collaboratively, with contributions from ~10 EAs in total. Each of us arrived at most of the critiques below independently before realising through conversation that we were not “the only one”. In fact, many EAs thought similarly to us, or at least were very easily convinced once thoughts were (privately) shared.

Some of us started to become concerned as early as 2017, but the discussions that triggered the creation of this post happened in the summer of 2022. Most of this post was written by the time of the FTX crash, and the final draft was completed the very day that the Bostrom email scandal broke.[1] Thus, a separate post will be made about the Bostrom/FLI issues in more than a week.

A lot of what we say is relevant to the FTX situation, and some of it isn’t, at least directly. In any case, it seems clear to us that the FTX crisis significantly strengthened our arguments.

Some time ago we reached the point where we would feel collectively irresponsible if we did not voice our concerns, and now seems like the time when those concerns are most likely to be taken seriously. We voice them in the hope that we can change our movement for the better, and have taken pains to avoid coming off as “hostile” in any way.

Experience indicates that many EAs are likely to agree with significant proportions of what we say, but have not said as much publicly due to the significant risk doing so would pose to their careers, access to EA spaces, and likelihood of ever getting funded again.

Naturally the above considerations also apply to us: we are anonymous for a reason.

This post is also very long, so each section has a summary at the top for ease of scanning, and we’ll break this post up into a sequence to facilitate object-level discussion.

Finally, we ask that people upvote or downvote this post on the basis of whether they believe it to have made a useful contribution to the conversation, rather than whether they agree with all of our critiques.

Summary

  • The Effective Altruism movement has rapidly grown in size and power, and we have a responsibility to ensure that it lives up to its goals
  • EA is too homogenous, hierarchical, and intellectually insular, with a hard core of “orthodox” thought and powerful barriers to “deep” critiques
  • Many beliefs accepted in EA are surprisingly poorly supported, and we ignore entire disciplines with extremely relevant and valuable insights
  • Some EA beliefs and practices align suspiciously well with the interests of our donors, and some of our practices render us susceptible to conflicts of interest
  • EA decision-making is highly centralised, opaque, and unaccountable, but there are several evidence-based methods for improving the situation

Introduction

As committed Effective Altruists, we have found meaning and value in the frameworks and pragmatism of the Effective Altruism movement. We believe it is one of the most effective broadly-focused social movements, with the potential for world-historical impact.

Already, the impact of many EA projects has been considerable and inspiring. We appreciate the openness to criticism found in various parts of the EA community, and believe that EA has the potential to avoid the pitfalls faced by many other movements by updating effectively in response to new information.

We have become increasingly concerned with significant aspects of the movement over our collective decades here, and while the FTX crisis was a shock to all of us, we had for some time been unable to escape the feeling that something was going to go horribly wrong.

To ensure that EA has a robustly positive impact, we feel the need to identify the aspects of our movement that we find concerning, and suggest directions for reform that we believe have been neglected. These fall into three major categories:

  1. Epistemics
  2. Expertise & Rigour
  3. Governance & Power

We do not believe that the critiques apply to everyone and to all parts of EA, but to certain – often influential – subparts of the movement. Most of us work on existential risk, so the majority of our examples will come from there.[2]

Not all of the ~10 people who helped to write this post agree with every point made within, in both the “goes too far” and “doesn’t go far enough” directions. It is entirely possible to strongly reject one or more of our critiques while accepting others.

In the same vein, we request that commenters focus on the high-level critiques we make, rather than diving into hyper-specific debates about one thing or another that we cited as an example.

Finally, this report started as a dozen or so bullet points, and currently stands at over 20,000 words. We wrote it out of love for the community, and we were not paid for any of its writing or research despite most of us either holding precarious grant-dependent gig jobs or living on savings while applying for funding. We had to stop somewhere. This means that many of the critiques we make could be explored in far, far more detail than their rendition here contains.

If you think a point is underdeveloped, we probably agree; we would love to see others take the points we make and explore them in greater depth, and indeed to do so ourselves if able to do so while also being able to pay rent.

We believe that the points we make are vital for the epistemic health of the movement, that they will make it more accessible and effective, and that they will enhance the ability of EA as a whole to do the most good.

Two Notes:

  1. Some of the issues we describe are based on personal experience and thus cannot be backed by citations. If you doubt something we assert, let us know and we’ll give as much detail as we can without compromising our anonymity or that of others. You can also just ask around: we witnessed most of the things we mention on multiple independent occasions, so they’re probably not rare.
  2. This post ties a lot of issues together and is thus necessarily broad, so we will have to make some generalisations, to which there will be exceptions.

Suggested reforms

Below is a preliminary, non-exhaustive list of relevant suggestions for structural and cultural reform that we think may be good ideas and should certainly be discussed further.

It is of course plausible that some of them would not work; if you think so for a particular reform, please explain why! We would like input from a range of people, and we certainly do not claim to have all the answers!

In fact, we believe it important to open up a conversation about plausible reforms not because we have all the answers, but precisely because we don’t.

Italics indicate reforms strongly inspired by or outright stolen from Zoe Cremer’s list of structural reform ideas. Some are edited or merely related to her ideas; they should not be taken to represent Zoe’s views.

Asterisks (*) indicate that we are less sure about a suggestion, but sure enough that we think it is worth considering seriously, e.g. through deliberation or research. Otherwise, we have been developing or advocating for most of these reforms for a long time and have a reasonable degree of confidence that they should be implemented in some form or another.

Timelines are suggested to ensure that reforms can become concrete. If stated, they are rough estimates, and if there are structural barriers to a particular reform being implemented within the timespan we suggest, let us know!

Categorisations are somewhat arbitrary; we just needed to break up the text for ease of reading.

Critique

General

  • EAs must be more willing to make deep critiques, both in private and in public
    • You are not alone, you are not crazy!
    • There is a much greater diversity of opinion in this community than you might think
    • Don’t assume that the people in charge must be smarter than you, and that you must be missing something if you disagree – even most of them don’t think that!
  • EA must be open to deep critiques as well as shallow critiques
    • We must temper our knee-jerk reactions against deep critiques, and be curious about our emotional reactions to arguments – “Why does this person disagree with me? Why am I so instinctively dismissive about what they have to say?”
    • We must be willing to accept the possibility that “big” things may need to be fixed and that some of our closely-held beliefs are misguided
    • Our willingness to consider a critique should be orthogonal to the seniority of the authors of the subject(s) of that critique
    • When we reject critiques, we should present our reasons for doing so
  • EAs should read more deep critiques of EA, especially external ones
    • For instance this blog and this forthcoming book
  • EA should cut down its overall level of tone/language policing
    • Norms should still be strongly in favour of civility and good-faith discourse, but anger or frustration cannot be grounds for dismissal, and deep critique must not be misinterpreted as aggression or “signalling”
    • Civility must not be confused with EA ingroup signalling
    • Norms must be enforced consistently, applying to senior EAs just as much as newcomers
  • EAs should make a conscious effort to avoid (subconsciously/inadvertently) using rhetoric about how “EA loves criticism” as a shield against criticism
    • Red-teaming contests, for instance, are very valuable, but we should avoid using them to claim that “something is being done” about criticism and thus we have nothing to worry about
    • “If we are so open to critique, shouldn’t we be open to this one?”
    • EAs should avoid delaying reforms by professing to take critiques very seriously without actually acting on them
  • EAs should state their reasons when dismissing critiques, and should be willing to call out other EAs if they use the rhetoric of rigour and even-handedness without its content
  • EAs, especially those in community-building roles, should send credible/costly signals that EAs can make or agree with deep critiques without being excluded from or disadvantaged within the community
  • EAs should be cautious of knee-jerk dismissals of attempts to challenge concentrations of power, and seriously engage with critiques of capitalist modernity
  • EAs, especially prominent EAs, should be willing to cooperate with people writing critiques of their ideas and participate in adversarial collaborations
  • EA institutions and community groups should run discussion groups and/or event programmes on how to do EA better

Institutions

  • Employees of EA organisations should not be pressured by their superiors to not publish critical work
  • Funding bodies should enthusiastically fund deep critiques and other heterodox/“heretical” work
  • EA institutions should commission or be willing to fund large numbers of zero-trust investigations by domain-experts, especially into the components of EA orthodoxy
  • EA should set up a counter foundation that has as its main goal critical reporting, investigative journalism and “counter research” about EA and other philanthropic institutions [within 12 months]*
    • This body should be run by independent people and funded by its own donations, with a “floor” proportional to other EA funding decisions (e.g. at least one researcher/community manager/grant program, and admin fees of a certain amount)
    • If this foundation is established, EA institutions should cooperate with it
  • EA institutions should recruit known critics of EA and offer them e.g. a year of funding to write up long-form deep critiques
  • EA should establish public conference(s) or assemblies for discussing reforms within 6 months, with open invitations for EAs to attend without a selection process. For example, an “online forum of concerns”:
    • Every year invite all EAs to raise any worries they have about EA central organisations
    • These organisations declare beforehand that they will address the top concerns and worries, as voted by the attendees
    • Establish voting mechanism, e.g. upvotes on worries that seem most pressing

Red Teams

  • EA institutions should establish clear mechanisms for feeding the results of red-teaming into decision-making processes within 6 months
  • Red teams should be paid, composed of people with a variety of views, and former- or non-EAs should be actively recruited for red-teaming
    • Interesting critiques often come from dissidents/exiles who left EA in disappointment or were pushed out due to their heterodox/”heretical” views (yes, this category includes a couple of us)
  • The judging panels of criticism contests should include people with a wide variety of views, including heterodox/”heretical” views
  • EA should use criticism contests as one tool among many, particularly well-suited to eliciting highly specific shallow critiques

Other

  • EAs should see EA as a set of intentions and questions (“What does it mean to ‘do the most good’, and how can I do it?”) rather than a set of answers (“AI is the highest-impact cause area, then maybe biorisk.”)
  • More people working within EA should be employees, with the associated legal rights and stability of work, rather than e.g. grant-dependent “independent researchers”
  • EA funders should explore the possibility of funding more stable, safe, and permanent positions, such as professorships

Contact Us

If you have any questions or suggestions about this article, EA, or anything else, feel free to email us at concernedEAs@proton.me

Comments

Minor complaint / disagreement:

Norms must be enforced consistently, applying to senior EAs just as much as newcomers. 

I think this is unreasonable, in the other direction. Newcomers (and outsiders) can and should be reminded or informed about norm violations the first time or times they screw up, and it should certainly take more than one time before there is any censure. Senior EAs should get far less flexibility and understanding. And I'm not blameless - there are at least two times I can recall being rebuked for things privately, and I think it was good, and plausibly should have been more stringent and/or public; I have little excuse.

This is especially unfortunate because the level of discourse we aspire to, in my view, needs to be even better than what is already normal here. And many of the things that improvement entails are the points being made in this post! (And that's true even though the current norms are far better than what is seen elsewhere - the failures that the current post suggests correcting are already fairly egregious compared to my current expectations.) So I, at least, can say I hope to be held to a higher standard, and am happy to be told if and when I fail to meet it.

and plausibly should have been more stringent and/or public

Just wanted to comment on the public aspect. I think in general the old maxim remains true: "praise in public, criticise in private". I think it's generally better when people receive feedback about norm violations privately. I also think it's likely that this does happen today, and that does mean that it's hard to know how much norm violation policing is going on, because much of it may be private.

I think there's a place for private criticism, but it needs to be accompanied by public removal of the offending behavior, and/or apologies. Otherwise, other people don't see that the norms are being reinforced. (And, as I said originally, this applies much more to the more senior / well known EAs.)

JWS

Thank you for breaking this up into specific sections, I think that this will encourage better discussion on the object-level points, and hopefully keep up community discussion from the initial post. I want to thank the authors for the research and effort that they've put into this series, and I sincerely hope that it has a positive impact on the community.

 *    *    *

From a personal perspective, I feel like these summary points:

  1. The Effective Altruism movement has rapidly grown in size and power, and we have a responsibility to ensure that it lives up to its goals
  2. EA is too homogenous, hierarchical, and intellectually insular, with a hard core of “orthodox” thought and powerful barriers to “deep” critiques
  3. Many beliefs accepted in EA are surprisingly poorly supported, and we ignore entire disciplines with extremely relevant and valuable insights
  4. Some EA beliefs and practices align suspiciously well with the interests of our donors, and some of our practices render us susceptible to conflicts of interest
  5. EA decision-making is highly centralised, opaque, and unaccountable, but there are several evidence-based methods for improving the situation

can themselves be summarised into 2 different thrusts:[1]

Critique 1: The Effective Altruism movement needs to reform its internal systems

From points 1, 2, and 5. Under this view: EA has grown rapidly; however, its institutional structure has not adapted optimally, leading to a hierarchical, homogenous, and unaccountable community structure. This means that it does not have the feedback loops necessary to identify cases where there are conflicts of interest, or where the evidence no longer matches the movement's cause prioritisation/financial decisions. In order to change this, the EA movement needs to accept various reforms (which are discussed later in the DEAB piece).

Critique 2: The Effective Altruism movement is wrong

From points 3 and 4. Under this view: Many EA beliefs are not supported by evidence or subject-matter experts. Instead, the movement does little real-world good and its 'revealed preference' is to reputation launder for tech billionaires in the Global North. The movement is rotten to the core, and the best way to improve the world would be to end the enterprise altogether.

(I personally, and presumably most people who have self-selected into reading and posting on the EA Forum, have a lot more sympathy for Critique 1 rather than Critique 2.)

 *    *    *

These are two very different, perhaps almost orthogonal approaches. They remind me of the classic IIDM split between 'improving how institutions make their decisions' and 'improving the decisions that the institutions make'. I think this also links to:

EAs should see EA as a set of intentions and questions (“What does it mean to ‘do the most good’, and how can I do it?”) rather than a set of answers (“AI is the highest-impact cause area, then maybe biorisk.”)

In a simplistic sense, this remains true. But in another sense, it's a bit of EA-Judo.[2] To the extent the evidence supports having very high credence in AI risk this century (and this isn't by any means unanimous in the community or the forum), or that the world massively underinvests in work to prevent the next global pandemic and mitigate its impacts, then these priorities are what EA is. If we were to radically change what the community thinks about them in response to deep critiques, then the resulting movement wouldn't look like EA at all, and many people, both current supporters and critics, probably wouldn't think EA would be a useful label for the resulting movement.

Perhaps this is related to the overarching theme of being open to "deep critiques". However, this summary, and a quick Ctrl+F of the original post, didn't give me a clear definition of what a 'deep critique' is[3] or examples of them which the movement is bad at engaging with. Of the two examples which are referenced, I tend to view Thorstad's blog as following critique 1 - that the EA movement does not take its critics/issues seriously enough and needs to change because of it. However, the forthcoming book (which I am intending to review subject to irl constraints) seems to be more along the lines of critique 2. I think most of the movement's most caustic critics, and the ones that probably get the most air-time, follow this line of attack.[4]

From my perspective, a lot of the forum pushback probably comes from the intuition that if EA is open to critique 2 and accepts its conclusions and implications, then it won't be EA anymore. So I think perhaps there needs to be more attention paid to separating discussions which are closer to critique 1 from those closer to critique 2. This isn't to say that EA shouldn't engage with arguments along the lines of critique 2, but I think it's worth being open about the strengths of those critiques, and the corresponding strength of evidence that would be needed to support them.

  1. ^

    For clarity, I'm trying to summarise the arguments I think they imply, and not making arguments I personally agree with. I also note that, as there are multiple authors behind the account, they may vary in how much they agree/disagree with the quality of my summarisation.

  2. ^

    This refers to the tendency to refute any critique of EA by saying, "if you could show this was the right thing to do, then that would be EA." Which may be philosophically consistent but has probably convinced ~0 people in real life.

  3. ^

    If the authors, or other forum users, could help me out here I would definitely appreciate it :)

  4. ^

Employees of EA organisations should not be pressured by their superiors to not publish critical work

If it's possible without putting whistleblowers in jeopardy, I would like to know if you have specific examples of this. I've heard a number of worrying claims about stuff 'an EA org' or 'some EA orgs' do, including this, but without knowing the alleged specifics I have no idea how to update on them. Since we've got some tacit norm about not naming the people or even organisations responsible, I suspect that means a lot of problematic behaviours go unaddressed.

Not exactly an answer, but as an anecdote I know an EA employee who was asked not to publish something 100% supportive of their employer in response to some criticism. We both found it a bit weird, but I assume that's how all organisations work.

Yes, that's how all organizations work. Obviously there are cases where employees of an organization should not be publicly commenting to support their organization, because that can be harmful compared to allowing the organization to manage its own reputation.  That's not at all the same as suppressing criticism. For example, not responding to trolls is a good thing. I'm all in favor of, say, telling employees not to "defend" EA against claims that it's a secret conspiracy to help rich people. Telling someone not to engage in highlighting dumb bad-faith arguments isn't suppressing their opinions.

I am very concerned that there is implicit pressure not to criticize, but the explicit encouragement by funders and orgs seems to have done a good job pushing back - and the criticism contest was announced well before FTX and the recent attention, and criticisms of EA were common among EA org employees well before any of the Cremer criticism. And I'd note that the highlighted blog is by someone who doesn't identify as EA, but works for GPI.

EAs should read more deep critiques of EA, especially external ones

  • For instance this blog and this forthcoming book

The blog post and book linked do not seem likely to me to discuss "deep" critiques of EA. In particular, I don't think the problems with the most harmful parts of EA are caused by racism or sexism or insufficient wokeism.

In general, I don't think many EAs, especially very new EAs with little context or knowledge about the community, are capable of distinguishing "deep" from "shallow" criticisms. I also expect them to be overly optimistic about the shallow criticisms they preach, and to confuse "deep & unpopular" with 'speculative & wrong'.

What do you think are the most harmful parts of EA?

The decisions which caused the FTX catastrophe, the fact that EA is counterfactually responsible for the three primary AGI labs, Anthropic being entirely run by EAs yet still doing net negative work, and the funding of mostly capabilities-oriented ML work with vague alignment justifications (and potentially similar dynamics in biotech, which are more speculative for me right now), with the creation of GPT[1] and RLHF as particular examples of this.


  1. I recently found out that GPT was not in fact developed for alignment work. I had gotten confused with some rhetoric used by OpenAI and employees during the earlier days which turned out to be entirely independent from modern alignment considerations. ↩︎

Strong disagree for misattributing blame and eliding the question.

To the extent that "EA is counterfactually responsible for the three primary AGI labs," you would need to claim that the ex-ante expected value of specific decisions was negative, and that those decisions were because of EA, not that it went poorly ex-post. Perhaps you can make those arguments, but you aren't. 

Ditto for "The decisions which caused the FTX catastrophe" - Whose decisions, where does the blame go, and to what extent are they about EA? SBF's decision to misappropriate funds, or fraudulently misrepresent what he did? CEA not knowing about it? OpenPhil not investigating? Goldman Sachs doing a bad job with due diligence?

I agree with this, except when you tell me I was eliding the question (and, of course, when you tell me I was misattributing blame). I was giving a summary of my position, not an analysis which I think would be deep enough to convince all skeptics.

You say you agree, but I was asking questions about what you were claiming and who you were blaming.

EAs are counterfactually responsible for DeepMind?

Off topic, but can you clarify why you think Anthropic does net negative work?

Basically, there are simple arguments around 'they are an AGI capabilities organization, so obviously they're bad', and more complicated arguments around 'but they say they want to do alignment work', and then even more complicated arguments on those arguments going 'well, actually it doesn't seem like their alignment work is all that good actually, and their capabilities work is pushing capabilities, and still makes it difficult for AGI companies to coordinate to not build AGI, so in fact the simple arguments were correct'. Getting more into depth would require a writeup of my current picture of alignment, which I am writing, but which is difficult to convey via a quick comment.

I upvoted and did not disagreevote this, for the record. I'll be interested to see your writeup :)

Do you disagree, assuming my writeup provides little information or context to you?

I don't feel qualified to say. My impression of Anthropic's epistemics is weakly negative (see here), but I haven't read any of their research, and my prior is relatively high AI scepticism. Not because I feel like I understand anything about the field, but because every time I do engage with some small part of the dialogue, it seems totally unconvincing (see same comment), so I have the faint suspicion that many of the people worrying about AI safety (sometimes including me) are subject to some mass Gell-Mann amnesia effect.

Mass Gell-Mann amnesia effect because, say, I may look at others talking about my work or work I know closely, and say "wow! That's wrong", but look at others talking about work I don't know closely and say "wow! That implies DOOM!" (like dreadfully wrong corruptions of the orthogonality thesis), and so decide to work on work that seems relevant to that DOOM?

Yeah, basically that. Even if those same people ultimately find much more convincing (or at least less obviously flawed) arguments, I still worry about the selection effects Nuno mentioned in his thread.

I could list my current theories about how these problems are interrelated, but I fear such a listing would anchor me to the wrong one, and too many claims in a statement produce more discussion around minor sub-claims than major points (an example of a shallow criticism of EA discussion norms).

Agreed. It takes quite a bit of context to recognise the difference between deep critiques and shallow ones, whilst everyone will see their critique as a deep critique.

Are you planning to augment the sections so that they engage further with counter-arguments? I recognise that this would take significant effort, so it’s completely understandable if you don’t have the time, but I would love to see this happen if that’s at all possible. Even if you leave it as is, splitting up the sections will still aid discussion, so is still worthwhile.

I made a critique of EA that I think qualifies as "deep" in the sense that it challenges basic mechanisms established for bayesianism as EAs practice it, what you call IBT, but also epistemic motives or attitude. This was not my red-team, but something a bit different.

The Scout Mindset offers a partitioning of attitudes relevant to epistemics if its categories of "scout" and "soldier" are interpreted broadly. If I have an objection to Julia Galef's book "The Scout Mindset", it is in its discussion of odds. Simply the mention of "odds." I see it as a minor flaw in an otherwise wonderful and helpful book. But it is a flaw. Well, it goes further, I know, but that's an aside.

A current of betting addiction running through EA could qualify as a cause for acceptance of FTX money. These crypto-currency markets are known financial risks and also known purveyors to corrupt financial interests. Their lack of regulation has been noted by the SEC, and for years crypto has been associated with scams. For the last couple of years, the addition of obviously worthless financial instruments via "web3" was an even bigger sign of trouble. However, to someone who sees betting as a fun, normal, or necessary activity, an investment or placement of faith in FTX makes more sense. It's just another bet.

The vice of betting, one of the possibilities that explains IBT results, is in my view obvious, and has been known for thousands of years to have bad results. While you EA folks associate betting with many types of outcomes other than earnings for yourselves, and many scenarios of use of money (for example, investments in charitable efforts), overall, betting should have the same implications for you as it has had for human communities for thousands of years. It leads away from positive intentions and outcomes, and corrupts its practitioners. The human mind distorts betting odds in the pursuit of the positive outcome of a bet. Far from improving your epistemics, betting hinders your epistemics. On this one point, folks like Julia Galef and Annie Duke are wrong.

When did EA folks decide that old, generations-tested ideas of vices were irrelevant? I think, if there's a failure in the "smartest people in the room" mentality that EA fosters, it's in the rejection of common knowledge about human failings. Consequences of vices identify themselves easily. However you consider their presence in common-sense morality, common knowledge is there for you.

Meanwhile, I don't know the etiology of the "easy going" approach to vices common now. While I can see that many people's behaviors in life remain stable despite their vices, many others fall, and perhaps it's just a question of when. In a group, vices are corrosive. They can harm everyone else too, eventually, somehow. You built EA on the metaphor of betting. That will come back to bite you, over and over.

Your many suggestions are worthwhile, and Scout Mindset is a necessary part of them, but Galef didn't address vices, and you folks didn't either, even though vices wreck individual epistemics and thereby group epistemics. They're an undercurrent in EA, just like in many other groups. Structural changes that ignore relevant vices are not enough here.

You folks lost billions of dollars promised by a crypto guy. Consider the vice of betting as a cause, both for your choice to trust in him and for his actions in response and in general. Regardless of whether it was corrupt or sanctioned betting, it was still betting, the same movie, the typical ending. Well, actually, since betting is now a sport and skilled bettors are now heroes, I guess common knowledge isn't so common anymore, at least if you watch the movies.
