I think it’s likely that institutional effective altruism was a but-for cause of FTX’s existence[1] and therefore that it may have caused about $8B in economic damage due to FTX’s fraud (as well as potentially causing permanent damage to the reputation of effective altruism and longtermism as ideas). This example makes me feel it’s plausible that effective altruist community-building activities could be net-negative in impact,[2] and I wanted to explore some conjectures about what that plausibility would entail.

I recognize this is an emotionally charged issue, and to be clear my claim is not “EA community-building has been net-negative” but instead that that’s plausibly the case (i.e. something like >10% likely). I don’t have strong certainty that I’m right about that, and I think a public case that disproved my plausibility claim would be quite valuable. I should also say that I have personally and professionally benefitted greatly from EA community-building efforts (most saliently from efforts connected to the Center for Effective Altruism), and I sincerely appreciate and am indebted to that work.

Some claims that are related, and perhaps vaguely isomorphic, to the above which I think are probably true but may feel less strongly about are:

• To date, there has been a strong presumption among EAs that activities likely to significantly increase the number of people who explicitly identify as effective altruists (or otherwise increase their identification with the EA movement) are worth funding by default. That presumption should be weakened.

• Social movements are likely to overvalue efforts to increase the power of their movement and undervalue their goals actually being accomplished, and EA is not immune to this failure mode.
• Leadership within social movements is likely to (consciously or unconsciously) overvalue measures that increase the leadership’s own control and influence and undervalue measures that reduce it, a trap EA community-building efforts may have unintentionally fallen into.

• Pre-FTX, there was a reasonable assumption that expanding the EA movement was one of the most effective things a person could do, and the FTX catastrophe should significantly update our attitude towards that assumption.

• FTX should significantly update us on principles and strategies for EA community/movement-building and institutional structure, and there should be more public discourse on what such updates might be.

• EA is obligated to undertake institutional reforms to minimize the risk of creating an FTX-like problem in the future.

Here are some conjectures I’d make for potential implications of believing my plausibility claim:

• Make Impact Targets Public: Insofar as new evidence has emerged about the impact of EA community building (and/or insofar as incentives towards movement-building may map imperfectly onto real-world impact), it is more important to make public, numerical estimates of the goals of particular community-building grants/projects going forward and to attempt public estimation of the actual impact (and connection to real-world ends) of at least some specific grants/projects conducted to date. Outside of GiveWell, I think this is something EA institutions (my own included) should be better about in general, but I think the case is particularly strong in the community-building context given the above.

• Separate Accounting for Community Building vs. Front-Line Spending: I have argued in the past that meta-level and object-level spending by EAs should be in some sense accounted for separately.
I admit this idea is, at the moment, under-specified, but one basic example would be: EAs/EA grantmakers should state their “front-line” and “meta” (or “community building”) donation amounts as separate numbers (e.g. “I gave X to charity this year in total, of which Y was to EA front-line stuff, Z to EA community stuff, and W was non-EA stuff”). I think there may be intelligent principles to develop about how the amounts of EA front-line funding and meta-level funding should relate to one another, but I have less of a sense of what those principles might be than a belief that starting to account for them as separate types of activities in separate categories will be productive.

• Integrate Future Community Building More Closely with Front-Line Work: Insofar as it makes sense to have less of a default presumption towards the value of community building, a way of de-risking community-building activities is to link them more closely to activities where the case for direct impact is stronger. For example, I personally hope for some of my kidney donation, challenge trial recruitment, and Rikers Debate Project work to have significant EA community-building upshots, even though that meta level is not those projects’ main goal or the metric I use to evaluate them. For what it’s worth, I think pursuing “double effect” strategies (e.g. projects that simultaneously have near-termist and longtermist targets, or animal welfare and forecasting-capacity targets) is underrated in current EA thinking. I also think connecting EA recruitment to direct work may mitigate certain risks of community building (e.g.
the risks of creating an EA apparatchik class, recruiting “EAs” not sufficiently invested in having an actual impact, or competing with direct work for talent).

• Implement Carla Zoe Cremer’s Recommendations: Maybe I’m biased because we’re quoted together in some of the same articles, but I’ve honestly been pretty surprised there has not been more public EA discussion post-FTX of adopting a number of Cremer's proposed institutional reforms, many of which seem to me obviously worth doing (e.g. whistleblowing protections). Some (such as democratizing funding decisions) are more complicated to implement, and I acknowledge the concern that these procedural measures create friction that could reduce the efficacy of EA organizations, but I think (a) minimizing unnecessary burden is a design challenge likely to yield fairly successful solutions and (b) FTX clearly strengthens the arguments in favor of bearing the cost of that friction. Also, insofar as she'd be willing (and some form of significant compensation is clearly merited), integrally engaging Cremer in whatever post-FTX EA institutional reform process emerges would be both directly helpful and a public show of good-faith efforts at rectification.

• Consideration of a “Pulse” Approach to Funding EA Community Building: It may be the case that large EA funders should do time-limited pulses of funding towards EA community-building goals or projects, with the intention of building institutions that can sustain themselves off of separate funds in the future.
The logic of this is: (a) insofar as EAs may be bad judges of the value of our own community building, requiring something appealing to external funders helps check that bias; (b) creating EA community institutions that must be attractive to outsiders to survive may avoid certain epistemic and political risks inherent to being too insular.

• EA as a Method and not a Result: The concept of effective altruism (rationally attempting to do good) has broad consensus, but particular conceptions may be parochial or clash with one another.[3] A “thinner” effective altruism that emphasizes EA as an idea akin to the scientific method, rather than a totalizing identity or community, may be less vulnerable to FTX-like mistakes.

• Develop Better Logic for Weighing Harms Caused by EA against EA Benefits: An EA logic that assumes resources available to EAs will be spent at (say) GiveWell benefit levels (which I take to be roughly $100/DALY or equivalent) but that resources available to others are spent at (say) US government valuations of a statistical life (I think roughly $100,000/DALY) seems to justify significant risks of incurring very sizable harms to the public if they are expected to yield additional resources for EA. Clearly, EA's obligations to avoid direct harms (or certain types of direct harms) are at least somewhat asymmetric to its obligations/permissions to generate benefits. But at the same time, essentially any causal act will have some possibility of generating harm (which in the case of systemic-change efforts can be quite significant), so a precautionary principle designed in an overly simplistic way would kneecap the ability of EAs to make the world better. I don't know the right answer to this challenge, but clearly "defer to common sense morality" has proven insufficient, and I think more intellectual work should be done.
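To make the worrying arithmetic in that last bullet concrete, here is a minimal sketch. Only the two $/DALY figures come from the text above; the `harm_weight` parameter and both function names are my own illustrative assumptions, showing one crude way an asymmetric correction could work, not a proposal from the post.

```python
# Illustrative sketch only. The two $/DALY prices are the ones cited in the
# post; harm_weight is a made-up asymmetry factor for illustration.

EA_DOLLARS_PER_DALY = 100          # ~GiveWell-level cost-effectiveness
PUBLIC_DOLLARS_PER_DALY = 100_000  # ~US government statistical-life valuation

def naive_net_dalys(funds_raised_for_ea: float, harm_to_public: float) -> float:
    """The 'naive' logic: dollars raised for EA buy DALYs at GiveWell prices,
    while dollars of harm to the public are converted at the much higher
    statistical-life price."""
    benefit = funds_raised_for_ea / EA_DOLLARS_PER_DALY
    harm = harm_to_public / PUBLIC_DOLLARS_PER_DALY
    return benefit - harm

def weighted_net_dalys(funds_raised_for_ea: float, harm_to_public: float,
                       harm_weight: float = 10.0) -> float:
    """One crude asymmetric alternative: each DALY of harm caused counts as
    harm_weight DALYs of benefit forgone."""
    benefit = funds_raised_for_ea / EA_DOLLARS_PER_DALY
    harm = harm_weight * harm_to_public / PUBLIC_DOLLARS_PER_DALY
    return benefit - harm

# Under the naive logic, raising $1M for EA (10,000 DALYs at $100 each)
# exactly offsets $1B of public harm ($1B / $100,000 = 10,000 DALYs):
assert naive_net_dalys(1_000_000, 1_000_000_000) == 0.0

# So even an $8B FTX-sized harm looks "worth it" if it were expected to
# yield more than $8M for EA -- the implication the post finds troubling:
assert naive_net_dalys(9_000_000, 8_000_000_000) > 0

# A 10x harm weight shrinks the tolerated harm by an order of magnitude,
# but picking the weight is exactly the unsolved question:
assert weighted_net_dalys(1_000_000, 1_000_000_000) < 0
```

The sketch isn't an answer; it just shows why a single scalar penalty on harms feels arbitrary, and why the post calls for more careful intellectual work on the asymmetry.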
I'm not at all certain about the conjectures/claims above, but I think it's important that EA deals with the intellectual implications of the FTX crisis, so I hope they can provoke a useful discussion.

1. ^ I am basing this on reporting in Semafor and the New Yorker. To be clear, I'm not saying that once you assume Alameda/FTX's existence, the ideology of effective altruism necessarily made it more likely that those entities would commit fraud. But I do think it is unlikely they would have existed in the first place without the support of institutional EA.

2. ^ To be clear, my claim is not "the impact of the FTX fraud incident plausibly outweighs the benefits of EA community-building efforts to date" (though that may be true and would be useful to publicly disprove if possible) but that the FTX fraud should demonstrate there are a range of harms we may have missed (which collectively could plausibly outweigh benefits) and that "investing in EA community building is self-evidently good" is a claim that needs to be reexamined.

3. ^ I find the distinction between concept and conception to be helpful here. Effective altruism as a concept is broadly unobjectionable, but particular conceptions of what effective altruism means or ought to entail involve thicker descriptions that can be subject to error or clash with one another. For example, is extending present-day human lifespans default good because human existence is generally valuable, or bad because doing so tends to create greater animal suffering that outweighs the human satisfaction in the aggregate? I think people who consider the principles of effective altruism important to their thinking can reasonably come down on both sides of that question (though I, and I imagine the vast majority of EAs, believe the former).
Moreover, efforts to build a singular EA community around specific conceptions of effective altruism will almost certainly exclude other conceptions, and the friction of doing so may create political dynamics (and power-seeking behavior) that can lead to recklessness or other problems.

78 comments

ok, an incomplete and quick response to the comments below (sry for typos). thanks to the kind person who alerted me to this discussion going on (still don't spend my time on your forum, so please do just pm me if you think I should respond to something)

1. - regarding blaming Will or benefitting from the media attention
- i don't think Will is at fault alone, that would be ridiculous, I do think it would have been easy for him to make sure something is done, if only because he can delegate more easily than others (see below)
- my tweets are a reaction to his tweets where he says he believes he was wrong to deprioritise measures
- given that he only says this after FTX collapsed, I'm saying, it's annoying that this had to happen before people think that institutional incentive setting needs to be further prioritised
- journalists keep wanting me to say this and I have had several interviews in which I argue against this simplifying position

2. - i'm rather sick of hearing from EAs that i'm arguing in bad faith
- if I wanted to play nasty it wouldn't be hard (for anyone) to find attack lines, e.g. i have not spoken about my experience of sexual misconduct in EA and i continue to refuse t...

For what it’s worth, I think that you are a well-liked and respected critic not just outside of EA, but also within it. You have three posts and 28 comments but a total karma of 1203! Compare this to Emile Torres or Eric Hoel or basically any other external critic with a forum account.
I’m not saying this to deny that you have been treated unfairly by EAs; I remember one memorable event when someone else was accused by a prominent EA of being your sock-puppet on basically no evidence. This is just to say, I hope you don’t get too discouraged by this; overall I think there’s good reason to believe that you are having some impact, slowly but persistently, and many of us would welcome you continuing to push, even if we have various specific disagreements with you (as I do). This comment reads to me as very exhausted, and I understand if you feel you don’t have the energy to keep it up, but I also don’t think it’s a wasted effort.

Thank you for taking the time to write this up, it is encouraging - I also had never thought to check my karma ...

0Noah Scales3mo
I have read the Democratizing Risk paper that got EA criticism and think it was spot on. Not having ever been very popular anywhere (I get by on being "helpful" or "ignorable"), I use my time here to develop knowledge. Your work and contributions could have good timing right now. You also have credentials and academic papers, all useful to establish your legitimacy for this audience. It might be useful to check to what extent TUA had to do with the FTX crisis, and whether a partitioning of EA ideologies combines or separates the two. I believe that appetite for risk and attraction to betting is part and parcel of EA, as is a view informed more by wealth than by poverty. This speaks to appetite for financial risk and dissonance about charitable funding. Critiques of EA bureaucracy could have more impact than critiques of EA ideology. Certainly your work with Luke Kemp on TUA seems like a hard sell for this audience, but I would welcome another round; there's a silent group of forum readers who could take notice of your effort. Arguments against TUA visions of AGI just get an ignoring shrug here. Climate change is about as interesting to these folks as the threat of super-fungi. Not very interesting.
Maybe a few 100 points on one post, if the author speaks "EA" or is popular. I do think the reasons are ideological rather than epistemic, though ideologies do act as an epistemic filter (as in soldier mindset).

It would be a bit rude to focus on a minor part of your comment after you posted such a comprehensive reply, so I first want to say that I agreed with some of the points. With that out of the way, I even more want to say that the following perspective strikes me as immoral, in that it creates terrible, unfair incentives:

- I actually do think that outsiders are permitted to ask you to fix problems because your stated ambition is to do risk analysis for all of us, not just for effective altruism, but for, depending on what kind of EA you are, a whole category of sentient beings, including categories as large as 'humanity' or 'future beings'. That means that even if I don't want to wear your brand, I can demand that you answer the questions of who gets to be in the positions to influence funding and why? And if it's not transparent, why is it not transparent? Is there a good reason for why it is not transparent? If I am your moral patient, you should tell me why your current organizational structures are more solid, more epistemically trustworthy than alternative ones.

The problem I have with this framing is that it "punishes" EA (by applying isolated demands of "justify yourselves") ...

Indeed Lukas, I guess what I'm saying is: given what I know about EA, I would not entrust it with the ring

7Chris Leong3mo
I can understand why you mightn't trust us, but I would encourage EAs to consider that we need to back ourselves, even though I've certainly been shaken by the whole FTX fiasco. Unfortunately, there's an adverse selection effect where the least trustworthy actors are unlikely to recuse themselves in terms of influence, so if the more trustworthy actors recuse themselves, we will end up with the least responsible actors in control.
So despite the flaws I see with EA, I don't really see any choice apart from striving as hard as we can to play our part in building a stronger future. After all, the perfect is the enemy of the good. And if the situation changes such that there are others better equipped than us to handle these issues and who would not benefit from our assistance, we should of course recuse ourselves, but sadly I believe this is unlikely to happen.

I think the global argument is that power in EA should be deconcentrated/diffused across the board, and subjected to more oversight across the board, to reduce risk from its potential misuse. I don't think Zoe is suggesting that any actor should get a choice on how much power to lose or oversight to have. Could you say more about how adverse selection interacts with that approach?

-1Chris Leong3mo
Even if every actor in EA agreed to limit its power, we wouldn’t be able to limit the power of actors outside of EA. This is the adverse selection effect. This means that we need to carefully consider the cost-benefit trade-off in proposals to limit the power of groups. In some cases, e.g. seeing how the FTX fiasco was a larger systematic risk, it’s clear that there’s a need for more oversight. In other cases, it’s more like the analogy of putting Frodo’s quest on hold until we’ve conducted an opinion survey of Middle Earth. (Update: Upon reflection, this comment makes me sound like I’m more towards ‘just do stuff’ than I am. I think we need to recognise that we can’t assume someone is perfectly virtuous just because they’re an EA, but I also want us to retain the characteristics of a high-trust community; having to check up on every little decision is a characteristic of a low-trust community.)

6Jason3mo
Thanks. That argument makes sense on the assumption that a given reform would reduce EA's collective power as opposed to merely redistributing it within EA.

4[anonymous]3mo
I don't understand what this means, exactly.
If you're talking about the literal One Ring from LOTR, then yeah, EA not being trustworthy is vacuously true, since no human without mental immunity feats can avoid being corrupted.

8bruce3mo
Immoral? This is a surprising descriptor to see used here. The standard of "justify yourselves" to a community soup kitchen, or some other group / ideology, is very different to the standard of "justify yourselves" to a movement apparently dedicated to doing the most good it can for those who need it most / all humans / all sentient beings / all sentience that may exist in the far future. The decision-relevant point shouldn't be "well, does [some other group] justify themselves and have transparency and have good institutions and have epistemically trustworthy systems? If not, asking EA to reach it is an isolated demand for rigour, and creates terrible incentives." Like - what follows? Are you suggesting we should then ignore this because other groups don't do this? Or because critics of EA don't symmetrically apply these criticisms to all groups around the world? The questions (imo) should be something like: are these actions beneficial in helping EA be more impactful?[1] Are there other ways of achieving the same goals better than what's proposed? Are any of these options worth the costs? I don't see why other groups' inaction justifies EA's, if it's the case that these actions are in fact beneficial.

If EA wants to be in a position to work out the constitution of a world government about to be installed, it needs to first show outsiders that it's more than a place of interesting intellectual ideas, but a place that can be trusted to come up with interventions and solutions that will actually work in practice.

If the standard for "scrutinising EA" is when EA is about to work out the constitution of a world government about to be installed, it is probably already too late.
I don't want to engage in a discussion about the pros and cons of the Democratising Risk paper, but from an outsider's perspective it seems pretty clear to me that Carla did engage in a good-faith "EA-insider" way, even if you don't think she's expressing criticism in a way you like now.

Immoral? This is a really surprising descriptor to see used here.

Yeah. I have strong feelings that social norms or norms of discourse should never disincentivize trying to do more than the very minimum one can get away with as an apathetic person or as a jerk. For example, I'm annoyed when people punish others for honesty in cases where it would have been easy to tell a lie and look better. Likewise, I find it unfair if having the stated goal to make the future better for all sentient beings is somehow taken to imply "Oh, you care for the future of all humans, and even animals? That's suspicious – we're definitely going to apply extra scrutiny towards you." Meanwhile, AI capabilities companies continue to scale up compute and most of the world is busy discussing soccer or what not. Yet somehow, "Are EAs following democratic processes and why does their funding come from very few sources?" is made into the bigger issue than widespread apathy or the extent to which civilization might be acutely at risk.

The question shouldn't be "well, does [some other group] justify themselves and have transparency and have good institutions and have epistemically trustworthy systems? If not, as ...

8bruce3mo
Thanks for sharing! We have some differing views on this which I will focus on - but I agree with much of what you say and do appreciate your thoughts + engagement here. It sounds like you are getting the impression that criticism directed at EA indicates that people criticising EA think this is a larger issue than AI capabilities or widespread apathy etc., if they aren't spending their time lobbying against those larger issues.
But there might be other explanations for their focus - any given individual's sphere of influence, tractability, personal identity, and others can all be factors that contribute here. "It's important to have good institutions" is clearly something that "serious EAs" are strongly incentivised to believe. But people who have a lot of power and influence and funding also face incentives to maintain a status quo that they benefit from. EA is no different, and people seeking to do good are not exempt from these kinds of incentives. And EAs who are serious about things should acknowledge that they are subject to these incentives, as well as the possibility that one reason outsiders might be speaking up about this is because they think EAs aren't taking the problem seriously enough. The benefit of the outside critic is NOT that EAs have some special obligation towards them (though, in this case, if your actions directly impact them, then they are a relevant stakeholder that is worth considering), but that they are somewhat removed and may be able to provide some insight into an issue that is harder for you to see when you are deeply surrounded by other EAs and people who are directly mission / value-aligned.

I think this goes too far; I don't think this is the claim being made. The standard is just "would better systems and institutional safeguards better align EA's stated ideals and what happens in practice? If so, what would this look like, and how would EA organisations implement these?". My guess is you probably agree with this though?

4Cullen3mo
I think this is an undervalued idea. But I also think that there's a distinct but closely related idea, which is valuable, which is that for any Group X with Goal Y, it is nearly always instrumentally valuable for Group X to hear suggestions about how it can better advance Goal Y, especially from those who believe that Goal Y is valuable.
Sometimes this will read as (or have the effect of) disincentivizing adopting Goal Y (because it leads to criticism), but in fact it's often much easier to marginally improve the odds of Goal Y being achieved by attempting to persuade Group X to do better at Y than to persuade Group ~X who believes ~Y. I take Carla Zoe to be doing this good sort of criticism, or at least that's the most valuable way to read her work.

4Cullen3mo
I would also point out that I think the proposition that "social norms or norms of discourse should never disincentivize trying to do more than the very minimum one can get away with as an apathetic person or as a jerk" is both:
1. Probably undesirable to implement in practice, because any criticism will have some disincentivizing effect.
2. Probably violated by your comment itself, since I'd guess that any normal person would be disincentivized to some extent from engaging in constructive criticism (above the baseline of apathy or jerkiness) that is likely to be labeled as immoral.
This is just to say that I value the general maxim you're trying to advance here, but "never" is way too strong. Then it's just a boring balancing question.

6Lukas_Gloor3mo
"Never" is too strong, okay. But I disagree with your second point. I feel like I was only speaking out against the framing that critics of EA are entitled to a lengthy reply because of EA being ambitious in its scope of caring. (This framing was explicit at least in the quoted paragraph, not necessarily in her post as a whole or her previous work.) I don't feel like I was discouraging criticism. Basically, my point wasn't about the act of criticizing at all; it was only about an added expectation that went with it, which I'd paraphrase as "EAs are doing something wrong unless they answer my concerns point by point."

4Cullen3mo
Ah, okay. That seems more reasonable. Sorry for misunderstanding.
4Jason3mo
I agree insofar as status as an intended EA beneficiary does not presumptively provide someone with standing to demand answers from EA about risk management. However, non-EA persons are also potentially subject to the risk of harms generated by EA, and that status gives them at least some degree of standing. I think the LOTR analogy is inapt. Taking Zoe's comment here at face value, she is not suggesting that everyone put Project Mount Doom on hold until the Council of Elrond runs some public-opinion surveys. She is suggesting that reform ideas warrant further development and discussion. That's closer to asking for some time of a mid-level bureaucrat at Rivendell and a package of lembas than diverting Frodo. Yes, it may be necessary to bring Frodo in at some point, but only if preliminary work suggests it would be worthwhile to do so. I recognize that there could be some scenarios in which the utmost single-mindedness is essential: the Nazgûl have been sighted near the Ringbearer. But other EA decisions don't suggest that funders and leaders are at Alert Condition Nazgûl. For example, while I don't have a clear opinion on the Wytham purchase, it seems to have required a short-term expenditure of time and lock-up of funds for an expected medium-to-long-run payoff.

4Lukas_Gloor3mo
Yeah, I agree that if we have reason to assume that there might be significant expected harms caused by EA, then EAs owe us answers. But I think it's a leap of logic to go from "because your stated ambition is to do risk analysis for all of us" to "That means that even if I don't want to wear your brand, I can demand that you answer the questions of [...]" – even if we add the hidden premise "this is about expected harms caused by EA." Just because EA does "risk analysis for all sentient beings" doesn't mean that EA puts sentient beings at risk. Having suboptimal institutions is bad, but I think it's far-fetched to say that it would put non-EAs at risk.
At least, it would take more to spell out the argument (and might depend on specifics – perhaps the point goes through in very specific instances, but not so much if, e.g., an EA org buys a fancy house). There are some potentially dangerous memes in the EA memesphere around optimizing for the greater good (discussed here [https://forum.effectivealtruism.org/posts/T975ydo3mx8onH3iS/ea-is-about-maximization-and-maximization-is-perilous], recently), which is the main concern I actually see and share. But if that was the only concern, it should be highlighted as such (and it would be confusing why many arguments then seem to be about seemingly unrelated things). (I think risks from act consequentialism were one point out of many in the Democratising Risk paper – I remember I criticized [https://forum.effectivealtruism.org/posts/gx7BEkoRbctjkyTme/democratising-risk-or-how-ea-deals-with-critics-1?commentId=XW9Gjww7rJsHjo4eD] the paper for not mentioning any of the ways EAs themselves have engaged with this concern.) By contrast, if the criticism of EA is more about "you fail at your aims" rather than "you pose a risk to all of us," then my initial point still applies: EA doesn't have to justify itself more so than any other similarly-sized, similarly powerful movement/group/ideology. Of course, it seems very much wo

6Jason3mo
I would have agreed pre-FTX. In my view, EA actors meaningfully contributed -- in a causal sense -- to the rise of SBF, which generated significant widespread harm. Given the size and lifespan of EA, that is enough for a presumption of sufficient risk of future external harm for standing. There were just too many linkages and influences, several of them but-for causes. EA has a considerable appetite for risk and little of what some commenters are dismissing as "bureaucracy," which increases the odds of other harms felt externally. So the presumption is not rebutted in my book.
I think ever since EA has become more of an “expected value maximisation” movement rather than a “doing good based on high-quality evidence” movement, it has been quite plausible for EA activity overall, or community building specifically, to turn out to be net-negative in retrospect, but I think the expected value of community building remains extremely high. I support more emphasis on thin EA and the development of a rule of thumb for what a good ratio of meta spending vs. object-level impact spending would be. Strongly agree that it is surprising that some of Carla Zoe Cremer’s reforms haven’t been implemented. Frankly, I would guess the reason is that too many leadership EAs are overconfident in their decision making and are much too focused on “rowing” instead of “steering,” in Holden Karnofsky’s terms.

“Social movements are likely to overvalue efforts to increase the power of their movement and undervalue their goals actually being accomplished, and EA is not immune to this failure mode.”

Why do you think this? Is it mostly intuition? My view of other social movements is that they undervalue efforts to increase power, which is why most are unsuccessful. I credit a lot of EA’s success in terms of object-level impact to a healthy degree of focus on increasing power as a means to increasing impact.

8RobertJMoore3mo
While I am unaware of any actual studies supporting it (indeed, the nature of the problem makes it rather resistant to study), that statement sounds like a rephrasing or redevelopment of what's sometimes known as Pournelle's Iron Law of Bureaucracy [https://jerrypournelle.com/archives2/archives2mail/mail408.html#Iron]: Your last line, if I'm understanding you correctly, is to suggest that this is a good thing because of the nature of those in the second category in EA. One can imagine situations where this would be the case, such as Plato's philosopher-kings worthy of trust.
Just wanted to flag that I personally believe:
- most of Cremer's proposed institutional reforms are either bad or zero impact; this was the case when proposed, and is still true after updates from FTX
- it seems clear the proposed reforms would not have prevented or influenced the FTX fiasco
- I think part of Cremer's reaction after FTX is not epistemically virtuous; "I was a vocal critic of EA" - "there is an EA-related scandal" - "I claim to be vindicated in my criticism" is not sound reasoning, when the criticisms are mostly tangentially related to the scandal. It will get you a lot of media attention, in particular if you are willing to cooperate in being presented as some sort of virtuous insider who was critical of the leaders and saw this coming, but I hope upon closer scrutiny people are actually able to see through this.
edit: "present yourself as" replaced with "are willing to cooperate in being presented as"

I don't think this is a fair comment, and aspects of it read more as a personal attack than an attack on ideas. This feels especially the case given the above post has significantly more substance and recommendations to it, but this one comment just focuses in on Zoe Cremer. It worries me a bit that it was upvoted as much as it was. For the record, I think some of Zoe's recommendations could plausibly be net negative and some are good ideas; as with everything, it requires further thinking through and then skillful implementation. But I think the amount of flack she's taken for this has been disproportionate and sends the wrong signal to others about dissenting. I think this aspect of the comment is particularly harsh, which is in and of itself likely counterproductive.
But on top of that, it's not the type of claim that should be made lightly or without a lot of evidence that that is the person's agenda (bold for emphasis):

- I think part of Cremer's reaction after FTX is not epistemically virtuous; "I was a vocal critic of EA" - "there is an EA-related scandal" - "I claim to be vindicated in my criticism" is not sound reasoning, when the criticisms are mostly tangentially related to the scandal. It will get you a lot of media attention, in particular if you present yourself as some sort of virtuous insider who was critical of the leaders and saw this coming, but I hope upon closer scrutiny people are actually able to see through this.

This discussion made me curious, so I went to Zoe's twitter to check out what she's posted recently. (Maybe she also said things in other places, in which case I lack info.) The main thing I see her taking credit for (by retweeting other people's retweets saying Zoe "called it") is this tweet from last August:

EA seems to me to be unwilling to implement institutional safeguards against fuck-ups. They mostly happily rely on a self-image of being particularly well-intentioned, intelligent, precautious. That’s not good enough for an institution that prizes itself in understanding tail-risk.

That seems legitimate to me. (We can debate whether institutional safeguards would have been the best action against FTX in particular, but the more general point of "EAs have a blind spot around tail risks due to an elated self-image of the movement" seems to have gotten a "+1" score with the FTX collapse, and EAs not having seen it coming despite some concerning signs.) There's also a tweet by a journalist that she retweeted:
3) Critics (eg @CarlaZoeC @LukaKemp) warned that EA should decentralize funding so it doesn’t become a closed validation loop where the people in SBF’s inner circle get millions to promote his & their vision for EA while others don’t. But EA funding remained overcentralized

I think the FTX regranting program was the single biggest push to decentralize funding EA has ever seen, and it's crazy to me that anyone could look at what FTX Foundation was doing and say that the key problem is that the funding decisions were getting more, rather than less, centralized. (I would be interested in hearing from those who had some insight into the program whether this seems incorrect or overstated.)

That said, first, I was a regrantor, so I am biased. And even aside from the tremendous damage caused by the foundation needing to back out and the possibility of clawbacks, the fact that at least some of the money being regranted was stolen makes the whole thing completely unacceptable. However, it was unacceptable in ways that have nothing to do with being overly centralized.

This seems right within longtermism, but, AFAIK, the vast majority of FTX's grantmaking was longtermist. This decision to focus on longtermism seemed very centralized and might otherwise have shaped the direction and composition of EA disproportionately towards longtermism.

4Chris Leong3mo
If FTX's decentralised model had been proven successful for longtermism, I suspect it would have influenced the way funding was handled for other cause areas as well.

3MichaelStJules3mo
In case my wording was confusing, I meant that a community shift towards longtermism seems to have been decided by a small number of individuals (FTX founders). I'm not talking about centralization within causes, but centralization in deciding prioritization between causes.
Also, I'm skeptical that global health and poverty or animal welfare would shift towards very decentralized regranting without a massive increase in available funding first, because 1. some of the large cost-effective charities that get funded are still funding-constrained, and so the bars to beat seem better defined, and 2. there already are similar experiments on a smaller scale through the EA Funds.

2Chris Leong3mo
Yeah, I got that, I was just mentioning an effect that might have partially offset it. I agree that a small number of individuals decided that the funds should focus on longtermism, although this is partially offset by how the EA movement was shifting in that direction anyway.

4Davidmanheim3mo
Yes, that seems correct.

8Jan_Kulveit3mo
I think you lack part of the context, where Zoe seems to claim to media that the suggested reforms would help:
- this Economist piece, mentioning Zoe about 19 times [https://www.economist.com/1843/2022/11/15/the-good-delusion-has-effective-altruism-broken-bad]
- WP [https://web.archive.org/web/20221123010431/https://www.washingtonpost.com/technology/2022/11/17/effective-altruism-sam-bankman-fried-ftx-crypto/]
- this twitter thread [https://twitter.com/CarlaZoeC/status/1591333694132097024]
- this New Yorker piece [https://www.newyorker.com/news/annals-of-inquiry/sam-bankman-fried-effective-altruism-and-the-question-of-complicity], with Zoe explaining “My recommendations were not intended to catch a specific risk, precisely because specific risks are hard to predict” but still saying “But, yes, would we have been less likely to see this crash if we had incentivized whistle-blowers or diversified the portfolio to be less reliant on a few central donors? I believe so.”
To be fair, this seems like a reasonable statement on Zoe's part:

• If we had incentivised whistle-blowers to come forward around shady things happening at FTX, would we have known about the FTX fraud sooner and been less reliant on FTX funding? Very plausibly yes. She says "likely," which is obviously not particularly specific, but this would fit my definition of likely.
• If EA had diversified our portfolio to be less reliant on a few central donors, this would have also (quite obviously) meant the ...

• If we had incentivised whistle-blowers to come forward around shady things happening at FTX, would we have known about the FTX fraud sooner and been less reliant on FTX funding? Very plausibly yes. She says "likely," which is obviously not particularly specific, but this would fit my definition of likely.

Why do you think so? Whistleblowers inside of FTX would have been protected under US law, and US institutions like the SEC offer them multi-million dollar bounties. Why would an EA scheme create a stronger incentive? Also: even if the possible whistleblowers inside of FTX were EAs, whistleblowing about fraud at FTX directed not toward authorities like the SEC but toward some EA org scheme would have been a particularly bad idea. The EA scheme would not be equipped to deal with it and would need to basically immediately forward it to the authorities, leading to immediate FTX collapse. The main difference would be putting EAs in the centre of the happenings?
If EA had diversified our portfolio to be less reliant on a few central donors, this would have also (quite obviously) meant the crash had less impact on EA overall, so this also seems true.

I think the 'diversified our portfolio' frame is ... The only real way to have had much less FTX money in EA was to not accept that much FTX funding. Which was a tough call at the time, in part because FTX FF seemed like the biggest step toward decentralized distribution of funding, and a big step toward diversifying from OP. And even then, decisions about accepting funding are made by individuals and individual organizations.

Would there be someone to kick you out of EA if you accepted "unapproved" funding? The existing system is, in a sense, fairly democratic in that everyone gets to decide whether they want to take the money or not. I don't see how Cremer's proposal could be effective without a blacklist to enforce community will against anyone who chose to take the money anyway, and that gives whoever maintains the blacklist great power (which is contrary to Cremer's stated aims). The reality, perhaps unfortunate, is that charities need donors more than donors need specific charities or movements.

6Denkenberger3mo
It depends on how you define wealthiest minority, but if you mean billionaires, the majority of philanthropy is not from billionaires. EA has been unusually successful with billionaires. That means if EA mean-reverts, perhaps by going mainstream, the majority of EA funding will not be from billionaires. CEA deprioritized GWWC for several years - I think if they had continued to prioritize it, funding would have gotten at least somewhat more diversified. Also, I find that when talking with midcareer professionals it's much easier to mention donations than switching their career. So I think that more emphasis on donations from people of modest means could help EA diversify with respect to age.
If we had incentivised whistle-blowers to come forward around shady things happening at FTX, would we have known about the FTX fraud sooner and been less reliant on FTX funding? Very plausibly yes. She says "likely," which is obviously not particularly specific, but this would fit my definition of likely.

Why do you believe this? To me, FTX fits more in the reference class of financial firms than EA orgs, and I don't see how EA whistleblower protections would have helped FTX employees whistleblow (I believe that most FTX employees were not EAs, for example). And it seems much more likely to me that an FTX employee would be able to whistle-blow than an EA at a non-FTX org. Also, my current best guess is that only the top 4 at FTX/Alameda knew about the fraud, and I have not come across anyone who seems like they might have been a whistleblower (I'd love to be corrected on this though!)

I was reacting mostly to this part of the post:

I’ve honestly been pretty surprised there has not been more public EA discussion post-FTX of adopting a number of Cremer's proposed institutional reforms, many of which seem to me obviously worth doing ... Also, insofar as she'd be willing (and some form of significant compensation is clearly merited), integrally engaging Cremer in whatever post-FTX EA institutional reform process emerges would be both directly helpful and a public show of good faith efforts at rectification.

I think it's fine for a comment to engage with just a part of the original post. Also, if a post advocates for giving someone substantial power, it seems fair to comment on the media presence of that person. Overall, it seems to me you are advocating a double standard / selective demand for rigour.
Post-FTX discussion of Zoe's proposals seems mostly on the level of 'Implement Carla Zoe Cremer’s Recommendations' or 'very annoyed this all had to happen before a rethink, given that 10 months earlier, I sat in his office proposing whistleblower protections, transparency over funding sources, bottom-up control over risky donations' or similar high-level supportive comm...

Thanks Jan! Could you elaborate on the first point specifically? Just from a cursory look at the linked doc, the first three suggestions seem to have few drawbacks to me, and seem to constitute good practice for a charitable movement.
• Set up whistleblower protection schemes for members of EA organisations
• Transparent listing of funding sources on each website of each institution
• Detailed and comprehensive conflict of interest reporting in grant giving

7Eli_Nathan3mo
I'll note that many EA orgs already have whistleblower protection policies in place and that there are also various whistleblowing protection laws in many jurisdictions (including the US and the UK) which I assume any EA-affiliated organization or employee would have to follow.

I can't speak to orgs, but the scope of legal protection for whistleblowing by US private employees is quite narrow -- I think people are calling for something much more robust. Also, I believe those protections often only cover an organization's actions against current employees -- not non-employer actions like blacklisting the whistleblower from receiving grants or trashing them to potential future employers.

6Jan_Kulveit3mo
Unfortunately not in detail - it's a lot of work to go through the whole list and comment on every proposal. My claim is not 'every item on the list is wrong' but 'the list is wrong on average', so commenting on three items does not resolve possible disagreement.
To discuss something object-level, let's look at the first one. 'Whistleblower protection schemes' sounds like a good proposal on paper, but the devil is in the detail:
1. At least in the EU and UK, whistleblowers pointing out things like fraud or illegal activity are already protected by law. The protection offered by law is probably stronger than an internal org policy in some cases, and does not apply in others. Also, some countries regulate what whistleblower protections you must have in place - I assume orgs do follow this where it applies.
2. Many orgs where it makes sense have some policies/systems in this direction, though not necessarily under the name of 'whistleblower protection'.
3. The majority of EA orgs are quite small. If you have a team of, e.g., four people, a whistleblower protection scheme does not work the same way as in an org with four hundred people. In my view, what often makes more sense is having external contacts for all sorts of issues - e.g., the community health team.
4. Overall, I think the worst situation is often when you have a system which seemingly does something but actually does not. For example: a campus mental health support system which is not actually qualified to help with mental health problems, but keeps track of who reached out to it, is probably worse than nothing.

My bottom line is something like: a 'whistleblower protection scheme' may be good to implement in some cases, and some orgs have them. But it is too bureaucratic in other cases. A blanket policy requiring every org to have a formal scheme, no matter the size or circumstances, seems bad.

The Cremer document mixes two different types of whistleblower policies: protection and incentives. Protection is about trying to ensure that organisations do not disincentivize employees or other insiders from trying to address illegal/undesired activities of the organisation through, for example, threats or punishments.
Whistleblower incentives are about incentivizing insiders to address illegal/undesired activities. The recent EU whistleblowing directive, for example, is a rather complex piece of legislation that aims to protect whistleblowers from, e.g., being fired by their employers in some situations. The US SEC whistleblowing program, on the other hand, incentivizes whistleblowing by providing financial awards, some 10-30% of sanctions collected, for information that leads to significant findings. This policy, for the US, has a quickly estimated return of 5-10x through first-order effects, and possibly many times that in second-order effects through stopping fraud and reducing the expected value of fraud in general. The SEC gives several awards each month. A report about the program is available here for those interested. Whistleblower protections tend to be more bureaucra...

"It seems clear proposed reforms would not have prevented or influenced the FTX fiasco" doesn't really engage with the original poster's argument (at least as I understand it). The argument, I think, is that FTX revealed the possibility that serious undiscovered negatives exist, and that some of Cremer's proposed reforms and/or other reforms would reduce those risks. Given that they involve greater accountability, transparency, and deconcentration of power, this seems plausible. Maybe Cremer is arguing that her reforms would have likely prevented FTX, but that's not really relevant to the discussion of the original post.

4Jan_Kulveit3mo
I'm not confident what the whole argument is. In my reading, the OP updated toward the position "it’s plausible that effective altruist community-building activities could be net-negative in impact, and I wanted to explore some conjectures about what that plausibility would entail" based on FTX causing large economic damage. One of the conjectures based on this is "Implement Carla Zoe Cremer’s Recommendations".
I'm mostly arguing against the position that 'the update of probability mass on EA community building being negative due to FTX evidence is a strong reason to implement Carla Zoe Cremer’s Recommendations'.

For comparison: I held the position that effective altruist community-building activities could be net-negative in impact before FTX, and did not update much on the FTX evidence. In my view, the main reason for plausible negativity is that EA seems much better at "finding places of high leverage" where you can influence the trajectory of the world a lot than at figuring out what to actually do in those places. In my view, interventions against the risk include emphasis on epistemics, pushing against local consequentialist reasoning, and pushing against free-floating "community building" where people not working on the object level try mostly to bring in a lot of new people.

Personally, I think implementing Zoe Cremer’s Recommendations as a whole either does not impact the largest real risks, or would make the negative outcomes more likely. Repeated themes in the recommendations are 'introduce bureaucracy' and 'decide democratically'. I don't think bureaucracies are wise, and in 'democratizing' things the big question is 'who is the demos?'.

0linnea3mo
I strongly downvoted this for not making any of the reasoning transparent and thus contributing little to the discussion beyond stating that "Jan believes this". This could sometimes be reasonable for the purpose of deferring to authority, but that is riskier in this case because Jan has severe conflicts of interest due to being employed by a core EA organisation and being a stakeholder in, for example, a ~4.7 million grant to buy a chateau [https://forum.effectivealtruism.org/posts/xof7iFB3uh8Kc53bG/why-did-cea-buy-wytham-abbey?commentId=KWnqd6Hw5BdEbeKD3].
8Jan_Kulveit3mo
When the discussion is roughly at the level of 'seems to me obviously worth doing', it seems to me fine to state dissent of the form 'often seems bad or not working to me'. Stating an opinion is not an appeal to authority. I think in many cases it's useful to know what people believe, and if I have to choose between a forum where people state their beliefs openly and more often, and a forum where people state beliefs only when they are willing to write a long and detailed justification, I prefer the first.

I'm curious in which direction you think the supposed 'conflict of interest' points: I'm employed at the same institution (FHI) as Zoe, and we were part of the same RSP program (although in different cohorts). This mostly creates some incentive to not criticize Zoe's ideas publicly, and would preclude me from, e.g., reviewing Zoe's papers, because of favourable bias. Also ... I think that while being a stakeholder in a grant to buy a cheap and cost-saving events venue has not much to do with the topics in question, it mostly creates some incentive to be silent, because by engaging critically with the topic, you increase the risk someone will summon an angry twitter mob to attack you.

Overall ... it's probably worth noticing that people like you, strongly downvoting my comment (now at karma 5, yours at 12), are the side actually trying to silence the critic here, while agreement with "it is surprising that some of Carla Zoe Cremer’s reforms haven’t been implemented" or vague criticisms of "EA leadership" are what's in vogue on the EA forum now.
9Jason3mo
I don't think (almost) anyone is trying to silence you here; the agreevotes on your top comment are pretty high and I'd expect a silencing campaign to target both. That suggests to me that the votes are likely due to what some perceive as an uncharitable tone toward Zoe, or possibly a belief that having the then-top comment be one that focuses heavily on Zoe's self-portrayal in the media risks derailing discussion of the original poster's main points (Zoe's potential involvement being a subpoint to a subpoint).
4Guy Raveh3mo
I disagree with Jan here entirely, but also with you. First of all, I don't see what the problem is with commenting one's opinion; "Reasoning transparency" is a thing that's only sometimes appropriate. Second, I wouldn't call FHI a "core EA organisation" and I frankly don't see the conflict of interest at all.

Thank you for this post. The framing of your points as conditional is especially helpful.

I strongly agree with lots here. As someone who has worked on community building-ish projects that are very far from or very close to frontline/object-level work, this part rang especially true:

Insofar as it makes sense to have less of a default presumption towards the value of community building, a way of de-risking community building activities is to link them more closely to activities where the case for direct impact is stronger.

People interested in the claim might be interested in this related post and discussion.

A milder statement of this is almost certainly already accepted by EA leadership and we should see the impact when the EA brownout ends.

A year ago, generating more SBFs was the brief argument for the high EV of community building. A common refrain: "SBF is contributing so much to EA causes, if what we're spending on community building generates even just one more SBF it will be worth it."

Now turn SBF to a negative value in that equation, or even merely a zero. The end result may be non-negative, but the EV of community building is greatly reduced.

Many in EA who have funded community-building orgs are probably now smarting at having misinvested based on a false perception of SBF's value.

If there is a hard part, it will be convincing ourselves not to include hypothetical non-fraudulent SBFs in our EV calculations, as we have habituated to that way of thinking.

I don’t see how this statement can be justified:

$8B in economic damage due to FTX’s fraud

8 billion in value was not destroyed. The net effect is mainly distributional. Financial markets are largely zero-sum: some investors lost a lot, others gained. If it hurt the price of crypto assets, this means that overall, those who hold assets other than crypto are marginally better off. Of course the chaos causes some value to be lost, but not 8 billion.

If someone steals my car, is there no "economic damage" because the thief is now better off to the extent of my loss? I would say I suffered economic damage and someone else got a benefit; the existence of that benefit does not negate the damage I incurred.

There is economic damage, but not necessarily equal to the headline number. It is reduced by netting against the gains to the thief, but increased by things like stress, required investments in security, disruption to plans, degraded incentives, and so on. In this case I would guess the economic damage is very large but still less than $8bn. In the case of a personal mugging I would guess the economic damage far exceeds the value of the contents of your wallet.

You might also reasonably object that the gains to the thief shouldn't count because they are illegitimate. However, in the FTX case many of the gains seem to have gone to other traders who profited without being guilty.
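To make the accounting in this subthread concrete, here is a minimal back-of-envelope sketch of splitting a headline fraud loss into a pure transfer and destroyed value. All figures and the function itself are purely illustrative assumptions; none of the numbers below are estimates anyone in this thread has endorsed.

```python
# Hypothetical decomposition of a headline fraud loss: part of it is a
# transfer (someone else's gain), part is value actually destroyed.
# Numbers are illustrative assumptions only.

def decompose_loss(headline_loss, transfer_share, frictions):
    """Split a headline loss into a pure transfer and destroyed value.

    headline_loss:  gross loss to victims
    transfer_share: fraction of the loss that reappears as others' gains
    frictions:      additional destroyed value not in the headline number
                    (legal costs, disrupted plans, lost trust, ...)
    """
    transfer = headline_loss * transfer_share
    destroyed = headline_loss - transfer + frictions
    return transfer, destroyed

# E.g., if half of an $8B headline loss were a transfer to other traders
# and another $1B of value were destroyed through frictions:
transfer, destroyed = decompose_loss(8e9, transfer_share=0.5, frictions=1e9)
print(f"transferred: ${transfer/1e9:.0f}B, destroyed: ${destroyed/1e9:.0f}B")
```

Under those made-up parameters, the "economic damage" figure would be well below the headline $8B but still far from zero, which is the shape of the disagreement above.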

4Habryka3mo
I feel pretty confused by this and would love better estimates of the actual amount of money that was "lost" in the FTX situation. It seems plausible to me (though not likely) that it's above $8B, since a lot of people made plans conditional on FTX being legitimate in a way that has now wiped out a lot of economic gains, and the long-term trust that was lost in the markets was worth more than $8B. My best guess number for this is something in the $3-4B range, but that's really very much an ass-number.
1david_reinstein3mo
As usual, the best definition of the term depends on the use you want to make of it. From a social welfare standpoint, if the thief values the car as much as you did, and he doesn’t spend resources covering up his crime, and you don’t incur an expense in filing police reports etc., there is no social loss. I wouldn’t want to, e.g., count the FTX blowup as an 8 billion dollar loss when making cost-effectiveness analysis comparisons to something like GiveDirectly.
1Dean Abele2mo
In general, I thought economic studies say the damage of fraud is much bigger than the distributional effect, due to loss of trust, etc. I can try to find sources if anyone is interested.

I find it implausible that EA movement building is net-negative (<10%). However, I do appreciate the importance of not being unconditionally enthusiastic about movement-building as some specific forms may very well be net-negative. Some things I'd like to be aware of going forward:
1.  Attempt to do things that reasonable non-EA entities will find valuable (e.g., by not being dependent on EA funders and collaborating more with non-EA actors).
2. Be very aware of whom we put on a pedestal as promoters and social role models. E.g., I appreciate MacAskill in many ways and have been inspired by him, but I think he's too emphasized as the EA leader/role model, and I would like to hear other voices better represented.

4Misha_Yagudin3mo
If you think that movement building is effective in supporting the EA movement, you need to think that the EA movement is negative. I honestly can't see how you can be very confident in the latter. Screwing things up is easy; unintentionally messing up AI/LTF stuff seems easy, and given the high stakes, causing massive amounts of harm is an option (it's not an uncommon belief that FLI's Puerto Rico conferences turned out negatively, for example).
4SebastianSchmidt3mo
"If you think that movement building is effective in supporting the EA movement, you need to think that the EA movement is negative."

I think you might mean something like "If you think that movement building is effective in supporting the EA movement, you need to think that the EA movement is definitely not negative"? I think it depends on how we operationalize community-building. I can definitely see how some forms of community-building are probably negative, and I'd want it to be high quality and relatively targeted. What are some of the reasons why people think the Puerto Rico conference was negative?
2Misha_Yagudin2mo
The point was that there is a non-negligible probability that EA will end up negative.
1SebastianSchmidt2mo
Yes, I agree that there's a non-negligible probability that this will happen and that some events will be very harmful (heavy-tailed). Currently, however, saying that it's >10% seems too high, but I could definitely change my mind. Still, I'm sufficiently worried about this to be skeptical of broad and low-fidelity outreach, and I solicit advice from people who are generally skeptical of all forms of movement-building to be sure that we're sufficiently circumspect in what we do.

I think I'm not following the first stage of your argument. Why would the FTX fiasco imply that community building specifically (rather than EA generally) might be net-negative?

I think the idea is that EA institutions look much worse after FTX but EA causes do not. SBF being a fraud may cause you to update about whether (e.g.) CEA is a good organization but should not cause you to update on bednets/AI.

Reading the first paragraph of the OP, here's me trying to excavate the argument:

• Just like positive impact is likely "heavy-tailed," so is negative impact (see also this paper)
• Introducing people to EA ideas increases their agentiness and "attempts to optimize"
• Sometimes when people try to optimize something, things go badly wrong (e.g., FTX)
• It's conceivable, therefore, that EA community building has net negative impact

I think the argument is incomplete. Other things to think about:

• Are there any reasons why it might be systematically easier to destroy value than to create it?
• Seems plausible.
• But: What's the alternative, what's the default trajectory without an EA "movement" of some sort?
• Doesn't seem like much value?
• Beware of false dichotomies: Instead of movement building vs. no movement building, are there ways to increase the robustness of movement building?
• E.g., not promoting individuals with a particular psychology who may be disproportionately likely to end up with outsized negative impact?
• Edit: worth saying that the OP does provide constructive suggestions!

After reading also the other parts of the post, I think the OP makes further claims about how the best way to counteract the risks of unintended negative impact is via "institutional reforms" and "democratization."

I'm not convinced that this is the best response. I think overdoing it with institutional reforms would add a bunch of governance overhead that unnecessarily slows down good actors and can easily be exploited/weaponized (or even just sidestepped) by bad actors. Also, "democratization" sounds virtuous in theory, but large groups of people collectively tend to have messed-up epistemics, since the discourse amplifies applause lights or even quickly becomes toxic because of dynamics where we mostly hear from the most vocal skeptics (who often have a personal grudge or some other problem) and all the armchair quarterbacks who don't have a clue what they're missing. There comes a point where you'll get scrutinized a lot more for bad actions than for bad omissions (or for other things that somewhat randomly and unjustifiably evoke moral outrage in specific people – see this comment by Jonas Vollmer).

Maybe I'm strawmanning the calls for reform and people who want govern...

The problem is that most calls for reform lack specifics, and it is very difficult to meaningfully assess most reform proposals without them.

However, that is not necessarily the reformers' fault. In my view, it's not generally appropriate to deduct points for not offering more specific proposals if the would-be reformer has good reason to believe that reasonable proposals would be summarily sent to the refuse bin.

If Cremer's proposals in particular are getting a lot of glowing media attention, it seems like it would be worthwhile to do a clearer job as a community explaining why her specific proposals lack enough promise in their  summary form to warrant further investigation, and to make an attempt to operationalize ideas that might be warranted and feasible. Even if the ideas were ultimately rejected, "the community discussed the ideas, fleshed some of them out, and decided that the benefits did not exceed the costs" is a much more convincing response from an optics perspective than blanket dismissals.

My own tentative view is that her specific ideas range from the fanciful (and thus unworthy of further investigation/elaboration) to the definitely-plausible-if-fleshed-out, so I think it's important to take each on its own merits. That is, of course, not a suggestion that any individual poster here has an obligation to do that, only that it would be a good thing if done in some manner. On average, the ideas I've seen described on the forum are better because they are less grand / more targeted and specific.

Jack_S (3mo):
Yeah, makes sense. I just don't know why it's not simply: "It's conceivable, therefore, that EA community building has net negative impact." If you think that EA is / EAs are net-negative in value, then surely the more important point is that we should disband EA entirely / collectively rid ourselves of the foolish notion that we should ever try to optimise anything / commit seppuku for the greater good, rather than merely ease up on the community building.
Davidmanheim (3mo):
...because we have object-level data on the impact of many things, but very little on the net impact of community building on the object-level outcomes we care about. And community building has a very indirect impact, so on priors we should be less certain of how useful it is.
joshcmorrison (3mo):
I think I did a poor job of distinguishing what I call "institutional EA" (or "EA community building") from EA itself (or "EA as an idea"). Basically, there's a difference between the idea of attempting to do good using evidence (or whatever your definition of EA might be) and particular efforts to expand the circle of people who identify as or affiliate with effective altruists. The former is what I'm calling EA / the idea of EA, and the latter is community building. As might be obvious from this description, there are many possible ways to do EA community building, which might have better or worse effects (and one could think that community-building efforts on average will have positive or negative effects). My claim is that the set of EA community-building efforts conducted to date plausibly had net negative effects.

[Giving myself 5 minutes to reply with a quick point - and failing!] Thank you for writing this. Here are some quick low confidence thoughts on the main argument you made.

I don't think I understand why you attribute any issues from FTX to community building specifically. The FTX outcome was a convergence of many factors, and movement building doesn't obviously seem to be the most important one. Many other EA-adjacent practices – philosophising, overconfidence, prioritisation, promoting earning to give – could be similarly implicated.

I agre...

joshcmorrison (3mo):
Thanks for this comment! My argument about community building's particular role is that certain "community building" efforts specifically caused the existence of FTX. The founder was urged to work in finance rather than on animal welfare, and then worked at CEA prior to launching Alameda. Alameda/FTX were seen as strategies to expand the amount of funding available to effective altruist causes and were founded and run by a leadership team that identified as effective altruist (including the former CEO of the Center for Effective Altruism). The initial funding was from major EA donors. To me the weight of public evidence really points to Alameda as having been incubated by the Center for Effective Altruism in a fairly clear way. It's possible that in the absence of Alameda/FTX, its niche would have been filled by another entity that would have done similarly bad things, but it seems hard for me to imagine that FTX would have existed without institutional EA's backing.
PeterSlattery (3mo):
Thanks for explaining, Josh! I understand your position a little better, but I still don't agree that it makes sense to weight the impact of movement building on this outcome more heavily than all the other EA-related (and unrelated) inputs involved, and accordingly, I remain relatively unconvinced that we need to react to the event by significantly changing our perspective on the value of movement building. Having said that, I still agree with you that we should be careful with movement building, expect and mitigate downside risks, and keep evaluating it and trying to do it better. Just as an FYI: I probably won't respond to any more comments because of time constraints.

Could you say more about the possibility of "external" funders for EA community building? It's probably not realistic to get major funding from a big-name generalist foundation, given that many of EA's core ideas inevitably constitute a severe criticism of how Big Philanthropy works. And it would otherwise be hard to decide who counts as an "external" funder – in my book, "gives lots of money to EA community building" is pretty diagnostic of being an EA and thus not external.

One possibility might be that major funders would only pick up (say) 50% of the tab fo...

Davidmanheim (3mo):
To answer this, from my perspective, I'll quote from my post [https://forum.effectivealtruism.org/posts/56CHyqoZskFejWgae/ea-is-a-global-community-but-should-it-be] a few months back:
Jason (3mo):
Thanks, David. I think the best approach is probably more complicated than my 10,000-foot comment – "work spaces and similar" are in a different category to me than EAGs, which are in turn in a different category than funding early EA community-building work in middle-income countries. The appropriate "coinsurance" will vary depending on the specific project, but I think you're right that it may be 100 percent for some of them.
Davidmanheim (3mo):
Strongly agree – and if Dustin Moskovitz or Jaan Tallinn wants to fund early groups in universities or in developing countries, that seems like a great place to give part of the far-more-than-10%. (But I'd still like it more if that giving wasn't called or considered EA donations.)