Frankly, I'm unsure how much there is left to learn from or about Leverage Research at this point. I've been in the effective altruism movement for almost as long as Leverage Research has existed, and the organization has had some kind of association with effective altruism since soon after it was founded. Its history is one of failed projects, many linked to the mismanagement of Leverage Research as an ecosystem of projects. In effective altruism, one of our goals is to learn from mistakes, including the mistakes of others, so we don't make the same kinds of mistakes ourselves. It's usually more prudent to judge mistakes on a case-by-case basis, rather than judging the actor or agency that perpetrates them. Yet sometimes there is a common thread. When there is evidence of repeated failures born of systematic errors in an organization's operations and worldview, often the most prudent lesson we can learn is why that organization repeatedly and consistently failed, and why its environment enabled a culture that barely ever course-corrected or was receptive to feedback. What we might be able to learn from Leverage Research is how EA(-adjacent) organizations should not operate, and how effective altruism as a community can learn to interact with them better.
Alright, thanks for letting me know. I'll remember that for the future.
Hi. I'm just revisiting this comment now. I don't have any more questions. Thanks for your detailed response.
I saw this post had negative karma, so I upvoted it back to positive karma. I'm making this comment to signal-boost my belief that this article belongs on the EA Forum, and that, if one is going to downvote articles like this that by all appearances are appropriate for the EA Forum, it would be helpful to provide a constructive explanation or criticism of them.
I've been in the EA community since 2012. I entered the community taking to heart the intentional stance of 'doing the most good'. Back then, a greater proportion of the community wanted EA to be primarily about a culture of effective, personal, charitable giving. The operative word is 'personal': even though there are foundations behind the EA movement, like the Open Philanthropy Project, with a greater endowment than the rest of the EA community combined might ever hope to earn to give, many EAs still think, for various reasons, that it's important for EA to emphasize a culture of personal giving regardless. I understand and respect that stance, and respect its continued presence in EA. I wouldn't even mind if it became a much bigger part of EA once again. That culture frames effective altruism as more of an obligation. Yet personally I believe EA is more effective, and does more good, by pursuing good through a more diverse array of approaches. I am glad EA has evolved in that direction, and so I think it's fitting that this definition of EA reflects that.
I've come to develop exclusion criteria for entryists into EA whom EA, as a community, would by definition see as bad actors, e.g., white supremacists. One set of philosophical debates within EA, and with other communities, concerns how far, and how fast, the circle of moral concern should expand. That common denominator seems to imply a baseline agreement across all of EA that we would be opposed to people who seek to rapidly and dramatically shrink the circle of moral concern of the current human generation. So, to the extent someone:
1. shrinks the circle of moral concern;
2. does so to a great degree/magnitude;
3. does so very swiftly;
EA as a community should beware of uncritically tolerating them as members of the community.
I've been thinking more that we may want to split up "Effective Altruism" into a few different areas. The main EA community should have an easy enough time realizing what is relevant, but this could help organize things for other communities.
People have talked about "splitting up" EA in the past to streamline things, while others worry that doing so might needlessly balkanize the community. My own past observation of attempts to 'split up' EA into specialized compartments is that, more than being good or bad, they don't have much consequence at all. So, I wouldn't recommend more EAs make another uncritical try at doing so, if for no other reason than that it strikes me as a waste of time and effort.
As mentioned in this piece, the community's take on EA may be different from what we may want for academics. In that case one option would be to distill the main academic-friendly parts of EA into a new term in order to interface with the academic world.
The heuristic I use to think about managing the relationship between the EA community and "Group X" is to let members of the EA community who are part of Group X manage EA's relationship with Group X. That heuristic could break down in some places, but it seems to have worked okay so far for different industry groups. For EA to think of 'academia' as an industry like 'the software industry' is probably not the most accurate framing. I just think the heuristic fits because EAs in academia will, presumably, know how to navigate academia on behalf of EA better than the rest of us will.
I think what has worked best is for different kinds of academics in EA to lead the effort to build relationships with their respective specializations, within both the public and private sectors (there is also the non-profit sector, but that is something EA is basically built out of to begin with). To streamline this process, I've created different Facebook groups for networking and discussion among EAs in different profession/career streams, as part of an EA careers public resource sheet. It is a public resource, so please feel free to share and use it however you like.
This is similar to how I describe effective altruism to those whom I introduce to the idea. I'm not in academia, and so I mostly introduce it to people who aren't intellectuals. However, I can trace some of the features of your more rigorous definition in the one I've been using lately: " 'effective altruism' is a community and movement focused on using science, evidence, and reason to try to solve the world's biggest and most important problems". It's kind of clunky, and it's imperfect, but it's what I've replaced "to do the most good" with, which, stated that generically, presents the understandable problems you went over above.
This is a recent criticism of GiveWell that I didn't see responded to or accounted for in any clear way in the linked post. I haven't read the whole thing closely yet, but no section appears to address the considerations raised in that post. If sound, those criticisms, incorporated into the analysis, might make GiveWell's top-recommended charities look more 'beatable'. I was wondering if I was missing something in the post, and whether Open Phil's analysis accounts for or incorporates that possibility.
Do you know if these take into account criticisms of GiveWell's methodology for estimating the effectiveness of their recommended charities?