As part of my recent application to the Charity Entrepreneurship Incubation Program[1], I was asked to spend ~1hr putting together a one-page critique of a charitable community. I picked the EA community and wrote this.
The critique is summarised as follows:
I am reasonably confident that the Effective Altruism (EA) community is neglecting trying to influence non-EA actors (funders, NGOs and individuals). I am uncertain about the extent to which this represents a missed opportunity in the short-term. However, I believe that influencing non-EA actors will become increasingly important in the future and that the EA community should begin to explore this soon, if not now.
I'm sharing this because:
- others have previously suggested they would be interested in reading something like this[2]
- I'd like to see if anyone else is interested in collaborating in some way to build on my thinking, develop a more robust critique, and possibly make an argument for what the EA community could/should do differently.
Let me know if you'd be interested in collaborating, or otherwise please do leave comments on this post or the Google doc - I'm open to any and all engagement!
[1] For those interested in the CE program, I wasn't selected for the upcoming cohort but did make it to the final round of interviews. I have no way of knowing how well (or not!) I scored on this task, as it was completed alongside two other written assignments.

[2] I found that this post shares, in bullet points and in the comments, some of the critiques I articulate, as well as others that I agree with but couldn't fit onto one page. I'd expect to flesh these out when I spend more time on this.
I agree. Involving other actors forces us to examine EA's weirdness and unappealing behaviours more deeply, brings a ton of experience and networks, and amplifies impact.

This is something I have been thinking about seriously when organizing big projects, especially when it comes to determining the goals of a conference and the actors we choose to invite. This is particularly true in an area such as AI safety, where safety concerns should be promoted and advertised among policymakers and other non-EA policy actors.