After seeing some of the debate last month about effective altruism's information-sharing / honesty / criticism norms (see Sarah Constantin's follow-up and replies from Holly Elmore (1, 2), Rob Wiblin (1, 2), Jacy Rees, and Christopher Byrd), I decided to experiment with an approach to getting less filtered feedback. I asked folks over social media to anonymously answer this question:
If you could magically change the effective altruism community tomorrow, what things would you change? [...] If possible, please mark your level of involvement/familiarity with EA[.]
I got a lot of high-quality responses, and some people suggested that I cross-post them to the EA Forum for further discussion. I've posted paraphrased versions of many of the responses below. Some cautions:
1. I have no way to verify the identities of most of the respondents, so I can't vouch for the reliability of their impressions or anecdotes. Anonymity removes some incentives that keep people from saying what's on their mind, but it also removes some incentives to be honest, compassionate, thorough, precise, etc. I also have no way of knowing whether a bunch of these submissions come from a single person.
2. This was first shared on my Facebook wall, so the responses are skewed toward GCR-oriented people and other sorts of people I'm more likely to know. (I'm a MIRI employee.)
3. Anonymity makes it less costly to publicly criticize friends and acquaintances, which seems potentially valuable; but it also makes it easier to make claims without backing them up, and easier to widely spread one-sided accounts before the other party has time to respond. If someone writes a blog post titled 'Rob Bensinger gives babies ugly haircuts', that can end up widely shared on social media (or sorted high in Google's page rankings) and hurt my reputation with others, even if I quickly reply in the comments 'Hey, no I don't.' If I'm too busy with a project to quickly respond, it's even more likely that a lot of people will see the post but never see my response.
For that reason, I'm wary of giving a megaphone to anonymous unverified claims. Below, I've tried to reduce the risk slightly by running comments by others and giving them time to respond (especially where the comment named particular individuals, organizations, or projects). I've also edited a number of these responses into the same comment as the anonymous submission they address, so that downvoting and direct links can't hide them.
4. If people run experiments like this in the future, I encourage them to solicit 'What are we doing right?' feedback along with 'What would you change?' feedback. Knowing your weak spots is important, but if we fall into the trap of treating self-criticism alone as virtuous/clear-sighted/productive, we'll end up poorly calibrated about how well we're actually doing, and we're also likely to miss opportunities to capitalize on and further develop our strengths.
I agree that this is a problem, but I don't agree with the causal model and so I don't agree with the solution.
I'd guess that the majority of the people who take the EA Survey are fairly new to EA and haven't encountered all of the arguments etc. that it would take to change their minds, not to mention all of the rationality "tips and tricks" for becoming better at changing one's mind in the first place. It took me a year or so to get familiar with all of the main EA arguments, and I think that's pretty typical.
TL;DR: I don't think there's good signal in this piece of evidence. It would be much more compelling if it were restricted to people who were very involved in EA.
I'd propose a different model for the regional EA groups. I think that the founders are often quite knowledgeable about EA, and then new EAs hear strong arguments for whichever causes the founders like and so tend to accept those causes. (This would happen even if the founders try to expose new EAs to all of the arguments -- we would expect the founders to be best able to explain the arguments for their own cause area, leading to a bias.)
In addition, it seems like regional groups often prioritize outreach over gaining knowledge, so you'll have students who have heard a lot about global poverty and perhaps meta-charity helping to organize speaker events and discussion groups, even though they've barely heard of other cause areas.
Based on this model, the fix could be making sure that new EAs are exposed to a broader range of EA thought fairly quickly.