I ran a 20-day pre-registered experiment in which 6 participants who had not recently talked about Giving What We Can were asked to contact their friends to talk about Giving What We Can. A total of 14 people were contacted, 6 expressed interest, and as of 2/8/2017, 0 of them have taken the Giving What We Can pledge.
There were some unanticipated methodological difficulties, and I do not think you should take the outcome of this experiment too seriously.
I originally proposed the experiment here: Can talking about GWWC for 90 minutes actually get somebody to take the Pledge?
(The experiment protocol has not changed since that proposal.)
I proposed to start the experiment if I had at least five interested participants. 7 people expressed interest, so I decided to launch the experiment. I asked the 7 to reconfirm their interest; 6 replied.
The 6 participants were each asked to contact between 5 and 20 friends to talk about the Giving What We Can pledge over the next five days. 4 of the 6 initiated contact with at least one person, reaching a total of 14 people within the five days.
At least 6 of the 14 people expressed interest. We then waited 15 days to see if any of them went on to take the Pledge or Try Giving. None of them did, which concluded the experiment.
Jan.8: Experiment First Proposed
Jan.9 - Jan.15: People emailed or otherwise contacted me expressing interest in participating in this experiment.
Jan.17: Participants informed that the experiment would be launched
Jan.19: Participants started contacting their friends. A total of 14 people were contacted.
Jan.24: Participants asked to stop initiating contact with their friends. A total of 6 of the people contacted expressed interest.
Feb.8: Experiment wrap-up
- I was originally hoping for at least 25 data points, and ideally closer to 50-100, so this experiment did not really settle the question it originally set out to settle.
The biggest lesson for me is that I should definitely anticipate volunteer attrition and recruit 2-4x the volunteers I would naively expect to need.
I should have been more proactive in providing support to the experimentees. I contacted them an average of 3 times each. If I were to do this again, I (or an assistant) would probably contact the experimentees daily and provide more explicit scripts for interaction.
The experiment came right after the Giving What We Can Pledge Drive, and I think there was some level of “talking about GWWC” fatigue, such that most people who wanted to volunteer on things relating to GWWC had already done so during the pledge drive. Thus, in the future I will be careful not to schedule similar volunteering projects too close together, unless it is explicitly to build off of momentum.
The participants who wound up contacting their friends were people who emailed me to express interest, whereas the participants who didn’t were people I already knew. This suggests that I’ve somewhat saturated my social network in terms of willingness to do additional EA Outreach (see above).
- Overall, I consider this more of a failed experiment than a negative result. What I mean is that a true negative result would be strong evidence of no effect, but there is not nearly enough information here to support that conclusion.
Lessons You Should NOT Take from This
You should not update significantly towards “casual outreach about EA is ineffective” or “outreach has a very low probability of success”, since the study is FAR too underpowered to detect even large effects. For example, if talking about GWWC to likely candidates has a 10% chance of making each of them take the pledge in the next 15-20 days, and the 14 people who were contacted are exactly representative of the pool of “likely candidates”, then there was a 0.9^14 ≈ 23% chance of getting 0 pledges.
If your hypothesized per-contact success rate is instead 1%, the chance of getting 0 pledges rises to 0.99^14 ≈ 87%.
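The arithmetic above is just the binomial probability of zero successes in 14 independent trials; a minimal sketch (assuming each contact pledges independently with the same probability):

```python
def prob_zero_pledges(p_pledge, n_contacts=14):
    """Probability of observing zero pledges when each of
    n_contacts independently pledges with probability p_pledge."""
    return (1 - p_pledge) ** n_contacts

# If casual outreach has a 10% per-contact success rate:
print(round(prob_zero_pledges(0.10), 2))  # 0.23

# If it has only a 1% per-contact success rate:
print(round(prob_zero_pledges(0.01), 2))  # 0.87
```

Either way, observing zero pledges out of 14 contacts is unsurprising, which is why the study cannot distinguish between these hypotheses.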
How much you should actually update depends on your distribution of prior probabilities. I’m happy to explain the basic Bayesian statistics further if there’s interest, but do not want to digress further from this post.
You should not decrease your trust in the usefulness of volunteer work broadly.
I’m interested in running this experiment again with a much greater sample size and acceptance that a decent % of volunteers will drop out because of lack of time, etc.
This would likely have to wait 3-6 months, so that pledge-drive fatigue dies down.
I will also probably advertise using either the EA or GWWC mailing list, instead of the EA Forum, which I believe is better for presentations of intellectual work than for calls to action.
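As a rough way of sizing a rerun, one can ask how many contacts would be needed for a good chance of seeing at least one pledge under a hypothesized per-contact rate. This is a sketch under the same independence assumption as the calculation above, not a full power analysis (the `power=0.95` threshold is my own illustrative choice):

```python
def contacts_needed(p_pledge, power=0.95):
    """Smallest number of contacts n such that, if each contact
    pledges independently with probability p_pledge, the chance
    of seeing at least one pledge is at least `power`."""
    n = 1
    while 1 - (1 - p_pledge) ** n < power:
        n += 1
    return n

print(contacts_needed(0.10))  # 29
print(contacts_needed(0.01))  # 299
```

So even under the optimistic 10% hypothesis, roughly twice the 14 contacts achieved here would be needed, and distinguishing small effects would take hundreds of contacts.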
When I first proposed this experiment, there was interest in doing 6-month and 12-month followups on the people contacted. I set a calendar reminder to evaluate this again in 6 months, but I do not expect any interesting results (and do not plan to publish uninteresting ones).
Running (even very simple) experiments is harder than it looks!
Be wary of volunteer fatigue.
For anything that requires volunteer work, consider recruiting significantly more volunteers than you need.