I'm working on a pamphleting pilot program with the folks over at The Life You Can Save, and I wanted to lay out our plans and hopefully field some useful thoughts and criticisms. First, a couple of notes on the motivation for this project. Pamphleting has been used by other advocacy groups (with positive, although debatable, results), but to my knowledge it hasn't been tried by an EA-flavored global poverty organization, so we're giving it a shot. There is arguably something old-fashioned and appealing about being handed a physical booklet by a real person that differentiates this medium from, say, online ads, and promotes (or so the argument goes) a deeper level of engagement. We're hoping that the data we collect, which TLYCS will use to decide whether to continue its program, will be useful to other groups as well, so we wanted to field some feedback here.

The basic thrust of the project is right out of the Vegan Outreach playbook: put together a leaflet aimed at college-age students with material on global poverty interventions and effective altruism, and hand it out on college campuses on a rolling, semester-by-semester basis. Whereas for VO the ultimate goal of pamphleting is to make vegans, the goal for the TLYCS pamphlets is to get students to engage with the material on the website and then take TLYCS's pledge. We have our pamphlet nearly finished and plan to start handing it out this spring semester at Southern California universities. In this pilot program, we really want to (1) explore the logistical challenges of running this kind of activity, and (2) try to gauge the effectiveness of pamphleting versus other methods of spreading the message. It is on point two that I am keen to get feedback, so I'll outline our current thoughts on this.

I plan to hand out pamphlets on discrete days during the pilot period (say, every other Friday for several months) and keep track of how many booklets are handed out during each distribution. This should allow us to monitor traffic spikes (or the lack of them) to the website from IP addresses in the distribution zone, as well as spikes in pledge signups, which can serve as a preliminary indication of engagement. We will also be pushing a special landing page that is advertised only in the pamphlets as another metric to gauge impact. This would be analogous to VO's vegetarian starter guide offer, in that the offer requires the reader to actually do something, and therefore ostensibly reflects a more significant level of engagement.

We initially envisioned directing students to a "special message" from TLYCS, which might be a video from Peter Singer or someone else in the organization offering a personal invitation to explore the site and take the pledge. Another thought was to invite students to play an online version of TLYCS's Giving Game, where they visit the website and choose which of four charities will receive a dollar donation. However, there was some disagreement on the team about whether offering a strong "hook" to get people to come to the site would muddy the effectiveness of the pamphlet qua pamphlet. For instance, TLYCS offers in-person Giving Game workshops, and it was thought that pitching an online version of the Giving Game in the pamphlets would make it harder to compare the effectiveness of the pamphlets with that of the in-person workshops. As something of a compromise, we are now inviting students to take a quiz (just a shell site right now) in which they answer questions and then get paired with one of TLYCS's recommended charities. I'd be very interested in what people think on this point: what's the right balance between pushing a really strong hook in the pamphlet (e.g. a video from Peter Singer, an invitation to play the Giving Game) versus keeping the pamphlet as independent as possible so we can tell how engaging a medium it is?
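On the measurement side, here's a rough sketch of the kind of distribution-day comparison I have in mind (the analytics export, file name, and column name below are all hypothetical; we haven't settled on tooling yet):

```python
# Minimal sketch: compare landing-page visits on distribution days vs. other days.
# Assumes a hypothetical analytics export "landing_page_visits.csv" with one row
# per visit and an ISO-format "date" column; real field names depend on the tool.
import csv
from collections import Counter
from datetime import date

distribution_days = {date(2015, 1, 16), date(2015, 1, 30)}  # hypothetical Fridays

visits_per_day = Counter()
with open("landing_page_visits.csv") as f:
    for row in csv.DictReader(f):
        visits_per_day[date.fromisoformat(row["date"])] += 1

on_days = [n for d, n in visits_per_day.items() if d in distribution_days]
off_days = [n for d, n in visits_per_day.items() if d not in distribution_days]

print("mean visits on distribution days:", sum(on_days) / max(len(on_days), 1))
print("mean visits on other days:       ", sum(off_days) / max(len(off_days), 1))
```

Obviously a raw comparison of means like this ignores day-of-week and seasonal effects, which is exactly the sort of thing I'd like feedback on.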

Ultimately, we'd like to come out of the pilot program with a metric that falls somewhere close to "cost per TLYCS pledger via pamphlets" that we can compare with other outreach efforts (online ads, in-person workshops, etc.) and use to decide whether to expand the program or abandon it (a toy version of that calculation is sketched after the questions below). To summarize, I'm really interested in hearing comments on:

 

  • Are we making any obvious mistakes in our approach to measuring the effectiveness of the pamphlets?
  • What is the right balance to strike between pitching a strong “hook” in our pamphlets versus keeping them “clean” so that only the appeal of the pamphlets as a medium is measured?
  • Is there anything we can tweak in our pilot that would make it more relevant and useful to other organizations considering a pamphleting program?
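To make that target metric concrete, here's the toy calculation I have in mind (every number below is made up for illustration, and how to cost volunteer time is itself an open question):

```python
# Toy "cost per TLYCS pledger via pamphlets" calculation; all numbers are invented.
printing_cost = 10_000 * 0.10          # 10,000 pamphlets at a hypothetical $0.10 each
distribution_cost = 8 * 20 * 15.0      # 8 distribution days x 20 volunteer-hours x $15/hr
attributed_pledges = 25                # pledges we can plausibly trace to the pamphlets

cost_per_pledger = (printing_cost + distribution_cost) / attributed_pledges
print(f"cost per pledger via pamphlets: ${cost_per_pledger:.2f}")  # -> $136.00
```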

Thanks all, and I look forward to hearing from you!

 

Comments

What is the right balance to strike between pitching a strong “hook” in our pamphlets versus keeping them “clean” so that only the appeal of the pamphlets as a medium is measured?

What you want to assess is the marginal cost-effectiveness of pamphlets, so I think the right approach is to include the best hook you can that scales up at zero marginal cost. This probably excludes an online giving game. It should allow an online video, though ideally one that can be reused if you distribute these elsewhere, rather than one too personalised to the target audience.

Really good point here; I was a fan myself of the online Giving Game, but that would be hard to scale with the program without securing a donor willing to finance it at a pretty large level.

If the hook is worth it, how expensive would it be to scale, and how hard would that be to finance?

I suppose if the initial pamphlet run is worth it, you could then A/B test it against a Giving Games pamphlet.

Hey Jonathon, this is a really great initiative! Giving What We Can is currently in the process of designing an experiment to test the effectiveness of our pamphlets. We were hoping to run it in London sometime in late January or early February. We should coordinate on our experiment design [I will post more details on the forum once we have firmed up the experiment design].

Cool, will the results be public?

Hey Tom, not sure about TLYCS's study, but we plan to make ours public (and I imagine they will too!)

After trialling and fiddling to see what works, how much would 20 million copies of a pamphlet aimed at a general audience cost? The post office gives a lot to charity, and I can imagine that it wouldn't be impossible to persuade them to send this out as a one-off, free of charge, at least to the houses they're already posting mail to. Perhaps different language for different postcodes? Chelsea does not equal Bradford in terms of how appeals might work (religious backgrounds, education levels, size of household, disposable income, etc.).

That would be great! I'll connect with you on Facebook and we can open up a line of communication there.

Also, have you got in touch with the good people at Charity Science?

Just took a look at their website, very cool stuff. You suggesting I email them and get their feedback on our plan?

Definitely. Some of the team at least are EA insiders and lurking on this very forum, and they'll already know about TLYCS for sure.

We lurk amongggggg youuuuu.

Hi Jonathon, will the results be public?

That is definitely the intention. We are really hoping that the data we gather will be useful to other orgs considering a similar program, which was part of the motivation for posting up here ahead of time to get feedback.

There was an "Effective Altruism Brochure" thread on Facebook's Effective Altruists group. Might be a good starting template to use for a handout: see here

Pretty sharp! If I had seen this before, I definitely would have passed it along to our designer as something to work from.

Ultimately, we’d like to come out of the pilot program with a metric that falls somewhere close to “cost per TLYCS pledger via pamphlets”

I noticed what might be a significant confounder in getting this estimate: you are likely to be particularly enthusiastic/eloquent about the whole thing, which is an extra input that will boost the effectiveness of the pamphlets but is very hard to budget properly on the 'costs' side.

To account for this, you would probably be better off hiring someone else to distribute the pamphlets; probably someone without a deep existing commitment to TLYCS (but some contact should be fine -- the question is whether you could get similarly good people when trying to scale it up).

But if you're going to scale this, you'll probably get TLYCS members or EAs to hand out the pamphlets anyway, right? I mean, we kind of do need more concrete volunteer-y tasks that we can give to student groups.

So it's best to get random EAs or TLYCS members to do it in the study, right?

I agree that if the people running the study are also distributing the pamphlets then you end up with bias.

I wasn't sure what the scaling model was, but if there are enough plausible volunteers then this sounds right.

The general point is that you want to try and produce a typical case, not a special case.

This is a really good point. Yeah, the scaling model is to have local TLYCS chapters organizing volunteers to do this as a regular, rolling semester activity. I hadn't really considered myself a confounding variable in this sense, because I'm definitely not a master pamphleteer. I'm an engineer by trade, and if this program takes off, I'll eventually just be another volunteer in the LA area that helps hand out leaflets occasionally. We're also thinking about splitting crews on Friday distribution days - so I would have a crew that hits up two universities, and there would be another volunteer crew hitting up two different campuses. Any thoughts on this?

Great idea!

Does the pamphleting have to be done on Fridays, or can it be done on pseudo-random days? (I'm thinking about distinguishing the signal from the pamphlets from, e.g., people spending more time on the Internet during weekends. Pseudo-random spikes might require fancier math to pick out, though, and of course you need to remember which days you handed out pamphlets!)

Can you ask people, when they take the pledge, how they found out about TLYCS? (This will provide an underestimate, but it can be used to sanity-check other estimates.) (It's also a bit ambiguous if someone had, e.g., vaguely heard of TLYCS or Singer before, but pamphleting prompted them to actually take the pledge.)

There's a typo in your text ("require's") - make sure you get the pamphlets proof-read :)

Do you know in advance what you expect, in terms of:

  • How many pamphlets you will distribute
  • What the effect will be?

(Last I heard, EA was using predictionbazaar.com and predictionbook.com as its prediction markets)

Statistically, the situation you don't want to get into is leafleting every Friday, so that there are no Fridays left to provide your control condition.

Oh yeah, good point.

Some really good points here. I never considered that handing out the leaflets only on Fridays might skew the results (I just happen to have every other Friday off, thanks California); I'll have to think that through. And it would definitely be a good idea to have a "Where did you hear about the pledge?" question on the pledge site; I'll check into that as well.

I'm not sure what our initial run on the pamphlets will be, but I'm thinking in the 5K-15K range. I haven't done any analysis to figure out how many we'd need to hand out to get good statistics; not even really sure how to go about doing that, to be honest. And absolutely no idea what to expect in terms of a response rate. Any thoughts on how to estimate that?

I'm not sure what our initial run on the pamphlets will be, but I'm thinking in the 5K-15K range. I haven't done any analysis to figure out how many we'd need to hand out to get good statistics; not even really sure how to go about doing that, to be honest.

Please talk to a real statistician if you're designing an experiment! Random Internet people picking your design apart is actually pretty good as far as review goes (if they're the right Internet people), but actual statisticians are orders of magnitude better. Experiment design is very tricky and good statisticians are aware of both lots of tools to make your job easier, and lots of pitfalls for you to avoid. To quote Ronald Fisher:

To consult the statistician after an experiment is finished is often merely to ask him to conduct a post mortem examination. He can perhaps say what the experiment died of.

Statistics Without Borders may be a good place to start.

I figure most people don't know a statistician (I don't), but there are plenty of people in LessWrong discussion who know how to do a power calculation, so it might be good to start there (or just dig a bit deeper here).
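For what it's worth, a rough two-proportion version looks something like this (the conversion rates are pure guesses, just to show the shape of the calculation):

```python
# Back-of-the-envelope power calculation for comparing pledge/landing-page
# conversion between pamphlet recipients and a control group.
# The 0.2% and 0.5% conversion rates below are guesses, not estimates.
from scipy.stats import norm

p_control, p_treated = 0.002, 0.005
alpha, power = 0.05, 0.80

z_alpha = norm.ppf(1 - alpha / 2)
z_power = norm.ppf(power)
pooled_variance = p_control * (1 - p_control) + p_treated * (1 - p_treated)
n_per_group = (z_alpha + z_power) ** 2 * pooled_variance / (p_control - p_treated) ** 2

print(f"roughly {n_per_group:.0f} people per group")  # ~6,080 with these guesses
```

With these made-up numbers, a 5K-15K print run is at least a plausible range, though a statistician would want to think about attribution noise, clustering by campus, and so on.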

It really won't help address the problem I'm talking about, which is unknown design flaws, statistical techniques, and study-design tools. Once you've figured out that you have a problem like "how should I power my study?", smart people plus the Internet is fine; I'm worried about the other 10 issues we haven't noticed yet. That's the kind of thing that statisticians are useful for.

Fortunately, it turns out you can still talk to statisticians even if you don't know them personally. If you're spending money on your study, you could even go so far as to hire a consultant. I also know statisticians and would be happy to refer Jonathon.

Makes sense. Also, if the EA survey is redone, that might be an even more important place to have a statistician.

As someone who did a lot of study design as an undergraduate, is currently a "data scientist", and considers himself smart, I can confirm that I still make approximately 10 huge mistakes every time I run a study.

Yeah, if you give me the contact info of a statistician you recommend, that would be great. I don't know if we have the budget for it, but I would definitely reach out.

I'm checking for people who would be interested in doing it pro bono. If that doesn't work, I'm 99% sure you can find some people to fund a couple consultant-hours.

I don't know if we have the budget for it

Not to put too fine a point on it, but if the alternative is TLYCS designing the experiment themselves, this is pretty much like running a charity that spends nothing on overhead. It looks good on paper, but in reality, that last bit of money is a huge effectiveness multiplier.

I'd consider funding this if it's "worth it" and not too much money. I'm sure others would as well.

I'm fairly surprised the EA movement doesn't have official statisticians. The EAA movement has a lot of people claiming to be official statisticians.
