
Since a lot of people have been discussing donation matching lately, and the counterfactual issues that surround it, I ran a survey a couple weeks ago to answer some of my questions about donation matching.

You can read the full post here. I'm not going to cross-post the whole thing because it becomes difficult to keep the two versions in sync, but I'll include the "conclusion for matching donors" below to pique your interest. (Note that because it was only a survey of a small convenience sample, you should put limited weight on these conclusions! But I think they're useful nevertheless.)


Be transparent about your matching

Where will any unmatched funds go? Will they be donated to the same organization anyway? Given to a different charity? Burned? A strong majority thought that all matches were fully counterfactually valid, so if this isn’t true of your match, you should say so. This could affect how much people donate, and how deceived they feel, so it’s very important to be totally honest here.

One thing that I didn’t look at in this survey, but that merits future research, is another type of “partial validity” in which unmatched funds don’t go to the same charity, but go to another charity, sometimes a quite similar one. It’s hopefully clear that this always happens for foundations, but it’s not clear for private donors like HEA’s anonymous matcher or one’s friends. It’s probably wise to be transparent about this in your fundraiser as well.
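To make the scenarios above concrete, here's a minimal sketch of how the counterfactual impact of a matched donation depends on where the unmatched funds would otherwise go. The function name and numbers are illustrative, not from the survey:

```python
# Hypothetical sketch: the marginal dollars a charity receives because of one
# donation, under the different "counterfactual validity" scenarios discussed
# above. All names and figures are illustrative.

def counterfactual_impact(donation, match_rate=1.0,
                          unmatched_destination="same_charity"):
    """Extra dollars the target charity gets because this donation happened,
    relative to a world where the donor gave nothing.

    unmatched_destination says where the matcher's money would have gone
    if this donation had not been made:
      - "same_charity":  match isn't counterfactual; only the donation counts
      - "burned":        match is fully counterfactual; donation + match count
      - "other_charity": the target charity gains the match, but another
                         charity loses it (the "partial validity" case)
    """
    match = donation * match_rate
    if unmatched_destination == "same_charity":
        return donation           # matcher would have given here anyway
    elif unmatched_destination == "burned":
        return donation + match   # the match only exists because you donated
    elif unmatched_destination == "other_charity":
        # Gross impact on the target charity; the net effect depends on how
        # you value the other charity that lost the funds.
        return donation + match
    raise ValueError(unmatched_destination)

# A $10 donation under a 1:1 match:
counterfactual_impact(10)                                  # 10
counterfactual_impact(10, unmatched_destination="burned")  # 20
```

The gap between the first and second case is exactly what the survey respondents seemed not to appreciate, which is why transparency about the unmatched funds matters.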

Separately from concerns about honesty, I think transparency in matching is great for other reasons as well. It seems to me that by far the biggest benefits of many EA fundraisers are not just that they raise additional funds—the best part is the flow-through effects from getting people to be more public about their giving, to discuss effective altruism more, and to get their friends interested. From that standpoint, it seems like a huge win to use the matches to introduce people to two central practices of effective altruism, transparency and counterfactual reasoning. It would also make the campaigns stand out more from typical fundraisers.

Consider running a challenge instead of a match

The survey suggested that people found challenge fundraisers just as compelling as matches, and that they reduced their donations less when the challenge target was reached than when a match expired. Furthermore, with challenges, the counterfactual effects are much clearer: it’s obvious where your money went, and the funding dynamics are much more intuitive.

This is consistent with my interpretation of the donation matching literature, where I wrote that I expected matches to work mostly through social proof and urgency effects rather than through making people’s donations bigger (and found, consistent with this, that changing the amount of the match tended not to matter). Challenge fundraisers don’t make people’s donations bigger like matches do, but they share the same urgency effect and function as stronger social proof. So it’s not surprising that they work just as well.

Consider running a larger experiment

This survey produced some useful info, but it would be even better to have actual field experiments (and a larger sample size). The academic literature on matches is sparse, and the literature on challenges even sparser, so any additional experiments could add a lot to our knowledge.


Once again, check out the full post if you're interested in learning more, or have questions that aren't answered above. I hope this is useful to those considering running matching campaigns!


Comments



When donation challenges become the new high-status thing in the EA community, please remember to credit Ben Kuhn.

Really cool post (I just read it on Ben's blog, but am commenting here because he wants to consolidate discussion in a single place).

A strong majority thought that all matches were fully counterfactually valid, so if this isn’t true of your match, you should say so.

To be crystal clear for people who haven't read the survey, people didn't express an explicit opinion on whether they thought the matches were "counterfactually valid" (using those explicit terms). What they were saying was that they thought more money would go to the charities in the matching cases. (When I first saw the survey it looked like I was being asked a maths problem and I answered it as such.)

Whether they explicitly thought about counterfactuals probably depended on whether they were EAs familiar with the concept - I'd guess that many or most were, since the survey was posted on Facebook by EAs and would have been of most interest to them. I imagine a typical matching fundraiser audience is merely generally motivated by the matching without explicitly thinking about counterfactuals. I don't think they take there to be even an implication about counterfactuals, which I imagine is why charities are comfortable with matches (which GiveWell apparently think are typically non-counterfactual). So talking about dishonesty is too strong - not being actively transparent about this element is more on the mark.

they thought more money would go to the charities in the matching cases.

In particular, they thought that for each $10 donated, a full additional $10 would go to the charity if the match was still active. (Some people thought that more money would go to the charity in the matching case, but less than the full $10.)
