Idk if there are best practices for reporting study results in a forum post, but I’ve decided to keep most of the nerdy stuff out. My write-up, methods, and data are available here: https://osf.io/bkgvr/. I spent the first semester of my psychology undergrad thesis trying to better understand how to introduce people to EA.

Background/Summary: The literature is mixed on whether giving people information about cost-effectiveness differences between charities increases their donations to effective charities. Recent work by Lucius Caviola and others, which presents the expert-estimated 100x cost-effectiveness difference, seems promising. I wanted to see whether such information would increase people’s interest in effective altruism. Contrary to expectations, it didn’t, and I also found no significant effect of such information on Effective Donations (though read below for more details). Possible takeaway: I’ve sometimes used cost-effectiveness variance information in my EA pitches; this pilot study suggests that such information is not effective at increasing interest in EA. I also found evidence of a selection effect linking interest in EA to political beliefs. More to explore: people were highly interested in EA, but were not very enthusiastic about getting involved – how do we bridge this gap? A stronger manipulation, larger sample, etc.

Hypotheses:
H1: Correcting misconceptions around charity effectiveness will increase people’s interest in the EA movement.
H2: Correcting misconceptions around charity effectiveness will increase donations to effective charities.
Plus various exploratory hypotheses.

Participants: 44 undergraduate students at a small liberal arts college in the US; majority female, majority white, majority politically left/liberal, mostly intro psych students. No participant had learned about EA for more than 5 minutes.

Methods: Recruited by email, participants completed an online Qualtrics survey lasting about 6.5 minutes. I used a 2x2 factorial design.

First, participants either did an exercise asking them to estimate the difference in cost-effectiveness between average and highly effective charities – or they did a control task. Then, participants were either told that the difference in cost-effectiveness is 100x – or they were given control (irrelevant) information. All participants received an explanation of why charities can vary in cost-effectiveness (different problems, different methods, or different efficiencies/overhead). Participants then split a donation (real money, but not theirs) between an average cost-effectiveness charity and a highly effective charity.

The rest of the survey was the same for all participants. All read a short description of EA: “Effective Altruism is a community and a social movement dedicated to answering the question ‘how can we do the most good with the resources we have?’ Effective Altruism advises that before jumping in to trying to improve the world, we should carefully consider how to do so most effectively, both so that we don’t mistakenly make things worse, and so that we can help as many people as possible. By using evidence and reasoning we can figure out which methods of doing good are the most impactful, and then we can focus our resources – be that our time, money, or careers – on these methods of improving the world.”

Then, participants answered 6 questions about how interesting EA sounds and their general favorability toward it. Next, they answered 5 questions about whether they wanted to do certain things to get involved (while this was not numeric, I made a numeric scale out of it for analysis purposes). They then answered a comprehension check question to ensure they had read the description of EA (almost all participants answered correctly; those who did not were excluded). Participants answered demographic questions, including the following (response 1-7): “Here is a 7-point scale on which the political views that people might hold are arranged from extremely liberal (left) to extremely conservative (right). Where would you place yourself on this scale?”

Finally, all participants were informed about the nature of the study and the fact that not everybody had been given complete information. All then saw that the most cost-effective charities save 100x more lives than average charities for the same amount of money, and made a Final Effective Donation: the same donation task as before, but with full information for everybody.

Measures:
Effective Donation – % donated to the highly effective charity.
Final Effective Donation – % donated to the highly effective charity at the end of the survey.
Donation Change – Final Effective Donation minus Effective Donation.
Interest Score – based on the 6 questions about interest/favorability toward EA, 2 reverse-scored.
Involvement Score – based on the 5 questions about getting involved in EA.
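To make the scoring concrete, here’s a minimal sketch in Python with hypothetical column names and made-up data (assuming the 6 interest items use a 1-7 response scale, so reverse-scoring an item means taking 8 minus the response; the real data and labels are on OSF):

```python
import pandas as pd

# Hypothetical column names and made-up values; the real data are on OSF.
df = pd.DataFrame({
    "donation_effective_pct":       [70, 90, 50],   # initial % to the effective charity
    "final_donation_effective_pct": [100, 95, 80],  # % after everyone saw the 100x info
    "interest_1": [6, 5, 7], "interest_2": [7, 6, 5], "interest_3": [5, 6, 6],
    "interest_4": [6, 7, 5], "interest_5_rev": [2, 3, 1], "interest_6_rev": [1, 2, 3],
})

# Reverse-score items on a 1-7 scale: 1 <-> 7, 2 <-> 6, and so on.
for col in ["interest_5_rev", "interest_6_rev"]:
    df[col] = 8 - df[col]

interest_items = [c for c in df.columns if c.startswith("interest")]
df["interest_score"] = df[interest_items].sum(axis=1)  # range 6-42

df["donation_change"] = df["final_donation_effective_pct"] - df["donation_effective_pct"]
print(df[["interest_score", "donation_change"]])
```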

Primary Hypothesis Results:
H1: No main effects and no interaction – neither the cost-effectiveness information nor the estimation exercise increased interest in EA.
H2: No main effects and no interaction – neither the cost-effectiveness information nor the estimation exercise increased Effective Donations.
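The F-statistics and partial eta squared values quoted further down correspond to two-way ANOVAs. Here’s a minimal sketch of that analysis in Python, with made-up data and hypothetical column names standing in for the OSF data:

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Tiny made-up data frame: one row per participant,
# with the two 2x2 factors coded 0/1.
df = pd.DataFrame({
    "exercise":       [0, 0, 0, 0, 1, 1, 1, 1],
    "info":           [0, 0, 1, 1, 0, 0, 1, 1],
    "interest_score": [33, 31, 35, 30, 34, 32, 29, 36],
})

# H1: two-way ANOVA on Interest Score (two main effects + interaction);
# swapping in the donation measure gives the H2 test.
model = ols("interest_score ~ C(exercise) * C(info)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```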

Interesting other results: Political identification was associated with interest in EA: being further left/liberal was associated with greater interest (there was no effect on Effective Donations or Involvement Score). I interpret this as evidence for a selection effect behind the heavily left-leaning demographics of the EA community; that is, people on the left are more interested in EA initially, rather than our community having a treatment effect of making people left (though I didn’t study this). Caveats: participants were generally left, averaging 1.5 on the 0-6 scale (converted from the 1-7 scale described above); the sample size was small; this was an exploratory analysis; and the question about political beliefs was unidimensional, whereas actual beliefs are complex. (A sketch of this analysis follows below.)

Donation Change was associated with Info Condition: participants who did not receive the info before their first donation, but had received it by their second, showed larger increases in effective donations. This is what we would expect in line with H2.

I’ve heard, but can’t track down the source, that fewer than 10% of Oxford students hear about EA before graduating (or that this used to be the case). That was not the case in my study. I screened participants for previous experience with EA: 28 (52%) had never heard of EA before, 21 (39%) had heard of EA but learned about it for less than 5 minutes, and 4 (7%) had learned about EA for more than 5 minutes and so were excluded from participating. (Note: these counts include people who did not complete the study and/or failed the comprehension check, which is why n > 44.) Caveat: some people who had learned about EA for more than 5 minutes may not have clicked on the survey at all, since the exclusion criterion appeared in the recruiting materials. Big caveat: people who know me are probably more likely to participate in my study than random students, and most of them have heard me talk about EA, making this (likely) a biased sample. These results should be taken with caution, but they indicate that the % of students at my school who have heard of EA is above 10% (though probably not as high as 46%), while the % of students who have actually learned about EA is still quite low (anecdotally).
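For the political-identification result above, the recode is just response minus 1. Here’s a sketch of one simple way to test the association, with made-up numbers and hypothetical names (the full analysis is in the OSF write-up):

```python
from scipy import stats

# Made-up responses: political self-placement (1-7) and Interest Score.
political_1_7 = [1, 2, 1, 3, 4, 2, 5, 1]
interest      = [36, 33, 35, 31, 28, 34, 27, 37]

# Recode 1-7 to 0-6, so 0 = extremely liberal and 6 = extremely conservative.
political_0_6 = [x - 1 for x in political_1_7]

r, p = stats.pearsonr(political_0_6, interest)
print(f"r = {r:.2f}, p = {p:.3f}")
```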

Less interesting other results: Final Effective Donation was higher than initial Effective Donation for almost all participants. Exercise responses were in line with other studies. Overall, interest/favorability toward EA was high (~5.5 on a 1-7 scale). Participants were less inclined to get involved in EA: the average answer on most questions was 0.64, about halfway between “No thanks” (0) and “Maybe” (1), where the remaining option was “Yes!” (2).

Sorry I didn't include graphics in this post – LaTeX is confusing, and they don't seem necessary. Some are in the presentation on OSF. Feel free to ask questions. For more on everything, check out all my materials, my full write-up, and data (stripped of most demographic information to preserve anonymity): https://osf.io/bkgvr/

Comments

[anonymous]:
Thanks for sharing this, Aaron! Really interesting pilot work.

One quick thought-- I wouldn't rely too heavily on statistical significance tests, particularly with small sample sizes. P-values are largely a function of sample size, and it's nearly impossible to get statistical significance with 44 participants (unless your effect size is huge!). 

Speaking of effect sizes, it seems like you were powered to detect an effect of d=0.7. For a messaging study with rather subtle manipulations, an effect of d=0.7 seems huge! I would be pretty impressed if giving people CE info resulted in an effect size of d=0.2 or d=0.3, for instance. I'm guessing you were constrained by the # of participants you could recruit (which is quite reasonable-- lots of pilot studies are underpowered). But given the low power, I'd be reluctant to draw strong conclusions.
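To make the power point concrete, here's a rough calculation (a sketch using statsmodels, assuming a two-sample t-test with two-sided alpha = .05 and 80% power):

```python
from statsmodels.stats.power import TTestIndPower

# Participants needed per group for a two-sample t-test,
# two-sided alpha = .05, 80% power.
power = TTestIndPower()
for d in (0.7, 0.3, 0.2):
    n = power.solve_power(effect_size=d, alpha=0.05, power=0.8)
    print(f"d = {d}: ~{n:.0f} per group")
# d = 0.7 -> ~33 per group; d = 0.3 -> ~176; d = 0.2 -> ~394.
```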

I also appreciate that you reported the mean scores in the results section of your paper, which allowed me to skim to see if there's anything interesting. I think there might be!

There was no significant difference in Effective Donation between the Info (M = 80.21, SD = 18.79) and No Info (M = 71.79, SD = 17.05) conditions, F(1, 34) = 1.85, p = .183, ηp² = .052.

If this effect is real, I think this is pretty impressive/interesting. On average, the Effective Donation scores are about 8 percentage points higher for the Info group participants than the No Info group participants (and I didn't do a formal calculation for Cohen's d, but it looks like it'd be about d=0.5).
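For what it's worth, plugging the reported means and SDs into the usual formula (and assuming roughly equal group sizes, since the ns aren't quoted here) bears this out:

```python
import math

m_info, sd_info = 80.21, 18.79   # Info condition (from the result quoted above)
m_ctrl, sd_ctrl = 71.79, 17.05   # No Info condition

# Pooled SD, assuming roughly equal group sizes (the ns aren't quoted here).
sd_pooled = math.sqrt((sd_info**2 + sd_ctrl**2) / 2)
d = (m_info - m_ctrl) / sd_pooled
print(f"Cohen's d ≈ {d:.2f}")  # comes out near 0.47
```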

Of course, given the small sample size, it's hard to draw any definitive conclusions. But it seems quite plausible to me that the Info condition worked-- and at the very least, I don't think these findings provide evidence against the idea that the info condition worked.

Would be curious to see if you have any thoughts on this. If you end up having an opportunity to test this with a larger sample size, that would be super interesting. Great work & excited to see what you do next!

Thanks for your thorough comment! Yeah, I was shooting for about 60 participants, but due to time constraints and this being a pilot study I only ended up with 44, so it was even more underpowered.

Intuitively I would expect a larger effect size, given that I don't consider the manipulation to be particularly subtle; but yes, it was much subtler than it could have been. This is something I will definitely explore more if I continue this project; for example, adding visuals and a manipulation check might do a better job of making the manipulation salient. I would like a manipulation check like "What is the difference between average and highly cost-effective charities?", set up so that participants who get it wrong have to try again.

The fact that Donation Change differed significantly between Info groups does support the second main hypothesis, suggesting that CE info affects effective donations. This result, however, is not novel. So yes, the effect you picked up on is probably real – but this study was underpowered to detect it at p < .05 (or even at marginal significance).

In terms of CE info being ineffective, I'm thinking mainly about interest in EA – where there really seems to be nothing going on: "There was no significant difference between the Info (M = 32.52, SD = 5.92) and No Info (M = 33.12, SD = 4.01) conditions, F(1, 40) = .118, p = .733, ηp² = .003." There isn't even a trend in the expected direction. This was most important to me because, as far as I know, there is no previous empirical evidence suggesting that CE info affects interest in EA. It's also more relevant to me as somebody running an EA group and trying to generate interest from people outside the group.

Thanks again for your comment! Edit: Here's the previous study suggesting CE info influences effective donations: http://journal.sjdm.org/20/200504/jdm200504.pdf

Another consideration here: participants knew they were in an experiment, and probably had a good sense of what you were aiming at.

The difference between treatment and control (2x2 here) was whether people

1. were asked "to estimate the difference in cost-effectiveness between average and highly effective charities – or they had a control task"

and whether

2. "participants either were told that the difference in cost-effectiveness is 100x – or they were told control, irrelevant information."

If either of these increased participants' stated interest in EA or their giving behavior, that would be informative, but we still might want to be careful in making inferences about the impact of these activities and this 'de-biasing' in real-world contexts.

Either of these tasks might have heightened 'desirability bias', or the extent to which people considered their choices in a particular analytical way – something they might not have done had they not known they were in an experiment.

Thanks for sharing and for putting this on OSF. Some thoughts and suggestions, echoing the other comments here.

Maybe consider rewriting/re-titling this? To say "did not increase" seems too strong and definitive.

You "failed to find a statistically significant effect" in standard tests that were basically underpowered. This is not strong evidence of a near-zero true effect. If anything, you found evidence suggesting a positive effect, at least on the donation 'action' (if I read Aaron's comment carefully).

You might consider a Bayesian approach, and then put some credible bounds on the true effects, given a reasonably flat/uninformative prior. (You can do something similar with 'CIs' in a standard frequentist approach.)

Then you will be able to say something like 'with this prior, our posterior 80% credible interval over the true effect is between -X% and +X%' (perhaps stated in terms of Cohen's d or something relatable) ... if that interval rules out a 'substantial effect', then you could make a more meaningful statement. (With appropriate caveats about the nature of the sample, the context, etc., as you do.)
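As a sketch of what that looks like with the Effective Donation summary stats quoted above (per-group ns are a guess here, since they aren't reported in this thread):

```python
import math
from scipy import stats

# Effective Donation summary stats quoted above; per-group ns are a guess.
m1, s1, n1 = 80.21, 18.79, 19  # Info
m2, s2, n2 = 71.79, 17.05, 19  # No Info

diff = m1 - m2
se = math.sqrt(s1**2 / n1 + s2**2 / n2)

# With a flat prior, the posterior for the true mean difference is roughly a
# t distribution centered at the observed difference, so this 80% "credible
# interval" is numerically the same as the frequentist CI.
lo, hi = stats.t.interval(0.80, df=n1 + n2 - 2, loc=diff, scale=se)
print(f"80% interval: {lo:.1f} to {hi:.1f} percentage points")
```

With these guessed ns the interval is wide, which illustrates how weakly the data constrain the true effect either way.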

(Also, if you rewrite, can you break this into shorter paragraphs -- the long paragraph chunks become overwhelming to read.)
