It's my great pleasure to announce that, after seven months of hard work and planning fallacy, the EA Survey is finally out.

It's a long document, however, so we've put it together in an external PDF.



In May 2014, a team from .impact and Charity Science released a survey of the effective altruist community. The survey offers data to supplement and clarify anecdotal impressions of the community, with the aim of better understanding it and how to promote EA.

In addition, it enabled a number of other valuable projects -- the initial seeding of EA Profiles, the new EA Donation Registry, and the Map of EAs. It also let us put many people in touch with local groups they didn’t know about, and establish presences in over 40 new cities and countries so far.


Summary of Important Findings

  • The survey was taken by 2,408 people, 1,146 (47.6%) of whom provided enough data to be considered; 813 of those (70.9%) considered themselves members of the EA movement and were included in the full analysis.

  • The top three sources from which people in our sample first heard about EA were LessWrong, friends, and Giving What We Can. LessWrong, GiveWell, and personal contact were cited as the top three reasons people continued to get more involved in EA. (Keep in mind that the EAs in our sample may not be representative of EAs overall… more on this later.)

  • 66.9% of the EAs in our sample are from the United States, the United Kingdom, and Australia, but we have EAs in many countries. You can see the public location responses visualized on a map!

  • The Bay Area had the most EAs in our sample, followed by London and then Oxford. New York and Washington, DC had surprisingly many EAs and may have flown under the radar.

  • The EAs in our sample donated over $5.23 million in total in 2013. The median 2013 donation was $450.

  • 238 EAs in our sample donated 1% of their income or more, and 84 gave 10% or more. You can see the past and planned donations that people have chosen to make public on the EA Donation Registry.

  • The top three charities donated to by EAs in our sample were GiveWell's three picks for 2013 -- AMF, SCI, and GiveDirectly. MIRI was the fourth largest donation target, followed by unrestricted donations to GiveWell.

  • Poverty was the most popular cause among EAs in our sample, followed by metacharity and then rationality.

  • 33.1% of EAs in our sample are either vegan or vegetarian.

  • 34.1% of EAs in our sample who indicated a career indicated that they were aiming to earn to give.


The Full Document

You can read the rest in the linked PDF!


A Note on Methodology

One concern worth putting front and centre is that we used a convenience sample: we tried to reach as many EAs as we could in the places we knew to find them. But we didn't get everyone.

It’s easy to survey, say, all Americans in a reliable way, because we know where Americans live and we know how to send surveys to a random sample of them. Sure, there may be difficulties with subpopulations who are too busy or subpopulations who don’t have landlines (though surveys now call cell phones).

Contrast this with trying to survey effective altruists. It’s hard to know who is an EA without asking them first, but we can’t exactly send surveys to random people all across the world and hope for the best. Instead, we have to do our best to figure out where EAs can be found, and try to get the survey to them.

We did our best, but some groups may have been oversampled (more survey respondents, by percentage, from that group than are actually in the true population of all EAs) or undersampled (not enough people in our sample from that subpopulation to be truly representative). This is a limitation that we can’t fully resolve, though we’ll strive to improve next year. At the bottom of this analysis, we include a methodological appendix that has a detailed discussion of this limitation and why we think our survey results are still useful.

You can find much more than you’d ever want in the methodological appendix at the bottom of the PDF.


In sum, this is probably the most exhaustive study of the effective altruism movement in existence.  It certainly exhausted us!

I'm really excited about the results and look forward to how they will be able to inform our movement.


Comments (73)

Thank you for doing this survey and analysis. I regret that the feedback from me was primarily critical, and that this reply will follow in a similar vein. But I don’t believe the data from this survey is interpretable in most cases, and I think that the main value of this work is as a cautionary example.

A biased analogy

Suppose you wanted to survey the population of Christians at Oxford: maybe you wanted to know their demographics, the mix of denominations, their beliefs on ‘hot button’ bioethical topics, and things like that.

Suppose you did it by going around the local churches and asking the priests to spread the word to their congregants. The local Catholic church is very excited, and the priest promises to mention it at the end of his sermon; you can’t get through to the Anglican vicar, but the secretary promises she’ll mention it in the next newsletter; the evangelical pastor politely declines.

You get the results, and you find that Christians in Oxford are overwhelmingly Catholic, that they are primarily White and Hispanic, that they tend conservative on most bioethical issues, and that they are particularly opposed to abortion and many forms of contraception.

Surveys and Sampling

Of course, yo... (read more)

Thanks for sharing such detailed thoughts on this Greg. It is so useful to have people with significant domain expertise in the community who take the time to carefully explain their concerns.

It's worth noting there was also significant domain expertise on the survey team.

Why isn't the survey at least useful as count data? It allows me to considerably sharpen my lower bounds on things like total donations and the number of Less Wrong EAs. I think count data is the much more useful kind to take away even ignoring sampling bias issues, because the data in the survey is over a year old; i.e. even if it were a representative snapshot of EA in early 2014, that snapshot would be of limited use, whereas most counts can safely be assumed to be going up.
Gregory Lewis (8y)
I agree the survey can provide useful count data along the lines of providing lower bounds. With a couple of exceptions, though, I didn't find the sort of lower bounds the survey gives hugely surprising or informative - if others found them much more so, great!
Very thoughtful post. Are there any types of analysis you think could be usefully performed on the data?
Gregory Lewis (8y)
One could compare between clusters (or, indeed, see where there are clusters), and these sorts of analyses would be more robust to sampling problems: even if LWers are oversampled compared to animal rights people, one can still see how they differ. Similar things like factor analysis, PCA etc. could be useful to see whether certain things trend together, especially for when folks could pick multiple options. Given that a regression-style analysis was abandoned, I assume actually performing this sort of work on the data is much easier said than done. If I ever get some spare time I might look at it myself, but I have quite a lot of other things to do...
One approach would be to identify a representative sample of the EA population and circulate among folks in that sample a short survey with a few questions randomly sampled from the original survey. By measuring response discrepancies between surveys (beyond what one would expect if both surveys were representative), one could estimate the size of the sampling bias in the original survey.

ETA: I now see that a proposal along these lines is discussed in the subsection 'Comparison of the EA Facebook Group to a Random Sample' of the Appendix. In a follow-up study, the authors of the survey randomly sampled members of the EA Facebook group and compared their responses to those of members of that group in the original survey. However, if one regards the EA Facebook group as a representative sample of the EA population (which seems reasonable to me), one could also compare the responses in the follow-up survey to all responses in the original survey. Although the authors of the survey don't make this comparison, it could be made easily using the data already collected (though given the small sample size, practically significant differences may not turn out to be statistically significant).
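The discrepancy check proposed here could be sketched as a two-proportion z-test: for each question, compare the proportion answering a given way in the random follow-up sample against the convenience sample. This is only an illustration with made-up counts, not the survey's actual figures:

```python
from math import sqrt, erf

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided z-test for a difference between two sample proportions."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)              # pooled proportion under H0
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Normal CDF via erf; two-sided p-value from the upper tail.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical: 40% of 80 random-sample respondents vs 55% of 800
# convenience-sample respondents named poverty as their top cause.
z, p = two_proportion_z(32, 80, 440, 800)
print(round(z, 2), round(p, 4))
```

A small random sample has limited power, so (as Gregory notes below) only fairly large discrepancies would show up as significant.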
I think it's right to say that the survey was premised on the idea that there is no way to know the true nature of the EA population and no known-to-be-representative sampling frame. If there were such a sampling frame or a known-to-be-representative population, we'd definitely have used that. Beforehand, and a little less so now, I would have strongly expected the EA Facebook group to not be representative. For that reason I think randomly sampling the EA FB group is largely uninformative- and I think that this is now Greg's view too, though I could be wrong.
Gregory Lewis (8y)
I agree that could work, although doing it is not straightforward - for technical reasons, there aren't many instances where you get added precision by doing a convenience survey 'on top' of a random sample, although they do exist. (Unfortunately, the random FB sample was small, with something like 80% non-response, making it not very helpful for gauging sampling deviation from the 'true' population. In some sense the subgroup comparisons do provide some of this information by pointing to different sub-populations - what they cannot provide is a measure as to whether these subgroups are being represented proportionally or not. A priori, though, that would seem pretty unlikely.)

As David notes, the 'EA FB group' is highly unlikely to be a representative sample. But I think it is more plausibly representative along the axes we'd be likely to be interested in for the survey. I'd guess EAs who are into animal rights are not hugely more likely to be on Facebook than those who are into global poverty, for example (could there be some effects? Absolutely - I'd guess the FB audience skews young and computer-savvy, so maybe folks interested in AI etc. might be more likely to be found there).

The problem with going to each 'cluster' of EAs is that you are effectively sampling parallel rather than orthogonal to your substructure: if you over-sample the young and computer-literate, that may not throw off the relative proportions of who lives where or who cares more about poverty than the far future; you'd be much more fearful of this if you oversampled a particular EA subculture like LW. I'd be more inclined to 'trust' the proportion data (%age male, %xrisk, %etc.) if the survey was 'just' of the EA facebook group, either probabilistically or convenience sampled. Naturally, still very far from perfect, and not for all areas (age, for example).

(Unfortunately, you cannot just filter the survey and just look at those who clicked through via the FB link to construct thi

Thanks for running the survey, writing it up, and posting the data. I think this is chiefly valuable for giving people an approximate overview of what we know about the movement, so it's great to have the summary document which does that.

I would have preferred fewer attempts to look for statistical significance, as I'm not sure they ever helped much and think they have led you to at least one misleading conclusion. In particular:

Reading the “The Four Focus Areas of Effective Altruism”, one would expect a roughly even split between (1) poverty, (2) metacharity, (3) far future / x-risk / AI, and (4) nonhuman animals. Above, instead of equal splits, poverty emerges as a clear leader [Footnote: Statistically significant with a t-test, p < 0.0001]

On the contrary, I think the main message from the data is that in the sample collected, they are roughly evenly split. The biggest of the four beats the smallest by less than a factor of two -- this is a relatively small difference when there are no mechanisms I can see which should equalise their size (I would not have been shocked if you'd found an order of magnitude difference between some two of them).

Doing a test here for statisti... (read more)

Thanks for the feedback. I agree that particular test/conclusion was unnecessary/misleading. I think we'll be more careful to avoid tests like that in future survey analyses :)
Peter Wildeford (8y)
It's hard to say. Others have told me that they greatly preferred backing up these kinds of statements with statistical testing. I guess I can't make everyone happy. :)
Owen Cotton-Barratt (8y)
OK, I guess I inferred the causality as being you did the test, then wrote the statement. If you were going to use the same language anyway, I agree that the test doesn't hurt -- but I think that this statement might have been better left out or weakened.
I agree with the spirit of this criticism, though it seems that the problem is not significance testing as such, but a failure to define the null hypothesis adequately.
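As an aside on the statistics being discussed: a test of whether four cause categories deviate from an even split would more naturally be a chi-square goodness-of-fit test than a t-test. A sketch with made-up counts (not the survey's actual numbers):

```python
def chi_square_gof(observed, expected):
    """Pearson chi-square goodness-of-fit statistic."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Made-up respondent counts for the four focus areas.
observed = [280, 190, 170, 160]
total = sum(observed)
expected = [total / 4] * 4          # null hypothesis: an even four-way split

stat = chi_square_gof(observed, expected)
# The 5% critical value of chi-square with df = 3 is about 7.815.
print(stat, stat > 7.815)
```

As Owen notes, though, rejecting a perfectly even split is not very informative when there is no mechanism that should equalise the category sizes in the first place - the choice of null hypothesis matters more than the test.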

Thank you to the survey team for completing what is an easy-to-underestimate volume of work. Thank you also to the many who completed this survey, helping us to both understand different EA communities better and to improve this process of learning about ourselves as a wider group in future years.

I have designed and analysed several consumer surveys professionally as part of my job as a strategy consultant.

There is already a discussion of sample bias so I will leave those issues alone in this post and focus on three simple suggestions to make the process ... (read more)

Thanks Chris, all very useful info. (On the 0 donors question: I've written about this elsewhere in the comments. A sizeable majority of these respondents were full-time students, or on low incomes, or had made significant past donations, or had pledged at least 10% (and often much more) of their future income. Once all these people are taken into account, the number of 0 donors was pretty low. There was a similar, if not even stronger, trend for people donating <$500.)
Thanks Chris, this is useful feedback and we'll go through it. For example, I think trying out draft versions would be valuable. I may ask you some more questions, e.g. about SurveyMonkey's features.
Happy to answer these any time, and happy to help out next year (ideally in low time commitment ways, given other constraints).

Thanks for this, and thanks for putting the full data on github. I'll have a sift through it tonight and see how far I get towards processing it all (perhaps I'll decide it's too messy and I'll just be grateful for the results in the report!).

I have one specific comment so far: on page 12 of the PDF you have rationality as the third-highest-ranking cause. This was surprisingly high to me. The table in imdata.csv has it as "Improving rationality or science", which is grouping together two very different things. (I am strongly in favour of improving science, such as with open data, a culture of sharing lab secrets and code, etc.; I'm pretty indifferent to CFAR-style rationality.)

Peter Wildeford (8y)
Good point. Yes, this is my bad, I forgot that part. Definitely a mistake. Definitely will break that apart next time.

"238 EAs in our sample donated 1% of their income or more, and 84 EAs in our sample give 10% of their income."

I was surprised by this. In particular, 22% (127/588) of people identifying as EAs do not donate. (Of course they may have good reasons for not donating, e.g. if they are employed by an EA charity or if they are currently investing in order to give more in the future). Do we know why so many people identify as EAs but do not presently donate?

Probably because the average age is so low (~25) - lots of students and people just starting out their careers.
Because self-identifying as EA is a lot easier than being self-sacrificing and donating. I saw the numbers with students removed and they did not improve as much as you would think.
The raw data seems to show that a lot of people who have donated zero have nevertheless pledged to donate a significant amount (e.g. everything above living expenses etc.).
The survey question only asked if people thought they could be described 'however loosely' as an effective altruist. I suspect this question did not perform as intended - we know it included people who said they had not heard of the term.
People will say anything on surveys. Many respondents go through clicking randomly. You can write a question that says, "Please answer C," and >10% of respondents will still click something other than C.
This year it might be worth including a mandatory question saying something like "Check C to promise not to go through clicking randomly", both as a test and a reminder.
I regularly do this when designing consumer surveys as part of my professional work - the concern in those instances is that respondents are mainly completing the survey for a small monetary reward and so are incentivised to click through as fast as possible. To help my own survey development skills, I participate in several online panels and can confirm that, whilst not exactly standard practice, a non-negligible proportion of online consumer surveys will include questions like this to screen out respondents who are not paying attention. This is less of a concern for the EA survey, but it is almost costless to include such a screening question, so it seems like an easy way to help validate any resulting analysis or conclusions.
What evidence could we get now or in the future that'd speak to the different hypotheses being offered in response to this?
I think as it stands the evidence is already pretty good (on this particular question). I only skimmed through the zero donors briefly, and I may go through in more detail in the future (or someone else can do it), but I found that a pretty substantial majority of the zero donors were either students or on a "low income", based on my quick, informal and pretty conservative coding of people as "low income" if they were on roughly the UK minimum wage or less. I would guesstimate at least around 70% were full time or "low income" by this measure.

But I then went through counting those who had donated a significant sum in the past (roughly $1000 or higher) or had pledged more than 10% in the future, and again, this was a majority, including some people (roughly 10) who weren't in the student/low-income group - so yet more were students, or low income, or had pledged 10%, or had previously donated a substantial amount. I also extended this analysis to the people donating <$500 but more than $0, and the figures were even more overwhelming: by my count roughly 90% were students or low income.

I'm being conservative given the rough nature of my coding and counting, but I think these figures are still too low, because I didn't count people who were on around $20,000 per year (which was quite a few, and still a lowish income), people who pledged 5% (which is still respectable), or people who just said something vague like "Yes" about future donations.

Just to give a flavour of the pledges: a significant number were even higher than the GWWC pledge (i.e. for roughly every 2 people pledging 10% within the zero donors, there was 1 pledging much more than that: 50% of income, 90% of income, everything above basic living expenses, and so on). So far from being not-EAs at all, a lot of these people seem like exemplary EAs!
Is this a typo? Do you mean "not full time", or not working, or "full time students"?
Yeh, thanks for the spot, that is just a typo: it should be "full time students"
I haven't looked at correlations between the various data-sets yet, so I'm not confident; I'll update on this later. Anyway, many who took the survey are likely university students, including perhaps a disproportionate number of graduate students, as effective altruism is skewed towards the academic world. This may mean many are still a few years away from being in a good position to donate.

I've made a bar chart plotter thing with the survey data: link.

Peter Wildeford (8y)
Also, do you have the GitHub code for your plotter? Would love to see.
Peter Wildeford (8y)
Woah, this is an impressive data viz accomplishment! You should make it a top-level post -- it's cooler than a comment. :) - Also, ... How did you do that so quickly? We had to pay $60 to get it done manually via virtual assistants.
Thanks Peter! I'll make the top-level post later today. (I might have given the impression that I did this all during a weekend. This isn't quite right - I spent 2-3 evenings, about 8 hours in total, going from the raw csv files to a nice and compact .js function. Then I wrote the plotter on the weekend.)

I did this bit in Excel. If the money amounts were in column A, I inserted three columns to the right: B for the currency (assumed USD unless otherwise specified), C for the min of the range given, D for the max. In column C, I started with =IF(ISNUMBER(A2), A2, "") and dragged that formula down the column. Then I went through line by line, reading off any text entries and turning them into currency/min/max (if a single value was reported, I entered it as the min and left the max blank): currency, tab, number, enter, currency, tab, number, tab, number, enter, currency, tab... It's not a fun way to spend an evening (hence why I didn't do the lifetime donations as well), but it doesn't actually take that long.

Then: a new column E for AVERAGE(C2:D2), dragged down the column. Then I typed the average currency conversions for 2013 into a new sheet and did a lookup (most users would use VLOOKUP, I think; I used MATCH and OFFSET) to get my final USD numbers in column F.

As a fierce partisan of the "_final_really_2" school of source control, I'm yet to learn how to GitHub. You can view the Javascript source easily enough though, and save it locally. (I suggest deleting the Google Analytics "i,s,o,g,r,a,m" script if you do this, or your browser might go looking for Google in your file system for a few seconds before plotting the graphs.) The two scripts not in the HTML file itself are d3.min.js and (EDIT: can't be bothered fixing the markdown underscores here.) A zip file with my ready-to-run CSV file and the R script to turn it into a Javascript function is here [].
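For readers who'd rather script it, the same pipeline - split each free-text amount into currency, min, and max, average the range, then convert to USD - can be sketched in Python. The regex, example entries, and exchange rates below are illustrative assumptions, not the survey's actual 2013 conversion figures:

```python
import re

# Illustrative exchange rates (assumed values, not the real 2013 averages).
USD_PER = {"USD": 1.0, "GBP": 1.56, "EUR": 1.33, "AUD": 0.97}

def parse_donation(text):
    """Turn a free-text amount like '£300-500' or '1000' into (currency, min, max)."""
    symbols = {"$": "USD", "£": "GBP", "€": "EUR"}
    m = re.match(r"\s*([$£€]|[A-Z]{3})?\s*([\d,.]+)\s*(?:-\s*([\d,.]+))?", text)
    if not m:
        return None
    cur = symbols.get(m.group(1), m.group(1) or "USD")  # bare numbers assumed USD
    lo = float(m.group(2).replace(",", ""))
    hi = float(m.group(3).replace(",", "")) if m.group(3) else lo
    return cur, lo, hi

def to_usd(text):
    """Average the reported range and convert with the assumed rate table."""
    parsed = parse_donation(text)
    if parsed is None or parsed[0] not in USD_PER:
        return None
    cur, lo, hi = parsed
    return (lo + hi) / 2 * USD_PER[cur]

print(to_usd("450"))        # plain number, assumed USD
print(to_usd("£300-500"))   # a GBP range, averaged then converted
```

Real survey responses are messier than this regex allows for (the reason the original cleanup was done by hand), so a script like this would still need a manual pass over whatever fails to parse.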
Peter Wildeford (8y)
Well I admire your dedication to do it yourself and not use my conversions. :) - Aww man, you should learn. It's tremendously useful. Not to mention a requirement for any programming job.

PDF link doesn't exist anymore. @Peter_Hurford

Peter Wildeford (3y)
Thanks. We're working on relocating it and will fix the link when we do. UPDATE: fixed
Peter Wildeford (3y)
The link is fixed now.

The first 17 entries in imdata.csv have some mixed-up columns, starting (at latest) from

Have you volunteered or worked for any of the following organisations? [Machine Intelligence Research Institute]

until (at least)

Over 2013, which charities did you donate to? [Against Malaria Foundation].

Some of this I can work out (volunteering at "6-10 friends" should obviously be in the friends column), but the blank cells under the AMF donations have me puzzled.

Peter Wildeford (8y)
Yes, it looks like the first 17 entries are corrupted for some reason. I'll look into it.

The previous link to the survey results died, so I edited to update the link.

Do you have any sense of the extent to which people who put down 'friend' as how they got into effective altruism might have learned about EA via their friend taking them to a meet-up group, or might have ended up getting properly committed because they made friends with people through a meet-up group? I was just thinking about how I would classify myself as having got into EA through friends, but that you might think it was more accurate to describe it as a meet-up group in Oxford getting me involved.

So, there were 2 questions, one speaking to how people "learned about EA" and one to how they "ended up getting properly committed": "How did you first hear about 'Effective Altruism'?" (single response) and "Which factors were important in 'getting you into' Effective Altruism, or altering your actions in its direction?" (multiple response). The second question covers the intersection you describe; I think you can see the overlap by selecting it as both the primary and secondary categories at []. Beyond that, we don't have any data (except comments at the end of the survey). I don't know of course, but I'd guess that in your case talking to friends was important in getting you into it and attending local group events regularly maybe wasn't - even if those friends attended local group events.
Thanks, that's interesting. The reason I was thinking that it might be more accurately attributed to a local group was that it seems unlikely I would have really formed friendships with any of the people around if they hadn't been setting up Giving What We Can.
Ah, OK - I was thinking of you being friends with Will MacAskill before from being in his masters cohort. I guess the central GWWC organisation in Oxford counts as a local group, just of a different sort than I was thinking of! (And then of course you'd have gone to events GWWC put on in Oxford University too.)

I'm trying to make sense of all the missing data. It seems very strange to have such a high non-response rate (nearly 20%) to simple demographic questions such as gender and student status, and this suggests a problem with the data.

You say here that a 'survey response' was generated each time somebody opened the survey, even if they answered no questions. Does that mean there wasn't a 'complete and submit' step? Was every partially completed survey considered a separate 'person'? If so, was there any way to determine if individuals were opening multiple ... (read more)

When ACE and HRC talked to statisticians and survey researchers as part of developing our Survey Guidelines [] for animal charities beginning to evaluate their own work, they consistently said that demographic questions should go at the end of the survey because they have high non-response rates and some people don't proceed past questions they aren't answering. So while it's intuitively surprising that people don't answer these simple questions, it's not obviously different from what (at least some) experts would expect. I don't know, however, whether 20% is an especially high non-response rate even taking that into account.
That's interesting to know, thank you for sharing it! Looking at this study (comparing mail and web surveys), they cited non-response rates to demographic items of 2-5%. However, I don't know how similar the target population here is to the 'general population' in these behaviours. []
Yes, these questions were right at the end. You can see the order of the questions in the spreadsheet that Peter linked to - they correspond to the order of the columns.
Thanks Tom. I'm limited in my spreadsheet wrangling at the moment, I'm afraid, but looking at the non-response rates cited in the document above, and comparing them to the order of the questions, non-responses seem to be low (30-50) until the question on income and donation specifics, after which they are much higher (150-220). A question that requires financial specifics seems likely to require someone to stop and seek documents, so could well cause someone to abandon the survey at least temporarily. If somebody abandoned the survey at that point, would the information they had entered so far be submitted? Or would they have to get to the end and click submit for any of their data to be included?
That's a good point, could well have happened, and is something we should consider changing. The questions were split into a few pages, and people's answers got saved when they clicked the 'Continue' button at the bottom of each page - so if they only submitted 2 pages, only those pages would be saved. We searched for retakes and saw a small number which we deleted.
Oh cool. How were you able to identify duplicates?
I looked for identical names or email addresses, and then manually checked them. The other thing we could do would be to record people's IP addresses and look for identical ones. However I chose not to record them due to privacy concerns. I would promise not to use them to try guessing who people are, and this identifying data never gets shared with anyone but me - I'd appreciate feedback on whether people would be comfortable with IP addresses getting recorded given this.
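The identical-name-or-email check described here can be sketched in a few lines; normalising case and whitespace before matching catches retakes that differ only in capitalisation. The field names and example rows below are hypothetical:

```python
from collections import defaultdict

# Hypothetical responses; the real survey fields may differ.
responses = [
    {"name": "Ann Lee", "email": "ann@x.org"},
    {"name": "ann lee", "email": "Ann@X.org"},
    {"name": "Bo Chu",  "email": "bo@y.org"},
]

def candidate_retakes(rows):
    """Group rows by normalised name or email; groups of 2+ need manual review."""
    groups = defaultdict(list)
    for i, r in enumerate(rows):
        groups[("name", r["name"].strip().lower())].append(i)
        groups[("email", r["email"].strip().lower())].append(i)
    flagged = set()
    for members in groups.values():
        if len(members) > 1:
            flagged.update(members)
    return sorted(flagged)

print(candidate_retakes(responses))  # → [0, 1]
```

This only flags candidates for the manual check Tom describes; it deliberately doesn't auto-delete, since two distinct people can share a name.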
Peter Wildeford (8y)
Tom was the one who created the survey taking architecture, so I've asked him to get back to you, just to make sure I don't give you incorrect information.

Thanks for the survey! An interesting read. One question, two comments:

1 How do I read the graph on p10?


"Reading the “The Four Focus Areas of Effective Altruism”, one would expect a roughly even split between (1) poverty, (2) metacharity, (3) far future / x-risk / AI, and (4) nonhuman animals. Above, instead of equal splits, poverty emerges as a clear leader [Footnote: Statistically significant with a t-test, p < 0.0001]." Though, maybe this isn’t fair. If we redefine metacharity to also include rationality and cause prioritization, it take

... (read more)
Peter Wildeford (8y)
The x-axis is the percent of income donated; the y-axis is the number of people who have donated that percent or more. - It is "clearly leading the four" under one interpretation, which I go on to suggest is probably a narrow one.
1 Thanks, that makes sense. Maybe it's just me, but when I see this kind of graph, I'm expecting the y-axis to be the response variable, e.g. percent of income donated, and the x-axis to represent the rank of the people, like this []. 2 I know the report acknowledges that its interpretation is questionable there, but I think that understates the problem. Rather, I think it proposes then misapplies a categorisation... Imagine that I make a survey regarding the world's religions with options "Anglican", "Catholic", "Islamic", "Buddhist", "Hindu", et cetera, and then I say that, reading theological scholar so-and-so's categorisation of the four great religions - Christian, Islamic, Buddhist and Hindu - Hindu is clearly the most popular. And then I go on to clarify that if you add Anglican and Catholic together, you end up with more than Hindu. Then I go on to say that these are two different "interpretations", one of which is "probably narrow"...
I'm not au fait with metacharity or rationality - can you explain why rationality should be bundled under metacharity? What is the meta plan behind the promotion of rationality (particularly in its specific forms, like the organisation CFAR)? Is the case for this really as strong as the case that Anglican and Catholic should be bundled under Christian? I guess the point of your analogy may have been that Four Focus Areas of Effective Altruism [] classed rationality/CFAR under meta EA, which is fair.
Yeah, I don't think that they're as similar as Anglicanism and Catholicism; I'm just saying that you have to consistently apply your chosen scheme. But I do in fact think that they're decently related. EA Outreach wants to increase the number of people trying to do good. 80,000 Hours wants to move altruists to impactful careers (increasing the quality of altruistic activities). GPP wants to get altruists working on higher-value projects. CFAR wants altruists (and non-altruists) to be equipped with better thinking skills. The common thread is that they all want to increase the number, build the capacities, and improve the target projects of altruists.
Makes sense, but points to meta being an unusually broad and unspecific description.
Peter Wildeford (8y)
That's fair. It's a cumulative frequency graph. - I mean it's a matter of opinion, so I thought I'd provide the different interpretations. Sorry if you feel that I editorialized too much.
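The cumulative-frequency reading described above ("y people donated at least x% of income") is easy to compute directly; a small sketch with made-up percentages:

```python
def cumulative_at_least(values, thresholds):
    """For each threshold, count how many values are >= it."""
    return [sum(v >= t for v in values) for t in thresholds]

# Made-up donation percentages for nine hypothetical respondents.
pcts = [0, 0, 1, 2, 5, 10, 10, 12, 50]
xs = [0, 1, 5, 10, 25, 50]
print(list(zip(xs, cumulative_at_least(pcts, xs))))
```

On such a curve the y-value at x = 0 is the whole sample, and the curve is monotonically non-increasing - which is why it can be mistaken for a ranked-response plot at first glance.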

It's interesting that only around 10% of self-identified EAs report donating 10% or more of their income. That makes me feel less guilty about "only" giving 10%. :)

Is there a way to easily pull out the % of their income that people gave in 2013 in a more granular way - e.g. people who gave 10.0% vs. 10.1% vs. 12%? I don't see the data with the currencies converted on GitHub.

This is action-relevant for us at Charity Science because, assuming GWWC pledgers stick carefully to 10.0%, getting them to donate to a particular fundraiser has no counterfactual value.

Peter Wildeford (8y)
23 people donated precisely 10%. 1 person donated precisely 15%. 46 people donated between 10% and 15%, such as 11.8%, 12.4%, 12.6%, 14.9%, 10.6%... So it does seem more common than not.
Peter Wildeford (8y)
Someone at GWWC might be able to confirm with data from MyGiving.

Tom mentioned that the raw data would be shared (in an anonymized form)?

Also, I recall there being questions about politics - maybe about which other movements we were involved in?

Peter Wildeford (8y)
Yes. All data and scripts are publicly available in the GitHub repository for the project []. - Yes, these questions were in the survey (and available in the anonymized data), but were not included in our analysis. Other people can feel free to do further analysis with the politics data.
  • Right: 21
  • Centre-right: 17
  • Centre: 60
  • Centre-left: 222
  • Left: 299
  • Libertarian: 128

Particularly high non-response rate, though (this was one of the last questions).
I'm sure you'd get high non-response on political questions regardless, for obvious reasons.