I am the Principal Research Director at Rethink Priorities. I lead our Surveys and Data Analysis department and our Worldview Investigation Team.
The Worldview Investigation Team previously completed the Moral Weight Project and CURVE Sequence / Cross-Cause Model. We're currently working on tools to help EAs decide how they should allocate resources within portfolios of different causes, and how to use a moral parliament approach to allocate resources given metanormative uncertainty.
The Surveys and Data Analysis Team primarily works on private commissions for core EA movement and longtermist orgs, where we provide:
- Survey methodology and data analysis
I formerly also managed our Wild Animal Welfare department, and I've previously worked for Charity Science and been a trustee at Charity Entrepreneurship and EA London.
My academic interests are in moral psychology and methodology at the intersection of psychology and philosophy.
That's a good question. Although it somewhat depends on your purposes, there are multiple reasons why you might want to measure both separately.
Note that often affect is measured at the level of individual experiences or events, not just as an overall balance. And there is evidence suggesting that negative and positive affect contribute differently to reported life satisfaction. For example, germane to your earlier question, this study finds that "positive affect had strong effects on life satisfaction across the groups, whereas negative affect had weak or nonsignificant effects."
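To illustrate the analytic upshot, here's a minimal sketch (in Python; the dataset and column names are hypothetical placeholders) of estimating the separate contributions of positive and negative affect to life satisfaction, rather than collapsing them into a single balance score:

```python
# Minimal sketch: regress life satisfaction on positive and negative
# affect separately, rather than on a single net-affect balance score.
# The file and column names (life_sat, pos_affect, neg_affect) are
# hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey_data.csv")

# Separate predictors: lets positive and negative affect take different
# coefficients, as in the study cited above.
separate = smf.ols("life_sat ~ pos_affect + neg_affect", data=df).fit()

# Collapsed predictor: forces symmetrical, opposite-signed effects.
df["net_affect"] = df["pos_affect"] - df["neg_affect"]
collapsed = smf.ols("life_sat ~ net_affect", data=df).fit()

print(separate.summary())
print(collapsed.summary())
```

If the two coefficients in the first model differ markedly in magnitude, the collapsed model is discarding real information.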
You might also be interested in measuring negative and positive affect for other reasons. For example, you might just normatively care about negative states more than symmetrical positive states, or you might have concerns about the symmetry of the measures.
We have some data on public support for WAW interventions of different kinds from our recent Attitudes Towards Wild Animal Welfare Scale paper. The academic paper itself does not highlight the results that would be most interesting to EAs in this context, so I'll reproduce them below, and also link to this more accessible summary on the Faunalytics website (credit to @Willem Sleegers, who was the first author of both).
We asked respondents about their level of support for different specific interventions. All the interventions tested had a plurality of support except for genetically modifying wild animals.[2] Helping wild animals in natural disasters, vaccinating and healing sick wild animals, and supplying food, water, and shelter all had large majorities in support. Note that the survey sample was not weighted to be representative (as the goal of the studies was to validate the measures, not to assess public opinion), but I would not expect this to change the basic pattern of the results.
| Intervention | Oppose | Neither | Support |
| --- | --- | --- | --- |
| Helping wild animals in fires and natural disasters | 0.8% | 3.7% | 95.5% |
| Vaccinating and healing sick wild animals | 6.7% | 10.5% | 82.7% |
| Providing for the basic needs of wild animals (e.g., supplying food and water, creating shelters) | 9.5% | 11.5% | 79.1% |
| Conducting research into how to alter nature to improve the lives of wild animals | 26.5% | 19.1% | 54.3% |
| Controlling the fertility of wild animals to manage their population size | 33.9% | 23.8% | 42.3% |
| Genetically modifying wild animals to improve their welfare or the welfare of other wild animals | 70.3% | 16.2% | 13.5% |
Of course, some might claim that the popular interventions are not characteristic of WAW or are too small scale (though I do not think this is true of vaccinations). But I think it is notable that even research into "how to alter nature" has majority support, and fertility control has plurality support.
We also asked, more abstractly, whether people endorse the attitude that intervening to help wild animals is infeasible (what we call 'intervention ineffectiveness').
With the caveat that this measure was designed not for polling absolute levels of public support but rather for reliably capturing a specific underlying attitude: on average, respondents did not strongly endorse this attitude, and slightly leaned towards disagreement.

Stepping back, I would not take a very strong stance on the FAW vs WAW normality/popularity question. I think this is very likely to vary at the level of individual intervention, or depend on framing when presented abstractly. As an example of the latter point, when we presented FAW and WAW as abstract cause areas, each with a 'moderate' and 'controversial' framing, WAW was competitive with a moderate framing (non-significantly ahead of Climate Change as well), while it lagged when presented with a framing that mentioned genetic engineering. We'd be happy to gather further evidence to examine a wider variety of framings or interventions.
Though this was a secondary aspect of the paper: its core aim was to develop and validate a new set of measures of attitudes towards wild animal welfare, and these questions about support for different interventions were intended only to validate those measures.
Which is, perhaps, not surprising: many people oppose GM tout court.
It seems like life satisfaction (a cognitive evaluation) and affect are simply different things, even if they are related, and have different correlates. It doesn't strike me as that intuitively surprising that people would often evaluate their lives positively even if they experience a lot of negative affective states, in part because people care a lot about things other than their hedonic state (EAs/hedonic utilitarians may be slightly unusual in this respect), and partly due to various biases (e.g. self-protective and self-presentation biases) that might incline people towards positive reports.
Thanks gergo!
> A small number of small orgs do meet the bar, though! AIS Cape Town, EA Philippines, EA & AIS Hungary (probably at least some others) have been funded consistently for years. The bar is really high though for these groups, and I guess funders don't see enough good opportunities to support to justify investing more resources into them through external services. (Maybe this is what you meant anyway but I wasn't sure)
My claim is actually slightly different (though much closer to your second claim than your first). It's definitely not that no small groups are funded (obviously untrue), but that funders are often not interested in funding work on the strength of it supporting smaller groups, where "smaller" includes the majority of orgs.
> The highest counterfactual impact comes from working with organisations that could benefit but haven’t budgeted for marketing due to a lack of understanding.
>
> As JS from Good Impressions told us:
>
> “Willingness to pay is not as strong a predictor of commitment or perceived value as I thought it would be.”
>
> This creates a chicken-and-egg problem: funders expect clients to pay, but clients lack the means.
This matches our own experience (with the Rethink Priorities Surveys and Data Analysis team).
I would add that, in our experience, the situation is worse than the chicken-and-egg problem as stated. As you note, funders are often not interested in funding work which is supporting smaller, more peripheral or less established groups (and to be clear, this seems to be a matter of 'most orgs don't meet the bar' rather than 'all but a few smaller groups do meet the bar').
But we have also been told by more than one funder that if our work is supporting core, well-resourced orgs, then those orgs ought to fund it themselves, and you shouldn't need central funding.[1] This creates a real catch-22, where projects of this kind can be funded neither if they are supporting the biggest orgs nor if they're not.
I also find that people often significantly overestimate the ability of even the largest orgs to pay. We often find that orgs are willing to invest tens of staff hours in working with us on a project (implying they value it), but they still face hard limits on whether they can spend $500-1,000 on costs for the project.[2]
I've not directly experienced this response recently, as we've not been applying for funding on this sort of basis, so YMMV.
Perhaps explained by (i) even well-resourced orgs not having large amounts of unrestricted funds that they can spend on whatever unforeseen expenses they want, (ii) internal approvals for funding being difficult, and (iii) needing/wanting to stick to some, pretty low, sense of what reasonable costs for advertising/experiments are.
> I am curious about the community's thoughts on this lack of diversity
In previous surveys, diversity/JEID-related issues have often been mentioned as reasons for dissatisfaction. That said, there are diverse views about the topic (many of which you can read here; it's much discussed).
Community Health Supplementary Survey


Thanks Jakob!
One thing I'll add to this, which I think is important, is that it may matter significantly how people are engaged in prioritisation within causes. I think it may be surprisingly common for within-cause prioritisation, even at the relatively high sub-cause level, not to help us form a cross-cause prioritisation to any significant extent.
To take your earlier example: suppose you have within-animal prioritisers prioritising farmed animal welfare vs wild animal welfare. They go back and forth on whether it's more important that WAW is bigger in scale, or that FAW is more tractable, and so on. To what extent does that allow us to prioritise wild animal welfare vs biosecurity, which the GCR prioritisers have been comparing to AI Safety? I would suggest, potentially not very much.
It might seem like work that prioritises FAW vs WAW (within animals) and AI Safety vs biosecurity (within GCR) would allow us to compare any of the considered sub-causes to each other. If these 'sub-cause' prioritisation efforts gave us cost-effectiveness estimates in the same currency then they might, in principle. But I think that very often:
Someone doing within-cause prioritisation could complain that most of the prioritisation they do is not like this, that it is more intervention-focused and not high-level sub-cause focused, and that it does give cost-effectiveness estimates. I agree that within-cause prioritisation that gives intervention-level cost-effectiveness estimates is potentially more useful for building up cross-cause prioritisations. But even these cases will typically still be limited by the second and third bulletpoints above (I think the Cross-Cause Model is a rare example of the kind of work needed to generate actual cross-cause prioritisations from the ground up, based on interventions).
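To make the 'same currency' point concrete, here's a toy illustration (in Python, with invented numbers) of why two within-cause rankings expressed in different units don't settle a cross-cause comparison:

```python
# Toy illustration: within-cause cost-effectiveness estimates in
# different units don't compose into a cross-cause ranking.
# All numbers are invented for illustration.
animals = {"FAW": 10.0, "WAW": 5.0}  # e.g. welfare points per $1k
gcr = {"AI": 3.0, "bio": 1.0}        # e.g. micro-reductions in x-risk per $1k

# Each within-cause ranking is straightforward on its own:
assert animals["FAW"] > animals["WAW"]
assert gcr["AI"] > gcr["bio"]

# But comparing WAW (5) to bio (1) is meaningless until you choose an
# exchange rate between the two units, and that choice does all the work:
for welfare_points_per_micro_xrisk in (0.1, 10.0):  # pure assumptions
    bio_in_welfare_units = gcr["bio"] * welfare_points_per_micro_xrisk
    winner = "WAW" if animals["WAW"] > bio_in_welfare_units else "bio"
    print(f"exchange rate {welfare_points_per_micro_xrisk}: {winner} wins")
```

Neither within-cause exercise supplies that exchange rate; it has to come from somewhere else, which is the kind of ground-up work the Cross-Cause Model attempts.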
Thanks Vasco.
I think there are two distinct questions here: the sample and the analysis.
When resources allow, I think it is often better to draw a representative sample of the full population, and then analyse the effect of different traits / the effects within different sub-populations. Even if the people getting involved in EA have tended to be younger, more educated, etc., I think there are still reasons to be concerned about the views of the broader population. Of course, if you are interested in a very niche population, then this approach will either not be possible or will be very resource-inefficient, and you might have to sample more narrowly.
When looking for demographic interactions in these results, I found none that were significant, which is not uncommon. I focus here on the replication study in order to have a cleaner manipulation of the "doing good better" effect, though it has a smaller sample, and we gathered fewer demographics than in the representative sample in the first study.
For age, we see a fairly consistent effect (the trends are fairly linear even if I allow them to be non-linear, fwiw).
For student status, we see no interaction effect and the same main effect.
For sex, we see no effect and a consistent main effect.[1]
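For anyone wanting to run similar checks on their own data, here's a minimal sketch of this kind of interaction test (in Python; the dataset and variable names are hypothetical):

```python
# Minimal sketch of testing whether an experimental effect interacts
# with demographics. The file and variable names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("replication_study.csv")

# 'condition * x' expands to both main effects plus their interaction;
# non-significant interaction coefficients are evidence that the effect
# is consistent across that demographic.
model = smf.ols(
    "outcome ~ condition * age + condition * student + condition * C(sex)",
    data=df,
).fit()
print(model.summary())

# To allow a non-linear age trend, age can be entered as a polynomial,
# e.g. "outcome ~ condition * (age + I(age ** 2))".
```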
If there's a lot of interest in this, we could potentially look at education and income in the first survey.
People sometimes ask why we use sex rather than gender in public surveys, and it's usually to match the census so that we can weight.
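Since weighting came up: for readers unfamiliar with it, below is a toy sketch of raking (iterative proportional fitting) survey weights to census margins. The sample and the target proportions are made up, and a real analysis would use an established implementation rather than this from-scratch version:

```python
# Toy sketch of raking (iterative proportional fitting): adjust weights
# until the weighted sample margins match target (e.g. census) margins.
import pandas as pd

def rake(df, targets, n_iter=50):
    """targets maps column name -> {category: target proportion}."""
    w = pd.Series(1.0, index=df.index)
    for _ in range(n_iter):
        for col, target in targets.items():
            current = w.groupby(df[col]).sum() / w.sum()
            factors = {k: target[k] / current[k] for k in target}
            w = w * df[col].map(factors)
    return w / w.mean()  # normalise to mean 1

# Made-up sample and made-up census targets:
df = pd.DataFrame({
    "sex": ["male", "female", "female", "male", "female"],
    "age_band": ["18-34", "18-34", "35-54", "55+", "55+"],
})
targets = {
    "sex": {"male": 0.49, "female": 0.51},
    "age_band": {"18-34": 0.30, "35-54": 0.35, "55+": 0.35},
}
df["weight"] = rake(df, targets)
print(df)
```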
> there’s a limit to how much you can learn in a structured interview, because you can’t adapt your questioning on the fly if you notice some particular strength or weakness of a candidate.
I agree. Very often I think that semi-structured interviews (which have a more or less closely planned structure, with the capacity to deviate) will be the best compromise between fully structured and fully unstructured interviews. I think it's relatively rare that the benefits of being completely structured outweigh the benefits of at least potentially asking a relevant followup question, and rare that the benefits of being completely unstructured outweigh the benefits of having at least a fairly well-developed plan, with key questions to ask, going in.
Perhaps they mentioned this elsewhere, but it could have added to their defence of this kind of question to note that 'third-person' questions like this are often asked to ameliorate social desirability bias and incentivise truth-telling, e.g. here or as in the Bayesian truth serum. Of course, that's a separate question from whether this specific question was worded in the best way.
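For the unfamiliar: the core idea of the Bayesian truth serum (Prelec, 2004) is to ask each respondent both for their own answer and for a prediction of how others will answer, then reward answers that are 'surprisingly common' relative to those predictions. As I recall the scoring rule (worth checking against the original before relying on it), respondent $r$'s score is

$$u^r = \log \frac{\bar{x}_{k(r)}}{\bar{y}_{k(r)}} + \alpha \sum_k \bar{x}_k \log \frac{y^r_k}{\bar{x}_k}$$

where $k(r)$ is $r$'s own answer, $y^r_k$ their predicted frequency of answer $k$, $\bar{x}_k$ the actual frequency of $k$, and $\bar{y}_k$ the geometric mean of everyone's predictions. The first term rewards 'surprisingly common' answers and the second rewards accurate predictions, which is what makes truth-telling incentive-compatible.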