Does "mainly a subset" mean that a significant majority of responses coded this way were also coded as cause prio?
That's right, as we note here:
The Cause Prioritization and Focus on AI categories were largely, but not entirely, overlapping. The responses within the Cause Prioritization category which did not explicitly refer to too much focus on AI were focused on insufficient attention being paid to other causes, primarily animals and GHD.
Specifically, of those who mention Cause Prioritization, around 68% were also coded as p...
We did note this explicitly:
As we noted in our earlier report, individuals who are particularly dissatisfied with EA may be less likely to complete the survey (whether they have completely dropped out of the community or not), although the opposite effect (more dissatisfied respondents are more motivated to complete the survey to express their dissatisfaction) is also plausible.
I don't think there's any feasible way to address this within this smaller, supplementary survey. Within the main EA Survey we do look for signs of differential attrition.
Thanks Ulrik!
We can provide the percentages broken down by different groups. I would advise against thinking about this in terms of 'what the results would be if weighted to match counterfactually equal demographics', though: (i) if the demographics were different (equal), then presumably concern about demographics would also be different [fewer people would be worried about demographic diversity if we had perfect demographic diversity], and (ii) if the demographics were different (equal), then the composition of the different demographic groups within the community wou...
Hopefully in the next couple/few weeks, though we're prioritising the broader community health related questions from that followup survey linked above.
I can confirm that there's not been so dramatic a shift since the 2020 cause results (below for reference), i.e. global poverty and AI are still very similarly ranked. The new allocation-of-resources data should hopefully give an even clearer sense of 'how much of EA' people want to be this or that.
We did gather cause prioritization data in the most recent EA Survey, we just delayed publishing that report because we gathered additional cause prioritization data in this followup survey, which we ran in December. This was looking at what share of resources EAs would allocate to different causes, rather than just their rating of different causes, which I think adds an important new angle.
We stopped gathering information about donations to individual charities in 2020 as part of a drive to make the EA Survey shorter to increase participation. However, th...
Thanks! I agree that allocating a percentage of "resources", where this contains very different kinds of resources (money and labour), can be difficult. Still, we wanted this to largely match the question asked at the Meta Coordination Forum, which also combined these, so we matched their wording.
Thanks!
We'll definitely be reporting on changes in awareness of and attitudes towards EA in our general reporting of EA Pulse in 2024. I'm not sure if/when we'd do a separate dedicated post on changes in EA awareness/attitudes. We have a long list (this list is very non-exhaustive) of research which is unpublished due to lack of capacity. A couple of items on that list also touch on attitudes/awareness of EA post-FTX, although we have run additional surveys since then.
Feel free to reach out privately if there are specific things it would be helpful to ...
Thanks Luke! Everyone who opted to receive additional surveys by email in the last EA Survey will have received an email with this survey (December 11-12th). We find they often get sent to the spam folder though, so you might want to check.
If you follow the link in that email, it won't automatically pre-fill your email or automatically track you, but you will be able to see which email address it was sent to, so that you can enter that one.
It sounds like you are reading my comment as saying that "center left" is very similar to "left". But I think it's pretty clear from the full quote that that's not what I'm saying.
The OP says that EA is 80% "center-left". I correct them, and say that EA is 36.8% left and 39.8% "Center left."
The "(so quite similar)" here refers to the percentages 36.8% and 39.8% (indeed, these are likely not even statistically significant differences).
I can see how, completely in the abstract, one could read the claim as being that "Left" and "Center left" are s...
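As a rough sanity check on the "(so quite similar)" point, here is a sketch of a z-test for the difference between two proportions drawn from the same multinomial sample. The sample size of 3,000 is a hypothetical stand-in, not the actual EA Survey n, so this only illustrates why a ~3-point gap need not be statistically significant:

```python
import math

# Hypothetical illustration: is 39.8% ("Center left") significantly
# different from 36.8% ("Left")? n = 3000 is an assumed sample size,
# not the actual EA Survey sample.
def multinomial_diff_z(p1: float, p2: float, n: int) -> float:
    """z-statistic for the difference between two proportions
    estimated from the same multinomial sample (the categories are
    mutually exclusive, so the estimates are negatively correlated)."""
    diff = p1 - p2
    # Var(p1_hat - p2_hat) = [p1 + p2 - (p1 - p2)^2] / n
    var = (p1 + p2 - diff ** 2) / n
    return diff / math.sqrt(var)

z = multinomial_diff_z(0.398, 0.368, 3000)
print(round(z, 2))  # ~1.88, below the 1.96 threshold for p < .05 (two-tailed)
```

With a smaller real sample the z-statistic would be smaller still, so the hedge that the difference is "likely not even statistically significant" is plausible on its face.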
It's worth noting that:
Confirmed. And not only that, but French EAs are more likely to say that they are Left, rather than Center left.
... EAs calling themselves 'center-left' and that apparently make 80% of EA according to Rethink Priorities surveys
Roughly 80% (76.6%) consider themselves left or center left, of which 36.8% consider themselves "Left", while 39.8% consider themselves "Center left" (so quite similar).
See also 'How Donors Choose Charities' (Breeze, 2013), where even unusually engaged donors are explicit about basing their donations on personal preference and often donating quite haphazardly, with little deliberation.
See also 'Impediments to Effective Altruism' (Berman et al, 2018 [full paper]), where people endorsed making charitable decisions based on subjective preferences and often did not elect to donate to the most effective charities, even when this information was available.
See also this review by Caviola et al (2021).
The most common places that people first or primarily heard about EA seem to be Leaf itself, Non-Trivial, and school — none of these categories show up on the EA survey.
"School" does appear in the EA Survey; it's just under the superordinate "Educational course" category (3% of respondents).
We have 3 surveys of our own assessing where non-EAs more broadly heard of EA, which also find that education is among the most important sources of hearing about EA:
Over the last ~6 months I've noticed a general shift amongst EA orgs to focus less on reducing risks from AI, Bio, nukes, etc based on the logic of longtermism, and more based on Global Catastrophic Risks (GCRs) directly... My guess is these changes are (almost entirely) driven by PR concerns about longtermism.
It seems worth flagging that whether these alternative approaches are better for PR (or outreach considered more broadly) seems very uncertain. I'm not aware of any empirical work directly assessing this even though it seems a clearly empirical...
I'd be excited about a world where CEA posts generic costs+benefits of all of the big programs. I wouldn't fault CEA, as no one else does this yet, but I think some of this would be very useful, though perhaps too confrontational.
Agreed that this could be very useful. It could also (as I argued here) be useful to have more such models produced by independent evaluators.[1]
Although, I also think there is value in seeing models from the orgs themselves for ~ reasoning transparency purposes.
Points in favour of cortical neuron counts as a proxy for moral weight:
- Neuron counts correlate with our intuitions of moral weights. Cortical counts would say that ~300 chicken life years are morally equivalent to one human life year, which sounds about right.
That neuron counts seem to correlate with intuitions of moral weight is true, but potentially misleading. We discuss these results, drawing on our own data here.
I would quite strongly recommend that more survey research be done (including more analysis, as well as additional surveys: we ha...
I think that revealed preference can be misleading in this context, for reasons I outline here.
It's not clear that people's revealed preferences are what we should be concerned about compared to, for example, what value people would reflectively endorse assigning to animals in the abstract. People's revealed preference for continuing to eat meat may be influenced by akrasia or other cognitive distortions which aren't relevant to assessing how much they actually endorse animals being valued.[1] We may care about the latter, not the former, when asses...
"We want to publish but can't because the time isn't paid for" seems like a big loss, and a potentially fixable one. Can I ask what you guys have considered for fixing it? This seems to me like an unusually attractive opportunity for crowdfunding or medium donors, because it's a crisply defined chunk of work with clear outcomes.
Thanks! I'm planning to post something about our funding situation before the end of the year, but a couple of quick observations about the specific points you raise:
I definitely agree that funding is a significant factor for some institutional actors.
For example, RP's Surveys and Data Analysis team has a significant amount of research that we would like to publish if we had capacity / could afford to do so: our capacity is entirely bottlenecked on funding and, as we are ~ entirely reliant on paid commissions (we don't receive any grants for general support), time spent publishing reports is basically just pro bono, adding to our funding deficit.
Examples of this sort of unpublished research include:
Agreed. As we note in footnote 2:
...There are many reasons why respondents may erroneously claim knowledge of something. But simply put, one reason is that people like demonstrating their knowledge, and may err on the side of claiming to have heard of something even if they are not sure. Moreover, if the component words that make up a term are familiar, then the respondent may either mistakenly believe they have already encountered the term, or think it is sufficient that they believe they can reasonably infer what the term means from its component parts
The full process is described in our earlier post, and included a variety of other checks as well.
But, in brief, the "stringent" and "permissive" criteria refer to respondents' open comment explanations of what they understand "effective altruism" to mean, and whether they either displayed clear familiarity with effective altruism, such that it would be very unlikely someone would give that response if they were not genuinely familiar with effective altruism (e.g. by referring to using evidence and reason to maximise the amount of good done with you...
Thanks for the reply!
I think there are also important advantages to internal impact evaluation: the results are more likely to be bought into internally, and important context or nuance is less likely to be missed.
I agree there are some advantages to internal evaluation. But I think "the results are more likely to be bought into internally" is, in many cases, only an advantage insofar as orgs are erroneously more likely to trust their own work than external independent work.
That said, I agree that the importance of orgs bringing important context a...
Thanks for writing this!
For me, the value of independent impact evaluation seems particularly clear, though I would agree that orgs doing it in-house is still usually better than nothing.
You mention difficulty, orgs being busy, and orgs having strong priors as possible reasons for the lack of impact evaluation. I'd speculate that financial cost is perhaps the largest factor. Orgs that want to have an impact evaluation, but can't afford the time cost, could readily commission external evaluations (were finance no issue).[1] RP's Surveys and Data Analys...
I agree that "regulation" may be easier to advocate for than a "pause". Our results (see below) point in that direction, though the difference is perhaps not as big as one might imagine (and less stark than the "NIMBY" case); I would expect this to depend on the details of the regulation and the pause and their presentation.
...
- Pause on AI Research. Support for a pause on AI research outstrips opposition. We estimate that 51% of the population would support, 25% would oppose, 20% remain neutral, and 4% don’t know (compared to 58-61% support and 19-23
Many thanks for collating this data!
...I believe the community should assign responsibility to, and funding for, one or more people or organizations to conduct and disseminate this sort of high-level analysis of community growth metrics. I honestly find it baffling that measuring the growth of EA and reporting findings back to the community isn’t someone’s explicit job…
It seems obvious to me that numerous stakeholders-- including organization leaders, donors of all sizes, group leaders, and entrepreneurs-- would all benefit from having an accurate understandi
Do you know roughly when we can expect these results [from the followup RP survey] to be published?
I'm hopeful that we can launch the survey sometime between the end of this month and the middle of November (the next 2-4 weeks). Like with Ben's report, we're just waiting on input from a variety of different orgs (the survey is addressing multiple different aims, besides looking at community health/FTX, so there are quite a few different stakeholders). Allowing another 2-4 weeks for the survey to run (taking us up to early-mid December), I would still aim to report on the FTX/community health results before the end of the year.
From some more polls I did, it seems like after Documentaries, speeches are the single form of activism that gets most people to go vegan... This is also – to a degree – backed up by larger and more professional polls like this one.)
You might find our recently published survey about what prompted vegetarians/vegans to go vegetarian/vegan relevant. We argue in the post that previous surveys often suffered from two problems:
This is the kind of scenario where something that would typically be welfare maximising (and right according to commonsense morality) is actually not welfare maximising and is wrong according to commonsense morality. i.e. typically people who are greatly in need of pain medication are the people who would benefit most from pain medication; typically you shouldn't give strong pain medication to people with no medical need of it; typically there are flow-through effects to consider like addiction, upholding norms, social relations and moral character (becaus...
But there are also polls showing that almost half of U.S. adults "support a ban on factory farming." I think the correct takeaway from those polls is that there's a gap between vaguely agreeing with an idea when asked vs. actually supporting specific, meaningful policies in a proactive way.
I broadly agree with the conclusion as stated. But I think there are at least a couple of important asymmetries between the factory farming question and the AI question, which mean that we shouldn't expect there to be a gap of a similar magnitude between stated pub...
Investigating the effects of talking (or not talking) about climate change in different EA/longtermist contexts seems to be neglected (e.g. through surveys/experiments and/or focus groups), despite being tractable with few resources.
It seems like we don't actually know either the direction or magnitude of the effect, and are mostly guessing, despite the stakes potentially being quite high (considered across EA/longtermist outreach).
We could potentially survey the EA community on this later this year. Please feel free to reach out if you have specific requests/suggestions for the formulation of the question.
I worry this heuristic works if and only if people have reasonable substantive views about what kind of thing they want to see more/less on the Forum.
For example, if people vote in accordance with the view 'I want to see more/less [things I like/dislike or agree/disagree with]', then this heuristic functions just the same as a like/dislike or agree/disagree vote (which I think would be bad). If people vote in accordance with the view 'I want to see more/less [posts which make substantive contributions, which others may benefit from, even if I strongly disagree with them/don't think they are well made]', then the heuristic functions much more like Matt's.
I think Monmouth's question is not exactly about whether the public believe AI to be an existential threat. They asked:
"How worried are you that machines with artificial intelligence could eventually pose a threat to the existence of the human race – very, somewhat, not too, or not at all worried?" The 55% you cite is those who said they were "Very worried" or "somewhat worried."
Like the earlier YouGov poll, this conflates an affective question (how worried are you) with a cognitive question (what do you believe will happen). That's why we deliberately spli...
Still, it’s hard to see how tweaking EA can lead to a product that we and others be excited about growing.
It's not clear to me how far this is the case.
I agree that:
I agree; the most striking part of this article was that this core assumption had no numerical data to back it up, only his own discussions with high-level EAs.
"Due to the reputational collapse of EA"
High-level EAs are more likely to have had closer involvement with SBF/FTX and are therefore more likely to have suffered greater reputational loss than the average EA, or even the movement as a whole. I would confidently guess that the "200-800" EAs who lost big on FTX would skew heavily towards the top of the leadership structure.
The three studies cited here in the...
Thanks for this. I found the uncited claims about EA's "reputational collapse" in the OP quite frustrating and appreciated this more data-driven response.
There are people who I would consider "EA" who I wouldn't consider a "community member" (e.g. if they were not engaging much with other people in the community professionally or socially), but I'd be surprised if they label themselves "EA" (maybe they want to keep their identity small, or don't like being associated with the EA community).
Fwiw, I am broadly an example of this category, which is partly why I raised the example: I strongly believe in EA and engage in EA work, but mostly don't interact with EAs outside professional contexts. So I ...
Thanks!
For instance, I personally found it surprising how few people disbelieve AI being a major risk (only 23% disbelieve it being an extinction level risk)
Just to clarify, we don't find in this study that only 23% of people disbelieve AI is an extinction risk. This study shows that, of those who disagreed with the CAIS statement, 23% explained this in terms of AI not causing extinction.
So, on the one hand, this is a percentage of a smaller group (only 26% of people disagreed with the CAIS statement in our previous survey), not everyone. On the other h...
Many people I would consider "EA" in the sense that they work on high impact causes, socially engage with other community members etc. don't consider themselves EA, but, I think, would likely consider themselves community members
This is reasonable, but I think the opposite applies as well. i.e. people can be EA (committed to the philosophy, taking EA actions) but not a member of the community. Personally, this seems a little more natural than the reverse, but YMMV (I have never really felt the intuitive appeal of believing in EA and engaging in EA activities but not describing oneself as "an EA").
Thanks!
it might be easiest if you share a draft of the planned questions so that people can see what is already in there and what seems in scope to include.
Makes sense. We're trying to elicit another round of suggestions here first (since people may well have new requests since the original announcement).
Thanks!
We've spoken to a few different orgs/researchers about animal welfare requests, but would welcome more.
Thanks for asking. We just re-announced it!
It was originally going to be supported by the FTX Future Fund and was therefore delayed while we sought alternative funding. We have now acquired alternative funding for this project for one year. However, the project will now be running on a quarterly basis, rather than monthly, to make the most efficient use of limited funds.
I'd be very curious to see predictions (ideally backed up with bets) from people on different sides of this debate as to how widespread animal product consumption would be 1 year, 5 years or 10 years after plant-based meat reaching PTC-parity (suitably operationalized). Perhaps a survey of experts might facilitate this? Prediction markets would also be relevant.
This seems like it would cut through some of the not action-relevant meta-debate about whether people previously believed that PTC were merely necessary or sufficient.
(Jacob and I both work for Rethink Priorities, but this was written in a private capacity.)
It’s hard to take these responses too literally since the median response for chicken was 1,000 and the average American consumes over 1,000 chickens per lifetime.
Personally I think the largest part of the explanation of this is what we say here:
...For example, we anticipate that participants would likely give different responses were questions posed not in terms of the moral value of different species in the abstract, but in terms of concrete trade-offs, e.g., whether to save 1 human life or x animals. We would anticipate that this would likely lead to lower
Thanks for the comment!
I think more research into whether public attitudes towards AI might be influenced by the composition of the messengers would be interesting. It would be relatively straightforward to run an experiment assessing whether people's attitudes differ in response to different messengers.
That said, the hypothesis (that AI risk communication being largely done by nerdy white men influences attitudes via public perceptions of whether 'people like me' are concerned about AI) seems to conflict with the available evidence. Both our previous...
Thanks for the comment!
Thanks for your question Jessica.
There are no significant differences between the racial categories (unsurprising given the small sample sizes).
Thanks!
For satisfaction, we see the following patterns.
- Looking at actual satisfaction scores post-FTX, we see more engaged people were more highly satisfied than less engaged people. In comparison, for current satisfaction, this is no longer the case or is only minimally so (setting aside the least engaged who remain less satisfied than the moderately to highly engaged). Every group's satisfaction has decreased, with moderate to highly engaged EAs' satisfaction declining to similar levels (implying a larger decrease among the more highly engaged).
- The ...