All of David_Moss's Comments + Replies

Thanks!

For satisfaction, we see the following patterns.

  • Looking at actual satisfaction scores post-FTX, we see that more engaged people were more satisfied than less engaged people. In comparison, for current satisfaction, this is no longer the case, or is only minimally so (setting aside the least engaged, who remain less satisfied than the moderately to highly engaged). Every group's satisfaction has decreased, with moderately to highly engaged EAs' satisfaction declining to similar levels (implying a larger decrease among the more highly engaged). 
    • The
... (read more)

Does "mainly a subset" mean that a significant majority of responses coded this way were also coded as cause prio? 

 

That's right, as we note here:

The Cause Prioritization and Focus on AI categories were largely, but not entirely, overlapping. The responses within the Cause Prioritization category which did not explicitly refer to too much focus on AI were focused on insufficient attention being paid to other causes, primarily animals and GHD.  

Specifically, of those who mention Cause Prioritization, around 68% were also coded as p... (read more)

We did note this explicitly:

As we noted in our earlier report, individuals who are particularly dissatisfied with EA may be less likely to complete the survey (whether they have completely dropped out of the community or not), although the opposite effect (more dissatisfied respondents are more motivated to complete the survey to express their dissatisfaction) is also plausible.

I don't think there's any feasible way to address this within this smaller, supplementary survey. Within the main EA Survey we do look for signs of differential attrition.

4
AnonymousEAForumAccount
2d
As I mentioned in point 3 of this comment: This suggests we could crudely estimate the selection effects of people dropping out of the community and therefore not answering the survey by assuming that there was a similar increase in scores of 1 and 2 as there was for scores of 3-5. My guess is that this would still understate the selection bias (because I’d guess we’re also missing people who would have given ratings in the 3-5 range), but it would at least be a start. I think it would be fair to assume that people who would have given satisfaction ratings of 1 or 2 but didn’t bother to complete the survey are probably also undercounted in the various measures of behavioral change. 
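
To make the crude adjustment described above concrete, here is a minimal sketch in Python. The 1-10 scale, the choice of score bands, and all counts are made-up assumptions for illustration only, not figures from the survey:

    # Crude selection-effect adjustment: assume the under-counted 1-2
    # (very dissatisfied) band grew by the same factor as the observed
    # 3-5 band. All counts below are hypothetical.
    before = {1: 5, 2: 10, 3: 20, 4: 40, 5: 80}    # earlier survey (hypothetical)
    after = {1: 6, 2: 12, 3: 35, 4: 70, 5: 140}    # current survey (hypothetical)

    growth_3_to_5 = sum(after[s] for s in (3, 4, 5)) / sum(before[s] for s in (3, 4, 5))

    # Expected 1-2 counts if dissatisfied people had not dropped out,
    # versus what the current survey actually observed.
    expected_1_to_2 = sum(before[s] for s in (1, 2)) * growth_3_to_5
    observed_1_to_2 = sum(after[s] for s in (1, 2))

    print(f"3-5 band grew by a factor of {growth_3_to_5:.2f}")
    print(f"Estimated missing very-dissatisfied respondents: {expected_1_to_2 - observed_1_to_2:.0f}")

As the commenter notes, this would likely still understate the bias if some would-be respondents in the 3-5 range are also missing.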

Thanks Ulrik!

We can provide the percentages broken down by different groups. I would advise against thinking about this in terms of 'what would the results be if weighted to match non-actual equal demographics' though: (i) if the demographics were different (equal) then presumably concern about demographics would be different [fewer people would be worried about demographic diversity if we had perfect demographic diversity], and (ii) if the demographics were different (equal) then the composition of the different demographic groups within the community wou... (read more)

Hopefully in the next couple/few weeks, though we're prioritising the broader community health-related questions from that followup survey linked above.

I can confirm that there's not been so dramatic a shift since the 2020 cause results (below for reference), i.e. global poverty and AI are still very similarly ranked. The new allocation-of-resources data should hopefully give an even clearer sense of 'how much of EA' people want to be this or that.

We did gather cause prioritization data in the most recent EA Survey, we just delayed publishing that report because we gathered additional cause prioritization data in this followup survey, which we ran in December. This was looking at what share of resources EAs would allocate to different causes, rather than just their rating of different causes, which I think adds an important new angle.

We stopped gathering information about donations to individual charities in 2020 as part of a drive to make the EA Survey shorter to increase participation. However, th... (read more)

1
andrew_richardson
2mo
Thank you for collecting the data in these surveys! Do you have an idea for the ETA for the 2023 data? My motivation is that when I talk to people about EA, I often hear the criticism that EA is all about AI risk, and I respond by saying that actually EAs in practice mostly prioritize and donate to global health causes. I've been linking the 2019 survey for a few years now and I'm starting to notice that it's out of date.

Thanks! I agree that allocating a percentage of "resources", where this contains very different kinds of resources (money and labour), can be difficult. Still, we wanted this to largely match the question asked at the Meta Coordination Forum, which also combined this, so we matched their wording.

Thanks!

We'll definitely be reporting on changes in awareness of and attitudes towards EA in our general reporting of EA Pulse in 2024. I'm not sure if/when we'd do a separate dedicated post towards changes in EA awareness/attitudes. We have a long list (this list is very non-exhaustive) of research which is unpublished due to lack of capacity. A couple of items on that list also touch on attitudes/awareness of EA post-FTX, although we have run additional surveys since then.

Feel free to reach out privately if there are specific things it would be helpful to ... (read more)

Thanks Luke! Everyone who opted to receive additional surveys by email in the last EA Survey will have received an email with this survey (December 11-12th). We find they often get sent to the spam folder though, so you might want to check.

If you follow the link in that email, it won't automatically pre-fill your email address or automatically track you, but you will know which email address it was sent to, so you can enter that one.

2
Luke Freeman
3mo
Thanks! Found it (yep, was in spam unfortunately).

It sounds like you are reading my comment as saying that "center left" is very similar to "left". But I think it's pretty clear from the full quote that that's not what I'm saying.

The OP says that EA is 80% "center-left". I correct them, and say that EA is 36.8% "Left" and 39.8% "Center left". 

The "(so quite similar)" here refers to the percentages 36.8% and 39.8% (indeed, these are likely not even statistically significant differences). 

I can see how, completely in the abstract, one could read the claim as being that "Left" and "Center left" are s... (read more)

3
prisonpent
4mo
Now that you point it out I agree that's the more plausible reading, but it genuinely wasn't the one that occurred to me first. 

It's worth noting that:

  • Results don't vary so dramatically across most countries in our data, with none of the countries with the largest number of EAs showing less than ~35% identifying as "Left".
  • The majority of EAs and the majority of EA left/center-leftists are outside the US.
2
David Mathers
4mo
You're right sorry. Will move it! 

Confirmed. And not only that, but French EAs are more likely to say that they are Left, rather than Center left.

2
David Mathers
4mo

I'm curious why this post got -3 worth of downvotes (at time of writing). It seems like a pretty straightforward statement of our results.

3
prisonpent
4mo
I didn't downvote you, but I would guess those who did were probably objecting to this. Self-identified leftists, myself included, generally see modern liberalism as a qualitatively different ideology. Imagine someone at Charity Navigator[1] offhandedly describing EA as "basically the same as us". Now imagine that the longtermism discourse had gotten so bad that basically every successful EA organization could expect to experience periodic coup attempts, and "they're basically Charity Navigator" was the canonical way to insult people on the other side. That's what "left = very liberal" looks like from here.

  1. ^

    before they started doing impact ratings

... EAs calling themselves 'center-left' and that apparently make 80% of EA according to Rethink Priorities surveys

 

Roughly 80% (76.6%) consider themselves left or center left, of which 36.8% consider themselves "Left", while 39.8% consider themselves "Center left" (so quite similar).

1
David_Moss
4mo
I'm curious why this post got -3 worth of downvotes (at time of writing). It seems like a pretty straightforward statement of our results.
3
Vaipan
4mo
Thanks David, I was thinking about this survey. I guess my point still stands--a leftist EA in Scandinavia doesn't mean the same thing as a leftist in the US, and my guess is that the majority of what these EAs call 'left' would be seen as center-left or even moderate right-wing in other countries (such as France or Sweden). 

See also 'How Donors Choose Charities' (Breeze, 2013), where even unusually engaged donors are explicit about basing their donations on personal preference and often donating quite haphazardly, with little deliberation.

See also 'Impediments to Effective Altruism' (Berman et al, 2018 [full paper]), where people endorsed making charitable decisions based on subjective preferences and often did not elect to donate to the most effective charities, even when this information was available.

See also this review by Caviola et al (2021).

The most common places that people first or primarily heard about EA seem to be Leaf itself, Non-Trivial, and school — none of these categories show up on the EA survey.

 

"School" does appear in the EA Survey, it's just under the superordinate "Educational course" category (3%) of respondents. 

We have 3 surveys of our own assessing where non-EAs more broadly heard of EA, which also find that education is among the most important sources of hearing about EA:

  • In a survey of US students more broadly (not just EAs), we ran for CEA (referenced
... (read more)
2
Jamie_Harris
4mo
Those additional unpublished-but-referenced results are v helpful comparisons, thank you! I've noticed a fair few times when people (myself included, in this case) are gesturing or guessing about certain factors, and then you notice that and leave a detailed comment adding in relevant empirical data. I'm a big fan of that, so thank you for your contributions here and elsewhere! I'll tone down the phrasing about Singer and Ted talks and make a couple of other wording tweaks. Agree with your caveats!

Over the last ~6 months I've noticed a general shift amongst EA orgs to focus less on reducing risks from AI, Bio, nukes, etc based on the logic of longtermism, and more based on Global Catastrophic Risks (GCRs) directly... My guess is these changes are (almost entirely) driven by PR concerns about longtermism.

 

It seems worth flagging that whether these alternative approaches are better for PR (or outreach considered more broadly) seems very uncertain. I'm not aware of any empirical work directly assessing this even though it seems a clearly empirical... (read more)

I'd be excited about a world where CEA posts generic costs+benefits of all of the big programs. I wouldn't fault CEA, as no one else does this yet, but I think some of this would be very useful, though perhaps too confrontational. 

 

Agreed that this could be very useful. It could also (as I argued here) be useful to have more such models produced by independent evaluators.[1]

 

  1. ^

    Although, I also think there is value in seeing models from the orgs themselves for ~ reasoning transparency purposes.

Points in favour of cortical neuron counts as a proxy for moral weight:

  1. Neuron counts correlate with our intuitions of moral weights. Cortical counts would say that ~300 chicken life years are morally equivalent to one human life year, which sounds about right.

 

That neuron counts seem to correlate with intuitions of moral weight is true, but potentially misleading. We discuss these results, drawing on our own data, here.

I would quite strongly recommend that more survey research be done (including more analysis, as well as additional surveys: we ha... (read more)

I think that revealed preference can be misleading in this context, for reasons I outline here.

It's not clear that people's revealed preferences are what we should be concerned about compared to, for example, what value people would reflectively endorse assigning to animals in the abstract. People's revealed preference for continuing to eat meat may be influenced by akrasia or other cognitive distortions which aren't relevant to assessing how much they actually endorse animals being valued.[1] We may care about the latter, not the former, when asses... (read more)

"We want to publish but can't because the time isn't paid for" seems like a big loss, and a potentially fixable one. Can I ask what you guys have considered for fixing it? This seems to me like an unusually attractive opportunity for crowdfunding or medium donors, because it's a crisply defined chunk of work with clear outcomes.


Thanks! I'm planning to post something about our funding situation before the end of the year, but a couple of quick observations about the specific points you raise:

  • I think funding projects from multiple smaller donors is just gene
... (read more)
2
Elizabeth
5mo
Yeah, "objective" wasn't a great word choice there. I went back and forth between "objective", "object", and "object-level", and probably made the wrong call. I agree there is an objective answer to "what percentage of people think positively of malaria nets?" but view it as importantly different than "what is the impact of nets on the spread of malaria?" I agree the right amount of social meta-investigation is >0. I'm currently uncomfortable with the amount EA thinks about itself and its presentation; but even if that's true, professionalizing the investigation may be an improvement. My qualms here don't rise to the level where I would voice them in the normal course of events, but they seemed important to state when I was otherwise pretty explicitly endorsing the potential posts.  I can say a little more on what in particular made me uncomfortable. I wouldn't be writing these if you hadn't asked and if I hadn't just called for money for the project of writing them up, and if I was I'd be aiming for a much higher quality bar. I view saying these at this quality level as a little risky, but worth it because this conversation feels really productive and I do think these concerns about EA overall are important, even though I don't think they're your fault in particular: * several of these questions feel like they don't cut reality at the joints, and would render important facets invisible. These were quick summaries so it's not fair to judge them, but I feel this way about a lot of EA survey work where I do have details.  * several of your questions revolve around growth; I think EA's emphasis on growth has been toxic and needs a complete overhaul before EA is allowed to gather data again.  * I especially think CEA's emphasis on Highly Engaged people is a warped frame that causes a lot of invisible damage. My reasoning is pretty similar to Theo's here. * I don't believe EA knows what to do with the people it recruits, and should stop worrying about recruiti

I definitely agree that funding is a significant factor for some institutional actors.

For example, RP's Surveys and Data Analysis team has a significant amount of research that we would like to publish if we had capacity / could afford to do so: our capacity is entirely bottlenecked on funding and, as we are ~entirely reliant on paid commissions (we don't receive any grants for general support), time spent publishing reports is basically just pro bono, adding to our funding deficit.

Examples of this sort of unpublished research include:

  • The two reports mention
... (read more)
4
Elizabeth
5mo
"We want to publish but can't because the time isn't paid for" seems like a big loss[1], and a potentially fixable one. Can I ask what you guys have considered for fixing it? This seems to me like an unusually attractive opportunity for crowdfunding or medium donors, because it's a crisply defined chunk of work with clear outcomes. But I imagine you guys have already put some thought into how to get this paid for.  1. ^ To be totally honest, I have qualms about the specific projects you mention, they seem centered on social reality not objective reality. But I value a lot of RP's other work,  think social reality investigations can be helpful in moderation, and my qualms about these questions aren't enough to override the general principle. 

Agreed. As we note in footnote 2:

There are many reasons why respondents may erroneously claim knowledge of something. But simply put, one reason is that people like demonstrating their knowledge, and may err on the side of claiming to have heard of something even if they are not sure. Moreover, if the component words that make up a term are familiar, then the respondent may either mistakenly believe they have already encountered the term, or think it is sufficient that they believe they can reasonably infer what the term means from its component parts

... (read more)

The full process is described in our earlier post, and included a variety of other checks as well. 

But, in brief, the "stringent" and "permissive" criteria refer to respondents' open comment explanations of what they understand "effective altruism" to means, and whether they either displayed clear familiarity with effective altruism, such that it would be very unlikely someone would give that response if they were not genuinely familiar with effective altruism (e.g. by referring to using evidence and reason to maximise the amount of good done with you... (read more)

Thanks for the reply!

I think there are also important advantages to internal impact evaluation: the results are more likely to be bought into internally, and important context or nuance is less likely to be missed.

I agree there are some advantages to internal evaluation. But I think "the results are more likely to be bought into internally" is, in many cases, only an advantage insofar as orgs are erroneously more likely to trust their own work than external independent work. 

That said, I agree that the importance of orgs bringing important context a... (read more)

2
callum
5mo
I actually think it could be quite reasonable for an org to trust or place more weight on an internal evaluation than an external one, but apart from that I fully agree with all you say!

Thanks for writing this!

For me, the value of independent impact evaluation seems particularly clear, though I would agree that orgs doing it in-house is still usually better than nothing.

You mention difficulty, orgs being busy, and orgs having strong priors as possible reasons for the lack of impact evaluation. I'd speculate that financial cost is perhaps the largest factor. Orgs that want to have an impact evaluation, but can't afford the time cost, could readily commission external evaluations (were finance no issue).[1] RP's Surveys and Data Analys... (read more)

8
James Herbert
5mo
Just chiming in to say that for EA Netherlands, financial cost is definitely a big factor. Another factor is that for a long time, we didn't have a sufficiently established programme to evaluate. A third is that, until recently, we didn't know anything about M&E other than 'we should do an impact evaluation!'.  Fortunately, the second and third factors are beginning to change, so hopefully we'll actually be able to commission something soon.  However, realistically, we'd only have a few thousand to spend, and I don't know how much expertise that would get us. So, if there's anyone in this thread who thinks they can help us given our low budget, please do reach out! 
2
callum
5mo
Thanks David! I agree independence has advantages. OTOH I think there are also important advantages to internal impact evaluation: the results are more likely to be bought into internally, and important context or nuance is less likely to be missed. For making a theory of change specifically, I think it's quite important this is done internally, usually. Overall I think the ideal setup would quite often be for organisations to have their own internal impact evaluation function.

And that's interesting on funder interest. In a few cases, organisations I've spoken to have been able to get specific grants for impact evaluation. But also orgs might choose to reallocate their existing budget, without needing additional funding, if they consider impact evaluation an essential function. E.g. for a fixed budget, they might decide that they should be allocating at least 5-10% to impact evaluation. (But I guess this might be harder if it required pulling back on existing activities.) I kind of see this as more at the org level than the funder level tbh, similarly to any other spending decision facing an organisation. Perhaps because I'm thinking most about the benefits to orgs themselves. But I definitely still agree that funder interest is a big driver.

I agree that "regulation" may be easier to advocate for than a "pause". Our results (see below) are in that direction, though the difference is perhaps not so big as one might imagine (and less stark than the "NIMBY" case), though I would expect this to depend on the details of the regulation and the pause and their presentation.

  1. Pause on AI Research. Support for a pause on AI research outstrips opposition. We estimate that 51% of the population would support, 25% would oppose, 20% remain neutral, and 4% don’t know (compared to 58-61% support and 19-23
... (read more)

Many thanks for collating this data!

I believe the community should assign responsibility to, and funding for, one or more people or organizations to conduct and disseminate this sort of high-level analysis of community growth metrics. I honestly find it baffling that measuring the growth of EA and reporting findings back to the community isn’t someone’s explicit job…

It seems obvious to me that numerous stakeholders-- including organization leaders, donors of all sizes, group leaders, and entrepreneurs-- would all benefit from having an accurate understandi

... (read more)
8
AnonymousEAForumAccount
5mo
I wholeheartedly endorse a more rigorous analysis of the data, I just didn't have the capacity or capability to undertake that. The EA Funds dashboard allows users to export the raw data. The CEA dashboard doesn't, though I assume CEA could provide the raw data without too much hassle (and hopefully they'll add an export functionality).

Do you know roughly when we can expect these results [from the followup RP survey] to be published?

 

I'm hopeful that we can launch the survey sometime between the end of this month and the middle of November (the next 2-4 weeks). Like with Ben's report, we're just waiting on input from a variety of different orgs (the survey is addressing multiple different aims, besides looking at community health/FTX, so there are quite a few different stakeholders). Allowing another 2-4 weeks for the survey to run (taking us up to early-mid December), I would still aim to report on the FTX/community health results before the end of the year.

2
AnonymousEAForumAccount
1mo
Hi David! Any update on what (if anything) is going on with this survey and sharing its results? Was this part of the survey that was conducted in late December?
2
AnonymousEAForumAccount
5mo
Luke Freeman from GWWC has shared some excellent observations around reduced willingness for community members to advocate for GWWC post-FTX (partially but not solely due to FTX). I think it would be quite valuable if the follow-up survey examined the dynamics he describes, as I doubt they are unique to GWWC.
4
AnonymousEAForumAccount
5mo
Thanks for the update David! These results should be very interesting.

From some more polls I did, it seems like after documentaries, speeches are the single form of activism that gets most people to go vegan... (This is also – to a degree – backed up by larger and more professional polls like this one.)

 

You might find our recently published survey about what prompted vegetarians/vegans to go vegetarian/vegan relevant. We argue in the post that previous surveys often suffered from two problems: 

  • Highly unrepresentative samples (e.g. people who are active in an online group about veganism may be very different to the w
... (read more)
3
PreciousPig
5mo
Thanks, very interesting indeed!

This is the kind of scenario where something that would typically be welfare maximising (and right according to commonsense morality) is actually not welfare maximising and is wrong according to commonsense morality. That is: typically, people who are greatly in need of pain medication are the people who would benefit most from pain medication; typically, you shouldn't give strong pain medication to people with no medical need of it; typically, there are flow-through effects to consider like addiction, upholding norms, social relations and moral character (becaus... (read more)

1
alexherwix
6mo
I agree with your general thrust. The thought experiment is a little bit contrived but deliberately designed to make both options look somewhat plausible. A value monist negative utilitarian could also give the medicine to Alice, so it's not even clear what option one would go for.   However, what I really wonder though is if "welfare" is the only thing we care about at the end of times? Or is there maybe also the question of how we got there? How we handled ourselves in difficult situations? What values we embodied when we were alive? Are we not at risk of losing our humanity if we subordinate all of our behavior to a "principled" but "acontextual" value monist algorithm (e.g., always maximize "expected welfare")? These are the kind of questions that I want to trigger reflection about with the thought experiment.

But there are also polls showing that almost half of U.S. adults "support a ban on factory farming." I think the correct takeaway from those polls is that there's a gap between vaguely agreeing with an idea when asked vs. actually supporting specific, meaningful policies in a proactive way.

 

I broadly agree with the conclusion as stated. But I think there are at least a couple of important asymmetries between the factory farming question and the AI question, which mean that we shouldn't expect there to be a gap of a similar magnitude between stated pub... (read more)

Investigating the effects of talking (or not talking) about climate change in different EA/longtermist contexts seems to be neglected (e.g. through surveys/experiments and/or focus groups), despite being tractable with few resources. 

It seems like we don't actually know either the direction or magnitude of the effect, and are mostly guessing, despite the stakes potentially being quite high (considered across EA/longtermist outreach).

We could potentially survey the EA community on this later this year. Please feel free to reach out if you have specific requests/suggestions for the formulation of the question.

I worry this heuristic works if and only if people have reasonable substantive views about what kind of thing they want to see more/less of on the Forum. 

For example, if people vote in accordance with the view 'I want to see more/less [things I like/dislike or agree/disagree with]', then this heuristic functions just the same as a like/dislike or agree/disagree vote (which I think would be bad). If people vote in accordance with the view 'I want to see more/less [posts which make substantive contributions, which others may benefit from, even if I strongly disagree with them/don't think they are well made]', then the heuristic functions much more like Matt's.

I think Monmouth's question is not exactly about whether the public believe AI to be an existential threat. They asked:
"How worried are you that machines with artificial intelligence could eventually pose a
threat to the existence of the human race – very, somewhat, not too, or not at all worried?" The 55% you cite is those who said they were "Very worried" or "somewhat worried."

Like the earlier YouGov poll, this conflates an affective question (how worried are you) with a cognitive question (what do you believe will happen). That's why we deliberately spli... (read more)

Still, it’s hard to see how tweaking EA can lead to a product that we and others be excited about growing. 

 

It's not clear to me how far this is the case. 

  • Re. the EA community: evidence from our community survey, run with CEA, suggests a relatively limited reduction in morale post-FTX. 
  • Re. non-EA audiences, our work reported here and here (though still unpublished due to lack of capacity) suggests relatively low negative effects in the broader population (including among elite US students specifically).

I agree that:

  • Selection bias (from E
... (read more)
7
AnonymousEAForumAccount
5mo
FYI, I’ve just released a post which offers significantly more empirical data on how FTX has impacted EA. FTX’s collapse seems to mark a clear and sizable deterioration across a variety of different EA metrics.

I agree: the most striking part of this article was that this core assumption had no numerical data to back it up, only his own discussions with high-level EAs.

"Due to the reputational collapse of EA"

High-level EAs are more likely to have had closer involvement with SBF/FTX and therefore more likely to have higher levels of reputational loss than the average EA, or even the movement as a whole. I would confidently guess that the "200-800" EAs who lost big on FTX would skew heavily towards the top of the leadership structure.

The three studies cited here in the... (read more)

Thanks for this. I found the uncited claims about EA's "reputational collapse" in the OP quite frustrating and appreciated this more data-driven response.

There are people who I would consider "EA" who I wouldn't consider a "community member" (e.g. if they were not engaging much with other people in the community professionally or socially), but I'd be surprised if they label themselves "EA" (maybe they want to keep their identity small, or don't like being associated with the EA community). 

 

Fwiw, I am broadly an example of this category, which is partly why I raised the example: I strongly believe in EA and engage in EA work, but mostly don't interact with EAs outside professional contexts. So I ... (read more)

Thanks!

For instance, I personally found it surprising how few people disbelieve AI being a major risk (only 23% disbelieve it being an extinction level risk)

Just to clarify, we don't find in this study that only 23% of people disbelieve AI is an extinction risk. This study shows that, of those who disagreed with the CAIS statement, 23% explained this in terms of AI not causing extinction.

So, on the one hand, this is a percentage of a smaller group (only 26% of people disagreed with the CAIS statement in our previous survey), not everyone. On the other h... (read more)

Many people I would consider "EA" in the sense that they work on high impact causes, socially engage with other community members etc. don't consider themselves EA, but would, I think, likely consider themselves community members

 

This is reasonable, but I think the opposite applies as well: people can be EA (committed to the philosophy, taking EA actions) but not a member of the community. Personally, this seems a little more natural than the reverse, but YMMV (I have never really felt the intuitive appeal of believing in EA and engaging in EA activities but not describing oneself as "an EA"). 

7
Vaidehi Agarwalla
7mo
There are people who I would consider "EA" who I wouldn't consider a "community member" (e.g. if they were not engaging much with other people in the community professionally or socially), but I'd be surprised if they label themselves "EA" (maybe they want to keep their identity small, or don't like being associated with the EA community).  I think there's actually one class of people I've forgotten - which is "EA professionals" - someone who might professionally collaborate or even work at an EA-aligned organization, but doesn't see themselves as part of the community. So they would treat an EAG as a purely professional conference (vs. a community event). 

Thanks!
 

it might be easiest if you share a draft of the planned questions so that people can see what is already in there and what seems in scope to include.

Makes sense. We're trying to elicit another round of suggestions here first (since people may well have new requests since the original announcement).

Thanks! 

We've spoken to a few different orgs/researchers about animal welfare requests, but would welcome more.

Thanks for asking. We just re-announced it! 

It was originally going to be supported by the FTX Future Fund and was therefore delayed while we sought alternative funding. We have now acquired alternative funding for this project for one year. However, the project will now be running on a quarterly basis, rather than monthly, to make the most efficient use of limited funds.

I'd be very curious to see predictions (ideally backed up with bets) from people on different sides of this debate as to how widespread animal product consumption would be 1 year, 5 years or 10 years after plant-based meat reaching PTC-parity (suitably operationalized). Perhaps a survey of experts might facilitate this? Prediction markets would also be relevant.

This seems like it would cut through some of the non-action-relevant meta-debate about whether people previously believed that PTC were merely necessary or sufficient.

(Jacob and I both work for Rethink Priorities, but this was written in a private capacity.)

2
Jacob_Peacock
7mo
Agree, forecasts would be great and I'd work on this if I end up spending more time on the future prospects of PBM!

It’s hard to take these responses too literally since the median response for chicken was 1,000 and the average American consumes over 1,000 chickens per lifetime.

Personally, I think the largest part of the explanation for this is what we say here:

For example, we anticipate that participants would likely give different responses were questions posed not in terms of the moral value of different species in the abstract, but in terms of concrete trade-offs, e.g., whether to save 1 human life or x animals. We would anticipate that this would likely lead to lower

... (read more)

Thanks for the comment! 

I think more research into whether public attitudes towards AI might be influenced by the composition of the messengers would be interesting. It would be relatively straightforward to run an experiment assessing whether people's attitudes differ in response to different messengers.

That said, the hypothesis (that AI risk communication being largely done by nerdy white men influences attitudes via public perceptions of whether 'people like me' are concerned about AI) seems to conflict with the available evidence. Both our previous... (read more)

4
SiebeRozendal
7mo
Good point, thanks David

Thanks for the comment!

  1. This is probably largely explained by a difference between the questions in 2020 and 2022. In 2022, the question specified "within the last 12 months", but the 2020 question was not time limited. I've updated the post to make this clearer. 
  2. This was just one of quite a large number of questions which got cut this year to reduce the length. We might add it back into future surveys, but we'd likely have to cut some other question(s) to make room for it.
2
EdoArad
8mo
Thanks, that makes sense :)

Thanks for your question Jessica.
 

There are no significant differences between the racial categories (unsurprising given the small sample sizes). 

 

2
Jessica Wen
8mo
Thanks for sharing these data. The y-axes aren't shown so it's difficult to compare these plots, but I think it's interesting that the distribution of community satisfaction for Black or African American people in the EA community seems to stand out in comparison to other groups. The small sample size definitely makes drawing conclusions difficult (maybe apart from showing that EA skews very white for a movement focused on big global issues!)