If you somehow could convince a research group, not selected for caring a lot about animals, to pursue this question in isolation, I'd predict they'd end up with far less animal-friendly results.
I think this is a possible outcome, but not guaranteed. Most people have been heavily socialized not to care about most animals, whether through active disdain or more mundane cognitive dissonance. Being "forced" to really think about other animals and consider their moral weight may swing researchers who are baseline "animal neutral" or even "anti-animal" more than you'd think. Adjacent evidence might be the history of animal farmers or slaughterhouse workers becoming convinced that killing animals is wrong through direct engagement with the practice.
I also want to note that most people would be less surprised if a heavy moral weight is assigned to the species humans are encouraged to form the closest relationships with (dogs, cats). Our baseline discounting of most species is often born from not having relationships with them, not intuitively understanding how they operate because we don't have those relationships, and/or objectifying them as products. If we lived in a society where beloved companion chickens and carp were the norm, the median moral weight intuition would likely be dramatically different.
Thanks for raising this, Thomas! I agree impact is the goal, rather than community for community's sake. This particular Forum post was intended to focus on the community as a whole and its size, activity, and vibe, rather than on EA NYC as an organization. We plan to discuss EA NYC's mission, strategy, and (object-level) achievements more in a future post. There's a lot to say on that front and I don't think I'll do it justice here in a comment. If there are certain details you'd find especially interesting or useful to see in a future post about EA NYC, we'd love to know!
While I think I disagree pretty strongly with the idea that CEA CH should be disbanded, I would like to see an updated post from the team on what the community should and should not expect from them, with the caveat that they may be somewhat limited in what they can legally say about their scope.
Correct me if I'm wrong, but I believe CEA was operating without in-house legal counsel until about a year ago. This was while engaging in many situations that could easily have led to a defamation suit had they investigated someone sufficiently resourced and litigious. I think it makes sense that their risk tolerance will have shifted while EVF is under Charity Commission investigation post-FTX and with the hiring of attorneys who are making risk assessments and recommendations across programs.
The issue for me is less "are they doing everything I'd like them to do" and more "does the community have appropriate expectations for them," which is in keeping with the general idea that EA projects should make their scopes transparent.
EA NYC is soliciting applications for Board Members! We especially welcome applications submitted by Sunday, September 24, 2023, but rolling applications will also be considered. This is a volunteer position, but crucial in both shaping the strategy of EA NYC and ensuring our sustainability and compliance as an organization. If you have questions, Jacob Eliosoff is the primary point of contact. I think this is a great opportunity for deeper involvement and impact for people from a range of backgrounds!
These comments are helpful but I'm still having a difficult time zeroing in on a guiding heuristic here. And I feel mildly frustrated by the counterexamples in the same way I do reading "well, they were always nice to me" comments on a post about a bad actor who deeply harmed someone or hearing someone who routinely drives drunk say "well, I've never caused an accident." I think most (but not all) of my list falls into a category something like "fine or tolerable 9 times out of 10, but really bad, messy, or harmful that other 1 time such that it may make those other 9 times less/not worth it." I'm not sure of the actual probabilities and they definitely vary by bullet point.
In your case in particular, I'll note that a good chunk of your examples either directly involve Julia or involve you (the spouse of Julia, who I assume had Julia as a sounding board). This seems like a rare situation of being particularly well-positioned to deal with complicated situations. Arguably, if anyone can navigate complicated or risky situations well, it will be a community health professional. I'd assume something like 95% of people will be worse at handling a situation that goes south, and maybe >25% of people will be distinctly bad at it. So what norms should be established that factor in this potential? And how do we universalize in a way that makes the risk clear, promotes open risk analysis, and prevents situations that will get really bad should they get bad?
Jeff Sebo on his work launching multiple EA-aligned programs at NYU that advance and legitimize "fringe" causes, including the Wild Animal Welfare Program and the Mind, Ethics, and Policy Program. Jeff is also a great presenter, and many students describe him as their favorite professor.
Thanks! I’m happy to expound.
I’ve tried categorizing the public attendee list by their area of meta EA work. There are many different ways to categorize and this is just one version I put together quickly. It looks something like:
While the people listed make critical decisions regarding resource allocation, granting, and strategic direction, or provide critical infrastructure, their experience is fundamentally different from that of people directly involved in "on the ground" organizations. Vaidehi writes that "issues pertinent to the community need to have meaningful, two way, sustained engagement with the community." "On the ground" organizations likely do this more than almost any other orgs in the EA ecosystem.
I think the perspective of the wider breadth of “on the ground” community leaders is important, but I’ll speak to regional EA organizations, as that’s what I know best:
Before the FTX collapse, there was a heavy emphasis on making community building a long-term and sustainable career path. As a result, there are now dozens of people working professionally and often full-time on meta EA regional organizations (MEAROs).[1] By and large, we are a team of sorts: we’re in regular communication with each other, we have a shared and evolving sense of what MEAROs are and can be, and our strategic approaches intertwine and are mutually reinforcing. We essentially function as extended colleagues in a niche profession that feels very distinct to me from even other “on the ground” meta-EA community building (such as professional or uni groups). I don’t think anyone on the attendee list has run a MEARO, and certainly not in 2023.
There is a distinct zeitgeist among MEAROs. Consistently, I’ve been amazed how MEARO leaders seem to independently land on the same conclusions and strategic directions as our peers across the globe, “multiple discovery” if you will. This zeitgeist is not captured in larger EA discourse, from the Forum to conversations I have with non-MEARO community leaders. And this MEARO zeitgeist is evolving rapidly, such that it looks very different from even four months ago. As a result, I don’t think anyone who hasn’t been intimately involved in MEAROs in the past 3-6 months can represent our general shared perspective.
This shared perspective is born out of three main ingredients:
I think segments of #1 and #3 are captured by some of the publicly listed attendees, and I imagine the attendees have an equally good or even substantially better experience of #2, but it is the unique perspective that the combination of the three enables that I’m referencing.
At an event focused on meta coordination, it seems really important to have the perspective of those engaging constantly and deeply with “the EA masses,” immersed in regional strategy, and among the best able to shape the future of EA perception as the on-the-ground representatives of EA to thousands of people worldwide.
I talked this through with @James Herbert a bit and we discussed three possible cruxes here:
Yes, I totally just coined this acronym.
Chiming in just to second James. There are dozens of us operating large regional meta EA organizations and I don't see anyone representative of that perspective on the public list. I think it would be extremely valuable to have at least one leader from the CBG organizations present, ideally nominated by other CBGs such that they could represent our collective "on the ground" perspective. I'm happy to write a full list of why I think this perspective is valuable and not covered by the (also very valuable) perspectives in the public attendee list, if that would be useful.
I agree, and I point to that more as evidence that even in environments likely to foster a moral disconnect (in contrast to researchers steeped in moral analysis), increased concern for animals is still a common enough outcome to be an observable phenomenon.
(I'm not sure if there's good data on how likely working on an animal farm or in a slaughterhouse is to convince you that killing animals is bad. I would be interested in research that shows how these experiences reshape people's views and I would expect increased cognitive dissonance and emotional detachment to be a more common outcome.)