Scattered first impressions:
Funding a very promising biology PhD student to attend a one-month program run by a prestigious US think tank to better understand how the intelligence community monitors various kinds of risk, such as biological threats ($6,000)
Maybe Lightspeed? But I worry there isn't currently other coverage for funding needs of this sort.
Thanks for asking this and clearly giving the issue thought and care!
In short, the life of a "pasture-raised" hen is still, in my opinion, one of the worst existences imaginable, even in the best of cases. Speaking for the US:
This is just a brief overview. Anecdotally, some of the worst conditions I've seen were on "pasture-raised" farms.
In your shoes, I would consult with a veg-friendly nutritionist to come up with an individualized diet plan that will be sustainable for you, meet your health needs, and align with your ethics.
I find pieces like this frustrating because I don't think EA ever "used to be" one thing. Ten people who previously felt more at home in EA than they currently do will describe ten different things EA "used to be" that it no longer is, often in direct conflict with the other nine's narratives. I'd much prefer people to say, "Here's a pattern I'm noticing, I think it is likely bad for these reasons, and I think it wasn't the case x years ago. I would like to see x treated as a norm."
Thank you for the update and all of the work you're putting into these events. I know you're likely busy with EAG Boston, but a few questions when you have the time:
1. Is the decision to run an east coast EAG in 2024 primarily about cost? And if an east coast EAG does happen in 2024, will it definitely be in Boston, or could it be in DC or a cheaper city?
2. If you had 2x or 3x the budget for EAGs, do you think you would organize a cause-neutral EAG in the Bay Area in addition to a GCR conference? How would more funding affect cause-specific vs. big-tent event planning?
3. Do you envision content focused on digital sentience and s-risks at the GCR conference? I'm personally worried that AI risk and biorisk are reducing the airtime for other risks (nuclear war, volcanoes, etc.), including suffering risks. Likewise, I'd still love to see GCR-oriented content focused on topics like how climate change might accelerate certain GCRs, the effects of GCRs on the global poor, the effects of GCRs on nonhuman animals, etc.
(Also, I hope all EAG events remain fully vegan, regardless of the cause area content!)
I agree, and I point to that more as evidence that, even in environments likely to foster a moral disconnect (in contrast to researchers steeped in moral analysis), increased concern for animals is still a common enough outcome to be an observable phenomenon.
(I'm not sure if there's good data on how likely working on an animal farm or in a slaughterhouse is to convince you that killing animals is bad. I would be interested in research that shows how these experiences reshape people's views and I would expect increased cognitive dissonance and emotional detachment to be a more common outcome.)
If you somehow could convince a research group, not selected for caring a lot about animals, to pursue this question in isolation, I'd predict they'd end up with far less animal-friendly results.
I think this is a possible outcome, but not guaranteed. Most people have been heavily socialized to not care about most animals, either through active disdain or more mundane cognitive dissonance. Being "forced" to really think about other animals and consider their moral weight may swing researchers who are baseline "animal neutral" or even "anti-animal" more than you'd think. Adjacent evidence might be the history of animal farmers or slaughterhouse workers becoming convinced animal killing is wrong through directly engaging in it.
I also want to note that most people would be less surprised if a heavy moral weight is assigned to the species humans are encouraged to form the closest relationships with (dogs, cats). Our baseline discounting of most species is often born from not having relationships with them, not intuitively understanding how they operate, and/or objectifying them as products. If we lived in a society where beloved companion chickens and carp were the norm, the median moral weight intuition would likely be dramatically different.
Thanks for raising this, Thomas! I agree impact is the goal, rather than community for community's sake. This particular Forum post was intended to focus on the community as a whole and its size, activity, and vibe, rather than on EA NYC as an organization. We plan to discuss EA NYC's mission, strategy, and (object-level) achievements more in a future post. There's a lot to say on that front and I don't think I'll do it justice here in a comment. If there are certain details you'd find especially interesting or useful to see in a future post about EA NYC, we'd love to know!
While I think I disagree pretty strongly with the idea that CEA CH should be disbanded, I would like to see an updated post from the team on what the community should and should not expect from them, with the caveat that they may be somewhat limited in what they can legally say about their scope.
Correct me if I'm wrong, but I believe CEA was operating without in-house legal counsel until about a year ago. This was while engaging in many situations that could easily have led to a defamation suit had they investigated someone sufficiently resourced and litigious. I think it makes sense that their risk tolerance will have shifted while EVF is under Charity Commission investigation post-FTX and with the hiring of attorneys who are making risk assessments and recommendations across programs.
The issue for me is less "are they doing everything I'd like them to do" and more "does the community have appropriate expectations for them," which is in keeping with the general idea that EA projects should make their scopes transparent.
EA NYC is soliciting applications for Board Members! We especially welcome applications submitted by Sunday, September 24, 2023, but rolling applications will also be considered. This is a volunteer position, but a crucial one in both shaping the strategy of EA NYC and ensuring our sustainability and compliance as an organization. If you have questions, Jacob Eliosoff is the primary point of contact. I think this is a great opportunity for deepened involvement and impact for people from a range of backgrounds!
Makes sense, thank you! Maybe my follow-up questions would be: How confident would they need to be that they'd use the experience to work on biorisk vs. global health before applying to the LTFF? And if they were, say, 75:25 between the two, would EAIF become the right choice -- or what ratio would bring this grant into EAIF territory?