I often see people talking past each other when discussing x-risks because the definition[1] covers outcomes that are distinct in some worldviews. For some, humanity failing to reach its full potential and humanity going extinct are a single joint concern, but for others they are importantly separate outcomes. Is there a good solution to this?
Fwiw, just to state it publicly: we hope that if EAGxNYC goes well, NYC can serve as the location for an EAG in future years, and I think there are many compelling reasons to make NYC a primary EAG location.
Thank you for this post and the work you're doing. Given the small size and newness of the EA community and of many orgs/projects, I'm personally also very worried about something like "right group of people, great practices, but unforeseen or unpreventable dependencies that create a major risk of collapse should a few key things go wrong in succession." My impression is that in some cases things have been going very well, but the external pressures over the past few months have been so substantial and consistent that even very stable teams are trembling. Sound practices can help mitigate this, but I also want to see more people feeling OK saying, "This is a shit time and we're treading water, but we can tread water until we reach shore because we prioritized healthy infrastructure beforehand."
Thank you for the thorough feedback. Those involved in drafting the statement considered much of what you laid out and created a more substantive, action-specific version before ultimately deciding against it. There were several reasons for this decision, among them: not wanting to commit (often under-resourced) groups to obligations they would currently be unable to fulfill, the varying needs and dynamics of different EA communities, and the time-sensitive nature of getting a statement out. We do not intend for this to be the final word, and there is already discussion about follow-up collaborations. We also chose to use the footnote method in the statement document so that groups can publicly make their own additional commitments now.
I do want to push back on the idea that this statement is vacuous, counterproductive, and/or harmful. We chose to create it because of our collective, global, on-the-ground experiences discussing recent events with the communities we lead. I agree that it ought to be trivial, even meaningless, to declare one's opposition to racism and sexism. But right now, for many following EA discourse, it unfortunately isn't obvious where much of the community stands, and this is having a tangible impact on our communities and our community members' sense of belonging and safety. This statement doesn't solve that on its own. But by putting our shared commitment in plain language, I believe we've laid a pavestone, however small, on the path toward a version of EA where statements like this truly are not needed.
Thank you for the update and insight. A few questions:
1. What can the community expect regarding the renewal of funding for projects previously supported by OP that are now below the new bar? Should we expect a wave of projects to see their funding discontinued?
> OP is also working on a longer-term project to revisit how we should allocate our resources between longtermist and global health and wellbeing funding; it’s possible that longtermist work will end up with more than 50%, which would leave more room to grow.
2. Can you share more about this process and any potential or anticipated effects for global health and wellbeing program areas?
Wow, thank you for writing this. I'm really interested in parasites and their potential suffering generally, and this seems like an area where greater understanding and a reasonable intervention, whether welfare-promoting or population-abolishing, might also have public appeal from a "yuck" perspective.
Thank you all for such a warm response to this post and for your thoughtful comments. I was actually very hesitant to share this publicly and braced myself for flak, so thank you for proving me wrong! And thank you to Megan Nelson, Kyle Lucchese, and Irina Gueorguiev for reading a draft and giving me the encouragement I needed to share ❤️ Sharing personal experiences on the Forum can be scary, but also very gratifying!
Thank you for having such a whimsical username!
I think this was downvoted so heavily because of the title rather than the content. I'm glad you raised this; I personally did not know about it previously. Maybe republish under a title like "OpenAI under fire for underpaying the Kenyan workers who filtered traumatic content from ChatGPT" or something along those lines?
Even granting this line of thinking, I think the growth of insect farming specifically for use as feed for farmed aquatic animals probably tips things in the direction of "bad".