Rockwell

Director @ EA NYC
Working (6-15 years of experience)
1892 karma · Joined Aug 2021
effectivealtruism.nyc

Bio

Participation
5

  • Full-time Director for EA NYC
  • Part-time advisor to farmed animal-focused philanthropists
  • Feel free to contact me by email for any questions or collaboration ideas or book a time in my Calendly.

Comments
53

Topic Contributions
1

Answer by Rockwell · Mar 26, 2023

"Urgency" strikes me as very much the wrong word here. If you want to feel the urgency of e.g. ending animal farming, just watch these numbers tick by for a few seconds. It sounds like you're pretty directly describing x-risk (which has two distinct flavors). Prioritizing x-risk is a reasonable choice but an outcome or step in the process of cause prioritization, rather than a primary input.

Thanks for sharing this, including the text! It was one of my favorite presentations I've seen at an EAG. Also, I really loved the line, "And it increasingly seems he was that most dangerous of things — a naive utilitarian," and I'm waiting for the memes to roll in!

I think my primary hope is for reduced text-wrapping of titles on mobile. One way to achieve that might be an option that shows only titles. I don't have a clear mental image of what the page looked like on mobile before the change; are there screenshot comparisons?

Thank you for the explanation! Fwiw, I personally find the new front page fairly difficult to skim on mobile (though much easier on desktop). I'm not sure which aspect of the change is causing this, but I think the bold post titles (that are also confined to a narrower margin) might be causing more text wrapping. Is it possible to add display options, like the below from Reddit?

Rockwell · 15d

I think even with this line of thinking, the growth of insect farming specifically for use as feed for farmed aquatic animals probably tips this in the direction of "bad".

I often see people talking past each other when discussing x-risks because the definition[1] covers outcomes that are distinct in some worldviews. For some, humanity failing to reach its full potential and humanity going extinct are joint concerns, but for others they are separate outcomes. Is there a good solution to this?

  1. ^

    "An existential risk is one that threatens the premature extinction of Earth-originating intelligent life or the permanent and drastic destruction of its potential for desirable future development." (source)

Rockwell · 1mo

Fwiw, just to state it publicly: we hope that if EAGxNYC goes well, NYC can serve as the location for an EAG in future years, and I think there are many compelling reasons to make NYC a primary EAG location.

Rockwell · 1mo

Thank you for this post and the work you're doing. Given the small size and newness of the EA community and many of its orgs/projects, I'm personally also very worried about something like: "right group of people, great practices, but unforeseen or unpreventable dependencies that create a major risk of collapse should a few key things go wrong in succession." My impression is that in some cases things have been going very well, but the external pressures have been so substantial and consistent over the past few months that even very stable teams are trembling. Sound practices can help mitigate this, but I also want to see more people feeling OK saying, "This is a shit time and we're treading water, but we're able to tread water until we reach shore because we prioritized a healthy infrastructure beforehand."

Rockwell · 2mo

Thank you for the thorough feedback. Those involved in drafting the statement considered much of what you laid out and created a more substantive, action-specific version before ultimately deciding against it. There were several reasons for this decision, among them: not wanting to commit (often under-resourced) groups to obligations they would currently be unable to fulfill, the varying needs and dynamics of different EA communities, and the time-sensitive nature of getting a statement out. We do not intend for this to be the final word, and there is already discussion about follow-up collaborations. We also chose to use the footnote method in the statement document to allow groups to publicly make their own additional individual commitments now.

I do want to push back on the idea that this statement is vacuous, counterproductive, and/or harmful. We chose to create it because of our collective, global, on-the-ground experience discussing recent events with the communities we lead. I agree it should be silly or meaningless to declare one's opposition to racism and sexism. But right now, for many following EA discourse, it unfortunately isn't obvious where much of the community stands, and this is having a tangible impact on our communities and our members' sense of belonging and safety. This statement doesn't solve that. But by putting our shared commitment in plain language, I believe we've laid a pavestone, however small, on the path toward a version of EA where statements like this truly are not needed.
