Hey! I'm Edo, married + 2 cats, I live in Tel-Aviv, Israel, and I feel weird writing about myself so I go meta.
I'm a mathematician, I love solving problems and helping people. My LinkedIn profile has some more stuff.
I'm a forum moderator, which mostly means that I care about this forum and about you! So let me know if there's anything I can do to help.
I'm currently working full-time at EA Israel on independent research and project management. Right now I'm mostly evaluating the impact of for-profit tech companies, but I have many projects and this changes rapidly.
Thanks for your perspective, Conor! Looking into these activities in more detail, I have some notes:
only insofar as you think it can be trusted
Note that if you place a high degree of trust, then the correct approach to maximizing direct impact would generally be to delegate a lot more (and, say, focus on the particularities of your specific actions). I think it makes a lot of sense to mostly trust the cause-prioritization enterprise as a whole, but maybe this comes at the expense of people doing less independent thinking, which should address your other comment.
When people say that they want EA to stay weird, they mean that they want people exploring all kinds of crazy cause areas instead of just sticking to the main ones (in tension with your definition of cause-first).
I think this is an important point, and I may be doing a motte-and-bailey here that I don't fully understand. Under what I imagine as a "cause-first" movement strategy, you'd definitely want more people engaging in the cause-prioritization effort. However, I think I've characterized it as more top-down than it needs to be.
Also: one of the central arguments for leaning more towards EA being small and weird is that you end up with a community more driven by principle, because a) slower growth makes it easier for new members to absorb knowledge from more experienced ones, rather than from people who don't really understand the philosophy very well themselves yet; and b) lower expectations for growth make it easier to focus on people with whom the philosophy really resonates, rather than marginally influencing people who aren't that keen on it.
This feels true to me.
(I generally don't feel that happy with my proposed definitions and the categorization in the table, and I hope other people can come up with better distinctions and framings for thinking about EA community strategy.)
I don't quite share your intuition on the two examples you suggest, and I wonder whether that's because our definitions differ or because the categorization really is off/misleading/inaccurate.
For me, your first example shows that the relation to deference doesn't necessarily follow from the choice of overall strategy, but I still expect the two to usually be correlated (unless a strong, deliberate effort is made to change the focus on deference).
And for the second example, I think I view a "member-first" strategy as (gradually) pushing for more cause-neutrality, whereas a "cause-first" strategy is okay with stopping once a person is focused on a high-impact cause.
Any thoughts on the effectiveness of reducing antimicrobial use in factory farming? For example, GFI gave this some attention recently (a GFI blog post, and a corresponding commentary piece in Nature Food), and made the argument that similar problems can (and should) be solved in the cultivated meat industry.