I help lead Product at Momentum. I live in the Bay and advise Asia-based community builders, and I also run Pineapple Operations. I previously worked in consulting, recruiting, and marketing, and have a BA in Sociology with a focus on social movement theory and structural functionalism. I've written a little about my journey to EA.
/'vɛðehi/ or VEH-they-hee
Some posts I've written and particularly like:
Advice I frequently give:
If you think you have different views from me (on anything!), reach out - I want to hear more from folks who see things differently.
If you have deep domain expertise in a very specific area (EA or not) I'd love to learn about it!
Connect me to product designers, people with ops/recruiting backgrounds and potential PA/ops folks!
I can give specific feedback on movement building & meta EA project plans and career advising.
I can also give feedback on posts and grant applications.
Reflecting on the question of CEA's mandate, I think it's challenging that CEA has always tried to be both of the following, and this has not worked out well:
1) a community org
2) a talent recruitment org
When you're 1), you need to think about the individual's journey in the movement. You invest in things like community health and universal groups support. It's important to have strong lines of communication and accountability to the community members you serve. You think about the issues individuals face and how to help address them. (Think your local Y, community center, or church.)
When you're 2), you care about finding and supporting only the top talent (and, by extension, actors that aid you in this mission). You care about having a healthy funnel of individuals who are at the top of their game. You care about fostering an environment that is attractive (potentially elite), prestigious, and high status. (Think Y Combinator, Fulbright, or Emergent Ventures fellows.)
I think these goals are often overlapping and self-reinforcing, but also at odds with each other.
It is really hard to thread that needle well - it requires a lot of nuanced, high-fidelity communication, which in turn requires a lot of capacity (something historically in short supply in this movement).
I don't think this is a novel observation, but I can't remember seeing it explicitly stated in conversation recently.
Appreciate this update. I'd love to know more about what data you're drawing on in Section 3 for your lessons learned, e.g.
This sounds great! I love the idea of experimenting with new formats.
I like the idea of judging the debate on different goals, e.g. how truth-seeking the person is trying to be and how well the evidence is presented, rather than on being right.
(Also potentially bringing back some old ones e.g. poster presentations?)
Debate topics:
- Critiques of GiveWell methodology (e.g. WELLBYs)
- Different AI research agendas (e.g. prosaic vs non-prosaic alignment)
- Economic development vs RCTs (pull in non-EAs here)
- Maybe the case for GiveDirectly-type interventions (many EAs give to them despite GW trying to find 10x more effective interventions)
- Maybe debates on more fringe EA causes and the case for and against them?
- Person-affecting views
- Moral offsetting
- When to explore vs exploit in your career (maybe focused on uni students)
This post reminded me of a related point:
The way many people find out about EA is through reading (books & articles), but the community is not built to work (well) for people who engage through those mediums.
Much valuable networking (which opens up opportunities, strategic clarity, support, friendship) happens in person at conferences or by living in the right cities, and I think there's an assumption that this is just clearly better.
It feels like there isn't much concerted effort from people who benefit from the in-person mode (and I've done this too in the past!) to adapt to that reality (covid helped somewhat, but things feel like they've mostly bounced back, in part aided by the FTX bubble in 2022).
- Social status gradients within EA pushing people toward the highest-regarded causes, like AI safety.[1]
I think this is relatively underdiscussed / important. I previously wrote about the availability bias in EA job hunting and have anecdotally seen many examples of this, both in terms of social pressures and norms, and in the difficulty of forging your own path vs sticking to the "defaults". It's simply easier to go for EA opportunities where you have existing networks, and there are also several monetary, status, and social rewards for pursuing these careers.
I think it's sometimes hard for people to decouple these when making career decisions (e.g. did you take the job because it's your best option, or because it's a stable job which people think is high status?).
Caveats before I begin:
Here are some concrete examples of how the presence of upskilling opportunities & incentives in EA (more specifically, the x-risk and AIS space) has grown in the last 12-24 months, with comparisons of some other options and how they stack up:
(Written quickly off the top of my head; I expect some specific examples may be wrong in details or exact scope. If you can think of counter-examples, please let me know!)
Written quickly, prioritizing sharing information over polish. Feel free to ask clarifying qs!
Have been considering this framing for some time, and have quite a lot of thoughts. Will try to comment more soon.
Very rough thoughts are that I don't /quite/ agree with all the examples in your table, and this changes how I define the difference between the two approaches. So e.g. I don't quite think the difference you are describing is people vs cause; it's more principles vs cause.
Then there is a different distinction that I don't think your post really covers (or maybe it does, but not directly?), which is the difference between seeing a community builder's obligation as improving the existing community vs finding more talented / top people.
Arjun and I wrote something on this: https://forum.effectivealtruism.org/posts/PbtXD76m7axMd6QST/the-funnel-or-the-individual-two-approaches-to-understanding
Funnel model = treat people in accordance with how much they contribute (kind of cause first)
Individual model = treat people with respect to how they are interacting with the principles and what stage they are at in their own journey (kind of people first)
Thank you for this - really appreciate the specificity :)