Vaidehi Agarwalla

Senior Product Manager @ Momentum
6006 karma · Joined Oct 2018 · Working (0-5 years) · Berkeley, CA, USA
vaidehiagarwalla.com

Bio

I help lead Product at Momentum.

I live in the Bay and advise Asia-based community builders. I also run Pineapple Operations. I previously worked in consulting, recruiting, and marketing, and have a BA in Sociology, where I focused on social movement theory and structural functionalism. I've written a little bit about my journey to EA.

/'vɛðehi/ or VEH-they-hee

Some posts I've written and particularly like:

Advice I frequently give:

How others can help me

If you think you have different views from me (on anything!), reach out - I want to hear more from folks who see things differently.

If you have deep domain expertise in a very specific area (EA or not) I'd love to learn about it!

Connect me to product designers, people with ops/recruiting backgrounds and potential PA/ops folks! 

How I can help others

I can give specific feedback on movement building & meta EA project plans and career advising. 

I can also give feedback on posts and grant applications. 

Posts (61)

Sequences (6)

Operations in EA FAQs
Events in EA: Learnings & Critiques
EA Career Advice on Management Consulting
Exploratory Careers Landscape Survey 2020
Local Career Advice Network
Towards A Sociological Model of EA Movement Building

Comments (570)

Topic Contributions (54)

Reflecting on the question of CEA's mandate, I think it's challenging that CEA has always tried to be both of the following, and this has not worked out well:

1) a community org

2) a talent recruitment org

When you're 1) you need to think about the individual's journey in the movement. You invest in things like community health and universal groups support. It's important to have strong lines of communication and accountability to the community members you serve, and to think about how to help address the issues individuals face along that journey. (Think your local Y, community center, or church.)

When you're 2) you care about finding and supporting only the top talent (and by extension actors that aid you in this mission). You care about having a healthy funnel of individuals who are at the top of their game. You care about fostering an environment that is attractive (potentially elite), prestigious, and high status. (Think Y Combinator, Fulbright, or Emergent Ventures Fellows.)

I think these goals are often overlapping and self-reinforcing, but also at odds with each other. 

It is really hard to thread that needle well - it requires a lot of nuanced, high-fidelity communication, which in turn requires a lot of capacity (something historically in short supply in this movement).

I don't think this is a novel observation, but I can't remember seeing it explicitly stated in conversation recently.

Appreciate this update. I'd love to know more about what data you're drawing on in Section 3 for your lessons learnt, e.g.

  • How many people total did you draw these insights from, and what was the spread of cause areas & geographies?
  • Did you do research outside of what knowledge you gained through career advising / interviews (e.g. did you run surveys)?

Agreed, and does e.g. a PhD program / grad school count towards years of experience?

Answer by Vaidehi Agarwalla · Jun 06, 2023

This sounds great! I love the idea of experimenting with new formats.

I like the idea of judging the debate on different goals - e.g. how truth-seeking the person is trying to be, how well the evidence is presented, etc. - rather than on who is right.

(Also potentially bringing back some old ones e.g. poster presentations?)

Debate topics:

  • Critiques of GiveWell methodology (e.g. WELLBYs, etc.)
  • Different AI research agendas (e.g. prosaic vs non-prosaic alignment)
  • Economic development vs RCTs (pull in non-EAs here)
  • Maybe the case for GiveDirectly-type interventions (many EAs give to them despite GiveWell trying to find 10x more effective interventions)
  • Maybe debates on more fringe EA causes and the case for and against them?
  • Person-affecting views
  • Moral offsetting
  • When to explore vs exploit in your career (maybe focused on uni students)

This post reminded me of a related point:

The way many people find out about EA is through reading (books & articles), but the community is not built to work (well) for people through those mediums. 

Much valuable networking (which opens up opportunities, strategic clarity, support, friendship) happens in person at conferences or by living in the right cities, and I think there's an assumption that this is just clearly better. 

It feels like there isn't much concerted effort from people who benefit from the in-person networking (and I've done this too in the past!) to adapt to that reality (covid helped somewhat, but things feel like they've mostly bounced back, in part aided by the FTX bubble in 2022).

  • Social status gradients within EA pushing people toward the highest-regarded causes, like AI safety.[1]


I think this is relatively underdiscussed given its importance. I previously wrote about the availability bias in EA job hunting and have anecdotally seen many examples of this, both in terms of social pressures and norms, but also just the difficulty of forging your own path vs sticking to the "defaults". It's simply easier to try and go for EA opportunities where you have existing networks, and there are additionally several monetary, status, & social rewards for pursuing these careers.

I think it's sometimes hard for people to decouple these when making career decisions (e.g. did you take the job because it's your best option, or because it's a stable job which people think is high status) 

Caveats before I begin: 

  1. I think it's really good for people who need to (e.g. those from low SES backgrounds) to take financial security into consideration when making important career decisions. But I think this community also has a lot of privileged people who could afford to be a little more risk-taking. 
  2. I don't think it's bad that these programs and resources exist - I'm excited that they exist. But we need to acknowledge how they affect the EA ecosystem. I expect the top pushback will be the standard one, which is that if you have very short timelines, other considerations simply don't matter if you do an EV calculation. 
  3. I think that people should take more ownership of exploring other paths and trying difficult things than they currently do, but I also think it's important to consider the ecosystem impacts and how it can create lock-in effects on certain causes. 
  4. These projects exist for a reason - the longtermist space is less funding constrained than the non-longtermist one, it's a newer field, and so many of the opportunities available are field building ones.  

Here are some concrete examples of the upskilling opportunities & incentives that have been present in the longtermist (more specifically x-risk and AIS) space over the last 12-24 months, with comparisons of some other options and how they stack up:

(written quickly off the top of my head; I expect some specific examples may be wrong in details or exact scope. If you can think of counter-examples, please let me know!)

  • Career advising resources:  
    • 80K has been the key career resource for over 10 years, and they have primarily invested resources in expanding their LT career profiles, resources & advice (without a robust alternative existing for several years).
    • 80K made a call to get others interested in various aspects of career advising they are not covering, and posted about it in 2020, 2021, and 2022, but (as far as I can tell) with limited traction.
      • There are some other career advising options - Animal Advocacy Careers and Probably Good - but they are at early stages and still ramping up (even in 2023). 
  • Career funding / upskilling opportunities: 
    • There are the Century Fellowship & early career funding for AI / bio, and Horizon for longtermist policy careers (there is nothing similar for any other cause AFAIK). These are 1-2 year long open-ended funding opportunities. (There is the Charity Entrepreneurship incubator, which mostly funds neartermist and meta orgs and accepts about 20 applicants per round (historically one round per year; from 2023 there will be 2 rounds per year).)
    • When Future Fund was running (and LTFF has also done this), there were several opportunities for people interested in AI safety (possibly other LT causes too; my guess is the bulk was AIS) to visit the Bay for the summer, do career transition grants, and so on (there was no equivalent for other causes). 
    • Since 2021, we now have multiple programs to skill up in AI and other X-risks (AGISF & biosecurity program from BlueDot, SERI MATS, various other ERIx summer internships). (somewhat similar programs with fewer resources are the alt proteins fellowship from BlueDot, a China-based & South East Asia-based farmed animal fellowship in 2022, and AAC's programming)
    • There are paid general LT intro programs like the Global Challenge Project retreats, Atlas Fellowship, and Nontrivial. (There is the Intro to VP program, community retreats organized by some local groups, & LEAF, which have less funding / monetary compensation.)
    • There are now several dedicated AIS centers at various universities (SERI @ Stanford, HAIST @ Harvard, CBAI @ Harvard / MIT) and a few X-risk focused ones (ERA @ Cambridge (?), CHERI in Switzerland). As far as I know, there are no such centers for other causes (or even for non-AI x-risk causes). These centers are new, but can provide better quality advice, resources, and guidance for pursuing these career paths over others. 
  • Networking: This seems roughly equal. 
    • The SERI conference has run since 2021 (there is EA Global and several EAGx's per year, but no dedicated opportunities for other causes).
  • Funding for new community projects
    • The bulk (90%) of EA movement building funding from OP comes from the longtermist team, and most university EA group funding is from the longtermist team. I'd love to know more about how those groups and projects are evaluated and how much funding ends up going to more principles-first community building, as opposed to cause-specific work.
    • Most of OP's neartermist granting has gone towards effective giving (because it has the highest ROI)
    • There are even incentives for infrastructure providers (e.g. Good Impressions, cFactual, EV, Rethink etc.) to primarily support the longtermist ecosystem as that's where the funding is (There are a few meta orgs supporting the animal space, such as AAC, Good Growth, and 2 CE incubated orgs - Animal Ask & Mission Motor)
  • Career exploration grants: 
    • At various points when Future Fund was running, there were lots of small grants for folks to spend time in the Bay (link), do career exploration, etc. The LTFF has also given x-risk grants that are somewhat similar (as far as I know, the EAIF or others have not given more generic career exploration grants, or grants for other causes).

Written quickly, prioritizing sharing information over polish. Feel free to ask clarifying qs!

Have been considering this framing for some time, and have quite a lot of thoughts. Will try to comment more soon.

Very rough thoughts are that I don't /quite/ agree with all the examples in your table, and this changes how I define the difference between the two approaches. So e.g. I don't quite think the difference you are describing is people vs cause; it's more principles vs cause.

Then there is a different distinction that I don't think your post really covers (or maybe it does, but not directly?), which is the difference between seeing your (a community builder's) obligation as improving the existing community vs finding more talented / top people.

Arjun and I wrote something on this: https://forum.effectivealtruism.org/posts/PbtXD76m7axMd6QST/the-funnel-or-the-individual-two-approaches-to-understanding

Funnel model = treat people in accordance with how much they contribute (kind of cause first)

Individual model = treat people with respect to how they are interacting with the principles and what stage they are at in their own journey (kind of people first)

(and even if you know many people, you can still feel that way!)
