All of Vaidehi Agarwalla 🔸's Comments + Replies

Changelog: added directorysf (https://www.directorysf.com/) to the list of places to look for housing. It's pretty active, but you will need an invite from an existing user to join.

Some norms I would like to see when folks use LLMs substantively (not copy editing or brainstorming):

  1. explaining the extent of LLM use

  2. explaining the extent of human revision or final oversight

  • ideally I'd love to know section by section. Personally I'd prefer it if authors only included sections they have reviewed and fully endorse, even if this means much shorter posts
  3. not penalizing people who do 1) and 2) for keeping AI-speak or style
  • I think this unfairly discriminates against people who are busy, weak in writing and/or English. I personally dislike it, but don't think it's fair to impose this on others

If this tool could be used by orgs I think it would be super useful. E.g. analyzing the HIP talent directory or inbound applications against a job you're recruiting for. Any chance you could make the prompts/tool available to orgs as well?

2
JP Addison🔸
That seems like the reverse, right? Given candidates, find me the best ones? So you'd want different prompts, though potentially some of the same logic carries over. In any case with the primitive nature of my prompts, and the complexity of the approach, I would probably advise someone to start from scratch. I'm generally a fan of open source though, and I could imagine releasing it.

Thanks for this write-up! It was really insightful. A few questions:

People who apply to found an NGO come with all sorts of motivations.

Could you say more about what motivations they come with? 

As this is a regional program, we couldn’t have a cohort composed entirely of only 4 countries, even though several were outstanding candidates.

Based on my experience working in India, I've seen a lot of benefits of having multiple orgs working in the same geographies at the same time/stage to share resources, advice, talent, etc. Curious what you were limited b... (read more)

1
Verónica Suárez M.
Thank you so much, Vaidehi, for this thoughtful comment and for taking the time to engage.

On motivations: we saw a wide spectrum. Some applicants were driven by very personal experiences, e.g. having lived close to poverty or discrimination themselves, and wanting to “fix” what they endured. Others were motivated by specific issues they’ve worked on professionally (education, environment, public health). A few were drawn by the “founder identity” itself, the idea of building something new and leading a team. Part of our methodology is to surface motivations early and help participants refine them. Even with evidence-based tools, unclear or misaligned motivations can steer an org sideways over time. I’ll write a dedicated post on motivations later, but it’s important to flag certain drivers we need to watch out for, such as resentment, ego, the need for power, feelings of superiority, or even a saviour complex. Unfortunately, these do exist in the sector, and because we work with vulnerable populations, we have to be especially careful, not only founders, but all of us who work on these issues.

On geography and cohort diversity: you’re right, there can be real benefits to multiple orgs in the same geography, especially around resource-sharing and peer support. We didn’t avoid that altogether; in fact, we do have overlaps. Out of the 20 fellows, five are the sole representatives of their country, with one of them currently living in another, more represented country. The constraint was more about balance: we had so many strong candidates from a handful of countries, but since this is the very first program of its kind in the region, we felt it was important to deliberately seed it across more geographies, so that in the future we can create regional clusters while still representing the breadth of Latin America. It’s definitely a trade-off.

On the “good intentions vs. impact” point: thanks for catching that nuance. I didn’t mean to suggest that EA as a whole

Recent example: an op-ed piece on the AI safety pipeline having too many researchers is labelled community, while a post advocating for more AI field building by an OP grantmaker is not. 

Thanks for flagging those! I think my original point probably still stands regarding more projects for different species etc., since the incubated projects are from the existing recommended ideas list.

4
Aaron Boddy🔸
To build on Michael's point - AIM has been recommending "Fish Welfare Initiative in a new country" since at least 2023. And another fish welfare charity in Europe can be thought of as taking Shrimp Welfare Project's model and applying it to fishes. For (what became) Scale Welfare, my understanding is that many potential co-founder pairings fell apart due to the time needed in-country (and I would also guess that the Program attracts people who want to start something new, and founding a similar project isn't as exciting as something brand new). I also think a main reason AIM probably isn't recommending more is their modest prioritization value and a desire to recommend charities that maximise impact over a range of worldviews. I imagine there probably could be a world where AIM exclusively incubated aquatic animal welfare projects, but they (understandably) have epistemic uncertainty about this. (There's also probably an argument that the ecosystem can only really accommodate 1-2 new projects per year, and not a flood of new projects all at once.)

I noticed that AIM has recommended 0 aquatic animal charities in 2026, 1 in 2025 and 0 in 2022-2024. 

Curious if either of you (or AIM researchers) have thoughts on why that is, and to what degree that is an indicator of a lack of high-EV, foundable (e.g. talent exists) projects in the space. 

FWIW, since 2022 (so after SWP and FWI), I count:

  1. Scale Welfare (founded in 2025, but I think the idea was recommended for a few years?)
  2. A charity working on fish welfare in Europe (founded in 2024), not listed on their website
  3. Another charity working on fish welfare, started because of AIM in 2023, but not an official incubatee and not listed on their website.

I hadn't thought about compounding returns to orgs! I can think of some concrete examples with AIM ecosystem charities (e.g. one org helping bring another into creation, or creating a need for others to exist). Food for thought.

Curious how you see the communitarianism playing out in practice?

There's definitely a cooperative side to things that makes it a lot easier to ask for help amongst EAs than the relevant professional groups someone might be a part of, but not sure I'm seeing obvious implications.

5
tylermjohn
I'm only saying it's in tension with the diagnosis as "emphasis on individual action, behavior & achievement over collective." I agree with all of your concrete discussion and think it's important. 

So keen to hear from the disagrees (currently about 21% (5/23) votes) on which parts folks disagree with!

Yeah, and I can probably cite like 3-4 other prominent-ish articles. I think these efforts feel more like a bandaid and not like actually changing the fundamental core principles on which you do your thinking.

Very curious if you can describe the types of people you know, their profiles, what cause areas and roles they have applied for, and what constraints they have, if any. 

But typically (not MECE, written quickly, not in order of importance, some combination could work etc.):

  • "Relentlessly resourceful" from Paul Graham covers a bunch of it better than I could
  • Strong intuitions for people management and org building (likely from experience)
  • Strong manager / leader
  • Gets shit done - can move quickly, decisive, keeps the momentum going, strong prioritization skil
... (read more)
2
Ian Turner
Would you say that Teryn Mattox or Dan Brown at GiveWell or Alexander Berger or Emily Oehlsen at Open Philanthropy meet this description?
1
Peter
Ok thanks for the details. Off the top of my head I can think of multiple people interested in AI safety who probably fit these (though the descriptions I think still could be more concretely operationalized) and fit into categories such as: founders/cofounders, several years experience in operations analytics and management, several years experience in consulting, multiple years experience in events and community building/management. Some want to stay in Europe, some have families, but overall I don't recall them being super constrained. 

The 'enable the org to succeed' implies 'at its stated goals or mission'.

Like, by the org or its leaders' own lights.

For many years I've been trying to figure out a core disagreement I have with a bunch of the underlying EA/rationalist school of thought. I think I've sort of figured it out: the emphasis on individual action, behavior & achievement over the collective, a lack of understanding of how the collective changes individuals through emergent properties (e.g. norms, power dynamics, etc.), and an unwillingness to engage with this. 

This has improved a bunch since I first joined in 2017 (the biggest shock to the system was FTX and subsequent scandals). Why I think these issues... (read more)

1
tylermjohn
Nice. I encountered a similar crux the other week in a career advice chat when someone said "successful people find the skills with which they really excel and exploit that repeatedly to get compounding returns" to which I responded with "well, people aren't the only things that can have compounding returns, organizations can also have compounding returns, so maybe I should keep helping organizations succeed to capture their compounding returns." On the flip side, the fact that EA has focused so much on community building and talent seems like a certain kind of communitarianism, putting the success of the whole above any individual. 
7
Peter
What does strong non-technical founder-level operator talent actually mean concretely? I feel like I see lots of strong people struggle to find any role in the space. 
2
Chris Leong
Whether or not this is the right decision is highly circumstantial. Honestly, I'd typically prefer an organisation to fail than compromise its mission.
5
Mo Putera
80K to their credit have been trying to push back on single-player thinking since at least 2016 but it doesn't seem to have percolated more widely.
2
Guy Raveh
Strongly upvoted. Compare with this quote from chapter 7 of MacAskill's “What We Owe the Future”, showing exactly the problem you describe:
5
Charlie_Guthmann
and nearly 0 support for a political system within the movement. Decentralized = Money & Status -> Power.  This movement combines the people who are probably engaging in measurability bias (global dev, animal welfare) and those who don't care much (longtermists) for the sake of maximizing EV.  Both get to be on the Pareto frontier of EV/Variance plane. Things that fall more near the middle of this plane ("medium termism" - culture, politics) get disapproval from both sides.  I'm pretty ok with this. I think a lot of breakthroughs require narcissistic delusion to push where a reasonable person might assume the fruit has already been plucked. Maybe on the margins you are right though. 

I think this would be very valuable as a top level post so more people can see this!

3
lauren_mee 🔸
I have been procrastinating writing it up for 6 months, so I thought... at least do a quick take :) Sludge is real and I am just really bad at finding time to write on the EA Forum 
1
[comment deleted]

Not a recruiter from this AMA but just wanted to add:

I've seen a number of marketing roles advertised in the past year across field building and effective giving orgs in particular, but also (IIRC) some more direct work AI safety orgs.

There have also been calls for e.g. an AI Safety-focused marketing agency and things like that.

Probably stemming from two things:

  • recent influx of new effective giving orgs, which are now making their first hires (naturally, marketing + reaching new counterfactual audiences is a top priority)
  • in general other orgs in the EA spa
... (read more)

Other (probably more important, if combined) reasons:

  • wanting to have direct impact (i.e. risk aversion within longtermist interventions)
  • personal fit for founding (and specifically founding meta orgs, where impact is even harder to quantify)
  • not quite underfunding, but lack of funding diversity if your vision for the org differs from what OP is willing to fund at scale.
  • lack of founder-level talent

Hey Ben! You might want to check out Probably Good (https://probablygood.org/) - they do global career advice but more focused on GH&D and animal welfare etc.

I don't think they are explicitly targeting talent from Africa or other LMICs, but they have already written up a bunch of career path profiles+ content and I think many could apply.

(+ Animal Advocacy Careers might be interesting too)

1
Benh713
Yes definitely some overlap in the core content with Probably Good or 80,000 Hours. Thanks!
1
Eli Svoboda🔸
Occasionally! I think I also just find myself discussing those ideas fairly often and wanting better answers/resources. 

I'm curious if you tracked career changes and/or relevant research published by the people you had calls with over time?

5
Arkose
Good question! We're choosing to be cautious around data privacy, which unfortunately makes it hard to share specific wins publicly. However, we can share this graph which people fill out six months after their call:

Yes! The overview/title captures what I've seen as well, esp from newer community members. I spend a lot of my time telling people that they know their situation better than I do (and have probably infuriated people by not answering questions directly :)).

One point I'd highlight: I find that people often lack confidence in the plans they make, and that makes them more uncertain, less likely to act, and maybe have less motivation or drive.

This is often caused by imposter syndrome, or chasing an unrealistic sense of certainty or assurance that doesn't exist. ... (read more)

What groups of people do you see this most commonly with?

On priors, I would expect most EA-aligned donors (or researchers / evaluators) to take things like this into account because they seem pretty fundamental.

4
Linch
I think people take this into account but not enough or something? I strongly suspect when evaluating research many people have a vague, and not sufficiently precise, sense of both the numerator and denominator, and their vague intuitions aren't sufficiently linear. I know I do this myself unless it's a grant I'm actively investigating. This is easiest to notice in research because it's both a) a large fraction of (non-global health and development) EA output and b) very gnarly. But I don't think research is unusually gnarly among EA outputs; grants, advocacy, comms, etc. have similar issues. 
5
Joey🔸
Mostly in EA meta...

Oh interesting. I want to dig into this more now, but my impression of individuals' giving portfolios - both major donors & retail donors, but more so people who aren't serious philanthropists and/or haven't reflected a lot on their giving - is that they are malleable and not as zero-sum. 

I think with donors likely to give to EA causes, a lot of them haven't really been stewarded & cultivated, and there probably is a lot of room for them to increase their giving. 

The total funding pie is pretty fixed; I expect it to be quite rare to grow it.

 

Could you say more on how you came to this conclusion?  

4
Joey🔸
Mostly basing this on the macro data I have seen that seems to suggest giving as a % of GDP has stayed pretty flat year to year (~2%).

Hey Sebastian! Very curious how you calculated that amount?

Ah sorry, I should have just said "3 main / larger-scale funders" (OP, EAIF + meta funding circle). Funders from those groups include individuals.

But I was also unclear in my comment - I'll clarify this soon.

There actually is a lot stopping people from doing this independently - if you ever want to scale and get funding, you basically have 3 sources of funders, and if they don't approve of what you are doing, you won't get to become a serious competitor.

4
Joseph
I agree with you. Hypothetically, anyone can 'compete' by providing an alternative offering. But realistically there are barriers to entry. (I know that I wouldn't be able to put on a conference or run an online forum without lots of outside funding and expertise.) Maybe we could make an argument that there are some competitors with CEA's services (such as Manifest, AVA Summit, LessWrong, Animal Advocacy Forum) but I suspect that the target market is different enough that these don't really count as competitors. Of all the things that CEA does, running online intro EA programs would probably be the easiest thing to provide an alternative offering for: just get a reading list and start running sessions. Heck, I run book clubs that meet via video chat, and all it takes is 15-45 minutes of administrative work each month. On a local/national level, maybe university/city group support could realistically be done? But I'm fairly skeptical. My informal impression is that for most of what CEA does it wouldn't make sense for alternative offerings to try to 'compete.'
5
Lorenzo Buonanno🔸
I'm probably less informed than you are, but depending on what you mean by "sources of funders" I disagree. I think if you can demonstrate getting valuable results and want funding to scale, people will be happy to fund you. My impression is that several people influencing >=6 digit allocations are genuinely looking for projects to fund that can be even more effective than what they're currently funding. I'm fairly confident that if anyone hosted a conference or online program, got good results, had a clear theory of change with measurable metrics, and gradually asked for funding to scale, people will be happy to fund that.

I agree he's not offering alternatives, as I mentioned previously. It would be good if Leif gave examples of better tradeoffs. 

I still think your claim is too strongly stated. I don't think Leif criticizing GW orgs means he is discouraging life saving aid as a whole, or that people will predictably die as a result. The counterfactual is not clear (and it's very difficult to measure). 

More defensible claims would be:

  • People are less likely to donate to GW recommended orgs
  • People will be more skeptical of bednets / (any intervention he critiques) an
... (read more)
4
Richard Y Chappell🔸
My claim is not "too strongly stated": it accurately states my view, which you haven't even shown to be incorrect (let alone "unfair" or not "defensible" -- both significantly higher bars to establish than merely being incorrect!) It's always easier to make weaker claims, but that raises the risk of failing to make an important true claim that was worth making. Cf. epistemic cheems mindset.

I didn't read the article you linked; I think it's plausible (see more in my last para). 

I'd like to address your second paragraph in more depth though: 

He's clearly discouraging people from donating to GiveWell's recommendations. This will predictably result in more people dying. I don't see how you can deny this. 

I don't think GW recommendations are the only effective charities out there, so I don't think this is an open-and-shut case.

  • GW's selection criteria for charities includes, amongst other things, room for more funding. So if an org
... (read more)
2
Richard Y Chappell🔸
I'm happy for people to argue that there are even better options than GW out there. (I'd agree!) But that's very emphatically not what Wenar was doing in that article.

I agree with the omission bias point, but the second half of the paragraph seems unfair.

Leif never discourages people from doing philanthropy (or, aid as he calls it). Perhaps he might make people unduly skeptical of bednets in particular - which I think is reasonable to critique him on. 

 

But overall, he seems to just be advocating for people to be more critical of possible side effects from aid. From the article (bold mine)

Making responsible choices, I came to realize, means accepting well-known risks of harm. Which absolutely does not mean that

... (read more)
6
Richard Y Chappell🔸
Did you read my linked article on moral misdirection? Disavowing full-blown aid skepticism is compatible with discouraging life-saving aid, in the same way that someone who disavows xenophobia but then spends all their time writing sensationalist screeds about immigrant crime and other "harms" caused by immigrants is very obviously discouraging immigration whatever else they might have said. ETA: I just re-read the WIRED article. He's clearly discouraging people from donating to GiveWell's recommendations. This will predictably result in more people dying. I don't see how you can deny this. Do you really think that general audiences reading his WIRED article will be no less likely to donate to effective charities as a result?

This comment is mostly about the letter, not the wired article. I don't think this letter is particularly well argued (see end of article for areas of disagreement), but I'm surprised by the lack of substantive engagement with it. 

This is fairly rough, I'm sure I've made mistakes in here, but figured it's better to share than not. 

Here’s some stuff I think is reasonable (but would love for folks to chime in if I'm missing something): 

  • Questioning GiveWell's $4500 estimate - seems worth questioning! I am no expert in developmental economics, bu
... (read more)

It's much easier to fundraise for GH&D (less "weird" / more legible)

8
david_reinstein
I agree, but I'm not sure that's relevant to what the question is asking? I think it presumes you have the money to spend ... or have the ability to shift the funds.

Thanks for your time Lizka! As someone who has shared a bunch of feedback on the forum, I appreciated your willingness to always engage and stay curious.

Moderation is one of those important and invisible jobs where it's really hard to please everyone. I think you / the team did a really good job in what was probably the hardest period of time to be a mod on this forum.

+1 and I'd go further - I think that "steady hands" are even more critical at the leadership level.

+1 to preparing to be in a position to do E2G. I think this is true for many career paths, but it's easier to justify it when you're doing a PhD in ML to work in TAIS research, or working in an entry level position in Congress to try to gain career capital and influence policy.

One general hesitation I had with parts of the post's framing was that it may not look at this as a long-term career path (which means e.g. ramping up giving %s, doing things to psychologically / emotionally feel good + confident about giving away more money).

Well worth the time, and for sure! Here are a few thoughts:

  • importance of targeted channels / personas, building a funnel, focus on the user

+10000, and advice I've given to folks working on any kind of CB / meta work. Targeting users is always a good thing (and you can always increase the personas you support over time). Careers just take time to change, very much a marathon not a sprint (low-hanging fruit are limited).

EA overall (EA thinking, funders, some parts of the EA community) have more blindspots / a lot of suspicion around longer impact timeline... (read more)

Thank you for writing this! Your observations match many of my intuitions about the career advising landscape, it's really helpful to get the confirmation since your team has been doing this for so many years.

This is one of the most useful posts I've read on the forum this year.

3
lauren_mee 🔸
Thank you Vaidehi, this means a lot, it’s always such a trade-off between the amount of time it takes to write a post that is understandable to other vs. using our time on something else, so this was really nice to hear. Would be curious to hear what intuitions you have that resonated the most with this post? And any that you have that weren’t mentioned 👀

Thanks so much for sharing these insights! Over the past few years I've seen the inner workings of leadership at many orgs, and come to appreciate how complex and difficult navigating this space can be, so I appreciate your candor (and humor/fun!)

Sebastian addressed this in a comment below. I'll also add that the Hub is a volunteer-run project, and we have limited time / resources. 

Fair point, I couldn't find a link to point to the budget, but:

"We launched this program in July 2022. In its first 12 months, the program had a budget of $10 million."

From their website - https://www.openphilanthropy.org/focus/ea-global-health-and-wellbeing/

I don't think they had dramatically more money in 2023, and (without checking the numbers again to save time) I am pretty sure they mostly maxed out their budget both years.

They also have a much smaller budget (as indicated by total spend per year). 

 

You can see a direct comparison of total funding in this post I wrote: https://forum.effectivealtruism.org/posts/nnTQaLpBfy2znG5vm/the-flow-of-funding-in-ea-movement-building#Overall_picture

2
Rebecca
I agree it’s likely they have a smaller budget, but equating budget with total spend per year (rather than saying that one is an indication of the other) is slightly begging the question - any gap between the two may reflect relevant CEAs.

However, distancing yourself from 'small r' rationality is far more radical and likely less considered.

 

Could you share some examples of where people have done this or called for it? 

From what I've seen online and the in person EA community members I know, people seem pretty clear about separating themselves from the Rationalist community. 

It would indeed be very strange if people made the distinction, thought about the problem carefully, and advocated for distancing from 'small r' rationality in particular.

I would expect real cases to look like
- someone is deciding about an EAGx conference program; a talk on prediction markets sounds subtly Rationality-coded, and is not put on the schedule
- someone applies to OP for funding to create a rationality training website; this is not funded because making the distinction between Rationality and rationality would require too much nuance
- someone is decid... (read more)

4
NickLaing
Yeah, for sure, I don't really understand how you could be an Effective Altruist without implementing a heavy dose of "small r" rationality. I agree with the post and think it's a really important point to make and consolidate, but I don't think people are really calling for being less rational...

Good Ventures have stopped funding efforts connected with the rationality community and rationality

 

Since that post doesn't specify which causes they are exiting from, could you clarify whether they specified that they are also not funding lower case r "rationality"?

More broadly, they are ultimately scared about the world returning to the sort of racism that led to the Holocaust, to segregation, and they are scared that if they do not act now, to stop this they will be part of maintaining the current system of discrimination and racial injustice.

 

This feels somewhat uncharitable. 

3
Nathan Young
I think given that his own example uses McCarthyism, while it might be incorrect, he seems to at least not be attempting hyperbole; both examples end up in outcomes many people consider at least disastrous. 

Huh - this both feels like something I'm sympathetic to worrying about and matches what I've seen people say about similar issues around the internet. Why does it seem uncharitable to you?

Curious if you think there was good discussion before that and could point me to any particularly good posts or conversations?

5
NickLaing
There are still a bunch of good discussions (see mostly posts with 10+ comments) in the last 6 months or so; it's just that we can sometimes go a week or two without more than one or two ongoing serious GHD chats. Maybe I'm wrong and there hasn't actually been much (or any) meaningful change in activity this year, looking at this: https://forum.effectivealtruism.org/?tab=global-health-and-development

I wonder if the forum is even a good place for a lot of these discussions? Feels like they need some combination of safety / shared context, expertise, gatekeeping etc?

7
Ozzie Gooen
If it's not, there is a question of what the EA Forum's comparative advantage will be in the future, and what is a good place for these discussions. Personally, I think this forum could be good for at least some of this, but I'm not sure.

I think mhendric was asking whether an applicant's anonymity preferences affect their chance of getting funding

3
Lowe Lundin
Ah, sorry that makes a lot more sense haha!  All decisions are made by funders individually, so I can't speak for everyone, but overall I don't think it has influenced people much so far.  Of course, if we thought applicants had bad reasons for wanting to remain anonymous then that would be a red flag. 

Oh thanks for the clarification, I didn't realize that! I'd expect there to be less wealth in LMICs though - I assume the vast majority of wealth (not sure what reasonable numbers are here) is held in HICs and by HNWIs / corporations / governments in those countries. 

Also global GDP increased 43% between 2010 and 2022.

GDP per capita numbers are 2022 estimates, didn't make that clear earlier.  
