
Re: "What to do with people?" and "After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation"

Epistemic status: just my personal impression. Please prove me wrong.


So we know that:

1) In aggregate, there are billions of dollars in EA

2) There is a surplus of talented people looking for EA work who can't get it

and I would like to add:

3) I estimate that there are at least 10-20 budding organisations that would love to use this money to put these people to work, scale beyond their current size, and properly tackle the problems they aim to solve. I know at least 5 founders like that personally.

So with all the ingredients in place for amazing movement growth, why isn't the magic happening?

Knowing who to delegate to

I agree with the idea that if you want to solve problems, you need to organise your system in a hierarchy, with some kind of divide-and-conquer strategy: a principal figures out the subproblems of a problem and delegates them to agents, who recurse.

One problem here is that, even if the agent is aligned, the principal needs some way to tell that the agent is capable of solving a problem to a certain standard.

Different systems solve this problem in different ways. A company might have standards for hiring and systematically review the performance of its employees. Academia relies on prestige and some metrics that are proxies for quality. The market gives the most money to those who sell the most popular products. Communities kick out members who cross boundaries, and deprive uninteresting people of attention.

EA, by which I mean the established organisations of EA, does this kind of thing in two ways: by hiring directly, and by vetting projects. For the latter, there are grantmakers. As professionals who have thought a lot about which projects need to happen, they take a long, hard look at an application from a startup founder, and if they expect the project to work out well, they fund it. Simple enough.

The state of vetting in EA

I want to clarify that none of the following is meant to be accusatory. Grantmaking sounds like one of the hardest jobs in the world, and projects are by no means entitled to EA money just because they call themselves EA. I hope that this post keeps a spirit of high trust, which I think is very important.

So why aren't we seeing more new EA organisations getting funding? Two hypotheses come to mind:

  • The “high standards” hypothesis. Grantmakers think that these new organisations just aren't up to standard, and would therefore cause damage. Perhaps their model is that EA should maintain a very high standard to keep the prestige of the movement intact. After all, that's what the movement might need to influence big institutions like academia and government.
  • The "vetting bottleneck" hypothesis. Grantmaking organisations are just way understaffed. It's not that they're sure these organisations don't meet the bar; it's that they can't verify this in time, so the safest option is to hold off on funding, or fund a more established organisation instead.

In reality, it is probably a combination of both of these. Some anecdotal evidence:

  • When one startup got rejected by a grantmaking organisation and pressed for feedback, they were told: "We do not possess the domain expertise to evaluate scalable existential risk reduction projects in the way that [other org] would be better placed to do", and "as such, we rely more on the strength and quality of references when modeling out the potential impact of projects." This was after they were invited to the interview stage. It suggests that grantmakers fall back on prestige because they don’t always have the resources to properly evaluate ideas.
  • Another startup contacted at least 4 grantmaking organisations. Three of them deferred to the fourth. This organisation said it would inform them of its decision in November, then postponed to December, then to "after the holidays", but it still hasn't responded. It once mentioned to the startup that "we believe that we don't currently have the capacity to review your application sufficiently".
  • Quoting 80,000 Hours: "One reason why these donors don’t give more is a lack of concrete “shovel-ready” opportunities. This is partly due to a lack of qualified leaders able to run projects in the top problem areas (especially to found non-profits working on research, policy and community building). But another reason is a lack of grantmakers able to vet these opportunities or start new projects themselves."
  • If you look at where, for example, EA Funds spends their money, it seems like most of the funds are just going to safe bets that don't need much vetting.

So what does all of this suggest?

What kind of standards grantmakers should have is up for debate. I'm personally under the impression that the standards are too high: there are a lot of startups out there that would increase total direct value. If you believe the prestige of EA (through the average quality of its projects) is more important than its total direct value, consider that prestige is a negative-sum game. But that's just my two low-effort cents, and it's off-topic.

Regardless of the bar that we think an EA startup should meet, I don't think the current pattern of payouts reflects the set of organisations that meet that bar. I'd be very surprised if exactly all of the established orgs are doing better work per marginal dollar than exactly all of the new ones.

This is especially true because established organisations are precisely the ones that aren't funding constrained. Even the rationale of the EA Funds payout mentions “a sense that their work is otherwise much less funding constrained than it used to be”. The grantmaker suggests spending the money on child care and upgrading electronics; he doesn’t seem to be aware of any good funding-constrained organisations. (This was in August, and it looks like they've moved to smaller projects now.)

So scale up the vetting! Then fund more orgs! And all of those amazingly competent people will eventually find a job in one of them, and who knows, net utility of EA might end up orders of magnitude larger.

Again, just my impression. Please prove me wrong.

Comments (30)

I'm very interested in hearing from grantmakers about their take on this problem (especially those at or associated with CEA, which it seems like has been involved in most of the biggest initiatives to scale out EA's vetting, through EA Grants and EA Funds).

  • What % of grant applicants are in the "definitely good enough" vs "definitely (or reasonably confidently) not good enough" vs "uncertain + not enough time/expertise to evaluate" buckets?
  • (Are these the right buckets to be looking at?)
  • What do you feel your biggest constraints are to improving the impact of your grants? Funding, application quality, vetting capacity, something else?
  • Do you have any upcoming plans to address them?

Note also that the EA Meta and Long-Term Future Funds seem to have gone slightly in the direction of "less established" organizations since their management transition, and it seems like their previous conventionality might have been mostly a reflection of one specific person (Nick Beckstead) not having enough bandwidth.

(Funding manager of the EA Meta Fund here)

For our last distribution, we ran an application round for the first time. I conducted the initial investigation, which I communicated to the committee. Previous grantees all came through our personal network.

Things we learnt during our application round:

i) We got significantly fewer applications than we expected and would have been able to spend more time vetting projects; vetting was not a bottleneck. After some investigation through personal outreach, I have the impression that not many projects are being started in the Meta space (this is different for other funding spaces).

ii) We were able to fund a decent fraction of the applications we received (25%?). For about half of the applications, I was reasonably confident that they did not meet the bar, so I did not investigate further. The remaining quarter felt borderline to me; I often still investigated, but the results confirmed my initial impression.

My current impression for the Meta space is that we are not vetting-constrained, but rather constrained by mentoring and proactive outreach. One thing we want to do in the future is run a request-for-proposals process.

One year later, do you think Meta is still less constrained by vetting, and more constrained by a lack of high-quality projects to fund?

And for other people who see vetting constraints: Do you see vetting constraints in particular cause areas? What kinds of organizations aren't getting funding?

Yes, everything I said above is sadly still true. We still do not receive many applications per distribution cycle (~12).

(Just saw this via Rob's post on Facebook) :)

Thanks for writing this up, I think you make some useful points here.

Based on my experience doing some EA grantmaking at Open Phil, my impression is that the bottleneck isn't in vetting precisely, though that's somewhat directionally correct. It's more like there's a distribution of projects, and we've picked some of the low-hanging fruit, so on the current margin, grantmaking in this space requires more effort per grant to feel comfortable with: to vet (e.g. because the case is confusing, or we don't know the people involved), to advise (e.g. the team is inexperienced), to refocus (e.g. we think they aren't focusing on interventions that would meet our goals, so we need to work on sharing models until one of us is moved), or simply to find.

Often I feel like it's an inchoate combination of something like "a person has a vague idea they need help sharpening, they need some advice about structuring the project, they need help finding a team, the case is hard to understand and think about". 

Importantly, I suspect it'd be bad for the world if we lowered our bar, though unfortunately I don't think I want to or easily can articulate why I think that now. 

Overall, I think generating more experienced grantmakers/mentors for new projects is a priority for the movement.

Importantly, I suspect it'd be bad for the world if we lowered our bar, though unfortunately I don't think I want to or easily can articulate why I think that now. 

Do you think it is bad that other pools of EA capital exist, with perhaps lower thresholds, who presumably sometimes fund things that OP has deliberately passed on?

Overall, I think generating more experienced grantmakers/mentors for new projects is a priority for the movement.

Do you have any thoughts on how to best do this, and on who is in a position to do this? For example, my own weakly held guess is that I could have substantially more impact in a "grantmaker/mentor for new projects" role than in my current role, but I have a poor sense of how I could go about getting more information on whether that guess is correct; and if it was correct, I wouldn't know if this means I should actively try to get into such a role or if the bottleneck is elsewhere (e.g. it could be that there are many people who have the skills to be a good grantmaker/mentor but that actors who hold other required resources such as funding or trust don't have the capacity to utilize more grantmakers/mentors). (My current guess is the second, which is why I'm not actively pursuing this.)

I would guess the bottleneck is elsewhere too; I think it's something like managerial capacity/trust/mentorship/vetting of grantmakers. I recently started thinking about this a bit, but am still in the very early stages.

For several months I've been intermittently working on a project to provide more scalable and higher-quality feedback on project proposals. The first alpha-stage test should start within weeks, and I'll likely post a draft of the proposal soon.

Very rough reply ... the bottleneck is a combination of both of the factors you mention, but the most constrained part of the system is actually something like the time of senior people with domain expertise and good judgement (at least as far as we are discussing projects oriented toward long-termist, meta, AI alignment, and similar work). Adding people to the funding organisations would help a bit, but less than you would expect. The problem is that when evaluating, say, a somewhat meta-oriented startup that is also trying to do something about AI alignment, you as a grantmaker often do not have the domain experience, and need to ask domain experts, and sometimes macrostrategy experts. (If the proposal is sufficiently ambitious or complex or both, even junior domain experts will be hesitant to endorse it.) Unfortunately, the number of people with final authority is small, their time is precious, and they are often very busy with other work.

edit: To gesture toward the solution ... the main thing the proposed system will try to do is "amplify" the precious experts. For some ideas on how to do this, see Ozzie's posts; other ideas can be ported from academic peer review, others from anything-via-debate.

[meta: I'm curious, why was this posted anonymously?]

Sounds good! Glad to hear that this is being worked on.

why was this posted anonymously?

I didn't want to jeopardize the projects I'm associated with by criticizing those that might fund them. It's not that I expect a bad reaction, but the stakes were too high, especially because I would be taking a risk on behalf of other people. I'm more than happy to reveal my identity if this post works out well, and/or if there's a reason to think anonymity is a bad norm.

This post seems very insightful to me, and it seems like it worked out very well in terms of upvotes (and it seems like it would increase your chances of getting funding)? I'd be interested to learn who wrote this, but of course no need to say if you prefer not to. :)

Thanks! I wrote it :)

I worked on this problem for a few years and agree that it's a bottleneck not just in EA, but globally. I do think that the work on prediction is one potential "solution", but there are additional problems with getting people to actually adopt solutions. The incentive for people in power to switch to a solution that gives them less power is low, and there are lots of evolutionary pressures that lead to the current vetting procedures. I'd love to talk more to you about this, as I'm working on similar things, although I have moved away from this exact problem.

Very rough reply ... the bottleneck is a combination of both of the factors you mention, but the most constrained part of the system is actually something like the time of senior people with domain expertise and good judgement

This makes sense and leads me to somewhat downgrade my enthusiasm for my "Earn to Learn to Vet" comment (although I suspect it's still good on the margin).

I am unclear on whether or not the main constraint of evaluating EA projects in general is the "time of senior people with domain expertise." For-profit venture capitalists are usually not the world's leading experts in a particular area. Domain familiarity is valuable, but it does not seem like a "senior" or "expert" level of domain knowledge is all that helpful in assessing the likelihood of something succeeding or not. Like VCs, many EA funders I've spoken with rely strongly on factors that do not require a high level of domain familiarity to determine whether or not to fund a project, such as the strength of the founding team. Some amount of domain expertise may be helpful in evaluating certain types of highly complex or research-heavy projects, but most of the projects that I've seen and that other funders are funding do not seem to involve this level of deep domain complexity.

This seems like a good point, and I was surprised this hadn't been addressed much before. Digging through the forum archives, there are some relevant discussions from the past year:

  • A post by RandomEA suggesting an EA crowdfunding platform (Raemon in the comments suggests having a 'common app' for the various funders, which seems like a good idea)
  • benjamin-pence's announcement of the EA Angel Group (current status unclear)
  • Brendon_Wong's post on three ideas for improving EA funding: a 'kickstarter' for projects, a platform for distributed grantmakers to share expertise and grant opportunities, and improved centralized grantmaking. Lots of interesting discussion in the comments.
  • CEA's EA Meta Fund grants post mentioned a $10K grant for Let's Fund, an org that "helps people to discover and crowdfund breakthrough research, policy and advocacy projects" via performing "in-depth research into fledgling projects" and sharing their recommendations. So far, Let's Fund has a couple of object-level posts about specific areas (improving scientific norms and doing climate change research), and is running a crowdfunding campaign for $75,000.

I found a few more discussions that seemed relevant, but not that many.* One reason this might not be much discussed is that major donors are often pretty well plugged into the community, so their social networks might do a good enough job of bringing valuable opportunities to their attention. (And that goes double for big grantmakers.) Still, it seems to me like a hub of useful centralized information could benefit everyone, if it can establish itself as a Schelling point and not as another competing standard. And improving information flow to small donors alone is obviously still valuable, though I'd worry a bit about duplicated work and low-quality analyses resulting from a norm of distributed vetting.

*It's interesting that most of the relevant posts are from the past year. Maybe a result of the 'talent-constrained' discourse getting people interested in what value small donors can provide beyond more funding for big projects?

To provide more information on the status of the EA Angel Group, Benjamin Pence and I are working together on the EA Angel Group (and its parent project Altruism.vc). The EA Angel Group is operating, although it received a lower than expected number of referrals from angels within the group which has significantly reduced the benefit that the group currently provides to its members.

I anticipated this concern months ago and tried to resolve the issue, but was delayed by ~5 months in our attempt to discuss sharing grant proposals with EA Grants. I felt that sharing grant proposals would be more efficient than launching our own "competing" grant application. I think a common app with rolling submissions is a much more sensible idea than having many separate application processes, none of which share the applications they receive with other funders. To my understanding, EA Grants currently doesn't have an opinion on whether sharing grant applications with other funders is a good idea, and it is unclear when they will develop one.

One objection to sharing grant applications among funders is that a funder would fund all of the grant proposals they felt were good and classify all other grant proposals as not suitable to be funded. From the funder's perspective, sharing the unfunded grant proposals would be bad since other organizations could subsequently fund them, and the funder classified those grant proposals as not worth funding. I personally disagree with this objection because the argument assumes that a funder has developed a grant evaluation process that can actually identify successful projects with a high degree of accuracy. Since the norm in the for-profit world involves large and successful venture capital firms with lots of experienced domain experts regularly passing on opportunities that later become multibillion-dollar companies, I find it unlikely that any EA funding organization will develop a grant evaluation process that is so good it justifies hiding some or all unfunded applications.

Around the time I became more concerned that application sharing with EA Grants would be indefinitely delayed, I began to think an EA Project Platform would be a really great way to share not only grant opportunities but also other project-related opportunities, like volunteering opportunities, with the community. After building a prototype and seeking feedback, much of it positive, one EA decided to try to unilaterally block our platform from launching, for reasons like wanting a central organization like CEA to back such a platform rather than a newer team like Ben and me. I personally disagreed with their reasoning, since no major organization appears to have indicated any substantial interest in launching such a platform in the near future, and the launch of such a platform does not preclude CEA or some other organization from having a key role in it later. Not wanting to upset this person, I decided to pause work on the EA Project Platform.

Ben and I are currently evaluating whether or not we want to work on a common app for funders or defer that plan and launch our own separate grant application.

Since our project to improve the EA project space is itself an EA project, our project also has the same capacity and funding constraints as other EA projects. If anyone would like to collaborate with us or provide some funding, please let me know!

"If you look at where, for example, EA Funds spends their money, it seems like most of the funds are just going to safe bets"

I notice that this link is to the August 2018 disbursement, which was indeed all to established orgs. The two disbursements since then have included at least some grants to less established projects (November 2018, March 2019).

This topic seems even more relevant today than in 2019, when I wrote this post. At EAG London I saw an explosion of initiatives, and there is even more money that isn't being spent. I've also seen an increase in the attention EA is giving to this problem, both from the leadership and on the forum.

Increase fidelity for better delegation

In 2021 I still like to frame this as a principal-agent problem.

First of all there's the risk of goodharting. One prominent grantmaker recounted to me that back when one prominent org was giving out grants, people would just frame what they were doing as EA, and then they would keep doing what they were doing anyway.

This is not actually an unsolved problem if you look elsewhere in the world. Just look at your average company. Surely employees like to sugarcoat their work a bit, but we don't often see a total departure from what their boss wants from them. Why not?

Well, I recently applied for funding to the EA Meta Fund. The project was a bit wacky, so we gave it a 20% chance of being approved. The rejection e-mail contained a whopping ~0.3 bits of information: "No". It's like that popular meme where a guy asks his girlfriend what she wants to eat, makes a lot of guesses, and she just keeps saying "no" without giving him any hints.
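(For the curious, the ~0.3 bits figure follows from our own 20% estimate, under the simplifying assumption that the yes/no verdict was the only signal in the e-mail. We expected a rejection with probability 0.8, so the surprisal of hearing "no" is

$$-\log_2 P(\text{no}) = -\log_2 0.8 \approx 0.32 \text{ bits},$$

whereas a "yes" would have carried $-\log_2 0.2 \approx 2.3$ bits.)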

So how are we going to find out what grantmakers want from us, if not by the official route? Perhaps this is why it seems so common for people close to the grantmaker to get funded: they do get to have high-fidelity communication.

If this reads as cynicism, I'm sorry. For all I know, they've got perfect reasons for keeping me guessing. Perhaps they want me to generate a good model by myself, as a proof of competence? There's always a high-trust interpretation and despite everything I insist on mistake theory.

The subscription model

My current boss talks to me for about an hour, about once a month. This is where I tell him how my work is going. If I'm off the rails somehow, this is where he would tell me. If my work were to become a bad investment for him, this is where he would fire me.

I had a similar experience back when I was doing RAISE. Near the end, there was one person from Berkeley who was funding us. About once a month, for about an hour, we would talk about whether it was a good idea to continue this funding. When he updated away from my project being a good investment, he discontinued it. This finally gave me the high-fidelity information I needed to decide to quit. If not for him, who knows how much longer I would have continued.

So if I were to attempt a practical solution: train more grantmakers. Allow grantmakers to make exploratory grants unilaterally to speed things up. Fund applicants according to a subscription model: be especially liberal with the first grant, but only fund them for a short period. Talk to them after every period. Discontinue funds as soon as you stop believing in their project. Give them a cooldown period between projects so they don't leech off of you.

I think this is basically accurate. As I mentioned in another thread, the issue is that the scaling-up-of-vetting is still generally network constrained.

But, this framing (I like this framing) suggests to me that the thing to do is a somewhat different take on Earning to Give.

I had previously believed that Earning-to-Give people should focus on networking their way into hubs where they can detect early-stage organizations, vet them, and fund them. And that this was the main mechanism by which their marginal dollars could be complementary to larger funders.

But, the Vetting-Constrained lens suggests that Earners-to-Give should be doing that even harder, not because of the marginal value of their dollars, but because this could allow them to self-fund their own career capital as a future potential grantmaker.

And moreover, this means that whereas before I'd have said it's only especially worth it to Earn-to-Give if you make a lot of money, now I'd recommend more strongly that marginal EAs join donor lotteries. If a hundred people each put $10k into 10 different donor lotteries, you now have 10 people with $100k each, enough to seed-fund an org for a year. And this is valuable because it gives them experience thinking about whether organizations are good.
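A minimal sketch of that arithmetic, assuming equal stakes and a uniform random draw per lottery (the donor names and the helper function are illustrative, not any real lottery's mechanics):

```python
import random

def run_donor_lottery(entrants, stake):
    """Pool equal stakes; one entrant, drawn uniformly at random,
    wins the right to allocate the entire pot."""
    pot = stake * len(entrants)
    winner = random.choice(entrants)  # equal stakes -> uniform draw
    return winner, pot

# 100 donors with $10k each, split into 10 lotteries of 10 donors
donors = [f"donor_{i}" for i in range(100)]
lotteries = [donors[i:i + 10] for i in range(0, 100, 10)]
for entrants in lotteries:
    winner, pot = run_donor_lottery(entrants, 10_000)
    print(f"{winner} allocates ${pot:,}")  # each pot is $100,000
```

Each donor's expected money moved is unchanged ($10k in, a 1-in-10 chance of directing $100k), but every round produces ten people who get hands-on experience making a seed-grant-sized funding decision.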

There could be some systemization to this, to optimize how much experience a person gets and to track how the org(s) they funded turned out to fare. (Maybe with some prediction markets thrown in.)

I like this frame of maximizing the learning of the vetting skill. How can we get as many EAs as possible to get as much experience as possible evaluating charities, while also ensuring some minimum level of quality in the charities that actually get funded?

Sounds like we want every (potential) grantmaker to be working on vetting the orgs that are on the edge of their skill. That's how you maximize learning.

Also re Jan's comment, some kind of "upward delegation" system where juniors defer to seniors but only if they can't handle an application sounds like it would have this property, plus it would minimize the time that seniors have to spend.

I also like to imagine sending small teams to the EA hotel to start an org for 3 months, explicitly with the intention to just test an idea, then write up their results and feed this back into the vetters.

Just shooting random ideas. Seems like we have some nice building blocks to create something here.

Another data point suggesting a vetting bottleneck is Open Phil’s recent shift in how they’re funding EA meta/community organizations, including those that work on long-termist causes. This was motivated in part by “high uncertainty about how to set the right grant amounts for these organizations and our sense that we aren’t providing the level of accountability, oversight and vetting that we ideally would like to. We believe that individual donors (particularly to these organizations) sometimes seem to think our investigations into the organizations in question have been deeper than is actually the case.” (emphasis added/shifted)

In other words, the funder with the most incentive, capability, and resources to vet these organizations (which I’d guess are abnormally hard to vet) doesn’t think it’s doing enough vetting, and is worried other donors are also under-vetting (based on erroneous assumptions). And it’s not just small projects that are under-vetted; the problem seems much broader.

Re: my comment about smaller projects being under-vetted, I should note that the level of detail provided in the last grant report from the Long-Term Future EA Fund looks like a substantial step forward, “raising the bar on the amount of detail given in grant explanations.”

Excellent post, although I think about it using a slightly different framing. How vetting-constrained grantmakers are depends a lot on how high their standards are. In the limit of arbitrarily high standards, all the vetting in the world might not be enough; in the limit of arbitrarily low standards, no vetting is required.

If we find that there isn't enough capacity to vet, that suggests either that our standards are correct and we need more vetters, or that our standards are too high and we should lower them. I don't have much inside information, so this is mostly based on my overall worldview, but I broadly think it's the latter: standards are too high, and worrying too much about protecting EA's reputation makes it harder for us to innovate.

I think it would be very valuable to have more grantmakers publicly explaining how they make tradeoffs between potential risks, clear benefits, and low-probability extreme successes; if such explanations exist and I'm just not aware of them, I'd appreciate pointers.

Another startup contacted at least 4 grantmaking organisations. Three of them deferred to the fourth.

One "easy fix" would simply be to encourage grantmakers to defer to each other less. Imagine that only one venture capital fund was allowed in Silicon Valley. I claim that's one of the worst things you could do for entrepreneurship there.

grantmakers fall back on prestige because they don’t always have the resources to properly evaluate ideas

It seems like this recent post describes the opposite pattern: someone with a highly prestigious resume spending a lot of resources getting evaluated, and getting rejected despite their resume. I wonder why the pattern would be different between hiring and grantmaking.

Anyway, one idea for helping address the bottleneck is to maintain a shared open-source grantmaking algorithm. The algorithm could include forecasting best practices, a list of ways projects can cause harm, etc. Every time a project fails despite our hopes, or succeeds despite our concerns, we could update the algorithm with our learnings. It could be shared between established EA grantmakers, donor lottery winners, independent angels, etc.

I don't think such an algorithm would eliminate the need for domain expertise, but it might make it less of a bottleneck. The ideal audience might be an EA who is EtG and thinking of donating to a friend's project. They can vouch for their friend, and they have a limited amount of domain expertise in the area of their friend's project. They could do some fraction of the algorithm on their own; then maybe step 7 would be: "Find a domain expert in the EA community. Have them glance over everything you've done so far to evaluate this project and let you know what you're missing." (Arguably the biggest weakness of amateurs relative to experts is that amateurs don't know what they don't know. Plausibly it's also valuable to involve at least one person who is not friends with the project leader, to fight social desirability bias etc. Another way to help address the unknown-unknowns problem is making a post to this forum and paying for critical feedback. Open Phil has a relevant essay re: what they aim for in their writeups.)

I can't help but appreciate the irony that 5 hours after having been posted this is still awaiting moderator approval.

This happens to posts by accounts which have never posted before; established accounts (at least one post or comment) don't have to wait. This was instituted on both LW and EA Forum because of a steady stream of bot-generated spam.

8 hours actually, but most of that was night time in the US.

Thank you for writing this post!!
