
TL;DR

Here are the key points I want you to take away from this post:

  1. There are maybe 30 to 60 people in the world doing AI safety grantmaking, collectively directing hundreds of millions of dollars a year. Soon, there will be >$1B being directed per year, and potentially multiple billions.
  2. AI safety grantmaking orgs like Coefficient Giving (“CG”) have a strong track record of counterfactually seeding impactful organizations and careers.
  3. Grantmaking involves a lot more than evaluating a stack of inbound proposals. You also proactively generate new grants (e.g., headhunting founders, designing new funding programs), provide strategic advice to grantees, write memos that shape funding strategy, and generally serve as connective tissue in the ecosystem.
  4. The AI safety grantmaking ecosystem is currently leaving good grant opportunities on the table due to a lack of grantmaker capacity. This is bad.
    1. More grantmakers would also unlock more capital, because funders are more willing to write cheques when there are people who can find and vet promising opportunities.
  5. Not everyone who reads this should rush to become a grantmaker. Direct work is great, and the entire ecosystem is talent-starved in so many ways. But my sense is that grantmaking is underrated relative to other paths that high-context AI safety people tend to consider, like research or policy.
  6. Grantmaking also has some real downsides — you won’t go as deep as you might want to, the work is largely invisible, active grantmaking can be frustratingly poorly scoped, and saying no to people is hard. I discuss these in the appendix.

Intro

A few weeks ago, I wrapped up my two-and-a-half-year stint as a grantmaker on the AI governance and policy team at Coefficient Giving (or “CG”). I’ll soon be joining Astralis Foundation to work on their grantmaking strategy.

CG was my first real, full-time, big boy AI safety job after finishing grad school. The EA part of me wishes I could tell a story where I sat cross-legged in an ivory tower, thinking (mostly from first principles, of course) about how I could most reduce existential risk from ASI, whereupon I decided grantmaking was the most impactful path to pursue.

Nope. I took this job for reasons like:

  • I wanted an AI safety job, and this was the only one available to me at the moment.
  • CG (then called “Open Philanthropy”) had a bunch of cool people working there, such as (but not limited to) Ajeya Cotra and Luke Muehlhauser. “Working with cool people is good” seemed like a reasonable heuristic, I guess?
  • I was running out of savings, and I needed to pay rent and buy food for myself.

Fortunately, I ended up quickly concluding that grantmaking is a very high-leverage role in the AI safety ecosystem. Thus my main goals here are to (a) attempt to demystify what grantmakers do and (b) make the case that grantmaking is being underrated as a career opportunity by high-context AI safety people.

I’ll also try to address some common misconceptions about things like the marginal value of more grantmakers, mention some downsides of the role, and outline a basic call to action.

What do grantmakers do?

Grantmaking mostly involves three key activities: (1) evaluating inbound grant proposals (or “passive grantmaking”), (2) proactively generating new grants (or “active grantmaking”), and (3) a grab-bag of non-grantmaking activities.

I’ll describe each of these below in more detail, but the basic idea is that grantmakers are in the business of figuring out what the AI safety ecosystem needs and then taking advantage of the biggest levers available to them to make it happen.1

Passive grantmaking

This is what most people picture when they think about what grantmakers do. Someone comes to you with a proposal, you read it, conduct an investigation, and if you think it’s worth funding, write up a recommendation that senior people at your organization can get behind.

A common misconception about passive grantmaking is that it basically just involves hitting accept or reject. That’s false: you have a lot of levers at your disposal to shape an inbound grant into something even better. You can give the applicant feedback on their theory of change/plans/strategy, ask them for a budget that scales up one workstream and scales down another, lengthen or shorten the grant period, make the second half of the grant conditional on hitting certain milestones, suggest they hire for a role they hadn’t considered, and/or push them to be even more ambitious.

Active grantmaking

There’s a second “style” of grantmaking called active grantmaking.

Instead of waiting for exciting proposals to land on your desk, you go out and actually make things happen. For instance, you could write up a project proposal for a new organization focused on a sub-problem you think is important and pitch a number of potential founders to start it. You could also design and advertise a new funding program from scratch (e.g., an RFP or something like CG’s CDTF program), and/or pitch an existing grantee to start a new workstream.

Active grantmaking requires you to develop models of what to prioritize. You have to form views on questions like:

  • What are the most important threat models to focus on?
  • What sub-problems are most worth prioritizing?
    • US policy development?
    • Information security field-building?
    • Technical AI governance research?
    • EU policy?
    • Strategic communications?
    • Talent pipelines?
  • What does the AI governance landscape actually look like right now? Who is working on what?
  • Which organizations and people are doing the best work? What’s currently bottlenecking them?
  • What’s the biggest gap in the ecosystem that nobody’s filling?

You develop these views through a mix of reading (papers, memos, blog posts, Slack messages), talking to a lot of people (researchers, founders, policy experts, other grantmakers), and occasionally just sitting with a hard question for a while.

That said, even at an organization that’s been grantmaking for a decade, there are a surprising number of important areas that few people have spent much time digging into. Even just a week or so of shallowly investigating an area that had been on the team’s radar but never properly investigated can surface genuinely exciting opportunities. Every organization has blindspots, and sometimes the highest-value thing a new grantmaker can do is simply be the first person to take a serious look at something that seems vaguely promising.

Non-grantmaking activities

A surprising amount of the job doesn’t involve making grants directly. As a grantmaker, you can easily spend a decent chunk of your time on high-value things like:

  • Providing strategic advice to grantees — helping them think through organizational strategy, prioritization, etc.
  • Writing internal memos that shape how your team or your leadership thinks about entire funding areas, grantmaking practices, strategic priorities, etc.
  • Writing external memos that shape how the broader ecosystem thinks about various strategy questions.
  • Hiring, training, and managing other grantmakers.
  • Making introductions between people who might work together on important projects (grantmakers are often good at being the connective tissue in the ecosystem).
  • Providing leads and referrals to grantees that are looking to hire for key roles.
  • Convening grantees to try to rapidly advance the conversation on a particular topic (e.g., “What are the top AI policy priorities for 2026?”).
  • Building relationships with other funders.
  • Talking to other grantmakers (both internally and externally) about what you’re both working on, so you can advise each other when making relevant grant decisions.

Throughout my time at CG, I’d guess I spent a third of my time on non-grantmaking activities.2

Why I think grantmaking is underrated

Grantmaking has a strong track record

During my time at CG, I saw first-hand how a number of small, speculative grants made years ago helped create organizations that are now pillars in the AI safety ecosystem.

Alexander Berger (CG’s CEO) recently shared an example of this:

“Many of the grantees that have gone on to be among our most important and impactful didn’t start off looking that way at all. For instance, we made our first $250,000 grant to the program that would eventually become ML Alignment & Theory Scholars (MATS) in 2019, when it was a side project by some students affiliated with the Stanford Existential Risks Initiative who thought there should be a summer program to prepare software engineers for careers in AI safety. The MATS 1.0 cohort had 5 fellows and no permanent full-time staff. They have since expanded to run multiple cohorts a year of around 100 scholars with an admission rate of 4-7%, and report that over 80% of their alumni are now working full-time in AI safety and security (accounting for a meaningful portion of safety staff at some of the biggest companies and government institutes).”

There are more examples in this piece that CG published back in October 2025. I also like this anecdote about how Jake Mendel encouraged the folks at Theorem to be even more ambitious with their plans, and this post Asya Bergal wrote about CG’s capacity building efforts.3

Unfortunately, many of the most impressive wins I’m familiar with are fairly sensitive, so I kinda just have to unsatisfyingly gesture at a couple of fairly well-known examples and say “trust me bro”. If you’re seriously considering a grantmaking career and this is a crux for you, my advice would be to ask for more evidence directly from grantmakers you speak with. Maybe they’ll have a few less obvious examples they can share.

The ratio of AI safety philanthropic capital to grantmakers is kinda wild

Here’s something that I think people really don’t appreciate: there are maybe 30-60 FTE in the world doing the object-level work of investigating and recommending AI safety grants.4

These people collectively directed hundreds of millions of dollars a year in 2025. In 2026, I expect this number to be greater than a billion, with potentially enormous growth coming in the next few years as AI safety issues grow in urgency and salience. Depending on how you do the math, you’re looking at each grantmaker being responsible for directing tens of millions of dollars per year. That’s an extraordinary amount of leverage.
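The leverage ratio above can be sketched as a quick calculation. The figures here are the post’s own rough estimates (~$1B/year projected across 30–60 grantmakers), not precise data:

```python
# Rough leverage-per-grantmaker estimate using the post's figures
# (illustrative only; both numbers are low-confidence estimates).
grantmakers_low, grantmakers_high = 30, 60   # estimated FTE grantmakers worldwide
capital_per_year = 1_000_000_000             # ~$1B/year projected for 2026

# Dollars directed per grantmaker per year, at each end of the headcount range
per_gm_low = capital_per_year / grantmakers_high   # more grantmakers -> less each
per_gm_high = capital_per_year / grantmakers_low   # fewer grantmakers -> more each

print(f"${per_gm_low / 1e6:.0f}M to ${per_gm_high / 1e6:.0f}M per grantmaker per year")
# With $1B split across 30-60 people, each directs roughly $17M-$33M/year
```

However you slice the estimate, each grantmaker ends up responsible for tens of millions of dollars per year.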

Of course, basically the entire AI safety ecosystem is talent-starved, so these anecdotes can’t fully carry the argument I’m trying to make. But still, my intuition is that grantmaking is underrated relative to other popular talent-starved roles. If you’re a high-context AI safety person deciding between, say, working as a researcher at a think tank or becoming a grantmaker, I think the grantmaker path deserves more weight than I sense many people give it. This seems especially true if you’re someone with technical AI safety chops who is mostly considering technical research roles.

Grantmaking on current margins looks pretty solid

Like a good grantmaker, you should think on the margin. You might reasonably be wondering something like: “Aren’t the most obvious grants going to get funded either way? Are more grantmakers on the margin really going to make a significant difference to what gets funded?”

I think the answer is pretty clearly yes, for a few reasons.

We’re leaving good grants on the table right now due to a lack of grantmakers. When I was at CG, I regularly saw plausibly-above-the-bar proposals either get rejected outright or sit in the queue longer than they should have, mostly because we didn’t have enough grantmaker capacity to properly evaluate them. CG’s AI governance RFP was recently paused in part because they want to reallocate staff capacity toward more active grantmaking. On the active grantmaking side, there was a regular stream of potentially promising ideas that never got seriously explored because we never had enough staff capacity.

This could get even worse if philanthropic capital grows but grantmaker hiring remains slow. I’m seriously worried that we’re not on track to deploy all of the philanthropic capital that could go toward good AI safety opportunities over the next few years.

More grantmakers would unlock more capital. More grantmaker capacity doesn’t just divide the existing pie into smaller slices; it makes the pie bigger, because funders will be more willing to write cheques if there are more skilled grantmakers who can actually find and vet promising opportunities.

Grantmakers do a lot more than filter through marginal proposals. As I touched on above, there’s a common misconception that the job is just sorting through a pile of applications and deciding which ones to say yes or no to. That’s not true. You can go out and seize the opportunities you wish to see in this world, especially in sub-areas where we are not yet seeing strong diminishing returns. This can be an even bigger deal if you have specific domain expertise that uniquely enables you to do a specific flavour of active grantmaking (e.g., if you’re someone with an information security background).

Jake Mendel on CG’s technical AI safety team recently wrote about this:

“Some people think that being a grantmaker at Coefficient means sorting through a big pile of grant proposals and deciding which ones to say yes and no to. As a result, they think that the only impact at stake is how good our decisions are about marginal grants, since all the excellent grants are no-brainers.

But grantmakers don’t just evaluate proposals; we elicit them. I spend the majority of my time trying to figure out how to get better proposals into our pipeline: writing RFPs that describe the research projects we want to fund, or pitching promising researchers on AI safety research agendas, or steering applicants to better-targeted or more ambitious proposals.”

I’d also push back on the idea that the “obviously above the bar” grants are actually obvious. They might be obvious5 to a full-time grantmaker who has spent months embedded in a particular sub-area, but not at all obvious to the people who approve grants — say, the CEO of a grantmaking organization who has to juggle many different responsibilities. A big part of your job as a grantmaker is to internally translate and advocate for the good stuff to people who don’t have the time or context to investigate it themselves.

I could one day imagine a world where money or ideas are the bottleneck, but we are currently far from that world.

Grantmaking vs direct work

To be clear, I’m not saying everyone should drop what they’re doing and try to become grantmakers. Direct work is great! The majority of people in the AI safety ecosystem should absolutely be doing things like research, advocacy, communications, policy, or founding organizations rather than trying to become grantmakers.6

The claim I can more confidently stand by is that grantmaking currently seems quite underrated by high-context AI safety people. After running hiring rounds, pitching a ton of people on applying, and watching folks’ career moves play out, my sense is that there’s a meaningful gap between how excited people are about grantmaking and how excited I think they should be. I suspect this is partly due to misconceptions about the role (hopefully addressed above) and also that grantmaking is just kind of an opaque career path.

As an exercise, try BOTECing what you could make happen with $10 to $30 million7 in grantmaking funds and a year to brainstorm new project ideas, vet potential founders, and launch new RFPs. Even if you include some counterfactuality haircuts, that’s enough to fund a large number of people to go work on problems you think are important. Then compare that to what you’d counterfactually produce as a single researcher or policy professional over the same period. I’m not saying the answer is always obvious, or that this is a bulletproof argument in favour of grantmaking, but I think it’s worth trying to be concrete about it.
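To make the exercise concrete, here’s one toy version of that BOTEC. Every input below is a hypothetical assumption I’ve picked for illustration (the per-FTE cost and the size of the counterfactuality haircut are my own made-up placeholders, not the post’s claims):

```python
# Toy BOTEC: what might a year of grantmaking capital fund?
# All inputs are hypothetical assumptions for illustration.
budget = 20_000_000            # midpoint of the $10-30M range from the post
cost_per_fte_year = 200_000    # assumed fully-loaded cost per funded FTE-year
counterfactual_haircut = 0.5   # assume half would have been funded anyway

funded_fte_years = budget / cost_per_fte_year
counterfactual_fte_years = funded_fte_years * (1 - counterfactual_haircut)

print(f"~{funded_fte_years:.0f} FTE-years funded, "
      f"~{counterfactual_fte_years:.0f} counterfactual FTE-years")
# -> ~100 FTE-years funded, ~50 counterfactual FTE-years
```

Even under this pessimistic haircut, you’re comparing dozens of counterfactual FTE-years of work against the one FTE-year you’d produce yourself — which is the comparison the exercise is asking you to sit with.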

Call to action

Start by thinking about whether you’d be a fit for a grantmaking role.

You might be a good fit for a grantmaker role if:

  • You’re good at spotting gaps. You are able to notice something important that nobody’s working on. Ideally, you’re also able to think of creative solutions for filling those gaps.
  • You like breadth over depth. This can look like forming a bird’s eye view of what the ecosystem as a whole is doing (or at least a large chunk of it). Ideally, you’re comfortable with being somewhat knowledgeable about many sub-areas rather than building world-class expertise in one.
  • You have strong people judgment skills. You go beyond evaluating whether a theory of change is sound on paper to evaluating whether this particular person/team is going to pull it off.
  • You’re entrepreneurial. For active grantmaking in particular, you get to prioritize between different problems, design interventions to solve them, persuade others to work on them, and deploy capital to fulfill your own strategic vision. You can get a lot of leverage if you’re good at these things.
  • You have strong communication skills. Grantmaking is a communications-heavy role. In particular, you might thrive if you’re good at reasoning transparency and clearly explaining why you’re excited about some opportunity to senior decisionmakers.

That said, I want to be clear that grantmakers come from all kinds of backgrounds. I wouldn’t over-index on whether you check every box above. If what I’ve described in this post sounds interesting, talk to some grantmakers, and seriously consider just applying. You’ll learn a lot about the role from the process itself even if it doesn’t work out. I did a lot of hiring at CG, and while these rounds are very competitive, I would’ve loved to see even more high-context AI safety people apply.

If you want to pursue this, note that there are several organizations that are worth keeping on your radar (or maybe even proactively reaching out to). These include (but are not limited to):

  • Coefficient Giving: the largest AI safety funder by a wide margin, with teams covering technical AI safety, AI governance and policy, biosecurity, forecasting, and capacity building.
    • Despite my recent departure, I’m still very bullish on CG!
  • Longview Philanthropy: they have their own AI program and advise major donors on AI safety giving.
  • Macroscopic Ventures: they make grants and investments in AI safety and related areas.
  • Astralis Foundation: where I’m heading next. We’re a newer and smaller funder, but we’re growing.
  • The Long-Term Future Fund: fund managers evaluate applications on a rolling basis across a range of longtermist cause areas, including AI safety.
  • Future of Life Foundation: FLF does grantmaking across a number of AI safety sub-areas.

There are also opportunities to do part-time grantmaking work at places like the Survival and Flourishing Fund. You could also do independent grantmaking or set up your own new thing, which seems like a great option if you’re particularly entrepreneurial and if you can secure funding for it.

Acknowledgements: Thank you to Catherine Brewer, Michael Townsend, and Trevor Levin for their helpful comments. All views expressed here are my own and do not necessarily reflect any other organizations or individuals I’m affiliated with.

Appendix - Things that aren’t great about grantmaking

In the interest of not writing a pure sales pitch, here are some things I think are genuine downsides of being a grantmaker.

You won’t go as deep on the object level as you might want to. I’d guess there’s a fairly strong correlation between people who are bought into AI safety and people who intrinsically love forming deep, rich inside views on specific questions. Grantmaking isn’t really set up for that. As I described above, you’ll spend some of your time developing views, and you might have one or two focus areas you know particularly well. But generally speaking your mandate will be pretty broad and you’ll have to defer a decent amount. If what you really want is to spend six months going deep on a single research question, grantmaking is probably not the right fit for you.

The work is somewhat invisible. If you make a great grant, your broader network of peers will not obviously know about it. There’s no public artifact to point to. Research, for instance, has a built-in status mechanism — you produce something legible that people can evaluate and credit you for. Grantmaking doesn’t really have that. Of course, you do get some status from people correctly perceiving that grantmakers are important tastemakers in the ecosystem, but the actual work is largely behind the scenes.

People will interact with you differently because you can direct money. You have to be somewhat wary of people trying to bamboozle you. In practice, this was far less of an issue than I expected going in, as the vast majority of people I interacted with were relatively honest and well-intentioned. But there are grifters out there, and developing a nose for this is part of the job.

Active grantmaking can be really tricky. The most entrepreneurial parts of grantmaking are often very poorly scoped. If you aren’t an intense self-starter, it can be easy to spin your wheels in the mud. This can be some of the most rewarding work that a grantmaker does, but also some of the hardest.

You sometimes can’t fund things you think are good. Depending on where you work, there may be constraints on what you can fund. Let me stress an obvious point: it is incredibly important as a grantmaker to be a faithful and responsible steward of your funders’ capital. And sometimes they’ll have firm preferences against funding things you’d otherwise want to support, or there might be other organizational constraints that get in your way. That’s just the way it is.8

Saying no is hard. It kinda sucks to say no to someone who is really passionate about their idea, but that’s part of the job. This is especially true when the main reason you’re saying no is because of bandwidth constraints rather than their proposal being below your bar.

Communicating can be quite effortful. You have to be pretty careful about how you communicate certain things to people due to power dynamics, which requires extra mental bandwidth. A poorly worded email from a grantmaker can carry more weight than you intend.

1 As you might imagine, the biggest lever available is often philanthropic capital. But sometimes it can be your network or your particular object-level knowledge.

2 Of course, other grantmakers might have vastly different experiences with this.

3 I'm drawing mostly on CG examples here because that's what I know best, not because CG is the only funder with wins like these. My sense is that other grantmaking orgs have similar stories to tell.

4 This was just a 20-minute, low-confidence estimate I put together as of March 2026. If you expand the criteria to include program leadership, advisory roles, and people in grantmaking-adjacent positions, you get to maybe 70-90.

5 Even then, I think people overestimate how obvious these are!

6 One example of something that I think is probably even more neglected than grantmaking is founding and scaling highly ambitious organizations. But even that’s not clear-cut. There are some founders who wouldn’t be good grantmakers, sure, but if you’re someone who could either start a new org or join a grantmaking org like CG at a senior level, it might be kind of a close call.

7 This depends on factors such as what organization you work at, what area(s) you focus on, and your level of seniority.

8 In practice, I didn’t feel like this was a huge issue for me during my time at CG. I know others who have had much bigger issues with this though, so YMMV.
