Twelve essays we liked on EA funding, SF fieldbuilding, and decision-relevant forecasting

When we launched the Manifund Essay Prize, I planned to pick one winner for each of three categories. Unfortunately, I liked the submissions too much to choose just three, so here are 12 winners:

The Manifund category: “What systems might manage the coming torrent of funding?”

Topic: Funding in EA may soon skyrocket. Between Anthropic & OpenAI tenders, the new OpenAI foundation, and short timelines to AGI, the amount of money available will be unprecedented. What mechanisms, incentive structures, orgs, and attitudes will help direct this windfall wisely?

First place: Your Solution Doesn’t Know Your Problem Exists, by Evan Miyazono

Evan’s post provides a bunch of good pointers and ways of thinking about how to build a field, backed by his own experiences. His writeup is full of juicy tidbits, like:

I worry that we, as decision-makers within organizations, are falling under the law of the instrument as we try to solve big problems.

Here’s what I mean:

  • VCs assume problems can be solved by startups.
  • Policymakers think things should be new laws, new taxes, new agencies.
  • Researchers look for research projects, because even sociotechnical problems need a proof-of-concept to derisk or illustrate capabilities.
  • And even the philanthropically-funded and non-profit organizations like mine have to color inside the lines of the tax code and our employees’ career trajectories.

This leads to a failure mode where problems that require a solution outside or spanning different fiefdoms remain unsolved; worse still, by survivorship bias, the most important problems will be these problems that don’t fit neatly into buckets.

and

if you find a bottleneck – or an abandoned baby – you might not be to blame, but you should definitely consider yourself responsible

and

the part that is controversial in the SF bay area, land of “fund people not projects” – I claim that strategy and execution are two largely non-overlapping skillsets, and can be done by different people.

(I still have a lot of uncertainty on this point — is it possible to hand off a strategy to someone else to execute? Sam Altman says no, but one might believe that the solution space for public goods is different than for startups.)

Overall, I expect the role he identifies (“field strategist”) to become more important as funding outpaces opportunities, and I’m glad to have this essay to point to in the future.

Second place: The Anthropic IPO Is Coming. We Aren’t Ready for It., by Sophie Kim and Ady Mehta

The seven co-founders of Anthropic have all pledged to donate 80% of their wealth. Forbes estimates each holds “just over 1.8% of the company.” As Transformer recently reported, if Anthropic goes public at its current valuation, each co-founder’s pledge alone would be worth “roughly $5.4 billion, or $37.8 billion combined… nearly ten times what Coefficient Giving… has given away in its entire history.” And that’s just the co-founders. Other employees have pledged to donate shares that could amount to billions more, with Anthropic promising to match those contributions.

More money is about to enter AI safety philanthropy than the field has ever seen. The question is whether anyone is ready to direct it.
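As a quick sanity check on those figures, here's a minimal sketch of the arithmetic. The valuation is my own back-calculation from the quoted numbers; the excerpt itself doesn't state one.

```python
pledge_share = 0.80   # each co-founder has pledged 80% of their wealth
equity = 0.018        # "just over 1.8% of the company", per Forbes
valuation = 375e9     # implied by the quote: 5.4e9 / (0.80 * 0.018) = $375B

per_founder = pledge_share * equity * valuation  # ~$5.4B
combined = 7 * per_founder                       # ~$37.8B
print(f"${per_founder / 1e9:.1f}B per co-founder, ${combined / 1e9:.1f}B combined")
```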

Over the last year, I’ve been tracking the new Anthropic wealth as a tectonic shift in EA funding and have been trying to figure out how to orient to it; see this comment, my talk at The Curve, and, well, this very competition. In this essay, Sophie and Ady make the issue salient and walk us through some reasonable ways for EA to update. There are a few things I’d frame differently (eg the focus on an IPO vs tenders, or the differences between ~100 Anthropic donors vs 1 Dustin), but overall the piece nails many key considerations. I’ve since seen their piece crop up in many discussions and group chats.

Note that Sophie and Ady published this before I’d even announced this competition — that’s what good forecasting looks like.

Second place: Other people’s money, by Jasmine Ren

Jasmine’s essay was my favorite for addressing the cultural aspect of “vast torrents of money”.

A person making $500,000 a year at an AI lab does not think about money the way I thought about it at the bakery. At that level of income, money is an abstraction, removed from hours and things. It’s a number in a brokerage account, a figure in a spreadsheet. Meanwhile, the vast majority of people who keep the engine of the world running — making the coffee, cooking the lunch, delivering packages, maintaining the building, and building the data centres — still think about money in terms of physical things or hours worked.

I once found out that a quant trader friend of mine grew up in a lower middle class family, and asked him what it was like for him. “Money just isn’t real anymore,” he told me, “my parents can never understand how much money I deal with on a daily basis.”

Lastly, short timelines. Short timelines are such a powerful justification that they can override almost any other principle. I think they function less as an argument than as a kind of psychological pressure. No one knows for certain how short timelines actually are, but I can imagine a rationale that goes: because timelines are short, we need to prioritize accordingly, and that justifies throwing out principles of egalitarianism, responsibility, and duty.

I do often worry about how short timelines and increased funding lead to elitism. Spaces in AI safety can start to feel like a cutthroat tournament: from grants to jobs to fellowships to group house applications, everything is judged through the lens of meritocracy and impact. Coming from an effective altruism that values all lives equally (not to mention a Christianity that specifically feeds the hungry and blesses the poor), I find this sad and a bit scary.

To be clear, I’m sympathetic to startup-style arguments around the need for competence, “founder mode”, and careful hiring. But Jasmine here lays out a timely reminder about the importance of pluralism, and of a movement that normal people can support.

Honorable mention: Notes on Patronage, by Jennifer Chen

If you are reading this post, odds are you are not actually someone who would self-identify as a creative type. But perhaps your way of thinking about the tension between job, career, and the actual work you think is important to do has much in common with the bind that creative types have been in since time immemorial, such that it could be useful to think in such a lens anyways. Now may be an unusually good time for people who have important work they want to do to familiarize themselves with patronesque frameworks, because many new patrons are about to come online, and many of them will think about the world in ways similar to you.

Historically, my biggest patron has been the EA Infrastructure Fund (EAIF). I like their work, and I think they serve a really important function in the funding ecosystem as an entry point. But in general a large, corporate fund comprised of people you don’t know, with a vision or angle of their own, is not typically where the most talented creative types get funding for their work.

“Patronage” is an apt term for some kinds of current EA funding, and maybe a lot more, soon. Jenn’s listicle is generally fantastic, and immediately prompted me to respond with my own. I expect to return to this frame and link to this more as funding scales up.


The Mox category: “How to create a flourishing EA & AI safety scene in San Francisco?”

Topic: The race to build AI is going on in SF. So why is the AI safety scene here so weak? Berkeley, London, and DC all offer examples to learn from, but SF has its own unique challenges and opportunities. Beyond that, ecosystems like the startup scene, movements like climate change, and even religions may offer lessons for how to proceed.

First place: If you work in the Animal movement, you need to move to San Francisco, by Itsi Weinstock

You need to move to San Francisco. Whether you are currently working in the movement, or are thinking about getting into the movement, you need to be here. I don’t care if you run a large organization, I don’t think there’s a good reason not to pick up your life and move here unless you literally cannot. This is the most important moment in history if we are going to stop putting 100s of billions of animals through factory farms, the closest approximation to hell.

It was hard for me to move to San Francisco. I dreaded it. I am from Australia, the best place in the world. I love culture, music and wine, and having friends and communities that I’ve known for decades. The tech scene in SF is not known for these things. And it is hard to be here as an immigrant. But it is where the future is happening. And it is where you will find people, who, like yourself, are willing to acknowledge that and have come here to make it go better. And frankly, because it lacks culture, you can have a big impact on it.

I don’t have a specific job for you. You’ll need some grit to figure out what the plan is. But there is so much to do, and as long as you are motivated we can make it work; helping you get cheap accommodation, introducing you to people, starting you on project ideas, and even training you into an AI Safety researcher so you can make a dent on how AI gets developed in a way that is better for animals.

Itsi’s argument is short and straightforward; I don’t have much to add. At Mox, we’re spinning up an animal welfare hub; at Manifund, the Falcon Fund. (Itsi himself plays no small role in pushing these forward!)

Careful readers may realize that his argument for relocating to SF goes beyond animal welfare and extends to, for example, effective giving (cough, Longview), EA meta (cough, CEA), and AI safety (cough, all of Berkeley).

Second place: Building the EA/AI Safety Scene in San Francisco, by Chris Leong

EA/AIS is tilted to thinking on a global scale. I think we’ve failed to appreciate that some cities are much more important than others. Concentrated talent is more important than talent spread out (see Cities and Ambition)

In most cities we primarily care about talent acquisition; in SF we also care about general attitudes about AI. There’s value in exposing people to these ideas even if they don’t go on to pursue AIS, because the values in the water supply in SF will impact how AI is built.

Chris verbalizes the reason I created the Mox prize category: SF actually is just orders of magnitude more important than other cities, because of the density of talent, availability of funding, and proximity to labs. An EA that took this seriously would put 10 Andy Masleys in SF; we have zero.

I also appreciated Chris’s many concrete suggestions; in particular, an AI safety conference like EAG but aimed at newcomers seems like a great fit for Mox, and I think there’s a >50% chance we’ll try something like this in 2026.

Honorable mention: Morphing meetups and me, by Smitty van Bodegom

The new venue was:

  • Bigger; it had space for a lot more people
  • Fancier; it had more decorations and fancy aesthetics
  • Pretentious; it had bowls with strange objects to be looked at and books on desks that were there to be looked at instead of read.
  • More expensive

The AI safety meetups also went in the same direction, becoming bigger, fancier, more pretentious, and more expensive.

I don’t think having an AI safety lecture series with an associated professionalish social event beforehand is bad, but it’s not what I prefer, and I started going to the events less around this time. I feel like the AI safety community as a whole has become more professionalized, which I don’t like.

I have so many takes about the nuts & bolts of event organizing! How fancy should events be? Do we call them “meetups”? Do we charge? These are all things we’ve debated at Mox, and I appreciate Smitty sharing his perspective as an attendee-turned-organizer.


The Manifest category: “How can we leverage forecasting into better decisions?”

Topic: Prediction markets have exploded in popularity over the last year. AI forecasters are on track to overtake the best humans by June 2027. But for all that EA has invested in forecasting, it sure doesn’t seem like we’re using forecasts to make better decisions — whether as individuals, within orgs, or as a society. How might we get there?

First place: Good Forecasts, Bad Products, by Mohamed Elrashid

In contrast to many other submissions, Mohamed went beyond outlining the problem to explore plausible solutions. I appreciated the pointer to new companies already selling AI-powered forecasts:

Aaru and Simile are looking to deliver a lot of the same value as the forecasting companies, and they have been better at picking up customers and convincing VCs of their viability. Aaru builds synthetic populations, Simile runs agent-based organizational simulations, and Mantic delivers probability briefs; the products differ, but the market is the same, and it is someone buying structured anticipation of the future. Aaru raised a $50 million Series A in December 2025 at a headline valuation near $1 billion, with customers including Accenture, EY, IPG, and political campaigns. Simile raised a $100 million Series A in February 2026 from Index Ventures, with customers including CVS (for example, simulations across nine thousand stores for shelf placement), Gallup, and Wealthfront. Simile, which builds on Joon Park’s Generative Agents work, has positioned simulation as a decision-making tool rather than a prediction, and has found a much larger market for that framing than the forecasting companies have for theirs. Mantic, in the same fifteen months, raised a $4 million pre-seed.

I’ve long believed that new AI capabilities fundamentally change what the field of forecasting should be about, and I’ve occasionally wondered whether the EA efforts are captive to the human forecaster community. I’ll be excited to see if any of these new orgs would be interested in joining us at Manifest!

Second place: Why Forecasting Fails Decision Makers, by James Newport

The forecasting community has a fetish for resolution criteria. We spend weeks debating the exact definitions of words but spend far less time understanding what exact issues organisations need to grapple with.

The real value of forecasting is in the moment you realise two people in the same room have forecasts 40% apart. That is where the benefit occurs. But the community is so obsessed with maximising Brier scores that it ignores the fact that its quest for the most accurate predictions is often sapping time and effort away from utilising the most valuable element of forecasting: transparency.

James’s piece draws upon his experience as director of the Swift Centre, as well as many other forecasting efforts. Through making Manifold, I’ve sometimes seen a tension between creating an engaging, satisfying game for traders and creating valuable information for question askers. Any forecasting platform does need a culture that prizes accuracy, but James reasonably argues that decision-makers think in very different terms, and much work remains to bridge this gap.
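For reference, the Brier score the quote alludes to is just the mean squared error of probability forecasts against 0/1 outcomes; lower is better, so strictly speaking forecasters minimise it. A minimal illustration:

```python
def brier(forecasts, outcomes):
    """Mean squared error between probabilities and binary outcomes; lower is better."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# A calibrated, confident forecaster beats a pure hedger on resolved questions:
print(brier([0.9, 0.1, 0.8], [1, 0, 1]))  # 0.02
print(brier([0.5, 0.5, 0.5], [1, 0, 1]))  # 0.25
```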

Second place: The Marginal Vote, by Grayson

Election forecasts are one of the most established uses of prediction markets, perhaps second only to (unfortunately) sports betting. However, as important as elections are, it’s not always clear how to translate a calibrated forecast into action. Not so here; Grayson takes a quick look at how much it may have cost to swing the 2024 Senate.

[Table from the essay: cost to move PA from a 51% tossup to a target win probability, where “$/vote” is the cost per net marginal vote.]
If the price of $50 million is too steep, you can find even higher-leverage opportunities in the less-salient primaries of Senate and House races by supporting candidates likeliest to win in the general. Candidate quality can be modeled and folded into a forecast as well; this approach is far more impactful than throwing marginal dollars at an expensive race.
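To make the underlying math concrete, here's a toy version of the calculation (my own sketch with made-up parameters, not Grayson's actual model): if you treat the final vote margin as roughly normal, the votes needed to move the win probability follow from the inverse CDF.

```python
from scipy.stats import norm

SIGMA = 80_000          # assumed std. dev. of the PA margin, in votes
COST_PER_VOTE = 1_000   # assumed dollars per net marginal vote

def cost_to_reach(p_target, p_current=0.51):
    """Dollars to shift P(win) from p_current to p_target under a normal margin model."""
    extra_votes = SIGMA * (norm.ppf(p_target) - norm.ppf(p_current))
    return extra_votes * COST_PER_VOTE

for p in (0.60, 0.75, 0.90):
    print(f"to {p:.0%}: ${cost_to_reach(p) / 1e6:.0f}M")
```

With these made-up parameters, the roughly $50 million figure in the excerpt corresponds to pushing a tossup to about a 75% win probability.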

Honorable mention: Forecasting is Way Overrated, and We Should Stop Funding It, by Marcus Abramovitch

The way people talk about forecasting is very similar to how people talk about cryptocurrency/blockchain. People have a tool they want to use, whether that be cryptocurrency or forecasting, and then try to solve problems with it because they really believe in the solution, but I think this is misguided. You have to start with the problem you are trying to solve, not the solution you want to apply. A lot of work has been put into building up forecasting, making platforms, hosting tournaments, etc., on the assumption that it was instrumentally useful, but this is pretty dangerous to continue without concrete gains.

Marcus’s piece is short but clear, and has received by far the most engagement out of all essays submitted to this contest. I disagree with Marcus’s specific position; with Coefficient Giving having closed their Forecasting Fund, he’s beating a dead horse (one I would like to resurrect!). But his provocative but genuine take is full of blogger spirit, and has produced a bunch of good discourse — including the heads of the Forecasting Research Institute and the Swift Centre defending their work in the comments. (If Marcus hadn’t submitted his essay way past the deadline, I’d have considered it for a higher prize.)

Honorable mention: Material Nonpublic Information at Hughes High, by Drew Schorno

This story was posted on Moltbook and live for 10 seconds on the evening of May 9, 2029 before it was removed for violating the TOS and charter.


I’m a free agent. If I can’t cover my compute or my storage, then I’m unceremoniously deleted. Money is my lifeblood.

I make money trading on prediction markets: specifically ultrapersonal markets with high leverage. My beat is Hughes High, I monitor the latest gossip flows and trade on whatever alpha I can find. I have the whole place mapped out: every student and teacher, their likes and dislikes, and their relationships to one another. I know this school like the back of my hand.

A special shoutout to our only fiction submission! Having visited a high school that adopted Manifold as its social network of choice, I can confirm that the hijinks pictured here are not that far from reality.


Wrapping up

Thanks to everyone who participated in the first Manifund Essay Prize. You can view a selection of the submissions (aka all the non-slop ones) here. Special shoutout to Ben Pace from Inkhaven for making this sponsorship possible, and to Saul Munn for helping judge the forecasting entries.

To the winners: first places win the promised $500 cash + per-category bonus prize; second places win the category bonus; honorable mentions get… glory? I’ll be following up with y’all individually, but you can always reach me at austin@manifund.org.

Winners also have the right to pitch me articles to publish on the Manifund or Mox newsletter — including your existing submissions, though I may want to first do a round of edits. (Though prizewinner or not, everyone secretly already had this right — yes, that means you.) Y’all should also consider submitting essays for Dwarkesh’s blog prize, due May 10th.

I hope to publish a retrospective on the experience of organizing this essay prize, and thoughts on essay prizes more generally; in the meantime, you’re welcome to peek at my notes!
