Manifund is sponsoring 3 prizes for the best new essays that give us insight into how to navigate the future. Winners receive a $500 cash prize plus a bonus prize, and winning essays may be republished (with attribution) in our newsletters.
This prize is in partnership with Inkhaven, though non-Inkhaven folks are welcome to submit.
Essays due midnight April 24; submit here!
Manifund: “What systems might manage the coming torrent of funding?”
Funding in EA may soon skyrocket. Between Anthropic & OpenAI tenders, the new OpenAI foundation, and short timelines to AGI, the amount of money available will be unprecedented. What mechanisms, incentive structures, orgs, and attitudes will help direct this windfall wisely?
- Bonus prize: $1000 in donation credits on your Manifund account
- Examples of essays we love:
- https://worksinprogress.co/issue/how-to-start-an-advance-market-commitment/
- https://forum.effectivealtruism.org/posts/paMYXYFYbbjpdjgbt/future-fund-june-2022-update
- https://forum.effectivealtruism.org/posts/vpPee6NgMbPcdsam3/the-funding-conversation-we-left-unfinished
- https://thecounterfactual.substack.com/p/the-anthropic-ipo-is-coming-we-arent
- https://blog.jacobtrefethen.com/10-technologies-that-wont-exist-in-5-yrs/
- See also How to spend 100x more on safety?
Mox: “How to create a flourishing EA & AI safety scene in San Francisco?”
The race to build AI is happening in SF. So why is the AI safety scene here so weak? Berkeley, London, and DC all offer examples to learn from, but SF has its own unique challenges and opportunities. Beyond that, ecosystems like the startup scene, movements like climate activism, and even religions may offer lessons for how to proceed.
- Bonus prize: complimentary Mox membership through end of 2026
- Examples of essays we love:
- https://www.andymasley.com/writing/a-playbook-for-new-ea-groups/
- https://www.jenn.site/dialogue-cultivating-gardens/
- https://frommatter.substack.com/p/effective-altruism-will-be-great
- https://forum.effectivealtruism.org/posts/mQoq3RXtwhgNwLBGN/ea-san-francisco-needs-a-new-lead-organizer
- https://minutes.substack.com/p/rented-virtue
- See also Whither SF AI safety? and How to build a field
Manifest: “How can we leverage forecasting into better decisions?”
Prediction markets have exploded in popularity over the last year. AI forecasters are on track to overtake the best humans by June 2027. But for all that EA has invested into forecasting, it sure doesn’t seem like we’re using forecasts to make better decisions — whether as individuals, within orgs, or as a society. How might we get there?
- Bonus prize: complimentary ticket for Manifest 2026 (transferable)
- Examples of essays we love:
Prize terms
- Must be 500+ words, published on the internet and written in the month of April 2026
- The format is up to you! Besides thinkpieces, we’d be excited for summaries, reviews, case studies, listicles, deep dives, interviews, fiction & many others
- Very long articles (eg 5k+ word effortposts) are welcome
- AI use permitted; disclosure required
- Maximum 2 submissions per person or team
- Submissions can come from outside Inkhaven!
- Submissions due by midnight PT on April 24, via this Airtable form
- Winners announced on April 25, at the Inkhaven Fair
- Will be judged by me (Austin), plus maybe other judges I pull in
- Contact austin@manifund.org with any questions!
Appendix: some problems one might grapple with
(but you don’t have to write about any of these!)
- Funding:
- Avoiding “the vultures are circling” feeling among new donors
- Everyone wants to be the one to direct this funding
- When should individuals decide themselves, vs donate to a fund?
- When do prizes work?
- Balancing FTX-era free-spending optics & community dynamics against actually getting things done in light of short timelines
- When money is plentiful, what inputs are scarce?
- How will the ratio of spending on salaries vs compute change?
- SF:
- Why are EA/AIS college groups so successful, relative to city groups?
- AI safety as a whole is not that big, maybe 1000 people fulltime
- San Franciscans have a very different view of AI than the rest of the US (and the world)
- 50% of talent-weighted TAIS folks work within Anthropic; what does that imply?
- Beyond AI safety, what other EA fields might be developed in SF?
- Forecasting:
- What do the failures of companies’ internal prediction markets prove?
- ChatGPT/Claude are often the first source people turn to; how might they use forecasts to better serve their users?
- What are forecasters good for once LLMs surpass them?
- And how do we take advantage of abundant cheap forecasting ability?
- How do bots perform on Metaculus-style polls vs on prediction markets?
