Alexander Berger's 2026 CoeffG annual letter describes their shift from marginalism to "inframarginal" funding, emphasis mine:
(I do wish Berger gave a bit more detail than just "we should be intentional about trying to strike the right balance between GM and marginalist approaches", but I suppose the annual letter isn't the right place for this.)
Nan Ransohoff's piece on how there should be more GMs owning delivery of specific outcomes is a great read too (emphasis mine):
As I've gotten more work experience (year 10 now, jeez), I've become increasingly a fan of the DRI approach, and by extension the GM ("super-senior-DRI") approach. You could think of incubators like AIM and SMA as "GM factories for orphaned problems".
Unjournal AI-assisted research prioritization dashboard (very early prototype)
We've been experimenting with using LLMs to help identify and prioritize research for Unjournal evaluation, complementing human prioritization (and helping us learn what works). We now have a public prototype dashboard:
uj-prioritization-dashboard.netlify.app
What it does: Automatically discovers recent papers from NBER, arXiv (econ), CEPR, SSRN, Semantic Scholar, EA Forum paper links, and OpenAlex, then scores them using AI models (GPT-5.4 family) against our prioritization criteria — decision relevance, prominence, timing value, and methodological potential.
Important caveats:
* This is very preliminary and the AI recommendations are not yet well-calibrated. Many of the suggestions are mediocre; we're sharing it for transparency and feedback, not because it's producing great output yet.
* This is supplementary to our existing Public Database of Prioritized Research on Coda (https://coda.io/d/Unjournal-Public-Pages_ddIEzDONWdb/Public-Database-of-Prioritized-Research_sutD341G#_luToq6IH).
* Scores reflect evaluation priority (expected value of commissioning an independent review), not research quality.
* At the moment, the AI only sees paper metadata and abstracts, not full texts.
There's also a statistics page showing the breakdown by source, cause area, and score distribution.
Feedback welcome. You can also comment directly on the page via Hypothes.is, and we'll adapt accordingly.
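For concreteness, here's a minimal sketch of the scoring step. The four criterion names come from the description above; the weights, function, and example paper are made up for illustration and are not the dashboard's actual implementation:

```python
# Hypothetical sketch: combine per-criterion scores (0-100, e.g. elicited from
# an LLM given only a paper's metadata and abstract) into one priority score.
# The criterion names match the post; the WEIGHTS values are invented.
WEIGHTS = {
    "decision_relevance": 0.35,
    "prominence": 0.20,
    "timing_value": 0.20,
    "methodological_potential": 0.25,
}

def priority_score(criterion_scores):
    """Weighted average of per-criterion scores; a missing criterion counts as 0."""
    return sum(WEIGHTS[c] * criterion_scores.get(c, 0.0) for c in WEIGHTS)

# Illustrative paper with hypothetical criterion scores:
paper = {
    "decision_relevance": 80,
    "prominence": 40,
    "timing_value": 60,
    "methodological_potential": 70,
}
print(priority_score(paper))  # → 65.5
```

Note the score reflects evaluation priority, not quality, so a plausible refinement is weighting decision relevance most heavily, as above.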
Question for AIM folks: what's the thinking behind running a very involved process twice per year, as opposed to recruiting from near-misses from previous rounds?
Are there savings to be made here? Asking as someone deeply concerned with cost effectiveness as a vital principle of EA... and a former finalist!
A while back I came across this slide from the Money for Good project, which I thought was a sobering quantification of how rarely donors decide based on nonprofit outperformance (cost-effectiveness etc.). Hope Consulting got this data by surveying 4,000 US individuals with household incomes >$80k (top 30% of incomes back in 2009, comprising 75% of overall individual donations), of which 2,000 were in the >$300k bracket.
Opportunity size for US retail donors in 2009 was ~$45B, so the 3% of donors who give based on outperformance works out to roughly 0.03 × $45B ≈ $1.35B. That's still sizeable; e.g. it's more than total annual EA grantmaking has ever been:
How did Hope Consulting get the 3% figure? Top of funnel:
Middle of funnel steep drop-off:
and:
Bottom of funnel has even steeper drop-off, because confirmation bias is the default:
How to raise the 3% figure for donors who give based on nonprofit outperformance? Hope Consulting suggest this framing:
(I disagree with Hope Consulting on that last point, but the rest seems useful.)
What are midsized retail donors like? I used to work in marketing analytics, so this piqued my interest. MaxDiff analysis to elicit donor value trade-offs, followed by a few rounds of cluster analysis, yielded these "donor personas":
The lack of demographic variation somewhat surprised me:
As a closing note, the Money for Good project was a major undertaking: 6 months, 4 major funders (including Rockefeller), 4 research orgs (!) partnering with Hope Consulting, etc. This makes me wonder what the 80/20 version of this could look like, with judicious use of Claude Code and such.
Reminder that the symposium kicks off in an hour! If you want to help the conversation go well, you can write up particular considerations, cruxes or questions you have as comments on the symposium post. Invited guests and other participants will respond to them later.
If AGI goes well for humans, will it go well for other animals?