
Here’s a version of the database that you can filter and sort however you wish, and here’s a version you can add comments to.

Update: I've been slow to properly update the database, but am collecting additional orgs in this thread for now.

Key points

  • I’m addicted to creating collections and have struck once more.

  • The titular database includes >130 organizations that are relevant to people working on longtermism- or existential-risk-related issues, along with info on:

    • The extent to which they’re focused on longtermism/x-risks
    • How involved in the EA community they are
    • Whether they’re still active
    • Whether they aim to make/influence funding, policy, and/or career decisions
    • Whether they produce research
    • What causes/topics they focus on
    • What countries they’re based in
    • How much money they influence per year and how many employees they have[1]
  • I aimed for (but likely missed) comprehensive coverage of orgs that are substantially focused on longtermist/x-risk-related issues and are part of the EA community.

  • I also included various orgs that are relevant despite being less focused on longtermism/x-risks and/or not being part of the EA community. But one could in theory include at least hundreds of such orgs, whereas I just included a pretty arbitrary subset of the ones I happen to know of.

  • I made this relatively quickly, based it partly on memory & guesswork, and see it as a minimum viable product that can be improved on over time. So please:

    • If you spot any errors or if you know any relevant info I failed to mention about these orgs, let me know via an EA Forum message or by following this link and then commenting there
    • Fill in this quick form if you know of other orgs worth mentioning.
    • Let me know if you have questions about how best to use the database or how to interpret parts of it. (I expect many things will turn out to be confusing/unclear, and I’m relying on people to ask questions.)

Here’s a snippet of what the database looks like (from the "view" focused on "Funders/funding-influencers"):

I made this database and wrote this post in a personal capacity, not as a representative of my employers.

How, why, and when to use the database

(This is all how I use the database myself.)

You can filter, sort, and search the database based on the causes/topics and types of work (e.g., grantmaking vs policy advising vs research) you’re interested in.
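If you export the database (e.g. as a CSV from Airtable), the same kind of filtering and sorting can also be scripted. Here's a minimal sketch in Python; the field names, helper function, and rows below are all hypothetical stand-ins for illustration, not the database's real schema or entries:

```python
# Toy sketch of filtering/sorting an exported copy of the database.
# Field names and sample rows are made up, not the real schema.

def filter_orgs(orgs, cause=None, work_type=None):
    """Return orgs matching a given cause/topic and/or type of work."""
    result = orgs
    if cause is not None:
        result = [o for o in result if cause in o["causes"]]
    if work_type is not None:
        result = [o for o in result if work_type in o["work_types"]]
    return result

orgs = [
    {"name": "Org A", "causes": {"AI safety"}, "work_types": {"research"}},
    {"name": "Org B", "causes": {"biosecurity"}, "work_types": {"grantmaking"}},
    {"name": "Org C", "causes": {"AI safety", "biosecurity"},
     "work_types": {"research", "policy advising"}},
]

# Orgs doing AI-safety-related research, sorted by name:
matches = filter_orgs(orgs, cause="AI safety", work_type="research")
print(sorted(o["name"] for o in matches))  # ['Org A', 'Org C']
```

In practice, though, the Airtable views linked above already support this kind of filtering interactively, so scripting is only worthwhile if you want to combine the data with other sources.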

You can use the database to:

  1. Generally learn about the landscape of actors in a given area
  2. Get ideas about what orgs could “provide inputs to you” (funding, advice, feedback, connections)
  3. Get ideas about what orgs could act as “nodes on your path to impact”, e.g. whose actions could be improved by a research project you’re considering doing or who could translate and transmit your findings to key decision-makers

This could be useful in situations such as when you’re:

  1. Getting oriented to a new area
  2. Trying to build career capital in an area
  3. Generating project ideas, generating theories of change for those project ideas, and prioritising among them
  4. Conducting a project
  5. Helping someone else do any of the above things

(For elaboration on points 3 and 4 in the context of research projects specifically, see here, especially Slides 14-15. Those points are more relevant the more you aim to operate like a consultancy or think tank.)

These benefits could occur via:

  1. The database making you aware of orgs you didn’t know about
  2. The database making you aware of info you lacked on some orgs, or
  3. The database “jogging your memory”
    • I find it’s easier to notice that an org is worth mentioning to someone I’m giving advice to or considering when making a project plan if I’m scanning a filtered list of maybe-relevant orgs than if I’m just doing free recall

Why I made this

Answer 1: As noted, I’m addicted to creating collections.

Answer 2: 18 months ago, I thought EAs should post more summaries and collections, and I still think that, and people seem to often like it when I do that.

Answer 3: 12 months ago, I made a smaller version of this database in hopes that it’d benefit the work of Rethink Priorities’ longtermism team (which I’m a part of) in the ways outlined in the previous section. I feel like it has indeed been useful (though mostly just through guiding my own work and my suggestions to other people; I think other people rarely use it directly). And I’ve also ended up fairly often using the database when giving people career or project advice (e.g., to remind myself what orgs I should suggest a person might want to talk to or check out the work of if they’re interested in nuclear risk or forecasting), or sharing snippets of it with people. So I figured I should make a publicly accessible version.

Caveats

Mainly just what I said earlier, but I’ll say it again in bold for good measure:

  1. I aimed for (but likely missed) comprehensive coverage of orgs that are substantially focused on longtermist/x-risk-related issues and are part of the EA community
  2. I also included various orgs that are relevant despite being less focused on longtermism/x-risks and/or not being part of the EA community. But one could in theory include at least hundreds of such orgs, whereas I just included a pretty arbitrary subset of the ones I happen to know of.
  3. I created this fairly quickly and based partly on memory & guesswork

Other caveats:

  • A high level of focus on longtermism/x-risks and a high level of involvement in EA are of course neither necessary nor sufficient for an org to be impactful, “good”, wise, etc.
  • Obviously I had to make many debatable judgement calls when filling the database in
  • These orgs vary massively in their significance and in their relevance to longtermism/x-risk issues

Possible next steps

  • More orgs could be added (using this form)
  • Info could be added and corrected (people can leave comments in the Airtable and then I’ll make the appropriate edits)
  • Perhaps some other way to structure/display the info would be good?
  • Perhaps this should be somehow integrated with other things, like 80k’s job board or my list of EA funding opportunities?
  • People could duplicate and then adapt this database in order to make:
    • A version that’s relevant to all EA cause areas
    • A version that’s relevant to a particular other large EA cause area (e.g., animal welfare)
    • A version that “zooms in on” some specific longtermist/x-risk-related area - adding more orgs, individuals, and info relevant to that area and cutting out other things

See also

If this database seems useful to you, you may also be interested in one or more of the following:

Acknowledgements

I drew on Pablo Stafforini’s and Jamie Gittins’ lists of EA-related orgs. An earlier version of the database benefitted from comments by Janique Behman, David Rhys Bernard, Juan Gil, and perhaps other people who I’m forgetting. The current version of the database and/or this post benefitted from comments from Will Aldred, Aaron Gertler, Jaime Sevilla, Ben Snodin, Pablo Stafforini, and Max Stauffer.


    1. ...well, I haven’t actually entered that info, but I’ve made fields for it in hopes of crowdsourcing it from you. ↩︎

Comments (65)



Two lists I'm considering making:

  1. Software developers who are interested in doing paid EA work (According to 80,000 Hours, it seems to be hard to hire software developers for EA orgs even though lots of software developers seem to exist in our community. Seems confusing. This would be a cheap first try at solving it)
  2. Pain points that could potentially be solved by software - from EA orgs (see #6 here. The post is about looking for places to invest in software. I think the correct place to approach this would be to start from actual needs. But there's no place for orgs to surface such needs beyond posting a job)

Any thoughts?

I'll note:

  1. When you say "paid", do you mean full-time? I've found that "part-time" people often drop off very quickly. Full-time people would be the domain of 80,000 Hours, so I'd suggest working with them on this.
  2. "no place for orgs to surface such needs beyond posting a job" -> This is complicated. I think that software consultancy models could be neat, and of course, full-time software engineering jobs do happen. Both are a lot of work. I'm much less excited about volunteer-type arrangements, outside of being used to effectively help filter candidates for later hiring.

I think that a lot of people just really can't understand or predict what would be useful without working in an EA org or in an EA group/hub. It took me a while! The obvious advice for people who really want to kickstart things is to first try to work in or right next to an EA org for a year or so; then you'll have a much better sense.

Just throwing a thought: if many EA orgs have software needs and are struggling to employ people who'll solve them; and on the other hand, part-time employees or volunteer directories don't help that much - would it make sense to start a SaaS org aimed at helping EA orgs?

I could see a space for software consultancies that work with EA orgs, that basically help build and maintain software for them. 

I'm not sure what you mean by SaaS in this case. If you only have 2-10 clients, it's sort of weird to have a standard SaaS business model. I was imagining more of the regular consultancy payment structure.

EA Software Consultancy: In case you don't know these posts:

 

Part 1

In part 1, I argue that tech work at EA orgs has three predictable problems:[1]

  • It’s bad for developing technical skills
  • It's inefficiently allocated
  • It’s difficult to assess hires

Part 2

In this part I argue that each problem could be mitigated or even fixed by consolidating the workers into a single agency. I focus here on the benefits common to any form of agency.

Part 3

This post explicitly compares the low-bono option with various others on two axes: entity type (i.e., individual or agency) and funding model.

Yea, I was briefly familiar. 

I think it's still tough, and agree with Ben's comment here. 
https://forum.effectivealtruism.org/posts/kQ2kwpSkTwekyypKu/part-1-ea-tech-work-is-inefficiently-allocated-and-bad-for?commentId=ypo3SzDMPGkhF3GfP

But I think consultancy engineers could be a fit for maybe ~20-40% of EA software talent. 

  1. Developers who'd like to do EA work: Not only full time
  2. I'm talking about discovering needs here. I'm not talking at all about how the needs would be solved

Working at an EA org to discover needs: This seems much slower than asking people who work there, no? (I am not trying to guess the needs myself)

Working at an EA org to discover needs: This seems much slower than asking people who work there, no? (I am not trying to guess the needs myself)

It really depends on how sophisticated the work is and how tied it is to existing systems.

For example, if you wanted to build tooling that would be useful to Google, it would probably be easiest just to start a job at Google, where you can see everything and get used to the codebases, than to try to become a consultant for Google, where you'd ask for very narrow tasks that don't require you to be part of their confidential workflows and similar.

I agree I won't get everything

 

Still, I don't think Google is a good example. It is full of developers who have a culture of automating things and even free time every week to do side projects. This is really extreme.

 

A better example would be some organization that has 0 developers. If you ask someone in such an organization if there's anything they want to automate, or some repetitive task they're doing a lot, or an idea for an app (which is probably terrible but will indicate an underlying need) - things come up

But also, I tried, and I think 0 such needs surfaced

That's what the experimental method is for, so that we don't have to resolve things just by arguing

:)

Both sound to me probably at least somewhat useful! I'm ~agnostic on how likely they are to be very useful, how they compare to other things you could spend your time on, or how best to do them, which is mostly because I haven't thought much about software development.

I expect some other people in the community (e.g., Ozzie Gooen, Nuno Sempere, JP Addison) would have more thoughts on that. But it might make sense to just spend like 0.5-4 hours on MVPs before asking anyone else, if you already have a clear enough idea in your head.

I can also imagine that a Slack workspace (or a channel in an existing workspace) for people in EA who are doing software development, or are interested in it, could perhaps be useful.

(Sidenote: You may also be interested in posts tagged software engineering and/or looking into their authors/commenters.)

Great work Michael, I've already included this Airtable in the curriculum  of Training For Good's upcoming impactful policy careers workshop. Well done, this work is of high value!

Glad to hear that you think this'll be helpful!

(Btw, your comment also made me realise I should add Training For Good to the database, so I've now done so. )

Also note that there are EA Forum Wiki entries for many of the orgs in this database, which will in some cases be worth checking out either for the text of the entry itself, for the links in the Bibliography section, or for the tagged posts.

Cool that you made this, and that you even made a Softr page! Although I think the Softr page is worse than just sharing a public grid view of the Airtable.

I realize it would be cool to have a similar database for all EA-related organisations. Jamie Gittins made one on Notion and has a Forum post here listing EA orgs, but neither is easily filterable. It could have similar attributes to the Airtable you have. I saw that Taymon also has a Google Sheet, but it would be nice to have it on an Airtable and give it more attributes, to make it more easily filterable and more colorful.

Can you share a public grid view of the Airtable in a way that allows people to filter and/or sort however they want but then doesn't make that the filtering/sorting that everyone else sees? I wasn't aware of how to do that, which is the sole reason I added the Softr option. I think the set of Airtable views I also link people to is probably indeed better if people are happy with the views (i.e., combos of filters and orders) that I've already set up.

Agreed that an all-of-EA version of this would also be useful, and that Airtable would be better for that than Notion, a Forum post, or a Google Sheet. I also expect it's something that literally anyone reading this could set up in less than a day, by:

  • duplicating my database
  • manually adding things from Gittins' and Taymon's database
  • maybe removing anything that was in mine that might be out of scope for them (e.g., if they want to limit the scope to just orgs that are in or "aware of & friendly to" EA, since a database of all orgs that are merely quite relevant to any EA cause area may be too large a scope)
  • looking up how to do Airtable stuff whenever stuck (I found the basics fairly easy, more so than expected)

You can share this link instead, which is better than the Softr view, and this means people don't need to get comment access to be able to view the Airtable grid. It also prevents people from being able to see each other's emails if they check the base collaborators. To find that link, I just pressed "Share" at the top right of the base, and scrolled down to the bottom of that modal/pop-up to find the link.

Ah, nice, thanks for that! It seems that that indeed allows for changing both "Filtered by" and "Sorted by", including from each of my pre-set views, without that changing things for other people, so that's perfect!

I still want to provide the comment access version as well, so people can more easily make suggestions on specific entries. But I'll edit my post to swap the softr link for the link you suggested and to make the comment access link less prominent.

No problem!

I just wanted to leave a note saying that I found this database useful in my work.

Epoch

We’re a team of researchers investigating and forecasting the development of advanced AI.

Labour for the Long Term

Is Britain prepared for the challenges ahead?
We face significant risks, from climate change to pandemics, to digital transformation and geopolitical tensions. We need social-democratic answers to create a fair and resilient future.

Our vision
A leading role for the UK
Many long-term issues have an important political dimension in which the UK can play a leading role. Building on the work of previous Labour governments, we see a future where the UK can play a larger role in areas such as in reducing international tensions and in becoming a world leader in green technology.

EffiSciences

EffiSciences is a collective of students founded in the Écoles Normales Supérieures (ENS) acting for more involved research in the face of the problems of our world. [translated from French]

Confido Institute

Hi, we are the Confido Institute and we believe in a world where decision makers (even outside the EA-rationalist bubble) can make important decisions with less overconfidence and more awareness of the uncertainties involved. We believe that almost all strategic decision-makers (or their advisors) can understand and use forecasting, quantified uncertainty and public forecasting platforms as valuable resources for making better and more informed decisions.

We design tools, workshops and materials to support this mission. This is the first in a series of multiple EA Forum posts. We will tell you more about our mission and our other projects in future articles.

In this post, we are pleased to announce that we have just released the Confido app, a web-based tool for tracking and sharing probabilistic predictions and estimates. You can use it in strategic decision making when you want a probabilistic estimate on a topic from different stakeholders, in meetings to avoid anchoring, to organize forecasting tournaments, or in calibration workshops and lectures. We offer very high data privacy, so it is also used in government settings. See our demo or request your Confido workspace for free.

The current version of Confido is already used by several organizations, including the Dutch government, several policy think tanks and EA organizations.

Confido is under active development and there is a lot more to come. We’d love to hear your feedback and feature requests. To see news, follow us on Twitter, Facebook or LinkedIn or collaborate with us on Discord. We are also looking for funding. [emphasis added]

Epistea

We are announcing a new organization called Epistea. Epistea supports projects in the space of existential security, epistemics, rationality, and effective altruism. Some projects we initiate and run ourselves, and some projects we support by providing infrastructure, know-how, staff, operations, or fiscal sponsorship.

Our current projects are FIXED POINT, Prague Fall Season, and the Epistea Residency Program. We support ACS (Alignment of Complex Systems Research Group), PIBBSS (Principles of Intelligent Behavior in Biological and Social Systems), and HAAISS (Human Aligned AI Summer School).

SaferAI

SaferAI is developing technology that will make it possible to audit and mitigate potential harms from general-purpose AI systems such as large language models.

Apart Research

A*PART is an independent ML safety research and research facilitation organization working for a future with a benevolent relationship to AI.

We run AISI, the Alignment Hackathons, and an AI safety research update series.

Also the European Network for AI Safety (ENAIS)

TLDR; The European Network for AI Safety is a central point for connecting researchers and community organizers in Europe with opportunities and events happening in their vicinity. Sign up here to become a member of the network, and join our launch event on Wednesday, April 5th from 19:00-20:00 CET!
 

Riesgos Catastróficos Globales

Our mission is to conduct research and prioritize global catastrophic risks in the Spanish-speaking countries of the world. 

There is a growing interest in global catastrophic risk (GCR) research in English-speaking regions, yet this area remains neglected elsewhere. We want to address this deficit by identifying initiatives to enhance the public management of GCR in Spanish-speaking countries. In the short term, we will write reports about the initiatives we consider most promising. [Quote from Introducing the new Riesgos Catastróficos Globales team]

International Center for Future Generations

The International Center for Future Generations is a European think-and-do-tank for improving societal resilience in relation to exponential technologies and existential risks.

As of today, their website lists their priorities as:

  • Climate crisis
  • Technology [including AI] and democracy
  • Biosecurity

Harvard AI Safety Team (HAIST), MIT AI Alignment (MAIA), and Cambridge Boston Alignment Initiative (CBAI)

These are three distinct but somewhat overlapping field-building initiatives. More info at Update on Harvard AI Safety Team and MIT AI Alignment and at the things that post links to.

Policy Foundry

an Australia-based organisation dedicated to developing high-quality and detailed policy proposals for the greatest challenges of the 21st century. [source]

The Collective Intelligence Project

We are an incubator for new governance models for transformative technology.

Our goal: To overcome the transformative technology trilemma.

Existing tech governance approaches fall prey to the transformative technology trilemma. They assume significant trade-offs between progress, participation, and safety.

Market-forward builders tend to sacrifice safety for progress; risk-averse technocrats tend to sacrifice participation for safety; participation-centered democrats tend to sacrifice progress for participation.

Collective flourishing requires all three. We need CI R&D so we can simultaneously advance technological capabilities, prevent disproportionate risks, and enable individual and collective self-determination.

Also Cavendish Labs:

Cavendish Labs is a 501(c)(3) nonprofit research organization dedicated to solving the most important and neglected scientific problems of our age.

We're founding a research community in Cavendish, Vermont that's focused primarily on AI safety and pandemic prevention, although we’re interested in all avenues of effective research.


Also the Forecasting Research Institute

The Forecasting Research Institute (FRI) is a new organization focused on advancing the science of forecasting for the public good. 

[...] our team is pursuing a two-pronged strategy. One is foundational, aimed at filling in the gaps in the science of forecasting that represent critical barriers to some of the most important uses of forecasting—like how to handle low probability events, long-run and unobservable outcomes, or complex topics that cannot be captured in a single forecast. The other prong is translational, focused on adapting forecasting methods to practical purposes: increasing the decision-relevance of questions, using forecasting to map important disagreements, and identifying the contexts in which forecasting will be most useful.

[...] Our core team consists of Phil Tetlock, Michael Page, Josh Rosenberg, Ezra Karger, Tegan McCaslin, and Zachary Jacobs. We also work with various contractors and external collaborators in the forecasting space.

Also School of Thinking

School of Thinking (SoT) is a media startup.

Our purpose is to spread Effective Altruist, longtermist, and rationalist values and ideas as much as possible to the general public by leveraging new media. We aim to reach our goal through the creation of high-quality material posted on an ecosystem of YouTube channels, profiles on social media platforms, podcasts, and SoT's website. 

Our priority is to produce content in English and Italian, but we will cover more languages down the line. We have been funded by the Effective Altruism Infrastructure Fund (EAIF) and the FTX Future Fund.

Palisade Research

"At Palisade, our mission is to help humanity find the safest possible routes to powerful AI systems aligned with human values. Our current approach is to research offensive AI capabilities to better understand and communicate the threats posed by agentic AI systems."

Jeffrey Ladish is the Executive Director.

Admond

"Admond is an independent Danish think tank that works to promote the safe and beneficial development of artificial intelligence."

"Artificial intelligence is going to change Denmark. Our mission is to ensure that this change happens safely and for the benefit of our democracy."

Senter for Langsiktig Politikk

"A politically independent organisation aimed at creating a better and safer future"

A think tank based in Norway.

To the best of my knowledge, Samotsvety is a group of forecasters, not an organization (although some of its members have recently launched or will soon launch forecasting-related orgs).

Also AFTER (Action Fund for Technology and Emerging Risk)

Also Future Academy (but maybe that's not an org and instead a project of EA Sweden?).

Also anything in Alignment Org Cheat Sheet that's not in here. And maybe adding that post's 1-sentence descriptions to the info this database has on each org listed in that post.

Also fp21 and maybe Humanity Forward.

(Reminder: This is a database of orgs relevant to longtermist/x-risk work, and includes some orgs that are not part of the longtermist/x-risk-reduction community, don't associate with those labels, and/or don't focus specifically on those issues.)

Times I have used this post in the course of my research: II.

Is that 11 or 2?

(Either way, thanks for letting me know :) )

2. Cheers.

How do I submit notes / corrections on orgs in the table?

"If you spot any errors or if you know any relevant info I failed to mention about these orgs, let me know via an EA Forum message or by following this link and then commenting there."

(The very first link I provide in this post allows changing the filtering & sorting, but not commenting, so you have to instead either send a message or use that other link.)

Thanks for your interest in suggesting extra info / correction :) 

I suggested as one possible next step "People could duplicate and then adapt this database in order to make [a] version that’s relevant to all EA cause areas"

I think such a database has now been made! (Though I'm not sure if that was done by duplicating & adapting my one.) Specifically, Michel Justen has made A Database of EA Organizations & Initiatives. I imagine this'd be useful to some people who find their way to this post.*

Here's the summary section of their post, for convenience:

"I’ve created a new database of EA organizations and initiatives that I host on the recently revamped EA Opportunities page. Here’s the raw Airtable

  • I think this is the most comprehensive collection of organizations in or closely involved with EA to date. It features orgs explicitly within or adjacent to EA, as well as a non-comprehensive list of other orgs working on global catastrophic risks, even if they have little involvement with EA. As of writing this, there are 276 organizations in this database. Of these, 130 are labeled as “Part of EA community” and the rest are labeled as either “aware of and friendly to EA” or uninvolved.
  • I still recommend this database as the most valuable database of organizations doing longtermist/x-risk work given its more comprehensive indicators for how orgs are aiming to reduce x-risk.
  • If you see any mistakes in this database, please let us know. You can also submit new organizations."

*I guess I should flag that I haven't looked closely at Michel's post or database, so can't personally vouch for its accuracy, comprehensiveness, etc.
