Hello everyone:
TL;DR:
It seems that the majority of EAs who care about AI s-risks cannot be hired or funded by EA-aligned organizations. For these people, an important question is the relative impact of earning to give vs doing direct work in the non-EA world. Therefore, understanding the feasibility of contributing to reducing AI s-risks in the non-EA world would be valuable for many people's career decisions.
Even a 1-minute gut intuition on this question would already be very helpful.
Main reasoning
My reasoning is roughly as follows.
We can contribute through direct work or earning to give. Direct work can occur in the EA world (mostly nonprofits relying on donations, such as the Center on Long-Term Risk and the Center for Reducing Suffering) or in the non-EA world (for-profit companies or government roles).
However, jobs and grants in the EA world (e.g., CLR/CRS positions or funded independent research) are extremely competitive; perhaps only 10-20% of interested people obtain them. It therefore seems important to consider whether meaningful impact is possible in the non-EA world.
Yet as a layperson without technical AI expertise, contributing to reducing AI s-risks in the non-EA world (especially through technical safety) seems difficult to me. The main reason is that most AI companies appear to prioritize profit and capabilities rather than s-risk concerns or altruistic goals.
As a result, an engineer trying to reduce AI s-risks in a non-EA company might face two problems:
(1) Research constraints
Suppose you work at OpenAI, but your manager wants you to focus on capabilities work or on safety aimed at extinction risks, not digital suffering. If you spend significant time on digital suffering instead of your assigned tasks, you may fail to meet your KPIs or even risk losing your job.
(2) Safety-tax constraints
Another possibility is trying to incorporate s-risk-reducing design choices into AI systems. However, many such interventions might carry a high safety tax (though I am uncertain whether this is actually true), and if they significantly reduce capabilities, companies may not adopt them.
If these concerns are correct, earning more and funding additional EA researchers might have higher marginal value, for example by supporting research that looks for interventions that reduce s-risks without large safety taxes.
(These scenarios are mostly speculation from an AI outsider and are probably woefully off, so please feel free to correct me.)
Stylized comparison
I’m curious about people’s intuitions on the following simplified comparison (I believe many people in the community will face decisions similar to this one in the future):
- Work as a dentist, earn $200,000 per year, and contribute purely through earning to give.
- Work as a software engineer in a non-EA organization (e.g., a mid-level AI company), earn $150,000 per year, and contribute through both earning to give and whatever direct work is feasible there.
All else equal, which option would you tentatively expect to have more impact on reducing AI s-risks, and why?
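To make the trade-off concrete, here is a minimal back-of-envelope sketch in Python. Every number in it (the donation rate, the dollar-equivalent value of direct work) is a made-up placeholder, not an estimate I endorse; the point is only to show what the comparison reduces to.

```python
# Back-of-envelope comparison of the two options.
# All numbers are illustrative placeholders, not real estimates.

dentist_salary = 200_000   # option A: dentist
engineer_salary = 150_000  # option B: engineer at a non-EA AI company

donation_rate = 0.30  # fraction of salary donated (assumed)

# Dollar-equivalent value of whatever direct s-risk work is
# feasible at the non-EA company (hugely uncertain placeholder).
direct_work_value = 20_000

dentist_impact = dentist_salary * donation_rate
engineer_impact = engineer_salary * donation_rate + direct_work_value

print(f"Dentist (donations only):           ${dentist_impact:,.0f}/yr")
print(f"Engineer (donations + direct work): ${engineer_impact:,.0f}/yr")

# The question reduces to: is the engineer's feasible direct work
# worth more than the donation gap, i.e.
# (200k - 150k) * donation_rate = $15k/yr in this toy model?
```

Under these toy numbers the engineer comes out ahead, but the result is entirely driven by `direct_work_value`, which is exactly the quantity I am asking people's intuitions about.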
Final notes
I realize my reasoning likely contains blind spots, so I would be very grateful for any critiques.
Please also don’t feel pressure to write a long response—even 1–2 sentences or a quick intuition would already be very helpful.
Even if others have replied, additional perspectives are still valuable, because people often approach this question from very different angles. People who don't work primarily on AI s-risks are also very welcome to reply.
You are also welcome to DM me via the forum if you prefer not to comment publicly. Thanks very much for reading and answering.
