I think there could be compelling reasons to prioritise Earning To Give highly, depending on one's options. This is a "hot takes" explanation of this claim with a request for input from the community. This may not be a claim that I would stand by upon reflection.

I base my argument on a few key assumptions, listed below. Each could be debated in its own right, but I would prefer to keep any discussion of them outside this post and its comments, both for brevity and because my reason for making them is largely deferral to people better informed on the subject than I am. The Intelligence Curse by Luke Drago is a good backdrop for this.

  • Whether or not we see AGI or Superintelligence, AI will have significantly reduced the availability of white-collar jobs by 2030, and will only continue to reduce this availability.
  • AI will eventually drive an enormous increase in world GDP.
  • The combination of these will produce a severity of wealth inequality that is both unprecedented and near-totally locked-in. 

If AI advances make white-collar workers redundant by outperforming them at lower cost, we are living in a dwindling window in which people can still determine their own financial destiny. Government action and philanthropy notwithstanding, one's assets may not grow appreciably again once one's labour has become replaceable. The window for starting a new profession may be even shorter, as entry-level jobs are likely the easiest to automate and companies will find it easier to stop hiring people than to start firing them.

That this may be the fate of much of humanity in the not-too-distant future seems bleak indeed. While my ear is not the closest to the ground on all things AI, my intuition is that humanity will not have the collective wisdom to restructure society in time to prevent this from leading to a technocratic feudal hierarchy. Frankly, I'm alarmed that, having engaged with EA consistently for 7+ years, I've only heard discussion of this very recently. Furthermore, the Trump Administration has proven itself willing to use America's economic and military superiority to pressure other states into arguably exploitative deals (tariffs, offering Ukraine security guarantees in exchange for mineral resources) and to shed altruistic commitments (foreign aid). My assumption is that if this Administration, or a similar successor, oversaw the unveiling of workplace-changing AI, the furthest it would cast its moral circle would be American citizens. Those in other countries may have very unclear routes to income.

Should this scenario come to pass, altruistic individuals who bought shares in such companies before the economic explosion could do disproportionate good. The number of actors able to steer the course of the future at all will have shrunk by orders of magnitude, and I would predict that most of them will be more consumed by their rivalries than by any desire to help others. Others have pointed out that this was generally the case in medieval feudal systems. Depending on the scale of investment, even a single such person could save dozens, hundreds, or even thousands of other people from destitution. If that person possessed charisma or political aptitude, their influence over other asset owners could improve the lives of a great many. Given that being immensely wealthy leaves many doors open for conventional Earning To Give if this scenario doesn't come to pass (and I would advocate for donating at least 10% of income along the way), it seems sensible to me for an EA to aggressively pursue their own wealth in the short term.

If one has a clear career path for helping solve the alignment problem or achieve the governance policies required to bring transformative AI into the world for the benefit of all, I unequivocally endorse pursuing those careers as a priority. These considerations are for those without such a clear path. I will now turn to my own circumstances as a concrete example, and because I genuinely want advice!

I have spent 4 years serving as a military officer. My friend works at a top financial services firm, which has a demonstrable preference for hiring ex-military personnel. He can think of salient examples of people being hired for jobs that pay £250k/year with CVs very arguably weaker, in both military and academic terms, than mine. With my friend's help, it is plausible that I could secure such a position. I am confident that I would not suffer more than trivial value drift while earning this wage, or on becoming ludicrously wealthy thereafter, based on concrete examples in which I upheld my ethics despite significant temptation not to. I am also confident that I have demonstrated sufficient resilience in my current profession to handle life as a trader, at least for a while. With much less confidence, I feel that I would be at least average in my ability to influence other wealthy people to buy into altruistic ideals. 

My main alternative is to seek mid-to-senior operations management roles at EA and adjacent organisations with a longtermist focus. I won't labour the point of why I think these roles would be valuable, nor do I mean to diminish the contributions that can be made in them. This theory of impact does, of course, rely heavily on the org I join delivering impactful results; money can almost certainly buy results, but of a fundamentally more limited nature.

So, should one such as I Earn To Invest And Then Give, or work on pressing problems directly?

Comments (12)



I think this effect is completely overshadowed by the fact that, if what you are saying is true, we have 5-10 years of work on the technical alignment/governance of AI to get things to go well.

Now is the time to donate and work on AI safety stuff. Not to get rich and donate to it later in the hopes that things worked out.

I'm sympathetic to this point and stress that my argument above only applies if one is relatively optimistic about solving alignment and relatively pessimistic about these governance/policy problems. I don't think I'm informed enough to be optimistic on alignment but I do feel very pessimistic on preventing immense wealth inequality. The amount of coordination between so many actors for this not to be the default seems unachievable to me. 

This may be available elsewhere and I accept that I might not have looked hard enough, but are there impactful, funding-constrained donation opportunities to solve these problems?

The other two things I want to point out are:

  1. It's very tempting to be biased towards "the thing I should be doing is making money". I've seen a shocking number of E2Gers who don't seem to do much giving, particularly in AI safety. There should be a small corrective bias against concluding that the thing you should be doing is making money and investing it to earn more money; that looks a lot like selfish non-impact.
  2. £250k/year, after taxes and expenses, just isn't that much to donate. In the UK (where the £250k/year would be paid), it would incur income tax of ~35-40% depending on deductions; let's call it £95k. After, say, £45k/year in personal expenses (more if you have a family), we are talking about £110k/year. Invested or not, this just isn't enough money to move the needle on AI safety by enough to write home about. AI governance organisations would very happily hire a very good person into a mid-to-senior operations management role or similar. These orgs spend £110k/year like it's nothing.
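A back-of-the-envelope sketch of that figure, assuming 2024/25 England income-tax bands, a fully tapered (i.e. zero) personal allowance at this income, and ignoring National Insurance and other deductions; these simplifications are illustrative assumptions rather than what the commenter necessarily used, but they land in the same region as the £95k and £110k figures:

```python
# Rough check of the "~£95k tax on £250k" ballpark under the assumptions above.
gross = 250_000

income_tax = (
    37_700 * 0.20                        # basic-rate band
    + (125_140 - 37_700) * 0.40          # higher-rate band
    + (gross - 125_140) * 0.45           # additional rate on the rest
)                                        # ~£98.7k, close to the £95k figure

left_over = gross - income_tax - 45_000  # after £45k of personal expenses
print(f"tax ≈ £{income_tax:,.0f}, left to donate/save ≈ £{left_over:,.0f}")
```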

Re. 2, that maths is the right ballpark if trying to save, but if donating, I do want to remind people that UK donations are tax-deductible, and this deduction is not limited in the way I gather it is in some countries like the US.

So you wouldn’t be paying £95k in taxes if donating a large fraction of £250k/yr. Doing quick calcs, if living off £45k then the split ends up being something like:


Income: 250k

Donations: 185k

Tax: 20k

Personal: 45k
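
A minimal sketch of how that split could roughly balance, assuming the donations are made via Payroll Giving (so they come off income before income tax but not before National Insurance) and 2024/25 bands; these mechanics are assumptions, not stated in the comment:

```python
# Quick check of the "donate £185k, live on £45k" split under the assumptions above.
gross, donations, personal = 250_000, 185_000, 45_000

# Income tax on what remains after donations; the full £12,570 allowance applies,
# since adjusted income is now well under £100k.
taxable = gross - donations - 12_570                     # = 52,430
income_tax = 37_700 * 0.20 + (taxable - 37_700) * 0.40   # ≈ £13.4k

# Employee National Insurance is still charged on the full salary.
ni = (50_270 - 12_570) * 0.08 + (gross - 50_270) * 0.02  # ≈ £7.0k

tax = income_tax + ni                                    # ≈ £20.4k
print(f"tax ≈ £{tax:,.0f}; split sums to £{donations + personal + tax:,.0f}")
```

This comes out within about £500 of the stated split, which seems consistent with "quick calcs".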

(I agree with the spirit of your points.)

£110k seems like it would probably be impactful, and that's just one person giving, right? That's probably at least one FTE. Also, SERI MATS only costs ~£500k per year, so it could be expanded substantially with that amount.

This is generally less than one FTE for an AI safety organization. Remember, there are other costs than just salary.

MATS is spending far more than £500k/year. I don't know how accurate it is, but it looks like they might have spent ~$4.65MM. I'm happy to be corrected, but I think my figure is more accurate.

Some simplifying assumptions:

  • £50k starting net worth
  • Only employed for the next 4 years
  • £300k salary, £150k after tax, £110k after personal consumption
  • 10% interest on your savings for 4 years
  • Around £635k at end of 2030
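
A quick check of that end figure, assuming each year's £110k of savings is invested at the start of the year and everything compounds at 10% (the bullets don't specify the timing, so this is an assumption):

```python
# Reproducing the "around £635k at the end of 2030" figure under the assumptions above.
net_worth = 50_000          # starting net worth
for year in range(4):       # four years of employment
    net_worth = (net_worth + 110_000) * 1.10  # save at the start of the year, then grow

print(f"net worth after 4 years ≈ £{net_worth:,.0f}")  # ≈ £635k
```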

This is only slightly more than the average net worth of UK 55-to-64-year-olds.

Overall, if this plan worked out near perfectly, it would place you at around the 92nd percentile of wealth in the UK.

This would put you in a good, but not great position to invest to give. 

Overall, it seems to me as if you're trying to speedrun getting incredibly wealthy in 4 years. This is generally not possible with salaried work (the assumptions above put you at around the 99th to 99.5th percentile of salaries), but might be more feasible through entrepreneurship.

Some other considerations:

  • Working in such a high-paying job, even in financial services, will probably not allow you to study and practise investing. You will not be an expert in AI investing, or investing in general, by 2030, which would be a problem if you believe such expertise is necessary for you to invest to give.
  • Quite a lot of EAs will be richer than this in 2030. My rough guess is more than 500. Your position might be useful but is likely to be far from unique.
  • You might want to think through your uncertainties about how useful money will be to achieve your goals in 2030-2040. If there’s no more white collar jobs in 2030, in 2035 the world might be very weird and confusing.
  • If there is a massive increase in overall wealth in 2030-2040 due to fast technological progress, a lot of the problems you might care about will get solved by non-EAs. Charity is a luxury good for the rich; more people will be rich; and charity on average solves many more problems than it creates.
  • Technological progress itself will potentially solve a lot of the problems you care about.
  • (Also agree with Marcus’s point.)

The way I understood his post was that even a few hundred thousand or a few million dollars, if invested pre-explosive growth, might become astronomical wealth post-explosive growth. Whereas people without those investments may have nothing due to labor displacement. Which is an interesting theory?
Maybe we need a hedge fund for EAs to invest in AI lol, though that would create hairy conflicts of interest!

That was the point I had meant to convey, Aaron. Thanks for clarifying that. 

This seems like an important critique, Tobias, and I thank you for it. It was a useful readjustment to realise I wouldn't be exceptionally wealthy for doing this in either society at large or the EA community. My sense is still that even being in the 92nd percentile of the UK going into this would be really valuable. Not world-changing valuable, but life-changing for many. That everything might get solved by technology and richer people is plausible, given the challenges in predicting how the future will pan out. I see this strategy mainly as a backstop to mitigate the awfulness of the most S-risk intensive ways this could go. 

(Thanks for providing lots of details in the post. Standard disclaimer that you know the most about your strengths/weaknesses, likes/dislikes, core values, etc)

I recommend going for the job. It sounds like you have a uniquely good chance at getting it, and otherwise I'd assume it'd go to someone who wasn't going to donate a lot of the salary.

After you get the job, I'd recommend thinking/reading/discussing a lot about the best way and time to give.

Regarding:
> This may not be a claim that I would stand by upon reflection.

> my reason for making them is largely a deferral to people better informed on the subject than I


You say you're not currently an expert, but I'd guess it wouldn't take so long (100 hours, so maybe a few months of weekends) for you to become an expert in the specific questions that determine when and how you should donate. Questions like:
- When will we develop superintelligence?
- Given that we do, how likely are humans to stay in control?
- Given that we stay in control, what would the economy look like?
- Given that the future economy looks like [something], what's the most impactful time and way to donate?
 - Wild guess that I haven't thought about much: even if you'd be much richer in the future because the stock market will go up a lot, maybe it's still better to donate all you can to AMF now. Reasoning: you can't help someone in the future if they died of malaria before the AI makes the perfect malaria vaccine

Whatever your final beliefs are, having the high-paying job allows you to have a large impact.

It looks like the other path you're considering is "mid to senior operations management roles at EA". I would guess you could give enough money to EA orgs so they could hire enough ops people to do more work than you could have done directly (but maybe there's some kind of EA ops work where you have a special hard-to-buy talent?)
 

Thanks for the input, Theodore!

I agree that my chances of getting a trader role are higher than average and whoever would get the job instead is almost certainly not going to donate appreciable sums. Naturally, I would devote a very large amount of time and energy to the decision of how to give away this money. 

I'm very sceptical about my ability to become an "expert" on these questions surrounding AI. This is largely based on my belief that my most crippling flaw is a lack of curiosity but I also doubt that anyone could come up with robust predictions on these questions through casual research inside a year.

My intuition is strongly in the other direction regarding donating to AMF now (with the caveat that I have been donating to GiveWell's top charity portfolio for years). I don't have strong credence on how the cost of a DALY will change in the future, but I am confident it won't increase by a greater percentage than the returns on well-placed investments. It is a tragedy that anyone dies before medicine advances to the point of saving them, but we must triage our giving opportunities.

I'd never been convinced that Earning To Give in the conventional sense would be a more impactful career for me than operations management work. My social network (which could be biased) consistently implies the EA community has a shortage of management talent. A large amount of money is already being thrown at solving this problem, particularly in the Bay Area and London. 

Executive summary: Given the potential for AI-driven economic upheaval and locked-in wealth inequality, now may be an unusually good time to prioritize Earning To Give—especially for those with lucrative career prospects—so they can later redistribute wealth in a way that mitigates future harms.

Key points:

  1. AI is likely to significantly reduce white-collar job availability by 2030 while also driving enormous GDP growth, leading to unprecedented and entrenched wealth inequality.
  2. Those who accumulate wealth before their labor becomes replaceable may have a unique opportunity to do significant good, as future redistribution mechanisms could be limited.
  3. If AI-induced economic concentration leads to a "technocratic feudal hierarchy," wealthy altruists could become rare actors capable of steering resources toward helping the destitute.
  4. The geopolitical implications of AI-driven economic shifts may further restrict wealth distribution, particularly under nationalistic policies that prioritize domestic citizens over global needs.
  5. While directly working on AI alignment or governance remains a higher priority, individuals without a clear path in those areas might do more good by aggressively pursuing wealth now to give later.
  6. The author personally considers shifting from a military career to high-earning finance roles, weighing whether Earning To Give would be more impactful than working in longtermist EA organizations.

 

 

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
