
(Upgrading and updating an earlier short-form post.)

Preamble

Employees at EA orgs and people doing direct work are often also donors/pledgers to other causes. But charitable donations are not always one-to-one deductible from income taxes. E.g., in the USA a donation is only deductible if you forgo the standard deduction and 'itemize your deductions', and in many EU countries there is very limited tax deductibility.

So, if you are paid $1 more by your employer/funder and donate it to the Humane League, Malaria Consortium, etc., the charity may only end up with around $0.65 on the margin in many cases. There are ways to do better (set up a DAF, bunch your donations…), but they are costly (a DAF charges fees) and imperfect (whenever you itemize, you lose the standard deduction, if I understand correctly).
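To make the marginal arithmetic concrete, here is a minimal sketch, assuming an illustrative combined marginal tax rate of 35% (not a claim about any particular person or jurisdiction) and a donor who takes the standard deduction, so the donation itself is not deductible:

```python
# Minimal sketch of the marginal arithmetic above.
# The 35% combined marginal tax rate is purely illustrative.
MARGINAL_TAX_RATE = 0.35

def charity_gets_post_tax(extra_salary: float) -> float:
    """Employee is paid the extra salary, pays tax on it (standard
    deduction taken, so the donation is not deductible), donates the rest."""
    return extra_salary * (1 - MARGINAL_TAX_RATE)

def charity_gets_relinquished(extra_salary: float) -> float:
    """Employee relinquishes the salary before it is ever paid out, so it is
    never taxed as the employee's income (the proposal's intended effect)."""
    return extra_salary

print(charity_gets_post_tax(1.00))       # 0.65
print(charity_gets_relinquished(1.00))   # 1.0
```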

This might be somewhat timely because of (1) the loss of funds from the FTX collapse and (2) EA employees feeling guilty if they think they benefited from FTX.

Proposal

Funders/orgs (e.g., Open Phil, RP, FHI, CEA) could agree that employees are allowed to relinquish some share of their paycheck into some sort of general fund. The employees who do so are allowed to determine the use of these funds (or 'advise on' it, with the advice generally followed). I think this should generally not go back to the org they work for, for reasons alluded to below.

 

Key anticipated concerns, responses

Concern: pressure

This will lead to 'pressure to donate/relinquish' if employers, managers, or funders are aware of it.

Response: This process could be managed by ops and by someone at arm's length who will not share the data with the employers/managers/funders. (Details would need working out, obviously, unless something like this already exists.)

This is also a reason to make this explicitly not go back to the employing organization.
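One way to picture the arm's-length arrangement: a hypothetical sketch (the class and field names are invented for illustration, not an existing system) in which an independent administrator keeps the individual records and only ever reports an aggregate figure back to the employer:

```python
from collections import defaultdict

class ArmsLengthAdministrator:
    """Hypothetical record-keeper, independent of the employing org.
    Individual records stay private; the employer only sees totals."""

    def __init__(self):
        self._records = []  # (employee_id, amount, advised_target) -- kept private

    def record_relinquishment(self, employee_id: str, amount: float, advised_target: str):
        # The employee's advice on the target is recorded but, per the proposal,
        # the fund retains ultimate discretion over the grant.
        self._records.append((employee_id, amount, advised_target))

    def report_to_employer(self) -> dict:
        """Only an aggregate payroll reduction -- no names, targets, or
        per-person amounts -- is shared back with the employer/funder."""
        return {"total_relinquished": sum(amount for _, amount, _ in self._records)}

    def advised_allocation(self) -> dict:
        """Internal view for the fund: totals advised per target charity."""
        totals = defaultdict(float)
        for _, amount, target in self._records:
            totals[target] += amount
        return dict(totals)
```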

 

Concern: feasibility. Is this feasible? Would these relinquishments be treated by governments as income anyway?

Response: I've consulted one person with expertise, who suggests this would not be a problem as long as:

- It is clearly a salary reduction
- The promise (to target the cause the employee wants) is only implicit; the employer/organization has ultimate control
     - This is something like the situation with a donor-advised fund (DAF), if I understand it correctly

Note: it is 'pretty normal' for one nonprofit to pass money to another nonprofit.
 

Concern: crowding out

If the funder knows that the people/orgs it funds give back to other charities, it may shift its funding away from these charities, nullifying the employee's counterfactual impact.

Response: This is hardly a new issue and hardly unique to this context; it's a major question for donors in general, across all modes of giving, so perhaps not so important to consider here.

To the extent that it is important, the concern could be reduced by keeping the exact targets and amounts of the donations unknown to the funders.

 

Concern: “Org reputation … why not give back to the org?”

Maybe a stretch, but I could imagine someone arguing: “If EA-ORG's employees ask you to redirect paychecks to a fund that largely goes to the Humane League, Malaria Consortium, …, doesn't this indicate that EA-ORG's employees don't think EA-ORG is the best use of funds?”

Response 1: This is unlikely to be a concern. Employees may want to ‘hedge their bets’ because of moral uncertainty, and because of the good feeling they get from the direct impact of their donations.

Response 2: Keep the recipients of these funds hidden from outsiders.

 

Thoughts? Do you think many people would take this up? What am I overlooking here?

Comments



My guess is that this would be considered akin to an anticipatory assignment of income and would be charged against the wage earner as income, but I didn't look at it for more than three minutes. So that is something you would want to run by a tax lawyer before actually doing (standard disclaimer that I can't give legal advice).

I can think of two other ways you might be able to pull something like this off, although they involve additional complications:

  • If everyone in Organization X already donated at least $5000 per year to charity, Organization X could potentially cut salaries by $3750 and announce a 3:1 employee charitable matching program (up to $1250 in employee giving), so that $1250 of employee giving plus the $3750 match still delivers $5000.
  • If you (an employee of Organization X) want to contribute $5000 to Organization Y, and an employee of Organization Y wants to contribute $5000 to Organization X, you might be able to agree to each petition your employers for a $5000 pay cut. In theory, one could develop an algorithm to match people who wanted to do this across organizations in any number of combinations (a toy sketch of such matching appears below).

I haven't given much thought to either of these, but they don't strike me as assignments of income in the same way as the initial suggestion. Definitely do not try without obtaining actual legal advice from a tax lawyer!

The common method to mitigate the effects of losing the standard deduction -- which I use -- is to donate nothing in half of the years (putting the money into a savings account instead) and donate twice as much in the other half. Yes, I mail a number of checks in December and January of odd-numbered years. Yes, I take a video of myself putting the December ones in a USPS mailbox, which is uploaded to the cloud. :)
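The cross-organization pairing idea in the second bullet could in principle be automated. Here is a toy sketch (all names, orgs, and amounts are invented) of a greedy matcher that pairs an employee of Organization X who wants to give to Organization Y with an employee of Y who wants to give to X, with each pay cut equal to the smaller of the two pledges. This only illustrates the matching step; it is not tax or legal advice.

```python
from collections import defaultdict

# Each pledge: (employee, employer_org, target_org, amount).
# All names and amounts are hypothetical, for illustration only.
pledges = [
    ("alice", "OrgX", "OrgY", 5000),
    ("bob",   "OrgY", "OrgX", 5000),
    ("carol", "OrgX", "OrgZ", 2000),
    ("dana",  "OrgZ", "OrgX", 3000),
]

def match_pledges(pledges):
    """Greedily pair pledges (A works at X, wants to give to Y) with
    (B works at Y, wants to give to X); each matched pair takes pay cuts
    equal to the smaller of the two remaining pledge amounts."""
    by_route = defaultdict(list)  # (employer, target) -> [[employee, remaining], ...]
    for emp, employer, target, amount in pledges:
        by_route[(employer, target)].append([emp, amount])

    matches = []
    for (employer, target), givers in by_route.items():
        reverse = by_route.get((target, employer), [])
        for giver in givers:
            for taker in reverse:
                if giver[1] <= 0:
                    break       # this giver's pledge is fully matched
                if taker[1] <= 0:
                    continue    # this counterpart is fully matched
                amt = min(giver[1], taker[1])
                matches.append((giver[0], taker[0], amt))
                giver[1] -= amt
                taker[1] -= amt
    return matches

print(match_pledges(pledges))
# [('alice', 'bob', 5000), ('carol', 'dana', 2000)]
```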

FYI: people usually say "modest proposal" for things they think are actually bad ideas, in the tradition of Swift's satirical https://en.wikipedia.org/wiki/A_Modest_Proposal

Oh, good point. I forgot.

Thanks for posting this! 

At a skim, this looks related to Passing Up Pay by Jeff Kaufman, and Should effective altruism have a norm against donating to employers? by Owen Cotton-Barratt. I don't remember what was in those posts, exactly, but imagine that readers who find this interesting might also find the discussion on those posts useful. 

Would these relinquishments be seen by governments as actually income?

I'm not an expert on this, but when I've looked into this before I've been told it probably would count as income as long as (a) the amount to relinquish was up to the employee and (b) the employee had influence on where the money was donated.

I've heard from someone that Open Phil-sponsored companies are now doing essentially what you suggest. If you look at, for example, Anthropic's job board, you can see one of their benefits is, "Optional equity donation matching at a 3:1 ratio, up to 50% of your equity grant." By donating equity they avoid income taxes, and perhaps there are other tax implications of donating equity instead of cash (I'm not an expert).

Donating equity seems a bit different but also interesting! Thanks
