We are excited to announce that Basefund has entered its trial phase. Starting today, individuals who have previously donated to effective charities and are currently facing financial trouble can apply for hardship assistance at basefund.org.

During this trial phase, if your application is accepted, you will receive the lowest amount among the following three options:

  1. The payout suggested by our hardship examiners
  2. 50% of your donations to cost-effective charities made in 2022 and 2023
  3. 1,000 USD or the equivalent amount in another currency
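In other words, the trial-phase payout is the minimum of these three caps. Below is a minimal illustrative sketch in Python; the function name, the example figures, and the assumption that all amounts are already expressed in USD are ours for illustration, not Basefund policy:

```python
def trial_payout(examiner_suggestion: float,
                 donations_2022_2023: float,
                 usd_cap: float = 1_000.0) -> float:
    """Return the trial-phase payout: the lowest of the three caps.

    All amounts are assumed to be in USD (or already converted to USD).
    """
    return min(examiner_suggestion,
               0.5 * donations_2022_2023,
               usd_cap)

# Example: examiners suggest $1,500 and you donated $1,800 in 2022-2023.
# 50% of donations is $900, below both $1,500 and the $1,000 cap.
print(trial_payout(1_500, 1_800))  # 900.0
```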

Please be aware that during the trial phase, Basefund may halt operations or change its rules without warning.

If you know anyone who has previously donated to effective charities and is currently facing financial hardship, please let them know about our fund. If you're not quite sure whether you qualify as experiencing hardship, we recommend that you submit an application anyway.

Comments (18)

Consider hardcore volunteers as well; some EAs unironically volunteer 30 or 40 hours a week while working part-time to support themselves. I did that for 3.5 years or so.

This is something we'd like to expand to, but it's much harder to define "EA volunteer" than "donor to effective charities". Once donor assistance is running smoothly, we'll likely give volunteer assistance a try.

I really like this org. When I play board games, I play faster when I know I'm allowed to take moves back. Similarly, if I know I can get money back when I'm in hardship, I can imagine I might give more.

Conceivably, in a better world, many people might have "charity accounts" where the difference between some baseline and the invested money goes to charity, but you can pull the money out at any time. Many people wouldn't pull it out, and so orgs would get the interest as donations.
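(A toy sketch of that mechanism, since it's easy to misread: the class and numbers below are purely hypothetical illustrations of the idea, not an existing product.)

```python
class CharityAccount:
    """Hypothetical 'charity account': the holder can withdraw their baseline
    at any time, while any investment growth above it is donated to charity."""

    def __init__(self, deposit: float):
        self.baseline = deposit   # withdrawable at any time
        self.balance = deposit    # baseline plus any not-yet-donated growth

    def apply_return(self, rate: float) -> float:
        """Apply an investment return; everything above the baseline is donated."""
        self.balance *= 1 + rate
        donation = max(0.0, self.balance - self.baseline)
        self.balance -= donation
        return donation

    def withdraw_all(self) -> float:
        """The holder pulls their money out; past donations stay with the charity."""
        amount = self.balance
        self.balance = self.baseline = 0.0
        return amount

acct = CharityAccount(10_000)
print(acct.apply_return(0.05))  # ~500.0 donated as "interest"
print(acct.withdraw_all())      # ~10000.0 back to the holder
```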

Yeah, huge fan of this concept; I'm glad to see that people are trying it out!

A few random questions:

  1. Is a more operationalized definition of "financial hardship" envisioned at some point?

(Totally agree with leaving it undefined during the trial phase, and can see arguments for leaving it undefined in an operational phase too. On the other hand, I think transparency is particularly important for charities whose most direct benefit is to EA or EA-adjacent people, and given Basefund's purpose that transparency probably needs to be at a policy level rather than an individual-application level.)

  2. Could you explain the rationale for possibly extending the eligibility period backward to 2012? If the theory of change is to encourage people to donate boldly because they can get a partial refund if they experience future financial trouble, it seems that theory would only work prospectively.

  3. One policy quirk to consider in the future: Should an intervening bankruptcy filing disqualify a significant payout?

There are some potential scenarios in which the effect would -- in my view -- be inequitable toward the donor's creditors. It doesn't seem appropriate to me to give a payout to someone based on pre-petition donations if they saddled their creditors with pre-petition losses.

Because the potential payout would fully be within Basefund's control, that asset could potentially evade the equitable principles of the Bankruptcy Code. This would be particularly problematic if the donor were insolvent when the donation was made, but the creditors couldn't recover the donation under fraudulent conveyance law because of 11 USC § 548(a)(2)'s protection for certain charitable contributions by individuals.

  1. The US legal definition of hardship is more stringent than ours, and we can only assist people who experience US-level hardship because we operate under a US charity. No choice there. I think pointing people to legal definitions won't help.
  2. Extending the eligibility period is less of a utilitarian choice than something deontological/oriented at community building. We believe we have a duty to help EAs who'd be able to support themselves if they hadn't given away their money. Besides that, we hope current and potential EAs will see that we're looking out for each other, which will make the EA community a more attractive place to be in. There's also an argument to be made that getting donors back on their feet might get them to donate again, and it could stop them from leaving EA altogether.
  3. Interesting point.

(The link to the form produces a 404 error on mobile for me.)

Thanks for letting us know, fixed!

Two comments on the form:

The website says only donations made in 2022/2023 can be considered, but the form asks about those made after Jan 1, 2021. Is this intentional data collection about an ineligible year, or an out-of-date form?

I'd encourage a more secure means of collecting banking info than what appears to be unencrypted email.

The form was indeed outdated, and I agree that moving away from email would be a good thing.

As a charity operator, I love this idea.

Also, I would hope (although I think it's unlikely) that if donors fell upon hard times, they would feel they could ask us at OneDay Health for their money back - this would be a rare scenario and we would be very happy to return it!

I wonder if there could be a way for charities to specifically sign up to a refund agreement through something like basefund, and then basefund could almost be an "underwriter" for situations where the charity couldn't manage paying the money back.

Maybe this is impractical but it makes some sense to me at this point.

Also, I would hope (although I think it's unlikely) that if donors fell upon hard times, they would feel they could ask us at OneDay Health for their money back - this would be a rare scenario and we would be very happy to return it!

Are you legally able to do this? I thought that once a donation had been made you were obliged to use it to advance your charitable objectives, and giving money to donors in the US does not seem likely to promote health in rural Uganda. 

One should check applicable law before returning a donation, which could vary from jurisdiction to jurisdiction. This article by a CPA suggests that "if a donor asks for a smaller donation back, it’s usually best to return it. Larger donations may be harder to return. In this circumstance, talk to your legal and financial advisors — and possibly your state’s not-for-profit agency." The rationale isn't stated, but I suspect the business-judgment rule might apply here. In other words, it may be possible for the charity to decide that furtherance of good donor relations justifies refunds under certain circumstances as a means of achieving the charity's objectives.

Thanks!

Thanks, you might well be right; I would never have figured out something like this! Us doctors aren't always the greatest at legal stuff!

Seems awfully sad to me though. Donations are made voluntarily and generously, so it seems a bit sad not to be able to give them back in a very rare case. I can't imagine anyone would ever sue a charity for this.

I can't imagine anyone would ever sue a charity for this.

I think the main issue isn't the charity being sued, but loss of 501c3 status?

Here's an example of a situation the IRS might be worried about: I give $100k to a charity in 2023, the charity gives me a "no goods and services" receipt, I deduct $100k from my income when figuring my taxes, and I save, say, $30k. Then in 2024 I tell the charity, "I'm sorry, I lost my job, I'm in danger of losing my house, and I need the money back." They give me the money back, and I don't tell the IRS.

Or problems with the state charity regulator.

My checklist would probably go something like this:

  • tax issues (as you described)
  • concerns about potential preferential treatment of an insider 
  • concerns about whether a refund could materially impair the charity's operations (and be seen as a breach of the public trust or a fiduciary duty)
  • checking to confirm that the refund wouldn't make the charity close-to-insolvent (this is the only scenario I see in which a suit by a non-governmental actor seems plausible -- if the charity ended up unable to pay other obligations, the refund could be a constructive fraudulent conveyance)
  • donor bankruptcy issues (as described in my top-level comment)

Yeah, thanks. Given all these concerns, I think my idea is unfortunately a bad one. It seemed nice at first but isn't practical.
