EA experimentation is fantastic, but it's really difficult to set up an official nonprofit for each experiment. Therefore, if we want to help fund experimentation, it's good to do so at earlier stages than official nonprofit registration.

A few of us at .impact have started experimenting with Gratipay to put money into the hands of EAs. Gratipay works by providing a system for people to make weekly donations to individuals or projects. The founders of Gratipay are themselves paid through Gratipay, so they take no financial cut (though they do charge around 3% for credit card fees). So far it does not support charitable tax deductions, but it's not meant for that; it's meant to share money with people.
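As a rough illustration of what that fee means in practice (a sketch only; the donation amount is hypothetical, and the exact fee structure may differ from a flat 3% on the gross):

```python
# Illustrative only: approximate net amount a recipient sees, assuming the
# ~3% card fee mentioned above is deducted from each weekly donation.
weekly_donation = 10.00  # hypothetical weekly pledge in USD
card_fee_rate = 0.03     # approximate credit card processing fee

net_to_recipient = weekly_donation * (1 - card_fee_rate)
annual_net = net_to_recipient * 52

print(round(net_to_recipient, 2))  # 9.7
print(round(annual_net, 2))        # 504.4
```

So at small donation sizes the card fee, not any platform cut, is the only overhead.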

We’ve started an ‘Effective Altruist’ community of 30 people with profiles and donations.  While there haven’t been many donors yet, several people have posted profiles of what they are up to and what they intend to continue doing.  There are larger groups such as Charity Science, new ones like Effective Altruism Berkeley, and many individuals like Diego Caleiro, Tom Ash, and Justin Shovelain.  A few professionals are there who don’t request funding, but still appreciate tokens of appreciation.

If you’re interested in helping fund some EA groups or people, it’s super easy to get started.  If you have a project and want funding, it’s super easy to make a page.  If you’re just curious what’s going on, there are many profiles to look through.

There are some limitations.  Gratipay is not great for one-time payments, group ‘Kickstarter’-style payments, incentivized ‘unlock’ payments, registered charity payments, and I’m sure a long list of other things.  That said, Gratipay is a really simple way for us to get started.

Comments



In most cases I don't see a compelling reason to fund an individual rather than an organization. I'm also worried about what kind of message this sends. To be crass, it makes EA look like a circlejerk.

The fear is, put bluntly, that this is another way to turn EA people away from focusing on rigorously evidence-backed projects for the poor, and toward funding and supporting the lives of rich white people with whom they are internet friends, working on speculative projects with no evidence backing.

So long as the amounts and numbers remain fairly small, I think that some kind of speculative VC-style funding of people or projects in the EA space is fine. But if people did start to make a career out of Gratipay, I think a lot more skepticism would be needed.

There are over 1k communities on Gratipay, many of them arguably doing less directly important things. This isn't just money for anyone; it's supposed to be for projects done by people who haven't set up official nonprofits (very few people have done this). It also works for organizations, such as online ones.

In most cases I don't see a compelling reason to fund an individual rather than an organization.

I don't see the difference between funding individuals and organisations, since you can treat individuals as one-person organisations with a narrower range of projects, and by funding organisations you're ultimately funding individuals. Some of the things produced by .impact members, which I believe are mostly done by individuals or small groups, compare quite well with those produced by organisations.

Lila, see my comment above. Also note that the current status quo is, as AlasdairGives mentioned, rich white people receiving the money through institutions, which sometimes have to pay fees and other costs to other rich white people who are not doing EA work. Overall, my best guess is that donating to institutions costs about twice as much as donating to individuals in terms of cost per employee and task. It could be more.

It is important that at least one or two institutions remain affiliated with high-status entities. But we no longer need to guarantee this. With Singer on LYCS, Musk and Freeman supporting Superintelligence-related NGOs, FHI at Oxford, and CSER and FLI burgeoning, we have solved the status question almost to satisfaction. Signalling is not our problem; efficiency is. And 90k can pay one person within institutions, and three or more outside institutions. As long as the feedback and trust mechanisms are good, the next marginal dollar should go to direct donations to individuals. Furthermore, since those dollars are donations and are not conditional on any work, they can be obtained by those who are not currently in their country of citizenship, which enormously facilitates moving for those whose efforts are better allocated elsewhere.
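A quick sketch of the arithmetic behind that claim (the 90k figure is the commenter's own estimate; the ~30k per-person living cost is an assumption consistent with the "three or more" claim, not a figure from the post):

```python
# Illustrative only: how many people a 90k USD/year budget supports, comparing
# the commenter's estimated all-in cost of one institutional employee against
# an assumed ~30k/year in direct living costs per individual.
budget = 90_000
cost_inside_institution = 90_000    # commenter's estimate per employee
cost_direct_to_individual = 30_000  # assumed living costs per person

print(budget // cost_inside_institution)    # 1 person via an institution
print(budget // cost_direct_to_individual)  # 3 people funded directly
```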

My concern is that feedback and trust mechanisms aren't good. Even the best of us I think would struggle to produce quality work without a boss and coworkers and deadlines. If organizations are actually just using gratipay to pay their employees without taxes, there are some legal concerns. People don't take kindly to allegations of tax-dodging, and if something like this were to get out, it would probably hurt donations.

I'm excited for this decentralized funding community, and I'm very grateful for the $248 I've personally collected through the service to date. However, I feel like individuals seeking funding need to provide further detail on what their funding needs are, what they would do with marginal funds, and a brief sketch of how they expect to have an impact.

I just updated my profile to give an example of what I would mean (in my case, it's an example of how funding me does not lead to impact).

Diego's profile is the closest to what I'm looking for, though I'd want to know more about what he plans for each of his projects and why his living expenses are >$50K a year. (This is not meant to call out Diego in particular, so sorry if it comes across that way!)

I definitely would also encourage donors to add more to their profiles! Many are still quite minimal and could be improved.

On the other hand, I think there's not much incentive right now with so few people donating. This is a good reason for others to donate more, specifically to those with nice profiles (if they want to encourage such a thing).

Not asking you Peter, just people reading this in general.

I'll soon edit my profile with my EA plan.

My living costs are much closer to 25k a year than to 50k, actually. Additional donations, beyond my current funding, would go into other things described in my Patreon account. I believe my Patreon account maps better onto what you think people should be doing on Gratipay. I posted recently on the EA Facebook group about shifting to Patreon while costs are low, and some people agreed that was a good idea.

Regardless, there is a much more important issue from my point of view: donating to individuals is much cheaper than to institutions, or universities, because individuals don't have many of the fixed costs and bureaucratic costs those institutions bear. Even if it were the case that a donee wanted to receive 50k for their living expenses, this is still less than the overall cost of employing an individual in most of the EA organizations to which most people make donations. Young EAs cost their institutions (and thus their donors) up to 90k per year, which is enough money to sustain three EAs even if they live in the most expensive areas of the planet. The math is simple and clear.

People, however, are wary of donating to individuals at the moment. I suspect over time this will become much more common among EAs, since many of the donees already have, or will build, a track record of doing good work and being accountable and reliable in a timely manner. I'm trying to spearhead this shift in part because I'm one of those who currently can: I still have some funding for the next few months, and I've been sponsoring myself as an EA for years. Most people can't afford to participate in this beta test, which is why I invite everyone who wants to work as an EA and can still sustain themselves for a few months to join Patreon or Gratipay and help increase the flow of EA-to-EA direct money on the receiving end, not just the donation end.

Regardless, there is a much more important issue from my point of view: donating to individuals is much cheaper than to institutions, or universities, because individuals don't have many of the fixed costs and bureaucratic costs those institutions bear.

First, I'm fully in support of this kind of model, and I hope that maybe someday I am funded by it. But I don't think your core thesis is correct, because:

1.) Some organizations don't have high costs. The only organization I have knowledge of, Charity Science, does not have such costs.

2.) When you're funding an organization on the margin, you aren't funding the fixed or bureaucratic costs.

3.) These costs carry with them corresponding advantages of coordination and shared access to resources.

4.) Donating to an organization can be done tax deductibly, which, in the US, is often a savings of 25%.

-

Young EAs cost their institutions (and thus their donors) up to 90k per year

Can you explain how you arrived at this figure?

I'd be interested in joining Gratipay but I don't seem well suited for it as I'm not currently working on a specific project that requires funding.
