Executive summary: The author argues that, given the moral weight of conscious experience and the role of luck in determining life circumstances, a voluntary simplicity pledge tied to the world’s average income lets them meet their ethical duties while still maintaining a balanced and meaningful life.
Key points:
The apples-being-unbounded thing was just a brief intuition pump; it wasn't really connected to the rest of the argument.
I don't think the argument actually requires that different value systems can be compared in fungible units. You can just compare stuff that is, in one value system, clearly better than something in another value system. So, assume you have a credence of .5 in fanaticism and of .5 in bounded views. Well, creating 10,000 happy people given bounded views is less good than creating 10 trillion suffering people given un...
Nice write-up on the issue.
One thing I will say is that I'm maybe unusually optimistic on power concentration compared to a lot of EAs/LWers, and the main divergence is that I basically treat this counter-argument as decisive: it makes me think the risk of power concentration doesn't go through, even in scenarios where humanity is basically as careless as possible.
This is due to evidence on human utility functions showing that, for most people, returns to utility on exclusive goods for personal use diminish fast enough that altruism...
See also this post https://forum.effectivealtruism.org/posts/vqaK5y5ksiDSfMzqd/the-crisis-ea-cannot-afford-to-ignore from a former USAID lead. It contains some other possibilities.
The real answer is that war and genocide aren't conditions in which a randomised controlled trial can work.
Your client may be interested in Dabanga https://www.dabangasudan.org/en/about-us for Sudanese media reporting, which relies on external funding as it operates from exile and is probably saving a lot of lives via shortwave radio broadcasting so displaced civilians can avoid travel routes where they get murdered.
"I don’t think there’s an especially important sense in which “my” money is mine; I think the state would be justified in expropriating and redistributing way more of my income.[3]"
I've been interested to see this (or a similar) sentiment expressed over a number of posts which was quite unexpected!
Agreed that extreme power concentration is an important problem, and this is a solid writeup.
Regarding ways to reduce risk: My favorite solution (really a stopgap) to extreme power concentration is to ban ASI [until we know how to use it well], a solution that is notably absent from the article's list. I wrote more about my views here and about how I wish people would stop ignoring this option. It's bad that the 80K article did not consider what is IMO the best idea.
Londoners!
@Gemma 🔸 is hosting a co-writing session this Sunday, for people who would like to write "Why I Donate" posts. The plan is to work in poms, and publish something during the session.
For as long as I can remember, I've struggled with the idea that I'm among the wealthiest people in the history of the human race. I have goals and life projects that most people would never even dream of having the opportunity to pursue. Who am I to have deserved such a privilege among other people? What makes me so special? I'm not special, and the fact that I have all this privilege and wealth fills me with guilt. By donating a portion of my income to charity and contributing to efforts to help make the world a better place, these feelings of guil...
Hi Ian,
To be honest, the prompt was actually quite simple: I asked Gemini 3 to review the PDF presentation and draft a participatory commentary. My goal was simply to invite those interested in effective altruism to collaborate on a course initiative.
After reviewing the output and tweaking it on my phone, I saw that the content was valuable and that using AI support acts as a helpful bridge, not a rule-breaker. For me, this is a purely altruistic effort stemming from my journey into Effective Altruism.
I would invite you to reconsider the merit of the artic...
I think on the racism front, Yarrow is referring to the perception that the reason Moskovitz won't fund rationalist stuff is that either he thinks a lot of rationalists believe Black people have lower average IQs than whites for genetic reasons, or he thinks that other people believe that and doesn't want the hassle. I think that belief genuinely is quite common among rationalists, no? Although there are clearly rationalists who don't believe it, and most rationalists are not right-wing extremists as far as I can tell.
Thanks for sharing!
What would you do to decrease the uncertainty about interspecies comparisons of expected hedonistic welfare as much as possible with 1 k$, 10 k$, 100 k$, 1 M$, and 10 M$? The picks should account not only for the outcomes of the research which was directly funded, but also for any additional research that is done to decrease the uncertainty further (supported by other funds).
I think Ambitious Impact (AIM), Animal Charity Evaluators (ACE), and the Animal Welfare Fund (AWF) use the welfare ranges initially presented by Rethink Priorities (RP),...
Thanks Charlotte!
Since this is still visible on the frontpage, I've changed the title. @Arthurwantstoknowmore, feel free to change it or suggest an alternative.
Hey Arthur! I don't have any valuable advice to give you, but I just wanted to say I was also a BFI student in France who finished their Global Affairs project and graduated just last year. I didn't really know what project idea to pick and wish I would've had the foresight to pick AGI since it would've been a lot more interesting than what I picked, so good for you! Good luck on your search for your Global Partner, good luck on your baccalaureate and try not to stress too much despite French schools being very stressful as we know :)
Option B clearly provides no advantage to the poor people over Option A. On the other hand, it sure seems like Option A provides an advantage to the poor people over Option B.
This isn't clear to me.
If the countries in question have been growing much slower than the S&P 500, then the money at that future point might be worth far more to them than it is now. And they aren't going to invest in the S&P 500 in the meantime.
Maybe I'm being too facile here, but I genuinely think that even just taking all these numbers, making them visible in some place, and then taking the median of them, and giving a ranking according to that, and then allowing people to find things they think are perverse within that ranking, would be a pretty solid start.
I think producing suspect work is often the precursor to producing good work.
And I think there are enough estimates that one could produce a thing which just gathers all the estimates up and displays them. That would be sort of a survey...
I appreciate the correction on the Suez stuff.
If we're going to criticise rationality, I think we should take the good with the bad. There are multiple adjacent cults, which I've said in the past. They were also early to crypto, early to AI, early to Covid. It's sometimes hard to decide which things are from EA or Rationality, but there are a number of possible wins. If you don't mention those, I think you're probably fudging the numbers.
...For example, in 2014, Eliezer Yudkowsky wrote that Earth is silly for not building tunnels for self-driving
What a wonderful idea! Mayank referred me over to this post, and I think EA at UIUC might have to hop on this project. I'll see about starting something in the next month or so and sharing a link to where I'm compiling things in case anyone else is interested in collaborating on this. Or, it's possible an initiative like it already exists that I'll stumble upon while investigating (though such a thing may well be outdated).
I think this point is potentially significant, but the post is clearly LLM-generated, and thus, most of the paragraphs don't add much beyond the initial point of "there's no Script of Truth and it depends on the person's context". In practice, I have no clear examples of people making wrong choices based on overconfident EA advice - in fact, my experience has been the opposite: people don't want to give high-level advice, because they think it depends too much on the options that are available to me, and they couldn't choose from there. Sure, counterexampl...
A few points:
On the margin I'd expect more AI safety donations from them. But any guess as to how much the cost-effectiveness may change for the health & biosecurity areas?
I’d initially think there is a lot of room to absorb more funding with…
-Malaria vaccines
-Near HIV vaccine
-Chronic diseases (https://ourworldindata.org/causes-of-death)
-Sentinel / biosecurity global disease monitoring system
-Advanced Market Commitments for various vaccines & tests (https://blog.jacobtrefethen.com/10-technologies-that-wont-exist-in-5-yrs/)
Also promotion of more free trade alway...
All of these sound super exciting. Small note- the website for First Embrace is pretty scrappy-feeling and there's a reasonable number of spelling/grammar issues which, were I not already familiar with AIM's work, would (?unfairly) make me less confident in the charity itself.
Examples:
By the way, the job market is getting worse while oil & inflation aren’t likely to see increases due to decreased demand/a recession.
A bad labor market & no extra inflation mean likely more Fed cuts, which makes it more likely the bubble will extend a bit further. I've continued to invest in AI for now but I plan to build a hedge position sometime in 2026.
So let's say we are targeting population P - these are the recipients. P is the population of people most in need who will be around in 100 years. Most of them do not currently exist. We want to spend money to help P, either in the form of direct cash transfers or health interventions.
We can do that by investing our money and then handing it out to the individuals in P once they come into existence. This is option A, aka the patient philanthropy strategy.
We could also give to the parents or grandparents of P, some of whom are alive today, which we can call...
If by Option B you meant that the recipients would invest most or all of the cash transfers in index funds, why is Option A preferable?
If the answer is they’re too desperately poor to not spend a large portion of the money on consumption, then rather than respecting the global poor’s rational preferences, this is a paternalistic argument.
It seems very implausible that there will be any low-income countries (~$1,100 per capita GDP or less) in 50 years that are not currently low-income countries. So, donating to people in low-income countries now is a sure thing.
You can make a slightly more complicated version of the rational preference argument to also answer this objection, but that added twist seems like an unnecessary complication, given what I just said above.
see my previous comment:
- Option A: Put money in an index fund, let it grow, spend it in 100 years
- Option B: give it to people who will spend some of it now, invest some, pass on some to their children, who will in turn spend some of it, invest some of it, and pass it onto their children, leaving some for their grandchildren to spend in 100 years.
Option A leads to a bigger counterfactual increase in spending-100-years-from-now which is what we care about in this (admittedly contrived) example
Giving it to their ancestors is choosing option B
One of the benefits of patient philanthropy is that it allows you to select the people to receive your money in, say, 50 years.
Assume the poorest people in the world are in Ghana. There is no guarantee that the poorest people in the world will be in Ghana in 50 years. If we want to help people in Ghana in 50 years, your two arguments strike me as quite plausible. However, if we want to help the global poor in 50 years, donating to Ghanaians seems much less likely to maximize this.
Thanks for answering so thoroughly. I'm really grateful for this.
Your response makes sense to me. However, if today you had to decide between earning to give (suppose you could donate $100,000 a year) and working directly at EA organisations, how would you make the decision given your donation ability and talent?
Of course if you have high talent you should work directly, but how do you decide if you only have average or low talent in your cause area?
Thank you for asking this question! I definitely agree we should be exploring EA's treatment of mid-size donors and whether we are ready for them (especially in the context of EA fund diversification). I'm not confident that we need more research tools or rankers for such donors - I think there are already many resources for those (e.g. Giving What We Can provides a wide range of recommendations across cause areas for such donors, not to mention others like GiveWell, Giving Green, Founders Pledge, ACE, Longview Philanthropy, Ultra Philanthropy, Ark Philanthr...
If you think that investing in index funds is sure to lead to the best outcome, why not give each recipient enough money so that they can invest in index funds? Or otherwise arrange it so that the recipients own and control the capital? Is it plausible to think that the donor owning the capital for 100 years is preferable to the recipient owning the capital for 100 years? What advantage could possibly accrue to the recipient from that arrangement?
...Not necessarily. Let’s say a patient philanthropy foundation wants to expropriate, say, half of the inherited wealth between this generation and the next (in order to invest it for a future generation). The current generation would rationally prefer this not to happen. Conversely, the current generation would prefer to receive from the foundation an amount of wealth equivalent to half the wealth the next generation will be able to inherit.
The fundamental point is that investing wealth to accumulate wealth might make your impact numbers go up, because of an
The rational preference argument only applies for giving to the current generation of recipients at some later point in their lives.
Not necessarily. Let's say a patient philanthropy foundation wants to expropriate, say, half of the wealth that would be inherited from this generation by the next (in order to invest it for a future generation). The current generation would rationally prefer this not to happen. The next generation would also rationally prefer this not to happen. Conversely, the current generation would rationally prefer to receive from th...
The rational preference argument only applies for giving to the current generation of recipients at some later point in their lives. If the most cost-effective generation to help is really, say, 3 generations in the future, we should save and then give to them instead.
The cost-effectiveness argument is more convincing but the case for patient philanthropy only requires there to be some large-enough region that stagnates growth-wise which unfortunately seems likely given the perennial nature of bad governance. I believe that if you make some simplifyin...
Serving my fellow man has always been a major source of personal meaning for me. I guess that makes me a do-gooder. From late childhood I had already committed myself to giving 10% of my income after taxes, but at a certain point I realized that money is actually one of the greatest things I have to offer (and probably the greatest thing I have to offer strangers). I have the luck and privilege of having more money to offer than most, and its decreasing marginal utility means I can help others without making a big sacrifice myself.
I still donate blood, and might donate a kidney someday, but I suspect that when I look back on my life I'll count my cash donations among my proudest accomplishments.
The object-level arguments here have merit, but they aren't novel and there are plausible counterarguments to them. It remains unclear to me what the sign of talking about these topics more or less openly is, and I do think there's a lot of room for reasonable disagreement. (I'd probably recommend maintaining roughly the current level of caution on the whole - maybe a little more on some axes and a little less on others.)
But on the meta-level, I think posting a public argument for treating a potential infohazard more casually - especially with a somewhat a...
My gut feeling based on knowledge, reasoning, and experience is that the low-hanging fruit like diet and lighting is quite low-impact and probably has low to middling cost-effectiveness — but I haven't done any math, nor any experiments.
If I had research bucks to spend on experimental larks, I would try to push the psychotherapeutic frontier. For example, I might fund grounded theory research into depression. Or I might do a clinical trial on the efficacy of schema therapy for depression — there have been some promising results, but not many studies.
I...
Full-length post here. Feel free to comment if you want or not comment if you don’t want.
I didn’t understand your argument about economic growth above. I was hoping you’d give an argument based on empirical data or forecasts rather than a purely theoretical argument (e.g. utils don’t really exist, the percentage chances assigned to spurring economic growth at different funding levels are completely arbitrary, the scenario is overall contrived). So, I wasn’t convinced by that. But I acknowledge there is high uncertainty with regard to future growth, and whe...
Thanks for the post, Jeff!
At a floated $300B valuation and many EAs among their early employees, the amount of additional funding could be in the billions. [...]
One way to get a sense of the impact of donating sooner is to imagine that others will donate $1M to my preferred charity this year, and $10M next year.
EA-related funding is around 900 M$/year. So thinking about donations to one's top organisation becoming 10 (= 10*10^6/(1*10^6)) times as large would make sense for expected EA-related funding in 2026 of 9 billion $ (= 10*900*10^6), 3 % (= 9*10^9/(...
Thanks for the relevant post, Nathan!
Come on folks, what are we doing? How is our wannabe philanthropist meant to know whether they ought to donate to AI, shrimp welfare, or GiveWell? Vibes? [2]
I am in the process of building such a thing, but this seems like an oversight.
Feel free to get in touch if you think I may be able to help with something.
Hi Elliot and Nathan.
I [Nathan] think that shrimp QALYs and human QALYs have some exchange rate, we just don't have a good handle on it yet.
I think being able to compare the welfare of shrimps and humans is far from enough. I do not know about any interventions which robustly increase welfare in expectation due to dominant uncertain effects on soil animals. I would be curious to know your thoughts on these.
...Oh, this [the point from Nathan quoted above] is nice to read as I agree that we might be able to get some reasonable enough answers about Shrimp
I think it depends on the time horizon. If catch-up growth is not near-guaranteed in 100 years, I think waiting 100 years is probably better than spending now. If it is near-guaranteed, I think the case for waiting 100 years is ambiguous, but there is some longer period of time which would be better.
And thanks for the recommendations. I perpetually feel I need to learn more about economics, but I never get around to reading about it.
I would probably add Thinking In Systems by Donella H. Meadows as another peripheral book since it tackles systems thinking, which can be applied to virtually any EA-related subject.
I don't think Option A is available in practice: I think the recipients will tend to save too little of the money. That's the primary argument by which I have argued for Option B over giving now (see e.g. here).
But with all respect, it seems to me that you got a bit confused a few comments back about how to frame the question of when it's best to spend on an effort to spur catch-up growth, and when that was made clear, instead of acknowledging it, you've kept trying to turn the subject to the question of when to give more generally. Maybe that's not how you s...
I thought of a way to sketch this out.
Let’s say I have $10 billion to donate.
Option A. I donate all $10 billion now through GiveDirectly. It is disbursed to poor people who invest it in the Vanguard FTSE Global All Cap Index Fund and earn a 7% CAGR. In 2126, the poor people’s portfolios will have collectively grown to $8.68 trillion.
Option B. I invest all $10 billion in the Vanguard FTSE Global All Cap Index Fund for 100 years. In 2126, I have $8.68 trillion. I then disburse all the money to poor people through GiveDirectly.
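As a sanity check, the $8.68 trillion figure in both options falls out of the same compound-growth arithmetic. A minimal sketch, using the assumed inputs from the scenario above ($10 billion principal, 7% CAGR, 100 years):

```python
# Compound growth of the donated/invested capital under the
# scenario's assumptions (not a market forecast).
principal = 10e9   # $10 billion starting amount
cagr = 0.07        # assumed 7% compound annual growth rate
years = 100        # assumed horizon, 2026 -> 2126

final = principal * (1 + cagr) ** years
print(f"${final / 1e12:.2f} trillion")  # prints "$8.68 trillion"
```

Since the growth rate and horizon are identical in both options, the 2126 total is the same either way; the options differ only in who holds the capital in the meantime.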
Option B clearly provides no adv...
Executive summary: The author argues in an exploratory and uncertain way that alternative proteins may create large but fragile near-term gains for animals because they bypass moral circle expansion, and suggests longtermists should invest more in durable forms of moral advocacy alongside technical progress.
Key points:
2031 is far too far away for me to take an interest in a bet about this, but I proposed one for the end of 2026.
To be clear, "10 new OpenPhils" is trying to convey like, a gestalt or a vibe; how I expect the feeling of working within EA causes to change, rather than a rigorous point estimate
Though, I'd be willing to bet at even odds, something like "yearly EA giving exceeds $10B by end of 2031", which is about 10x the largest year per https://forum.effectivealtruism.org/posts/NWHb4nsnXRxDDFGLy/historical-ea-funding-data-2025-update.
A semi-regular reminder that anybody who wants to join EA (or EA adjacent) online book clubs, I'm your guy.
Copying from a previous post:
...I run some online book clubs, some of which are explicitly EA and some of which are EA-adjacent: one on China as it relates to EA, one on professional development for EAs, and one on animal rights/welfare/advocacy. I don't like self-promoting, but I figure I should post this at least once on the EA Forum so that people can find it if they search for "book club" or "reading group." Details, including links for joining each
You have the core ones, so I'll add a few that are slightly more peripheral.
Glad to see another reader here! You've got the core books. Previous posts on the EA Forum have explored similar things, specifically this infographic/poster, and this scraping of Goodreads. I'd broadly recommend skimming through the 'books' tag to see what else you turn up.
No, it is more confusing than anything. What matters is to have an impact. Impactful orgs have spent days and years of research on what is the most cost-effective and impactful. With a basic knowledge of EA principles you can identify which organizations meet your criteria of impact and which do not, and then apply. If you get a job there, you will learn by yourself how they think about impact and refine your view. If you prefer to go to a non-EA org (like the WHO or UN) to make it more EA-like, then I would certainly dive deeper into the principles and metrics of impact.
But in general I do not overwhelm people with philosophical conundrums. Humility, scout mindset, and solid skill-building are what matter to me.
Same question, I'm talking to a German donor looking for an American donor for a donation swap, let me know if you're interested!
They want to donate to Lightcone, and could donate to any effective charity listed on effektivspenden.de (cause areas: AI, bio, animals, climate, global health & poverty)
Some factors that could raise giving estimates:
Also, the Anthropic situation seems like it'll be different than Dustin in that the number of individual donors ("principals") goes up a lot - which I'm guessing leads to more grants at smaller sizes, rather than OpenPhil's (relatively) few, giant grants
You're right to be concerned about the incentives of cooperators who had their own legal exposure. But those witnesses stood up to days of cross-examination about specifics by SBF's lawyers. Those attorneys had access to documentary evidence with which to try to impeach the witness testimony -- so it's not like the witnesses could just make up a story here.
Thanks for asking, JD! It is also good to know Nick played a role in your interest!
I would like to see more research informing how to i) increase the welfare of soil animals, and ii) compare hedonistic welfare across species. Rethink Priorities (RP) has a research agenda covering the latter.
I am planning to donate 3 k$ over the next few months to a project on the welfare of springtails, mites, or nematodes. It is not public, but it will most likely start next year. I hope there will be more related projects in the future. People interested in funding resea...
They don't have to be in conflict. But people feel like they are. Why else don't people give more? Most people just aren't as excited about giving as they are about spending that money on other things in life.
Ideally giving springs from heart to hands. And the best way to motivate someone else is probably to point to the heart, and the excitement, not the obligation (unless it's an opening hook - the drowning child thought experiment, for example, is just really strong).
It was lovely chatting to you in Melbourne @Tristan Katz! Thanks for this write-up and sharing your thoughts with others:)
It seems to me that this argument derives its force from the presupposition that value can be cleanly mapped onto numerical values. This is a tempting move, but it is not one that makes much sense to me: it requires supposing that 'value' refers to something like apples, considered as a commodity, when it doesn't.
Begin with the intuition pump. For the intuition pump to function, we must grant that somebody would want to maximise the number of apples in the world; but I find it hard to see why anyone should grant this; this seems a pointless objective....
Agreed, titotal! As I commented, "technological development requires coordination, and coordination often requires technological development, so they cannot be analysed separately".
Some of these technological developments were themselves a result of social coordination. For example, solar panels are extremely cheap now, but they used to be very expensive. Getting them to where they are now involved decades of government funded research and subsidies to get the industry up and running, generally motivated by environmental concerns.
It seems like there are many cases where technology is used to solve a problem, but we wouldn't have actually made the switch without regulation and coordinated action. Would you really attribute the b...
The more I think about it, the less paradoxical it seems. I don't think those two are in conflict so much. I think we absolutely are compelled to give, but compelled from "Its the right and best thing to do" perspective, not from a "Do it even if you hate-it-kicking-and-screaming" perspective.
I think giving springing on a personal/heart level from gratitude, with the underlying principle being that, hey, this is the right/correct thing to do, might actually combine without much paradox. I think if you give because you begrudgingly feel obliged, you might be better off not doing it and checking your heart first?
No, I still work on this problem every day. My goal is to acknowledge a passing feeling many of us may have, the sometimes emotional difficulties of working hard on a big challenging problem, and the difficulty of figuring out what exactly to do. The conclusion I draw is that there is probably something I could do, despite it sometimes feeling like a big unsolvable problem. And I do it!
Thanks for the post, Whitney and Lily, and welcome to the EA Forum! I did not know there were lots of broilers in cages.
Scale: China raises ~15 billion broiler chickens annually, and ~10 billion (60–70%) spend their lives in cramped cages with even less space than battery-caged layers. This is 2x the total number of caged laying hens worldwide, with severe welfare implications.
What are your sources for this? I only found 2 links to the Lever Foundation across the whole post.
According to the Food and Agriculture Organisation (FAO), only 11.6 billion chicken...
I am in the process of building such a thing
Is it available online / open-source / etc by any chance? Even just as a butterfly idea.
Thanks for answering, makes sense.
@Philip Popien took the "Progressive Pledge", whereby he gradually increases his pledged percentage upon any salary increase.
Agreed, Emre! In addition, I have little idea about whether veganism increases or decreases animal welfare due to effects on soil animals. I would be curious to know your thoughts on this.
I'm really sorry you feel that way.
And yes, I can "relate". In fact, since I listened to it yesterday, I have not been able to gather the strength to focus on studying. I expect this to pass by tomorrow, but I do wonder what your goal is here. Are you trying to spread defeatist attitudes among EAs?
Can confirm, and happy to vouch.
Tax-effective Australian charities and funds:
My intention would be to gradually increase. So in the past I was earning just slightly above the median, but gave 15%. In general I think it's good to have an idea of what income you're comfortable with, and then increase donations significantly as you pass that point. But I set the bar really high here just because I'm aware that my perception of what is enough might change in different life-stages.
To be honest I think my model is super crude and probably not ideal, I would really like to see other models like this!
Thanks for sharing this. This is the first time I have heard of this kind of dynamic pledge.
You mention giving 10 percent once you earn the median income in your city, and 60 percent once you earn double the median. Have you also thought about what you would give between those two points, when you are earning more than the median but not yet double it? Would you keep giving 10 percent until you hit the higher threshold, or do you have a gradual plan in mind?
But if it’s $21 billion total in Anthropic equity, that $21 billion is going to be almost all of the employees’ lifetime net worth — as far as we know and as far as they know. So, why would this $21 billion all get spent in the next 2-6 years?
If we assume, quite optimistically, half of the equity belongs to people who want to give to EA-related organizations, and they want to give 50% of their net worth to those organizations over the next 2-6 years, that’s around $5 billion over the next 2-6 years.
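The estimate above is just the product of the stated assumptions; a quick sketch (all inputs are the assumptions from the text, not known figures):

```python
# Back-of-the-envelope estimate of near-term EA giving from
# Anthropic equity, under the text's optimistic assumptions.
total_equity = 21e9   # assumed total employee equity
ea_share = 0.5        # assumed: half held by EA-inclined donors
giving_rate = 0.5     # assumed: they give 50% over the next 2-6 years

donated = total_equity * ea_share * giving_rate
print(f"${donated / 1e9:.2f} billion")  # prints "$5.25 billion"
```

That $5.25 billion, spread over 2-6 years, is where the "around $5 billion" figure comes from.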
If Open Philanthropy/Coefficient Giving is spending $1 bil...
Dustin Moskovitz's net worth is $12 billion and he and Cari Tuna have pledged to give at least 50% of it away, so that's at least $6 billion.
I think this pledge is over their lifetime, not over the next 2-6 years. OP/CG seems to be spending in the realm of $1 billion per year (e.g. this, this), which would mean $2-6 billion over Austin's time frame.
Personally, I agree that pursuing research into soil animal welfare would likely be valuable. In general, I’m extremely impressed by how much salience you have brought to this issue over this past year. My intuitions around how to think about these animals currently seem to generally align with Bob Fischer’s thoughts.
Even if soil animals become the most cost effective use of marginal dollars, I still think we need opportunities in the animal space with high absorbency. I don’t think that this research could absorb millions in the way other animal orgs coul...