[I have medium confidence in the broad picture, and somewhat lower confidence in the specific pieces of evidence. I'm likely biased by my commitment to an ETG strategy.]
Earning to Give (ETG) should be the default strategy for most Effective Altruists (EAs).
Five years ago, EA goals were clearly constrained a good deal by funding. Today, enough money is going into far-future causes that vetting and talent constraints have become at least as important as funding. That led to a multi-year trend of increasingly downplaying ETG, which was initially appropriate but has gone too far.
Nothing in this post should be interpreted to discourage people from devoting a year or two of their life, at some early stage, to searching for ways that they can do better than ETG. A 10% chance of becoming a good AI safety researcher, or the founder of the next AMF, is worth a good deal of attention.
I'm assuming for the purposes of this post that nonhuman animals have negligible moral importance, since I'm mainly aiming at people who focus on human wellbeing. If I were to alter that assumption, I'd be a lot more uncertain about whether any specific cause should get more funding, but I'd also see many additional funding opportunities that look like low-hanging fruit.
I'll also assume that we should expect there will be less low-hanging fruit in the future, so that philanthropic money should be spent soon. That assumption ought to be somewhat controversial, and I'll discuss the alternative near the end of this post.
I will try to err in this post in the direction of being too pessimistic about our ability to distinguish good charities, in order to demonstrate that the value of ETG is not strongly dependent on the wisdom of donors.
Is there an efficient market in charity?
If there are well-endowed charities that are sufficiently wise and altruistic (Open Philanthropy and the Gates Foundation?), then maybe ETG is unimportant because we can count on them to do all the funding.
I find that less plausible than the idea that the two best VCs can fund all the startups that need funding. The VC world has better incentives than the EA world, and maybe better feedback. Yet I still see little reason for confidence that the right startups are being funded.
Also, philanthropy in general has a track record which suggests mediocre, but improving, efficiency. Those seem to be key points in the original arguments that GiveWell, Peter Singer, and Will MacAskill made when helping to create the EA movement. It would be somewhat surprising if a pattern involving billions of dollars disappeared quickly after being criticized.
These reasons suggest we should have a strong prior that philanthropic institutions are not close to being as efficient as the stock market.
If we had wise billionaires who were funding all worthwhile charities, then most EAs should do direct work. But we should expect any really small group of funders to have quirks, blind spots, and selfish desires to avoid spending weirdness points. So we should expect them to leave funding gaps that can be filled by somewhat average EAs, and also plenty of need for further vetting by above-average EAs.
Near term opportunities
Here are some educated guesses about which EA charities can productively use more money this year:
Open Philanthropy's plans to fund at most half of the needs of most good charities creates a presumption that some of them won't be fully funded, unless there's some other large charity that seems willing and able to evaluate them. I see some large charities that partly qualify, but I don't see them as having a broad enough scope to fund all the opportunities of this type.
It's possible that some of these charities have been expanding as fast as their skill/manpower allows, and look underfunded because donors wait to donate until the charities need funds. But this seems to require a somewhat implausible degree of wisdom on the part of donors. I'm confident that the startup world hasn't worked that well, so why should it work better with young charities?
Maybe those opportunities will be fully funded soon. Many people should look a decade or so into the future before deciding whether to pursue an ETG strategy. Also, it would be nice to know whether ETG will become irrelevant if the Gates Foundation spends most of its money optimally and soon.
So I've decided that these near-term opportunities aren't all that important to my main point, and I'll focus instead on a longer term outlook.
Bigger, harder causes
I won't try to identify the best causes in this section. Instead, I'll describe causes that are beneficial enough and verifiable enough to justify an ETG strategy. I expect that by the time there are enough donors to fund the causes in this section, there will be more wisdom available to donors, who will find opportunities that are better than many of these. So please take this section as being somewhat closer to a worst case analysis than to a prediction of what EAs will fund.
One or two of these opportunities may surprise me by being cheap to solve, but I expect I've chosen hard enough problems that some of them will be expensive to solve.
My first category is prizes for medical advances.
For example, an institution might offer $50 billion for a cure for aging, or for a general-purpose cure for cancer (maybe with partial payments for significant progress).
No, I don't mean offering rewards for a drug that would delay aging or cancer by a few months - there's plenty of money going into that already (mostly treating Western disease or a small subset of cancers). Practically all of that appears to be following a paradigm that shows little promise of curing aging. I mean something more audacious, in the sense that Aubrey de Grey's approach is audacious.
Producing a treatment that cures aging is likely more expensive and failure-prone than producing a drug that yields a small benefit in a large number of people, yet we've got a system that rewards the two about the same. That means there's little pressure to focus research on cures for aging. Prizes can be much more result-oriented than current funding of medical research, so I expect them to provide opportunities to redirect some of that research to the most valuable cures.
Aging is an area where it's unreasonably hard for most of us to evaluate whether any one research program is doing a good job, and I suspect that most investors in this area are doing poorly. But with prizes, the funders only need to evaluate results after they've been achieved, and to verify that they're giving the prizes to the people responsible for those results. There still needs to be somebody with skill at predicting which research will succeed, but it can be a VC-style expert who is reacting to good financial incentives.
Prizes aren't as efficient as direct grants to the best research programs, so it takes a fair amount of humility for a funder to choose prizes. I expect it requires an unusual person to adopt arrogant goals like curing aging, without also having arrogant beliefs about their understanding of which strategies are worth funding.
Sarah Constantin estimates that aging research is competitive with GiveWell's current top charities. I'll be pessimistic here, and estimate that it will be more like 1/20 as cost effective as GiveWell's charities, due to a combination of inherently low tractability and our poor ability to identify the best research strategies. That seems likely to look competitive after the next $10 billion of low-hanging philanthropic fruit has been picked.
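To make the arithmetic behind this pessimistic estimate explicit, here is a minimal sketch using the post's own figures (the $5k-$10k-per-life range appears later in the post; the 1/20 penalty is the pessimistic assumption stated above):

```python
# Rough cost-effectiveness arithmetic using the post's own figures.
# GiveWell-style top charities: roughly $5k-$10k per life saved.
givewell_cost_per_life = (5_000, 10_000)

# Pessimistic assumption: aging research is ~1/20 as cost-effective
# as GiveWell's top charities.
penalty = 20

# Implied cost per life-equivalent for aging research.
aging_cost_per_life_equiv = tuple(c * penalty for c in givewell_cost_per_life)
print(aging_cost_per_life_equiv)  # (100000, 200000)
```

The resulting $100k-$200k per life-equivalent is the figure being compared against the world after the next $10 billion of low-hanging philanthropic fruit has been picked.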
Ok, some of you are probably saying I'm not being pessimistic enough. Maybe aging and cancer are intractable problems, and prizes for them won't attract any legitimate treatments. I don't have a simple way to convince you to trust my intuitions about their tractability.
It's likely that there are some other medical problems that are tractable, and where multi-billion dollar prizes would be productive.
If a cure for aging is intractable, then there's likely plenty of room for improvement in quality of life for the elderly. Note that hunter-gatherers seem to not get certain debilitating age-related diseases such as diabetes and dementia, and the elderly tend to remain active until a few days before death (see Lindeberg's Food and Western Disease).
Mental health seems like another area where there's large room for improvement, but also large uncertainty about what strategies are tractable. Alas, I feel rather uncertain whether progress in mental health will be constrained by money or by something else.
I admit that the difficulty of choosing the right kinds of prizes weakens my argument by a modest amount. Yet even if half the prize money goes to poorly thought out goals, this approach will still shift medical research into directions that focus much more on maximizing benefits than is currently the case.
My next idea is Michael Kremer's proposal for drug patent buyouts (or see this version that's a bit more oriented toward laymen), under which a wealthy institution buys most drug patents and puts them in the public domain. This would dramatically reduce the problems associated with patent monopolies.
For example, drug companies sometimes try to sell drugs to poor countries at a relatively low price, and recoup their drug development costs by charging much higher prices in wealthy countries. Alas, that leads to drug smuggling. This makes it expensive to sell drugs in poor countries, likely leading to a situation where people in poor countries can't afford drugs that they would be able to afford if they could guarantee they wouldn't resell the drug. Patent buyouts can eliminate this perversity for a substantial fraction of drugs.
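The smuggling problem above is essentially an arbitrage condition: tiered pricing only holds up when the price gap between markets is smaller than the cost of moving the drug between them. A minimal sketch, with all prices hypothetical:

```python
# Tiered drug pricing is undermined whenever the price gap between
# markets exceeds the cost of moving the drug between them.
def tiered_pricing_viable(rich_price, poor_price, resale_cost):
    """Return True if there's no profit in buying cheap and reselling dear."""
    return rich_price - poor_price <= resale_cost

# Hypothetical numbers: a $90/unit gap with $20/unit smuggling cost
# invites arbitrage, so the company can't safely sell cheap to poor countries.
print(tiered_pricing_viable(rich_price=100, poor_price=10, resale_cost=20))  # False

# A patent buyout that puts the drug in the public domain collapses the
# price gap toward marginal cost, removing the arbitrage incentive.
print(tiered_pricing_viable(rich_price=12, poor_price=10, resale_cost=20))   # True
```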
This strategy has some risks if you try to implement it with a budget that's comparable to the market value of the drugs, so I'm reluctant to recommend attempting it with a budget as small as that of the Gates Foundation.
Global warming is likely to cause widespread harm under standard forecasts, and we should worry more about small risks of a larger catastrophe from unexpected weather changes that might be triggered by warming.
There are a number of interventions that seem promising, such as preventing deforestation, reforestation, and albedo enhancement.
I don't know which interventions of this kind ought to be funded, but this seems like an obvious candidate for spending $10+ billion per year if we're running low on other philanthropic opportunities.
Seasteads / Charter Cities
For $100 billion or so, we could build a bunch of new territories that will provide people with more options to move away from regions with bad weather or bad governments. I.e. seasteads, or maybe charter cities if they're politically feasible.
In keeping with my pessimistic assumptions, I'll ignore the standard hopes for seasteads, and assume for this post that seasteads will mainly just provide real estate and fairly average governance, in order to enable the world's less fortunate people to lift themselves up to something close to the global average.
Universal Basic Income
A universal basic income has some potential to protect against technological unemployment, and to do a relatively efficient job of eliminating poverty, without the incentive problems of means-testing.
I'm not suggesting a political movement, since I'm trying to err in this post on the side of pessimism about our ability to identify good institutions, and it's too easy for political movements to end up being bent toward other goals.
This might be achieved by a large expansion of GiveDirectly.
There have been some concerns that GiveDirectly has problems with money going to positional goods. I'm unsure whether we should be concerned about that, but I expect those problems would diminish if GiveDirectly expands donations to give money to everyone in a given village, or larger region.
Manna is another example of a strategy that might lead to a UBI, although I don't want to endorse that particular organization.
Some of these causes may be taken on by major governments, making ETG irrelevant for those causes. But governments don't have a particularly great track record compared to the best charities. I'm betting that governments will either ignore some big causes of this nature, or will bungle the solutions in ways that leave needs for more EA money.
Why isn't the Gates Foundation funding these? Here are some guesses:
- the foundation expects to find enough better strategies to use up all their money before the low-hanging fruit is exhausted [requires moderate optimism about the foundation's abilities].
- the foundation expects that many of their recipients can't handle more money effectively today, but that some recipients will soon expand their ability to handle more.
- the strategies aren't prestigious enough, or look too weird.
- they are following standard philanthropic procedures for deciding where to spend money, and something like peer pressure has discouraged them from evaluating the alternatives.
- the foundation is unwilling to admit that there are important limits to how many projects their employees can supervise. Switching from direct grants to prizes would bypass some of those limits, at the cost of requiring more humility than I expect from someone who makes a career out of evaluating charities.
- See also some comments by Carl Shulman.
More low-hanging fruit next year?
How would the value of ETG be affected if, instead, we assume that we'll find better giving opportunities in the future?
It implies a greater need for people to look for those opportunities, but still, the history of philanthropy suggests that only a tiny number of people succeed in creating a better opportunity than philanthropy previously had.
"Generate new charity" is much less amenable to decomposition into easy parts than, say, microprocessor design, so if many people attempt it, it will end up more like a competition to create the next Google or Facebook.
Maybe it's good for a hundred new people each year to enter some sort of competition to find/create the next EA charity, or try to do x-risk research, even if less than one per year will succeed. But most of these people should notice after a few years that they're not better at it than the people who created AMF, FHI, etc., and fall back on another strategy.
"The best way to save lives or reduce suffering" should be expected to produce a much narrower set of answers than "something consumers will pay for", so we should expect there to be a much smaller number of EA charities than good businesses.
Are EAs more productive at direct work?
I imagine that a large fraction of EAs expect to be more productive in direct work than in an ETG role. But I'm not too clear why we should believe that. The skills and manpower needed by EA organizations appear to be a small subset of the total careers that the world needs, and it would seem an odd coincidence if the comparative advantage of people who believe in EA happened to overlap heavily with the needs of EA organizations.

Remember that EA principles suggest that you should donate to approximately one charity (i.e. the current best one). The same general idea applies to the need for talent: there are a relatively small number of tasks that stand out as unusually in need of more talent. Talent is more complex than money, so that doesn't mean there's only one kind of talent that matters. But a heuristic which treats most talent as equally valuable seems as suspicious to me as a heuristic that treats all charities as equally valuable.
I can imagine that some of the hopes for doing direct work are due to selfish desires to signal one's EA credentials. That kind of selfishness seems fine if done honestly, but I want to keep that goal separate from altruistic goals.
What about vetting?
Don't we need more people doing vetting, and isn't that more like direct work than it is like ETG?
I'm fairly confident that a suboptimal amount of vetting is being done. But vetting more vetters doesn't seem any easier than vetting charities that do object-level work. If anything, it's harder.
As an analogy, I as an investor believe there's lots of money to be made by good VCs and VC-like funds, and I've seen a fair number of opportunities to invest in VC or similar funds. Yet I haven't invested in any such funds, because the ones that want more money are ones that I don't expect to be able to evaluate with a reasonable amount of effort.
When investors become eager to trust new VC funds, as often happens near stock market peaks, amateurs enter the field and run VC funds without acquiring much skill, and end up with poor returns on investment. If EAs were that eager to trust new people to disburse charitable donations, the same kind of problem would arise, and would be harder to detect, since charitable donors don't have feedback mechanisms that are as hard to fake as getting double their investment back.
There are some tough calls that need to be made, by anyone doing ETG, about comparing safe donations to donations that look more promising but run higher risks of biased evaluation. Still, there is a range of plausible-looking answers for which we ought to be moderately confident that ETG will remain valuable for quite some time.
Most likely there will be a trillion dollars or more in opportunities remaining for the foreseeable future. Trillions per year if we include a fully charity-driven UBI, but I'll guess that we'll end up instead with a patchwork of basic income programs that are funded by charities in some nations, and funded by governments in others.
My best guess is that the current marginal cost of saving a human life is around $5k to $10k, and that under mildly optimistic assumptions about the growth of EA style charity, it will rise to $100k in a decade or so. $100k is well within the range of costs that lead me to encourage ETG.
It's quite possible that an ETG strategy will produce poor results, but it sure looks like the most likely source of failure is poor choices of where to donate, not a shortage of low-hanging fruit.
Or maybe AI will render this all irrelevant soon. But that's not an ETG-specific risk.
 - [Highly technical point, which most readers shouldn't worry about:] The joint randomization for substitutes works well if there's unlimited money to buy patents. I'm worried about what happens when a charity with a $10 billion budget tries to buy a patent that's worth about $5 billion. If patent holders with several $2 billion patents claim, with some exaggeration, that those other drugs are substitutes for the drug being bought, then the charity faces problems with either being unable to afford the purchase, or political fallout from unfairly(?) undercutting sales of some of the drugs. I'm unclear whether this causes problems in realistic cases.
My take: rank-and-file-EAs (and most EA local communities) should be oriented around donor lotteries.
I think the "default action" for most EAs should be something like the following.
I don't think it's really worth it for someone donating a few thousand dollars to put a lot of effort into evaluating where to donate. But if 50 people each put $2000 into a donation lottery, then they collectively have $100,000, which is enough to justify at least one person's time in thinking seriously about where to put it. (It's also enough to angel-invest in a new person or org, allowing them to vet new orgs as well as existing ones)
I think it's probably more useful for one person to put serious effort into allocating $100,000, than 50 people to put token effort into allocating $2000.
This seems better than generic Earning to Give to me (except for people who make enough that donating, say, $25,000 or more is realistic).
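The mechanics of such a lottery are simple: each donor's chance of winning is proportional to their contribution, and the winner allocates the whole pot, so every donor's expected allocation is unchanged. A minimal sketch (donor names and amounts hypothetical):

```python
import random

def run_donor_lottery(contributions, rng=random.Random(0)):
    """Pick a winner with probability proportional to contribution;
    the winner directs the entire pot. Expected allocation per donor
    equals their contribution, so the lottery is actuarially neutral."""
    donors = list(contributions)
    weights = [contributions[d] for d in donors]
    pot = sum(weights)
    winner = rng.choices(donors, weights=weights, k=1)[0]
    return winner, pot

# 50 donors each contributing $2,000 yields a $100,000 pot,
# and each donor has a 1-in-50 chance of allocating all of it.
contributions = {f"donor_{i}": 2_000 for i in range(50)}
winner, pot = run_donor_lottery(contributions)
print(pot)  # 100000
```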
What about donor coalitions instead of donor lotteries?
Instead of 50 people putting $2000 into a lottery, you could have groups of 5-10 putting $2000 into a pot that they jointly agree where to distribute.
Pros:
- People might be more invested in the decision, but wouldn't have to do all the research by themselves.
- Might build an even stronger sense of community. The donor coalition could meet regularly before the donation to decide where to give, and meet up after the donation for updates from the charity.
- Avoids the unilateralist's curse.
- Less legally fraught than a lottery.

Cons:
- Time consuming for all members, not just a few.
- Decision-making by committee often leads to people picking 'safe', standard options.
I like this idea, though to boost your signal I'd switch the "donor coalitions" for "donor crews," in reference to the Microsolidarity movement, which I hope will collide with the EA community soon enough.
In a nutshell, Microsolidarity argues for (1) a theory of social groups with more categories - those below - and (2) more organizational plans to consider different strategies for different categories. Therefore, I'd describe your strategy as experimenting with "donor crews" as opposed to the much more common "donor selves" where donors choose charities alone, or "donor crowds" where everyone settles on donating to GiveWell or some other common aggregator. I think there is wide-open space for EA strategies revolving around crews.
This certainly seems like a viable option. I agree with the pros and cons described here, and think it'd make sense for local groups to decide which one made more sense.
I also think there's some potential to re-orient the EA pipeline around this concept. If local EA meetups did a collective donor lottery, then even if only one of them ends up allocating the money, they could still solicit help from others to think about it.
My experience is that EA meetups struggle a bit with "what do we actually do to maintain community cohesiveness, given that for many of us our core action is something we do a couple times per year, mostly privately." If a local meetup did a collective donor lottery, then even if only one person wins the lottery, they could still solicit help from others to evaluate donor targets, and make it a collective group project (while being the sort of project that's okay if some people flake on).
My intuition is that the EA Funds are usually a much better opportunity in terms of donation impact than donor lotteries, where one person does independent research themselves (instead of relying almost entirely on recommendations), unless you think you can do better (according to your own ethical views) than the researchers for each fund. They typically have at least a few years of experience in research in their respective areas, often full-time, they have the time to consider many different neglected opportunities, and they probably get more feedback than you'll seek. I think the average EA is unlikely to have the time or expertise to compete, especially if they're working full-time in an unrelated area. If your ethical views are similar to those of the grantmakers of your preferred EA fund, I'd expect every dollar not given to the fund (either directly or after winning the lottery) would have been better given to the fund, and the difference could be pretty big.
Of course, you could enter a donor lottery and, if you win, just give it all to an EA fund without doing any research yourself. I don't know if this would be better or worse than just donating directly to the EA funds. I don't think the argument for economies of scale really applies here, since the grantmakers are already working full-time on research in the areas they're making grants for.
Maybe a good approach would be to enter the lottery, and if you win, do research on charities, seeking feedback from the EA community and specifically the grantmakers of the EA fund you align most with, and then just donate everything to the fund. If your research is good enough, it'll inform their recommendations. Maybe this would be less motivating if you expect your research to not be used, but it could be more motivating, because of the feedback and the extra external pressure (to produce something valuable for the fund and to not look stupid).
"EA Funds ... have the time to consider many different neglected opportunities"
I just want to point out that the administrators of EA Funds are volunteers working other full time jobs.
Those other jobs often involve looking at many different opportunities, e.g. grantmaking, donor advising, prioritization research or charity evaluation. Global Health and Development has Elie Hassenfeld from GiveWell, and each of the others at least has a Program Officer from the OPP, either Lewis Bollard or Nick Beckstead.
My understanding (not confident) is that those people (at least Nick Beckstead) are more like advisors acting as a sanity check (or at least that they aren't the ones putting most of the time into the funds).
It seems to me like this is unlikely to be worse. Is there some mechanism you have in mind? Risk-aversion for the EA fund? (Quantitatively that seems like it should matter very little at the scale of $100,000.)
At a minimum, it seems like the EA funds are healthier if their accountability is to a smaller number of larger donors who are better able to think about what they are doing.
In terms of upside from getting to think longer, I don't think it's at all obvious that most donors would decide on EA funds (or on whichever particular EA fund they initially lean towards). And as a norm, I think it's easy for EAs to argue that donor lotteries are an improvement over what most non-EA donors do, while the argument for EA funds comes down a lot to personal trust.
I don't think all of the funds have grantmakers working fulltime on having better views about grantmaking. That said, you can't work fulltime on allocating the money if you win a $100,000 lottery either. I agree you are likely to come down to deciding whose advice to trust and doing meta-level reasoning.
I think it might be, in one way, better for the grantmakers if the total of donations they receive each year has lower variance, for decisions about bringing in more grantmakers or allocating more time to thinking about grants. I think many of the grantmakers already work more than full-time, so they may not be so flexible in choosing how much extra time they can spend on research for the grants. I suppose they could just save most of the "extra" donations for future disbursements, though.
Besides more talent (as Stefan added) and expertise (including awareness of a much larger number of opportunities) on average, I think grantmakers also have better processes in place for their research, e.g. more feedback. I think at least one of the following four will apply to almost all EAs:
1. They have different priors or ethical views from the grantmakers and these have a large impact on how good the different charities would look as opportunities, if they had the same information. I think this could apply to a significant proportion.
2. They would be roughly as good at research for grantmaking for one of the EA funds, considering also the time they'll have to think about it. This seems unlikely to apply to a significant proportion. I'd guess < 1% of EAs, and < 1% of EAs to which 1 doesn't apply.
3. They have (or will have) important information about specific opportunities the grantmakers wouldn't have that would be good enough to change the grants made by the grantmakers. I'd guess this would be much less than half of EAs, including much less than half of EAs to which 1 doesn't apply.
4. They should actually defer to the grantmakers.
So, for most EAs, if 1 doesn't apply to them, i.e. they don't differ too much in their priors and ethical views from the grantmakers of one fund, then they should be giving to that fund.
Of the others, three of the Animal Welfare fund's grantmakers work in charity research or donor advice roles: at a charity fund (Kieran Greig, Farmed Animal Funders), an org that gives donation advice (Natalie Cargill, Effective Giving), and a charity that does prioritization and charity foundation research (Karolina Sarek, Charity Entrepreneurship). The last one leads another grant program (Alexandria Beck, Open Wing Alliance).
Besides the fact that some people have much more experience, another consideration is differences in talent. My guess is that some people have much greater talent for researching donation opportunities than others.
My background assumption is that it's important to grow the number of people who can work fulltime on grant evaluation.
Remember that Givewell was originally just a few folk doing research in their spare time.
The "one charity" argument is only true on the margin. It would be incorrect to conclude from this that nobody should start additional charities—for instance, even though GiveWell's current highest-priority gap is AMF, I'm still glad that Malaria Consortium exists so that it could absorb $25m from them earlier this year. Similarly, it's incorrect to conclude from this style of argument that the social returns to talent should be concentrated in specific fields. While there may be a small number of "most important tasks" on the margin, the EA community is now big enough that we might expect to see margins changing over time.
Also, the majority of people who are earning to give would probably be able to fund less than one person doing direct work. If your direct work would be mostly non-replaceable, then ETG compares unfavorably to direct work. (Seems like e.g. 80k thinks that on the current margin, people going into direct work are not too replaceable.)
I agree with most of your comment.
>Seems like e.g. 80k thinks that on the current margin, people going into direct work are not too replaceable.
That seems like almost the opposite of what the 80k post says. It says the people who get hired are not very replaceable. But it also appears to say that people who get evaluated as average by EA orgs are 2 or more standard deviations less productive, which seems to imply that they're pretty replaceable.
To add to Ben's argument, uncertainty about which cause is the best will rationalize diversifying across multiple causes. If we use confidence intervals instead of point estimates, it's plausible that the top causes will have overlapping confidence intervals.
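The point about overlapping intervals can be made concrete with a small sketch. All the cost-effectiveness numbers below (value per dollar, with symmetric error bars) are made up for illustration:

```python
# Point estimates alone pick a single "best" cause; interval estimates
# can leave several causes statistically indistinguishable at the top.
# All numbers are hypothetical.
estimates = {
    "cause_A": (10.0, 4.0),  # (point estimate, +/- half-width)
    "cause_B": (9.0, 5.0),
    "cause_C": (3.0, 1.0),
}

def interval(name):
    mid, half = estimates[name]
    return (mid - half, mid + half)

def overlaps(a, b):
    lo_a, hi_a = interval(a)
    lo_b, hi_b = interval(b)
    return lo_a <= hi_b and lo_b <= hi_a

best = max(estimates, key=lambda n: estimates[n][0])
contenders = [n for n in estimates if overlaps(n, best)]
print(best)        # cause_A
print(contenders)  # cause_A and cause_B overlap; cause_C drops out
```

A point-estimate donor funds only cause_A; a donor who takes the uncertainty seriously has a case for splitting between A and B.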
This seems to rely on the assumption that existing prestigious orgs are asking for all the funding they can effectively use. My best guess is that these orgs tend to not ask for a lot more funding than what they predict they can get. One potential reason for this is that orgs/grant-seekers regard such requests as a reputational risk.
Here's some supporting evidence for this, from this Open Phil blog post by Michael Levine (August 2019):
I like the big thinking! I agree that there are many tens of billions of dollars we could spend as we work our way down the marginal cost effectiveness curve of existential risk mitigation. Some other things to include are biosecurity interventions, preventing supervolcanic eruptions, comet detection and deflection (much more expensive than asteroid detection and deflection).
See also Joey's Cause X Guide & its comments.
FWIW, all of the EA Funds except for the Global Health and Development one split their grants across multiple recipients, with many in the range $10K-20K, too. Looking at the most recent grants from the other 3 funds, I see 10 from the Animal Welfare fund, 13 from the Long-Term Future fund (mostly to individual EAs) and 9 from the Meta fund. Many of the groups receiving grants are pretty small, so the value of donations may vary a lot the more they get.
Yes, large donors more often reach diminishing returns on each recipient than do small donors. The one charity heuristic is mainly appropriate for people who are donating $50k per year or less.
You could just give to a fund, and indirectly, that's giving to several recipients. Maybe that should be thought of as giving to their bottom ranked recipients, though, and your donations would only causally contribute to a few of them.
I think that for some of us this is a basic assumption. I can only speak to this personally, so please ignore me if this isn't a common sentiment.
First, direct roles are (in principle) high-leverage positions. If you work, for example, as a grantmaker at an EA org, a 1% increase in your productivity or aptitude could translate into tens of thousands of dollars more in funds for effective causes. In many ETG positions, a 1% increase in productivity is unlikely to result in any measurable impact on your earnings, and even an earnings impact proportional to the productivity gain would be negligible in absolute terms. So I tend to feel like, all other things being equal, my value is higher in a direct role.
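The leverage point above can be made concrete with a toy comparison. The $5M grant budget and $200K salary below are purely illustrative assumptions, not figures from the post:

```python
# Hypothetical figures for illustration only.
grants_directed = 5_000_000   # annual grants a grantmaker helps allocate (assumption)
etg_salary = 200_000          # a strong ETG salary (assumption)

direct_gain = 0.01 * grants_directed  # 1% better allocation -> $50K/yr of value
etg_gain = 0.01 * etg_salary          # 1% productivity, fully captured -> $2K/yr

# Even granting ETG the generous assumption that productivity gains
# translate fully into earnings, the direct role's 1% is ~25x larger.
print(direct_gain, etg_gain)
```

The ratio only illustrates the "all other things being equal" claim; in practice the comparison depends heavily on how much funding a direct role actually influences.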
But I don't think all other things are even equal. There seems to be an assumption underlying the ETG conversation that most EA-capable people are also capable of performing comparably well in ETG roles. In a movement with many STEM-oriented individuals, this may be a statistical truth, but it's not clear to me that it's necessarily true. Though it's obviously important to be intelligent, analytical, rational, etc. in many high-impact EA roles, the skills required to get and keep a job as, say, a senior software engineer, are highly specific. They require a significant investment of time and energy to acquire, and the highest-earning positions are as competitive as (or more competitive than) top EA jobs. For EAs without STEM backgrounds, this is a very long road, and being very smart isn't necessarily enough to make it all the way.
Some EAs seem capable of making these investments solely for the sake of ETG and the opportunity for an intellectual challenge. Others find it difficult to stay motivated to make these investments when we feel we have already made significant personal investments in building skills that would be uniquely useful in a direct role and might not have the same utility in an ETG role. Familiarity with the development literature, for example, is relatively hard-won and not particularly well-compensated outside EA.
I recognize that there's a sort of collective action problem here: there simply cannot be a direct EA role for every philosophy MA or social scientist. But I wanted to argue here that the apparent EA preference for direct roles makes a good amount of sense.
I myself have split the difference, working as a data scientist at a socially-minded organization that I hope to make more "EA-aware" and giving away a fixed percentage of my earnings. I make less than I would in a more competitive role, but I believe there is some possibility of making a positive impact through the work itself. This is my way of dealing with career uncertainty and I'm curious to hear everyone's thoughts on it.
How did you arrive at the 100k figure?
Ra could be seen partially as everyone's credibility heuristics being far too highly correlated. People seem happy with exploration and diversity on the object level, but much less comfortable with exploration and diversity on the heuristic/methods level, due to a lack of clear signals on how to evaluate them.
I think the history of how much trouble MAPS had is instructive.
What is MAPS? (It's a hard term for Google)
Also there are other funding gaps in that space where ETG donors could make a big difference. Shoot me a message if you'd like more info.
This is a great comparison.
It doesn't look like OPP completely overshadows individual EA donations.
For a rough idea of scale, OPP made grants totalling ~$50-100 million (EDIT: corrected by Stefan to ~$120-170 million) in each of 2018 and 2019, and ~3,500 people took the EA survey in 2018. To match OPP, they would need to donate $14K-30K per survey respondent per year. Of course, they wouldn't have to match OPP; I think it's feasible for individual EAs to reach 5% of OPP's grants, e.g. if ~1,000 EAs donated ~$5,000 per year on average.
For the subsample they looked at from those 3,500, the median amount donated in 2017 was ~$700, the mean was ~$10,000, and the total was ~$18 million, with a maximum of $5 million from one respondent.
Also, the four EA funds each make grants totalling > $1 million per year, so > $4 million per year altogether.
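The rough arithmetic in the comment above is easy to reproduce. All figures below are the ballpark numbers quoted in this thread, not verified data:

```python
# Ballpark figures quoted above (approximate, not verified).
opp_low, opp_high = 50e6, 100e6   # OPP annual grants, USD (pre-correction range)
respondents = 3500                 # 2018 EA survey respondents

# Per-respondent annual donation needed to match OPP:
match_low = opp_low / respondents    # ~$14K
match_high = opp_high / respondents  # ~$29K

# Reaching 5% of OPP's high-end grants:
five_percent = 0.05 * opp_high       # $5M
# ... which ~1,000 EAs donating ~$5,000/yr would cover exactly.
assert 1000 * 5000 == five_percent
```

With Stefan's corrected figures (~$120-170 million), the match-OPP range rises to roughly $34K-49K per respondent, which makes the "reach 5%" framing look even more realistic than the "match OPP" one.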
It says in the May 2019 EA London Newsletter that Open Philanthropy Project granted over $170 million in 2018. It also says that they had made grants totalling almost $120 million during 2019 at that point. (I haven't verified these numbers myself.)
Yes. The post Drowning children are rare seemed to be saying that OPP was capable of making most EA donations unimportant. I'm arguing that we should reject that conclusion, even if many of that post's points are correct.
Hey, regarding aging: you might be interested to know I'm writing a series of articles to evaluate the cost-effectiveness of any project related to aging research. I've found that the cost-effectiveness of aging research might be much higher, in certain cases, than what Sarah Constantin found. That's mostly because I'm also accounting for the fact that new aging research brings the date of Longevity Escape Velocity (LEV) closer in time, and this increases the scope by many orders of magnitude. Each single year gained means averting 36,500,000,000 QALYs, using a conservative estimate (36,500,000 deaths from aging per year multiplied by 1,000 estimated years free of disability after LEV). Check out my profile for all the articles I've written.
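The QALY figure in the comment above follows directly from its two stated inputs, which are the commenter's own estimates rather than established figures:

```python
# Inputs are the commenter's estimates, not established figures.
deaths_from_aging_per_year = 36_500_000
disability_free_years_after_lev = 1000  # conservative per-person estimate

# QALYs averted per year that LEV is brought forward:
qalys_per_year_of_lev_advance = (
    deaths_from_aging_per_year * disability_free_years_after_lev
)
print(qalys_per_year_of_lev_advance)  # 36,500,000,000
```

Note that the result is only as strong as the assumption that LEV is reached at all; the multiplication itself is uncontroversial.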
Regarding your proposal of prizes: I think prizes for a general-purpose cure for aging should involve prizes for intermediate steps, which may look... less than incredible. All of the intermediate steps, even inside "paradigm-shift-like" plans like Aubrey de Grey's, will look similar from the outside: they will delay aging. Unless you are measuring whether tissues are actually rejuvenated, and you have a robust theoretical framework, you won't recognize what will be necessary in the long term. For cancer, the picture looks much better, but the incremental steps, even for a general-purpose cure, will probably belong to many different groups.
You will probably be pleased to know that recently Peter Diamandis' XPrize Foundation started a similar project: to give prizes for specific innovations identified as important in aging research and adjacent areas.
With almost all of those proposed intermediate goals, it's substantially harder to evaluate whether the goal will produce much value. In most cases, it will be tempting to define the intermediate goal in a way that is easy to measure, even when doing so weakens the connection between the goal and health.
E.g. good biomarkers of aging would be very valuable if they measure what we hope they measure. But your XPrize link suggests that people will be tempted to use expert acceptance in place of hard data. The benefits of biomarkers have been frequently overstated.
It's clear that most donors want prizes to have a high likelihood of being awarded fairly soon. But I see that desire as generally unrelated to a desire for maximizing health benefits. I'm guessing it indicates that donors prefer quick results over high-value results, and/or that they overestimate their knowledge of which intermediate steps are valuable.
A $10 million aging prize from an unknown charity might have serious credibility problems, but I expect that a $5 billion prize from the Gates Foundation or OpenPhil would be fairly credible - they wouldn't actually offer the prize without first getting some competent researchers to support it, and they'd likely first try out some smaller prizes in easier domains.