
I won the 2020/2021 $500,000 donor lottery. I’m interested in maximizing my positive impact on the long-term future, and I thought it would be helpful to elicit thoughts from the effective altruism community about how best to do this.

In this post I outline the high-level options I’m considering. (I wrote more about the options that readers may be less familiar with.) If you’d like to share your opinion, please fill out this short survey. I’m most interested in

  • whether I failed to mention any considerations that would make one option much worse or much better, and
  • your subjective opinion about how good each option is.

Donate directly to charities

Previous donor-lottery winners who have written about their donation decisions have given directly to charities rather than to re-granting organizations (see here, here, and here[1]). But my impression is that I don’t have sufficient local knowledge or relevant expertise that would give me an advantage over existing longtermist grantmakers. This could change if I were to invest a lot of time into discovering donation opportunities. Having more effort put into discovery and evaluation of EA funding opportunities would be valuable. But grant evaluation that’s sufficiently well-informed would require both getting up to speed as a grant evaluator and evaluating the grants themselves, and this would be a lot of work.

Long-Term Future Fund (LTFF)

The Long-Term Future Fund gives out small grants (typically less than $100k), usually to individuals. Its managers also consider making larger grants to organizations, though these account for a minority of their grantmaking. In the last funding round, six people worked part-time on evaluating grant applications, with the Centre for Effective Altruism providing additional support. They say they have room for more funding:

We anticipate being able to spend $3–8 million this year (up from $1.4 million spent in all of 2020). To fill our funding gap, we’ve applied for a $1–1.5 million grant from the Survival and Flourishing Fund, and we hope to receive more funding from small and large longtermist donors. [source]

They plan to add more fund managers so that they can evaluate more grants.

Longview Philanthropy

Longview Philanthropy advises large donors (primarily those who give $1 million or more per year). Though the LTFF and Longview each give both to organizations and to individuals, Longview tends to give relatively more grants to organizations and fewer to individuals. Another difference is that the LTFF has a high volume of grant applications that it evaluates quickly, whereas Longview does fewer, more in-depth grant investigations. Finally, the LTFF publishes its grant evaluations publicly, but Longview shares information about its grant decisions with only its donors and other grantmakers. (Because of its focus on large donors, communicating this information publicly is less valuable.)

If I were to give to Longview, the donation would go to their recently created general-purpose fund. This fund has some advantages over Longview’s other grantmaking:

  • Longview can better take advantage of time-sensitive giving opportunities, such as funding that would affect hiring decisions. (Job candidates may not be willing to wait for weeks or months for funding to come through.)
  • Grantees would have increased certainty about funding, which would help with planning.

Longview currently has one full-time staff member, Kit Harris, whose primary focus is grantmaking (there are plans to hire more full-time grantmakers); four other staff members are also involved in the grantmaking process, and Longview has recently hired four part-time research assistants. Longview is in regular contact with people working at related organizations such as Open Philanthropy (see a partial list of Longview’s advisers here).

Effective Altruism Infrastructure Fund

I think effective altruism is probably good, so it could make sense to support it financially. Here’s the write-up of the grants made since the EA Infrastructure Fund’s management was replaced. They say they have room for more funding:

We expect that we could make valuable grants totalling $3–$6 million this year. The fund currently holds around $2.3 million. This means we could productively use $0.7–$3.7 million in additional funding above our current reserves, or $0–$2.3 million (with a median guess of $500,000) above the amount of funding we expect to get by default by this November. [source]

This is a smaller shortfall than that of the LTFF, and the median guess matches the amount I’m giving away.

Like the LTFF, the EA Infrastructure Fund plans to add more fund managers.

Patient Philanthropy Fund (PPF)

The Patient Philanthropy Fund is under development and will likely be launched later this year. It will be incubated by Founders Pledge, but if it’s successful it’ll be spun out into its own organization. The fund would invest its money, potentially for centuries, before disbursing it. This article lists some advantages of the approach:

In brief, we currently see three main potential ways in which investing to give later may be better than giving now:

  1. By exploiting the pure time preference in the market, i.e. that non-patient people are willing to sell the future (and especially the long-term future) cheaply
  2. By exploiting the risk premium in the market, to the extent that longtermist altruists should price risks differently to the market
  3. By giving us more time to learn and get better at identifying high-impact giving opportunities to benefit the long term

This article goes into more detail. Additional benefits of a patient-philanthropy fund include

  • The fund could act as insurance against the possibility of longtermist giving—currently concentrated in a few large donors—declining in the future. (There's a countervailing consideration—perhaps less important, though more likely—which is that the funding space could become more crowded, diminishing the value of future donations.)
  • The fund could access investment strategies that might offer the potential for higher expected returns, such as hedge funds, private equity, or leverage. (This would be worth doing only if the fund had enough money for the additional returns to be worth the cost of managing a more-complex investment strategy.)

It seems plausibly good for such a fund to exist, but some people believe that greater returns are to be had by spending money now (see here). I myself lean toward the view that now is an unusually good time to give, but I think there’s a reasonable chance that a long-term investment fund would be better.

Even if the general idea of investing to give later isn’t the best use of these funds, donating to help get the PPF off the ground could still be. This is because (1) the idea might appeal to people (especially in the Founders Pledge community) who would otherwise not give to the highest-impact longtermist causes, (2) creating such a fund would require dealing with a number of legal and practical challenges, which could pave the way for future giving in this vein, and (3) the existence of the fund would help advance discussion about this approach to patient philanthropy—it would become a real rather than a merely theoretical option for giving.

So if I thought a donation would significantly boost the fund’s chances of attracting more donations, I would lean toward supporting it with at least a portion of the donor-lottery funds. There are two ways I could leverage them to help get the PPF off the ground: I could be part of the pre-launch commitments, or I could pledge the money as part of a post-launch matching pool.

Invest the money and wait a few years

I directed CEA to invest the lottery funds in diversified equity mutual funds.[2] In a few years, I’ll know more and the stock market will be higher (in expectation). Many of the considerations regarding the Patient Philanthropy Fund also apply to this option. (Continuing to hold the donor-lottery money in investments at CEA would preserve flexibility in where to donate, but the Patient Philanthropy Fund could have access to investment expertise and strategies not available to CEA and could plausibly generate higher returns.)

Pay someone to help me decide

Five hundred thousand dollars is enough money that it could be worthwhile to pay someone to research the question of where to give the money. A problem with this approach is that the most-qualified people are busy with other things, and finding and managing the right person would be a significant undertaking in itself. But if I could find someone qualified and willing to take on this task, this would increase the grant-evaluation capacity of effective altruism.

If you’re potentially interested in such an arrangement, please email me (pbrinichlanglois@gmail.com) or indicate your interest in the survey.

Something else

Option X. The dark horse. Something out of left field.

Perhaps there’s something I should be considering besides what’s listed above.

Conclusion

If you read this far, I would greatly appreciate hearing your thoughts through this brief survey. Also feel free to comment below or email me at pbrinichlanglois@gmail.com.

Acknowledgements

Thanks to Ozzie Gooen, who had the idea for this article and provided feedback about it.


  1. The last of these comprised two grants to the Good Food Institute, which does give out grants. But I wouldn’t put this into the same category as the EA Funds or Longview Philanthropy, since (1) grants account for a minority of GFI’s budget, and (2) GFI is pursuing a specific strategy (accelerating the development of meat alternatives) for addressing the broader problem of animal welfare. ↩︎

  2. I think that having donor-lottery funds invested in a diversified portfolio of global equities should be the default. CEA can easily invest in Vanguard’s mutual funds. About three months passed between my winning the lottery and the funds being invested, mostly because I procrastinated. If the average donor-lottery winner donates the money after a year, keeping the funds in equities instead of a bank account would increase the expected amount of money donated by tens of thousands of dollars. ↩︎
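
A rough back-of-the-envelope check on that last claim (the return figures below are my own assumptions for illustration, not numbers from CEA or the post):

```python
# Assumptions (not from the post): ~7% expected annual return on global
# equities, ~0% in a bank account, and roughly nine months invested
# (a year minus the three-month delay mentioned above).
prize = 500_000
expected_annual_return = 0.07  # assumed
months_invested = 9            # assumed

expected_extra = prize * expected_annual_return * months_invested / 12
print(f"Expected extra amount donated: ~${expected_extra:,.0f}")
# -> roughly $26,000, i.e. tens of thousands of dollars
```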

Comments

I have to say I'm pretty glad you won the lottery as I like the way you’re thinking! I have a few thoughts which I put below. I’m posting here so others can respond, but I will also fill out your survey to provide my details as I would be happy to help further if you are interested in having my assistance!

TLDR: I think LTFF and PPF are the best options, but it’s very hard to say which is the better of the two.

  • Longview Philanthropy: it’s hard to judge this option without knowing more about their general-purpose fund - I didn’t see anything about it on their website at first glance. With my current knowledge, I would say this option isn’t as good as giving to the LTFF. Longview is trying to attract existing philanthropists who may not identify as Effective Altruists, which will to some extent constrain what they can grant to, as granting to something too “weird” might put off philanthropists. Meanwhile the LTFF isn’t as constrained in this way, so in theory giving to the LTFF should be better, as the LTFF can grant to really great opportunities that Longview would be afraid to. Also, the LTFF appears to have more vetting resources than Longview and a very clear funding gap.
  • Effective Altruism Infrastructure Fund: it seems to me that if your goal is to maximise your positive impact on the long-term future, then giving to the LTFF would be better. This is simply because EA is wider in scope than longtermism, so the Infrastructure Fund will naturally fund some things targeted at ‘global health and wellbeing’ opportunities that don’t have a long-term focus. If you look at the LTFF’s Fund Scope, you will see that the LTFF funds opportunities to directly reduce existential risks, but also opportunities to build infrastructure for people working on longtermist projects and to promote long-term thinking - so the LTFF also has a “growth” mindset, if that’s what you’re interested in.
  • Patient Philanthropy Fund: personally I’m super excited about this, but it’s very difficult to say which is better out of the PPF and the LTFF. Founders Pledge’s report is very positive about investing to give, but even they say in their report that “giving to investment-like giving opportunities could be a good alternative to investing to give”. I think the question of which is better - investment-like giving opportunities or investing to give - is very much an open, and difficult, one. You do say that “even if the general idea of investing to give later isn’t the best use of these funds, donating to help get the PPF off the ground could still be”. I agree with this and like your idea of “supporting it with at least a portion of the donor-lottery funds”. How much exactly to give is hard to say.
  • Invest the money and wait a few years: do you have good reason to believe that you/the EA community will be in a much better position in a few years? Why? If it’s just generally “we learn more over time” then why would 'in a few years' be the golden period? If 'learning over time' is your motivation, PPF would perhaps be a better option as the fund managers will very carefully think about when this golden period is, as well as probably invest better than CEA.
  • Pay someone to help me decide: doubtful this would be the best option. LTFF basically does this for free. If you find someone / a team who you think is better than the LTFF grant team then fine, but I’m sceptical you will. LTFF has been doing this for a while which has let them develop a track record, develop processes, learn from mistakes etc. so I would think LTFF is a safer and better option.

So overall my view would be that LTFF and PPF are the best options, but it’s very hard to say which is the better of the two. I like the idea of giving a portion to each - but I don't really think diversification like this has much philosophical backing so if you do have a hunch one option is better than the other, and won't be subject to significant diminishing returns, then you may want to just give it all to that option.

I really liked this comment. Three additions:

  • I would take a close look at who the grantmakers are and whether their reasoning seems good to you. Because there is significant fungibility and many of these funding pools have broad scopes, I personally expect the competence of the grantmakers to matter at least as much as the specific missions of the funds.
  • I don't think it's quite as clear that the LTFF is better than the EA Infrastructure Fund; I agree with your argument but think this could be counterbalanced by the EA Infrastructure Fund's greater focus on talent recruitment, or other factors.
  • I don't know to what degree it is hard for Longview to get fully unrestricted funding, but if that's hard for Longview, giving it unrestricted funding may be a great idea. They may run across promising opportunities that aren't palatable to their donors, and handing them over to EA Funds or Open Philanthropy may not be straightforwardly easy in some cases.

(Disclosure: I run EA Funds, which hosts the LTFF and EA Infrastructure Fund. Opinions my own, as always.) 

Thanks for the comment, this raises a few good points. 

Longview is trying to attract existing philanthropists who may not identify as Effective Altruists, which will to some extent constrain what they can grant to, as granting to something too “weird” might put off philanthropists.

Good point. I got the impression that their new, general-purpose pool would still be fairly longtermist, but it's possible they will have to make sacrifices. We'll ping them about this (or if any of them are reading this, please do reply directly!)

If you find someone / a team who you think is better than the LTFF grant team then fine, but I’m sceptical you will.

To be clear, one of the outcomes could be that this person decides to give to the LTFF. These options aren't exclusive. But I imagine that in this case they shouldn't have that much work to do; they would essentially be making a choice from the options we list above.

I got the impression that their new, general-purpose pool would still be fairly longtermist, but it's possible they will have to make sacrifices.

To clarify, it's not that I don't think they would be "longtermist"; it's more that I think they may have to give to longtermist options that "seem intuitively good to a non-EA", e.g. giving to an established organisation like MIRI or CHAI, rather than to longtermist options that may be better on the margin but seem a bit weirder at first glance, like "buying out some clever person so they have more time to do some research".

That pretty much gets to the heart of my suspected difference between Longview and the LTFF - I think the LTFF funds a lot of individuals who may struggle to get funding from elsewhere, whereas Longview tends to fund organisations that may struggle a lot less - although I do see on their website that they funded Paul Slovic (but he seems to be a distinguished academic who may have been able to get funding elsewhere).

I think that looking at their track record is only partially representative. They used to follow a structure where they would recommend donation opportunities to particular clients. Recently they've set up a fund that works differently: people donate to the fund, and the fund then makes donations at its discretion. My guess is that this will help a bit with this issue, but not completely. (Maybe they'll even be extra conservative, to prove to donors that they will match their preferences.)

 

Another (minor) point is that Longview's donations can be fungible with LTFF. If they spend $300K on something that LTFF would have otherwise spent money on, then the LTFF would have $300K more to spend on whatever it wants. So if Longview can donate to, say, only 90% of interesting causes, up to $10Mil per year, the last 10% might not be that big of a deal.

Brian Tomasik wrote this article on his donation recommendations, which may provide you with some useful insight. His top donation recommendations are the Center on Long-Term Risk and the Center for Reducing Suffering. In terms of the long-term future, reducing suffering in the far future may be more important than reducing existential risk. If life in the far future is significantly bad on average, space colonization could potentially create and spread a large amount of suffering.

My understanding is that Brian Tomasik has a suffering-focused view of ethics in that he sees reducing suffering as inherently more important than increasing happiness - even if the 'magnitude' of the happiness and suffering are the same.

If one holds a more symmetric view where suffering and happiness are both equally important it isn't clear how useful his donation recommendations are.

Even if you value reducing suffering and increasing happiness equally, reducing S-risks would likely still greatly increase the expected value of the far future. Efforts to reduce S-risks would almost certainly reduce the risk of extreme suffering being created in the far future, but it's not clear that they would reduce happiness much.

I'm not saying that reducing S-risks isn't a great thing to do, nor that it would reduce happiness, I'm just saying that it isn't clear that a focus on reducing S-risks rather than on reducing existential risk  is justified if one values reducing suffering and increasing happiness equally.

I think robustness (or ambiguity aversion) favours reducing extinction risks without increasing s-risks and reducing s-risks without increasing extinction risks, or overall reducing both, perhaps with a portfolio of interventions. I think this would favour AI safety (especially work focused on cooperation), possibly other work on governance and conflict, and most other work to reduce s-risks (since it does not increase extinction risks), at least if we believe CRS and/or CLR that these do in fact reduce s-risks. I think Brian Tomasik comes to an overall positive view of MIRI on his recommendations page, and Raising for Effective Giving, also a project of the Effective Altruism Foundation like CLR, recommends MIRI in part because "MIRI’s work has the ability to prevent vast amounts of future suffering".

Some work to reduce extinction risks seems reasonably likely to me on its own to increase s-risks, like biosecurity and nuclear risk reduction work, although there may also be arguments in favour related to improving cooperation, but I'm skeptical.

For what it's worth, I'm not personally convinced any particular AI safety work reduces s-risks overall, because it's not clear it reduces s-risks directly more than it increases them by reducing extinction risks, although I would expect CLR and CRS to be better donation opportunities for this given their priorities. I haven't spent a lot of time thinking about this, though.

If one values reducing suffering and increasing happiness equally, it isn't clear that reducing existential risk is justified either. Existential risk reduction and space colonization mean that the far future can be expected to have both more happiness and more suffering, which would seem to even out the expected utility. More happiness + more suffering isn't necessarily better than less happiness + less suffering. Focusing on reducing existential risks would only seem to be justified if either A) you believe in positive utilitarianism, i.e. that increasing happiness is more important than reducing suffering, B) the far future can be reasonably expected to have significantly more happiness than suffering, or C) reducing existential risk is a terminal value in and of itself.

B) the far future can be reasonably expected to have significantly more happiness than suffering

I think EAs who want to reduce x-risk generally do believe that the future should have more happiness than suffering, conditional on no existential catastrophe occurring. I think these people generally argue that quality of life has improved over time and believe that this trend should continue (e.g. Steven Pinker's The Better Angels of Our Nature). Of course life for farmed animals has got worse...but I think people believe we should successfully render factory farming redundant on account of cultivated meat.

Also, considering extinction specifically, Will MacAskill has made the argument that we should avert human extinction based on option value even if we think extinction might be best. Basically even if we avert extinction now, we can in theory go extinct later on if we judge that to be the best option. In the meantime it makes sense to reduce existential risk if we are uncertain about the sign of the value of the future, to leave open the possibility of an amazing future.

Of course life for farmed animals has got worse...but I think people believe we should successfully render factory farming redundant on account of cultivated meat.

I think there's recently more skepticism about cultured meat (see here, although I still expect factory farming to be phased out eventually, regardless), but either way, it's not clear a similar argument would work for artificial sentience, whether used as tools, used in simulations, or even intentionally tortured. There's also some risk that nonhuman animals themselves will be used in space colonization, but that may not be where most of the risk is.

Also, considering extinction specifically, Will MacAskill has made the argument that we should avert human extinction based on option value even if we think extinction might be best. Basically even if we avert extinction now, we can in theory go extinct later on if we judge that to be the best option.

It seems unlikely to me that we would go extinct, even conditional on "us" deciding it would be best. Who are "we"? There will probably be very divergent views (especially after space colonization, within and between colonies, and these colonies may be spatially distant and self-sufficient, so influencing them becomes much more difficult). You would need to get a sufficiently large coalition to agree and force the rest to go extinct, but both are unlikely, even conditional on "our" judgement that extinction would be better, and actively attempting to force groups into extinction may itself be an s-risk. In this way, an option value argument may go the other way, too: once TAI is here in a scenario with multiple powers or space colonization goes sufficiently far, going extinct effectively stops being an option.

I'm not really sure what to think about digital sentience. We could in theory create astronomical levels of happiness, astronomical levels of suffering, or both. Digital sentience could easily dominate all other forms of sentience so it's certainly an important consideration.

It seems unlikely to me that we would go extinct, even conditional on "us" deciding it would be best.

This is a fair point to be honest!

Also, considering extinction specifically, Will MacAskill has made the argument that we should avert human extinction based on option value even if we think extinction might be best. Basically even if we avert extinction now, we can in theory go extinct later on if we judge that to be the best option.

Note that this post (written by people who agree that reducing extinction risk is good) provides a critique of the option value argument.

There is still the possibility that the Pinkerites are wrong though, and quality of life is not improving. Even though poverty is far lower and medical care is far better than in the past, there may also be more mental illness and loneliness than in the past. The mutational load within the human population may also be increasing. Taking the hedonic treadmill into account, happiness levels in general should be roughly stable in the long run regardless of life circumstances. One may object to this by saying that wireheading may become feasible in the far future. Yet wireheading may be evolutionarily maladaptive, and pure replicators may dominate the future instead. Andrés Gómez Emilsson has also talked about this in A Universal Plot - Consciousness vs. Pure Replicators.

Regarding averting extinction and option value, deciding to go extinct is far easier said than done. You can’t just convince everyone that life ought to go extinct. Collectively deciding to go extinct would likely require a singleton to exist, such as Thomas Metzinger's BAAN scenario. Even if you could convince a sizable portion of the population that extinction is desirable, these people will simply be removed by natural selection, and the remaining portion of the population will continue existing and reproducing. Thus, if extinction turns out to be desirable, engineered extinction would most likely have to be done without the consent of the majority of the population. In any case, it is probably far easier to go extinct now while we are confined to a single planet than it would be during the age of galaxy-wide colonization.

There is still the possibility that the Pinkerites are wrong though, and quality of life is not improving.

Sure, and there could be more suffering than happiness in the future, but people go with their best guess about what is more likely and I think most in the EA community side with a future that has more happiness than suffering.

happiness levels in general should be roughly stable in the long run regardless of life circumstances.

Maybe, but if we can't make people happier we can always just make more happy people. This would be very highly desirable if you have a total view of population ethics.

Regarding averting extinction and option value, deciding to go extinct is far easier said than done.

This is a fair point. What I would say though is that extinction risk is only a very small subset of existential risk so desiring extinction doesn't necessarily mean you shouldn't want to reduce most forms of existential risk.

Sure, and there could be more suffering than happiness in the future, but people go with their best guess about what is more likely and I think most in the EA community side with a future that has more happiness than suffering.

Would you mind linking some posts or articles assessing the expected value of the long-term future? If the basic argument for the far future being far better than the present is because life now is better than it was thousands of years ago, this is, in my opinion, a weak argument. Even if people like Steven Pinker are right,  you are extrapolating billions of years from the past few thousand years. To say that this is wild extrapolation is an understatement. I know Jacy Reese talks about it in this post, yet he admits the possibility that the expected value of the far future could potentially be close to zero. Brian Tomasik also wrote this article about how a "near miss" in AI alignment could create astronomical amounts of suffering.

Maybe, but if we can't make people happier we can always just make more happy people. This would be very highly desirable if you have a total view of population ethics.

Sure, it's possible that some form of eugenics or genetic engineering could be implemented to raise the average hedonic set-point of the population and make everyone have hyperthymia. But you must remember that millions of years of evolution put our hedonic set-points where they are for a reason. It's possible that in the long run, genetically engineered hyperthymia might be evolutionarily maladaptive, and the "super happy people" will die out in the long run.

Would you mind linking some posts or articles assessing the expected value of the long-term future?

You're right to question this as it is an important consideration. The Global Priorities Institute has highlighted "The value of the future of humanity" in their research agenda (pages 10-13). Have a look at the "existing informal discussion" on pages 12 and 13, some of which argues that the expected value of the future is positive.

Sure, it's possible that some form of eugenics or genetic engineering could be implemented to raise the average hedonic set-point

I think you misunderstood what I was trying to say. I was saying that even if we reach the limits of individual happiness, we can just create more and more humans to increase total happiness.

Thanks. Although whether increasing the population is a good thing depends on whether you are an average utilitarian or a total utilitarian. With more people, both the number of hedons and the number of dolors will increase, with the ratio of hedons to dolors skewed in favor of hedons. If you're a total utilitarian, the net hedons will be higher with more people, so adding more people is rational. If you're an average utilitarian, the ratio of hedons to dolors and the average level of happiness per capita will be roughly the same, so adding more people wouldn't necessarily increase expected utility.
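
A toy numerical illustration of the difference (the per-person hedon and dolor figures are made up purely for the sketch):

```python
# Made-up numbers: each person experiences 10 hedons and 4 dolors.
hedons_per_person, dolors_per_person = 10, 4

def total_utility(population):
    return population * (hedons_per_person - dolors_per_person)

def average_utility(population):
    return total_utility(population) / population

for population in (1_000, 2_000):
    print(population, total_utility(population), average_utility(population))
# 1000 -> total 6000, average 6.0
# 2000 -> total 12000, average 6.0
# Doubling the population doubles total utility (the total view favours it)
# but leaves average utility unchanged (the average view is indifferent).
```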

Although whether increasing the population is a good thing depends on whether you are an average utilitarian or a total utilitarian.

Yes that is true. For what it's worth most people who have looked into population ethics at all reject average utilitarianism as it has some extremely unintuitive implications like the "sadistic conclusion" whereby one can make things better by bringing into existence people with terrible lives, as long as they're still bringing up the average wellbeing level by doing so i.e. if existing people have even worse lives.

Would you mind linking some posts or articles assessing the expected value of the long-term future?

The most direct (positive) answer to this question I remember reading is here.

Toby Ord discusses it briefly in chapter 2 of The Precipice

Some brief podcast discussion here.

I suspect that many of the writings by people associated with the Future of Humanity Institute address this in some form or other. One reading of anything and everything by transhumanists / Humanity+ people (Bostrom included) is that the value of the future seems pretty likely to be positive. Similarly, I expect that a lot of the interviewees other than just Christiano on the 80k (and Future of Life?) podcast express this view in some form or other and defend it at least a little bit, but I don't remember specific references other than the Christiano one.

And there's suffering-focused stuff too, but it seemed like you were looking for arguments pointing in the opposite direction.

I lend some credence to the trendlines argument, but mostly think that humans are more likely to want to optimize for extreme happiness (or other positive moral goods) than extreme suffering (or other moral bads), and any additive account of moral goods will, in expectation, shake out to have a lot more positive moral goods than moral bads, unless you have really extreme inside views on which optimizing for extreme moral bads is as likely as (or more likely than) optimizing for extreme moral goods.

I do think P(s-risk | singularity) is nontrivial - e.g. (a) our descendants are badly mistaken, or (b) other agents follow through with credible pre-commitments to torture - but I think it ought to be surprising for classical utilitarians to believe that the EV of the far future is negative.

This is easy for me to say as someone who agrees with these donation recommendations, but I find it disappointing that this comment apparently has gotten several downvotes. The comment calls attention to a neglected segment of longtermist causes, and briefly discusses what sorts of considerations would lead you to prioritize those causes. Seems like a useful contribution.

Note that s-risks are existential risks (or at least some s-risks are, depending on the definition). Extinction risks are specific existential risks, too.

Two approaches not mentioned in the article that I would advocate:

  1. Giving to global priorities research. You mentioned patient philanthropy (whether a few years or centuries), and one of the main motivations of waiting to give is to benefit from a more-developed landscape of EA thought. If the sophistication of EA thought is a key bottleneck, why not contribute today to global priorities research efforts, thus accelerating the pace of intellectual development that other patient philanthropists are waiting on? I'm not confident that giving to global priorities research today beats waiting and giving later, since it's unclear how much the intellectual development of the movement would be accelerated by additional cash, but it should be on the table of options you look at. (To some extent, new ideas are generated naturally & for free as people think about problems, write comments on blog posts, etc. Meanwhile, there might be some ways where gaining experience simply takes calendar time. So perhaps only a small portion of the EA movement's development could actually be accelerated with more global-priorities-research funding. On the other hand, a marginally more well-developed field would almost certainly pull in marginally more donations, so helping to kick-start the growth and (hopefully) eventual mainstreaming of EA while we are still in its early days could be very valuable. Anyways, if you are considering waiting for the EA community to learn more, I think it's worth also considering being the change you want to see in the movement, and trying to accelerate the global-priorities-research timeline.)

  2. Giving to various up-and-coming cause areas within EA. Despite being a very nimble and open-minded movement actively searching for new cause areas, it seems to me that there is still some inertia and path-dependency when it comes to bringing new causes online alongside traditional, established EA focus areas. In my mind, this creates a kind of inefficiency, where new causes are recognized as "likely to become a bigger EA focus in the future", but haven't yet fully scaled up due in part to intellectual inertia within the movement. You could help accelerate this onboarding process by making grants to a portfolio of newer and less-familiar causes. For example:

  • The "global health and wellbeing" side of EA has for years been focused on GiveWell top charities. Recently, OpenPhil has expanded into new programs devoted to south asian air quality and global aid advocacy. These interventions seem like great ideas, which plausibly do even better than GiveWell's recommendations, so it might be helpful to jump in early and help get projects in these areas off the ground.
  • Charter cities have been studied for their EA potential in several ways -- reducing poverty directly via economic growth, providing a model for improved governance that might spread to nearby regions, and (most exciting from my longtermist EA perspective) acting as laboratories to experiment with new institutions, new policies, and new forms of government. As far as I know, charter city initiatives haven't yet received large support from EA donors, but personally I think that ought to change.
  • As I mentioned in my previous comment, I'm slightly pessimistic about the idea of actually doing patient philanthropy over centuries on a large scale, but the idea is nevertheless promising enough that we should help get some experiments up and running.
  • There are a whole host of promising, niche ideas within EA that might benefit from dedicated funding -- although some of these areas are so small that there's no organization ready and waiting to accept the cash. Research into things like wild-animal welfare or risks of stable totalitarianism seem like good things to investigate, as would be experiments with improved institution-design mechanisms (like prediction markets, quadratic funding, improved voting systems, etc) or civilizational resilience plans along the lines of ALLFED.

Thanks for the thoughts here!

I'd note that the LTFF definitely invests money into some global priorities research, and some up-and-coming cause areas. Longview is likely to do so as well. 

Right now we don't seem to have many options for donating to funders that will re-grant to non-longtermist (with "longtermist" broadly defined), experimental work. In this particular case, Patrick is trying to donate to longtermist causes, so I think the funding options are acceptable, but I imagine this could be frustrating to non-longtermists.

Thanks for the post, I found your thoughts interesting. I’m always glad to see discussions of where people are donating.

In general, it kind of seems like the "point" of the lottery is to do something other than allocate to a capital allocator. The lottery is "meant" to minimise work on selecting a charity to give to, but if you're happy to give that work to another allocator I feel like it makes less sense?

With that in mind, I have a couple of thoughts for things you might consider:

  • Lottery again! You could sponsor CEA to do a $1m lottery. If you thought it was worth it for $500k, surely it would be worth it for $1m!
  • Be quite experimental, give largish grants to multiple young organisations, see how they do, and then direct your ordinary giving toward them in the future. This money can buy access to more organisations and set up relationships for your future giving.
  • Do you know of people outside established organisations, in your personal network for example, who could use EA funding? If so, that represents an edge over capital allocators and you could exploit that.

In general, it kind of seems like the "point" of the lottery is to do something other than allocate to a capital allocator.

If you enter a donor lottery your expected donation amount is the same as if you didn't enter the lottery. If you win the lottery, it will be worth the time to think more carefully about where to allocate the money than if you had never entered, as you're giving away a much larger amount. Because extra time thinking is more likely to lead to better (rather than worse) decisions, this leads to more (expected) impact overall, even though your expected donation size stays the same. More on all of this here.
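
A minimal sketch of that argument (the donation size, pool size, and the impact multiplier for "careful research" below are all assumptions for illustration, not figures from the post):

```python
# Assumed figures: a $5,000 donation into a $500,000 pool, and careful
# research multiplying impact per dollar by 1.3x (both made up).
donation, pool = 5_000, 500_000
p_win = donation / pool                        # chance of directing the whole pool

expected_donation_without_lottery = donation           # $5,000
expected_donation_with_lottery = p_win * pool          # also $5,000

impact_casual, impact_researched = 1.0, 1.3            # assumed multipliers
expected_impact_without = donation * impact_casual
expected_impact_with = p_win * pool * impact_researched

print(expected_donation_without_lottery, expected_donation_with_lottery)  # 5000 5000.0
print(expected_impact_without, expected_impact_with)                      # 5000.0 6500.0
# Same expected donation either way, but higher expected impact with the
# lottery, because the careful research only happens when it's worth doing.
```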

So the point of the lottery really is just to think very carefully about where to give if you win, allowing you to have more expected impact than if you hadn't entered. It seems quite possible (and in my opinion highly likely) that careful thinking  would lead one to give to a capital allocator as they have a great deal of expertise.

I could imagine that happening in some situations where after a lot of careful thought you decide to defer to another grantmaker, but if you know in advance that you'd like to give your money to a grantmaker, shouldn't you just do that?

Yeah, you probably should - unless perhaps you think there are scale effects to giving which make you want to punt on being able to give far more.

Worth noting, of course, that Patrick didn’t know he was going to give to a capital allocator when he entered the lottery, and still doesn’t. Ideally all donor-lottery winners would examine the LTFF very carefully and honestly consider whether they think they can do better than the LTFF. People may be able to beat the LTFF, but if someone isn’t giving to the LTFF I would expect clear justification as to why they think they can beat it.

I disagree. One of the original rationales for the lottery if I recall correctly was to increase the diversity* of funding sources and increase the number of grantmakers. I think if the LTFF is particularly funding constrained, there's a good chance the Open Philanthropy Project or a similar organisation will donate to them. I value increased diversity and number of grantmakers enough that I think it's worth trying to beat LTFF's grantmaking even if you might fail.

*By diversity, I don't mean gender or ethnicity, I just mean having more than one grantmaker doing the same thing, ideally with different knowledge, experience and connections.

I'm not sure I understand how the lottery increases the diversity of funding sources / increases the number of grantmakers if one or a small number of people end up winning the lottery. Wouldn't it actually reduce diversity / number of grantmakers? I might be missing something quite obvious here...

Reading this, it seems the justification for lotteries is that they not only save research time for the EA community as a whole but also improve the allocation of the money in expectation. Basically, if you don't win you don't have to bother doing any research (so this time is saved for lots of people), and if you do win you at least have the incentive to do lots of research because you're giving away quite a lot of money (so the money should be given away with a great deal of careful thought behind it).

Of course if everyone in the EA community just gives to an EA Fund and knows that they would do so if they won the lottery, that would render both of the benefits of the lottery redundant. This shouldn't be the case however as A) not everyone gives to EA Funds - some people really research where they give, and B) people playing donor lotteries shouldn't be certain of where they would give the money if they won - the idea is that they would have to research. I see no reason why this research shouldn't lead to giving to an EA Fund.

I think perhaps we agree then - if after significant research, you realize you can't beat an EA Fund, that seems like a reasonable fallback, but that should not be plan A.

Re: increasing grantmakers, I meant increasing the number of grantmakers who have spent significant time thinking about where to donate significant capital - obviously having hundreds of people donating $1k each would have more diversity but in practice I think most $1k donors defer their decision-making to someone else, like an EA Fund or GiveWell.

I think perhaps we agree then - if after significant research, you realize you can't beat an EA Fund, that seems like a reasonable fallback, but that should not be plan A.

Yeah that sounds about right to me.

I meant increasing the number of grantmakers who have spent significant time thinking about where to donate significant capital

I still don't understand this. The lottery means one person or a small number of grantmakers get all the money to allocate. People who don't win don't need to think about where to donate. So really it seems to me that the lottery reduces the number of grantmakers, and indeed the number of people who spend time thinking about where to donate.

I still don't understand this. The lottery means one person or a small number of grantmakers get all the money to allocate. People who don't win don't need to think about where to donate. So really it seems to me that the lottery reduces the number of grantmakers, and indeed the number of people who spend time thinking about where to donate.

The model is this:

  • A bunch of people each have $5,000 to donate.
  • Many put in a bit of effort - they spend a bit of time on the GiveWell website, read some stuff by MIRI, and chat to a couple of friends. But this isn't enough to catch them up on the state of the art, let alone make some novel contribution to discriminating between grant applications.
  • Others can't find the time to do even this much research.
  • So overall very little grant evaluation has really been done, and what has been done is highly duplicative. Given they all fail to pass the bar of 'as good as the EA funds', this work was essentially wasted.

But if they instead did a lottery:

  • One person gets $500,000 to donate.
  • He now puts in a lot of effort - reading a huge amount of literature, and doing calls with the leaders of multiple organizations. Perhaps he also discusses his approaches with several other EAs for advice.
  • By the end he has a novel understanding of some aspect of the charitable funding landscape, which exceeds that of the EA fund grantmakers.
  • The overall amount of time spent is actually less than before, but the depth is far greater, and with dramatically less redundancy.

So by using the lottery we have both saved time and increased the amount of effective evaluation work being done.

Thanks, I understand all that. I was confused when Khorton said:

I meant increasing the number of grantmakers who have spent significant time thinking about where to donate significant capital

I wouldn't say the lottery increases the number of grantmakers who have spent significant time thinking, I think it in fact reduces it.

I agree with you, however, when you say:

The overall amount of time spent is actually less than before, but the depth is far greater, and with dramatically less redundancy.


 

I think deciding between capital allocators is a great use of the donor lottery, even as a Plan A. You might say something like: "I would probably give to the Long-Term Future Fund, but I'm not totally sure whether they're better than the EA Infrastructure Fund or Longview or something I might come up with myself. So I'll participate in the donor lottery so if I win, I can take more time to read their reports and see which of them seems best." I think this would be a great decision.

I'd be pretty unhappy if such a donor then felt forced to instead do their own grantmaking despite not having a comparative advantage for doing so (possibly underperforming Open Phil's last dollar), or didn't participate in the donor lottery in the first place. I think the above use case is one of the most central one that I hope to address.

I tentatively agree that further diversification of funding sources might be good, but I don't think the donor lottery is the right tool for that.

In general, it kind of seems like the "point" of the lottery is to do something other than allocate to a capital allocator. The lottery is "meant" to minimise work on selecting a charity to give to, but if you're happy to give that work to another allocator I feel like it makes less sense?

When I entered the lottery, I hadn't given much thought to what I'd do if I won—I was convinced by the argument that giving to the lottery dominated giving to the LTFF (for example), since if I won the lottery I could just decide to give the money to the LTFF. I think you're right that it makes less sense to enter the donor lottery if you think you'll end up giving the money to a regranting organization, but I think it still makes some sense.

Lottery again! You could sponsor CEA to do a $1m lottery. If you thought it was worth it for $500k, surely it would be worth it for $1m!

Someone else suggested that to me a while ago, but I'm not sure how much it would change things—if I don't have interesting ideas about what to do with $500k, I probably wouldn't have interesting ideas about what to do with $1m. There would also be some overhead to setting up another lottery.

Be quite experimental, give largish grants to multiple young organisations, see how they do, and then direct your ordinary giving toward them in the future. This money can buy access to more organisations and set up relationships for your future giving.

Thanks for suggesting that—it seems like an idea worth considering for at least a portion of the money.

I've been impressed recently with the work of the Simon Institute for Long-Term Governance, which might match the brief for new, experimental long-termist organizations. https://www.simoninstitute.ch/

Besides others mentioned, consider also getting in touch with

  1. https://www.cooperativeai.com/
  2. https://emergingrisk.ch/ (I think this is s-risk-focused, given the team running it)

I appreciate that you're going meta and considering such a full mix of re-granting options, rather than just giving to charities themselves as past lottery winners have. Your point about not having as much local knowledge as the big granting organizations makes a lot of sense. Longview, the LTFF, and the EA Infrastructure fund all seem like worthy targets, although I don't know much about them in particular. Here are a few thoughts on the other approaches:

Paying someone to help decide: This idea doesn't make much sense to me. After all, figuring out the most effective ways to donate to charity is already the core research project of effective altruism! It seems to me that paying someone to research what to do with the money would just be a strange, roundabout way to support cause prioritization research. Better to just explicitly donate to a cause prioritization research initiative. That way, a team of researchers could work on whatever cause prioritization problems seem most important for the overall EA movement, rather than employing one person to deliberate on this specific pot of $500K.

Patient philanthropy fund: This is an intriguing idea, but I wonder if patient philanthropy is well-developed enough that money would be best used to actually fill up the fund, versus studying the idea and working out various details of the plan. As Founders Pledge says, there are significant risks of expropriation and value drift, and there is probably more research and planning that can be done to investigate how to mitigate these risks. To their list of dangers, I would add:

  • The risk of some kind of financial collapse or transition, such that the contents of the fund are no longer valuable and/or no longer honored. (For instance, as a result of nations defaulting on their debt, or a sudden switch away from today's currencies.) This seems similar to, but distinct from, expropriation.

  • Somewhat related to value drift, the risk that a fund designed to last for millennia and to be highly resistant against expropriation and value drift, would fail to also be nimble enough to recognize changing opportunities and actually deploy its assets at a crucial time when they could do the most good. Figuring out how best to mitigate this seems like a very tricky institution-design problem. But making even a small amount of progress on it could be really valuable, especially since the problem of staying on-mission while also being nimble and maintaining organizational skill/capacity is a fundamental paradox that bedevils all kinds of institutions.

…Anyways, I'm sure that people more involved in patient philanthropy have thought about this stuff in more depth than I. But my point is that right now, it's possible that funding should mostly go towards designing and testing and implementing patient-philanthropy funds, rather than just putting large amounts of cash in the fund itself.

Invest & wait a few years: Although similar in some ways to the patient-philanthropy plan, I think the motivations for choosing this option are actually quite different:

  • Giving to a patient-philanthropy fund is somewhat incompatible with "urgent longtermism" focused on AI and other X-risks, while a plan to wait 5 years and then give is perfectly compatible with urgent longtermism.

  • Two benefits of waiting are the growth in capital, and the ability to learn more as the EA movement makes intellectual progress. Presumably, over a timespan of centuries, the EA movement will start running into diminishing intellectual returns, so the economic-growth benefit (if we assume steady returns of a few percent per year) would be proportionately larger. By waiting just five years, I'd guess that the larger benefit would come from the development of the EA movement.
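
A quick illustration of why the growth consideration looms much larger over centuries than over five years (assuming, purely for the sketch, a steady 5% real return; the figures are not from the post):

```python
# Assumed 5% steady real return; illustration only.
principal, rate = 500_000, 0.05

print(f"After 5 years:   ~${principal * (1 + rate) ** 5:,.0f}")    # ~$638,000
print(f"After 200 years: ~${principal * (1 + rate) ** 200:,.0f}")  # ~$8.6 billion
# Over five years the growth benefit is modest, so learning likely dominates;
# over centuries the compounding benefit is enormous, but (per the comment
# above) intellectual returns have presumably long since diminished.
```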

Personally, I'm more sympathetic to the idea of waiting just a few years to take advantage of the rapidly increasing sophistication of EA thought, rather than waiting centuries. But you'd have to balance this consideration against how much funding you expect EA to receive in the future. If you think EA is currently in a boom and will decline later, you should save your money and give later (when ideas are well-developed but money is scarce). If you think EA will be pulling in much bigger numbers in the future, it's best to give now (so future funding can benefit from a more well-developed EA movement).

Re patient philanthropy funds: Spending money on research rather than giving money to a fund does seem more focused and efficient. I think there are limits to how much progress you can make with research (assuming that research hasn't ruled the idea out), so it does make sense to try creating such a fund at some point. Some issues would become apparent with even a toy fund (one with a minimal amount of capital produced as an exercise). A real fund that has millions of dollars would be a better test of the idea, but whether contributing to such a fund is a good use of money is less clear to me now.

Yes, I was definitely thinking of stuff along the lines of "help fund the creation of a toy fund and work out the legal kinks, portfolio design, governance mechanisms, etc", in addition to pure blog-post-style research into the idea of investing-to-give.

Admittedly it's an odd position for me to be pessimistic about patient philanthropy itself but still pretty psyched about setting up the experiment. I guess for the argument to go through that funding the creation of the PPF is a great idea, it relies on one or more of the following being true:

  • Actually doing patient philanthropy turns out to be, in fact, extremely effective. However, we won't definitively know this for decades! A leading indicator might be if the perceived problems/drawbacks of PPF turn out to be more easily solved than we thought. (Perhaps everyone looks at the legal mechanisms of the newly-launched toy fund and thinks, "Wow, this is actually a really innovative and promising structure!")

  • If the PPF draws in lots more EA donations that wouldn't have otherwise happened, it could be a great idea even if it's not as competitive on effectiveness.

  • Designing the PPF might somehow have positive spillover effects. (Are there other areas in EA calling for weird long-term institution design or complex financial products? Surely a few...)

A thought that motivates my other comments on this thread: reviewing my GWWC donations a while ago, I realised that if I suddenly had lots of money, one of the first questions I would ask myself is "what friends and acquaintances should I fund?". To an outsider this kind of thing can look like rather non-altruistic nepotism, but from the inside it seems like betting on the opportunities that you are unusually able to see. I think it actually is the latter, at least sometimes. My impression is that for profit investors do a lot of "nepotistic investing", but I suspect that values like altruism and impartiality and transparency (as well as constraints of charitable legal status) make EA funders reluctant to go hard on this method.

I would consider starting some kind of "major achievement" prize scheme.

Roughly, the idea I have in mind is to give large no-strings-attached lump sums to individuals who have:

(a) done exceptionally valuable work at non-trivial personal cost (e.g. massive salary sacrifice)

(b) a high likelihood of continuing to do extremely valuable work.

The aims would be:

(i) to help such figures become personally "set for life" in the way that successful startup founders sometimes do.

(ii) to improve the personal incentive structure faced by people considering EA careers.

This idea is very half baked. A couple quick comments:

  1. On (i): I'm surprised how often I meet people doing very valuable work who seem to have significant personal finance issues that (a) distract them and (b) mean that they don't buy time aggressively enough. Perhaps more importantly, I suspect that (c) personal financial security enables people to take riskier bets on their inside views, in a way that is valuably generative and/or error-correcting; also that (d) people who are doing very valuable work often have lists of good ideas for turning $$$ into good outcomes, so giving these people greater financial security would be one merit-based means of increasing the number of EA-sympathetic angel investors.

  2. On (ii): I have no idea if this would actually work out well. In theory, it'd make the personal incentives look a bit more like they do in for-profit entrepreneurship, i.e. small chance of large financial upside if you do well. In practice I could imagine a well known prize scheme causing various sorts of trouble.

  3. E.g. I see major PR risks to this kind of thing ("effective altruists conclude that the most effective use of money is to make themselves rich") and internal risk of resentment or even corruption scandals. I've not looked into how science prizes fare on this kind of thing.

  4. On (i): one possible counter is that IIRC there's some evidence for a "personal wealth sweet spot" in entrepreneurship. I think the story is supposed to be that too little financial security means you can't afford the risks, but too much security (both financial and status) makes you too complacent and lazy. My guess is that the complacency thing happens for many but not all people. Maybe one can filter for this.

On (1): Have you encouraged any of these people to apply for existing sources of funding within EA? Did any of them do so successfully?

On (3): The most prominent EA-run "major achievement prize" is the Future of Life Award, which has been won by people well outside of EA. That's one way to avoid bad press — and perhaps some extremely impactful people would become more interested in EA as a result of winning a prize? (Though I expect you'd want to target mid-career people, rather than people who have already done their life's work in the style of the FLA.)

  1. In some cases yes, but only when they were working on specific projects that I expected to be legible and palatable to EA funders. Are there places I should be sending people who I think are very promising to be considered for very low strings personal development / freedom-to-explore type funding?

The Infrastructure and LTF Funds have both (I think) made grants of the "help someone develop/save money" variety, mostly for students and new academics, but also in a couple of cases for people who were trying to pick up particular skills.

I also think it's perfectly valid for people to post questions about this kind of thing on the Forum — "I'm doing work X, which I think is very valuable, but I don't see an obvious way to get it funded to the point where I'd be financially secure — any suggestions?"

I would consider allocating at least $100K to trying my own version of something like Tyler Cowen's Emergent Ventures.

A post on this topic, discussing the Thiel Fellowship, Entrepreneur First, and other attempts: https://www.strangeloopcanon.com/p/on-medici-and-thiel

I would unrestrictedly give it to individual EAs you trust. 
