
Good news, I've finally allocated the rest of the donor lottery funds from the 2016-2017 Donor Lottery (the first one in our community)! It took over 3 years but I'm excited about the two projects I funded. It probably goes without saying, but this post is about an independent project and does not represent CFAR (where I work).

This post contains several updates related to the donor lottery:

CZEA

My previous comments on the original donor lottery post share the basics of how the first $25k was used for CZEA (this was $5k more than I was originally planning to donate due to transfer efficiency considerations). Looking back now, I believe that donation likely had a strong impact on EA community building.

My donation was the largest that CZEA had received (I think they previously had received one other large donation—about half the size) and it was enough for CZEA to transition from a purely volunteer organization into a partially-professional organization (1 FTE, plus volunteers). Based on conversations with Jan Kulveit, I believe it would have taken at least 8 more months for CZEA to professionalize otherwise. I believe that in the time they bought with the donation, they were able to more easily secure substantial funding from CEA and other funders, as well as scale up several compelling initiatives: co-organizing Human-aligned AI Summer School, AI Safety Research Program, and a Community Building Retreat (with CEA).

I also have been glad to see a handful of people get involved with EA and Rationality through CZEA, and I think the movement is stronger with them. To pick an example familiar to me, several CZEA leaders were recently part of CFAR's Instructor Training Program: Daniel Hynk (Co-founder of CZEA), Jan Kulveit (Senior Research Scholar at FHI), Tomáš Gavenčiak (Independent Researcher who has been funded by EA Grants), and Irena Kotíková (President of CZEA).

For more detail on CZEA's early history and the impact of the donor lottery funds (and other influences), see this detailed account.

EpiFor

In late April 2020, I heard about Epidemic Forecasting—a project launched by people in the EA/Rationality community to inform decision makers by combining epidemic modeling with forecasting. I learned of the funding opportunity through my colleague and friend, Elizabeth Garrett.

The pitch was immediately compelling to me as a 5-figure donor: A group of people I already believed to be impressive and trustworthy were launching a project to use forecasting to help powerful people make better decisions about the pandemic. Even though it seemed likely that nothing would come of it, it seemed like an excellent gamble to make, based on the following possible outcomes:

  • Prevent illness, death, and economic damage by helping governments and other decision makers handle the pandemic better, especially governments that couldn't otherwise afford high-quality forecasting services
  • Highlight the power of—and test novel applications of—an underutilized tool: forecasting (see the book Superforecasting for background on this)
  • Test and demonstrate the opportunity for individuals and institutions to do more good for important causes by thinking carefully (Rationality/EA) rather than relying on standard experts and authorities alone
  • Engage members of our community in an effort to change the world for the better, in a way that will give them some quick feedback—thus leading to deeper/faster learning
  • Cross-pollinate our community with professional fields engaged by EpiFor—possibly improving both those external fields and the EA/Rationalist community

I decided to move forward as quickly as possible; EpiFor was already making decisions that would go differently based on whether they had secured funding or not. In particular, the timing of the funding commitment affected how many superforecasters and software engineers they could afford to hire and onboard, and it enabled them to make the transition from a part-time/volunteer organization to a full-time salaried staff. It also seemed like some of the standard institutional donors in the community had pre-determined funding cycles that might take much longer to commit funding—and that even accelerating the project by a week might be quite valuable, especially in the early days of the pandemic.

With that in mind it was an easy call for me to make, and I committed the remaining $23,500 from the donation lottery, as well as some personal funds on top of that. Notably, EpiFor is now conducting its next funding round, and I continue to suspect that more donations may have a substantial (though high variance) impact—particularly since funding is currently affecting which opportunities they pursue.

The project's concrete outputs so far include some research (cited in a Vox article yesterday) and being short-listed by pharmaceutical companies looking for help designing vaccine trials.

Looking Back on the Donor Lottery

Some observations from my experience:

  • Using the donor lottery to turn a ~$5k donation into a ~$46k donation meant the difference between spending a couple hours deciding among organizations I already knew, and actually looking for projects that would behave noticeably differently because of my money.
  • The donor lottery ended up slowing down my donation, because at the larger level I was no longer satisfied just giving to well-known opportunities that seemed to be getting the money they need from Open Phil or other institutional grants. It ultimately took a little over 3 years for me to distribute the donations. Depending on discount rates, that delay might have been a significant cost.
  • In the three years I continued to hold (at least some of) the money, the nominal value increased by almost $3k, but I think it would have increased at least $5k more if I had been paying more attention (simply putting it all in an S&P 500 index fund until Feb 2020, as I did with most of my 'own' savings). This is the clearest/simplest mistake I made.
  • Since I had only used about half of the winnings after two years, I considered plowing the rest of it back into the 2019 donor lottery, but ultimately I did not. In hindsight I believe it was good that I didn't—since in expectation the money would have been less impactful than funding EpiFor—but I don't think the choice was obvious at the time.
  • Even after I found the CZEA opportunity, I wasn't sure how easy it would be for me to find attractive funding opportunities that weren't already sufficiently funded by institutional donors with heavily overlapping values (and more time to research and solicit applications) such as Open Phil, LTFF, and EA Grants.
  • I have come to believe that living and working in the EA/Rationality community in the Bay Area made it much more likely I would hear about attractive opportunities that weren't yet funded by larger donors. I have also updated that there does seem to be more of a niche for 5-to-6-figure long-termist/EA donors in the community than I had originally thought (perhaps only for people that are well-positioned to hear about the opportunities).
  • Now having found such opportunities twice in three years, I'm guessing I could find more of them at a rate of one per year or more, especially if I did more to advertise my desire to find 5-figure donation opportunities. Speaking of which, while I've finished distributing the donor lottery funds, I'm continuing to seek 5-figure donation opportunities for (at least) the rest of 2020. See details below.
  • Neither of the organizations that received donations knew I was looking for donation opportunities until a friend or I found them—I believe that not doing more to advertise my interest in making donations was likely the biggest mistake I made. Hence my final section:
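A back-of-the-envelope sketch of the opportunity-cost point in the third bullet above. All figures here are rough assumptions for illustration (the remaining balance, holding period, and assumed index return are my guesses, not the actual account values):

```python
# Sketch of the opportunity cost of holding donor lottery winnings in
# cash-like instruments rather than an index fund. All figures are
# rough assumptions for illustration, not the actual account values.

def future_value(principal: float, annual_return: float, years: float) -> float:
    """Compound a lump sum at a fixed annual rate of return."""
    return principal * (1 + annual_return) ** years

remaining = 23_500       # hypothetical: funds still held after the CZEA grant
years_held = 3           # roughly how long the funds sat
index_return = 0.10      # assumed S&P 500 annualized return for the period

index_gain = future_value(remaining, index_return, years_held) - remaining
actual_gain = 3_000      # the ~$3k nominal increase reported above

print(f"Hypothetical index gain: ${index_gain:,.0f}")
print(f"Foregone vs. actual:     ${index_gain - actual_gain:,.0f}")
```

With these assumed numbers the foregone growth lands in the same ballpark as the "at least $5k more" figure; the true number depends on the actual dates, balances, and instruments involved.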

Looking for more projects like these

If you are launching (or know about) a project that you believe may have a strong EA impact and has room for more funding, I'd be happy to hear about it. I'm most interested in projects that have:

  • One or more leaders with a track record of making good stuff happen
  • Obviously low downside risk (I think projects that risk doing real harm are better suited for giving mechanisms that include a solid due diligence process and have multiple reviewers, such as institutional grantmakers)
  • 501(c)(3) status or sponsorship (a way for me to get a U.S. tax deduction). I'm also open to non-tax-deductible opportunities that are sufficiently attractive.
  • Some reason to prefer an individual donor like me rather than institutional donors like Open Phil, LTFF, EA Grants, and GiveWell, such as high time-sensitivity.
Comments

If you are launching (or know about) a project that you believe may have a strong EA impact and has room for more funding, I'd be happy to hear about it. 

 

I'd be bad at my job if I didn't mention that Rethink Priorities is looking for money ;)

Some reason to prefer an individual donor like me rather than institutional donors like Open Phil, LTFF, EA Grants, and GiveWell

Worth noting that one reason, as an individual donor, to support an organization that benefits from institutional giving is that there are only so many of these institutions, and they're frequently only willing to be so much of an organization's budget. This can actually allow an individual to act almost as a 1:1 match by unlocking more institutional funding.

Registering predictions:

1) You will hear about 10-50 EA projects looking for funding, over the next 2 months (80%).

2) >70% of these projects will not be registered tax-deductible charities (but might be able to get fiscal sponsorship). (80%)


Becoming a registered charity is a lot of work. It would be interesting for someone to look into when it is and isn't worth the time investment.

How did these predictions resolve?

I don't know. I totally forgot about this. Tell me if you find out what the outcome is. Unfortunately I will not make it a priority to find out, but I would appreciate knowing.

Effective Thesis is looking for funding. I believe the downside risk is very small, and we could likely find a way to ensure U.S. tax deductibility. Since this is a small meta project, it's not that easy to find institutional support, and individual donations might thus have quite a large impact on the continuation of this project.

Looking for more projects like these

CEEALAR (formerly the EA Hotel) is looking for funding to cover operations from Jan 2021 onward.

With that in mind it was an easy call for me to make, and I committed the remaining $23,500 from the donation lottery, as well as some personal funds on top of that. Notably, EpiFor is now conducting its next funding round, and I continue to suspect that more donations may have a substantial (though high variance) impact—particularly since funding is currently affecting which opportunities they pursue.

Thanks for sharing Tim. If anyone would like to discuss a potential donation of $5,000 or more, please feel free to reach out to me at josh@epidemicforecasting.org

Thank you for writing this up. I think it would be helpful if this post (and future debriefs like it) were linked to from the donor lottery page.

Looking for more projects like these

AI Safety Support is looking for both funding and fiscal sponsorship. We have two donation pledges which are conditional on the donations being tax-deductible (one from Canada and one from the US). But even if we solve that, we still have a bit more room for funding.

The money will primarily be used to pay salaries for me and JJ Hepburn.

AI Safety Support's mission is to help aspiring and early-career AI Safety researchers in any way we can. There are currently lots of people who want to help with this problem but who don't have the social and institutional support from organisations and people around them.

We are currently running monthly online AI Safety discussion days, where people can share and discuss their research ideas, independent of their location. These events are intended as a complement to the Alignment Forum and other written forms of publication. We believe that live conversation is a better way to share early-stage ideas, and that blog posts and papers come later in the process.

We also have other projects in the pipeline, e.g. our AI Safety career bottleneck survey. However, these are currently on hold until we've secured enough funding to know we will be able to keep going for at least one year (to start with).

AI Safety Support has only existed since May, but both of us have a track record of organising similar events in the past, e.g. AI Safety Camps.

I have come to believe that living and working in the EA/Rationality community in the Bay Area made it much more likely I would hear about attractive opportunities that weren't yet funded by larger donors

I am sceptical about this. There are *lots* of non-Bay-area projects, and my impression (low confidence) is that it is harder for us to get funding. This is because even the official funding runs mostly on contacts, so funders also mostly fund stuff in the hubs.

I know of two EA projects (not including my own) which I think should be funded, and I live in Sweden.

You could both be right. My impression is that there are a whole bunch of ambitious people in the Bay, so being there for funding has advantages. I also think that non-Bay ventures are fairly neglected. Overall I (personally) would like to see more funding and clarity in basically all places. 

Also, note that the two ventures Tim funded were non-Bay ventures. Bay connections are useful even for understanding international projects.

I am quite curious to understand the funding situation among 'EA startup projects' better. Perhaps the survey Jade Leung recently conducted as part of her incubator project will help shed light on this.

My questions/confusions include:

  • How many EA projects that 'should' get funding don't—and for that reason don't happen? (i.e. what's the 'false negative' rate for our community answering the question, "Should X startup project be funded?")
  • What are the biggest costs of funding too many such projects?
    • Is the cost of 'false positives' primarily just the money lost?
    • Should we model the time spent by the founders etc as a major cost? (I suspect not—because I would guess that doing projects like these are a great way to increase the skills of the founders, regardless of project success.)
    • Are there significant downside risks for projects with no obvious way to do real harm—such as attracting less-aligned founders into the community?
  • Is it valuable to the success of the project to require certain things in the funding application process (e.g. a solid business plan)? (I have generally attempted to create as little extra work for donees as possible, but I could imagine the right application process being helpful.)

I suspect that once we answer these questions, the issue of Bay/Hub (or not) will sort of fade away, though there will still be questions of the best ways to get the best would-be-projects to actually happen, and connect them with the right funders.

You are correct that people in the Bay can find out about projects in other places. The projects I know about are also not in the same location as me. I don't expect being in the Bay gives an advantage for finding out about projects in other places, but I could be wrong.

When it comes to projects in the Bay, I would not expect people who lack funding to be there in the first place, given that it is ridiculously expensive. But I might be missing something? I have not investigated the details, since I'm not allowed to just move there myself, even if I could afford it (visa reasons; I'm Swedish).

I think a lot of people in the Bay lack funding.

Then maybe all these people should band together and start a new hub, literally anywhere else. Funding problem mostly solved.

If people are not seriously trying this, then it's hard for me to take seriously any claims of a lack of funding. But as I said, I might be missing something. If so, please tell me.

Earning potential goes down with distance to the Bay (less so in COVID times, but even then that is still true, as many companies still adjust their salaries based on cost of living), which matters because people have friends and spouses who don't want to live an ascetic EA lifestyle.

Also, many, if not most of these projects could not be started outside of the Bay or any of the other global hubs, because they benefit from being part of an ecosystem of competent people. You could maybe pull them off in other major global cities (like New York, London, Hong Kong, Tokyo), but the rent prices won't differ that much between them, because the demand for being close to all the other good people drives prices up. The best people are in the big cities because that's where the other good people are. Not moving to one of the hubs of the world is for most people career suicide, and in general I am much more optimistic about projects and organizations that are located in one of the global talent hubs, because they get to leverage the local culture, service ecosystem, talent availability and social networks that come with those hubs that extend far beyond what the EA and Rationality communities can provide on their own. 

I know that my effectiveness would have dropped drastically had I moved out of a global hub, and my overall impact trajectory would have been much worse, so I am hesitant to recommend that anyone else do so, at least for the long term (I think temporarily moving to lower cost places is a good strategy for many people, and many should consider it, but it's not really solving the funding problem much, since I don't really think people should do that for more than 6 months, or maybe at the most a year).

Edit: Also COVID changes all of this at least a bit, though I don't really know how much and for how long. But it seems likely to me that the overall trends here are pretty robust and we will continue seeing high prices in the places where I would want people to be located.

Looking for more projects like these

AI Safety Camp is seeking funding to professionalise management.

Feel free to email me at remmelt[at]effectiefaltruisme.nl. Happy to share an overview of past participant outcomes + sanity checks, and a new strategy draft.
