Name: Patrick Brinich-Langlois


Pronouns: they/she/he

Ice-cream flavor: raspberry sorbet

Toothpaste: Colgate (original)

EA Infrastructure Fund: Ask us anything!

I emailed CEA with some questions about the LTFF and EAIF, and Michael Aird (MichaelA on the forum) responded about the EAIF. He said that I could post his email here. Some of the questions overlap with the contents of this AMA (among other things), but I included everything. My questions are formatted as quotes, and the unquoted passages below were written by Michael.

Here are some things I've heard about LTFF and EAIF (please correct any misapprehensions):

You can apply for a grant anytime, and a decision will be made within a few weeks.

Basically correct. Though some decisions take longer, mainly for unusually complicated, risky, and/or large grants, or grants where the applicant decides in response to our questions that they need to revisit their plans and get back to us later. And many decisions are faster. 

The application process is meant to be low-effort, with the application requiring no more than a few hours' work. 

Basically correct, though bear in mind that this doesn't necessarily include the time spent actually doing the planning. We basically just don't want people to spend >2 hours actually writing the application, but it'll often make sense to spend >2 hours, sometimes much more, on the planning itself.

The funds don't put many resources into evaluation, which is ad hoc and focuses on the most-controversial grants—the goal is to decide whether to make more such grants in the future. (Question: how do you decide whether a controversial grant was successful?) [Author's note: I was unclear here—I was asking about post-hoc evaluation, but Michael's answer is about evaluating grant applications.]

These statements seem somewhat fuzzy, so it's hard to say if I'd agree. Here's what I'd say:

  • My understanding is that we tend to spend something like 1 hour per $10k of grants made. (I haven't actually checked this, but I'm pretty sure it'd be the right order of magnitude at least.)
  • When I joined EAIF, I was surprised by that and felt kind-of anxious or uncomfortable about it, but overall I do think it makes sense.
  • We tend to spend more time on grants that are larger, have at first glance higher upside potential plus higher downside risk, and/or are harder to evaluate for some reason (e.g., they're in areas the fund managers are less familiar with, or the plan is pretty complex).
  • I don't think I'd say that what grants we focus more time on is driven by deciding whether to make more grants of that type in the future.

The typical grant is small and one-off (more money requires a new application), and made to an individual. Grants are also made to organizations, and these might be a little bigger but still on the small side (probably not more than $300k).

I guess this is about right, but:

  • "small" is ambiguous. Some specific numbers: Grants I've been involved in evaluating have ranged (if I recall correctly) from ~$5k to ~$400k, and there are two ~$250k grants I recommended and that were made. People can definitely apply for larger grants, but often it'd make more sense for another funder to evaluate and fund those.
  • We do make quite a few grants to organizations.
  • You could compile info on individuals vs orgs and on grant sizes from the public payout reports.

Your specific questions:

How many grants come through channels other than people applying unbidden (e.g., referrals/nominations by third parties or active grantmaking by fund managers)? What's the most common such channel?

  • I don't have these numbers (possibly someone else does), but I'd fairly confidently guess that at least 10% of applicants whose applications are approved had at some earlier point been specifically encouraged to apply by someone (whether a fund manager or not).
  • I'm not sure your question uses a useful way of carving up the space of possibilities. Many people seem to apply in response to fund managers publicly or semi-publicly encouraging people in general to apply, e.g. via Forum posts or posts in relevant Slack workspaces. Many others presumably apply after 80k advisors or community builders encourage them to. I guess I mean it seems likely that some active promotion effort was involved in the vast majority of applications received, but that effort can vary a lot in terms of how targeted it is, who it's from, etc.

The LTFF's fund managers all have backgrounds in AI or CS. Is the process for evaluating grants in areas outside the managers' areas of expertise any different?

  • I don't know since I'm on the EAIF, but I'm also not sure this is quite the right question to ask. I don't think it's really like there's a set of three different pre-specified processes that are engaged under different conditions; it's more ad hoc than that. And there could be many AI/CS projects that are outside their area of expertise and many non-AI/CS projects inside their area of expertise (e.g., my understanding is that Oliver and Evan both have experience trying to do things like building research talent pipelines / infrastructure / mentorship structures, so they'd have some expertise relevant to projects focused on doing that for non-AI issues).
  • Another thing to note is that some guest fund managers earlier this year had other backgrounds.
  • I do think it can be problematic to have all fund managers have too narrow a range of areas of expertise and interest, and I think EA Funds sometimes arguably has that. But I also think this is mostly an unfortunate result of talent constraints. And I also think the guest manager system has helped mitigate this, and that the existing permanent fund managers' areas of expertise aren't super overlapping.

What's the role of the advisers to the LTFF and EAIF listed on the website? Do managers commonly discuss grants with people not listed on the website (e.g., experts at other nonprofits)?

  • Advisors other than Nicole Ross are only involved in maybe something like 10% of grant evaluations, and usually just for quite quick input. They're also sometimes involved in higher-level strategic questions, and sometimes they proactively suggest things (e.g., maybe we should reach out to X to ask if they want to apply to EA Funds or to ask if a larger grant would be useful since they seem to be relying on volunteers).
  • Nicole Ross checks recommended grants for possible issues of various kinds before the grant is actually made. I think it's pretty rare that this actually changes a grant decision, but sometimes it results in further discussion with applicants that helps double-check or mitigate the potential issues.
  • Fund managers very often discuss grants with specific people not listed on the website. I'd guess that an average of ~3 external people are asked for input on each grant that ends up being approved. (Sometimes 0, often >5.) This is done in an ad hoc way based on the key uncertainties about that particular grant. We also explicitly ask that these consulted people keep the fact that the applicant applied confidential.

What's the process for a grant's being approved or rejected? E.g., can a primary grant evaluator unilaterally reject a grant? Do grants have to be unanimously approved by all managers? Do all managers have a say in all grants?

  • By default, all fund managers on a given Fund get 5 days in which to vote after a grant is put up for vote by the primary evaluator. Then the final decision is based on whether the average of the votes exceeds a particular threshold. On the EAIF, this average is a weighted average, with the primary evaluator having a weight of 2 by default and everyone else having a weight of 1 by default.
  • Usually only ~2 people actually give a vote, in my experience.
  • Usually the final decision matches the one the primary evaluator recommended.
  • Sometimes the voting period is shortened if a grant is time-sensitive.
  • Sometimes a given fund manager recuses themselves due to possible conflicts of interest, in which case they don't vote and may also be removed from the doc with notes and such.
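The voting rule described above (double weight for the primary evaluator, approval when the weighted average exceeds a threshold) can be sketched as follows. This is a hypothetical illustration, not EA Funds' actual implementation; the manager names, the vote scale, and the threshold of 0 are all assumptions.

```python
# Hypothetical sketch of the voting rule described above: each fund
# manager votes, the primary evaluator's vote gets double weight, and
# the grant is approved if the weighted average exceeds a threshold.
# Names, the vote scale, and the threshold are illustrative assumptions.

def grant_decision(votes, primary, threshold=0.0, primary_weight=2.0):
    """votes: dict mapping manager name -> numeric vote.
    Returns True if the weighted average exceeds `threshold`."""
    total = 0.0
    weight_sum = 0.0
    for manager, vote in votes.items():
        w = primary_weight if manager == primary else 1.0
        total += w * vote
        weight_sum += w
    return total / weight_sum > threshold

# Example: primary evaluator votes +3, one other manager votes -1.
# Weighted average = (2*3 + 1*(-1)) / 3 = 5/3 > 0, so approved.
print(grant_decision({"alice": 3, "bob": -1}, primary="alice"))  # True
```

One consequence of this design is that a strongly positive primary evaluator can carry a grant past a lukewarm co-voter, which fits the observation that the final decision usually matches the primary evaluator's recommendation.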

What are the motivations for having guest managers—increased capacity, identifying or training promising grantmakers, diversity of viewpoints?

  • This is discussed in some recent AMAs, if I recall correctly.
  • We also now have an assistant fund manager on the EAIF, helping Buck with his evaluations. I personally think this is a great move, for all 3 reasons you mentioned, just as I think the guest fund manager role was a good thing to have created.

I know that sometimes you give feedback to unsuccessful grant recipients. What does this feedback look like—e.g., is it a 3-sentence email, or an arbitrarily long phone conversation with the primary evaluator?

  • Basically, either, or anything in between, though I think "arbitrarily long" seems unlikely - I'd guess it's rarely or never been a >1 hour phone call.

What processes do you have to learn from mistakes or sub-optimal decisions?

  • We get reports from grantees on their progress etc. - though I don't think we actually heavily use this to improve over time
  • I personally make forecasts relevant to most grants I recommend before the grants are made, and I plan to look back at them later to see how calibrated I was and what I can learn from that. I think some other people do this as well, but I think most don't, and unfortunately I've come to feel that that's reasonable given time constraints. (I think this is a shame, and that more capacity such that we could do that and various other things would be good, but there's a severe talent constraint.)
  • There are various ad hoc and/or individual-level things
  • There may be things Jonas, fund chairs, and/or permanent fund managers do that I'm not aware of
  • We've discussed whether and how to better evaluate our performance and improve over time, what we'd want to learn, etc. I think this is something people will continue to think more about. I personally expect there's more we should be doing, but it's not super obvious that that's the case (there are also many other good things we could do if we were willing to spend extra hours on something new), nor precisely what it'd be best to do.
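The forecast look-back mentioned above (making probabilistic predictions about grants and later checking calibration) is often scored with a Brier score. Here is a minimal sketch; the forecasts and outcomes are made-up illustrations, not real grant data.

```python
# Rough sketch of a look-back calibration check: compare forecast
# probabilities with actual outcomes using a Brier score (lower is
# better; always guessing 50% scores 0.25). The numbers below are
# made-up illustrations, not real grant forecasts.

def brier_score(forecasts, outcomes):
    """forecasts: predicted probabilities in [0, 1];
    outcomes: 1 if the event happened, else 0."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

forecasts = [0.9, 0.7, 0.3]  # e.g., P(grantee completes the project)
outcomes = [1, 1, 0]
print(round(brier_score(forecasts, outcomes), 3))  # 0.063
```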
Comments for shorter Cold Takes pieces

So I believe we're simply not judging more recent art works by the same standards, resulting in a huge bias towards older works.

Why is it wrong to credit past art for innovations that have since become commonplace? If a musician's innovations became widespread, I would count that as evidence of the musician's skill. Similarly, Euclid was a big deal even though there are millions of people who know more math today than he did.

Beethoven is only noteworthy because his works are a cultural meme at this point - he was a great musician for his time, sure, but right now there's probably tens of thousands of musicians who could make music of the same caliber straight on their laptops. Today's Beethoven publishes his amazing tracks on SoundCloud and toils in obscurity.

This sounds like an extreme overstatement, at least if applied to classical music. Some modern classical music is pretty good, and better than Beethoven's less-acclaimed works. And the best of it is probably on par with Beethoven's greatest hits. But much of it is unmemorable—premiered, then mercifully forgotten. The catalog of the Boston Modern Orchestra Project is representative of modern classical orchestral music, and I think most of it falls far short of Beethoven's best symphonies. The concertgoing public strongly prefers the old stuff, to the consternation of adventurous conductors.

Democratising Risk - or how EA deals with critics

One reason it might be a reductio ad absurdum is that it suggests that in an election in which supporters of one side were rational (and thus would not vote, since each of their votes would have a minuscule chance of mattering) and the others irrational (and would vote, undeterred by the small chance of their vote mattering), the irrational side would prevail.

If this is the claim that John G. Halstead is referring to, I regard it as a throwaway remark (it's only one sentence plus a citation):

For instance, a simple threshold or plausibility assessment could protect the field’s resources and attention from being directed towards highly improbable or fictional events.

Democratising Risk - or how EA deals with critics

I would've found it helpful if the post included a definition of TUA (as well as saying what it stands for). Here's a relevant excerpt from the paper:

The TUA [techno-utopian approach] is a cluster of ideas which make up the original paradigm within which the field of ERS [existential-risk studies] was founded. We understand it to be primarily based on three main pillars of belief: transhumanism, total utilitarianism and strong longtermism. More precisely: (1) the belief that a maximally technologically developed future could contain (and is defined in terms of) enormous quantities of utilitarian intrinsic value, particularly due to more fulfilling posthuman modes of living; (2) the failure to fully realise or have capacity to realise this potential value would constitute an existential catastrophe; and, (3) we have an overwhelming moral obligation to ensure that such value is realised by avoiding an existential catastrophe, including through exceptional actions.

What would you do if you had half a million dollars?

Re patient philanthropy funds: Spending money on research rather than giving money to a fund does seem more focused and efficient. I think there are limits to how much progress you can make with research (assuming that research hasn't ruled the idea out), so it does make sense to try creating such a fund at some point. Some issues would become apparent with even a toy fund (one with a minimal amount of capital produced as an exercise). A real fund that has millions of dollars would be a better test of the idea, but whether contributing to such a fund is a good use of money is less clear to me now.

What would you do if you had half a million dollars?

In general, it kind of seems like the "point" of the lottery is to do something other than allocate to a capital allocator. The lottery is "meant" to minimise work on selecting a charity to give to, but if you're happy to give that work to another allocator I feel like it makes less sense?

When I entered the lottery, I hadn't given much thought to what I'd do if I won—I was convinced by the argument that giving to the lottery dominated giving to the LTFF (for example), since if I won the lottery I could just decide to give the money to the LTFF. I think you're right that it makes less sense to enter the donor lottery if you think you'll end up giving the money to a regranting organization, but I think it still makes some sense.

Lottery again! You could sponsor CEA to do a $1m lottery. If you thought it was worth it for $500k, surely it would be worth it for $1m!

Someone else suggested that to me a while ago, but I'm not sure how much it would change things—if I don't have interesting ideas about what to do with $500k, I probably wouldn't have interesting ideas about what to do with $1m. There would also be some overhead to setting up another lottery.

Be quite experimental, give largish grants to multiple young organisations, see how they do, and then direct your ordinary giving toward them in the future. This money can buy access to more organisations, and setup relationships for your future giving.

Thanks for suggesting that—it seems like an idea worth considering for at least a portion of the money.

EARadio - more EA podcasts!

velutvulpes, could you update the RSS link to point to the new feed? I'm working on migrating to a new podcast host (Buzzsprout). The old feed currently redirects there, but my understanding is that it will stop redirecting after I complete the migration.

Semi-regular Open Thread #35

This probably shouldn't be your first EA podcast. This is not so much because the content is difficult, but because it has relatively low production value (it's just EA conference talks in podcast format). The 80,000 Hours Podcast, Hear This Idea, and The FLI Podcast are more entertaining and polished while still being similarly informative, and I'd recommend listening to those first.

A case against strong longtermism

I will primarily focus on The case for strong longtermism, listed as “draft status” on both Greaves and MacAskill’s personal websites as of November 23rd, 2020. It has generated quite a lot of conversation within the effective altruism (EA) community despite its status, including multiple episodes of the 80,000 Hours Podcast (one, two, three), a dedicated multi-million dollar fund listed on the EA website, numerous blog posts, and an active forum discussion.

"The Case for Strong Longtermism" is subtitled "GPI Working Paper No. 7-2019," which leads me to believe that it was originally published in 2019. Many of the things you listed (two of the podcast episodes, the fund, and several of the blog and forum posts) are from before 2019. My impression is that the paper (which I haven't read) is more a formalization and extension of various existing ideas than a totally new direction for effective altruism.

The word "longtermism" is new, which may contribute to the impression that the ideas it describes are too. This is true in some cases, but many people involved with effective altruism have long been concerned about the very long run.
