
Introduction

  • The Effective Altruism Infrastructure Fund (EAIF) made the following grants as part of its Q1 2021 grant cycle:
    • Total grants: $1,221,178 (assuming all grantees accept the full grants)
    • Number of grants: 26
    • Number of applications (excluding desk rejections): 58
    • Payout date: April 2021
  • We expect that we could make valuable grants totalling $3–$6 million this year. The fund currently holds around $2.3 million. This means we could productively use $0.7–$3.7 million in additional funding above our current reserves, or $0–$2.3 million (with a median guess of $500,000) above the amount of funding we expect to receive by default by this November.
  • This is the first grant round led by the EAIF’s new committee, consisting of Buck Shlegeris, Max Daniel, Michelle Hutchinson, and Ben Kuhn as a guest fund manager, with Jonas Vollmer temporarily taking on chairperson duties, advising, and voting consultatively on grants. For more detail on the new committee selection, see EA Funds has appointed new fund managers.
  • Some of the grants are oriented primarily towards causes that are typically prioritized from a ‘non-longtermist’ perspective; others primarily toward causes that are typically prioritized for longtermist reasons. The EAIF makes grants towards longtermist projects if a) the grantseeker decided to apply to the EAIF (rather than the Long-Term Future Fund), b) the intervention is at a meta level or aims to build infrastructure in some sense, or c) the work spans multiple causes (whether the case for them is longtermist or not). We generally strive to maintain an overall balance between different worldviews according to how plausible they seem to the committee.
  • One report includes an embedded forecast; you can add your own prediction and related comments as you read. We’re interested to see whether we find the community’s prediction informative.
  • The reports from this round are unusually thorough, with the goal of providing more transparency about the thinking of the fund managers.
  • Would you like to get funded? You can apply for funding at any time.
  • If you have any questions for the fund managers not directly related to the grants described here, you’re welcome to ask them in our upcoming AMA.

Highlights

Our grants include:

  • Two grants totalling $139,200 to Emma Abele, James Aung, Bella Forristal, and Henry Sleight. They will work together to identify and implement new ways to support EA university groups – e.g., through high-quality introductory talks about EA and creating other content for workshops and events. University groups have historically been one of the most important sources of highly engaged EA community members, and we believe there is significant untapped potential for further growth. We are also excited about the team, based significantly on their track record – e.g., James and Emma previously led two of the most successful university groups in the world.
  • $41,868 to Zak Ulhaq to develop and implement workshops aimed at helping highly talented teenagers apply EA concepts and quantitative reasoning to their lives. We are excited about this grant because we generally think that educating pre-university audiences about EA-related ideas and concepts could be highly valuable; e.g., we’re aware of (unpublished) survey data indicating that in a large sample of highly engaged community members who learned about EA in the last few years, about ¼ had first heard of EA when they were 18 or younger. At the same time, this space seems underexplored. Projects that are mindful of the risks involved in engaging younger audiences therefore have a high value of information – if successful, they could pave the way for many more projects of this type. We think that Zak is a good fit for efforts in this space because he has a strong technical background and experience with both teaching and EA community building.
  • $5,000 to the Czech Association for Effective Altruism to give away EA-related books to people with strong results in Czech STEM competitions, AI classes, and similar. We believe that this is a highly cost-effective way to engage a high-value audience; long-form content allows for deep understanding of important ideas, and surveys typically find books have helped many people become involved with EA (e.g., in the 2020 EA Survey, more than ⅕ of respondents said a book was important for getting them more involved).
  • $248,300 to Rethink Priorities to allow Rethink to take on nine research interns (7 FTE) across various EA causes, plus support for further EA movement strategy research. We have been impressed with Rethink’s demonstrated ability to successfully grow their team while maintaining a constant stream of high-quality outputs, and think this puts them in a good position to provide growth opportunities for junior researchers. They also have a long history of doing empirical research relevant to movement strategy (e.g., the EA survey), and we are excited about their plans to build upon this track record by running additional surveys illuminating how various audiences think of EA and how responsive they are to EA messaging.

Grant recipients

Grants made during this grant application round:

  • Emma Abele, James Aung, Bella Forristal, Henry Sleight ($84,000): 12 months' salary for a 3-FTE team developing resources and programs to encourage university students to pursue highly impactful careers
  • Emma Abele, James Aung ($55,200): Enabling university group organizers to meet in Oxford
  • Zakee Ulhaq ($41,868): 6–12 months' funding to help talented teenagers apply EA concepts & quantitative reasoning to their lives
  • Irena Kotikova & Jiří Nádvorník, Czech Association for Effective Altruism (CZEA) ($30,000): 6 months’ salaries for two people (0.5 FTE each) to work on development, strategy, project incubation, and fundraising for the CZEA national group; also includes $5,000 in seed funding for incubated projects at the discretion of CZEA
  • Terezie Kosikova, CZEA ($25,000): 12 months’ salary for one person (0.2 FTE) and contractors to work on strategic partnership-building with EA-aligned organizations and individuals in the Czech Republic
  • Jiří Nádvorník, CZEA ($8,300): Creating a short Czech book (~130 pages) and brochure (~20 pages) with a good introduction to EA in digital and print formats
  • Irena Kotikova, CZEA ($5,000): Giving away EA-related books to people with strong results in Czech STEM competitions, AI classes, and similar
  • YouTube channel “Rational Animations” ($30,000): Funding a YouTube channel recently created by two members of the EA community
  • Jeroen Willems ($24,000): Funding for the YouTube channel “A Happier World”, which explores exciting ideas with the potential to radically improve the world
  • Alex Barry ($11,066): 2 months' salary to develop EAIF funding opportunities and run an EA group leader unconference
  • Aaron Maiwald ($1,787): Funding production costs for a German-language podcast devoted to EA ideas
  • Tony Morley, ‘Human Progress for Beginners’ ($25,000): Early-stage grant for a children’s book presenting an optimistic and inspiring history of how the world has been getting better in many ways
  • Pablo Stafforini, EA Forum Wiki ($34,200): 6-month grant to Pablo for leading the EA Forum Wiki project, including pay for an assistant
  • Effective Institutions Project ($20,417): Developing a framework aimed at identifying the world’s most important institutions
  • The Impactmakers ($17,411): Co-financing the 2021 Impact Challenge at the Dutch Ministry of Foreign Affairs, a series of workshops aimed at engaging civil servants with evidence-based policy and effective altruism
  • Effective Environmentalism ($15,000): Strategy development and scoping out potential projects
  • Disputas ($12,000): Funding an exploratory study for a potential software project aimed at improving EA discussions
  • Steven Hamilton ($5,000): Extending a senior thesis on mechanism design for donor coordination
  • Rethink Priorities ($248,300): Compensation for 9 research interns (7 FTE) across various EA causes, plus support for further EA movement strategy research
  • Stefan Schubert ($144,072): Two years of funding for writing a book on the psychology of effective giving, and for conducting related research
  • The Centre for Long-Term Resilience ($100,000): Improving the UK’s resilience to existential and global catastrophic risks
  • Jakob Lohmar ($88,557): Writing a doctoral thesis in philosophy on longtermism at the University of Oxford
  • Joshua Lewis, New York University, Stern School of Business ($45,000): Academic research into promoting the principles of effective altruism
  • Giving Green (an initiative of IDinsight) ($50,000): Improving research into climate activism charity recommendations
  • High Impact Athletes ($50,000): Covering expenses and growth for 2021 for a new nonprofit aimed at generating donations from professional athletes
  • High Impact Athletes ($60,000): Enabling a first hire for a nonprofit aimed at generating donations from professional athletes

Grant reports

Note: Many of the grant reports below are very detailed. If you are considering applying to the fund, but prefer a less detailed report, simply let us know in the application form. We are sympathetic to that preference and happy to take it into account appropriately. Detailed reports are not mandatory.

We run all of our payout reports by grantees, and we think carefully about what information to include to maximize transparency while respecting grantees’ preferences. If considerations around reporting make it difficult for us to fund a request, we are able to refer applicants to private donors whose grants needn’t involve public reporting. We are also able to make anonymous grants.

Grant reports by Buck Shlegeris

Emma Abele, James Aung, Bella Forristal, Henry Sleight ($84,000)

12 months' salary for a 3-FTE team developing resources and programs to encourage university students to pursue highly impactful careers

This grant is the main source of funding for Emma Abele, James Aung, Bella Forristal, and Henry Sleight to work together on various projects related to EA student groups. The grant mostly pays for their salaries. Emma and James will be full-time, while Bella and Henry will be half-time, thus totalling 3 FTE.

The main reason I’m excited for this grant is that I think Emma and James are energetic and entrepreneurial, and I think they might do a good job of choosing and executing on projects that will improve the quality of EA student groups. Emma and James have each previously run EA university groups (Brown and Oxford, respectively) that are generally understood to be among the most successful such groups. James co-developed the EA Student Career Mentoring program, and Emma ran an intercollegiate EA projects program. I’ve been impressed with their judgment when talking to them about what kinds of projects in this space might produce value, and they have a good reputation among people I’ve talked to.

I think it would be great if EA had a thriving ecosystem of different projects which are trying to provide high-quality services and products to people who are running student groups, e.g.:

  • Traveling around and giving really high-quality introductory talks about EA (which Bella will be doing)
  • Creating the content for workshops and events which student groups can run
  • Providing support and holding events for people who are running student groups

CEA is working on providing support like this to university groups (e.g., they’re hiring for this role, which I think might be really impactful). But I think that it’s so important to get this right that we should be trying many different projects in this space simultaneously. James and Emma have a strong vision for what it would be like for student groups to be better, and I’m excited to have them try to pursue these ideas for a year.

In order to evaluate an application for grant renewal, I’ll try to determine whether people who run student groups think that this team was particularly helpful for them, and I’ll also try to evaluate content they produce to see if it seems high-quality.

Emma Abele, James Aung ($55,200)

Enabling university group organizers to meet in Oxford

We're also providing funding for about ten people who work on student groups to live together in Oxford over the summer. This is a project launched by the team described in the previous report. Concretely, Emma and James will run this project (while Bella and Henry won't be directly involved). The funding itself will be used for travel costs and stipends for the participants, as Emma and James's salaries are covered by the previous grant.

I am excited for this because I think that the participants are dedicated and competent EAs, and it will be valuable for them to know each other better and to exchange ideas about how to run student groups effectively. A few of these people are from student groups that aren't yet well established but could be really great if they worked out; I think that these groups are noticeably more likely to go well given that some of their organizers are going to be living with these experienced organizers over the summer.

Zakee Ulhaq ($41,868)

6–12 months' funding to help talented teenagers apply EA concepts & quantitative reasoning to their lives

Zakee (“Zak”) is running something roughly similar to an EA Introductory Fellowship for an audience of talented high schoolers, and will also probably run a larger in-person event for the participants in his fellowships. Most of this grant will pay for Zak’s work, though he may use some of it to pay others to help with this project.

Zak has this opportunity because of a coincidental connection to a tutoring business which mostly works with high school students whose grades are in about the top 1% of their cohorts.

I think that outreach to talented high schoolers seems like a plausibly really good use of EA money and effort, because it’s cheaper and better in some ways than outreach to talented university students.

I think Zak seems like a good but not perfect fit for this project. He has teaching experience, and he has a fairly strong technical background (which in my experience is helpful for seeming cool to smart, intellectual students). I’ve heard that he did a really good job improving EA Warwick. Even if this project largely fails, I think it will likely turn out to have been worth EAIF’s money and Zak’s time. That’s because it will teach Zak useful things about how to do EA movement building and high school outreach more specifically, which could be useful if he either tries again or can give good advice to other people.

Projects from the Czech Association for Effective Altruism (CZEA)

Irena Kotikova & Jiří Nádvorník

$30,000: 6 months’ salaries for two people (0.5 FTE each) to work on development, strategy, project incubation, and fundraising for the CZEA national group

This grant funds Irena Kotikova and Jiří Nádvorník (who run CZEA) to spend more time on various other projects related to the Czech EA community, e.g., fundraising and incubating projects.

I mostly see this grant as a gamble on CZEA. In the world where this grant was really good, it’s probably because CZEA winds up running many interesting projects (that it wouldn’t have run otherwise), which have positive impact and teach their creators lots of useful stuff. The grant could also help Irena and Jiri acquire useful experience that other EAs can adopt.

Someone I trust had a fairly strong positive opinion of this grant, which made me more enthusiastic about it.

$25,000: 12 months’ salary for one person (0.2 FTE) and contractors to work on strategic partnership-building with EA-aligned organizations and individuals in the Czech Republic

CZEA is fairly well-connected to various organizations in Czechia, e.g., government organizations, nonprofits, political parties, and companies. They want to spend more time running events for these organizations or collaborating with them.

I think the case for this grant is fairly similar to the previous case – I’m not sure quite how the funds will lead to a particularly exciting result, but given that CZEA seems to be surprisingly well connected in Czechia (which I found cool), it seems reasonable to spend small amounts of money supporting similar work, especially because CZEA’s team might learn useful things in the process.

Jiří Nádvorník

$8,300: Creating a short Czech-language book (~130 pages) and brochure (~20 pages) with a good introduction to EA in digital and print formats

This grant pays for CZEA to make high-quality translations of articles about EA, turn those translations into a brochure and a book, and print copies of them to give away.

I think that making high-quality translations of EA content seems like a pretty good use of money. (I think that arguments like Ben Todd’s against translating EA content into other languages apply much more to Chinese than to many other languages.) I am aware of evidence suggesting that EAs who are non-native English speakers are selected to be unusually good at speaking English compared to their peers, which is evidence that we’re missing out on some of their equally promising peers.

It seems tricky to ensure high translation quality, and one of the main ways this project might fail is if the translator contracted for the project does a poor job. I’ve talked about this with Jiri a little and I thought he had a reasonable plan. In general, I think CZEA is competent to do this kind of project.

Irena Kotikova

$5,000: Giving away EA-related books to people with strong results in Czech STEM competitions, AI classes, and similar

This grant provides funds for CZEA to give away copies of books related to EA to talented young people in Czechia, e.g. people who do well in STEM competitions.

I think that giving away books seems like a generally pretty good intervention:

  • Books are a common influence on people who get involved in EA. When the 2020 EA Survey asked about important factors in respondents’ involvement, 23% of respondents cited books. This question was worded differently in the 2019 survey – asking about “books, articles, or blog posts” – and 30% of respondents chose that as one of their answers.
  • Books seem more likely to provoke deep engagement than most other resources; someone who finishes a book will have spent more time thinking about EA-related ideas than someone who watches a video or reads an article.

I also think that CZEA seems to be competent at doing this kind of project. So it seems like a solid choice.

YouTube channel “Rational Animations” ($30,000)

Funding a YouTube channel recently created by two members of the EA community

Rational Animations (which I’ll abbreviate RA) is a new YouTube channel created by members of the EA community.

The case for this grant:

  • I think that Robert Miles produces a lot of value with his YouTube channel, via a few different mechanisms:
    • Causing people to hear about AI safety
    • Causing people who’ve heard five hours of content on AI safety to hear another hour of content, which I suspect is pretty helpful for getting people more involved
    • Making it easier for engaged people to understand recent AI safety progress (this kind of benefit is also one of the main reasons I am enthusiastic about the 80,000 Hours podcast)
  • Currently, RA’s videos are nowhere near as well-made as Robert Miles’s. But I think that RA is sufficiently likely to get better that it’s worth their time to keep making videos for a while. Key reasons for my enthusiasm:
    • The people behind RA seem to be dedicated and enthusiastic, and have a clear understanding of EA ideas. I can imagine them growing to make videos that had interesting and original perspectives on EA-related topics.
    • They seem to be pretty interested in receiving feedback about how their work is going, which makes me less worried that they’ll produce low-quality or harmful content.
    • They have a connection to someone who may be able to give them useful advice about being successful YouTubers.
  • Even if the channel never produces much strong content (which seems like the most likely outcome), the creators will still be developing experience in video production, which might be useful for any number of other EA projects.
  • If RA applies for a grant renewal, I’ll assess their work mostly on the basis of whether it seems to be high-quality, rather than e.g. based on the number of views or subscribers their channel has.

Jeroen Willems ($24,000)

Funding for the YouTube channel “A Happier World”, which explores exciting ideas with the potential to radically improve the world

A Happier World is a YouTube channel run by Jeroen Willems, who recently graduated with a master's degree in television directing. He wrote an EA Forum post about the project here.

The argument for this grant is similar to the argument for Rational Animations: EA-related YouTube channels might produce a bunch of value, and Jeroen seems to be capable of making good content. I was very impressed by the video about pandemics he made as part of his degree, and I hope this grant will give him the time and incentive to improve his skills further.

Alex Barry ($11,066)

2 months' salary to develop EAIF funding opportunities and run an EA group leader unconference

Alex previously worked on EA group support at CEA.

This grant funds Alex to work on some combination of two different things, chosen at his discretion:

  • Developing ideas for projects that the EAIF should fund, finding people who might do those projects, and encouraging them to apply for funding.
    • I think Alex is fairly good at thinking of potentially promising projects, and might come up with some clever ideas that turn out to be valuable.
    • Alex will also spend a bit of his time doing short versions of some of these projects, both for the direct impact and to test whether the projects seem promising enough for someone else to scale up.
    • This half of the grant is kind of a long shot. It’s hard to produce value with such a nonspecific goal. But I’ve heard some interesting ideas along these lines from Alex before, which made me more enthusiastic about it.
  • Running an unconference-style event for EA student group leaders. (This grant just pays for Alex’s time; we’d obviously need to provide more money to pay for the event itself, and we’d be open to considering a future funding application for this purpose.)
    • Alex has run events like these before, and therefore seems likely to be able to competently run these unconference-type events.
    • I think that these events are pretty valuable, and I am glad to pay for more of them to happen.

Aaron Maiwald ($1,787)

Funding production costs for a German-language podcast devoted to EA ideas

This grant provides a little bit of funding to cover some expenses for a new podcast in German about EA ideas, by Aaron Maiwald and Lia Rodehorst. Lia has experience working on podcasts and doing science journalism.

This grant seemed like a reasonable opportunity because it wasn’t very much money and it seems plausible that they’ll be able to make some good content. In order to get a grant renewal, I’d want to see that the content they’d produced was in fact good, by asking some German speakers to review it for me.

Grant reports by Max Daniel

General thoughts on this cycle of grants

My most important uncertainty for many decisions was where the ‘minimum absolute bar’ for any grant should be. I found this somewhat surprising.

Put differently, I can imagine a ‘reasonable’ fund strategy based on which we would have made at least a few more grants; and I can imagine a ‘reasonable’ fund strategy based on which we would have made significantly fewer grants this round (perhaps fewer than 5 grants across all fund managers).

Tony Morley, ‘Human Progress for Beginners’ ($25,000)

Early-stage grant for a children’s book presenting an optimistic and inspiring history of how the world has been getting better in many ways

This is an early-stage grant to support the production of a children’s book aimed at presenting history from a ‘progress studies’ and ‘new optimist’ perspective: That is, highlighting the many ways in which the world and human well-being have arguably massively improved since the Industrial Revolution.

The prospective author was inspired by the success of the children’s book Good Night Stories for Rebel Girls, and specifically envisions a book for children below age 12 with about 100 pages, each double page featuring a large illustration on one page and 200–500 words of text on the other.

The grant’s central purpose is to pay for professional illustrator Ranganath Krishnamani to be involved in the book project. The book’s prospective author, Tony Morley, is planning to work on the book in parallel with his job and has not asked for a salary. However, I view this grant primarily as a general investment into the book’s success, and would be happy for Tony to use the grant in whatever way he believes will best achieve this goal. This could include, for example, freeing up some of his time or paying for marketing.

The idea of funding a children’s book was initially controversial among fund managers. Stated reasons for skepticism included unclear benefits from an EA perspective; a long ‘lag time’ between educating young children and the time at which the benefits from that education would materialize; and reputational risks (e.g., if the book was perceived as objectionably influencing children, or as objectionably exposing them to controversial issues).

However, I am very excited that we eventually decided to make this grant, for the following reasons:

  • Several fund managers and advisors, including myself, thought that they would have really enjoyed reading such a book as children, and that this may have had various positive effects on their personal and professional trajectories. Some also reported they would buy such a book (if well executed) now, give it to their friends, etc. More broadly, they were positive about the effects of promoting the book’s main message, both among children specifically and more widely. Very briefly, here are some reasons why I think that promoting that message is valuable from an EA perspective:
    • Becoming aware of past ‘big wins’ for human or animal well-being can inspire people to become more optimistic and ambitious about their own opportunities for doing good. I would guess this is one reason that Doing Good Better told the stories of smallpox eradication and the ‘Green Revolution’.
    • Appreciating the impacts of the Industrial Revolution arguably is highly relevant for cause prioritization. For instance, it informs questions such as: Are we living at an unusual time in history, and if so, in which ways? From an ‘outside view’ perspective, how likely is ‘transformative change’ this century? What could ‘trajectory changes’ to the path of civilization look like? And so on.
    • Telling the story of ‘human progress’ since the Industrial Revolution is an example of looking at the world from a perspective that focuses on a feature that matters from an ethical perspective – in this case, impacts on human well-being. For the two reasons mentioned above, I think it is useful to see good implementations of activities informed by this perspective – both because it might be helpful for understanding effects on well-being in other contexts and because it can provide ‘social proof’ for adopting such a perspective.
    • My impression is that this perspective – as well as many key facts about the Industrial Revolution and its effects – is rarely emphasized elsewhere. For instance, I worry that it is somewhat rare to be exposed to this perspective by learning about history in the way it is typically taught in schools, by reading the news, or by reading most nonfiction books that are more focused on current affairs.
    • To be clear, I’m not saying any of the following things: that the Industrial Revolution is the single most important thing for children to learn about; that the Industrial Revolution had only positive effects; or that the Industrial Revolution is the only event one could talk about when promoting a perspective on history that emphasizes impacts on well-being. Overall, I think that it would also be good if, for instance, the impacts of colonialism or of flawed attempts at large-scale social engineering (e.g. the Soviet Union) were widely understood. I could at least in principle see myself recommending a grant for a children’s book focused on one of these or other topics. However, my sense is that the impacts of the Industrial Revolution are something that children (and adults) are particularly unlikely to learn about ‘by default’, and in any case we didn’t receive any other applications proposing children’s books.
  • I have encountered the claim that one reason for optimism about a startup is that the founders are genuinely trying to solve a problem they’ve encountered themselves. I have not vetted this claim, but it sounds plausible to me, and I note that it seems to apply to some EA ‘success stories’ – GiveWell and 80,000 Hours – as well. While not a startup, this book project fits this profile: The prospective author is a father of three who realized there was a children’s book he wanted to read to his kids, but which doesn’t currently exist.
  • The EA and “progress studies” communities are both sizable groups of potential “evangelists” (people who might get really excited about the book, tell all their friends about it, etc.). I think this is a reason for optimism about the book’s sales and reach.
  • I did a back-of-the-envelope cost-effectiveness estimate which suggested that this grant’s expected cost-effectiveness might be in the same ballpark as that of 80,000 Hours’ average activity. Others pointed out ways in which my estimate may be significantly too optimistic, but I also think there are ways in which it may be too pessimistic. Overall, I’m not taking the quantitative results seriously at all, and I wouldn’t generally be comfortable making any grant based only on this kind of calculation. However, I think that such estimates can sometimes ‘disqualify’ a grant, e.g. by revealing that it would be hard to see how it could even possibly be competitive (in terms of cost-effectiveness) with established orgs like 80,000 Hours. This grant passed this test.
  • I have a very positive impression of the prospective author Tony Morley and his fit for executing this project. In particular:
    • I thought Tony’s early track record at the time of the grant application was encouraging. It included having secured $10,000 seed funding from a prominent funder; having found a potential illustrator with a good track record and reasonable ‘price tag’, despite not having a publisher; and having produced a public written case for the book with accompanying tweet and crowdfunding campaign.
    • Tony made tangible progress during the time we were considering his grant application by securing additional funding and references, and developing a more detailed project plan.
    • Tony has a history of popularizing ‘progress studies’ ideas as a writer (e.g., this article for Quillette) and on social media (e.g., on Twitter).
    • On the few occasions when we discussed the book’s potential content, I was impressed by Tony’s ability to identify appropriate content and present it in an engaging way. I also discussed the relationship between EA and “progress studies” with him, and I thought the conversation went well.
    • In our conversations, it became clear that Tony had independently thought about the risks involved in creating a children’s book. He also had plans for how to avoid some of them, e.g., by steering clear of overtly ‘political’ topics.

Since we decided to make this grant, we became aware of additional achievements by Tony: he secured a grant from Tyler Cowen’s Emergent Ventures, and Steven Pinker tweeted about the book. These further increase my confidence in the project.

To be clear, I overall still consider this to be a ‘risky’ grant in the spirit of ‘hits-based giving’. That is, I think that base rates suggest a significant chance of the book never being completed or getting very little attention – but also that there is a sufficiently large chance of a big success that the grant is a good bet in expectation.

I’m not sure whether Tony will apply for further funding. If so, I would look for signs of continued implementation progress such as draft pages, sample illustrations, and thoughts on the later stages of the project (e.g. marketing). In reviewing content, I expect I would focus on generic ‘quality’ – is it true, well written, and engaging for the intended audience? – rather than ‘alignment’ with an effective altruism perspective. This is because I think that, given its basic theme, the book’s value isn’t reliant on EA alignment, and because I think that this project will go best if the author retains editorial control and focuses on claims he deeply understands and stands behind.

Pablo Stafforini, EA Forum Wiki ($34,200)

6-month grant to Pablo for leading the EA Forum Wiki project, including pay for an assistant

This is a renewal of a previous $17,000 grant from the Long-Term Future Fund (LTFF) to allow Pablo to continue to lead the EA Forum wiki project. With the previous grant, Pablo had focused on content creation. The wiki has since launched publicly on the EA Forum, and the recent ‘Editing Festival’ was aimed at encouraging more people to contribute. While the previous grant was made by the LTFF, we made this grant through the EAIF because the wiki’s content will not be restricted to issues relevant to the long-term future and because we consider a wiki on EA topics to be a prime example of ‘EA infrastructure’.

This grant covers a 6-month period. About 55% is a salary for Pablo, while the additional funds can be used at Pablo’s discretion to pay for assistants and contractors. After the period covered by this grant, we will consider a further renewal or an ‘exit grant’.

I think that a wiki, if successful, could be highly valuable for multiple reasons.

Perhaps most notably, I think it could help improve the ‘onboarding’ experience of people who have recently encountered effective altruism and want to learn more about it online. For a couple of years, I have often encountered people – both ‘new’ and ‘experienced’ members of the EA community – who were concerned that it was hard to learn more about research and methods relevant to effectively improving the world, as well as about the EA community itself. They cited problems like a lack of ‘canonical’ sources, content being scattered across different online locations, and a paucity of accessible summaries. I believe that an actively maintained wiki with high-quality content could help address all of these problems.

Other potential benefits of a wiki:

  • Wikis can establish shared terminology, and contribute to common knowledge in other ways.
  • Contributing to a wiki is a concrete way to add value and contribute to the community that is accessible to basically all community members. My impression is that opportunities like this are in significant demand, and currently severely undersupplied by the community. If the project goes well, many students, researchers, and professionals might contribute to the wiki in their spare time, and find the experience motivating and satisfying.
  • Wiki articles provide natural focal points for content curation and ‘editorial decisions’. On many topics of relevance to EA, there is ample material scattered across academic journals, blogs, and other locations; however, there is little public information on which of these materials are most important, what the key takeaways are, and which areas are controversial versus broadly agreed upon. Writing wiki articles requires answering such questions. The wiki could thus incentivize more people to engage in this ‘editorial’ and ‘interpretative’ work, thereby reducing (some aspects of) ‘research debt’.

My most significant reservation about the wiki as a project is that most similar projects seem to fail – e.g., they are barely read, don’t deliver high-quality content, or are mostly abandoned after a couple of months. This seems to be the case both for wikis in general and for similar projects related to EA, including EA Concepts, PriorityWiki, the LessWrong Wiki, and Arbital. While some of these may be ambiguous successes rather than outright failures, my impression is that they provide only limited value – they certainly fall short of what I envision as the realistic best case for the EA Forum wiki.

I think that Pablo is extremely well-placed to execute this project in several ways. He has been involved in the effective altruism community from its very start, and has demonstrated a broad knowledge of many areas relevant to it; he is, based on my own impression and several references, a very strong writer; and he has extensively contributed to Wikipedia for many years.

I also think that Pablo met the expectations from his previous LTFF grant (in which I was not involved) by producing a substantial amount of high-quality content (80,000 words in 6 months).

I feel less sure about Pablo’s fit for strategy development and project management. Specifically, I think there may have been a case for focusing less on extensive content production and more on getting some initial content in front of readers who could provide feedback. I also expect that someone who is especially strong in these areas would have had more developed thoughts on the wiki’s governance and strategic questions such as ‘how much to rely on paid content creators vs. volunteers?’ at this stage of the project. Lastly, I would ideally have liked to see an analysis of how past similar projects in the EA space failed, and an explicit case for why this wiki might be different.

However, I also believe that these are issues on which it would be hard for me to have a confident view from the outside, and that to some extent such projects go best if their leaders follow a strategy that they find easy to envision and motivating to follow. I also consider it an encouraging sign that I felt it was easy to have a conversation about these issues with Pablo, that he contributed several good arguments, and that he seemed very receptive to feedback.

When considering a renewed grant, I will look for a more developed strategy and data on user engagement with the wiki (including results from the ‘editing festival’). I will also be interested in the quality of content contributed by volunteers. I might also review the content produced by Pablo and potential other paid contractors in more detail, but would be surprised if the decision hinged on that.

For the wiki’s longer-term future I would also want to have a conversation about its ideal funding base. This includes questions such as: Is there a point beyond which the wiki works best without any paid contributors? If not, which medium and large funders should contribute their ‘fair share’ to its budget? Would it be good if the wiki fundraised from a broad range of potential donors, potentially after setting up a dedicated organization?

Effective Institutions Project ($20,417)

Developing a framework aimed at identifying the world’s most important institutions

What is this grant?

This grant is for a project to be undertaken as part of the working group on ‘Improving Institutional Decision Making’ (IIDM). The group is led by Ian David Moss, Vicky Clayton, and Laura Green. The group’s progress includes hosting meetups at EA conferences, mapping out a strategy for their first 2–3 years, launching their working group with an EA Forum post, setting up a Slack workspace with more than 220 users, and more broadly rekindling interest in the eponymous cause area that had seen little activity since Jess Whittlestone’s problem profile for 80,000 Hours from 2017.

Specifically, the grant will enable Ian David Moss to work part-time on the IIDM group for 3–4 months. Alongside continuing to co-lead the group, Ian is going to use most of this time to develop a framework aimed at identifying the world’s key institutions – roughly, those that are most valuable to improve from the perspective of impartially improving the world. A potential later stage of the project would then aim to produce a list of these key institutions. This is highly similar to a planned project the IIDM group has described previously as one of their main priorities for this year.

The IIDM group had initially applied for a larger grant of $70,000, also mostly for buying Ian’s time. This would have covered a longer period, and would have allowed the working group to carry out additional projects. We were not comfortable making this larger upfront commitment. We are open to considering future grants (which may be larger), which we would do in part by assessing the output from this initial grant.

I think the most likely way in which this grant might not go well is if we don’t get much new evidence on how to assess the IIDM group’s potential, and find ourselves in a similarly difficult position when evaluating their future funding requests. (See below under “Why was this grant hard for us to evaluate?” for more context on why I think we were in a difficult position.)

If this grant goes well, the intermediate results of the ‘key institutions project’ will increase my confidence that the IIDM group and its leaders are able to identify priorities in the area of ‘improving institutions’. In the longer term, I would be excited if the IIDM group could help people working in different contexts to learn from each other, and if it could serve as a ‘bridge’ between EA researchers who work on higher-level questions and people who have more firsthand understanding of how institutions operate. The working group leadership told me that this vision resonates with their goals.

Why was this grant hard for us to evaluate?

We felt that this grant was challenging to evaluate for multiple reasons:

  1. The need to clear an especially high bar, given the risks and potential of early field-building
  2. Difficulty in assessing the group’s past work and future prospects
  3. Cultural differences between some of the fund’s managers and the IIDM community

Most significantly, we felt that the grant would need to clear an unusually high bar. This is because the IIDM group is engaged in early field-building efforts that could have an outsized counterfactual impact on the quality and amount of total EA attention aimed at improving institutions. The group’s initial success in attracting interest – often from people who feel that their interests or professional backgrounds make them a particularly good fit to contribute to this area rather than others – suggests the potential for significant growth. In other words, I think the group could collect a lot of resources which could, in the best case, be deployed with large positive effects – or else might be misallocated or cause unintended harm. In addition, even somewhat successful but suboptimal early efforts could discourage potential top contributors or ‘crowd out’ higher-quality projects that, if the space had remained uncrowded, would have been set up at a later point.

In addition, I feel that there were at least two reasons why it was hard for us to assess whether the IIDM group more broadly, or the specific work we’d fund Ian for, meets that high bar.

First, the group’s past output is limited. It’s certainly been successful at growing its membership, and at high-level strategic planning. However, I still found it hard to assess how well-placed the group or its leaders are to identify the right priorities within the broad field of “improving institutions”. As I explain below (“What is my perspective on improving institutions?”), I also think that correctly identifying these priorities depends on hard research questions, and I’m not sure about the group’s abilities to answer such questions.

While I found it hard to assess the group’s potential, I do think they have a track record of making solid progress, and their lack of other outputs (e.g. recommending or implementing specific interventions, or published research) is largely explained by the group having been run by volunteers. In addition, the working group’s leadership told me that, motivated by an awareness of the risks I discussed earlier, in their early efforts they had deliberately prioritized behind-the-scenes consultations and informal writing over more public outputs.

Second – and here I’m particularly uncertain whether other fund managers and advisors agree – my perception is that there might be a ‘cultural gap’ between (1) EAIF fund managers (myself included) and their networks, and (2) some of the people in the EA community most interested in improving institutions (including within the IIDM working group). I think this gap is reflected in, for instance, the intellectual foundations one draws on when thinking about institutions, preferred terminology, and professional networks. To be clear, this gap is not itself a reason to be skeptical about the group’s potential; however, it does mean that getting on the same page (about the best strategy in the space and various other considerations) would require more time and effort than otherwise.

A few further clarifications about this potential ‘gap’:

  • My perception is based on only a few data points, and I’m making a noisy inference about group averages rather than a claim covering all individuals.
  • The gap I perceive is small when considering the world at large. Due to the shared commitment to effective altruism and participation in the EA and adjacent communities, there overall is a lot more ‘cultural similarity’ than between two randomly picked communities (out of the set of all the world’s communities).
  • For this reason, I think it would be significantly easier to reduce any disagreements about improving institutions that I might have with members of the IIDM group than it would be to reduce such a disagreement with someone randomly picked from the world’s population. The key reason why fund managers and the IIDM group haven’t engaged in more discussions is time constraints.
  • I don’t think that reducing this gap would necessarily be good. I think it’s overall best if there is some variance in the intellectual foundations, underlying assumptions, and heuristics that different people or groups adopt when aiming to improve the world in a broadly EA-inspired way.

For these reasons, we may not have been in a great position to evaluate this grant ourselves. We therefore asked a number of people for their impression of the IIDM group’s past work or future potential. Most impressions from those who had substantially engaged with the IIDM group were positive. We also encountered some skeptical takes on the group’s potential, but they were disproportionately from people not very familiar with the group’s past work and plans. While these conversations were useful, they ultimately weren’t able to resolve our key uncertainties with sufficient confidence.

Overall, the reasons discussed above and my impressions more generally make me somewhat skeptical of whether the IIDM group’s leadership team and strategy are strong enough that I’d be excited for them to play a major role in shaping EA thought and practice on ‘improving institutions’ – in particular from the perspective I discuss below (under “What is my perspective on ‘improving institutions’?“). On the other hand, these reasons also make me less confident in my own ability to identify the best strategy in this space. They also make me more skeptical about my ability to adequately evaluate the group leaders’ strengths and potential. I’m therefore wary of a “false negative”, which makes me more sympathetic to giving the group the resources they need to be able to ‘prove themselves’, and more willing to spend more time to engage with the group and otherwise ‘stress test’ my view of how to best approach the area.

I would also like to emphasize that I think there are some unreservedly positive signs about the group and its leadership’s potential, including:

  • My impression is that for past work they did a good job at actively seeking out and integrating feedback.
  • My perception is that the group’s leadership – Ian, Vicky, and Laura – have complementary strengths, and between them cover a lot of the abilities and networks which I suspect are important for the group.
  • I have received a very positive reference on Ian’s contributions to identifying COVID-related donation opportunities in mid-2020, from someone familiar with his work in this space.
  • In my view, Ian has demonstrated strong leadership and communication skills in many ways. I also have a broadly good impression of, and have received positive references about, Vicky and Laura’s abilities.

What is my perspective on ‘improving institutions’?

I am concerned that ‘improving institutions’ is a hard area to navigate, and that the methodological and strategic foundations for how to do this well have not yet been well developed. I think that in many cases, literally no one in the world has a good, considered answer to the question of whether improving the decision quality of a specific institution along a specific dimension would have net good or net bad effects on the world. For instance, how should we weigh the effects of making the US Department of Defense more ‘rational’ at forecasting progress in artificial intelligence? Would reducing bureaucracy at the UN Development Programme have a positive or negative net effect on global health over the next decade? I believe that we are often deeply uncertain about such questions, and that any tentative answer is liable to be overturned upon the discovery of an additional crucial consideration.

At the very least, I think that an attempt at answering such questions would require extensive familiarity with relevant research (e.g., in macrostrategy); I would also expect that it often hinges on a deep understanding of the relevant domains (e.g., specific policy contexts, specific academic disciplines, specific technologies, etc.). I am therefore tentatively skeptical about the value of a relatively broad-strokes strategy aimed at improving institutions in general.

I am particularly concerned about this because some prominent interventions for improving institutions would primarily result in making a given institution better at achieving its stated goals. For instance, I think this would often be the case when promoting domain-agnostic decision tools or policy components such as forecasting or nudging. Until alternative interventions are uncovered, I would therefore expect that some people interested in improving institutions would default to pursuing these ‘known’ interventions.

To illustrate some of these concerns, consider the history of AI policy & governance. I know of several EA researchers who share the impression that early efforts in this area were “bottlenecked by entangled and under-defined research questions that are extremely difficult to resolve”, as Carrick Flynn noted in an influential post. My impression is that this is still somewhat true, but that there has been significant progress in reducing this bottleneck since 2017. However, crucially, my loose impression (often from the outside) is that this progress was to a large extent achieved by highly focused efforts: that is, people who focused full-time on AI policy & governance, and who made large upfront investments into acquiring knowledge and networks highly specific to the AI domain or particular policy contexts such as the US federal government (or could draw on past experience in these areas). I am thinking of, for example, background work at 80,000 Hours by Niel Bowerman and others (e.g. here) and the AI governance work at FHI by Allan Dafoe and his team.

Personally, when I think of what work in the area of ‘improving institutions’ I’m most excited about, my (relatively uninformed and tentative) answer is: Adopt a similar approach for other important cause areas; i.e., find people who are excited to do the groundwork on, e.g., the institutions and policy areas most relevant to official development assistance, animal welfare (e.g. agricultural policy), nuclear security, biosecurity, extreme climate change, etc. I think that doing this well would often be a full-time job, and would require a rare combination of skills and good networks with both ‘EA researchers’ and ‘non-EA’ domain experts as well as policymakers.

The Impactmakers ($17,411)

Co-financing the 2021 Impact Challenge at the Dutch Ministry of Foreign Affairs, a series of workshops aimed at engaging civil servants with evidence-based policy and effective altruism

This is an extension of an earlier $13,000 EAIF grant to co-finance a series of workshops at the Dutch Ministry of Foreign Affairs (MFA). The MFA is covering the remaining part of the budget. The workshops are aimed at increasing civil servants’ ability to identify and develop high-impact policies, using content from sources on evidence-based policy as well as Doing Good Better.

The workshops are developed and delivered by a team of five that includes long-term members of EA Netherlands: Jan-Willem van Putten, Emil Iftekhar, Jason Wang, Reijer Knol, and Lisa Gotoh. The earlier grant allowed them to host a kick-off session and to recruit 35 participants. Based on feedback from the MFA, the team will structure the remaining workshop series as a ‘challenge’ in which teams of participating civil servants will address one of these three areas: 1) increasing policy impact; 2) improving decision-making processes; 3) increasing personal effectiveness. During the 10-month challenge, teams will research and test ideas for a solution in these areas. This differs from their original plan, and the team thus requires a larger budget.

I am positive about this grant because it seems like a reasonably cheap way to introduce EA-related ideas to a valuable audience. Based on the written grant application, some workshop materials I reviewed, a reference from someone with both EA and policy experience, my conversation with Lisa, and the group’s track record so far, I also feel sufficiently confident that the team’s quality of execution will be sufficiently high to make an overall positive impression on the audience.

I am not sure if this team is going to seek funding for similar projects in the future. Before making further grants to them, I would try to assess the impact of this workshop series, more carefully vet the team’s understanding of relevant research and methods, and consider whether the theory of change of potential future projects was appropriately aimed at particularly high-leverage outcomes.

Effective Environmentalism ($15,000)

Strategy development and scoping out potential projects

This is an early-stage grant to the Effective Environmentalism group led by Sebastian Scott Engen, Jennifer Justine Kirsch, Vaidehi Agarwalla, and others. I expect that most of this grant will be used for exploratory work and strategic planning by the group’s leadership team, and that they might require renewed funding for implementing their planned activities.

Similar to the grant to the IIDM group described at length above, I view this as a high-variance project:

In the best case, I think the Effective Environmentalism group could evolve into a subcommunity that builds useful bridges between the EA community on one hand, and the environmentalism and climate activism communities on the other hand. It could help these communities learn from each other, expose a large number of people to the ideas and methods of effective altruism, improve the impact of various environmentalist and climate change mitigation efforts, and help the EA community to figure out its high-level views on climate change as a cause area as well as how to orient toward the very large communities predominantly focused on climate change.

In a disappointing case, I think this group will produce work and offer advice that is poorly received by both the EA and the environmentalist or climate activism communities. A typical EA perception might be that their work is insufficiently rigorous, and that they’re unduly prioritizing climate change relative to other cause areas. A typical environmentalist perception might be that their work is unappealing, insufficiently focused on social change, insufficiently focused on grassroots activism, or even a dishonest attempt to lure climate activists into other cause areas. In the worst case, there could be other harms, such as an influx of people who are not in fact receptive to EA ideas into EA spaces, an increase of negative perceptions of EA in the general public or environmentalist communities, or tensions between the EA and climate activism communities.

I am currently optimistic that the Effective Environmentalism team is aware of these and other risks, that they’re well placed to avoid at least some of them, and that they might have a shot at achieving a best-case outcome.

I have also been impressed with the progress this recently created team made between first submitting their grant application and the grant being approved. I also liked that they seem to have started with a relatively broad intended scope, and then tried to identify particularly high-value ‘products’ or projects within this scope (while remaining open to pivots) – as opposed to having an inflexible focus on particular activities.

Overall, I remain highly uncertain about the future trajectory of this team and project, as I think is typical given their challenging goals and limited track record. Nevertheless, I felt that this was a relatively easy grant to recommend, since I think that the work covered by it will be very informative for decisions about future funding, and that most of it will be ‘inward-facing’, or engage with external audiences only on a small scale (e.g. for ‘user interviews’ or small pilots) and thus incur few immediate risks.

Disputas ($12,000)

Funding an exploratory study for a potential software project aimed at improving EA discussions

This is a grant to the startup Disputas aimed at funding a feasibility study for improving the digital knowledge infrastructure for EA-related discussions. This feasibility study will consist of problem analysis, user interviews, and potentially producing wireframes and sketching out a development plan for a minimum viable product.

Disputas’s proposed project bears some similarity to argument mapping software. I am generally pessimistic about argument mapping software, both from an ‘inside view’ and because many groups have tried to develop such software without ever gaining wide traction.

I decided to recommend this grant anyway, for the following reasons:

  • While there are some similarities, Disputas’s proposed project also differs from traditional argument mapping software in some ways. I still have a skeptical prior about software projects in the broader ‘digital knowledge infrastructure’ space, but I’m not as pessimistic about them as I am about argument mapping software.
  • I am not confident in my pessimistic take on argument mapping software, and I think that results from a feasibility study could potentially change my mind.
  • I thought the application was well executed. It notably included the suggestion to start with funding a feasibility study, then potentially a minimum viable product, and only then potentially funding for the full project.
  • I have a broadly positive impression of Paal Kvarberg, Disputas’s CEO.
  • My impression is that Disputas is open to pivots, and that funding such exploratory work could lead them to identifying alternative projects I would be more optimistic about.

Steven Hamilton ($5,000)

Extending a senior thesis on mechanism design for donor coordination

This grant will allow Steven Hamilton, who recently graduated with a BA in economics and a minor in mathematics, to extend his senior thesis on mechanism design for donor coordination. Steven will undertake this work in the period between his graduation and the start of his PhD program.

Steven’s thesis was specifically about a mechanism to avoid charity overfunding – i.e. the issue that, absent coordination, the total donations from some group of donors might exceed a charity’s room for more funding. For instance, suppose there are 10 donors who each want to donate $50. Suppose further that there is some charity MyNonProfit that can productively use $100 in additional donations, and that all 10 donors most prefer filling MyNonProfit’s funding gap. If the donors don’t coordinate, and don’t know what other donors are going to do, they might each end up giving their $50 to MyNonProfit, thus exceeding its room for more funding by $400. This $400 could have been donated to other charities if the donors had coordinated.
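
To make the arithmetic concrete, here is a minimal sketch in Python, using only the numbers from the example above:

```python
# Numbers from the example above.
donors = 10
gift_per_donor = 50   # each donor plans to give $50
room = 100            # MyNonProfit can productively absorb $100 more

# Without coordination, everyone gives independently.
total_given = donors * gift_per_donor      # $500
overfunding = total_given - room           # $400 exceeds the room for more funding

# With coordination, only the funding gap is filled and the rest
# goes to the donors' next-best charities.
to_mynonprofit = min(total_given, room)    # $100
redirected = total_given - to_mynonprofit  # $400 available for other charities

print(overfunding, redirected)             # 400 400
```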

I am personally not convinced that charity overfunding is a significant problem in practice. However, I do think that there is room for useful work on donor coordination more broadly. In any case, the application was sufficiently well executed, and the grant amount sufficiently low, that I felt comfortable recommending the grant. If it turns out well, I suspect it will be either because I was wrong about charity overfunding being unimportant in practice – or because the grant causes a potentially promising young researcher to spend more time thinking about donor coordination, thus enabling more valuable follow-up work.

Steven has since told me that his work might also apply to other issues, e.g. charity underfunding or the provision of public goods. This makes me more confident in my optimistic perspective, and makes my reservations about the practical importance of charity overfunding matter less.

Grant reports by Michelle Hutchinson

Rethink Priorities ($248,300)

Compensation for 9 research interns (7 full-time equivalents) across various EA causes, plus support for further EA movement strategy research

Rethink Priorities is a research organization working on (largely empirical) questions related to how to do the most good, such as what moral weights we should assign to different animal species, or what the current limitations of forecasting mean for longtermism.

Roughly half of this grant supports 9 interns (7 FTE), with the main aim of training them in empirical impact-focused research. Our perception is that it would be useful to have more of this research done and that there currently aren’t many mentors who can help people learn to do it.

Rethink Priorities has some experience of successfully supporting EA researchers to skill up. The case study of Luisa Rodriguez seemed compelling to us: she started out doing full-time EA research for Rethink Priorities, went on to become a research assistant for William MacAskill's forthcoming book about longtermism, and plans to work as a researcher at 80,000 Hours. Luisa thinks it's unlikely she would have become a full-time EA researcher if she hadn't received the opportunity to train up at Rethink Priorities. My main reservation about the program was the limited capacity at RP of people with significant experience doing this type of research (though they have a number of staff members who have many years of other research experience). This was ameliorated by senior staff’s ready willingness to provide comments on research across the team, and by RP’s intention to seek external mentorship for their interns in addition to internal mentorship.

The second half of the grant goes toward growing Rethink’s capacity to conduct research on EA movement strategy. The team focuses on running surveys aimed both at EAs and at the broader public. The types of research this funding will enable include getting a better sense of how many people in the broader public are aware of and open to EA. This research seems useful for planning how much and what kinds of EA outreach to do; for example, a number of people we asked found RP’s survey on longtermism useful. The committee was split on whether a better model for funding such research would be to have the EA organizations who would do EA outreach commission the surveys, since that model would increase the chance of the research being acted on. Ensuring this type of research is directly action-relevant for the organizations most responsible for shaping the direction of the EA movement strikes me as pretty difficult, and decidedly easier if they’re closely involved in designing the research. Action-relevance is particularly important because much of the research involves surveying the EA community, and the cost to the community of surveys like the EA Survey is fairly large. (I’d guess, assuming a 40-hour work-week, that the filling-in-the-survey work costs 25 weeks of EA work per EA survey.)
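
As a rough illustration of where a figure like that could come from, here is a minimal sketch; the respondent count and minutes per response are made-up assumptions, not figures from RP:

```python
# Hypothetical inputs: illustrative assumptions only, not reported figures.
respondents = 2000            # assumed number of survey respondents
minutes_per_response = 30     # assumed time to fill in the survey

total_hours = respondents * minutes_per_response / 60   # 1,000 hours
weeks_of_ea_work = total_hours / 40                     # at a 40-hour work-week
print(weeks_of_ea_work)                                 # 25.0 weeks
```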

Collaborations often seem tricky to pull off smoothly and efficiently. For that reason, a funding model we considered suggesting was EAIF paying for specific pieces of research ‘commissioned’ by groups such as CEA. This model would have the benefit that the group commissioning the research would be responsible for applying for the funding, and so the onus would be on them to make sure they would use the research generated. On the other hand, we hope that this type of research will be useful to many different groups, including ones like local groups who typically don’t have much funding. We therefore decided in favor of approving this funding application as is. We’d still be interested in continued close collaborations between RP and the groups who will be using the research, such as CEA and local EA groups.

The Long-Term Future Fund (see payout report) and Animal Welfare Fund (see payout report) have also made grants to Rethink Priorities (for different work).

Stefan Schubert ($144,072)

Two years of funding for writing a book on the psychology of effective giving, and for conducting related research

We’ve granted $144,072 to Stefan Schubert to write, together with Lucius Caviola, a book and a number of related papers on the psychology of effective giving. The book describes some of the reasons that people donate ineffectively, followed by ideas on how to make philanthropy more effective in the future.

Stefan and Lucius have a strong track record of focusing their work on what will help others most. An important way of ensuring this type of research is impactful is to draw different considerations together into overall recommendations, as opposed to simply investigating particular considerations in isolation. (An example: It’s often suggested that asking people to donate a small amount initially, which they are happy to give, makes them happier to give more to that place in future. It’s also often suggested that it’s a good idea to make a big ask of people, because when you then present them with a smaller ask it seems more reasonable and they’re more likely to acquiesce. These psychological results are each interesting, but if you’re trying to figure out how much to ask someone to donate, it’s hard to decide if you’ve only heard each consideration presented in isolation.) Describing individual considerations in isolation is common in academia, both because novelty is highly prized (so reiterating considerations others have investigated is not) and because academics are suspicious of overreach, and comparing considerations is very difficult. This often makes it very hard to draw action-relevant implications from research, which strikes me as a major failing. My hope is that this book will do more ‘bringing together and comparing’ of considerations than is typical.

Another reason for optimism about this grant is that Stefan has a track record of making his research accessible to those for whom it’s most relevant, for example by speaking at Effective Altruism Global (EAG), posting on social media, and writing blog posts in addition to peer-reviewed articles. In general, I worry that research of this kind can fail to make much of an impact because the people for whom it would be most action-relevant might not read it, and even if they do, it’s complicated to figure out the specific implications for action. It seems to me that this kind of ‘translating research into action’ is somewhat neglected in the EA community, and we could do with more of it, both for actions individuals might take and for actions specific organizations might take. I’d therefore be particularly excited for Stefan to accompany his research with short, easily digestible summaries, including ‘here are the things I think individual EAs should do differently because of this research; here are some implications I think it might have for how we run EAG, for how 80,000 Hours runs its advising program, etc.’

The Centre for Long-Term Resilience ($100,000)

Improving the UK’s resilience to existential and global catastrophic risks

The Centre for Long-Term Resilience (CLTR; previously named Alpenglow) is a non-profit set up by Angus Mercer and Sophie Dannreuther, with a focus on existential, global catastrophic, and other extreme risks. (It has also looked a bit into the UK’s global development and animal welfare policies.) CLTR’s aim is to facilitate discussions between policymakers and people doing research into crucial considerations regarding the long-run future. We’ve granted the centre a preliminary $100,000 in this round. Its scale-up plans mean it has a lot of room for more funding, so we plan to investigate further whether to fund it more significantly in our next round (should its funding gap not have been filled by then).

The organization seems to have been successful so far in getting connected to relevant researchers and policymakers, and is in the early stages of seeing concrete outputs from that. A key concern we have is that there may not be sufficient specific policy recommendations coming out of research in the longtermist space, which would be a major block on CLTR’s theory of change.

Jakob Lohmar ($88,557)

Writing a doctoral thesis in philosophy on longtermism at the University of Oxford

Recusal note: Based on an updated conflict of interest management plan for Max Daniel's position at the University of Oxford, he retroactively recused himself from this grant decision.

This is a grant for Jakob Lohmar to write a doctoral thesis in philosophy on longtermism, studying under Hilary Greaves. He currently plans for his thesis to examine the link between longtermism and different kinds of moral reasons. Examples of the type of question he expects to address:

  • Typically, if the difference in value between the consequences of actions is large enough, people think that the right action is the one with the better consequences. The value of the consequences of our actions seems overwhelmingly determined by the long-run future, and because the future is so large, the difference in the value of the consequences of our actions can be enormous. Are there moral reasons strong enough to outweigh this effect?
  • How should we take non-consequentialist longtermist reasons into account when deciding between reducing the chance of existential risks and bringing about other trajectory changes?

Whether or not we let considerations about the long term dominate our actions will make a huge difference to how we help others, as will how we weigh reducing existential risks against bringing about other trajectory changes. More research in this area therefore seems likely to be useful. We expect the primary impact of this grant to come from allowing Jakob to skill up in this area, rather than from the research itself.

We’re particularly excited about this type of research being done by people who have spent significant time thinking about how to help others most. Action relevance isn’t prized in academic philosophy, so it can be hard to keep your research pointed in a useful direction. Jakob’s background provides strong evidence that he will.

Joshua Lewis, New York University, Stern School of Business ($45,000)

Academic research into promoting the principles of effective altruism

We granted Joshua Lewis $45,000 to do academic research into promoting the principles of effective altruism over the next 6 months. Joshua Lewis is an Assistant Professor of Marketing at the NYU Stern School of Business. He has a long-term vision of setting up a research network of academics working in psychology and related fields. We intended this grant to cover a shorter time period, so that we can assess initial results before long, but to be generous over that period to ensure he isn’t bottlenecked by funding.

Joshua’s research agenda seems interesting and useful, covering questions such as understanding people’s risk aversion towards interventions with high expected value but low probability of impact. These topics seem important if we’re going to increase the extent to which people are taking the highest-impact actions they know of. Some committee members were primarily excited about the research itself, while others were also excited about the flow-through effects from getting other academics excited about working on these problems.

Grant reports by Ben Kuhn

Giving Green (an initiative of IDinsight) ($50,000)

Improving research into climate activism charity recommendations

Giving Green is trying to become, more or less, GiveWell for climate change. This grant provides them with funding to hire a researcher to improve their recommendation in the grassroots activism space.

This is a relatively speculative grant, though it has high potential upside. Giving Green has shown promise in media, user experience, and fundraising, and has appeared in the New York Times, Vox, and The Atlantic. At the same time, we have serious reservations about the current quality of their research (largely along the lines laid out in alexrjl’s EA Forum post). Like several commenters on that post, we found their conclusions about grassroots activism charities, and specifically The Sunrise Movement, particularly unconvincing. That said, we think there’s some chance that, with full-time work, they’ll be able to improve the quality of their research—and they have the potential to be great at fundraising—so we’re making this grant to find out whether that’s true. This grant should not be taken as an endorsement of Giving Green’s current research conclusions or top charity recommendations.

Giving Green originally requested additional funding for a second researcher, but we gave a smaller initial grant to focus the research effort only on the area where we’re most uncertain, with the remaining funding conditional on producing a convincing update on grassroots activism. I (Ben) currently think it’s moderately unlikely (~30%?) that they’ll hit this goal, but that the upside potential is worth it. (Obviously, I would be extremely happy to be wrong about this!)

*The eligible domain experts will be a set of climate researchers from other EA-affiliated organizations; we plan to ask several for their views on this research as part of our follow-up evaluation.

Grants by the previous fund managers

Note: These grants were made in December 2020 and January 2021 as off-cycle grants by the previous EAIF fund managers; Megan Brown was the main contributor to these grants as an advisor. However, these payout reports have been written by current fund manager Max Daniel based on the written documentation on these grants (Max was not directly involved in the grant decisions).

High Impact Athletes ($50,000)

Covering expenses and growth for 2021 for a new nonprofit aimed at generating donations from professional athletes

At the time of this first grant application, High Impact Athletes (HIA) was a recently launched nonprofit aimed at generating donations to effective charities from professional athletes. It had gained some initial traction, having generated over $25,000 in donations by the time of its official launch. This grant was intended to cover growth and expenses for 2020 and 2021.

HIA’s recommended charities align closely with charities that are commonly regarded as having unusually high impact in the EA community (e.g., GiveWell top charities).

CEO and founder Marcus Daniell is a professional tennis player himself. With his networks in the sporting world, as well as his long history as a donor to effective charities, we thought he was very well placed to get other professional athletes to join him in his giving. The grant includes a part-time salary for Marcus so he can continue to lead HIA.

High Impact Athletes ($60,000)

Enabling a first hire for a nonprofit aimed at generating donations from professional athletes

This grant is intended to allow High Impact Athletes (HIA) to hire a first staffer who would help CEO Marcus Daniell to focus on fundraising and growth, leveraging his unique network in the sporting world. For more context on HIA, see the previous writeup.

We were impressed that, by the time of this grant application, and not long after its launch, HIA had made substantial progress: in just one month, it had influenced $80,000 of donations, and had secured contributions from 42 athletes, including Olympic gold medalists.

Feedback

If you have any feedback, we would love to hear from you. Let us know in the comments, submit your thoughts through our feedback form, or email us at eainfrastructure@effectivealtruismfunds.org.

To comment on this payout report, please join the discussion on the Effective Altruism Forum.

Comments

Thanks for this writeup! 

I was surprised to find that reading this instilled a sense of hope, optimism, and excitement in me. I expected to broadly agree with the grant decisions and think the projects sound good in expectation, but was surprised that I had a moderate emotional reaction accompanying that. 

I think this wasn't exactly because these grants seem much better than expected, but more because I got a sense like "Oh, there are actually a bunch of people who want to focus intensely on a specific project that someone should be doing, and who are well-suited to doing it, and who may continue to cover similar things indefinitely, gradually becoming more capable and specialised. The gaps are actually gradually getting filled - the needs are gradually getting "covered" in a way that's more professional and focused and less haphazard and some-small-number-of-excellent-generalists-are-stretched-across-everything."

I often feel slightly buried under just how much there is that should be getting done, that isn't getting done, and that I in theory could do a decent job of if I focused on it (but I of course don't have time to do all the things, nor would I necessarily do the ... (read more)

Thanks for writing up this detailed account of your work; I'm glad the LTFF's approach here seems to be catching on!

Thanks a lot for the write-up! Seems like there's a bunch of extremely promising grants in here. And I'm really happy to see that the EAIF is scaling up grantmaking so much. I'm particularly excited about the grants to HIA, CLTR and to James Aung & Emma Abele's project.

And thanks for putting so much effort into the write-up; it's really valuable to see the detailed thought process behind grants and makes me feel much more comfortable with future donations to EAIF. I particularly appreciated this for the children's book grant: I went from being strongly skeptical to tentatively excited by the write-up.

Thank you for writing these! I really like these kinds of long writeups; it really feels like it helps me get a sense of how other people think about making grants like this.

Hello everyone, Dan from Giving Green here. As noted in the explanation above, the main purpose of this grant is to deepen and improve our research into grassroots activism, hopefully coming up with something that is more aligned with research norms within the EA community. We'd love to bring an experienced EA researcher on board to help us with that, and would encourage any interested parties to apply. 

We currently have two jobs posted, one for a full-time or consultant researcher, and the second for a full-time program manager. We're also interested in hearing from people who may not exactly fit these job descriptions but can contribute productively. If interested, please submit an application at the links above or reach out at givinggreen@idinsight.org.

I really like that Ben made an explicit prediction related to the Giving Green grant, and solicited community predictions too! I currently think grantmakers (at least the EA Fund fund managers) should do this sort of thing more (as discussed here and here), so it's nice to see a first foray into that.

That said, it seems both hard to predict on the question as stated and hard to draw useful inferences from it, since no indication is given of how many experts will be asked. The number I'd give, and what the numbers signify, would be very different if you expect to ask 3 experts vs expecting to ask 20, for example. Do you have a sense of roughly what that denominator will be?

Jonas V: My guess is 1-3 experts.

MichaelA: Thanks. I now realise that I have another confusion about the question: Are experts saying whether they found the research high quality and convincing in whatever conclusions it has, or saying whether the researcher strongly updated the experts specifically towards viewing grassroots activism more positively? This is relevant if the researcher might form more mixed or negative conclusions about grassroots activism, yet still do so in a high-quality and convincing way. I'm gonna guess Ben either means "whether the researcher strongly updated the experts specifically towards viewing grassroots activism more positively" or he just assumes that a researcher Giving Green hires and manages is very likely to conclude that grassroots activism is quite impactful (such that the different interpretations of the question are the same in practice). (My forecast is premised on that.)

Jonas V: This.

Thank you for the write-up – super helpful. Amazing to see so much good stuff get funding.

Some feedback and personal reflections as a donor to the fund:

  • The inclusion of things on this list that might be better suited to other funds (e.g. the LTFF) without an explanation of why they are being funded from the Infrastructure Fund makes me slightly less likely in future to give directly to the Infrastructure Fund and slightly more likely to just give to one of the bigger meta orgs you give to (like Rethink Priorities).

    It basically creates some uncertainty and worry in my mind about whether the funds will give to the areas I expect them to give to with my donation.

    (I recognise it is possible that there are equal amounts going across established EA cause areas or something like that from this fund, but if that is the case it is not very clear.)


This should not take away from the fact that I think the fund has genuinely done a great job here. For example, saying that I would lean towards directly following the fund's recommendations is recognition that I trust the fund and the work you have done to evaluate these projects – so well done!

Also, I do support innovative longtermist projects (especially love CLTR – mega-super to see them funded!!); it is just not what I expect to see this fund doing, so it leaves me a bit confused / tempted to give elsewhere.


[This comment is no longer endorsed by its author]

Thanks for the feedback! 

I basically agree with the conclusion MichaelA and Ben Pace have below. I think EAIF's scope could do with being a bit more clearly defined, and we'll be working on that. OTOH, I see the Lohmar and CLTR grants as fitting fairly clearly into the 'Fund scope' as pasted by MichaelA below. Currently, grants do get passed from one fund to the other, but that happens mostly when the fund they initially applied to deems them not to fall easily into their scope, rather than when they seem to fall centrally into the scope of the fund they applied to and also another fund. My view is that CLTR, for example, is a good example of increasing the extent to which policymakers are likely to use EA principles when making decisions, which makes it seem like a good example of the kind of thing I think EAIF should be funding.

I think that there are a number of ways in which someone might disagree: One is that they might think that 'EA infrastructure' should be to do with building the EA _community_ specifically, rather than being primarily concerned with people outside the community. Another is that they might want EAIF to only fund organisations which have the same portfoli... (read more)

Thanks for writing this reply and, more generally, for an excellent write-up and selection of projects!

I'd be grateful if you could address a potential, related concern, namely that EAIF might end up as a sort of secondary LTFF, and that this would be to the detriment of non-longtermist applicants to the fund, as well as being, presumably, against the wishes of EAIF's current donors. I note the introduction says:

we generally strive to maintain an overall balance between different worldviews according to the degree they seem plausible to the committee.

and also that Buck, Max, and yourself are enthusiastic longtermists - I am less sure about Ben, and Jonas is a temporary member. Putting these together, combined with what you say about funding projects which could/should have applied to the LTFF, it would seem to follow that you could (/should?) put the vast majority of the EAIF towards longtermist projects.

Is this what you plan to do? If not, why not? If yes, do you plan to inform the current donors?

I emphasise I don't see any signs of this in the current round, nor do I expect you to do this. I'm mostly asking so you can set my mind at rest, not least because th... (read more)

MichaelA: FWIW, a similar question was raised on the post about the new management teams, and Jonas replied there. I'll quote the question and response. (But, to be clear, I don't mean to "silence this thread", imply this has been fully covered already, or the like.) Question: Jonas's reply:

Michelle_Hutchinson: Thanks for finding and pasting Jonas' reply to this concern, MichaelA. I don't feel I have further information to add to it. One way to frame my plans: I intend to fund projects which promote EA principles, where both 'promote' and 'EA principles' may be understood in a number of different ways. I can imagine the projects aiming at both the long-run future and at helping current beings. It's hard to comment in detail since I don't yet know what projects will apply.

MichaelPlant: Hello Michelle. Thanks for replying, but I was hoping you would engage more with the substance of my question - your comment doesn't really give me any more information than I already had about what to expect. Let me try again with a more specific case.

Suppose you are choosing between projects A and B - perhaps they have each asked for $100k but you only have $100k left. Project A is only eligible for funding from EAIF - the other EA funds consider it outside their respective purviews. Project B is eligible for funding from one of the other EA funds, but so happens to have applied to EAIF. Suppose, further, you think B is more cost-effective at doing good. What would you do? I can't think of any other information you would need.

FWIW, I think you must pick A. I think we can assume donors expect the funds not to be overlapping - otherwise, why even have different ones? - and they don't want their money to go to another fund's area - otherwise, that's where they would have put it. Hence, picking B would be tantamount to a breach of trust. (By the same token, if I give you £50, ask you to put it in the collection box for a guide dog charity, and you agree, I don't think you should send the money to AMF, even if you think AMF is better. If you decide you want to spend my money somewhere else from what we agreed to, you should tell me and offer to return the money.)

Jonas V: In my view, being an enthusiastic longtermist is compatible with finding neartermist worldviews plausible and allocating some funding to them. See, e.g., Ajeya Cotra on the 80,000 Hours podcast. I personally feel excited to fund high-quality projects that develop or promote EA principles, whether they're longtermist or not. (And Michelle suggested this as well.) For the EAIF, I would evaluate a project like HLI based on whether it seems like it overall furthers the EA project (i.e., makes EA thinking more sophisticated, leads to more people making important decisions according to EA principles, etc.).

FWIW, I think this example is pretty unrealistic, as I don't think funding constraints will become relevant in this way. I also want to note that funding A violates some principles of donor coordination. In practice, I would probably recommend a split between A and B (recommending my 'fair share' to B, and the rest to A); I would probably coordinate this explicitly with the other funds. I would probably also try to refer both A and B to other funders to ensure both get fully funded.

MichaelPlant: Thanks for this reply, which I found reassuring. Okay, this is interesting and helpful to know. I'm trying to put my finger on the source of what seems to be a perspectival difference, and I wonder if this relates to the extent to which fund managers should be trying to instantiate donors' wishes vs fund managers allocating the money by their own lights of what's best (i.e. as if it were just their money). I think this is probably a matter of degree, but I lean towards the former, not least for long-term concerns about reputation, integrity, and people just taking their money elsewhere.

To explain how this could lead us to different conclusions: if I believed I had been entrusted with money to give to A but not B, then I should give to A, even if I personally thought B was better. I suspect you would agree with this in principle: you wouldn't want an EA fund manager to recommend a grant clearly/wildly outside the scope of their fund even if they sincerely thought it was great, e.g. the animal welfare fund recommending something that only benefitted humans even if they thought it was more cost-effective than something animal-focused. However, I imagine you would disagree that this is a problem in practice, because donors expect there to be some overlap between funds and, in any case, fund managers will not recommend things wildly outside their fund's remit. (I am not claiming this is a problem in practice; my concern is that it may become one and I want to avoid that.)

I haven't thought lots about the topic, but all these concerns strike me as a reason to move towards a set of funds that are mutually exclusive and collectively exhaustive - this gives donors greater choice and minimises worries about permissible fund allocation.

Jonas V: This is a longer discussion, but I lean towards the latter, both because I think this will often lead to better decisions, and because many donors I've talked to actually want the fund managers to spend the money that way (the EA Funds pitch is "defer to experts" and donors want to go all in on that, with only minimal scope constraints).

Yeah, I agree that all grants should be broadly in scope – thanks for clarifying. Fund scope definitions are always a bit fuzzy, many grants don't fit into a particular bucket very neatly, and there are lots of edge cases. So while I'm sympathetic to the idea in principle, I think it would be really hard to do in practice. See Max's comment.

Max_Daniel: I care about donor expectations, and so I'd be interested to learn how many donors have a preference for fund scopes to not overlap. However, I'm not following the suggested reasoning for why we should expect such a preference to be common. I think people - including donors - choose between partly-but-not-fully overlapping bundles of goods all the time, and that there is nothing odd or bad about these choices, the preferences revealed by them, or the partial overlap. I might prefer ice cream vendor A over B even though there is overlap in flavours offered; I might prefer newspaper A over B even though there is overlap in topics covered (there might even be overlap in authors); I might prefer to give to nonprofit A over B even though there is overlap in the interventions they're implementing or the countries they're working in; I might prefer to vote for party A over B even though there is overlap between their platforms; and so on. I think all of this is extremely common, and that for a bunch of messy reasons it is not clearly the case that generally it would be best for the world or the donors/customers/voters if overlap was reduced to zero.

I rather think it is the other way around: the only thing that would be clearly odd is if scopes were not overlapping but identical. (And even then there could be other reasons for why this makes sense, e.g., different criteria for making decisions within that scope.)

However, I'm not following the suggested reasoning for why we should expect such a preference to be common.

I definitely have the intuition the funds should be essentially non-overlapping. In the past I've given to the LTFF, and would be disappointed if it funded something that fit better within one of the other funds that I chose not to donate to.

With non-overlapping funds, donors can choose their allocation between the different areas (within the convex hull). If the funds overlap, donors can no longer donate to the extremal points. This is basically a tax on donors who want to e.g. care about EA Meta but not Longtermist things.
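
A minimal numerical sketch of this point (the blend weights below are made up for illustration, not actual fund allocations):

```python
# Two hypothetical funds, each a fixed blend over two areas,
# labelled (meta, longtermist).
fund_a = (0.8, 0.2)
fund_b = (0.3, 0.7)

# A donor splitting a donation between the two funds can only reach
# allocations on the segment between the two blend vectors.
for alpha in (0.0, 0.5, 1.0):
    blend = tuple(alpha * a + (1 - alpha) * b for a, b in zip(fund_a, fund_b))
    print(blend)

# Output runs from (0.3, 0.7) to (0.8, 0.2); no split reaches the
# extremal point (1.0, 0.0), i.e. "100% meta", which non-overlapping
# funds would allow.
```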

Consider the ice-cream case. Most ice-cream places will offer Vanilla, Chocolate, Strawberry, Mint etc. If instead they only offered different blends, someone who hated strawberry - or was allergic to chocolate - would have little recourse. By offering each as a separate flavour, they accommodate purists and people who want a mixture. Better for the place to offer each as a standalone option, and let donors/customers combine. In fact, for most products it is possible to buy 100% of one thing if you so desire. 

This approach is also common in finance; firms will offer e.g. a Tech Fund, a Healthcare Fund and so on, and let investors decide the relative ratio they want between them. This is also (part of) the reason for the decline of conglomerates - investors want to be able to make their own decisions about which business to invest in, not have it decided by managers.

I agree the finance example is useful. I would expect that in both our case and the finance case the best implementation isn't actually mutually exclusive funds, but funds with clear and explicit 'central cases' and assumptions, plus some sensible (and preferably explicit) heuristics to be used across funds like 'try to avoid multiple funds investing too much in the same thing'. 

That seems to be both because there will (as Max suggests) often be no fact of the matter as to which fund some particular company fits in, and also because the thing you care about when investing in a financial fund is in large part profit. In the case of the healthcare and tech fund, there will be clear overlaps - firms using tech to improve healthcare. If I were investing in one or other of these funds, I would be less interested in whether some particular company is more exactly described as a 'healthcare' or 'tech' company, and care more about whether they seem to be a good example of the thing I invested in. Eg if I invested in a tech fund, presumably I think some things along the lines of 'technological advancements are likely to drive profit' and 'there are low hanging fruit in terms of tech in... (read more)

Max_Daniel: Thanks for sharing your intuition, which of course moves me toward preferences for less/no overlap being common. I'm probably even more moved by your comparison to finance because I think it's a better analogy to EA Funds than the analogies I used in my previous comments. However, I still maintain that there is no strong reason to think that zero overlap is optimal in some sense, or would widely be preferred. I think the situation is roughly:

  • There are first-principles arguments (e.g., your 'convex hull' argument) for why, under certain assumptions, zero overlap allows for optimal satisfaction of donor preferences.
    • (Though note that, due to standard arguments for why at least at first glance and under 'naive' assumptions splitting small donations is suboptimal, I think it's at least somewhat unclear how significant the 'convex hull' point is in practice. I think there is some tension here as the loss of the extremal points seems most problematic from a 'maximizing' perspective, while I think that donor preferences to split their giving across causes are better construed as being the result of "intra-personal bargaining", and it's less clear to me how much that decision/allocation process cares about the 'efficiency loss' from moving away from the extremal points.)
  • However, reality is more messy, and I would guess that usually the optimum is somewhere on the spectrum between zero and full overlap, and that this differs significantly on a case-by-case basis. There are things pushing toward zero overlap, and others pushing toward more overlap (see e.g. the examples given for EA Funds below), and they need to be weighed up. It depends on things like transaction costs, principal-agent problems, the shape of market participants' utility functions, etc.
  • Here are some reasons that might push toward more overlap for EA Funds:
    • Efficiency, transaction/communication cost, etc., as mentioned by Jonas.
    • My view is that 'zero overlap' just fails to carve

Nothing I have seen makes me think the EAIF should change its decision criteria. It seems to be working very well and good stuff is getting funded. So don't change that to address a comparatively very minor issue like this – that would be throwing the baby out with the bathwater!!
 

--
If you showed me the list here and said 'Which EA Fund should fund each of these?' I would have put the Lohmar and the CLTR grants (which both look like v good grants and glad they are getting funded) in the longtermist fund. Based on your comments above you might have made the same call as well.

From an outside view, the actual cost of making the grants from the pot of another fund seems incredibly small. At minimum it could just be having someone look over the end decisions to see if any feel like they belong in a different fund, then quickly double-checking with the other fund's grantmakers that they have no strong objections, and then granting the money from a different pot. (You could even do that after the decision to grant has been communicated to applicants – no reason to hold things up; if the second fund objects, the grant can still be given by the first fund.)

And then all those dogmatic donors to... (read more)

Max_Daniel: Thank you for this suggestion. It makes sense to me that this is how the situation looks from the outside. I'll think about the general issue and suggestions like this one a bit more, but currently don't expect large changes to how we operate. I do think this might mean that in future rounds there may be a similar fraction of grants that some donors perceive to better fit with another fund. I acknowledge that this is not ideal, but I currently expect it will seem best after considering the costs and benefits of alternatives. So please view the following points as me trying to explain why I don't expect to adopt what may sound like a good suggestion, while still being appreciative of the feedback and suggestions.

I think based on my EA Funds experience so far, I'm less optimistic that the cost would be incredibly small. E.g., I would expect less correlation between "EAIF managers think something is good to fund from a longtermist perspective" and "LTFF managers think something is good to fund from a longtermist perspective" (and vice versa for 'meta' grants) than you seem to expect. This is because grantmaking decisions in these areas rely a lot on judgment calls that different people might make differently even if they're aligned on broad "EA principles" and other fundamental views. I have this view both because of some cases I've seen where we actually discussed (aspects of) grants across both the EAIF and LTFF managers and because within the EAIF committee large disagreements are not uncommon (and I have no reason to believe that disagreements would be smaller between LTFF and EAIF managers than just within EAIF managers). To be clear, I would expect decision-relevant disagreements for a minority of grants - but not a sufficiently clear minority that I'd be comfortable acting on "the other fund is going to make this grant" as a default assumption. Your suggestion of retaining the option to make the grant through the 'original' fund would help with this, but

weeatquince: Thank you so much for your thoughtful and considered reply. Sorry to change topic, but this is super fascinating and more interesting to me than questions of fund admin time (however much I like discussing organisational design, I am happy to defer to you / Jonas / etc. on whether the admin cost is too high – ultimately only you know that).

Why would there be so much disagreement (so much that you would routinely want to veto each other's decisions if you had the option)? It seems plausible that if there are such levels of disagreement, maybe:

  1. One fund is making quite poor decisions AND/OR
  2. There is significant potential to use consensus decision-making tools as a large group to improve decision quality AND/OR
  3. There are some particularly interesting lessons to be learned by identifying the cruxes of these disagreements.

Just curious and typing up my thoughts. Not expecting good answers to this.

Max_Daniel: I think all funds are generally making good decisions. I think a lot of the effect is just that making these decisions is hard, and so variance between decision-makers is to some extent unavoidable. I think some of the reasons are quite similar to why, e.g., hiring decisions, predicting startup success, high-level business strategy, science funding decisions, or policy decisions are typically considered to be hard/unreliable. Especially for longtermist grants, on top of this we have issues around cluelessness, potentially missing crucial considerations, sign uncertainty, etc.

I think you are correct that both of the following are true:

  • There is potential for improving decision quality by spending time on discussing diverging views, improving the way we aggregate opinions to the extent they still differ after the amount of discussion that is possible, and maybe by using specific 'decision making tools' (e.g., certain ways of a structured discussion + voting).
  • There are interesting lessons to be learned by identifying cruxes. Some of these lessons might directly improve future decisions; others might be valuable for other reasons - e.g., generating active grantmaking ideas or cruxes/results being shareable and thereby being a tiny bit epistemically helpful to many people.

I think a significant issue is that both of these cost time - both identifying how to improve in these areas and then implementing the improvements - which is a very scarce resource for fund managers. I don't think it's obvious whether at the margin the EAIF committee should spend more or less time to get more or fewer benefits in these areas. Hopefully this means we're not too far away from the optimum.

I think there are different views on this within EA Funds (both within the EAIF committee, and potentially between the average view of the EAIF committee and the average view of the LTFF committee - or at least this is suggested by revealed preferences as my loose impression is that

weeatquince: I am always amazed at how much you fund managers all do given this isn't your paid job!

Fair enough. FWIW my general approach to stuff like this is not to aim for perfection but to aim for each iteration/round to be a little bit better than the last.

That is possible. But also possible that you are particularly smart and have well thought-out views and people learn more from talking to you than you do from talking to them! (And/or just that everyone is different and different ways of learning work for different people.)

Larks: Thanks for writing up this detailed response. I agree with your intuition here that 'review, refer, and review again' could be quite time consuming. However, I think it's worth considering why this is the case. Do we think that the EAIF evaluators are similarly qualified to judge primarily-longtermist activities as the LTFF people, and the difference of views is basically noise? If so, it seems plausible to me that the EAIF evaluators should be able to unilaterally make disbursements from the LTFF money. In this setup, the specific fund you apply to is really about your choice of evaluator, not about your choice of donor, and the fund you donate to is about your choice of cause area, not your choice of evaluator-delegate.

In contrast, if the EAIF people are not as qualified to judge primarily-longtermist (or primarily animal rights, etc.) projects as the specialised funds' evaluators, then they should probably refer the application early on in the process, prior to doing detailed due diligence etc.

Max_Daniel: Thank you for sharing - as I mentioned I find this concrete feedback spelled out in terms of particular grants particularly useful. [ETA: btw I do think part of the issue here is an "object-level" disagreement about where the grants best fit - personally, I definitely see why among the grants we've made they are among the ones that seem 'closest' to the LTFF's scope; but I don't personally view them as clearly being more in scope for the LTFF than for the EAIF.]

weeatquince: Thank you Max. I guess the interesting question then is why we think different things. Is it just a natural case of different people thinking differently, or have I made a mistake, or is there some way the funds could communicate better?

One way to consider this might be to look at just the basic info / fund scope on both the EAIF and LTFF pages and ask: "if the man on the Clapham omnibus only read this information here and the description of these funds, where would they think these grants sit?"

Jonas V: A further point is donor coordination / moral trade / fair-share giving. Treating it as a tax (as Larks suggests) could often amount to defecting in an iterated prisoner's dilemma between donors who care about different causes. E.g., if the EAIF funded only one org, which raised $0.90 for MIRI, $0.90 for AMF, and $0.90 for GFI for every dollar spent, this approach would lead to it not getting funded, even though co-funding with donors who care about other cause areas would be a substantially better approach. You might respond that there's no easy way to verify whether others are cooperating. I might respond that you can verify how much money the fund gets in total and can ask EA Funds about the funding sources. (Also, I think that acausal cooperation works in practice, though perhaps the number of donors who think about it in this way is too small for it to work here.)
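
To spell out the dilemma with these hypothetical numbers (a sketch; the equal three-way 'fair share' split is an added assumption for illustration):

```python
# Hypothetical numbers from the comment above: one org raises $0.90
# for each of MIRI, AMF, and GFI per $1 spent.
raised_per_dollar = {"MIRI": 0.90, "AMF": 0.90, "GFI": 0.90}

total_leverage = sum(raised_per_dollar.values())  # 2.7x overall

# A donor who values only one cause sees just $0.90 back per $1 and
# declines to fund ("defects"); if donors for all three causes reason
# this way, the org goes unfunded despite its 2.7x total leverage.
single_cause_return = raised_per_dollar["MIRI"]   # 0.9x

# Fair-share co-funding: each cause's donors cover a third of the costs
# and receive $0.90 for their own cause per ~$0.33 contributed.
fair_share_return = raised_per_dollar["MIRI"] / (1 / 3)

print(round(total_leverage, 2), single_cause_return, round(fair_share_return, 2))
# 2.7 0.9 2.7
```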

Larks: I'm afraid I don't quite understand why such an org would end up unfunded. Such an organisation is not longtermist or animal rights or global poverty specific, and hence seems to fall within the natural remit of the Meta/Infrastructure fund. Indeed according to the goal of the EAIF it seems like a natural fit: Nor would this be disallowed by weeatquince's policy, as no other fund is more appropriate than EAIF:

MichaelPlant: Just a half-formed thought on how something could be "meta but not longtermist", because I thought that was a conceptually interesting issue to unpick. I suppose one could distinguish between meaning "meta" as (1) does non-object-level work or (2) benefits more than one value-bearer group, where the classic, not-quite-mutually-exclusive three options for value-bearer groups are (1) near-term humans, (2) animals, and (3) far future lives.

If one is thinking the former way, something is meta to the degree it does non-object-level vs object-level work (I'm not going to define these), regardless of what domain it works towards. In this sense, 'meta' and (e.g.) 'longtermist' are independent: you could be one, or the other, both, or neither. Hence, if you did non-object-level work that wasn't focused on the long term, you would be meta but not longtermist (although it might be more natural to say "meta and not longtermist" as there is no tension between them).

If one is thinking the latter way, one might say that an org is less "meta", and more "non-meta", the greater the fraction of its resources intentionally spent to benefit just one value-bearer group. Here "meta" and "non-meta" are mutually exclusive and a matter of degree. A "non-meta" org is one that spends, say, more than 50% of its resources aimed at one group. The thought is that, on this framework, Animal Advocacy Careers and 80k are not meta, whereas, say, GWWC is meta. Thinking this way, something is meta but not longtermist if it primarily focuses on non-longtermist stuff.

(In both cases, we will run into familiar issues about making precise what an agent 'focuses on' or 'intends'.)

MichaelPlant: Yes, I read that and raised this issue privately with Jonas.

weeatquince: Thank you Michelle. Really useful to hear. I agree with all of this. It seems, from what you and Jonas are saying, that the fund scopes currently overlap, so there might be some grants that could be covered by multiple funds, and even if they are arguably more appropriate to one fund than another, they tend to get funded by whichever fund gets to them first, as currently the admin burden of shifting to another fund is large.

That all seems pretty reasonable. I guess my suggestion would be that I would be excited to see these kinks minimised over time and funding come from whichever pool seems most appropriate. That overlap is seen as a bug to be ironed out, not a feature.

FWIW I think you and all the other fund managers made really really good decisions. I am not just saying that to counteract saying something negative – I am genuinely very excited by how much great stuff is getting funded by the EAIF. Well done.

(EDIT: PS. My reply to Ben below might be useful context too: https://forum.effectivealtruism.org/posts/zAEC8BuLYdKmH54t7/ea-infrastructure-fund-may-2021-grant-recommendations?commentId=qHMosynpxRB8hjycp#sPabLWWyCjxWfrA6E Basically, a more tightly defined fund scope could be nice and makes it easier for donors but harder for the Funds, so there is a trade-off.)

weeatquince: I lean (as you might guess) towards the funds being mutually exclusive. The basic principle is that, in general, the narrower the scope of each fund, the more control donors have over where their funds go. If the fund that seemed most appropriate paid out for anything where there is overlap, then you would expect:

  • More satisfied donors. You would expect the average amount of grants that donors strongly approve of to go up.
  • More donations. As well as the above satisfaction point, if donors know more precisely how their money will be spent, then they would have more confidence that giving to the fund makes sense compared to some other option.
  • Theoretically better donations? If you think donors' wishes are a good measure of expected impact, it can arguably improve the targeting of funds to ensure amounts moved are closer to donors' wishes (although maybe it makes the relationship between donors and specific fund managers weaker as there might be crossover with fund managers moving money across multiple of the Funds).

None of these are big improvements, so maybe not a priority, but the cost is also small. (I cannot speak for CEA, but as a charity trustee we regularly go out of our way to make sure we are meeting donors' wishes, regranting money hither and thither, and it has not been a big time cost.)

OTOH my impression is that the Funds aren't very funding-constrained, so it might not make sense to heavily weigh your first two reasons (though all else equal, donor satisfaction and increased donation quantity seem good).

I also think there are just a lot of grants that legitimately have both a strong meta/infrastructure benefit and an object-level benefit, and it seems kind of unfair to grantees that provide multiple kinds of value that they can still only be considered from one funding perspective / with focus on one value proposition. If a grantee is both producing some kind of non-meta research and also doing movement-building, I think it deserves the chance to maybe get funded based on the merits of either of those value adds.


Jonas V: Yeah, I agree with Dicentra. Basically I'm fine if donors don't donate to the EA Funds for these reasons; I think it's not worth bothering (the time cost is small, but the benefit even smaller). There's also a whole host of other issues; Max Daniel is planning to post a comment reply to Larks' above comment that mentions those as well. Basically it's not really possible to clearly define the scope in a mutually exclusive way.
6
weeatquince
3y
Maybe we are talking past each other, but I was imagining something easy, like just defining the scope as mutually exclusive. You write: "We aim for the funds to be mutually exclusive. If multiple funds would fund the same project, we make the grant from whichever of the Funds seems most appropriate to the project in question." Then, before you grant money, you look over and see if anything passed by one fund looks to you like it is more for another fund. If so (unless the fund managers of the second fund veto the switch), you fund the project with money from the second fund. Sure, it might be a very minor admin hassle, but it helps make sure donors' wishes are met and avoids the confusion of donors saying: hold on a minute, why am I funding this? I didn't expect that. This is not a huge issue, so maybe not the top of your to-do list. And you are the expert on how much of an admin burden something like this is and whether it is worth it, but from the outside it seems very easy and the kind of action I would just naturally expect of a fund / charity. [minor edits]
2
weeatquince
3y
It also makes it easier for applicants to know what fund to apply to (or apply to first).
2
MichaelA
3y
(FWIW, that all makes sense and seems like a good approach to me.)

Retracted:

Upon reflection and reading the replies, I think perhaps I was underestimating how broad this Fund's scope is (and perhaps was too keen to find fault).

I do think there could be advantages for donors in narrowing the scope of this Fund / limiting overlap between Funds (see other comments), but I recognise there are costs to doing that.

All my positive comments remain, and it is great to see so much good stuff get funded.

7
Max_Daniel
3y
Hi Sam, thank you for this feedback. Hearing such reactions is super useful. Could you tell us more about which specific grants you perceive as potentially "better suited to other funds"? I have some guesses (e.g., I would have guessed you'd say CLTR), but I would still find it helpful to see if our perceptions match here. Feel free to send me a PM on that if that seems better.
6
MichaelA
3y
FWIW, I was also confused in a similar way by:

* The CLTR grant
* The Jakob Lohmar grant
* Maybe the Giving Green grant

If someone had asked me beforehand which fund would evaluate CLTR for funding, I would've confidently said LTFF. For the other two, I'd have been uncertain, because:

* The Lohmar grant is for a project that's not necessarily arguing for longtermism, but rather working out how longtermist we should be, when, how the implications of that differ from what we'd do for other reasons, etc.
  * But GPI and Hilary Greaves seem fairly sold on longtermism, and I expect this research to mostly push in more longtermism-y directions
  * And I'd find it surprising if the Infrastructure Fund funded something about how much to care about insects as compared to humans - that's likewise not necessarily going to conclude that we should update towards more focus on animal welfare, but it still seems a better fit for the Animal Welfare Fund
* Climate change isn't necessarily strongly associated with longtermism within EA
  * I guess Giving Green could also be seen as aimed at bringing more people into EA by providing people who care about climate change with EA-related products and services they'd find interesting?
  * But this report doesn't explicitly state that that's why the EAIF is interested in this grant, and I doubt that that's Giving Green's own main theory of change

But this isn't to say that any of those grants seem bad to me. I was just somewhat surprised they were funded by the EAIF rather than the LTFF (at least in the case of CLTR and Jakob Lohmar).
9
Jonas V
3y
A big part of the reason was simply that CLTR and Jakob Lohmar happened to apply to the EAIF, not the LTFF. Referring grants takes time (not a lot, but I don't think doing such referrals is a particularly good use of time if the grants are in scope for both funds). This is partly explained in the introduction of the grant report.
4
MichaelPlant
3y
I recognise there is admin hassle. Although, as I note in my other comment, this becomes an issue if the EAIF in effect becomes a top-up for another fund.
5
Jonas V
3y
FWIW, it's not just admin hassle but also mental attention for the fund chairs, which is IMO much better spent on improving their decisions. I think there are large returns from fund managers focusing fully on whether a grant is a good use of money, or on how to make the grantees even more successful. I therefore think the costs of having to take into account (likely heterogeneous) donor preferences when evaluating specific grants are quite high, and so as long as a majority of assessed grants seem to be somewhat "in scope", it's overall better if fund managers can keep their heads free from scope concerns and other 'meta' issues. I believe that we can do the most good by attracting donors who endorse the above. I'm aware this means that donors with different preferences may want to give elsewhere. (Made some edits to the above comment to make it less disagreeable.)
2
Linch
3y
I think of climate change (at least non-extreme climate change) as more of a global poverty/development issue, for what it's worth.
6
Jonas V
3y
In the introduction, we wrote the following [...]. Perhaps you missed it? (Or perhaps you were interested in a per-grant explanation, or the explanation seemed insufficient to you?)
4
weeatquince
3y
You are correct – sorry I missed that. I agree with Michael above that a) is a legit administrative hassle, but it seems like the kind of thing I would be excited to see resolved when you have capacity to think about it. Maybe each fund could have some discretionary money from the other fund. An explanation per grant would be super too, as and where such a thing is possible! (EDIT: PS. My reply to Ben above might be useful context too: https://forum.effectivealtruism.org/posts/zAEC8BuLYdKmH54t7/ea-infrastructure-fund-may-2021-grant-recommendations?commentId=qHMosynpxRB8hjycp#sPabLWWyCjxWfrA6E)
2
Larks
3y
I don't suppose you would mind clarifying the logical structure here. My intuitive reading of this (based on the commas, the 'or', and the absence of 'and') is [...], i.e., satisfying any one of the three suffices. But I'm guessing that what you meant to write was [...], which would seem more sensible?
2
Jonas V
3y
Yeah, the latter is what I meant to say, thanks for clarifying.
4
weeatquince
3y
FWIW I had assumed the former was the case. Thank you for clarifying. I had assumed the former as:

* it felt like the logical reading of the phrasing of the above
* my read of the things funded in this round seemed to be that some of them don't appear to be b OR c (unless b and c are interpreted very broadly).
6
Ben Pace
3y
I think that different funders have different tastes, and if you endorse their tastes you should consider giving to them. I don't really see a case for splitting responsibilities like this. If Funder A thinks a grant is good, Funder B thinks it's bad, but it's nominally in Funder B's purview, this just doesn't seem like a strong arg against Funder A doing it if it seems like a good idea to them. What's the argument here? Why should Funder A not give a grant that seems good to them?

I find this perspective (and its upvotes) pretty confusing, because:

  • I'm pretty confident that the majority of EA Funds donors choose which fund to donate to based far more on the cause area than the fund managers' tastes
    • And I think this really makes sense; it's a better idea to invest time in forming views about cause areas than in forming views about specifically the funding tastes of Buck, Michelle, Max, Ben, and Jonas, and then also the fund management teams for the other 3 funds.
  • The EA Funds pages also focus more on the cause area than on the fund managers.
  • The fund manager team regularly changes composition at least somewhat.
  • Some fund managers have not done any grantmaking before, at least publicly, so people won't initially know their fund tastes.
    • In this particular case, I think all fund managers except Jonas haven't done public grantmaking before.

I think a donation to an EA Fund is typically intended to delegate to some fund managers to do whatever is best in a given area, in line with the principles described on the EA Funds page. It is not typically intended to delegate to those fund managers to do w... (read more)

4
Ben Pace
3y
Yeah, that's a good point: donors who don't look at the grants (or know the individuals on the team much) will be confused if the fund does things outside its purpose (e.g. donations to GiveDirectly, or a random science grant that just sounds cool). But I guess all of these grants seem to me fairly within the purview of EA Infrastructure? The one-line description of the fund says [...]. I expect that for all of these grants the grantmakers think that they're orgs that either "use the principle of effective altruism" or help others do so. I think I'd suggest instead that weeatquince name some specific grants and ask the fund managers for the basic reason why those grants seem to them to help build EA infrastructure (e.g. ask Michelle why CLTR seems to help things, according to her), if that's unclear to weeatquince.
5
MichaelA
3y
Yeah, good point that these grants do seem to all fit that one-line description. That said, I think that probably most or all grants from all 4 EA Funds would fit that description - I think that one-line description should probably be changed to make it clearer what's distinctive about the Infrastructure Fund. (I acknowledge I've now switched from kind-of disagreeing with you to kind-of disagreeing with that part of how the EAIF present themselves.)

I think the rest of the "Fund Scope" section helps clarify the distinctive scope: [...]

Re-reading that, I now think Giving Green clearly does fit under the EAIF's scope ("Raise funds or otherwise support other highly-effective projects"). And it seems a bit clearer why the CLTR and Jakob Lohmar grants might fit, since I think they partly target the 1st, 3rd, and 4th of those things. Though it still does seem to me like those two grants are probably better fits for the LTFF. And I also think "Conduct research into prioritizing [...] within different cause areas" seems like a better fit for the relevant cause area fund. E.g., research about TAI timelines or the number of shrimp there are in the world should pretty clearly be under the scope of the LTFF and AWF, respectively, rather than the EAIF. (So that's another place where I've accidentally slipped into providing feedback on that fund page rather than disagreeing with you specifically.)
5
Ben Pace
3y
But this line is what I am disagreeing with. I'm saying there's a binary of "within scope" or not, and then otherwise it's up to the fund to fund what they think is best according to their judgment about EA Infrastructure or the Long-Term Future or whatever. Do you think that the EAIF should be able to tell the LTFF to fund a project because the EAIF thinks it's worthwhile for EA Infrastructure, instead of using the EAIF's money? Alternatively, if the EAIF thinks something is worth money for EA Infrastructure reasons, but the grant is probably more naturally under the scope of "Long-Term Future", do you think they shouldn't fund the grantee even if the LTFF isn't going to either?
3
MichaelA
3y
Ah, this is a good point, and I think I understand where you're coming from better now. Your first comment made me think you were contesting the idea that the funds should each have a "scope" at all. But now I see it's just that you think the scopes will sometimes overlap, and that in those cases the grant should be able to be evaluated and funded by any fund it's within-scope for, without consideration of which fund it's more centrally within scope for. Right?

I think that sounds right to me, and I think that that argument + re-reading that "Fund Scope" section have together made it so that I think that EAIF granting to CLTR and Jakob Lohmar just actually makes sense. I.e., I think I've now changed my mind and become less confused about those decisions.

Though I still think it would probably make sense for Fund A to refer an application to Fund B if the project seems more centrally in-scope for Fund B, and let Fund B evaluate it first. Then if Fund B declines, Fund A could do their own evaluation and (if they want) fund the project, though perhaps somewhat updating negatively based on the info that Fund B declined funding. (Maybe this is roughly how it already works. And also I haven't thought about this until writing this comment, so maybe there are strong arguments against this approach.)

(Again, I feel I should state explicitly - to avoid anyone taking this as criticism of CLTR or Jakob - that the issue was never that I thought CLTR or Jakob just shouldn't get funding; it was just about clarity over what the EAIF would do.)
5
Jonas V
3y
In theory, I agree. In practice, this shuffling around of grants costs some time (both in terms of fund manager work time, and in terms of calendar time grantseekers spend waiting for a decision), and I prefer spending that time making a larger number of good grants rather than on minor allocation improvements.
2
MichaelA
3y
(That seems reasonable - I'd have to have a clearer sense of relevant time costs etc. to form a better independent impression, but that general argument + the info that you believe this would overall not be worthwhile is sufficient to update me to that view.)
4
Ben Pace
3y
Yeah, I think you understand me better now. And btw, I think if there are particular grants that seem out of scope for a fund, it seems totally reasonable to ask them for their reasoning and update positively/negatively on them depending on whether the reasoning checks out. And it's also generally good to question the reasoning of a grant that doesn't make sense to you.
5
weeatquince
3y
Tl;dr: I was to date judging the funds by their cause area rather than the fund managers' tastes, and this has left me a bit surprised. I think in future I will judge more based on the fund managers' tastes.

Thank you Ben – I agree with all of this. Maybe I was just confused by the fund scope. The fund scope is broad, and that is good. The webpage says the scope includes "Raise funds or otherwise support other highly-effective projects", which basically means everything! And I do think it needs to be broad – for example, to support EAs bringing EA ideas into new cause areas. But maybe in my mind I had classed it as something like "EA meta", or as "everything that is EA aligned that would not be better covered by one of the other 3 funds", or similar. But maybe that was me reading too much into things, and the scope is just "anything and everything that is EA aligned".

It is not bad that it has a broader scope than I had realised, and maybe the fault is mine, but I guess my reaction to seeing that the scope is different from what I had realised is to take a step back and reconsider whether my giving to date is going where I expect. To date I have been treating the EAIF as the easy option when I am not sure where to give, and have been judging the fund mostly by the cause area it gives to. I think taking a step back will likely involve spending an hour or two going through all of the things given in recent fund rounds and thinking about how much I agree with each one, then deciding if I think the EAIF is the best place for me to give, or if I think I can do better giving to one of the existing EA meta orgs that takes donations. (Probably I should have been doing this already, so maybe a good nudge.)

Does that make sense / answer your query?

– –

If the EAIF had a slightly more well-defined, narrower scope, that could make givers slightly more confident in where their funds will go, but it has a cost in terms of admin time and flexibility for the Funds. So there is a trade-off. My gut fee
2
[comment deleted]
3y

$5,000 to the Czech Association for Effective Altruism to give away EA-related books

Concretely, which books are they giving away? The most obvious book to give away (Doing Good Better) is more than 5 years old at this point, which is roughly half the age of the EA movement, so it might be expected not to accurately represent 2021 EA thought.

It depends on the target audience. I guess that besides DGB it will also be The Precipice (both DGB and The Precipice have Czech translations), Human Compatible, Superintelligence, and maybe even The Scout Mindset or Rationality: From AI to Zombies.
 

Thanks for the reply!

Small note: AI safety researchers I've talked to (n≈5) have almost universally recommended Brian Christian's The Alignment Problem over Human Compatible. (I've started reading both but have not finished either.)

I also personally got a bunch of value from Parfit's Reasons and Persons and Larissa MacFarquhar's Strangers Drowning, but different people's tastes here can be quite different. Both books predated DGB I think, but because they're from fields that aren't as young or fast-moving as EA, I'd expect them to be less outdated.

7
Max_Daniel
3y
(FWIW, I personally love Reasons and Persons, but I think it's much more "not for everyone" than most of the other books Jiri mentioned. It's just too dry, detailed, and abstract, and has too low a density of immediately action-relevant content. I do think it could make sense as a 'second book' for people who like that kind of philosophy content and know what they're getting into.)
2
Linch
3y
I agree that it's less readable than all the books Jiri mentioned, except maybe Superintelligence. Pro tip for any aspiring Reasons-and-Persons readers in the audience: skip (or skim) Sections I and II. Sections III (personal identity) and IV (population ethics) are where the meat is, especially Section III.
4
Max_Daniel
3y
FWIW, I actually (and probably somewhat iconoclastically) disagree with this. :P

In particular, I think Part I of Reasons and Persons is underrated, and contains many of the most useful ideas. E.g., it's basically the best reading I know of if you want to get a deep and principled understanding of why 'naive consequentialism' is a bad idea, but why at the same time worries about naive applications of consequentialism or the demandingness objection and many other popular objections to consequentialism don't succeed at undermining it as the ultimate criterion of rightness. (I also expect that it is the part that would most likely be perceived as pointless hair-splitting.) And I think the most important thought experiment in Reasons and Persons is not the teleporter, nor Depletion or Two Medical Programs, nor the Repugnant Conclusion or the Absurd Conclusion or the Very Repugnant Conclusion or the Sadistic Conclusion and whatever they're all called - I think it's Writer Kate, and then Parfit's Hitchhiker.

Part II in turn is highly relevant for answering important questions such as this one.

Part III is probably more original and groundbreaking than the previous parts. But it is also often misunderstood. I think that Parfit's "relation R" of psychological connectedness/continuity does a lot of the work we might think a more robust notion of personal identity would do - and in fact, Parfit's view helps rationalize some everyday intuitions, e.g., that it's somewhere between unreasonable and impossible to make promises that bind me forever. More broadly, I think that Parfit's view on personal identity is mostly not that revisionary, and that it mostly dispels a theoretical fiction most of our everyday intuitions neither need nor substantively rely on. (There are others, including other philosophers, who disagree with this - and think that there being no fact of the matter about questions of personal identity has, e.g., radically revisionary implications for ethics. But t
6
Linch
3y
Thanks for the contrarian take, though I still tentatively stand by my original stances. I should maybe mention 2 caveats here:

1. I also only read Reasons and Persons ~4 years ago, and my memory can be quite faulty.
   * In particular, I don't remember many good arguments against naive consequentialism. To me, it really felt like Parts 1 and 2 were mainly written as justification for axioms/"lemmas" invoked in Parts 3 and 4, axioms that most EAs already buy.
2. My own context for reading the book was trying to start a Reasons and Persons book club right after Parfit passed away. Our book club dissolved in the middle of reading Part 2. I kept reading on, and I distinctly remember wishing that we had continued onwards, because Parts 3 and 4 would have kept the other book clubbers engaged etc. (obviously this is very idiosyncratic and particular to our own club).
4
Howie_Lempel
3y
If I had to pick two parts of it, they would be 3 and 4, but FWIW I got a bunch out of 1 and 2 over the last year, for reasons similar to Max's.
3
Misha_Yagudin
3y
(Hey Max, consider reposting this to Goodreads if you are on the platform.)
5
Max_Daniel
3y
(done)
4
MichaelA
3y
(FWIW, I'm almost finished reading Strangers Drowning and currently feel I've gotten quite little out of it, and am retroactively surprised by how often it's recommended in EA circles. But I think I'm in the minority on that.)
2
Pablo
3y
I wonder if the degree to which people like that book correlates with variation along the excited vs. obligatory altruism dimension.
2
MichaelA
3y
Do you have a guess as to which direction the correlation might be in? Either direction seems fairly plausible to me, at first glance.
2
Pablo
3y
I was thinking that  EAs sympathetic to obligatory altruism would like it more, given the book's focus on people who appear to have a strong sense of duty and seem willing to make great personal sacrifices.
2
MichaelA
3y
(Yeah, that seems plausible, though FWIW I'd guess my own mindset is more on the "obligatory" side than is average.)
2
Linch
3y
Out of curiosity, do you read/enjoy any written fiction or poetry? 
6
MichaelA
3y
Until a couple years ago, I read a lot of fiction, and also wrote poetry and sometimes short stories and (at least as a kid) had vague but strong ambitions to be a novelist.  I now read roughly a few novels a year, mostly Pratchett. (Most of my "reading time" is now either used for non-fiction or - when it's close to bedtime for me - comedy podcasts.)
1
Jiri_Nadvornik
3y
Thanks a lot!

My most significant reservation about the wiki as a project is that most similar projects seem to fail – e.g., they are barely read, don’t deliver high-quality content, or are mostly abandoned after a couple of months. This seems to be the case both for wikis in general and for similar projects related to EA, including EA Concepts, PriorityWiki, the LessWrong Wiki, and Arbital.

This was also my most significant reservation about both whether the EA Wiki should get a grant (I wasn't a decision-maker on that - I just mean me thinking from the sidelines) and a... (read more)

Fwiw I think that looking at the work that's been done so far, the EA Wiki is very promising.

6
Max_Daniel
3y
Thanks, I agree that this is an interesting data point. I had simply not been aware of a new LessWrong Wiki, which seems like an oversight.
6
MichaelA
3y
(Just to clarify, what I meant was their tagging+concept system, which is very similar to, and being drawn on for, the EA Wiki's system. I now realise my previous comment - now edited - was misleading in that it (a) said "The new LessWrong Wiki" as if that was its name and (b) didn't give a link. Google suggests that they aren't calling this new thing "the LessWrong Wiki".)

Yep, the new wiki/tagging system has been going decently well, I think. We are seeing active edits, and in general I am a lot less worried about it being abandoned, given how deeply it is integrated with the rest of LW (via the tagging system, the daily page and the recent discussion feed).

Also worth mentioning is that LessWrong has recently extended the karma system to wiki edits. You can see it here. I'm pretty excited about this feature, which I expect to increase participation, and look forward to its deployment for the EA Wiki.

The improvements have now been ported to the Wiki. Not only can you vote on individual contributions, but you can also see, for each article, a list of its contributors, and view their contributions by hovering over their names. Articles now also show a table of contents, and there may be other features I haven't yet discovered. Overall, I'm very impressed!

2
Max_Daniel
3y
Great! I'm also intuitively optimistic about the effect of these new features on Wiki uptake, editor participation, etc.

Thanks for this incredibly detailed report. It's super useful to understand the rationale and thinking behind each grant. 

Are there plans to publish follow-up reports on the outcomes of these grants in the future (say, in 6 or 12 months)?

3
Michelle_Hutchinson
3y
No set plans yet.
4
MichaelA
3y
Are there plans to internally assess in future how the grants have gone / are going, without necessarily making the findings public, and even in cases where the grantees don't apply for renewal?  (I seem to recall seeing that the EA Funds do this by default for all grants, but I can't remember for sure and I can't remember the details. Feel free to just point me to the relevant page or the like.)
4
Max_Daniel
3y
That seems fairly important to me, and there are some loose ideas we've exchanged. However, there are a number of things that at first glance seem quite important, and we are very limited by capacity. So I'm currently not sure if and when ex-post evaluations of grants are going to happen. I would be very surprised if we thought that never doing any ex-post evaluations was the right call, but I wouldn't be that surprised if we only did them for a fraction of grants or only in a quite 'hacky' way, etc.
5
Jonas V
3y
I think we will probably do two types of post-hoc evaluations:

1. Specifically aiming to improve our own decision-making in ways that seem most relevant to us, without publishing the results (as they would be quite explicit about which grantees were successful in our view), driven by key uncertainties that we have
2. Publicly communicating our track record to donors, especially aiming to find and communicate the biggest successes to date

#1 is somewhat high on my priority list (may happen later this year), whereas #2 is further down (probably won't happen this year, or if it does, it would be a very quick version). The key bottleneck for both of these is hiring more people who can help our team carry out these evaluations.

Update: Max Daniel is now the EA Infrastructure Fund's chairperson. See here.

Thanks for the work and the transparent write-up. I'm really glad and impressed that our community got the funds up and running.

About the IIDM grant: I noticed some discomfort with this point and would be interested in pointers on how to think about it, examples, how big the risk is, and how easily preventable this factor is:

In addition, even somewhat successful but suboptimal early efforts could discourage potential top contributors or ‘crowd out’ higher-quality projects that, if the space had remained uncrowded, would have been set up at a later point.

7
MichaelA
3y
For the issue in general (not specific to the area of IIDM or how the EAIF thinks about things), there's some discussion from 80k here and in other parts of that article. There's probably also some in other posts tagged accidental harm. (Though note that 80k include various caveats and counterpoints, and conclude the article with [...]. I say that just to avoid people being overly discouraged by reading a single section from the middle of that article, without the rest of the context. I don't say this to imply I disagree with Max's comments on the IIDM grant.)
2
MaxRa
3y
Thanks! The 80,000 Hours article kind of makes it sound like it's not supposed to be a big consideration and can be addressed by things IIDM has clearly done, right? My impression is that the IIDM group is happy to have anyone interested in collaborating, and called for collaboration a year or so ago, and the space of improving institutions also seems very big (in comparison to 80k's examples of career advice for EAs and local EA chapters).

(Disclaimer: speaking for myself here, not the IIDM group.)

My understanding is that Max is concerned about something fairly specific here, which is a situation in which we are successful in capturing a significant share of the EA community's interest, talent, and/or funding, yet failing to either imagine or execute on the best ways of leveraging those resources.

While I could imagine something like this happening, it's only really a big problem if either a) the ways in which we're falling short remain invisible to the relevant stakeholders, or b) our group proves to be difficult to influence. I'm not especially worried about a) given that critical feedback is pretty much the core competency of the EA community and most of our work will have some sort of public-facing component. b) is something we can control and, while it's not always easy to judge how to balance external feedback against our inside-view perspectives, as you've pointed out we've been pretty intentional about trying to work well with other people in the space and cede responsibility/consider changing direction where it seems appropriate to do so.
