Recent Discussion

I’ve noticed that some people seem to have misconceptions about what kinds of grants EA Funds can make, so I put together a quick list of things we can fund that may be surprising to some of you.

(Reminder: our funding deadline for this round is March 7, though you can apply at any time of the year.)

  • EA Funds will consider building longer-term funding relationships, not just one-off grants.
    • Even though we typically make one-off grant decisions and have some turnover among fund managers, we can consider commitments to provide longer-term funding. We are also happy to otherwise help with the predictability of funding, e.g. by sharing our thoughts on how easy or hard we expect it to be to get funding in the future.
  • EA Funds can provide academic scholarships and teaching buy-outs.
    • We haven’t received a lot of applications for scholarships in the past, but the Long-Term Future Fund (LTFF) and EA
...
13 · Peter_Hurford · 2h
How do you feel about there being very few large institutional donors in effective altruism? This seems like it could be a good thing, as it allows specialization and coordination, but it could also be bad: if a particular person doesn't like you, you may just be straight up dead for funding. It may also be bad for organizations to have >80% of their funding come from one or two sources.

Some quick thoughts:

  • EA seems constrained by specific types of talent and management capacity, and the longtermist and EA meta space has a hard time spending money usefully
  • In this environment, funders need to work proactively to create new opportunities (e.g., by getting new, high-value organizations off the ground that can absorb money and hire people)
  • Proactively creating such opportunities is typically referred to as "active grantmaking"
  • I think active grantmaking benefits a lot from resource pooling, specialization, and coordination, and less from diversi
... (read more)
26 · Michelle_Hutchinson · 10h
Relevant for people trying to get funding for a project: people could consider writing up their project as a blog post on the Forum to see if they get any bites for funding. In general, I'd encourage people looking for funding to do more writing up of one-page summaries of what they would like to get funded. Such a summary would include things like:
  • The problem the project addresses
  • Why the solution the project proposes is the right one for the problem
  • The team, and why they're well suited to work on this
I'd guess that if you write a post like this, there would be quite a few people happy to read it and answer whether it sounds like something they'd be interested in funding, whether they know anyone to pass it on to, or what more they'd need to know to fund it or pass it on. By contrast, my perception is that people currently feeling out a potential project and whether it could get funded are much more likely to approach people to ask to get on a call, which is far more time-consuming and doesn't allow someone to quickly answer 'this isn't for me, but this other person might be interested'.

Summary

  • The Animal Welfare Fund, the Long-Term Future Fund, and the EA Infrastructure Fund (formerly the EA Meta Fund) are calling for applications.
  • Applying is fast and easy – it typically takes no more than a few hours. If you are unsure whether to apply, simply give it a try.
  • The Long-Term Future Fund and EA Infrastructure Fund now support anonymized grants: if you prefer not to have your name listed in the public payout report, we are still interested in funding you.
  • If you have a project you think will improve the world, and it seems like a good fit for one of our funds, we encourage you to apply by 7 March (11:59pm PST). Apply here. We’d be excited to hear from you!

Recent updates

  • The Long-Term Future Fund and EA Infrastructure Fund now officially support anonymized grants. To be transparent towards donors and the effective altruism community, we generally prefer to publish a report about
...
1 · AnonymousEAForumAccount · 41m
I didn't downvote your comment, though I am disappointed you won't be considering applications this cycle. I hope that if CEA does choose to restrict CBG applications going forward (which seems to be under consideration per Harri [https://forum.effectivealtruism.org/posts/NfkdSooNiHcdCBSJs/apply-to-ea-funds-now-1?commentId=fhg8M8qeX5675kWkc]), the EAIF will fill the gap. FWIW, I'd like to see the EAIF funding this space even if CEA does open up applications, as I'd value diversifying funder perspectives more than any comparative advantage CEA might have.
3 · Harri Besceli · 5h
Hi, sorry for not responding to this comment sooner. It's taking us longer than we expected to decide on our plans for reopening applications. For context, some of the options we're considering for the programme:
  1. Continue to accept applications for funding group organizers from any EA group
  2. Only accept new applications from a subset of groups
We will give an update on this by June 1st at the latest (~2 weeks before the next application deadline for the EA Infrastructure Fund), and will either let people know when they will be able to apply for CBG funding or recommend that they apply for EAIF funding. We're discussing the above with the EAIF, though it's ultimately up to them what they choose to accept applications for (so this comment shouldn't be seen as me speaking on their behalf).

Thanks for the update, Harri. I'd suggest putting this info on the main CBG page so applicants have an up-to-date picture.

What this is

A syllabus of readings relating to ‘longtermist’ philosophy. I’m posting it here because I hope it might inform syllabi for university courses, reading groups or EA fellowships, and because I would love to see people share suggestions for other works to include. 

As this list was designed to include roughly a semester's worth of material, it is, needless to say, not an exhaustive resource. Indeed, each of the dozen topics could have a syllabus of its own, and I am not myself very familiar with the relevant literature – suggestions are very welcome!

Some background 

Like many other student groups, my previous university EA community would often invite faculty speakers to join dinner discussions and fellowship meetings. In our group, the ethics professor Shelly Kagan was generous enough to attend group discussions regularly. While he initially joined for conversations on Peter Singer's arguments about charity, we started a few years...

We are in the process of implementing a major project on the Forum — turning our current system of tags into a full-fledged “EA wiki”.

Under this system, many of the tags used for posts will also serve as articles in the wiki, and many wiki articles will in turn serve as tags that can be applied to posts.

However, there are exceptions in both directions. Some tags don’t make sense as wiki articles (for example, “EA London Update”). And some articles are too narrow to be useful tags (for example, Abhijit Banerjee). These will be marked as “wiki only” — they can be found with the Forum’s search engine, but can’t be used to tag posts.
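To make the dual tag/article role concrete, here is a minimal illustrative sketch. The class and field names are hypothetical (this is not the Forum's actual data model); it simply assumes each entry carries a flag for the "wiki only" case described above:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class WikiEntry:
    """One entry in the combined tag/wiki system."""
    name: str
    article: str     # wiki-article text
    wiki_only: bool  # True: findable via search, but not usable to tag posts

def taggable(entries: List[WikiEntry]) -> List[WikiEntry]:
    """Entries that can be applied to posts as tags."""
    return [e for e in entries if not e.wiki_only]

entries = [
    WikiEntry("Existential risk", "An existential risk is ...", wiki_only=False),
    WikiEntry("Abhijit Banerjee", "Abhijit Banerjee is ...", wiki_only=True),
]

# Every entry is searchable, but "wiki only" entries are excluded from tagging.
print([e.name for e in taggable(entries)])  # -> ['Existential risk']
```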

The project is made possible by the work of Pablo Stafforini, who received an EA Infrastructure Fund grant to create an initial set of articles.

Why is an EA wiki useful?

EA content mostly takes the form of...

21 · AnonymousEAForumAccount · 6h
  1. Why can't the existing content from EA Concepts be used to seed the new Wiki?
  2. Are you planning to use prizes or other incentives after the "festival" is over? If not, how do you plan to handle the ongoing (and presumably increasing) maintenance burden? Do you have a (very rough) estimate for how much volunteer time will be required once the Wiki is up and running?
  3. Why is a dedicated EA Wiki better than adding EA content/perspectives to Wikipedia? I think using the main Wikipedia would have numerous advantages: easier to make incremental progress, seen by more people, more contextualized, many more languages, forces EA to interact with opposing ideas, larger volunteer pool, etc.

Why is a dedicated EA Wiki better than adding EA content/perspectives to Wikipedia?

That seems like an interesting question. I'm wondering if one reason to use a separate Wiki is that some EA-relevant topics might not meet Wikipedia's notability requirements (i.e. couldn't get their own article there).

This is my first-ever AMA and I'm excited about it – thanks to Aaron for the push! I will be answering questions here on the afternoon of Monday, March 8, between 1 and 3pm East Coast time.

Here's some information about me and my work:

...

What are the major risks or downsides that may occur, accidentally or otherwise, from efforts to improve institutional decision-making? 

How concerned are you about these (how likely do you think they are, and how bad would they be if they happened)?

1 · tamgent · 9h
On what timescales do you see most of the impact from improving institutional decision-making starting to kick in, and what does the growth function look like to you?
1 · IanDavidMoss · 9h
Hi Michael, there are some sample project descriptions [https://www.iandavidmoss.com/projects] over at my website, but I'll paste a couple here for convenience: Those should give you a high-level sense of what I do, but I'm happy to answer more specific questions as bandwidth allows.

Presumably this information is public but spread out.

If you know how many hits an EA website got last year, please post it here.

Even better, a link to a public analytics site.

1 · Answer by JJ Hepburn · 10h
AI Safety Support [https://www.aisafetysupport.org/] only started mid last year, so it's hard to get a clear picture.
  • Q3 2020: 399 sessions, 863 page views
  • Q4 2020: 1,006 sessions, 1,764 page views
So 1,405 sessions and 2,627 page views for the second half of 2020.
2 · kokotajlod · 13h
Whoa, LessWrong beats SSC? That surprises me.

I'd expect LessWrong and the EA Forum to be quite high, but it depends on what metric you're thinking of. Forums have a lot of active content compared to the other sites, which are not updated as often. Forums probably have lower unique-user numbers and higher page views than these other sites.

The Forethought Foundation is seeking a Chief of Staff and an Executive Assistant for our Director, William MacAskill. Additionally, we are looking to potentially hire one or more Research Fellows. Learn more and apply here.

About the Forethought Foundation

The Forethought Foundation for Global Priorities Research aims to promote academic work that addresses the question of how to use our scarce resources to improve the world by as much as possible.

We are especially interested in the idea that the primary determinant of the value of our actions today is how those actions influence the very long-run future. We believe that by making the right decisions today, humanity has the opportunity to positively steer civilisation’s trajectory for thousands of years to come. We are therefore interested in supporting excellent research that:

  1. Defends or criticises the idea that we should primarily care about the very long-run impact of our actions.
  2. Works out the implications of
...

TL;DR

  • We have launched a preliminary (alpha) website for Probably Good.
  • The long-term goal of this effort is to provide EA-aligned career advice in a way that’s engaging, relevant and useful to people with a diverse range of views, backgrounds and circumstances. For more details see our original announcement.
  • As you can see on the site, this is an early version; we are sharing it here mainly to receive feedback, which you can give in the comments or through the website's feedback page.
  • Any feedback – including on content, style, prioritization, or typos – would be greatly appreciated.

Details

Three months ago, we announced Probably Good - a new career guidance organization aimed at filling existing gaps in career advice in the EA community, and providing tools and advice relevant to a wide range of empirical, epistemic and moral views. It was heartening to see the support, offers to help, requests for advice,...

In the ideal case, this form would submit to the bottom of an (air)table whose entries can be upvoted. You could send out the top ones, but I could also sort by whatever I'm interested in.

By Ian David Moss, Vicky Clayton and Laura Green

Summary

  • This post describes recent and planned efforts to develop improving institutional decision-making (IIDM) as a cause area within and beyond the effective altruism movement.
  • Despite increasing interest in the topic over the past several years, IIDM remains underexplored compared to “classic” EA cause areas such as AI safety and animal welfare.
  • To help address some questions that have come up in our community-building work, we provide a working definition of IIDM, emphasizing its interdisciplinary nature and potential to bring together insights across professional, industry, and geographic boundaries.
  • We also describe a new meta initiative aiming to disentangle and make intellectual progress on IIDM over the next year. The initiative includes several research and community development projects intended to enable more confident funding recommendations and career guidance going forward.
  • You can get involved by volunteering to work on our projects, helping us secure funding, or giving us
...

Thanks for this post! This seems like a valuable project, and I'm excited to see what comes of it over the course of this year. 

As a first step toward a working definition, we consider key institutions to be centrally managed bodies of one or more people in a direct position to allocate disproportionate funds and/or set rules, incentives, and norms affecting the lives of many.

tl;dr for the remainder of this comment: I tentatively suggest instead using a definition (or statement of scope/focus) more along the lines of: 

We think the institutions it

... (read more)
2 · MichaelA · 14h
Out of interest, what does MFA stand for in this context?

In a recent answer to Issa’s Why "cause area" as the unit of analysis?, Michael Plant presents his take on cause prioritization and points to his thesis. As part of my cause prioritization analysis work with QURI, I read the relevant parts of his thesis and found them interesting and novel, so I want to bring more attention to it.

In his Ph.D. thesis, Michael Plant (the founder of the Happier Lives Institute) reviews the foundations of EA, presents constructive criticism of the importance of saving lives, sheds more light on how we can effectively make more people happier, describes weaknesses in current approaches to cause prioritization, and suggests a practical refinement: "Cause Mapping". In this post, I summarize the key points on cause prioritization from chapters 5 and 6 of Michael's thesis.

Main points

  1. Cause areas can be thought of as "problems", while interventions can be thought of as corresponding "solutions".
  2. Cause Prioritization is an
...

I found this really interesting and never would have read the full thing (sorry, Michael!), so thanks for posting this summary. It was a pleasure to read.

3 · MichaelA · 16h
I share this view. I also feel like it might tie in somewhat with discussion related to Cause X [https://www.effectivealtruism.org/articles/moral-progress-and-cause-x/], the possibility of an ongoing moral catastrophe [https://philpapers.org/rec/WILTPO-101], etc.
...that said, I guess it's easy to agree that we shouldn't dismiss something "too soon", and I'm not actually sure whether EA is currently erring towards dismissing things too soon or not quickly enough. And I guess there's also a way your statement is in tension with the standard discussion of Cause X, the possibility of an ongoing moral catastrophe, etc.; if we spend more resources (including time) re-assessing causes that have been preliminarily dismissed, this leaves us with fewer resources available for identifying and (further) assessing cause candidates that haven't been dismissed yet.
6 · MichaelA · 16h
Michael Plant made a similar point in the comment you cite at the start of this post [https://forum.effectivealtruism.org/posts/QZy5gJ6JaxGtH7FQq/why-cause-area-as-the-unit-of-analysis?commentId=Eda45NeqGpu49Nkqw]. I responded that I didn't think the point was quite right. The fleshed-out picture of Michael's reasoning given in this post resolves some of what I said in response (regarding not knowing in advance what the best interventions in each area are). But I think the reasoning given in this post still isn't quite right, because I don't think we only care about the best interventions in each area; I think we also care about other identifiable positive outliers. Reasons for that include the facts that:
  • We may be able to allocate enough resources to an area that the best would no longer be the best on the margin (if there are diminishing returns)
  • Some people may be sufficiently better fits for something else that that's the best thing for them to do
And there are probably cases in which we have to or should "invest" in a cause area in a general way, not just invest in one specific intervention. So it's useful to know which cause area will be able to best use a large chunk of a certain type of resources, not just which cause area contains the one intervention that is most cost-effective given generic resources on the current margin. For example, let's suppose for the sake of discussion that technical AI safety research is the best solution within the x-risk cause area, that deworming is the best solution in the global health & development cause area, and that technical AI safety is better than deworming. (Personally, I believe the third claim, and am more agnostic about the other two, but this is just an example.) In that case, in comparing the cause areas (to inform decisions like what skills EAs should skill up in, what networks we should build, what careers people should pursue, and where money should go), it would still be useful to know what t