
I'm announcing a new project from the Centre for Effective Altruism: Effective Altruism Grants. The project aims to provide individuals with grants of up to £100,000 (~$130,000) to work on promising projects.

We hope to fund a wide range of effective projects that will directly or indirectly contribute to making the world a better place. We primarily expect to fund individuals who explicitly endorse the principles of effective altruism, broadly construed, but are in principle open to funding projects of any form.

This project is being run by the Centre for Effective Altruism. Individual applicants may receive up to £100,000 (~$130,000). We will cap CEA's total funding for Effective Altruism Grants in 2017 at £500,000 (~$650,000), but will present promising applications that we have not chosen to fund to other significant donors, including the Effective Altruism Community Fund. This means that the total pool of available funding may be significantly higher.

We are running this project as a way to support the effective altruism community and to allow people to pursue useful projects. The Centre for Effective Altruism will only fund projects that further its charitable objects.[1] However, we also welcome applications that may be of interest to our partners who are also looking to fund promising projects.

Motivation and aims

Effective altruism has attracted many people of outstanding talent and motivation. We believe that providing those people with the resources that they need to realize their potential could be a highly effective use of resources.

Currently, this type of funding is relatively neglected within the effective altruism community: there are not many donors directly funding small projects run by individuals. For that reason we believe that funding the very best projects after a thorough application process could be a great use of funds. We want to try out a grants project during 2017. If we feel that we have been able to use money well through this project, we will allocate new funds to it in 2018.

We want to use the grants to increase the diversity of approaches within effective altruism. We believe that untested strategies could yield significant information value for the effective altruism community, and will fund projects accordingly. We hope that the Effective Altruism Grants will also allow people from less privileged backgrounds (and therefore less financial stability) to pursue the highest expected value career paths open to them.

What projects could get funded?

We welcome applications from individuals of any background and hope that applicants will include senior professionals and academics as well as recent graduates. Our hope is to fund a diverse array of projects, both in terms of causes and approaches.

In terms of length and size, we are looking to give both large grants to longer projects (up to £100,000, e.g., over more than a year) and smaller grants to shorter projects or parts of projects (e.g., £10,000 (~$13,000) over a three-month period). The smaller grants could go to applicants who wish to transition into a more impactful career or who want to get started on a larger project. Grants of any size may be renewed. If you have a project that would require significantly more than £100,000, we still encourage you to apply, as we may be able to find you funding from other sources.

Similarly, due to legal restrictions, CEA may not be able to fund certain types of projects. However, if the project seems promising, we may, with your permission, share your application with other individuals who are looking to fund projects in the effective altruism movement. 

We welcome applications in the following areas:

  • Writing a book or book proposal
  • Unsalaried fellowships or internships at think tanks or media outlets
  • Studies (e.g., Masters or PhDs)
  • Academic research, e.g., by providing teaching buy-out for professors
  • Independent research
  • High quality writing for blogs or other media outlets
  • Public outreach on effective altruism or effective altruism-related topics (such as prioritization or rationality)
  • Seed funding for an independent project (e.g., founding a new charity)

Some illustrative examples:  

  • Write a book proposal on technological solutions to factory farming
  • Take an unpaid internship at a think tank as a way of transitioning into a career in policy
  • Start a blog on some important and understudied future technology
  • Pursue a PhD in economics on how to factor variable-population ethics considerations into cost-benefit analysis
  • Run a local effective altruist group, part-time or full-time
  • Take several months off work in order to volunteer for effective altruist organisations and figure out your next career steps

How the grants work

  • The grants will be paid out on 15 August 2017 at the earliest.
  • A grant's duration can vary between a month and several years. We expect many of our grants to be short.
  • The grant maximum is £100,000.
  • Grants can be renewed.
  • The Centre for Effective Altruism will not act as your employer and will not be responsible for grantees.
  • Funding will be paid out on a quarterly basis, conditional on itemized reports on spending in the last quarter, as well as spending and activity plans for the coming quarter. However, we are consciously taking a ‘[hits-based giving](http://www.openphilanthropy.org/blog/hits-based-giving)’ approach and will not discontinue funding merely because initial results of the project were less promising than was hoped.
  • We welcome requests for funding of expenses such as tuition fees, travel, and buying grantees out of existing contracts, as well as living costs. You should provide us with an estimate of your living costs (subject to revision; relevant factors include seniority, location, etc.). Note that we will likely refer applications that request funding for living costs and overheads to our partners, rather than funding them ourselves.

Evaluation criteria

Our evaluation criteria are:

  • Understanding of, and commitment to, the principles of effective altruism. We are looking primarily for people who can show clear evidence that they want to benefit others as effectively as possible.
  • Demonstrated ability and drive. We are looking to fund people who are both highly competent and strongly motivated. In particular, it is important that you are able to bring your plans to completion. 
  • Quality of the project plan. We are looking to fund projects which have high expected value, either directly or through enabling yourself or others to have an impact at a later point. We would prioritize a project which might fail over a safer bet if the former has higher expected value. We also believe that information for the community is an important source of value. For that reason, we look favourably on projects exploring previously untested strategies.
  • Quality of the career plan. We are also looking at the quality of your overall career plan and at how your project fits into it. A proposal that fits well with your career plan helps us assess how committed you are to it, as well as your understanding of effective altruism. (This criterion may be weighted less heavily for senior applicants.)

In addition, any project funded by CEA must further CEA's charitable objects.

We are an equal opportunity organization and value diversity. We do not discriminate on the basis of religion, color, national origin, gender, sexual orientation, age, marital status, or disability status. Please contact us to discuss adjustments to the application process.

Application process

Applicants should apply through this application form. The deadline for applications is 1 July 2017. Please send any inquiries to eagrants@centreforeffectivealtruism.org.

Applications will be blinded for assessors. After an initial screening, the most promising candidates will be invited to interviews in the week of 24 July. These candidates will have three separate short interviews. Other things being equal, applications for smaller grants will have a proportionately greater chance of success. Assessors will recuse themselves if there is a conflict of interest. We aim to make the final decisions by 1 August.

Any promising projects that do not further CEA's objects, as well as the most promising rejected applications, may, with permission, be presented to other significant effective altruist donors. These donors may, at some as yet undecided point, choose to fund some of those applicants.

* * *

[1]: Our charitable objects are listed in the "Documents" tab of this site.


Comments (16)



How does this compare to EA Ventures?

It is a successor to EA Ventures, though EA Grants already has funding, and is more focused on individuals than start-up projects.

Have you already raised the funds for this? EA Ventures failed a while back primarily because the money wasn't there, and those in charge of it found raising funds much more difficult than they had expected.

Yes, this project is fully funded by a donation from a large donor, given for this purpose.

On this topic...

I would be interested in a write-up of EA Ventures, why it did not seem to work (did it fail?), and what can be learned from it. I think there is significant learning value for the EA community in writing up projects like this, even if they went wrong.

Similarly, I would be interested in seeing a write-up of the Pareto Fellowship, another programme that possibly (it is unclear) was not the success that was hoped for.

If CEA has an internal write-up of these projects but not a publishable one (as I hope is the case), I can try to find a trustworthy London-based volunteer who could rewrite it for you. Or it might be a good project for a summer intern.

This is all very exciting and I'm glad to see this is happening.

A couple of comments.

  1. The deadline for this is only three weeks away, which seems quite tight.

  2. Could you give examples of the types of things you wouldn't fund or are very unlikely to fund? That would avoid you getting lots of applications you don't want, as well as people spending time submitting applications that will get rejected. For instance, would/could CEA provide seed funding for altruistic for-profit organisations, like start-ups? Asking for a friend...

  1. We have set a tight deadline in order to allow us to process applications before the start of the academic year, since we see funding for study as one of the main use cases. If the project is successful, we will set more generous deadlines in the future.

  2. CEA may not be able to fund for-profit organisations, because we have to use money to further our charitable objects. However, we encourage applications from for-profits. We may then, with the applicant's permission, share information about the projects with private donors who might provide funding. In general, there aren't classes of projects that we won't consider: we want to cast the net wide. However, if others are unsure, I recommend that they email eagrants@centreforeffectivealtruism.org to clarify on a case-by-case basis.

Great idea! Do you want a 16 page CV or a 2 page resume? If CV, how do you want one to anonymize publications?

Thanks! Please use the resume.


Would anyone be interested in an EA prediction market, where trading profits were donated to the EA charity of the investor's choosing, and the contracts were based on outcomes important to EAs (examples below)?

  • Will a nation state launch a nuclear weapon in 2017 that kills more than 1,000 people?

  • Will one of the current top five fast food chains offer an item containing cultured meat before 2023?

  • Will the total number of slaughtered farm animals in 2017 be less than that in 2016?

  • Will the 2017 infant mortality rate in the DRC be less than 5%?

While I'm generally in favor of the idea of prediction markets, I think we need to consider the potential negative PR from betting on catastrophes. So while betting on whether a fast food chain offers cultured meat before a certain date would probably be fine, I think it would be a really bad idea to bet on nuclear weapons being used.

For context (plausibly Mac already knows this): At least in the U.S., real-money prediction markets are apparently legal so long as the profits from successful bets do not go to the bettors (e.g. because they go to charity instead): see Bet2Give. As I understand it, Bet2Give didn't become popular enough to be sustainable — perhaps because not enough players were motivated to participate given that they couldn't actually receive monetary rewards for successful bets.

My suspicion is that prediction markets on 'boring' topics will only take off if they are heavily subsidized.

I'm hoping to secure PhD study and have an EA-related research proposal. I've noticed the application's character limit is quite strict, so I quite likely won't be able to explain much of the proposal within it. Should I attach it to my CV, or should I just explain it very briefly in the application?

You should write it briefly in the application. As the form mentions, the character limit is deliberately strict to encourage you to focus on the most important issues.

Thanks!
