
                                                       
Applications are now open for our two upcoming Incubation Programs: 

  • July-August 2023 with a focus on biosecurity interventions and large-scale global health interventions
  • February-March 2024 with a focus on farmed animals and global health and development mass-media interventions

In this post we set out some of the key updates we’ve made to the program, namely: 

  • Increased funding 
  • More time for participants in person in London
  • Extended stipends and support to provide an even bigger safety net for participants 
  • Even more ongoing support after the program
  • More time for applications

Context: 

In four years we’ve launched 23 new effective charities that have reached 120 million animals and over 10 million people. The Incubation Program provides you with two months of intensive training, well-researched charity ideas, and access to the funding you need to launch. All we care about is impact, and the most important determinant of success is finding potential founders. 

APPLY HERE

Updates to the Incubation Program

All the details are here on our website, but below we summarize the latest changes/improvements.

Increased quantity and probability of funding 

In recent years, in part due to our portfolio’s track record, we’ve seen a significant uptick in the seed funding achieved by our incubatees. In the most recent round, for example, eight out of nine participants started organizations and received a combined $732,000, with grants ranging from $100,000 to $220,000. The ninth participant joined the CE team as a research analyst. 

A Bigger Safety Net

In the past two years, we’ve trained 34 people. After the program: 

  • 20 launched new charities and raised over $1.2 million in seed funding 
  • 6 got jobs in EA orgs (including CE)
  • 1 worked on mental health research with funding in Asia (and a year later became a co-founder of a mental health charity newly incubated by CE)
  • 1 worked as a senior EA community manager 
  • 1 got funded to do their own specialist research project and has since hired 3 people 
  • 2 launched their own grantmaking foundation
  • 1 works for that grantmaking foundation 
  • 1 ran for office in America and was elected to the district parliament
  • 1 kept on working on the project they co-founded in the alternative protein space
  • 1 runs a charity evaluator in China
  • 1 was hired by one of the previously incubated charities 


So in summary: 100% of participants, within weeks of finishing the program, landed relevant roles with high personal fit and excellent impact potential. 

During the program we will provide you with: 

  • Stipends to cover your living costs during the Incubation Program (e.g., rent, wifi, food, childcare). The stipends are around $2,000 per month and are based on participants' needs and adjusted accordingly.
  • Travel and board costs for the 2 weeks in person in London.

If, for any reason, you do not start a charity after the program, we provide: 

  • Career mentorship (our track record for connecting non-founder participants to research grants, related jobs, and other pathways to impact is near 100%).
  • Two-month stipends to provide a safety net during the period of looking for alternative opportunities.

More time in-person in London

The Incubation Program lasts 8 weeks, followed by a 2-week seed-funding process.

  • The 8-week program runs online, now with 2 weeks in person in CE’s London office 
  • During the 2-week seed-funding process you make final improvements to your proposal, which is then submitted to the CE seed network, which makes the final decision on your grant.

Even more support after the program

You will graduate from the program with a co-founder, a high-quality charity idea, a plan for implementation, and a robust funding proposal. On top of that we offer you:

  • A seed grant of up to $200,000 (not guaranteed, but in recent years 80%+ of projects received funding)
  • Further learning:
    • Weekly ‘getting started’ sessions for the first 4 weeks
    • Regular emails with further videos and resources that are relevant to you later in your charity journey (e.g., on hiring, or charity registration)
  • Support with Wix website design
  • Mentorship
    • Monthly mentorship meetings with the CE team
    • Access to a broad network of mentors and potential funders
    • Coaching from external topic experts (e.g., on co-founder relations or M&E)
  • Operations support
    • Get professional operations and HR support from the CE team that will help you to set up your organization quickly
    • Start with a US charitable fiscal sponsorship allowing you to accept tax deductible donations
  • Community
    • Join a Slack group of over 100 charity founders and effective charity employees
    • Enjoy weekly London socials and annual gatherings
    • Tap into the knowledge and template base of our network of incubated charities

More time for applications

Applications will be open: 

  • From February 1 to March 12, 2023
    • Final results (acceptance letters): Mid May, 2023
  • From July 10 to September 30, 2023
    • Final results (acceptance letters): Early December, 2023

We hope you will apply early; doing so will give you access to a resource list that will help you prepare for the application process. Also, the earlier you apply, the earlier we will be able to process your application.

We will announce the top ideas for the July-August 2023 program soon, so be on the lookout for our next newsletter or post on the EA forum! We recommend applying early to increase your chances. 

APPLY HERE


 

Comments (11)



As usual, I recommend checking our participants' video about their experience in the program: 

Best of luck!

Thank you, Emre!

Excited to see another impactful set of charities get founded!

Hi! Do you accept founders with existing, potentially high-impact organisations that are already in the bootstrapping phase?

Hi Karolina, thanks so much for summarizing this, it's great to see the changes at a glance and exciting to see how the program is evolving. Two questions regarding the process:

  1. When you say you're able to process early applications sooner, does this mean that early applicants will get earlier responses? If so, do you have a time frame from submitting the application to receiving the response?

  2. Since you were recruiting for this year's summer cohort last fall already, would you be able to say how many spots you are still looking to fill?

Thanks in advance!

Hi there, thanks for your questions! As Talent Systems Specialist, I'm happy to answer them:

  1. The main benefit will be going through the earlier stages sooner. Acceptance letters will be sent out by mid-May at the latest, but earlier for people who make it to the final stages sooner (i.e., apply earlier and send in their test tasks etc. earlier). I can't say anything more precise than that, as the total time we need to process all candidates through the entire application round will still depend largely on how many applications we get and how high the quality of the pool is; this varies from round to round, from roughly 700 to 3,000 initial applications.
  2. This is correct - we always recruit for two program rounds during each application round (so there is some flexibility for candidates, and for us to recommend a particular program cohort and cause area combination to each future incubatee). We are usually looking for cohorts of between 8 and 20 people (ideally 14-16) and have already accepted 3, so there are between 5 and 17 spots left for the July/August 2023 cohort and 8-20 for February/March 2024. However, if we find someone who we think is a great fit and their test tasks and interviews are really exciting, we will never not accept them. Amazing founders are our bottleneck!

Would love to see you put in an application if you're interested! 

All the best,
Judith

Thank you so much Judith, this helps a lot!

Is it possible to get a list of the questions in the application form without having to fill in the earlier sections?

You can click on the dots at the bottom


For other forms, I usually just fill in random values and don't submit.

Indeed - you can do as Lorenzo suggested.  :) 
