
Summary

 

Background

Effective Altruism Community Building Grants (CBG) is a programme run by the Centre for Effective Altruism (CEA). It provides grants to individuals and groups to pay for part- and full-time organisers for city, national and university EA groups. Currently, there are 16 groups funded through this programme. 

Applications for new CBGs have been closed since August 2020 in order for CEA to focus on assessing and improving the program, although individuals have been able to submit expressions of interest. During this time, we have assessed applications for grant renewals and a small number of urgent new applications as a result of the expressions of interest. 

We’ve been able to hire strong organisers when we have worked with groups to run open hiring rounds (in 2020: New York City, Cambridge University). However, these hiring rounds take a substantial amount of time for the CBG programme, which is currently busy with grant renewals and responses to incoming enquiries. As a result, we haven’t been able to provide as much support to grant recipients or recruit organisers to work full-time in some cities and countries that have a large number of highly engaged EAs. 

 

CEA’s plans

Full- and part-time funding for city / national group organisers from some locations

To allow for time to run proactive hiring rounds in key cities and countries, we’ve made the hard decision to select only a few locations to support with the CBG programme at this time. These locations are: 

  • Australia (national)
  • Berlin, Germany (city)
  • Boston, USA (city)
  • Canada (national)
  • Czech Republic (national) (currently on a CBG)
  • France (national) (currently on a CBG)
  • Germany (national) (currently on a CBG)
  • London, UK (city) (currently on a CBG)
  • Netherlands (national) (currently on a CBG)
  • New York City, USA (city) (currently on a CBG)
  • Norway (national) (currently on a CBG)
  • Oxford, UK (city)
  • San Francisco Bay area, USA (city)
  • Sweden (national) (currently on a CBG)
  • Switzerland (national)
  • Washington DC, USA (city) (currently on a CBG)

These “key locations” currently have the largest number of highly engaged EAs (calculated using a variety of sources, including EA survey data and attendance at virtual events). We wish to make sure locations with a very large number of EAs have strong organisers before funding groups in locations with smaller communities. We would also like to pilot a more intensive support programme (such as group and 1:1 calls) for organisers in these locations, before potentially rolling out this support to other groups.  

We also expect to invest in building groups in cities and countries that we think are particularly high priorities in terms of growing EA presence globally. We conducted a preliminary analysis and the only location we’re confident is in that category currently is India. We hope to conduct a thorough analysis and build out this list more once we have organizers in the locations listed above.  

Full- and part-time funding for group organisers at some universities

To allow for intensive support in our university program, CEA will prioritize providing CBGs for the following focus universities at the graduate and undergraduate level. 

These universities are: 

  • Brown University, USA
  • California Institute of Technology (Caltech), USA
  • Columbia University, USA
  • Georgetown University, USA
  • Harvard University, USA (currently on a CBG)
  • London School of Economics and Political Science (LSE), UK
  • Massachusetts Institute of Technology (MIT), USA
  • Oxford University, UK (currently on a CBG)
  • Princeton University, USA
  • Stanford University, USA (currently on a CBG)
  • Swarthmore College, USA
  • University of California, Berkeley, USA
  • University of Cambridge, UK (currently on a CBG)
  • University of Chicago, USA
  • University of Hong Kong, Hong Kong
  • University of Pennsylvania, USA
  • Yale University, USA

These focus universities were chosen primarily based on their track records of having highly influential graduates (e.g. Nobel prize winners, politicians, major philanthropists). We also place some weight on university rankings, location in regions with rapidly growing global influence, the track record of each university’s existing group, and the quality of that group’s current plans.

We expect to include more universities in this list as we build up capacity for our university program. However, we think there are benefits to piloting our university support program with a smaller number of groups. 

Criteria for CBG funding

We expect the criteria for CBG funding will not change substantially from the past. Assessments will be based on:

  • the organiser’s past work and future plans,
  • how engaged or promising the current group members are,
  • the opportunities for producing more highly engaged/promising EAs.

Basic support for all groups

CEA will continue to provide non-salary support to EA groups in all locations, including:

  • funding to pay for groups’ operating expenses, such as event costs, books, and website hosting
  • online resources
  • the EA Groups Newsletter
  • personalised advice from the Groups team via calls and messages
  • connections with other group organisers through the EA Groups Slack and personal introductions.

CEA will also continue to invest in Virtual Programs, which give people around the world a chance to attend fellowships and reading groups. 

Hiring for a CBG Programme Manager

Harri Besceli, who has been running the programme since 2018, is stepping down from his role. We will be hiring a new programme manager. Individuals can apply at this link.  

 

Involvement from the Effective Altruism Infrastructure Fund (EAIF)
 

We believe there are valuable potential funding opportunities outside of CEA’s key locations and focus universities. We’ve seen strong organisers outside of these groups develop innovative models and contribute to the larger EA ecosystem. As a result, we think it is valuable for these groups to be able to access funding, even though CBGs are currently focused on a small number of groups. 

CEA has asked EAIF to assess applications from groups that are not eligible for CBG funding. CEA chose to do this rather than hire more staff, as we believe there will be benefits from us running a more focused programme. EAIF has the capacity and movement-building knowledge to fill this need. 

EAIF recommends grants to improve the work of projects that use the principles of effective altruism, by increasing their access to talent, capital, and knowledge. Until now, they have not accepted applications to fund organisers to run local EA groups, as this opportunity has been provided by the CBG programme.

EAIF will accept grant applications from groups not covered by the CBG programme. Buck Shlegeris, a fund manager from EAIF, will be the main fund manager assessing these applications. The EAIF will use its own criteria for assessing applications, which we expect to be roughly similar to the criteria CEA has been using for CBG applicants (as outlined above). The EAIF accepts funding applications at any time.

Since CEA works regularly with EA groups and organisers, EAIF will usually ask CEA for input on the applications they receive, but will make independent decisions. 

 

Information for groups interested in funding

There is a single expression of interest form for any group interested in funding full- and part-time group organisers, including CBGs and EAIF funding. CEA’s Groups Team will be monitoring this form and sending applications to EAIF or the CBG team for evaluation. 

City and National Groups

Groups that currently have CBG funding, and are within our key locations: 

  • There will be no change to your funding. CEA will assess your applications for grant renewal.

Groups that currently have CBG funding, and are outside our key locations: 

  • Your CBG grant will continue for the remainder of the grant period. EAIF will assess your applications for additional funding thereafter.

City and national groups outside our key locations that aren’t on a CBG: 

  • EAIF will assess your applications for funding. Apply via the expression of interest form above, and CEA’s Groups Team will forward your application to EAIF.

Groups in our key locations that aren’t already on a CBG:

  • Once we hire someone to run the CBG programme, we will share the expected timeline for open application rounds on the EA Forum. We will also contact the current organisers of these groups.
  • Individuals interested in community building in the locations above can express interest before the hiring round using the CBG expression of interest form. We’d also be interested to hear from you about other potential applicants, whom we should reach out to and encourage to apply. If an applicant is in a time-sensitive situation, we may be able to accommodate an early CBG application.

University Groups

Groups already on a CBG:

  • There will be no change to your funding. CEA will assess your applications for grant renewal.

Focus university groups: 

CBG funding for focus university groups is now open. We expect to see two types of application, which we will assess at different times:

  • CBG applications for summer work (e.g. to spend time planning for the next academic year), or from individuals who have graduated or will have graduated by the next academic year, will be assessed on a rolling basis based on applicant timelines.
  • CBG applications for part-time work alongside studies will be assessed before the start of the academic year. We may strongly encourage individual applicants in these categories to attend a group leaders’ training over the summer (more information forthcoming).
  • In all cases, if you are interested in applying for CBG funding for a focus university, please submit an expression of interest.

All other university groups: 

  • EAIF will assess your applications for funding. Apply via the expression of interest form above, and CEA’s Groups Team will forward your application to EAIF.

Non-local groups such as cause area or affiliation groups:

  • EAIF will assess your applications for funding. Apply via the expression of interest form above, and CEA’s Groups Team will forward your application to EAIF.

If you have any questions about this new arrangement, are unsure what category your group is in, or want general advice about funding for your group, please contact CEA’s Groups team at groups@centreforeffectivealtruism.org, or comment below.

Edits June 23rd 2021: Small edits to the EAIF information, as EAIF now accepts funding applications at all times, and requests organisers apply directly to them.

Comments (9)



Prioritizing top universities makes perfect sense to me. I would argue we should consider directly working to establish EA communities at highly ranked universities that currently have none.

The choice of countries and cities makes much less sense to me.

My guess would be that the success of a given EA community is highly dependent on founder effects. The groups doing very well already have capable people with a keen sense of how to grow their community. If they didn't, they wouldn't have gotten big in the first place.

Why focus most of your time on these communities? It seems to me their organizers are perfectly capable and will do just fine as long as they are provided adequate funding.

Wouldn't it make more sense to be spending your time helping smaller communities grow? Cities that immediately spring to mind are:

1. Seattle
2. Austin
3. Warsaw
4. Moscow
5. Copenhagen

They are all populous cities with highly educated populations. I don't have any a priori reason to believe that Austin and Warsaw have much less 'ea-potential' than Stockholm and Prague. It seems to me that many places have potential to grow as big as the communities you're focusing on, but for some reason have not.

Shouldn't it be higher among CEA's priorities to figure out how to help communities like these?

Thanks for your comments, Mathias!
 

Just to echo your point about supporting university groups - beyond supporting a subset of university groups with full-time organizers via the CBG program, we just released a job posting for someone to help us develop a scalable university support program that I think will be high-impact. This will further support volunteer-led university groups. 

> I don't have any a priori reason to believe that Austin and Warsaw have much less 'ea-potential' than Stockholm and Prague. It seems to me that many places have potential to grow as big as the communities you're focusing on, but for some reason have not.

We’d agree with that. Apart from India, our key locations were chosen because of their existing large groups of engaged EAs, not because of the particular potential of those locations. As we noted in the post, about half of these groups with large EA populations currently don’t have paid community builders, so we’d like to make sure there are paid community builders in these areas before considering further locations in which to grow EA. I’ll make a note to consider the locations you suggested as part of our analysis. 

There are projects other than the CBG programme that are intended to support the community members in other locations, including the EA Forum, conferences, virtual programs, and non-salary funding for all groups. We’re pleased that the EAIF is able to assess funding applications from other groups, and we expect volunteer-run and EAIF funded groups will also be able to grow.
 

Seems very well thought out and justified - thanks for putting this together.

> CEA has asked EAIF to assess applications from groups that are not eligible for CBG funding. CEA chose to do this rather than hire more staff, as we believe there will be benefits from us running a more focused programme.

 

> We expect to include more universities in this list as we build up capacity for our university program. However, we think there are benefits to piloting our university support program with a smaller number of groups.

 

This seems to be saying there was the option of hiring more staff and rolling out the CBG programme and support to more groups, but CEA chose not to. What benefits could possibly come from that? Sounds like lost impact?

Within the CEA Groups team, we have several different sub-teams. Two of the sub-teams are focused on experimenting and understanding what a model looks like with full-time community builders in a focused set of locations (one sub-team for university groups, another sub-team for city/national groups). This is because the type of centralized support CEA might provide and the type of skills/characteristics required of someone working full-time running a university group or a city/national professional network might look very different depending on the ultimate model. 
 

Our staff capacity is limited (whether for hiring, piloting, or scaling), and we think that this focus will enable faster scaling in the long term. 

 

I also want to note a couple of things: 
* In addition to the sub teams mentioned above, we have two sub teams supporting part-time organizers. One team provides foundational support to all part-time/volunteer group organizers (basic funding, resources hub, EA slack, phone calls), and another team runs the University Groups Accelerator Program to help part-time university organizers launch their group. 


* Additionally, just because the CEA Groups team building up the ‘full-time’ model is prioritizing certain locations, that doesn't mean we want to stop experiments in other locations. We'd encourage people interested in full-time organizing in places that aren't on the locations list above to apply to the EAIF, help us innovate on the community building model in different locations, and share back your learnings with other organizers and on the forum.




 

Thanks! I need to ask a lot of clarifying questions:

When you say "This is because the type of centralized support CEA might provide and the type of skills/characteristics required of someone working full-time running a university group or a city/national professional network might look very different depending on the ultimate model.", (1) does "This" refer to the fact that you have 2 subteams working with focus locations as opposed to everyone working on all locations? (2) If so, could I reword the explanation the sentence gives to "We need to work on focus locations to figure out the ultimate model before scaling up with that ultimate model"? In even more words, "We want to hire knowing for what model we are hiring, and we want to grow CEA knowing for what model we are growing it as soon as possible."

I really want to know how you mean this!

(3) I interpret your staff capacity being limited as "we need to prioritise" and the prioritisation coming out of that being "prioritise building a model based on focus-locations, then scale later". Correct?

(4) Your staff capacity being limited also suggests the major priority of hiring. I understand CEA is hiring quite fast, but I don't have any idea how fast. Do you think you are prioritising hiring highly enough?

(5) What do you mean by "we think that this focus will enable faster scaling in the long term"? Firstly, again, which "focus" exactly is this referring to? Secondly, isn't "focussing" more intended to improve the quality at the expense of speed of scaling? Intuitively I would say scaling is what enables faster scaling in the long term.

Maybe I can give some context from my side so we can find the crux of this quickly, and we are working in the same direction. I mostly see the lack of a pipeline into full-time CB in non-focus locations in stark contrast to all the extremely high-impact low-hanging fruit in CB and think "This can't be the best we can do". It seems imperative we find a way to funnel talented EAs everywhere into this neglected career path. Hence my insistence on rolling things like the CBGs out in as many locations as possible.

I'm really interested in getting to the bottom of this. I hope I don't come across as intrusive into CEA's decisions without having any background knowledge. My interest is not to criticise CEA, but to solve this problem I see! :)

  1. yes
  2. correct
  3. yes
  4. we think focusing will improve quality in the short term, which will enable more potential scale / impact in the long term

Thanks for your questions! As mentioned before, I’m excited for others to consider full-time community building via the infrastructure fund, and hope that you and others would pursue this option if you feel well positioned.

I don’t think CEA has covered all the net-positive opportunities in this space — just the ones we think are the best given our view of our core competencies, staff capacity, and theory of change.

Re target universities, I wonder if UCLA, CMU, JHU, and Cornell could also be interesting, based on the ShanghaiRanking and their strength in AI. Though I don't know about their undergrad programs in particular.

Thanks for the recommendations, Ryan. I'll pass this on to my team and we'll look into the historical outcomes of their undergraduates. 
