
I’m writing this in my official capacity as the Head of the CEA Groups Team. Thanks to Asya Bergal, Claire Zabel, Aaron Gertler, Max Dalton, Alex Holness-Tofts, Will Payne, Huw Thomas, Julia Wise, Will Fenning, and Ollie Base for their input on this draft. All opinions expressed are my own.

CEA is discontinuing its focus university [1] programming – this includes its Campus Specialist program and Campus Specialist Internship program (formerly known as the community-building grants program for focus universities).

CEA will be redirecting university organizers who would like to be funded for their work to Open Philanthropy’s two new fellowships, which were launched in part to replace the funding aspects of the Campus Specialist and Campus Specialist Internship programs: 

  • The University Organizer Fellowship, which provides funding for organizers and group expenses for part-time and full-time organizers helping with student groups focused on effective altruism, longtermism, rationality, or other relevant topics at any university (not just focus universities). (Intended to replace the Campus Specialist and Campus Specialist Internship programs.)
  • The Century Fellowship, a selective 2-year program that gives resources and support (including $100K+/year in funding) to particularly promising people early in their careers who want to work in areas that could improve the long-term future. (Intended partially for particularly strong Campus Specialist applicants.)

As before, all group organizers can also apply for group expenses from CEA’s Group Support Funding, and for organizer funding and group expenses from the EA Infrastructure Fund. A detailed breakdown of potential funding options for organizers can be found here.

CEA will continue to support early-stage university groups via the University Group Accelerator Program, and city and national groups in key locations via the Community Building Grants Program.

The rest of this post covers two topics:

  1. Why we made this shift
  2. Lessons learned

I want to emphasise that this is a tactical shift about where funding for EA community building is coming from – not a signal about impact. CEA, Open Phil, and EAIF are just as excited about field-building work at focus universities as we were before – and we all still think advanced-stage university field building can be a very high impact opportunity. 

 

So why is CEA making this change? 

Ultimately, this was a decision about organisational focus. I think the CEA groups team will be more successful if we’re more focused. Over the last year we tried to cover three massive areas: focus university groups, early-stage university groups, and key city and national groups. It was challenging to provide excellent support to group leaders, who had different needs, in all of these areas simultaneously. We think we were more successful with early-stage university groups and key city and national groups than with focus university groups, whose organizers reported getting more value from support and funding provided by other organisations. We want to encourage others who may be better placed to provide focus university support to do so, and we worry that we may implicitly discourage this if it seems like we “own” the entire groups space.

When choosing where to focus across early-stage university groups, late-stage university groups, and city/national groups, we looked at our track record and the number of other organisations working in each space.

  • Track record: 
    • Over the last 6 months, we think we’ve done a strong job in our early-stage university work (supporting 32 groups via our University Group Accelerator Program, with a stellar user rating of 9.1/10) and improved our support for city/national group leaders in our Community Building Grants program (supporting ~ten hiring rounds and increasing grantee net promoter score – the percentage of promoters minus the percentage of detractors – from +28 to +65). To read more about our work in these areas, see our Q4 '21 and Q1 '22 update.
    • In comparison, we think our support this year to advanced-stage university groups was good but not excellent.
    • Specifically, when we conducted an end-of-year survey, focus university group organizers rated the support provided by CEA as a bit less valuable than support provided by other organizations (e.g. CEA’s services were rated about 85% as valuable as the “average service” provided – more details provided in the appendix of this post). Other organizations providing services included Lightcone, the Open Philanthropy EA Community-Building Team, EA Infrastructure Fund, EA Cambridge, Stanford EA, and Global Challenges Project.

 

  • Finding the right services:
    • We think one of the reasons for our relatively lower scores in the focus university space is that we never fully figured out the best services to provide to group leaders within our focus university program.
    • It was tricky to figure out what group leaders “really wanted”. For example, our Campus Specialist program, which we launched in the fall, had high demand. We built it, in part, because we heard that group leaders wanted more management support and career stability before committing to full-time field-building work. However, when we actually implemented the program in the fall, we found that weekly remote management calls were less effective than we had initially hoped.
    • It was also tricky to find the right team to lead the work. Some members of our focus uni team were interested in approaches that seemed to have a better home outside the CEA groups team (e.g. direct AI safety field building, on-the-ground entrepreneurship at campus centres, mentorship matching at EA events). It seems plausible that support for advanced uni groups should be run by someone who has run a really successful EA Campus Centre – and those leaders are still emerging.

 

  • Other actors: 
    • In terms of providing funding for focus unis – we think that Open Philanthropy has a comparative advantage in funding within this space, especially if the goal is to help some EA groups transform into significant meta organizations (e.g. EA Cambridge or SERI) and to encourage them to be entrepreneurial and autonomous. Open Phil has a strong track record of thoughtfully disbursing funding, offering funding to help individuals increase their ambitions, and helping to seed organizations.
    • In terms of providing support for group leaders for focus unis, we’re excited about the programs being run by the Global Challenges Project, as well as possible programs for university groups from Lightcone, EA Cambridge, and Stanford EA.

 

  • Concerns about crowding out:  
    • A final concern about CEA trying to cover the entire groups space is that we think this makes it seem like we “own” the space – a perception that might discourage others from taking experimental approaches. We think there’s some evidence that we crowded out others from experimenting in the focus uni space this year.
    • Looking forward, we hope Open Phil’s fellowships will continue to encourage individuals to fill gaps in the space – or even start something that directly competes with existing projects. If you’re interested in experimenting in focus uni spaces (or other spaces we’re currently running programs in) we’d love to chat with you about opportunities, potential collaborations, lessons learned, or areas where we think we can improve. (groups@centreforeffectivealtruism.org; joan.gass@centreforeffectivealtruism.org)



Lessons learned as I reflect on the last year:  

  1. I think I tried to provide services for too many different types of group leaders at the same time: focus university groups, early-stage university groups, and city/national groups. While I’m proud that we identified these different clusters of groups (as I do think they have different needs), trying to serve all of them at the same time meant that I was spending a significant amount of time hiring. This meant that I didn’t spend as much on-the-ground time at focus universities as I think was needed to develop excellent products. I’ve updated on the importance of providing great services to one set of group leaders before expanding to additional group leaders.
     
  2. I didn’t generate enough slack for our team for experimentation. Demand for basic support services at focus universities more than tripled over the last year (e.g. funding applications, calls, number of organizers wanting to attend retreats). This meant that our team was growing just to keep up with services that group leaders expected to receive from us, stretching our team to capacity. This left little time for reflection, experimentation, and pivoting. I think CEA’s brand makes it difficult to drop things – which is in part why I’m proud of the decision we’re sharing in this announcement. In the future I think it will be important to create more slack for the team to experiment and pivot.
     
  3. Finally, I think we tried to build services for group leaders that had long feedback loops (e.g. hiring for Campus Centres is a 6-month process, and developing and designing metrics for groups takes at least a semester to see if a metric is helpful, plus lots of time communicating). We could have tested these services faster, conducted group leader interviews to shorten these feedback loops, and potentially chosen to provide services with quicker feedback. Since the start of this year we’ve done a user sprint together as a group and incorporated more “minimum viable product” development and user interviews into our work process. I’ve also enjoyed reading some resources that focus on this approach, such as Inspired and Sprint.

 

Throughout the process, I’ve been very grateful for the CEA focus university groups team. While I think we made mistakes, I think we also learned a significant amount and delivered a lot of value. For example, we helped to seed EA groups at several focus universities and supported other universities during critical leadership transitions. In the last ~half of 2021 we evaluated over 50 funding applications, responded to 200 requests for calls, and had 125 attendees at our retreats. We think we were counterfactually responsible for at least 10 new organizers working part-time at top universities. We helped make Campus Centres a real career option, and set out a framework for focus university groups that a number of people now use.

I’m particularly grateful for the hard work and thoughtfulness of the CEA focus uni groups team – Huw, Will, Alex, and Kuhan – in evaluating this transition. I think the process of deciding to hand off part of the space embodies some CEA cultural values that I deeply admire: alliance mentality (wanting to see the work done well, not necessarily getting credit for it ourselves), perpetual beta (updating based on evidence), and purpose first (putting what’s best for the EA community above our own personal interests).


If you’re an organizer and have questions about what this change means for you, we encourage you to check out our updated Groups Support page here, email us at groups@centreforeffectivealtruism.org, or ask us questions below.



 

  1. ^

    Brown University, California Institute of Technology (Caltech), Columbia University, Georgetown University, Harvard University, London School of Economics and Political Science (LSE), Massachusetts Institute of Technology (MIT), Oxford University, Princeton University, Stanford University, Swarthmore College, University of California, Berkeley, University of Cambridge, University of Chicago, University of Hong Kong, University of Pennsylvania, and Yale University. These focus universities were chosen primarily based on their track records of having highly influential graduates (e.g. Nobel prize winners, politicians, major philanthropists). We also place some weight on university rankings, universities in regions with rapidly-growing global influence, the track record of a university’s existing group, and the quality of that group’s current plans.

Comments

Joan:
Appendix: 

Jan 2022 survey of Oxford/Cambridge/Stanford organizers 

We surveyed some full-time group organizers on how valuable they’d found various aspects of CEA support versus support from non-CEA people (GCP, Lightcone, Buck Shlegeris (EAIF), Claire Zabel (Open Phil), EAIF, Stanford residencies). We gave them the option to be anonymous.

We split this up into 13 types of CEA support (UK group leaders retreat, US retreat, calls, etc.), and 8 types of non-CEA support. They rated things on a 1-7 scale, based on how useful they found them. 

Ignoring N/As, CEA activities got an average score of 4.2/7. Non-CEA activities got an average score of 5.1/7. Summing up scores (which doesn’t have a clean interpretation), CEA totaled 246 points and non-CEA people (GCP, Icecone (a winter retreat hosted by Lightcone), Stanford team, Cambridge’s online course) totaled 201 points.** This may indicate that CEA is providing a wider breadth of less intensely valued services. On the other hand, we asked more detailed questions about CEA’s services, so the whole ‘total number’ could be biased upwards.
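To make the scoring concrete, here is a minimal sketch of the calculation described above (in Python, with made-up ratings rather than the actual survey data): each respondent rates each service from 1-7 or marks it N/A; per-service averages ignore N/As, and the "total points" figure simply sums every non-N/A rating a provider's services received.

```python
# Minimal illustrative sketch of the scoring described above.
# The ratings below are made up (None = N/A), not the actual survey data.
ratings = {
    "CEA 1:1s": [4, 3, None, 5],
    "CEA retreat": [5, 4, 4, None],
    "GCP 1:1s": [5, None, 4, 5],
}

def average_ignoring_na(scores):
    """Mean of the non-N/A ratings for a single service."""
    valid = [s for s in scores if s is not None]
    return sum(valid) / len(valid)

def total_points(score_lists):
    """Sum of every non-N/A rating across a set of services.
    As noted above, this total has no clean interpretation."""
    return sum(s for scores in score_lists for s in scores if s is not None)

for service, scores in ratings.items():
    print(f"{service}: {average_ignoring_na(scores):.1f}/7")

cea = [scores for service, scores in ratings.items() if service.startswith("CEA")]
print("CEA total points:", total_points(cea))
```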

Looking in more detail at scores, it seems that support calls with CEA staff members were less useful than support calls from non-CEA staff members, retreats were generally more useful, and various forms of funding were quite useful. Different leaders found quite different things useful.

Some more direct comparisons:

  • 1:1s: 
    • CEA 1:1s were rated 3.8/7
    • CEA in-person campus visits were rated 4.3/7
    • GCP 1:1s were rated 4.7/7
    • 1:1s with others (e.g. Claire, Buck) were rated 5/7
  • Retreats/events:
    • CEA’s summer retreat and EAG London retreats averaged 4.3/7
    • Icecone averaged 4.9/7
    • GCP’s summer residency averaged 5.0/7
    • Stanford’s residencies were 5.5/7
  • Funding:
    • CEA’s revised expense policy/Soldo cards were rated 4.1/7
    • CEA’s Funding for Campus Specialist Interns was rated 5.0/7
    • EAIF funding was 5.8/7
  • Other resources:
    • CEA’s remote community building fellowship was 3.0/7
    • GCP’s handbook was rated 4.3/7
    • (CEA) Lifelabs management calls were 4.4/7
    • GCP’s advice on how to do 1:1s was rated 4.5/7
    • Cambridge’s online cause specific programs were rated 6.0/7

 

Overall, this suggests that others provided more targeted, useful support. I think the scores suggest that CEA did provide some meaningful value to these group leaders, but that it might be better to cede this space to others who have the interest and capacity to take it on.

** Notes on interpreting this: I think we split CEA activities up in a more fine-grained way, which may have biased scores for individual activities downwards. I also think that some of these activities (e.g. UK/US retreats) were not aimed at these organizers, but at getting less involved organizers more excited. Also, it might be fine to have low average scores across a lot of offerings, e.g. if the things you’re providing are really useful to some organizers but useless (and easy to ignore) for others.


 

 

Summary: CEA support for earlier-stage focus university group organizers

We surveyed attendees of our January Groups Coordination Summit, both on that particular event and on what support had been more generally useful to them.

Key figures:

  • Participant retreat average: 7.9/10
  • % saying their plans for the next 6 months are better: 88%
  • CEA support average (overall): 6.4/10

Ignoring N/As, a similar gap remains. CEA activities got an average score of 4.8/7. Non-CEA activities got an average score of 5.4/7. The average scores are higher overall – this may indicate that earlier-stage groups can be helped more intensively by outside support.

Summing up scores (which doesn’t have a clean interpretation), CEA totaled 297 points and non-CEA people (GCP, Icecone, Stanford team, Cambridge’s online course) totaled 345 points.

Some more direct comparisons:

  • 1:1s: 
    • CEA 1:1s were rated 4.0/7
    • GCP 1:1s were rated 5.0/7
    • Calls with others (e.g. Claire, Buck) were rated 5.0/7
  • Retreats/events:
    • CEA’s summer retreats and EAG London retreats averaged 4.9/7
    • Icecone averaged 5.7/7
    • Stanford’s residencies were 6.0/7
    • GCP’s summer residency averaged 6.3/7
  • Funding:
    • CEA’s revised expense policy/Soldo cards were rated 5.1/7
    • CEA’s Funding for Campus Specialist Interns was rated 4.8/7
    • EAIF funding was 5.9/7
  • Other resources:
    • GCP’s handbook was rated 4.4/7
    • GCP’s advice on how to do 1:1s was rated 4.8/7
    • (CEA) Lifelabs management calls were 5.0/7
    • CEA’s remote community building fellowship was 5.3/7
    • (CEA) University Group Accelerator Program (UGAP) was rated 5.3/7
    • Cambridge’s online cause specific programs were rated 5.8/7

For this group, retreats/events seem better when longer and/or focused on a narrow project (Icecone, Summer residency, Stanford residency) compared to our shorter retreats.

Thanks for sharing this data. Would it be possible to share the wording of a sample question, e.g. for 1:1s, and how the scoring scale was introduced?
