This post is a brief description of CEEALAR's updated Theory of Change.

With an increasingly high calibre of guests, more capacity, and an improved impact management process, we believe that the Centre for Enabling EA Learning & Research is the best it's ever been. As part of a series of posts —see here and here— explaining the value of CEEALAR to potential funders (e.g. you!), we want to briefly describe our updated Theory of Change. We hope readers leave with an understanding of how our activities lead to the impact we want to see.

Our Theory of Change

Our goal is to safeguard the flourishing of humanity by increasing the quantity and quality of dedicated EAs working on reducing global catastrophic risks (GCRs) in areas such as Advanced AI, Biosecurity, and Pandemic Preparedness. We do this by providing a tailor-made environment for promising EAs to rapidly upskill, perform research, and work on charitable entrepreneurial projects. More specifically, we aim to help early-career professionals who 1) Have achievements in other fields but are looking to transition to a career working on reducing GCRs; or 2) Are already working on reducing GCRs and would benefit from our environment.

Eagle-eyed readers will notice we now refer to supporting work “reducing GCRs” rather than simply “high impact work”. We have made this change in our prioritisation as it reflects the current needs of the world and the consequent focus on GCRs by the wider EA movement, as well as the reality of our applicant pool in recent months (>95% of applicants were focused on GCRs).

Our updated theory of change —see below— posits that by providing an environment to such EAs that is highly supportive of their needs, enables increased levels of productivity, and encourages collaboration and networking, we can counterfactually impact their career trajectories and, more generally, help in the prevention of global catastrophic events.

This Theory of Change reflects our belief that there is something broken about the pipeline for both talent and projects in the GCR community, and that programs that simply supply training to early-career EAs are not enough on their own. We fill an important niche because:

  • At just $750 to support a grantee for 1 month, we are particularly cost-effective. For funders, this means reduced risk: you can make a $4,500 investment in a person for six months rather than a $45,000 investment, or use that $45,000 for hits-based giving and invest in ten people rather than one.
  • Since we remove barriers to entering full-time careers in reducing GCRs, the counterfactual impact is high. Indeed, when considering applications we look for prospective grantees who otherwise would not be able to pursue such careers, be that because they currently lack financial security, connections / credentials, or a conducive environment.
  • As grantees do independent research & projects, their work is often cutting-edge. When it comes to preventing global catastrophic events, it is imperative to support ambitious individuals who are motivated to try innovative approaches and further their specific fields.
  • Finally, because CEEALAR only offers time-limited stays (the average stay is ~4-6 months) and prioritises selecting agentic individuals as grantees, our alumni are committed to ensuring their learning translates into action.

This final point is borne out by our alumni, many of whom have gone on to impactful careers (see our website for further details). For example:

  1. Chris Leong, now Principal Organiser for AI Safety Australia and New Zealand (before CEEALAR (BC) he was a graduate likely to take a non-EA corporate role)
  2. Sam Deverett, now an ML Researcher in the MIT Fraenkel Lab and an incoming AI Futures Fellow (BC he was a corporate data scientist)
  3. Derek Foster, previously a Research Analyst at Rethink Priorities (BC he was a master's student)
  4. Hoagy Cunningham, now a Researcher at the Stanford Existential Risks Initiative (BC he was a graduate likely to take a non-EA corporate role)

Your next steps

CEEALAR is just one part of a wider ecosystem. Rather than attempting to solve the pipeline for both talent and projects in the GCR community, we hope to serve a niche but important role in helping early-career professionals thrive.

If you have questions about this Theory of Change or our work more generally, feel free to email us at contact@ceealar.org. Additionally, you can:

  • Donate now! We support PayPal, Ko-Fi, PPF Fiscal Sponsorship, and bank transfer donations.
  • Check out our website and sign up for our mailing list to keep abreast of future updates.
  • Read through our forum posts for this giving season here.
Comments

Thanks for sharing an explicit theory of change. I think you make a convincing case for filling a niche and being more cost-effective than some other interventions in the GCR community-building space (which in my opinion are often unnecessarily flashy). 

Still, as an EA in Global Health who recently enjoyed a stay at CEEALAR, I'm disappointed to see the narrowing of focus to GCRs only. I think you can probably make a compelling case for why CEEALAR can be more effective if it specialises, but am I right in guessing that the change is driven mainly by trustees' assessment that GCRs are just way more important than other cause areas?

Personally I think AI x-risk (and in particular, slowing down AI) is the current top cause area, but I'm also keen on most other EA cause areas, including Global Health (hence the focus on general EA from the start). The update is mainly a reflection of what's been happening on the ground in terms of our applicants and our (potential) funding sources.

My intuition is that the narrowing of CEEALAR is probably the correct choice:
• The major funders in the space seem more likely to fund GCR interventions.
• Insofar as funders are interested in Global Health, they tend to prefer direct interventions like the Against Malaria Foundation; and insofar as people want to kick off new projects, Charity Entrepreneurship provides more specialised support.
• Independently of cause area priorities, focusing the project more narrowly makes it more legible for funders (it is harder to evaluate a project that does a little bit of this and a little bit of that).
• Focusing the project more narrowly also makes it more competitive for high-potential grantees (who want to know there will be other people with the same interests to bounce ideas off).

Maybe consider a name change to Centre for Enabling GCR Learning & Research? In general, I think it's better for orgs that are all-in on a subset of cause areas (that is exclusive of multiple significant cause areas) not to use branding that implies broader coverage.

Executive summary: CEEALAR aims to reduce global catastrophic risks by providing promising early-career effective altruists with a supportive environment to rapidly upskill, perform research, and develop projects.

Key points:

  1. CEEALAR focuses on enabling EAs working on reducing global catastrophic risks like advanced AI and biosecurity.
  2. CEEALAR provides a tailored, cost-effective program to help EAs transition into or accelerate high-impact careers.
  3. Alumni have gone on to impactful roles in GCR organizations after their CEEALAR residencies.
  4. CEEALAR fills an important niche in the GCR talent pipeline by removing barriers and enabling ambitious, cutting-edge independent work.
  5. CEEALAR has a time-limited program and targets agentic individuals, ensuring learning translates to action.
  6. CEEALAR welcomes donations and questions about their theory of change.

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
