
We’re looking for applications to the board of trustees of the Centre for Enabling EA Learning and Research (CEEALAR), informally the EA Hotel. To apply, please fill out this short form. Applications will be assessed on a rolling basis, with a deadline of 18th October.

CEEALAR is a registered charity in England and Wales that supports people in developing high-impact careers by providing free and subsidised accommodation, a productive workspace, and support with networking and learning.

The current trustees of CEEALAR are Greg Colbourn, Florent Berthet, and Sasha Cooper.

Who are we looking for?

We’re particularly looking for people who:

  • Have a good understanding of effective altruism
  • Have a track record of integrity and good judgement, and who more broadly embody these guiding principles of effective altruism
  • (Ideally) have experience in one or more of the following areas:
    • HR, accounting, law, finance, risk management or service management
    • Management, trusteeship, or a similar role in a non-profit
  • Are able to work collaboratively under uncertainty

We think the role will require significant time and attention, though we expect it to vary depending on the needs of the organisation. Trustees should be ready to commit ~2hrs/wk to the role on a regular basis, though they should also be prepared to scale up their involvement from time to time in the case of urgent decisions requiring board response.

We especially encourage applications from individuals with diverse backgrounds and experiences, including people of colour, self-identified women, and non-binary individuals who are excited about contributing to our mission.

The role is remote, though we strongly prefer someone who is able to attend meetings at reasonable UK hours.

The role is unpaid. 

What does a CEEALAR trustee do?

As a member of the board, you have ultimate responsibility for ensuring that the charity fulfils its charitable objectives as best it can. Ideally, most strategic and programmatic decision-making is delegated to the managers or to the Executive Director (ED), with the trustees attending a monthly meeting for a high-level summary and 6-weekly check-ins with the managers to help ensure their welfare and development.

During business as usual times, we expect the primary activities of a trustee to be:

  • Assessing grant applications (from people requesting to stay at CEEALAR)
  • Assessing the performance of managers and hiring managers where appropriate (funding permitting, we intend to hire a paid Executive Director within the next 6 months). 
  • Evaluating and deciding on high-level issues that impact the organisation as a whole.
  • Reviewing budgets and broad strategic plans for the organisation.
  • Evaluating the performance of the board and whether its composition could be improved (e.g. by adding a trustee with underrepresented skills or experiences).

Why should you apply?

CEEALAR has navigated many challenges and has had very little bandwidth to evaluate our impact. Nevertheless, our best guess is that at least 10% of our grantees go on to very high value work, with the majority estimating a substantial counterfactual improvement over where they would have been without us. This is very much in line with the hits-based-giving philosophy the project was founded on, and we expect those proportions to increase over time as we field-test productivity interventions and improve our measurement of impact, raising our cost-effectiveness. We’ve recently hired three highly driven operations managers and feel that, given time and enough support, they could turn the project into something exceptional even within the EA movement.

Yet our future has never been certain: a persistent lack of funding security has limited the bandwidth of our staff, and a lack of time and domain-specific experience has limited that of our trustees.

We think that adding the right trustee would substantially increase our capacity to deal with these problems and improve our resilience to future force majeure events. Your help could therefore shape the project immensely, ensuring that CEEALAR can continue to exist in its current form and fulfil its future potential.

How can you apply? 

If you would like to sit on the CEEALAR board of trustees, please fill out this form. Applications will be assessed on a rolling basis and close on 18th October.

Following the initial application, promising candidates will be invited to interview with current trustees and managers, and we will undertake background and reference checks. We may also add a work test or other additional steps if we feel it’s necessary to select the best candidate, though we don’t currently plan to.
