
You can read the full job description here.

Application deadline: December 29th. Applications will be processed on a rolling basis, so early applications are encouraged.

Expected hours: Full-time (40 hours per week) or part-time (at least 6 hours per week)

Location: Remote, with the option to co-work with other EA organizations at the Trojan House in Oxford

Compensation: $60 per hour for part-time staff, between $60,000 and $100,000 (based on experience) for full-time staff. For full-time employees, we offer generous benefits, described below.

Start date: As soon as possible, ideally February or early March 2025

Apply now

If you know of anyone who might be a good fit for this role, please forward this to them and encourage them to apply. 

If you have any questions, do not hesitate to reach out to the fund chair, Karolina, at karolina@effectivealtruismfunds.org

We are also seeking expressions of interest in the role of Fund Development Officer/Manager/Director. You can read more about this here.

About EA AWF

The AWF's mission is to alleviate the suffering of non-human animals globally through effective grantmaking.

Our grants portfolio prioritizes interventions that can collectively have the highest impact and help the greatest number of animals. Thus, we support projects focused on:

  • Reducing suffering and improving the lives of animals in factory farms
  • Bringing factory farming to an end
  • Positively affecting other groups of animals on a large scale
  • Supporting these goals by researching and piloting novel approaches and interventions

Since its founding in 2017, AWF has emerged as a major funder in effective animal advocacy (EAA), distributing $23.3M across 347 grants and contributing to building a more robust and diverse funding ecosystem. 2024 brought many positive changes and improvements to the fund, and we have ambitious plans for the future. You can read our 2024 review here.

Why join us?

  • Joining the AWF as a fund manager offers a high-impact opportunity to alleviate animal suffering on a global scale. AWF is dedicated to transforming animal welfare by deploying funds strategically to the most promising and effective interventions across a range of neglected and emerging areas, such as factory farming and invertebrate and wild animal welfare.
  • As a fund manager, you would be directly involved in identifying and evaluating grant opportunities and deciding how millions of dollars should be spent to advance AWF’s mission to maximize positive impact.
  • This role includes an intellectually stimulating process of assessing projects for the strength of their theory of change, scale of counterfactual impact, cost-effectiveness, and more.
  • You will be able to increase your knowledge across all areas that AWF’s grantmaking supports, such as policy advocacy, corporate reform, emerging welfare initiatives, movement building, and other strategies, as well as across various species such as chickens, wild animals, invertebrates, fish, and more.
  • You will also contribute to the broader ecosystem: By sharing your reasoning through communications with applicants and public payout reports, you will indirectly contribute to the culture and epistemics of the EA and effective animal advocacy (EAA) communities. By providing feedback, you will help existing projects improve. In the longer term, your work will help develop the capacity to allocate a potentially much greater volume of funding each year. Along the way, you will interact with other intellectually curious, experienced, and welcoming fund managers, all of whom share a profound drive to make the biggest difference they can.
  • You will be offered a competitive salary and generous benefits designed to best support you and your work.

Responsibilities

We are interested in experienced grantmakers, professionals working in various roles at animal advocacy organizations (such as campaign managers, policy experts, corporate outreach specialists, or program coordinators), and researchers, as well as junior applicants who are looking to build experience in grantmaking.

As a fund manager, your primary goal will be to source, investigate, and make decisions about grant opportunities, while also having the ability to contribute to other areas of AWF’s work.

  • Your core responsibilities will include:
    • Investigating and evaluating grants assigned to you
    • Reviewing and engaging with other fund managers’ grant recommendations
    • Voting on grant recommendations (each fund manager can vote on each grant)
    • Sourcing high-quality applications based on your ideas and fund strategy (‘active grantmaking’)
    • Communicating your thinking to the community in writing, e.g., feedback to grantees, grant reports, EA Forum posts and comments
  • Fund managers who wish to can also contribute beyond these responsibilities to wider priorities of the fund, such as:
    • Providing input on the overall strategic direction of the fund
    • Conducting monitoring, evaluation, and learning based on previous grants
    • Improving grant evaluations and other processes
    • Broader responsibilities in communicating about our work and grantmaking
    • And more.

If you are interested in the fund manager role, please apply here.

If you know of anyone who might be a good fit for this role, please forward this document to them and encourage them to apply. If you have any questions, do not hesitate to reach out to Karolina at karolina@effectivealtruismfunds.org.

Applications are open until December 29th.

We look forward to hearing from you!

 

Comments1


Sorted by Click to highlight new comments since:

Exciting opportunity.

Curated and popular this week
Sam Anschell
 ·  · 6m read
 · 
*Disclaimer* I am writing this post in a personal capacity; the opinions I express are my own and do not represent my employer. I think that more people and orgs (especially nonprofits) should consider negotiating the cost of sizable expenses. In my experience, there is usually nothing to lose by respectfully asking to pay less, and doing so can sometimes save thousands or tens of thousands of dollars per hour. This is because negotiating doesn’t take very much time[1], savings can persist across multiple years, and counterparties can be surprisingly generous with discounts. Here are a few examples of expenses that may be negotiable: For organizations * Software or news subscriptions * Of 35 corporate software and news providers I’ve negotiated with, 30 have been willing to provide discounts. These discounts range from 10% to 80%, with an average of around 40%. * Leases * A friend was able to negotiate a 22% reduction in the price per square foot on a corporate lease and secured a couple months of free rent. This led to >$480,000 in savings for their nonprofit. Other negotiable parameters include: * Square footage counted towards rent costs * Lease length * A tenant improvement allowance * Certain physical goods (e.g., smart TVs) * Buying in bulk can be a great lever for negotiating smaller items like covid tests, and can reduce costs by 50% or more. * Event/retreat venues (both venue price and smaller items like food and AV) * Hotel blocks * A quick email with the rates of comparable but more affordable hotel blocks can often save ~10%. * Professional service contracts with large for-profit firms (e.g., IT contracts, office internet coverage) * Insurance premiums (though I am less confident that this is negotiable) For many products and services, a nonprofit can qualify for a discount simply by providing their IRS determination letter or getting verified on platforms like TechSoup. In my experience, most vendors and companies
 ·  · 4m read
 · 
Forethought[1] is a new AI macrostrategy research group cofounded by Max Dalton, Will MacAskill, Tom Davidson, and Amrit Sidhu-Brar. We are trying to figure out how to navigate the (potentially rapid) transition to a world with superintelligent AI systems. We aim to tackle the most important questions we can find, unrestricted by the current Overton window. More details on our website. Why we exist We think that AGI might come soon (say, modal timelines to mostly-automated AI R&D in the next 2-8 years), and might significantly accelerate technological progress, leading to many different challenges. We don’t yet have a good understanding of what this change might look like or how to navigate it. Society is not prepared. Moreover, we want the world to not just avoid catastrophe: we want to reach a really great future. We think about what this might be like (incorporating moral uncertainty), and what we can do, now, to build towards a good future. Like all projects, this started out with a plethora of Google docs. We ran a series of seminars to explore the ideas further, and that cascaded into an organization. This area of work feels to us like the early days of EA: we’re exploring unusual, neglected ideas, and finding research progress surprisingly tractable. And while we start out with (literally) galaxy-brained schemes, they often ground out into fairly specific and concrete ideas about what should happen next. Of course, we’re bringing principles like scope sensitivity, impartiality, etc to our thinking, and we think that these issues urgently need more morally dedicated and thoughtful people working on them. Research Research agendas We are currently pursuing the following perspectives: * Preparing for the intelligence explosion: If AI drives explosive growth there will be an enormous number of challenges we have to face. In addition to misalignment risk and biorisk, this potentially includes: how to govern the development of new weapons of mass destr
Dr Kassim
 ·  · 4m read
 · 
Hey everyone, I’ve been going through the EA Introductory Program, and I have to admit some of these ideas make sense, but others leave me with more questions than answers. I’m trying to wrap my head around certain core EA principles, and the more I think about them, the more I wonder: Am I misunderstanding, or are there blind spots in EA’s approach? I’d really love to hear what others think. Maybe you can help me clarify some of my doubts. Or maybe you share the same reservations? Let’s talk. Cause Prioritization. Does It Ignore Political and Social Reality? EA focuses on doing the most good per dollar, which makes sense in theory. But does it hold up when you apply it to real world contexts especially in countries like Uganda? Take malaria prevention. It’s a top EA cause because it’s highly cost effective $5,000 can save a life through bed nets (GiveWell, 2023). But what happens when government corruption or instability disrupts these programs? The Global Fund scandal in Uganda saw $1.6 million in malaria aid mismanaged (Global Fund Audit Report, 2016). If money isn’t reaching the people it’s meant to help, is it really the best use of resources? And what about leadership changes? Policies shift unpredictably here. A national animal welfare initiative I supported lost momentum when political priorities changed. How does EA factor in these uncertainties when prioritizing causes? It feels like EA assumes a stable world where money always achieves the intended impact. But what if that’s not the world we live in? Long termism. A Luxury When the Present Is in Crisis? I get why long termists argue that future people matter. But should we really prioritize them over people suffering today? Long termism tells us that existential risks like AI could wipe out trillions of future lives. But in Uganda, we’re losing lives now—1,500+ die from rabies annually (WHO, 2021), and 41% of children suffer from stunting due to malnutrition (UNICEF, 2022). These are preventable d