We’re featuring some opportunities and job listings at the top of this post. Some have (very) pressing deadlines.
- You can see previous updates on the "EA Organization Updates (monthly series)" topic page, or in our repository of past newsletters. Note that there's also an "org update" tag, where you can find more news and updates that are not part of this consolidated series.
- These monthly posts originated as the "Updates" section of the EA Newsletter. Organizations submit their own updates, which we edit for clarity.
- (If you think your organization should be getting emails about adding its updates to this series, please apply here.)
Opportunities and jobs
Fellowships and workshops
- The Center on Long-Term Risk will be running its second-ever Intro Fellowship on risks of astronomical suffering (s-risks), intended to help effective altruists learn more about which s-risks CLR considers most important and how to reduce them. Apply here by December 7, 2023.
- Rethink Priorities will hold a virtual workshop on how to use its cause prioritization tool on Giving Tuesday, November 28, at 9 am PT. Please complete this form to indicate your interest and receive further information.
- During GiveWell’s fall virtual event, economist and author Emily Oster will moderate a panel on maternal and newborn health featuring Svetha Janumpalli, founder and CEO of New Incentives, and Erin Crossett, a senior researcher at GiveWell. Elie Hassenfeld, GiveWell’s CEO, will also join to answer audience questions. The event will take place on November 29 at 1 pm ET via Zoom; you can register to join here.
- Campaign Manager for the broiler chicken team (UK/remote, £36.9K, apply by 30 November)
- Corporate Relations Manager for the broiler chicken team (UK/remote, £36.9K, apply by 30 November)
- Research Engineer for Interpretability team (Hybrid in SF/London, $280K - $520K)
- Research Engineer for Responsible Scaling Policy Evaluations (Hybrid in SF/London, $280K - $520K)
- Research Scientist for Frontier Red Team (Hybrid in SF/London, $250K - $450K)
- Research Scientist for Societal Impacts (Hybrid in SF/London, $250K - $375K)
- AI Safety Course Designer (Technical) (London, £55K - £80K, apply by 28 November)
- AI Safety Course Designer (Policy) (London, £55K - £80K, apply by 12 December)
- AI Safety Community Manager (London, £45K - £70K, apply by 5 December)
- Chief of Staff (Oxford, UK/San Francisco, US, £100k/$150k, apply by 17 November)
- Program Manager (Oxford, UK/San Francisco, US, £68k/$96k, apply by 17 November)
Effective Institutions Project
- Research Lead (Tech Industry) (Remote, $100K - $140K + benefits, apply by 19 November)
- Intern (Summer 2024) (Remote, $15/hr with hours negotiable, apply by 10 December)
Future of Life Foundation
- Founder Search and Recruitment Lead (Hybrid/SF Bay Area preferred, $100K+)
- Researcher, General (Hybrid in SF Bay Area)
- Researcher, Specializing in AI Safety (Hybrid in SF Bay Area)
Global Priorities Institute, Oxford University
- Postdoctoral and Senior Research Fellows in Economics (Oxford, UK, £36.6K - £61.1K, apply by 22 November)
- Assistant Director (Economics) (Oxford, UK, £52,815-£61,198, apply by 22 November)
- Various positions across Open Philanthropy’s teams focused on Global Catastrophic Risk (salaries and locations vary; you can apply to any number of these positions using a single form) (apply by 27 November)
Organization updates
80,000 Hours
80,000 Hours released a blog post about new opportunities opening up in AI governance following the AI Safety Summit in the UK and the US executive order on AI safety and security.
On The 80,000 Hours Podcast, Rob interviewed:
- Tantum Collins on what he’s learned as an AI policy insider at the White House, DeepMind, and elsewhere
- Ian Morris on whether deep history says we're heading for an intelligence explosion
- Santosh Harish on how air pollution is responsible for ~12% of global deaths — and how to get that number down
And Luisa interviewed:
- Seren Kell on the research gaps holding back alternative proteins from mass adoption
- Paul Niehaus on whether cash transfers cause economic growth, and on keeping theft to acceptable levels
Anima International
The Polish parliamentary elections yielded promising results: the coalition of opposition parties won a majority of votes. Working together, Anima International, Compassion in World Farming, the Albert Schweitzer Foundation, Eurogroup for Animals, and the Green Rev Institute secured public support for animal welfare asks from all but one of the opposition parties. The next phase will focus on turning these pledges into real changes for animals.
Berkeley Existential Risk Initiative (BERI)
The Berkeley Existential Risk Initiative (BERI) recently added 14 new university researchers and research groups to its collaborations program, up from an average of 6 new collaborations in previous years. Read about their new collaborations here.
Centre for Effective Altruism
CEA announced and opened applications for EA Global: Bay Area (Global Catastrophic Risks) (Feb 2–4) and EA Global: London (May 31–June 2).
The EA Forum is hosting a series of online events during Giving Season 2023, including a donation election, weekly themes, and more.
Julia Wise, in collaboration with Ozzie Gooen and Sam Donald, published a summary of the project on organizational reforms in EA on the EA Forum. As part of this, they published four more posts produced during the project, starting with “Advice about board composition and practices.”
Effective Institutions Project
The Effective Institutions Project (EIP) has been engaging with the philanthropic sector to help funders make sense of AI. To this end, EIP partnered with Schmidt Futures to deliver large-scale town-hall-style sessions for philanthropic advisors on AI. EIP also released a funder’s guide to AI governance and strategy. Ian David Moss spoke on a panel on ‘Opportunities for funding responsible generative AI’ at the Global Philanthropy Forum in San Francisco.
They recently launched The Observatory, a newsletter focused on monitoring and interpreting what the world’s most important institutions are doing.
Finally, Ella McIntosh recently joined the team as Chief of Staff, and EIP has begun building an independent board of directors. The first six board members are Dave Orr, Gaia Dempsey, Andrea Ordóñez, Nadia Gomes, John Abodeely, and Ian David Moss.
Faunalytics
Faunalytics has once again been named an Animal Charity Evaluators (ACE) Recommended Charity.
The organization also added two new blog posts, “Collaborating Successfully: Psychological Scientists And Animal Advocates” and “Roadmap To E.U. Farmed Fish Policy Reform,” to its website. Additionally, Faunalytics has updated its Research Library with articles on a variety of animal advocacy topics, including a look at how many shrimps are killed for food.
Fish Welfare Initiative
Fish Welfare Initiative (FWI) recently co-hosted the World Farm Animal Welfare-Beijing Consensus Meeting in Rome. This FAO-supported event is part of FWI’s efforts to promote the field of fish welfare in China. You can learn more about FWI’s work in China, as well as its project in India, here.
Founders Pledge
This month, FP published its report on global catastrophic biological risks, authored by Senior Researcher Christian Ruhl. Applied Researcher Tom Barnes will be leaving for a three-month secondment to work with the UK government on issues related to artificial intelligence.
FP’s grantmaking has increased significantly this year. In recent months, FP granted approximately $5m to LEEP to support their ongoing work to end childhood lead exposure, as well as $3m to NTI in support of the formation of IBBIS, a new organization working to strengthen biosecurity norms and develop innovative tools to uphold them.
If you have a lead on a promising organization, cause area, or intervention for next year’s grantmaking, feel free to email it directly to FP’s research director, Matt Lerner, at firstname.lastname@example.org
Future of Life Foundation
The Future of Life Foundation (FLF) is a new organization, affiliated with the Future of Life Institute, whose mission is to steer transformative technology toward benefiting life and away from extreme large-scale risks. The FLF aims to recruit, fund, and offer substantial support to founders who show the potential to bring new organizations in this area to fruition.
GiveWell
GiveWell is publishing a multi-part FAQ series on its blog. The first post in the series focuses on cost-effectiveness and why this is generally the most important factor in their recommendations.
Recently, GiveWell recommended a $1.6 million grant to PATH to coordinate a randomized controlled trial measuring the effectiveness of malaria interventions for infants and young children living in areas with perennial malaria transmission.
Recently, GiveWell recommended a $1.8 million grant to the Development Innovation Lab (DIL) at the University of Chicago to conduct research on water chlorination programs in Kenya and develop plans for additional research on chlorination in India and Nigeria.
Giving Green
Giving Green released its 2023-2024 selection of top climate giving strategies and nonprofits here. Using the criteria of scale, feasibility, and neglectedness, we set out to find timely giving opportunities that have huge impact potential but are neglected by traditional climate funding.
We are also running a webinar on November 29 to walk through our year-long research and hear from top climate nonprofits. It's open to the public, and EA-inspired folks are more than welcome to join us here.
Giving What We Can
Giving What We Can has launched a significant redesign of the GWWC homepage, aimed at communicating more clearly what GWWC is about and what its mission and values are, alongside overall design improvements. The redesign highlights the GWWC community, emphasising its collective commitment to effective giving.
As part of the introduction of pledging wealth, Giving What We Can also implemented and launched its new pledge recommendation tool. Integrated into the pledge sign-up flow, this tool helps users select the pledge that best aligns with their income-to-wealth ratio. This 'help me decide' feature, available at step 2 of the pledge sign-up, guides users towards a commitment that suits their financial situation.
Happier Lives Institute
Vox featured the Happier Lives Institute’s (HLI) work in an in-depth article titled “A surprisingly radical proposal: Make people happier — not just wealthier and healthier”. The article highlights HLI’s efforts to prioritise improving happiness over solely increasing wealth and health.
HLI has also recently published three new web pages explaining its methodology for charity evaluations in more detail: the charity evaluation methodology, cost-effectiveness analysis, and quality of evidence pages outline how HLI puts evidence at the core of its charity evaluations and recommendations. We welcome any feedback to continue refining our methods.
The Humane League
The Open Wing Alliance (OWA), The Humane League’s global coalition of animal protection organizations working to end the abuse of chickens raised for food, released their Global Restaurant Report. The report ranks global restaurants on animal welfare and asks: Which companies are following through on their promises? And which are failing animals—along with customers who trusted them?
The Humane League also published a new quarterly progress report, which highlights key updates on the global progress being made for animals raised for food. This quarter's report includes exciting updates from both Yum! Brands—the world’s largest restaurant chain—and Barnes & Noble, the world’s largest bookstore chain.
Legal Priorities Project
LPP’s Head of Strategy Mackenzie Arnold spoke before the US Senate’s bipartisan AI Insight Forum on Privacy and Liability, convened by Senate Majority Leader Chuck Schumer (D-N.Y.). For LPP’s perspective on how Congress can meet the challenges that AI presents to liability law, you can read their written statement here.
Matthijs Maas published three “AI Foundations Reports”:
- “AI is like… A literature review of AI metaphors and why they matter for policy”: This report reviews why and how metaphors matter to both the study and practice of AI governance.
- “Concepts in advanced AI governance: A literature review of key terms and definitions”: This report provides an overview, taxonomy, and preliminary analysis of many cornerstone ideas and concepts within Advanced AI Governance.
- “Advanced AI governance: A literature review of problems, options, and proposals”: This literature review provides an updated overview and taxonomy of research in advanced AI governance.
LPP hosted a Law & AI dinner around EAG Boston, with nearly 60 participants.
Magnify Mentoring
Magnify Mentoring is delighted to announce the launch of its database of jobseekers. If you are a recruiter at an organization working on evidence-based interventions to make the world better, please reach out to Kathryn. The team is also excited and grateful to share that their work has been supported by Open Philanthropy. Results from their fifth round of mentorship can be found here.
New Incentives
New Incentives has enrolled over 1 million infants in 2023—more than all previous years (2017-22) combined. Take a look at the data to see how they’ve encouraged childhood vaccinations with cash incentives and scaled their program through the years.
One for the World
Over the last month, Emma Cameron, One for the World's Director of Chapter Organizing, kicked off a collaboration across effective giving groups with Giving What We Can group leaders in New York, Berlin, London, and Vancouver.
One for the World is supporting these new Giving What We Can groups by sharing its almost ten years of experience running university and corporate chapters. We look forward to seeing what these groups accomplish at their upcoming events during the holiday giving season.
Open Philanthropy
Open Philanthropy launched a hiring round for more than 20 positions across its teams working on Global Catastrophic Risk (all positions close on November 27, 2023). It also announced its plans to allocate $300 million to GiveWell over the next few years and shared results from a project focused on forecasting key questions related to Joseph Carlsmith’s paper “Is Power-Seeking AI an Existential Risk?”.
Rethink Priorities (RP)
Artificial intelligence: The AI Governance and Strategy team has evolved into the Institute for AI Policy and Strategy (IAPS). Their mission is to reduce risks related to the development and deployment of frontier AI systems. IAPS’ work can be found here.
Worldview investigations: The team released Causes and uncertainty: Rethinking value in expectation (CURVE), which investigates two assumptions: (1) that we should prioritize based on what would maximize expected value, and (2) that doing so leads to prioritizing existential risk mitigation. Rather than defend specific prioritizations, the researchers try to clarify fundamental decision-relevant issues and explore alternatives to expected value maximization. The team also created a cross-cause cost-effectiveness model that allows for transparent reasoning about cause prioritization and helps funders navigate uncertainty.