
These monthly posts originated as the "Updates" section of the EA Newsletter. Organizations submit their own updates, which we edit for clarity.

Job listings that these organizations highlighted (as well as a couple of other impactful jobs) are at the top of this post. Some of the jobs have extremely pressing deadlines. 

You can see previous updates on the “EA Organization Updates (monthly series)” topic page, or in our repository of past newsletters. Note that there’s also an “org update” tag, where you can find more news and updates that are not part of this consolidated series.

The organizations are in alphabetical order.


Note also: Why you’re not hearing as much from EA orgs as you’d like.

Announcements

Job listings

Consider also exploring jobs listed on “Job listing (open).”

Animal Advocacy Careers:

Clinton Health Access Initiative:

Effective Institutions Project: 

Effective Ventures Operations:

Epoch

Fish Welfare Initiative:

Family Empowerment Media 

Founders Pledge:

GiveDirectly

GiveWell:

Global Priorities Institute:

IDinsight:

Longview Philanthropy:

Open Philanthropy:

Probably Good:

Multiple positions in operations, growth, and community management (Remote)

Organizational updates

These are in alphabetical order.

80,000 Hours

This month, 80,000 Hours launched their updated job board, which now has a search function, improved job filtering, and an email alert system. 

They also released Anonymous advice: If you want to reduce AI risk, should you take roles that advance AI capabilities? and shared seven new problem profiles.

The web team also updated a number of existing pages.

Meanwhile, on The 80,000 Hours Podcast, Rob Wiblin interviewed Alan Hájek on puzzles and paradoxes in probability and expected value, and Bear Braumoeller on the case that war isn't in decline.

Anima International

At the end of September, Anima International’s team in Denmark announced that after years of campaigning and discussions, the country has agreed to ban the production of cage eggs. The victory has gained significant media attention. Anima International has worked on this for a number of years and recently handed in more than 50,000 signatures to the Minister of Agriculture. The phase-out period is set at 12 years, but Anima International will continue to push for a shorter deadline. The latest figures indicate that half a million caged hens are reared in Denmark, accounting for about 17% of total egg production, down from around 61% in 2010.

Animal Charity Evaluators

Announcing a New Tool for Charities: Decision-Making Framework

ACE has a new Decision-Making Framework in their Tools for Charities. This framework was developed and used internally at ACE over the past year. ACE decided to share it because it has been useful for decision-making at ACE and because several external stakeholders have expressed interest in using it at their organizations.

View the framework here

Centre for the Study of Existential Risk (CSER)

Charity Entrepreneurship

Charity Entrepreneurship has just finished an application round for its February-March 2023 Incubation Program. The organization plans to start five new charities in the area of global health policy and animal welfare (more details in this post). The next round of applications is planned for March 2023 and will focus on large-scale global health projects and biosecurity interventions. CE team members will be present at various EAGs in the upcoming months, so we encourage you to attend their talks, office hours, and book one-on-ones at EAGxBerkeley, EAGxIndia, EAG Bay Area, and EAGxNordics.

If you’ve decided not to apply to the CE Incubation Program, for any reason, CE would like your feedback. Perhaps it was financial considerations, imposter syndrome, or something else? Telling CE could help improve the program. Please share as much or as little as you’d like in this very short form.

Effective Institutions Project

Effective Institutions Project has launched the EIP Innovation Fund, a regranting and prize program supporting promising efforts and research aimed at smarter and more benevolent institutions around the world. EIP expects to have an initial slate of grantmaking opportunities to support and recommend to its broader network of funding partners by early 2023.

EIP has recruited Philip Reiner and Loren DeJonge Schulman, and Tim Hwang and Shahar Avin, to lead deep dives on the US National Security Council (focusing on catastrophic risk reduction and long-term planning) and Google / Alphabet (focusing on AI safety), respectively.

EIP has added several advisors in recent months: Lewis Bollard, who leads Open Philanthropy's grantmaking on farm animal welfare; Dan Perez, director of the North America practice at SRI Executive, a search firm that specializes in leadership placements at important institutions; Jaan Tallinn, a philanthropist, investor, and thought leader on transformative AI and existential risks to humanity; Paul Timmers, former director of digital society, trust, and cybersecurity at the European Commission; and Nicole Tisdale, former US National Security Council legislative director and director of intelligence and counterterrorism at the US House of Representatives Committee on Homeland Security. EIP also welcomes two new research fellows for the fall semester, Martin Trouilloud and Rio Popper.

Family Empowerment Media 

Family Empowerment Media (FEM) is scaling its evidence-driven family planning radio campaigns across cost-effective intervention regions in Nigeria. The organization aims to reach 30 million listeners by 2027. FEM has recently formed three new implementing partnerships in Anambra, Ondo, and Kogi States. It has developed relationships with key stakeholders in the regions, conducted listener research, and will soon air proof-of-concept campaigns. FEM is also reaching 5.5 million listeners through a nine-month-long campaign in Kano State. In addition, in partnership with AIDE and a top AI research institute, FEM is building an AI-based WhatsApp service to answer follow-up questions about its campaigns.

Faunalytics 

Faunalytics’ newest report looks at Chinese consumers’ attitudes towards farmed animal welfare. In this project, they partnered with The Good Growth Co. to conduct focus groups with the Chinese public, learning their perspectives on meat consumption and the concept of farmed animal welfare, and identifying types of messaging and strategies for encouraging movement growth.

The organization has also updated its research library with articles on topics including the current state of cellular agriculture research and the outdated reliance on animal data. Faunalytics was also recommended by Giving What We Can in a recent YouTube video.

Fish Welfare Initiative

Fish Welfare Initiative continues their farmer, corporate, and policy work (see new post) to reduce the suffering of farmed fish in India. Their Executive Director, Haven, also recently gave a talk at the Animal and Vegan Advocacy Summit about how other organizations can also help fish (video to be posted online later).

GiveDirectly

  • GiveDirectly has launched two of the world’s largest basic income projects: one in Malawi and another in Liberia. These are in addition to their long-running UBI study in Kenya, which published mid-line results in 2020. 
  • In the past two years, GiveDirectly has delivered $160M+ annually to people in poverty and has the capacity to deliver much more. However, amid inflation and the stock market downturn, they’ve received fewer donations this year than in the previous two. To ensure current programs are fully funded for next year, readers are encouraged to give before the end of the year (deadlines here).
  • GiveDirectly has published the following new materials:
    • Blog: “Why we work in the United States”
    • Report: “Inflation is worst for people in extreme poverty”
    • Survey results: “We asked people in poverty how they prefer to receive money”

GiveWell

  • As the end of the year approaches, GiveWell wants to remind its donors that it now has three distinct giving funds: the Top Charities Fund (allocated to GiveWell top charities), the All Grants Fund (allocated to any grant that meets GiveWell's cost-effectiveness bar), and the unrestricted fund (supporting GiveWell's operations). This year, GiveWell expects to be underfunded, meaning it will identify more highly cost-effective grant opportunities than it can fund. Supporters are encouraged to donate if they haven't before, or increase their recurring donation if they can, to ensure that these impactful programs remain fully funded.
  • GiveWell also wants to remind its followers that UK-based donors can now give directly to GiveWell and be eligible for a tax benefit through GiveWell UK, a Charitable Incorporated Organisation. 
  • GiveWell has published the following new research materials:
    • A report summarizing the evidence for malaria vaccines, focusing on the RTS,S/AS01 vaccine.
    • A page on a grant of up to $64.7 million that GiveWell recommended to Evidence Action's Dispensers for Safe Water program in January 2022. More about what led GiveWell to recommend the grant here.
    • A page on a grant of $10.4 million that GiveWell recommended to the Clinton Health Access Initiative (CHAI) for its Incubator program, which scopes and scales cost-effective interventions that haven't yet been widely implemented.
    • A page on a grant of $1.4 million that GiveWell recommended to researchers at the University of California–Berkeley to support a follow-up of a randomized controlled trial of GiveDirectly's program in Kenya.
    • A page on a grant of $15 million that GiveWell recommended to the RESET Alcohol Initiative, a consortium of organizations advocating for policies that reduce the harms of excess alcohol consumption in low- and middle-income countries.

Giving Green

Giving Green is launching its updated recommendations for effective climate giving later this month, and encourages anyone interested to sign up for the newsletter to receive the announcement.

Giving Green’s Dan Stein published a Center for Effective Philanthropy blog about climate philanthropy in the new US policy landscape.

For businesses interested in climate action, Giving Green announced its recommendation of Frontier, a private-sector-led advance market commitment intended to support and accelerate the development and deployment of carbon removal technologies.

Global Catastrophic Risk Institute

GCRI Executive Director Seth D. Baum recently published the article "Assessing natural global catastrophic risks" which discusses how natural hazards, such as volcanic eruptions, asteroid strikes, and climate change, pose significant threats to human civilization.

GCRI Research Associate Andrea Owe, Executive Director Seth D. Baum, and University of Vienna's Prof. Mark Coekelbergh recently published the article "Nonhuman value: A survey of the intrinsic valuation of natural and artificial nonhuman entities". The article discusses how natural nonhuman entities, such as ecosystems or nonhuman animals, and artificial nonhuman entities, such as art or technology, might hold inherent value.

Happier Lives Institute

HLI’s entry to GiveWell’s Change Our Mind Contest raised twelve critiques of GiveWell’s cost-effectiveness analyses that substantially alter the results. Ten apply to specific inputs for malaria prevention, cash transfers, and deworming; two are relevant to more than one intervention. These were shallow explorations, but on the basis of them HLI estimates that malaria prevention is 30% less effective, cash transfers are 40% less effective, and deworming is 20% more effective than GiveWell previously thought.

In two complementary papers, Michael Plant (HLI Director) and Harry R. Lloyd (Summer Research Fellow) advocate for a novel ‘property rights’ approach to moral uncertainty: that we should divide our resources between the moral theories we have credence in and allow each theory to use its resources as it sees fit. 

Legal Priorities Project

LPP published two new working papers.

Recent talks:

Cullen O’Keefe gave a talk at Harvard Law School Effective Altruism on “Mitigating Extreme AI Risks: How Lawyers Could Help”. You can find the video recording here.

John Bliss also gave a talk at HLS EA on “Existential Advocacy”.

LPP is still actively receiving expressions of interest, particularly from people looking to contribute to its community building and outreach projects, as well as people interested in operations roles.

One for the World 

One for the World is hosting an hour-long Giving Tuesday party on November 29th. They will be discussing One for the World’s impact this year, sharing their favorite giving stories, and altogether celebrating the giving season! You can stay for the whole party or just drop in for a bit. RSVP via the LinkedIn event here.

One for the World has launched a monthly newsletter, The Philanthropist Monthly, which will include updates about the organization, its Nonprofit Partners, and the effective giving space. Sign up for the newsletter here. (Scroll to the bottom!)

One for the World’s Chapter Leaders are well into the semester now, organizing and fundraising on campus. Columbia University has held the #1 spot for number of pledges for a few weeks, but the University of Pennsylvania undergraduate chapter is close behind.

Open Philanthropy

Open Philanthropy senior research analyst Ajeya Cotra was named one of Vox’s “Future Perfect 50” in recognition of her work on forecasting when transformative artificial intelligence might arrive.

Open Philanthropy announced the 2022 cohort of its Century Fellowship. The Century Fellowship is a two-year program that supports promising and ambitious early-career individuals who want to work on challenges the world may face this century that could have a lasting and significant impact on the long-term future.

The Quantified Uncertainty Research Institute (QURI)

Rethink Priorities (RP)

  • Senior Research Manager William McAuliffe and collaborator Adam Shriver (Ph.D., Philosophy) investigated the relative importance of the severity and duration of pain for moral decisions as well as any potential implications for when pain occurs in an individual's life.
  • Co-CEO Peter Wildeford released “Squigglepy,” an unofficial new Python package for Squiggle, which may be useful for probabilistic estimations and utility functions for Bayesian networks.
  • Peter also spoke about forecasting “the things that matter” with Spencer Greenberg on the Clearer Thinking podcast.
  • Tapinder Sidhu completed her research fellowship, which culminated in a post exploring the idea of using artificial intelligence (machine vision) to increase the effectiveness of human-wildlife conflict mitigations to benefit wild animal welfare. 
  • Senior Research Manager Bob Fischer is in the midst of publishing a sequence on the Moral Weight Project, which compares the welfare capacities of different species with implications for allocations between different animals as well as between humans and nonhuman animals. 
  • Researcher Jenny Kudymowa and Research Analyst Bruce Tsai published their report (commissioned by Open Philanthropy) on the effectiveness of prizes in spurring innovation.
  • Senior Researcher Ben Snodin and Research Assistant Marie Davidsen Buhl made a database of resources relevant to nanotechnology strategy research. 
  • Senior Researcher Neil Dullaghan’s new research on slaughterhouse bans reassesses the US public’s levels of support for radical action against factory farming in the name of animal welfare.
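The probabilistic estimation that tools like Squigglepy support can be illustrated with a generic Monte Carlo sketch in plain Python. This is only an illustration of the general technique, not Squigglepy's actual API, and the quantities and numbers below are hypothetical:

```python
import random
import statistics

def estimate_people_reached(n_samples=100_000, seed=0):
    """Monte Carlo estimate of people reached by a hypothetical program.

    Both inputs are uncertain, so we draw them from (illustrative)
    uniform distributions and propagate the uncertainty by sampling.
    """
    random.seed(seed)
    samples = []
    for _ in range(n_samples):
        # Hypothetical inputs: cost per person between $0.50 and $2.00,
        # budget around $100k.
        cost_per_person = random.uniform(0.5, 2.0)
        budget = random.uniform(90_000, 110_000)
        samples.append(budget / cost_per_person)
    mean = statistics.mean(samples)
    deciles = statistics.quantiles(samples, n=10)  # 9 cut points
    return mean, deciles
```

Because the output distribution is skewed (dividing by an uncertain cost produces a heavy right tail), the mean sits above the median, which is exactly the kind of subtlety that sampling-based estimation surfaces and point estimates hide.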

Training for Good

TFG released an update on their progress so far and their plans for 2023. They now focus exclusively on programmes that enable talented and altruistic early-career professionals to directly enter the first stage of high-impact careers. Concretely, this means they will only run the following programmes from September 2022 to August 2023:

  1. EU Tech Policy Fellowship 
  2. Tarbell Fellowship (journalism)
  3. *An unannounced 3rd programme which is still under development*

TFG also opened applications for the EU Tech Policy Fellowship 2023. This is an 8-month programme to catapult ambitious graduates into high-impact career paths in EU policy, mainly working on the topic of AI Governance. Those interested can apply here by December 11.
