
There was a lot of great EA-related content and research in 2023; I’ve highlighted content that seems particularly notable. Please add content you think should be in a list like this! I'm also sharing excerpts from every month’s edition of the EA Newsletter, in case it’s helpful for some people who might want to look through a low-resolution timeline of 2023. 

A companion post reviews 2023 news on AI safety, animal welfare, global health, and more. (That post summarizes news; the one you’re reading features content.) 


Requests & other notes:

  • Consider adding other content or research that you appreciated in the comments. (I haven't really tried to make this an exhaustive list, and I know the sample I started with has big blind spots.)
  • I’d really appreciate feedback on the EA Newsletter if you’re subscribed to it (see archives and subscribe here). We get very little constructive feedback on it, and feedback could help us improve a newsletter that goes out to 60K subscribers. 
  • More context on how and why I made this: I wanted to collect “important stuff from 2023” to reflect on the year, and realized that one of the resources I have is one I run — the monthly EA Newsletter. So I started compiling what was meant to be a quick doc-turned-post (by pulling out good links from the 2023 emails, occasionally updating them and remembering related content that I also thought was good). Things kind of ballooned as I worked on this post and added non-Newsletter links. Long story short, there are now two posts; see the companion post, which is less focused on content/research and more focused on "news."
  • For more cool EA-related content from 2023, see: curated Forum posts, talks from EA conferences in 2023, and your EA Forum Wrapped recommendations (we’ll also try to compile a post from what people mark as “most valuable”). 
Sources for images in the banner, starting at the top and going clockwise: 1, 2, 3, 4, 5, 6, 7, 8

Highlights: content by causes

Links that stand out, loosely organized into cause areas (although a lot of the content is useful for more than one cause area/category). Links that I appreciate an unreasonable amount are starred (not very consistently). ⭐

Cross-cause discussions & content about causes less prominent in EA

More: ⭐ The Capability Approach to Human Welfare and There is little (good) evidence that aid systematically harms political institutions from Ryan C Briggs, GWWC's evaluations of evaluators, TED Talk on effective philanthropy from Natalie Cargill, Radical tactics can increase support for more moderate groups (Social Change Lab), Why should ethical anti-realists do ethics? (Joe Carlsmith), Rethink Priorities’ Cross-Cause Cost-Effectiveness Model, EA is three radical ideas I want to protect and EA Strategy Fortnight posts, Wisdom of the Crowd vs. "the Best of the Best of the Best", and Bringing about animal-inclusive AI.

Global health and development: historical lessons and research on potentially under-appreciated causes

More: Cause area report: Antimicrobial Resistance (Akhil), ⭐ Are education interventions as cost effective as the top health interventions? (Founders Pledge report), The first results from the world’s biggest basic income experiment in Kenya are in (covered by Vox — see also a study on mortality reductions and a podcast on cash transfers and economic growth), What you can do to help stop violence against women and girls (Akhil), Lucia Coulter on preventing lead poisoning for $1.66 per child (80,000 Hours), Sectoral transformation & what we really know about growth in LMICs (Karthik Tadepalli), and Clean Water - the incredible 30% mortality reducer we can’t explain (Nick Laing).

Animal welfare

More: Open Phil Should Allocate Most Neartermist Funding to Animal Welfare (Ariel Simnegar), Price-, Taste-, and Convenience-Competitive Plant-Based Meat Would Not Currently Replace Meat (Jacob Peacock), ⭐ EA’s success no one cares about (Jakub Stencel), Why I No Longer Prioritize Wild Animal Welfare (Saulius), Net global welfare may be negative and declining (see Vox coverage), Change my mind: Veganism entails trade-offs, and health is one of the axes (Elizabeth), Animal Advocacy Strategy Forum 2023 Summary, OWID’s new page about animals, and claims that only mammals and birds are sentient.

Note: This section consists entirely of links to the Forum, which makes me think I’m more systematically missing awesome content on animal welfare than I am for other causes. But also: huge Rethink Priorities representation. 

Non-AI global catastrophic risks (including pandemic preparedness, climate change, nuclear safety, etc.): government investments, developing and deploying key technologies, and more

More: Reviewing nuclear winter (Michael Hinge — see also Nuclear winter scepticism and Philanthropy to the Right of Boom from Founders Pledge), 20 concrete projects for reducing existential risk (Buhl), Kevin Esvelt on cults, stealth vs wildfire pandemics, and how he felt inventing gene drives (80,000 Hours Podcast), Advice on communicating in and around the biosecurity policy community (Elika), and Alison Young on how top labs have jeopardised public health with repeated biosafety failures (80,000 Hours Podcast).

AI safety

This section is mostly oriented towards strategy/analysis-of-the-field discussions and advocacy- and governance-oriented work (i.e. I don’t feature links with great technical research here). 

What Do We Mean When We Talk About “AI Democratisation”? (GovAI)

There’s a bunch more on AI safety — see some content that was linked in the relevant section in the companion post. [LINK]

Month-by-month: highlights and featured news from 2023

The actual newsletters included a large amount of other content (like announcements/opportunities). See full newsletters in the archive.

January

EA Global, an update on the ozone layer, and staring into the abyss

Why Anima International suspended the campaign to end live fish sales in Poland

Effectiveness requires noticing mistakes and correcting them. Unfortunately, people often don’t do this (we tend to flinch away from uncomfortable ideas), and even when we do, we rarely discuss it publicly. [...] Anima International had been running a campaign against live fish sales in Poland. [...] Both the farming and transportation of the fish cause a lot of suffering — so progress seemed exciting. Unfortunately, it turned out that the campaign was causing unexpected harm; some people were switching from carp to salmon, and farming salmon requires farming more fish to feed the salmon (which are carnivorous). Anima International’s models and research showed that the campaign was worse than they’d hoped, and even tentatively implied that the program was harmful overall. They decided to stop the campaign. [...]

Ben Kuhn calls this kind of thinking “staring into the abyss” and identifies it as a core life skill, key to doing great work.

Why the ozone hole is on track to be healed by mid-century (Kelsey Piper in Vox)

Good news: a panel commissioned by the United Nations reports that “the Earth’s ozone layer is on track to recover within four decades.” [...] This success shows us that the world can come together to work on big problems — we should learn from it.

Let’s think about slowing down AI (Katja Grace)

…A recent post suggests that slowing down AI progress is unreasonably discarded by people interested in AI safety in part because of a “can’t-do” attitude. The post makes the case that this approach is viable, not radical, and can be cooperative, and shares other thoughts and models. 

The classic featured the ITN framework for comparing global problems in terms of expected impact (and links to The ITN framework, cost-effectiveness, and cause prioritisation and Most problems fall within a 100x tractability range (under certain assumptions)).

February

New charity ideas, the abolition of slavery, and research on animal welfare

Christopher Brown on why slavery abolition wasn’t inevitable (80,000 Hours Podcast)

…He rebuts one prominent theory — that the practice of slavery was bound to end as it was no longer profitable for slaveholders and traders — by noting that people involved in the slave trade continued to view the system as profitable. [...] One change is described as especially significant: the shift from feelings of unease about the morality of slavery to organized action in slaveholding societies. Brown notes that it might be comforting to think that “once [people] understood the cruelty of slavery, then of course they would organize and do something about it,” but he stresses that “not only did it not happen that way, but it almost never happens that way.” […]

H5N1 bird flu (Pandemic Prediction Checklist: H5N1 and What Are the Odds H5N1 Is Worse Than COVID-19?)

Worries about bird flu have been around for a while, but they’ve recently been ramping up again; in October, a strain of the flu (H5N1) infected minks at a fur farm and probably started spreading from one mink to another (which hadn’t happened with mammals before). The development is particularly concerning because minks are well suited to transmitting the disease to humans. You can read more here.

It currently seems unlikely that H5N1 will spread widely among humans — as of right now, Metaculus predicts a 2% chance that H5N1 will cause at least 10,000 human deaths. [...]

Rethink Priorities’ Welfare Range Estimates

How do you decide [between improving the lives of farmed chickens or saving some pigs from factory farms]? One of the difficulties here is that it’s not clear how to compare the experiences of different animals, and this gets harder when the animals in question are less similar to humans.

A report from Rethink Priorities estimates the “welfare ranges” of different species. These ranges track the difference between the most intense pleasures and the most intense pains that the animal can experience. […]

Classic: The Moral Imperative Towards Cost-Effectiveness, with additional links to Differences in impact and Why Charities Usually Don't Differ Astronomically in Expected Cost-Effectiveness.

March

GPT-4, the history of a simple cure, and more

GPT-4 and the road to out-of-control AIs (“This Changes Everything” by Ezra Klein in the NYT (paywalled) and other links about the news)

On Tuesday, OpenAI unveiled the AI model GPT-4, an even-more-capable successor to the system that powered the popular chatbot ChatGPT. That same day, Google made one of its most powerful AI models accessible to developers, Anthropic opened access to the AI chatbot Claude, and more — the news continued throughout the week. 

Two days before those announcements, Ezra Klein published a column in The New York Times, “This Changes Everything” (paywalled), in which he wrote: “[developing AI] is an act of summoning. The coders casting these spells have no idea what will stumble through the portal… They are calling anyway.” If this “summoning” continues unchecked, humans might find themselves at the mercy of deeply alien and uncontrollable AI systems. (Out-of-control AI might sound like science fiction, but experts are increasingly afraid of this possibility, and in a 2022 survey of machine learning researchers, nearly half said there is at least a 1 in 10 chance that the effects of AI would be “extremely bad (e.g. human extinction).”)

GPT-4 itself probably won't be disastrous, but it is a step towards AI systems that might be. [...]

Salt, Sugar, Water, Zinc: How Scientists Learned to Treat the 20th Century’s Biggest Killer of Children (Matt Reynolds in Asterisk)

A recent article by Matt Reynolds focuses on [why it took so long to discover ORS, a simple cure that has saved millions]. One theme highlighted in the piece is […] a lack of understanding of what was happening on a biological level. […] But while the first part of the problem was developing any effective treatment that doctors could theoretically administer, a crucial hurdle was finding a simpler, more practical treatment. By the mid-20th century, intravenous salines were often used to treat cholera. This “high-tech” treatment was effective and popular in richer areas, but inaccessible in others. The development of oral rehydration solution was a major breakthrough precisely because of its simplicity.

Can Policymakers Trust Forecasters? (Gavin Leech and Misha Yagudin in the Institute for Progress)

[…] A recent article argues that generalist forecasters — people who make and track predictions across a wide range of topics — slightly outperform domain experts and statistical models at predicting future events. Moreover, combining different approaches might be more promising, as it can help policymakers avoid biases and weaknesses of any particular group. […]

Classic: Most* small probabilities aren't pascalian by Greg Lewis.  

April

AI, why unconventional climate change approaches can be better, and more

Johannes Ackva on unfashionable climate interventions that work, and fashionable ones that don’t (80,000 Hours Podcast)

Two highlights: 

  • Interventions can look a lot more — or less — effective when you evaluate them on a global scale. For example, some groups in Switzerland are advocating for thorough insulation of all homes to make them more energy-efficient. This would reduce emissions in Switzerland, but it would have a small impact globally because most emission growth is in countries where insulation isn’t the problem. […]
  • We should accelerate the development of new clean energy technologies. […]

What should we do about risks from AI? (“We must slow down the race to God-like AI” (paywalled) in FT, and the launch of Planned Obsolescence)

[…] One post on slowing AI progress raises some key questions: 

  • Is it better to ask for evaluations — ongoing audits on whether systems are dangerous — instead of a pause? 
  • Is a 6-month pause too short? Is it even the right thing to ask for? A more continuous and iterative approach might be better. (See more.)
  • Will a moratorium like this backfire by worsening competitive dynamics? 

Given the uncertainty, what can all of us do? Stay informed, advocate for alignment, support people working on safety approaches, and use our skills and resources to work on the problem if we can (explore resources for upskilling).

Child and Infant Mortality (Our World in Data)

…until very recently in human history, almost half of all children died before the end of puberty. Today, global child mortality is around 4%. This still means that thousands of children die every day — far too many, but so much better than it used to be. 

Classic: “The timing of labour aimed at reducing existential risk” by Toby Ord (and a more recent Twitter thread). 

May

Good news for pig welfare, releasing billions of mosquitos, and regulating AI

​​An unexpected win for animal welfare (Vox)

In an unexpected ruling against the pork industry, the US Supreme Court upheld an important animal welfare law. California’s “Proposition 12” bans the sale of some pork products that come from farms where sows are kept in extremely small “gestation crates.” The law was passed via a 2018 ballot measure that was approved by over 62% of Californian voters. […] The outcome was far from guaranteed; the Supreme Court passed the ruling by a narrow majority. […]

How releasing billions of modified mosquitos might help fight dengue fever (WMP)

[…] The World Mosquito Program (WMP) is coming at the mosquito problem from another angle; they plan to build a mosquito farm in Brazil to start releasing modified mosquitos that can’t spread certain viruses. The mosquitos will contain the bacteria Wolbachia, which should prevent the insects from transmitting viruses like dengue, Zika, and yellow fever. The farmed mosquitos will then spread Wolbachia into the wild mosquito population. WMP has run trials; they report that one project led to a 77% reduction in confirmed dengue cases in the affected area. 

I don’t know if this program is especially cost-effective. Though dengue fever is probably more neglected than diseases like malaria, it also affects fewer people. But it’s still inspiring to see the range of ways that diseases can be fought. […]

How should we regulate artificial intelligence? (12 tentative ideas)

A year ago, the idea of out-of-control AI might have sounded a bit like science fiction, and people worried about catastrophic risks from AI thought that getting public interest in regulation would be difficult. But things have changed. Awareness and interest in regulation are growing, and governments are responding. In the US, Sam Altman (CEO of OpenAI) testified before the Senate today and pushed for safety-oriented regulation, and earlier the White House met with Altman and other AI CEOs to talk about potential dangers. And in the EU, a proposed AI Act would classify and regulate AI systems based on their risk levels.

Understanding what regulations are most effective is probably harder. Luke Muehlhauser, a senior program officer at Open Philanthropy, recently suggested 12 tentative ideas for US AI policy. These include tracking and licensing big clusters of cutting-edge chips, requiring that frontier AI models follow stringent information security protections, and subjecting powerful models to testing and evaluation by independent auditors. It’s helpful to understand what strategies can look like, but more research and work are required before the ideas can be implemented. [...]

Classic: How much should you research your career? (80,000 Hours), applying “Terminate deliberation based on resilience, not certainty.” 

June

Proposals for AI governance, lessons from charity evaluation, and many opportunities

Elie Hassenfeld on two big-picture critiques of GiveWell's approach, and six lessons from their recent work (80,000 Hours)

[...] A recent episode of the 80,000 Hours Podcast with Elie Hassenfeld, the CEO and co-founder of GiveWell, focuses on difficult questions [for GiveWell, like]:

  • How can you compare interventions that have different types of benefits? […]
  • Should GiveWell fund more interventions that speed up economic growth in poor countries? […]

Cause area report: Antimicrobial Resistance (Akhil)

Antibiotics and other life-saving medicines are becoming less effective due to antimicrobial resistance (AMR), which occurs when bacteria, viruses, fungi, and parasites adapt to the methods commonly used to combat them. A new report suggests that AMR is responsible for millions of deaths each year (particularly in sub-Saharan Africa) and has serious economic costs — by one estimate, $55 billion every year in the US alone. The problem is neglected and getting worse as the overuse of antibiotics in healthcare and food production continues and more drug-resistant bacteria evolve. 

There are promising approaches for working on AMR. These include creating incentives to accelerate the development of new antimicrobial medicines, contributing to quantitative research on the causes of AMR, running fellowships for policymakers, improving diagnostics (and their accessibility) to prevent misuse of antimicrobials, and raising the profile of AMR. [...]

AI governance: CAIS statement on AI risk and GovAI: survey of expert opinion 

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” [Statement on AI Risk]

[...] A recent survey of expert[s] found a lot of agreement on best practices in AI safety and governance. The survey (which was run by the Centre for the Governance of AI and got 51 responses out of 93 experts contacted) asked participants how much they agreed with 50 statements about what AGI labs should do; participants, on average, agreed with all of them.

Proposals based on evaluating models for risky qualities — in order to prevent the deployment of dangerous models — got the most approval. This strategy is also the subject of a new paper co-authored by researchers from DeepMind, OpenAI, Anthropic, and more.


Classic: Why did renewables become so cheap so fast? (Our World in Data)

July

Why regulating advanced AI chips could be important, farmed animal welfare reform cost-effectiveness, and more

The Puzzle of Non-Proliferation (Carl Robichaud in Asterisk) and Lennart Heim on compute governance (80,000 Hours)

Building nuclear weapons requires enriched uranium and plutonium, which is hard to produce; this is a key “choke point” for limiting nuclear proliferation. Training powerful AI models requires a lot of advanced chips and computing resources — “compute.” While regulating algorithmic research and other resources could be especially difficult, advanced chips could be tracked, licensed, and controlled. (Compute governance is discussed at length in a recent 80,000 Hours Podcast episode.)

In an Asterisk article, Carl Robichaud argues that nuclear non-proliferation has lessons for AI governance — like the importance of understanding choke points. Another parallel with nuclear weapons is the worry that fear of competition (for instance between the US and China) could drive countries to rush AI development. […]

A Cost-Effectiveness Analysis of Historical Farmed Animal Welfare Ballot Initiatives

[…] A new report by Laura Duffy at Rethink Priorities analyzes historical US ballot initiatives aimed at improving the lives of farmed animals, focusing on four initiatives for which relevant data were available. Some insights:

  • The vast majority (~99%) of reduced suffering came from restricting cage use for chickens. Ballot initiatives that targeted egg-laying hens were over a hundred times more cost-effective than those that targeted only veal calves and sows. 
  • Ballot initiatives averted about a year of extreme suffering for an animal per $10, and were about 21% to 81% as cost-effective as corporate cage-free campaigns, which are unusually successful.
  • There are a lot of uncertainties, but many reasons to be optimistic about future ballot initiatives. […]

Are education interventions as cost effective as the top health interventions?

[…] A new post from Founders Pledge [describes] a way to estimate how [improved] education affects someone’s future income. The post introduces five separate lines of evidence and argues that taken together, they show that improving students' test scores leads to significant and sustained increases in their future earnings. This framework suggests that a program that furnishes software for teaching numeracy and literacy in Malawi is 11 times as cost-effective as GiveDirectly (a charity often used as a high baseline of cost-effectiveness), meaning that it is as promising as top GiveWell grants.  […]


Classic: On "fringe" ideas (Kelsey Piper) 

August

The road to far-UVC protection against pandemics, an environmental case for agricultural productivity, and more

Thoughts on far-UVC after working in the field for 8 months

Imagine that installing special lights in places like schools and hospitals dramatically reduced indoor disease transmission (the primary driver of many epidemics). This isn’t fantasy; “germicidal” ultraviolet radiation can neutralize airborne pathogens — either in special zones removed from humans or throughout rooms via skin-safe “far-UVC” light. Far-UVC is particularly exciting because of two major advantages: the lights would cut transmission of many different diseases (including novel diseases), and installing far-UVC in essential spaces would reduce pandemic risk without relying on individuals to take specific actions. 

But we’re not yet ready to use far-UVC to prevent pandemics. In a recent overview, Max Görlitz argues that more work is needed before we can widely deploy the technology. For instance, it seems that far-UVC doesn’t penetrate human skin, but its effects on eyesight should be studied more carefully. And it might not be enough to make sure that far-UVC is safe and effective; getting the most out of far-UVC might mean supporting deployment, as purely commercially driven adoption might lead to less useful installations that underinvest in protection against less frequent but more extreme situations. […]

Improving agricultural yields: Hannah Ritchie on why it makes sense to be optimistic about the environment (80,000 Hours)

[The average farmer in Tanzania has to work for a year to get as much output as the average U.S. farmer does in three to four days; some] regions have much higher crop yields (per unit of land) than others. A [key] reason for this disparity is the fact that agricultural productivity has grown much slower in sub-Saharan Africa than in other regions, and the reduced productivity causes serious issues. 

In a recent podcast episode on a range of environmental topics, Hannah Ritchie argues that improving agricultural productivity in sub-Saharan Africa would significantly help global living standards. Many farmers (who account for the majority of the region’s poorest people) don’t have extra time or resources to invest in productivity improvements like better irrigation systems. They also lack access to richer markets (in part due to the EU’s agricultural policies). As a result, many are trapped in poverty. But the problem might be tractable; we know it's possible to increase agricultural productivity on a large scale as we've done it before. In the case of sub-Saharan Africa, influencing policy and supporting things like irrigation systems and high-yield crops could help launch an agritech market that helps with these problems. 

These interventions would also significantly mitigate the environmental impacts of farming by reducing the need for more farmland. About half of the world’s habitable land is used for agriculture, leading to problems like biodiversity loss. Projections imply that without agricultural productivity improvements, we’d need 26% more cropland by 2050 — an area the size of India and Germany combined. […]


Classic: Radical Empathy

September

TED: What could we accomplish if the global 1% gave 10%? (and more)

What if the global 1% gave 10%? (TED Talk)

Thoughtful philanthropy can achieve a lot of good. The Rockefeller Foundation funded Norman Borlaug’s research on improving crop yields, which is estimated to have saved hundreds of millions of lives. The Pugwash Conferences helped limit the proliferation of nuclear weapons. The March of Dimes Foundation funded the development of the polio vaccine. 

A recent TED talk by Natalie Cargill, the founder and co-CEO of Longview Philanthropy, discusses what we could achieve with 10% of the income (or 2.5% of the net worth) of the world’s richest 1% — $3.5 trillion in one year. […]

Biological risks and why we should be cautious about irreversibly sharing powerful AI models (shutting down DEEP VZN and open-sourcing AI models)

After two years, USAID has shut down DEEP VZN, a controversial virus-hunting program aimed at stopping the next pandemic before it happened. The plan was to collect potentially dangerous virus samples in the wild, analyze the samples in labs to identify the viruses capable of causing a pandemic, and publish a ranked list of dangerous viruses and their genomes. Some, like MIT biologist Kevin Esvelt, expressed concerns that sharing a list like that could significantly lower the barrier for terrorists or others who might want to start a pandemic by providing them with instructions on which viruses to use; “gene synthesis,” which can print a virus’s DNA given its genome, would help them do the rest. The program wound down this summer. (To lower the chances of human-made pandemics, governments could require that gene synthesis companies screen orders and customers.)

Related concerns have been raised about openly sharing powerful AI models. “Open-sourcing” is often presented in a positive light, but giving total access to an AI model is a risky and irreversible choice; there’s no way to take an open-sourced model down or set up protections if we later discover that the model is too dangerous. AI models are becoming more powerful and some worrying possibilities have already been demonstrated, like when a group of researchers discovered that a model they were using to screen new drugs for toxicity could be repurposed to suggest 40,000 new possible chemical weapons in six hours. 

Instead of being shared without safeguards, powerful AI models could be evaluated for extreme risks and then released via a structured access model that lets researchers study copies without sharing the models irreversibly. 

Why we didn't get a malaria vaccine sooner (Works in Progress)

It’s worth celebrating the progress of two promising malaria vaccines and the rollout of the RTS,S vaccine, which was endorsed by the WHO in 2021. But, given that malaria kills around half a million children every year (or more than 1000 children every day) and the fact that the RTS,S vaccine alone spent 23 years in trials and pilot studies before it was licensed, it’s also worth asking why it took so long to get a malaria vaccine — and what we can do to speed things up next time. 

A recent article on why we didn’t get a malaria vaccine sooner explains that while malaria poses some unique problems, one obstacle was broader: lack of funding. Malaria affects some of the poorest countries (which can't afford expensive vaccines), and developing a vaccine costs a lot of money and time (especially given the high chance of failure). The expected payoff wasn't large enough to incentivize individual firms to invest the resources needed.

The authors make the case for “advance market commitments” (AMCs), where governments or philanthropists promise to subsidize a new vaccine in large quantities if it’s developed and countries actually want to use it. […]

Classic: Advice on how to read our advice (80,000 Hours) — related to the release of their new career guide

October

An upcoming virtual event, biotech risks and opportunities, and many open roles

Responsible biotech: Kevin Esvelt on cults that want to kill everyone, stealth vs wildfire pandemics, and how he felt inventing gene drives (80,000 Hours Podcast)

The day after Kevin Esvelt discovered a way to spread a modified gene through an entire population of plants or animals, he “woke up in a cold sweat.” […] 

In a recent appearance on the 80,000 Hours Podcast, Esvelt explains why he now thinks CRISPR-based gene drive technology is relatively safe (it’s slow, clearly detectable, and easily countered). Accidents or carelessness could still lead to catastrophic delays and other issues, so Esvelt argues that we should start small and local (via “daisy drives”) and only proceed with proper regulation and community buy-in.

The podcast episode also covers why and how we should protect society from those who might want to start a civilization-ending pandemic. The number of people who have the ability to identify and release a dangerous virus is growing [...]. But some things could help. For instance: 

  • Monitoring wastewater for suspicious patterns or signs of edited DNA to detect pandemics early
  • Investing in developing better personal protective equipment (and stockpiling it)
  • Installing virus-neutralizing far-UVC lights in workplaces and labs [...]

What does democratic oversight of AI look like? (GovAI: What Do We Mean When We Talk About “AI Democratisation”?)

Recent polls suggest that most U.S. voters would approve of regulation that prevents or slows down AI superintelligence, favor restricting the release of AI models we don’t fully understand, and prefer federal regulation of AI development over self-regulation by tech companies. It’s hard to accurately interpret poll results like these, but they point to unease and a disconnect between the general public and some major AI labs in the U.S. So it’s worth exploring how the public should be involved in steering AI development.

People sometimes talk about “AI democratization,” but the phrase is vague. An overview from the Centre for the Governance of AI outlines different things people mean by “AI democratization” and how the stances diverge. The post (and accompanying paper) explains that democratizing AI use (making AI more accessible), development (helping a wider range of people contribute to AI design), benefits (more equitably distributing the benefits of AI), and governance (distributing influence over how AI should be used, developed, and shared to a wider community of stakeholders) have significant differences. Democratizing AI governance doesn’t necessarily mean making it possible for everyone to use or build AI models however they want, but rather introducing democratic processes like citizen assemblies to give people input on key decisions about AI. […]

Animals in data: a new resource on animal welfare from Our World in Data

[It highlights] information like the number of animals that are farmed and slaughtered (hundreds of millions every day) and why adopting slower-growing breeds of chicken would reduce animal suffering. […]


Classic: Don’t think, just apply! (usually) — EA Forum 

November

AI (safety) news, the invisible toll of air pollution, and more

How long do policy changes matter?

Is advocating for policy changes effective? One of the things you'd need to evaluate to answer this question is how long a new policy would persist before it would probably [be] repealed. […]

A recent paper that analyzes historical data from the US finds that policy changes are surprisingly persistent. A narrowly passed referendum will probably (80%) still be in place a century later. Moreover, referendums that narrowly fail will probably (60%) not pass at all in the next century. This suggests that advocacy for policy change might be much more cost-effective than is often assumed. […]

Air pollution is responsible for ~12% of deaths: what might help?

Air pollution accounts for the deaths of close to 6.7 million people per year (including half a million infants). Pollution is particularly bad in countries like India, where the average person might be losing 3 to 6 years of life expectancy due to bad air. And the problem is incredibly neglected. 

In a recent podcast, Santosh Harish discusses the main causes of air pollution and some potentially cost-effective interventions. Indoor pollution can result from people burning solid fuels for cooking. (Unfortunately, a lack of access to cleaner fuels like liquid petroleum gas or reliable electricity means many people have no choice except to use fuels like firewood.) Outdoor air pollution is caused by waste burning, illegal industrial gas dumping, vehicle emissions, and more. And policies meant to prevent air pollution are often outdated or left unenforced.

Research, policy outreach, and technical assistance to governments could be effective ways for philanthropy to support work on this problem. Anything that improves energy efficiency would also help, as would subsidies that help people switch from solid fuels. [...]

AI-safety-related:

  • The UK’s AI Safety Summit gathered political and tech leaders to discuss risks from advances in AI and how to manage them, and produced the Bletchley Declaration, signed by representatives of 28 countries (including the US, UK, and China, as well as the EU). 
  • U.S. President Biden issued an executive order on “safe, secure, and trustworthy” AI (brief summary and analysis, fact sheet, full order), requiring reporting systems, safety precautions at bio labs, and more.
  • Liv Boeree talks about the dark side of competition in AI in a recent TED Talk. 
  • Key scientists share a short “consensus paper”, and prominent Chinese, US, UK, and European scientists sign a statement on a joint strategy for AI risk mitigation.
  • In TIME (paywalled), Yoshua Bengio and Daniel Privitera outline policy goals that could help achieve AI progress, safety, and democratic participation.
  • AI safety researcher Paul Christiano discusses responsible scaling policies and more on the Dwarkesh Podcast.
  • Industry updates: OpenAI's CEO Sam Altman was fired. This news is important but still developing. Meta has disbanded its “Responsible AI” team. 


Classic: What happens on the average day? (Rose Hadshar)

December

Charity spotlights, reflections on 2023, and opportunities for 2024

This was an unusual newsletter. Many people donate right at the end of the year, so the December EA Newsletter focused on featuring exciting charities (in case people want to donate to them) and giving readers a sense of what people who are working on EA-related projects (or projects that look pretty good from an EA perspective) actually do. 

Doing good via new charities (LEEP and Charity Entrepreneurship as a case study)

Exposure to tiny amounts of lead can lower a child’s IQ by 1-6 points, shorten their lifespan, and more. People know that lead is harmful, but few are aware of just how widespread the problem is; 1 out of every 3 children worldwide has unsafe blood lead levels. And the global toll is catastrophic. Can charities help? 

In a recent podcast episode, Lucia Coulter discusses the Lead Exposure Elimination Project (LEEP), which she co-founded in 2020. LEEP identifies and alerts communities that are unwittingly being exposed to lead paint, and provides support to local governments and producers as they transition to lead-free paint. [...] LEEP seems extremely cost-effective [...]. 

Coulter didn't stumble into working on lead exposure — she applied to a 2020 charity incubation program run by Charity Entrepreneurship (CE), for which CE researchers had pre-selected eight promising charity ideas. As a participant, she picked one of the eight ideas, paired up with a cofounder, refined the plan, and got LEEP off the ground with training and $60K in seed funding from CE. 

Charity Entrepreneurship incubates 5-8 charities every year, and the charities they launch have a strong track record. If you want to support new charities that are likely to achieve a lot of good, consider donating to CE’s Incubated Charities Fund (see their 2024 charity ideas) or to individual charities they’ve launched. 

Farmed animal welfare is neglected, but some charities are making important progress (The Humane League and the Animal Welfare Fund as examples)

In 2023, farmed animal welfare advocates got 130 companies to agree to stop using battery cages in the coming years. Corporate pledges like these are a tested approach that is expected to help a huge number of animals; if companies implement the changes they promised, the ~3000 pledges secured to date would reduce the suffering of around 800 million chickens alive at any time. (Around 90% of the pledges that came due by last year have been fully implemented.)

This progress and some other wins were achieved by a handful of animal advocacy organizations that could accomplish more if they were less funding-constrained. (The total annual budget of all organizations that try to promote farmed animal welfare is estimated to be a bit over $200 million. For context, over $1 billion annually is spent on animal shelters in the US, supporting significantly fewer animals. As a different reference, the Metropolitan Museum of Art had a budget of over $300 million last year.) […]

Some donation options are less direct but might achieve more (highlighting charitable funds, and effective giving initiatives like Giving What We Can)

Researchers have argued that giving to expert-led charitable funds is often more effective than supporting individual charities. Fund managers’ expertise and resources mean that donations tend to be used more effectively, and recipient organizations might benefit from the consistency and additional support that funds can provide. 

Three new cause-area funds were recently announced by Giving What We Can: the Global Health and Wellbeing Fund, the Effective Animal Advocacy Fund, and the Risks and Resilience Fund. They’ve also highlighted some older funds that have a strong track record.

Supporting “effective giving initiatives,” which focus on raising money for high-impact opportunities, might be another way to get more out of your donations. Giving What We Can (GWWC), for instance, seems to generate around $30 for highly effective charities per $1 spent on its operations, has encouraged thousands to pledge to give 10% of their lifetime income, and supports a variety of other projects. Some other effective giving initiatives operate primarily outside of English-speaking countries, like Effektiv Spenden, Ayuda Efectiva, and Doneer Effectief. [...]

Other exciting organizations

[…] For more comprehensive lists, explore projects featured on GWWC's site, charity recommendations from staff at Open Philanthropy and the charity evaluator GiveWell, or read posts about donation choice on the EA Forum.


Concluding thoughts + notes on the Newsletter

I don’t think this is a great list (and I think a shorter list would be valuable), but I’m hoping that it’s a start. Please add or comment on things you want to highlight.

I also want to take this chance to solicit feedback on the monthly EA Newsletter (which goes out to ~60K subscribers at varying levels of engagement with EA). 

I’m hoping to write more reflections, but I might deprioritize them and/or not turn them into a publishable form. Topics might include: things I wish we discussed more on the Forum, posts that changed my mind, a shorter list of my really-truly favorites, and links that seemed most actionable/useful. 

Here’s a link to the companion post, which focuses on 2023 "news."

Comments (2)

Thanks for the summary, Lizka!

Here’s a link to the companion post, which focuses on 2023 "news."

The link here is not right, although you provide the correct one at the start of the post.

Thanks for compiling this! I skimmed this, and it was a good way of getting an overview of what is happening in parts of EA that I know less about. I found having it separated by cause then by month useful so the reader can choose which overview they prefer, although some non-AI causes could have had their own section rather than being clumped together (I slowly scrolled through month by month and clicked on some of the more interesting looking articles).
