
I’m listing some of the important and/or EA-related events that happened in 2023. Consider adding more in the comments! 

A companion post collects research and other "content" highlights from 2023. (That post features content; the one you're reading summarizes news.) 

Also, the monthly EA Newsletter discussed a lot of the events collected here, and was the starting point for the list of events in this post. If you’re subscribed, we would really love feedback.

Skip to: 

  1. News related to different causes
    1. AI safety: AI went mainstream, states developed safety-oriented regulation, there was a lot of discourse, and more
    2. Global health & development: new vaccines, modified mosquitoes, threatened programs, and ongoing trends
    3. Animal welfare: political reforms and alternative proteins
    4. Updates in causes besides AI safety, global health, and animal welfare
  2. Concluding notes

Other notes: 

  • There might be errors in what I wrote (I'll appreciate corrections!). 
  • Omissions! I avoided pretty political events (I think they're probably covered sufficiently elsewhere) and didn't focus on scientific breakthroughs. Even besides that, though, I haven’t tried to be exhaustive, and I’d love to collect more important events/things from 2023. Please suggest things to add.
  • I’d love to see reflections on 2023 events. 
    • What surprised you? What seemed important but now feels like it might have been overblown? What are the impacts of some of these events?
    • And I’d love to see forecasts about what we should expect for 2024 and beyond. 
  • I put stars ⭐ next to some content and news that seemed particularly important, although I didn’t use this consistently. 
  • More context on how and why I made this: I wanted to collect “important stuff from 2023” to reflect on the year, and realized that one of the resources I have is one I run — the monthly EA Newsletter. So I started compiling what was meant to be a quick doc-turned-post (by pulling out events from the Newsletter’s archives, occasionally updating them or looking into them a bit more). Things kind of ballooned as I worked on this post. (Now there are two posts that aren’t short; see the companion, which is less focused on news and more focused on "content.")

AI safety: AI went mainstream, states developed safety-oriented regulation, there was a lot of discourse, and more

See also featured content on AI safety.

0. GPT-4 and other models, changes at AI companies, and other news in AI (not necessarily safety)

Before we get to AI safety or AI policy developments, here are some relevant changes for AI development in 2023:

  1. ⭐ New models: OpenAI launched GPT-4 in mid-March (alongside announcements from Google, Anthropic, and more). Also around this time (February/March), Google released Bard, Meta released Llama, and Microsoft released Bing/Sydney (which was impressive and weird/scary). 
  2. Model use, financial impacts, and training trends: more people started using AI models, and developers got API access to various models. Advanced AI chips continued to improve, and compute use increased and became more efficient.
  3. Improvements in models: We started seeing pretty powerful multimodal models (models that can process audio, video, images — not just text), including GPT-4 and Gemini. Context windows grew longer. Forecasters on Metaculus seem to increasingly expect human-AI parity on selected tasks by 2040.
  4. Changes in leading AI companies: Google combined Brain and DeepMind into one team, Amazon invested in Anthropic, Microsoft partnered with OpenAI, Meta partnered with Hugging Face, a number of new companies launched, and OpenAI CEO Sam Altman was fired and then reinstated (more on that). 
  5. Other news: Generative AI companies are increasingly getting sued for copyright infringement. (E.g. makers of AI art tools were sued by artists, and OpenAI by the NYT.) And I felt like there were many moments this year when AI “breakthroughs,” news, etc. would get awe and attention that didn’t seem appropriate given the developments’ relative unimportance.

1. Policy/governance: new regulations, policymakers take AI risk seriously, government investments into AI safety

These seem like really important changes.

  1. ⭐ Important regions rolled out measures aimed at reducing catastrophic AI risks:
    1. US: In October, President Biden issued an executive order on “safe, secure, and trustworthy” AI (summary/analysis, full order, more discussion), requiring reporting systems, safety precautions at bio labs, and more.[1] 
      1. Also in October, the US tightened its 2022 export controls on advanced AI chips and semiconductor manufacturing equipment, making it harder for Chinese companies to access advanced chips.[2]
    2. China: central government regulators in China issued measures related to generative AI that releases content to the public in China (draft released in April, and finalized measures went into effect in August). The measures declare “AI service providers” liable for generated content (if it contains illegal or protected personal information, it has to be taken down, and the underlying issue fixed and reported) and regulate training data. They also require certain kinds of providers to pass a security assessment from a regulator.
    3. EU: EU policymakers have reached an agreement on the AI Act, which was first proposed in 2021 and had been cycling through negotiations and revisions for most of 2023 (in part due to lobbying from AI companies and less safety-oriented governments). It will go into effect after a transitional period during which the EU Commission is seeking voluntary commitments from companies to start implementing the Act’s requirements. (See a post about the Act from 2021.) 
    4. UK: The UK hasn’t passed regulation related to AI safety, although it hosted the AI Safety Summit (discussed below) and started an advisory AI Safety Institute.
  2. More generally, AI safety is getting discussed and taken a lot more seriously by policymakers:
    1. US: NIST released an AI Risk Management Framework, the White House met with AI lab leaders and secured voluntary commitments from some of the top AI companies (May-July), and the Senate held a series of “AI Insight Forums,” three of which — in July, October, and December — focused significantly on catastrophic AI risk (others were more focused on topics like bias, privacy, and IP/copyright). (See also this explainer of different proposals from August.)
    2. UK: In November, the UK hosted the AI Safety Summit, gathering political and tech leaders to discuss risks from advances in AI and how to manage them (see a debrief and discussion of the results).
  3. Governments are also investing in AI safety in other ways. Several countries established national AI safety institutions. The US’s NSF (in partnership with Open Philanthropy) announced a request for proposals for empirical AI safety research, with $20M in grants available (deadline 16 January!). 

2. Scientists and others shared thoughts on AI safety and signed statements on existential risk from AI 

  1. ⭐ Key AI scientists started writing more on AI existential risk and the need for safety-oriented work.
    1. This includes two of three[4] “godfathers of deep learning,” Yoshua Bengio and Geoffrey Hinton (who left his job at Google to speak about this more freely). (Industry leaders have also shared views on AI risk: see e.g. podcast interviews with Dario Amodei, Mustafa Suleyman, and Sam Altman.)
  2. High-profile people outside the field of AI also shared AI safety concerns and thoughts in 2023. Examples include President Barack Obama (here's an interview), Bill Gates, and Nate Silver. (See more.) 
  3. Statements:  
    1. Two weeks after the launch of GPT-4, the Future of Life Institute released a letter asking for a six-month pause in “giant AI experiments.” It was signed by Elon Musk, Steve Wozniak, and many AI experts and public figures. 
    2. ⭐ A few weeks later, hundreds of key executives, researchers, and figures signed the simpler CAIS “Statement on AI Risk,” which was covered by many media outlets, including The New York Times (front page), The Guardian, BBC News, and more.
      1. See US public perception of the statement. Signatories include Turing Award winners, AI company executives like Demis Hassabis, Sam Altman, and Mustafa Suleyman, and others like Bill Gates, Ray Kurzweil, Vitalik Buterin, and more. 
    3. In October, high-profile scientists published ⭐ a “consensus paper” (arXiv, policy supplement), outlining risks from upcoming AI systems and proposing priorities for AI R&D and governance. Signatories include Andrew Yao, Stuart Russell, and more. 
    4. Also in October, prominent Chinese, US, UK, and European scientists signed a statement on a joint strategy for AI risk mitigation.

3. Public discourse included discussions of AI safety

  1. The public paid attention to AI risk.
    1. By May, the US public seemed concerned about risks and receptive to safety-oriented regulation, although it’s worth noting that nuclear war, climate change, and an asteroid impact tended to be seen as more likely to cause extinction. Other survey results in footnote.[5]
  2. Media outlets covered AI risk. Notable content includes the following pieces (all of these are paywalled): 
    1. ⭐ Ian Hogarth's “We must slow down the race to God-like AI” in the Financial Times
    2. Ezra Klein writing “This Changes Everything” in The New York Times
    3. “How AI Progress Can Coexist With Safety and Democracy” by Yoshua Bengio and Daniel Privitera in Time, and this Time piece by two researchers.
    4. (More coverage can be found here.)
  3. Broad-audience and introductory AI safety content was also featured in some large outlets:
    1. Ajeya Cotra went on Freakonomics.
    2. In TED Talks, Eliezer Yudkowsky asked whether superintelligent AI will end the world and Liv Boeree talked about the dark side of competition in AI.
    3. Yoshua Bengio outlined how rogue AIs may arise.

4. There were other developments in AI safety as a field

  1. Industry support[6]: OpenAI, Google, Anthropic, and Microsoft announced a Frontier Model Forum to make progress on AI safety, then shared an update about a $10M AI Safety Fund. OpenAI also announced $10M in Superalignment Fast Grants and grants for research into Agentic AI Systems.
  2. ARC Evals (now METR) worked with OpenAI and Anthropic to evaluate GPT-4 and Claude (pre-release). More generally, safety evaluations seem to have taken off as a field.
  3. The 2023 Expert Survey on Progress in AI from AI Impacts is out (2,778 participants from six top AI venues). The expected time to human-level performance dropped by 1-5 decades since the 2022 survey, and the median respondent put a 5% or higher chance on advanced AI leading to human extinction or similarly bad outcomes (a third to a half of participants gave 10% or more).
  4. New organizations working on AI safety and alignment have been announced, and there’s been a lot of research, which I will not cover here (please add highlights in the comments if you want, though!). 

For more on 2023, consider checking: the CAIS AI Safety Newsletter, AI Explained, Zvi’s series on AI, or newsletters on this list.

Global health & development: new vaccines, modified mosquitoes, threatened programs, and ongoing trends

See also highlights from content and research about global health and development.

1. Progress on malaria vaccines & other ways to fight mosquito-transmitted diseases

  1. ⭐ The new R21/Matrix-M malaria vaccine is extremely promising (~68%-75% efficacy), and it recently cleared an important hurdle for deployment: WHO prequalification. 
    1. In April, Ghana and Nigeria approved the vaccine and the Serum Institute of India prepared to manufacture over 100 million doses per year. 
    2. For most of the year, however, other countries with high rates of malaria weren’t rolling the vaccine out (and production wasn’t ramping up), in part because[7] the WHO hadn’t “prequalified” it yet. Alex Tabarrok, Peter Singer, and others wrote about the significant costs of delay, suggesting that the WHO didn’t seem to be treating malaria as the emergency it was. 
    3. On December 21, the WHO announced that the new vaccine had prequalification status. (1Day Sooner’s Josh Morrison reflected on how to evaluate the effects of their advocacy efforts. See also why GiveWell funded the rollout of the first malaria vaccine, from earlier in the year, and a recent discussion of whether the vaccines are actually more cost-effective than other malaria programs.)
  2. ⭐ The World Mosquito Program is fighting dengue fever by releasing millions of special mosquitoes. The mosquitoes are infected with Wolbachia bacteria, which blocks transmission of dengue (and related diseases, including Zika and yellow fever); see Saloni Dattani for more.
2. Other updates: lead exposure, a tuberculosis vaccine, threatened programs, new charities, and cash transfers

  1. ⭐ Prevalence of lead in turmeric dropped significantly after researchers collaborated with charities and the Bangladesh Food Safety Authority on interventions like monitoring and education campaigns. This work seems highly cost-effective and was supported by GiveWell. Lead exposure is extremely harmful.
  2. A new, potentially very effective tuberculosis vaccine has entered late-phase trials thanks to $550 million in funding from Wellcome and the Bill & Melinda Gates Foundation. The vaccine could save millions of lives. (Recent related Vox coverage.)
  3. PEPFAR (an HIV/AIDS program that was a huge success) is at risk.
  4. Charity Entrepreneurship launched new charities working on reducing antimicrobial resistance, women’s health, reducing tobacco use, and more.
  5. GiveDirectly shared that around $1M was stolen from them in the DRC in 2022, along with related updates. This is around 0.8% of the $144M GiveDirectly helped transfer globally in 2022. Kelsey Piper discusses the events in Vox.
  6. Relatedly, the first results from the world’s biggest basic income experiment in Kenya are in.

See some more events in this newsletter.

3. Very important things continued to happen

⭐ I love “What happens on the average day” by @rosehadshar, which emphasizes the way "news" (and/or what gets covered) can diverge from the things that are really important. So here’s a brief outline of some global-health-related things that kept happening:

Ongoing philanthropic projects kept delivering:

  • The Against Malaria Foundation distributed ~90 million nets, expected to protect 160 million people. “The impact of these nets is expected to be, ± 20%, 40,000 deaths prevented, 20 million cases of malaria averted and a US$2.2 billion improvement in local economy (12x the funds applied). When people are ill they cannot farm, drive, teach – function, so the improvement in health leads to economic as well as humanitarian benefits.”
  • Helen Keller International distributed over 63 million capsules of vitamin A via a program that seems highly cost-effective.
  • You can see more compiled here or in GiveWell’s updates and recommendations.
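
To put the AMF figures quoted above in per-outcome terms, here's a minimal back-of-the-envelope sketch. It's my own illustrative arithmetic based only on the numbers in the quote (which AMF itself flags as ±20%), not AMF's or GiveWell's published cost-effectiveness estimates; the implied funding level is simply the quoted economic benefit divided by the quoted "12x" multiple.

```python
# Back-of-the-envelope arithmetic using only the AMF figures quoted above.
# Illustrative assumptions, not AMF's or GiveWell's own cost-effectiveness estimates.

nets_distributed = 90_000_000        # ~90 million nets distributed
people_protected = 160_000_000       # expected people protected
deaths_prevented = 40_000            # expected deaths prevented (±20%)
cases_averted = 20_000_000           # expected malaria cases averted (±20%)
economic_benefit_usd = 2.2e9         # ~US$2.2B improvement in local economy
benefit_multiple = 12                # "12x the funds applied"

# Implied funding behind the distribution (assumption: benefit / multiple).
funds_applied_usd = economic_benefit_usd / benefit_multiple  # ~$183M

print(f"Implied funds applied:             ~${funds_applied_usd / 1e6:.0f}M")
print(f"Implied cost per net:              ~${funds_applied_usd / nets_distributed:.2f}")
print(f"Implied cost per person protected: ~${funds_applied_usd / people_protected:.2f}")
print(f"Implied cost per case averted:     ~${funds_applied_usd / cases_averted:.2f}")
print(f"Implied cost per death prevented:  ~${funds_applied_usd / deaths_prevented:,.0f}")
```

On these rough numbers, that works out to roughly $2 per net, about $9 per case averted, and on the order of $4,500 per death prevented, which is in the same ballpark as commonly cited estimates for net distributions, though a proper cost-effectiveness analysis accounts for much more than this.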

For more important trends/things-that-happen, check Our World in Data, Rose’s post, or Wikipedia’s Current events.[8]

Animal welfare: political reforms and alternative proteins

See also featured content and research on animal welfare.

1. Policies protecting animals: wins and losses

  1. ⭐ EU: The EU was on track to phase out cages for farmed pigs and egg-laying hens (and more: see a related EU Food Agency recommendation to ban cages). Unfortunately, the EU Commission seems to have dropped the promised animal welfare reforms (related thread). 
  2. ⭐ US: In an unexpected ruling, the US Supreme Court upheld California’s Proposition 12 (Vox), which sets minimum space requirements for animals and bans cages for egg-laying hens (cost-effectiveness). 
    1. Prop 12 and other important U.S. animal welfare bills might still be threatened by the EATS Act, which prohibits state governments from setting standards on the production of agricultural products imported from other states. (U.S. citizens can get in touch with their legislators about this.)

2. Alternative proteins & plant-based food: supported by many countries, cleared for sale in US, banned in Italy

  1. ⭐ Lab-grown meat was cleared for sale in the United States.
  2. Italy banned cultivated meat.
  3. Denmark, India, the UK, Germany, and other countries invested in alternative proteins/plant-based food. (See more on this and other highlights from GFI.)
  4. Plant-based meat sales seem to have stagnated in the US.

3. Other important developments: the first ever octopus farm, Peter Singer’s Animal Liberation Now, bird flu

  1. News of a plan for the world's first octopus farm caused concern and outcry.
    1. Relatedly, the Aquatic Life Institute’s certification tool ranks aquaculture certifiers based on the quality of their welfare requirements; the 2023 update includes a prohibition on octopus farming. (More on invertebrate welfare.) 
  2. Peter Singer published Animal Liberation Now and gave a TED Talk.
  3. Millions of farmed birds are being killed in an extremely inhumane way after a flu outbreak in the US (which seems to be ongoing).

4. Very important things continued to happen

Around 900,000 cows and 3.8 million pigs were slaughtered every day. Around 440 billion shrimp were killed on farms in 2023. Almost all livestock animals in the US lived their lives on factory farms (globally around three quarters of farmed land animals are factory farmed). And the world is on track to eat almost a trillion chickens in the next decade. 

Explore more on animal welfare here and in Lewis Bollard’s newsletter, where he recently shared some wins for farmed animals from 2023.

Updates in causes besides AI safety, global health, and animal welfare

See also featured content/research on topics that don't fit into the causes above.

  1. After two years, USAID has shut down DEEP VZN, a controversial virus-hunting program aimed at stopping the next pandemic before it starts, which some (including Kevin Esvelt) worried would end up causing a pandemic instead of preventing one. 
  2. Transmissibility developments in the H5N1 bird flu caused some concern and discussion about the potential danger and the odds that H5N1 could be worse than COVID-19.
  3. 2023 was “the hottest year ever recorded.” Coal probably made up a smaller share of global electricity production but grew in absolute terms. The cost of energy from renewable sources probably kept falling, and more energy came from renewables.
  4. A global catastrophic risks law was approved in the United States.

Concluding notes

Please suggest additions to the list[9] (or share other feedback), give feedback on the EA Newsletter if you have any, and consider reflecting on these events! I'd also love to see (and in some cases work on) related projects:

I viewed this in large part as an exercise, and would like to do some more, like the following:

  • Seeing how large forecasts on important questions might have changed
  • Identifying my biggest areas of confusion about what was important in 2023 and trying to list and resolve some cruxes
  • Deliberately choosing a list of questions to forecast for 2024 and trying to forecast them
  • Looking back on the “events” of 2023 and checking for events that surprised me (and thinking about where I should question whatever led to false expectations)
  • Seeking out information about my blindspots
  • Choosing a subset of “events” that at least seem particularly important one way or another and trying to actually evaluate how they were impactful and to what extent
  • And more

I’d also be excited about a more "meta-EA version" of this kind of collection, tracking important events and wins for EA-related people and groups. (The current list probably already skews a bit in this direction, but I'd like to see a reflection that includes things like a shift in conversation, discussion of whether and in what way we might be in Third Wave EA, etc.)

I probably won’t get to most of the above ideas, but I’ll likely work on some of them, although I expect I won’t bother to clean things up and publish them in many cases. Let me know if you have thoughts on what’s more/less useful!

  1. ^
  2. ^

    See a Twitter thread summarizing the revised controls, and this analysis of the 2022 controls.

  3. ^
  4. ^

    Yann LeCun famously disagrees with them on AI risk.

  5. ^

    AI Impacts also has a long list of US public opinion survey results from different sources.

    You can also explore results from an international survey of public opinion towards AI safety, which finds that there’s some variation between countries but agreement on some questions, like the importance of testing.

  6. ^

    Note that I'm worried about safety-washing, and in some of these cases I'm particularly unsure about what the risk/safety implications of these initiatives are.

  7. ^

    The WHO’s “prequalification” of vaccines is important for organizations like GAVI and UNICEF to start procuring and deploying vaccines to lower- and middle-income countries.

  8. ^
  9. ^

    I don’t think this list even remotely covers the important things that happened in 2023. There are some obvious or predictable blindspots (e.g. I deliberately didn’t focus on scientific/knowledge developments or political changes) and lacking information (lagging metrics, events/changes that are difficult to measure, issues where we’ve passed some kind of point of no return or an inflection point but haven’t landed at the next stable equilibrium, etc.) — and I’m also just missing a bunch of stuff.

