
Recordings from various 2023 EA conferences are now live on our YouTube channel. These include talks from EAG Bay Area, EAG London, EAG Boston, EAGxLatAm, EAGxIndia, EAGxNordics, and EAGxBerlin (alongside many other talks from previous years).

In an effort to cut costs, some of this year's conferences had fewer recorded talks than usual, though we still recorded over 100 talks across the year. This year also featured some of our first Spanish-language content, recorded at EAGxLatAm in Mexico City. Listening to talks can be a great way to learn more about EA and stay up to date on EA cause areas, and recording them allows people who couldn't attend (or who were busy in 1:1 meetings) to watch them in their own time.

Some highlighted talks are displayed below:

EA Global: Bay Area

Discovering AI Risks with AIs | Ethan Perez

In this talk, Ethan presents how AI systems like ChatGPT can be used to help uncover potential risks in other AI systems, such as tendencies towards power-seeking, self-preservation, and sycophancy.

How to compare welfare across species | Bob Fischer

People farm a lot of pigs. They farm even more chickens. And if they don’t already, they’re soon to farm even more black soldier flies. How should EAs distribute their resources to address these problems? And how should EAs compare benefits to animals with benefits to humans? 

This talk outlines a framework for answering these questions. Bob Fischer argues that we should use estimates of animals’ welfare ranges to compare how much good different interventions can accomplish. He also suggests some tentative welfare range estimates for several farmed species. 

EA Global: London

Taking happiness seriously: Can we? Should we? A debate | Michael Plant, Mark Fabian

Effective altruism is driven by the pursuit of maximum impact. But what counts as impact? One approach is to focus directly on improving people's happiness — how they feel during and about their lives.

In this session, Michael Plant and Mark Fabian discuss how and whether to do this, and what it might mean for doing good differently. Michael starts by presenting the positive case — why happiness matters and how it can be measured — then shares the Happier Lives Institute's recent research on the implications and suggests directions for future work. Mark Fabian acts as a critical discussant, highlighting key weaknesses and challenges with ‘taking happiness seriously’. After their exchange, the discussion opens up to the floor.

Panel on nuclear risk | Rear Admiral John Gower, Patricia Lewis, Paul Ingram

This panel brings together Rear Admiral John Gower, Patricia Lewis, and Paul Ingram for a conversation exploring the future of arms control, managing nuclear tensions with Russia, China's changing nuclear strategy, and more.

EA Global: Boston

Opening session: Thoughts from the community | Arden Koehler, Lizka Vaintrob, Kuhan Jeyapragasan

In this opening session, hear talks from three community members (Lizka Vaintrob, Kuhan Jeyapragasan, and Arden Koehler) as they give some thoughts on EA and the current state of the community.

Screening all DNA synthesis and reliably detecting stealth pandemics | Kevin Esvelt

Pandemic security aims to safeguard the future of civilisation from exponentially spreading biological threats. In this talk, Kevin outlines two distinct scenarios—"Wildfire" and "Stealth"—by which pandemic-causing pathogens could cause societal collapse. He then explains the ‘Delay, Detect, Defend’ plan to prevent such pandemics, including the key technological programmes his team oversees to mitigate pandemic risk: a DNA synthesis screening system that prevents malicious actors from synthesizing and releasing pandemic-causing pathogens; a pathogen-agnostic wastewater biosurveillance system for early detection of novel pathogens; AI/bio capability evaluations and technical risk mitigation strategies; and pandemic-proof PPE.

EAGxLatAm

Effective Altruism in Low and Middle Income Countries (LMICs) | Panel

In this panel, speakers share their experiences and takeaways from working on community building projects in LMICs, namely the Philippines, South Africa, Russia, Nigeria, Mexico, Brazil, and Colombia.

The panel consists of Jordan Pieters, Zakariyau Yusuf, Elemerei Cuevas, Leo Arrunda, Angela Aristizábal, Sandra Malagón, and Aleksandr Berezhnoi.

EAGxIndia

Cause area — Air Quality in South Asia | Santosh Harish 

The session introduces air pollution in South Asia as an EA cause area and provides a brief overview of the South Asian Air Quality program at Open Philanthropy. Santosh outlines the major sub-strategies the program will focus on and the types of grant opportunities that are likely to be cost-effective.

EAGxNordics

What can we say about the size of the future? | Anders Sandberg

In this thought-provoking talk, Anders touches upon various factors that could shape the trajectory of humanity, drawing from multiple disciplines to provide a broad perspective. He explores the implications of different potential outcomes and how understanding these possibilities can inform our actions in the present.

EAGxCambridge

Fireside Chat | Lord Martin Rees

Lord Martin Rees is the Astronomer Royal and Co-founder of the Centre for the Study of Existential Risk. He is a former President of the Royal Society, former Master of Trinity College, and Emeritus Professor of Cosmology and Astrophysics, and is the author of 10 books including ‘If Science is to Save Us’ and ‘Our Final Century’. The interview covers both his career and his views on key open questions in the field of existential risk studies.

EAGxBerlin

Intercausal Impacts and the Power of Food System Change | Chris Popa

This talk explores the concept of intercausal impacts and analyses food system change as a prime example, given that our current food system not only causes vast amounts of animal suffering but is also a key driver of many of the world's other most pressing problems.
 

Comments (5)



These are useful, thanks. I would suggest we also enable/permit a lower-quality recording to be posted or shared of the other talks. It should be fairly costless to have a few people record and post these with camera phones, etc., and I believe it would add substantial value.

Thanks for the suggestion, David — we've thought about this and might consider it for the future, but I worry it would be a fair amount of work for a low-quality product (that I expect wouldn't get many views). However, for our recent Boston event we did take audio recordings of most talks and are planning to have many of them written up as Forum posts soon.

Audio recordings would be good, thanks.

Not sure about the benefit/cost. Am I naive to think something like:

  • Tripod (or a small stabilizer on a desk)
  • Volunteer (or paid person) in each room, sits at front or operates tripod
  • Uses own camera phone
  • Uploads to YouTube directly from phone

Time cost: Maybe 1-2 hours of 'equivalent extra person work' per 1-hour session (say 90 minutes).

Benefit: If even 5-10 people watch the videos, I suspect the value outweighs the cost.

  • Enabling them to shift time; e.g., do 1-on-1s if attending ...

  • Encouraging some people to not come in person (saving tremendous expense obviously)

  • Presenter and their team can re-watch the video to improve their own presentation, as well as use it for onboarding etc.

My guess (very rough) is that the value per watcher who spends at least 20 minutes viewing the talk is, on average, about 20% of the value of the 90 minutes spent by the person filming and uploading.

(Obviously more so if it's a highly productive person doing the watching, or if the speaker themselves watches it to improve their presentation.)

So I guess if at least 5 people watch the average video for 20 minutes or more (at ~20% each, five watchers recoup the full value of the filming time), this would be worth doing. Not sure how that compares to the statistics you've seen on usage.

Could it be enabled on a 'strictly voluntary basis', i.e., give permission for people to record certain sessions, announce this, and upload it to an (unofficial?) channel?

Are there plans to release the videos from EAGx Virtual?

Yes! We'll need to review footage and confirm with speakers, but they should be up soon :) 
