
Below is this month's EA Newsletter. I'm experimenting with crossposting it to the Forum so that people can comment on it and give feedback. If this is your first time hearing about the EA Newsletter, you can:

Subscribe and read past issues

Hello!

Our favourite links this month include:

  • Do insects suffer?
  • Almost all of us can save a life
  • California attempts AI regulation

There's also much more content, including an article about gene drives, an update from CEA’s new CEO, and a short video about the dangers of AI adapted from a short story. We also highlight jobs such as CEO of Giving What We Can (apply by 30 September) and Head of Operations at CEA (apply by 7 October).

— Toby, for the EA Newsletter Team

Image from here

Articles

Do insects suffer?

Insects are more complex than you might think. Monogamous breeding pairs of termites regularly maintain 20-year relationships. The smallest mammal brain (that of a shrew) is only six times the size of the largest insect brain (that of a solitary wasp). Fruit flies are used in efficacy studies of depression medication because they can exhibit depression-like behavior.

Yet scientists have long assumed that insects don’t suffer. As entomologist Meghan Barrett argues on a recent episode of the 80,000 Hours podcast, the reasons for that assumption no longer stand up. For example, scientists used to claim that insects didn’t have nociceptors (put simply, pain receptors), but these, and analogous systems, have now been found. It was once taken as evidence that some insects react strangely to grievous injury, ignoring their wounds; more recently, insects have been observed grooming their own or other insects’ wounds, and even responding to painkillers.

This matters because, in 2020, between 1 and 1.2 trillion insects were farmed, generally without any concern for their welfare. Barrett is too much of a scientist to confidently assert that insects do feel pain. However, she believes that, even if you think insect suffering is unlikely, the sheer number of individuals involved makes insect farming a major issue.

If you are interested in insect welfare, check out Barrett’s related blog post, which suggests ways we can all raise awareness of, or work directly on, this problem.

Image from here

Almost all of us can save a life

The key is to donate to the right charities: some are far more effective than others, and the best interventions are remarkably cost-effective. A new piece from Our World in Data makes a concise, data-driven case for donating wisely. They argue:

  • Some health interventions are 1,000x as cost-effective as others (see the graph below). Likewise, the best charities are far more effective at saving lives than the average charity.
  • Saving a life is relatively inexpensive. GiveWell, a leading charity evaluator, has found four charities that can save a life for around $5,000, far less than the roughly $1 million the UK government is willing to spend to save a life (see the quick comparison after this list).
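To put those two figures side by side (a rough back-of-the-envelope comparison using the rounded numbers above, not a calculation from the article itself):

$$\frac{\$1{,}000{,}000 \text{ (UK willingness to spend per life)}}{\$5{,}000 \text{ (GiveWell top charities, per life)}} = 200$$

In other words, at GiveWell's estimated cost per life saved, the amount the UK government is willing to spend to save one life could save roughly 200.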

If you’re not sure how to pick the best charities, rely on an expert charity evaluator, like GiveWell, or a managed fund. Also, consider sending the article to friends or family who are reluctant to compare charities.

Image from here.

California attempts AI regulation

SB 1047 is a California Senate bill that would require companies to develop safety plans for AI models costing over $100 million to train. Companies that don't comply would be liable if their models cause mass-casualty events or more than $500 million in damages. The bill has passed the California Legislature and now awaits Governor Gavin Newsom's decision to sign or veto it.

The bill appears popular among Californians. Current and former staff at AI companies, including OpenAI, DeepMind, and Meta, have also signed an open letter urging Newsom to sign the bill into law.

However, Google, Meta, and OpenAI have been lobbying against the bill, as has Andreessen Horowitz, a venture capital firm heavily invested in AI. Representatives from the tech industry have argued that the bill would stifle innovation among AI startups — even though it only applies to the most expensive training runs — and falsely claimed that it would threaten AI developers with jail.

Metaculus, a crowd forecasting platform, currently puts a 20% chance on the bill being enacted by 1 October this year. Expect more updates in the next couple of weeks.

Image from here.

In other news

Resources

Links we share every time — they're just that good!

Image from here.

Jobs

  • The 80,000 Hours Job Board features more than 800 positions. We can’t fit them all in the newsletter, so you can check them out there.
  • The EA Opportunity Board collects internships, volunteer opportunities, conferences, and more — including part-time and entry-level job opportunities.
  • If you’re interested in policy or global development, you may also want to check Tom Wein’s list of social purpose job boards.

Selection of jobs

BlueDot Impact

Centre for Effective Altruism

Cooperative AI Foundation

Founders Pledge

GiveWell

Giving What We Can

  • Global CEO (Remote, $130K+, visa sponsorship, apply by 30 September)

Lead Exposure Elimination Project (LEEP)

  • If you’re interested in working at LEEP, please complete this form.

Legal Impact for Chickens

  • Attorney (Remote in US, $80K–$130K, apply by 7 October)

Open Philanthropy

The Good Food Institute

Image from here

Announcements

Fellowships, internships, and volunteering

  • The Cosmos Fellowship is looking for applicants with “the potential for world-class AI expertise and deep philosophic insight” to work alongside the Human-Centered AI Lab at the University of Oxford (or other host institutions), pursuing independent projects with access to mentorship. The fellowship pays $75,000 (pro rata), with terms running in 90-day intervals for up to one year. Apply before 1 December.
  • Future Impact Group is offering a part-time, remote-first, 12-week fellowship. Fellows will spend 5-10 hours per week working on policy or philosophy projects on subjects such as suffering risks, coordinating international governance, reducing risks from ideological fanaticism, and more. Apply by 28 September.
  • Artificial Intelligence Governance & Safety Canada is looking for volunteers to help with content creation, events, translation (French to English) and more. If you’re interested in helping, contact them here.

Conferences and events

  • Upcoming EA Global Conferences: Boston (1-3 November, apply by 20 October).
  • Upcoming EAGx Conferences: Bengaluru (19-20 October) and Sydney (22-24 November).

Funding and prizes

  • The Strategic Animal Funding Circle is offering up to $1 million in funding, to be distributed among promising farmed animal welfare projects. Find out more, and apply for funding before 20 September.
    • If you are a donor willing to give upwards of $100K per year to farmed animal welfare, you can enquire about joining the funding circle at this email: jozden[at]mobius.life

Organizational updates

You can see updates from a wide range of organizations on the EA Forum.

Image from here.

 

Timeless classic

We only came up with the idea of human extinction fairly recently. Why? In an 80,000 Hours episode, Thomas Moynihan, an intellectual historian, discusses the strange and surprising views previous generations held about extinction, apocalypse, and much more. The podcast recasts our present-day assumptions as recent discoveries, and asks what future intellectual historians might think of our beliefs today.

Image from here.

We hope you found this edition useful!

If you’ve taken action because of the Newsletter and haven’t taken our impact survey, please do — it helps us improve future editions.

Finally, if you have feedback for us, positive or negative, let us know via our feedback form, or in the comments below. 

– The Effective Altruism Newsletter Team
