I think that someone should write a detailed history of the effective altruism movement. The history that currently exists on the forum is pretty limited, and I’m not aware of much other material, so I think there’s room for substantial improvement. An oral history was already suggested in this post.

I had tentatively planned to write this post before FTX collapsed, but the reasons for writing it are probably even more compelling now than they were beforehand. I think a comprehensive written history would help…

  1. Develop an EA ethos/identity based on a shared intellectual history, and provide a launch pad for future developments (e.g. longtermism and an influx of money). I remember reading about a community member who mostly thought about global health and who got on board with AI safety after meeting a civil rights attorney who was concerned about it. A demonstration of shared values allowed for that development.
  2. Build trust within the movement. As the community grows, it can no longer rely on everyone knowing everyone else, and needs external tools to keep everyone on the same page. Aesthetics have been suggested as one option, and I think that may be part of the solution, in concert with a written history.
  3. Mitigate existential risk to the EA movement. See EA criticism #6 in Peter Wildeford’s post and this post about ways in which EA could fail. Assuming the book would help the movement develop an identity and shared trust, it could lower risk to the movement.
  4. Understand the strengths and weaknesses of the movement, and what has historically been done well and what has been done poorly.

There are a few ways this could happen.

  1. Open Phil (which already has a History of Philanthropy focus area) or CEA could actively seek out someone for the role and fund them for the duration of the project. This process would give the writer the credibility needed to get time with important EA people.
  2. A would-be writer could request a grant, perhaps from the EA Infrastructure Fund.
  3. An already-established EA journalist like Kelsey Piper could do it. There would be a high opportunity cost associated with this option, of course, since they’re already doing valuable work. On the other hand, they would already have the credibility and baseline knowledge required to do a great job.

I’d be interested in hearing people’s thoughts on this, or if I missed a resource that already exists.

Comments (16)



A few months ago I received a grant to spend six months researching the history of effective altruism, conducting interviews with early EAs, and sharing my findings on a dedicated website. Unfortunately, the funds for this grant came from the Future Fund, and have been affected by the collapse of FTX. I still intend to carry out this project eventually, but finding alternative funding sources is not a high priority for me, since my current projects are more urgent and perhaps more important.

If you think I should prioritize this project, or have thoughts on how it should be carried out, feel free to get in touch.

You should definitely prioritize it! What about creating an open-source wiki of sorts to crowdsource information?

You could always double check / get citations later on.

You mention opportunity cost, but I think it's worth further emphasizing. To do this well, you'd need somebody who has been around a while (or at least a lot of time and cooperation from people who have). You'd need them to manage different perspectives and opinions about various things that happened. You'd need them to be a very good writer. And you'd need the writer to be someone people trust--my perspective is "Open Phil hired this person" would probably not be sufficient for trust.

There are people who could do this: Kelsey Piper is one as you suggest. But these are all pretty unusual characteristics and the opportunity costs for the sort of person who could do this well just seem really massive. I might be wrong about this, but that's my first thought when reading your post.

I don't know that I'm the kind of person OP is thinking of, but beyond opportunity cost there's also a question of reportorial distance/objectivity. I've thought a lot about whether to do a project like this, and one sticking point is that (a) I identify as an EA, (b) I donate to GiveWell and signed the GWWC pledge, and (c) many of my friends are EAs, so I'm not sure any book I produce would be perceived as having sufficient credibility among non-EA readers.

I'd encourage you to consider taking it on. Even if identifying as an EA would reduce the credibility for outsiders, I'm sure whatever you produced would be a wonderful starting point for anyone else tackling it down the line.

People enjoyed reading Winston Churchill's history of the war and he was hardly a neutral observer! Pretty clear which side he wanted to win.

See also: Thomas Young's history of abolitionism, Friedrich Engels' history of Marxism.

I’d also say take it on. Someone objective can always rewrite it later, but if we don’t save it now we could lose a lot.

Definitely agree with Chris here!  Worst case scenario, you create useful material for someone else who tackles it down the line; best case scenario, you write the whole thing yourself.

I wonder whether Larissa MacFarquhar would be interested? She wrote about the early EA community in her 2015 book Strangers Drowning (chapters "At Once Rational and Ardent" and "From the Point of View of the Universe") and also wrote a 2011 profile of Derek Parfit.

That would certainly be great if she would. I actually first heard about EA when I read Strangers Drowning in 2016! It's very well written.

A possible middle ground is to make efforts to ensure that important source material is preserved, keeping the option value of doing this project later. That would presumably require significantly fewer resources, and wouldn't incur opportunity costs from "the sort of person who could do [the writing of a book] well."

Great point!  A historian or archivist could take on this role.  Maybe CEA could hire one?  I’d say it fits within their mission “to nurture a community of people who are thinking carefully about the world’s biggest problems and taking impactful action to solve them.”

I think opportunity cost is well worth mentioning, but I don't know that I think it's as high as you believe it to be.

Choosing someone who has been around a while is optional.  The value of having an experienced community member do it is built-in trust, access, and understanding.  The costs are the writer's time (though that cost is decreasing as more people start writing about EA professionally) and the time of those being interviewed.  I would also note that while there's lots of work for technical people in EA, writers in the community may not have found such great opportunities for impact.

Having a relative outsider take on the project would add objectivity, as Dylan noted.  Objectivity would both improve credibility to outsiders and increase the likelihood of robust criticism being made.  I also think there are just a lot of pretty great writers in the world who might find EA interesting.  Perhaps you just get different benefits from different types of writers.

There's a cost to waiting as well.  The longer you wait, the more likely it is that important parts of the story will be forgotten or deleted.

I expect a project like this is not worth the cost. I imagine doing this well would require dozens of hours of interviews with people who are more senior in the EA movement, and I think many of those people’s time is often quite valuable.

Regarding the pros you mention:

  1. I’m not convinced that building more EA ethos/identity based around shared history is a good thing. I expect this would make it even harder to pivot to new things or treat EA as a question; it also wouldn’t be unifying for many folks (e.g. those who have been thinking about AI safety for a decade or who don’t buy longtermism). According to me, the bulk of people who call themselves EAs, like most groups, are too slow to update on new arguments and information, and I would expect that having a written and agreed-upon history would not help with this. Then again, my point might be made better if I could reference common historical cases of what I mean lol

  2. I don’t see how this helps build trust.

  3. I don’t see how having a written history makes the movement less likely to die. I also don’t know what it looks like for the EA movement to die or how bad this actually is; the EA movement is largely instrumental toward other things I care about: reducing suffering, increasing the chances of good stuff in the universe, and, to a lesser extent, my and my friends’ happiness.

  4. This does seem like a value add to me, though the project I’m imagining only does a medium job at this, given its goal is not “chronology of mistakes and missteps”. Maybe worth checking out https://www.openphilanthropy.org/research/some-case-studies-in-early-field-growth/

With ideas like this I sometimes ask myself “why hasn’t somebody done this yet”. Some reasons that come to mind: people are too busy doing other things they think are important; it might come across as self-aggrandizing; it’s unclear who would read it, and the ways I expect it to get read are weird and indoctrinate-y (“welcome to the club, here’s a book about our history”, as opposed to “oh, you want to do lots of good, here are some ideas that might be useful”); and it doesn’t directly improve the world, while the indirect path to impact is shakier than other meta things.

I’m not saying this is necessarily a bad idea. But so far I don’t see strong reasons to do this over the many other things Open Phil/CEA/Kelsey Piper/interviewees could be doing.

I’ve addressed the point on costs in other commentary, so we may just disagree there!

  1. I think the core idea is that the EA ethos is about constantly asking how we can do the most good and updating based on new information.  So the book would hopefully codify that spirit rather than just talk about how great we’re doing.
  2. I find it easier to trust people whose motivations I understand and who have demonstrated strong character in the past.  History can give a better sense of those two things.  Reading about Julia Wise in Strangers Drowning, for example, did that for me.
  3. Humans often think about things in terms of stories.  If you want someone to care about global poverty, you have a few ways of approaching it.  You could tell them how many people live in extreme poverty and that by donating to GiveDirectly they’ll get way more QALYs per dollar than they would by donating elsewhere.  You could also tell them about your path to donating, and share a story from the GiveDirectly website about how a participant benefited from the money they received.  In my experience, that’s the better strategy.  And absolutely, the EA community exists to serve a purpose.  Right now I think it’s reasonably good at doing the things that I care about, so I want it to continue to exist.
  4. Agreed!

I think there could be a particular audience for this book, and it likely wouldn’t be EA newbies.  The project could also take on a lot of different forms, from empirical report to personal history, depending on the writer.  Hopefully the right person sees this and decides to go for it if and when it makes sense!  Regardless, your commentary is appreciated.
 
