Here is the tweet with the cover.

I found it very cool that the author found out about EA through 80,000 Hours and took the Giving What We Can Pledge!


I really liked this one. Between this, the New Yorker piece, and Dylan Matthews' Vox one, there's been an unusual amount of nuanced, high-quality coverage of EA in mainstream outlets lately imo.

Same here -- Will MacAskill's publicists are doing a great job getting EA in the public eye right as What We Owe the Future looms. (Speaking of which, the front page of this Sunday's New York Times opinion section is "The Case for Longtermism"!)

On a slight tangent, as a university organizer, I've noticed that few college students have heard of EA at all (based on informal polling outside a dining hall, fewer than ~10%). It'll be interesting to see if/how all this contemporary coverage changes that.

Two of my acquaintances at uni who sort of knew I was involved in EA stuff, but weren't really that interested in it themselves, have quite recently (~a week ago) reached out to ask about EA because they came across it elsewhere. My guess is that there are many more who'll come across it and be curious but not necessarily connect the dots to engaging with their university student group.

Would be interesting to hear from people at universities whose new academic year starts soon about whether the EA coverage in the media has changed anything!

There were plenty of shortcomings, I thought, in the New Yorker piece (the only one of the three I've read).

Curious to hear what you thought these were if you feel it worth your time to share.

I don’t know what Josh thinks the flaws are, but since I agree that this one is more flawed, I can speak a bit for myself at least. Most of what I saw as flawed came from isolated moments, in particular criticisms the author raised that seemed to me to have fairly clear counterpoints he didn’t bring up (other times he managed this quite well). A few that stand out, off the top of my head:

“Cremer said, of Bankman-Fried, ‘Now everyone is in the Bahamas, and now all of a sudden we have to listen to three-hour podcasts with him, because he’s the one with all the money. He’s good at crypto so he must be good at public policy . . . what?!’”

The 80,000 Hours podcast is about many things, but principally and originally it is about effective career paths. Earning to give is recommended less these days, but they’ve only had one other interview with someone who earned to give that I can recall, and SBF is by far the most successful example of the path to date. Another thing the podcast covers is the state of EA opportunities and organizations, and learning about the priorities of one of the biggest new forces in the field, like FTX, seems clearly worthwhile for that. Raising the three-hour length is also misleading, since that is a very typical length for 80k episodes.

“Longtermism is invariably a phenomenon of its time: in the nineteen-seventies, sophisticated fans of ‘Soylent Green’ feared a population explosion; in the era of ‘The Matrix,’ people are prone to agonize about A.I.”

This point strikes me as very ad hoc. AI is one of the oldest sci-fi tropes out there, and to find a recent, particularly influential example he had to go back to a movie over 20 years old that looks almost nothing like the risks people worry about with AI today. Meanwhile, the population-explosion example is cherry-picked to be a case of sci-fi worry that seems misguided in retrospect. Why doesn’t he talk about the era of “Dr. Strangelove” and “War Games”? And immediately after this,

“In the week I spent in Oxford, I heard almost nothing about the month-old war in Ukraine. I could see how comforting it was, when everything seemed so awful, to take refuge on the higher plane of millenarianism.”

Some people probably take comfort in this, but generally those are people who, like the author, aren’t viscerally worried about the risk. Others have very serious mental health problems from worrying about AI doom. I’ve had problems like this to some degree; others have had it so badly that they had to leave the movement entirely, and indeed criticize it from the complete opposite direction.

I am not saying that people who academically or performatively believe in AI risks, and who can seek refuge in this, don’t exist. I’m also not saying the author had to do yet more research and turn up solid evidence that the picture he gives is incomplete. But when you start describing the belief that everything and everyone you love may soon be destroyed as a comforting coping mechanism, you should bring at least a little skepticism to the table. It is possible that this just reflects the fact that you find a different real-world problem emotionally devastating at the moment, that thinking about a risk you don’t personally take seriously is a distraction for you, and that you failed your empathy roll this time.

A deeper issue might be the lack of discussion of the talent constraint on many top cause areas in the context of controversies over spending on community building, which is arguably the key consideration much of the debate turns on. The increased spending on community building (which still isn’t even close to a majority of the spending) seems more uncomplicatedly bad if you miss this dimension.

Again though, this piece goes through a ton of points, mostly quite well, and can’t be expected to land perfectly everywhere, so I’m pretty willing to forgive problems like these when I run into them. They are just the sorts of things that made me think this was more flawed than the other pieces.

I agree, but EA is a big messy fast-changing movement with lots of internal diversity, controversies, projects, ideas etc., that is pretty poorly known by the average person. This writer had to tease out a good, nuanced take on the movement, as far as I can tell, basically from scratch, which isn't easy, and I think it shows that he put a ton of care, thought, and research into the task. The product wasn't perfect, but I think it's much better than the average explainer non-EAs, or frankly some EAs, would write on the topic.

Yeah I totally agree that the article was much better than many others on the subject, and that it isn't an easy task. I just thought it was worth acknowledging the shortcomings as well.

I also think it was probably the most flawed of the three, but it also seemed like the most ambitious and packed with some of the most interesting information and narrative (plus by the person with the least prior familiarity) so I think I was unusually forgiving towards the flaws it did have.

I thought this was a surprisingly good article! Many journalists get unreasonably snarky about EA topics (e.g., insinuate that people who work in technology are out of touch awkward nerds who could never improve the world; suggest EA is cult-like; make fun of people for caring about literally anything besides climate change and poverty). This journalist took EA ideas seriously, talked about the personal psychological impact of being an EA, and correctly (imo) portrayed the ideas and mindsets of a bunch of central people in the EA movement. 

I think it helps that the journalist had been aware of EA since 2013 and had taken the GWWC pledge in 2014, even if they hadn't been very involved in the community.

Definitely!!!! A lot of journalists seem to cover topics they don't really understand (mainstream media coverage of things like nuclear power or cryptocurrency can be particularly painful), so it was awesome to read something written by a person who gets the basic philosophy. 


If this is a legit cover (e.g. is the one on the print edition, and here, and here), this puts EA in a new era of media coverage.

It appears to be most prominent on the "Front Page" of the website.

Great piece :) Nitpick:

"When things feel particularly bleak, I sometimes tell myself that even if I had the time and energy to try to make the world better, I’d probably fail.

Effective altruists try anyway. They know it’s impossible to take the care you feel for one human and scale it up by a thousand, or a million, or a billion."

Quote 2:
"We could really make things very good in the future," he tells me. "Imagine your very best days. You could have a life that is as good as that, 100 times over, 1,000 times over."

(highlights mine)

At face value, a question comes up: if it is impossible to scale the care you feel by a factor of 1,000 or more, why would it be possible to have a life that is 1,000 times as good as how you might imagine "your very best days"? Wouldn't that max out at some point too?

There is some nuance to both of these quotes, which removes the conflict somewhat:
1. The first quote is about your "care-o-meter" (as described in the linked essay), while the second is about the "goodness" of life in general. The word "imagine" suggests the latter quote is about feeling as good in your life as you feel on your best days, times 1,000; however, "imagine" can also mean other things (you can think your best day was when you donated to rescue 1,000 birds, which does not necessarily feel much different from saving one, with the "goodness" factor coming from something other than subjective wellbeing).
2. Perhaps it's about having 1,000 times more "very best days", or 500 times more "very best days" that are subjectively twice as "best" as they are now, or some other combination.
3. Perhaps there are limits to the "care-o-meter" but not to how we perceive subjective wellbeing; the scales don't necessarily have the same limits and the same progression patterns. (Is that even the right question to ask? Do these scales actually work that way in the first place?)

It's obviously hard to give all these caveats in a single quote in an introductory press article, so it's nobody's fault, but still, an interesting conundrum.

At face value, a question comes up: if it is impossible to scale the care you feel by a factor of 1,000 or more, why would it be possible to have a life that is 1,000 times as good as how you might imagine "your very best days"? Wouldn't that max out at some point too?

2022-era biological humans may not be capable of either, but our descendants (assuming we survive) may have a lot of room to change on both, should they wish to do so.
