Here is a list of new 80,000 Hours content since we wrote our previous update for this forum last September, in chronological order.

Note that a lot of our new ideas now come out through our podcast - you can subscribe by searching for 80,000 Hours in any podcasting app.

Another way to stay up to date is to join our newsletter (just double check it's reaching your inbox and not getting stuck in your 'promotions' folder).

  1. Should you work at an effective non-profit? A research-backed analysis. (Career review)
  2. Is it time for a new scientific revolution? Julia Galef on how to make humans smarter, why Twitter isn’t all bad, and where effective altruism is going wrong (Podcast)
  3. The psychologists in the race against collective stupidity and how you can join them (Problem profile)
  4. Why we must end factory farming as soon as possible - and how to do it (Podcast)
  5. Podcast with FLI: Choosing a Career to Tackle the World’s Biggest Problems with Rob Wiblin and Brenton Mayer (External)
  6. Our computers are fundamentally insecure. Here’s why that could lead to global catastrophe (Podcast)
  7. Working in government, you can have a big impact on pressing global problems. Here’s how to get started. (Career review)
  8. Podcast: You want to do as much good as possible and have billions of dollars. What do you do? (Podcast)
  9. Speeding up social science 10-fold, how to do research that’s actually useful, & why plenty of startups cause harm (Podcast)
  10. If you want to do good, here’s why future generations should be your focus (Article)
  11. Dr Cameron fought Ebola for the White House. Here's what keeps her up at night. (Podcast)
  12. We can use science to end poverty faster. But how much do governments listen to it anyway? (Podcast)
  13. Leaders survey: What are the most important talent gaps in effective altruism - and which problems are most impactful to work on? (Blog post)
  14. Going undercover to expose animal cruelty, get rabbit cages banned and reduce meat consumption (Podcast)
  15. Why you should consider applying for grad school right now (Blog post)
  16. Prof Tetlock on predicting catastrophes, why keep your politics secret, and when experts know more than you (Podcast)
  17. Why despite global progress, humanity is probably facing its most dangerous time ever (Article)
  18. Guide to effective holiday giving in 2017 (Blog post)
  19. Michelle hopes to shape the world by shaping the ideas of intellectuals. Will global priorities research succeed? (Podcast)
  20. Our descendants will probably see us as moral monsters. What should we do about that? (Podcast)
  21. Ofir Reich on using data science to end poverty and the spurious action/inaction distinction (Podcast)
  22. “It’s my job to worry about any way nukes could get used” (Podcast)
  23. Bruce Friedrich makes the case that inventing outstanding meat replacements is the most effective way to help animals (Podcast)
  24. The world’s most intellectual foundation is hiring. Holden Karnofsky, founder of GiveWell, on how philanthropy can have maximum impact by taking big risks. (Podcast)
  25. A new recommended career path for effective altruists: China specialist (Article)
  26. Yes, a career in commercial law has earning potential. We still don’t recommend it. (Career review)
  27. The non-profit that figured out how to massively cut suicide rates in Sri Lanka, and their plan to do the same around the world (Podcast)
  28. A machine learning alignment researcher on how to become a machine learning alignment researcher (Podcast)
  29. Why it’s a bad idea to break the rules, even if it’s for a good cause (Podcast)
  30. Our top 3 lessons on how not to waste your career on things that don’t change the world (Video)
  31. Why we have to lie to ourselves about why we do what we do, according to Prof Robin Hanson (Podcast)
  32. Why operations management is one of the biggest bottlenecks in effective altruism (Article)

We used to break out articles aimed at the EA community from those that weren't, but at this point there aren't enough of the latter to bother dividing them.

Enjoy! 

- The 80,000 Hours team

P.S. Here's the first in the series from last June.

Comments (5)



I like the content.

But a small terminology note: I'm not sure it's good practice to call these research pieces (and 80,000 Hours is one of the norm-setting organizations in EA).

A big portion of the pieces are podcasts, "recorded conversations between smart people". These are useful in many ways, but are they research?

In general, it seems to me that "research" is high prestige in the EA movement. This creates an incentive to label things as research, so many important things get labelled research.

So it shouldn't surprise anyone that there is, for example, a shortage of operations people.

Fair point. I thought about calling them articles... but they're definitely not all articles. I also considered 'content releases', but that felt like corporate vagueness.

I should have gone with something nobody could dispute: 32 new(ish) uniform resource locators. ;) - RW

The links to 2, 4, 6 and 15 seem broken on the 80K end; I just get 'page not found' for each.

Link 30 also doesn't work, but that's just because it starts with an unnecessary "effective-altruism.com/" before the YouTube link.

I checked and everything else seems to work.

Hi Alex, thanks. I fixed 30. 2, 4, 6 and 15 are working for me; can you email over a screenshot of the error you're getting?

Huh, weirdly they all seem to work again now. They used to take me to the same page as any invalid URL, e.g. https://80000hours.org/not-a-real-URL/
