For example, I emailed the following to a friend who'd enjoyed reading Doing Good Better and wanted to learn more about EA, but hadn't further engaged with EA or longtermism. He has a technical background and (IMO) is potentially a good fit for AI Policy work, which influenced my link selection.
The single best article I'd recommend on doing good with your career is by 80,000 Hours, a non-profit founded by the Oxford professor who wrote Doing Good Better, incubated in Y-Combinator, and dedicated to giving career advice on how to solve pressing global problems. If you'd prefer, their founder explains the ideas in this podcast episode.
If you're open to some new, more speculative ideas about what "doing good" might mean, here are a few ideas about improving the long-run future of humanity:
- Longtermism: Future people matter, and there might be lots of them, so the moral value of our actions is significantly determined by their effects on the long-term future. We should prioritize reducing "existential risks" like nuclear war, climate change, and pandemics that threaten to drive humanity to extinction, preventing the possibility of a long and beautiful future.
  - Quick intro to longtermism and existential risks from 80,000 Hours
  - Academic paper arguing that future people matter morally, and we have tractable ways to help them, from the Doing Good Better philosopher
  - Best resource on this topic: The Precipice, a book explaining what risks could drive us to extinction and how we can combat them, released earlier this year by another Oxford philosophy professor
- Artificial intelligence might transform human civilization within the next century, presenting incredible opportunities and serious potential problems
  - Elon Musk, Bill Gates, Stephen Hawking, and many leading AI researchers worry that extremely advanced AI poses an existential threat to humanity (Vox)
  - Best resource on this topic: Human Compatible, a book explaining the threats, existential and otherwise, posed by AI. Written by Stuart Russell, CS professor at UC Berkeley and author of the leading textbook on AI. Daniel Kahneman calls it "the most important book I have read in quite some time". (Or this podcast with Russell)
  - CS paper giving the technical explanation of what could go wrong (from Google/OpenAI/Berkeley/Stanford)
  - How you can help by working on US AI policy, explains 80,000 Hours
  - (AI is less morally compelling if you don't care about the long-term future. If you'd rather concentrate on the present, consider other causes: global poverty, animal welfare, grantmaking, or researching altruistic priorities.)
- Improving institutional decision-making isn't super straightforward, but could be highly impactful if successful. Altruism aside, you might enjoy Phil Tetlock's Superforecasting.
- 80,000 Hours also wrote profiles for working in climate change and nuclear war prevention, among many other things
[Then I gave some info about two near-termist causes he might like: grantmaking, by linking to GiveWell and the Open Philanthropy Project, and global poverty, by linking to GiveDirectly and other GiveWell top charities.]