
This is mostly a linkpost to a Gdoc which itself links to notes on 20 EA-relevant books (to be updated on an ongoing basis). I hope you'll find it useful! Here is the list, with links included for convenience:

Communication

Chip and Dan Heath (2007) Made to Stick: Why Some Ideas Survive and Others Die

History and International Relations

Graham T. Allison (2017) Destined for War: Can America and China Escape Thucydides’ Trap?

David Christian (2018) Origin Story: A Big History of Everything

Yuval Harari (2011) Sapiens: A Brief History of Humankind

James C. Scott (2017) Against The Grain: A Deep History of the Earliest States

Interdisciplinary (history, psychology, philosophy, policy…)

Joseph Henrich (2015) The Secret of Our Success: How Culture Is Driving Human Evolution, Domesticating Our Species, and Making Us Smarter

Joseph Henrich (2020) The WEIRDest People in the World: How the West Became Psychologically Peculiar and Particularly Prosperous

Toby Ord (2020) The Precipice: Existential Risk and the Future of Humanity

Richard Thaler and Cass Sunstein (2008) Nudge: Improving Decisions about Health, Wealth, and Happiness

Personal Growth/Efficacy/Welfare

Angela Duckworth (2016) Grit: The Power of Passion and Perseverance

Chris MacLeod (2016) The Social Skills Guidebook

Kristin Neff (2011) Self-Compassion: The Proven Power of Being Kind to Yourself

Randolph M. Nesse (2019) Good Reasons for Bad Feelings: Insights from the Frontier of Evolutionary Psychiatry

Cal Newport (2016) Deep Work: Rules for Focused Success in a Distracted World

Cal Newport (2019) Digital Minimalism

(Applied) Psychology and Rationality

Dan Ariely (2008) Predictably Irrational: The Hidden Forces That Shape Our Decisions

Dan Gardner and Philip E. Tetlock (2015) Superforecasting: The Art and Science of Prediction

Jonathan Haidt (2012) The Righteous Mind: Why Good People are Divided by Politics and Religion

Daniel Kahneman (2011) Thinking, Fast and Slow

Kevin Simler and Robin Hanson (2017) The Elephant in the Brain: Hidden Motives in Everyday Life

Comments

Hey Calvin, thanks so much for posting this. I found these notes useful, particularly going back and pulling some of the main ideas from The Precipice. 

I was wondering if you would share what your note-taking style is? Do you have your laptop next to you as you read and just take down notes when you find something interesting? Do you read first and then synthesize later? I have been trying to work out a note-taking system for my recreational reading, but haven't quite been able to find one that strikes the right balance.

[anonymous]

Hi Cam, I'm glad you found the notes useful! Most of these (with The Precipice being an exception) were notes taken from audiobooks. As I listened, I'd write down brief notes (sometimes as short as a key word or phrase) in the Notes app on my iPhone. Then, once a day or once every couple of days, I'd go back through the Notes app to jog my memory and write out the fuller version of each point in a Gdoc. Finally, when I'd finished the book, I'd organize and synthesize the Gdoc into a coherent set of notes with sections, etc.

These days I follow a similar system, but use Roam instead of Gdocs. Contrary to what some report, I don't find that Roam has significantly improved anything for me, though I do like the ability to easily link between documents. As a philosopher, I don't find that feature super useful; I think if I were, e.g., a historian, I would find it a lot more useful.
