Quick takes

An excerpt about the creation of PEPFAR, from "Days of Fire" by Peter Baker. I found this moving.
Posted by Mo Putera
GiveWell did their first "lookbacks" (reviews of past grants) to see whether those grants met initial expectations and what could be learned from them. (While I'm very glad they did so with their usual quality and rigor, I'm also confused about why they hadn't started earlier, given that "okay, but did we really help as much as we thought we would? Let's check" feels like such a basic M&E / ops question. I'm presumably missing something trivial here, but I also find it hard to buy "limited org capacity"-type explanations for GiveWell in particular, given the total funding they've moved, how long they've operated, their leading role in the grantmaking ecosystem, etc.)

The lookbacks led to substantial revisions vs. the original estimates: in New Incentives' case, driven by large drops in the cost per child enrolled ("we think this is due to economies of scale, efficiency efforts by New Incentives, and the devaluation of the Nigerian naira, but we haven't prioritized a deep assessment of drivers of cost changes"), and in HKI's case, driven by vitamin A deficiency rates in Nigeria being lower, and counterfactual coverage rates higher, than originally estimated.
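To make the currency-devaluation driver concrete, here is a minimal sketch in Python with made-up numbers (not GiveWell's or New Incentives' actual figures) showing how a naira devaluation alone can cut the USD cost per child enrolled, even when local-currency costs and enrollment are flat:

```python
# Toy illustration of how naira devaluation lowers USD cost per child
# enrolled. All figures below are hypothetical, not GiveWell's numbers.

def usd_cost_per_child(total_cost_ngn: float, children_enrolled: int,
                       ngn_per_usd: float) -> float:
    """Convert a local-currency program cost into USD cost per child."""
    return (total_cost_ngn / ngn_per_usd) / children_enrolled

budget_ngn = 500_000_000   # hypothetical program cost in naira
children = 50_000          # hypothetical children enrolled

# Same naira budget and enrollment, before and after a devaluation.
before = usd_cost_per_child(budget_ngn, children, ngn_per_usd=400)
after = usd_cost_per_child(budget_ngn, children, ngn_per_usd=800)

print(f"Cost per child before: ${before:.2f}")  # $25.00
print(f"Cost per child after:  ${after:.2f}")   # $12.50
```

Under these assumed numbers, halving the naira's value halves the USD cost per child without any change in program efficiency, which is why GiveWell lists the devaluation alongside economies of scale as a candidate driver.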
I notice a pattern in my conversations where someone is making a career decision: the most helpful parts are often prompted by "what are your strengths and weaknesses?" and "what kinds of work have you historically enjoyed or not enjoyed?" I can think of a couple of cases (one where I was the recipient of career advice, another where I was the advice-giver) where we were spinning our wheels, going over the same considerations, and then brought up those topics more than 20 minutes into the conversation and immediately made more progress than in the rest of the call up to that point.

Maybe this is because in EA circles people have already put a ton of thought into considerations like "which of these jobs would be more impactful conditional on me doing an 8/10 job or better in them" and "which of these is generally better for career capital (including skill development, networks, and prestige)," so personal fit is the conversational direction with the most low-hanging fruit. Another frame: this is one more case of people underrating personal fit relative to the more abstract, generally applicable characteristics of a job.
The book "Careless People" starts as a critique of Facebook — a key EA funding source — and unexpectedly lands on AI safety, x-risk, and global institutional failure. I just finished Sarah Wynn-Williams' recently published book. I had planned to post earlier — mainly about EA’s funding sources — but after reading the surprising epilogue, I now think both the book and the author might deserve even broader attention within EA and longtermist circles. 1. The harms associated with the origins of our funding The early chapters examine the psychology and incentives behind extreme tech wealth — especially at Facebook/Meta. That made me reflect on EA’s deep reliance (although unclear how much as OllieBase helpfully pointed out after I first published this Quick Take) on money that ultimately came from: * harms to adolescent mental health, * cooperation with authoritarian regimes, * and the erosion of democracy, even in the US and Europe. These issues are not new (they weren’t to me), but the book’s specifics and firsthand insights reveal a shocking level of disregard for social responsibility — more than I thought possible from such a valuable and influential company. To be clear: I don’t think Dustin Moskovitz reflects the culture Wynn-Williams critiques. He left Facebook early and seems unusually serious about ethics. But the systems that generated that wealth — and shaped the broader tech landscape could still matter. Especially post-FTX, it feels important to stay aware of where our money comes from. Not out of guilt or purity — but because if you don't occasionally check your blind spot you might cause damage. 2. Ongoing risk from the same culture Meta is now a major player in the frontier AI race — aggressively releasing open-weight models with seemingly limited concern for cybersecurity, governance, or global risk. Some of the same dynamics described in the book — greed, recklessness, detachment — could well still be at play. And it would not be comple
GiveWell's headline cost to save a life has gone from a single $4,500 figure to a range of $3,000 to $5,500: https://www.givewell.org/how-much-does-it-cost-to-save-a-life

From at least as early as December 2023 (and possibly as early as December 2021, when the page says it was first published) until February 2024, that page highlighted a $7.2 million 2020 grant to the Against Malaria Foundation at an estimated cost per life saved of $4,500. The page now highlights a $6.4 million 2023 grant to the Malaria Consortium at an estimated cost per life saved of $3,000.

You can see the estimated cost per life saved (or other relevant outcome) for all of GiveWell's grants in this spreadsheet, linked to from: https://www.givewell.org/impact-estimates
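As a back-of-the-envelope check (my arithmetic, not GiveWell's), dividing each highlighted grant's size by its estimated cost per life saved gives the implied number of lives saved:

```python
# Implied lives saved = grant amount / estimated cost per life saved.
# Grant figures are from the quick take above; the division is mine.

grants = {
    "AMF 2020 grant":                (7_200_000, 4_500),
    "Malaria Consortium 2023 grant": (6_400_000, 3_000),
}

for name, (amount_usd, cost_per_life_usd) in grants.items():
    lives = amount_usd / cost_per_life_usd
    print(f"{name}: ~{lives:,.0f} lives saved "
          f"(${amount_usd:,} at ${cost_per_life_usd:,}/life)")

# AMF 2020 grant: ~1,600 lives saved ($7,200,000 at $4,500/life)
# Malaria Consortium 2023 grant: ~2,133 lives saved ($6,400,000 at $3,000/life)
```

So the newer, smaller grant is estimated to save roughly a third more lives than the older one, which is the practical upshot of the lower cost per life saved.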