Quick takes

saulius
3d
There seems to be a pattern where I get excited about some potential projects and ideas during an EA Global, fill in the post-conference survey saying that the conference was extremely useful for me, but then those projects never materialise for various reasons. If others relate, I worry that EA conferences are not as useful as feedback surveys suggest.
GiveWell's cost to save a life has gone from $4,500 to a range between $3,000 and $5,500: https://www.givewell.org/how-much-does-it-cost-to-save-a-life

From at least as early as December 2023 (and possibly as early as December 2021, when the page says it was first published) until February 2024, that page highlighted a $7.2 million 2020 grant to the Against Malaria Foundation at an estimated cost per life saved of $4,500. The page now highlights a $6.4 million 2023 grant to the Malaria Consortium at an estimated cost per life saved of $3,000.

You can see the estimated cost per life saved (or other relevant outcome) for all of GiveWell's grants in the spreadsheet linked to from: https://www.givewell.org/impact-estimates
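For rough intuition about what these estimates imply, here's a minimal back-of-the-envelope sketch (my own arithmetic, not GiveWell's methodology): dividing grant size by estimated cost per life saved gives the implied number of lives saved, assuming the grant performs exactly at that estimate.

```python
# Back-of-the-envelope only: implied lives saved if a grant performs
# exactly at its estimated cost per life saved (not GiveWell's actual model).

def implied_lives_saved(grant_usd: float, cost_per_life_usd: float) -> float:
    """Grant size divided by estimated cost per life saved."""
    return grant_usd / cost_per_life_usd

# 2020 AMF grant: $7.2M at ~$4,500 per life saved -> ~1,600 lives
print(implied_lives_saved(7_200_000, 4_500))  # 1600.0

# 2023 Malaria Consortium grant: $6.4M at ~$3,000 per life saved -> ~2,133 lives
print(implied_lives_saved(6_400_000, 3_000))  # ~2133.3
```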
Linch
9h
Am I the only one who finds the "X% disagree" UX confusing? It's hard not to read it and intuitively think it's an alternative weighting/aggregation/expression of the agree/disagree votes. Not sure how to change the UX to be clearer; perhaps "X% disagree with the question" would make it clearer to me.
This is one of the first times I have seen lead poisoning make front-page news: https://www.bbc.com/news/articles/cy4n7wn8l58o Could this be an opportunity for the new lead alliance, or even just one of the lead orgs, to help China do better on lead? Maybe there will be a window where they are more open to external help? It seems China has had new "maximum" lead-level regulations for paint since 2020, but there are obviously still issues...
The book "Careless People" starts as a critique of Facebook — a key EA funding source — and unexpectedly lands on AI safety, x-risk, and global institutional failure. I just finished Sarah Wynn-Williams' recently published book. I had planned to post earlier — mainly about EA’s funding sources — but after reading the surprising epilogue, I now think both the book and the author might deserve even broader attention within EA and longtermist circles. 1. The harms associated with the origins of our funding The early chapters examine the psychology and incentives behind extreme tech wealth — especially at Facebook/Meta. That made me reflect on EA’s deep reliance (although unclear how much as OllieBase helpfully pointed out after I first published this Quick Take) on money that ultimately came from: * harms to adolescent mental health, * cooperation with authoritarian regimes, * and the erosion of democracy, even in the US and Europe. These issues are not new (they weren’t to me), but the book’s specifics and firsthand insights reveal a shocking level of disregard for social responsibility — more than I thought possible from such a valuable and influential company. To be clear: I don’t think Dustin Moskovitz reflects the culture Wynn-Williams critiques. He left Facebook early and seems unusually serious about ethics. But the systems that generated that wealth — and shaped the broader tech landscape could still matter. Especially post-FTX, it feels important to stay aware of where our money comes from. Not out of guilt or purity — but because if you don't occasionally check your blind spot you might cause damage. 2. Ongoing risk from the same culture Meta is now a major player in the frontier AI race — aggressively releasing open-weight models with seemingly limited concern for cybersecurity, governance, or global risk. Some of the same dynamics described in the book — greed, recklessness, detachment — could well still be at play. And it would not be comple