
Quick takes

Me: "Well at least this study shows no association beteween painted houses and kids' blood lead levels. That's encouraging!" Wife: "Nothing you have said this morning is encouraging NIck. Everything that I've heard tells me that our pots, our containers and half of our hut are slowly poisoning our baby" Yikes touche... (Context we live in Northern Uganda) Thanks @Lead Research for Action (LeRA) for this unsettling but excellently written report. Our house is full of aluminium pots and green plastic food containers. Now to figure out what to do about it! https://drive.google.com/file/d/1pqRUeejiRCX2bXekeZnL0zGi34zbK23w/view
On alternative proteins: I think the EA community could aim to figure out how to turn animal farmers into winners if we succeed with alternative proteins. This seems to be one of the largest social risks, and it's probably something we should figure out before we scale alternative proteins significantly. Farmers are typically a small group, but they have outsized lobbying power and public sympathy.
alignment is a conversation between developers and the broader field. all domains are conversations between decision-makers and everyone else: “here are important considerations you might not have been taking into account. here is a normative prescription for you.” “thanks — i had been considering that to 𝜀 extent. i will {implement it because x / not implement it because y / implement z instead}.” these are the two roles i perceive. how does one train oneself to be the best at either? sometimes, conversations at eag center around ‘how to get a job’, whereas i feel they ought to center around ‘how to make oneself significantly better than the second-best candidate’.
Here’s a random org/project idea: hire full-time, thoughtful EA/AIS red teamers whose job is to seriously critique parts of the ecosystem — whether that’s the importance of certain interventions, movement culture, or philosophical assumptions. Think engaging with critics or adjacent thinkers (e.g., David Thorstad, Titotal, Tyler Cowen) and translating strong outside critiques into actionable internal feedback.

The key design feature would be incentives: instead of paying for generic criticism, red teamers would receive rolling “finder’s fees” for critiques judged to be high-quality, good-faith, and decision-relevant (e.g., identifying strategic blind spots, diagnosing vibe shifts that can be corrected, or clarifying philosophical cruxes that affect priorities).

Part of why I think this is important is that I have the intuition that the marginal thoughtful contrarian is often more valuable than the marginal agreer, yet most movement funding and prestige flows toward builders rather than structured internal critics. If that’s true, a standing red-team org — or at least a permanent prize mechanism — could be unusually cost-effective. There have been episodic versions of this (e.g., red-teaming contests, some longtermist critique efforts), but I’m not sure why this should come in waves rather than exist as ongoing infrastructure (an org, or just a prize pool that’s always open to sufficiently good criticisms).