Martin Gould's "Five insights from farm animal economics", over at Open Phil's FAW newsletter, points out that "blocking local factory farms can mean animals are farmed in worse conditions elsewhere":
...Consider the UK: Local groups celebrate blocking new chicken farms. But because UK chicken demand keeps growing — it rose 24% from 2012-2022 — the result of fewer new UK chicken farms is just that the UK imports more chicken: it almost doubled its chicken imports over the same time period. While most chicken imported into the UK comes from the EU, wh...
Zooming out, regarding other examples of altruistic mistakes that we might be making: I think there are a lot of scenarios in which banning something, or making it less appealing, in one location is intended to reduce the bad thing, but actually just ends up shifting it elsewhere, where there are even fewer regulations.
The recent pivot by 80,000 Hours to focus on AI seems (potentially) justified, but the lack of transparency and input makes me feel wary.
TL;DR:
80,000 Hours, a once cause-agnostic, broad-scope introductory resource (with career guides, career coaching, online blogs, and podcasts), has decided to focus on upskilling and producing content focused on AGI risk, AI alignment, and an AI-transformed world.
According to their post, they will still host t...
I really liked several of the past debate weeks, but I find it quite strange and plausibly counterproductive to spend a week in a public forum discussing these questions.
There is no clear upside to reducing the uncertainty on this question, because there are few interventions that are predictably differentiated along those lines.
And there is a lot of communicative downside risk when publicly discussing trading off extinction versus other risks / foregone benefits, apart from appearing out of touch with > 95% of people trying to do good in the world ("a...
Thanks for clarifying this!
I think ultimately we seem to have quite different intuitions on the trade-offs, but that seems unresolvable. Most of my intuitions there come from advising non-EA HNWs (high-net-worth individuals), and from spending time around advisors who specialize in advising them, so this is quite different from mostly advising EAs.
Yesterday, I did 7 one-hour walks with Munich EA community members. Here's what I learned and why I would recommend it to similarly extroverted community members:
This is a small appreciation post for the deep and prompt community engagement from the 80k team after their announcement of their new strategic direction.
No organization is under any obligation to respond to comments and criticisms about their strategy, and I've been impressed by the willingness of so many 80k staff members to engage in both debate and reassurance: at least 5 people from the organization have weighed in.
It has both helped me understand their decision better and made the organization feel more caring and kind than if they had just d...
There's truth there, and I would agree it's better EV to engage too. There could be many different motives: higher EV, a damage-control reaction, kindness, community building, nostalgia for those old days when they were global health people too ;).
Regardless, though, I like to frame things in more human and interpersonal terms, and will continue to do so :)
Counting people is hard. Here are some readings I've come across recently on this, collected in one place for my own edification:
I think it would be really useful for there to be more public clarification on the relationship between effective altruism and Open Philanthropy.
My impression is that:
1. OP is the large majority funder of most EA activity.
2. Many EAs assume that OP is a highly EA organization, including at the top.
3. OP quite explicitly tries not to take responsibility for EA, and does not itself claim to be highly EA.
4. EAs somewhat assume that OP leaders are partially accountable to the EA community, but OP leaders would mostly disagree.
5. From the poi...
Thanks for writing this Ozzie! :) I think lots of things about the EA community are confusing for people, especially relationships between organizations. As we are currently redesigning EA.org it might be helpful for us to add some explanation on that site. (I would be interested to hear if anyone has specific suggestions!)
From my own limited perspective (I work at CEA but don’t personally interact much with OP directly), your impression sounds about right. I guess my own view of OP is that it’s better to think of them as a funder rather than a collab...
I want to see a bargain solver for AI alignment: a technical solution that would allow AI systems to solve the pie-cutting problem for groups and get them the most of what they want. The best solutions I've seen for maximizing long-run value involve using a bargain solver to decide what ASI does, which preserves the richness and cardinality of people's value functions and gives everyone as much of what they want as possible, weighted by importance. (See WWOTF Afterwards, the small literature on bargaining-theoretic approaches to...
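For concreteness, here's a minimal sketch of one standard solution concept along these lines: the Nash bargaining solution, which maximizes the product of agents' utility gains over their disagreement point (and so depends on the cardinal structure of their utilities). Everything below (linear utilities over divisible goods, a zero disagreement point, and all the names) is my own illustrative assumption, not anything from the sources cited above.

```python
# Minimal sketch of a Nash bargaining solver: split divisible goods among
# agents to maximize the product of their utilities (equivalently, the sum
# of log-utilities). Assumes linear utilities and a zero disagreement point.
import numpy as np
from scipy.optimize import minimize

def nash_bargain(valuations: np.ndarray) -> np.ndarray:
    """valuations[i, g] = how much agent i values one unit of good g.
    Returns an allocation matrix whose columns each sum to 1."""
    n_agents, n_goods = valuations.shape

    def neg_log_nash(flat):
        alloc = flat.reshape(n_agents, n_goods)
        utils = (alloc * valuations).sum(axis=1)
        return -np.sum(np.log(np.maximum(utils, 1e-12)))

    # Each good must be fully allocated: shares across agents sum to 1.
    constraints = [
        {"type": "eq",
         "fun": lambda flat, g=g: flat.reshape(n_agents, n_goods)[:, g].sum() - 1.0}
        for g in range(n_goods)
    ]
    x0 = np.full(n_agents * n_goods, 1.0 / n_agents)  # equal split to start
    res = minimize(neg_log_nash, x0, bounds=[(0.0, 1.0)] * x0.size,
                   constraints=constraints)
    return res.x.reshape(n_agents, n_goods)

# Two agents, two goods: each agent mostly gets the good they value more.
print(nash_bargain(np.array([[3.0, 1.0], [1.0, 3.0]])).round(2))
```

Scaling this from toy allocations up to "what ASI does" is of course the hard part; the sketch is just to show why a bargaining solution preserves cardinal utility information in a way a simple vote wouldn't.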
I want to see more discussion on how EA can better diversify and have strategically-chosen distance from OP/GV.
One reason is that it seems like multiple people at OP/GV have basically said that they want this (or at least, many of the key aspects of this).
A big challenge is that it seems very awkward for someone to talk and work on this issue, if one is employed under the OP/GV umbrella. This is a pretty clear conflict of interest. CEA is currently the main organization for "EA", but I believe CEA is majority funded by OP, with several other clear st...
Yeah I agree that funding diversification is a big challenge for EA, and I agree that OP/GV also want more funders in this space. In the last MCF, which is run by CEA, the two main themes were brand and funding, which are two of CEA’s internal priorities. (Though note that in the past year we were more focused on hiring to set strong foundations for ops/systems within CEA.) Not to say that CEA has this covered though — I'd be happy to see more work in this space overall!
Personally, I worry that funding diversification is a bit downstream of impro...
Sharing this talk I gave in London last week titled "The Heavy Tail of Valence: New Strategies to Quantify and Reduce Extreme Suffering" covering aspects of these two EA Forum posts:
I welcome feedback! 🙂
Over the years I've written some posts that are relevant to this week's debate topic. I collected and summarized some of them below:
"Disappointing Futures" Might Be As Important As Existential Risks
The best possible future is much better than a "normal" future. Even if we avert extinction, we might still miss out on >99% of the potential of the future.
Is Preventing Human Extinction Good?
A list of reasons why a human-controlled future might be net positive or negative. Overall I expect it to be net positive.
Hard to summarize but this p...
It seems like "what can we actually do to make the future better (if we have a future)?" is a question that keeps on coming up for people in the debate week.
I've thought about some things related to this, and thought it might be worth pulling some of those threads together (with apologies for leaving it kind of abstract). Roughly speaking, I think that:
The World Happiness Report 2025 is out!
...Finland leads the world in happiness for the eighth year in a row, with Finns reporting an average score of 7.736 (out of 10) when asked to evaluate their lives.
Costa Rica (6th) and Mexico (10th) both enter the top 10 for the first time, while continued upward trends for countries such as Lithuania (16th), Slovenia (19th) and Czechia (20th) underline the convergence of happiness levels between Eastern, Central and Western Europe.
The United States (24th) falls to its lowest-ever position, with the United Kingdom (23rd)...
Sharing https://earec.net, semantic search for the EA + rationality ecosystem. Not fully up to date, sadly (it doesn't have the last month or so of content). The current version is basically a minimum viable product!
On the results page there is also an option to see EA Forum-only results, which allows you to sort by a weighted combination of karma and semantic similarity, thanks to the API!
Final feature to note is that there's an option to have gpt-4o-mini "manually" read through the summary of each article on the current screen of results, which will give...
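For anyone curious what "a weighted combination of karma and semantic similarity" might look like, here's a hedged sketch: the blending formula, the log-scaling of karma, and all the names below are my own assumptions, not earec.net's actual implementation.

```python
# Illustrative sketch of blending karma with semantic similarity for ranking.
# Karma is log-scaled so a few very-high-karma posts don't swamp relevance.
import math

def blended_score(karma: int, similarity: float, weight: float = 0.5) -> float:
    """weight=1.0 ranks purely by karma; weight=0.0 purely by similarity."""
    karma_term = math.log1p(max(karma, 0)) / math.log1p(1000)  # rough normalization
    return weight * karma_term + (1 - weight) * similarity

results = [
    {"title": "Post A", "karma": 250, "similarity": 0.81},
    {"title": "Post B", "karma": 40, "similarity": 0.93},
]
results.sort(key=lambda r: blended_score(r["karma"], r["similarity"]), reverse=True)
print([r["title"] for r in results])
```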
Have the past two months seen more harm done (edit: by the new administration) than the totality of good EA has done since its existence?
I realized that the concept of utility as a uniform, singular value is pretty off-putting to me. I consider myself someone who is inherently aesthetic and needs to place myself in a broader context of society, style, and so on. I require a lot of experiences; in some way, I need more than just happiness to reach a state of fulfillment. I need the aesthetic experience of beauty, the experience of calmness, the anxiety of looking for answers, the joy of building and designing.
The richness of everyday experience might be reducible to two dimensions, positive and negative feelings, but this really doesn't capture what a fulfilling human life is.
You might appreciate Ozy Brennan's writeup on capabilitarianism. Contrasting with most flavors of utilitarianism:
...Utilitarians maximize “utility,” which is pleasure or happiness or preference satisfaction or some more complicated thing. But all our ways of measuring utility are really quite bad. Some people use self-reported life satisfaction or happiness, but these metrics often fail to match up with common-sense notions about what makes people better off. GiveWell tends to use lives saved and increased consumption, which are fine as far as they go...
A good friend turned me onto The Telepathy Tapes. It presents some pretty compelling evidence that people who are neurodivergent can more easily tap into an underlying universal Consciousness. I suspect Buddhists and other Enlightened folks who spend the time and effort quieting their mind and letting go of ego and dualism can also. I'm curious what others in EA (self-identified rationalists for example) make of this…