Counting people is hard. Here are some readings I've come across recently on this, collected in one place for my own edification:
Good links. My favorite example is Papua New Guinea, which doubled their population estimate after a UN Population Fund review. Chapter 1 of Fernand Braudel's The Structures of Everyday Life is a good overview of the problem in historical perspective.
I think it would be really useful for there to be more public clarification on the relationship between effective altruism and Open Philanthropy.
My impression is that:
1. OP is the large majority funder of most EA activity.
2. Many EAs assume that OP is a highly EA organization, including at the leadership level.
3. OP explicitly tries not to take responsibility for EA, and does not itself claim to be highly EA.
4. EAs somewhat assume that OP leaders are partially accountable to the EA community, but OP leaders would mostly disagree.
5. From the poi...
Thanks for writing this Ozzie! :) I think lots of things about the EA community are confusing for people, especially relationships between organizations. As we are currently redesigning EA.org it might be helpful for us to add some explanation on that site. (I would be interested to hear if anyone has specific suggestions!)
From my own limited perspective (I work at CEA but don’t personally interact much with OP directly), your impression sounds about right. I guess my own view of OP is that it’s better to think of them as a funder rather than a collab...
I want to see a bargain solver for AI alignment applied to groups: a technical solution that would allow AI systems to solve the pie-cutting problem for groups and get them the most of what they want. The best solutions I've seen for maximizing long-run value involve using a bargain solver to decide what ASI does, which preserves the richness and cardinality of people's value functions and gives everyone as much of what they want as possible, weighted by importance. (See WWOTF Afterwards, the small literature on bargaining-theoretic approaches to...
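For concreteness, here's a minimal sketch of the kind of thing I have in mind: a toy Nash bargaining solver that splits a few divisible goods among groups by maximizing the product of their utilities. The utility weights, the assumption of linear utilities, and the zero disagreement point are all made up for illustration, not a claim about how a real solver would work.

```python
# Toy Nash bargaining solver: split divisible goods among groups by
# maximizing the product of their utilities (disagreement point = 0).
# The weights below are invented purely for illustration.
import numpy as np
from scipy.optimize import minimize

# Each row: one group's (cardinal) utility weights over three divisible goods.
weights = np.array([
    [0.7, 0.2, 0.1],
    [0.1, 0.6, 0.3],
    [0.2, 0.2, 0.6],
])
n_groups, n_goods = weights.shape

def neg_log_nash_product(x):
    # x holds each group's share of each good, flattened.
    shares = x.reshape(n_groups, n_goods)
    utilities = (weights * shares).sum(axis=1)
    # Maximizing the product of utilities == minimizing -sum(log(u_i)).
    return -np.sum(np.log(utilities + 1e-9))

# Each good's shares across groups must sum to 1; shares are non-negative.
constraints = [
    {"type": "eq", "fun": lambda x, g=g: x.reshape(n_groups, n_goods)[:, g].sum() - 1.0}
    for g in range(n_goods)
]
bounds = [(0.0, 1.0)] * (n_groups * n_goods)
x0 = np.full(n_groups * n_goods, 1.0 / n_groups)

result = minimize(neg_log_nash_product, x0, bounds=bounds,
                  constraints=constraints, method="SLSQP")
print(result.x.reshape(n_groups, n_goods).round(3))
```

The real problem is of course vastly harder (eliciting the value functions, handling strategic misreporting, choosing disagreement points), but the toy shows the basic shape: a solver that takes cardinal preferences in and returns an allocation everyone weighted-prefers.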
I want to see more discussion on how EA can better diversify and have strategically-chosen distance from OP/GV.
One reason is that it seems like multiple people at OP/GV have basically said that they want this (or at least, many of the key aspects of this).
A big challenge is that it seems very awkward for someone to talk and work on this issue, if one is employed under the OP/GV umbrella. This is a pretty clear conflict of interest. CEA is currently the main organization for "EA", but I believe CEA is majority funded by OP, with several other clear st...
Yeah I agree that funding diversification is a big challenge for EA, and I agree that OP/GV also want more funders in this space. In the last MCF, which is run by CEA, the two main themes were brand and funding, which are two of CEA’s internal priorities. (Though note that in the past year we were more focused on hiring to set strong foundations for ops/systems within CEA.) Not to say that CEA has this covered though — I'd be happy to see more work in this space overall!
Personally, I worry that funding diversification is a bit downstream of impro...
I really liked several of the past debate weeks, but I find it quite strange and plausibly counterproductive to spend a week in a public forum discussing these questions.
There is no clear upside to reducing the uncertainty on this question, because there are few interventions that are predictably differentiated along those lines.
And there is a lot of communicative downside risk when publicly discussing trading off extinction versus other risks / foregone benefits, apart from appearing out of touch with > 95% of people trying to do good in the world ("a...
Thank you for sharing your disagreements about this! :)
I would love for there to be more discussion on the Forum about how current events affect key EA priorities. I agree that those discussions can be quite valuable, and I strongly encourage people who have relevant knowledge to post about this.
I’ll re-up my ask from my Forum update post: we are a small team (Toby is our only content manager, and he doesn’t spend his full 1 FTE on the Forum) and we would love community support to make this space better:
Sharing this talk I gave in London last week titled "The Heavy Tail of Valence: New Strategies to Quantify and Reduce Extreme Suffering" covering aspects of these two EA Forum posts:
I welcome feedback! 🙂
Over the years I've written some posts that are relevant to this week's debate topic. I collected and summarized some of them below:
"Disappointing Futures" Might Be As Important As Existential Risks
The best possible future is much better than a "normal" future. Even if we avert extinction, we might still miss out on >99% of the potential of the future.
Is Preventing Human Extinction Good?
A list of reasons why a human-controlled future might be net positive or negative. Overall I expect it to be net positive.
Hard to summarize but this p...
It seems like "what can we actually do to make the future better (if we have a future)?" is a question that keeps on coming up for people in the debate week.
I've thought about some things related to this, and thought it might be worth pulling some of those threads together (with apologies for leaving it kind of abstract). Roughly speaking, I think that:
The World Happiness Report 2025 is out!
...Finland leads the world in happiness for the eighth year in a row, with Finns reporting an average score of 7.736 (out of 10) when asked to evaluate their lives.
Costa Rica (6th) and Mexico (10th) both enter the top 10 for the first time, while continued upward trends for countries such as Lithuania (16th), Slovenia (19th) and Czechia (20th) underline the convergence of happiness levels between Eastern, Central and Western Europe.
The United States (24th) falls to its lowest-ever position, with the United Kingdom (23rd)
Sharing https://earec.net, semantic search for the EA + rationality ecosystem. Not fully up to date, sadly (it doesn't have the last month or so of content). The current version is basically a minimum viable product!
On the results page there is also an option to see EA Forum-only results, which lets you sort by a weighted combination of karma and semantic similarity, thanks to the API!
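To give a feel for that sorting, here's a minimal sketch of one way a karma + similarity score could be combined. The log normalisation and the 0.5/0.5 weighting are my assumptions for illustration, not earec.net's actual formula.

```python
# Sketch of a combined ranking score: weighted mix of (normalised) karma
# and semantic similarity. All constants here are illustrative assumptions.
import numpy as np

def ranking_score(karma: float, similarity: float,
                  karma_weight: float = 0.5) -> float:
    # Log-scale karma so a handful of very high-karma posts don't dominate,
    # then squash to [0, 1]; similarity is assumed to already be in [0, 1].
    karma_norm = np.log1p(max(karma, 0.0)) / np.log1p(1000)
    return karma_weight * min(karma_norm, 1.0) + (1 - karma_weight) * similarity

# Example: a 200-karma post with cosine similarity 0.8 to the query.
print(round(ranking_score(200, 0.8), 3))
```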
Final feature to note is that there's an option to have gpt-4o-mini "manually" read through the summary of each article on the current screen of results, which will give...
Have the past two months seen more harm done (edit: by the new administration) than the totality of good EA has done over its existence?
I realized that the concept of utility as a uniform, singular value is pretty off-putting to me. I consider myself someone who is inherently aesthetic and needs to place myself in a broader context of society, style, and so on. I require a lot of experiences; in some way, I need more than just happiness to reach a state of fulfillment. I need the aesthetic experience of beauty, the experience of calmness, the anxiety of looking for answers, the joy of building and designing.
The richness of everyday experience might be reducible to two dimensions, positive and negative feelings, but that really doesn't capture what a fulfilling human life is.
You might appreciate Ozy Brennan's writeup on capabilitarianism. Contrasting with most flavors of utilitarianism:
...Utilitarians maximize “utility,” which is pleasure or happiness or preference satisfaction or some more complicated thing. But all our ways of measuring utility are really quite bad. Some people use self-reported life satisfaction or happiness, but these metrics often fail to match up with common-sense notions about what makes people better off. GiveWell tends to use lives saved and increased consumption, which are fine as far as they go,
A good friend turned me onto The Telepathy Tapes. It presents some pretty compelling evidence that people who are neurodivergent can more easily tap into an underlying universal Consciousness. I suspect Buddhists and other Enlightened folks who spend the time and effort quieting their mind and letting go of ego and dualism can also. I'm curious what others in EA (self-identified rationalists for example) make of this…
Giving now vs giving later, in practice, is a thorny tradeoff. I think these add up to roughly equal considerations, so my currently preferred policy is to split my donations 50-50, i.e. give 5% of my income away this year and save/invest 5% for a bigger donation later. (None of this is financial/tax advice! Please do your own thinking too.)
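As a rough illustration of what that 50-50 policy amounts to in numbers, here's a toy calculation. The income, horizon, and return rate are all made up, and this is obviously not financial advice.

```python
# Toy illustration of the 50-50 give-now / give-later split described above.
# Income, horizon, and return rate are hypothetical.
income = 80_000            # hypothetical annual income
give_now = 0.05 * income   # donated this year
invest   = 0.05 * income   # set aside for a later, larger donation

years, annual_return = 10, 0.05   # assumed horizon and real return
future_donation = invest * (1 + annual_return) ** years

print(f"Give now: ${give_now:,.0f}")
print(f"${invest:,.0f} invested grows to ~${future_donation:,.0f} after {years} years")
```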
In favor of giving now (including giving a constant share of your income every year/quarter/etc, or giving a bunch of your savings away soon):
I would like to publicly set a goal not to comment on other people's posts with criticism of some minor side point that doesn't matter. I have a habit of doing that, but I think it's usually more annoying than it is helpful, so I would like to stop. If you see me doing it, feel free to call me out.
(I reserve the right to make substantive criticisms of a post's central arguments)
Random thought: does the idea of an explosive takeoff of intelligence assume that alignment is solvable?
If the alignment problem isn’t solvable, then an AGI, in creating ASI, would face the same dilemma as humans: The ASI wouldn’t necessarily have the same goals, would disempower the AGI, instrumental convergence, all the usual stuff.
I suppose one counterargument is that the AGI rationally shouldn't create ASI for these reasons, but, similar to humans, might do so anyway due to competitive/racing dynamics. Whichever AGI doesn't create ASI will be left behind, etc.
I've been trying to process the conversation in this thread:
https://forum.effectivealtruism.org/posts/mopsmd3JELJRyTTty/ozzie-gooen-s-shortform?commentId=o9rEBRmKoTvjNMHF7
One thing that comes to mind is that this seems like a topic a lot of people care about, and there's a lot of upvotes & agreements, but there also seemed to be a surprising lack of comments, overall.
I've heard from others elsewhere that they were nervous about giving their takes, because it's a sensitive topic.
Obviously I'm really curious about what, if anything, could be ...
Hey Ozzie, a few quick notes on why I react to, but try not to comment on, community-based stuff these days:
Quick (visual) note on something that seems like a confusion in the current conversation:
Others have noted similar things (e.g. Will's earlier take on total vs human extinction). You might disagree with the model (curious if so!), but I'm a bit worried that one way or another people are talking past each other (at least from skimming the discussion).
(Commenting via phone, sorry for typos or similar!)
Clarifying "Extinction"
I expect this debate week to get tripped up a lot by the term “extinction”. So here I’m going to distinguish:
Human extinction doesn’t entail total extinction. Human extinction is compatible with: (i) AI taking over and creating a civilisation for as long as it can; (ii) non-human biological li...
Fairly strong agree -- I'm personally higher on all of (2), (3), (4) than I am on (1).
The main complication is that, among the realistic activities we can pursue, many won't correspond to just one of these; instead they'll have beneficial effects on several. But I still think it's worth asking "which is it high priority to make plans targeting?", even if many of the best plans end up being ones that aren't so narrow as to target one to the exclusion of the others.