Quick takes

Counting people is hard. Here are some readings I've come across recently on this, collected in one place for my own edification: 

  1. Oliver Kim's How Much Should We Trust Developing Country GDP? is full of sobering quotes. Here's one: "Hollowed out by years of state neglect, African statistical agencies are now often unable to conduct basic survey and sampling work... [e.g.] population figures [are] extrapolated from censuses that are decades-old". The GDP anecdotes are even more heartbreaking.
  2. Have we vastly underestimated the total number of people on Earth?
... (read more)

Good links. My favorite example is Papua New Guinea, which doubled its population estimate after a UN Population Fund review. Chapter 1 of Fernand Braudel's The Structures of Everyday Life is a good overview of the problem in historical perspective.

I think it would be really useful for there to be more public clarification on the relationship between effective altruism and Open Philanthropy. 

My impression is that:
1. OP is the large majority funder of most EA activity. 
2. Many EAs assume that OP is a highly EA organization, including at its top.
3. OP explicitly tries not to take responsibility for EA, and does not itself claim to be highly EA.
4. EAs somewhat assume that OP leaders are partially accountable to the EA community, but OP leaders would mostly disagree. 
5. From the poi... (read more)


Thanks for writing this Ozzie! :) I think lots of things about the EA community are confusing for people, especially relationships between organizations. As we are currently redesigning EA.org it might be helpful for us to add some explanation on that site. (I would be interested to hear if anyone has specific suggestions!)

From my own limited perspective (I work at CEA but don’t personally interact much with OP directly), your impression sounds about right. I guess my own view of OP is that it’s better to think of them as a funder rather than a collab... (read more)

Matrice Jacobine
Presumably https://reflectivealtruism.com/category/billionaire-philanthropy/?
AnonymousTurtle
Many people claim that Elon Musk is an EA person; @Cole Killian has an EA Forum account and mentioned effective altruism on his (now deleted) website; Luke Farritor won the Vesuvius Challenge mentioned in this post (he also allegedly wrote or reposted a tweet mentioning effective altruism, but I can't find any proof and people are skeptical)

I want to see a bargain solver for aligning AI to groups: a technical solution that would allow AI systems to solve the pie-cutting problem for groups and get them the most of what they want. The best solutions I've seen for maximizing long-run value involve using a bargain solver to decide what ASI does; this preserves the richness and cardinality of people's value functions and gives everyone as much of what they want as possible, weighted by importance. (See WWOTF Afterwards, the small literature on bargaining-theoretic approaches to... (read more)
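
For intuition, here's a minimal sketch of the kind of solver this gestures at: a Nash bargaining solution for a divisible pie, computed numerically. The agents, utility functions, and importance weights are all my own illustrative assumptions, not anything from the post or the literature it cites:

```python
# Toy Nash bargaining solver for a pie-cutting problem (illustrative only).
import numpy as np
from scipy.optimize import minimize

# Three hypothetical agents with different utilities over their share x in [0, 1];
# the weights stand in for the "importance" weighting mentioned above.
utilities = [
    lambda x: np.sqrt(x),        # risk-averse agent
    lambda x: x,                 # linear agent
    lambda x: 1 - (1 - x) ** 2,  # agent who mostly wants *some* share
]
weights = np.array([1.0, 1.0, 2.0])

def neg_log_nash_product(shares):
    # Maximizing the weighted product of utility gains (over a disagreement
    # point of 0) is equivalent to minimizing the negative weighted log-sum.
    gains = np.array([u(x) for u, x in zip(utilities, shares)])
    return -np.sum(weights * np.log(gains + 1e-12))

constraints = {"type": "eq", "fun": lambda s: np.sum(s) - 1.0}  # shares sum to 1
result = minimize(neg_log_nash_product, x0=[1 / 3] * 3,
                  bounds=[(0.0, 1.0)] * 3, constraints=constraints)
print("Nash-bargained shares:", np.round(result.x, 3))
```

The Nash solution is one standard choice here because it uses the cardinal shape of each utility function, which matches the post's point about preserving richness and cardinality.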

Parker_Whitfill
Is the alignment motivation distinct from just using AI to solve general bargaining problems? 

I don't know! It's possible that you can just solve a bargain and then align AI to that, like you can align AI to citizens assemblies. I want to be pitched.

I want to see more discussion on how EA can better diversify and have strategically-chosen distance from OP/GV.

One reason is that it seems like multiple people at OP/GV have basically said that they want this (or at least, many of the key aspects of this). 

A big challenge is that it seems very awkward for someone to talk and work on this issue, if one is employed under the OP/GV umbrella. This is a pretty clear conflict of interest. CEA is currently the main organization for "EA", but I believe CEA is majority funded by OP, with several other clear st... (read more)


Yeah I agree that funding diversification is a big challenge for EA, and I agree that OP/GV also want more funders in this space. In the last MCF, which is run by CEA, the two main themes were brand and funding, which are two of CEA’s internal priorities. (Though note that in the past year we were more focused on hiring to set strong foundations for ops/systems within CEA.) Not to say that CEA has this covered though — I'd be happy to see more work in this space overall!

Personally, I worry that funding diversification is a bit downstream of impro... (read more)

sapphire
I think it's better to start something new. Reform is hard, but no one is going to stop you from making a new charity. The EA brand isn't in the best shape. Imo the "new thing" can take money from individual EAs but shouldn't accept anything connected to OpenPhil/CEA/Dustin/etc. If you start new, you can start with a better culture.
huw
AIM seems to be doing this quite well in the GHW/AW spaces, but lacks the literal openness of the EA community-as-idea (for better or worse)

I really liked several of the past debate weeks, but I find it quite strange and plausibly counterproductive to spend a week in a public forum discussing these questions.

There is no clear upside to reducing the uncertainty on this question, because there are few interventions that are predictably differentiated along those lines.

And there is a lot of communicative downside risk when publicly discussing trading off extinction versus other risks / foregone benefits, apart from appearing out of touch with > 95% of people trying to do good in the world ("a... (read more)


Thank you for sharing your disagreements about this! :)

I would love for there to be more discussion on the Forum about how current events affect key EA priorities. I agree that those discussions can be quite valuable, and I strongly encourage people who have relevant knowledge to post about this.

I’ll re-up my ask from my Forum update post: we are a small team (Toby is our only content manager, and he doesn’t spend his full 1 FTE on the Forum) and we would love community support to make this space better:

  1. We don’t currently have the capacity to maintain
... (read more)
Toby Tremlett🔹
Thanks for engaging on this as well! I do feel the responsibility involved in setting event topics, and it's great to get constructive criticism like this. To respond to the points a bit (and this is just my view, quite quickly written because I've got a busy day today; I'm happy to come back and clarify/change my mind in another reply):

(a) Maybe, but I think the actual content of the events almost always contains some scepticism of the question itself, discussion of adjacent debates, etc. The actual topic of the event doesn't seem like a useful place to look for evidence on the community's priorities. Also, I generally run events about topics I think people aren't prioritising. However, this is the point I disagree with the least: I can see that if you are looking at the forum in a pretty low-res way, or hearing about the event from a friend, you might get the impression that 'EA cares about X now'.

(b) The Forum does appear in EA-critical pieces, but I personally don't think those pieces distinguish much between what one post on the Forum says and what the Forum team puts in a banner (and I don't think readers who lack context would distinguish between those things either). So I don't worry too much about what I'm saying in the eyes of a very adversarial journalist (there are enough words on the forum that they can probably find whatever they'd like to find anyway). To clarify, for readers and adversarial journalists alike: I still have the rule of "I don't post anything I wouldn't want to see my name attached to in public" (and I think others should too), but that's a more general rule, not just for the Forum.

(c) I'm sure that it isn't the optimum Forum week. However, (1) I do think this topic is important and potentially action-relevant: there is increasing focus on 'AI Safety', but AI Safety is a possibly vast field with a range of challenges that a career or funding could address, and the topic of this debate is potentially an important
Patrick Gruban 🔸
I think the main issue is that I was interpreting your point about the public forum's perception as a fear that people outside could see EA as weird (in a broad sense). I would be fine with this. But at the same time, I hope that people already interested in EA don't get the impression from the forum that the topics are limited. On the contrary, I would love to have many discussions here, not restricted by fear of outside perception.

Over the years I've written some posts that are relevant to this week's debate topic. I collected and summarized some of them below:

"Disappointing Futures" Might Be As Important As Existential Risks

The best possible future is much better than a "normal" future. Even if we avert extinction, we might still miss out on >99% of the potential of the future.

Is Preventing Human Extinction Good?

A list of reasons why a human-controlled future might be net positive or negative. Overall I expect it to be net positive.

On Values Spreading

Hard to summarize but this p... (read more)

It seems like "what can we actually do to make the future better (if we have a future)?" is a question that keeps on coming up for people in the debate week.

I've thought about some things related to this, and thought it might be worth pulling some of those threads together (with apologies for leaving it kind of abstract). Roughly speaking, I think that:

... (read more)
huw

The World Happiness Report 2025 is out!

Finland leads the world in happiness for the eighth year in a row, with Finns reporting an average score of 7.736 (out of 10) when asked to evaluate their lives.

Costa Rica (6th) and Mexico (10th) both enter the top 10 for the first time, while continued upward trends for countries such as Lithuania (16th), Slovenia (19th) and Czechia (20th) underline the convergence of happiness levels between Eastern, Central and Western Europe.

The United States (24th) falls to its lowest-ever position, with the United Kingdom (23rd)

... (read more)

Clueless, although there are bound to be outliers and exceptions even if we don't understand why.

huw
Here is their plot over time, from the Chapter 2 Appendix. I think these are the raw per-year scores, not the averages. I find this really baffling.

It's probably not political; the Modi government took power in 2014 and only lost its absolute majority in late 2024. The effects of COVID seem to be varied; India did relatively well in 2020 but got obliterated by the Delta variant in 2021. Equally, GDP per capita steadily increased over this time, barring a dip in 2020. Population has steadily increased, and growth has steadily decreased.

India has long had a larger residual than other countries in the WHR's happiness model; Indians are much less happy than the model would predict. Without access to the raw data, it's hard to say if Gallup's methodology has changed over this time; India is a huge and varied country, and it's hard to tell if Gallup maintained a similar sample over time.
Mo Putera
Thanks for digging up that plot; I'd been looking for annual data instead of 3-year rolling averages. Here's what the WHR says about their methodology, which seems relevant.

That Gallup website doesn't say if they've changed their methodology over time; that said, they seem to try their best to maintain a similar sample over time.

I remain as baffled as you are.
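
(As an aside on the rolling-average point, here's a quick sketch of how a 3-year rolling mean can attenuate and smear year-to-year movement. The numbers are invented for illustration, not WHR data.)

```python
# Illustration: annual scores vs a 3-year rolling mean (made-up numbers).
import pandas as pd

annual = pd.Series([4.3, 4.0, 3.8, 3.2, 4.1], index=range(2019, 2024))
rolling = annual.rolling(window=3).mean()
print(pd.DataFrame({"annual": annual, "3yr_rolling": rolling}))
# The 2022 dip is attenuated (3.2 becomes ~3.67) and lingers into 2023,
# which is why per-year scores are better for spotting when things changed.
```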

Sharing https://earec.net, semantic search for the EA + rationality ecosystem. Not fully up to date, sadly (it doesn't have the last month or so of content). The current version is basically a minimum viable product!

On the results page there's also an option to see EA Forum-only results, which allows you to sort by a weighted combination of karma and semantic similarity, thanks to the API!

The final feature to note is that there's an option to have gpt-4o-mini "manually" read through the summary of each article on the current screen of results, which will give... (read more)
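
For anyone curious, here's a minimal sketch of what a "weighted combination of karma and semantic similarity" ranking could look like. The scoring scheme, field names, and weight are my assumptions for illustration, not earec.net's actual implementation:

```python
# Hypothetical re-ranking of search results by semantic similarity + karma.
import numpy as np

def rank_results(results, query_embedding, karma_weight=0.3):
    """results: list of dicts with 'embedding' (np.ndarray) and 'karma' (int)."""
    sims = np.array([
        np.dot(r["embedding"], query_embedding)
        / (np.linalg.norm(r["embedding"]) * np.linalg.norm(query_embedding))
        for r in results
    ])
    # Log-scale and normalize karma so one 500-karma post doesn't swamp similarity.
    karma = np.log1p(np.array([max(r["karma"], 0) for r in results]))
    if karma.max() > 0:
        karma = karma / karma.max()
    scores = (1 - karma_weight) * sims + karma_weight * karma
    return [results[i] for i in np.argsort(-scores)]
```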

If you could set a hiring manager a work task for an hour or two, what would you ask them to do? In this situation you're applying for a job with them.

Have the past two months seen more harm done (edit: by the new administration) than the totality of good EA has done over its existence?

JWS 🔸
I don't really get the framing of this question. I suspect that, for any increment of time one could take through EA's existence, there would have been more 'harm' done in the rest of the world during that time. EA simply isn't big enough to counteract the moral actions of the rest of the world. Wild animals suffer horribly, people die of preventable diseases, and formal wars and violent struggles affect the lives of millions, all constantly. The sheer scale of the world outweighs EA many, many times over. So I suspect you're making a more direct comparison to Musk/DOGE/PEPFAR? But again, I feel like anyone wielding the awesome executive power of the United States Government should be expected to have larger impacts on the world than EA.

True, I was vague and should have specified the new administration. 

If people do think that this is true, as it seems obvious to you, then perhaps EA should have allocated resources differently?

I realized that the concept of utility as a uniform, singular value is pretty off-putting to me. I consider myself someone who is inherently aesthetic and needs to place myself in the broader context of society, style, and so on. I require a lot of different experiences; in some way, I need more than just happiness to reach a state of fulfillment. I need the aesthetic experience of beauty, the experience of calmness, the anxiety of looking for answers, the joy of building and designing.

The richness of everyday experience might be reducible to two dimensions, positive and negative feelings, but this really doesn't capture what a fulfilling human life is.

You might appreciate Ozy Brennan's writeup on capabilitarianism. Contrasting with most flavors of utilitarianism: 

Utilitarians maximize “utility,” which is pleasure or happiness or preference satisfaction or some more complicated thing. But all our ways of measuring utility are really quite bad. Some people use self-reported life satisfaction or happiness, but these metrics often fail to match up with common-sense notions about what makes people better off. GiveWell tends to use lives saved and increased consumption, which are fine as far as they go,

... (read more)

A good friend turned me onto The Telepathy Tapes. It presents some pretty compelling evidence that people who are neurodivergent can more easily tap into an underlying universal Consciousness. I suspect Buddhists and other Enlightened folks who spend the time and effort quieting their mind and letting go of ego and dualism can also. I'm curious what others in EA (self-identified rationalists for example) make of this…

Giving now vs giving later is, in practice, a thorny tradeoff. I think the considerations add up to roughly equal weight on each side, so my currently preferred policy is to split my donations 50-50, i.e. give 5% of my income away this year and save/invest the other 5% for a bigger donation later. (None of this is financial/tax advice! Please do your own thinking too.) A toy numerical sketch of the tradeoff appears after the list below.

In favor of giving now (including giving a constant share of your income every year/quarter/etc, or giving a bunch of your savings away soon):

  • Simplicity.
  • The effects of your donation might have compounding returns, e.g.
... (read more)
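
To make the compounding argument concrete, here's a toy comparison of the two policies. This is my own illustration; the return rates are made-up assumptions, not estimates from the post:

```python
# Toy giving-now vs giving-later comparison (illustrative assumptions only).

def give_later_multiplier(market_return: float, years: int) -> float:
    """Dollars available to donate later, per dollar invested now."""
    return (1 + market_return) ** years

def give_now_multiplier(social_return: float, years: int) -> float:
    """Rough 'social compounding' of a dollar donated today, if its effects
    (e.g. health and income gains) compound at social_return per year."""
    return (1 + social_return) ** years

years = 20
print(give_later_multiplier(0.05, years))  # ~2.65x more money to give later
print(give_now_multiplier(0.07, years))    # ~3.87x impact if effects compound faster

# Under these made-up rates, giving now wins; swap the rates and it loses.
# A 50-50 split hedges against being wrong about which regime we're in.
```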

Another one you missed is that the world is getting better over time, so we should expect donation opportunities in the future to be worse.

MichaelDickens
Another important consideration in favor of giving now—if you earn a steady income—is that your donations this year only represent a small % of your lifetime giving. In fact, if you think the giving-now arguments strongly outweigh giving-later but you expect to earn most of your income in the future, then it might make sense to borrow money to donate and repay the loans out of future income. But that's difficult in practice.

I would like to publicly set a goal not to comment on other people's posts with criticisms of minor side points that don't matter. I have a habit of doing that, but I think it's usually more annoying than helpful, so I would like to stop. If you see me doing it, feel free to call me out.

(I reserve the right to make substantive criticisms of a post's central arguments)

MichaelDickens
I think the tendency to write unconstructive criticisms (at least for me) comes from the combination of:

1. I have a strong urge to comment on anything that looks incorrect.
2. Writing substantive criticisms of a post (often) requires grokking the whole post and thinking deeply about it, which is hard. Criticizing some specific sentence is easy because my brain instantly surfaces the criticism when I read the sentence.
Joseph
I think it is admirable to strive for that. I also notice the tendency within myself to be uselessly nitpicky with "well actually"s. Recurse Center's social rules have provided some small inspiration for me: https://www.recurse.com/social-rules

I was just about to reply mentioning "well actually" as well! Strong +1 on this.

Random thought: does the idea of an explosive takeoff of intelligence assume that the alignment problem is solvable?

If the alignment problem isn't solvable, then an AGI, in creating an ASI, would face the same dilemma as humans: the ASI wouldn't necessarily share its goals, could disempower it, instrumental convergence, all the usual stuff.

I suppose one counterargument is that the AGI rationally shouldn't create ASI for these reasons but, similar to humans, might do so anyway due to competitive/racing dynamics. Whichever AGI doesn't create ASI will be left behind, etc.

Not if the AI increases intelligence via speed-ups or other methods that don't change its goals.

I've been trying to process the conversation in this thread:
https://forum.effectivealtruism.org/posts/mopsmd3JELJRyTTty/ozzie-gooen-s-shortform?commentId=o9rEBRmKoTvjNMHF7

One thing that comes to mind is that this seems like a topic a lot of people care about, and there are a lot of upvotes & agree-votes, but there also seemed to be a surprising lack of comments overall.

I've heard from others elsewhere that they were nervous about giving their takes, because it's a sensitive topic. 

Obviously I'm really curious about what, if anything, could be ... (read more)

Hey Ozzie, a few quick notes on why I react but try not to comment on community based stuff these days:

  • I try to limit how many meta-level comments I make. In general I'd like to see more object-level discussion of things, so I'm trying (with mixed success) to comment mostly on cause areas directly.
  • Partly it’s a vote for the person I’d like to be. If I talk about community stuff, part of my headspace will be thinking about it for the next few days. (I fully realize the irony of making this comment.)
  • It’s emotionally tricky since I feel responsibility for
... (read more)
Lizka

Quick (visual) note on something that seems like a confusion in the current conversation:


Others have noted similar things (e.g. Will's earlier take on total vs human extinction). You might disagree with the model (curious if so!), but I'm a bit worried that one way or another people are talking past each other (at least from skimming the discussion).

(Commenting via phone, sorry for typos or similar!)

Clarifying "Extinction"

I expect this debate week to get tripped up a lot by the term “extinction”. So here I’m going to distinguish:

  • Human extinction — the population of Homo sapiens, or members of the human lineage (including descendant species, post-humans, and human uploads), goes to 0.
  • Total extinction — the population of Earth-originating intelligent life goes to 0.

Human extinction doesn’t entail total extinction. Human extinction is compatible with: (i) AI taking over and creating a civilisation for as long as it can; (ii) non-human biological li... (read more)

Fairly strong agree -- I'm personally higher on all of (2), (3), (4) than I am on (1).

The main complication is that, among the realistic activities we can pursue, many won't correspond to just one of these; instead they'll have beneficial effects on several. But I still think it's worth asking "which is it high-priority to make plans targeting?", even if many of the best plans end up being ones that aren't so narrow as to target one to the exclusion of the others.
