Ozzie Gooen

12,192 karma · Berkeley, CA, USA

Bio

I'm currently researching forecasting and epistemics as part of the Quantified Uncertainty Research Institute.

Sequences
1

Ambitious Altruistic Software Efforts


~~It seems like recently (say, the last 20 years) inequality has been rising.~~ (Edited, based on comments)

Right now, the top 0.1% of wealthy people in the world are holding on to a very large amount of capital.

(I think this is connected to the fact that certain kinds of inequality have increased in the last several years, but I realize now my specific crossed-out sentence above led to a specific argument about inequality measures that I don't think is very relevant to what I'm interested in here.)

On the whole, it seems like the wealthy donate incredibly little (a median of less than 10% of their wealth), and recently they've been good at keeping their money from getting taxed.

I don't think that people are getting less moral, but I think it should be appreciated just how much power and wealth is now in the hands of the ultra-wealthy, and how little of value they are doing with it.

Every so often I discuss this issue on Facebook or other places, and I'm often surprised by how much sympathy people in my network have for these billionaires (not the most altruistic few, but the group on the whole). I suspect that a lot of this comes partly from [experience responding to many mediocre claims from the far left] and [living in an ecosystem where the wealthy class is able to subtly use its power to gain status with the intellectual class].

The top 10 known billionaires hold easily $1T between them now. I'd guess that all EA-related donations in the last 10 years have totaled less than around $10B. (GiveWell says they have helped move $2.4B.) Ten years ago, I assumed that as word got out about effective giving, many more rich people would start doing it. At this point it's looking less optimistic. I think the world has quite a bit more wealth, more key problems, and more understanding of how to deal with them than it ever had before, but this still hasn't been enough to make much of a dent in effective donation spending.

At the same time, I think it would be a mistake to assume this area is intractable. While it might not have improved much, in fairness, there has been little dedicated and smart effort to improve it. I am very familiar with programs like The Giving Pledge and Founders Pledge. While these are positive, I suspect they absorb limited total funding (<$30M/yr, for instance). They also follow one particular, highly cooperative strategy. I think most people working in this area are in positions where they need to be highly sympathetic to a lot of these people, which leaves a gap for more cynical or confrontational thinking.

I'd be curious to see a wide variety of ideas explored here.

In theory, if we could move these people from donating say 3% of their wealth to say 20%, I suspect that could unlock enormous global wins, dramatically more than anything EA has achieved so far. It doesn't even have to go to particularly effective places; even ineffective efforts could add up, if enough money is thrown at them.
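As a rough back-of-the-envelope, here's a minimal sketch of that arithmetic, using only the loose figures above (~$1T held by the top 10 billionaires, ~$10B of EA-related donations over the decade); these are my guesses, not hard data:

```python
# Back-of-the-envelope: what moving the ultra-wealthy from ~3% to ~20%
# giving could unlock. All inputs are the loose estimates from the post.
top10_wealth = 1e12      # ~$1T held by the top 10 known billionaires (rough guess)
ea_decade_total = 1e10   # ~$10B of EA-related donations over ~10 years (rough guess)

donated_at_3pct = 0.03 * top10_wealth    # ~$30B
donated_at_20pct = 0.20 * top10_wealth   # ~$200B
unlocked = donated_at_20pct - donated_at_3pct

print(f"At 3%:  ${donated_at_3pct / 1e9:.0f}B")
print(f"At 20%: ${donated_at_20pct / 1e9:.0f}B")
print(f"Difference: ${unlocked / 1e9:.0f}B "
      f"(~{unlocked / ea_decade_total:.0f}x the rough EA decade total)")
```

On these numbers, the shift would be worth roughly $170B, around 17 times the decade's total EA-related giving, and that's from just ten people.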

Of course, this would have to be done gracefully. It's easy to imagine a situation where the ultra-wealthy freak out and attack all of EA or similar. I see work to curtail factory farming as very analogous, and expect that a lot of EA work on that issue has broadly taken a sensible approach here. 

From The Economist, on "The return of inheritocracy"

> People in advanced economies stand to inherit around $6trn this year—about 10% of GDP, up from around 5% on average in a selection of rich countries during the middle of the 20th century. As a share of output, annual inheritance flows have doubled in France since the 1960s, and nearly trebled in Germany since the 1970s. Whether a young person can afford to buy a house and live in relative comfort is determined by inherited wealth nearly as much as it is by their own success at work. This shift has alarming economic and social consequences, because it imperils not just the meritocratic ideal, but capitalism itself.

> More wealth means more inheritance for baby-boomers to pass on. And because wealth is far more unequally distributed than income, a new inheritocracy is being born.

 

A bit sad to find out that Open Philanthropy’s (now Coefficient Giving) GCR Cause Prioritization team is no more. 

I heard it was removed/restructured mid-2025. Seems like most of the people were distributed to other parts of the org. I don't think there were public announcements of this, though it is quite possible I missed something. 

I imagine there must have been a bunch of other major changes around Coefficient that aren't yet well understood externally. This caught me a bit off guard. 

There don't seem to be many active online artifacts about this team, but I found this hiring post from early 2024, and this previous AMA. 

I've known and respected people on both sides of this, and have been frustrated by some of the back-and-forth on this.

On the side of the authors, I find these pieces interesting but very angsty. There's clearly some bad blood here. It reminds me a lot of meat eaters who seem to attack vegans out of irritation more than deliberate logic. [1] 

On the other, I've seen some attacks on this group on LessWrong that seemed over-the-top to me.

Sometimes grudges motivate authors to be incredibly productive, so maybe some of this can be useful.

It seems like others find these discussions useful, judging from the votes, but as of now I find it difficult to take much from them.

[1] I think there are many reasonable meat eaters out there, but there are also many who are angry/irrational about it.

Interesting analysis!

One hypothesis: animal advocacy is a frequent "second favorite" cause area. Many longtermists prefer animal work to global health, but when it comes to their own donations and career choices, they choose longtermism. This resembles voting dynamics where some candidates do well in ranked-choice but poorly in first-past-the-post.
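As a toy illustration of that dynamic: the ballot counts below are invented purely to show the mechanism, not drawn from any real survey, and I use a Borda count as the rank-aware method (instant-runoff would behave differently, since it also starts from first preferences):

```python
from collections import Counter

# Hypothetical ballots: each list ranks cause areas from most to least
# preferred. "animals" is most voters' second favorite.
ballots = (
    [["longtermism", "animals", "global_health"]] * 40
    + [["global_health", "animals", "longtermism"]] * 35
    + [["animals", "longtermism", "global_health"]] * 25
)

# First-past-the-post: only first preferences count.
fptp = Counter(b[0] for b in ballots)

# A rank-aware method (Borda count): 2 points for 1st, 1 for 2nd, 0 for 3rd.
borda = Counter()
for b in ballots:
    for points, cause in zip((2, 1, 0), b):
        borda[cause] += points

print("FPTP: ", fptp.most_common())   # animals finishes last (25 vs 40/35)
print("Borda:", borda.most_common())  # animals finishes first (125 vs 105/70)
```

Here "animals" comes last when only first choices count, but first once second preferences are weighed, which is roughly the pattern I'm gesturing at with donations and careers.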

Larks makes a good point - AI risk is also underfunded relative to survey preferences. The bigger anomaly is global health's overallocation.

My very quick guess is that it's largely founder effects, i.e., GiveWell's decade-long head start in building donor pipelines and mainstream legibility while focusing on global health.

I find this pretty exciting. Would love to see FAR-UVC become more popular, and I think this seems like a smart move to help do that. Thanks for organizing and financing! 

I'm not a marketing expert, but naively these headlines don't look great to me. 
"Veganuary champion quits to run meat-eating campaign"
"Former Veganuary champion quits to run meat-eating campaign - saying vegan dogma is 'damaging' to goal of reducing animal suffering"

I'd naively expect most readers to just read the headlines and basically assume, "I guess there are more reasons why meat is fine to eat."

I tried asking Claude (note that it uses my own custom system prompt, which might bias it) whether this campaign seemed like a good idea in the first place, and it was pretty skeptical. I'm curious whether the FarmKind team tried this, and what their/your prompt was.

I appreciate this write-up, but overall feel pretty uncomfortable about this work. To me, the issue was less that the team didn't properly discuss things with other stakeholders than that the intervention itself was risky and seemingly poor.

Quick things:
1. There are some neat actions happening, but they are often behind the scenes. Politics tends to be secretive.
2. The work I know about mostly focuses on AI safety and biosafety. There's some related work trying to limit authoritarianism in the US.
3. The funding landscape seems more challenging/complex than for other areas.

I think I'd like to see more work on a wider scope of interventions to do good via politics. But I also appreciate that there are important limitations/challenges here now. 

Good points!

>Would love to see something like this for charity ranking (if it isn't already somewhere on the site). 
I could definitely see this being done in the future.

>Don't you need a philosophy axioms layer between outputs and outcomes?
I'm nervous that this could get overwhelming quickly. I like the idea of starting with things that are clearly decision-relevant to the particular audience the website has, then expanding from there. I'm open to ideas on better / more scalable approaches!

>"governance" being a subcomponent when it's arguably more important/ can control literally everything else at the top level seems wrong. 
Thanks! I'll keep that in mind. I'd flag that this is an extremely high-level diagram, meant more to be broad and elegant than to flag which nodes are most important. Many critical things are "just subcomponents". I'd like to make further diagrams for many of the smaller nodes.

I made this simple high-level diagram of critical longtermist "root factors", "ultimate scenarios", and "ultimate outcomes", focusing on the impact of AI during the TAI transition.



This involved some adjustments to standard longtermist language.
"Accident Risk" -> "AI Takeover"
"Misuse Risk" -> "Human-Caused Catastrophe"
"Systemic Risk" -> split into a few modules, focusing on "Long-term Lock-in", which I assume is the main threat.

You can read and interact with it here, where there are (AI-generated) descriptions and pages for each node.

Curious to get any feedback! 

I'd love it if there could eventually be one or a few well-accepted, high-quality assortments like this. Right now, some of the common longtermist concepts seem fairly unorganized and messy to me.

---

Reservations:

This is an early draft. There are definitely parts I find inelegant. I've played with the final nodes instead being things like "Pre-Transition Catastrophe Risk" and "Post-Transition Expected Value", for instance. I didn't include a node for "pre-transition value"; I think this can be added later, but it would involve some complexity that didn't seem worth it at this stage. The lines between nodes were mostly generated by Claude and could use more work.

This also heavily caters to the preferences and biases of the longtermist community, specifically some of the AI safety crowd. 

Sure thing!

1. I plan to update it with new model releases. Some of this should be pretty easy - I plan to keep Sonnet up to date, and will keep an eye on other new models. 

2. I plan to at least maintain it. I expect to spend maybe a third of this year on it. I'm looking forward to seeing what usage and the response are like, and will gauge things accordingly. I think it can be pretty useful as a tool, even without a full-time-equivalent improving it. (That said, if anyone wants to help fund us, that would make this much easier!)

3. I've definitely thought about this and can prioritize it. There's a very high ceiling for how good background research can be, whether for a post or for all the claims/ideas in a post (much harder!). A simple version could be straightforward, though it wouldn't be much better than just asking Claude to do a straightforward search.
