
Tobias Häberli

@ Pivotal Research
2246 karma · Bern, Switzerland

Comments: 156 · Topic contributions: 1

I’m sad to announce that I’m leaving academia.

I’m looking forward to working on AI safety.

Good point, they might know. Does anyone know a neurotypical? Or a friend of a neurotypical that I could reach out to?

Attacking people's outfit choices is a new low for the EA Forum!

THE EA FORUM TEAM COULD REALLY HELP HERE BY ADDING A 'HEADING 0' AND 'HEADING -1' FORMATTING OPTION.

Thanks for writing this!

You're describing integral altruism as broader than EA, but if I understand you correctly, it's also narrower in many ways. Some examples:

Letting go of the need to control everything and transcending the frame that we are in conflict with the natural unfolding of the universe. This also means emphasising collective action over individual heroism.

–> Effective altruism doesn't take a position on whether we are in conflict with the natural unfolding of the universe. And EAs emphasise collective action over individual heroism to varying degrees.

take radical uncertainty seriously

–> EAs already do this to varying degrees. If integral altruists take this especially seriously, they are a subset of EAs in this regard.

altruism grounded in truth rather than being driven by guilt or pride

–> EA doesn't say what your altruistic motivation should be grounded in. All of the reasons you list are considered viable (although people of course disagree about how conducive they are and to what degree they should be encouraged).


Some of the things you describe (especially the 'different ways of knowing') seem to sit further outside what is common within EA. In those respects, integral altruism does seem genuinely broader.

Overall, I'm not completely sure whether integral altruism is a way of doing effective altruism differently or a competing (though often overlapping) worldview.

Good points, thank you!

They have incredibly short AGI timelines, so per their own views, they can't afford to move slowly. If they are giving less than 5% of assets after they already claim AGI, that's a huge failure.

Do we know whether this is true for the OAF board?[1] Sam Altman is on it, and he definitely believes something along these lines, but it's less clear for the others. Here's a ChatGPT and a Claude answer on this, which point towards the others being less bullish and less concerned (but also towards a lack of public information about what they believe). I expect there to be a range of views on timelines and the transformativeness of AGI among the board members – which probably makes it more likely that their spending targets are compatible with the foundation's mission.

  1. ^

    Bret Taylor (Chair), Adam D’Angelo, Dr. Sue Desmond-Hellmann, Dr. Zico Kolter, Retired U.S. Army General Paul M. Nakasone, Adebayo Ogunlesi, Nicole Seligman, Sam Altman

It looks much nicer than the original imo. If I didn't have context, I'd probably be confused though.

Why 80,000 hours? And what is the pie chart / watch face analogy about? At first glance I'm not sure whether it's about career choice, time management, life balance, or some '5pm' metaphor.

I looked at it in this order: (1) “80,000 hours”, (2) pie chart / watch face, trying to figure it out, (3) subtitle, (4) endorsement. But the subtitle and endorsement are doing most of the work of telling me what the book is actually about and whether it’s for me.

Maybe some of this is intended to make people pick up the book and try to find answers. :)

I agree it would be bad if the OpenAI Foundation were still giving under 5% per year several years from now. But I don’t think 'they should spend 5%+ in year one' follows.

Directing billions well is really hard, especially for a new foundation. Coefficient Giving says it directed over $4 billion from 2014 to mid-2025, and that 2025 was the first year it directed more than $1 billion. Its 'endowment' is much smaller (~10x smaller?) than OAF's, but this still suggests that allocating money well at that scale is genuinely hard. I wouldn't call a new foundation planning to deploy $1 billion in its first year "conservative".

What I'd most like to see is OAF committing to aggressive, public ramp-up targets, maybe something like reaching 5% of assets annually by 2028.

No, sorry. The diamond emoji (🔸) is specifically for people who donate 10% of their earnings. 

But taking a 50% pay cut for altruistic reasons is incredibly based, so you should use the square emoji instead (🟧). It's also larger, which seems fitting.

Thanks, that's useful. I mostly agree with you, and mistakenly read the second bullet point as saying "work that opposes fascism should come from all sides of the political spectrum", which is something I agree with. I think the OP somewhat assumed that opposing fascism will look like 'work with your local anti-fascist network', but I expect much of it could look more like 'militarising Europe' (something the political left would typically oppose).
