
Eli Rose

Program Officer, Global Catastrophic Risks Capacity-Building @ Open Philanthropy
2190 karma · Working (6-15 years)

Bio

GCR capacity-building grantmaking and projects at Open Phil.

Posts (29)


Sequences (1)

Open Phil EA/LT Survey 2020

Comments (165)

It turns out, these managed hives, they're just un-bee-leave-able.

I think there's really something to this, as a critique of both EA intellectual culture & practice. Deep in our culture is a type of conservatism and a sense that if something is worth doing, one ought to be able to legibly "win a debate" against all critiques of it. I worry this chokes off innovative approaches, and misses out on the virtues of hits-based giving.

However, there is really a wide variety of activities that EAs get up to, and I think this post could be improved by deeper engagement with the many EA activities that don't fit the bednet mold.

My job is helping the world navigate the development of transformative AI without blowing up, getting taken over by AIs or small groups of humans, or generally going off the rails. The weird nature of this challenge and the lack of a long history of clearly analogous work to study mean that we fundamentally can't be too measurement-based in the way you describe (though we are certainly vulnerable to other types of pathologies). Many EAs work in this area, and my employer Open Philanthropy gives a lot to it.

An example from a very different part of EA might be Legal Impact for Chickens, currently featured on the CEA website. Though I have no special insight into this work at all, I suspect it also faces fundamental barriers to measurement, since the outcomes of legal action are much more concentrated in a few data points than the outcomes of bednet distribution.

I'm not an axiological realist, but it seems really helpful to have a term for that position; upvoted.

Broadly, and off-topic-ally, I'm confused why moral philosophers don't always distinguish between axiology (valuations of states of the world) and morality (how one ought to behave). People seem to frequently talk past each other for lack of this distinction. For example, they object to valuing a really large number of moral patients (an axiological claim) on the grounds that doing so would be too demanding (a moral claim). I first learned these terms from https://slatestarcodex.com/2017/08/28/contra-askell-on-moral-offsets/, which I recommend.

"However, some of the public stances he has taken make it difficult for grantmakers to associate themselves with him. Even if OP were otherwise very excited to fund AISC, it would be political suicide for them to do so. They can't even get away with funding university clubs."

(I lead the GCR Capacity Building team at Open Phil and have evaluated AI Safety Camp for funding in the past.)

AISC leadership's involvement in Stop AI protests was not a factor in our no-fund decision (which was made before the post you link to).

For AI safety talent programs, I think it's quite unlikely we'd consider something like "leadership involvement in protests" on its own as a significant factor in a funding decision. So I don't think the "it would be political suicide" reasoning you give here is reflective of our decision process.

I edited this post on January 21, 2025, to reflect that we are continuing to fund stipends for graduate student organizers of non-EA groups, while stopping stipends for undergraduate student organizers. I think that paying grad students for their time is less unconventional than paying undergraduates, and also that their opportunity cost is higher on average. Ignoring this distinction was an oversight in the original post.

Hey! I lead the GCRCB team at Open Philanthropy, which as part of our portfolio funds "meta EA" stuff (e.g. CEA).

I like the high-level idea here (haven't thought through the details).

We're happy to receive proposals like this for media communicating EA ideas and practices. Feel free to apply here. If you have a more early-stage idea, you can DM me on here with a short description (no need for polish) and I'll get back to you with a quick take on whether it's something we might be interested in. : )

What is the base rate for Chinese citizens saying on polls that the Chinese government should regulate X, for any X?

I thought this was interesting & forceful, and am very happy to see it in public writing.

The full letter is available here; it was recently posted online as part of this tweet thread.
