Eli Rose🔸

Program Officer, Global Catastrophic Risks Capacity-Building @ Open Philanthropy
2513 karma · Joined · Working (6-15 years)

Bio

GCR capacity-building grantmaking and projects at Open Phil.

Posts
29

Sequences
1

Open Phil EA/LT Survey 2020

Comments
189

+1 on wanting a more model-based version of this.

And +1 to you vibe coding it!

Upon seeing this, I had the same thought about vibe coding a more model-based version ... so, race you to see who gets around to it first?

I mostly donated to democracy preservation work and did some political giving. And a little to the shrimp.

Wow, awesome! Thanks for letting me know!

Thanks for writing this!!

This risk seems equal to or greater than AI takeover risk to me. Historically the EA & AIS communities have focused more on misalignment, but I'm not sure that choice has held up.

Come 2027, I'd love for it to be the case that an order of magnitude more people are usefully working on this risk. I think it will be rough going for the first 50 people in this area; I expect there's a bunch more clarificatory and scoping work to do; this is virgin territory. We need some pioneers.

People with plans in this area should feel free to apply for career transition funding from my team at Coefficient (fka Open Phil) if they think that would be helpful to them.

I'm quite excited about EAs making videos about EA principles and their applications, and I think this is an impactful thing for people to explore. It seems quite possible to do in a way that doesn't compromise on idea fidelity; I think sincerity counts for quite a lot. In many cases I think videos and other content can be lighthearted / fun / unserious and still transmit the ideas well.

I think the vast majority of people making decisions about public policy or who to vote for either aren't ethically impartial, or they're "spotlighting", as you put it. I expect the kind of bracketing I'd endorse upon reflection to look pretty different from such decision-making.

But suppose I want to know which of two candidates to vote for, and I'd like to incorporate impartial ethics into that decision. What do I do then?

That said, maybe you're thinking of this point I mentioned to you on a call.

Hmm, I don't recall this; another Eli perhaps? : )

(vibesy post)

People often want to be part of something bigger than themselves. At least for a lot of people this is pre-theoretic. Personally, I've felt this since I was little: spending my whole life satisfying the particular desires of the particular person I happened to be born into the body of seemed pointless and uninteresting.

I knew I wanted "something bigger" even when I was young (e.g. 13 years old). Around this age my dream was to be a novelist. This isn't a kind of desire people would generally call "altruistic," nor would my younger self have called it "altruistic." But it was certainly grounded in a desire for my life to mean something to other people. Stuff like the Discworld series and Watchmen really meant something to me, and I wanted to write stuff that meant something to others in the same way.

My current dreams and worldview, after ~10 years of escalating involvement with EA, seem to me to spring from the same seed. I feel continuous with my much younger self. I want my life to mean something to others: that is the obvious yardstick. I want to be doing the most I can on that front.

The empirics were the surprising part. It turns out that the "basic shape of the world" is much more mutable than my younger self thought, and in light of this my earlier dreams seem extremely unambitious. Astonishingly, I can probably:

  • save many lives over my career at minimum by donating to GiveWell, and likely more by doing things further off the beaten path
  • save <large number> of e.g. chickens from lives full of torture
  • be part of a pretty small set of people seriously trying to do something about truly wild risks from new AI and bioengineering technologies

It probably matters more to others that they are not tortured, or dying of malaria, or suffering some kind of AI catastrophe, than that there is another good book for them to read, especially given there are already a lot of good novelists. The seed of the impulse is the same — wanting to be part of something bigger, wanting to live for my effect on others and not just myself. My sense of what is truly out there in the world, and of what I can do about it, is what's changed.
