Eli Rose

EA community-building grantmaking and projects at Open Phil.

Sequences

Open Phil EA/LT Survey 2020

Comments

Mid-career people: strongly consider switching to EA work

I think this is a good question and there are a few answers to it.

One is that many of these jobs only look like they check the "improving the world" box if you have fairly unusual views. There aren't many people in the world for whom e.g. "doing research to prevent future AI systems from killing us all" tracks as an altruistic activity. It's interesting to look at this (somewhat old) estimate of how many EAs even exist.

Another is that many of the roles discussed here aren't research-y roles (e.g. the biosecurity projects require entrepreneurship, not research).

Another is that the type of research involved (when the roles are in fact research roles) is often difficult, messy, and unrewarding. AI alignment, for instance, is a pre-paradigmatic field. The problem statement has no formal definition. The objects of study (broadly superhuman AI systems) don't yet exist and therefore can't be experimented upon. In academia, expected tractability is a large factor in determining which questions, out of all the research that could be done, people try to tackle. But when you're filtering strongly for impact, as EA does, you can no longer select strongly for tractability. So it's much more likely that things will be a confusing muddle on which it's difficult to make clear progress.

reallyeli's Shortform

What I'm talking about tends to be more of an informal thing which I'm using "EMH" as a handle for. I'm talking about a mindset where, when you think of something that could be an impactful project, your next thought is "but why hasn't EA done this already?" I think this is pretty common and it's reasonably well-adapted to the larger world, but not very well-adapted to EA.

reallyeli's Shortform

EMH says that we shouldn't expect great opportunities to make money to just be "lying around," ready for anyone to take. EMH says that, if you have an amazing startup idea, you have to answer "why didn't anyone do this before?" (ofc. this is a simplification; EMH isn't really one coherent view)

One might also think that there aren't great EA projects just "lying around" ready for anyone to do. This would be an "EMH for EA." But I think it's not true.

Consider Changing Your Forum Username to Your Real Name

I changed my display name as a result of this post, thanks!

reallyeli's Shortform

There is/was a debate on LessWrong about how valid the efficient market hypothesis is. I think this is super interesting stuff, but I want to claim (with only some brief sketches of arguments here) that, regarding EA projects, the efficient market hypothesis is not at all valid (that is, I think it's a poor way to model the situation, one that will lead you to make systematically wrong judgments). I think the main reasons for this are:

  1. EA and the availability of lots of funding for it are relatively new — there's just not that much time for "market inefficiencies" to have been filled.
  2. The number of people in EA who are able to get funding for, and are excited to start, new projects is really small relative to the number of people doing this in the wider world.

reallyeli's Shortform

If you're an EA who's just about to graduate, you're very involved in the community, and most of the people you think are really cool are EAs, I think there's a decent chance you're overrating jobs at EA orgs in your job search. Per the common advice, I think most people in this position should be looking primarily at the "career capital" their first role can give them (skills, connections, resume-building, etc.) rather than the direct impact it will let them have.

At first blush it seems like this recommends you should almost never take an EA job early in your career — since jobs at EA orgs are such a small proportion of all jobs, what are the odds that such a job is optimal from a career capital perspective? I think this is wrong for a number of reasons, but it's instructive to actually run through them. One is that a job being at an EA org is correlated with it being good in other ways — e.g. with it having smart, driven colleagues that you get on well with, or with it being in a field connected to one of the world's biggest problems. Another is that some types of career capital are best gotten at EA orgs or by doing EA projects — e.g. if you want to upskill for community-building work, there's plausibly no Google or McKinsey of community-building where you can go to build that kind of career capital. (Though I do think some types of experience, like startup experience, often transfer to community-building.)

I think a good orientation to have towards this is to try your hardest, when looking at jobs as a new grad, to "wipe the slate clean" of tribal-affiliation-related considerations, and (to a large extent) of impact-related considerations, and assess mostly based on career-capital considerations.

(Context: I worked at an early-stage non-EA startup for 3 years before getting my current job at Open Phil. This was an environment where I was pushed to work really hard, take on a lot of responsibility, and produce high-quality work. I think I'd be way worse at my current job [and less likely to have gotten it] without this experience. My co-workers cared about lots of instrumental stuff EA cares about, like efficiency, good management, feedback culture, etc. I liked them a lot and was really motivated. However, this doesn't happen to everyone at every startup, and I was plausibly unusually well-suited to it or unusually lucky.)

My experience with imposter syndrome — and how to (partly) overcome it

Thanks for posting this. I found a lot of it resonant — particularly the stuff about inventing reasons to discount positive feedback, and having to pile on more and more unlikely beliefs to avoid updating to "I'm good at this."

I remember, fairly recently, taking seriously some version of "I'm not actually good at this stuff, I'm just absurdly skilled at fooling others into thinking that I am." I don't know man, it seemed like a pretty good hypothesis at the time.

Effectiveness is a Conjunction of Multipliers

One can't stack the farmed animal welfare multiplier on top of the one about giving malaria nets or the one about focusing on developing countries, right? E.g. you can't give chickens malaria nets.

It seems like that one requires 'starting from scratch' in some sense. There might be analogies to the human case (e.g. 'don't focus on pampered pets' as the analogue of 'focus on developing countries'), but they still need to be argued for.
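To put the worry in symbols (the m's are just my shorthand here, not notation from the post): the naive estimate multiplies all the factors together,

$$\text{naive total} = m_{\text{animals}} \times m_{\text{nets}} \times m_{\text{dev}} \times \cdots$$

but if the animal-welfare multiplier is exclusive with the human-side ones, you instead get something more like

$$\text{total} \approx \max\big(m_{\text{nets}} \times m_{\text{dev}} \times \cdots,\ m_{\text{animals}} \times (\text{animal-side analogues})\big),$$

which is smaller than the naive product whenever each multiplier is at least 1.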

So I think the final number should be lower. (It's still quite high, of course!)

Open Phil’s longtermist EA movement-building team is hiring

Just a reminder that the deadline for applications is this Friday, March 25th.

How does the simulation hypothesis deal with the 'problem of the dust'?

Hmm. Thanks for the example of the "pure time" mapping of t --> mental states. It's an interesting one. It reminds me of Max Tegmark's mathematical universe hypothesis at "level 4," where, as far as I understand, all possible mathematical structures are taken to "exist" equally. This isn't my current view, in part because I'm not sure what it would mean to believe this.

I think the physical dust mapping is meaningfully different from the "pure time" mapping. The dust mapping could be defined by the relationships between dust specks. E.g. at each time t, I identify each possible pairing of dust specks with a different neuron in George Soros's brain, then say "at time t+1, if a pair of dust specks is farther apart than it was at time t, the associated neuron fires; if a pair is closer together, the associated neuron does not fire."

This could conceivably fail if there aren't enough pairs of dust specks in the universe to make the numbers work out. The "pure time" mapping could never fail in that way; it would work (I think) even in an empty universe containing no dust specks. So it feels less grounded, and like an extra leap.
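To make that concrete, here's a minimal sketch of the dust mapping in code. Everything in it — the names, the particular pairing scheme, the use of 3D positions — is my own illustration rather than anything fixed by the thought experiment:

```python
import itertools
import math

def dust_mapping(specks_t, specks_t1, neurons):
    """Toy version of the dust mapping: assign each neuron a pair of dust
    specks, and say the neuron fires iff its pair moved farther apart
    between time t and t+1.

    specks_t, specks_t1: lists of (x, y, z) positions of the same dust
    specks at times t and t+1. neurons: identifiers for the neurons
    being simulated.
    """
    pairs = list(itertools.combinations(range(len(specks_t)), 2))
    # The failure mode mentioned above: not enough speck pairs to go around.
    if len(pairs) < len(neurons):
        raise ValueError("not enough dust-speck pairs for this many neurons")
    firing = {}
    for neuron, (i, j) in zip(neurons, pairs):
        was_apart = math.dist(specks_t[i], specks_t[j])
        now_apart = math.dist(specks_t1[i], specks_t1[j])
        firing[neuron] = now_apart > was_apart
    return firing
```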

...

I agree that it seems like there's something here about "how complex is the mapping." I think what we care about is the complexity of the description of the mapping, though, rather than its computational complexity. I think the George Soros mapping is pretty quick to compute once defined? All the work seems hidden in the definition — how do I know which pairs of dust specks should correspond to which neurons?
