@ Rethink Priorities
Working (6-15 years of experience)
14933 karma · Joined Dec 2015


"To see the world as it is, rather than as I wish it to be."

I'm a Research Manager on the General Longtermism team at Rethink Priorities. I'm primarily interested in coming up with, prioritizing, and potentially incubating the "longtermist megaprojects of the future," and secondarily interested in strategic clarity for intermediate goals EAs should aim for on a 5-15 year timescale.

I also work part-time as a Fund Manager at the Long-term Future Fund (Not to be confused with the FTX Future Fund).

People may or may not also be interested in my comments on Metaculus and Twitter, though (un)fortunately I'm now less active on both.



Clarification on commenting norms

COI disclaimers: Like many (most?) people doing direct work in EA nonprofits, I have financial COIs with the large funders in EA, in the sense that they directly or indirectly fund my work. Compared to most direct workers, I think my COI with FTX is relatively larger.


Maybe a feature to let Google Doc headers be converted automatically to EAF headers? This would be mildly useful to me, and, judging by the most common type of broken links I see from others on the forum, probably useful to others as well!

Could you give examples? Usually the arguments I see look more like "Does it really make sense to pay recent college grads $X?" or "Isn't flying out college students to international conferences kinda extravagant?" and not "the EV of this grant is too low relative to the costs."

It was a very quick lower bound. From the LT survey a few years ago, roughly 50% of influences on quality-adjusted work in longtermism were from EA sources (as opposed to individual interests, idiosyncratic non-EA influences, etc.), and of that slice, maybe half is due to things that look like EA outreach or infrastructure (as opposed to, e.g., people hammering away at object-level priorities getting noticed).

And then I think about whether I'd a) rather have all EAs except one disappear and have 4B more, or b) have 4B less but double the quality-adjusted number of people doing EA work. And I think the answer isn't very close.

Here's a 2018 article about OpenAI salaries on the top end, though it's unclear to me whether OpenAI should count as an EA org.

FWIW I agree with Charles that tech industry salaries have wide ranges and aren't very transparent, compared to EA orgs.

  1. If you look at the salary ranges posted by EA orgs (here are Rethink's ranges; here's a job posting by Open Phil), the ranges are substantially narrower. This substantially limits the room for negotiation/favoritism.
  2. The ranges are posted by the orgs themselves. In tech, the numbers are posted (often against a company's will) by employees and job candidates. This is a pretty adversarial dynamic. I'm tentatively glad EA does not have this (though I'm uncertain).

While we're nitpicking, I think "above a certain threshold, money doesn't make a difference to your life" is probably false. My best sense of the literature is that there are sharply diminishing returns, but I don't think additional money gets you zero or negative utility.[1]

But overall I thought the interview was quite good. I got the impression people really liked him. Intuitively, it must be really hard to speak honestly for 10+ minutes without saying something that goes against leftist dogma, and of course doing it live adds a lot of pressure.

  1. ^

    Now of course we don't have accurate data/studies on billionaires, so here we're basically back to intuition/priors.

To me, the main thing is to judge effectiveness by outcomes, rather than by processes or inputs.

Hi. Thanks for the constructive engagement! In my case, I downvoted because of a combination of factors: the comment came across as aggressive, I thought it was not an accurate identification of the problem, and I was worried it would promote certain bad discussion norms that are increasingly common in the left-leaning parts of the internet.

I appreciate the constructive engagement and the apology. Thanks.

I agree it's technically possible, but it seems kinda absurd to think this is likely.
