Careers Questions Open Thread

You should take the quant role imo. Optionality is valuable (though not infinitely so), and quant trading gives you vastly more of it. If trading goes well but you leave the field after five years, you will still have gained a large amount of experience and donated/saved a large amount of capital. It's not unrealistic to aim for 500K donated and 500K+ saved in that timeframe, especially since the firms think you are unusually talented. If you have five hundred thousand dollars or more saved, you are no longer very constrained by finances. Five hundred thousand dollars is enough to stochastically save over a hundred lives. There are several high-impact EA orgs with a budget of around a million dollars a year (Rethink Priorities comes to mind). If trading goes very well you could personally fund such an org.
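The "over a hundred lives" figure is a back-of-envelope calculation. A minimal sketch, assuming a GiveWell-style cost-effectiveness figure of roughly $4,500 per life saved (that per-life cost is an assumption; the real number varies by charity and year):

```python
# Back-of-envelope: how many lives could $500K stochastically save?
# The cost-per-life figure below is an assumed GiveWell-style estimate,
# not a precise number.
savings = 500_000        # USD saved over ~5 years of trading
cost_per_life = 4_500    # assumed USD per life saved
lives_saved = savings / cost_per_life
print(round(lives_saved))  # roughly 111, i.e. "over a hundred lives"
```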

How are you going to feel if you decide to do the PhD and after five years conclude that it was not the best path? You will have left approximately a million dollars and a huge amount of earning potential on the table. You could have been free to work for no compensation if you wanted. You would have been able to bankroll a medium-sized project had you kept trading.

There are a lot of ways to massively regret turning down the quant job. It is plausible that the situation is so dire that you need to drop other paths and work on AI safety right now. But you need to be confident in a very detailed world model to justify giving up so much optionality. There are a lot of theories on how to do the most good. Stay upstream.

What is the increase in expected value of effective altruist Wayne Hsiung being mayor of Berkeley instead of its current incumbent?

I am a rather strong proponent of publishing credible accusations and calling out community leadership if they engage in abuse-enabling behavior. I published a long post on Abuse in the Rationality/EA Community. I also publicly disclosed details of a smaller incident. People have a right to know what they are getting into. If community processes are not taking abuse seriously in the absence of public pressure, then the information has to be made public. Though anyone doing this should be careful.

Several people are discussing allegations of DXE being abusive and/or a cult. I joined in early 2020. I have not personally observed or heard any credible accusations of abusive or abuse enabling behavior by the leadership of DXE during the time I have been a member. It is hard for me to know what happened in 2016 or 2017.

Given my history in the rationality community, you should trust that if I had evidence of systematic abuse within DXE that I could post, I would post it. Even if I did not have the consent of victims to share evidence, I would still publicly state that I knew of abuse. I will note it is highly plausible that DXE is acting badly behind closed doors. If this becomes clear to me I will certainly let people know.

(This is explicitly not a claim that there is no evidence I find concerning. But I think you should be quite critical of most organizations and keep your eyes open for signs of abusive behavior.)

How Dependent is the Effective Altruism Movement on Dustin Moskovitz and Cari Tuna?

Good point that Open Phil makes all donations public. I found a CSV on their site and added up the donations dated 2018/2019/2020.

2018: $190,477,938
2019: $273,279,362
2020 so far: $145,405,362
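A minimal sketch of how yearly totals like these can be reproduced from a downloaded grants CSV. The column names `Date` and `Amount` are assumptions for illustration; the actual export from Open Phil's site may use different headers:

```python
import csv
from collections import defaultdict

def totals_by_year(path):
    """Sum grant amounts by calendar year from a grants CSV.

    Assumes hypothetical columns "Date" (ISO-style, YYYY first) and
    "Amount" (e.g. "$1,000"); adjust to the real file's headers.
    Assumes whole-dollar amounts.
    """
    totals = defaultdict(int)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            year = row["Date"][:4]
            amount = int(row["Amount"].replace("$", "").replace(",", ""))
            totals[year] += amount
    return dict(totals)
```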

This is a really useful answer.

What is the increase in expected value of effective altruist Wayne Hsiung being mayor of Berkeley instead of its current incumbent?

I am a member of DXE and have interacted with Wayne. I think if you care about animals the amount of QALYs gained would be massive. In general Wayne has always seemed like a careful, if overly optimistic, thinker to me. He always tries to follow good leadership practices. Even if you are not concerned with animal welfare I think Wayne would be very effective at advancing good policies.

Wayne being mayor would result in huge improvements in climate change policy. Having a city with a genuine green policy is worth a lot of QALYs. My only real complaint about Wayne is that he is too optimistic, but that isn't the most serious flaw for a mayor.

Replaceability Concerns and Possible Responses

Why do you think orgs labelled 'effective altruist' get so much talent applying while comparable orgs without that label don't? How big do you think the difference is? I am somewhat informed about the job market in animal advocacy. It does not seem nearly as competitive as the EA market. But I am not sure how large the difference is, or how it should factor into the replaceability analysis.

Thoughts on 80,000 Hours’ research that might help with job-search frustrations

Really good article. I have been critical of 80,000 Hours in the past but this article caused me to substantially update my views. I am happy to hear you will be at 80,000 Hours.

What to do with people?

I think we are pretty far from exhausting all the good giving opportunities. And even if all the highly effective charities are fully funded, something like GiveDirectly can be scaled up. It is possible that in the future we will eventually get to the point where there are so few people in poverty that cash transfers are ineffective. But if that happens there is nothing to be sad about. The marginal value of donations will go down as more money flows into EA. That is an argument for giving more now. A future where marginal EA donations are ineffective is a very good future.

After one year of applying for EA jobs: It is really, really hard to get hired by an EA organisation
Does the high difficulty of getting a job at an EA organization mean we should stop promoting EA? (What are the EA movement's current bottlenecks?)

Promoting donations or Earning to Give seems fine. I think we should stop promoting 'EA is talent constrained'. There is a sense in which EA is 'talent constrained', but the current messaging around that phrase consistently misleads people, even very informed people such as the OP and some of the experts who gave him advice. On the other hand, EA can certainly absorb much more money. Many smaller orgs are certainly funding constrained. And at a minimum people can donate to GiveDirectly if the other giving opportunities are filled.

Simultaneous Shortage and Oversupply

At least some people at OpenAI are making a ton of money. Of course not everyone is making that much, but I doubt salaries at OpenAI/DeepMind are low. I think the obvious explanation is the best one: these companies want to hire top talent, and top talent is hard to find.

The situation is different for organizations that cannot afford high salaries. Let me link to Nate's explanation from three years ago:

I want to push back a bit against point #1 ("Let's divide problems into 'funding constrained' and 'talent constrained'."). In my experience recruiting for MIRI, these constraints are tightly intertwined. To hire talent, you need money (and to get money, you often need results, which requires talent). I think the "are they funding constrained or talent constrained?" model is incorrect, and potentially harmful. In the case of MIRI, imagine we're trying to hire a world-class researcher for $50k/year, and can't find one. Are we talent constrained, or funding constrained? (Our actual researcher salaries are higher than this, but they weren't last year, and they still aren't anywhere near competitive with industry rates.)
Furthermore, there are all sorts of things I could be doing to loosen the talent bottleneck, but only if I knew the money was going to be there. I could be setting up a researcher stewardship program, having seminars run at Berkeley and Stanford, and hiring dedicated recruiting-focused researchers who know the technical work very well and spend a lot of time practicing getting people excited -- but I can only do this if I know we're going to have the money to sustain that program alongside our core research team, and if I know we're going to have the money to make hires. If we reliably bring in only enough funding to sustain modest growth, I'm going to have a very hard time breaking the talent constraint.
And that's ignoring the opportunity costs of being under-funded, which I think are substantial. For example, at MIRI there are numerous additional programs we could be setting up, such as a visiting professor + postdoc program, or a separate team that is dedicated to working closely with all the major industry leaders, or a dedicated team that's taking a different research approach, or any number of other projects that I'd be able to start if I knew the funding would appear. All those things would lead to new and different job openings, letting us draw from a wider pool of talented people (rather than the hyper-narrow pool we currently draw from), and so this too would loosen the talent constraint -- but again, only if the funding was there. Right now, we have more trouble finding top-notch math talent excited about our approach to technical AI alignment problems than we have raising money, but don't let this fool you -- the talent constraint would be much, much easier to address with more money, and there are many things we aren't doing (for lack of funding) that I think would be high impact.

