
This post is for orgs with the common problem of not getting enough applicants.

What I observe

About once a week, I talk to an EA who says “I won’t apply to this org, better-fit people will probably apply” or “I won’t apply to this org, I’m probably only slightly better than the alternative, so it’s a tiny counterfactual”.

I hypothesize the “next best” is often also not applying

Since I hear this so often, the “better fit” people my interlocutors are counting on are probably reasoning the same way and not applying either.

I hypothesize job ads often make this worse

Job ads often seem to forget that many EAs have impostor syndrome. They write things like “we only hire the best; even getting the 80th percentile instead of the 90th percentile would be so much worse for our org, which is why we aim so high!” [this is not an actual quote]

What do I recommend employers do?

  • Look at your own job ad or hiring pitch and ask yourself how a candidate with impostor syndrome would read it.
  • Share this problem with your candidates; they probably don’t know about it.
  • Encourage candidates to apply regardless of very subjective assessments of their own skill. Here’s an example of a CEA job ad which tries hard to avoid this whole problem.

Does this happen with people who I think would get the job?

Yes, totally. It doesn’t seem correlated with skill as far as I can tell.

Is this the only reason EAs don’t apply?

No, but it’s one of the big ones.

Candidates who are reading this: Am I saying you should apply even if you think you're not a good fit?

It’s a long story. I hope this or this can help.

Summary

I hope this post uncovers a situation that is hard to see from the perspective of any individual employer or candidate, and that it lets you improve your hiring.

Have a low bar for reaching out


Comments (3)

These considerations have certainly kept me from applying for any EA jobs for many years!

(I have my own EA startup now, which is probably the best of both worlds anyway, but that just means that this topic will become important for me from the other perspective.)

I’ve written about my worries here.

Basically, I feel like we’re back in 2006 before there was any EA or GiveWell, and someone gives me the advice that World Vision does good stuff and I should donate to them. It has about zero information content for me and leaves me just as ignorant about the best use of my resources as it found me. What are they trying to achieve and how? What are the alternatives? How do I compare them to each other? What criteria are important or irrelevant? How fungible are my contributions and what other activities am I leveraging?

Likewise, with jobs I’m at a complete loss. How reliable are our interview processes? What are the probability distributions around the projected-performance scores that they produce? How can a top candidate know how much their distribution overlaps with that of the runner-up and by what factor they are ahead? Maybe even more importantly: how can a top candidate know what other options the runner-up will have, so that the top candidate can decline the offer if the runner-up would otherwise go into AI capabilities (or is only an expected 2.5 rejections away from the AI capabilities fallback), or if the runner-up would otherwise have to give up on altruistic things because they’d run out of financial runway?
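
To make the overlap question concrete, here is a toy simulation; the true gap and the interview noise below are pure assumptions for illustration, not data about any real hiring process:

```python
# Toy simulation (made-up numbers): how often does a noisy interview rank two
# close candidates correctly?
import numpy as np

rng = np.random.default_rng(0)
n_trials = 100_000

true_gap = 0.2   # assumed true performance gap, in SDs of the applicant pool
noise_sd = 0.5   # assumed measurement noise of the interview process

top = rng.normal(true_gap, noise_sd, n_trials)   # scores of the truly better candidate
runner_up = rng.normal(0.0, noise_sd, n_trials)  # scores of the runner-up
print(f"P(correct ranking) = {np.mean(top > runner_up):.2f}")
# ~0.61 with these numbers: the two score distributions overlap heavily, so
# the interview picks the genuinely better candidate only ~61% of the time.
```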

80,000 Hours has several insightful posts on the differences between the best and the second-best candidate, the reliability of interview processes, the trajectory of the expected added value of additional applicants, etc. Those are super interesting and (too) often reassuring, but a decision about where I want to work for the next 5+ years is about as big for me as a decision about where to donate half a million dollars or so. So these blog posts don’t quite cut it for me. Nor do I typically know enough about an organization to be sure that I’m applying the insights correctly.

What I would find more reassuring are adversarial collaborations, research from parties that don’t have any particular stake in the hiring situation, attempts to red-team the “Hiring is largely solved” kind of view, and really strong coordination between orgs. (Here’s a starting point.)

Questionnaires tell me that I have a serious case of impostor syndrome, so I don’t trust my intuitions on these things and don’t want to write a red-teaming attempt for fear it might be infohazardous if I’m wrong on balance. Then again I thought I must be somehow wrong about optimizing for impartial impact rather than warm-fuzzies in my charity activism before I found out about EA, and now I regret not being open about that earlier.

One thing that I have going for myself is that I’m not particularly charismatic, so that if I did end up as the supposed top candidate, I could be fairly sure that I could only have gotten there by skill, by chance, or by some unknown factor. So I feel like the riskiness of jobs for me forms a Laffer curve where jobs with no other applicants are trivially safe and jobs with hundreds of good applicants are safe again because the chance factor is really unlikely.  In between be dragons.

Imma suggested reserving a large pot of donation money (and time for volunteering and coaching, and a promise to keep applying for jobs for a year, not work on AI capabilities, etc.) and then signaling that I’ll donate this pot according to the preferences of the organizations that I’m applying to if they all reject me. I can’t make the pot large enough to be really meaningful, but maybe it can serve as a tie breaker.

Thank you for sharing!


> How can a top candidate know how much their distribution overlaps with that of the runner-up and by what factor they are ahead?

My short answer is that the solution is not "sit at home alone and think about it" and the solution probably does involve "talking to the org". I predict you will sometimes get answers that will make your life very easy, like "there is no runner-up". Even if it won't be that easy, I claim it will be easier with actual data than trying to solve the theoretical problem.


> donate half a million or so [...] if they all reject me

I know of at least one EA org that would take a very good candidate over [much more than] $500k, and I expect to find a few more if I ask.


> Hiring is largely solved

What?? Who says that?


> 80,000 Hours has several insightful posts on the differences between the best and the second best candidate, reliability of interview processes, the trajectory of the expected added value of additional applicants, etc.

Would you link to the post(s) you're talking about?


> One thing that I have going for myself is that I’m not particularly charismatic, so that if I did end up as the supposed top candidate, I could be fairly sure that I could only have gotten there by skill, by chance, or by some unknown factor.

If you end up as a candidate again, I think there are things we could talk about that could help you. I'm not diving into them since you have a different hat on now.

[I feel like I don’t approach this topic as dispassionately as I usually do with epistemics, so please bear in mind this “epistemic status.”]

> Even if it won't be that easy, I claim it will be easier with actual data than trying to solve the theoretical problem.

Indeed! Though I imagine that trades off against personal biases. When you feel like you’ve won a 1:1000 lottery for your dream job while being worried about your finances, it’s hard to think objectively about whether taking the job is really the best thing impartially considered. I’d much rather stand on the shoulders of a crowd of people who are biased in many different directions and have homed in on some framework that I can just apply when I have to make such a decision.

> What?? Who says that?

Oh, sorry, not explicitly, but when I run into an important, opaque, confusing problem and most other people act like it’s not there, my mind goes to, “They must understand something that makes this a solved problem or non-issue that I don’t understand.” But of course there’s also the explanation that they’ve all concluded that someone else should solve it or that it’s too hard to solve.

Back in the day before EA, orgs that I was in touch with were also like, “The library, the clinic, and the cat yoga are all important projects, so we should split our funds evenly between them,” and I was secretly like, “Why? What about all the other projects besides these three? How do you know they’re not at least equally important? Are those three things really equally important? How do they know that? If I ask, will they hate me and our org for it? Or is it an infohazard, and if I ask, they’ll think about it and it’ll cause anomie and infighting that has much worse effects than any misallocation, especially if I’m wrong about the misallocation?”

It’s hard to say in retrospect, but I think my credence was split something like “30% they know something I don’t; 30% it’s an infohazard; 30% something else is going on; and 10% I’m right.” I failed to take into account that the “10% I’m right” branch should get much more weight because of how important it would be if it turned out to be true despite the low probability, even though, conversely, I was very concerned about the dire effects of spreading a viral infohazard.
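
To put toy numbers on that weighting (my own illustration; all figures invented):

```python
# Crude expected-value comparison of the two branches (all numbers made up):
# speaking up pays off hugely if I'm right and costs little if I'm wrong.
p_right, value_if_right = 0.10, 100   # being right reallocates a lot of value
p_wrong, cost_if_wrong = 0.90, 1      # being wrong mostly costs some awkwardness
print(p_right * value_if_right)       # 10.0
print(p_wrong * cost_if_wrong)        # 0.9 -- the unlikely branch dominates
```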

(After 1–2 years I started talking about it in private with close friends, and after 4 years, in 2014, I was completely out of the closet, when I realized that Peter Singer had beaten me to the realization by a few decades and hadn’t destroyed civilization with it.)

Now I feel like the situation is vaguely similar, and I want to at least talk about it to not repeat that mistake.

> Would you link to the post(s) you're talking about?

Sure, I just need to find them again:

https://80000hours.org/2021/05/how-much-do-people-differ-in-productivity/

If productivity is really power-law distributed, that’d be a strong reason not to worry much about this, because the top candidate is probably easy to identify (see the sketch after this list). But without having engaged much with the research, I’m worried that seeming outliers are often carried

  1. by network effects (i.e., they are a random person who got cast into the right spot at the right time and was thus super successful, but so would’ve been half of the rest of the candidates);
  2. by psychological effects (e.g., maybe almost anyone who gets cast into a leading position will become more confident, have less self-doubt, and so create more output);
  3. by skillful Goodharting of the most legible metrics, including skillful narcissism (because those are the metrics that bestow social credit and that researchers might also have to rely on when studying job performance in general), at the expense of social cohesion, collaboration, and any other qualities that are harder to attribute to someone.
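
To illustrate why the distribution’s shape matters so much here, a quick toy comparison (all parameters invented, not estimated from the 80,000 Hours post):

```python
# Toy illustration (made-up parameters): how the gap between the top and the
# typical applicant depends on whether productivity is normal or heavy-tailed.
import numpy as np

rng = np.random.default_rng(0)
n_applicants, n_rounds = 100, 10_000

def top_over_median(sampler):
    pools = sampler((n_rounds, n_applicants))
    return np.mean(pools.max(axis=1) / np.median(pools, axis=1))

# Normal-ish productivity (mean 1, modest spread, clipped to stay positive)
normal_ratio = top_over_median(lambda shape: np.clip(rng.normal(1.0, 0.2, shape), 0.01, None))
# Heavy-tailed (lognormal) productivity with the same median of 1
heavy_ratio = top_over_median(lambda shape: rng.lognormal(0.0, 1.0, shape))

print(f"normal:       top applicant is ~{normal_ratio:.1f}x the median")
print(f"heavy-tailed: top applicant is ~{heavy_ratio:.1f}x the median")
# Under the heavy-tailed assumption the top applicant dwarfs the median one,
# which is the regime where picking the right person matters most.
```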

What makes things worse, or harder to study, is that there are probably always many necessary conditions for outsized success, some of which stem from the candidate and others from the position or people the candidate ends up working with. These need to be teased apart somehow.

Brian has this interesting article about the differences in expected cost-effectiveness among the top 50% of charities. It contains a lot of very general considerations that limit the differences that we can reasonably expect between charities. Maybe a similar set of considerations applies to candidates so that it’s unlikely that there are even 10x differences between the top 50% of candidates in subjective expectation.

https://80000hours.org/2013/05/intelligence-matters-more-than-you-think-for-career-success/

With Less Wrong in 2013 maybe having an average and median IQ almost three standard deviations above the population average, and the overlap between LW and EA, it’s easy for all but about 1 in 300 people to conclude that they probably don’t need to apply for most jobs. (Not that they’d be right not to – that’s an open question imo.) Whatever crystallized skills they have that are unique and relevant to the job, the IQ 150+ people can probably pick up within a year. That’s an oversimplification, since there are smart people who will just refuse to learn something or otherwise adapt to the requirements of the situation, but it feels like it’ll apply by and large.

An org I know used an IQ test as part of its application process, but one that was only calibrated up to IQ 130. That could be an interesting data point, since my model would predict that some majority (I don’t know how to calculate it exactly) of serious applicants must’ve maxed out the score on it if the average IQ among them is in the 140 area. (By “serious” I mean to exclude people who only want to fill some quota of applications to keep receiving unemployment benefits, and similar sources of noise.)
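
For what it’s worth, the part I don’t know how to calculate becomes a one-liner if you assume the applicants’ scores are normally distributed. The mean of 140 and SD of 15 below are assumptions (a pre-filtered applicant pool probably has a tighter spread than the general population):

```python
# Back-of-the-envelope for the test-ceiling question, assuming serious
# applicants' IQs are normal with mean 140 and SD 15 (both assumptions).
from scipy.stats import norm

share_maxed_out = norm.sf(130, loc=140, scale=15)  # P(score > 130)
print(f"~{share_maxed_out:.0%} of applicants would hit the test's ceiling")
# ~75% under these assumptions -- the test can't rank most of them.
```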

What’s ironic is that as a kid I thought that my 135 score was unlikely to be accurate, because it’s a roughly 1-in-100 score, so it seemed more likely that I had gotten lucky during the test in some fashion or that the test was badly calibrated. (Related to the Optimizer’s Curse, but I didn’t know that term at the time.) Now, among people whose average IQ is 140ish, it seems perfectly plausible. Plus several more tests all came out at 133 or 135. Yay, reference class tennis! Doesn’t reduce my confusion about what to do, though.

Update: One of my worries has been that LW surveys from 2013–15ish found that the average IQ on LW was 140–143, even after quite a bit of statistical sanitization. If that carries over to EA, it makes me below-average smart in that crowd, so I should expect to be among the top candidates for a job almost never (only when it demands very rare, specific skills that I happen to have). There are complications, such as that this subgroup is itself probably not normally distributed (though I don’t know in which direction that pushes) and that the smartest people are perhaps already employed, but that’s all a bit unconvincing.

Now Scott has had some more data and ideas, and found that the average IQ is probably closer to 128 among the LW 2015 crowd. That makes me above-average smart and sign-flips this whole consideration for me.

https://80000hours.org/2019/08/how-replaceable-are-top-candidates-in-large-hiring-rounds/

This seems like a very interesting model that has made me much less worried about applications to very large hiring rounds in fields where the applicants’ plan Z is unlikely to be extremely harmful.

I wanted to play around with the models, reimplement them in Squiggle or Causal, and understand their sensitivity to the inputs better, but I never got around to it.
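
For reference, a minimal Monte Carlo sketch of the kind of model I have in mind (my own toy reconstruction with invented parameters, not the actual 80,000 Hours model):

```python
# Toy model: expected true ability of the apparently-best applicant, and the
# gap to the apparent runner-up, as the hiring round grows. The ability
# distribution and the assessment noise are both assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_rounds, noise_sd = 20_000, 0.8

for n_applicants in (5, 20, 100):
    ability = rng.normal(0.0, 1.0, (n_rounds, n_applicants))
    scores = ability + rng.normal(0.0, noise_sd, ability.shape)
    order = np.argsort(scores, axis=1)
    rows = np.arange(n_rounds)
    best = ability[rows, order[:, -1]]    # true ability of the top scorer
    second = ability[rows, order[:, -2]]  # true ability of the apparent runner-up
    print(f"n={n_applicants:3d}: top's true ability ~{best.mean():.2f} SD, "
          f"gap to runner-up ~{(best - second).mean():.2f} SD")
# The gap shrinks as the pool grows (the replaceability intuition), but its
# size is quite sensitive to the assumed assessment noise.
```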

https://80000hours.org/articles/coordination/

It’s been too long since I read this article, but it also seems very relevant!

https://80000hours.org/career-guide/personal-fit/#performance-is-hard-to-predict-ahead-of-time

This is another article (section) I referenced.

Another important input is the so-called “value drift,” which, in my experience, has nothing to do with value drift and is mostly people running out of financial runway and ending up in dead-end industry jobs that eat up all of their time until they burn out. (Sorry for the hyperbole, but I dislike the term a lot.)

More recent research indicates that it’s surprisingly low compared to what I would’ve expected. But I haven’t checked whether I trust the data to be untainted by such things as survival bias.
