
For those who don't know, I've worked as a therapist for the rationality and EA community for over two years now, first part time, then full time starting in early 2020. I often get asked about my observations and thoughts on what sorts of issues are particularly prevalent or unique to the community, and while any short answer would oversimplify the myriad issues I've treated, I do feel comfortable saying that "concern with impact" is a theme that runs pretty wide and deep no matter what people come to sessions to talk about.

Seeing how this plays out in various ways has motivated me to write on some aspects of it, starting with this broad generalization: rejection hurts. Specifically, rejection from a job that's considered high impact (which, for many, implicitly includes all jobs with EA organizations) hurts a lot. And I think that hurt has a negative impact that goes beyond the suffering involved.

In addition to basing this post off of my own observations, I’ve written it with the help of/on behalf of clients who have been affected by this, some of whom reviewed and commented on drafts.

I. Premises

There are a few premises that I’m taking for granted that I want to list out in case people disagree with any specific ones:

  • The EA population is growing, as are EA organizations in number and size.

This seems overall to be a very good thing.

  • In absolute numbers, EA organizations are growing slower than, or at the same pace as, the overall EA population.

Even with massive increases in funding this seems inevitable, and also probably good? There are many high impact jobs outside of EA orgs that we would want people in the community to have.

(By EA orgs I specifically mean organizations headed by and largely made up of people who self-identify as Effective Altruists, not just those using evidence-and-reason-to-do-the-most-good)

((Also there’s a world in which more people self-identify as EAs and therefore more organizations are considered EA and by that metric it’s bad that EA orgs are growing slower than overall population, but that’s also not what I mean))

  • Even with more funding being available, there will continue to be many more people applying to EA jobs than getting them.

I don’t have clear numbers for this, but asking around at a few places got me estimates between ~47-124 applications for specific positions (one of which noted that ~¾ of them were from people clearly within and familiar with the EA community), and hundreds of applications for specific grants (at least once breaking a thousand). 

This is good for the organizations and community as a whole, but has bad side effects, such as:

  • Rejection hurts, and that hurt matters.

For many people, rejection is easily accepted as part of trying new things, shooting for the moon, and challenging oneself to continually grow. 

For many others, it can be incredibly demoralizing, sometimes to the point of reducing motivation to continue even trying to do difficult things.

So when I say the hurt matters, I don’t just mean that it can cause suffering and that we should try to reduce suffering wherever we can. I also mean that as the number of EAs grows faster than the number of positions in EA orgs, the knock-on effects of rejection will slow community and org growth, particularly since:

  • The number of EAs who receive rejections from EA orgs will likely continue to grow, both absolutely and proportionally.

Hence, this article.

II. Models

There are a number of models I have for all of this that could be totally wrong. I think it’s worth spelling them out so that people can point to specific parts and let me know if they are important, or why they might not be as important as I think they are.

Difficulty in Self-Organization

First, I think it’s important to note that there are some steps already being taken to try to reduce the overall problem, such as putting out messages that people can self-organize or act independently rather than rely on the existing infrastructures.

But hearing something and internalizing it are two different things. People who become EAs will, by and large, have trouble finding similar flavors of “save the world energy” in non-EA organizations. While some may manage to find local groups, friends, coworkers, or others they can collaborate with around the ideas of using evidence and reason to tackle big problems in the world, many others won’t.

It is, on the whole, hard work. We should not be surprised if most people who try to do this fail, even with support, let alone those who try without it.

Even if we take for granted that someone isn’t looking for save-the-world energy in particular, and has a more focused goal or cause area, wanting to work with others who are interested in evidence and reason is still highly appealing. Personally, I’m unlikely to ever want to leave private practice and work for a clinic again unless it’s one that can reasonably be considered a “rationalist/EA clinic,” not just in goals but in day-to-day function.

But on top of all this, there’s something that makes the rejections those in EA face from EA organizations even “worse” than rejections faced elsewhere.

Ingroup vs Outgroup Rejection

Where you’re rejected from matters. Many people, particularly those new to EA, struggle with imposter syndrome and similar worries of inadequacy. When you see people you admire doing something amazing, it can take a lot of courage to decide to apply to join them... which can then make the rejection cut deeper.

This applies to grant funding too. It’s great that people can often see what sorts of activities have been funded in the past, but it also means that if someone’s funding request gets rejected, they will likely feel that they have been judged as lesser than all the applicants who succeeded.

As noted earlier, our current culture broadly speaks as though people should be ok with being rejected, and I agree that this is an ideal state to reach; the same way a good scientist has to be comfortable with (and, if they can manage it, even celebratory of) null results, people who want to accomplish things have to be willing to try and fail and try again.

But celebrating what we want to be and promoting it as the ideal can often make it more difficult for some to work through where they currently are. Enough rejections in a short enough period can wear people down, particularly if there’s a sense of narrow alternatives; it’s easier to shrug off a rejection from one of a hundred places you’d like to work, and harder if there are only a handful.

False Expectations

Apart from the emotional pain and risk of demotivation, there is also a separate problem that if people have false expectations, they will make suboptimal career decisions. When people think there is an impactful job waiting for them if they just learn the right skills, they take different actions than if they know how low the odds are.

If certain EA orgs are getting more funding than ever and self-describe as “talent constrained,” fewer people may decide to earn-to-give to fund a friend’s project, and fewer people will try to start their own organization. But unless the distribution of funds (specifically those funds accessible through open application processes, which is less than the total amount of money in the EA network) or new jobs increases at the same pace as applications, an even greater percentage of applicants will be rejected… and, what’s worse, the knowledge that there’s lots of funding and a talent constraint can make them feel even more dejected.

This puts organizations and grants in an understandably difficult spot, as they would ideally want lots of people to apply to jobs and grants, and not self-select out for fear of the competition or worries of inadequacy. So what can be done about it?

While we probably will not be able to eliminate the mismatch of supply and demand for labor that will lead to inevitable rejections anytime soon, I think there are likely some simple ways we can reduce harm caused by them. But first...

III. Objections/Responses

Objection 1: Something along the lines of “It’s not our job to…”

Regardless, as a consequentialist I think it’s important to note what is happening and work to improve it if possible, which I think it is. A number of people have burnt out of EA, or felt unable to continue as part of it, because of feelings of inadequacy. Other careers have lots of rejections too, but most don’t send out a strong narrative of World Saving and Heroic Responsibility that can make judgements of inadequacy more emotionally impactful than “just” the loss of a job or better salary.

Objection 2: Rejection is part of life; it's better for people to learn to accept rejection and move on.

We should all be aware of the typical mind fallacy in conversations like this. For many people a rejection is just a motivator to try harder. For others it’s a crushing indicator that they’re not good enough. When paired with a lack of feedback about why they were rejected, it can lead to confusion and despair. I have helped a number of people find it easier to take the roll-with-the-punches approach to failure, but it is not an easy thing for many, and can sometimes require a lot of time and energy. Exposure therapy first creates a safe space to face expected adversity, teaches tools to manage the harm caused, then practices with scenarios of growing difficulty to prepare for real-world encounters. We should recognize that rejection is part of life, but also strive to make real-world rejections less jarring and costly through things like forewarning, legibility, and signals of care.

Objection 3: EA orgs might not have the resources to provide detailed feedback to everyone.

This is understandable, but I feel strongly enough about the value of this that my next question is “Would it be worth hiring someone whose job is to do this?” I don’t have a good sense of the monetary value of the resulting reduction in emotional pain and burnout, improved retention of people in the community, etc., and I’m open to the possibility that it’s not worthwhile for some small organizations. But I’d be surprised if it isn't for bigger ones.

IV. Potential Solutions

It’s unlikely we’ll ever be able to solve this problem completely, and I’m definitely not in a position to know what the best solutions to this might be; I’m also sure some of the organizations in question have considered these issues and may have their own internal discussions about which would be net positive or negative.

But to help get a more public discussion going, from the perspective of someone who has spoken with a number of people about it, here are just a few things I think might mitigate the harm:

1. Be honest and open about the success rate of applications

It makes sense that EA orgs would do everything they can to get everyone to apply, especially if their bottleneck really is that one outstanding person who is just the right fit. It also makes sense given that people can be very bad at judging their own skills, so you don’t want anyone to self-select out.

One possible disclaimer I’d like to see is something like: 

“This position/funding is very competitive, but it’s our experience that people are bad at self-evaluation, so we want you to apply anyway.”

This can help applicants manage their expectations, and will have multiple other benefits:

  • Rejection will hurt less, and be more informative.
  • People with limited emotional energy for applications and rejections can better prioritize where to apply when given accurate information.
  • People will be able to make better informed career plans, e.g. not invest too much time in training for a specific job at a specific place.

Being honest and open about the success rate of applications is cheap and scales well: it takes the same effort regardless of how many applicants you get. Ideally this information should be posted along with the job ad.

2. Try to provide applicants with a sense of why they didn’t get the position

This is almost certainly easier said than done, particularly if the organization only has a few people fielding hundreds of applications. But between the enormous task of individualized feedback and a generic rejection (or silence), I believe there is space for categories of semi-automated responses; things like:

Thanks for your application! You meet all our skill and experience requirements, but we had other applicants who better suited the role. We would appreciate another application from you in the future, if/when we have a similar job opening.


Thanks for applying. Unfortunately, while you meet the skill requirements, we are looking for someone with more experience. If you would like to apply to a position like this in the future, we will be happy to reconsider your application once you have X, Y, and Z. However, keep in mind that any future positions will probably be very competitive, so having the right skills and experience is still no guarantee of getting the job.


Thank you for your interest. Unfortunately, this position requires someone with X, Y, and Z skills, which we did not find in your application.

And so on. Getting these sorts of responses can do a lot for people who might otherwise be discouraged by rejections from positions they could have gotten had a more perfect fit not come along, or, if two or more applicants were evenly matched on paper, had the proverbial coin toss gone in their favor.

It is important to acknowledge that giving honest feedback can be intensely uncomfortable. It is very easy to fall into the trap of telling ourselves that we are being nice and helpful by giving false encouragement to the people we are rejecting, when actually we are mostly protecting ourselves from discomfort. One way out of this trap, and to make the task less aversive, is to ask ourselves: “What would I want to know in their situation?” or “What would I tell a friend who trusted me to be completely honest?”

I know there’s some risk associated with this; some applicants might lash out or try to debate the decision. I’m not saying this is an easy thing to do; just that I think it’s the right one.

But I’m open to other thoughts, perspectives, and ideas, and if there’s something I’ve misunderstood about the processes involved, I’m happy to hear about them. Again, I don’t claim to know what the right answers or approaches are for these problems; I can only report what I’ve seen, share the experiences of others, and suggest what comes to mind.

Edit: Recent posts by Scott Alexander and Constance Li have extended the discussion of this effect through experiences of rejection to events like EAG.


It's a little aside from your point, but good feedback is not only useful for emotionally managing the rejection -- it's also incredibly valuable information! Consider especially that someone who is applying for a job at your organization may well apply for jobs at other organizations. Telling them what was good or bad about their application will help them improve that process, and make them more likely to find something that is the right fit for them. It could be vital in helping them understand what they need to do to position themselves to be more useful to the community, or at least it could save them the time and effort of applying for more jobs with the same requirements they didn't meet -- and save the time and effort of the hiring teams there rejecting them.

A unique characteristic of EA hiring is that it's often good for your goals to help candidates who didn't succeed at your process succeed at something else nearby. I often think we don't realize how significantly this shifts our incentives in cases like these.


In our current hiring round for EA Germany, I'm offering all 26 applicants "personal feedback on request if time allows", and I think it's probably worth my time at least trying to answer as many feedback requests as I can.

I'd encourage other EA recruiters to do the same, especially for those candidates who already did work tests. If you ask someone to spend 2h on an unpaid work test, it seems fair to set aside at least 5min for feedback.

(Sidenote: Fwiw, I think people should also seriously consider actually paying honoraria for work tests, rather than leaving them unpaid. At least for longtermist and meta EA projects, I expect that if funders would fund staff costs they'd also fund the costs for paying applicants for their time spent on applications. At least, I can say for sure that that'd be my default stance as an EAIF guest manager.)

On the topic of feedback... At Triplebyte, where I used to work as an interviewer, we would give feedback to every candidate who went through our technical phone screen. I wasn't directly involved in this, but I can share my observations -- I know some other EAs who worked at Triplebyte were more heavily involved, and maybe they can fill in details that I'm missing. My overall take is that offering feedback is a very good idea and EA orgs should at least experiment with it.

  • Offering feedback was a key selling point that allowed us to attract more applicants.

  • As an interviewer, I was supposed to be totally candid in my interview notes, and also completely avoid any feedback during the screening call itself. Someone else in the company (who wasn't necessarily a programmer) would lightly edit those notes before emailing them -- they wanted me to be 100% focused on making an accurate assessment, and leave the diplomacy to others. My takeaway is that giving feedback can likely be "outsourced" -- you can have a contractor / ops person / comms person / intern / junior employee take notes on hiring discussions, then formulate diplomatic but accurate feedback for candidates.

  • My boss told me that the vast majority of candidates appreciated our feedback. I never heard of any candidate suing us, even though we were offering feedback on an industrial scale. I think occasionally candidates got upset, but they mostly insulated me from that unless they thought it would be valuable for me to hear -- they wanted my notes to stay candid.

  • Jan writes: "when evaluating hundreds of applications, it is basically certain some errors are made, some credentials misunderstood, experiences not counted as they should, etc. - but even if the error rate is low, some people will rightfully complain, making hiring processes even more costly." I think insofar as you have low confidence in your hiring pipeline, you should definitely be communicating this to candidates, so they don't over-update on rejection. At Triplebyte, we had way more data to validate our process than I imagine any EA org has. But I believe that "our process is noisy and we know we're rejecting good candidates" was part of the standard apologetic preamble to our feedback emails. (One of the worst parts of my job was constant anxiety that I was making the wrong call and unfairly harming a good candidate's career.)

  • Relatedly... I'm in favor of orgs taking the time to give good feedback. It seems likely worthwhile as an investment in the human capital of the rejectee, the social capital of the community as a whole, and improved community retention. But I don't think feedback needs to be good to be appreciated -- especially if you make it clear if your feedback is low confidence. As a candidate, I'm often asking the question of which hoops I need to jump through in order to get a particular sort of job. If part of hoop-jumping means dealing with imperfect interviewers who aren't getting an accurate impression of my skills, I want to know that so I can demonstrate my skills better.

  • But I also think that practices that help you give good feedback are quite similar to practices that make you a good interviewer in general. If your process doesn't give candidates a solid chance to demonstrate their skills, that is something you should fix if you want to hire the best people! (And hearing from candidates whose skills were, in fact, judged inaccurately will help you fix it! BTW, I predict if you acknowledge your mistake and apologize, the candidate will get way less upset, even if you don't end up hiring them.) A few more examples to demonstrate the point that interviewing and giving feedback are similar competencies:

    • Concrete examples are very useful for feedback. And I was trained to always have at least one concrete example to back up any given assessment, to avoid collecting fuzzy overall impressions that might be due to subconscious bias. (BTW, I only saw a candidate's resume at the very end of the interview, which I think was helpful.)

    • Recording the interview (with the candidate's consent), so you can review it as needed later, is another thing that helps with both objectives. (The vast majority of Triplebyte candidates were happy to have their interview recorded.)

    • Using objective, quantifiable metrics (or standard rubrics) makes your process better, and can also give candidates valuable info on their relative strengths and weaknesses. (Obviously you want to be diplomatic, e.g. if a candidate really struggled somewhere, I think we described their skills in that area as "developing" or something. We'd also give them links to resources to help them level up on that.)

  • At Triplebyte, we offered feedback to every candidate regardless of whether they asked for it. I once suggested to my boss that we should make it opt-in, because that would decrease the time cost on our side and also avoid offending candidates who didn't actually want feedback. IIRC my boss didn't really object to that thought. It wasn't deemed a high-priority change, but I would suggest organizations creating a process from scratch make feedback opt-in.

BTW if any EA hiring managers have questions for me I'm happy to answer here, via direct message, or on a video call. I interviewed both generalist software engineers (tilted towards backend web development) and machine learning engineers.

I was one of the people who edited interview notes and sent other feedback to Triplebyte candidates; I certify that everything John said here is correct, even re: the parts of the process he wasn't directly involved in, and I endorse his takeaways. This comment is more a response to John than it is a response to the OP, but hopefully/maybe people might still find it useful.

Feedback emails were about 25% of my job. As a team, we sent maybe 50 feedback emails on an average day (not totally sure here, numbers fluctuated a lot and also it was two years ago).

One of the things that made it possible to give good feedback at scale was that Triplebyte had a well-oiled, standardized process. Every candidate took much the same interview, which meant that we could largely build our emails out of pre-existing blocks — e.g., telling a candidate that we were impressed with their code quality but they could have been faster, or mentioning specific knowledge areas where they could improve and linking to relevant resources. I doubt the same could be done at most EA orgs? Maybe if they're hiring programmers.

The process of editing interviewers' raw feedback became pretty quick and easy after a while (edit out the swearing and anything mean, change some keywords, bam), although sometimes one of us would slip up and that wasn't great, lol. So yeah, I agree that this is a job that could pretty easily be offloaded to a decent communicator who was familiar with the interview process. We did write some of our own content if we felt it was needed (e.g. writing nice things if we knew the candidate was struggling personally), and we used our judgment to tone down the harshness (e.g. if someone needed improvement in every single area we tested, we would focus on just a few areas rather than sending a super long email telling them they were bad at everything).

There was also huge variation in quality between the notes of different interviewers; some would write long, detailed, encouraging messages to the candidates, while others would write like one sentence. So if EA orgs choose to go down this road, they need to make sure to give the feedback-giver enough to work with.

Another thing is that we were explicitly assessing only candidates' technical abilities, and not whether we wanted them to join our team. That meant that all rejections were of the form "you need to brush up on X skills", and we never had to reference a candidate's personality or energy levels or whatever. That probably helps a ton re: protection from lawsuits. (I had never thought of that before, huh.)

This is great to hear and an interesting read, thank you for sharing!

I think giving feedback to rejected applicants is very useful psychologically, but is very hard to do well. Not only would it be time consuming but the key issue for me is that, at least in the United States, organizations take on considerable legal risk when explaining to applicants why they were rejected. (For example, the statement "we were looking for someone with more energy" has initiated an age discrimination suit in the US.) Even short of potential lawsuits, you also open yourself to the applicant arguing with you and asking you to reconsider their application.

If someone does give feedback to applicants, I suggest you keep it very factual and focused on specific skills. Or talk about what an ideal candidate would have had and let them infer from there where they may have fallen short. Definitely don't make opinion-based or emotion-based claims, or statements about something the candidate cannot change.

I think the best thing for applicants to keep in mind is that organizations they apply to are often looking for specific things. You may feel like organizations are trying to hire for "best researcher" but not see that they are actually hiring for "researcher who we think best fits our particular research agenda at this particular time, as evaluated by whatever hiring and selection tools we have," which is very different... it's not an indication of your overall skill or worth, just an indication of your fit at an organization. I know many people who were rejected by one organization and went on to do very well after being hired by a very similar organization.

(Note that this entire comment of mine is heavily adapted from views shared to me by Abraham Rowe.)

We've discussed this internally, but I want to register that I continue to think that while there are considerable costs and risks to organizations in giving feedback, there are also considerable benefits to individuals from precise, actionable feedback, and the case has not been adequately made that the revealed preferences of orgs are anywhere close to altruistically optimal.

In particular, I also have not seen much evidence that the legal risks are actually considerable in EV terms compared to either the org time costs of giving feedback or the individual practical benefits of receiving feedback.

(all views my view, of course)

Hiya -- EA lawyer here. While the US legal system is generally a mess and you can find examples of people suing for all sorts of stuff, I think the risk of giving honest feedback (especially when presented with ordinary sensitivity to people you believe to be average-or-better-intentioned) is minimal. I'd be very surprised if it contributed significantly to the bottom-line evaluation here, and would be interested to speak to any lawyer who disagreed about their reasons for doing so.

Yeah we can look into changing our policy

Really appreciate this!

Yeah we can discuss this a bit more, in particular if it looks like we studied it and it's actually too time-consuming/risky or if it's too expensive or time-consuming to do the legal research to figure out whether it's too risky, I'm happy to continue to abide by the current policy! Just want to make sure the policy is evidence-based or at least based on evidence that being evidence-based is too hard! 

Note that there is a mechanistic way of solving this by offering insurance in the case of being sued for discrimination. 

Insurance seems like a fairly poor tool here, since there's a significant moral hazard effect (insurance makes people less careful about taking steps to minimize exposure), which could lead to dynamics where the price goes really high and then only the people who are most likely to attract lawsuits still take the insurance ...

Actually if there were a market in this I'd expect the insurers as condition of cover to demand legible steps to reduce exposure ... like not giving feedback to unsuccessful applicants.

The moral hazard effect might be reduced if insurance is approved for one proposed hiring methodology, rather than without conditions.

I'm also not thinking of it as a general market, but rather as an intervention that a deep-pocketed organization (realistically OpenPhil) could offer smaller (and thus perhaps more risk-averse) organizations.

The standard two arguments for the asymmetric risks of offering insurance are adverse selection and moral hazard. Note that these claims risk being a fully general argument: all else equal, they are an argument against all insurance (and indeed are taught that way in econ textbooks), but insurance companies can just increase rates to adjust for this.

I'm sure this is also covered in the literature, but having an organization with deep pockets cover the insurance also makes your org a juicier target for lawsuits than usual, in addition to the standard arguments. 

Hmm, insurance is only a good solution if the expected costs are low relative to the benefits but the variance is high and you don't want to be exposed to that risk. Insurance is not a good solution if the expected costs are sufficiently high.

Though that said, one issue is that orgs are insufficiently capable of individually assessing risks, so if a centralized body can estimate the relevant risks and price them accordingly, orgs can decide for themselves whether it's worth it.

Hey there, 

I agree with your main point that rejection is painful, has negative effects on the culture, and we should think about how to minimise it. 

But I wanted to add that in my post about whether EA is growing, I estimate that the number of people employed in EA orgs and the number of engaged EAs have both been growing at around 20% p.a. since 2015.

If anything, in the last 1-2 years I'd guess that the number of jobs has been growing faster than the total number of people.

There was a period maybe around 2015-2018 when the number of people was more likely to have been growing faster than the number of jobs, but I don't think that's happening right now.

Hey, thanks for the comment! Just to clarify, because I may be too sleep deprived to track what you're saying... I read that as proportional by percent rather than by absolute numbers, right?

So if roughly 900 new people per year are considered engaged enough to count as part of the community, ~20% of that and ~20% of 650 would still leave a growing absolute number of people in the community not working EA jobs, and even a ~30% or ~40% increase in jobs would too.

(Again, not to say that this is bad necessarily, and as you noted there's also people who were funded by grants or doing research or similar)
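To make the arithmetic concrete, here's a minimal sketch. The starting sizes and growth rates are illustrative assumptions loosely based on the figures discussed in this thread (~650 EA-org jobs, a hypothetical community of 4,500, both growing ~20% per year), not verified data:

```python
def project_gap(community, jobs, growth_rate, years):
    """Grow both populations at the same proportional rate and
    return the absolute gap (people without EA-org jobs) per year."""
    gaps = []
    for _ in range(years):
        community *= 1 + growth_rate
        jobs *= 1 + growth_rate
        gaps.append(round(community - jobs))
    return gaps

# Illustrative numbers only, drawn from the thread above.
print(project_gap(community=4500, jobs=650, growth_rate=0.20, years=5))
```

Even with both growing at exactly the same proportional rate, the absolute gap widens every year (from roughly 4,600 to roughly 9,600 people over five years in this toy example), which is the dynamic being pointed at here.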

Yes, my figures were proportional rather than absolute.

I was mainly responding to:

  • EA organizations are growing slower or at pace with the overall EA population

This sounds like a proportional claim to me. My take is they're growing at the same pace as, or faster than, the overall EA population.

It's true that if they both grow the same proportionally, the absolute number of people not able to get jobs will grow. It's less obvious to me something is 'going wrong' if they both grow at the same rate, though it's true that the bigger the community, the more important it is to think about culture.

Ah, yeah that wasn't intended as my meaning. Will edit :)

I mostly agree with the problem statement.

With the proposed solution of giving people feedback: I've proposed this on various occasions, and from what I have heard, one reason organizations don't give feedback is something like "feedback opens up space for complaints, drama on social media, or even litigation". The problem looks very different from the org's side: when evaluating hundreds of applications, it is basically certain some errors are made, some credentials misunderstood, some experience not counted as it should be. Even if the error rate is low, some people will rightfully complain, making hiring processes even more costly. Another question is the likelihood of someone among the hundreds of applicants you don't know doing something bad with the feedback, ranging from "taking it too seriously" to "suing the org for discrimination". (The problem is more likely to come from non-EA applicants.)

I'm not saying this is the right solution, but it seems like a reasonable consideration.

One practical workaround: if you really want feedback, and ideally know someone in the org, what sometimes works is asking informally and signaling that you won't do anything very unreasonable with the feedback.

My vague understanding is that there's likely no legal issues with giving feedback as long as it's impartial. It's instead one of those things where lawyers reasonably advise against doing anything not required since literally anything you do exposes you to risk. Of course you could give feedback that would obviously land you in trouble, e.g. "we didn't hire you because you're [ethnicity]/[gender]/[physical attribute]", but I think most people are smart enough to give feedback of the form "we didn't hire you because legible reason X".

And it's quickly becoming legally the case that you can request not just feedback but all notes people took about you during the hiring process! Many companies use digital systems to keep notes on candidates, and the data in those systems is covered by GDPR, so candidates can make requests for data potential employers have about them in those systems (or so is my understanding; see for example this article for corroboration). Doesn't apply in the US, but does in the UK and EU.

There are a bunch of illegible factors involved in hiring the right person, though. If the reason for rejection is something like "we think you'd be a bad culture fit," then it seems legally risky to be honest.

True, but what you can do is have explicit values that you publicize, and then ask candidates questions that assess how much they support/embody those values. Then you can reasonably say "rejected candidate because they didn't demonstrate value X" and have notes to back it up, or say "rejected because demonstrated ~X". This is harder feedback for candidates to hear, especially if X is something positive that everyone thinks they are, like "hard working", but at the same time it should be made clear that this isn't about what's true about the candidate, only what could be determined from their interview performance.

Yeah, this seems a hard problem to do well and safely from an organizational standpoint. I'm very sympathetic to the idea that it is an onerous cost on the organization's side; what I'm uncertain about is whether it ends up being more beneficial to the community on net.

I'm unfamiliar with EA orgs' interview processes, so I'm not sure whether you're talking about lack of feedback when someone fails an interview, or when someone's application is rejected before doing any interviews. It's really important to differentiate these, because providing feedback on someone's initial application is a massively harder problem:

  • There are many more applicants (Wave rejects over 50% of applications without speaking to them and this is based on a relatively loose filter)
  • Candidates haven't interacted with a human yet, so are more likely to be upset or have an overall bad experience with the org; this is also exacerbated by having to make the feedback generic due to scale
  • The relative cost of rejecting with vs. without feedback is higher (rejecting without feedback takes seconds, rejecting with feedback takes minutes = ~10x longer)
  • Candidates are more likely to feel that the rejection didn't give them a fair chance (because they feel that they'd do a better job than their resume suggests) and dispute the decision; reducing the risk of this (by communicating more effectively + empathetically) requires an even larger time investment per rejection

I feel pretty strongly that if people go through actual interviews they deserve feedback, because it's a relatively low additional time cost at that point. At the resume screen step, I think the trade-off is less obvious.

Candidates haven't interacted with a human yet, so are more likely to be upset or have an overall bad experience with the org; this is also exacerbated by having to make the feedback generic due to scale


Candidates are more likely to feel that the rejection didn't give them a fair chance (because they feel that they'd do a better job than their resume suggests) and dispute the decision; reducing the risk of this (by communicating more effectively + empathetically) requires an even larger time investment per rejection

Are you speaking from experience on these points? They don't seem obvious to me. In my experience, having my resume go down a black hole for a job I really want is incredibly demoralizing. I'd much rather get a bit of general feedback on where it needs to be stronger. And since I'm getting rejected at the resume stage either way, it seems like the "frustration that my resume underrates my skills" factor would be constant.

I'm also wondering if there is a measurement issue here -- giving feedback could greatly increase the probability that you will learn that a candidate is frustrated, conditional on them feeling frustrated. It's interesting that the author of the original post works as a therapist, i.e. someone paid to hear private thoughts we don't share with others. This issue could be much bigger than EA hiring managers realize.

It sounds like you interpreted me as saying that rejecting resumes without feedback doesn't make people sad. I'm not saying that—I agree that it makes people sad (although on a per-person basis it does make people much less sad than rejecting them without feedback during later stages, which is what those points were in support of—having accidentally rejected people without feedback at many different steps, I'm speaking from experience here).

However, my main point is that providing feedback on resume applications is much more costly to the organization, not that it's less beneficial to the recipients. For example, someone might feel like they didn't get a fair chance either way, but if they get concrete feedback they're much more likely to argue with the org about it.

I'm not saying this means that most people don't deserve feedback or something—just that when an org gets 100+ applicants for every position, they're statistically going to have to deal with lots of people who are in the 95th-plus percentile of "acting in ways that consume lots of time/attention when rejected," and that can disincentivize them from engaging more than they have to.

I think part of our disagreement might be that I see Wave as being in a different situation relative to some other EA organizations. There are a lot of software engineer jobs out there, and I'm guessing most people who are rejected by Wave would be fairly happy at some other software engineer job.

By contrast, I could imagine that stories like the following happening fairly frequently with other EA jobs:

  • Sally discovers the 80K website and gets excited about effective altruism. She spends hours reading the site and planning her career.

  • Sally converges on a particular career path she is really excited about. She goes to graduate school to get a related degree, possibly paying significant opportunity cost in earnings etc.

  • After graduating, Sally realizes there are actually about 3-4 organizations doing EA work in her selected area, and of those only 2 are hiring. She applies to both, but never hears back, possibly due to factors like:

    • She didn't do a great job of selling herself on her resume.

    • She's not actually applying for the role her degree+resume best suit her for.

    • It so happens that a lot of other people reading the 80K website got excited about the same thing Sally did around the same time, and the role is unexpectedly competitive.

    • The organization has learned more about what they're looking for in this role, and they no longer consider Sally's degree to be as useful/relevant.

    • Her resume just falls through the cracks.

At this point, Sally's only contact with the community so far is reading the 80K website and then not hearing back after putting significant effort into getting an EA career. Can we really blame her if she gives up on EA at this point, or at the very least starts thinking of herself as playing on "single player" mode?

My point here is that we should distinguish between "effort the candidate expended on your hiring process" and "effort the candidate expended to get a job at your org". The former may be far bigger than the latter, but this isn't necessarily visible.

The same visibility point applies to costs to the org -- Sally may complain bitterly to her friends about how elitist the org is in their hiring / how elitist EA is in general, which might count as a cost.

Anyway, I think total cost for giving feedback to everyone is probably the wrong number here -- really you should be looking at benefits relative to costs for an individual applicant.

I also think it'd be worth trying experiments like:

  • Ask candidates who want feedback to check a box that says "I promise not to complain or cause trouble if I don't like the feedback"

  • Instead of saying "we can't hire you because you don't have X", spend less time making sure you're understanding the resume correctly, and more time asking questions like "it looks like your resume doesn't have X, we were hoping to find someone with X for this role". If they've got something to say in response to that, that's evidence that they really want the job -- and it might be worth letting them progress to the next stage as a way of validating your resume screen.

Interesting. It sounds like you're saying that there are many EAs investing tons of time in doing things that are mostly only useful for getting particular roles at 1-2 orgs. I didn't realize that.

In addition to the feedback thing, this seems like a generally very bad dynamic—for instance, in your example, regardless of whether she gets feedback, Sally has now more or less wasted years of graduate schooling.

It sounds like you're saying that there are many EAs investing tons of time in doing things that are mostly only useful for getting particular roles at 1-2 orgs. I didn't realize that.

I don't know that. But it seems like a possibility. [EDIT: Sally's story was inspired by cases I'm familiar with, although it's not an exact match.] And even if it isn't happening very much, it seems like we might want it to happen -- we might prefer EAs branch out and become specialists in a diverse set of areas instead of the movement being an army of generalists.

This is a good point. My comment exchange with Peter was referring to people who did at least one interview or short work trial (2 hours), rather than people rejected at the initial step.

Note that at least for Rethink Priorities, a human[1] reads through all applications; nobody is rejected just because of their resume. 

[1] It used to be Peter and Marcus, and then as we've expanded, researchers on the relevant team; now we have a dedicated hiring specialist ops person who (among other duties) reviews the initial applications.

Note that at least for Rethink Priorities, a human[1] reads through all applications; nobody is rejected just because of their resume. 

I'm a bit confused about the phrasing here because it seems to imply that "Alice's application is read by a human" and "if Alice is rejected it's not just because of her resume" are equivalent, but many resume screen processes (including eg Wave's) involve humans reading all resumes and then rejecting people (just) because of them.

I mean the entire initial application (including the screening questions) is read, not just the resume, and the resume plays a relatively small part in this decision, as (we currently believe) resumes have low predictive validity for our roles.

Thanks for your post. You offer valuable thoughts, and I only have one small additional one. Having now been through a stage of my career where I have done hiring, I know the process is way more arbitrary (and less sophisticated) than I previously realized. When I was younger I used to take job rejections way more personally than I should have based on what I know now. There are all sorts of sub-optimal reasons hiring decisions are made, and applicants should not take rejection as a strong signal about their skills, talents, or future potential (IMO). Still, I do think it's important that hiring managers be as respectful as possible and give feedback when they are able. 

If I could give advice to my 22-year-old self it would be, "if you want to work for a certain organization or in a certain field badly enough, just keep persisting through every rejection. The act of persistence alone will increase your probability of working in that area. Also, put yourself in the shoes of the hiring manager. What do you think they want to hear from an ideal prospective applicant? Don't lie or mislead, but if you really think you will be a good fit for a position, freely tell your interviewers how devoted you will be to the work. They want to hear passion."

I think feedback on job applications is far too rare given its value, for reasons others have stated and the additional reason "everyone I've ever talked to about this has complained about how hard it is to get feedback".

Here's how I handled feedback for the one hiring process I've run at CEA:

  • Anyone I rejected in the first round got an offer from me to provide feedback on their application. Looking back, I see that I didn't make this offer to second-round candidates; this seems really dumb, and I'm not sure why I didn't, though I think I did share some verbal feedback when I interviewed those candidates.
  • When someone asked for feedback, I generally wrote a paragraph or two, very quickly, sometimes copying and pasting from other feedback emails (in cases where multiple people made the same mistake).
  • Because I was mostly providing feedback on a standardized work test, that part was very quick (though giving people advice on how to copyedit better probably wasn't very valuable). My other feedback concerned unclear descriptions of their experience or confusing resumes — this seems more valuable.
  • People were very grateful for even this basic feedback. It helped that one of my most common notes was "this was above-average for all applications, but not in the top 10%, and I only took the top 10% for the second round". This let me give people some amount of confidence/comfort while still being honest about the high standards for the position.

General lessons:

  • It seems much easier to give feedback when you have a standardized task to comment on.
  • Having "opt-in" feedback seems ideal; people who ask for it will value it more, and most people won't ask (maybe 15% of the people who got my offer followed up to ask).

As an additional data point, I got very detailed feedback when I applied for a position at Ought in 2018: "You did X, which was good, but the best candidates did X and also Y, which was clearly better". This was a good learning experience and left me feeling very positive about the organization.

This exactly chimes with my experience. I've been hiring for 10 years now, and the range in application volume has been 10-200 for a position. 

In particular, I've been using an opt-in for feedback for years and my experience has also been that this is requested by a very low volume of people (I'd actually guess at 5% for early rejections, rising to 75% if they did an interview, at which point most people seem to want feedback).

For what it's worth, I think this is a moral issue as well: we have a duty to the community to try to give useful feedback when we can, and to treat people with kindness.

I try to take it in good faith when people say "I'm too busy to give feedback" but I feel that this is often not literally true; and in the rare cases where it is (maybe someone running one of the big 'legacy' EA organisations and getting hundreds of applicants per position), the solutions in other comments are viable.

Hot takes:

Agree that ingroup rejection matters. If you want to make a large impact, getting rejected from an EA org can feel like you're not good enough to make a large impact. 

Group organizers may hear that many budding high-potential EAs apply to EA jobs, and may become less motivated to do EA organizing if there are no clear paths for budding EAs given the high rejection rates.

As a result, I think having good alternatives to EA orgs is quite important. 80k mentions having a plan Z if plans A or B don't work out. For high-achieving people, not being able to achieve plan A or B can feel awful. However, plans other than A or B can still be very high impact. I feel like good examples of people settling into their plan Z are underrepresented in the EA community, since guest speakers tend to work on EA causes in EA orgs, so you'd never get someone giving a talk about landing in a plan Z situation.

Let me know what you think! :)

Agreed, more public examples of people who found something meaningful and impactful that wasn't what they initially thought they would/should work on would help with that :)

I agree with most of this, thanks for writing it.

Some thoughts:

  • For some reason, it hurts more when we feel we are being rejected personally and not as a category. So it is indeed better to be specific and say "we rejected you because you didn't have red-umbrella qualifications", where this is the case

  • Even better, 'conditionally anonymous' rejection (like the Tinder paradigm, where possible) or pre-emptive rejection by making the qualifications clear in advance

  • 'Feedback without litigation risk': The employer could offer to give feedback, at a future date, on the candidate's overall CV and presentation, and not specifically for the position they applied for. That is what I have tried to do where possible.

Would it be possible for some kind of third party to give feedback on applications? That way people could get feedback even if hiring organizations find it too costly. Someone familiar with how EA organizations think, or with hiring processes specifically, or some kind of career coach, could say "You are in the nth percentile of EAs I counsel. It's likely/unlikely that if you are rejected it's because you're unqualified overall" or "Here are your general strengths and weaknesses as someone applying to this position, or as someone seeking a career in EA overall." Maybe hiring organizations could cooperate with such third parties to educate them on the organization's hiring criteria and philosophy, so that they have something like an inside view.

As another option to get feedback, many colleges and universities' career development offices offer counseling to their schools' alumni, and resume review (often in the context of specific applications to specific jobs) is one of the standard services they provide at no extra charge.

I have an anecdote of a time I received feedback for a (non-EA) job I didn't get in a manner I found helpful. Hopefully this helps hiring managers who are brainstorming ways to provide feedback.

I had applied for a consulting position as an undergraduate. I was selected for a second-round interview, where I was asked to discuss how I would advise a client (IIRC, the client was a grocery store considering a merger). A few days later, my interviewer called me and said, "Sorry, you didn't get the job. Would you like some feedback on your interview?" I said yes, and he proceeded, "We were looking for a response that addressed factor ___ and you didn't mention that."

What I liked about this was that (A) I received feedback, which is rare in hiring processes and (B) the feedback was specifically about my interview answer, not about broad qualities I was missing. The latter made the feedback easier to manage emotionally, because I didn't end up thinking "oh no, I'm not creative/thorough/skilled." Instead, my takeaway was "I should talk about more facets of the problem if I do this again."

Very clear to me that this is a huge issue among my personal EA network.

I think calibrating people is step 1 of mitigating the hurt feelings, probably more important than feedback and certainly much cheaper.

My sour grapes:

I previously contacted Rob Wiblin and suggested that 80k publish some stats on the various orgs on the 80k jobs board that would help people calibrate their odds of getting the job. I pointed out that this is quite relevant to assess neglectedness and tractability. He responded by asking if I had tried to contact the orgs myself and suggested I do so, which I consider a dismissal of my IMO uncontroversially good suggestion. 

I also posted this: https://forum.effectivealtruism.org/posts/6Dqb8Fkh2AnhzbAAM/reducing-ea-job-search-waste and felt the community was mostly uninterested in the problem. I am glad that your post is getting more traction.


Rejection sucks and you often feel stupid for even applying in the first place. Sometimes it helps to lean into this feeling and Dare To Be Stupid

I see a lot of EAs using scope insensitivity as a reason to assume listening to their feelings is not worthwhile. Your feelings are probably better seen as helpful messengers -- I'd love to see a Slack Group or channel for discussing feelings and how to integrate them with research, action and feedback!


Personally, if I apply to an organization and get no feedback, particularly after a lengthy process, I am much, much less likely to apply again.

If that's what your organization wants, then this is probably a good strategy.

But I would think that for candidates who were strong but didn't quite get the job - an organization would not want to discourage them from applying for future positions.

Is there an argument here for trying to spread more of a Growth Mindset in EA? I don't want to diminish the hurt of rejection that people feel by implying that they just need to reframe it and everything will be fine - but the approach of seeing challenges/failures as learning experiences can be genuinely transformative for people.

In general, I think developing a growth mindset is incredibly valuable, and I wonder if this is something Training for Good could look at.
