
Summary

A career transition into the EA ecosystem often demands considerable patience and a sustained motivation to do good. In a system with abundant applicants and scarce feedback, candidates can lose months of potential impact iterating on applications, estimating their fit, and absorbing rejections with minimal guidance.

The EA community can multiply its impact by simply providing detailed feedback on job applications. In this post, I will outline a few approaches that would require minimal time from hiring committees while delivering rich feedback to the applicants.

My key assumptions are:

  1. The “broad funnel – strong filter” approach used by many EA organizations, by design, attracts orders of magnitude (100‑1,000×) more applicants than there are positions.
  2. A large fraction of applicants are testing their personal fit for different roles and cause areas.
  3. Applicants who request feedback are trying to find where they can have the most impact, not questioning the organization's decision to reject their application.
  4. The feedback provided for these job applications is often negligible, especially at the initial stages.

Based on these assumptions, I argue that:

  1. The current feedback is too meager to be useful; applicants have to apply to several positions to gather enough information to make decisions.
  2. The collective time lost by applicants is non‑trivial and scales with the breadth of the application funnel.
  3. It is easy to provide rich, non‑individualized feedback that does not require significant time and effort from the hiring team.
  4. Rich feedback can reduce both the time applicants spend on job applications and the total number of applications the hiring committee processes, because it helps applicants evaluate their fit for positions more accurately.
  5. This problem is important to solve because programs like HIP and the CEA Career Bootcamp bring in experienced professionals who lose counterfactual impact in navigating low‑feedback application processes.

I provide potential approaches for providing rich feedback to applicants. In addition, I made a small proof-of-concept app to showcase these approaches.

I end the post with a survey to collect concrete data from both applicants and hiring committees. The survey is an attempt to validate my assumptions (I will share the results in a follow-up post if I get sufficient responses).

Note: The post is written from an applicant's perspective - that is, what is best for the applicants and their impact. The situation might look very different from the perspective of the hiring committee. I tried my best to balance both perspectives by aiming to minimize the time and effort of hiring committees while still providing rich feedback to applicants.

Problem

Here is an example scenario:[1]

  • An EA organization that an applicant wants to join is hiring, but the applicant is unsure of their personal and/or skillset fit.
  • The job description invites applicants from diverse fields (broad funnel).
  • Standard EA career advice recommends cheap tests: apply and assess fit based on the outcome.
  • The job description asks applicants to spend no more than 1 hour on the application.
  • In practice, applicants spend 2–3 hours (time limits are underestimated and do not account for resume tweaks, talking to people, etc.).
  • Most applicants receive an email saying 1,000+ applications have been received and individual feedback is not possible. These applicants spent 2–3 hours with no clear feedback on their fit - all they learn is that this position is not a good fit at this time.
  • Applicants are told that this is normal and that they need to keep applying, maybe take more courses, volunteer, etc.
  • Some applicants move on to later rounds - work tests, interviews, etc. Advancing is often the only definitive signal of fit they receive, as feedback is rarely shared regardless of how far they get in the process.
  • After several months to a year of applications, some applicants secure a position, others pivot to something else.

This example illustrates that the application process is essentially a brute-force search with sparse feedback. Often, the only feedback an applicant receives is how consistently they advance through the stages of the process.

Collectively, most time is lost at the application stage (see the table below). The broadness of the funnel means that most applicants never receive clear feedback on how their application was perceived. This, in turn, invites even more applications as candidates search for feedback, further increasing the collective time lost.[2][3]

| Stage | Feedback received | Expected time spent (hours) | Actual time spent (hours) | Total hours (100x / 1000x) | Days lost (100x / 1000x) |
| --- | --- | --- | --- | --- | --- |
| Application (+ Resume) | Yes / No | 1 | 2 | 200 / 2000 | 20 / 200 |
| First Interview (10%) | Sparse (interviewer reactions) | 1 | 2 | 20 / 200 | 2 / 20 |
| Work test (10%) | None (typically) | 2 | 2 | 20 / 200 | 2 / 20 |
| Final Interview (2%) | Sparse (interviewer reactions) | 1 | 2 | 4 / 40 | 0.4 / 4 |
| Selected applicants (1%) | | | 8 | 8 / 80 | 0.8 / 8 |
| Rejected applicants (99%) | | | 2.44 (average) | 244 / 2440 | 24.4 / 244 |

(Note: I assume a day to be 10 hours, as it is essentially the loss of a work day.)
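The table's totals follow from simple arithmetic. Here is a minimal sketch; the stage fractions and per-applicant hours are the hypothetical values from the table above, not real data:

```python
# Hypothetical stage parameters mirroring the table:
# (stage, fraction of the pool reaching it, actual hours spent per applicant)
STAGES = [
    ("Application (+ Resume)", 1.00, 2),
    ("First Interview", 0.10, 2),
    ("Work test", 0.10, 2),
    ("Final Interview", 0.02, 2),
]

def collective_days(n_applicants, hours_per_day=10):
    """Total applicant-days spent across all stages for a pool of size n."""
    total_hours = sum(n_applicants * frac * hours for _, frac, hours in STAGES)
    return total_hours / hours_per_day

print(collective_days(100))   # 100x funnel
print(collective_days(1000))  # 1000x funnel
```

For the 100x funnel this reproduces the 24.4 days-lost figure in the table; for the 1000x funnel, 244 days.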

TL;DR: Providing feedback at the application stage will have the largest effect on collective time spent, possibly reducing the total number of applications in the long run.

Solution

If a large chunk of the applicant pool is estimating their personal and/or skillset fit, the most valuable feedback for them is their standing in that pool. For example, an applicant who consistently lands around the 90th percentile of the applicant pool knows their profile fits the job description, even if their application is passed over.

The information about an applicant's position in the pool falls in the sweet spot of being rich for the applicant while being low-effort for the hiring committee.

Below, I will outline three ways this can be done, from low-hanging fruit to richer, individualized feedback. In addition, I have showcased these approaches in this simple proof-of-concept app.

Example: Applicant Dataset

Assumption:[4] For every stage in the application process, the hiring committee decides a rubric for scoring applicants. The committee then scores (or uses algorithms that score) each application. The scores are then sorted, and applicants with top x% scores advance to the next stage.

For this dataset, I assume that the hiring process has three stages of selection:

  1. Application answers + resume
  2. Work test + first interview
  3. Final interview

At each round, applicants are scored on the components of that stage, and a total score (rounded to one decimal place) is computed using a weighted factor model. This score determines who advances to the next stage.
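As an illustration of the weighted factor model described above, here is a minimal sketch; the rubric criteria and weights are hypothetical, and each organization would define its own:

```python
# Hypothetical rubric for the "application + resume" stage; real criteria
# and weights would be set by the hiring committee.
WEIGHTS = {"relevant_experience": 0.4, "writing_quality": 0.3, "value_alignment": 0.3}

def stage_score(ratings):
    """Weighted-factor score from rubric ratings (each 0-10), rounded to 1 decimal."""
    return round(sum(WEIGHTS[c] * r for c, r in ratings.items()), 1)

print(stage_score({"relevant_experience": 7, "writing_quality": 8, "value_alignment": 6}))
```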

Here is an example table of how the dataset looks (table is sorted by total score):

Approach 1: Simple Statistics

Total applications: 1000 
Positions available: 2 

Applicant's rank: 106 
Applicant's percentile: 89.55%
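A minimal sketch of how these two numbers can be computed from a pool of stage scores (the pool below is synthetic, for illustration only; a real implementation would read the committee's score sheet):

```python
def standing(pool_scores, applicant_score):
    """Rank (1 = best) and percentile of one applicant within the score pool."""
    rank = 1 + sum(s > applicant_score for s in pool_scores)
    percentile = 100 * sum(s < applicant_score for s in pool_scores) / len(pool_scores)
    return rank, percentile

# Synthetic pool of 1000 scores between 0.00 and 9.99.
pool = [i / 100 for i in range(1000)]
rank, pct = standing(pool, applicant_score=8.95)
print(f"Applicant's rank: {rank}")
print(f"Applicant's percentile: {pct}%")
```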

Approach 2: Visual Statistics

In addition to the above, provide a histogram of applicant scores as well as the applicant's standing.
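A dependency-free sketch of such a chart (a real app would likely use a plotting library; a text histogram keeps the example self-contained, and the bin edges are assumptions):

```python
def text_histogram(pool_scores, applicant_score, n_bins=10, lo=0.0, hi=10.0):
    """Render a coarse text histogram of the pool, marking the applicant's bin."""
    width = (hi - lo) / n_bins
    counts = [0] * n_bins
    for s in pool_scores:
        counts[min(int((s - lo) / width), n_bins - 1)] += 1
    you = min(int((applicant_score - lo) / width), n_bins - 1)
    rows = []
    for i, c in enumerate(counts):
        mark = " <- you" if i == you else ""
        rows.append(f"{lo + i * width:4.1f}-{lo + (i + 1) * width:4.1f} | {'#' * (c // 5)}{mark}")
    return "\n".join(rows)

pool = [i / 100 for i in range(1000)]  # synthetic scores, as before
print(text_histogram(pool, applicant_score=8.95))
```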

Approach 3: Descriptive Statistics

Combine approaches 1 and 2 to provide descriptive feedback.
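One way to sketch this kind of descriptive feedback, assuming the per-stage rank and cutoff numbers are available (the stage name and phrasing below are hypothetical):

```python
def describe_stage(stage, rank, n_applicants, n_advanced):
    """Turn one stage's raw numbers into a short feedback paragraph."""
    pct = 100 * (n_applicants - rank) / n_applicants
    msg = f"{stage}: you ranked {rank} of {n_applicants} (~{pct:.0f}th percentile)."
    gap = rank - n_advanced
    if gap <= 0:
        msg += " You advanced to the next stage."
    else:
        msg += f" You were {gap} places short of the cutoff ({n_advanced} advanced)."
    return msg

# Hypothetical numbers for one applicant's first stage:
print(describe_stage("Application + Resume", rank=106, n_applicants=1000, n_advanced=100))
```

Running one such paragraph per stage, and ending with a pointer to the published rubric, would give the applicant a concrete picture of where they fell short and where to improve.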

How This Helps

The above approaches provide applicants with feedback in the following ways:

  1. Approach 1 uses simple statistics to give the applicant a measure of their counterfactual impact - i.e., how many other applicants submitted similar or better applications.
  2. Approach 2 adds to this by providing more information about the applicant's location in the distribution. It also provides a lot more information about the distribution (e.g., are there many applications similar to mine? How far off am I from the selected candidates and is it heavy-tailed?).
  3. Approach 3 provides rich information about the application process and the applicant's standing in each stage. This is the most helpful from the applicant's perspective - it provides a lot more information on where they can improve. Additionally, this rich feedback helps candidates make decisions about fits and career paths earlier on in the process, saving them (and the hiring organizations) a lot of time (money, and energy).

I believe that this will have several effects on the hiring system:

  1. The time spent by each applicant is no longer lost as they are provided with rich feedback on how to improve their scores for the particular job application.
  2. It pushes organizations to generate and share a clear rubric that helps applicants update beliefs about their fit and test it using applications.
  3. It changes the applicant's perspective from applying widely to applying selectively and improving themselves at every step (with their location in the applicant distribution a clear metric of their improvement).
  4. It makes the hiring process open and transparent, which in itself raises the organization's standing (a halo effect).

Concerns

One of the main concerns organizations have about providing rich feedback is that they believe it is time-consuming. One of the aims of this post is to show that this is not the case - it is easy to provide rich feedback that is completely automated.

Another critical concern is the worry that opening up the hiring process and sharing detailed feedback might increase the possibility of gaming this system. This might be a fair concern - it might be possible to learn more about the scoring process from the detailed statistics provided in approach 3.

That said, only a small proportion of people can be expected to try to game such a system. I feel that the positive benefits of feedback outweigh the small increase in this proportion, if any.

Throughout this post, I assume good faith on the candidates’ part - the only use of the provided feedback is self-improvement. And I intuitively feel like this could be a valid assumption for the EA community. Please do let me know (comments or direct communication) how valid my assumptions are, along with evidence for and against my assumptions.

Final thoughts/considerations

I am one of several experienced professionals exploring a transition into the high-impact space. From my understanding, programs like the Impact Accelerator Program (which I was part of) introduce 300-400 professionals to EA and high-impact organizations every year. More programs, like the Centre for Effective Altruism's Career Bootcamp, are starting up, suggesting a growing inclination among experienced professionals to increase their impact through their careers.

From my conversations with several others, I find that the biggest hurdle for these professionals (including myself) is navigating the low-feedback environment. After building skills throughout a career, the current process seems an incredibly inefficient way to figure out where those skills can be applied impactfully.

Some organizations provide feedback to applicants who clear the first few stages of their application process. For example, Ambitious Impact provided feedback on my Charity Entrepreneurship application after my final interview. This was very useful for me - it provided direct feedback on my strengths and weaknesses. Additionally, I could ask specific questions on where I excelled, fell short, and how to improve myself for future applications. This is something I wish I had for all my applications. However, I know that it is not possible to have such rich, personalized feedback for all my applications as time is a valuable resource.

Practically, rich feedback does not necessarily mean personalized/individualized feedback - I strongly believe it is possible to provide automated feedback that is very useful to applicants (the 80-20 rule). Moreover, I think this is an easy problem to solve, as most EA organizations already use clear, rubric-based methods to score and evaluate applications. Sharing applicants' scores as descriptive statistics is low-hanging fruit ready to harvest, with multiplicative effects for those aiming to improve their impact through their careers. I hope I have managed to convey that in this post.

Survey

I made a survey to gather concrete data to test my assumptions. If you are an applicant or part of a hiring committee, I would be very grateful if you could fill out the survey (it should take 10-15 min for applicants, and 5 min for hiring teams). Also, please do reach out if you have more thoughts or feedback on the post.

Survey link: https://docs.google.com/forms/d/e/1FAIpQLSdgZKgkYlwbODeGT-3XDYrJyAyv0GhZj5iK2hTikcgTnvLrgg/viewform

Acknowledgements: I am very grateful to Sruthi Balakrishnan, Nina Friedrich, Ivan Muñoz, and Mike X. Cohen for providing feedback on my draft.


  1. This is based on job application rates I have encountered and/or heard from fellow applicants. ↩︎
  2. In the worst-case scenarios, applicants fall back on the "spray and pray" method to find out where their applications get hits (feedback). ↩︎
  3. Note that this does not even account for the time lost by the hiring committee in vetting the extra applications that arise from a lack of feedback. ↩︎
  4. I expect this to be valid for many, if not all, EA orgs. Please let me know if this is not valid or if I am oversimplifying this process. ↩︎
