
Vinoy

Thanks for proposing this idea, Dinesh. I'm supportive of the core idea here. More feedback is (obviously) better than less, and even the simplest version of what you're proposing would be useful to implement. I hope some orgs read this and act on it.

That said, I want to share some immediate concerns and thoughts I had (not quite hot takes, but not super well thought through either).

For context: I went through the High Impact Professionals program in late 2024 and have applied, unsuccessfully, to several EA roles over the last few years. I'm currently in the middle of another round of applications. So I'm very much the target audience for this post. But note also that frustration with this long and (so far) unsuccessful process might colour my opinions.

The purpose of a system is what it does

I'd like to believe the framing here: hiring teams think feedback is too time-consuming, and the solution is showing them it can be automated and easy. But maybe the incentives are simply well aligned with the status quo?

The benefits of the "broad funnel, strong filter" approach accrue largely to orgs. They get a large, talented applicant pool and can be highly selective. The cost, on the other hand — candidates spending months in low-feedback application cycles, burning time, money, and emotional energy — is borne largely by the applicants.

The post suggests that feedback would lead applicants to apply more selectively, reducing total volume. I think most hiring teams would see that as a theoretical benefit at best. A smaller, more targeted applicant pool sounds nice in the abstract, but it also means potentially missing the unexpected candidate who wouldn't have applied if they'd self-selected out.

The broad funnel gives orgs a lot of optionality. Orgs either have or need to create efficient processes for screening large volumes, and they seem to be doing so (automated video interviews, standard questions, LLM use, etc.).

To be clear, I'm not imputing bad intentions here. I believe most EA hiring teams would genuinely like to provide better feedback. But in a world of tradeoffs, I can understand why this might fall to the bottom of the list of priorities.

Statistical feedback is less actionable than it appears

I think the percentile rank for applications is informative at the extremes: if I'm in the 95th percentile and still didn't get the role, I can assume I was competitive and it just had to go one way or the other. Bad luck. And if I'm in the 10th percentile, maybe I seriously misjudged my fit, skills, or experience for the job.

But what if you end up in the messy middle?

What applicants actually need to know is why they scored where they did. Was it their experience? The framing? A mismatch between what the applicant emphasised and what the committee was actually looking for? 

Is it clear what the orgs actually wanted?

That question is harder to answer than it looks, because job descriptions tend to be written broadly: partly by design (broad funnel), partly because writing a really precise job description is hard and time-consuming and, honestly, probably not the best use of an org's person-hours.

Here's a concrete example from my own experience. I was told informally — through a conversation — that a particular role was really looking for someone with a strong entrepreneurial streak, someone who'd demonstrated the ability to start and execute projects on their own initiative. Was that in the job description? Technically, yes, but it was buried among a dozen other qualities. And it was probably followed by one of those "even if you don't meet all the criteria, we encourage you to apply" sentences.

In a situation with such blurry criteria, a percentile rank tells you where you stood but not necessarily what would have moved you up. You might see a decent percentile and think "I'm close, I just need to polish my application a bit," when the real issue is that you're optimising for the wrong criteria.


I don't have any other strong solutions to offer. But I would love for us (as a community) to explore how orgs can be more explicit about what actually drives their decisions, because the current ambiguity imposes huge costs across the whole system of orgs and applicants.

One thing that might help enormously is normalising brief, even formulaic, qualitative feedback at rejection. It doesn't need to be personalised or lengthy. Even a single sentence like "Your application was strong on X, but we needed more evidence of Y" would be useful. I know some orgs already do this at later stages (and I'm grateful to the ones that have given me feedback). The question is whether it can be extended earlier in the process, even in a templated way.

To Dinesh's credit, the post is pushing in the right direction. Any feedback is better than the status quo. And if the statistical approaches outlined here are what's realistic in the short term, I'd take them over nothing.
