I will say this upfront, so you can weight my post however you want: I applied to a researcher role at GiveWell and got through to the final round before being rejected. I also have substantial experience with hiring, having hired dozens of people at all levels of seniority and been involved at every stage of the recruitment process.

GiveWell is the most unprofessional and lackadaisical organisation with which I have ever dealt. An application process that should have taken 6-8 weeks at most took more than five months, not least because it took them more than two months to make an initial sift of my application and get back to me. That initial sift really should not take more than a week, given the small amount of information that the application form requires.

Although the first written exercise seemed to run effectively (a response time of one week; perhaps I just got lucky there), things fell apart again at the second round. For example, the two GiveWell employees who joined the call I was allowed to schedule before submitting my second-round exercise seemed completely uninterested in, and uninformed about, the assessment exercise.

Moreover, I would have thought that an effective second round would at least incorporate another call after submitting the exercise, to discuss the submitted work and go over the various factors that were (or were not) included. That would provide information on how the applicant works and engages with colleagues, which is a crucial component in any workplace. However, GiveWell's application process assumes that people work entirely in isolation.

This gap between the process and how good workplaces actually function is also reflected in the fact that GiveWell provides no information about what it actually wants in the assessment exercises. There is no instruction to, say, cover issue X or focus on area Y, even though in a real workplace those instructions would be provided (or at least discussed before the task).

Hence, despite GiveWell's claims that the exercises reflect work tasks, that clearly is not the case: they do not reflect how work is actually conducted in a well-functioning organisation. (And if they do reflect how work is conducted at GiveWell, then that just proves that GiveWell is not a well-functioning organisation.)

GiveWell then took 50% more time than they said they would to get back to me about my second-round submission, and the response was just a boilerplate rejection email sent the day before a national holiday. That boilerplate email claimed that I was “not a good fit for the role”. Either that is true, in which case they should have realised it at the initial sift rather than putting someone who was not a good fit through the whole process, or they really need to change the rejection email. Either way, it's not exactly a sign of competence on GiveWell's part.

Moreover, from what I can gather, the application process discards all information obtained at earlier stages. For example, the second stage appears to use only the second-stage exercise, rather than also taking into account the first-stage exercise and the application form. This is clearly inefficient, as it ignores information that is freely available. If GiveWell wishes to be taken seriously, it should use all information from previous stages rather than judging each stage on its specific exercise submission alone.

Even worse, despite my getting through to the final round, GiveWell refused to provide any feedback on what I could have done to improve. I can understand such an approach after the initial sift, or even after the first round, given the sheer number of applicants at those stages. But it is common courtesy and accepted good hiring practice to provide feedback to final-round applicants, who are few enough that doing so would not be burdensome. Not providing feedback at this stage is both rude and lazy.

For an organisation that is proud of its (supposed) transparency, the hiring process is not transparent at all: the lack of information on what they are looking for at each stage, the absence of any stated criteria against which the exercises will be judged, and the refusal to provide any feedback are each the very opposite of transparency. Even an organisation as opaque as the Civil Service provides an exact set of criteria against which an application will be judged, and provides feedback once an application has been reviewed. That GiveWell is less transparent than the Civil Service is a damning indictment.

If GiveWell were serious about welcoming outsiders’ input into what they could do better, they’d work with experts to improve their hiring process. But they’re not a serious organisation, so I suspect they’ll ignore this.

On the whole, GiveWell's lack of effort and competence in the hiring process really makes me doubt the quality and accuracy of their work assessing the cost-effectiveness of other charities. If they cannot be bothered to do a good job in the hiring process, they cannot be bothered to do a good job examining a charity or intervention either.


 

Comments

I'm sorry to hear about your negative experience with GiveWell's hiring cycle.

I think it's easy to underestimate how hard it is to hire well, though. For comparison, you could honestly make all the same complaints about the hiring practices of my parent company (Google).

It is slow: many friends of mine have experienced a gap of up to a year between application and eventual decision.

Later interviewers have no context on your performance in earlier parts of the application. This is actually deliberate, though, since we want an independent signal at each step. I wouldn't be surprised if it was deliberate at GiveWell as well.

You often aren't told what is important at each interview stage. You're just posed technical or behavioral questions, and then you have to figure out what matters for solving the problem. Again, this is somewhat deliberate, to see whether candidates can think through which parts of an issue are important.

You certainly aren't given feedback for improvement after rejection. An explicit part of interviewer training is noting that we shouldn't say anything about a candidate's performance (good or bad) to the candidate, for fear of legal repercussions. Some EA orgs have chosen to give rejection feedback despite this, but it seems to be both not standard and not necessarily wise for the organization.

Interviewing and hiring just kind of sucks. I'd love it if GiveWell was unusually excellent here, but I think that it's at least important to recognize that their hiring practices are pretty normal.

[anonymous]

An explicit part of interviewer training is noting that we shouldn't say anything about a candidate's performance (good or bad) to the candidate, for fear of legal repercussions.

Legal repercussions for interview feedback have been discussed on the EA Forum in the past, e.g. in the comments of this post. The consensus seems to be that it's not an issue either in theory or in practice. Certainly if your feedback is composed of lawyer-approved boilerplate, and only offered to candidates who ask for it, I think your legal risk is essentially nil. [Edit: Liability insurance could be used to mitigate downside risk even further.]

Recent relevant post: Keep EA high-trust. Something I've observed is that the trust relationship in EA seems very asymmetrical. Junior EAs are asked to put lots of trust in senior EAs, but senior EAs put very little trust in junior EAs. Not giving feedback because of very hypothetical lawsuit risk is a good example of that.

I think trust is a two-way street. So if we want EA to be high-trust, then senior EAs should be a bit more willing to trust junior EAs, including by giving interview feedback.

Recent relevant post: Keep EA high-trust. Something I've observed is that the trust relationship in EA seems very asymmetrical. Junior EAs are asked to put lots of trust in senior EAs, but senior EAs put very little trust in junior EAs. Not giving feedback because of very hypothetical lawsuit risk is a good example of that.

This is a good point, thanks for writing it

I am telling you what Google told me (and continues to tell new interviewers) as part of its interview training. You may believe that you know the law better than Google, but I am too risk-averse to believe that I know the law better than they do.

The legal risks consist almost entirely of situations where there is reasonable cause to suspect that the applicant has been discriminated against due to some protected characteristic. In those situations the hiring party is incentivized to control information as tightly as possible in order to minimize potential evidence. Feedback could act as legal ammunition for the benefit of the discriminated candidate.

Because hiring organisations gain very little from giving feedback, and instead lose time and effort and assume more risk when doing it, it's very common to forbid recruiters and interviewers from giving feedback entirely. Exaggerating the legal risks provides an effective justification for doing this. The rule is typically absolute because otherwise recruiters may be tempted to give feedback out of niceness or a desire to help rejected candidates.

Also, Google's interpretation of the law is almost certainly made from Google's perspective and for Google's benefit, not from the perspective of the law's intended outcome, or, more importantly, of what the underlying issue is and how we should be trying to solve it to make the world better.

[anonymous]

Google is generally quite risk-averse. My guess is that they don't give feedback because that is the norm for American companies, and because there is no upside for them. I'd be surprised if their lawyers put more than 10 hours of legal research into this.

[anonymous]

Another thought: Even if Google's lawyers did some research and said "yeah we could probably give feedback", my model of Google is they would not start giving feedback.

Separately regarding trust, I don't feel obligated to trust senior EAs. I sometimes read the analyses of senior EAs and like them, so I start to trust them more. Trust based on seniority alone seems bad, could you give some examples where you feel senior EAs are asking folks to trust them without evidence?

[anonymous]

How about a post like this one? It's not an analysis. It's an announcement from CEA that says they're reducing transparency around event admissions.

There may be evidence that CEA is generally trustworthy, but the post doesn't give any evidence that they're doing a good job with event admissions in particular. [In fact, it updates me in the opposite direction. My understanding of psychology research (e.g. as summarized by Daniel Kahneman in Thinking, Fast and Slow) is that a decision like EAG admission will be made most effectively using some sort of statistical prediction rule. CEA doesn't give any indication that it is even collecting a dataset with which to find such a rule. The author of the post essentially states that CEA is still in the dark ages of making a purely subjective judgement.]
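To make "statistical prediction rule" concrete, here is a minimal sketch of what one could look like. Everything in it is hypothetical: the feature names, the outcome label, and the synthetic data are stand-ins, since we have no idea what CEA actually collects.

```python
# Minimal sketch of a statistical prediction rule for admissions.
# All feature names and the outcome label are hypothetical; the data
# is synthetic. Nothing here reflects CEA's actual process or records.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1000

# Hypothetical applicant features: years involved, relevant projects,
# and a 0-10 referral-strength score.
X = np.column_stack([
    rng.poisson(2, n),
    rng.poisson(1, n),
    rng.integers(0, 11, n),
])
# Hypothetical outcome (synthetic: a noisy function of the features),
# e.g. "admitted applicant reported the event was valuable".
logits = 0.4 * X[:, 0] + 0.6 * X[:, 1] + 0.2 * X[:, 2] - 2.5
y = rng.random(n) < 1 / (1 + np.exp(-logits))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
rule = LogisticRegression().fit(X_train, y_train)

# The fitted weights *are* the rule: a transparent, auditable scoring
# formula instead of a case-by-case subjective judgement.
print("weights:", rule.coef_[0], "intercept:", rule.intercept_[0])
print("held-out AUC:", roc_auc_score(y_test, rule.predict_proba(X_test)[:, 1]))
```

(Kahneman's point is that even a crude rule like this, validated on held-out data, tends to beat unaided expert judgement.)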

I guess I read that as a description of what they're doing rather than asking me to trust them. CEA can choose the admission criteria they want, and after attending my first EAG earlier this year I felt like whatever criteria they were using seemed to broadly make for a valuable event for me as an attendee.

I think you're really underestimating how hard giving useful feedback at scale is and how fraught it is. I would be more sympathetic if you were running or deeply involved with an organization that was doing better on this front. If you are, congrats and I am appreciative!

[anonymous]

I guess I read that as a description of what they're doing rather than asking me to trust them.

It's a description of how they're going to be less transparent. I think that's about as good as we can get, because if they hadn't described how they were going to be less transparent, there would be no post to share! All I'd be able to say is "they have a secretive vibe" or something like that, which seems unsatisfactory.

(I do think the "secretive vibe" thing is true though -- if we stop looking at posts that are published, and start looking at posts that aren't published, I'd say the ratio of forum posts written by leadership in the past year to the number of EAs in leadership roles is quite low. Holden Karnofsky would be a notable exception here.)

So, I'm not sure what would qualify as an answer to your "examples where you feel senior EAs are asking folks to trust them without evidence" query at this point? You don't seem to think either the "Keep EA high-trust" post or the "How CEA approaches applications to our programs" post qualifies.

I felt like whatever criteria they were using seemed to broadly make for a valuable event for me as an attendee.

Sounds like you have private info that they're trustworthy in this case. That's great, but the post still represents senior EAs asking people to trust them without evidence.

It's not necessarily bad for senior EAs to be trusted, but I do think there's a severe trust imbalance and it's causing significant problems.

I think you're really underestimating how hard giving useful feedback at scale is

Can you explain why you think it's hard? I am very familiar with a particular organization that does feedback at scale, like the UK Civil Service -- that's the basis for my claims in this thread. I think maybe American organizations just aren't in the habit of giving feedback, and assume it's much more difficult/fraught than it actually is.

I think CEA's mistake was "getting into the weeds". Simply copy/pasting boilerplate relevant to a particular application is a massive improvement compared to the baseline of no feedback. Categorize a random sample of rejected applications and for each category, identify an "EA virtue" those rejects failed to demonstrate. Compose polite lawyer-approved boilerplate for each virtue. Then for a given reject who wants feedback, copy/paste the boilerplate for the virtues that weren't demonstrated sufficiently. Make it clear that you can't respond to follow-up emails. This would be pretty easy and have very minimal legal risk.
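To illustrate how mechanical this could be, here is a minimal sketch. The virtue categories and message text are invented for illustration, not anything CEA actually uses; real boilerplate would be lawyer-approved, as suggested above.

```python
# Minimal sketch of boilerplate-based rejection feedback. The virtue
# categories and wording below are hypothetical examples.
BOILERPLATE = {
    "scope_sensitivity": (
        "One way to strengthen a future application is to reason more "
        "explicitly about the scale of the problems you discuss."
    ),
    "reasoning_transparency": (
        "Future applications may benefit from laying out the key "
        "considerations behind your conclusions more explicitly."
    ),
    "engagement": (
        "Demonstrating sustained engagement, e.g. through projects or "
        "writing, tends to strengthen applications."
    ),
}

def feedback_email(missing_virtues: list[str]) -> str:
    """Assemble a rejection email from pre-approved boilerplate."""
    paragraphs = [BOILERPLATE[v] for v in missing_virtues]
    closing = ("Please note that we are unable to respond to follow-up "
               "emails about this decision.")
    return "\n\n".join(["Thank you for applying."] + paragraphs + [closing])

# A reviewer only ticks which virtues were insufficiently demonstrated:
print(feedback_email(["scope_sensitivity", "engagement"]))
```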

Alternatively, if CEA wants to get into the weeds in a fair way: if an applicant asks for more info, flip 3 coins, and if at least one lands tails, say no. If all three land heads (a 1-in-8 chance), get into the weeds publicly (anonymizing applicant identity by default) so people get a sense of what's going on. This could provide in-depth transparency without causing CEA to get overwhelmed.

A hypothetical example that I would view as asking for trust would be someone telling me not to join an organization, but not telling me why. Or claiming that another person shouldn't be trusted, without giving details. I personally very rarely see folks do this. An organization doing something different and explaining their reasoning (e.g. giving feedback was not viewed as a good ROI) is not asking for trust.

Regarding why giving feedback at scale is hard: most of these positions have at best vague evaluation metrics, which usually bottom out in "help the organization achieve its goals." Any specific criterion is very prone to being Goodharted. And the people who most need feedback are in my experience disproportionately likely to argue with you about it and make a stink to management. No need to trust me on this, just try out giving feedback at scale and see if it's hard.

My admittedly limited understanding of the UK Civil Service suggests that its work is more amenable to quantification than that of GiveWell research analysts and Google software engineers. For example, if your job is working at the UK equivalent of a DMV, we could grade you on the number of customers served and some notion of error rate; that would seem pretty fair and somewhat hard to game. For a programmer, we could grade you on tickets closed and bugs introduced, but in contrast that is absolute trash as a sole metric (although it does carry some useful information).

[anonymous]

Any specific criterion is very prone to being Goodharted.

I don't think CEA should share specific criteria. I think they should give rejects brief, tentative suggestions of how to develop as an EA in ways that will strengthen their application next time. Growth mindset over fixed mindset. Even a completely generic "maybe you should get 80K advising" message for every reject would go a long way.

Earlier in this thread, I claimed that senior EAs put very little trust in junior EAs. The Goodharting discussion illustrates that well. The assumption is that if feedback is given, junior EAs will cynically game the system instead of using the feedback to grow in good faith. I'm sure a few junior EAs will cynically game the system, but if the "cynical system-gaming" people outweigh the "good faith career growth" people, we have much bigger problems than feedback. (And such an imbalance seems implausible in a movement focused on altruism.)

I'd argue that lack of feedback actually invites cynical system-gaming, because you're not giving people anywhere productive to direct their energies. And operating in a low-trust regime invites cynicism in general.

And the people who most need feedback are in my experience disproportionately likely to argue with you about it and make a stink to management.

Make it clear you won't go back and forth this way.

This post explains why giving feedback is so important. If 5 minutes of feedback prevents a reject from getting bummed out and leaving the EA movement, it could be well worthwhile. My intuition is that this happens quite a bit, and CEA just isn't tracking it.

Re: making a stink -- the person who's made the biggest stink in EA history is probably Émile P. Torres. If you read the linked post, he seems to be in a cycle of getting rejected, developing mental health issues from that, misbehaving due to those mental health issues, then experiencing further rejections. (Again I refer you to the "Cost of Rejection" post -- mental health issues from rejection seem common, and lack of feedback is a big factor. As you might've guessed by this point, I was rejected for some EA stuff, and the mental health impact was much larger and longer-lasting than I would've predicted in advance.)

I think we would prefer that rejects make a stink to management vs making a stink on social media. And 5 minutes of feedback to prevent someone from entering the same cycle Torres is in seems well worthwhile.

No need to trust me on this, just try out giving feedback at scale and see if it's hard.

Again, I do have significant knowledge related to giving feedback at scale. It isn't nearly as hard as people say if you do it the right way.

My admittedly limited understanding of the UK Civil Service suggests that its work is more amenable to quantification than that of GiveWell research analysts and Google software engineers. For example, if your job is working at the UK equivalent of a DMV, we could grade you on the number of customers served and some notion of error rate; that would seem pretty fair and somewhat hard to game. For a programmer, we could grade you on tickets closed and bugs introduced, but in contrast that is absolute trash as a sole metric (although it does carry some useful information).

This seems like a red herring? I assume anyone applying for an analyst position at GiveWell would be applying for a similar type of position at the Civil Service. White-collar work may be hard to quantify, but that doesn't mean job performance can't be evaluated. And I don't see what evaluation of on-the-job performance has to do with our discussion.

I assume anyone applying for an analyst position at GiveWell would be applying for a similar type of position at the Civil Service.

My experience with government positions is that they are legally required to have relatively formulaic hiring criteria. A benefit of this is that it's easy to give feedback: you just screenshot your rubric and say "here are the columns where you didn't get enough points".

So my guess is that even if there were literally the same position at GiveWell and the UK Civil Service, it would be substantially easier to give feedback for the Civil Service one (which of course doesn't necessarily mean that GW shouldn't give feedback, just that they are meaningfully different reference classes).
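For what it's worth, here is a minimal sketch of the kind of rubric-based feedback a formulaic process enables. The criteria, scores, and thresholds are invented for illustration; real civil-service rubrics will differ.

```python
# Minimal sketch of rubric-based feedback for a formulaic hiring
# process. Criteria and thresholds are hypothetical.
RUBRIC_MINIMUMS = {
    "written_communication": 4,
    "quantitative_analysis": 5,
    "domain_knowledge": 3,
}

def rubric_feedback(scores: dict[str, int]) -> list[str]:
    """List the rubric columns where the candidate fell short."""
    return [
        f"{criterion}: scored {scores[criterion]}, needed {minimum}"
        for criterion, minimum in RUBRIC_MINIMUMS.items()
        if scores[criterion] < minimum
    ]

print(rubric_feedback({
    "written_communication": 6,
    "quantitative_analysis": 3,
    "domain_knowledge": 3,
}))
# -> ['quantitative_analysis: scored 3, needed 5']
```

The feedback falls straight out of the scoring sheet, which is the point: the hard part is having an explicit rubric in the first place.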

I don't think CEA should share specific criteria. I think they should give rejects brief, tentative suggestions of how to develop as an EA in ways that will strengthen their application next time. Growth mindset over fixed mindset. Even a completely generic "maybe you should get 80K advising" message for every reject would go a long way.

The post you linked says:

When we have a specific idea about what would improve someone’s chances (like “you didn’t give much detail on your application, could you add more information?”) we’ll often give it. 

I guess you would rather they say "always" instead of "often," but otherwise it seems like what you want? And my recollection is that even the generic rejection emails do contain generic advice, like linking to 80K.

I guess this is kind of a tangent on the thread, but for what it's worth I'm not sure that EAG is actually doing something different from what you are suggesting.

(Note: I work for CEA, but not on the events team.)

This thread is the kind of tiring back and forth I'm talking about. Please, try organizing feedback for 5k+ rejected applicants for something every year and then come back to tell me why I'm wrong and it really is easy. I promise to humbly eat crow at that time.

[anonymous]

For what it's worth, I'm also feeling quite frustrated. I've been repeatedly giving you details of how an organization I'm very familiar with (can't say more without compromising anonymity) did exactly what you claim is so difficult, and nothing seems to get through.

I won't trouble you with further replies in this thread :-)

You can see how the lack of details is basically asking me to... trust you without evidence?

Edit: to use less 'gotcha' phrasing: anonymously claiming that another organization is doing better on feedback, but not telling me how, is asking me to blindly trust you for very little reason.

I don't think feedback practices are widely considered secrets that have to be protected, and if your familiarity is with the UK Civil Service, that's a massive organization where you can easily give a description without unduly narrowing yourself down.

I can confirm that my experience at Google was similar, as someone who both went through the application process as an applicant and served as an interviewer (however, I was never on a hiring committee or explicitly responsible for hiring decisions). That includes the slowness, the intentional ignorance of how the candidate did in earlier stages (I believe we're technically barred from reading earlier evaluations before submitting our own), and the training that very strongly forbids giving candidates feedback.

Another thing I'll add:

However, GiveWell's application process assumes that people work entirely in isolation.

[...]

There is no instruction to, say, cover issue X or focus on area Y, even though in a real workplace those instructions would be provided (or at least discussed before the task).

Hence, despite GiveWell's claims that the exercises reflect work tasks, that clearly is not the case: they do not reflect how work is actually conducted in a well-functioning organisation. (And if they do reflect how work is conducted at GiveWell, then that just proves that GiveWell is not a well-functioning organisation.)

I wonder if this is just a workplace cultural difference. In almost every job I've had, being able to independently come up with an adequate solution to a tightly scoped problem, given minimal or no additional task-specific instructions, is sort of the baseline expectation of junior workers in the "core roles" of the organization (e.g. software engineers at a tech company, or researchers in an EA research org). Now, it's often better if there's more communication and people know, or are instructed, to seek help when they're confused, but neither software engineering nor research is an inherently minute-to-minute collaborative activity.

I personally agree with the OP's assessment that EA orgs should give feedback to final-round applicants, and have pushed for this before. However, I don't think Google's hiring process is particularly dysfunctional, other than maybe the slowness (I do think Google is dysfunctional in a number of other ways, just not this one).

It makes no sense to compare GiveWell and Google. Alphabet has around 130,000 employees; GiveWell has what, a few dozen? Obviously organizational dysfunction grows with size, so GiveWell should be compared to small firms and other small non-profits. Now, everyone recognizes that even big companies could probably be much more efficient, and Google in particular has got extremely fat and lazy off the back of the massive profitability of search (which is why it keeps buying interesting things only to fail to do anything with them and then kill them), so that should also be factored in.

Google, quite frankly, is also in a position to set terms in the hiring marketplace. It can fuck people around and will always have an endless stream of quality talent wanting to work there anyway. GiveWell, by contrast, is going to be reliant on its own reputation, and that of the wider EA movement, to attract the people it wants. It is not in the same position, and reputation matters.

I spent quite a bit of time last year looking into the hiring processes/recommendations of various big companies, and Google was generally the one I came away most impressed/convinced by, in terms of how much I expect their hiring decisions to correlate with employee quality. I'd actually claim it's much better than most other companies (of any size), and probably has had a positive influence on hiring as a whole through people copying their processes.

I'm not convinced it needs to take as long as it does, and that might be evidence of bloat/dysfunction. But other than that, I don't think Google's hiring process is dysfunctional.

People and organizations figure out things much harder than hiring well. Compare running one of the most-used search engines on the planet with laying out a couple of assessments, having a couple of people look at them, and handling a bit of communication by email and phone, all with reasonable promptness; the latter demands less than one would expect of everyday postal services.

This post is quite informative, but at points it is written in a needlessly harsh tone.

E.g. "If GiveWell were serious about welcoming outsiders' input into what they could do better, they'd work with experts to improve their hiring process. But they're not a serious organisation, so I suspect they'll ignore this."

I read this part as venting some anger after being rejected (in a frustrating way!), which is understandable. But it makes it harder for me to place the post more broadly, as I worry that other parts may be similarly exaggerated, or that the focus on the negative may omit what would be needed for a representative picture of the application process.

Still, I found this informative and upvoted. I wanted to mention the above as it may explain the voting pattern.

Thanks for explaining! (NB. I didn't write the original post, but I'm interested in hiring processes and in keeping the forum civil.)

Hi, Bob,

We're very sorry to hear that you had a bad experience! We take feedback like this seriously and have passed it on to the senior staff in charge of research hiring. 

Responding to all applicants in as timely a manner as we'd like has been challenging, in large part due to understaffing on the research team, but we are working to improve on this. We invite others who may be considering applying for one of our research positions to review our FAQs for more information about the hiring process, including the typical timeline.

Thank you for sharing your concerns!

Best,

Miranda Kaplan, GiveWell Communications Associate

I'm surprised this hiring process took so long. I've previously applied for a job at OpenPhil (similarly getting to fairly late stages before being rejected) and it took much less time than this. Our organization also operates a fairly GiveWell-like hiring process, and most[1] hiring rounds take significantly less time.

Other than that, I'm not particularly impressed/convinced by the claims made here, and (weakly) downvoted the post. I don't think people involved in hiring should modify their processes based on this post, other than maybe increasing the weight they put on trying to make hiring proceed as quickly as possible.

[1] One recent hiring round for a senior position has proven an exception to this.

Thanks for explaining your vote! I agree, more promptness in the process is a good takeaway. Also, looking into how applications are rejected: At the later stages, a phone call is a much better choice than an email, I would say. (NB again: I didn't write the original post, but I'm interested in hiring processes and in keeping the forum civil.)

Also, looking into how applications are rejected: At the later stages, a phone call is a much better choice than an email, I would say.

This might be a nice thing to do, but I definitely don't think it's required, and I don't think a lack of it is evidence that GiveWell's hiring is unprofessional or in need of reform.

I don't have all that much sympathy for a candidate who gets angry because they got a boilerplate-ish rejection email; this is widespread standard practice. Getting rejected sucks, obviously, and I have sympathy for that, but I don't think pressuring orgs to take on burdensome practices to mitigate that is likely to be a good use of resources.

I agree that a rejection email isn't evidence that GiveWell is worse than other places. At the same time, even though it's standard practice, an organization can do better. A two-minute phone call to each of the few remaining candidates at later stages isn't that burdensome and has several benefits:

  • It makes the organization stand out as one that cares about applicants. Which is good because organizations compete for talent.
  • It maintains the relationship with the rejected candidate. Which is good because a candidate who got to the later stages might be fit for other roles in the future.
  • It makes rejection hurt less, which is good in and of itself.

Now, dan.pandori says he would find a phone call rejection off-putting. So it becomes a question of degrees: What share of people would find it off-putting, depending on how well or badly it's done?

It makes rejection hurt less, which is good in and of itself.

Why would rejection hurt less in a call than an email?

Because with email it's easy to read the rejection in a tone of ‘you suck, we're sending you boilerplate niceties to get rid of you’, which is not possible with a phone call. (Unless the caller makes it sound like boilerplate niceties; I'm not saying such calls are easy. Email is the easy cop-out.) Something like that. Have you had the experience where you keep communicating with someone by text and get more and more annoyed with them, then you get on a call and all the annoyance melts away because hearing a voice reminds you of the other person's humanity? Perhaps it's just me who thinks strange things.

Ultimately it's an empirical question and my prediction is that on balance, a phone call has more value.

Was going to say the same. I've only ever been rejected over email (or ghosted entirely). I would also find it off-putting to get a phone call rejection. I guess organizations can choose to call if they wanted, but I wouldn't personally encourage it.

What we did in RP Longtermism's most recent hiring rounds (not sure if it's applicable to other departments/teams) was send rejections via email and offer rejected final-round candidates a chance to call with someone on the team if they wanted to. This lets candidates opt in to talking more with team members if and only if they want to, and to do so at their own pace, so they can call when they're emotionally ready.

What share of people took you up on that? Did anyone comment on the offer?

I don't have statistics off the top of my head, but I want to say more than half. I think people were positive about it, but it's hard to get accurate takes when there are such strong incentives for people to just generally seem positive here, so I wouldn't take the positive sentiment there too seriously.

I see. Thanks!

What would you find off-putting about it?

Phone calls for me are socially awkward and I generally want some time to privately process rejection rather than immediately need to have a conversation about it. Also I generally keep my phone at home during business hours so it's quite likely I'd need to spend half an hour playing phone tag.

Good to know, thanks!

For completeness, my idea of a rejection phone call (derived from https://www.manager-tools.com/2014/11/how-turn-down-job-candidate-part-1) is:

  • You call, greet the person, say in the first sentence that you won't be making an offer, say a few more short sentences, react to any responses, then hang up. You don't make it a conversation. The important thing is that they hear your voice.
  • It's fine to speak on voicemail and for the other person not to call back. This avoids phone tag.

Note that Manager Tools doesn't always have the most airtight arguments, but they tend to have tested their core guidance (which includes hiring) empirically.

Sabs

Addendum: if GiveWell does this, they're hurting not only their own rep but also the rep of other EA orgs. Talent may well no longer wish to apply to EA orgs in general after having had such a bad experience with one of the most prominent ones. This would be a shame, especially since other orgs may well have much better hiring practices!

Exactly! I'm definitely going to think twice about applying to any EA organisation that requires such an involved application process.

I both descriptively agree with you and normatively wish this weren't the case. I think the world would probably be better if EA orgs had more leeway to experiment, and more of a mandate to try out new things without worrying too much about the amorphous sense of whether it benefits or hurts the EA brand/identity.

But in the world we currently live in, I definitely agree with you that EA orgs' identities are coupled together in the minds of job applicants, and organizations in the ecosystem doing poorly makes it harder for others.

Meta comment: There are many downvotes, but barely any comments. Feels a little uncivil. Certainly, the post has disagreeable points. But it's useful input, too.

Bob

Returning to this after some time to consider everything: I stand by everything I said. The attempts to defend GiveWell's approach here hold no weight whatsoever. Just as an example, the attempt to defend GiveWell's discarding of information from previous rounds is simply absurd. It is the very definition of inefficiency not to use all of the information available to one when making a decision.

And it shows exactly how ridiculous GiveWell's approach to recruitment is. Their focus on hiring from only a tiny subset of universities (that have now been found to be racist and toxic), along with their ignoring relevant information already revealed during the recruitment process, indicates that GiveWell is not serious about hiring the best people. The process needs serious reform; knocking the whole thing down and starting from scratch would be the best approach.
