Researcher at Giving What We Can.
We did correspond via email, but yes that's right - we didn't have a video call with any candidates until the work trial.
I think there's a case that we should have had a call before then, as suggested by one of the candidates who gave us feedback:
One helpful suggestion they offered us was running a Q&A session with each candidate just before the work trial. This could have been an opportunity to meet with them more casually and discuss any concerns they might have about the work trial.
The reason I'm unsure whether that would have been worthwhile is that it would have lengthened the process (in our case, due to the timing of leave commitments, the delay would have been considerable).
Yes, we are aiming to publish this next week, and it should include an explanation of the delay. (Also thanks for checking in on this - the accountability is helpful.)
I don't have any particularly strong views, and would be interested in what others think.
Broadly, I agree that more specificity/transparency is helpful, though I'm not convinced it isn't also worth asking, at some stage of the application, an open-ended question like "Why are you interested in the role?". I'm not sure I can explain/defend my intuitions here much right now, but I'd like to think more on it when I get around to writing some reflections on the Research Communicator hiring process.
I'm not sure I follow what you mean by transparency in this context. Do you mean being more transparent about what exactly we were looking for? In our case we asked for <100 words on "Why are you interested in this role?" and "Briefly, what is your experience with effective giving and/or effective altruism?" and we were just interested in seeing if applicants' interest/experience aligned with the skills, traits and experience we listed in the job descriptions.
In the hiring round I mentioned, we did time submissions for the work tests, and my impression is that the way we did so worked out fairly well. Having a timed component for the initial application is also possible, but might require more of an 'honour code' system, as setting up a process that allows for verification of the time spent is a pretty big investment for the first stage of an application.
As a former applicant for many EA org roles, I strongly agree! I recall spending, on average, 2-8 times longer on some initial applications than the job ads estimated.
As someone who just helped drive a hiring process for Giving What We Can (for a Research Communicator role) I feel a bit daft having experienced it on the other side, but not having learned from it. I/we did not do a good enough job here. We had a few initial questions that we estimated would take ~20-60 minutes, and in retrospect I now imagine many candidates would have spent much longer than this (I know I would have).
Over the coming month or so I'm hoping to draft a post with reflections on what we learned from this, and how we would do better next time (inspired by Aaron Gertler's 2020 post on hiring a copyeditor for CEA). I'll be sure to include this comment and its suggestion (having a link at the end of the application form where people can report how long it actually took to fill the form in) in that post.
Thanks for this post! I appreciate your writing, and also appreciated the images you included -- they made the post more fun to read.
I wrote some feedback privately which the author thought would be good to share publicly, so this is a lightly edited version of that feedback:
Thanks for conducting this impact assessment, for sharing this draft with us before publishing it, and for your help with GWWC's own impact evaluation! A few high-level comments (as a researcher at GWWC):
Regarding the difference between how you modelled the value of the GWWC Pledge and how we did so:
Thanks again for your work!
Hi Michael, thank you for the response.
No problem!
Regarding:
Also, wouldn't the above 'x-risk discount rate' be 2% rather than 0.2%?
There was a typo in my answer before: 1 - (1 - 1/6)^(1/100) ≈ 0.0018, which is ~0.2% (not 0.2), and is a fair amount smaller than the discount rate we actually used (3.5%). Still, if you assigned a greater probability of existential risk this century than Ord does, you could end up with a (potentially much) higher discount rate. Alternatively, even with a high existential risk estimate, if you thought we were going to find more and more cost-effective giving opportunities as time goes on, then at least for the purpose of our impact evaluation, these effects could cancel out.
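To spell out the arithmetic behind that number (this just restates the calculation above, assuming Ord's 1-in-6 estimate of existential catastrophe this century is spread as a constant annual rate over 100 years):

$$r_{\text{annual}} = 1 - \left(1 - \tfrac{1}{6}\right)^{1/100} \approx 0.0018 \approx 0.2\% \text{ per year}$$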
I think if we spent more time trying to come to an all-things-considered view on this topic, we'd still be left with considerable uncertainty, and so I think it was the right call for us to just acknowledge that uncertainty and take the pragmatic approach of deferring to the Green Book.
In terms of the general tension between potentially high x-risk and the chance of transformative AI, I can only speak personally (not on behalf of GWWC). It's something on my mind, but it's unclear to me what exactly the tension is. I still think it's great to move money to effective charities across a range of impactful causes, and I'm excited about building a culture of giving significantly and effectively throughout one's life (i.e., via the Pledge). I don't think GWWC should pivot and become specifically focused on one cause (e.g., AI) and otherwise I'm not sure exactly what the potential for transformative AI should imply for GWWC.
We did!
Our team put a lot of thought into the job description, which highlights the essential and desirable skills we were looking for. Each test was written with these criteria in mind, and we also used them to help reviewers score responses.[1] This helped reviewers provide scores more consistently and purposefully. Just to avoid overstating things, though, I'd add that we weren't just trying to legalistically make sure every question had a neat correspondence to previously written criteria, but were instead thinking "is this representative of the type of work the role involves?"
This is probably a bit more in the weeds than necessary, but although the initial application questions were written with clear reference to the essential/desirable skills in the job description, I didn't convert that into a clear grading rubric for reviewers to use. This was just an oversight.