
Over the last year, there’s been a lot of discussion about the “EA job market” and how to build an effective career in an EA field.

A few months ago, I went through my first hiring process from the employer side when I found a part-time copyeditor to work on CEA’s social media posts (and a few other tasks). I thought my process might be interesting to people who've been following the aforementioned discussion, so I’m writing up my notes in this post.

Meta: It can be legally tricky to write job application forms, conduct interviews, and provide feedback to applicants. (Though feedback can also be really valuable!) I recommend consulting your local HR expert before attempting to hire.

 

Statistics on the hiring process

The initial application was meant to give me information about applicants’ editing skills and experience, as well as their familiarity with EA (which I felt would be helpful for the role, given the material they’d be editing and sometimes writing). 

Applicants were asked to edit half of a transcript of an EA Global talk generated by a transcription service, which contained many errors. They were given a two-hour limit to make it as clean and readable as they could. Only a few applicants didn’t finish the full edit, some because they went over the time limit and others because they applied very soon before the deadline. It’s possible that some people ignored the limit; I spot-checked the edit history of some of the best transcripts and didn’t see this, but I didn’t check all transcripts in this way.

Applicants were also asked to provide some information about themselves; see the next section for a link to the full job description.

Number of applicants who completed the initial work trial: 183

Number of applicants who scored at least “1” on each of two 0-3 scales (one for EA/editing experience, one for editing skill): 147

(I describe my system in more detail below, but you can think of this as “the number of people who followed the instructions, seemed to be fluent in English, and indicated a genuine interest in the position.”)

Number of applicants who reached the interview stage: 21

Number of applicants who reached the “final work trial” stage: 8

 

The job description

Here’s the description I used to advertise the position. I shared it through the EA Newsletter, the 80,000 Hours job board, the EA Job Postings Facebook group, and EA Work Club. I didn’t try to track how many candidates came through each source. The position was posted in the first week of June, and was open until the last day of June.

Some thoughts on how I handled this process, and what I wish I’d changed:

I didn’t have a good sense for how many people would apply, so I erred on the side of having a more “open” description: I described an “ideal” candidate, but didn’t set out many strict requirements.

On the one hand, this led to many more people applying than I had expected, and quite a lot of applicant time being invested in a work test with minor (if any) benefit to the applicants themselves. This makes me wish I’d done more research beforehand to better understand how many people tend to apply for these positions — for example, by asking GiveWell about their past experience hiring people to write up conversation notes (a position with similarly loose qualifications).

On the other hand, had I screened for professional editing experience or professional EA experience, I might have missed out on some of my best candidates, perhaps including the person I eventually hired.

I could have saved even more applicant time by asking contacts in the EA community for references; several candidates I chose to interview were people I expect I’d have found through this method. However, since I didn’t think the position would require a strong EA background to be done well, I wanted to make the process more open and give chances to people looking for their first EA-aligned work experience.

 

The initial work test

The most important skill for the position was basic copyediting: Even if someone had a strong resume/background and a lot of passion for EA, I still needed to be able to trust their edits and minimize the time I spent reviewing their work.

Things I think went well with this test:

  • I was able to identify a set of “tricky edits” that did a good job of separating the best editors from the rest of the applicants. This let me focus mostly on those spots and save time reading through 150+ applications.

Things I wish I’d reconsidered:

  • Had I realized so many people would apply, I’d have aimed at a shorter test, perhaps with some special modifications so that a shorter passage would still have enough “tricky edits” to let me gauge an applicant’s skill. While I’m guessing that the average time spent by applicants was well under the two-hour limit, I still might have been able to save 50-100 hours of applicant time with a couple of hours of thought on my part (shaving even 20-30 minutes off each of 183 applications adds up to 60-90 hours). When hiring, you have control over a lot of applicants’ collective time — don’t squander it!

Scoring rubric:

  • 0 = I couldn’t open the Google Doc, or the applicant sent a document with no visible edits (e.g. they shared it with the wrong permissions, leaving their edits invisible, and didn’t change those permissions after I wrote to them letting them know what had happened)
    • This was surprisingly common. Upon reflection, I wish I’d written “and enable comments” as “share your doc with the setting ‘anyone with the link can comment’” — it was the last line of the instructions, and a lot of people seem to have missed or misunderstood it.
  • 1 = Quite a few mistakes; based on this test, I’d feel uncomfortable with this candidate publishing on CEA’s behalf, and I think I’d have to watch them closely
  • 2 = Few mistakes; copy almost as clean as what I think I’d have done; hits all or almost all of the tricky spots correctly
  • 3 = Almost zero mistakes; copy as clean or cleaner than what I think I’d have done; in addition to getting the tricky spots right, may also show some creative flair (e.g. using bullet points to break up a very long sentence with many examples of something)

The difference between, say, a 2 and a 2.5 was highly subjective. It’s likely that I gave slightly higher scores to candidates whose natural “flow” in editing sentences was similar to mine, even if other candidates’ work was perfectly grammatical/smooth. This was subconscious, but I think I endorse it; if I’m going to look over a lot of someone’s work, it helps if we have a similar sense for how sentences should sound. 

Score distribution:

  • Score < 1: 34
  • 1 ≤ score < 2: 110
  • 2 ≤ score < 2.5: 26
  • 2.5 ≤ score: 13 (two perfect “3” scores)

Feedback from the person I eventually hired:

“One of the reasons I applied was that your initial screen/first step was a test — not an interview. It made me respect CEA's attempts to avoid hiring bias (plus it backed up the org's claim that you are evidence-based).”

Grading the applications

In addition to the writing task, I asked applicants to send some information about themselves, which I graded on a 0-3 scale as well. I’m satisfied with the information I asked for; in hindsight, I can’t easily think of any questions I wish I’d included.

Scoring rubric:

  • 0 = the applicant didn’t send any information about themselves, or it was clear that they were totally unfamiliar with EA (or firmly opposed to it, etc.)
  • 1 = the applicant seemed familiar with the basic ideas of EA but not much beyond that; alternatively, they had some editing experience but no familiarity with EA
  • 2 = the applicant showed a lot of familiarity with EA and/or a moderately good sense of what CEA does; they have some experience with writing and editing, though maybe not in a role specific to copyediting
  • 3 = the applicant was a longtime member of the EA community and/or had previous EA work experience, or paired at least moderate EA knowledge with experience as a copyeditor

I cared more about performance on the work trial than on an applicant’s EA background; I think it takes less time and effort to become familiar with EA (assuming at least a basic inclination toward its ideas) than to become a very polished copywriter/editor from a baseline of having only moderate skill. 

Within the application score, knowing about someone’s editing background was helpful, but I put less weight on that than on their EA background, because I had access to their editing task already. (For what it’s worth, editing experience did correlate quite positively with performance on the task.)

Score distribution:

  • Score < 1: 14
  • 1 ≤ score < 2: 94
  • 2 ≤ score < 2.5: 52
  • 2.5 ≤ score: 23 (four perfect “3” scores)

 

Second-round decisions

I calculated a “total” score by multiplying the 0-3 editing score by two, then adding the 0-3 application score (a rough sketch of the arithmetic appears after the list below). Nine candidates scored 7 or above, with many more between 6 and 7. I selected some but not all candidates from the 6-7 range, in part by using the following factors:

  • Excluding a couple of people who required substantially higher salaries than the $25/hour we stated in the ad; we might have paid such rates for an outstanding candidate, but not for one whose scores were similar to those of many other candidates
  • Adding a small bonus (0.5 points) for candidates within a short distance of Berkeley or Oxford, since it is mildly useful when our contractors can visit one of CEA's offices from time to time
  • Adding a small bonus (0.5 points) for candidates who included strong and relevant personal recommendations (e.g. a professor who praised a candidate’s work as an editor for a book they had published)
  • Adding a small bonus (0.5 points) for people who submitted especially well-written applications. This didn’t mean beautiful flowing prose (some people who got this bonus used bullet points), but it did mean going beyond the standard “list of accomplishments” in a way that helped me understand:
    • A candidate’s writing style (important for a job where some tasks involve original writing), or:
    • What they might be like to work with (e.g. candidates who included a brief summary of their most relevant experience, but also a link to a more complete summary in case I found it useful to read further — or who did various other things to make the evaluation process smoother on my end)

This isn’t an exhaustive list of factors that mattered (there may have been a dozen elements in any given application that made me more or less inclined to move a candidate to the next round), but it covers all the most important points.
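To make the arithmetic concrete, here’s a minimal sketch of the scoring formula as a short Python function. The code is purely illustrative — I’m describing the process, not a tool I actually used — and the parameter names are invented for this post:

```python
def total_score(editing: float, application: float,
                near_office: bool = False,
                strong_reference: bool = False,
                well_written: bool = False) -> float:
    # Editing skill (0-3) counts twice as much as the application (0-3).
    score = 2 * editing + application
    # 0.5-point bonuses for the factors listed above.
    score += 0.5 * sum([near_office, strong_reference, well_written])
    return score

# Example: a 2.5 editing score and a 2.0 application score, plus one bonus.
print(total_score(2.5, 2.0, strong_reference=True))  # 7.5
```

Note that 7 points wasn’t a hard cutoff: candidates at 7 or above generally advanced, the bonuses mostly mattered for people in the 6-7 range, and a substantially higher rate requirement could rule someone out regardless of their score.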

 

Follow-up and feedback

Of the candidates I did not interview, most got the following email:

Thank you for applying to the freelance copyeditor position at CEA. I appreciate your taking the time to complete the initial work test, and your interest in our mission.

After careful consideration, I've decided not to move forward with your application.

Please don’t take this as a sign that you aren't a capable editor, or that you shouldn’t apply for positions with other organizations connected to effective altruism. More than 180 people applied for the position, and many people with strong qualifications didn’t pass the first stage. 

If you have any further questions, please let me know; I'm open to providing individual feedback, though it may be brief. And I wish you the best of luck with any other editing jobs for which you may apply!

However, 28 non-interviewed candidates demonstrated strong editing skills and had work trial scores competitive with some of the candidates I interviewed. I expected these candidates to be competitive applicants for other writing/editing jobs at EA orgs that might open up in the future. They got the following email:

Thank you for applying to the copyeditor position at CEA. I appreciate your taking the time to complete the work test, and your interest in our mission.

More than 180 people applied for the position, and many people with strong qualifications didn’t pass the first stage. After careful consideration, I’ve decided not to move forward with your application.

However, you were one of a small number of applicants who, despite not passing to the second round, made unusually strong edits. Based on your work test, I think you might be a good candidate for other jobs that involve writing or editing for organizations connected to effective altruism — most of which don’t have nearly as many applicants. I encourage you to apply to those positions in the future (if they interest you).

If you have any further questions, please let me know; I wish you the best of luck with your future applications!

(Note that I didn't include the "feedback" note here: In retrospect, this was a mistake. While high-scoring applicants may not have needed editing advice, many people asked for my thoughts on their resumes/intro emails, and this seemed valuable to provide; I wish I'd gotten more such requests.)

Candidates I did interview got an email which included the following language, meant to help them understand the process and decide whether they wanted to keep investing time in the position. (For example, someone might have been willing to stay in the running for a freelance gig if they were competing with two other people, but not with twenty.)

For context, this is what I have planned for the rest of the application process:

  1. Interviews with candidates who passed the first round (21 out of 183 applicants)
  2. A second work test for approximately 10 of those candidates, based on their interviews and a closer examination of their original work tests.
  3. One person chosen to take the position — though it’s possible that work might be split among multiple people, or that other applicants might be asked to take the position if the first person hired becomes unable to continue the work.

I strongly expect to make my final decision by the end of August. 

Please let me know if you’d require an earlier decision in order to be able to take the job.

Nearly two dozen applicants (of those I didn’t interview) asked for feedback. I responded to them with specific notes on their particular applications, including editing mistakes and areas where I felt uncertain about their experience, as well as positive feedback (many of these candidates did make strong edits, or wrote excellent application emails). 

Feedback on my feedback (when I received it) was highly positive; it seems as though people really appreciated hearing back from a “hiring manager”. Sending those notes took a fair amount of time, but I’m glad I did it; it seems to have been helpful to some of the applicants, and I hope that it also made them feel more positive about CEA, and about EA in general. I’d cautiously recommend that other organizations do the same if they can spare the time and trouble (again, legal trickiness).

 

Interviews

Every candidate offered an interview chose to schedule one. The interviews had less structure than I’d have liked; while I asked each candidate the same set of initial questions, alongside specific questions about their application, I didn’t have a scoring rubric in mind. 

I wound up giving each interview a score “out of 10” (actual scores ranged from 6 to 9) after it ended; because these scores weren’t tied to a rubric, it was hard to directly compare candidates later on. However, the candidate with the strongest interview, whom I eventually hired, also had among the strongest trial tasks in both rounds, so I didn’t need to think too hard about these comparisons.

How I selected candidates for the second work trial (factors ordered from most to least important):

  1. The strength of their initial application (still a major factor, and weighted more heavily than the interview)
  2. How certain I was that they’d be available for the position for the right number of hours, and for a long time to come, despite its part-time nature (discussing this was a part of the interview)
  3. How engaged and curious they were during the interview. Did they ask questions that showed they were seriously thinking about how the position would work out for them? Did they seem to be thinking carefully about my questions before they answered?

 

The final work trial

The eight strongest candidates received this task as the final stage of the application.

I’m happy with the first two tasks (I got a great sense for how the candidates thought about social media, plus a lot of useful suggestions for improvements to the EA Newsletter). But I don’t think the third task wound up mattering much; it’s possible that I should have skipped it to save the candidates’ time.

The most important factors in my evaluation of this test (in no particular order):

  • Were the sample social media posts written such that I’d have been happy to see them appear on our feeds without further changes? If not, did the posts’ problems seem like they’d be easy to prevent with the right advice to the author?
  • What fraction of known typos and layout oddities (e.g. mismatched quotation marks) did the candidate catch? I’m not too concerned about mismatched quotation marks, but this felt like a good measure of attention to detail.
  • Was the Newsletter advice written such that I could easily act on it, or at least run an experiment to test its impact?
  • When giving advice, did the candidate acknowledge/account for their uncertainty about the Newsletter’s purpose, audience, and history? Did they know what they didn’t know?

Note: The set of “known” typos/oddities consisted of all the different issues that candidates found; I didn't re-copyedit my own newsletter for this task.
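If you want to reuse this pooled approach, here’s a minimal sketch of how a “catch rate” against the pooled set might be computed. The candidate names and issue labels below are hypothetical; in practice, each entry would identify a specific typo or layout oddity:

```python
# Hypothetical findings: which issues each candidate flagged.
candidate_findings = {
    "candidate_a": {"typo_1", "typo_2", "mismatched_quote_1"},
    "candidate_b": {"typo_1", "typo_3"},
    "candidate_c": {"typo_2", "typo_3", "mismatched_quote_1", "typo_4"},
}

# The set of "known" issues is the union of everything anyone found.
known_issues = set().union(*candidate_findings.values())

# Each candidate's catch rate is the fraction of known issues they caught.
for name, found in sorted(candidate_findings.items()):
    rate = len(found) / len(known_issues)
    print(f"{name}: caught {len(found)}/{len(known_issues)} ({rate:.0%})")
```

One caveat of pooling: an issue that nobody caught never enters the denominator, so catch rates are relative to the field of candidates rather than to the true number of issues in the text.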

Three of the eight candidates had especially strong tests (particularly their Newsletter advice). I informed the top candidate that I wanted to offer her the position, and let the other two know that I was strongly considering them if the top candidate did not accept.

After thinking about the initial offer and negotiating briefly for a higher rate, she did accept the position, and is currently working on several CEA projects. (Her new rate was still much lower than those requested by candidates I excluded for their high rate requirements.)

She requested that I not use her name, but gave me permission to talk a bit about her background and application:

  • She had less background EA knowledge than most of the other candidates, but more professional experience.
  • I’m almost certain we’d have hired someone else had I made the job ad more narrow, or relied entirely on references from my contacts.
  • Her edits on the first trial and her posts on the second trial were nearly perfect. It was hard to spot any changes I’d have made as her editor in either case. Since one of my central goals in making this hire was to save time, I was happy to find someone whose work didn’t seem like it would need much double-checking. (Several other applicants fit this description, too.)

 

Keeping in touch with candidates (want to hire someone?)

Hiring for this position took dozens of hours of my time, and hundreds of hours of candidates’ time. I want to squeeze as much value as I can from the process.

So, in addition to hiring a candidate, I’ve also kept a record of the other applicants who most impressed me, so that I can let them know if I hear about promising opportunities. I’ve already referred a few candidates for different part-time roles at other EA orgs, and I anticipate more chances to come.

(If you’re looking to hire someone for writing and/or editing, let me know!)

 

Any questions?

I’d be happy to respond to questions about the hiring process or anything else I’ve mentioned in this post. Please leave a comment or send me an email.

Comments

Great post, Aaron! I appreciate the detail you included.

Hi Aaron, can you also answer the following for me, please?

> So, in addition to hiring a candidate, I’ve also kept a record of the other applicants who most impressed me, so that I can let them know if I hear about promising opportunities. I’ve already referred a few candidates for different part-time roles at other EA orgs, and I anticipate more chances to come.

  1. How many people "most impressed you"?

  2. How many people have you already referred for different part-time roles at other EA orgs?

  3. How many people do you think EA orgs are hiring in this job type currently or within the last year?

  4. How many people on your list who didn't get hired do you expect to get hired elsewhere at EA orgs? (Gut feel, a guess based on past experience, anything.)

This is super useful; we're just about to go through a similar process (hiring a full-time editor). Thanks for sharing!

Could you share the job listing with me? I'd love to forward it on to some of the candidates!

And maybe this is a bit much: do you have the distribution of where you got your candidates from? Here is an example from EAF's hiring round.

I didn't collect information on where people heard about the position, though that would have been a good idea!

Nice post, Aaron! I have the following questions:

1. Could you also provide the scoring rubric and score distribution for the interview round and the final work trial round?

2. Within what time frame did you receive the 180+ applications?

  1. There was no formal written rubric for either round, and submissions for the final work trial weren't given numerical scores. As I noted in my post: I wound up giving each interview a score “out of 10” (actual scores ranged from 6 to 9) after I finished. (However, these scores were fairly subjective.)
  2. I began to post the job listing roughly a month before applications were due. I received the first few applications within a day or two, and the last few on the day of the deadline.

Thank you very much, Aaron. Are you then able to share the distribution of the scores for the interviews (21 people) and the final work trial (8 people)? I understand they are subjective; nevertheless, they were scores out of 10.

No, I'm not going to share that information. I don't think there's any value to it given the subjectivity, and I think that anyone trying to analyze it will be wasting their time.

(Also, the final work trials were not scored.)