John_Maxwell

Why aren't you freaking out about OpenAI? At what point would you start?

I also noticed this post. It could be that OpenAI is more safety-conscious than the ML mainstream. That might not be safety-conscious enough, but it seems like something to be mindful of if we're tempted to criticize them more than we criticize the less-safety-conscious ML mainstream. (For example, does Google Brain have any sort of safety team at all? Last I checked, they publish far more papers than OpenAI. Then again, Google Brain doesn't brand itself as trying to discover AGI--but I'm also not sure how correlated a "trying to discover AGI" brand is with actually discovering AGI.)

The Cost of Rejection

It sounds like you're saying that there are many EAs investing tons of time in doing things that are mostly only useful for getting particular roles at 1-2 orgs. I didn't realize that.

I don't know that. But it seems like a possibility. [EDIT: Sally's story was inspired by cases I'm familiar with, although it's not an exact match.] And even if it isn't happening very much, it seems like we might want it to happen -- we might prefer EAs branch out and become specialists in a diverse set of areas instead of the movement being an army of generalists.

The Cost of Rejection

I think part of our disagreement might be that I see Wave as being in a different situation relative to some other EA organizations. There are a lot of software engineer jobs out there, and I'm guessing most people who are rejected by Wave would be fairly happy at some other software engineer job.

By contrast, I could imagine stories like the following happening fairly frequently with other EA jobs:

  • Sally discovers the 80K website and gets excited about effective altruism. She spends hours reading the site and planning her career.

  • Sally converges on a particular career path she is really excited about. She goes to graduate school to get a related degree, possibly paying a significant opportunity cost in forgone earnings.

  • After graduating, Sally realizes there are actually about 3-4 organizations doing EA work in her selected area, and of those only 2 are hiring. She applies to both, but never hears back, possibly due to factors like:

    • She didn't do a great job of selling herself on her resume.

    • She's not actually applying for the role her degree+resume best suit her for.

    • It so happens that a lot of other people reading the 80K website got excited about the same thing Sally did around the same time, and the role is unexpectedly competitive.

    • The organization has learned more about what they're looking for in this role, and they no longer consider Sally's degree to be as useful/relevant.

    • Her resume just falls through the cracks.

At this point, Sally's only contact with the community is reading the 80K website and then not hearing back after putting significant effort into pursuing an EA career. Can we really blame her if she gives up on EA at this point, or at the very least starts thinking of herself as playing in "single-player" mode?

My point here is that we should distinguish between "effort the candidate expended on your hiring process" and "effort the candidate expended to get a job at your org". The latter may be far bigger than the former, but this isn't necessarily visible.

The same visibility point applies to costs to the org -- Sally may complain bitterly to her friends about how elitist the org is in their hiring / how elitist EA is in general, which might count as a cost.

Anyway, I think the total cost of giving feedback to everyone is probably the wrong number here -- really, you should be looking at the benefits relative to the costs for an individual applicant.

I also think it'd be worth trying experiments like:

  • Ask candidates who want feedback to check a box that says "I promise not to complain or cause trouble if I don't like the feedback."

  • Instead of saying "we can't hire you because you don't have X", spend less time making sure you're reading the resume correctly, and more time asking questions like "it looks like your resume doesn't have X, and we were hoping to find someone with X for this role." If they've got something to say in response, that's evidence that they really want the job -- and it might be worth letting them progress to the next stage as a way of validating your resume screen.

The Cost of Rejection

Candidates haven't interacted with a human yet, so are more likely to be upset or have an overall bad experience with the org; this is also exacerbated by having to make the feedback generic due to scale

...

Candidates are more likely to feel that the rejection didn't give them a fair chance (because they feel that they'd do a better job than their resume suggests) and dispute the decision; reducing the risk of this (by communicating more effectively + empathetically) requires an even larger time investment per rejection

Are you speaking from experience on these points? They don't seem obvious to me. In my experience, having my resume go down a black hole for a job I really want is incredibly demoralizing. I'd much rather get a bit of general feedback on where it needs to be stronger. And since I'm getting rejected at the resume stage either way, it seems like the "frustration that my resume underrates my skills" factor would be constant.

I'm also wondering if there is a measurement issue here -- giving feedback could greatly increase the probability that you will learn that a candidate is frustrated, conditional on them feeling frustrated. It's interesting that the author of the original post works as a therapist, i.e. someone paid to hear private thoughts we don't share with others. This issue could be much bigger than EA hiring managers realize.

The Cost of Rejection

On the topic of feedback... At Triplebyte, where I used to work as an interviewer, we would give feedback to every candidate who went through our technical phone screen. I wasn't directly involved in this, but I can share my observations -- I know some other EAs who worked at Triplebyte were more heavily involved, and maybe they can fill in details that I'm missing. My overall take is that offering feedback is a very good idea and EA orgs should at least experiment with it.

  • Offering feedback was a key selling point that allowed us to attract more applicants.

  • As an interviewer, I was supposed to be totally candid in my interview notes, and also completely avoid any feedback during the screening call itself. Someone else in the company (who wasn't necessarily a programmer) would lightly edit those notes before emailing them -- they wanted me to be 100% focused on making an accurate assessment, and leave the diplomacy to others. My takeaway is that giving feedback can likely be "outsourced" -- you can have a contractor / ops person / comms person / intern / junior employee take notes on hiring discussions, then formulate diplomatic but accurate feedback for candidates.

  • My boss told me that the vast majority of candidates appreciated our feedback. I never heard of any candidate suing us, even though we were offering feedback on an industrial scale. I think occasionally candidates got upset, but they mostly insulated me from that unless they thought it would be valuable for me to hear -- they wanted my notes to stay candid.

  • Jan writes: "when evaluating hundreds of applications, it is basically certain some errors are made, some credentials misunderstood, experiences not counted as they should, etc. - but even if the error rate is low, some people will rightfully complain, making hiring processes even more costly." I think insofar as you have low confidence in your hiring pipeline, you should definitely be communicating this to candidates, so they don't over-update on rejection. At Triplebyte, we had way more data to validate our process than I imagine any EA org has. But I believe that "our process is noisy and we know we're rejecting good candidates" was part of the standard apologetic preamble to our feedback emails. (One of the worst parts of my job was constant anxiety that I was making the wrong call and unfairly harming a good candidate's career.)

  • Relatedly... I'm in favor of orgs taking the time to give good feedback. It seems likely worthwhile as an investment in the human capital of the rejectee, the social capital of the community as a whole, and improved community retention. But I don't think feedback needs to be good to be appreciated -- especially if you make it clear when your feedback is low-confidence. As a candidate, I'm often asking which hoops I need to jump through in order to get a particular sort of job. If part of hoop-jumping means dealing with imperfect interviewers who aren't getting an accurate impression of my skills, I want to know that so I can demonstrate my skills better.

  • But I also think that practices that help you give good feedback are quite similar to practices that make you a good interviewer in general. If your process doesn't give candidates a solid chance to demonstrate their skills, that is something you should fix if you want to hire the best people! (And hearing from candidates whose skills were, in fact, judged inaccurately will help you fix it! BTW, I predict if you acknowledge your mistake and apologize, the candidate will get way less upset, even if you don't end up hiring them.) A few more examples to demonstrate the point that interviewing and giving feedback are similar competencies:

    • Concrete examples are very useful for feedback. And I was trained to always have at least one concrete example to back up any given assessment, to avoid collecting fuzzy overall impressions that might be due to subconscious bias. (BTW, I only saw a candidate's resume at the very end of the interview, which I think was helpful.)

    • Recording the interview (with the candidate's consent), so you can review it as needed later, is another thing that helps with both objectives. (The vast majority of Triplebyte candidates were happy to have their interview recorded.)

    • Using objective, quantifiable metrics (or standard rubrics) makes your process better, and can also give candidates valuable info on their relative strengths and weaknesses. (Obviously you want to be diplomatic, e.g. if a candidate really struggled somewhere, I think we described their skills in that area as "developing" or something. We'd also give them links to resources to help them level up on that.)

  • At Triplebyte, we offered feedback to every candidate regardless of whether they asked for it. I once suggested to my boss that we should make it opt-in, because that would decrease the time cost on our side and also avoid offending candidates who didn't actually want feedback. IIRC my boss didn't really object to that thought. It wasn't deemed a high-priority change, but I would suggest organizations creating a process from scratch make feedback opt-in.

BTW if any EA hiring managers have questions for me I'm happy to answer here, via direct message, or on a video call. I interviewed both generalist software engineers (tilted towards backend web development) and machine learning engineers.

What Makes Outreach to Progressives Hard

Just for reference, there's a group kinda like Resource Generation called Generation Pledge that got a grant from the EA Meta Fund. I think they've got a bit more of an EA emphasis.

Insights into mentoring from WANBAM

We are currently actively exploring how we can scale and provide mentoring support, in addition to WANBAM, to our community (those who are interested in / inspired by Effective Altruism) more broadly.

You probably thought of this, but I suppose you could move in more of an 80K-ish direction by asking mentees to take notes on the best generalizable advice they get in their mentoring conversations, then periodically publishing compilations of this (perhaps organized by topic). If I were a mentor, I think I'd be more willing to spend time mentoring if my advice was going to scale beyond a single person.

EA is a Career Endpoint

My sense is that Triplebyte focuses on "can this person think like an engineer" and "which specific math/programming skills do they have, and how strong are they?" Then companies do a second round of interviews where they evaluate Triplebyte candidates for company culture. Triplebyte handles the general, companies handle the idiosyncratic.

I used to work as an interviewer for Triplebyte. Most companies using Triplebyte put Triplebyte-certified candidates through their standard technical onsite. From what I was able to gather, the value prop for companies working with Triplebyte was mostly about (1) expanding their sourcing pipeline to include more quality candidates, and (2) cutting down on the amount of time their engineers spent administering screens to candidates who weren't very good.

Some of your comments make it sound like a Triplebyte-like service for EA would have to be a lot better than what EA orgs are currently doing to screen candidates. Personally, I suspect there's a lot of labor-saving value to capture if it's merely as good as (or even a bit worse than) current screens. It might also help organizations consider a broader range of people.

Introducing High Impact Athletes

Ryan Carey suggests that athletes could have an impact by giving EA presentations to high schoolers.

Geographic diversity in EA

But it's not easy to visit or live in an EA hub city like London or San Francisco, for most of the global population (financially, legally, for family reasons) ... Fewer like-minded people around you means you have to put in a lot more effort to stay engaged and informed

EA Anywhere might help :-)
