This is a special post for quick takes by a guy named josh. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

Would love for orgs running large-scale hiring rounds (say 100+ applicants) to provide more feedback to their (rejected) applicants. Given that in most cases applicants are already being scored and ranked on their responses, maybe just tell them their scores, their overall ranking, and what the next-round cutoff would have been, e.g.: prompt 1 = 15/20, prompt 2 = 17.5/20, rank = 156/900, cutoff for work test at 100.

Since this is already happening in the background (if my impression here is wrong, please lmk), why not make the process more transparent and release scores? It seems to require very little extra work beyond some initial automation.
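The "initial automation" could be as small as a template over the score sheet orgs already keep. A minimal sketch of what that might look like - all function and parameter names here are hypothetical, and a real round would pull these values from whatever spreadsheet already holds the scores:

```python
def feedback_line(scores: dict[str, float], max_per_prompt: float,
                  rank: int, total: int, cutoff_rank: int) -> str:
    """Format existing scoring data as a one-line feedback summary."""
    # One "score/max" entry per scored prompt.
    parts = [f"{prompt} = {score}/{max_per_prompt}"
             for prompt, score in scores.items()]
    parts.append(f"rank = {rank}/{total}")
    parts.append(f"cutoff for work test at {cutoff_rank}")
    return ", ".join(parts)

print(feedback_line({"prompt 1": 15, "prompt 2": 17.5}, 20, 156, 900, 100))
# → prompt 1 = 15/20, prompt 2 = 17.5/20, rank = 156/900, cutoff for work test at 100
```

From there, sending it is a mail merge over the rejected applicants' email addresses.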

I tried doing this a while back. Some things I think I worried about at the time:

(1) disheartening people excessively by sending them scores that seem very low/brutal, especially if you use an unusual scoring methodology;

(2) causing yourself more time costs than it seems at first, because (a) you find yourself needing to add caveats or manually hide some info to make it less disheartening to people, and (b) people ask you follow-up questions;

(3) exposing yourself to some sort of unknown legal risk by saying something not-legally-defensible about the candidate or your decision-making.

(1) turned out to be pretty justified, I think: at least one person expressed upset/dissatisfaction at being told this info. (2) definitely happened too, although maybe not that many hours in the grand scheme of things. (3) we didn't get sued, but who knows how much we increased the risk.

Jamie, I've been contemplating writing up a couple of informal "case study"-type reports on different hiring practices. My intention would be to let EA orgs learn how several different orgs do hiring, to highlight some best practices, and generally to encourage organizations to improve their methods. How would you feel about writing up a summary, or having a call with me, so I can understand how you tried giving feedback and which specific aspects caused challenges?

Unfortunately this was quite a while ago, at the last org I worked at; I don't have access to the relevant spreadsheets, email chains, etc. anymore, and my memory is not the best, so I don't expect to be able to add much beyond what I wrote in the comment above.

This is something I would be interested in seeing! A lot of EA orgs already have public info on their hiring process (at least in a structural sense). I'd be more curious about what happens under the hood, 'scoring methodologies' in particular. 

Regarding "disheartening people": I once got feedback for a hiring round where the organization shared my scores, and even shared scoring info for the other (anonymized) candidates. It was the best and most accurate feedback data I have ever been given.

I scored very low, much lower than I had expected. Of course I felt sad and frustrated. I wish I had known more details about their scoring methodology, and part of me says it was an unfair process because they weren't clear on what I would be evaluated on. But I draw an analogy to getting rejected from anything else (such as a school application or a romantic partner): it sucks, but you get over it eventually. I felt bad for a day or two, and then the feelings of frustration faded away.

Okay, I definitely see those concerns! Unknown legal risk - especially when hiring across many different countries, with potentially different laws, at the same time - seems like a good reason not to release scores.

For me personally, getting a rejection vs. getting a rejection and being told I had the lowest score among all applicants probably wouldn't make much of a difference - it might even save me time spent on future applications for similar positions. But maybe releasing quartiles would be a better, less brutal alternative?
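Bucketing raw ranks into quartiles is a one-liner on top of the existing score sheet. A minimal sketch under the assumption that higher scores are better; the function name and bucket labels are hypothetical:

```python
from bisect import bisect_right

def quartile_feedback(score: float, all_scores: list[float]) -> str:
    """Return a coarse quartile bucket instead of a raw rank."""
    ranked = sorted(all_scores)
    # Fraction of applicants scoring at or below this score.
    percentile = bisect_right(ranked, score) / len(ranked)
    if percentile >= 0.75:
        return "top quartile of applicants"
    elif percentile >= 0.50:
        return "second quartile"
    elif percentile >= 0.25:
        return "third quartile"
    return "bottom quartile"

scores = [12.0, 15.5, 17.0, 18.5, 9.0, 14.0, 16.0, 11.0]
print(quartile_feedback(16.0, scores))
# → top quartile of applicants
```

A rejected applicant then learns roughly where they stood without seeing an exact (and potentially brutal) rank.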

I think a general, short explainer of the scoring methodology used in a hiring round could/should be released to applicants, if only for transparency's sake. So explainer + raw scores, with no ranking, might be another alternative?

Maybe I am misguided in my idea that 'this could be a low-time-cost way of making sure all applicants get a somewhat better sense of how good/bad their applications were.' I have, after all, only ever been on the applicant side of things, and the current system does seem to be working fine at generating good hires.

(I run hiring rounds with ~100-1,000 applicants.) I agree with Jamie here. However, if someone was close to a cutoff, I do specifically include "encourage you to apply to future roles" in my rejection email. I also always respond when somebody asks for feedback proactively.

Is revealing scores useful to candidates for some other reason not covered by that? It seems to me the primary reason (since it sounds like you aren't asking for qualitative feedback to also be provided) would be to inform candidates as to whether applying for future similar roles is worth the effort.

> revealing scores useful to candidates for some other reason not covered by that

Honestly, I hadn't even thought of encouraging them to apply for future roles. My main thought regarding feedback is to allow them to improve. If you assess my work and then tell me the ways in which it falls short, that allows me to improve: I know what to work on. An example would be something like "Although your project plan covered a lot of the areas we requested, you didn't explain your reasoning for the assumptions you made. You estimated that a [THING] would cost $[AMOUNT], but as the reader I don't know where you got that number. If you had been transparent about your reasoning, you would have scored a bit higher." or "We were looking for something more detailed, and your proposal was fairly vague. It lacked many of the specifics that we had requested in the prompt."

Quantitative scoring doesn't really give you that, though!

I think scores would be good in the potentially time-saving way you outlined. I also think that having a more nuanced sense of how well my application - or specific parts of it - was perceived/scored would be helpful.

My experience asking for qualitative feedback has been mixed: sometimes I have been flat-out ignored, at other times I have gotten the usual 'no can do due to lack of operational capacity', and sometimes I have actually gotten valuable personal feedback.

My idea is that there has to be a way to make some feedback beyond yes/no automatically available to all applicants. Maybe simply being told one is a particularly strong applicant and should reapply or apply to similar roles is good (and kind) enough. 

I suppose I'm skeptical that quant scores in an auto-sent email will actually give you a nuanced sense. But I do see how, e.g., if over time you realize it's always your interview, or always your quant question, that scores poorly, that is a good signal.

I do think being kind is an underrated part of hiring!

Have also tried this, although most of our applicants aren't EAs. People who reapply after getting detailed feedback usually don't hit the bar.

We still do it, in part because we think it's good for the applicants, and in part because people who make a huge improvement on attempt 2 usually make strong long-term hires.

That actually seems like a really strong signal of something important: whether people can improve when given a modest amount of guidance/support. I'd certainly be more interested in hiring someone who can than someone who can't.

But I'm also impressed that you provide feedback to candidates consistently. I've always thought that it would be something fairly time-consuming, even if you set up a system to provide feedback in a fairly standardized way. Would you be willing to share a bit about how you/your team does feedback for rejected job applicants?

I view our hiring process as a constant work in progress. We look back at the application materials of everyone after their time with us, the best and worst performers alike, and try to figure out how we could have told ahead of time. Part of that is writing up notes. We use ChatGPT to make the notes more tactful, and then send them to the applicant.

Caveat: We only do this for people who show some promise of future admission. 
