Ben_Kuhn

I'm the CTO of Wave, where we're bringing financial infrastructure to sub-Saharan Africa.

Personal site (incl various non-EA-related essays): https://www.benkuhn.net/

Email: ben dot s dot kuhn at the most common email address suffix

Comments

The Cost of Rejection

Interesting. It sounds like you're saying that there are many EAs investing tons of time in doing things that are mostly only useful for getting particular roles at 1-2 orgs. I didn't realize that.

In addition to the feedback thing, this seems like a generally very bad dynamic—for instance, in your example, regardless of whether she gets feedback, Sally has now more or less wasted years of graduate schooling.

Early career EA's should consider joining fast-growing startups in emerging technologies

Top and sustainably fast-growing (over a long period) are roughly synonymous, but fast growth is the upstream thing that makes a startup a good learning experience.

Note that billzito didn't specify, but the important number here is userbase or revenue growth, not headcount growth; the former causes the latter, but not vice versa, and rapid headcount growth without corresponding userbase growth is very bad.

People definitely can see rapidly increasing responsibility in less-fast-growing startups, but it's more likely to be because they're over-hiring rather than because they actually need that many people, in which case:

  • You'll be working on less important problems that are more likely to be "fake" or busywork
  • There will be less of a forcing function for you to be very good at your job (because it will be less company-threatening if you aren't)
  • There will be less of a forcing function for you to prioritize correctly (again because nothing super bad will happen if you work on the wrong thing)
  • You're more likely to experience a lot of politics and internal misalignment in the org

(I'm not saying these applied to you specifically, just that they're generally more common at companies that are growing less quickly. Of course, they also happen at some fast-growing companies that grow headcount too quickly!)

The Cost of Rejection

It sounds like you interpreted me as saying that rejecting resumes without feedback doesn't make people sad. I'm not saying that—I agree that it makes people sad (although on a per-person basis it does make people much less sad than rejecting them without feedback during later stages, which is what those points were in support of—having accidentally rejected people without feedback at many different steps, I'm speaking from experience here).

However, my main point is that providing feedback on resume applications is much more costly to the organization, not that it's less beneficial to the recipients. For example, someone might feel like they didn't get a fair chance either way, but if they get concrete feedback they're much more likely to argue with the org about it.

I'm not saying this means that most people don't deserve feedback or something—just that when an org gets 100+ applicants for every position, they're statistically going to have to deal with lots of people who are in the 95th-plus percentile of "acting in ways that consume lots of time/attention when rejected," and that can disincentivize them from engaging more than they have to.

The Cost of Rejection

Note that at least for Rethink Priorities, a human[1] reads through all applications; nobody is rejected just because of their resume. 

I'm a bit confused about the phrasing here because it seems to imply that "Alice's application is read by a human" and "if Alice is rejected it's not just because of her resume" are equivalent, but many resume-screening processes (including e.g. Wave's) involve humans reading all resumes and then rejecting people (just) because of them.

The Cost of Rejection

I'm unfamiliar with EA orgs' interview processes, so I'm not sure whether you're talking about lack of feedback when someone fails an interview, or when someone's application is rejected before doing any interviews. It's really important to differentiate these because providing feedback on someone's initial application is a massively harder problem:

  • There are many more applicants (Wave rejects over 50% of applications without speaking to them and this is based on a relatively loose filter)
  • Candidates haven't interacted with a human yet, so are more likely to be upset or have an overall bad experience with the org; this is also exacerbated by having to make the feedback generic due to scale
  • The relative cost of rejecting with vs. without feedback is higher (rejecting without feedback takes seconds, rejecting with feedback takes minutes, i.e. roughly 10x longer; see the rough numbers sketched after this list)
  • Candidates are more likely to feel that the rejection didn't give them a fair chance (because they feel that they'd do a better job than their resume suggests) and dispute the decision; reducing the risk of this (by communicating more effectively + empathetically) requires an even larger time investment per rejection
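To put rough numbers on that, here's a back-of-envelope sketch; the per-rejection times are my own illustrative guesses, not Wave's actual figures:

```python
# Back-of-envelope sketch: assume 100 applicants per role and half rejected at the
# resume screen (per the figures above), with ~20 seconds per silent rejection vs.
# ~4 minutes to write even brief individualized feedback. All times are guesses.
applicants = 100
screened_out = applicants // 2       # rejected before any interview
silent_sec = 20                      # per rejection, no feedback
feedback_sec = 4 * 60                # per rejection, with written feedback

print(f"no feedback:   {screened_out * silent_sec / 60:.0f} minutes per role")    # ~17 min
print(f"with feedback: {screened_out * feedback_sec / 60:.0f} minutes per role")  # ~200 min
```

Per rejection the difference is only a few minutes, but across a hiring round it's the difference between a coffee break and half a day of reviewer time, before counting any back-and-forth with candidates who dispute the decision.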

I feel pretty strongly that if people go through actual interviews they deserve feedback, because it's a relatively low additional time cost at that point. At the resume screen step, I think the trade-off is less obvious.

Frank Feedback Given To Very Junior Researchers

I don't have research management experience in particular, but I have a lot of knowledge work (in particular software engineering) management experience.

IMO, giving insufficient positive feedback is a common, and damaging, blind spot for managers, especially those (like you and me) who expect their reports to derive most of their motivation from being intrinsically excited about their end goal. If unaddressed, it can easily lead to your reports feeling demotivated and like their work is pointless/terrible even when it's mostly good.

People use feedback not just to determine what to improve at, but also as an overall assessment of whether they're doing a good job. If you only give negative feedback, you're effectively biasing this process towards people inferring that they're doing a bad job. You can try to fight it by explicitly saying "you're doing a good job" or something, but in my experience this doesn't really land on an emotional level.

Positive feedback in the form "you are good at X, do more of it" can also be an extremely useful type of feedback! Helping people lean further into their strengths often yields as much improvement as helping them shore up their weaknesses, or more.

I'm not particularly good at this myself, but every time I've improved at it I've had multiple reports say things to the effect of "hey, I noticed you improved at this and it's awesome and very helpful."

That said, I agree with you that shit sandwiches are silly and make it obvious that the positive feedback isn't organic, so they usually backfire. The correct way to give positive feedback is to resist your default to be negatively biased by calling out specific things that are good when you see them.

Announcing "Naming What We Can"!

Looks like if this doesn't work out, I should at least update my surname...

My mistakes on the path to impact

I note that the framing / example case has changed a lot between your original comment / my reply (making a $5m grant and writing "person X is skeptical of MIRI" in the "cons" column) and this parent comment ("imagine I pointed a gun to your head and... offer you to give you additional information;" "never stopping at [person X thinks that p]"). I'm not arguing for entirely refusing to trust other people or dividing labor, as you implied there. I specifically object to giving weight to other people's top-line views on questions where there's substantial disagreement, based on your overall assessment of that particular person's credibility / quality of intuition / whatever, separately from your evaluation of their finer-grained sub-claims.

If you are staking $5m on something, it's hard for me to imagine a case where it makes sense to end up with an important node in your tree of claims whose justification is "opinions diverge on this but the people I think are smartest tend to believe p." The reason I think this is usually bad is that (a) it's actually impossible to know how much weight it's rational to give someone else's opinion without inspecting their sub-claims, and (b) it leads to groupthink/herding/information cascades.

As a toy example to illustrate (a): suppose that for MIRI to be the optimal grant recipient, it both needs to be the case that AI risk is high (A) and that MIRI is the best organization working to mitigate it (B). A and B are independent. The prior is P(A) = P(B) = 50%. Alice and Bob have observed evidence with a 9:1 odds ratio in favor of A, so think P(A) = 90%, P(B) = 50%. Carol has observed evidence with a 9:1 odds ratio in favor of B, so thinks P(A) = 50%, P(B) = 90%. Alice, Bob and Carol all have the same top-line view of MIRI (P(A and B) = 45%), but the rational aggregation of Alice and Bob's "view" is much less positive than the rational aggregation of Bob and Carol's.
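A quick sketch of the arithmetic (illustrative only; it treats each person's 9:1 evidence as independent and pools views by multiplying likelihood ratios onto the shared prior):

```python
# Toy numbers from the example above: A = "AI risk is high", B = "MIRI is the best org".

def odds(p):
    return p / (1 - p)

def prob(o):
    return o / (1 + o)

PRIOR = odds(0.5)  # 1:1 odds on both A and B

# Each person's view, as odds on A and B after their own evidence.
alice = {"A": PRIOR * 9, "B": PRIOR}      # saw 9:1 evidence for A
bob   = {"A": PRIOR * 9, "B": PRIOR}      # saw 9:1 evidence for A
carol = {"A": PRIOR,     "B": PRIOR * 9}  # saw 9:1 evidence for B

def top_line(view):
    """P(A and B), using the example's assumption that A and B are independent."""
    return prob(view["A"]) * prob(view["B"])

def aggregate(v1, v2):
    """Pool two people's evidence: multiply both likelihood ratios onto the shared prior."""
    return {k: PRIOR * (v1[k] / PRIOR) * (v2[k] / PRIOR) for k in v1}

print(top_line(alice), top_line(bob), top_line(carol))  # all ~0.45: identical top-line views
print(top_line(aggregate(alice, bob)))   # ~0.49: only A gets more confident
print(top_line(aggregate(bob, carol)))   # ~0.81: both A and B get more confident
```

The three top-line numbers are identical, so averaging them throws away exactly the structure that determines how positive the pooled view should be.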

It's interesting that you mention hierarchical organizations because I think they usually follow a better process for dividing up epistemic labor, which is to assign different sub-problems to different people rather than averaging a large number of people's beliefs on a single question. This works better because the sub-problems are more likely to be independent from each other, so they don't require as much communication / model-sharing to aggregate their results.

In fact, when hierarchical organizations do the other thing—"brute force" aggregate others' beliefs in situations of disagreement—it usually indicates an organizational failure. My own experience is that I often see people do something a particular way, even though they disagree with it, because they think that's my preference; but it turns out they had a bad model of my preferences (often because they observed a contextual preference in a different context) and would have been better off using their own judgment.

My mistakes on the path to impact

if you make a decision with large-scale and irreversible effects on the world (e.g. "who should get this $5M grant?") I think it would usually be predictably worse for the world to ignore others' views

Taking into account specific facts or arguments made by other people seems reasonable here. Just writing down e.g. "person X doesn't like MIRI" in the "cons" column of your spreadsheet seems foolish and wrongheaded.

Framing it as "taking others' views into account" or "ignoring others' views" is a big part of the problem, IMO—that language itself directs people towards evaluating the people rather than the arguments, and overall opinions rather than specific facts or claims.

My mistakes on the path to impact

Around 2015-2019 I felt like the main message I got from the EA community was that my judgement was not to be trusted and I should defer, but without explicit instructions how and who to defer to.
...
My interpretation was that my judgement generally was not to be trusted, and if it was not good enough to start new projects myself, I should not make generic career decisions myself, even where the possible downsides were very limited.

I also get a lot of this vibe from (parts of) the EA community, and it drives me a little nuts. Examples:

  • Moral uncertainty, giving other moral systems weight "because other smart people believe them" rather than because they seem object-level reasonable
  • Lots of emphasis on avoiding accidentally doing harm by being uninformed
  • People bring up "intelligent people disagree with this" as a reason against something rather than going through the object-level arguments

Being epistemically modest by, say, replacing your own opinions with the average opinion of everyone around you, might improve the epistemics of the majority of people (in fact it almost must by definition), but it is a terrible idea on a group level: it's a recipe for information cascades, groupthink and herding.
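As an illustration of the cascade point, here's a toy simulation of my own (assuming the standard setup where each person gets a private signal that's right 70% of the time but defers to any clear majority of earlier announcements):

```python
import random

# Toy information-cascade simulation (illustrative assumptions, not data from the post):
# each agent privately gets a signal that is correct 70% of the time, but announces the
# majority view of earlier announcements whenever that majority leads by 2+, ignoring
# their own signal.

def run_trial(n_agents=100, signal_accuracy=0.7, truth=1):
    announcements = []
    for _ in range(n_agents):
        signal = truth if random.random() < signal_accuracy else 1 - truth
        lead = sum(1 if a == 1 else -1 for a in announcements)
        if lead >= 2:
            choice = 1           # defer to the crowd
        elif lead <= -2:
            choice = 0           # defer to the crowd
        else:
            choice = signal      # otherwise report own signal
        announcements.append(choice)
    return announcements[-1] == truth  # did the group settle on the right answer?

random.seed(0)
trials = 10_000
accuracy = sum(run_trial() for _ in range(trials)) / trials
print(f"group settles on the correct answer in ~{accuracy:.0%} of runs")
# Pooling 100 independent 70%-accurate signals would be right essentially always, but
# with deference the group locks in after a couple of early announcements and ends up
# wrong in roughly 15% of runs.
```

In this toy model each deferrer is individually more accurate than their 70% signal, but the group as a whole stops aggregating information: exactly the individually-reasonable, collectively-bad tradeoff described above.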

In retrospect, it's not surprising that this has ended up with numerous people being scarred and seriously demoralized by applying for massively oversubscribed EA jobs.

I guess it's ironic that 80,000 Hours—one of the most frequent repeaters of the "don't accidentally cause harm" meme—seems to have accidentally caused you quite a bit of harm with this advice (and/or its misinterpretations being repeated by others)!
