AnonymousThrowAway

First off, again lauding you and CarolineJ for throwing some challenge. I've tried to think about how to counter-argue by being critical of your arguments / their implications without sounding like I'm being critical of you two personally. I think it's hard to do this, especially through this medium, so I apologise if I get this wrong.

So, my main counter-arguments:

  • For some of the things you talk about in this post (e.g. "The priority tasks are often mundane, not challenging", 'The role is mostly positioned as "enabling the existing leadership team" to the extent that it seems like "do all the tasks that we don't like"') I agree that it is bad inasmuch as EA orgs do this as egregiously as you're describing. I've never seen this happen in an EA org as blatantly as you're describing, but find it easy to believe that it happens.
  • However, if we talked through the details I think there's a reasonable chance that I'd end up thinking that you were being unfair in your description.
  • I think one factor here is that some candidates are actually IMO pretty unreasonably opposed to ever doing grunt work. Sometimes jobs involve doing repetitive things for a while when they're important. For example, I spoke to 60 people or so when I was picking applicants for the first MLAB, which was kind of repetitive but also seemed crucial. It's extremely costly to accidentally hire someone who isn't willing to do this kind of work, and it's tricky to correctly communicate both "we'd like you to not mostly do repetitive work" and "we need you to sometimes do repetitive work, as we all do, because the most important tasks are sometimes repetitive".

My point here wasn't about quantifying how often it happens, or whether it's a "fair" claim. It's about wanting people to write job descriptions that will attract more / better candidates than they're currently doing. Even if it doesn't apply in 90% of cases, I thought it was important to make the point so people could think about the signals it sends out, as I laid out in the post. 

I agree though that all jobs will involve some 'grunt work' or enabling others even at the cost of your own projects, and it's important to signal that; the issue as I outlined is that a good candidate will think "they really don't know how to get the best out of me" if the JD is mostly that.

  • I think our main disagreement is that you're more optimistic about getting people who "aren't signed up to all EA/long-termist ideas" to help out with high level strategy decisions than I am. In my experience, people who don't have a lot of the LTist context often have strong opinions about what orgs should do that don't really make sense given more context.

I think that is often the value they will add - paradigmatic challenge as well as practical insight. It's not like the entire EA / LT house is built on solid epistemic foundations, let alone very clear theories of change / impact behind every single intervention. (I do think this is a dirty secret we don't do a good job of owning up to, but maybe that's for another post.) And even if there were, it's not as if people with outside experience wouldn't do a good job of red-teaming them. I outlined examples in my original post about where outsiders would bring value in emerging / pre-paradigmatic fields, even if they are not fully signed up to EA / LT; I think they should be more strongly considered.

For me personally, I find the perspectives of people outside of my field helpful for challenging my fundamental assumptions about how innovation actually works. Them being in the room makes it more likely that a) they'll get listened to, and b) they have access to the same materials / research as me, so we can see why we're diverging on what appears to be the same information, and then check / challenge each other's assumptions.

I know I should really listen when I feel uncomfortable; when I feel annoyed at what another team member is saying. It usually indicates they've struck a nerve of my uncertainty, or pointed out some fundamental assumption I'm resting too heavily upon. I'm not always good at doing it, but when I do, it makes our work stronger.

  • For example, some strategic decisions I currently face are:
    • Should I try to hire more junior vs more senior researchers?
    • Who is the audience of our research?
    • Should I implicitly encourage or discourage working on weekends?

There are some structural / cultural issues which pop up again and again in different workplaces. EA / LT might have more astronomically massive goals than most other orgs, but the mechanics of achieving them will have more in common with other teams / orgs than idiosyncrasies that set them apart; especially given most planning periods / real decisions never go beyond 5-10 years.

I can't imagine why someone outside EA / LT would be constrained by 'not being signed up enough' from contributing to the strategic decisions you mention. Some of them are pretty classic whatever team you're in, like the senior vs. junior researchers question. The cultural ones - e.g. implicitly encouraging / discouraging weekend working - would especially benefit from outsiders who have experienced lots of team / org environments, and have better intuition for good / bad workplace cultures. I think so because, again, lots of EA orgs have really messed up their own cultures and left a trail of burn-out from it.

  • I think that people who don't have experience in a highly analogous setting will often not have the context required to assess this, because these decisions are based on idiosyncrasies of our context and our goals. Senior people without relevant experience will have various potentially analogous experience, and I really appreciate the advice that I get from senior people who don't have the context, but I definitely have to assess all of their advice for myself rather than just following their best practices (except on really obvious things).

Reading this makes me think two things: 

  1. Isn't that an argument for bringing outsiders into the organisation; so they can acquire the wider context, weigh it up along with their experience from other situations - some analogous along the lines you expect and some analogous along lines you wouldn't - and then add their thoughts to yours to come down on a decision? My bet is that would be more valuable to you than making those strategic decisions collaboratively with someone more similar to you.
  2. I'm always sceptical about arguments that can be boiled down to "our context is different" or "our organisation is unique". We all think that about our teams / orgs; we all think our constraints and challenges are very specific, but in reality most things going well / badly can be explained by some combination of clarity of purpose / vision, clarity of roles and responsibilities, and how people are actually working together[1].

 

  • If I was considering hiring a senior person who didn't have analogous experience and also wanted to have a lot of input into org strategy, I'd be pretty scared if they didn't seem really on board with the org leadership sometimes going against their advice, and I would want to communicate this extremely clearly to the candidate, to prevent mismatched expectations.

But I hope this would be a two-way street? To be fair, I don't know of any recruitment where this kind of negotiation of terms doesn't happen to begin with. That's normal. People recruit 'outsiders', different levels of autonomy are given, they see if they can work together, and if it doesn't work out they part ways; usually with the newbie leaving, but sometimes with the incumbent leaving as the vision of the newbie carries more sway...

Working well with people you don't 100% agree with is very possible, especially if you've optimised your hiring for qualities like openness and the ability to argue while maintaining cohesion. It's also just a really important leadership skill to build for most contexts.

  • I think that the decisions that LTist orgs make are often predicated on LTist beliefs (obviously), and people who don't agree with LTist beliefs are going to systematically disagree about what to do, and so if the org hires such a person, they need that person to be okay with getting overruled a bunch on high level strategy. I don't really see how you could avoid this.

Reading this, I felt a little like "what's wrong with that?" Or more specifically, "what's wrong with having systematic disagreements?" Conflict / disagreements are good things! The more fundamental the better, provided you're not rehashing the same ground on repeat (and there is a subtle difference between seeing the same argument ad nauseam compared with the same justifiable tensions being played out in different problem / solution spaces).

However, if your assumption is that the non-LT person would be overruled consistently, then yes that would be a problem because then you're sacrificing the opportunity for synthesis or steel-manning. I feel that if I'm making really good LT arguments with a good theory of change, I should be able to convince someone who isn't 100% signed up to that to come on the journey with me. If that person was being overruled again and again without any form of synthesis emerging, I'd take that as a sign that I was doing a rubbish job of listening and understanding another person's perspective.

I think if I was working in such an LT org I would be terrified of not having a detractor in the room. This is because I think a lot of LT Vs. near-term conflict happens because of a lack of concrete theories of change being put forward by LTists. I would want a detractor challenging the assumptions behind my LT decisions, steel-manning at every step of the way. Personally, I would want them jumping up and down at me if it sounded like I was willing to sacrifice a lot of near-term high probability impact  in the service of more spurious long-term gains. If I am going to do that, I want to be really confident I made that decision for the right reasons; I want to be so confident that the detractor can plausibly agree with me, and even leave the org on good terms if they could understand the decision being made but couldn't abide by it. 

I am sceptical that an advisor from outside the org would give that level of challenge; they don't have the full context, they are not as invested in thinking about these things, and they won't have the bandwidth to really make the case. And I'm very sceptical that really smart, conscientious people who also happen to share the same set of assumptions as me will do as good a job of steelmanning. In principle, they should be able to, but in practice assumptions / blind spots need people who don't have them to point them out.

I think two additional cruxes I'm realising we have might be:

  • I think really good organisational leadership is hard, and requires experience - ideally lots of experience, doing well and badly and reflecting on yourself and what it's like to work with you. I think the leadership in many EA orgs have not had those opportunities, which I think is a big risk.  But I think you and perhaps others see this as less the case?
  • I trust my judgement less if I am not getting serious challenge from people on strategic decisions; challenge that shakes me both on the process and on the paradigm / values-system. I don't think it's possible that I know everything, and the wisdom of crowds should get us to a better solution space. I think I might be unusually strong in this tendency, but I won't put words in your / others' mouths on what the opposite looks like.
    • In fact, over the last two weeks I received a tonne of highly critical challenge, fundamentally about whether my programme had a plausible theory of change. Though initially frustrating, it's helped me see my work for what it is: ultimately experimental, and if it doesn't work it must be killed off.

To finish, this is the type of thing I'd usually chat over in a tone / setting that's relaxed and open and less point-by-point rebuttal-y; I think this type of topic is better as an open, reflective conversation, but this medium sets it up to feel more confrontational. Ultimately, I don't know what's going on in anyone else's head, or their full context, so I can only make observations / ask questions / pose challenge and hope they feel useful. Because I don't think the medium can do this topic justice, I would be open to exploring this in a different medium if need be.

And yes, part of my deciding to go for the delicate core is inspired by Scott Alexander's recent post, and reinforced by Helen's more recent post.

  1. ^

    I totally over-summarised this bit in haste, but there's tonnes of org literature on this; much of the best I've read is by McKinsey on organisational health.

In deliberately vague summary: where I could, and to greater or lesser degrees depending on how much circumstances permitted it.

Thanks for that response - interesting to hear others aren't struggling! Would be cool if you / your org shared more about this.

I really appreciate your honest response - thanks for sticking your neck out. 

I think you've layered on nuance / different perspectives which enables a richer understanding in some regards, and in some others I think we diverge mostly on how big a risk we perceive non-EAs in senior roles as presenting relative to value they bring. I think we might have fundamental disagreements about 'the value of outside perspectives' Vs. 'the need for context to add value'; or put another way 'the risk of an echo chamber from too-like-minded people' Vs. 'the risk of fracture and bad decision-making from not-like-minded-enough people'. 

I'm going to have a think about what the most interesting / useful way to respond might be; I suspect it would be a bit dull / less useful to just rebut point by point rather than go deeper, but I don't want to infer your drivers too much. Will likely build on this later in the week.

I don't think we disagree much here, but where we do I'm trying to bottom out the cruxes... 

I think it's primarily risk appetite. I do agree though that the wrong hire can make things hellish, on many levels. But in my experience that's usually been less driven by what people thought was important and more so by the individual's characteristics, behaviours, degree of self-awareness, and tendency towards defensiveness / self-protection vs. openness. Usually, if it doesn't work out because of irreconcilably different views on a problem, people just agree to disagree and move on!

Perhaps we also have different things in our heads as meaningful signals of being a good leader for the org, and maybe different models of how a "signed up to doing good but not every EA doctrinal belief" person would operate. 

As mentioned in the post, how you (dis)agree is often the most important thing, which reflects what you're saying about flexible and open-minded people with their own perspectives. I think I stand by the IIDM example, illustrating how you don't need to be signed up to every EA idea to add a lot of value to an organisation. I think it's similar for X-risk oriented pandemic preparedness, AI risk, etc.: sometimes the most strategically sound thing to do would be more near-term, but those with a long-term orientation might not have that in their immediate view. The same would apply for e.g. deciding which funders / partners to work with, skills / talent requirements within the team, etc.

(That said, if there's an instinctive feeling that an EA adjacent / non-EA hire - senior or otherwise - could threaten organisational alignment, it's almost a recipe for unconscious ostracism and exclusion; almost in a self-fulfilling prophecy kind of way. It's just very human to react negatively to someone who you feel is threatening. So yeah - another thing to reflect on if you are working in an EA org).

Maybe another crux is how much those people are exceptions? As I argued in the post, my hunch is there are many more people like that who are not getting a shot - referring to the 'wild card' example in the post again. I suspect this question could really only be answered by orgs doing the post-mortem on their recruitments, to see why people fell off at different stages and whether (ironically, as in an actual post-mortem) anything different could have been done.

Glad to see the number of up-votes; clearly other people were thinking something similar. But it kind of worries me there hasn't been more dissent or nuance thrown in. So I would invite anyone to speak up who:

  • thinks this isn't as big a problem as I've made it come across
  • thinks the arguments around hiring from outside EA are not that strong
  • read this, felt they were the target audience, but didn't change their mind for whatever reason

It would feel weird if after writing all this out I didn't come away feeling like I learned something!

Glad to hear. 

I skimmed your post, and felt it added a lot of value to mine as it highlighted a big difference between 'ops roles' in an operation - i.e. keeping the show on the road - Vs. projects - i.e. starting up a new show and running it at the same time. Very different skillsets again, because of different levels of uncertainty; one more about optimising, the other more about figuring out "what the hell do we do?"

I think the ops skillset I set out is much more project-oriented, so I would welcome more of your critique of hiring approaches from a more ops-y, less project-y perspective.

While managing an inbox on a short-term 'needs must' basis is one thing, making that ask of a very senior person is quite a red flag!

I'll take a look - any time you'd like me to get back to you by?