[Note: this was written in response to being asked for my thoughts on how important it is for EA orgs to hire aligned staff as they scale. Thanks to Sam Bankman-Fried for comments and for significant influence on these thoughts.]

Organizational alignment is a really hard problem; putting EA aside and just thinking about organizations that are trying to make money or something else, I think it’s still one of the biggest problems that organizations face.

There’s this Elon Musk quote that SBF likes to reference: “Every person in your company is a vector. Your progress is determined by the sum of all vectors.” By default, as you scale an organization, those vectors will all end up pointing in random different directions. Getting their directions more aligned is one of the key things you have to do to scale.
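To make the vector metaphor concrete, here's a minimal sketch (my own illustration, not something from the quote): if each of n people contributes a unit "effort vector," perfectly aligned vectors sum to magnitude n, while randomly oriented ones sum to only about √n on average.

```python
import math
import random

def resultant_magnitude(n: int, aligned: bool) -> float:
    """Sum n unit 'effort vectors' in 2D and return the magnitude of the total."""
    x = y = 0.0
    for _ in range(n):
        # Aligned people all pull in the same direction (angle 0);
        # otherwise each pulls in a uniformly random direction.
        theta = 0.0 if aligned else random.uniform(0.0, 2.0 * math.pi)
        x += math.cos(theta)
        y += math.sin(theta)
    return math.hypot(x, y)

n = 100
print(resultant_magnitude(n, aligned=True))   # exactly 100.0
print(resultant_magnitude(n, aligned=False))  # typically on the order of sqrt(100) = 10
```

Running this a few times shows how little net progress 100 misaligned "vectors" make compared to 100 aligned ones.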


I think most of the stuff I have to say about organizational alignment applies to both EA and non-EA organizations. There are two differences for EA orgs that I can think of:

  • EA gives you an extra tool in your alignment toolbox, in that other EAs will tend to be more aligned with your organization.
  • The less legible your goals are, the more important it is to hire aligned people. If your goal is “make money,” you at least have something to fall back on, in that you can do stuff like see how much money people make and compensate them accordingly. If your goal is “do research to determine priorities for making the long-term future go well,” that’s much harder to measure so it seems more important for people to be aligned.


I think the broad ways to do organizational alignment are:

  • building loyalty/trust
    • being fair to employees, treating them well, trusting them, etc
  • compensation
    • compensating in equity or something similar
    • making compensation heavily dependent on how much value you add
  • culture
    • setting an organization-level culture that emphasizes teamwork, deemphasizes individual status, etc
  • management
    • you can pass the buck somewhat by just having good and aligned managers
  • hiring
    • just hiring people who are going to be aligned in the first place


Unsurprisingly, I think the more decision-making power someone has, the more important it is for them to be aligned. There are a few ways I know of to have employees be super aligned with your org:

  • getting people who are dedicated EAs
  • getting people with a strong sense of personal loyalty to the org and/or the people in it
  • getting people who have relatively linear utility in money, and compensating them well in equity-type things


In practice, I think what I tend to do in my hiring is:

  • to first order, mostly just hire whoever is most competent and don’t worry too much about alignment
    • I put some weight on a potential hire being EA, but not a ton
    • I think trying to eg hire only EAs would have been fatally limiting to our growth
    • though this depends on what the talent pools look like for the specific roles you’re trying to hire for
  • but when deciding to give people a lot of decision-making power or have them manage a lot of people, prioritize very strongly how aligned they are
    • this alignment can come from EA or from something else
    • working with them and getting to know them over time is probably more helpful for determining this than “whether they identify as EA”

One caveat: this is at an organization with fairly legible goals, and the less legible things are, the more I’d expect hiring EAs to be important.

Comments

• making compensation heavily dependent on how much value you add
• setting an organization-level culture that emphasizes teamwork, deemphasizes individual status, etc

It's worth being aware that some of these options don't play nice together. For example, if you hire people who are intrinsically motivated by your mission and try to emphasize teamwork, you'd probably want to pay them fairly without emphasizing money too much. According to my Educational Psychology professor, there's some evidence that offering to pay for results erodes intrinsic motivation and doesn't improve results on intellectual problems.

This is insightful!

Personally, I would consider appending “for onlookers”, in this particular instance, as the OP is probably extremely versed in the issues and has a strategy that considers these tradeoffs.

Yeah, I think a more basic look at this would be helpful, and would encourage someone to write an "intro to org theory" post. But in lieu of that, I'll point out that the issues here relate to incentives in organizations generally, and will point to a preprint paper I wrote that discusses some of the desiderata and strategic considerations in organizations in the context of using metrics, and money based on those metrics, to align people.

Yes for sure, it was meant to be a "yes and" to the post, not a criticism of Caroline!

This is great - just wanted to flag my older post bringing up these issues, and to thank Caroline for moving the discussion forward!

[anonymous]:

Well, it looks like I'm hijacking a thread about organisational scaling with some anxieties, which I've talked about elsewhere, around referring to people in overly utilitarian ways. Which is fair enough; interestingly, I've done the opposite and talked about org scaling on threads that were fairly tangentially related, and got quite a few upvotes for it. All very intriguing, and if you're not occasionally getting blasted, you're not learning as much as you might or getting enough information about, e.g., your limits...

OP -- I'm curious to hear your thoughts about investing greater energy into making goals more 'legible', as you put it. It strikes me that organisational alignment via loyalty + compensation + culture + management + hiring is circumventing the main problem, which is that the organisation's goals aren't clear. 

For example, couldn't an organisation whose North Star is to “do research to determine priorities for making the long-term future go well" create alignment by breaking down that overarching aim into its constituent goals? I'm spit-balling here, but one such constituent goal could be to "Become a research powerhouse", which would in turn be measured by a number of concrete and verifiable metrics such as "Publish X policy briefs" and/or "Double the number of downloads of knowledge products on the website". These goals would be fleshed out and discussed in detail, then published for everyone to see (or even broken into sub-goals for specific teams/departments). One could even publish them online so that external candidates can see them during recruitment rounds. The overarching idea is that being able to assess the organisation's goals will allow people to self-select, both in terms of the work they're doing and in terms of their personal fit within the organisation, leading to greater alignment as people re-focus or exit.
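As a purely hypothetical sketch of that decomposition (my illustration; the goal names are placeholders taken from the paragraph above, not real metrics):

```python
# A hypothetical sketch of the proposed decomposition: a North-Star objective
# broken into constituent goals, each backed by concrete, verifiable metrics.
# All names below are illustrative placeholders.
okr_tree = {
    "north_star": "Do research to determine priorities for making the long-term future go well",
    "constituent_goals": [
        {
            "goal": "Become a research powerhouse",
            "key_results": [
                "Publish X policy briefs",
                "Double the number of downloads of knowledge products on the website",
            ],
        },
    ],
}

# Publishing something like this (e.g. on the org's website) is what would let
# staff and external candidates self-select against the goals.
for goal in okr_tree["constituent_goals"]:
    print(goal["goal"], "->", goal["key_results"])
```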

It's likely very obvious by now, but I'm putting forward John Doerr's Objectives & Key Results framework. It's hugely popular these days, and I'll be the first to admit a bias toward it. Doerr's broader point, however, is that one of the benefits of better goal-setting is organisational alignment:

A two-year Deloitte study found that no single factor has more impact than “clearly defined goals that are written down and shared freely. . . . Goals create alignment, clarity, and job satisfaction.”

My curiosity regarding your thoughts on this arises purely because your original post doesn't mention better goal-setting as a way to generate alignment. I also haven't come across many critiques of the better-goal-setting = alignment assumption, so any thoughts in that vein would be very interesting to hear.

If you're up for a long-winded take on what I called "underspecified goals," and how they make alignment fail, I wrote about this question on Ribbonfarm quite a while ago.

[anonymous]:

Every person in your company is a vector. Your progress is determined by the sum of all vectors.

'Hey! I'm not a vector!' I cried out to myself internally as I read this. I mean, I get it and there's a nice tool / thought process in there, but this feels somewhat dehumanising without something to contextualise it. There are loads of tools you might employ to make good decisions that might involve placing someone in a matrix or similar, but hopefully it's obvious that it's a modelled exercise for a particular goal and you don't literally say 'people are maths' while you do it.

Anyway, I was thinking of political parties as I read this. If your party does well, you get an influx of members who somewhat share the same goals but are different from the existing core, not chosen by you, probably less knowledgeable about your history and ideology, and less immediately aligned. You have essentially no ability to produce alignment via financial mechanisms or 'hiring' processes. How do you get people to pull together? There are some recent examples of UK parties absolutely mangling this, but probably some good examples too (Obama 2008? German Green Party?). Obviously organisations then have additional mechanisms available, but parties seem interesting to study because the cultural elements can be more cleanly separated out.
