The ability to give and receive transparent feedback is probably one of our strengths as a community. But if we’d like to collectively imbue EA with a bias to action, it probably means being more thoughtful about criticism, perhaps asking if something is true, helpful or specific before vocalizing it. 

In the last few days, I’ve heard a few people make the following argument/criticism about well-funded, object-level EA projects that are scaling rapidly.


“Org X is sucking up the talent in the space”


Why do you believe that?


“Org X pays its employees well and gives them cushy jobs.”


What’s wrong with that? 


“It disincentivizes people from doing relatively risky entrepreneurial projects”


Do you think Org X is doing important work though?


“Yeah, maybe, but it’s not as important as Y and Z”


So I think this is a particularly bad form of criticism, for a few reasons:

 In its most basic form, this is literally an argument against scaling EA orgs. As an organization absorbs more funding, it tends to have better operational and HR support and higher brand recognition, making the prospect of working there “cushy”. It also probably absorbed that funding because it found some kind of “product-market fit” (in the EA sense of the phrase) and hence carries lower uncertainty, making it attractive to folks with more risk-averse temperaments/personalities.

 Ok, so maybe the real claim here is that on the margin, larger EA orgs pull people toward lower-risk jobs and away from entrepreneurship/independent projects. Even that doesn’t seem to hold up, because there’s nothing about new ventures that makes them automatically higher-EV than scale-ups. Assuming the community funded something because it cleared the bar, we probably think it’s pretty important work and want it to succeed in recruiting talent, just as much as we want the newer, smaller orgs to succeed.

 Perhaps the version of this that has legs is: as an organization scales and grows, it becomes less important for the marginal new hire to be “fully aligned”, so larger EA orgs ought to hire a higher proportion of their talent pool from outside the community. This sounds reasonable to me as a theoretical argument but seems out of line with what’s currently happening at these orgs (one of which already sources at least ~30% of its talent pool from outside the community).


Also, I’m highly uncertain about the counterfactual here. We’d have to drill down to specifics, but in theory it seems equally likely that risk-averse “cushy-job lovers” would simply work outside the EA space if these opportunities within it didn’t exist. Of course incentives matter, but personalities probably do too. The criticism also completely ignores the potential upside of new EAs being created within these organizations via osmosis.

 I probably agree that we ought to have a broader discussion about what sorts of jobs necessitate “alignment”, insofar as we can coherently operationalize what alignment entails. And frankly, if we were to have that discussion, the first place I’d point my finger is probably meta projects where EAs are recruited for roles like office management, PA work, etc., not the orgs doing fairly specialized object-level work.

 I will also concede that this argument is stronger if you also believe that Org X is just not doing something you deem important. If that is indeed the case, then the criticism (or at least the way it comes across) seems misdirected, because the real claim is that EA funders are misallocating capital by funding these larger organizations; one way to theoretically solve the problem would be to use that capital to compensate people financially for starting those new, important projects. One would also then wonder why no one raised this objection when these projects were first being funded (assuming they didn’t pivot super hard in the interim).

 To be clear, I’m not claiming that the criticism is in bad faith. But talk is indeed cheap, and the way to incentivise more entrepreneurship is by rewarding scale, not punishing it. Knowing you could create the next big EA org that people respect is probably exactly the kind of motivation an entrepreneur needs.
