Jason Clinton

Yeah, depending on success, we might split the next round in two to get global coverage.

This is the first 5 months of theory in the program. There's also practice, and the new team members also shadowed security reviews. So, some self-practice and thinking about security exploits and their applicability are expected to occur in parallel with the book club to get the full benefit.

We'll read a set of two chapters every two weeks, which takes 1-2 hours. That's it.

Unfortunately, all of it. The discussion will be fast-moving and will focus on reifying the abstract ideas into concrete production systems and organizational structure. It will be out of the skill set of anyone who hasn't worked with real production systems and technical orgs for a few years.

Firstly, it seems like the skills of a staff engineer in your current role might be different from what many new EA or AI safety orgs need. For example, you probably handle a lot of design and dependencies currently. Writing and communication are probably a lot of your value. In contrast, in a small, new organization, you probably need to knock out a lot of smaller systems quickly. Your skills and the edge you have might be different between these orgs.

Yes, I agree, but with a huge caveat: every person will progress through various stages of competency during their career. While many early-stage folks could contribute just as well at an early-stage EA startup (and should consider it), in the context of the 80k Hours article that I was replying to, we need to be transparent with folks about what a typical career path looks like and what tradeoffs there are to consider down each of the startup vs. EtG paths. Here's the typical career progression for software engineers (though it's general enough to map onto other fields):

  1. New grad/early stage: needing direction from others on what to work on, executing that work.
  2. Leading self: proposing work in the context of larger goals and then executing that work.
  3. Leading a small team: proposing work and technical direction for a small team of engineers. Major design, some direct contributions, work-stream shepherding, mentorship.
  4. Leading a large, ambiguous area/leading multiple teams: proposing strategic direction shifts, aligning team leaders, building consensus without authority, major design work, little direct contributions, mentorship + cultivation at scale.
  5. Leading the entire technical direction for a business function: everything of the previous role, except heavily influencing all of the non-tech functions in the organization.
  6. A business executive[1]

Regarding which level of progression individuals might achieve by the end of their career, there's a bell-curve distribution centered around the 3rd step. Only a handful will ever reach the 6th step of being an executive[2]. FAANG pays somewhere between $750k and $1.5M for step 5, though, and—while still rarefied—it's attainable for top talent, so it's a possible EtG goal to plan for.

All of this is a long-winded way of saying that CS folks who are about to graduate shouldn't throw away a FAANG job offer for an EA startup out of hand if they think they have career luck in their favor. It would be a hard call. If I were 22 and about to graduate today[3], I would give an EA startup 3-5 years to be successful before I switched tactics and tried for a FAANG or other top-of-market option.

Secondly, retirement security was an important point to you. I didn’t fully understand this. My guess is that in your personal case, the security you could provide for yourself and your partner might be large compared to what an EA org (or really most jobs) could easily provide. So my read was that you were talking about more junior engineers and SWEs outside of FAANG.

Right: I'm speaking about the general SWE population weighing the private sector against non-profits, which tend to pay less and also tend to provide fewer benefits like retirement-account funding.

  • Is this guess wrong? For example, in the US/Cali, maybe I'm really ignorant and you need a large 7-figure nest egg to be safe.

In the US/Cali Bay Area (where some EA startups are based), the median house price is $1.3M. So, for someone looking to put down roots in the Bay Area and retire within the same friend network close by, a nest egg of $2M isn't an unreasonable guess; $3-4M if their spouse isn't working and they start a family. If we expect EA-ers to come work for a startup in the Bay Area and then move to a lower-cost-of-living place later, we should be transparent about that. (Or we should be encouraging EA orgs to go remote-first to unlock paying top-of-market rates in rural areas.)
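To make the arithmetic behind those guesses explicit, here's a minimal back-of-envelope sketch. The 4% safe-withdrawal rate is my illustrative assumption, not part of the figures above:

```python
# Rough sanity check on the nest-egg guesses above. The 4% safe-withdrawal
# rate (SWR) is an illustrative assumption; all figures are ballpark.
HOUSE_PRICE = 1_300_000  # median Bay Area house price, per above
SWR = 0.04               # assumed safe withdrawal rate

for nest_egg in (2_000_000, 3_500_000):
    annual_income = (nest_egg - HOUSE_PRICE) * SWR
    print(f"${nest_egg:,} nest egg -> ~${annual_income:,.0f}/yr after the house")

# $2,000,000 nest egg -> ~$28,000/yr  (single earner, frugal)
# $3,500,000 nest egg -> ~$88,000/yr  (single-income family; tight, which is
# why the guess above stretches to $3-4M)
```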

  • Finally, in terms of financial security, would income stability, such as guaranteeing transitional income if an organization needed to shut down, substitute for a very large income? The idea here is that everyone trusts you, and you were funded to move to successful organizations, so you didn't have to stay on or bail out zombie organizations.

Yes, that would make EA orgs' job offers more attractive to new-career and mid-career folks. It's probably also applicable to all the other roles that an EA org would hire for.

HR is a big problem facing new companies—talent there is hard to find, too.

Could you write a little more here to make this more legible? Like, is there a book or blog post you can share?

To give context, it's not clear to me what you mean by HR needs. Do you mean basic operational tasks involved in HR? Maybe I'm really ignorant, but in many tech companies, it seemed to me like both talent attraction and team functionality are entirely up to management (e.g. the manager or skip of your "two pizza team"). HR was involved in only pretty fundamental processes, like scheduling interviews or paying out checks (and many of these were subcontracted). In fact, I knew a few directors/VPs at a FAANG who said they didn't really understand HR and literally said it just provided documentation for terminations.

To be clear, the above could be really dysfunctional/disrespectful and betray my ignorance. 

It's common in tech to hear the sentiment from your social network that HR provides no value, so I'm not surprised to see this. In Silicon Valley, there's similar discounting of the value provided by folks in operations, support, logistics, and finance.

A note on horizontal organization roles: there are types of roles that apply horizontal, cultural influence. For example, a wise person once said, "If you want to understand why an organization behaves the way that it does, look at the incentives of the people in that organization."

I point to HR, specifically, because it's the area where I've seen the most struggles in small-stage startups, precisely because it is a horizontal force multiplier. Here are some of the values that a functioning HR organization provides to a small-stage startup:

  • A meritocratic system of promotion/career advancement that's seen as fair by the employees. This includes transparency about the expected roles and responsibilities at each stage of career progression. Of course, it also includes some objective criteria for deciding to fire someone, and all of the legal implications thereof (as mentioned above), but that's not the most important part. Retention is partially a function of aligning the hedonic treadmill with real career-progress possibilities.
  • Setting norms on how individuals interact and, theoretically, backstopping those norms with enforcement. For example, an org might say that aggressive behavior in meetings and emails is not tolerated. This is just a theoretical rule unless org leaders actually back up those words with actions through the promotion process and, in extreme cases, HR-backed disciplinary action. It's also the function of HR to repeat the company's behavioral expectations, periodically.
  • Ensuring fair hiring practices is non-trivial. It's common in startups to hand-wave over this problem. But actually evaluating candidates objectively, ensuring that bias doesn't creep in, and keeping pay equal among all similar roles and levels is hard. Radical transparency can help here, but it doesn't just magically fix the problem.
  • Setting organizational goals against which the org is measured is sometimes seen as operations or Product Management, but there's an HR role there too: leaders of the sub-orgs that set those goals need to be held accountable, and any exec/leadership compensation should be tied to business outcomes in a way that lower-level employees' compensation is not. And all of those company benchmarks and the feedback cycle need to be reviewed in front of the whole employee population, quarterly.
  • Assessing employee satisfaction and collecting feedback anonymously on an ongoing basis. This can be as simple as an anonymous Google Form that's open for two weeks once a year. But actually collating the data, slicing it by org, trending it over time, and proposing cultural changes to address employee feedback is hard (see the sketch after this list).
  • Benefits, benefits, benefits. This is a constantly evolving space. To some extent, this can be outsourced, but there should be someone on staff continually evaluating the changing landscape of offerings relative to competitors, updating employees about those changes, and acting as a partner to fix problems when they come up.
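On the feedback bullet above: the mechanics of collating the data are the cheap part. Here's a minimal sketch, assuming responses from an anonymous Google Form exported to a CSV with hypothetical year, org, and satisfaction (1-5) columns; proposing the cultural changes is still the hard, human part:

```python
# Minimal sketch: slice anonymous survey results by org and trend over time.
# The CSV name and column names are hypothetical.
import pandas as pd

responses = pd.read_csv("survey_responses.csv")  # columns: year, org, satisfaction

# Mean satisfaction per org per year, with response counts for context.
trend = (
    responses.groupby(["org", "year"])["satisfaction"]
    .agg(["mean", "count"])
    .unstack("year")
)
print(trend)

# Flag orgs whose mean score dropped in the most recent year as candidates
# for follow-up.
means = trend["mean"]
declining = means[means.diff(axis=1).iloc[:, -1] < 0]
print("Needs follow-up:", list(declining.index))
```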

I could go on but these are the ones that came to mind while I was writing this, and I think that I've exceeded the amount of time that I intended to spend on this. 😉

 

  1. ^

    I acknowledge that this assumes a fully meritocratic progression; there are indeed many reasons that individuals might be given these roles without being qualified.

  2. ^

    I recognize that founders of startups are not necessarily destined to lead 1,000-employee organizations but that they do need some mix of all of the skills in these stacks. And this is often why startups fail.

  3. ^

    Full disclosure: I do not have a degree and am an anomaly. So, I can't really speak with authenticity on this hypothetical.

Hi Linch, we met and talked for a while at the SF Picnic last year. This was my Reddit comment. I'll reply here even though this is my first time interacting on this forum. I've lurked here for a long time but felt like the conversations were too time-intensive to get involved in. So, I'll try to keep this brief.

It's worth noting that many of these restrictions (especially the first and third) would apply not only to working at EA nonprofits but also, e.g., tech startups or a political campaign as well.

Yes, this is true. While I didn't say it in this comment, I do believe EA orgs have a competitive edge in also offering more-meaningfully-certain employment, which is new, and Big Tech doesn't really have a way to counter that. Among the set of {cause, startup, politics}, FAANG compensation is effective at keeping us from leaving for these riskier things. Sometimes an exec will counter a senior engineer who is thinking of leaving with an offer to work on a project that is more values-aligned but, generally, Big Tech's strategy is to pay top-of-market and, failing that, to make it easy to come back if the outside gig doesn't work out[1].

This problem seems much more doable. I imagine many early-stage nonprofit CEOs would be willing to spend 5 hours chatting with top people who they made an offer to, though probably not early on in the process. 

Agree. It seems like there could be an EA-aligned startup that cultivates talent connections, shares that pool, and facilitates those conversations among all EA orgs. Not just for software engineers but for all roles; HR is a big problem facing new companies—talent there is hard to find, too.

As a starting point, there could be some folks offering presentations to spread the expertise around. As an example, in November I gave an hour-long training on "Forming and training a distributed InfoSec team during the pandemic" to the Infosec in EA Facebook group. There are topics like these that every EA org could benefit from.

In general there's a "vibe" of the comment that I somewhat disagree with, something in the general vein of "morality ought to be really convenient, and other people should figure that out."

Well, to be starkly transparent about my own biases and mental framing: the message that EA sends is that there are effective charities and that, if one only gives away enough money via EtG, one can live a moral life. This is intoxicating, and it's a sanguine trap because of that: when I wrote my first annual check, the feels were real. And the feels keep coming, year after year.

An EA startup has to overcome the feels that EtG offers to attract top talent. (I say this to make the observation that this is a market reality; not to virtue signal.)

To close, allow me to state a straw-person risk analysis for a top-of-market tech employee (based on my own intuitions, not reproducible data), with a rough expected-value sketch after the list:

  • Stay at FAANG
    • 40-60% chance that a top performer will be able to EtG >$1M/yr during the final 10 years of their career.
    • >95% chance that the person will be able to EtG >$100k/yr during the final 10 years of their career.
    • ~70% chance of retiring at 50. Then, the person can direct their attention to EA orgs for the last 10-20 working years of their career for free/little comp. 
  • Join an early EA startup
    • ~20% chance that the startup survives its first 5 years of existence and is effective, depending on cause area. People problems, not bad ideas, are the reason startups fail.
    • Depending on how early and in what capacity one contributes, it's hard to guess at the possible impact one might have in the final moral calculus of this organization's altruistic contributions, if successful, when measured against something like EtG donations to AMF. The wide range of outcomes could far outpace EtG, yes. And, importantly, enable other EtG-ers to find new places to sink altruistic capital. What are the odds? Unknown.
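For concreteness, here's what the FAANG side of that analysis cashes out to in expected donations. The bracket midpoint and the mutually-exclusive treatment of the two outcomes are my simplifying assumptions:

```python
# Back-of-envelope expected EtG for the "stay at FAANG" path, using the
# rough probabilities above. Intuitions, not reproducible data.
P_TOP = 0.50          # midpoint of the 40-60% ">$1M/yr" bracket
P_BASE = 0.95         # the ">95% at >$100k/yr" bracket
ETG_TOP = 1_000_000   # $/yr, lower bound of the top bracket
ETG_BASE = 100_000    # $/yr, lower bound of the baseline bracket
YEARS = 10            # final 10 years of the career

# Treat the brackets as mutually exclusive: top performers give at the top
# rate; the rest of the >95% pool gives at the baseline rate.
expected_etg = YEARS * (P_TOP * ETG_TOP + (P_BASE - P_TOP) * ETG_BASE)
print(f"Expected EtG, FAANG path: ${expected_etg:,.0f}")  # ~$5,450,000

# The EA-startup path, at ~20% survival, needs an expected altruistic impact
# well above this figure to come out ahead on this crude model.
```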
  1. ^

    The leave-and-return-in-2-years-on-failure path has a lifetime earnings opportunity cost of ~$2M via lost equity and career trajectory.