
This post summarizes the way I currently think about career choice for longtermists. I have put much less time into thinking about this than 80,000 Hours, but I think it's valuable for there to be multiple perspectives on this topic out there.

Edited to add: see below for why I chose to focus on longtermism in this post.

While the jobs I list overlap heavily with the jobs 80,000 Hours lists, I organize them and conceptualize them differently. 80,000 Hours tends to emphasize "paths" to particular roles working on particular causes; by contrast, I emphasize "aptitudes" one can build in a wide variety of roles and causes (including non-effective-altruist organizations) and then apply to a wide variety of longtermist-relevant jobs (often with options working on more than one cause). Example aptitudes include: "helping organizations achieve their objectives via good business practices," "evaluating claims against each other," "communicating already-existing ideas to not-yet-sold audiences," etc.

(Other frameworks for career choice include starting with causes (AI safety, biorisk, etc.) or heuristics ("Do work you can be great at," "Do work that builds your career capital and gives you more options."). I tend to feel people should consider multiple frameworks when making career choices, since any one framework can contain useful insight, but risks being too dogmatic and specific for individual cases.)

For each aptitude I list, I include ideas for how to explore the aptitude and tell whether one is on track. Something I like about an aptitude-based framework is that it is often relatively straightforward to get a sense of one's promise for, and progress on, a given "aptitude" if one chooses to do so. This contrasts with cause-based and path-based approaches, where there's a lot of happenstance in whether there is a job available in a given cause or on a given path, making it hard for many people to get a clear sense of their fit for their first-choice cause/path and making it hard to know what to do next. This framework won't make it easier for people to get the jobs they want, but it might make it easier for them to start learning about what sort of work is and isn't likely to be a fit.

I’ve tried to list aptitudes that seem to have relatively high potential for contributing directly to longtermist goals. I’m sure there are aptitudes I should have included and didn’t, including aptitudes that don’t seem particularly promising from a longtermist perspective now but could become more so in the future.

In many cases, developing a listed aptitude is no guarantee of being able to get a job directly focused on top longtermist goals. Longtermism is a fairly young lens on the world, and there are (at least today) a relatively small number of jobs fitting that description. However, I also believe that even if one never gets such a job, there are a lot of opportunities to contribute to top longtermist goals, using whatever job and aptitudes one does have. To flesh out this view, I lay out an "aptitude-agnostic" vision for contributing to longtermism.

Some longtermism-relevant aptitudes

"Organization building, running, and boosting" aptitudes[1]

Basic profile: helping an organization by bringing "generally useful" skills to it. By "generally useful" skills, I mean skills that could help a wide variety of organizations accomplish a wide variety of different objectives. Such skills could include:

  • Business operations and project management (including setting objectives, metrics, etc.)
  • People management and management coaching (some manager jobs require specialized skills, but some just require general management-associated skills)
  • Executive leadership (setting and enforcing organization-wide goals, making top-level decisions about budgeting, etc.)
  • Recruiting
  • Fundraising and marketing
  • Human resources
  • Office management
  • Events management
  • Assistant and administrative work
  • Corporate communications and public relations
  • Finance and accounting
  • Corporate law

Examples:

Beth Jones (Open Philanthropy Director of Operations); Max Dalton and Joan Gass at CEA; Malo Bourgon at MIRI. (I focused on people in executive roles and gave only a small number of examples, but I could've listed a large percentage of the people currently working at longtermism-focused organizations, as well as people working at not-explicitly-longtermist organizations doing work that's important by longtermist lights. In general, my examples will be illustrative and focused on relatively simple/"pure" cases of someone focusing on a single aptitude; I don't think people should read into any "exclusions.")

How to try developing this aptitude:

There are many different specializations here. Each can generally be developed at just about any organization that has the corresponding need.

In many cases, early-career work in one specialization can give you some exposure to others. It's often possible to move between the different specializations and try different things. (The last three listed - communications, finance/accounting, and law - are probably the least like this.)

I'm especially positive on joining promising, small-but-growing organizations. In this sort of organization, you often get a chance to try many different things, and can get a rich exposure to many facets of helping an organization succeed. This can be an especially good way to get experience with people management and project management, which are often very generally applicable and in-demand skills across organizations. Coming into such a company in whatever role is available, and then being flexible and simply focused on helping the company succeed, can be a good learning experience that helps with both identifying and skilling up at good-fit aptitudes.

On track?

As a first pass, the answer to "How on track are you to develop a longtermism-relevant aptitude?" seems reasonably approximated by "How generically strong is your performance?" Raises, promotions, and performance reviews are all data points here. I think one of the best indicators of success would be that the people you work most closely with are enthusiastic about you and would give you a glowing reference - combined with those people (and the organization you work for) being themselves impressive.

People working on this aptitude might sometimes have feelings like "I'm performing well, but I don't feel I'm contributing to a great mission." In early career stages, for this aptitude, I think performing well is more important than being at an organization whose mission you're enthusiastic about, assuming the work is overall reasonably enjoyable and sustainable. Later on, when you have a relatively stable sense of your core competencies and aren’t growing rapidly, I think it’s good to give the mission more weight.

Political and bureaucratic aptitudes

Basic profile: advancing into some high-leverage role in government (or some other institution such as the World Bank), from which you can help the larger institution make decisions that are good for the long-run future of the world.

While organization-supporting aptitudes are mostly (in the long run) about helping some organization whose mission you're aligned with accomplish its existing goals, political and bureaucratic aptitudes are more about using a position of influence (or an influential network) to raise the salience and weight of longtermist goals within an institution.

Essentially any career that ends up in an influential position in some government (including executive, judicial, and legislative positions) could qualify here (though of course some are more likely to be relevant than others).

Examples:

Richard Danzig (former Secretary of the Navy, author of Technology Roulette); multiple people who are pursuing degrees in security studies at Georgetown and aiming for (or already heading into) government roles.

How to try developing this aptitude:

First, you should probably have a clear idea of what institution (or set of institutions) could be a good fit. A possible question to ask yourself: "What's an institution where I could imagine myself being relatively happy, productive, and motivated for a long time while 'playing by the institution's rules?'" I'd suggest speaking with later-career people at the institution to get as detailed a sense as possible of how long it will take to reach the kind of position you're hoping for; what your day-to-day life will be like in the meantime; and what you will need to do to succeed.

Then, you can try for essentially any job at this institution and focus on performing well by the institution's standards. Others who have advanced successfully should be able to give a good guide to what these are. In general (though not universally), I would expect that advancing along any track the institution offers is a good start, whether or not that track is directly relevant to longtermism.

Sometimes the best way to advance will involve going somewhere other than the institution itself, temporarily (e.g., law school, public policy school, think tanks). Graduate schools present the risk that you could spend a long time there without learning much about the actual career track itself, so it may sometimes make sense to try out a junior role, see how it feels, and make sure you're expecting a graduate degree to be worth it before going for the graduate degree.

On track?

As a first pass, the answer to "How on track are you?" seems reasonably approximated by "How quickly and impressively is your career advancing, by the standards of the institution?" People with more experience (and advancement) at the institution will often be able to help you get a clear idea of how this is going (and I generally think it’s important to have good enough relationships with some such people to get honest input from them - this is an additional indicator for whether you’re “on track”). If you’re advancing and performing well generally, the odds seem reasonably good that you’ll be able to advance in some longtermism-relevant part of the institution at some point.

I think one of the main questions for this sort of aptitude is "How sustainable does this feel?" This question is relevant for all aptitudes, but especially here - for political and bureaucratic roles, one of the main determinants of how well you advance is simply how long you stick with it and how consistently you meet the institution's explicit and implicit expectations.

"Conceptual and empirical research on core longtermist topics" aptitudes

Basic profile: helping to reach correct substantive conclusions on action-relevant questions for effective altruists, such as:

  • Which causes are most promising to work on? (This could include things like making the case for longtermism)
  • What's a reasonable probability distribution over things like (a) when transformative AI will be developed; (b) the size of various existential risks?
  • What can we learn from historical cases about the most promising routes to growing the effective altruist community?
  • What policy changes would be most desirable to push for in order to reduce existential risk?
  • How should money be allocated between potential grantees in a given cause (or generally)? (And how should it be allocated across time, i.e., "giving now vs. giving later?")
  • What sorts of jobs should effective altruists be most encouraged to aim for?

I discuss this one at some length because I know it fairly well. However, I think it's currently one of the hardest aptitudes to succeed at, as it tends to require very high levels of self-directedness.

Examples:

  • Eliezer Yudkowsky, Nick Bostrom, and others who have worked to flesh out the case for prioritizing existential risk and AI safety in particular.
  • Most people in research roles at FHI.
  • Most people in research roles at Open Philanthropy (you could also think of the grantmaking roles this way).
  • People working on determining what 80,000 Hours's substantive recommendations and advice should be (as opposed to how to communicate them).

Note that some people in this category do mostly conceptual/philosophical work, while some do mostly empirical work; some focus on generating new hypotheses, while others focus on comparing different options to each other. The unifying theme is of focusing on reaching substantively correct conclusions, not on better communicating conclusions others have reached.

How to try developing these aptitudes:

One starting point would be a job at an organization specifically focused on the type of question you're interested in. So if you want to look for crucial considerations, you might try for a job at FHI; if you want to work on questions about grantmaking, you might try for a job at Open Philanthropy.

I think other jobs are promising as well for developing key tools, habits, and methods:

  • Academic study in fields that are relevant for the kinds of questions you want to work on. It's hard to generalize very far here, but for conceptual questions, I think philosophy, mathematics, computer science, and theoretical physics are especially promising; for more empirical questions, economics seems most generally promising, since fluency with quantitative social science seems important. (Many other areas, such as history and political science, could be useful as well.)
  • Jobs that heavily feature making difficult intellectual judgment calls and bets, preferably on topics that are “macro” and/or as related as possible to the questions you’re interested in. There are some jobs like this in "buy-side" finance (trying to predict markets) and in politics (e.g. BlueLabs).

I also think there are opportunities to explore and demonstrate these aptitudes via self-study and independent work - on free time, and/or on scholarships designed for this (such as EA Long-Term Future Fund grants, Research Scholars Program, and Open Philanthropy support for individuals working on relevant topics).

I think these aptitudes currently require a lot of self-direction to do well, no matter where you're doing them, so trying them on your own seems like a reasonable test (although given the difficulty, I'd suggest a frame of "seeing whether this is enjoyable/interesting/useful" rather than "actively pushing for a job").

The basic formula I see for trying out these aptitudes via self-study is something like:

  • Examine some effective-altruism-related hypothesis or question, and get very deep into it, forming your own "inside" view (a view based on your own reasoning and logic, rather than reasoning based on what others believe).
  • Write up your view with strong reasoning transparency, somewhere such as LessWrong or the EA Forum.
  • Engage in discussion.

Some example approaches:

  • Closely and critically review some piece of writing and argumentation on longtermist topics. This could be a highly influential piece of writing such as Astronomical Waste, On the Overwhelming Importance of Shaping the Far Future, some chapter of Superintelligence or The Precipice, various pieces on AI timelines, etc. Or for an easier start, it might be a recent post from the EA Forum, AI Alignment Forum, LessWrong, or a particular blog you think talks about interesting topics. Explain the parts you agree with as clearly as you can, and/or explain one or more of your key disagreements.
  • Pick some question such as "What are the odds of existential catastrophe this century?" (very broad) or "What are the odds of nuclear winter this century?" (narrower, likely more tractable). Write up your current view and reasoning on this, and/or write up your current view and reasoning on some sub-question that comes up as you're thinking about it.[2]
  • Look into some question that's been explicitly flagged as a "question for further investigation" (e.g. here or here). Try to identify some sub-question you can shed some light on, and write up what you find.

It could also be beneficial to start with somewhat more concrete, tractable versions of this sort of exercise, such as:

  • Explaining/critiquing interesting arguments made on any topic you find motivating to write about.
  • Making bets and/or forecasts on PredictIt, GJOpen, or Metaculus, and explaining your thinking.
  • Writing fact posts.
  • Closely examining and explaining and/or critiquing GiveWell's recommendations and cost-effectiveness analyses, or GPI's papers (all of these are publicly available and tend to explain their reasoning thoroughly).
  • Reviewing the academic literature on any topic of interest and trying to reach and explain a bottom-line conclusion (there are many examples of this sort of exercise in Slate Star Codex's "more than you wanted to know" tag; researching medical questions of personal interest can be an easy way to find topics).

In general, I think it's not necessary to obsess over being "original" or having some new insight. In my experience, when one tries to simply write up one's current understanding in detail - even when one's understanding is a very "vanilla" or widely accepted story - points of confusion and uncertainty often come into relief, and one often can learn a lot and/or notice underappreciated points this way. I think it's ideal to write up underappreciated points when one has them in mind, but I also see a lot of value in straightforward, detailed explanations and critical assessments of existing arguments.

On track?

Some example milestones you could aim for while developing these aptitudes:

  • You’re successfully devoting time to this and creating content. (I expect this to be the hardest milestone to hit for many - it can be hard to simply sustain motivation and productivity given how self-directed this work often needs to be.)
  • In your own judgment, you feel you have made and explained multiple novel, valid, nontrivially important (though not necessarily earth-shattering) points about crucial longtermist topics.
  • You’ve gotten enough feedback (upvotes, comments, personal communication) to feel that at least several other people (whose judgment you respect, and who put serious time into thinking about these topics) agree.
  • You’re making meaningful connections with others interested in these topics - connections that seem likely to lead to further funding and/or job opportunities. This could be from the organizations most devoted to your topics of interest; there could also be a "dissident" dynamic in which these organizations seem uninterested and/or defensive, but others are noticing this and offering help.

My very rough impression/guess is that for people who are an excellent fit for this aptitude, a year of full-time independent effort should be enough to mostly reach these milestones, and that 2-3 years of 20%-time independent effort (e.g., one day per week) should also suffice.[3] (For this kind of role, I think there's a lot of important "background processing" of ideas, so I'd expect a 20%-time year to be more than 1/5 as productive as a full-time year.) I would generally consider this "clock" to start as soon as someone is carving out time and forming an intent to try this work (I wouldn't wait until they are successfully spending time on it, since this is one of the most challenging things about the work, as noted above).

Contrast with research "paths." Rather than aiming to work on a particular topic such as AI governance or cause prioritization, I'm suggesting starting with whatever topics you have the energy and interest to write about, and I think that someone who succeeds by the above criteria has a good shot at building a career around research on some topic in the general vicinity. Because of this, it should be possible to try/explore this aptitude without needing a particular job offer in a particular area (although again, I think the success rate will generally be low).

"Communicator" aptitudes

Basic profile: helping to communicate key, substantively well-grounded messages and ideas to particular audiences. The audiences could be very general (e.g., writing for mass-market media) or more specialized (e.g., writing for policymakers on particular issues). Example messages could be the importance of global catastrophic risks, the challenge of AI alignment, the danger of covert state bioweapons programs, the general framework of effective altruism, and many more.

Examples:

  • Kelsey Piper and other journalists at Future Perfect.
  • Authors of mass-market books such as The Alignment Problem.
  • People who do social media and/or podcasting, e.g. Julia Galef and Rob Wiblin.
  • People working at think tanks, whose main goal is to put key ideas in terms that will be more compelling to particular policymaking audiences.

How to try developing these aptitudes:

First, you should have some idea of what sort of target audience you’d like to communicate with. A possible question to ask yourself: "What's a type of person that I understand, and can communicate with, better than most EAs/longtermists do?"

Then, you can try to get any job that involves communicating with this audience and getting feedback on a regular basis - whether or not the communication is about EA/longtermist topics. The main aptitude being built is general ability to communicate with the audience (although understanding of EA/longtermist topics will be important at some point as well). So if you’re interested in communicating with fairly general/widespread audiences, most jobs in journalism, and many in public relations and corporate communications, would qualify.

I also think there's a lot of opportunity to build this sort of aptitude through independent work, such as blogging, tweeting, podcasting, etc. I expect that some of the people with the greatest potential as communicators are those who find it relatively easy to create large amounts of content and connect with their target audience naturally. (Though for anyone doing independent public work, I'd advise taking some measures to avoid publishing something unintentionally offensive, as this could affect your career prospects for a long time even if the offense is the result of a misunderstanding.)

On track?

As a first pass, the answer to "How on track are you to develop a longtermism-relevant 'communicator' aptitude?" seems reasonably approximated by "How generically successful are you by the standards of the (communications-focused) career track you're on?" The more successful, the better position you'll likely be in at some point to find ways to communicate important longtermist ideas to your target audience.

Building a following via independent content creation would also be a clear sign of promise.

In both cases, it seems realistic to get a pretty good read on how you're doing within 2-3 years.

"Entrepreneur" aptitude[4]

Basic profile: founding, building, and (at least for some time) running an organization that works on some longtermist goal. Some people found organizations primarily as a way to have independence for their research or other work; here I am instead picturing someone who is explicitly aiming to invest in hiring, management, culture- and vision-setting, etc. with the aim of building an organization that can continue to function well if they leave.

(Not all organizations are founded by someone who is explicitly focused this way; sometimes an organization is founded by one person, but a lot of the "entrepreneur" work ends up done by people who come in later and take the top executive role.)

Examples:

Some pretty clean examples (with the organization that was founded in parentheses, regardless of whether the person is still there) would be Ben Todd (80,000 Hours); Jason Matheny (CSET); Elie Hassenfeld and myself (GiveWell). Many other longtermist organizations had a fair amount of early turnover at the top (leaving it somewhat unclear who did the bulk of the "entrepreneur" work) and/or are academic centers rather than traditional organizations.

How to try developing this aptitude:

Entrepreneurship tends to require juggling more duties than one can really learn how to do "the right way." It crucially relies on the ability and willingness to handle many things "just well enough" (usually with very little training or guidance) and focus one's energy on the few things that are worth doing "reasonably well."

With this in mind, I generally think the person best-suited to found an organization is the person who feels such strong conviction that the organization ought to exist (and can succeed) that they can hardly imagine working on anything else. This is the kind of person who tends to have a really clear idea of what they're trying to do and how to make the tradeoffs gestured at above, and who is willing and able to put in a lot of work without much reliable guidance.

So my general approach to entrepreneurship would be: if there's no organization you have a burning desire to create (or at least, a strong vision for), it's probably not time to be an entrepreneur. Instead it could make more sense to try for a job in which you're learning more about parts of the world you're interested in, becoming more aware of how organizations work, etc. - this could later lead to identifying some "gap in the market" that you’re excited to fill.

I do think that if you have any idea for an organization that you think could succeed, and that you'd be extremely excited to try to create, giving this a shot could be a great learning experience and way of building a general "entrepreneur" aptitude. (Edited to add: and potentially having impact in other ways, such as via philanthropy - see discussion below.) This is true even if the organization you have in mind does not do longtermist-focused work (for example, if it's a conventional tech startup). Though it's worth keeping in mind that it could take a long time (several years, sometimes >10 years) to get a successful organization to the point where one can responsibly step away and move on to something else.

On track?

In the first couple of years, I think you’re doing reasonably well if your organization is in a reasonable financial position, hasn't had any clear disasters, and has done pretty well at attracting talent. Beyond that, I think it tends to be a big judgment call how an organization is doing.

"Community building" aptitudes

Basic profile: bringing together people with common interests and goals, so that they form a stronger commitment to these interests and goals and have more opportunities and connections to pursue them. This could be via direct networking (getting to know a lot of people and introducing them to each other); meetups and events; explicit recruiting;[5] etc. Referring new people to resources and helping them learn more is also an important component.

Examples:

People organizing local, university, etc. EA groups, organizing EAGx's, etc., as well as many of the people at the Centre for Effective Altruism.

How to try developing this aptitude:

There is likely some community you're a part of, or set of people you know, that you can immediately start working with in this way: networking and making introductions; organizing meetups and other events, etc. This can initially be done on free time; if you start to build a thriving mini-community, I'd suggest looking for funding to transition into doing the work full-time, and looking into whether you can expand the population you're working with.

On track?

I find it a bit harder to articulate "on track" conditions for this aptitude than for most of the others in this piece, but a couple of possibilities:

  • To use a meetup-type model as a concrete example: I'd roughly think that you're doing well if, within 1-3 years of calendar time (whether full-time or part-time), you've had a major hand in organizing a set of people that interacts regularly; has a good number of people who are highly engaged and showing up regularly; and has some number of people who you think are likely to devote a lot of their career to longtermist impact, and are likely to succeed at this. (Specific numbers are hard to give since communities vary significantly, but double-digit regular attendees and a handful of highly promising people are roughly what I have in mind.)
  • Other versions of community building might look less like "organizing a community of regularly-meeting effective altruists" and more like "creating events that cause new connections to happen." Here I think you can look at who is "endorsing" your events via their attendance (especially repeat attendance) and/or via recommendations to others, to get a sense of how you're doing.
  • A slightly more generalized statement of what it looks like to be "on track": you're providing a space and/or service (whether this is a networking service, social space, discussion space, etc.) that a good number of people value and recommend to others; you have a strong sense of your target audience and the value that people are getting out of the space; and there are a number of especially promising people that are making heavy use of what you're providing.

Software engineering aptitude

Basic profile: I think software engineering can be useful for longtermist goals in multiple ways:

  • AI labs (mostly in industry, but sometimes in academia) have demand for software engineers, and it may be growing.
    • In some cases, engineers work directly on AI alignment research; Anthropic, DeepMind, MIRI, and OpenAI all have roles like this.
    • In other cases, they may work on building large and capable AI systems (e.g., AlphaStar and GPT-3) that can then be analyzed and characterized. If the lab they're working at is committed to reducing AI risk above other goals, and therefore is cautious about publicizing and deploying these systems while investing heavily in analyzing them and using them to help with alignment research, this can (at least arguably) be good for longtermist goals.
  • Software engineering can also be useful at any organization doing heavy analysis, which could include organizations working in politics, and potentially on biosecurity and pandemic preparedness (I don't currently know any examples of the latter, but think it's reasonably likely there will be some down the line).
  • Software engineering also tends to pay well, especially for people who join successful startups early on. (It is also probably a useful background for someone who wants to start a tech company.) So it could be good for would-be philanthropists.

Examples:

Catherine Olsson and Tom Brown have both done software engineering at OpenAI, Google Brain, and Anthropic.

How to try developing this aptitude:

Software engineering is a relatively well-established career path. You can start with something like App Academy or Lambda School. (For roles at e.g. DeepMind and OpenAI specifically, one probably needs to be in the top few percent of people in these programs.) Just about any software engineering job is probably a good way to build this aptitude; the more talented one's peers, the better.

On track?

See the "On track" section of Organization building, running and boosting.

Information security aptitudes

(In this case, there isn't as much difference between an "aptitude" and a "path." The same applies to the next section, as well.)

Basic profile: working to keep information safe from unauthorized access (or modification). This could include:

  • Research on theoretical and cutting-edge issues in information security - what attacks could theoretically be carried out, how they could theoretically be defended against, etc.
  • Working at a company, helping it to (a) define its information security goals and needs; (b) define the kinds of solutions that could be practical; (c) roll out and support solutions so that information remains secure in practice.

This post by Claire Zabel and Luke Muehlhauser states, "Information security (infosec) expertise may be crucial for addressing catastrophic risks related to AI and biosecurity ... More generally, security expertise may be useful for those attempting to reduce [global catastrophic risks], because such work sometimes involves engaging with information that could do harm if misused ... It’s more likely than not that within 10 years, there will be dozens of GCR-focused roles in information security, and some organizations are already looking for candidates that fit their needs (and would hire them now, if they found them) ... If people who try this don’t get a direct work job but gain the relevant skills, they could still end up in a highly lucrative career in which their skillset would be in high demand."

I broadly agree with these points.

Examples:

Unfortunately, there aren't many effective altruists with advanced information security careers as of now, as far as I know.

How to try developing this aptitude:

Working on information security for any company - or working in any field of information security research - could be a good way to build this aptitude. I would guess that the best jobs would be ones at major tech companies for whom security is crucial: Amazon, Apple, Microsoft, Facebook, and (especially) Google.

On track?

See the "On track" section of Organization building, running and boosting.

Academia

Basic profile: following an academic career track likely means picking a field relatively early, earning a Ph.D., continuing to take academic positions, attempting to compile an impressive publication record, and ultimately likely aiming for a role as a tenured professor (although there are some other jobs that recruit from academia). Academia is a pretty self-contained career track, so this is a case where there isn't a lot of difference between an "aptitude" and a "path" as defined in the introduction of this post.

Being an academic could be useful for longtermist goals in a few ways:

  • You might do research that relates substantively to key longtermist questions, which would cause this aptitude to overlap with "conceptual and empirical research on core longtermist topics" aptitudes.
  • You might have opportunities to raise the profile of important longtermist ideas within your field. You could think of this as being a sort of specialized "communicator" role. (Global Priorities Institute often aims for some combination of this point and the one above.)
  • You might have opportunities to advise policymakers and the public, as an expert.
  • You might have opportunities to help introduce your students to important longtermist ideas, including by teaching courses on effective altruism and longtermism (example). (As a side note, I think there could also be a lot of potential impact in being a K-12 teacher who looks for opportunities to introduce students to important ideas in effective altruism and longtermism.)
  • Additionally, some academic fields open doors for potentially high-impact non-academic roles. AI is perhaps the best example: studying AI and having impressive early-career accomplishments (even prior to earning one's PhD) can be a good way to end up with a "scientist" role at a private AI lab. Economics also can lead to strong non-academic opportunities, including in policymaking.

Many academic fields could potentially lead to these sorts of opportunities. Some that seem particularly likely to be relevant for longtermists include:

  • AI
  • Biology, epidemiology, public health, and other fields relevant to biorisk
  • Climate science
  • Economics and philosophy, the two priority fields at Global Priorities Institute

Examples:

Hilary Greaves at Global Priorities Institute; Stuart Russell at Center for Human-Compatible AI; Kevin Esvelt.

How to try developing this aptitude:

The academic career path is very well-defined. People entering it tend to have fairly robust opportunities to get advice from people in their field about how to advance, and how to know whether they're advancing.

In general, I would encourage people to place high weight on succeeding by traditional standards - both when picking a field and when picking topics and projects within it - rather than trying to optimize too heavily for producing work directly relevant to longtermist goals early in their careers.

On track?

My answer here is essentially the same as for political and bureaucratic aptitudes.

Other aptitudes

There are almost certainly aptitudes that have a lot of potential to contribute directly to longtermist goals, that I simply haven’t thought to list here.

Hybrid aptitudes

Sometimes people are able to do roles that others can't because they have two (or more) of the sorts of aptitudes listed above. For example, perhaps someone is a reasonably strong software engineer and a reasonably strong project/people manager, which allows them to contribute more as a software engineering manager than they could as either a software engineer or a nontechnical manager. In the effective altruism community, "conceptual and empirical research" often goes hand in hand with "communicator" (as with Nick Bostrom writing Superintelligence).

I think it's good to be open to building hybrid aptitudes, but also good to keep in mind that specialization is powerful. I think the ideal way to pursue a hybrid aptitude is to start with one aptitude, and then notice an opportunity to develop another aptitude that complements it and improves your career options. I wouldn't generally recommend pursuing multiple aptitudes at once early in one's career.

Aptitude-agnostic vision: general longtermism strengthening

I think any of the above aptitudes could lead to opportunities to work directly on longtermist goals - at an AI lab, EA organization, political institution, etc. And I think there are probably many other aptitudes that could as well.

However, some people will find themselves best-suited for an aptitude that doesn't lead to such opportunities. And some people will develop one of the above aptitudes, but still not end up with such opportunities.

I think such people still have big opportunities to contribute to longtermist goals, well beyond (though including) "earning to give," by doing things to strengthen longtermism generally. Things that have occurred to me in this category include:

  • Spreading longtermist ideas within personal networks. I don't think people should promote longtermism aggressively or in ways that risk annoying their friends. But I think people who are connected and respected will have natural opportunities to get people excited about important ideas who would ordinarily not be open to them. (And success in any career is likely to lead to personal networks full of other successful people.) People who are good at this might also become a sort of expert in how to communicate about longtermism with a certain kind of person.
  • Showing up in (and/or creating and hosting) longtermist and effective altruist spaces such as local meetups, EA Global, dinners and parties, talks, etc. You can raise the quality of these events both by your presence and by your feedback (noticing what's suboptimal about them and being vocal about this). I don't think people should necessarily attend these events only for personal benefit - there's a lot of good to be done by making them better, such that new people attending them immediately have good experiences and encounter people they respect and can learn from. And hosting/creating events along these lines can often be a good idea.
  • Being a role model. You can aim to simultaneously be a deeply informed, committed and vocal longtermist, and a person whom non-longtermists think highly of and are glad to know. I think role models are important and impactful, so this could make a real difference within whatever communities you’re in.
  • Being a vocal “customer” of the effective altruist and longtermist communities. I value it when someone says, "I feel unwelcome in the community because ___" or "I have trouble engaging with longtermists because __" or "I really value events like __ and wish there were more." People with different perspectives can notice different things along these lines, and help make the longtermist community better at retaining future people like them.
  • Raising children. I feel a bit odd mentioning this one, and my intent is certainly not to tell anyone they "should" have children. But I believe that raising children takes a ton of work and probably makes the long-run future better in expectation, so it would also feel odd not to mention it. As of now, I'd guess that longtermists with children also significantly increase demand for the longtermist community to become more parent- and child-friendly; this seems like a good (and not minor) thing for longtermism and longtermists.
  • Donating. I've listed this one relatively late because I think "earning to give" has probably been overemphasized compared to the above. But I think there is a lot of potential impact here.
    • There isn't currently an "obvious" and arbitrarily scalable place for longtermists to donate, analogous to GiveWell's top charities. But if one doesn't have particular donations they're excited to make, I think it makes sense to simply save/invest - ideally following best investing practices for longtermist values (e.g., taking the optimal amount of risk for money intended to benefit others over long time horizons, and using charitable vehicles to reduce taxes on money that's intended for charitable purposes - I hope there will be writeups on this sort of thing available in the future). There are debates about whether this is better than giving today, but I think it is at least competitive.
    • Donor lotteries also seem like a solid option.
    • Either of these means you don't have to stress on an annual basis about optimizing your donations; I think that's a good thing, because I think that time and energy can be better spent on aiming for success as a professional and person, contributing to all of the above.
  • Being prepared to do direct longtermist work if the need/opportunity arises. The future is hard to predict, and many people who see no track toward direct longtermist work now may end up with a big opportunity in the future.
    • A late-career job switch can be difficult: it could involve a major reduction in pay and other aspects of status, recognition, appreciation, and comfort. I think anyone who has set themselves up to be truly open to a late-career job switch has (just by that fact) accomplished something impressive and important. I'd guess that your odds of being able to do this are higher if you have significant "reserves" in terms of physical and mental (and financial) health.
    • I'd guess that anyone who is succeeding at what they do and developing aptitudes that few can match, while being truly prepared to switch jobs if the right opportunity comes up, has - in some sense - quite high expected longtermist impact (over the long run) via direct work alone. I think this expected impact will often be higher than the expected impact of someone who is in a seemingly top-priority longtermist career now, but isn't necessarily performing excellently, sustainably or flexibly.

I would think anyone who’s broadly succeeding at many of the above things - regardless of what their job is - is having a large expected longtermist impact. I think being successful and satisfied in whatever job one has probably helps on all of these fronts.

How to choose an aptitude

I imagine some people will want a take on which of these aptitudes is "highest impact."

My main opinion on this is that variance within aptitudes probably mostly swamps variance between them. Anyone who is an outstanding, one-of-a-kind talent at any of the aptitudes I listed is likely having enormous expected impact; anyone who is successful and high-performing is likely having very high expected impact; anyone who is barely hanging onto their job is likely having less impact than the first two categories, even if they're in a theoretically high-impact role.

I also believe that successfully building an aptitude - to the point where one is "professionally in demand" - generally requires sticking with it and putting in a lot of time over a long period. Because of this, I think people are more likely to succeed when they enjoy their work and thrive in their work environment, and should put a good deal of weight on this when considering what sorts of aptitudes they want to build. (I think this is particularly true early in one's career.)[6]

With these points in mind, I suggest a couple rules of thumb that I think are worth placing some weight on:

  1. "Minimize N, where N is the number of people who are more in-demand for this aptitude than you are." A more informal way of putting this is "Do what you'll succeed at."
  2. "Take your intuitions and feelings seriously." A lot of people will instinctively know what sorts of aptitudes they want to try next; I think going with these instincts is usually a good idea and usually shouldn't be overridden by impact estimates. (This doesn't mean I think the instincts are usually "correct." I think most good careers involve a lot of experimentation, learning that some sort of job isn't what one pictured, and changing course. I think people learn more effectively when they follow their curiosity and excitement; this doesn't mean that their curiosity and excitement are pointing directly at the optimal ultimate destination.)

I do believe there are some distinctions to be made, in terms of impact being higher for a given level of success at one aptitude vs. another. But any guesses I made on this front would be pretty wild guesses, quite sensitive to my current views on cause prioritization as well as the current state of the world (which could change quickly). And I think there's potential for enormous expected longtermist impact within any of the listed aptitudes - or just via aptitude-agnostic longtermism strengthening.

Some closing thoughts on advice

Throughout this piece, I've shared a number of impressions about how to build an aptitude, how to tell whether you're on track, and some general thoughts on what rules of thumb might help to be successful and have impact.

I've done this because it helps me get across a general framework/attitude for career choice that I think is worth some weight, and that can complement other frameworks that longtermists use.

But I'm generally nervous about giving career advice to anyone, even people I know well, because career choice is such a personal matter and it's so easy for an advice-giver to be oblivious to important things about someone's personality, situation, etc. I'm even more nervous about putting advice up on the internet where many people in many situations that I know very little about might read it.

So I want to close this piece by generally discouraging people from "taking advice," in the sense of making a radically different decision than they would otherwise because of their interpretation of what some particular person would think they should do. Hopefully this piece is useful for inspiration, for prompting discussion, and for raising points that readers can consider on the merits and apply their own judgment to. Hopefully it won't be taken as any sort of instruction or preference about a specific choice or set of choices.

I'll also link to this page, which contains a fair amount of "anti-advice advice," including quotes from me here (“A career is such a personal thing”), here (“When you’re great at your job, no one’s advice is that useful”), and here (“Don’t listen too much to anyone’s advice”).


This work is licensed under a Creative Commons Attribution 4.0 International License


  1. Some of the content in this section overlaps with that of 80,000 Hours's content on working at effective altruist organizations, particularly with respect to how one might prepare oneself for a role at such organizations. However, my section excludes research-based and other "idiosyncratic" roles at such organizations; it is about jobs based on "generally useful" skills that could also be used at many non-effective-altruist organizations (some of them giving an opportunity to have longtermist impact despite not being explicitly effective-altruist). In other words, this section takes a frame of "building aptitudes that can be useful to help many organizations, including non-effective-altruist ones doing important work" rather than "going to a non-effective-altruist organization in order to build skills for an effective-altruist organization." ↩︎

  2. I'd expect most investigations of this form to "balloon," starting with a seemingly straightforward question ("What are the odds of nuclear winter this century?") that turns out to rely on many difficult sub-questions ("What are the odds there will be a nuclear war at all? How much particulate matter does a typical nuke kick into the air? Are there bigger nukes that might be deployed, and how much bigger?") It can be very difficult to stay focused on a broad question, handling sub-questions pragmatically and giving a reasonable amount of depth to each. But allowing oneself to switch to answering a narrower and narrower subquestion could make the work more tractable. ↩︎

  3. This takes into account the fact that this kind of work can be very hard to put a lot of hours into. I'd expect even people who are a great fit for it to frequently struggle with maintaining focus and to frequently put in less time than they intended; nonetheless, I'd expect such people to achieve roughly the kind of progress I outline on the calendar time frames discussed. ↩︎

  4. This section is similar to 80,000 Hours's discussion of "nonprofit entrepreneur," with the main difference being my emphasis that entrepreneurship experience with a non-effective-altruist organization (including a for-profit) can be useful. ↩︎

  5. For example, "online organizing" - asking people to take relatively small actions on compelling, immediate topics, resulting in their becoming more engaged and reachable on broader topics. ↩︎

  6. Also see my comments here (under “Not focusing on becoming really good at something”), which were made anonymously but which I'm now attributing. ↩︎

Comments

I think this is a really well-written piece. I've personally shared it with my interns, and I tentatively think I'm more inclined to share it with my close contacts than most 80k articles for "generic longtermist EA career advice" (though obviously many 80k articles/podcasts have very useful details about specific questions).

2 things that I'm specifically confused about:

  1. As Max_Daniel noted, an underlying theme in this post is that "being successful at conventional metrics" is an important desideratum, but this doesn't reflect the experiences of longtermist EAs I personally know. For example, anecdotally, >60% of longtermists with top-N PhDs regret completing their program, and >80% of longtermists with MDs regret it.

    (Possible ways that my anecdata is consistent with your claims:
    • These people are often in the "Conceptual and empirical research on core longtermist topics" aptitudes camp, and success at conventional metrics is a weaker signal here than in other domains you listed.
    • Your notions of "success"/excellence are a much higher bar than completing a PhD at a top-N school.
    • My friends are wrong to think that getting a PhD/MD was a mistake.)
  2. You mention that a crazy amount of total hours is necessary to become world-class excellent at things. I agree with the sentiment that a) fit/talent is very important and b) college and other "normal/default" practices acclimate people to wrongly believing that success is easier and hard work is less critical than is true. But when I think about things that matter for longtermist EAs (rather than success at well-defined prestige ladders in fields with established paradigms), I think a lot of outlier success comes from extremely (arguably unsustainably) intense periods with relatively small calendar time or total hours invested. E.g.:
    • A lot of success in early cryptocurrency trading (pre-2019, say) comes from people who were a) very talented, b) willing to see the opportunity, and c) willing to make radical life changes to realize the once-in-a-decade event and immediately jump on it.
    • This seems to have happened a bunch during the pandemic. E.g., amateur short-term forecasts, Youyang Gu's modeling, patio11's work with VaccinateCA, etc., all seemed to have been broadly better in most cases than similar work by established experts.
      • Obviously there are important exceptions like the mRNA vaccines and a lot of the testing/sequencing work.
    • My impression (from outside the field) is that a lot of the most important work in AI Safety is done by people who are fairly junior, and without a lot of experience in the field.
    • I imagine "crunch time" for EAs in longtermist causes to look a lot more like "be a generically competent person who has your shit together + is willing to drop everything to work on the hard things that aren't really your specialty but somebody has to do it and nobody else will" than "prepare for 10-20 years working very hard for the exact thing you prepared for, and then emergency times will look pretty close to your specialty so you're well-placed."
      • Perhaps a crux here is that your mental image of "crunch time" looks more like the latter scenario?
      • or maybe more that "crunch time" is just much less important relatively speaking?
    • A caveat here is that I do agree that a) excellence is very important and b) many EAs (myself included) are perhaps not working hard enough to achieve true excellence.
    • I also agree that a) general excellence and b) hard work specifically are somewhat transferable (e.g., many of the successful crypto people were great finance traders or great programmers before crypto trading, at least one of which requires insane hours; patio11 was a world-class software evangelizer before his covid work). But I think the importance of being world-class here is "just" building a) the general skillset of becoming world-class and b) the mental fortitude, flexibility, etc., of willingness to sacrifice other things when the stakes are high enough, rather than either the direct benefits of your expertise or the network advantages of being around other prestigious/high-status/etc. people.
      • One way in which our models cash out to different actions:
        • If my intuition/heuristic is correct, this points to sometimes doing crunch-time work in less important eras as being the right way to prepare, rather than steadily climbing towards excellence in very competitive domains.
          • Being in "crunch mode" all the time may be actively bad, to the extent that it makes you miss out on great opportunities because you're too zoned into your specific work.
        • On the other hand, if we assume most of the benefits of excellence come from the networking etc. benefits of steady excellence, this points much more towards "spend 5-20 years becoming world-class at something that society thinks of as hard and important."

An obvious caveat to these points is that you have much more experience with excellence than I do, and your "closing thoughts on advice" aside, I'm mostly willing to defer to you if you think my heuristics/intuitions here are completely off.

Thanks for the thoughtful comments, Linch.

Response on point 1: I didn't mean to send a message that one should amass the most impressive conventional credentials possible in general - only that for many of these aptitudes, conventional success is an important early sign of fit and potential.

I'm generally pretty skeptical by default of advanced degrees unless one has high confidence that one wants to be on a track where the degree is necessary (I briefly give reasons for this skepticism in the "political and bureaucratic aptitudes" section). This piece only mentions advanced degrees for the "academia," "conceptual and empirical research" and "political and bureaucratic" aptitudes. And for the latter two, these aren't particularly recommended, more mentioned as possibilities.

More generally, I didn't mean to advocate for "official credentials that anyone could recognize from the outside." These do seem crucial for some aptitudes (particularly academia and political/bureaucratic), but much less so for other aptitudes I listed. For org running/building/boosting, I emphasized markers of success that are "conventional" (i.e., they're not contrarian goals) but are also not maximally standardized or legible to people without context - e.g., raises, promotions, good relationships.

Response on point 2: this is interesting. I agree that when there is some highly neglected (often because "new") situation, it's possible to succeed with a lot less time invested. Crypto, COVID-19, and AI safety research of today all seem to fit that bill. This is a good point.

I'm less sure that this dynamic is going to be reliably correlated with the things that matter most by longtermist lights. When I picture "crunch time," I imagine that generalists whose main asset is their "willingness to drop everything" will have opportunities to have impact, but I also imagine that (a) their opportunities will be better insofar as they've developed the sorts of aptitudes listed in this piece; (b) a lot of opportunities to have impact will really rely on having built aptitudes and/or career capital over the long run. 

For example, I imagine there will be a lot of opportunities for (a) people who are high up in AI labs and government; (b) people who know how to run large projects/organizations; (c) people with large existing audiences and/or networks; (d) people who have spent many years working with large AI models; (e) people who have spent years developing rich conceptual and empirical understanding of major potential risk factors, and that these opportunities won't exist for generalists.

"Drop everything and work hard" doesn't particularly seem to me like the sort of thing one needs to get practice with (although it is the sort of thing one needs to be prepared for, i.e., one needs not to be too attached to their current job/community/etc.) So I guess overall I would think most people are getting "better prepared" by building the sorts of aptitudes described here than by "simulating crunch time early." That said, jumping into areas with unusually "short climbs to the top" (like the examples you gave) could be an excellent move because of the opportunity to build outsized career capital and take on outsized responsibilities early in one's career. And I'll reiterate my reservations about "advice," so wouldn't ask you to defer to me here!

Related to this discussion, Paul Graham has a recent article called "How to Work Hard," which readers here might find valuable.

As Max_Daniel noted, an underlying theme in this post is that "being successful at conventional metrics" is an important desideratum, but this doesn't reflect the experiences of longtermist EAs I personally know. For example, anecdotally, >60% of longtermists with top-N PhDs regret completing their program, and >80% of longtermists with MDs regret it.

Your examples actually made me realize that "successful at conventional metrics" maybe isn't a great way to describe my intuition (i.e., I misdescribed my view by saying that). Completing a top-N PhD or MD isn't a central example - or at least not sufficient for being a central example, and certainly not necessary for what I had in mind.

I think the questions that matter according to my intuition are things like:

  • Do you learn a lot? Are you constantly operating near the boundaries of what you know how to do and have practiced?
  • Are the people around you impressed by you? Are there skills where they would be like "off the top of my head, I can't think of anyone else who's better at this than <you>"?

At least some top-N PhDs will correlate well with this. But I don't think the correlation will be super strong: especially in some fields, I think it's not uncommon to end up in a kind of bad environment (e.g., an advisor who isn't good at mentoring), or to be often "under-challenged" because tasks are either too easy or based on narrow skills one has already practiced to saturation, or because there are too few incentives to progress fast.

[ETA: I also think that many of the OP's aptitudes are really clusters of skills, and that PhDs run some risk of practicing only a small number of skills - i.e., of being considerably more narrow. Again, this will vary a lot by field, advisor, other environmental conditions, etc.]

What I feel even more strongly is that these (potential) correlates of doing a PhD are much more important than the credential, with narrow exceptions for some career paths (e.g., you need a PhD if you want to become a professor).

I also think I should have said "being successful at <whatever> metric for one of these or another useful aptitude" rather than implying that "being successful at anything" is useful.

Even taking all of this into account, I think your anecdata is a reason to be somewhat more skeptical about this "being successful at <see above>" intuition I have.

Completing a top-N PhD or MD isn't a central example - or at least not sufficient for being a central example

As an aside, if you're up for asking your friends/colleagues a potentially awkward question, I'd be interested in seeing how much of my own anecdata about EAs with PhDs/MDs replicates in your own (EA) circles (which is presumably more Oxford-based than mine). I think it's likely that EAs outside of the Bay Area weigh the value of a PhD/other terminal degrees more, but I don't have a strong sense of how big the differences are quantitatively. 

I find your crypto trading examples fairly interesting, and I do feel like they only fit awkwardly with my intuitions - they certainly make me think it's more complicated.

However, one caveat is that "willing to see the opportunity"  and "willing to make radical life changes" don't sound quite right to me as conditions, or at least like they omit important things. I think that actually both of these things are practice-able abilities rather than just a matter of "willingness" (or perhaps "willingness" improves with practice). 

And in the few cases I'm aware of, it seems to me the relevant people were world-class excellent at some relevant inputs, in part clearly because they did spend significant time "practicing" them.

The point is just that these inputs are broader than "ability to do cryptocurrency trading". On the other hand, they also don't fit super neatly into the aptitudes from the OP, though I'd guess the entrepreneurial aptitude would cover a lot of it (even if it's not emphasized in the description of it).

However, one caveat is that "willing to see the opportunity"  and "willing to make radical life changes" don't sound quite right to me as conditions, or at least like they omit important things.

I agree with this! Narrowly,"chance favors the prepared mind" and being in either quant trading or cryptography (both competitive fields!) before the crypto boom presumably helps you see the smoke ahead of time, and like you some of the people I know in the space were world-class at an adjacent field like finance trading or programming.  Though I'm aware of other people who literally did stuff closer to fly a bunch to Korea and skirt the line on capital restrictions, which seems less reliant on raw or trained talent. 

Broadly, I agree that both seeing the opportunity (serendipity?) and willingness to act on crazy opportunities are rare skillsets that are somewhat practicable rather than just pure innate dispositions. This is roughly what I mean by

But I think the importance of being world-class here is "just" building a) the general skillset of becoming world-class and b) the mental fortitude, flexibility, etc., of willingness to sacrifice other things when the stakes are high enough, rather than either the direct benefits of your expertise or the network advantages of being around other prestigious/high-status/etc. people.

But I also take your point that maybe this is its own skillset (somewhat akin to/a subset of "entrepreneurship") rather than a general notion of excellence.

Narrowly,"chance favors the prepared mind" and being in either quant trading or cryptography (both competitive fields!) before the crypto boom presumably helps you see the smoke ahead of time, and like you some of the people I know in the space were world-class at an adjacent field like finance trading or programming.  Though I'm aware of other people who literally did stuff closer to fly a bunch to Korea and skirt the line on capital restrictions, which seems less reliant on raw or trained talent. 

(I agree that having knowledge of or experience in adjacent domains such as finance may be useful. But to be clear, the claim I intended to make was that the ability to do things like "fly a bunch to Korea" is, as you later say, a rare and somewhat practicable skillset.

Looking back, I think I somehow failed to read your bullet point on "hard work being somewhat transferable" etc. I think the distinction you make there between  "doing crunch-time work in less important eras" vs. "steadily climbing towards excellence in very competitive domains" is very on-point, that the crypto examples should make us more bullish on the value of the former relative to the latter, and that my previous comment is off insofar as it can be read as me arguing against this.)

(I agree that having knowledge of or experience in adjacent domains such as finance may be useful. But to be clear, the claim I intended to make was that the ability to do things like "fly a bunch to Korea" is, as you later say, a rare and somewhat practicable skillset.

Got it!

and that my previous comment is off insofar as it can be read as me arguing against this

Thanks for the clarification! Though I originally read your comment as an extension of my points rather than arguing against them, so no confusion on my end (though of course I'm not the only audience of your comments, so these clarifications may still be helpful).

I read this post around the beginning of March this year (~6 months ago). I think reading this post was probably net-negative for my life plans. Here are some thoughts about why I think reading this post was bad for me, or at least not very good. I have not re-read the post since then, so maybe some of my ideas are dumb for obvious reasons. 

I think the broad emphasis on general skill and capacity building often comes at the expense of directly pursuing your goals. In many ways, the post says "Skill up in an aptitude because in the future this might be instrumentally useful for making the future go well." And I think this is worse than "Identify what skills might help the future go well, then skill up in those skills, then you can cause impact." I think the aptitudes framework is what I might say if I knew a bunch of unexceptional people were listening to me and taking my words as gospel, but it is not what I would advise to an exceptional person who wants to change the world for the better (I would try to instill a sense of specifically aiming at the thing they want and pursuing it more directly). This distinction is important. To flesh this out: if only geniuses were reading my post, I might advise that they try high-variance, high-EV things which have a large chance of ending up in the tails (e.g., startups, at which most people will fail). But I would not recommend startups to a broader crowd, because more of them would fail, and then the community I was trying to create to help the future go well would largely be made up of people who took long-shot bets and failed, making them not so useful, and making my community less useful when it's crunch time (although I am currently unsure what we need at crunch time, having a bunch of people who pursued aptitude growth is probably good). Therefore, I think I understand and somewhat endorse safer, aptitudes-based advice at a community scale, but I don't want it to get in the way of 'people who are willing to take greater risks and do whacky career stuff' actually doing so.

My personal experience is that reading this post gave me the idea that I could sorta continue life as normal, but with a slight focus on developing particular aptitudes like building organizational success, research on core longtermist topics, and maybe communication. I currently think that plan was bad and, if adopted more broadly, has a very bad chance of working (i.e., of AI alignment getting solved). However, I also suspect that my current path is suboptimal – I am not investing in my career capital or human capital for the long run as much as I should be.

So I guess my overall take is something like: people should consider the aptitudes framework, but they should also think about what needs to happen in the world in order to get the thing they care about. Taking a safer, aptitudes-based approach is likely the right path for many people, but not for everybody. And if you take the career advice you read seriously, it seems pretty unlikely that it would leave you taking roughly the same actions you were planning on taking before reading – you should be suspicious of this surprising convergence.

I'm curious what you (and others) think about the following aptitude.

(I don't have a particular reason to think my intuition that this is "a thing" is right, and know barely anything about the careers I'm talking about, so I advise not taking this seriously at all as actual advice for now.)

"Negotiation and navigating conflicting interests" aptitude.

This involves things like:

  • Knowing what your own interests regarding some contested issue are.
  • Understanding where other people are coming from when approaching some issue, including in ways that go beyond what they are able to state explicitly.
  • Helping others understand what their interests regarding some contested issue are, and helping them to communicate them well to others, and/or being good at that kind of communication oneself. This includes the ability to translate between different vocabularies and cultural codes.
  • Coming up with creative and original options for how to settle a conflict.

Examples [??]:

  • US top politicians who have a track record at getting bipartisan policies passed, e.g., Joe Biden
  • "Sherpas" and other political staffers involved in the nitty-gritty of international agreements
  • Roger Fisher and William L. Ury
  • Top executives and lawyers dealing with mergers and acquisitions
  • Some aspects of what HR departments do in companies
  • Machiavelli [???]
  • Robert Moses [???]

How to try to develop this aptitude [?]: 

  • Embark on and become conventionally successful in one of the above careers
  • Constructively contribute to the resolution of conflicts that happen around you (there usually is an abundance of them ...)
  • Model United Nations conferences and similar things [???]

On track [???]:

  • You find neither being directly involved in conflicts nor helping to mediate them stressful or unpleasant, and there are several examples of when you've clearly contributed to finding significant Pareto improvements.
  • You are respected by a wide range of different people, and you often find yourself in situations where two people or groups can't have a good conversation with each other, but you get along well with both, pass their "Ideological Turing Tests", and can "talk in their language".
  • If you're doing this professionally, your achievements are recognized by your peers and bosses, you get promoted, and you take on "bigger" cases involving more responsibility etc.

Why do I think this might be important?

  • Depending on the path, I think there are significant synergies with the "organization building etc.", political/bureaucratic, community building, and entrepreneur aptitudes.
  • However, I think there may also be a case for viewing this as a potential 'central' aptitude in its own right. Here's a straw argument for why:
    • Suppose that in 2050, longtermist-aligned MyAICompany makes a research breakthrough that makes them think they would have a decent shot at building a 'transformative AI system' if they had access to 10x-100x their current resources (e.g. compute). They're wondering if and how to talk about this to their main investors, the US government, the Chinese government, big tech companies, etc.
    • The aptitudes from the OP cover: MyAICompany being founded; it being run well operationally; it having good software infrastructure; it having access to sound bottom-line conclusions on relevant research questions; a good supply of longtermist-aligned talent; various other actors (e.g., parts of the US government) being more sympathetic to, or at least having a better understanding of, its goals; no-one stealing their AI research, or being able to undesirably eavesdrop on their subsequent negotiations.
    • However, the aptitudes from the OP conspicuously do not cover (at least not centrally - the relevant capabilities don't seem to be emphasized in any of the other aptitudes): How to structure the conversations with these other actors? How to achieve a good outcome even though there will be a bunch of conflicting interests?
  • (Secondarily, and anecdotally, I think that a lack of this aptitude has also contributed to at least some EA organizations not always having been "well run" in a generic sense.)
  • I am concerned that, due to founder effects and skewed intellectual foundations (e.g., commitment to philosophical views that hide the relevance of de-facto conflicting interests by instead emphasizing how ideal reasoners would converge to shared beliefs and goals), the current prevalence of this aptitude in the EA community is low, and that it is underappreciated.

I like this; I agree with most of what you say about this kind of work.

I've tried to mostly list aptitudes that one can try out early on, stick with if they're going well, and pretty reliably build careers (though not necessarily direct-work longtermist careers) around. I think the aptitude you're describing here might be more of a later-career/"secondary" aptitude that often develops as someone moves up along an "organization building/running/boosting" or "political/bureaucratic" track. But I agree it seems like a cluster of skills that can be intentionally developed to some degree and used in a lot of different contexts.

I'd be quite interested to hear one or more people from 80k share their thoughts on this post, e.g. on questions like:

  • To what extent do they think there are "disagreements" between their advice/framework and this one, vs something more like this just being a different framework and providing different emphases (which might still therefore lead readers in different directions, especially if readers engage quickly)?
  • To what extent do they think it'd be good if someone thinking about career choice swapped out reading some 80k content for reading this?
    • E.g., would 80k staff think it'd be best to read their full key ideas article, their full career planning process article, and then this? Or maybe read this earlier? Or maybe read this only after reading some problem profiles, career reviews, etc.?
    • E.g., how does this differ between different types of readers?

Hi Michael,

Just some very quick reactions from 80k:

  • I think Holden’s framework is useful and I’m really glad he wrote the post.

  • I agree with Holden about the value of seeking out several different sources of advice using multiple frameworks and I hope 80k’s readers spend time engaging with his aptitude-based framing. I haven’t had a chance to think about exactly how to prioritise it relative to specific pieces of our content.

  • It’s a little hard to say to what extent differences between our advice and Holden’s are concrete disagreements v. different emphases. From our perspective, it’s definitely possible that we have some underlying differences of opinion (e.g. I think all else equal Holden puts more weight on personal fit) but, overall, I agree with the vast majority of what Holden says about what types of talent seem most useful to develop. Holden might have his own take on the extent to which we disagree.

  • The approach we take in the new planning process overlaps a bit more with Holden’s approach than some of our past content does. For example, we encourage people to think about which broad “role” is the best fit for them in the long-term, where that could be something like “communicator”, as well as something narrower like “journalist”, depending on what level of abstraction you find most useful.

  • I think one weakness with 80k’s advice right now is that our “five categories” are too high-level and often get overshadowed by the priority paths. Aptitudes are a different framework from our five categories conceptually, but seem to overlap a fair amount in practice (e.g. government & policy = political & bureaucratic aptitude). However, I like that Holden’s list is more specific (and he has lots of practical advice on how to assess your fit), and I could see us adapting some of this content and integrating it into our advice.

Thanks for writing this! I like the aptitudes framing.

With respect to software engineering, I would add that EA orgs hiring web developers have historically had a hard time getting the same level of engineering talent as can be found at EA-adjacent AI orgs.* I have a thesis that as the EA community scales, the demand for web developers building custom tools and collaboration platforms will grow as a percentage of direct work roles. With the existing difficulty in hiring and with most EAs not viewing web development as a direct work path, I expect the shortage to continue.

Also, as practical career advice, I'd recommend that many people who already know how to code somewhat try to get a software engineering job at ~any tech company / startup. That company will spend months training you, and the problems you'll be solving will be much more useful for learning than the toy problems offered by a bootcamp.

* This is not so much to cast aspersions on myself and my colleagues, as to agree with the post that the level of engineering talent in AI labs is very high.

I mostly agree, though I would add: spending a couple of years at Google is not necessarily going to be super helpful for starting a project independently. There's a pretty big difference between being good at using Google tooling and making incremental improvements to existing software versus building something end-to-end and from scratch. That's not to say it's useless, but if someone's medium-term goal is doing web development for EA orgs, I would push for working at a small, high-quality startup. Of course, the difficulty is that those are harder to identify.

Thanks for writing this up! I found the overall perspective very helpful, as well as lots of the specifics, particularly (1) what it means to be on track and (2) the emphasis on the importance of 'personal fit' for an aptitude (vs the view there being a single best thing).

Two comments. First, I'm a bit surprised that you characterised this as being about career choice for longtermists. It seems that the first five aptitudes are just as relevant for non-longtermist do-gooding, although the last two - software engineering and information security - are more specific to longtermism. Hence, this could have been framed as your impressions on career choice for effective altruists, in which you would set out the first five aptitudes and say they apply broadly, then note the two more that are particular to longtermism.

In the spirit of being a vocal customer, I would have preferred this framing. I am enthusiastic about effective altruism, but ambivalent about longtermism - I'm glad some people focus on it, but it's not what I prioritise - and found the narrower framing somewhat unwelcoming, as if non-longtermists aren't worth considering. (Cf if you had said this was career advice for women even though gender was only pertinent to a few parts.)

Second, one aptitude that did seem conspicuous by its absence was for-profit entrepreneurship - the section on the "entrepreneur" aptitude only referred to setting up longtermist organisations. After all, the Open Philanthropy Project, along with much of the rest of the effective altruist world, only exists because people became very wealthy and then gave their money away. I'm wondering if you think it is sufficiently easy to persuade (prospectively) wealthy people of effective altruism(/longtermism) that becoming wealthy isn't something community members should focus on; I have some sympathy with this view, but note you didn't state it here. 

Thanks for the thoughtful comments!

On your first point: the reason I chose to emphasize longtermism is because:

  • It's what I've been thinking about the most (note that I am now professionally focused on longtermism, which doesn't mean I don't value other areas, but does mean that that's where my mental energy goes).
  • I think longtermism is probably the thorniest, most frustrating area for career choice, so I wanted to focus my efforts on helping people in that category think through their options.
  • I thought a lot of what I was saying might generalize further, but I wasn't sure and didn't want to claim that it would. And I would have found it harder to make a list of aptitudes for all of EA without having noticeable omissions.

With all of that said, I hear you on why this felt unwelcoming, and regret that. I'll add a link to this comment to the main post to help clarify.

On your second point, I did try to acknowledge the possibility of for-profit startups from a learning/skill-building point of view (paragraph starting with "I do think that if you have any idea for an organization that you think could succeed ...") though I do agree this sort of entrepreneurship can be useful for making money and having impact in other ways (as noted by MichaelA, below), not just for learning, and should have been clearer about that.

Two small things on your final paragraph:

Thanks for the writeup, Holden. I agree that this is a useful alternative to the 80k approach.

On the conceptual research track, you note "a year of full-time independent effort should be enough to mostly reach these milestones". How do you think this career evolves as the researcher becomes more senior? For example, Scott Alexander seems to be doing about the same thing now as he was doing 8 years ago. Is the endgame for this track simply that you become better at doing a similar set of things?

I think a year of full-time work is likely enough to see the sort of "signs of life" I alluded to, but it could take much longer to fulfill one's potential. I'd generally expect a lot of people in this category to see steady progress over time on things like (a) how open-ended and poorly-scoped of a question they can tackle, which in turn affects how important a question they can tackle; (b) how efficiently and thoroughly they can reach a good answer; (c) how well they can communicate their insights; (d) whether they can hire and train other people to do conceptual and empirical research, and hence "scale" what they're doing (this won't apply to everyone).

There could also be an effect in the opposite direction, though - I expect some people in this category to have their best insights relatively early on, and to have more trouble innovating in a field as the field becomes better-developed.

Overall this track doesn't seem like the one most likely to offer a steady upward trajectory, though I think some people will experience that. (I'd guess that people focused on "answering questions" would probably have more of a steady upward trajectory than people focused on "asking new questions / having totally original ideas.")

Great post.

Regarding the section on software engineering for biosecurity:

"...potentially on biosecurity and pandemic preparedness (I don't currently know any examples of the latter, but think it's reasonably likely there will be some down the line)."

— I have some experience with this, having worked for the UK Joint Biosecurity Centre during the height of the pandemic (albeit briefly) in a data science role. I think it's fair to say we had a reasonably sized influence on the analysis that went into government policy relating to the pandemic, with my seniors often reporting straight to the Prime Minister's Office - where 'reasonably sized' means JBC technical reports or slide decks might have made it into the top ten or even top five most influential policy documents that the most senior health officials would look at that day (very rough guess).

I would argue that data engineering was a reasonably sized bottleneck (one that could have been addressed by having access to more good software engineers to help improve our data platforms), but that there were also difficulties in knowing what data was relevant etc., which was more of a data science problem. So really there were many bottlenecks to growth/research, of which data engineering was just one (personal opinion).

As a piece of general career advice, I would say that software engineers considering data engineering as a career would probably find their skills remain in demand, or possibly increase in demand, over the following decades, which might make it a good bet. Just as research software engineering is a thing, research data engineering is definitely a thing (if not always given that name), and more talented people working in this area would probably be good.

JBC might not exist in quite the same way for much longer because of how much the public health infrastructure in the UK is changing at the moment (personal opinion), but I think data (software) engineering in biosecurity and pandemic preparedness is definitely a thing (for as long as these institutions persist after covid). If you're interested it helps to have some domain experience of what existing public health data infrastructure exists in your country or region, so that you know where to actually search for jobs. Alternatively you could go in through the contractor route although this seems like a less efficient way of working on the things you are actually interested in.

[This comment is no longer endorsed by its author]

With this in mind, I generally think the person best-suited to found an organization is the person who feels such strong conviction that the organization ought to exist (and can succeed) that they can hardly imagine working on anything else. This is the kind of person who tends to have a really clear idea of what they're trying to do and how to make the tradeoffs gestured at above, and who is willing and able to put in a lot of work without much reliable guidance.

So my general approach to entrepreneurship would be: if there's no organization you have a burning desire to create (or at least, a strong vision for), it's probably not time to be an entrepreneur. Instead it could make more sense to try for a job in which you're learning more about parts of the world you're interested in, becoming more aware of how organizations work, etc. - this could later lead to identifying some "gap in the market" that you’re excited to fill.

This sounds probably right to me, and also aligns with advice I've heard elsewhere. On the other hand, it seems to me that Charity Entrepreneurship-incubated charities have had more total success, and more consistently gone at least fairly well, than I might've expected or than this general advice would seem to predict. 

So I currently feel fairly uncertain about this matter, and I'm fairly open to the hypothesis that that general advice just doesn't apply if there's a very well run incubation program (including good ideas for what to found, good vetting, good training, etc.) and a very strong pool of applicants to it, or something.

For roughly this reason, I'm also more optimistic about the Longtermist Entrepreneurship Fellowship than that general advice might suggest (which also seems in line with Claire Zabel's view, given the grant that was provided to that project).

All that said, I haven't looked super closely into any of this, so these are just tentative views. 

Really appreciated this post. 

I see a lot of comments questioning some of the ideas, especially the notion of spending time to build aptitudes and become excellent at something you're good at rather than immediately jumping to a more "impactful" role. I've also seen comments from people regretting spending time on one thing and wishing they had switched sooner. 

From my perspective as someone later on in my career than some here, I would say Holden's observation is spot-on. 

I'm sure there are exceptions, but in general, a huge amount of what you learn as you work is topic-agnostic. It's just the ability to work with people, to get things done, to communicate well, to manage your boss and listen to your direct reports, to plan your time ... Once you've learned all this, you just cannot imagine that once upon a time you didn't know about it. But (for most of us) there was such a time.

Today maybe you ask someone to help you and they do, and you don't realise that if you'd asked 3 years ago, they would probably have refused because you wouldn't have known how to ask. And you would not even have realised that you had done anything wrong, you'd just have concluded that they weren't willing to help. 

Today maybe you look at a problem that seems intractable ... but maybe in 3 years you'd look at the same problem and find at least 4 different possible paths forward. Today you don't even realise that the intractability of the problem might be related to your own lack of experience. 

Your career is a marathon, not a sprint. This is the line every manager uses to explain why it's not such a big deal that you're not getting that promotion this year :D ... but it's also the truth. Your potential to contribute will grow massively over the first 5-10 years of your career. Your ability to identify where you can have the most impact will also grow in that time. So as long as you don't do anything which precludes you from being able to take on an impactful role in the future, the most important thing to do now is to find a role that you enjoy, that you're good at, and where you can learn a lot. 

Obviously, all the above refers to the first few years. There comes a time where there is a real risk of stagnation, where you may not be learning anything much. If you find yourself in that position, it's probably a great time to think about moving to a more impactful role. 

Thanks for this post! 

I think I disagree or at least feel hesitant about some specific things, but overall I think this seems like a really useful framework: it provides a bunch of good specific ideas and tips, and it's easy to work out concretely what you mean by each thing and how to apply these ideas (particularly due to the "Examples:" and "On track?" sub-sections). And I've already sent a link to the "Political and bureaucratic aptitudes" section to someone I spoke to recently who I think would find it useful.

The section "Some closing thoughts on advice" also made me think the following two links may be useful for some readers:

This might just be an extension of the "community building" aptitudes, but here's another potential aptitude.

"Education and training" aptitudes

Basic profile: helping people absorb crucial ideas and the right skills efficiently, so that we can reduce talent/skills bottlenecks in key areas.

Examples:

Introductory EA program, in-depth EA fellowship, The Precipice reading group, AI safety programmes, alternative protein programmes, operations skills retreats, various workshops organised at EAGs/EAGxs, etc.

How to try developing this aptitude:

I'll split these into three areas: (a) pedagogical knowledge, (b) content knowledge, and (c) operations.

(a) Pedagogical knowledge

This is the specific knowledge you learn and the skills you develop to teach effectively or help others learn more effectively. Examples: breaking down learning objectives into digestible chunks, designing effective and engaging learning experiences, creating and presenting content, and (EDIT) measuring whether your students are actually learning.

This could be applied to classroom/workshop settings, reading and discussion groups, career guides, online courses, etc.

You can pick up knowledge and skills either:
  • formally: teaching courses, meta-learning courses, working as a teaching assistant
  • or informally: helping others learn

(b) Content knowledge

This is knowledge specific to the domain you want others to learn. If you're teaching the English alphabet, you need to know what it is (symbols that you can rearrange to create meanings and associations with physical or abstract things), why it's relevant (so you have a shared language with which to learn and communicate with others), and how to apply it ("m"+"o"+"m" is mom!).

It's not always necessary to be an expert in the domain, but it helps a lot if you are above average at it.

(c) Operations
A big (but sometimes forgotten) part of organising classrooms, discussion groups, or workshops is that they need to run smoothly (or within expected parameters) to reduce friction in the learning experience. It also helps to understand the different trade-offs of running an education project (i.e., quality of learning vs. students' capacity vs. educators' capacity vs. financial cost).

You can pick up knowledge and skills either:
  • formally: operations courses, project management courses, productivity books
  • or informally: learning from "that friend who usually gets things done and is generally reliable"

On track?

It's hard to generalise since there are so many different models (e.g., classroom, online courses, discussion groups) of how to educate/train a person, and each model requires a different way of thinking. Here's my rough take on this:

Level 1: you get positive feedback from others when you had to explain and teach a certain topic informally (e.g. with friends over dinner, homework group, helping students as a teaching assistant during office hours).

Level 2: you get positive feedback when facilitating discussions.

Level 3: you get positive feedback when teaching a workshop.

Level 4 (you're likely on track here): you get positive feedback when teaching and running a course, online course, or lecture series with more than 50 participants.

Yeah, this seems worth highlighting in addition to the aptitudes Holden highlighted (though I'm not necessarily saying it's as important as those other aptitudes; I haven't thought carefully about that). And that seems like a good breakdown of relevant skills, how to tell you're on track, etc.

Regarding examples and places to apply this, I think an additional important (and perhaps obvious) place is with actual school students; see posts tagged Effective altruism outreach in schools. (There's also a Slack workspace for that topic, which I didn't create but could probably add people to if they send me a message.)

This general idea seems pretty promising to me.

This is great. I emphatically agree with almost all of it - and I expect I will send this post to many people going forward. 

It's very unclear if I have good intuitions about how to do career choice well, and so unclear whether my agreement should make anyone more than negligibly more willing to act on this advice - but at the very least I strongly suspect I could have avoided many career choice mistakes if I had read such a post in, say, 2016.

Some things that ring particularly true to me relative to what I perceive to be common attitudes among young people interested in EA:

  • Focus a lot on achieving success by conventional metrics.
  • When things are not working well by typical lights (e.g. when judged against things like in the "on track?" sections), quit and try something else. No matter whether you're on a path, or at an organization, that is typically considered to be "high-impact".
  • "Research vs. operations" is not a great question to ask [I'm aware you're not saying this directly in the post], and people are often better off replacing both "research" and "operations" with more fine-grained categories when thinking about their careers.
  • When making career decisions, put more weight on intuitions and gut feelings of excitement (in particular when based on actual experience, e.g., a representative work trial of the job you're considering) - and less on impact estimates.
  • Put less weight on advice when making concrete job decisions, especially advice from members of the effective altruism community who don't have much context on you and the options you're deciding between.
  • This: "I'd guess that anyone who is succeeding at what they do and developing aptitudes that few can match, while being truly prepared to switch jobs if the right opportunity comes up, has - in some sense - quite high expected longtermist impact (over the long run) via direct work alone. I think this expected impact will often be higher than the expected impact of someone who is in a seemingly top-priority longtermist career now, but isn't necessarily performing excellently, sustainably or flexibly."
     

When making career decisions, put more weight on intuitions and gut feelings of excitement (in particular when based on actual experience, e.g., a representative work trial of the job you're considering) - and less on impact estimates.

I think you probably mean in relation to types of work, activity, organisation, mindsets, aptitudes, etc., and not in relation to what cause areas or interventions you're focusing on, right? 

I.e., I think I'd often suggest people do focus mostly on impact estimates when choosing cause areas and maybe also interventions, but focus more on comparative advantage (using intuitions and gut feelings of excitement as some proxies for that) when choosing specific jobs, orgs, roles, paths, etc. Would you agree?

I think you probably mean in relation to types of work, activity, organisation, mindsets, aptitudes, etc., and not in relation to what cause areas or interventions you're focusing on, right? 

Basically yes. But I also think (and I understand Holden to say similar things in the OP) that "what cause area is most important" is perhaps less relevant for career choice, especially early-career, than some people (and 80k advice [ETA: though I think it's more like my vague impression of what people including me perceive 80k advice to say, which might be quite different from what current 80k advice literally says if you engage a lot with their content]) think.

There isn't currently an "obvious" and arbitrarily scalable place for longtermists to donate, analogous to GiveWell's top charities. But if one doesn't have particular donations they're excited to make, I think it makes sense to simply save/invest - ideally following best investing practices for longtermist values (e.g., taking the optimal amount of risk for money intended to benefit others over long time horizons, and using charitable vehicles to reduce taxes on money that's intended for charitable purposes - I hope there will be writeups on this sort of thing available in the future). There are debates about whether this is better than giving today, but I think it is at least competitive.

I was a little surprised that you didn't mention the EA Long-Term Future Fund as one competitive option for such donors? I'm not saying that giving to the LTFF is definitely better than investing to give later - I'm currently pretty open to the latter strategy - but it seems to me that giving to the LTFF is (like donor lotteries) one competitive option. (See also The Long-Term Future Fund has room for more funding, right now.)

Also, I do think there are some writeups on that sort of thing available, some of which can be found via the investing tag (and presumably there are other writeups available elsewhere too). But this is an area I've read a lot on, and I do expect there'd be value in additional, better, or more thorough writeups.

(As usual, this comment expresses my personal opinions only.)

I didn't mean to express a view one way or the other on particular current giving opportunities; I was instead looking for something a bit more general and timeless to say on this point, since especially in longtermism, giving opportunities can sometimes look very appealing at one moment and much less so at another (partly due to room-for-more-funding considerations). I think it's useful for you to have noted these points, though.

The section "Aptitude-agnostic vision: general longtermism strengthening" reminded me of the post Illegible impact is still impact. I liked that post, but/and also think think that the specific examples you give in your section might be better examples to point to than the examples given in that post. 

Here are some excerpts from that post:

In the case of impact, legible impact is that which can be measured easily in ways that a model predicts is correlated with outcomes. Examples of legible impact measures for altruistic efforts include counterfactual lives saved, QALYs, DALYs, and money donated; examples of legible impact measures for altruistic individuals include the preceding plus things like academic citations and degrees, jobs at EA organizations, and EA Forum karma.

Some impact is semi-legible, like social status among EAs, claims of research progress, and social media engagement. [...]

Illegible impact is, by comparison, invisible, like helping a friend who, without your help, might have been too depressed to get a better job and donate more money to effective charities, or filling a seat in the room at an EA Global talk such that the speaker feels marginally more rewarded for having done the work they are talking about and is marginally incentivized to do more. Illegible impact is either hard or impossible to measure, or there's no agreed-upon model suggesting the action is correlated with impact. And the examples I gave are not maximally illegible because they had to be legible enough for me to explain them to you; the really invisible stuff is like dark matter—we can see signs of its existence (good stuff happens in the world) but we can't tell you much about what it is (no model of how the good stuff happened).

The alluring trap is thinking that illegible impact is not impact and that legible impact is the only thing that matters. If that doesn't resonate, I recommend checking out the links above on legibility to see when and how focusing on the legible to the exclusion of the illegible can lead to failure.

[...] To me the first step is acknowledging that illegible impact is still impact. For example, to me all of the following activities are positively impactful to EA such that if we didn't have enough of them going on then the EA movement would be less effective and less impactful and if we had more of them going on then EA would be more effective and more impactful, yet all of them produce impact of low legibility, especially for the person performing the action:

  • Reading the EA Forum, LessWrong, the Alignment Forum, EA Facebook, EA Reddit, EA Twitter, EA Tumblr, etc.
  • Voting on, liking, and sharing content on and from those places
  • Helping a depressed/anxious/etc. (EA) friend
  • Going to EA meetups, conferences, etc. to be a member in the audience
  • Talking to others about EA
  • Simply being counted as part of the EA community

The "general longtermism strengthening" section also reminded me of the EA Wiki entry scalably using labour and various posts with that tag.

I found this article helpful. I noticed some of the career paths I am considering are listed in your "Organization building, running, and boosting" aptitudes group. You helped me in my career path investigation. I am learning there is a strong correlation among the careers at the top of that list. This helps me expand my search and not box myself into a narrow list of choices.

The connotation of the word "aptitude" alludes to a directive style of interaction.[1] The expression is used vaguely and inconsistently. Thus, readers may feel directed toward something that has been insufficiently explained. This can worsen their subjective experience.

Alternatively, a more cooperative-sounding word, such as "capacity", could be used and its meaning in the context clearly explained (e.g., knowledge, skills, and networks). This could better motivate readers to take steps toward developing their ability to safeguard a positive long-term future.

  1. ^

    What body language or interpersonal interaction would you use when talking about aptitude? Versus capacity, ability, etc?