80,000 Hours has a lot of great research on promising career fields for effective altruists. But one thing I've discovered while doing my own career planning is that the difference between opportunities in a single field seems to matter just as much as the difference between fields. Opportunity-level analysis of job prospects is a great complement to looking at field-level overviews, and I think it can significantly improve career decisions.

As a case study, consider someone deciding between software engineering and academic mathematics. Looking at the typical person going into these fields, software engineering seems like a much more desirable choice from an EA perspective. As an industry, software pays better, is far less competitive, has more opportunities to do directly impactful work, and grants more career capital for the same level of ability. So you'd probably advise most people to pick software engineering over math academia.

But looking only at the typical cases throws away a lot of information. The average opportunity available to someone with a math degree isn't necessarily the opportunity you're most likely to take. I know a number of academic mathematicians with EA tendencies, and most of them help run SPARC, a CFAR-funded summer program that introduces mathematically talented high-schoolers to a set of ideas, many of them EA-related. (For various reasons, it seems to me that SPARC probably wouldn't have succeeded without some academic mathematicians on board.) I think SPARC is an extremely good idea and would be pretty sad if its instructors had all gone into software development instead. In other words, what matters isn't just what the average opportunity in a field is like, but also which opportunities you in particular are likely to be able to find.

Furthermore, the quality of the individual opportunities available in a given field seems to vary wildly, and randomly, from person to person. For instance, when I was looking around for software jobs last year, the job that I took looked easily twice as good as my next-best alternative. Someone with skills similar to mine who wasn't as lucky while looking for software jobs might not have found it--and if so, their best opportunities would probably have been in a completely different field. Similarly, a number of my friends applied for academic research positions, and I've repeatedly seen one person find opportunities that seem much better than those found by another person with virtually the same aptitudes.

All this means that basing your career decision purely on field-level analysis seems likely to miss some potentially great career paths. To sum up, here are some pieces of career choice advice that I think are currently underrated:

  • When finding a job, it's worth looking very hard for better opportunities. If there's a large random component to your available opportunities, then simply looking at a larger sample of opportunities is likely to surface better ones (see the short simulation sketch after this list). To find my current job, I not only asked all my friends if they knew companies that would be good opportunities--I asked some of their friends and their friends' friends too. It was only at the third degree out that I found the place I ended up working.

  • If you're an unusually good fit for a field that isn't big with the EA crowd, look into it anyway. Even if the average opportunity in a field doesn't look that great, being really awesome at something tends to bring up cool opportunities almost regardless of what the thing is. Just like the mathematicians were able to start SPARC, or like Toby Ord and Will MacAskill have used their academic positions well, there seem to be a lot of benefits to excelling in almost any field.

  • Skills that expand the set of possible opportunities for you are extra-valuable. For instance, in college, I might have been better served by taking fewer super-advanced math classes and more classes that would give me enough grounding in other fields to add some value in them. There's a balance to be struck here--it wouldn't be helpful to be a total dilettante, so it's probably better to branch out to neighboring fields than to something different altogether. For example, as a math and computer science major I probably wouldn't have gotten very much out of taking a couple of journalism classes, but I might have tried out robotics, computational biology, or economics.
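
As a rough illustration of the sampling point above: if we model offer quality as an independent, heavy-tailed random draw (a lognormal distribution here, purely as an assumption), the expected quality of the best offer you see keeps rising as you look at more offers. A minimal sketch:

```python
import random
import statistics

def average_best_offer(n_offers: int, n_trials: int = 10_000) -> float:
    """Average quality of the best offer seen across n_trials simulated job
    searches, each sampling n_offers offers from the same lognormal distribution."""
    return statistics.mean(
        max(random.lognormvariate(0.0, 1.0) for _ in range(n_offers))
        for _ in range(n_trials)
    )

if __name__ == "__main__":
    random.seed(0)
    for n in (1, 3, 10, 30):
        # The average best offer keeps rising with n, even though every
        # individual offer is drawn from the same distribution.
        print(f"best of {n:>2} offers: {average_best_offer(n):.2f}")
```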

Comments

The main impact of early choices in a career may be determining what skills you develop, who you know, what you are respected for, and so forth. The value of these resources depends on how useful they will be down the line, i.e. on what opportunities you will have in 10 years. This seems to be an important consideration in favor of thinking about what area to be in.

Great post, Ben. This seems like a really good point to make clear. I think there's a general point here: it's much easier, and often better, to choose between specific options than between general categories of options.

Generally when I think about career choice, I find it useful to begin by narrowing down to a few fields that seem best for impact and fit, and then to seek out concrete opportunities within those fields--ultimately the decision will come down to how good the opportunities are, not a comparison between the fields themselves. But you've still narrowed by field initially. This is especially true when the fields you're comparing seem roughly as good as each other, or each has different advantages.

I like the suggestion of putting a lot of effort into looking for really good opportunities, too--I imagine this is often neglected. A side point is that this is obviously more worth doing in some fields than in others, because some fields will have higher variance than others in how good their opportunities are. For example, I'd imagine there's higher variance in software jobs than in certain academic ones.

This strongly fits with my experience. Even on a pure earnings basis, as I've researched various job opportunities I've found that two offers can differ by a shocking amount, much more than I initially anticipated based on a naive view of what a competitive job market looks like. Often the difference shows up in non-obvious ways.

Concrete examples of the kind of thing that might happen to you: one finance firm turns out to be much better than another about paying bonuses to new employees who make big contributions right away. Or maybe Google offers you only slightly more cash than the startup you interviewed with, but the startup is giving you so little equity that it won't be worth much even if the company gets acquired for $100 million, whereas Google offers you RSUs worth half again your base salary.

Do you have any thoughts about how to juggle timing when different opportunities will arise at different times? For example, if applying for jobs & university places at the same time, the response times will be very different.

The obvious strategy is to delay the decision as long as possible, but it's hard to know how to trade off confirmed options that will expire against potential options you haven't heard back about yet.

One EA friend I talked to about this said he tried to do this, then found that when it came down to it he couldn't bear to let an opportunity slide while waiting for others, so just took the first thing he got.

I haven't had this problem in the past, probably because software companies are frequently so desperate for engineers that once they offer you a job they're OK being strung along for quite a while. Plus I've never applied for things as disparate as graduate programs and non-academic jobs at the same time. So my experience is limited!

However, I do think that careful negotiation can help with this problem for high-skill non-software fields as well. If a company thinks you're good enough to hire, they probably think you're good enough to wait a little while for (unless they're REALLY strapped for time). An exploding offer is often just them using Dark Arts to try to get people to accept before they can get better options, like what happened to your friend.

Between that, timing your job applications correctly, and investigating opportunities you haven't officially been offered yet to see whether you really want them, it's hopefully possible to smooth out many of the synchronization issues.

I completely agree with this. It's why we put so much emphasis on our general framework and our 'how to choose' process. Finding options that do well according to the framework is what ultimately matters, not the specific career path you're in.

I agree that your framework and process can apply to opportunity-level decisions as well as field-level decisions--I just think the opportunity-level use case isn't emphasized in proportion to how useful I found it.

For instance, to me it looks like those pages are framed almost completely in terms of choosing broad career paths rather than choosing between individual opportunities. E.g., the heading on the framework page reads:

Our career evaluation framework helps you compare between different specific career options, like whether to go into consulting or grad school straight out of university; or whether to continue at your current for-profit job or leave to work for a non-profit.

To me this seems to emphasize the field-level use case for the framework but not the opportunity-level one.

Ah ok, I had 'specific career options' in mind, but then I see the examples don't give the right impression. I'll change this.
