Career choice
In-depth career profiles, specific job opportunities, and overall career guidance

Quick takes

188 karma · 9mo · 5 comments
I'm going to be leaving 80,000 Hours and joining Charity Entrepreneurship's incubator programme this summer! The summer 2023 incubator round is focused on biosecurity and scalable global health charities, and I'm really excited to find out what's the best fit for me and hopefully launch a new charity. The ideas the research team have written up look really exciting, and I'm trepidatious about the challenge of being a founder but psyched to get started. Watch this space! <3

I've been at 80,000 Hours for the last 3 years. I'm very proud of the 800+ advising calls I did and feel very privileged that I got to talk to so many people and try to help them along in their careers! I've learned so much during my time at 80k. And the team at 80k has been wonderful to work with: so thoughtful, committed to working out what is the right thing to do, kind, and fun. I'll for sure be sad to leave them.

There are a few main reasons why I'm leaving now:

1. New career challenge. I want to try something that stretches my skills beyond what I've done before. I think I could be a good fit for being a founder and running something big, complicated, and valuable that wouldn't exist without me, and I'd like to give it a try sooner rather than later.

2. Stepping away from EA community building a bit after the EA crises. Events in EA over the last few months made me re-evaluate how valuable I think the EA community and EA community building are, as well as re-evaluate my personal relationship with EA. I haven't gone to the last few EAGs and have switched my work away from advising calls for the last few months while processing all this. I have been somewhat sad that there hasn't been more discussion and change by now, though I have been glad to see more EA leaders share things recently (e.g. this from Ben Todd). I do still believe there are some really important ideas that EA prioritises, but I'm more circumspect about some of the things I think we're not doing as well as we could (
108 karma · 7mo · 11 comments
GET AMBITIOUS SLOWLY

Most approaches to increasing agency and ambition focus on telling people to dream big and not be intimidated by large projects. I'm sure that works for some people, but it feels really flat for me, and I consider myself one of the lucky ones. The worst-case scenario is that big inspiring speeches get you really pumped up to Solve Big Problems but you lack the tools to meaningfully follow up.

Faced with big dreams but unclear ability to enact them, people have a few options:

* Try anyway and fail badly, probably too badly for it to even be an educational failure.
* Fake it, probably without knowing they're doing so.
* Learned helplessness, possibly systemic depression.
* Head towards failure, but too many people are counting on you, so someone steps in and rescues you. They consider this net negative and prefer the world where you'd never started to the one where they had to rescue you.
* Discover more skills than they knew they had. Feel great, accomplish great things, learn a lot.

The first three are all very costly, especially if you repeat the cycle a few times.

My preferred version is the ambition snowball, or "get ambitious slowly". Pick something big enough to feel challenging but not much more, accomplish it, and then use the skills and confidence you gain to tackle a marginally bigger challenge. This takes longer than immediately going for the brass ring and succeeding on the first try, but I claim it is ultimately faster and has higher EV than repeated failures. I claim EA's emphasis on doing The Most Important Thing pushed people into premature ambition, and everyone is poorer for it. Certainly I would have been better off hearing this 10 years ago.

What size of challenge is the right size? I've thought about this a lot and don't have a great answer. You can see how things feel in your gut, or compare to past projects. My few rules:

* Stick to problems where failure will at least be informative. If you can't track reality well eno
26 karma · 1mo
TL;DR: A 'risky' career "failing" to have an impact doesn't mean your career has "failed" in the conventional sense, and probably isn't as bad as it intuitively feels.

* You can fail to have an impact with your career in many ways. One way to break it down might be:
  * The problem you were trying to address turns out not to be that important.
  * Your method for addressing the problem turns out not to work.
  * You don't succeed in executing your plan.
* E.g. you could be aiming to have an impact by reducing the risk of future pandemics, and you do this by aiming to become a leading academic to bring lots of resources and attention to improving vaccine development pipelines. There are several ways you could end up not having much of an impact: pandemic risk could turn out to not be that high; advances in testing and PPE could mean we can identify and contain pandemics very quickly, so vaccines aren't as important; industry labs could advance vaccine development very quickly, so your lab doesn't end up affecting things; or you might not succeed at becoming a leading academic, and become a mid-tier researcher instead.
* People often feel risk averse with their careers: we're worried about taking "riskier" options that might not work out, even if they have higher expected impact. However, there are some reasons to think most of the expected impact could come from the tail scenarios where you're really successful.
* One thing I think we neglect is that there are different ways your career plan can not work out. In particular, in many of the scenarios where you don't succeed in having a large positive impact, you still succeed on the other values you have for your career: e.g. you're still a conventionally successful researcher, you just didn't happen to save the world.
* And even if your plan "fails" because you don't reach the level in the field you were aiming for, you likely still end up in a good position, e.g. not a senior academic, just a mid-tier academic or a researcher in industry, or not
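To make the tail-scenario point concrete, here is a minimal sketch with invented numbers (none of these figures come from the post above): comparing a "safe" career path with near-certain modest impact against a "risky" one whose impact is concentrated in an unlikely best case.

```python
# Hypothetical illustration (all numbers invented for the example):
# expected impact of a "safe" vs a "risky" career path, in arbitrary units.

safe_impact = 1.0 * 10           # ~certain outcome, modest impact

risky_impact = (
    0.05 * 1000                  # 5% chance: leading academic, large impact
    + 0.95 * 2                   # 95% chance: mid-tier researcher, small impact
)

print(safe_impact)    # 10.0
print(risky_impact)   # ≈ 51.9 — most of it comes from the 5% tail
```

On these made-up numbers, over 95% of the risky path's expected impact sits in the 5% tail scenario, which is why the common "failure" branch can still look conventionally fine while contributing little of the expected impact.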
59 karma · 7mo · 1 comment
EA hiring gets a lot of criticism. But I think there are aspects in which it does unusually well. One thing I like is that hiring and holding jobs feels much more collaborative between boss and employee. I'm much more likely to feel like a hiring manager wants to give me honest information and help me make the best decision, whether or not that's with them. Relative to the rest of the world, they're much less likely to take my investigating other options personally.

Work trials and even trial tasks have a high time cost, and are disruptive to people with normal amounts of free time and work constraints (e.g. not having a boss who wants you to trial with other orgs because they personally care about you doing the best thing, whether or not it's with them). But trials are so much more informative than interviews that I can't imagine hiring for or accepting a long-term job without one. Trials are most useful when you have the least information about someone, so I expect removing them to lead to more inner-ring dynamics and less hiring of unconnected people. EA also has an admirable norm of paying for trials, which no one does for interviews.
2 karma · 3d
Tu Youyou might be a model for EAs. According to Wikipedia, she has saved millions of lives through her discovery of treatments for malaria, for which she received a Nobel Prize. I am guessing, without having done research, that at least a hundred thousand of these lives might be counterfactually saved, given the time it would have taken until the next person made this discovery. I randomly came across her going down a GPT/Wikipedia rabbit hole and was surprised to see her not mentioned once on the Forum so far. That said, I am unsure how many people there are who might have counterfactually saved ~100k people or more.
4 karma · 11d
Frances' quick take here made me think about what skills are particularly important in my own line of work, communications. 80,000 Hours has a skill profile for communicating ideas that covers some crucial considerations when assessing fit for communications work; these are additional skills or aptitudes that I often think about when considering fit for communications work in the EA ecosystem in particular:

1. Translating between groups: Especially in an EA context, communications work can entail the translation of complex, nuanced ideas from one group of people into something more legible for a different group or audience. Being able to speak the language of different niche groups, like researchers or philosophers, and then translate that into a different kind of language or format proves useful, especially when communicating with audiences that are less familiar with EA. This is when having a background in or understanding of different audiences or groups can come in handy for communications work.

2. Stewardship mentality: As a communicator, you don't always represent your own ideas or original work. Often you're representing the work or ideas of others, which requires a sense of stewardship in service of representing that work or those ideas accurately and with nuance. This can look like double-checking stats or numbers before sharing a social media post, or doing further research to make sure you understand a claim or piece of research you're discussing.

3. Excitement about being in a support role: Some communicators, like social media personalities or popular bloggers, don't necessarily require this aptitude, but full-time communications roles at many organizations in the EA ecosystem do, in my opinion. Similar to having a stewardship mentality, I find it helps if you're excited about supporting the object-level work of others. Feeling jazzed about the message or impact of a particular organization or cause area probably means you'll
39 karma · 7mo · 5 comments
Immigration is such a tight constraint for me. My next career steps after I'm done with my TCS Master's are primarily bottlenecked by "what allows me to remain in the UK" and then "keeps me on track to contribute to technical AI safety research". What I would like to do for the next 1-2 years ("independent research"/"further upskilling to get into a top ML PhD program") is not all that viable a path given my visa constraints. Above all, I want to avoid wasting N more years by taking a detour through software engineering again just to get visa sponsorship. [I'm not conscientious enough to pursue AI safety research/ML upskilling while managing a full-time job.]

Might just try and see if I can pursue a TCS PhD at my current university and do TCS research that I think would be valuable for theoretical AI safety research. The main detriment of that is that I'd have to spend N more years in <city>, and I was really hoping to come down to London. Advice very, very welcome. [Not sure who to tag.]
28 karma · 5mo
Radar speed signs currently seem like one of the more cost-effective traffic calming measures, since they don't require roadwork, yet they still surprisingly cost thousands of dollars. Mass-producing cheaper radar speed signs seems like a tractable public health initiative.