Career choice
In-depth career profiles, specific job opportunities, and overall career guidance

Quick takes

193 · 1y · 5
I'm going to be leaving 80,000 Hours and joining Charity Entrepreneurship's incubator programme this summer! The summer 2023 incubator round is focused on biosecurity and scalable global health charities, and I'm really excited to see what's the best fit for me and hopefully launch a new charity. The ideas the research team have written up look really exciting, and I'm trepidatious about the challenge of being a founder but psyched to get started. Watch this space! <3

I've been at 80,000 Hours for the last 3 years. I'm very proud of the 800+ advising calls I did and feel very privileged that I got to talk to so many people and try to help them along in their careers! I've learned so much during my time at 80k. And the team at 80k has been wonderful to work with: so thoughtful, committed to working out what is the right thing to do, kind, and fun. I'll for sure be sad to leave them.

There are a few main reasons why I'm leaving now:

1. New career challenge. I want to try out something that stretches my skills beyond what I've done before. I think I could be a good fit for being a founder and running something big, complicated, and valuable that wouldn't exist without me, and I'd like to give it a try sooner rather than later.
2. Stepping away from EA community building a bit after the EA crises. Events over the last few months in EA made me re-evaluate how valuable I think the EA community and EA community building are, as well as re-evaluate my personal relationship with EA. I haven't gone to the last few EAGs and have switched my work away from doing advising calls for the last few months while processing all this. I have been somewhat sad that there hasn't been more discussion and change by now, though I have been glad to see more EA leaders share things recently (e.g. this from Ben Todd). I do still believe there are some really important ideas that EA prioritises, but I'm more circumspect about some of the things I think we're not doing as well as we could (…
109 · 8mo · 11
GET AMBITIOUS SLOWLY

Most approaches to increasing agency and ambition focus on telling people to dream big and not be intimidated by large projects. I'm sure that works for some people, but it feels really flat for me, and I consider myself one of the lucky ones. The worst case scenario is that big inspiring speeches get you really pumped up to Solve Big Problems but you lack the tools to meaningfully follow up.

Faced with big dreams but unclear ability to enact them, people have a few options:

* Try anyway and fail badly, probably too badly for it to even be an educational failure.
* Fake it, probably without knowing they're doing so.
* Learned helplessness, possible systemic depression.
* Be heading towards failure, but too many people are counting on you, so someone steps in and rescues you. They consider this net negative and prefer the world where you'd never started to the one where they had to rescue you.
* Discover more skills than they knew. Feel great, accomplish great things, learn a lot.

The first three are all very costly, especially if you repeat the cycle a few times.

My preferred version is the ambition snowball, or "get ambitious slowly". Pick something big enough to feel challenging but not much more, accomplish it, and then use the skills and confidence you gain to tackle a marginally bigger challenge. This takes longer than immediately going for the brass ring and succeeding on the first try, but I claim it is ultimately faster and has higher EV than repeated failures. I claim EA's emphasis on doing The Most Important Thing pushed people into premature ambition and everyone is poorer for it. Certainly I would have been better off hearing this 10 years ago.

What size of challenge is the right size? I've thought about this a lot and don't have a great answer. You can see how things feel in your gut, or compare to past projects. My few rules:

* Stick to problems where failure will at least be informative. If you can't track reality well eno…
26 · 3mo
TL;DR: A 'risky' career "failing" to have an impact doesn't mean your career has "failed" in the conventional sense, and probably isn't as bad as it intuitively feels.

* You can fail to have an impact with your career in many ways. One way to break it down might be:
  * The problem you were trying to address turns out not to be that important.
  * Your method for addressing the problem turns out not to work.
  * You don't succeed in executing your plan.
* E.g. you could be aiming to have an impact by reducing the risk of future pandemics, and you do this by aiming to become a leading academic who brings lots of resources and attention to improving vaccine development pipelines. There are several ways you could end up not having much of an impact: pandemic risk could turn out to not be that high; advances in testing and PPE could mean we can identify and contain pandemics very quickly, so vaccines aren't as important; industry labs could advance vaccine development very quickly, so your lab doesn't end up affecting things; or you might not succeed at becoming a leading academic, and become a mid-tier researcher instead.
* People often feel risk averse with their careers: we're worried about taking "riskier" options that might not work out, even if they have higher expected impact. However, there are some reasons to think most of the expected impact could come from the tail scenarios where you're really successful.
* One thing I think we neglect is that there are different ways your career plan can not work out. In particular, in many of the scenarios where you don't succeed in having a large positive impact, you still fulfil the other values you have for your career: e.g. you're still a conventionally successful researcher, you just didn't happen to save the world.
* And even if your plan "fails" because you don't reach the level in the field you were aiming for, you likely still end up in a good position, e.g. not a senior academic, just a mid-tier academic or a researcher in industry, or not…
59 · 8mo · 1
EA hiring gets a lot of criticism. But I think there are aspects of it that are done unusually well. One thing I like is that hiring and holding jobs feels much more collaborative between boss and employee. I'm much more likely to feel like a hiring manager wants to give me honest information and make the best decision, whether or not that's with them. Relative to the rest of the world, they're much less likely to take my investigating other options personally.

Work trials and even trial tasks have a high time cost, and are disruptive to people with normal amounts of free time and work constraints (e.g. not having a boss who wants you to trial with other orgs because they personally care about you doing the best thing, whether or not it's with them). But trials are so much more informative than interviews that I can't imagine hiring for or accepting a long-term job without one. Trials are most useful when you have the least information about someone, so I expect removing them to lead to more inner-ring dynamics and less hiring of unconnected people. EA also has an admirable norm of paying for trials, which no one does for interviews.
55 · 1y · 2
Not all "EA" things are good    just saying what everyone knows out loud (copied over with some edits from a twitter thread) Maybe it's worth just saying the thing people probably know but isn't always salient aloud, which is that orgs (and people) who describe themselves as "EA" vary a lot in effectiveness, competence, and values, and using the branding alone will probably lead you astray. Especially for newer or less connected people, I think it's important to make salient that there are a lot of takes (pos and neg) on the quality of thought and output of different people and orgs, which from afar might blur into "they have the EA stamp of approval" Probably a lot of thoughtful people think whatever seems shiny in a "everyone supports this" kind of way is bad in a bunch of ways (though possibly net good!), and that granularity is valuable. I think feel very free to ask around to get these takes and see what you find - it's been a learning experience for me, for sure. Lots of this is "common knowledge" to people who spend a lot of their time around professional EAs and so it doesn't even occur to people to say + it's sensitive to talk about publicly. But I think "some smart people in EA think this is totally wrongheaded" is a good prior for basically anything going on in EA. Maybe at some point we should move to more explicit and legible conversations about each others' strengths and weaknesses, but I haven't thought through all the costs there, and there are many. Curious for thoughts on whether this would be good! (e.g. Oli Habryka talking about people with integrity here)
39 · 9mo · 5
Immigration is such a tight constraint for me. My next career steps after I'm done with my TCS Master's are primarily bottlenecked by "what allows me to remain in the UK" and then "keeps me on track to contribute to technical AI safety research". What I would like to do for the next 1-2 years ("independent research" / "further upskilling to get into a top ML PhD program") is not all that viable a path given my visa constraints. Above all, I want to avoid wasting N more years by taking a detour through software engineering again just to get visa sponsorship. [I'm not conscientious enough to pursue AI safety research/ML upskilling while managing a full-time job.] I might just try and see if I can pursue a TCS PhD at my current university and do TCS research that I think would be valuable for theoretical AI safety research. The main detriment of that is I'd have to spend N more years in <city>, and I was really hoping to come down to London. Advice very, very welcome. [Not sure who to tag.]
28 · 7mo
Radar speed signs currently seem like one of the more cost-effective traffic calming measures, since they don't require roadwork, but they still cost a surprising several thousand dollars each. Mass-producing cheaper radar speed signs seems like a tractable public health initiative.
40 · 10mo
I mostly haven't been thinking about what the ideal effective altruism community would look like, because it seems like most of the value of effective altruism might be well approximated by the impact it has on steering the world towards better AGI futures. But I think even in worlds where AI risk wasn't a problem, the effective altruism movement seems lackluster in some ways. I am thinking especially of the effect it often has on university students and younger people. My sense is that EA sometimes influences those people to be closed-minded, or at least doesn't contribute to making them as ambitious or interested in exploring things outside "conventional EA" as I think would be ideal. Students who come across EA often become too attached to specific EA organisations or to paths to impact suggested by existing EA institutions.

In an EA community that was more ambitiously impactful, there would be a higher proportion of folks at least strongly considering doing things like: starting startups that could be really big; traveling to various parts of the world to form a view about how poverty affects welfare; keeping long google docs with their current best guesses for how to get rid of factory farming; looking at non-"EA" sources to figure out what more effective interventions GiveWell might be missing, perhaps because they're somewhat controversial; doing more effective science/medical research; writing something on the topic of better thinking and decision-making that could be as influential as Eliezer's sequences; expressing curiosity about the question of whether charity is even the best way to improve human welfare; trying to fix science.

And a lower proportion of these folks would be applying to jobs on the 80,000 Hours job board, or choosing to spend more time within the EA community rather than interacting with the most ambitious, intelligent, and interesting people amongst their general peers.