I’ll start with an overview of my personal story, and then try to extract more generalisable lessons. I got involved in EA around the end of 2014, when I arrived at Oxford to study Computer Science and Philosophy. I’d heard about EA a few years earlier via posts on Less Wrong, and so already considered myself EA-adjacent. I attended a few EAGx conferences, became friends with a number of EA student group organisers, and eventually steered towards a career in AI safety, starting with a master’s in machine learning at Cambridge in 2017-2018.
I think it’s reasonable to say that, throughout that time, I was confidently wrong (or at least unjustifiably confident) about a lot of things. In particular:
- I dismissed arguments about systemic change which I now find persuasive, although I don’t remember how - perhaps by conflating systemic change with standard political advocacy, and arguing that it’s better to pull the rope sideways.
- I endorsed earning to give without having considered the scenario which actually happened, of EA getting billions of dollars of funding from large donors. (I don’t know if this possibility would have changed my mind, but I think that not considering it meant my earlier belief was unjustified.)
- I was overly optimistic about utilitarianism, even though I was aware of a number of compelling objections; I should have been more careful to identify as "utilitarian-ish" rather than rounding off my beliefs to the most convenient label.
- When thinking about getting involved in AI safety, I took for granted a number of arguments which I now think are false, without actually analysing any of them well enough to raise red flags in my mind.
- After reading about the talent gap in AI safety, I expected that it would be very easy to get into the field - to the extent that I felt disillusioned when given (very reasonable!) advice, e.g. that it would be useful to get a PhD first.
As it turned out, though, I did have a relatively easy path into working on AI safety - after my master’s, I did an internship at FHI, and then worked as a research engineer on DeepMind’s safety team for two years. I learned three important lessons during that period. The first was that, although I’d assumed that the field would make much more sense once I was inside it, that didn’t really happen: it felt like there were still many unresolved questions (and some mistakes) in the foundational premises of the field. The second was that the job simply wasn’t a good fit for me (for reasons I’ll discuss later on). The third was that I’d been dramatically underrating “soft skills” such as knowing how to make unusual things happen within bureaucracies.
Due to a combination of these factors, I decided to switch career paths. I’m now a PhD student in philosophy of machine learning at Cambridge, working on understanding advanced AI with reference to the evolution of humans. By now I’ve written a lot about AI safety, including a report which I think is the most comprehensive and up-to-date treatment of existential risk from AGI. I expect to continue working in this broad area after finishing my PhD as well, although I may end up focusing on more general forecasting and futurism at some point.
I think this has all worked out well for me, despite my mistakes, but often more because of luck (including the luck of having smart and altruistic friends) than my own decisions. So while I’m not sure how much I would change in hindsight, it’s worth asking what would have been valuable to know in worlds where I wasn’t so lucky. Here are five such things.
1. EA is trying to achieve something very difficult.
A lot of my initial attraction towards EA was because it seemed like a slam-dunk case: here’s an obvious inefficiency (in the charity sector), and here’s an obvious solution (redirect money towards better charities). But it requires much stronger evidence to show that something is one of the best things to do than just showing that it’s a significant improvement in a specific domain. I’m reminded of Buck Shlegeris’ post on how the leap from “other people are wrong” to “I am right” is often made too hastily - in this case it was the leap from “many charities and donors are doing something wrong” to “our recommendations for the best charities are right”. It now seems to me that EA has more ambitious goals than practically any academic field, yet we don’t have anywhere near the same intellectual infrastructure for either generating or evaluating ideas. For example, in AI safety (my area of expertise) we’re very far from having a thorough understanding of the problems we might face. I expect the same is true for most of the other priority areas on 80,000 Hours’ list. This is natural, given that we haven’t worked on most of them for very long; but it seems important not to underestimate how far there is to go, as I did.
My earlier complacency partly came from my belief that most of EA’s unusual positions derived primarily from applying ethical beliefs in novel ways. It therefore seemed plausible that other people had overlooked these ideas because they weren’t as interested in doing as much good as possible. However, I now believe that less work is being done by these moral claims than by our unusual empirical beliefs, such as the hinge of history hypothesis, or a belief in the efficacy of hits-based giving. And I expect that the worldview investigations required to generate or evaluate these empirical insights are quite different from the type of work which has allowed EA to make progress on practical ethics so far.
Note that this is not just an issue for longtermists - most cause areas need to consider difficult large-scale empirical questions. For example, strategies for ending factory farming are heavily affected by the development of meat substitutes and clean meat; global poverty alleviation is affected by geopolitical trends spanning many decades; and addressing wild animal suffering requires looking even further ahead. So I’d be excited about having a broader empirical understanding of trends shaping humanity’s trajectory, which could potentially be applicable to many domains. Just as Hanson extrapolates economic principles to generate novel future scenarios, I’d like many more EAs to extrapolate principles from all sorts of fields (history, sociology, biology, psychology, and so on) to provide new insights into what the future could be like, and how we can have a positive influence on it.
2. I should have prioritised personal fit more.
I was drawn to EA because of high-level philosophical arguments and thought experiments; and the same for AI safety. In my job as a research engineer, however, most of the work was very details-oriented: implementing specific algorithms, debugging code, and scaling up experiments. While I’d enjoyed studying computer science and machine learning, this was not very exciting to me. Even while I was employed to do that work, I was significantly more productive at philosophical AI safety work, because I found it much more motivating. Doing a PhD in machine learning first might have helped, but I suspect I would have encountered similar issues during my PhD, and then would still have needed to be very details-oriented to succeed as a research scientist. In other words, I don’t think I could plausibly have been world-class as either a research engineer or a research scientist; but I hope that I can be as a philosopher.
Of course, plenty of EAs do have the specific skills and mindset I lack. But it’s worrying that the specific traits that made me care about EA were also the ones that made me less effective in my EA role - raising the possibility that outreach work should focus more on people who are interested in specific important domains, as opposed to EA as a holistic concept. When giving career advice, my experience leads me to emphasise personal fit and interest significantly more than 80k does. Overall I’m leaning towards the view that “don’t follow your passion” and “do high-leverage intellectual work” are good pieces of advice in isolation which work badly in combination: I suspect that passion for a field is a very important component of doing world-class research in it. However, I do think I’m unusually low-conscientiousness in comparison to others around me, and so I may be overestimating how important passion is for high-conscientiousness people.
3. I should have engaged more with people outside the EA community.
EA is a young movement, and idiosyncratic in a bunch of ways. For one, we’re very high-trust internally. This is great, but it means that EAs tend to lack experience with more formal or competitive interactions, such as political maneuvering in big organisations. This is particularly important for interacting with prestigious or senior people, who as a rule don’t have much time for naivety, and whom we don’t want to leave with a bad impression of EA. Such people also receive many appeals for their time or resources; in order to connect with them, I think EA needs to focus more on ways that we can provide them value, e.g. via insightful research. This has flow-on benefits: I’m reminded of Amazon’s policy that all internal tools and services should be sold externally too, to force them to continually improve. If we really know important facts about how to influence the world, there should be ways of extracting (non-altruistic) value from them! If we can’t, then we should be suspicious about whether we’re deceiving ourselves.
I also found that, although at university EAs were a large proportion of the smartest and most impressive people I interacted with, that was much less true after graduating. In part this was because I was previously judging impressiveness by how cogently people spoke about topics that I was interested in. I also fell into the trap that Zoe identifies, of focusing too much on “value alignment”. Both of these produce a bias towards spending time with EAs, both personally and professionally. Amongst other reasons, this ingroup bias is bad because when you network with people inside EA, you’re forming useful connections, but not increasing the pool of EA talent. Whereas when you network with people outside EA, you’re introducing people to the movement as a whole, which has wider benefits.
A third example: when I graduated, I was very keen to get started in an AI safety research group straightaway. But I now think that, for most people in that position, getting 1-2 years of research engineering experience elsewhere before starting direct work has similar expected value - because there’s such a steep learning curve in this field, and because the cost of supervising inexperienced employees is quite high. I don’t know how that tradeoff varies in different fields, but I was definitely underrating the value of finding a less impactful job where I’d gain experience fast.
4. I should have been more proactive and willing to make unusual choices.
Until recently, I was relatively passive in making big decisions. Often that meant just picking the most high-prestige default option, rather than making a specific long-term plan. This also involved me thinking about EA from a “consumer” mindset rather than a “producer” mindset. When it seemed like something was missing, I used to wonder why the people responsible hadn’t done it; now I also ask why I haven’t done it, and consider taking responsibility myself. Partly that’s just because I’ve now been involved in EA for longer. But I think I also used to overestimate how established and organised EA is. In fact, we’re an incredibly young movement, and we’re still making up a lot of things as we go along. That makes proactivity more important.
Another reason to value proactivity highly is that taking the most standard route to success is often overrated. University is a very linear environment - most people start in a similar position, advance at the same rate, and then finish at the same time. As an undergrad, it’s easy to feel pressure not to “fall behind”. But since leaving university, I’ve observed many people whose successful careers took unusual twists and turns. That made it easier for me to decide to take an unusual turn myself. My inspiration in this regard is a friend of mine who has, three times in a row, reached out to an organisation she wanted to work for and convinced them to create a new position for her.
Other people have different skills and priorities, but the way in which I’m now most proactive is in trying to explore foundational intellectual assumptions that EA is making. I didn’t do this during undergrad; the big shift for me came during my master’s degree, when I started writing about issues I was interested in rather than just reading about them. I wish I’d started doing so sooner. Although at first I wasn’t able to contribute much, this built up a mindset and skillset which have become vital to my career. In general it taught me that the frontiers of our knowledge are often much closer than I’d thought - the key issue is picking the right frontiers to investigate.
Thanks to Michael Curzi, whose criticisms of EA inspired this post (although, like a good career, it’s taken many twists and turns to get to this state); and to Buck Shlegeris, Kit Harris, Sam Hilton, Denise Melchin and Ago Lajko for comments on this or previous drafts.