In this post, I summarize a tough career decision I have had to make over the last few weeks.

Setting the stage

I am a few months from finishing my Ph.D. in quantum algorithms. During these 4 years, I have become quite involved in Effective Altruism: I attended two EA Globals, facilitated a couple of virtual intro fellowships, and helped organize the EA group in Madrid. Given my background, I have also felt closer to AI Safety than to any other cause area. As such, I have been involved in the AI Safety community too, participating in two AI Safety Camps and facilitating some intro fellowships. I even did an internship in AI Safety with José Hernández Orallo last summer, which led to a rather lucky AAAI publication.

My Ph.D. has also gone well. While it started hesitantly and I was unable to get anything published for the first two years, I then got my first two publications, and over the last two years I have done well. It is not a superstar Ph.D., but I believe I have learned enough to make contributions to the field that actually get used, which is harder than it looks. In fact, I am happy that, thanks to my latest article, a rather serious quantum startup contacted me to collaborate, and this led to another quite high-quality paper.

The options

Since I am finishing my Ph.D., I had to plan my next step. The first obvious choice was to apply for funding to do AI Safety research. I cold-emailed Victor Veitch, whom I found through the super useful Future of Life AI Safety Community, and he was happy to take me as long as I could work more or less independently.

The reason I opted to apply with Victor was that my research style is about knowing well the tools I use: not to the level of a pure mathematician, but to the level where the techniques are familiar and can be applied. Additionally, I think causality is cool, and being able to apply it to large language models is rather remarkable. I am also a big fan of Ryan Carey, who works on causality and is one of the people in the community who has helped me the most; I am really grateful to him. Apart from the Future of Life postdoc program, I also applied to the EA Long-Term Future Fund and Open Philanthropy with this proposal. Of these, the EA Long-Term Future Fund agreed to fund me, most likely on the basis of a career change, while FLI declined based on the proposal, and probably on an interview in which, despite having prepared, I was unable to explain well why I think this perspective could be useful. This was a bit of a disappointing result, to be honest. Open Philanthropy, on the other hand, is behind schedule, and I don't know their answer yet.

The alternative was to pursue, at least for now, a more standard research topic. I applied to IBM, Google, Amazon, and three startups: Zapata, PsiQuantum, and Xanadu. I had already been working with the latter, so it was not really an application. I never heard back from any of the big companies, but got offers from all three startups. To be fair, I also applied to a couple of non-EA AI postdocs in the hope of getting them, but they were a very long shot. For a bit of context: PsiQuantum is a very serious contender in building a photonic, error-corrected quantum computer and has really deep pockets, while Xanadu is probably a bit behind them, but it is also quite good and has a bit more focus on ML.

The situation and conditioning factors

Perhaps the main conditioning factor in all of this is that my girlfriend is a very important part of my life, and responsible for much of my happiness. A friend of mine called this the unsolvable two-body problem 😛. In particular, I think we both would like a smooth life together. She is currently not able to move with me because she wants to be a teacher, and in Spain that requires first working for some time as a substitute teacher. Finally, there is the cost of changing research areas, and of academic life itself, which is rather stressful and not very well paid.

What I did to make this decision

To make this decision, I have talked to pretty much everyone I could think of. I posted a couple of posts on the EA Forum. Some people argued that it may be helpful to have someone in the community with expertise in Quantum Computing, although I am not convinced, for two reasons: a) QC seems very unlikely to matter for AI Safety, and b) even if QC were to become important at some point, I think I could still pick it back up fairly quickly. That said, it is true that, at the margin, I would be one of the very few people in this position.

I obviously also talked a lot about this with my girlfriend and my family. They do not really understand why we cannot leverage the academic ML community for this instead of asking for career changes, and I am not totally sure they are wrong. Additionally, I talked with Shay, the certified health coach who collaborates with AI Safety Support. I also tried to talk to Habiba from 80,000 Hours, with whom I had previously done coaching, but I think she has been quite busy lately. The biggest points in favor of the quantum computing startup offers are the ability to work from home and the salary (which is much higher than anyone in my family has ever earned). The job would also let me learn more general ML and software engineering skills, which may be helpful down the line, without having to stop publishing.

The decision I have (reluctantly) taken

While I believe it would be a great opportunity to work on AI Safety, I have often felt that this is not only my decision but one that affects my partner too. It is perhaps for this reason that I have been hesitant to decide independently on what I like best. To be fair, it is also the case that I am much happier being with her than going abroad, so overall I have (a bit sadly) concluded that it might be a good idea to work remotely for Xanadu. They value me, and I think I could work well with them.

However, I have not really renounced working in AI Safety; I just believe this will make things a bit more flexible down the line. In particular, I have set myself the goal of becoming a distiller of academic papers on causality for AI Safety. It is the same niche as my Open Philanthropy proposal, and I expect to dedicate one day per week (weekends, when I should be more or less free) to writing beautiful posts explaining complicated papers. The objective is again similar to the postdoc's, albeit much less impactful: to learn a topic well. Let this then be a preannouncement of a sequence on Causality, Causal Representation Learning, and Causal Incentives, all applied to some degree to AI Safety.

I also expect to try again to do something more serious in AI Safety in the future, and my girlfriend has agreed that I should, as it is something I value. She will try to move with me then, or by that point I might be able to work remotely, or in Valencia, Spain, with José Hernández Orallo. In any case, it is true that I am a bit sad about not taking this opportunity right now.

I am happy to hear comments, however casual, arguing that I am mistaken and should take the postdoc: I expect to make the final decision by this Sunday, but consider it more or less made in any case.



Thanks for your writeup; it's reassuring to read your reasoning and decision. I (maybe self-servingly) find it very plausible that you've made the right call. I'm in a somewhat similar position: 3 months away from finishing my PhD on a biosecurity-related topic and staying in Germany with/for my partner for the next years, thus probably missing out on many opportunities that more mobile EAs have.

For me, the decision was pretty clear once I realised that both my affective well-being and life satisfaction were way higher than in previous relationships and that my partner just had an extremely good influence on me. It seems very reasonable to prioritise this over the average short-term career decision (I'm not saying things wouldn't be different for extremely rare and impactful opportunities). There is and will be enough work for smart and engaged EAs that you don't need to worry about losing a ton of impact by mainly doing something non-EA for a while and returning to, e.g., AI safety work when the circumstances are right. If your situation is at all like mine, the higher quality of life from your relationship alone seems worth it, and I'd hazard that you will still learn a lot by working at Xanadu and doing distillation on the side.

I admit that things might be a bit different if your AI timelines are very short and you believe that the next X years are decisive for our survival, where X is small enough to induce anxiety about your hour-to-hour time management or something.

Thanks for sharing, Jasper! It's good to hear the experience of other people in a similar situation. 🙂 What do you plan to do? Also, good luck with the thesis!

Thanks Pablo, good luck to you too! I'll apply to a few interesting remote positions and have some independent projects in mind. I'll see :)

Thanks for sharing, Pablo. I think your decision process is very reasonable, and I very much relate to and support the decision to prioritize a relationship; I would've done the same. Looking forward to the causality series! I also have the gut feeling that AI safety will continue to grow a lot and that you will have an easier time transitioning after your time at Xanadu, if you choose to do so.

Thanks a lot Max, I really appreciate it.

As always it's great to read your thoughts Pablo, and I like your scheme for getting the best of both worlds. I think it's worth recommending that you build accountability to prevent yourself from drifting away from your stated plan or a similarly good one. Wishing you the best at Xanadu!

Gracias Juan!

A lot of what you have written resonates with me. I think it is amazing that you have thought so deeply about this decision and spoken with many people. In that sense, it looks like a great decision, and I hope that the outcome will be fulfilling to you.

After I finished my PhD, I was torn between doing something entrepreneurial and altruistic and accepting a "normal" well-paid job. In the end, I decided to accept an offer from Google for reasons similar to yours. It fit well with the growing relationship with my now-wife, and the geographic location was perfect.

I stayed at Google for three years and learned a lot during this time, both about software engineering and about effective altruism. After these three years, I felt ready for a job where I could have a more direct positive impact on the lives of people. I think I was also much better equipped for it; not only in terms of software engineering techniques... Google is also a great place to learn about collaboration across teams, best practices in almost any area, HR processes, communication, etc. One aspect that was absolutely fantastic: I had no pressure at all to leave Google. It was a good job that I could remain in for as long as I wanted, until the perfect opportunity came along.

I could imagine that a few years from now, you might similarly be in a good position to re-evaluate your decision. You will probably be much more stable financially, have a lot more negotiation power, and a lot less time pressure. Plus, I'm sure you'll be so good then that IBM/Google/Amazon can no longer ignore you ;-)

Hey Sjlver! Thanks for your comments and experience. That's my assessment too, I will try. I have also been considering how to create an EA community in the startup. Any pointers? Thanks

At Google, most employees who came in touch with EA-related ideas did so thanks to Google's donation matching program. Essentially, Google has a system where people can report their donations, and then the company will donate the same amount to the same charity (there's an annual cap, but it's fairly high, like US$ 10k or so).

There is a yearly fundraising event called "giving week" to increase awareness of the donation matching. On multiple occasions during this week, we had people from the EA community come and give talks.

When considering starting an EA community, I might look for ideas similar to the ones mentioned above, in order to try to make this part of the company culture. There are selfish reasons for companies to do this sort of thing (employee satisfaction, tax "optimization"). Also, there might be an existing culture of "tech talks" that you can leverage to bring up EA topics.

Oh... and for some companies, all you need to do to start a community is get some EA-related stickers that people can put on their laptops ;-)

(It's a bit tongue-in-cheek, but I'm only half joking... most companies have things like this. At Google, laptop stickers were trendy, fashionable, and in high demand. I'm sure that after being at Xanadu for a while, you'll find an idea that works well for this particular company)