
Following Aaron Gertler’s prompt, I am writing about my job as a researcher at the Future of Humanity Institute: the path that led to me applying for it, the application itself, and what it’s like to do the job. See also the 80,000 Hours guides on academic research and philosophy academia.

The basics

Research fellow, Future of Humanity Institute (FHI) at Oxford University

October 1, 2020 - present

When I started this position, I was still working on a PhD in philosophy at New York University. I am still finishing up my dissertation, while working for FHI full-time (here's my FHI page).

Background and path to applying

I graduated from Harvard in 2011 with a degree in Social Studies (comparable to the UK’s PPE). I did a master’s in philosophy at Brandeis University and started a PhD at NYU in fall 2015.

EA Global 2016

My path to FHI can be directly traced back to my desire, in the summer of 2016, to get my travel to EA Global reimbursed.

I got interested in EA around 2015 and took the Giving What We Can Pledge in summer 2016. Flushed with enthusiasm, I looked into going to EA Global 2016, which was in Berkeley.

Michelle Hutchinson organized an academic poster session for that EAG; somehow I was on a list of people who got an email encouraging me to submit a poster. It occurred to me that NYU’s Center for Mind, Brain, and Consciousness reimburses the travel expenses for PhD students who are giving talks and presentations in philosophy of mind. Driven in no small measure by this pecuniary motive, I hastily threw together a poster presentation at the intersection of EA and philosophy of mind.

The most important thing about the poster is simply that it got me to the conference.[1] That’s where I first met Michelle Hutchinson; I surmise that meeting Michelle, and being a presenter, got me on a list of EA academics.

GPI

As a result (I think), about a year later I was invited to be part of the Global Priorities Institute’s first group of summer fellows, in the summer of 2018. For my project, I worked on applying Lara Buchak’s work on risk aversion to longtermism and cause prioritization.[2] That summer I met lots of people at FHI, with whom we (GPI) shared an office and a kitchen - most notably, for the purposes of this post, Katja Grace and Ryan Carey.

AI Impacts

Meeting Katja Grace in summer 2018 led to me doing research for AI Impacts in the summer of 2019. Also in summer 2019, Ryan Carey messaged me to encourage me to apply for the FHI Research Fellow role.

All told, all three of my EA gigs - GPI, AI Impacts, FHI - stemmed from my decision to go to EA Global 2016 and my cheeky quest to get it reimbursed.[3]

PhD research

Throughout this time, I was doing my PhD research. It was during my PhD that I wrote a paper on fairness measures in machine learning that I would eventually use as my writing sample for FHI. My PhD research also gave me enough familiarity with AI to work on AI-related topics at AI Impacts and eventually FHI.[4] I also ran a reading group on philosophy and AI.

The application

Materials and process

The application required, if I recall correctly: a cover letter, CV/resume, research proposal, writing sample, and two references. The process involved a 2-hour (maybe 3?) timed work test and two rounds of interviews.

My research proposal, inspired by issues I had been thinking about at AI Impacts, outlined ways to get evidence for or against the Prosaic AGI thesis. In an interview, the selection committee made it clear that they were not especially excited about this research direction. I also discussed my work in AI ethics.

My references were Katja Grace and my dissertation supervisor, David Chalmers.

My writing sample was the aforementioned paper on fairness in machine learning. It was of sufficient interest to GovAI that I was (separately from the application process) invited to give comments at a workshop GovAI was hosting - so I figure that it was an asset.

More generally, as I understand it, the features that made me an attractive candidate were: my work running the NYU AI and philosophy reading group, my background in philosophy of mind and AI, and general writing and research strengths. FHI wanted me to help start a research effort on digital minds and AI consciousness, even though I had not really “officially” worked on these topics - that said, I had a decent background from grad school classes and had absorbed a good deal by osmosis, just by being around NYU. I agreed to take a crack at this problem, and I got an offer.

Other things I applied for and did not get

Lessons

My path to FHI seems haphazard to me even in hindsight. But some tentative remarks:

  1. Going to events and meeting people can be extremely valuable; it has high upside risk.
  2. It can be very high-impact to nudge people to apply to things. I’m not sure I would have applied to FHI without Ryan Carey’s encouragement.
  3. It can be useful to finish things to a high level of quality even if you come to believe they are not the highest-impact thing you could be doing. My fairness and machine learning paper had felt like an utter slog for many months by the time my FHI application came up, but I had kept revising and improving it. I’ve been told that the clarity of that piece helped my application stand out.
  4. It can be very difficult to know in advance which of your activities might end up being most professionally useful. For example, I suspect that running the reading group, as much as my "official" research, made me an attractive candidate for my current position.

Doing the job

The main thing I’ve been thinking about since starting at FHI is consciousness in AI systems: how we might know when AI systems are conscious (or if indeed they already are), and how we might make progress on this incredibly difficult question.[5] I also think about the relationship between cognitive science and AI, especially in light of the trend that recent AI progress has come from scaling up large machine learning systems, with relatively little inspiration from cognitive science.

Days and weeks

These are long-term projects where it’s not always clear how to proceed. Much like blogging, research is very independent: it is up to you to figure out what is most important to work on, how to tackle it, and when to do it. On any given day there will be little that must be done immediately, and hardly anyone to make you do it. Setting a schedule and keeping myself accountable is challenging and essential; my friends are all too familiar with a jumble of high-strung techniques I have cobbled together: deadlines with money accountability, group pomodoros, et alia.

On an ideal day, I’ll do deep work - reading and writing on my most important research project - for three or so hours, starting in the morning. Lighter tasks, like meetings, replying to emails, and organization, are reserved for the afternoon. (See: Cal Newport, Gwern on morning writing.)

Recently I’ve been working from home in London. I’ll usually start work sometime between 9 and 10:30am, and stop sometime between 6 and 8pm, taking liberal breaks for lunch and for working out. I try to unwind during the evenings: reading, music, hanging out with friends. Research jobs make it hard to “clock out”; there’s always more you could be doing. But clocking out is important. With some jobs, it may be possible and helpful to work more or less all the time - research is not such a job, at least not for me.[6] It takes practice to keep some kind of regular workday and work week.

I’m not very good at tracking hours, so can’t give a detailed breakdown of a typical week. But here are some things on my to-do list for this week:

  • read ‘The Meta Problem of Consciousness’: take notes, make flashcards about it, and make a handout for reading group
  • email several academics to ask if they will take a meeting with me and if they are interested in visiting the digital minds group in the future
  • revise a draft of a paper I'm working on
  • meet with a colleague about a paper we are collaborating on
  • attend FHI events: digital minds reading group, the Research Progress Meeting
  • meet with an FHI colleague; a DeepMind research scientist; and my mentee for SERI’s summer research internship

Skills developed

- Writing clearly

- Reading academic papers effectively

First, there’s deciding what to read and how deeply to read it. Often this comes down to a hard-to-articulate intuition about quality, which develops gradually with familiarity with a field or a literature. Then there’s the reading itself. If a paper seems especially important, I will read it carefully, take extensive notes, and make 10-20 Anki flashcards about it. At the best of times, I will also write a (low-effort) prose summary.

- Learning new maths and empirical literatures as needed

Fields relevant to AI consciousness include neuroscience, deep learning, philosophy, and ethology. There’s a huge amount to learn, which is one of the greatest struggles of working on interdisciplinary questions. It is also one of the greatest rewards.

- Networking and field-building

I'm fairly extroverted for a researcher, and I really enjoy meeting and talking to people (including you, the reader! See below). I try to leverage this and make my work as social as possible.

- Self-management

See above. See also Lynette Bye.

Pros and cons

The major advantage of the job is that I have a lot of freedom to work on extremely challenging problems in whatever way I think best. I get to do this surrounded by fascinating people from a variety of disciplines. I look forward to the joys of in-person office life: when grabbing a protein bar from the FHI kitchen, you’re liable to find Anders Sandberg ebulliently holding forth on the physical constraints that govern possible intergalactic civilizations, or overhear some alarming fact about the history of nuclear weapons.

The major drawback of the job is that I have a lot of freedom to work on extremely challenging problems in whatever way I think best. I never feel like I know enough or that I am up for the challenge - in general, but especially when working on something as genuinely bewildering as consciousness. Cluelessness means I rarely have the satisfaction of knowing I have moved things in the right direction. Imposter syndrome flares up not infrequently. Doing a PhD is known to be a huge predictor of anxiety and depression; I would imagine the same is true for many EA research jobs, which can be similar to graduate research in some key dimensions.

Get in touch

All told, I enjoy my job and consider myself very privileged to have it, especially considering my somewhat fortuitous path to it.

My path to FHI was unique in many ways, with luck, privilege, timing, and personal idiosyncrasies all playing a role. Still, I hope you found this post helpful. Here are a variety of ways to get in touch with me, which would delight me.

Acknowledgements: thank you to the NYU Center for Mind, Brain, and Consciousness for supporting my presentation at EAG 2016. And to Molly Strauss, Sophie Rose, and Stephen Clare for comments on a draft of this post.

Notes


  1. It’s worth noting that, in my current opinion, the poster itself was - and this is not false modesty - not very good. It more or less consisted of two relatively trivial insights: a) it’s important for cause prioritization to know which systems are conscious, and b) if the Integrated Information Theory of consciousness or something like it is correct, perhaps moral patienthood scales with “amount” of consciousness. Plausible thoughts, but many other posters presented substantive, polished papers.

  2. Unfortunately for my work but fortunately for the world, not long after this someone else, far more capable and far more familiar with Lara Buchak’s work, began working on this question: Lara Buchak.

  3. EA Global 2016 is also when I performed the action that will probably outweigh the rest of my career impact combined: I successfully invited my friend Arden Koehler, who was not in effective altruism at all, to attend.

  4. It might be of interest that I began to work on AI issues more from philosophical curiosity than from any EA considerations - at the time I was skeptical of AI safety as a cause area and of the 'longtermist turn' in EA more generally. Nor was I seriously considering EA research as a career at that point.

  5. This has meant working on the following papers: (1) “The problem of minimal instantiations” with Jonathan Simon, on the discomfiting fact that very simple computational systems can satisfy the criteria for consciousness proposed by basically all of the leading scientific theories of consciousness. (2) I am scheming a paper on how illusionists about consciousness should think about AI suffering. (3) I also need to write “AI consciousness: an overview for EAs” and post it to the EA Forum! Please get in touch if you are interested in any of these topics.

  6. Cf. Paul Graham: “[I]n many kinds of work there's a point beyond which the quality of the result will start to decline. That limit varies depending on the type of work and the person. I've done several different kinds of work, and the limits were different for each. My limit for the harder types of writing or programming is about five hours a day. Whereas when I was running a startup, I could work all the time.”

Comments

DM

I really appreciated the many useful links you included in this post and would like to encourage others to strive to do the same when writing EA Forum articles.

rgb

Thanks Darius! It was my pleasure.

Thanks for writing, sounds like a great career you've got going, congrats! Unsurprisingly, many of your experiences closely track what I jotted down about academic economics yesterday. However, one big benefit of yours - which I didn't think to mention, but which is relevant for anyone choosing between a university and a research institute - is the like-minded coworkers.

I'd be very surprised if >2 colleagues of mine knew about EA, and even more surprised if any aside from me had thought about longtermism, etc. This definitely makes it a bit solitary. I imagine I'd be happier and more productive in an environment with even just 1-2 people excited about Global Priorities Research.

On the other hand, I think it's probably useful to have GPR work being done in the wild to mainstream it some, so it's not all negative.

rgb

That's a great point. A related point that I hadn't really clocked until someone pointed it out to me recently, though it's obvious in retrospect, is that (EA aside) in an academic department it is structurally unlikely that you will have a colleague who shares your research interests to a large extent: it's rare that a department is big enough to have two people doing the same thing, and departments need coverage of their whole field for teaching and supervision.

That seems correct to me for the most part, though it might be less inevitable than you suspect - or at least this is my experience in economics. At my university they tried hiring two independent little 'clusters' (one being 'macro-development', which I was in), so I had a few people with similar enough interests to bounce ideas off of. A big caveat is that it's a fragile setup: after one person left, it's now just two of us with only loosely related interests. I have a friend in a similarly ranked department that did this for applied-environmental economics, so she has a few colleagues with similar interests. Everything said here is even truer of the top departments, if you're a strong enough candidate to land one of those.

My sense is that departments are wise enough to recognize the increasing returns to having peers with common interests, even at the expense of sticking faculty in teaching roles that are outside of their research areas. This will obviously vary job-to-job and should just be considered when deciding whether to apply to a specific job; I just don't think it's universal enough to steer people away from academia.

<<email several academics to ask if they will take a meeting with me and if they are interested in visiting the digital minds group in the future>>

Are you able to share what you talk about on these calls? Do you have specific points/questions you want to address with people or do you just reach out for a chat and an intro to see what comes of it?

Context: I'm also a researcher at a (perhaps slightly less) academia-adjacent EA nonprofit/think tank (Sentience Institute) and I only do low-structured networking at events like EAG.

Great question, I'm happy to share.

One thing that makes the reaching out easier in my case is that I do have one specific ask: whether they would be interested in (digitally) visiting the reading group. But I also ask if they'd like to talk with me one-on-one about their work. For this ask, I'll mention a paper of theirs that we have read in the reading group, how I see it as related to what we are working on, and what broad questions I'm trying to understand better, related to their work.

On the call itself, I am a) trying to get a better understanding of their work and b) letting them know what FHI is up to. The very act of preparing for the meeting forces me to understand their work a lot better - I am sure that you have had a similar experience with podcasting! And then the conversations themselves are informative and also enjoyable (for me at least!).

The questions vary according to each person's work. But one question I've asked everyone is:

If you could fund a bunch of work with the aim of making the most progress on consciousness in the next 40 years (especially with an eye to knowing which AIs are conscious), what would you fund? What is most needed for progress?

One last general thought: reaching out to people can be aversive, but in fact it has virtually no downside (as long as you are courteous with your email, of course). The email might get ignored, which is fine. But the best case - and the modal case, I think - is that people are happy that someone is interested in their work.

Oh and I should add: funnily enough, you are on my list of people to reach out to! :D

I do research in topics that I'm very interested in, as you do. And I am also very interested in EA, rationality, and so on, as you seem to be. I was wondering if you share a problem I have: I really don't know where my work starts or ends. I mean, much of the stuff I do (read, watch, write) for fun could clearly be considered work, and vice versa: many things I do for work could be considered leisure. This also seems to be the case for you.

Did you consider writing this post as part of your work, for example? Or reading EA posts? Or reading any blog or article about philosophy of mind, AI, and so on? I guess you consider coordinating the digital minds group to be part of your (paid) work, right? Where do you set the boundary? Or better, how do you set the boundary?

I really struggle with that. Often it is not very important to clearly distinguish what is actual work and what is not, but other times it is. In addition, it is not unusual for something that started as 'for fun' to end up being part of your research or main working activity (e.g. the digital minds group for you).