
From 80,000 Hours's 2012 post, The high impact PA: how anyone can bring about ground-breaking research:

Rather than attempt to personally become a researcher in that field, instead attempt to find the very best researcher already working on the issue. Consider: if you can save that researcher one hour spent on activities besides research, then that researcher can spend one more hour researching. So, by saving that researcher time, you can convert your time into their time. Suddenly, one of your hours becomes one more hour spent by the best researcher, working in the best field!
There are many ways you could do this. For instance, you could volunteer to become their PA (Personal Assistant) – something many top academics lack. Then you could save them time spent organising meetings, shopping, filing taxes etc. If you picked the right person, then at least some of this would result in more research.

The article doesn't contain any examples of people who have tried this, let alone evidence of whether it actually works.

Has anyone taken a 'multiplier' path? If so, how was it? Could you tell how much more research time your efforts enabled, if any?

Or, has anyone explored this area more fully than the article above?

Tanya at FHI first took the position of executive assistant to Nick Bostrom. On the 80,000 Hours podcast, she explained how very valuable this has been for Nick Bostrom's research, and later for FHI operations.

I have done some PA work over the past year. Taking on certain tasks really does lift mental load off a busy researcher, such as helping with scheduling, answering emails, and weighing up different opportunities. Over time, the PA function grows into something closer to a "project manager" role, with important projects delegated by the researcher and the organization. In some weeks, I believe I saved about 10 hours of work. I also made possible some high-value projects that wouldn't have happened otherwise, which I'd estimate at above $50K of value.

I think being a PA to someone at the top of their field (or to someone doing generally extremely high-impact work) is indeed a very high-impact path. It also builds excellent organizational, communication, and analytical skills. If you become a PA, you should probably aim to become a top-notch one ("The Chief of Staff"/"The Executive Officer"). It's worth noting that this is a tough, high-impact job that is often undervalued compared to what the person brings.

Being a PA (in the sense of personal assistant, not "research assistant") requires specific skills and personality traits: organizational skills (being super organized with everything), communication skills (notably being excellent at emails), analytical skills (deciding whether to say yes or no to opportunities), and being a generalist ready to roll up your sleeves on many different topics. It is also a role where you stay in the background and let the other person shine, though there are plenty of opportunities to grow a skill you're specifically focused on.

It's not surprising that many organizations are looking for PAs (80K, CSER, etc.), as this role is truly an impact multiplier, and it's hard to find people who are really excellent PAs.

I would be very excited if more EAs took on this kind of role! If you're interested, I would strongly recommend trying a few shorter and longer trial tasks to see whether you enjoy the kind of work the job entails. Also, anyone reading this: please contact me by PM if you want to talk more about it.

Interesting comment, thanks!

Tanya at FHI first took the position of executive assistant to Nick Bostrom. On the 80,000 Hours podcast, she explained how very valuable this has been for Nick Bostrom's research, and later for FHI operations.

For people who don't know the latest chapter of that story: Tanya is now the Director of Strategy and Operations at FHI. 

Thank you for the in-depth response! This seems like a really underexplored path, and I'd like to see (or even make) a full review of the subject. Sending you a PM with some more questions.

I work at FHI, first as RA and project manager for Toby Ord on The Precipice (2018–20), and more recently as RA to Nick Bostrom (2020–). Prior to this, I spent 2 years in finance, where my role was effectively that of an RA (researching cement companies rather than existential risk). All of the below is in reference to my time working with Toby.

Let me know if a longer post on being an RA would be useful, as this might motivate me to write it.

Impact

I think a lot of the impact can be captured in terms of being a multiplier[1] on their time, as discussed by Caroline and Tanya. This can be sub-divided into two (fuzzy, non-exhaustive) pathways:

  • Decision-making — helping them make better decisions, ranging from small (e.g. should they appear on podcast X) to big (e.g. should they write a book)
  • Execution — helping them better execute their plans

When I joined Toby on The Precipice, a large proportion of his impact was ‘locked in’ insofar as he was definitely writing the book. There were some important decisions, but I expect more of my impact came via execution, which influenced (1) the quality of the book itself; (2) the likelihood of its being published on schedule; (3) the promotion of the book and its ideas; (4) the proportion of Toby’s productive time it took up, i.e. by freeing up time for him to do non-book things. Over the course of my role, I think I (very roughly) added 5–25% to the book’s impact, and freed up 10–33% of Toby's time.

Career decisions

Before joining Toby, I was planning to join the first cohort of FHI’s Research Scholars Program and pursue my own independent projects for 2 years. At the time, the most compelling reason for choosing the RA role was:

  • Toby’s book will have large impact X, and I can expect to multiply this by ~10%, for impact of ~0.1X
  • If I ‘do my own thing’, it would take me much longer than 2 years to find and execute a project with at least 0.1X impact (relative to the canonical book on existential risk…)

One thing I didn’t foresee is how valuable the role would be for my development as a researcher. While I’ve had less opportunity to choose my own research projects, publish papers, etc., I think this has been substantially outweighed by the learning benefits of working so closely with a top-tier researcher on important projects. Overall, I expect that working with Toby ‘sped up’ my development by a few years relative to doing independent research of some sort.

One noteworthy feature of being a ‘high-impact RA/PA/etc’ is that while these jobs are relatively highly regarded in EA circles, they can sound a bit baffling to anyone else. As such, I think I’ve built up some pretty EA-specific career capital.

Some other names

Here's an incomplete list of people who have done (or are doing) this line of work, other than Caroline and myself:

Nick Bostrom — Kyle Scott, Tanya Singh, Andrew Snyder-Beattie

Toby Ord — Andrew Snyder-Beattie, Joe Carlsmith

Will MacAskill — Pablo Stafforini, Laura Pomarius, Luisa Rodriguez, Frankie Andersen-Wood, Aron Vallinder


    1. Some RA trivia — Richard Kahn, the economist normally credited with the idea of a (fiscal) multiplier, was a long-time RA to John Maynard Keynes, who wrote of him: “He is a marvelous critic and suggester and improver … There never was anyone in the history of the world to whom it was so helpful to submit one’s stuff.” ↩︎

Matthew, thanks for your response! Very handy to have some names I might get in contact with, and this is turning out to be higher-impact than I thought. Can you say any more on how EA-specific your career capital might be?

I'd be very interested in a longer post on the subject!


Sorry about the late answer. I just wanted to say that I also upvoted your comment because I would be very interested in a longer piece on being an RA.

Thanks for this answer - I've shared links to it with several people, and will also link to it in my sequence on Improving the EA-Aligned Research Pipeline.

I've also now posted notes from a call with someone else who's an RA to a great researcher, who likewise thought this role was great for his learning.

Comments

I just had the opportunity to talk with someone who has been Executive Assistant to several EAs, including researchers. Here are the answers they gave to my questions:

As a rough estimate, how much research time do you think your efforts enabled, if any?
- I'm not sure. 10 hours per week?

What are the things you do that save the EA/researcher most time/energy?

- It is different for each person. But here is my current best guess at some common things:
- Communication management. Organizing the inbox, filtering out spam. Drafting emails. Highlighting anything urgent and important to the EA/researcher. 
- Calendar management. Acting as a 'gatekeeper' who can easily say no to things on their behalf. Scheduling meetings in a way that fits how the EA/researcher works best.
- Reminding the EA/researcher about upcoming deadlines, so they don't have to use brain capacity tracking/worrying about that.
- Being a voice of reason when the EA/researcher is led towards spending time on something less important.
- Occasional big ongoing projects that they would otherwise have to handle themselves.
- Errands, purchases, small annoying tasks like dealing with customer service representatives etc.

(There's a good amount of overlap here with Matthew's experience, I notice.)
