
Epoch AI is looking for an Operations Associate to help us grow and support our team. This person will manage our recruiting, onboarding, and offboarding processes, and generally support our staff in various other ways, helping our organization thrive. The successful candidate will report to me, Maria de la Lama, and work closely with the rest of our 14-person team and our fiscal sponsor's operations team.

This role is full-time, fully remote, and we are able to hire in many countries. Apply by May 15!

Key Responsibilities

  • Recruiting and helping our team grow
    • The person in this role would manage our hiring rounds, handle outreach for our open roles, communicate with candidates, and help streamline our hiring processes to enable us to grow sustainably and hire the right staff.
    • They would also coordinate occasional visa sponsorship processes, and own onboarding & offboarding processes for contractors and employees.
    • More broadly, this person will play an important role in shaping Epoch’s organizational culture and ensuring it stays strong and healthy as we grow.
  • General staff support across a variety of areas including internal policies, SEO, subscriptions, payroll, invoices, benefits and team retreats.
    • Many support responsibilities are currently handled by our fiscal sponsor and professional employer organizations, but the successful candidate will be our staff’s point of contact for a wide range of support requests. We are a small team, so the support responsibilities this person would work on are quite diverse.

About you

You’ll likely have

  • An operations mindset: being good at identifying issues, keeping track of many fast-moving tasks and prioritizing among them, and implementing solutions efficiently.
  • Great communication skills, and comfort with open and direct communication. It's very important for us that our team members are comfortable giving and receiving feedback, and more generally sharing their thoughts clearly and openly.
  • A low-ego, supportive mindset and comfort working behind the scenes.
  • Excellent organization and attention to detail.
  • Interest in (further) developing your operations skills, and particularly people operations skills.
  • Experience working in operations or managing projects that involve logistics.
  • Comfort and experience owning projects and taking responsibility for the results.
  • Comfort with remote work.
  • Deep interest in Epoch AI’s mission.

If you think you'd be a good candidate but you don't check all these boxes, please still apply!

What We Offer

Compensation

  • Annual salary between $60,000 and $70,000 USD pre-tax. The exact salary will be based on prior relevant experience.
  • Salaries are not restricted to USD, and contracts and payments are usually in local currencies. Conversions from USD are based on a 1-year-average exchange rate that is updated annually.

Other Benefits

  • Comprehensive global benefits package
    • While benefits vary by country, we make every effort to ensure that our benefits package is equitable and high-quality for all staff. In most countries, the package includes medical insurance, life insurance, and a pension plan.
  • Generous paid time off policy, including:
    • Unlimited vacation with a minimum of 30 days off per year
    • Unlimited (within reason) personal and sick leave
    • Parental leave - up to 6 months of parental leave during the first 2 years after a child’s birth or adoption for parents of all genders
  • Equipment stipend equivalent to $2000 USD every 3 years to cover the cost of purchasing work and office equipment.
  • Paid work trips, including 3 staff retreats per year and relevant conferences.
  • Additional co-working stipend equivalent to $2000 USD annually to work in the same location as other staff.
  • Professional development stipend equivalent to $2000 USD annually to spend on learning or development opportunities.
  • Opportunity to contribute to a high-impact non-profit organization — our research is trusted by key decision makers globally.
  • Other benefits as allowed at the discretion of Epoch’s leadership and local availability.

About Epoch

Epoch is a non-profit research institute investigating the trajectory and impact of artificial intelligence. We help policymakers and the public think more clearly about AI through scientific research and data. Our work informs policy-making at key government institutes and governance at leading industry AI labs.

You can learn more about our work in this summary dashboard or on our blog.
 

Additional Information

  • Please email careers@epochai.org if you have any questions about this role, accessibility requests, or if you want to request an extension to the deadline.
  • While we welcome applicants from all time zones, you may be expected to attend meetings during working hours in time zones between UTC-8 and UTC+3, where most of our staff are based.
  • Please submit all of your application materials in English and note that we require professional level English proficiency.
  • Traveling 1-3 weeks per year is an essential requirement for this position.
  • Epoch AI is committed to building an inclusive, equitable, and supportive community for you to thrive and do your best work. We’re committed to finding the best people for our team, so please don’t hesitate to apply for a role regardless of your age, gender identity/expression, political identity, physical abilities, veteran status, neurodiversity or any other background. 
  • Epoch AI is fiscally sponsored by Rethink Priorities.
