TL;DR: Aether is hiring 1-2 researchers to join our team. We are a new and flexible organization that will likely focus on chain-of-thought monitorability, safe and interpretable continual learning, or shaping the generalization of LLM personas in the upcoming year. New hires will have a chance to substantially influence the research agenda we end up pursuing.
About us
Aether is an independent LLM agent safety research group working on technical research to support the responsible development and deployment of AI technologies. So far, our research has focused on chain-of-thought monitorability, but we're also interested in other directions that could positively influence frontier AI companies, governments, and the broader AI safety field.
Position details
- Start date: Between February and May 2026
- Contract duration: Through end of 2026, with possibility of extension
- Location: We're based at Trajectory Labs in Toronto and expect new hires to work in person, but we may consider a remote collaboration for exceptional candidates. We can sponsor visas and are happy to accommodate remote work for an initial transition period of a few months.
- Compensation: ~$100k USD/year (prorated based on start date)
- Deadline: Saturday, January 17th EOD AoE
Role details
What you’d be working on:
- We aren't yet committed to a single research agenda for the upcoming year. In the past, our research has focused on chain-of-thought monitorability. A representative empirical project is our forthcoming paper How does information access affect LLM monitors’ ability to detect sabotage?, and a representative conceptual project is our post Hidden Reasoning in LLMs: A Taxonomy.
- We will likely continue some work on CoT monitorability this year. However, we are also exploring safe and interpretable continual learning, shaping the generalization of LLM personas, and pretraining data filtering. Ultimately, we plan to work on whichever direction seems most impactful to us.
We’re looking for:
- Experience working with LLMs and executing empirical ML research projects
- Agency and general intelligence
- Strong motivation and clear thinking about AI safety
- Good written and verbal communication
A great hire could help us:
- Become a more established organization, like Apollo or Redwood
- Identify and push on relevant levers to positively influence AGI companies, governments, and the AI safety field
- Shape our research agenda and focus on more impactful projects
- Accelerate our experiment velocity and develop a fast-paced, effective research engineering culture
- Publish more papers at top conferences
Team
Our team consists of three full-time independent researchers: Rohan Subramani, Rauno Arike, and Shubhorup Biswas. We are advised by Seth Herd (Astera Institute), Marius Hobbhahn (Apollo Research), Erik Jenner (Google DeepMind), Francis Rhys Ward (independent), and Zhijing Jin (University of Toronto).
Application process
If you're interested in the role, submit your application by Saturday, January 17th EOD AoE through this application form.
We generally prefer that candidates join us for a short-term collaboration to establish mutual fit before transitioning to a long-term position. The short-term collaboration will likely last 1-3 months part-time. However, if you have AI safety experience equivalent to having completed the MATS extension, we are happy to interview you for a long-term position directly; we don't want the preference for testing fit to discourage strong candidates from applying. The interview process involves at least two interviews: a coding interview and a conceptual interview where we'll discuss your research interests. The expected start date for long-term researchers is February-May 2026; we're happy to begin short-term collaborations as soon as possible.
