TL;DR: TARA is now accepting participant applications (closing January 23rd) and TA applications on a rolling basis (also closing January 23rd). We run in-person Saturday sessions over 14 weeks from March to June across the Asia-Pacific (APAC) region, designed to be completed alongside full-time work or study. Participants work through the ARENA curriculum and deliver a final capstone project. Apply here as a participant, here as a TA, or see our website for more details.
The Technical Alignment Research Accelerator (TARA) is now accepting applications for participants and teaching assistants!
TARA is an in-person, part-time, 14-week program for full-time professionals and students across APAC, designed to accelerate your path to meaningful technical AI safety research. Participants meet in person every Saturday in our participating cities to work through select sections of the ARENA curriculum and deliver a final capstone project.
Participants will learn key machine learning and technical AI safety concepts, including transformer architectures, mechanistic interpretability, reinforcement learning, and model evaluations. They will implement and consolidate this content collaboratively through pair programming (coding in pairs, taking turns writing and reviewing code), guided by an expert Teaching Assistant. The program is free: we cover compute credits for projects and coursework, lunch on Saturdays, and dedicated study spaces.
Apply to TARA!
Applications close January 23rd. Both participants and teaching assistants may apply up until the deadline, but TA applications are assessed on a rolling basis, so we encourage applying early!
Who is TARA for?
TARA is for people with a strong programming or machine learning background who are passionate about building their technical AI safety skills in order to reduce catastrophic risks from AI. In particular, we aim to provide a path into AI safety for full-time professionals and students who cannot relocate for overseas programs or commit to full-time programs. Successful applicants must be located in one of the participating cities, as in-person attendance is mandatory.
This could include (but is not limited to!):
- Software engineers and ML practitioners wanting to transition their career into AI safety
- Undergraduate or postgraduate students seeking a career in AI safety
- Technical professionals who don’t have the capacity to undertake full-time programs
(These are only examples - we welcome applications from anyone with strong coding skills and a passion for AI safety. If you are unsure whether you are suitable, err on the side of applying!)
Times and Locations
Round 1 of 2026 will run from March 7th to June 13th.
We are targeting cohorts in the following locations, but final locations will be determined by demand:
- Sydney
- Melbourne
- Brisbane
- Singapore
- Manila
- Taipei
- Tokyo
TARA v1 Outcomes
The inaugural cohorts were run in Melbourne and Sydney in March 2025.
- 90% of participants completed the program
- 94% would recommend the program
- 100% were satisfied with the program management
- 89% were more motivated to pursue careers in AI safety
- 45% were full-time professionals
Our 6-month follow-up also showed that:
- 29% had secured further competitive fellowships (such as SPAR and LASR)
- 29% had transitioned into AI safety roles
- 2 research outputs from the project phase had been published
(For a detailed breakdown, follow this link to the full report)
Teaching Assistant Applications
We are looking for talented Teaching Assistants to help run TARA! As a TA, you'll work remotely to guide approximately 26 participants across two city cohorts within your assigned time-zone cluster. All instruction and support are delivered online - you don't need to be physically present in any of the cities.
You can be based anywhere in the world, as long as you're available during your cluster's Saturday session hours.
You'll be part of a team of 3-4 TAs, each responsible for one of three city clusters.
Apply to be a TA!
Required qualifications
- Completed most or all of the ARENA curriculum
- Strong Python and PyTorch skills
- Strong grasp of RL, transformer architectures, mechanistic interpretability, sparse autoencoders, and model evaluation
- Experience explaining complex technical concepts
- Ability to mentor in programming/ML
- Patient and encouraging teaching style
- Proactive communication habits
- Genuine interest in AI safety
- Available for the icebreaker session on Saturday, March 7th (~1.5 hours)
- Available every Saturday from March 14th to June 13th, 2026 (~7.5 hours per session)
Nice-to-have skills
- Technical AI alignment research experience
- Previous experience running technical workshops or bootcamps
- Experience with distributed/remote teaching
See the full role description here.
Questions? Please reach out to us at yanni@taraprogram.org or zac@taraprogram.org, or comment below!
Who are we?
Yanni Kyriacos (Founder and Director)
Yanni previously co-founded and led AI Safety Australia & New Zealand, where he ran TARA v1 and the AI Safety Careers Conference, launched Australia's first AI safety co-working space, and built monthly meetups across six cities.
Before AI safety, Yanni spent a decade in marketing and strategy at LinkedIn, News Corp, Spark Wave, and various agencies.
He has also served on advisory boards for Giving What We Can, RESULTS International, and The Deli Women & Children's Centre.
Zac Broeren (Technical Program Manager)
Zac is a Master of AI student at UNSW with a Bachelor of Science in Mathematical Physics from the University of Melbourne. He completed TARA v1, where his final project focused on causal interventions on a chess model using linear probes.
Zac was President and Education Director of EA University of Melbourne from 2022-2024, helping organise EAGx Australia 2023. He has worked at Elevate Education since 2021, recently helping develop their AI seminars.
Nelson Gardner-Challis (Advisor)
Nelson is the Technical Lead at Arcadia Impact.
Previously, he co-organised the TARA pilot program in 2025 and worked as a Technical Project Manager for the UK AISI Bounty programme, managing three teams building evaluations on strategic deception, synthetic environment detection, and propensity to sandbag.
Nelson completed the LASR Labs 2025 Summer Cohort, where his research on control evaluations investigated the untrusted monitoring protocol.
Ryan Kidd (Advisor)
Ryan is Co-Executive Director of MATS, a Co-Founder and Board Member of the London Initiative for Safe AI (LISA), a Manifund Regrantor, and advisor to AI Safety ANZ, Catalyze Impact, and Pivotal Research.
Previously, he completed a PhD in Physics at the University of Queensland (UQ) and conducted independent research in AI alignment for the Stanford Existential Risks Initiative.
Dan Wilhelm (Advisor)
Dan is a visiting researcher at Meridian Cambridge, studying sleeper agents and mechanistic interpretability.
He has designed and led corporate courses in data science and machine learning, earning recognition as an inaugural Distinguished Faculty member at General Assembly. Dan also served as TARA's first teaching assistant.
Earlier, he co-founded a social networking startup after pursuing a PhD in Computation and Neural Systems at Caltech.
