
TLDR:

ENAIS is teaming up with community builders from the Netherlands to seed AIS Netherlands. You can read about the role below, and if you would be interested in founding such an organisation, fill out this expression of interest form. We are currently in discussions with funders about seeding city and national groups across the world, so we would like to get a better sense of the local talent pool interested in starting such an initiative. You can also (anonymously) refer people for us to contact here.

Co-signers:

  • Gergő Gáspár (Director, The European Network for AI Safety)
  • Joshua Lander (Community manager, BlueDot Impact)
  • James Herbert (Co-director, EA Netherlands)
  • Otto Barten (Director, Existential Risk Observatory)
  • Jelle Donders (Founder, Eindhoven AI Safety Team and Tilburg AI Safety Initiative)
  • Natalia Matuszczyk (Founder, AI Governance Rotterdam)
  • Margot Stakenborg (Independent researcher)

If you are not based in the Netherlands but are interested in community building, you can fill out the same form and indicate the location from where you would like to start a group.

Job title

Founding Executive Director – AIS Netherlands

Location

Randstad (Preferred) / Remote within the Netherlands

Work hours

Part-time to full-time role requiring 20-40 hours per week. Flexible working arrangements are available.

About AIS Netherlands

AIS Netherlands will be an organization focused on advancing the field of AI safety and ethical AI governance within the Netherlands and the EU. The founder of AIS Netherlands will work to build AI governance and technical talent, provide career advice, run courses and foster connections among members of the local and global AI Safety community. 

Position summary

AIS Netherlands will need a skilled and forward-thinking leader to serve as its Executive Director. In this foundational role, you'll guide the organization's growth, build its network, and develop its impact strategy. The Executive Director will be responsible for establishing AIS Netherlands' initial programs and partnerships, focusing on community building, talent development, and educational outreach. The role calls for someone with a deep understanding of AI safety and experience in project and people management.

Key responsibilities

  • Strategic vision and program development
    • Shape the strategic direction of AIS Netherlands, designing programs aligned with the organization's mission to advance AI safety and informed by the evolving needs of the Dutch AI safety ecosystem.
    • Develop and launch programs, including educational courses, career support, and community-building activities to connect and support professionals in the Dutch AI policy landscape.
  • Community building
    • Lead efforts to strengthen the AI safety community in the Netherlands, connecting students and professionals across AI governance and technical research, and related fields.
    • Organize networking events, facilitate AI safety meetups, and establish groups to build a collaborative and supportive community.
    • Coordinate with and support
      • Existing AI Safety organisations: Timaeus, Catalyze, Existential Risk Observatory, Legal Safety Lab, etc.
      • Existing local groups: AIS Amsterdam, Delft, Eindhoven, Tilburg, Groningen, Utrecht, Nijmegen, Maastricht, AIG Rotterdam, etc.
  • Talent Development and Career Support
    • Provide resources and guidance to help individuals find impactful roles in AI governance and technical research.
    • Develop partnerships with think tanks, universities, and other organizations to offer career support and networking opportunities.
  • Operational and Organizational Leadership
    • Oversee the operational setup of AIS Netherlands, including budgeting, compliance, staffing, and organizational development.
    • Collaborate with board members to ensure that the organization's programs and goals align with its mission and long-term objectives.

Program strategy and potential initiatives

AIS Netherlands will focus on two primary objectives:

  • Channeling Talent to High-Impact Roles
    • Career Advice and Connections: Build a team dedicated to helping professionals transition into AI safety roles in the Netherlands, providing mentorship and key connections within the Dutch policy ecosystem.
    • Educational Programs: Implement specialised courses, such as BlueDot or CAIS-type programs, aimed at students or experienced professionals. Follow up with participants to help them find meaningful roles in AI safety.

Key metrics: number of people entering research fellowships, founding projects, or transitioning to AI safety careers

  • Building a Supportive Community
    • Networking Events and Retreats: Organize gatherings and retreats to unite people in AI safety, encourage collaboration, and foster new ideas.
    • Policy Fellowships and Visitor Programs: Create fellowship programs for mid-career professionals to gain experience at NL-based AI policy organisations, contributing to the broader community's impact.

Key metrics: number of new connections facilitated, number of policy fellows

As AIS Netherlands grows, the Executive Director may work with advisors and consultants to help identify and prioritize high-impact activities, refining the program strategy based on the Netherlands' needs and opportunities.

Qualifications

  • Strategic and Visionary Leader: Experienced in guiding organizations or programs, ideally within the nonprofit, policy, or technology sectors.
  • Relationship Builder: Strong network in AI policy, governance, or related fields, with a proven ability to build effective partnerships.
  • Operational Skills: Skilled in managing budgets, operations, and staff oversight, with experience building sustainable organizational structures.
  • Community Development: Successful track record in fostering engaged communities through events, programs, and partnerships.
  • Understanding of AI Governance: Familiarity with AI safety concepts, knowledge of the Dutch and EU policy landscape, and an understanding of governance frameworks are necessary.

We recommend signing up even if you don't check all of these boxes; the ENAIS team will be ready to support you in developing new skills as part of the role.

Potential candidate profiles

AIS Netherlands is open to candidates from a variety of professional backgrounds who bring experience in one or more of the following areas:

  • Policy Connector: Experienced in the EU policy space, with skills in organizing events and building strategic alliances.
  • Educator: Skilled in program management with expertise in AI safety education, ideally focused on experienced professionals.
  • Consultant: Able to identify and assess field-building needs within the EU policy sphere, working with different stakeholders to design and implement projects.
  • Community Organizer: Skilled in community management, event planning, and outreach, committed to fostering collaboration in AI safety.

Compensation & benefits

We would like to make it clear that funding is not yet secured for the organisation. We are currently in discussion with funders about the project and plan to either hold an open hiring process or help the top candidate(s) apply for funding themselves. For this reason, we recommend filling out the EOI form rather than waiting for the public job announcement.

We are currently fundraising for AIS Netherlands. For a full-time role, we aim to offer a competitive contractor salary in the range of 45,000–78,000 EUR (gross), based on experience, along with a comprehensive benefits package, including:

  • 25 days of paid leave
  • Flexible working arrangements
  • Professional development support and advising
  • Community engagement - participating in team-building activities and retreats organised by the European Network for AI Safety and other community-builders in the Netherlands

How to register your interest

Please sign up on this form. EOIs will be reviewed on a rolling basis. As mentioned above, we will either hold an open hiring round or help the top candidate(s) fundraise independently, so early submissions are encouraged. If you are not based in the Netherlands but are interested in community building, you can fill out the same form and indicate the location from where you would like to start a group.
