I am a strategist and founder with 20+ years of experience in communications, marketing, operations, civic infrastructure building, and systems-level storytelling. Based in Washington, DC, I came to AI safety through curiosity and necessity, not via a traditional pipeline. I believe my perspective is exactly what this moment needs as the ecosystem expands. I am actively seeking my first formal role in the AI safety ecosystem at the intersection of communications, strategy, and governance.
I am actively seeking roles in AI governance, safety, and external affairs, particularly at organizations where AI safety is a core strategic priority, not an afterthought, and where communications, strategy, and field-building intersect. If you know of opportunities or can make an introduction, I would genuinely welcome the conversation.
I am also building Hokmah and looking to connect with aligned funders, potential advisors, and organizations exploring community-based approaches to AI governance. If that is your work or adjacent to it, let's talk.
Finally, I am always glad to connect with others who are serious about this work and willing to think together, especially those who came to this field through an unconventional path and understand both the urgency and the daily weight of caring deeply about humanity's future.
I am also newer to the EA Forum and would welcome guidance from those willing to help me navigate the community, find the right conversations, and eventually contribute my own thinking here.
I am most useful at the intersection of ideas, people, and audiences. I can help teams think through messaging, narrative strategy, and communications where those choices affect trust, adoption, and field-building, particularly for AI governance and safety work reaching non-technical audiences.
I am also a connector and community builder. If you are trying to reach faith communities, working-class Americans, or people of color with AI safety work, I can help you think through strategy, framing, and trusted messenger networks. These are communities I know deeply and have spent my career serving.
If you are newer to the field and navigating the transition into AI safety from a non-traditional background, I am happy to think alongside you. I have been that person recently and know how disorienting and energizing it can be simultaneously.
Agustin,
That's a helpful distinction and honestly it sharpens my thinking. If the roles are out there, then the real gap isn't that people can't find the door, it's that most people don't yet have what it takes to walk through it or understand the pathway to get there.
That's the part I was really trying to name. Someone who genuinely cares about this work but can't yet meet the bar for an AI safety role isn't going to wait around. They're going to take the governance, policy, or corporate "AI ethics" job that's hiring right now, because that path exists and this one is harder to navigate. By the time the ecosystem builds the infrastructure to develop that person, it has already lost them.
So I'll refine what I said: the front door may be more visible than I implied. The pathway to being ready to walk through it is what's missing.
What you and your colleagues are building with the Generator Residency is exactly the kind of infrastructure this ecosystem needs. Creating a structured pathway that develops, credentials, and places generalists is the missing piece. I'm excited to see it and share it with my cohort team members.
I'll add one more data point from where I'm sitting right now. In my CEA bootcamp cohort there are several talented, mission-aligned generalists actively searching for roles in AI safety organizations. The consistent challenge isn't motivation or alignment, it's access. Without a technical background, breaking into this ecosystem often comes down to who you know or who referred you.
I hope the Generator Residency is the first of many programs like it. The talent is out there and it is mission aligned.
Pipelines don't just fail by being absent; they lose talent by being outcompeted. Right now, policy organizations, governance think tanks, and for-profit businesses are winning the generalist talent competition by default, simply because they have job listings, defined roles, and onboarding infrastructure. AI safety and impact organizations have tremendous potential and a broken front door, and that asymmetry has a cost that compounds quietly.
The post identifies a real bottleneck, but I don't think this risk gets named or measured sharply enough in the ecosystem. The field isn't just failing to attract generalist talent; it's actively losing aligned, passionate people to adjacent pipelines that have clearer entry points.
Policy organizations, governance think tanks, and corporate AI teams are hiring. They post on LinkedIn and Indeed. Some add "ethics" or "safety" to the job title. They have defined roles, recognizable credentials, and onboarding infrastructure. For someone with strong mission alignment but no research or technical background, "AI safety-adjacent" becomes the path of least resistance. The person has the conviction, but the legible paths lead elsewhere, while the routes into the AI safety space are more nuanced and harder to identify.
The counterfactual isn't that person staying in marketing. It's that person doing meaningful work on AI, outside the ecosystem, in structures that don't share the same stakes or threat models and that is a compounding loss.
I'm raising this from inside the problem: I'm a strategic communications executive with 20+ years of experience and a non-technical background. I found AI safety last August through the standard rabbit hole, beginning with a YouTube talk by Tristan Harris, then AI 2027, BlueDot, 80,000 Hours, Successif, and now the CEA Career Bootcamp. As of this writing I'm asking my bootcamp advisor for referrals to people who've made this pivot from generalist or strategic communications roles into an AI safety organization. Through networking and coursework I follow up on every job lead or connection. I show up to events where I'm often the eldest person in the room and routinely the only one without a research or technical background.
EA members tell me the ecosystem needs communicators and generalists, yet when the writers suggest searching the 80,000 Hours job listings or fellowship opportunities, it's hard to see the need for those of us with these skills. The credentialing infrastructure or pathway doesn't exist outside of BlueDot or the CEA Bootcamp. There is an implicit, even if unintentional, hierarchy in which research is the legible "real work." I have enough context and drive to push through that friction, but I shouldn't have to rely on referrals and my tenacity.
The Generator Residency is a meaningful signal and I plan to apply. But the deeper fix is making the pipeline visible enough that alignment-minded generalists don't have to stumble into AI safety and sturdy enough that they don't get routed elsewhere while they're looking for the door.
Credit to the authors for naming this clearly. The Generator Residency is a good first move, but the pipeline problem is bigger than one program and worth continued pressure.
Congratulations @gergo! Having had the pleasure of connecting with you already, I can say with confidence that EA UK is in very good hands. Wishing you a strong and exciting start in this new role.
This post speaks directly to where I am right now, and I truly enjoy your content. I am pivoting careers, which is, honestly, a scary place to be. So I keep showing up to EA events such as happy hours and women's gatherings. I am usually the oldest person in the room, which isn't always easy. However, it is starting to pay off.
I attended an EA Women's event recently, met someone, and within a week she had already opened two doors for me. I didn't plan on that, and I am still working on describing what role I want to play in the AI safety space. However, I am glad I attended that event.
A few months ago, at an EA Happy Hour, I met someone from my own generation, also searching, pivoting careers, and trying to figure it out. We now share resources, support each other, and yes, joke that we are always the elders in the room. That connection alone has made the journey feel less lonely; I now have someone I can reach out to and find community with.
Sofia's framework captures something I'm living in real time. The layers and serendipity are real. I am learning greater patience and believe in the power of compounding. I am looking forward to the day I can stop saying I'm pivoting and finally say I've arrived. Until then, I'll keep showing up.
This post was recommended to me by a friend in EA, and I was pleasantly surprised to see you were the author. Such great advice and guidance for those of us searching for jobs in the EA space. Thank you for the tips and the section on imposter syndrome. I am going to listen to the 80,000 Hours podcast on mental health. The journey to being hired is a deliberate practice, and I am determined to find a role.
Thank you!
I love this post and the concept of "surface area for serendipity!" I believe in the power of serendipity and creating opportunity by stepping out and adding to your network. I am new to the EA community and learning how to engage, be present, and add value. This post is a helpful guide for newcomers. I hope to update this post in a few months with firsthand knowledge of the guidance that you offer to assist in finding a job.
Thank you!
Hello everyone,
I’m Karen Maria and I am based in Washington, DC in the USA. I was introduced to effective altruism through my 80,000 Hours coach, and I’m still orienting myself within the community.
I’ll be honest, the EA Forum can feel overwhelming and intimidating at times. There’s a level of rigor here that I deeply respect, and I’m currently more of a reader and lurker but I do plan to post and be a contributor. I’m learning how to sit with ideas, follow arguments carefully, and build understanding before I form strong views.
I’m making a late career pivot into AI safety and governance. I come from a background in marketing, communications, stakeholder engagement and brand strategy. Over the past year my curiosity about AI tools turned into deeper concern about societal impact, power concentration, persuasion, and who is most vulnerable as these systems advance. I am currently working on my theory of change.
Most of my downtime now is spent studying. I’m reading research papers, policy analysis, and watching lots of podcasts, often slowly and repeatedly, to build real comprehension rather than surface familiarity. This transition is intentional and, at times, uncomfortable, but it’s necessary.
I’d love recommendations from this community:
I’m here because I value the norms this community upholds: intellectual honesty, seriousness about impact, and humility in the face of uncertainty. I’m grateful to learn from those further along the path and to be part of the conversation as I continue to grow into this work.
Karen Maria
It’s extremely intimidating to write on the EA Forum. Normally I write my thoughts and ask Google or Claude to edit for mistakes or clarity.
This is such an important conversation that I am living through now. I wanted to add to, not detract from, the conversation. I will go back to lurking on the EA Forum.