Incidentally, ‘flipping non-EA jobs into EA jobs’ and ‘creating EA jobs’ both seem much more impactful than ‘taking EA jobs’. That could be e.g. taking an academic position that otherwise wouldn’t have been doing much and using it to do awesome research / outreach that others can build on, or starting an EA-aligned org with funding from non-EA sources, like VCs.
(excerpt from https://lydianottingham.substack.com/p/a-rapid-response-to-celeste-re-e2g)
Let’s create example trial tasks to strengthen EA hiring?
EA orgs use trial tasks quite a lot in hiring, which is great: candidates can demonstrate the skills that truly matter, regardless of their background. However, trial tasks outside of EA are often quite different, and the average candidate usually needs several rejections before learning how to show their best in that setting.
It would be great if we had example trial tasks for different roles (research, operations, etc.) so that people could practice before applying to real jobs. This way, strong candidates would not get lost in the hiring process simply due to inexperience with trial tasks.
I'd like feedback on the writing style of this post. I want to try writing bi-monthly updates, but I don't enjoy sinking time into writing.
I've never really stuck with blogging, despite it being valuable for sharing what I'm working on, because I'm a bit of a perfectionist: I end up spending hours combing over my posts. I'd like each post to take only 30 minutes, so my current ideas are either to write quickly and post as-is, or to have an AI edit out my mistakes.
Which of the two do you prefer? Do you have any suggestions for making quick blog posts without attaching poor communication or AI slop to my name?
My original post.
My prompt.
Claude's edit of my post:
------
My Two-Month Deep Dive into AI Safety: From Imposter Syndrome to Clarity
How ARBOx and ARENA helped me navigate a career transition into AI Safety—and what I learned about myself along the way
Two months ago, I committed to spending my summer diving headfirst into AI Safety. As someone with a background in Swift development and traditional software engineering, the world of transformers, linear algebra, and AI alignment research felt like an entirely different universe.
Here's what happened when I threw myself into ARBOx and ARENA—the good, the challenging, and the surprisingly clarifying moments that helped shape my career transition.
Weeks 1–3: ARBOx in Oxford - Swimming in the Deep End
ARBOx accepted me for their intensive program: one week of prerequisites, followed by two weeks of in-person training in Oxford.
The reality check was swift. During pair programming sessions, I was often the weaker partner. While my colleagues brought post-grad experience with deep learning or career backgrounds in ML, I was frantically trying to remember basic PyTorch syntax. My years of Swift development, unit testing, and design patterns suddenly felt irrelevant when staring at Jupyter notebooks full of tensor operations.
However, being the "slower" partner was actually incredibly valuable...
Make your high-impact career pivot: online bootcamp (apply by Sept 14)
Many accomplished professionals want to make a bigger difference with their career, but don’t always know how to turn their skills into real-world impact.
We (the Centre for Effective Altruism) have just launched a new, free, 4-day online career bootcamp designed to help with that.
How it works:
* Runs Sept 20–21 & 27–28 (weekends) or Oct 6–9 (weekdays)
* Online, 6–8 hours/day for 4 days
* For accomplished professionals (most participants mid-career, 5+ years’ experience, but not a hard requirement)
What you’ll get:
* Evaluate your options: identify high-impact career paths that match your skills and challenge blind spots
* Build your network: meet other experienced professionals pivoting into impact-focused roles
* Feedback on CVs: draft, get feedback, and iterate on applications
* Make real progress: send applications, make introductions, or scope projects during the bootcamp itself
Applications take ~30 mins and close Sept 14.
If you’re interested yourself, please do apply! And if anyone comes to mind — colleagues, university friends, or others who’ve built strong skills and might be open to higher-impact work — we’d be grateful if you shared this with them.
I'm currently taking a career break intended to fit-test which impactful careers suit me. I've created a spreadsheet with a weighted factor model (Altruism/Career projects tab) and a rough schedule. I'm eager for feedback on how I'm planning to spend my time and how I've prioritized what to work on.
Blog post on the topic.
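For anyone unfamiliar with the format, here's a minimal sketch of how a weighted factor model like this typically works: each option gets a score on every factor, each factor gets a weight, and options are ranked by the weighted sum. The factors, weights, and scores below are illustrative placeholders, not the actual contents of my spreadsheet.

```python
# Minimal sketch of a weighted factor model for ranking career options.
# All factors, weights, and scores are hypothetical, for illustration only.

# Each factor gets a weight reflecting how much it matters.
# (Summing to 1 is a convention, not a requirement.)
WEIGHTS = {
    "impact": 0.4,
    "personal_fit": 0.3,
    "career_capital": 0.2,
    "enjoyment": 0.1,
}

# Each option is scored 1-10 on every factor.
options = {
    "AI policy research": {"impact": 8, "personal_fit": 6, "career_capital": 7, "enjoyment": 6},
    "Earning to give":    {"impact": 6, "personal_fit": 8, "career_capital": 8, "enjoyment": 7},
    "Ops at an EA org":   {"impact": 7, "personal_fit": 7, "career_capital": 6, "enjoyment": 8},
}

def weighted_score(scores: dict) -> float:
    """Weighted sum of factor scores: sum(weight_i * score_i)."""
    return sum(WEIGHTS[factor] * value for factor, value in scores.items())

# Rank options by weighted score, highest first.
for name, scores in sorted(options.items(), key=lambda kv: weighted_score(kv[1]), reverse=True):
    print(f"{name}: {weighted_score(scores):.2f}")
```

The main design choice is the weights: small changes to them can reorder the options, which is why I'd especially appreciate feedback on how I've weighted the factors.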
Hi all,
I’m a 25-year-old PhD student (just finished my master's) in Turkey, working in wireless communications, with a focus on signal processing, information theory, and physical-layer technologies. I really love the field, especially the math behind it, and while research is sometimes stressful (more than sometimes), I honestly don’t see myself doing anything else long term.
That said, I’ve been engaging with the EA community recently, and I’m unsure how to connect my work with any of the top EA cause areas. The one that feels closest is AI alignment, but I’ve never worked in AI/ML directly. Some people in my field are exploring ML for wireless systems, but that’s still pretty far from core alignment research.
So here’s where I’m stuck:
- Is there a way for someone with my background to make a meaningful contribution to EA-aligned cause areas, without leaving the field I love? For example, would working on non-terrestrial networks (improving global internet access) be a meaningful contribution to global development or infrastructure resilience?
- Or should I consider earning to give as my primary contribution, while continuing a career I'm good at and love?
- Are there overlooked intersections between wireless/communications and critical EA cause areas like AI safety, biorisk, or catastrophic risk resilience?
- Or should I make a longer-term pivot toward more "core" EA paths, even if that means stepping outside of my comfort zone?
I really want to do something that matters, but I’m starting to feel like there’s no good path that lets me do impactful work and stay true to my core interests. I’d appreciate any guidance.
Thanks in advance!
Hey everyone! As a philosophy grad transitioning into AI governance/policy research or AI safety advocacy, I'd love advice: which for-profit roles best build relevant skills while providing financial stability?
Specifically, what kinds of roles (especially outside of obvious research positions) are valuable stepping stones toward AI governance/policy research? I don’t yet have direct research experience, so I’m particularly interested in roles that are more accessible early on but still help me develop transferable skills, especially those that might not be intuitive at first glance.
My secondary interest is in AI safety advocacy. Are there particular entry-level or for-profit roles that could serve as strong preparation for future advocacy or field-building work?
A bit about me:
– I have a strong analytical and critical thinking background from my philosophy BA, including structured and clear writing experience
– I’m deeply engaged with the AI safety space: I’ve completed BlueDot’s AI Governance course, volunteered with AI Safety Türkiye, and regularly read and discuss developments in the field
– I’m curious, organized, and enjoy operations work, in addition to research and strategy
If you've navigated a similar path, have ideas about stepping-stone roles, or just want to connect, I'd be happy to chat over a call as well! Feel free to schedule a 20-min conversation here.
Thanks in advance for any pointers!