I stumbled into EA after watching my middle-aged manager replace a team of consultants with AI. I spent five years in B2B sales, where relationships were everything and speed was survival. My global team always found ways to align on client needs despite regional differences. That changed in 2025.
Living in the heart of Silicon Valley, I am an early AI adopter and benefit from better organization and execution. I move faster, and that speed creates dissonance with those who are not using AI. While organizations are still in motion defining AI governance, employees have free rein to leverage language models as they see fit.
This is where teams break down. For those embedded in an environment constantly pushing the envelope, tinkering with fun new tools is a cultural norm. Those outside the tech ecosystem, however, are eons behind and have only recently discovered that AI is not just ChatGPT. With this adoption gap, those using AI feel the pain of being slowed down by colleagues still learning how to write a prompt. Mounting pressure to bring in revenue is how teammates get cut out of conversations. Instead of scheduling extra time with a subject matter expert whose next availability is three days out, then reviewing their feedback to brainstorm and revise the proposal for client alignment, AI can do it all instantaneously.
The risk nobody talks about is false confidence. You are not the specialist, you cannot gut-check what AI produces, and you do not actually know if you can deliver on what you just promised the client. Buried under AI’s confidence is a solution that positions you as an expert you have no right to posture as.
I see the advantage of turning something around quickly, yet the question of at what cost is being neglected. Outsourcing work to AI can make your life easier, but I’m someone who learns by doing. I need exposure, practice, and feedback to keep improving and honing my craft. As AI takes over more of the information collection, digestion, and preparation, enormous amounts of time are saved at the expense of learning. That practice is how taste is developed, and taste is crucial for guiding future AI systems and ensuring they embody human values. In exchange for productivity, we are sacrificing human influence.
I did not recognize I was doing this myself until it was already happening. Most people experiencing this decline do not have the language to identify what they are losing. I fell for the “do more with AI” trap until I realized I could not remember what I was learning. The line between my own thoughts, words, and ideas blurred as my workflows included more and more AI collaboration and feedback. Although I was flying through my to-do lists, I became increasingly dependent on AI to organize my thoughts, identify my priorities, address execution gaps, and express myself. My existence is an art, a lived experience that no one else can define or curate. Yet here I was, leaning on AI to deliver. Life is not a math problem with one right answer.
AI is great for optimizing, but my life is messy, and it is about making choices that feel true to me. Productivity masked my decline in creativity, inner voice, and critical thinking. It has been six years since I wrote an essay. Back then I did it without AI, and I’m proud that today I can still do it without AI. I hope this skill set of sitting with your thoughts, piecing together a story, and moving readers continues to be valued as the pressure mounts to produce rather than create.
I found EA because I identified and experienced disempowerment myself, and I am now actively working to protect my identity, agency, and voice. AI safety researchers have identified the risks of disempowerment, but there are no accessible translations or self-assessment frameworks for the regular people who need them. We are living the risk and do not even recognize it. Kulveit’s research resonates deeply with me, framing the erosion of human influence through ordinary AI adoption as gradual disempowerment. I’m exploring how to amplify this message and reach people outside the field. If you’ve experienced this or are working on making gradual disempowerment accessible, I’d love to hear from you.
A note on how I wrote this: I drafted without AI to make sure the thinking was mine, then used Claude to help refine. I’m still figuring out where the line is between tool and replacement. Part of me wishes I had done this all by myself; another part is grateful to have an editor right by my side.
