Anonymous Stories · 4 voices

From the engineers inside the loop

What AI fatigue actually looks like when you're living it.

Collected March 2026 · The Clearing · Identities protected


These stories are composite accounts. They're based on patterns and experiences common across the engineering community — real enough to recognize, anonymized enough to share. Names, roles, and companies are fictional. The feelings are not.

If you see yourself in any of these, you're not broken. You're not behind. You're one of a large and largely silent group of engineers feeling the weight of a transition nobody prepared you for.

A note on these stories: These are creative non-fiction composites — not direct transcripts or case studies. They're meant to make a shared experience visible and nameable. If they resonate with you, writing about your own experience can be a first step toward understanding it.

"I kept shipping. I stopped caring. I didn't know those two things could happen at the same time."

I've been in this industry long enough to remember writing Makefiles by hand, hand-rolling deployment scripts, arguing about tabs versus spaces like it was a moral issue. I loved all of it. Not because it was efficient — it wasn't — but because every line of code felt like a decision I made. A choice. Mine.

Then late 2023 happened. The company did a big AI push. Every team got Copilot licenses. There were internal benchmarks, leaderboards even — teams ranked by AI-assisted commits per engineer. I'm not kidding. Someone in leadership thought that was a good idea. I still feel a low burn of anger when I think about it.

I played along. My numbers looked great. My manager was happy. We shipped a major reliability feature in six weeks that probably would have taken four months the old way. And I sat in the all-hands where they celebrated the team's velocity, and I smiled, and I felt absolutely nothing.

"My output was the highest it had ever been. My sense of self in the work was the lowest it had ever been. Nobody noticed the gap. I barely noticed it myself."

What I couldn't explain to anyone — not my manager, not my partner, not myself for a long time — was that the code didn't feel like mine anymore. I'd review a Copilot suggestion, think "yeah, that's correct," hit Tab, move on. Technically I was authoring. Functionally I was approving a draft someone else wrote. The distinction sounds philosophical until you realize your sense of professional identity was built on the former.

I started calling in sick on Fridays. Not because I was sick, but because I needed a day where I didn't have to be inside the machine. I started a small side project in Rust — a systems-level thing with no AI assistance, by rule — and the first time I sat with a problem for two hours and actually solved it myself, I almost cried. I didn't realize how starved I was for that.

I'm still at the same company. I haven't solved it. But I've gotten better at protecting time — at least an hour a day where I work on something hard without asking for help. It sounds simple. It is simple. And it's made a real difference.

What helped

A personal rule: one hour per day of "no AI" deep work, treated like a meeting that can't be moved. A side project written entirely by hand. Naming the feeling to his partner, which helped more than he expected.

"I realized I hadn't made a technical decision by myself in three months. The model had made them all."

I came up through a bootcamp. Not a four-year CS degree — a bootcamp, a lot of practice, a lot of grinding, a lot of imposter syndrome that I mostly overcame by the time I got my second job. I worked hard to trust myself. To trust my instincts. To look at a problem and have an opinion about it.

When AI tools became standard, I embraced them hard. Partly because they were genuinely useful, partly because I thought they'd help close whatever gap I still felt between myself and the "real" computer-science engineers on the team. If I couldn't get there through knowledge, I could get there through tools. That was the logic.

What actually happened was different. The more I relied on the model's suggestions — for architecture decisions, for code structure, for how to approach tricky edge cases — the less I trusted my own instincts when I had them. I started second-guessing myself not in the bad moments but in the good ones. I'd have a clear idea and then immediately wonder if the model would have done it differently. I started saying "I think" more in meetings. I started hedging more in my PR descriptions.

"I used to know what I thought. Now I just know what I've been suggested. It took me months to notice those aren't the same thing."

The low point was a sprint planning meeting where my tech lead asked for my opinion on a database design decision — exactly the kind of thing I'd studied, worked on, had real experience with — and I blanked. Not because I didn't know anything. Because I'd stopped practicing the act of forming an opinion. My brain kept reaching for the prompt I'd type into the model, rather than the answer I already had.

I talked to a therapist about it — a tech-adjacent one who'd heard similar things from other engineers. She said something that stayed with me: "You haven't lost your ability to think. You've just been outsourcing it so consistently that the habit atrophied." That framing helped. Habits can be rebuilt.

I started keeping a small notebook — a physical one — where I write my technical opinions before I check anything. What do I think? What would I do? Then I can go look at references, compare, update. But I make myself commit to a thought first. It's like rehab for my instincts.

What helped

A physical notebook for "commit before you check" — forming her own opinion before opening any AI tool. Therapy with a tech-aware counselor. Deliberately taking small technical ownership moments in meetings.

"I was managing a team of humans running faster than humans are designed to run. Nobody was okay. We were all pretending."

Being an engineering manager in an AI-accelerated environment has a specific quality that I haven't seen written about: you're watching people work themselves into the ground in real time, and the metrics say everything is fine. Velocity is up. DORA scores are good. Deployment frequency is at a record high. And your engineers are quietly disappearing on you.

One of my best engineers — seven years of experience, incredibly thoughtful — started making uncharacteristic mistakes. Not coding mistakes. People mistakes. Short in Slack. Missing context in tickets she would normally have fully documented. Defensive in retros. I'd seen those patterns before — they're pre-burnout signals. But when I did a 1:1 to check in, she said she was fine. She pointed to her output. Her output was objectively great. She believed the metrics. I wasn't sure she was right.

"We built dashboards for everything except human depletion. We can tell you exactly how many deployments we did last quarter. Nobody can tell you how many engineers are about to quit."

She gave notice six weeks later. Not for more money — for a company that moved slower. Her exit interview, which she was unusually candid about, was essentially a catalog of AI fatigue: the feeling of never finishing anything, the sense that her judgment was always second-guessed by a model, the exhaustion of being the accountability layer on top of an automated system. She didn't use the phrase "AI fatigue" — that language wasn't common then — but that's what she was describing.

I lost two more strong engineers in the next quarter. I started pushing back on leadership. I asked for slower sprints. I asked for dedicated "no-AI" deep work time to be protected on the team calendar. I got some pushback — "we're in a competitive environment, we can't slow down" — but I made the case in terms leadership understood: attrition is far more expensive than a slightly slower quarter.

Some of it landed. We now have a standing team agreement: no Slack on Thursday afternoons, no new tickets assigned on Fridays without mutual agreement. Small things. But my team started telling me things again. That's when I knew it was working.

What helped

Advocating upward with business language (attrition cost). Establishing low-interruption team norms. Weekly "how are you actually doing" 1:1 questions that opened real conversations instead of status reports.

"I started my career in the AI era and I still don't know what I actually know. That uncertainty is its own kind of exhaustion."

Everyone who talks about AI fatigue seems to be talking about experienced engineers who remember the before times. I don't have before times. I graduated into a world where AI code generation was already standard. I've never known an industry without it. And I still burn out. I just burn out for different reasons.

Here's what nobody warned me about: when you start your career with AI assistance baked in from day one, you don't know what you know. I can make things work. I can ship features. I can debug most things given enough time. But I have a persistent uncertainty about whether the knowledge is mine or whether I'm just good at prompting. And that uncertainty is incredibly exhausting to live with.

In my first performance review, my manager asked me to explain a design decision in one of my features. I knew it was right — I'd tested it, it performed well, there were no issues. But I couldn't explain the reasoning clearly because the reasoning had been partially assembled from model suggestions that I'd accepted without fully interrogating. I felt like a fraud. Not because I had done anything wrong. Because I couldn't tell where the model ended and I began.

"I don't have a 'before' to go back to. This is just the world I work in. And the world I work in is really hard to feel competent in, even when you are competent."

The anxiety builds in ways that are hard to see. You reach the edge of what AI assistance can do — the problem is genuinely novel, the edge case is genuinely weird — and you freeze. Not because you're incapable, but because you've never practiced trusting your own first principles enough to feel safe deploying them without backup.

A senior engineer on my team started doing something with me that helped a lot: she'd give me a small, real problem and take away my laptop for an hour. Literally sit next to me while I worked on paper. Whiteboarding. No search, no model. The first session was uncomfortable in a way that was almost physical. The second was less so. By the fifth session I'd started to trust my own thinking in a new way — not because the model wasn't useful, but because I'd proven to myself I could function without it.

I still use AI tools every day. I always will. But I now know there's a me underneath them. That's not a small thing. I spent fourteen months not being sure.

What helped

Deliberate "offline" problem-solving sessions, facilitated by a supportive senior engineer. Understanding that the anxiety about competence was itself a signal, not a fact. Finding language to describe the experience to a colleague who understood it.

You're not alone in this.

These stories are patterns, not outliers. The quiet exhaustion, the eroded confidence, the productivity that feels hollow — these are signals from real people doing real work. If any of this resonated, the most useful thing you can do right now isn't to make a plan. It's to stop for a moment and breathe.