Self-Assessment
What Kind of AI-Fatigued Engineer Are You?
AI fatigue looks different depending on who you are, your history, and your relationship with craft. Four patterns that keep showing up.
AI fatigue is not one experience. It is a cluster of related experiences that share certain features (the hollowness, the ownership anxiety, the professional dread) but express themselves differently depending on your career stage, your relationship to craft, and how you arrived at using AI tools in the first place.
The four archetypes below are not neat boxes. You may see yourself in more than one; that is common. The goal is recognition. Once you can name your specific pattern, the path forward becomes clearer. Read each one honestly.
Quick finder: which resonates most right now?
Choose the statement that feels truest. Not the one that sounds best, but the one that is actually true for you today.
When I imagine coding something significant completely without AI for the next week, I feel…
When AI generates code that ships to production, I feel…
The last time I felt genuinely proud of my work was…
→ Read the full profile below
The Over-Reliant
"I ship. But I'm not sure what I actually know."
You learned to code in the AI era, or you adopted AI tools early enough that they became load-bearing infrastructure in your workflow before you had fully internalized the fundamentals. You can build things. You can debug. You can navigate large codebases. But in quiet moments, you wonder: could I do any of this without the AI?
You have probably never had to find out. And that question has started to feel heavier over time.
Signs this is you:
- Opening a new file triggers an immediate instinct to prompt rather than think
- Whiteboard questions or live coding interviews fill you with a specific kind of dread
- You can fix bugs but struggle to articulate what the code was doing before the fix
- Reading documentation feels unusually slow; you prefer AI summaries
- When asked why certain architecture decisions were made, you realize you accepted them without understanding them
- Side projects feel impossible without AI; not just slower, but genuinely unclear how to start
Here is the thing nobody says clearly enough: you are not as lost as you think. The skills are there in draft form; they were building in the background while you were shipping. What you are missing is not ability. It is confidence: the specific kind that only comes from solving problems without a safety net.
The Over-Reliant archetype is actually the most recoverable of the four, because the underlying capability is largely intact. It has just never been tested in isolation. Think of it like a muscle that has never been loaded: it exists, but it has not been asked to do real work.
What actually helps
Start small and private. Pick one tiny problem (a small script, a kata, a personal project) and build it completely without AI assistance. No completion, no Copilot, just you and documentation. It will feel slow. That slowness is information, not failure. You are discovering what you actually know. Most engineers in this archetype are genuinely surprised by how much they can do. Do this once a week for a month and pay attention to what changes.
Also useful: read the source code of libraries you use. Not AI summaries, but the actual code. This is how you rebuild the pattern recognition you skipped by outsourcing comprehension to AI.
See: Daily Practice · Recovery Guide
The True Believer
"I was an early adopter. Now I can't remember why I loved this work."
You were not skeptical of AI tools. You were enthusiastic. You shipped more. You showed colleagues how to use Copilot or Cursor. You may have been the person advocating for wider adoption on your team. And for a while, it genuinely felt like a superpower.
Somewhere along the way, gradually enough that you almost did not notice, the work stopped feeling like yours. You still ship. The output is fine, often better than before by measurable metrics. But the feeling is hollow in a way that is hard to articulate without sounding ungrateful or precious.
You used to love solving hard problems. Now you are not sure you are solving them. You are directing the solving of them. And the distinction matters more than it should.
Signs this is you:
- You ship more than ever but feel less satisfied than you did when you shipped less
- You cannot remember the last time you solved something hard and felt genuinely proud of how you did it
- Side projects have lost their appeal: the magic of building something was in the figuring-out, which AI now bypasses
- You catch yourself feeling vaguely envious of engineers who seem to still love their work
- Looking at old code you wrote before heavy AI use gives you a strange feeling of recognition and loss
- Explaining your code to someone else feels difficult because you did not make all the decisions in it
The True Believer archetype is characterized by a particular kind of grief: you gave up something you did not know you would miss. The speed was real. The capability increase was real. And so is the loss.
This archetype is more common than it appears in public conversation, because the people experiencing it were often the loudest advocates for AI adoption. Admitting the downside feels like recanting.
What actually helps
Reclaim authorship deliberately. Pick one area of your codebase, ideally the one you care most about, and declare it yours. Write it without AI, from first principles. It does not need to be perfect. It needs to be yours in the way things used to be yours.
Also: lower the bar for what counts as meaningful work. The True Believer often waits for the next impressive AI-assisted project. But the satisfaction you are missing came from ordinary problems solved with your own thinking, not spectacular problems solved fast. Small, complete, understood. That combination is what you are looking for.
The Burned-Out Senior
"I'm the last one reading the code."
You have been doing this long enough to know what good code looks like, what subtle bugs feel like before they surface, and what technical decisions have ten-year consequences. You have earned your judgment the hard way. And now you are spending most of it reviewing code you did not write, at a pace that does not leave enough time to actually think.
The team ships faster. PRs are bigger. Your review queue never clears. And you are the only person in the room who is genuinely reading each line โ everyone else seems content to approve things that look reasonable. You feel like the last human in the loop, and it is exhausting in a way that nobody around you seems to understand, because from the outside everything looks fine. Velocity is up.
Signs this is you:
- Your review queue is larger than it has ever been and does not shrink
- You regularly find subtle bugs in AI-generated code that nobody else caught
- The cognitive cost of reviewing code you did not author, at high volume, is not visible to your organization
- You feel responsible for catching what AI misses in a way that is never formally acknowledged
- Design reviews and architecture discussions feel truncated; there is no time for slow thinking anymore
- You are less patient than you used to be. Less willing to explain things. Less energized by the hard problems.
What the Burned-Out Senior is experiencing is often invisible because it looks like success. Velocity is up. The team ships. Bugs get caught, because you caught them. The senior engineer looks like a superhero, and nobody asks what the cognitive cost of that is.
You are not a bottleneck. You are the quality control. The problem is that the organization has not made the infrastructure investment to match the pace it demands: the review process has not scaled with the generation speed, and you are absorbing the gap.
What actually helps
Name the asymmetry explicitly. The cognitive cost of reviewing AI-generated code at scale is different from reviewing human-written code, and it needs to be said out loud in your organization. Not as a complaint, but as a structural observation that should shape review standards, PR size limits, and team norms.
Also: protect time for the work that requires your judgment at its best. Not all problems need your deepest attention. Be explicit about which ones do, and protect those hours from the review queue. You are most valuable when you are thinking slowly, not approving quickly.
The Skeptic-Adopter
"I was never a believer. But I adopted anyway. And now I'm exhausted by both."
You never bought the hype. You read the papers on automation bias. You noticed early that AI tools introduced a specific category of plausible-sounding mistakes that were harder to catch than bugs in human code. You raised concerns in meetings that were nodded at and ignored. You adopted anyway, because not adopting felt like professional suicide and you are practical.
Now you are in a strange position: using tools you do not fully trust, on a team that trusts them too much, responsible for the quality of output you feel ambivalent about. You are tired of being right about things nobody wants to hear. You are tired of being the skeptic in a room full of believers. And you are starting to wonder if the exhaustion of constant vigilance is sustainable.
Signs this is you:
- You use AI tools but feel a low-grade wariness about them that never fully goes away
- You have noticed patterns of subtle bugs in AI code that you have quietly fixed without making a point of it
- Conversations about AI enthusiasm in your team or organization leave you feeling isolated and tired
- You feel responsible for a quality standard nobody else seems to be holding
- You have read research on cognitive load, automation bias, or skill atrophy, and you see it happening around you
- The combination of "using tools you distrust" and "being the only one who distrusts them" is its own specific kind of lonely
The Skeptic-Adopter is often the most isolated archetype, because the dominant narrative leaves no room for this position. AI is either embraced enthusiastically or rejected Luddite-fashion. Nuanced, evidence-based skepticism from someone who uses the tools anyway occupies a position the industry has not yet made legible.
Your instincts are probably more right than your environment has acknowledged. The discomfort of holding a minority view you cannot act on fully is real and legitimate exhaustion.
What actually helps
Find one person, just one, who shares your skepticism and create space for that conversation. The isolation of the Skeptic-Adopter is often what is most exhausting, not the skepticism itself. When that loneliness is reduced, everything else is more manageable.
Also: document your quality catches. When you find a subtle AI-generated bug, note it privately: date, type, severity. This does two things: it validates your judgment with evidence, and over time it gives you data to bring to conversations about review standards. You do not have to make your case on vibes. You can make it on a log.
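A log like this can be as simple as a local CSV file. The sketch below is one possible shape, not a prescribed tool: the file name, field names (date, category, severity, note), and example entries are all illustrative assumptions.

```python
# Minimal sketch of a private review-catch log, kept as a local CSV file.
# The schema (date, category, severity, note) is one possible choice, not a standard.
import csv
import datetime
from collections import Counter
from pathlib import Path

LOG_PATH = Path("review_catches.csv")  # hypothetical file name
FIELDS = ["date", "category", "severity", "note"]

def record_catch(category: str, severity: str, note: str) -> None:
    """Append one caught issue to the log, writing the header on first use."""
    is_new = not LOG_PATH.exists()
    with LOG_PATH.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if is_new:
            writer.writeheader()
        writer.writerow({
            "date": datetime.date.today().isoformat(),
            "category": category,
            "severity": severity,
            "note": note,
        })

def summarize() -> Counter:
    """Count catches by category: the evidence to bring to a review-standards conversation."""
    with LOG_PATH.open(newline="") as f:
        return Counter(row["category"] for row in csv.DictReader(f))

# Illustrative entries, not real data.
record_catch("off-by-one", "medium", "AI-generated pagination loop skipped the last page")
record_catch("hallucinated-api", "high", "called a library function that does not exist")
print(summarize())
```

The point of the summary step is that categories recur: after a few weeks, "here are fourteen logged hallucinated-API bugs" is a far stronger input to a team norms discussion than a vague impression.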
What if I see myself in more than one?
That is common, and it does not mean the archetypes are not useful; it means your experience is layered. The most common combinations:
- Senior + Skeptic: Very common. Years of experience build pattern recognition, and that pattern recognition is what makes the skepticism evidence-based rather than instinctive.
- True Believer → Over-Reliant: The trajectory of many engineers who adopted early. Started enthusiastic, ended dependent. Both things are true at different time horizons.
- Junior + True Believer: Early-career engineers who loved AI from the start and have since started to feel the hollowness of work they do not fully own.
When you see yourself in multiple archetypes, read the recovery guidance for each and look for the common thread. Usually there is one practice that addresses the overlap.
What all four types share
Despite the different profiles, four things are almost universal across all AI fatigue archetypes:
Isolation
The feeling that you are the only one experiencing this, because the public narrative insists AI only makes things better.
Unnamed experience
Knowing something is wrong but lacking the vocabulary to describe it to yourself or others.
Internalized blame
The assumption that feeling bad about AI tools means you are the problem: too slow, too precious, falling behind.
Ownership loss
Work feels less yours than it used to: less authored, less invested with your judgment and your decisions.
Recognition (knowing that what you are experiencing has a name, that others experience it, that it is not a character flaw) is itself the first and most important step toward recovery.
Next step
Get a calibrated read on your fatigue level
The 5-question quiz takes 2 minutes and gives you a tier with specific guidance. Runs entirely in your browser. No data collected.
Take the Quiz →
Recovery Guide →