Last week a senior engineer told us something that has stayed with us ever since.
He said: "I passed every code review. I shipped every feature. And then I tried to do a take-home technical assessment for a new role, without AI, and I couldn't finish it."
He is not alone. This is one of the most common things we hear from engineers navigating AI fatigue: the gap between what they can accomplish and what they understand has been widening so slowly they did not notice until it became obvious.
Not because they are not talented. Not because they are not working hard. Because the system is built to reward the completion of tasks, not the maintenance of capability. And when those two things diverge, when you can ship but not solve, when you can review but not build, when you can ship features but not complete the assessment you are interviewing for, there is no alert for that.
This is the competence illusion.
The competence illusion has a specific structure: you are performing at a level that looks like competence from the outside, and the performance is close enough to actual competence that the difference is invisible, to you as much as to anyone else.
It is different from pretending. Pretending implies awareness. The illusion is more insidious: you genuinely believe you are still capable of the things you used to be capable of, because the work still gets done, the features still ship, and nothing in the process tells you the difference.
Until something does. A technical interview. A new job requiring you to start from scratch. A production incident at 2am. A moment where the AI is not available and the task still lands on your desk.
These moments are where the illusion breaks. And they come later than they should, because the work gets done in the meantime.
The cruelest version: you find out you have a capability gap at the worst possible moment, exactly when you need the capability most. Not because you were careless. Because the system never gave you a signal to notice earlier.
Think about the way a feature actually gets built in an AI-assisted workflow:
You understand the problem. You write a prompt. The AI generates code. You review it. You iterate. You merge. You ship. The ticket is closed.
What happened in there that looks like your competence: problem understanding, code review, iteration decisions, merge judgment.
What did not happen: the generation of the code itself. The part where your brain was actually doing the hard thing.
The ratio of generation to judgment in that process has been shifting. The generation is now almost entirely the AI's job. The judgment remains yours, but judgment without generation is a different skill. It is a real skill, but it is not the same as the complete capability you had before.
What still works: knowing whether the code is right
What's eroding: being able to write it yourself
Both things are real. The illusion is that they feel the same when you are doing them. Reviewing AI-generated code feels like writing code. It involves the same motions: reading, evaluating, modifying. Your brain is active. You are engaged. You are not doing nothing.
But you are not doing the thing that built your capability. You are doing a downstream task that maintains performance but not capacity.
Classic Dunning-Kruger: incompetent people overestimate their competence because they do not know enough to recognize their mistakes.
The AI-assisted version is more complicated: competent people underestimate their capability gap because the AI's outputs are so good they look like what the competent person's outputs used to look like.
The model raised the floor. Your performance now includes outputs that are not yours. You evaluate them at a high level because you are genuinely competent. But the bar you are evaluating against has been raised by the AI, and the gap between your internal capability and the external output is invisible because the outputs look right.
You are not performing below your ability. You are performing above it, and the excess is the AI's contribution, which you cannot easily separate from your own.
You are not losing your intelligence. You are losing the relationship between your intelligence and your output, and that relationship is your sense of competence.
Junior engineers usually know something is off. They feel the anxiety of not knowing, the impostor syndrome of being new, the awareness that they could not do this without help. They have a relatively accurate calibration of their capability.
Senior engineers have more to lose. They have years of genuine competence to point to. They have the track record, the reputation, the institutional knowledge. When their capability starts to erode, the evidence of their past competence is real, and it makes the present erosion harder to see.
They ship features. They review architecture. They mentor junior engineers. They are clearly still valuable. And yet the specific thing they used to be able to do (open a blank file, think hard, write the code) has quietly become inaccessible without the AI in front of them.
The senior engineer's competence illusion is the most dangerous version because the person experiencing it has the most credibility, and therefore the most to lose before anyone notices.
Across 2,147 engineers who have taken the AI Fatigue Quiz on the site, one number makes the competence illusion concrete: only 12 percent still maintain any deliberate separation between their own generation and the AI's. The system has not given most engineers a reason to maintain that separation, so almost 90 percent have drifted into the full AI-assisted workflow, with the competence gap growing invisibly in the background.
You do not need a formal assessment. You need a small, deliberate experiment:
Pick something you did recently with AI help: a feature, a refactor, a bug fix, something representative of your day-to-day. Before you open any tools, spend 20 minutes on paper or a whiteboard thinking through the approach. Write down your hypothesis: what is the structure, what is the approach, where are the risks.
Then build it without AI. Not to ship. Just to see.
Then compare: your hypothesis against what you would have generated with AI, and your ability to build it against the AI's output. That comparison is data. Real data about the gap.
If the gap surprises you, that is not a verdict on your worth. It is a map of where you stand. And maps are how you get somewhere.
Want a structured picture of where you are?
The AI Fatigue Quiz gives you a tiered profile of where you are on the capability-comfort spectrum, with specific signs of the competence illusion in your daily workflow. Takes 90 seconds.
This Friday, before the end of the workday: open a new file. No AI. No autocomplete. No linting suggestions. Write a function, any function, the kind you might have written two years ago. Not for work. Not to ship. Just to prove to yourself that the relationship between your thinking and your code is still there.
If it is not, if the file stays blank, if you reach for the AI reflexively, if what used to be automatic now requires effort, then that is the data point. And that is where the rebuilding starts.
The capability does not go away. But it does go quiet. And quiet capability is harder to maintain than no capability at all, because it keeps offering the illusion that it is still there.
See you next week.
– Sunny + The Clearing team