You're three hours into a feature. The code is flowing. AI is suggesting the next function, the next refactor, the next test. You scan it. Looks right. You accept it. You move on.

Forty minutes later you're in review and something feels off. You can't quite trace what's happening. You've lost the thread.

You've been approving, not building. Your job has become quality gate instead of author. The work is shipping. The craft is evaporating.

This is the middleman feeling, and it's not fixed by working fewer hours or using better tools. It's fixed by one practice, applied every single time, without exception.

The Explanation Requirement: before you accept any AI suggestion, you must be able to write one sentence explaining why it's correct.

Not "this looks right." Not "AI said it, it must be fine." A real reason. "This handles the null case because X." "This uses a hash join because the dataset is large and O(n) lookup is better than O(nΒ²)." "This abstraction makes sense because these two functions always change together."

The sentence is not the point. The act of having to write it is the point.
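To make the act concrete, here is a hypothetical sketch (the function and data are invented for illustration) of what an accepted suggestion plus its one-sentence explanation can look like in practice:

```python
# Hypothetical illustration: an AI-suggested function, accepted only after
# writing the one-sentence explanation as a comment beside the code.

def find_user_email(users, user_id):
    # AI suggestion: users.get(user_id) instead of users[user_id].
    # Explanation (written before accepting): .get() is correct here because
    # a missing user_id is an expected case (e.g. an expired session), so
    # returning None beats raising an unexpected KeyError.
    return users.get(user_id)

# The explanation now lives next to the accepted code.
directory = {"u1": "a@example.com"}
print(find_user_email(directory, "u1"))  # a@example.com
print(find_user_email(directory, "u2"))  # None
```

The comment is the artifact; the processing it forces is the point.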

Why This Is Different From Normal Code Review

You might be thinking: I've been code reviewing for years. I look at suggestions and evaluate them.

Here's the difference. In normal code review, you're evaluating a peer's decision: someone you can question, argue with, learn from. The collaboration is bidirectional. When you wrote the code yourself, you have the context from the work of writing it: the dead ends you explored, the tradeoffs you weighed, the specific constraints of the system.

With AI suggestions, there's no author to question. No dead ends documented. No tradeoff reasoning preserved. The suggestion arrives fully formed, looking correct, and your job, if you let it become that, is just to approve or reject.

That shift from author who reviews to reviewer who approves is the middleman transition. It happens gradually and then all at once. You stop building a mental model of the system. You start managing a queue of AI outputs.

The Explanation Requirement stops that transition. Every time you have to articulate why a suggestion is correct, you're doing the author's job, even if AI did the initial drafting. You're maintaining the reasoning loop that keeps you sharp.

The Cost of Skipping the Explanation

From 3,000+ engineers who've taken the AI Fatigue Quiz, the pattern is consistent:

  • 71% feel like middlemen in their own code β€” approving suggestions, not writing solutions
  • 58% notice their skills declining in areas they used to own completely
  • 44% are seriously considering leaving the industry

These aren't bad engineers. They're engineers who stopped having to explain things to themselves.

The mechanism is subtle but relentless. Every time you accept an AI suggestion without processing it, you skip a micro-cycle: problem → hypothesis → validation → learning. Those micro-cycles are how expertise compounds. Skip enough of them and the compounding stops. You're not worse at your job because you're using bad tools; you're worse because you've stopped learning from the work you were doing.

⚠️
The insidious part: the productivity metrics still look fine. Features ship. PRs merge. Sprint goals hit. The skill erosion is invisible until suddenly it isn't: when you're in an interview and can't trace through a basic algorithm, or in a technical discussion where you realize you've lost the contextual judgment that used to be your competitive advantage.

Why It Works: The Science

1. The Generative Retrieval Effect

Cognitive science has a well-documented phenomenon: retrieval practice produces more durable learning than re-reading. When you generate an answer rather than recognize one, the memory trace is stronger (Roediger & Karpicke, 2006). Writing an explanation is generative. Scanning an AI suggestion and approving it is recognition. The first builds the skill. The second doesn't.

2. The Curse of Knowledge

Once you've seen a solution, it's nearly impossible to un-see it and think yourself back to not-knowing. This is the "curse of knowledge" (Camerer, Loewenstein & Weber, 1989): having the answer collapses the problem-solving process. If you accept AI's solution without first articulating your own hypothesis, AI's answer will always look correct to you, even when you would have reached a different conclusion on your own. The Explanation Requirement forces you to generate a hypothesis before the curse kicks in.

3. The Expertise Reversal Effect

Kalyuga's research (2003) shows that instructional techniques that help novices actually hinder experts. For senior engineers, passively reading AI explanations is particularly dangerous: the explanations are usually written for junior-to-mid audiences and skip the contextual reasoning that experts rely on. The Explanation Requirement forces you to rebuild the expert-level reasoning that passive consumption erodes.

4. The Fading Effect

Skills that go unpracticed fade over time. The Explanation Requirement is low-friction deliberate practice: each explanation is a small act of retrieval that keeps the skill active. Research on expert performance (Ericsson, 2006) consistently shows that maintaining high skill requires ongoing deliberate engagement, not passive exposure.

How to Practice the Explanation Requirement

This is not a framework. It's not a habit tracker. It's one rule, applied at the moment of decision:

The Rule

Before you press Tab, Accept, or Merge on any AI-generated code, write one sentence explaining why it's correct.

One sentence. In your own words. In a code comment, in a notebook, on a sticky note; where you write it doesn't matter. What matters is that you had to know why before you approved.

That's it. Here are the details that make it work:

Start Small: Just Notice First

Week one isn't about doing it perfectly. It's about noticing when you didn't do it. Before bed each day, look at your merged PRs and ask: for each AI suggestion I accepted, could I explain why? If the answer is no, that gap is data. That's your learning edge for tomorrow.

The Explainer Levels

Different suggestions require different depths of explanation:

  • Trivial / Obvious: "This is a standard error handler." β€” One sentence. Fine. The processing happened.
  • Non-obvious: "This handles the case where the user session expires mid-request, redirecting to /login instead of throwing an unhandled 500." β€” You're tracking state transitions.
  • Architectural: "This uses a event-driven pattern here because the existing system already has a message bus we should leverage instead of a new synchronous call." β€” You're reasoning about system design.
  • Gap: "I don't know why this is correct." β€” That's a stop. Don't accept. Investigate first.

The "I Don't Know" Response

When you genuinely can't explain a suggestion, the correct response is not to accept it anyway. It's to:

  1. Tell the AI: "Explain why this approach is better than X"
  2. Read the AI's explanation, trace through the code
  3. If you still don't understand, ask a colleague: "Can you help me understand why this was the right call here?"
  4. If no one can explain it, treat it as a learning opportunity: dig in until you can add it to your mental model

The failure to explain is not a bug. It's the feature. That's where real learning happens.

Variations by Role and Context

For Junior Engineers (Years 0-3)

The Explanation Requirement is most powerful early-career, when mental models are still forming. Every explanation you write is a thread in the web of understanding you're building. AI suggestions often arrive at the right answer via a path you'd have missed; understanding that path is the whole point of early-career practice. Don't skip it.

Strong version: For every AI suggestion, write two sentences: one explaining why it's correct, one explaining what you would have done differently and why.

For Senior Engineers (Years 5+)

The danger for senior engineers is subtler: you think you understand your systems deeply enough that AI suggestions can't surprise you. They often can. The Explanation Requirement catches the cases where AI is confidently wrong, or where it's optimizing for the wrong thing given your context. It's the quality gate you didn't know you needed.

Strong version: For architectural or design-level suggestions, write the explanation in terms of tradeoffs: "This is better than X because A, but worse than Y because B."

For ICs Who Lead Code Review

Make the Explanation Requirement a team norm: any AI suggestion merged into the codebase needs a one-sentence explanation in the PR comment. This is low-overhead for the author and transforms the review conversation. Instead of "LGTM," reviewers can ask "Can you explain why this approach vs. X?", turning every AI-assisted PR into a teaching moment.

For Managers and Tech Leads

Model it in your own PRs. When you use AI to draft an architecture doc or a design proposal, write the explanation of why the approach is correct, and share that reasoning explicitly. It shows your team that the Explanation Requirement isn't about distrusting AI; it's about maintaining the craft-level judgment that makes senior engineers valuable.

Where the Explanation Requirement Breaks Down

🛑
The most common failure: engineers do it for a week, feel like they're getting value, then slowly stop. It feels slower. It is slower. The mistake is treating that slowdown as a bug rather than a feature. The slowdown is the learning. If losing the speed hurts, you're not doing anything wrong; you're discovering what that pace was actually costing you.

Failure Mode 1: Writing Fake Explanations

"This looks correct." β€” That's not an explanation. That's approval with extra steps. A real explanation names a reason: "This handles X case because of Y constraint in the system." If you can't write a real one, treat that as information and go investigate.

Failure Mode 2: Only Explaining Surprising Suggestions

You explain the ones that seem wrong and just approve the ones that seem right. But "seems right" is exactly where automation bias hides. The suggestions that seem obviously correct are often the most worth questioning, because that's where your confirmation bias is most active.

Failure Mode 3: Explaining in AI's Words

You paste what AI told you. That's not generative retrieval; it's just copying a label. The explanation has to be in your words, from your mental model. "We use a hash join because the dataset is large" is an explanation. "The AI suggested a hash join" is just attribution.

Failure Mode 4: Doing It Intermittently

Monday you do it perfectly. Friday you're exhausted and you skip it. Monday you skip again. By month two you've stopped entirely. The Explanation Requirement doesn't work as a best-effort practice. It works as a rule, like wearing a seatbelt. You don't wear it 80% of the time and still get the safety benefit.

What Changes Over Time

After 2 weeks: You notice the moments you can't explain. Some engineers find this uncomfortable; they didn't realize how often they were just approving. That's the awakening. The discomfort is the information.

After 4 weeks: The explanations start coming faster. Your brain has rebuilt the habit of generating before accepting. You start noticing AI suggestions that are subtly wrong: not obviously broken, but not quite right for your specific context.

After 3 months: The practice becomes automatic. You don't think of it as extra work β€” it's just part of how you evaluate code. You can articulate why most suggestions are correct, and when you can't, you know exactly what to investigate. Your technical judgment has strengthened, not eroded.

The surprising outcome many engineers report: AI suggestions seem to get better, because your perception of them has sharpened. You're not reflexively accepting; you're actively evaluating. You notice when AI is brilliant and when it's confidently wrong. Your relationship with the tool becomes deliberate rather than dependent.

The Team-Level Explanation Requirement

If you're a team lead or EM, you can make this structural. It's a low-overhead team norm that has an outsized effect on learning culture:

Team Agreement Draft

Proposed team norm: Any AI-assisted code in a PR should include a brief explanation of why the approach was chosen. This doesn't mean explaining obvious code; it means explaining decisions that involved judgment.

If AI generated the suggestion, the PR author provides the explanation. This is the Explanation Requirement at the team level.

Reviewers: when you see an unexplained AI suggestion, ask for the explanation. Not as a gotcha, but as a learning conversation.
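Teams that want the norm to be checkable rather than honor-system could automate it. This is a minimal sketch under invented assumptions: the "AI-assisted:" and "Explanation:" markers and the check itself are hypothetical, not an existing tool.

```python
# Hypothetical PR check: an AI-assisted PR description must carry an
# "Explanation:" line. Marker names are invented for this sketch.

def check_pr_description(description: str) -> bool:
    """Return True if the PR satisfies the team's Explanation Requirement."""
    lines = [line.strip().lower() for line in description.splitlines()]
    ai_assisted = any(line.startswith("ai-assisted:") for line in lines)
    has_explanation = any(line.startswith("explanation:") for line in lines)
    # Non-AI PRs pass unconditionally; AI-assisted PRs need an explanation.
    return (not ai_assisted) or has_explanation

ok = "AI-assisted: yes\nExplanation: .get() is correct because missing keys are expected"
bad = "AI-assisted: yes\nFixes the login redirect"
print(check_pr_description(ok))   # True
print(check_pr_description(bad))  # False
```

Something this small could run as a CI step or a review bot; the value is less in the enforcement than in making the explanation a visible, expected part of the PR.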

The effect on team culture is subtle but real: instead of AI suggestions being invisible implementation details, they become explicit decisions that someone can explain and the team can learn from. Code review shifts from quality control to knowledge transfer.

What Engineers Say After 30 Days

"I started doing this without calling it anything. Three weeks in I realized I'd caught two bugs AI had suggested because I had to explain why they were correct and realized I couldn't. The explanation caught the errors."

- Senior backend engineer, 8 years experience

"The first two weeks felt like I was doing busywork. Then something shifted. My 1:1s got better because I could actually explain the technical decisions being made, not just describe what shipped."

- Staff engineer, 6 years at current company

"I made it a rule for the whole team. The first sprint it felt pedantic. By the third sprint, our code review conversations had completely changed. People were actually teaching each other through the review."

- Engineering manager, 12-person team

Frequently Asked Questions

What is the Explanation Requirement?

Before accepting any AI code suggestion, write one sentence explaining why it's correct. Not "this looks right": an actual reason, like "This handles the null case because X," or "This uses a hash join because the dataset is large and O(n) beats O(n²)." The act of explaining forces real processing, not passive acceptance.

Doesn't this slow me down?

At first, yes. The first two weeks feel slower; you're now doing something instead of just approving. But the slowdown is the point. You're rebuilding the learning loop AI has been bypassing. Within 3-4 weeks, the explanation becomes automatic and fast, and you still get AI's speed on the implementation. The difference: the work now belongs to you.

If I can't explain a suggestion, is it wrong?

Not necessarily wrong, but it's a signal you don't understand it yet. That's information. It means you should: (1) read the AI's explanation, (2) trace through the code, (3) ask AI to explain the reasoning, (4) update your mental model. The failure to explain is the learning opportunity. Accepting without understanding is the skill-atrophy loop.

Is this for junior or senior engineers?

Both, differently. Juniors use it to build the mental models they haven't formed yet; it's deliberate practice with training wheels. Seniors use it to catch AI hallucinations, maintain their contextual judgment, and avoid the slow drift away from craft. The Explanation Requirement doesn't care about your level. It cares about whether you're learning or just approving.

How is this different from code review?

Code review is peer judgment. The Explanation Requirement is self-diagnosis before acceptance. In review, you're catching errors. Before you accept an AI suggestion, you're deciding whether to learn from it or just pass it through. Review catches bugs. The Explanation Requirement rebuilds the instinct that makes you good at catching bugs in the first place.

Can teams adopt it together?

Yes. Teams adopt it as a norm: no AI suggestion gets merged without a one-sentence explanation in the PR comment. Engineers call it "the Explanation Requirement" and it changes code review culture. Suddenly the conversation isn't "LGTM"; it's "can you explain why this approach rather than X?" It transforms AI use from passive approval to active teaching. The team norm version is particularly powerful because it creates a shared language: "have you run that through the Explanation Requirement?" becomes a normal check-in without being preachy.

Start the Explanation Requirement Today

The practice is simple. The effect compounds over time. Today, in your next PR, try it once. Before you accept the AI suggestion, write one sentence explaining why it's correct.

If you can't, that's the point. That's your edge.

Take the AI Fatigue Quiz
30-Day Recovery Checklist