The Explanation Requirement
The one recovery practice that actually works, and almost nobody is doing it.
Week three. If you have been following the 30-day plan, you are somewhere between starting to see the shape of the problem and slowly remembering why you got into this in the first place.
Today I want to talk about the one recovery tactic that shows up most consistently in our quiz data (across all four tiers, all experience levels, all stack types) as actually working.
And almost nobody is doing it.
The practice:
Before you accept any AI-generated code as final, complete this sentence out loud:
"I added this because ______."
If you cannot finish the sentence, the code is not yours. You have received an answer. You have not learned the problem.
Not reading, explaining
Here is what most engineers do with AI-generated code:
- Read it over
- Check if it seems right
- Accept it if it works
Here is what The Explanation Requirement asks you to do:
- Read it over
- Close the code
- Explain it, out loud, to no one in particular, as if you were teaching someone
"This function takes the user ID, then it queries the database to get their preferences, and then it merges those with the default settings because the query uses a LEFT JOIN..."
The act of explaining reveals gaps you were not aware of. It forces you to confront whether you actually understand what the code does, not just that it works.
Why it works (the research)
Psychologists call this the retrieval practice effect. Robert Bjork's desirable difficulty research shows that the act of trying to retrieve information, rather than just re-reading it, dramatically improves long-term retention and understanding.
When you force yourself to explain code you did not write, you are doing retrieval practice against code you should have written. The gap between what you can explain and what you cannot is exactly the gap where learning used to happen.
AI did not break your brain. It just made it very easy to skip the part where understanding gets built.
Two weeks of The Explanation Requirement
Try it for two weeks. For every piece of AI code you accept, run it through this filter:
- Can I explain what this does in 60 seconds?
- Could I explain the trade-offs in this implementation?
- Would I have solved it this way? Why or why not?
Track how often you can answer yes to all three. Followed week over week, that number is a real measurement of whether you are still in the learning loop.
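If you want to keep score, a minimal sketch of the tracking might look like this. The log format is invented: one entry per accepted snippet, recording whether all three questions got a yes.

```python
# Minimal sketch of tracking the three-question filter over time.
# Log format is invented: (week_number, passed_all_three).

from collections import defaultdict

def weekly_yes_rate(entries):
    """Given (week, passed_all_three) pairs, return the fraction of
    accepted snippets per week that passed all three checks."""
    passed = defaultdict(int)
    total = defaultdict(int)
    for week, ok in entries:
        total[week] += 1
        if ok:
            passed[week] += 1
    return {week: passed[week] / total[week] for week in total}

# Example log: 1 of 3 snippets passed in week 1, 2 of 3 in week 2.
log = [(1, True), (1, False), (1, False), (2, True), (2, True), (2, False)]
```

The absolute number matters less than the direction: a rising yes-rate means the explanation habit is pulling you back into the learning loop.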
The uncomfortable truth: if you cannot explain most of what AI generates, most of what ships is not really yours. That is not a moral failure. It is information about what needs practice.