The Irony Worth Sitting With
You went into software engineering because you wanted to make things. The making was the point.
Now you ship features faster than ever. The code works. The tests pass. The velocity metrics look great.
And something underneath it all is going quiet.
This isn't burnout in the traditional sense. It's not exhaustion from overwork. It's something more like the difference between cooking a meal and ordering one. Both result in food on the table. Only one of them teaches you anything.
This is what vibe coding feels like from the inside.
What Vibe Coding Actually Means
Vibe coding is a term, popularized by Andrej Karpathy in early 2025, that spread quickly through developer circles to describe a workflow where you prompt an AI to write code, review the output, accept most of it as-is, iterate with more prompts, and ship. The goal is a working result that feels like you built it – but the hands-on creative work happened mostly in the prompting, not the making.
The term was originally used somewhat playfully. It named something real: the ability to produce working software without writing it line-by-line. But the playful framing obscured how disorienting the experience could be.
Engineers who described vibe coding as their primary mode often reported something specific: the work was getting done, the code was shipping, and they felt progressively less connected to what they were actually doing.
The work is yours in name and legal attribution. It isn't yours in the way that matters: you didn't make it. You supervised its making.
Why It Spreads
Vibe coding isn't a personal failing. It's a structural trap – and the individual incentives inside it are almost perfectly calibrated to make it self-reinforcing.
Incentives align around output, not understanding
Your company measures output. You ship features. The code exists. Tests pass. The velocity metrics look great. Vibe coding produces all the right signals in all the right dashboards.
AI got good enough to be dangerous
The newer models produce code that is frequently correct – not just syntactically, but semantically. This is genuinely useful. It's also genuinely dangerous, because when the output is mostly right, you stop checking it as closely.
Pressure to keep up
There's a visible productivity gap between engineers who use AI aggressively and those who don't. Teams notice who's shipping more. The natural response is to match the visible output – which means more AI, less friction, less learning.
The junior engineer problem compounds it
Junior engineers who learn through AI prompting have no baseline to know what they're not learning. They produce working code. The signal that something is missing – the productive struggle that teaches problem-solving – never fires.
The Four Stages of Vibe Coding Drift
Most engineers who slide into vibe coding don't notice it happening. The transition is gradual, and each individual step feels reasonable. The accumulated effect is what catches up with you.
Stage 1: Useful Tool
You still write most of the code yourself. You use AI as a research assistant, a collaborator you can ask questions. You learn from the outputs. Your skills stay sharp.
This is fine. This is what AI coding tools were supposed to be.
Stage 2: Rising Dependency
You start using AI for things you could do but don't want to. The code is faster. The research is easier. You tell yourself you're being efficient.
Your skills are still there. But you're not using them as much, and that shows up slowly.
Stage 3: Authorship Transfer
You start treating AI output as a first draft that's good enough to ship. You prompt, review, tweak slightly, commit. You notice you can't always explain in detail why the code works – but it does work, and that's what matters.
The mental model of ownership is fraying. You know you've outsourced the authorship.
Stage 4: Passive Shipping
You prompt, the code works, you ship it. You couldn't write this implementation from scratch without AI, and you know it. But it ships. The tests pass. The velocity is high.
You are a reviewer and curator, not a builder. The builder is the AI.
What Gets Lost
The losses from vibe coding aren't obvious until they're significant. This is part of what makes it insidious – the most important costs are invisible while they're accumulating.
Debugging without a mental model
Something breaks, the AI explains the error, you apply the fix, it works, you move on. What didn't happen: you didn't encounter the bug in a way that built your understanding of the system. You received a solution without the problem. Your debugging skills didn't improve. Six months later they're noticeably worse and you can't point to when it happened.
The calibration of effort and outcome
Working hard on something and feeling the difficulty is part of how your brain registers growth. The discomfort of not knowing is data. The relief of figuring it out is the reward signal that says "you learned something." When AI resolves the discomfort before you feel it fully, the reward signal fires weakly or not at all. You're growing less from the same work.
Confidence you haven't earned
Vibe coding produces code that works well enough that nobody questions it. This creates a competence gap – you're producing at a level that looks like expertise, but the underlying understanding isn't keeping pace. The gap between your confidence and your actual capability widens. Most engineers can feel this as low-level anxiety: "I couldn't actually build this from scratch." That anxiety isn't impostor syndrome. It's a genuine calibration problem.
The sense of authorship
Many engineers in vibe coding mode describe a quiet loss of connection to the work. They shipped a feature. It works. They had it reviewed by AI and approved by humans. And they feel fine. They also feel nothing in particular. This isn't about gatekeeping or being anti-AI. It's about the fact that most engineers went into this field because they wanted to make things. When that part goes quiet, something important goes quiet too.
The diagnostic: once a week, close all AI tabs and try to build something small – a utility function, a refactor, a feature from scratch.
If that feels hard in a way that isn't just about time – if it feels like you're not sure how to start, or you're not confident your code is right without checking it against AI – that muscle is weaker than it was.
The Difference Between Using AI and Renting Your Brain
Using AI well means your judgment is in charge. You know what you want to build, you understand the tradeoffs, and AI is a productive tool that gets you there faster. When AI is wrong, you catch it. When AI is right, you learn something.
Renting your brain means AI is in charge. You're prompting, reviewing, and mostly accepting. Your skills are slowly tracking downward and you're not fully aware of it. The code you ship is working code that you don't fully own.
The line between the two isn't about how much you use AI. It's about whether you're still the one driving.
The Question Worth Sitting With
Vibe coding isn't always a problem. Sometimes you genuinely don't need to understand something deeply. Sometimes shipping fast is the actual priority. The problem is when vibe coding is the only mode you have, and when you've stopped noticing the difference between the times when going fast is fine and the times when it's costing you something you care about.
What Actually Helps
One no-AI session per week
Not as a rule. Not as gatekeeping. As a diagnostic and a recalibration. Start with 90 minutes. No AI. Build something real. It doesn't have to be useful. It has to be yours.
Read code you didn't write
One of the underappreciated losses in vibe coding: you stop reading other people's code the way you used to. Deliberately reading code – open source, library internals, frameworks – is a way to keep learning even when you're not building from scratch. It exercises the pattern-matching that vibe coding lets atrophy.
Ask AI to explain, not just produce
When you use AI to generate code, before you ship it, ask it to explain why it chose this approach. Ask about alternatives it considered. Ask what would break if you changed this variable. This is AI being used to augment your judgment rather than replace it.
Track your confidence-to-capability ratio
Every few weeks, before you ask AI anything, take 5 minutes and write what you think the solution might be – an actual hypothesis based on what you know. Then check against what AI produces. Not to see if you were right. To see how close you were and where the gap is.
Be honest about what you're trading
Navigating this well is less about technique than about honesty: staying clear-eyed about what you actually understand, what you've outsourced, and what you're quietly losing.
That honesty is harder than it sounds. But it's also the only thing that makes the difference between being a good engineer in 2025 and slowly becoming someone who can only work with AI in the room.
The Closing Truth
The engineers who navigate this well aren't the ones who use AI least. They're the ones who stay honest about what they actually understand, what they've outsourced, and what they're quietly losing.
Vibe coding can be a useful mode when it's intentional โ when you choose it, understand the tradeoffs, and maintain the parts of your skill that matter most.
The trap is drift: the gradual, invisible shift from "I'm using AI to build things" to "AI is building things and I'm supervising."
The way to avoid the trap isn't to reject AI. It's to stay awake to the difference.
Frequently Asked Questions
Is vibe coding always a problem?
No. Sometimes shipping fast is the actual priority and you genuinely don't need to understand something deeply right now. Sometimes vibe coding is the right call for a specific task. The problem is when it's the only mode you have – when you've stopped noticing the difference between the times when going fast is fine and the times when it's quietly costing you something.
How do I know whether my skills have already slipped?
The practical diagnostic: once a week, close all AI tabs and try to build something small from scratch. If that feels hard in a way that isn't just about time – if it feels like you're not sure how to start, or you're not confident your code is right without AI checking it – that muscle is weaker than it was. That's your signal.
Should I stop using AI to code?
No. The goal isn't to reject AI. The goal is to stay in charge of the relationship. Using AI well and renting your brain look similar from the outside – it's the internal experience that differs. The question is whether you're still driving.
I'm a junior engineer who learned to code with AI. What should I do?
The most important thing is having a baseline. You need to know what you're not learning yet – which means finding ways to encounter the productive struggle that AI is currently bypassing for you. One no-AI session per week. Deliberately reading code from projects you didn't build. These won't replace AI learning, but they'll give you the contrast that lets you know what's actually yours.
What if my team's culture encourages vibe coding?
Start with the explanation requirement: for any significant AI-assisted decision, write one paragraph in your own words about what happened and why. Not what AI said. What you understand now that you didn't before. This practice keeps you sharp and is genuinely useful for the team – it creates a record of understanding, not just output. Suggest it as a team practice, not a personal preference.
Isn't this just gatekeeping?
Not at all. There's a genuine question here: if AI can build software, does the individual skill still matter? For the person who wants to be a great engineer, yes – the skill matters for their own growth, job security, and the quality of their work over time. For teams and organizations, the skill also matters because expertise handles the edge cases AI can't. The concern isn't about who gets to build things. It's about engineers who want to be genuinely excellent at their craft slowly losing the thing that makes them excellent, without noticing it happening.