21 Dec

⌨️ Vibe coding and the slow death of code quality

Vibe coding has become one of the buzzwords of 2025.
When I first heard the term, I found it amusing. I am no longer sure that it is.

Vibe coding describes a way of writing software where prompts replace design and quality concerns. As long as the program runs and produces the expected output, everything is considered fine—regardless of the quality of the code itself. In that sense, even someone who is not a developer can now “code.” I have no doubt that AI can generate working code. Like most professional developers, I use it myself: mainly for boilerplate, documentation, and sometimes to help analyze legacy code or explore solutions to a problem.

AI as a tool, not a substitute


Used well, AI is a powerful assistant. But it remains an assistant, and one that must be supervised. Sometimes it helps you move faster; sometimes it slows you down. It can suggest good solutions, but it can also hallucinate dependencies, invent APIs, or generate code with significant security vulnerabilities. Supervising all of this takes time, judgment, and experience.
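Part of that supervision can itself be automated. As one illustration (a minimal sketch, not a complete defense; the allowlist and the package names are hypothetical), a team might screen AI-suggested dependencies against a vetted list before anything gets installed, catching hallucinated package names early:

```python
# Sketch: one small supervision step for AI-generated code —
# flag proposed dependencies that are not on a team-vetted allowlist.
# The allowlist and package names below are illustrative, not a real policy.

VETTED_PACKAGES = {"requests", "numpy", "flask"}  # maintained by the team

def check_dependencies(proposed: list[str]) -> list[str]:
    """Return the proposed dependencies that are NOT on the allowlist."""
    return [pkg for pkg in proposed if pkg.lower() not in VETTED_PACKAGES]

# An assistant suggests requirements; "fastjsonx" is a made-up name of the
# kind a model can hallucinate — and attackers can register such names.
suspicious = check_dependencies(["requests", "fastjsonx"])
print(suspicious)  # → ['fastjsonx']
```

A human still has to decide what to do with the flagged names; the point is that the check exists at all, rather than trusting the generated requirements blindly.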

There is also another concern. Using too much AI may slowly erode those very skills. By delegating too much, we risk losing the ability to reason deeply about code over time. I am personally cautious for that reason—not because I am resistant to modern tools, but because I still want to be able to do my job correctly in a few years or when the AI bubble inevitably bursts.

The cost of not looking at the code

Vibe coding is something else entirely. It is the moment when we stop supervising the quality of the code and only validate the result. The output looks correct, tests pass, the demo works—and we move on. The code itself becomes incidental.

Even if we choose not to look at the code, one might still argue that its quality is good enough. Studies suggest otherwise: they show that AI-generated code contains about 1.7 times more issues, including readability problems, incorrect failure handling, and even security vulnerabilities. Notably, these studies focus on AI used as an assistant, not on fully unsupervised AI-generated code, so for vibe coding the results could reasonably be expected to be worse.

Writing code for durability

For some projects, this may be acceptable. Personal projects, throwaway prototypes, or experiments that will never be touched again can tolerate this approach. But most professional software does not fall into that category. Enterprise systems are expected to evolve, to be secured, to be audited, and to be maintained by humans over long periods of time. In that context, unreadable or poorly structured code is not a minor inconvenience; it is a liability.

Blindly trusting what AI produces can destroy a business. One enterprise learned this the hard way when a vibe-coded application ended up destroying its database. Who is responsible when AI-written code fails?

As long as code is meant to be read, understood, modified, and guaranteed by humans, it cannot be treated as a disposable byproduct. It must be written for humans. And only a human can validate that it is done right. Structure, clarity, intent, and durability still matter—perhaps more than ever in a world where producing code has become easy.

This tension between speed and understanding, between results and responsibility, is not new. But AI makes it impossible to ignore. And it is precisely why we need to rethink what it means to write good code—not just code that works today, but code that lasts.
That question, and what it implies for our craft, is at the heart of The Art of Code.