My Journey with Vibe Coding: From Technical Debt to Cognitive Debt

Adão

After fifteen years in technology, something has changed in the kind of debt I carry. For most of that time, the debt lived in the code. Shortcuts taken to ship faster. Features that worked but were held together with tape. I knew exactly where every shortcut was, and I knew what it would cost to fix.

That debt is not the one keeping me up anymore.

The debt we all knew

In technology, we call it technical debt. You are building a new product or testing an idea. The goal is to get it in front of users as fast as possible. What matters is the core logic, the thing that makes your product different. So you cut corners on the generic parts. Security gets a basic setup. Data storage is good enough. Quality checks are minimal. This was not limited to prototypes. Business-critical features with hard deadlines produced the same result. Ship it, patch it later. That was life with technical debt. I could always point to the exact parts that needed work. The debt was visible, measurable, and familiar.

AI solved the old problem

Vibe coding and agentic development changed this in a way I did not expect.

The generic layers, the ones we used to skip, are exactly what AI handles best. Security frameworks. Data access patterns. Structural code that connects the frontend to the backend. Prompt for any of these and you get solid, functional output. AI eliminated the shortcuts we used to take because it does that foundational work well and fast. I expected this to feel like a pure win. It did not.

The debt moved

Before AI, we skipped the foundation to focus on what mattered: the core product logic. The layer where the real creativity lives. The algorithms, the rules, the edge cases that give a product its competitive edge. Now the foundation is handled. The debt shifted to the place that used to be our strength. The product logic itself. I am calling this cognitive debt. It is new, and I think it is more dangerous than the technical debt it replaced.

Accumulation of Cognitive Debt when Using an AI Assistant

Recent research supports this concern. A study from MIT Media Lab measured brain activity using EEG while participants completed complex tasks with and without AI assistance. The results showed that brain connectivity systematically scaled down with the amount of external AI support. Participants who worked without tools exhibited the strongest and widest ranging neural networks, while those using an LLM showed the weakest overall coupling. The neural pathways responsible for top-down control, working memory, and creative ideation were significantly less engaged when AI handled part of the cognitive work. Most striking: when participants who had relied on AI were later asked to work without it, their brain connectivity did not return to the levels of those who had practiced independently. The researchers described this as an accumulation of cognitive debt, where repeated reliance on AI replaces the effortful cognitive processes required for independent thinking, leading to diminished critical inquiry and decreased creativity over time.

The study focused on writing, not code. But the parallel is direct. Writing an essay and writing business logic both demand the same core cognitive operations: organizing ideas, making structural decisions, holding multiple constraints in working memory, and generating original solutions. When AI takes over those operations, the brain engages less. And what does not get exercised does not strengthen.

Where I felt it

I have been building a personal calendar manager using agentic development through OpenCode. The idea is simple. I have tasks with different properties: category, noise level, focus requirements, due dates, priorities. I want a system that fits those tasks into gaps in my calendar so I do not get overbooked with meetings and actually find time to do focused work. The agents generated the code fast. Security, data layer, API structure, all of it came together without me writing a line. Then I started using it. I got creative. I wanted to adjust the scheduling logic, add edge cases, tweak the rules to fit what I was learning through actual use.
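To make the scheduling idea concrete: the post does not show the generated code, but the core logic it describes, fitting prioritized tasks into free calendar gaps, can be sketched in a few lines. This is a simplified, hypothetical version (only duration and priority, a greedy first-fit strategy); the names `Task`, `schedule`, and the gap representation are my own, not from the actual project.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Task:
    name: str
    duration: timedelta
    priority: int  # lower number = more urgent

def schedule(tasks, gaps):
    """Greedy sketch: place the most urgent tasks first, each into the
    earliest free gap that is long enough.

    gaps is a list of (start, end) datetime pairs, assumed sorted and
    non-overlapping. Returns {task name: (start, end)} for placed tasks;
    tasks that fit nowhere are simply left out of the result.
    """
    placed = {}
    free = list(gaps)
    for task in sorted(tasks, key=lambda t: t.priority):
        for i, (start, end) in enumerate(free):
            if end - start >= task.duration:
                placed[task.name] = (start, start + task.duration)
                free[i] = (start + task.duration, end)  # shrink the used gap
                break
    return placed

# Example: two free gaps in a day, a long focus task and a short one.
day = datetime(2025, 1, 6)
gaps = [(day.replace(hour=9), day.replace(hour=10)),    # 60 minutes
        (day.replace(hour=13), day.replace(hour=15))]   # 120 minutes
tasks = [Task("deep work", timedelta(minutes=90), priority=1),
         Task("email", timedelta(minutes=30), priority=2)]
plan = schedule(tasks, gaps)
# "deep work" needs 90 minutes, so it lands in the afternoon gap;
# "email" then takes the start of the morning gap.
```

The point of the anecdote is that this layer, the rules deciding where a task lands, is exactly the part worth understanding line by line, because it is the part you will want to change after real use.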

And I could not do it.

For the first time in fifteen years, I did not know where to change the core logic of my own product. I did not understand the architecture the AI had chosen. The code was organized in ways that contradicted every pattern I have built intuition around. The structure was alien to me. With technical debt, I always knew what was missing. With cognitive debt, I did not even understand what was built.

Why this debt is harder to pay

Technical debt has a clear cost. You know the gap. You estimate the effort. You fix it.

Cognitive debt works differently. Before you can change anything, you have to learn the code first. Understand the decisions the AI made. Internalize the structure well enough to take ownership of the next change. This is not about spotting what is missing. It is about understanding what exists. The more you let AI generate the product logic without staying close to it, the less you can adjust, iterate, and create. You lose access to the creative part of building software. And I believe that creative process is what separates good products from everything else.

The MIT study found the same pattern in a different domain. The researchers recommend a balanced approach: leverage AI for routine assistance, but still challenge yourself to perform core cognitive operations independently. In their words, this is how we can harness the benefits of AI support without impairing the natural development of the brain's task-related networks.

What I am starting to think about

I have not solved this yet. I am reflecting on it. As I wrote in my last post, the discipline that worked for fifteen years still applies: go incremental, check your work, stay close to what is being built. It just applies to a different layer now. What I believe is that the answer lives in guidelines. Not guidelines for the code itself, but guidelines that describe how I want things built. Architecture principles. Patterns I expect agents to follow. Requirements that define structure, not just behavior. Described in plain English, clearly enough that an agent can produce work I can read, understand, and own from the first iteration.
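The post does not include such a guidelines file, but as a hypothetical sketch, an agent-facing document along these lines might read:

```markdown
# Architecture guidelines for agents (example)

- Keep all scheduling rules in a single module; never spread business
  logic across API handlers or data-access code.
- Prefer plain, named functions for core logic over deep class
  hierarchies, so each rule is independently readable and testable.
- Follow the repository's existing layering: routes call services,
  services call repositories. Do not skip layers.
- Before generating a new pattern, match an existing one in the codebase.
```

The constraints describe structure, not behavior, so the generated code lands in shapes the author can read, understand, and own from the first iteration.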

As a CTO, I think about this beyond my own experiments. My teams will face the same shift. The people who thrive will be the ones who learn to describe their philosophy in a way that AI tools can follow. AI can make a great team ten times more capable. But only if that team stays close enough to the product logic to own it. The moment you step too far back, cognitive debt accumulates. And unlike technical debt, you might not notice until you try to make a change and realize you cannot.

The goal is not to build less. It is to think more clearly about what you want, and express it in a way that keeps you in the creative loop. If you are experimenting with AI in development and feeling this same disconnect, I would like to hear how you are dealing with it.