The Illusion of Speed in the AI Era
In a changing world, where new artificial intelligence tools arrive every day, the promise of speed and productivity seems to be the new frontier for CTOs and IT departments.
The mantra resonating in managers' heads these days is:
we need to be faster
"VibeCoding," the promise that anyone can build software without knowing how to program, is the flagship tool of this speed, and the vassals of this tool are coding agents: a platoon of virtual professionals that can arrive in waves on our projects, generating code and technical debt at impressive speeds.
However, this acceleration might hide an insidious danger: the creation of a generation of "virtual seniors"—professionals who, while capable of delivering products rapidly thanks to AI, lack the deep skills necessary to tackle the most complex technical challenges.
A disservice or a mockery?
Until today, professional growth from junior to senior was a long and winding path of errors, corrections, and hands-on learning. AI risks skipping this crucial phase, handing everyone tools that let them bypass "paying their dues" (the gavetta) and become hyper-productive immediately.
I have written in the past about how important "paying your dues" is for professional growth, but now this phase risks being bypassed entirely, not by a course, but by a prompt that spoon-feeds us every step of the way.
It is a perverse mechanism disguised as productivity, one that could turn into a weapon pointed at the very company demanding speed, creating "apparent professionals": people capable of achieving excellent results immediately, yet unable to adapt their work to the business needs, simple or complex, coming from management.
The questions we should ask ourselves are: how can CTOs avoid this drift? How can we counter the de-skilling process currently underway? What tools allow us to avoid filling companies with empty shells?
Redesigning technical mentorship to avoid competence collapse
There is great enthusiasm for how AI accelerates everything: from writing code to creating documents, work plans, and so on, but this acceleration does not allow for one very important thing: error.
It is useless to deny it: one error teaches more than a thousand AI-generated lines. When we delegate everything to an agent for too long, we risk losing the ability to learn, because there is simply no time to assimilate, study, make mistakes, and improve.
This creates a collapse of skills that risks being systemic for tech companies. The data speaks clearly: according to the Stack Overflow Developer Survey 2025, 84% of developers are using or plan to use AI tools (up from 76% in 2024), with 51% of professionals using them daily. However, trust has plummeted: only 33% trust the accuracy of AI output, while 46% actively distrust it (up from 31% in 2024). Only 3% report "high trust" in the results. The main frustration? 66% of developers complain that "AI solutions are mostly correct, but not entirely," and 45% find that debugging AI-generated code takes more time than writing it from scratch would.
We come from years where it was necessary to roll up our sleeves and learn, verify errors, study again, and improve. If we think about the next 5 years, the risk is that skills will no longer be formed because AI will do the work for us: in case of an error, one will continue to iterate until a presumably acceptable result is reached (or at least until an AI tells us the result is acceptable, or until the budget allows it).
But at that point: who will evaluate the quality of the result? Having the result evaluated by the one who achieved it is a contradiction. Who will understand if that product is suitable for the corporate context?
As long as we have people raised with the old learning mindset, we may still manage a sound analysis. But in 5 years? In 10? And no, the answer is not to have one AI evaluate another AI's product, because that does not solve the problem of human competence.
Being forward-looking, we must think about how to prevent tech companies from finding themselves with a generation of hollow seniors, incapable of managing complex systems.
Our efforts, therefore, should not focus only on creating products, but on mitigating the risk that AI absorbs all corporate know-how, hollowing out people's competence while embedding itself so deeply in business processes that it can no longer be supervised or bypassed.
Digital Outsourcing
The direction we are heading in is digital outsourcing: handing a large share of our corporate assets to third parties, selling off our culture in exchange for an appearance of speed.
Are we ready to take this step? Are we ready to delegate our know-how to software whose inner workings we do not understand? Above all, the moment that software makes a mistake, how do we find where it went wrong if we no longer have the skills to do so? How do we correct its errors if we are no longer masters of the technology?
Let's reflect on this phrase:
Startups can launch with AI-generated code, but they cannot scale without expert developers
If we need to create a POC: AI is fine; if we need to create an MVP, perhaps we start to have a problem; but if we need to build a solid company, with robust foundations, we cannot rely on those who do not have the skills to understand what they are doing.
Proactive strategies for CTOs: let's create antibodies
The solution is not to ban AI, but to rethink our developers' growth path through a reverse engineering of skills. We must design positive friction into the workflow: obstacles that force the brain not to switch off.
As an exercise, here are some pointers for a CTO who wants to avoid the desertification of skills:
1. AI-Free Zones
Once, humans had to run after animals to hunt; now, to stay fit, they spend hours on a treadmill. It may seem like a stretch, but we need to define boundaries where the use of AI is deliberately forbidden. CTOs should designate specific work areas, perhaps in critical contexts like legacy systems, as cognitive gyms. In these zones, for those who need to train, the use of AI is forbidden or limited solely to consulting documentation. This does not slow down any project; it secures it. It forces one to read, understand, and describe logic, creating that muscle memory that would otherwise be lost.
2. Rethinking Code Reviews: Ask "Why?"
A software change must not be evaluated only on whether it works, is clean, or meets requirements: it is easy to generate tests that merely mirror the code they are supposed to validate. The new question must be: "Why?". Why did you choose this specific pattern instead of another? When the answer is "AI suggested it," the change should be rejected. We must evaluate the ability to defend architectural choices, not just the ability to produce output.
I remember a university course where I taught in the past: after evaluating the quality of the assignments, I always asked the reason for the choices made. Sometimes students answered "Because ChatGPT suggested it." In that moment, I knew the fruit of the work was produced by a machine and not by them, and I insisted that it be rewritten, analyzed, and understood.
The ability to argue one's choices is fundamental for growth and will be even more important in a world where AI can suggest a thousand different alternatives in a few seconds, but not all will be useful to the context.
3. Let's play Wargames
AI is great at diagnosing common errors, but what happens when the system collapses in a way the model has never seen? One limit of current solutions is holistic vision: the ability to correctly integrate the many aspects of a project, not just the small context fed to the model by a longer or shorter prompt.
Organize game sessions where parts of the staging environment are intentionally broken in subtle ways (network problems, race conditions, memory leaks) and ask the team to solve the problem without AI tools. The ability to trace an error through the "mental stack" is the skill that distinguishes a professional from a simple prompt engineer.
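A minimal drill of this kind can be sketched in a few lines of Python: a shared counter updated without a lock, which intermittently loses increments under concurrency. The names and numbers here are purely illustrative, not taken from any real exercise:

```python
import threading

counter = 0
lock = threading.Lock()

def unsafe_increment(n):
    """Deliberately broken: read and write are separate steps,
    so another thread can interleave between them."""
    global counter
    for _ in range(n):
        tmp = counter      # read
        counter = tmp + 1  # write (a lost update can happen here)

def safe_increment(n):
    """The fix the team should arrive at: make the update atomic."""
    global counter
    for _ in range(n):
        with lock:
            counter += 1

def run_drill(worker, threads=8, per_thread=50_000):
    """Run `threads` workers and return (observed total, expected total)."""
    global counter
    counter = 0
    ts = [threading.Thread(target=worker, args=(per_thread,))
          for _ in range(threads)]
    for t in ts:
        t.start()
    for t in ts:
        t.join()
    return counter, threads * per_thread
```

The unsafe version usually falls short of the expected total, and not always by the same amount: exactly the kind of intermittent, non-reproducible failure that forces a candidate or a team to reason through the mental stack instead of pasting the traceback into a chat window.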
4. Digital Archeology
Assign tasks where AI use is allowed, but in contexts the AI knows nothing about, so that its answers are largely wrong and people are forced to actually study what is being asked.
An example is maintaining legacy code on proprietary platforms that are poorly documented, or where part of the documentation is missing. This forces people to study how the original programmer thought: an exercise in technical empathy that is fundamental for designing robust systems.
5. Let's talk about Design first and then Prompts
It is useless and counterproductive to rush to the keyboard and write prompts if we do not have a clear picture of the project we are about to touch. Let's force people to first write a design document: a high-level architecture, a document that describes the overall vision.
Thought must precede the generation of the result, which is only a consequence. If AI writes the code, the human must write the architecture. If the human abdicates this too, they become a useless and probably harmful paper pusher.
Recruiting 2.0: hiring for cognitive capacity
If the way of working changes, the way we hire must also change. Old algorithmic tests are now useless: any AI model solves them in a handful of seconds.
What should we look for then?
The ability to ask questions: my 7-year-old son bombards me with questions, and from how he asks them I understand what he is learning, what he has grasped, and where he is heading. Do the same with candidates: describe a vague, ill-defined problem. Do not evaluate the solution, but the path and the questions they ask to clarify requirements. AI is terrible at handling human ambiguity: it does not ask for clarification; it picks the most logical or probable path and starts working. For those with a bit of sci-fi culture: it's like talking to a Vulcan.
Have them debug broken code: the skill that matters is understanding errors. Give candidates a complex system that fails intermittently and observe their mental investigation process.
Offline architectures: give the candidate a piece of chalk and a blackboard. I know: it's very "boomer," but it's a technique that still works. Have them draw a possible architecture for a project of any nature, then increase the requirements and follow the reasoning they use to adapt the architecture.
Distinguishing what is produced by a machine and what is produced by a human
It is important to understand the source of data, distinguishing what is produced by a machine and what is produced by a human. The reason is simple: the intents are different. A person has a specific goal, while a machine relies on probability and iteration.
If we do not impose rules to understand from which source the data we are handling arrives, we risk approaching data that, by its nature, is born differently, in the wrong way. A CTO must be well aware of this aspect, circumscribe the area of action of machines and people, and introduce a code of conduct.
In software development, this translates into a few practical rules which, with some adaptation, can be applied to other corporate areas. The main ones are labeling and quarantine.
Labeling serves to immediately identify the source of something: if your code, documents, or procedures are more than 50% machine-generated, introduce explicit "AI GENERATED" annotations, ideally indicating the model involved in the creation. Like all software, AIs are not perfect: they make errors, hallucinate, and behave differently depending on context and version. If a problem emerges in the future, labeling what they produce helps us trace it back to its source.
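As a sketch, such a labeling rule can even be enforced mechanically. The tag format below, "AI-GENERATED" plus a model name in a header comment, is an assumption made for illustration; each team will define its own convention:

```python
import re

# Hypothetical header convention for mostly machine-generated files:
#   # AI-GENERATED (model: gpt-4o) -- reviewed-by: j.doe
AI_TAG = re.compile(r"AI-GENERATED\s*\(model:\s*(?P<model>[^)]+)\)")

def ai_provenance(source: str):
    """Return the declared model name if the file carries the label, else None."""
    match = AI_TAG.search(source)
    return match.group("model").strip() if match else None
```

A pre-commit hook or CI step could then require the label whenever, say, more than half of a diff comes from an agent, so provenance is recorded at the moment the code enters the repository.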
Quarantine means not immediately taking what is proposed to us as good. For code, avoid letting AI directly handle the core of the product. Use it to write tests and utilities, to critique the code created, to run analyses, to suggest ideas, but not to own the heart of what you must build: at this moment in history, AI used as help is a good thing; used to manage the core business, it risks turning it into a shapeless mass without a true owner.
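The quarantine rule, too, can be reduced to a simple gate in CI. The path layout below, with src/core/ as the protected area and tests/ and tools/ as the sandbox, is an assumed, illustrative layout:

```python
# Hypothetical quarantine gate: AI-labeled changes may touch tests and
# utilities, but not the product core.
CORE_PREFIXES = ("src/core/",)           # assumed protected area
SANDBOX_PREFIXES = ("tests/", "tools/")  # assumed low-risk areas

def quarantine_allows(path: str, ai_generated: bool) -> bool:
    """Return True if a changed file passes the quarantine rule."""
    if not ai_generated:
        return True   # human-authored changes are not restricted here
    if path.startswith(CORE_PREFIXES):
        return False  # AI output must not land directly in the core
    return path.startswith(SANDBOX_PREFIXES)
```

Combined with the labeling convention above, a CI job could reject any pull request whose AI-labeled files fall outside the sandbox, turning the quarantine from a guideline into an enforced boundary.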
What will the new Seniors have to do?
We are immersed in a true industrial revolution, and it is important to reason over the medium term, not the short term. Without playing futurists, let's look at what happened in the last 5 years and project it onto the next 5. The role of the senior changes. The senior is no longer the person who writes code fastest or knows APIs by heart, nor the one who solves problems fastest. Speed will come from tools, which will be ever more efficient than any person. For this reason, an evolution is necessary.
The new seniors will have to develop transversal, holistic skills; they will have to tell a hallucination from a truth. They will have to interpret human specifications correctly, going beyond what the text says and grasping the true sense of what is being asked, which is not always literal but often carries implicit implications that a machine, for now, cannot understand.
We must therefore reward those able to prevent the technical debt generated by AI, not those who merely ship more features. KPIs must shift from speed to stability and to the management of complexity and chaos.
We must not be faster, but more resilient
Let's fall out of love with speed, then: it is not the goal of our work. As CTOs, we must think about sustainable growth, a company model capable of being expanded and understood. We must avoid being slaves to technologies we are unable to govern, putting ourselves in a situation where technologies are at the service of the company and not the reverse.
Let's invest in training, in understanding processes and architectures; let's encourage people to cultivate humanistic skills, capable of understanding people and the needs of humankind, and of intelligently critiquing what machines propose and create.
If AI is to enter corporate processes, let's do it consciously, not like a wave that overwhelms everything without leaving anything behind. Let's ensure that AI is itself a tool for growth and not for the destruction of skills: it is not enough to create; it is necessary to explain how and why it was created, otherwise, we risk building sandcastles destined to collapse at the first breath of wind.