In software development, even the most experienced engineers often underestimate the time it takes to properly think through a solution: analyzing the problem, reviewing architectural options, comparing approaches, designing, and validating with examples.
It's not that we don't do it. We do, but often faster than we should, trusting that our experience will guide us to the right path without much exploration. Other times, we skip it entirely, assuming we'll "figure it out along the way." That confidence allows us to move forward, but it also leaves gaps: poorly grounded decisions, unexamined trade-offs, and designs that sometimes don't stand the test of time.
Working with AI changes this dynamic. Improvisation and intuition alone aren't enough. For AI to be useful, you need to explain the complete need precisely: what problem you want to solve, what constraints exist, and which cases must be covered.
The common mistake: "AI doesn't work"
It's common to hear that when AI doesn't produce the expected output, it's because "it doesn't work," "it's not ready," "it's dumb," or "it doesn't understand the context."
The reality is usually different:
- It doesn't have enough context because you didn't provide it.
- It can't solve the problem because you didn't explain it clearly.
- In short: you didn't invest enough time or effort in communicating the need.
It's true that, depending on the model and the agent, this effort is shrinking. With memory, integrated documentation, agents that understand the codebase better, and search tools, it's easier to provide context without repeating everything. But usually, when AI doesn't give a useful result, the problem lies in how the engineer sets up the interaction, not in the model's capability.
What it means to explain the need
It's not just about writing a short prompt. It means explaining as if you were talking to someone completely new to your project, your company, or even your business.
With a teammate, you can take many things for granted; with AI, you can't. You need to make explicit everything that could influence the solution, as if you were onboarding a newcomer.
Context materials that make the difference
For AI to build a solid solution, you should provide it with the same materials (and more) that you'd share with a human team; a concrete example follows the list:
- Examples and counterexamples: how it should behave in typical cases and what it must not do.
- Technical and business context: dependencies, constraints, real goals.
- Logs and screenshots: traces, error captures, unexpected behaviors.
- Designs and diagrams: architecture sketches, user flows, module relationships.
- Videos or reproduction steps: showing how a bug or UX issue occurs.
- Previous issues or tickets: history of what was tried and what failed.
- Documentation: available APIs, data contracts, team conventions and rules.
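To make this concrete, here is a hypothetical context brief for a small bug fix. Every name, ticket number, and detail in it is invented, but it shows how the materials above come together in practice:

```
Task: users get logged out when switching workspaces (hypothetical ticket #1423).

Expected: switching workspaces keeps the current session alive.
Counterexample: logging out must still clear ALL workspace sessions.

Technical context: sessions live in Redis with a per-workspace key;
the bug first appeared after the v2.3 auth refactor.

Logs: the attached trace shows a 401 from /api/session right after the switch.

Constraint: don't change the public session API; mobile clients depend on it.
```

A brief like this takes minutes to write, and it replaces a long back-and-forth in which the AI has to guess at each of these facts.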
In practice, this can make working with AI slower for some tasks. Sometimes you'd fix a small bug faster yourself than by preparing all the context for AI. But the difference lies in the quality and durability of the solution: with AI as a copilot, results tend to cover more cases, be better documented, and hold up better over time.
AI can leverage all this material to build more complete solutions: not just implement what you asked, but design something that also covers scenarios you hadn't foreseen.
Context is also built with tools
Context doesn't depend only on what the engineer writes. AI can also build it if given the right tools, access, and permissions.
- Web search: to fill technical gaps or external references.
- Access to the project/repository: read code, understand dependencies, styles, and conventions.
- Access to internal documentation: from specifications to past architectural decisions.
- System exploration permissions: logs, metrics, databases, available APIs.
When AI can query relevant sources directly, its "mental map" of the problem is much richer. And the solutions it builds fit better with the real context, instead of relying only on what the engineer managed to explain.
This doesn't remove the responsibility of providing context manually, but it shifts part of the work toward responsible design of agents and tools: ensuring AI has access to what it needs, no more and no less.
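As a sketch of what "no more and no less" can look like in practice, here is a minimal tool registry in Python. The paths, tool names, and registry shape are all assumptions for illustration; real agent frameworks declare tools through their own APIs:

```python
from pathlib import Path

# Hypothetical tool registry for a coding agent. Every capability the
# agent may use is listed explicitly: access to what it needs, no more.
ALLOWED_ROOT = Path("repo").resolve()  # the only directory the agent may read
LOG_FILE = Path("logs/app.log")        # assumed log location, read-only

def read_file(relative_path: str) -> str:
    """Return the contents of a file inside the allowed project root."""
    target = (ALLOWED_ROOT / relative_path).resolve()
    if target != ALLOWED_ROOT and ALLOWED_ROOT not in target.parents:
        raise PermissionError(f"{relative_path} is outside the allowed root")
    return target.read_text()

def tail_log(lines: int = 50) -> str:
    """Return the last `lines` lines of the application log."""
    return "\n".join(LOG_FILE.read_text().splitlines()[-lines:])

# Only what is registered here is reachable from the agent; databases,
# deploy scripts, and secrets are simply never exposed.
TOOLS = {
    "read_file": read_file,
    "tail_log": tail_log,
}
```

The design choice is the explicit registry: anything not listed is unreachable, so widening the agent's view of the system is a deliberate decision rather than a default.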
Different interaction styles
These principles apply regardless of the interaction style with AI:
- Vibe coding: fast iterations, free exploration, useful for prototyping.
- Agentic coding: giving more autonomy to an agent to plan and execute with less intervention.
- Assisted approach: detailed prompts, review, and structured refinement.
The difference lies in the length of the iterations, the AI's degree of autonomy, and the level of human review. But the starting point is always the same: explain the need clearly.
Beyond software
This shift doesn't just affect how we code. It also influences how we define personal projects, give instructions to a team, or even make everyday decisions. AI forces us to pause, structure, and communicate clearly before executing.
Conclusion
AI does not replace thinking; it puts thinking front and center. When fed a well-explained need, the right context, and access to relevant sources, it doesn't just generate code: it builds robust solutions that expand and improve on what we could have done alone.