A few years ago, programming meant fighting a language that didn't forgive you. A misplaced semicolon, an unclosed bracket, a wrong variable type, and the program wouldn't compile. The machine didn't understand your intentions. It only understood what you wrote, exactly as you wrote it.
That rigidity wasn't a flaw. It was a design choice. Programming languages belong to a family we call formal languages, alongside mathematical logic, algebra, chemical notation, and chess. What defines them is a very precise architecture: a closed alphabet of symbols, a grammar that dictates exactly how they can combine, and a set of well-formed formulas that are the only valid ones. Everything else falls outside the language. The curious thing is that a formal language can exist without its symbols meaning anything at all. The syntax is enough; the semantics are optional. In other words, what makes a sentence correct in a formal language is not what it says but how it is built.
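To make that concrete, here is a minimal sketch of my own (the grammar and symbols are an illustrative invention, not taken from the post or any textbook): a tiny formal language of propositional formulas, where well-formedness is decided purely by how the string is built, with no appeal to what the symbols mean.

```python
# A toy formal language (illustrative only): formulas built from the
# alphabet {p, q, -, &, (, )} by the grammar
#   F -> p | q | -F | (F & F)
# A string is well formed if and only if the rules can build it; the
# symbols never need to "mean" anything for the check to work.

def well_formed(s: str) -> bool:
    ok, rest = _parse(s)
    return ok and rest == ""

def _parse(s: str):
    """Try to read one formula from the front of s; return (ok, remainder)."""
    if not s:
        return False, s
    if s[0] in "pq":                      # atomic formula
        return True, s[1:]
    if s[0] == "-":                       # negation: -F
        return _parse(s[1:])
    if s[0] == "(":                       # conjunction: (F&F)
        ok, rest = _parse(s[1:])
        if ok and rest.startswith("&"):
            ok, rest = _parse(rest[1:])
            if ok and rest.startswith(")"):
                return True, rest[1:]
    return False, s

print(well_formed("(p&-q)"))  # True: the form is right, whatever p and q mean
print(well_formed("p&q)"))    # False: the form is wrong, and that alone decides it
```

Assigning truth values to p and q would be a separate, optional layer on top of this check, which is exactly the point: syntax first, semantics later, if at all.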
The language we speak every day works the other way around. Words have many meanings; sentences depend on tone, on the moment, on who says them and to whom. "Can you pass the salt?" is not a question about your motor skills, even though, literally, it is. Spanish, like any language, was not designed by anyone: it emerged through use, through mixture, through contact between generations. Its rules were described long after people were already speaking. This is what we call a natural language, and it has features a formal language cannot have. The relationship between a word and what it designates is arbitrary (there is nothing in the sound "dog" that resembles a dog, and in fact other languages call it something completely different). The grammar is productive: a finite number of rules can generate infinitely many sentences no one has ever spoken. And you can lie, be ironic, be ambiguous on purpose, talk about language itself, say things that mean the opposite of what they seem to say. Everything that would be a compilation error in a formal language is, in a natural one, exactly what makes it human.
The deepest difference is that in a formal language meaning depends only on form. In a natural language it depends on context. The same sentence, said by two different people at two different moments, can mean two different things, and both can be correct.
Two worlds, two disciplines
That is why, for decades, the world was split between those who spoke to people and those who spoke to machines. The former enjoyed the luxury of ambiguity: a word could mean three things and the listener would fill in the rest. The latter lived inside the discipline of exactness. Between the two worlds there were translators: programmers who took what a human wanted and turned it into something a machine could execute.
That border is dissolving. And it is dissolving from an unexpected direction. It's not that people have learned to speak like machines. It's that machines have started to understand how people speak. What is striking about this, and almost never gets said, is that an AI does not eliminate the ambiguity of natural language. It interprets it. It does what we do when we listen to someone: it fills in what hasn't been said, decides between interpretations, assumes the missing context. The novelty is that machines are learning to live with ambiguity.
Templates where there used to be prose
And yet, right now, the internet is full of guides on how to write the perfect prompt. Courses, templates, diagrams with labeled blocks (role, context, task, format, constraints), as if we were reinventing a syntax where there used to be prose. There is something curious about that. The tool that was designed to understand natural language is being approached as if it were just another formal language, with its conventions and its dogmas. As if we couldn't bear the idea that explaining yourself well is enough.
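For the sake of illustration, here is what that contrast looks like: the same request, once poured into labeled blocks and once written as plain prose. The scenario and the field names below are invented, not quoted from any particular guide.

```python
# Hypothetical example; the scenario and labels are invented for illustration.

templated_prompt = """\
ROLE: You are a senior product designer.
CONTEXT: A small bakery takes orders by phone and keeps losing track of them.
TASK: Propose an order-tracking app.
FORMAT: A bulleted list of features.
CONSTRAINTS: No more than ten features."""

prose_prompt = (
    "Marta runs a small bakery and takes orders by phone. By mid-morning she "
    "has lost track of who asked for what and for when. Propose a simple app "
    "that would make her mornings easier, keeping it to a handful of features."
)

# Both carry the same intent. The first reinvents a syntax; the second trusts
# the model to do what a human listener would do: fill in the unstated context.
```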
The impulse is not new. Since the late 1950s, Noam Chomsky has argued that human language is, deep down, a formal structure: a universal grammar wired into the brain of our species, which one only needs to discover to understand how we all speak. If Chomsky were right, formalizing prompts would make some sense, because we would be tapping into those deep rules. But there is another tradition, less comfortable and to my mind more interesting, that says exactly the opposite. Tom Wolfe defended it sharply in The Kingdom of Speech, drawing on the work of anthropologist Daniel Everett with the Pirahã, an Amazonian tribe whose language resists the rules that were supposed to be universal. For Wolfe and Everett, language is not a biological organ. It is an artifact. A cultural tool we learn, use and transform along the way. We do not discover its deep rules: we invent them, inherit them, change them every time we speak.
If language works that way, formalizing it doesn't get you very far. What makes a sentence work isn't in its structure; it's in who says it, to whom, when, and why. No template captures that. Machines do not understand natural language because someone has finally formalized it. They understand it because they have learned to tolerate what cannot be formalized. And there we are, almost reflexively, trying to put it back into boxes.
A while ago I wrote about an experiment Iria and I ran when we decided to play dumb about prompt engineering. We wrote small stories: a person's morning, their routines, their frustrations, their colleagues. And at the end we asked for an app that would make their day easier. No structure, no template. The results were surprisingly good. At the time I saw it as an exercise in experimentation, an attempt to break the dogmas many users were proclaiming on LinkedIn. But what was really behind it was intuition. We had understood, without articulating it, that if the tool understood natural language, what we had to do was use it well.
What kind of clarity survives
The interesting question is not whether it still makes sense to formalize prompts. It is what kind of clarity is still necessary and what kind is becoming dispensable. Because some clarity is still needed. A bad prompt gives a worse result than a good one. That is not in dispute. But what defines a good prompt is not the syntax, nor the order of the blocks, nor the number of examples. What defines a good prompt is the same thing that defines a good message to a person: that the idea is clear, that the expectations are stated, that the limits have been thought through.
And here it gets uncomfortable. Because if what you have to learn is not how to write prompts but how to express yourself clearly, then the relevant skill is not technical. It is the same one needed to talk to a colleague, to give an instruction to a team, to write a brief. AI is not asking us to learn a new language. It is asking us to go back to the old one, used with care.
Not writing prompts, conversing
I barely write prompts anymore. What I do is talk through what I'm going to ask before formalizing it. Before requesting anything specific from an AI, I sit down with it and explore: I contrast points of view, I push at the edges, I let contradictions surface, I discard approaches that, once put into words, no longer convince me. The conversation works as a space where the idea sharpens. Only at the end, when I think I'm clear about what I want, do I ask the AI to take all of that and write a prompt to hand off to another AI. The formalization happens, but I'm no longer the one doing it.
On a few occasions, when I have asked for that formalization, I've been surprised to receive an answer along these lines: "There's no need to generate those instructions to give to Claude Code. I'll handle that myself."
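A rough sketch of that two-stage workflow, under loud assumptions: the `chat` function below is a hypothetical stand-in for whatever conversational model interface you use, not a real API, and the topic and wording are invented for illustration.

```python
# Hypothetical sketch of the workflow: explore in conversation first,
# then ask the model itself to produce the prompt for another AI.
# `chat` is a placeholder, not a real API.

def chat(messages: list[dict]) -> str:
    """Stand-in for a call to a conversational model."""
    return "(model reply would go here)"

# Stage 1: explore in conversation until the idea is sharp.
conversation = [{"role": "user", "content": (
    "I want to rethink how our team hands off support tickets. "
    "Push back on my assumptions before we decide anything.")}]
for _ in range(3):  # a few back-and-forth turns
    reply = chat(conversation)
    conversation.append({"role": "assistant", "content": reply})
    conversation.append({"role": "user", "content": "Where does that break down?"})

# Stage 2: only at the end, delegate the formalization itself.
conversation.append({"role": "user", "content": (
    "Take everything we've discussed and write the prompt you would "
    "hand to another AI to build this.")})
handoff_prompt = chat(conversation)
print(handoff_prompt)
```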
What's interesting about this method is not the method itself. It is what it reveals about where the demand has shifted. What is still hard, what cannot be delegated, is what happens before: thinking with enough clarity for the conversation to have something to sharpen. Walking into the dialogue knowing what you want to explore. Noticing, in the flow of replies, which one widens the idea and which one dilutes it. Knowing when to push and when to let the path drift. That is not prompt engineering. That is thinking.
The demand that cannot be delegated
The better AI understands natural language, the more valuable well-thought-out natural language becomes. This process returns us to an older demand: knowing what we want to say. And that demand applies to any interlocutor. A person who expresses themselves vaguely will get vague answers, from an AI and from a colleague to whom they half-explain a task. A person who arrives with a sharp idea will get sharp answers, in both cases.
Now what we have to structure is thought. And structuring thought is an older and harder demand than any programming language. It's what we have been asking of prose, of speakers, of anyone who has had to make themselves understood, for centuries. AI has made that demand inescapable.
When I think about what programming will look like a few years from now, I don't picture more formalization. I picture less. I picture that the only thing left will be talking well.
–
Follow me on LinkedIn to stay updated on new posts.
–
References:
https://en.wikipedia.org/wiki/Formal_language
https://en.wikipedia.org/wiki/Natural_language
https://en.wikipedia.org/wiki/Tom_Wolfe