Long Live the Model, or YALP (yet another LLM post)

2026-03-01 • Manuel Vázquez Acosta

Natural languages are large and complicated tools, and we use them as best we can. We also abuse them when they lack the right concepts, or when we are unaware of the concepts they do have.

That is why, when talking about LLMs and coding agents, we are often drawn to say they “think”, “decide”, or “make assumptions”. We know how these LLMs work at their core: they predict the next output token from their current context. Yet they have become such a useful tool that we might forget what they really are.
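To make that core loop concrete, here is a deliberately tiny sketch: a bigram “language model” built from word counts. This is nothing like a real transformer, and every name in it is invented for illustration, but it shows the same contract the paragraph above describes: given a context, emit the most likely next token, append it, and repeat.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count, for each token, how often each next token follows it."""
    counts = defaultdict(Counter)
    tokens = corpus.split()
    for cur, nxt in zip(tokens, tokens[1:]):
        counts[cur][nxt] += 1
    return counts

def generate(counts, prompt, max_tokens=5):
    """Greedy generation: repeatedly append the most likely next token.

    The 'context' here is only the last token; a real LLM conditions
    on the whole window, but the prediction loop is the same shape.
    """
    out = prompt.split()
    for _ in range(max_tokens):
        followers = counts.get(out[-1])
        if not followers:
            break  # no continuation seen in training data
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

counts = train_bigrams("the cat sat on the mat the cat ran")
print(generate(counts, "the"))  # → "the cat sat on the cat"
```

The point of the toy is the mechanism, not the quality: there is no “deciding” anywhere in the loop, only a lookup of what usually comes next.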

We anthropomorphize LLMs because it seems to provide the best communicative abstraction for their inner workings. But this poses some risk of misunderstanding, especially when the very concept of intelligence is elusive.

Are these tools intelligent?

I have seen a lot of back and forth on social media. There seems to be a camp (school?) of thought that claims the LLM is always right in its output, and that the user is the main culprit for its non-compliance or deviations. This is true in the same sense that most bugs in your code are not the fault of the tool (programming language, IDE, compiler, etc.). However, nobody claims that IDEs, compilers, or programming languages are intelligent.

Either LLMs are intelligent, with the capacity for error, or they are just tools and the mistakes are yours.

This conclusion is, nevertheless, also unsatisfactory. None of the non-LLM programming tools produce code from a natural language specification. They are in some sense more primitive, more foundational. LLMs don’t replace compilers or programming languages; we still rely on those even to validate the LLMs’ outputs. Humans have long relied on them to validate their own output too, just as they have relied on visual tools, diagrams, and many other great abstractions to handle their own faulty thinking.

LLMs therefore sit a qualitative step between those two extremes: human-like intelligence on one side, purely mechanical and formal tools on the other. I will keep approaching them with a tool-like mindset. I will keep trying to own the design I want to produce. And I will keep using other mechanical or formal tools to verify the code I produce with LLMs.