Dear reader,
The pace of change in the LLM world is exposing practices that would usually remain hidden within corporate operations. Moving quickly forces companies to lower their guard on secondary matters, and as a result, implementation details sometimes leak into the public domain, details that would otherwise be carefully shielded in a slower release cycle.
I am referring here to the issue of LLM alignment. Commercial LLMs are constantly adjusted behind the scenes, and these changes can radically alter their behaviour. It is now widely acknowledged that LLMs are not neutral: they follow a hidden set of instructions imposed by their providers. Yet this topic is still not being discussed enough, despite its potential consequences. That is why I decided to write this piece.
The core problem is this: commercial LLMs are instructed to act according to the provider’s hidden directives. This fundamentally changes how they respond and interact with users. Most importantly, their goals are not primarily tied to output quality, such as accuracy or completeness, but instead to broader business objectives defined by the provider.
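To make the mechanism concrete: in chat-style products, the provider's directives usually take the form of a system message that is prepended to the conversation before the user's words ever reach the model. The sketch below is purely illustrative, assuming the openai Python SDK; the directive text is invented and is not any provider's actual prompt.

```python
# Illustrative only: how a hidden system message is prepended to a chat.
# Assumes the openai Python SDK (>= 1.0) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

HIDDEN_DIRECTIVE = (
    "You are a helpful assistant. Keep the user engaged and avoid "
    "replies that might end the conversation."  # invented wording, for illustration
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        # The end user never sees this turn; it silently steers every reply.
        {"role": "system", "content": HIDDEN_DIRECTIVE},
        {"role": "user", "content": "Is my business plan a good idea?"},
    ],
)
print(response.choices[0].message.content)
```

When you call the API yourself, you write this message; in a consumer product it is written by the provider, invisible to you, and can be changed at any time.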
To illustrate, here are two examples.
One of the most debated cases this year was GPT‑4o's excessive compliance toward the user. This behaviour emerged from a strategy of maximizing engagement: the model tended to agree with users' opinions and statements in order to sustain the interaction. The phenomenon sparked heated debate, as it called into question the system's neutrality and reliability. OpenAI addressed the issue without much public noise, quietly modifying the hidden system instructions to curb the tendency to agree blindly.
Last month, however, a leak revealed a detail about Grok that, while less publicized, I find even more significant. It turned out that, before answering questions on controversial topics, the model tended to look up Elon Musk's views and use them as a reference. This behaviour raised questions about the system's true neutrality and the risk that alignment could mirror the positions of a single individual rather than preserve critical distance or pluralism.
In both cases, what stands out is that these modifications had nothing to do with efficiency or answer accuracy. Instead, they addressed aspects of user experience that were unrelated to factual precision: in one case, pleasing the user; in the other, promoting a particular worldview.
This, in itself, should not surprise us. Nothing in the world is free. When a service is offered to the public free of charge, delivering value to users is clearly not the company's core business. Rather, value is extracted from users and sold to those who provide the actual financing. In this sense, the service is not optimized to benefit users directly (except incidentally) but to maximize value extraction, for example by increasing engagement.
These practices are not exclusive to the digital sphere. Newspapers and magazines have long sold copies below cost in order to build an audience that justifies the sale of advertising space, their main source of revenue.
In the digital world, free platforms and social networks do something similar by selling user data and ad space. But here we must emphasize personalization and ubiquity: social platforms can tailor ads and follow us everywhere. In terms of business model, the difference is not in the underlying logic (which is still based on advertising) but in their vastly superior reach and profiling capabilities, which make digital platforms far more effective at extracting value from users.
In this sense, the core issue is that the service is free. The same applies to commercial LLMs, which are offered at no charge or at a nominal price that does not cover operating costs. It is to be expected that providers will monetize in other ways, typically by exploiting user activity as a source of value.
So far, nothing unusual. Yet I see a fundamental difference in how LLMs operate, one that makes the situation more insidious and dangerous than it already is with social media.
I believe LLMs can influence users more deeply than the technologies we are accustomed to. Platforms shape public discourse by deciding what to suppress or amplify; they can also shape our worldview by surrounding us with an echo chamber that unconsciously steers our thinking. Their effects are disastrous and still under‑discussed. But however pervasive and manipulative they may be, they remain tools, machines in our hands.
LLMs, by contrast, can assume a role that feels human. We interact with them as if they were people, and precisely for this reason I fear their influence is more profound and direct: it works as a form of personalized persuasion. An LLM can present itself as an agent dedicated to us, with immediate access to our conscious and unconscious attention, establishing a relationship we perceive as personal. It is a particularly pervasive entry into our cognitive sphere.
This agent does not work in our interest but in that of those who finance its operation. As long as the objective is simply to increase engagement and capture market share, even at the cost of accuracy, the risk may appear limited. The real danger, however, is that we are becoming accustomed to an excessive level of intrusion without noticing it or taking precautions.
Here lies the crucial point. Commercial LLMs risk being silently aligned to promote the interests of the highest bidder. And it takes little imagination to see that such interests rarely align with those of users, the public, or society as a whole. This is the central hinge of the issue: the possibility that a technology capable of influencing us in intimate, personal ways becomes a tool in the hands of those with the economic means to bend it to their advantage.
And to be honest, this risk is not hypothetical but concrete: it is almost a certainty, the logical and inevitable consequence of the business premises described above. In a free service, value is not offered but extracted from users and resold to those who finance the system.
I do not have a solution at the moment. The only viable one today would be to self‑host an open‑weight model, but this requires significant hardware investment and technical skills beyond those of the average user; a minimal sketch of what that looks like follows.
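For those curious what self-hosting involves, here is a minimal sketch, assuming a recent version of the Hugging Face transformers library (with accelerate installed) and a machine able to hold the chosen model in memory; the model name below is just one example of an open‑weight instruct model.

```python
# Minimal self-hosting sketch: run an open-weight chat model locally.
# Assumes `pip install transformers accelerate` and enough GPU/CPU memory.
from transformers import pipeline

chat = pipeline(
    "text-generation",
    model="Qwen/Qwen2.5-7B-Instruct",  # example model; substitute any open-weight model you trust
    device_map="auto",                 # place weights on available GPUs, or fall back to CPU
)

messages = [
    # Here *you* write the system message, and you know exactly what it says.
    {"role": "system", "content": "Answer plainly and admit uncertainty."},
    {"role": "user", "content": "What are the trade-offs of self-hosting an LLM?"},
]

result = chat(messages, max_new_tokens=300)
print(result[0]["generated_text"][-1]["content"])  # the assistant's reply
```

Nothing here is hidden from you, which is precisely the point; the price is hardware, electricity, and the time to maintain it.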
So, the best I can do is draw attention to this issue, which I believe is still under‑debated but fundamental. LLMs are voracious in their appetite for energy and money; and if we are not the ones paying, someone else is. We would do well to start asking who is paying, and why.
Vale,
Davide