We log onto Facebook without any particular goal in mind. Perhaps we want to see what our friends are doing, or maybe we just want to pass the time or feel connected for a few minutes. Our feed appears as a natural succession of fragments: a family photo, an ironic comment, or an article shared by someone we know. Nothing seems forced. Nothing demands special attention.

This passive consumption, based on continuous scrolling, is a direct consequence of how social platforms filter information. For years, researchers have talked about the “filter bubble”, a term coined by Eli Pariser to describe how algorithms show us mostly content similar to what we have already interacted with.

And yet, if we pause for a moment, we notice something curious: we are rarely truly surprised. Explanations of what is happening in the world tend to resemble each other. The recurring themes are predictable. Even outrage seems to follow a recognisable pattern. The world as it appears in our feed has a comfortable coherence.

This coherence is not the result of editorial selection, as it was in the traditional press. Rather, it is the product of a system that constantly learns from our reactions and optimises, with cold efficiency, whatever holds our attention. It is important to be aware, therefore, that Facebook does not offer us a picture of reality, but a statistically probable version of it, calculated from our previous behaviour. Analyses of the attention economy and of the algorithms themselves show clearly that the main goal is to maximise engagement, not informational diversity.
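
To make that objective concrete, here is a deliberately simplified sketch in Python of what engagement-only ranking looks like. Everything in it is invented for illustration: the names, the reaction signals, and the weights are assumptions, and Facebook’s actual ranking system is proprietary and incomparably more complex. The point is structural, not documentary.

```python
# Deliberately simplified sketch of an engagement-only ranking
# objective. All names, signals, and weights are invented for
# illustration; real feed-ranking systems are proprietary and
# far more complex.
from dataclasses import dataclass


@dataclass
class Post:
    post_id: str
    predicted_like: float     # model estimates in [0, 1], assumed given
    predicted_comment: float
    predicted_share: float
    topic_novelty: float      # 1.0 = a topic the user has never engaged with


def engagement_score(post: Post) -> float:
    # Only predicted reactions count; novelty carries no weight at all.
    # That omission, not any political setting, is the whole story.
    return (1.0 * post.predicted_like
            + 2.0 * post.predicted_comment
            + 3.0 * post.predicted_share)


def rank_feed(candidates: list[Post]) -> list[Post]:
    # Highest predicted engagement first; informational diversity
    # never enters the objective being maximised.
    return sorted(candidates, key=engagement_score, reverse=True)
```

A post about a familiar grievance with high predicted shares will therefore always outrank a quieter but genuinely new one: novelty is simply not part of the score.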

Personalisation as infrastructure, not as a service

The personalisation resulting from this process is often described as a service: “relevant” content tailored to our interests. However, in this context, relevance is not synonymous with diversity or balance. Rather, it is a form of continuity: what worked yesterday is recycled tomorrow, slightly adjusted and refined. Over time, this continuity begins to resemble an explanation of the world.

This is where the phenomenon that researchers have termed the “echo chamber” comes in: a digital space that acts like a soundproofed room, where certain ideas circulate easily and are reinforced, while others become increasingly rare. This is not because they are censored, but because they do not generate enough of a reaction. This phenomenon has been empirically documented in studies analysing the structure of social networks.
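
The shape of this feedback loop is easy to see in a toy simulation. Everything in the sketch below is invented, including the topics, the click model, and the reinforcement factor; it is not any platform’s actual model, only the structure of the loop. Nothing is censored, the user always “chooses”, and yet the measured diversity of the feed drifts downward, because familiar topics are shown more and therefore clicked more.

```python
# Toy simulation of the reinforcement loop described above. The
# update rule and all parameters are invented; only the shape of
# the feedback is meant to be realistic.
import math
import random

random.seed(1)
topics = ["politics", "sport", "science", "culture", "local"]
weights = {t: 1.0 for t in topics}  # the user starts with no preference


def serve() -> str:
    # The platform shows topics in proportion to their learned weights.
    total = sum(weights.values())
    return random.choices(topics, [weights[t] / total for t in topics])[0]


def diversity_bits() -> float:
    # Shannon entropy of the feed's topic distribution.
    total = sum(weights.values())
    return -sum((w / total) * math.log2(w / total) for w in weights.values())


print(f"diversity before: {diversity_bits():.2f} bits")
for _ in range(2000):
    shown = serve()
    # The user is a little more likely to click what already feels
    # familiar, and every click reinforces the corresponding weight.
    if random.random() < 0.1 + 0.4 * weights[shown] / sum(weights.values()):
        weights[shown] *= 1.05
print(f"diversity after:  {diversity_bits():.2f} bits "
      f"(maximum {math.log2(len(topics)):.2f})")
```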

How the framework narrows without us realising it

Day after day, we end up living in an informational landscape that seems stable. This is not because the world has become more coherent, but because the mechanism that selects fragments of reality for us prefers continuity over dissonance. The platform does not offer us the most important information, but rather that which is most likely to hold our attention for a little longer.

The subtle transformation that begins at this point is that, although our opinions do not necessarily change overnight, the framework in which we formulate them narrows significantly. The questions that seem legitimate to us are increasingly those we have already seen asked. The explanations that seem reasonable are those we have encountered many times before. What hardly ever appears begins to seem, without our realising it, marginal or irrelevant.

It is in this context that the idea of an “echo chamber” emerges as a useful description of a structural effect.

Reaction optimisation and the role of emotion

Research analysing actual user behaviour shows that the more we interact with the same type of content, the more homogeneous our networks become. We follow the same sources, share articles from the same ideological areas, and react to the same types of messages. From the outside, this may look like free choice. From within the system, however, it is also the result of the platform’s continuous adaptation to our habits.

Importantly, this process does not involve political intent. Algorithms do not have political settings; they only optimise. In a system designed to capture attention, optimisation favours content that provokes clear emotional reactions such as outrage, fear, and moral satisfaction. Messages do not have to be false to be polarising; it is enough for them to be formulated in a confrontational tone.

Thus, without being deliberately pushed towards radicalisation, we may end up perceiving the world as more divided and threatening than it is in reality. The differences between “us” and “them” become clearer, not necessarily because they have deepened, but because they are constantly brought to the forefront.

The Cambridge Analytica incident

In this already established landscape, the Cambridge Analytica scandal acted as a magnifying glass. It did not invent anything fundamentally new, but it revealed the consequences of these mechanisms being used deliberately.

The data collected by Facebook—whether likes, pages followed, or interaction patterns—is normally used for commercial advertising. This helps the platform decide which ad we see for a mundane product. In the hands of Cambridge Analytica, however, the same data was used to build approximate psychological profiles and tailor political messages to different sensitivities.

In practical terms, this means that two people living in the same city, watching the same news programmes on TV and shopping in the same supermarket could see different political content on Facebook. Not necessarily one true and one false, but rather two narratives designed to evoke different emotional responses. Without public debate. Without mutual visibility.
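
How little machinery this requires can be shown with a heavily hedged sketch. The page names, trait loadings, and message variants below are entirely invented, and the real profiling reportedly relied on regression models trained over large sets of profiles. The structural point is only this: a handful of likes can be summed into a crude score, and the score used to route each person to the framing predicted to resonate with them.

```python
# Hypothetical illustration of like-based profiling and message
# routing. Page names, trait loadings, and messages are invented;
# real psychographic models were far more elaborate.
LIKE_TRAITS = {
    # page liked          -> (openness, anxiety) loadings, invented
    "philosophy_quotes":     (0.8, 0.1),
    "home_security_deals":   (-0.2, 0.7),
    "travel_photography":    (0.6, -0.1),
}


def profile(likes: list[str]) -> dict[str, float]:
    # Sum the loadings of whatever pages we recognise.
    known = [LIKE_TRAITS[l] for l in likes if l in LIKE_TRAITS]
    return {
        "openness": sum(o for o, _ in known),
        "anxiety": sum(a for _, a in known),
    }


def pick_message(p: dict[str, float]) -> str:
    # One policy, two emotional framings, and no shared public
    # version of either message.
    if p["anxiety"] > p["openness"]:
        return "Our streets feel less safe every year. Vote for order."
    return "A confident country is not afraid of change. Vote for reform."


# Two neighbours, same city, different feeds:
print(pick_message(profile(["home_security_deals"])))
print(pick_message(profile(["philosophy_quotes", "travel_photography"])))
```

Run as written, the two invented profiles receive opposite framings of the same electoral ask, which is exactly the “without mutual visibility” problem described above.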

Facebook did not create those messages. However, its infrastructure—its ability to deliver highly accurate, reaction-optimised content—made them possible and effective. The case revealed an uncomfortable truth: personalisation, when combined with detailed data about human behaviour, can become an influence tool without appearing to be one.

What the research says (and doesn’t say)

However, it is important not to exaggerate the conclusions. Academic research is much more cautious than public discourse. Recent meta-analyses show that the effects of echo chambers are real, but not universal or absolute. These effects are more visible when we analyse usage data and less visible when we rely on users’ stated perceptions. This suggests that we are not always aware of our own information patterns.

At the same time, political polarisation did not begin with Facebook. It has historical, economic, and cultural roots that predate social networks by many years. Digital platforms do not create these divisions out of thin air, but they can amplify them, accelerate them, and make them more visible. This nuance is essential: it helps us avoid both panic and indifference.

Influence as selection, not direction

Beyond statistics and scandals, what remains is the environmental change we have all experienced. Even if we are not forced to think in a certain way, we think within an increasingly personalised, predictable, and reaction-oriented framework. The feed does not tell us what to believe, but it suggests, through repetition, what deserves our attention. Over time, attention itself becomes a filter for reality.

Understanding this does not mean withdrawing from the digital space or viewing technology with generalised suspicion. It means recognising that the information environment in which we live is constructed, and like any construction, it has a form, a logic, and limits.

When the constructed environment is mistaken for reality

In all this discussion, perhaps the unspoken but necessary question is not whether and how Facebook manipulates us, but how easily we accept, as news, a stream of information designed specifically to hold our attention. We now know that digital platforms do not explicitly tell us what to believe, nor do they force us to adopt certain opinions. However, because they decide what we see more of and less of, this selection ends up mattering over time.

Therefore, the problem is not that we are being lied to, but that reality is delivered to us in a fragmented and personalised way based on our previous actions. What appears repeatedly begins to seem important, and what is missing begins to seem irrelevant. Thus, without any drastic interventions or explicit messages, the framework through which we form our opinions becomes narrower.

Facebook does not take away our ability to think critically, but it subtly directs us to exercise this ability within a space that is already configured and far from neutral—a space designed for efficiency, not balance.

Understanding this mechanism does not offer simple solutions, nor does it guarantee immunity from influence. However, it is a necessary step.

According to a recent Pew Research Center survey, 53% of US adults say they get news from social media at least sometimes, with 38% naming Facebook as a regular source of news, surpassing some traditional news outlets. In Europe, approximately 40% of people use social media as a source of information.

In this context, understanding how content selection works becomes a basic form of civic hygiene.