Social interactions, and the tools that facilitate them, are changing the world in ways that even now, after all this time, we cannot anticipate.

In November 2019, Facebook, Inc. (now Meta Platforms, Inc.) announced that it had deleted 3.2 billion fake accounts between April and September of that year alone. The content moderation the company reports on regularly is just one indication of a communication tool's ability to produce unexpected changes in society and its interactions.

Facebook's 2019 Community Standards Enforcement Report on online content moderation shows that the number of fake accounts deleted that year was more than double the previous year's 1.55 billion. Among the deleted content were 2.5 million posts that depicted or encouraged self-harm and suicide. An astonishing 4.4 million posts involving drug sales were removed in the first quarter of 2019 alone.

Facebook has reportedly deleted more than 11.6 million posts containing child sexual abuse material, but the US authorities that hold the company accountable do not consider this good news. They are concerned about the network's plan to give users greater privacy by encrypting its messaging service, which they believe would hamper the state's efforts to combat child abuse.

As regular users, no matter how many hours we spend on social media among pictures of holiday landscapes, cute babies and spoiled cats, we can hardly estimate the impact of such comfortable, accessible interaction on essential elements of how society functions. Still, the risks of this kind of communication become more and more visible when we put serious incidents from very different corners of the world side by side and notice the common elements running through them. But let's focus on the incidents first. It shouldn't be difficult, because they are already in plain sight.

Cheap and effective propaganda

The Cambridge Analytica scandal is one of the most notorious cases because it had a political component whose ramifications shocked and appalled the public once they became obvious. In early 2018, a journalistic investigation found that a company had extracted an astonishing amount of private information from a huge number of Facebook users (ultimately estimated at up to 87 million). That company then sold the information to the consulting firm Cambridge Analytica, which used the data for political purposes.

The information obtained by the consultants was first exploited in the US presidential campaign, in favour of candidate Ted Cruz, because it allowed micro-targeting: directing a certain type of message (electoral, in this case) straight at the public most inclined to accept it, a public carefully selected on specific criteria.
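To make the mechanism concrete, here is a minimal sketch of what micro-targeting amounts to in code: score each user's likely receptiveness to a message from profile attributes, then pay to show the message only to those above a threshold. Every field name, weight and threshold below is an invented assumption for illustration; this is not the actual software used by any campaign, whose code was never made public.

```python
# Toy micro-targeting sketch. All attributes, weights and the threshold
# are hypothetical; the real systems were never made public.
from dataclasses import dataclass

@dataclass
class Profile:
    user_id: int
    interests: set[str]   # e.g. harvested "likes"
    openness: float       # 0..1, an inferred personality trait

def receptiveness(profile: Profile, message_topics: set[str]) -> float:
    """Score how receptive this profile is likely to be to a message."""
    overlap = len(profile.interests & message_topics) / max(len(message_topics), 1)
    # Arbitrary weighting: declared interests count most, personality second.
    return 0.7 * overlap + 0.3 * profile.openness

def select_audience(profiles, message_topics, threshold=0.5):
    """Keep only the users worth paying to reach with this message."""
    return [p for p in profiles if receptiveness(p, message_topics) >= threshold]

if __name__ == "__main__":
    users = [
        Profile(1, {"guns", "farming"}, 0.2),
        Profile(2, {"border security", "guns"}, 0.6),
        Profile(3, {"music", "travel"}, 0.9),
    ]
    audience = select_audience(users, {"guns", "border security"})
    print([p.user_id for p in audience])  # -> [2]: only user 2 clears the threshold
```

The shape of the operation is the whole point: the more attributes a platform exposes about its users, the finer the segment a message can be aimed at, and the harder it becomes for anyone outside that segment to even notice the message exists.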

At first, Facebook denied that any security breach had allowed the data to leak. And they were telling the truth: the data had been obtained through the normal operation of the social network. Users installed a trivial quiz app which, in order to work, required access to certain user information, including, under the platform's rules at the time, data about their friends. For the sake of the app, people consented to the collection, and the rest is history. A fascinating story, because that information was sold on several continents, including Europe, where journalists also discovered it had been used to influence people for the benefit of the politicians campaigning for Britain's exit from the European Union, politicians who, unlike Ted Cruz, actually won.

Moreover, it was not only Ted Cruz's campaign crew who benefited from the political exploitation of the data provided, unknowingly, by quiz enthusiasts. As a former Cambridge Analytica employee would later testify, Donald Trump's presidential campaign also worked with the same data analytics firm. In fact, at the time Cambridge Analytica was collecting the data, Steve Bannon, who later became chief executive of Donald Trump's presidential campaign, sat on the company's board. From the information gathered from millions of Facebook profiles, the company built software capable of predicting and influencing voters through highly personalized political advertising.

Initially, Facebook denied responsibility, making sure everyone understood that there had been no security breach, that it was not the social network's fault that people gave away their data. Later, however, as the public began to understand the consequences of manipulating that data and Facebook's stock price plummeted, the company's CEO agreed not only to appear before the US Congress but also promised the general public that he would reform the system and give users much stronger privacy protection.

What and how much to censor

One of the public's most natural expectations is for social media to be censored more broadly, so that at least deliberately misleading false information can no longer circulate freely. That is what Alexandria Ocasio-Cortez, a member of the US House of Representatives, argued when Facebook's CEO was heard before a congressional committee. Mark Zuckerberg replied that the social network would censor information that could cause immediate harm.

For example, it would remove posts announcing a false election date meant to keep a certain segment of voters from exercising their right, a strategy already practised in election campaigns and known as "voter suppression". Zuckerberg acknowledged that this practice is a direct attack on democracy. Still, the CEO said that censorship would not mean removing politicians' posts containing fake information, because that would limit freedom of expression, which also includes the right of Facebook users to see that a politician is lying.

The episode illustrates the debate taking place in rich and relatively digitally literate societies: public opinion is becoming aware of the power of the new communication tools to amplify propaganda to a level the greatest political strategists did not even dream of just ten years ago. Not only can the propaganda message now reach a staggering number of people, it can also penetrate their intimacy and tailor itself to the target's psychological profile, something no propagandist handing out flyers on the street could have hoped for.

Public opinion, awakened to the reality of this force, pressured the company to take action. Facebook, however, did not intervene, and that lack of intervention further entrenched the changes we are talking about: not only does it leave them undisturbed, it also indirectly legitimizes them.

Asia, a different place, same story

Facebook reaches not only rich and relatively digitally literate societies but also poor countries with poorly educated citizens, such as Bangladesh, one of the Asian countries with the highest internet adoption, where government figures show that more than 50% of the population uses the internet.

A 2019 study published by the social media companies We Are Social and Hootsuite showed that Dhaka, the capital of Bangladesh, ranks third among the world's cities by number of active Facebook users. Yet its inhabitants, who are just beginning to discover the many benefits of the technology that connects us, are at the same time living through bloody conflicts caused by the way this technology can be used.

At the end of October 2019, hundreds of Muslims took to the streets of Borhanuddin (195 km from the capital of Bangladesh) to protest against a Facebook post mocking the Prophet Muhammad, allegedly written by a Hindu man. The protest turned violent, and four demonstrators lost their lives in clashes with the police. Police later announced that hackers had hijacked the young Hindu's account in order to orchestrate a confrontation between the two communities.

Earlier the same year, a rumour that human sacrifice was needed to build a bridge in Bangladesh, and that a group of criminals was therefore kidnapping children to be sacrificed, went viral on Facebook nationwide. Several people suspected of kidnapping children were beaten in the street, all because of a rumour.

We remain in Asia, where a United Nations analysis of the situation of the Rohingya minority in Myanmar found that "Facebook has been a useful instrument for those seeking to spread hate, in a context where, for most users, Facebook is the internet".

In such societies, it is totally unrealistic to expect the public to be the driving force that balances the transformative power of new communication technology. Atiqur Rahman, a media expert at Australia's Queensland University of Technology, put it eloquently: Facebook will only be able to combat the harms arising from use of the network if it sets up monitoring teams that take into account the culture of the society being monitored.

"Bangladesh's collective social structure meant it does not take much time for information to be spread widely," Rahman said. Analysts add that the government can also intervene, taking steps to educate social media users on how to protect themselves from rumours and how to use social networks responsibly.

The liability triangle does not include liars

Both in developed societies and in those with precarious economies, interaction through social networks requires a responsible approach on both sides. Different as these societies are, the scale at which the effects of network amplification are felt is, in essence, very similar. What is worse: that masses of voters are persuaded to exercise their right to vote on the basis of false information that enters the privacy of their environment through unknown "doors", or that masses of people are turned against each other by rumours that reach them through what looks like a source of trust and emancipation?

"Lying is bad," Mark Zuckerberg said in his exchange with Alexandria Ocasio-Cortez. The implication was that the technology that makes it possible to place lies exactly where they will benefit an election campaign most is not the one to blame. Technology is handled by people, and responsibility is shared among all those who use it: those who lie, those who run the network that amplifies the lie, and those who receive the message and choose to believe it. The proportions differ, however, and the parties with the greatest power remain the most responsible.

With their focus on personal life and psychology, social networks wield more power than they seem able to manage. Users, on the other hand, easily get lost in the technological maze, because the architecture of social networks is not transparent (the algorithms Facebook uses, for example, are unknown to the vast majority of its users).

If social networks were genuinely interested in the well-being of the societies they help connect, they could invest part of their more than obvious financial surplus in educating their users (in a culturally sensitive way, of course).

Governments, too, can protect their citizens through complementary education measures, but also by pressuring social media companies to contribute sustainably, not just superficially, to the well-being of societies. What is even more necessary is for all these measures to be expedited and conscientiously implemented. Otherwise, we are left with a ridiculous picture: a society that boasts of the speed of its technology but is only willing (or should we say able?) to adapt to it at a snail's pace.