
Digital natives, digitally naive: life at the dawn of another revolution


In the disinformation epidemic, digital data can be used to the detriment of accurately informing the very people who unsuspectingly made it available to online platforms.

The generation born with the tablet and the smartphone in its hands, but which ends up being exploited by big data harvesters and steered through radicalization and polarization, could become the generation that ushers in anti-democratic movements.

Digital literacy, commendable as it is, is only one part of a solution that needs to be much broader. Digital skills alone cannot cope with the new challenges; on the contrary, they can become the very ammunition that propagates them. In 2018, a group of tech giants, including Google, Facebook, and Mozilla, voluntarily pledged to comply with the European Commission’s new standards for combating disinformation spread through fake news, signing the EU Code of Practice on Disinformation.

“This is the first time that the industry has agreed on a set of self-regulatory standards to fight disinformation worldwide, on a voluntary basis,” said Mariya Gabriel, European Commissioner for Digital Economy and Society. “The industry is committing to a wide range of actions, from transparency in political advertising to the closure of fake accounts and demonetisation of purveyors of disinformation,” the commissioner said.

A forum made up of several organizations interested in the issue of disinformation criticized the ratified document, saying that it “contains no common approach, no meaningful commitments, no measurable objectives or KPIs, no compliance or enforcement tools and hence no possibility to monitor the implementation process.”

Levels of uneasiness

Depending on how familiar we are with the subject of disinformation, it may have reached us on several levels. Most of us have already heard of the GDPR (the EU’s General Data Protection Regulation) and have a vague idea that our personal data is a bargaining chip for technology moguls. This is the first level, on which all the others are built.

We then learned that our data may escape the control of companies and fall into the hands of third parties, with or without the consent of the platforms to which we provided it. We know that this data, whether obtained fraudulently or not, can be used to the detriment of those who made it available to online platforms without suspecting any consequences. This is the second level.

However, the much less known and harder to understand level is that these online platforms have built into their very structure mechanisms through which they become harmful to society. I am not referring here to features such as automatic scrolling, which is said to have been designed specifically to be addictive to users, but to something much more fundamental to the functioning of all major online networks: algorithms.

The network we all love to hate, Facebook, was in the spotlight in 2018, when the company’s CEO, Mark Zuckerberg, testified before the US Senate and House of Representatives. Analysts expected the tech mogul to be “grilled” by politicians, but that was not the case. The hearings showed not only that it is easy to shake off accusations when you are an economic force (something analysts did notice), but also how different the CEO’s perspective is from that of a regular user (which should have made users ask themselves why their network of “friends” is a technology company rather than a communication one).

Zuckerberg said back then that Facebook is not a media company, even though it hosts and produces content. Of course, if that were the whole story, no one would be discussing fake news today, a phenomenon that has thrived largely because of this network; some have even gone so far as to say that the President of the United States himself would not have been elected if his campaign team had not had Facebook at its disposal to spread fake news.

However, Mark Zuckerberg did not lie during the hearing. Facebook really is a technology company. Its structure is meant to disseminate and collect data simultaneously. It is not one built just to disseminate information, as a simple channel, although it also does that. This is the starting point of the fake news phenomenon: the existence not so much of the good old propaganda, but of a media tool that can take it with maximum efficiency to the ends of the Earth. Until Facebook developed this data collection technology infrastructure, no other media architecture had such informational power.

Of course, every user today knows that Facebook collects information about users, and for some, this is a reason to share as little personal information on the network as possible. At least that’s what they think.

In a document compiled by Facebook employees for a potential advertising client, the network’s representatives boasted that they could identify when teenagers, the target audience of that advertiser, feel “insecure”, “worthless”, and when they “need a confidence boost.”

We might imagine that the network obtains this information from teenagers who enter their real age when creating an account and who use the “feeling” feature of their posts to record their emotions. But we would be wrong. In reality, the network does not extract this information by tracking the appearance of these precise words, although the document showed that it can monitor statuses and photos posted in real time.

Facebook extracts this information through an automated process that infers users’ attributes. In other words, with the help of artificial intelligence, the network can derive the data it needs through a process that closely resembles the logical deduction a person would use.

Robots that know us better than humans

The artificial intelligence systems in which Facebook invests are powerful and process an unimaginable amount of information about users. Most users cannot even imagine what this means, especially since even among the most educated there are those who believe that this processing follows the “old school” of programming: a person teaches a computer to extract a certain type of information by dictating rules to it (“if X…, then Y…”), and the computer then returns the filtered information. However, what artificial intelligence is capable of today has far exceeded that level.
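The difference between the two approaches can be sketched in a few lines of Python. This is an illustration only, not Facebook’s actual code: the toy posts, labels, and keywords below are invented. The first function applies rules dictated by a person; the second model is shown only examples and derives its own statistical “rules” from them.

```python
# Hypothetical illustration, not Facebook's code: the "old school" approach
# applies rules a person has dictated in advance ("if X..., then Y...").
def rule_based_flag(post_text: str) -> bool:
    keywords = {"insecure", "worthless"}              # rules written by a human
    return any(word in post_text.lower() for word in keywords)

# A learned model, by contrast, is only shown examples and labels and
# derives its own internal "rules" from the statistics of the data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = ["nobody likes my photos", "great day at the beach",
         "I can't do anything right", "new phone, who dis"]
labels = [1, 0, 1, 0]                                 # 1 = toy "needs a confidence boost" label

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(posts, labels)                              # the machine infers the pattern itself

print(rule_based_flag("nobody can do anything right"))   # False: no dictated keyword appears
print(model.predict(["nobody can do anything right"]))   # likely [1], from learned word statistics
```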

Artificial intelligence is capable of developing entirely new ways of probing huge amounts of information to extract relevant data. AI systems can learn these new ways of structuring big data with or without human help, through what is called deep learning.

Deep learning is a machine-learning architecture inspired by the patterns of information processing in the biological nervous system, in other words, by the brain. The technology analyses new information in much greater detail, can memorize past actions, and can learn from its own inferences. It is already being used, for instance, for voice and handwriting recognition, for detecting spam and various kinds of electronic fraud, and for predicting trends and outcomes.
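To make the idea concrete, here is a minimal deep-learning sketch in Python using Keras. It is an assumption of convenience: the layer sizes and the toy “spam” features are invented for illustration and have nothing to do with any real filter.

```python
# Minimal illustrative sketch, not a production spam filter: a tiny neural
# network learns to separate "spam" from "not spam" from labelled examples.
import numpy as np
from tensorflow import keras

# Toy features per message: [contains "winner", contains "free", length / 100]
X = np.array([[1, 1, 0.2], [0, 0, 0.9], [1, 0, 0.1], [0, 1, 0.8]], dtype="float32")
y = np.array([1, 0, 1, 0], dtype="float32")          # 1 = spam

model = keras.Sequential([
    keras.Input(shape=(3,)),
    keras.layers.Dense(8, activation="relu"),        # hidden layer, loosely "inspired by" neurons
    keras.layers.Dense(1, activation="sigmoid"),     # outputs a probability of spam
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(X, y, epochs=200, verbose=0)               # the network adjusts its own weights

print(model.predict(np.array([[1, 1, 0.15]], dtype="float32")))  # a high score suggests spam
```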

Given the huge amount of information that Facebook has about its users, it’s hard to say how much the network really knows about us. Moreover, as artificial intelligence continues to analyse a growing amount of big data, it is realistic to say that not even Facebook knows yet what kind of information it will find out about its users.

The impact that this information hegemony can have on society should alarm us more than the risk of an authoritarian political system being established on the basis of population surveillance. Although technically possible, such a development is not yet a legal reality. (China could become the leading exponent of such a system.)

Nevertheless, big data processing and the sale of information, based on consent, is legal and is happening right now as you are reading this article. Facebook is a technology company that makes a living by selling the data it collects from users who are willing to provide it. (That’s why the network isn’t very interested in the content that spreads through its channels, as long as that content keeps users’ attention and keeps them coming back).

The problem with this is not just the annoying advertisements that bombard us every day. For Facebook, promoting a political candidate, an idea with a social impact, or a product is structurally the same thing. The same algorithms will be used regardless of whether the advertising client has to sell a drill, a candidate, or an anti-vaccine appeal.

What are the consequences?

Journalists Joshua Green and Sasha Issenberg opened a real Pandora’s box on the consequences of this way of doing business when they published an investigation into Donald Trump’s election campaign in Bloomberg. The claim that Donald Trump won the election with the help of fake news is already well known; less known is the help the campaign received from so-called “dark posts.”

Dark posting is a form of audience hyper-targeting that Facebook has been offering for years. Hyper-targeting is an extremely precise segmentation of the audience, according to criteria so specific (and so emotionally charged) that they would make anyone who does not work in advertising or political marketing, and who is not accustomed to such a categorization of the public, blush with shame. (Just as Facebook itself should have blushed when it emerged that it offered advertisers the option of targeting people who hate Jews.)
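A rough sketch of what such hyper-targeting amounts to in practice might look like the snippet below. The field names, the “inferred_mood” attribute, and the matching function are all invented for illustration; this is not Facebook’s advertising API.

```python
# Hypothetical sketch: hyper-targeting as filtering a user database by very
# specific, partly inferred attributes chosen by the advertiser.
users = [
    {"id": 1, "age": 17, "region": "Midwest", "inferred_mood": "insecure",  "interests": ["gaming"]},
    {"id": 2, "age": 34, "region": "South",   "inferred_mood": "confident", "interests": ["politics"]},
    {"id": 3, "age": 19, "region": "Midwest", "inferred_mood": "worthless", "interests": ["fashion"]},
]

def hyper_target(users, **criteria):
    """Return only the users matching every advertiser-chosen criterion."""
    def matches(user):
        return all(
            user.get(key) in value if isinstance(value, (list, set, tuple)) else user.get(key) == value
            for key, value in criteria.items()
        )
    return [u for u in users if matches(u)]

# The advertiser never sees these individuals; it only buys the segment.
segment = hyper_target(users, region="Midwest", inferred_mood=("insecure", "worthless"))
print([u["id"] for u in segment])   # -> [1, 3]: the ad reaches only them, invisibly to everyone else
```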

The Bloomberg investigation brought the discussion to the fore when it showed that the former president’s campaign team had precisely targeted segments of African-American Facebook users, to whom they distributed messages meant to discourage them from voting.

Messages conveying that the vote of people of colour is negligible reached African Americans even in the places where their vote mattered most. This strategy was by no means illegal. The campaign staff simply used an advertising tool invented for exactly the purpose for which it was used: to reach highly targeted audiences and sell them something.

The opposing campaign team could not have countered such a message even if it had been aware of it, because neither it nor Facebook could have guaranteed that the counter-message would reach exactly the same people who saw the original. The question did not even arise, though, because that audience was hidden from the opponent’s eyes.

In the midst of the scandal about the targeting of African Americans, Facebook responded by launching the “Info and Ads” button, available on any official Page, which shows all the campaigns that a particular page is running at the time it is consulted. The network also promised to design an archiving system that would allow those interested to check the history of all of a page’s campaigns. But once the scandal faded, so did Facebook’s promise.

The political outcome of such manoeuvres is worrying, but what is even more disturbing is the overall effect. This type of advertising is, of course, a form of mass advertising. However, unlike the advertising we all knew before, hyper-targeting makes advertising happen at scale, but in secret.

We are thus witnessing a mutation: a shift from the traditional public sphere, in which opinions meet in plain sight in a public marketplace of ideas, to an algorithmic and hyper-fragmented public sphere managed through artificial intelligence. Why is this a bad thing? Because the algorithms that make online networks thrive economically are the ones that take advantage of the polarization of opinions.

An experiment has already shown that YouTube is the most influential radicalization tool in Westernized societies, because its mechanism for capturing attention, the recommendation system, gradually feeds users ever more radical and emotionally charged content, since this type of content best maintains attention.
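The general mechanism can be caricatured in a few lines: if a recommender ranks items purely by a single engagement metric, and emotionally charged items tend to score higher on that metric, they rise to the top of the list. The titles and “predicted watch time” numbers below are invented and do not describe YouTube’s actual system.

```python
# Toy illustration of engagement-only ranking, not a real recommender.
videos = [
    {"title": "Calm explainer on vaccines",        "predicted_watch_minutes": 4.2},
    {"title": "SHOCKING truth THEY hide from you", "predicted_watch_minutes": 9.7},
    {"title": "Local council meeting recap",       "predicted_watch_minutes": 2.1},
]

def recommend(videos, top_n=2):
    """Rank by the single metric the platform is optimised for: predicted attention."""
    return sorted(videos, key=lambda v: v["predicted_watch_minutes"], reverse=True)[:top_n]

for video in recommend(videos):
    print(video["title"])   # the most emotionally charged item leads the list
```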

The scale effect of an algorithmic public sphere is an algorithmic culture. In other words, mass discussions are manipulated by algorithms that decide what precisely chosen crowds of people are exposed to. Since these algorithms promote, by their nature, the spread of outrage-inducing and radical messages, we can imagine the effect of users voluntarily, yet unknowingly, submitting themselves to such exposure.

Over time, the pressure of online “echo chambers” will generate a polarization that undermines reason and cultivates division. And the individual, unaware that the machinery they are caught in is artificially constructed and controlled by the customer willing to pay the most for the distribution of a message, has very few real ways of contributing to the regulation of the public sphere.

A society in which the public sphere does not self-regulate, but is run by algorithms activated by a few who are able to “buy” them, is an algorithmic society, not a democratic one.

It is such a society that Facebook promotes worldwide, in the name of freedom of expression and of “connecting people around the globe.”

We cannot expect technology moguls to do us a favour and make sure things do not degenerate. We cannot even expect journalists to secure quality information. That is no longer possible today, when one of the great innovations that ensure the economic success of online social networks, hyper-targeting, is changing the rules of the communication game as it goes.

Society needs to develop cultural antibodies against the disinformation epidemic, and for that it needs innovation at every level: organizational, social, educational, and legislative. The place most likely to deliver this innovation is academia, because that is where the freshest forces come from: people who are independent of corrupt systems, concerned with the bigger picture, and who still believe that responsible strategic thinking can change the future.

I was honoured by the invitation of the National School of Political and Administrative Studies (SNSPA) to participate as a speaker in the Romanian-American workshop “Fake News and Information Disorders in the Digital Age”, an event organized in partnership with the SNSPA Centre for Communication Research and the Cox International Centre at the University of Georgia. The material above is an expanded and web-adapted version of my presentation.
