In 2016 Donald Trump ran a combative campaign in which he more than once labelled the mainstream media the 'fake news media'. The term 'fake news' took on a life of its own and is now often used inappropriately.
“And too often incorrectly,” say researchers Dr. Kristin Van Damme, Glen Joris and Bart Vanhaelewyn, “which is an even bigger problem than fake news itself.” In the newsDNA project they investigate how people can come into contact with more diverse opinions and thus better assess what is true and what is not.
What challenge are we facing exactly?
Kristin Van Damme: “More information is available than ever before. This makes it difficult to know what is reliable and what is not. Do you hear enough alternative voices? Are they credible, or are they spreading incorrect information? Recognizing reliable information is a problem.”
Bart Vanhaelewyn: “That’s because more and more players, such as politicians and companies, communicate directly with their audience and thereby skip the traditional middleman: journalists. After all, they are the ones who are trained to check facts, to channel information and to provide the public with correct information. The responsibility for choosing the best sources therefore rests more and more with the public itself. Which sources are relevant? Which ones should you treat more critically? As an individual you cannot possibly know enough about everything to make a well-founded decision yourself.”
In this mass of information, is the term “fake news” often used incorrectly?
Bart: “We are used to putting everything in boxes: it is either right or wrong. We think there is only one reality. From there, it is easy to say that the other person’s opinion is “fake news,” as Trump invariably does. The reality is much more complex. Interpretations differ because of a different worldview and different experiences. Fake news is often just an easy way to stop a discussion when you have no more arguments.”
Kristin: “Fake news is deliberately spreading false information for personal or commercial gain, in a form that looks like news. This is not the same as what journalists report on the basis of incorrect information they’ve received. In that case we are talking about misinformation, or “news with incorrect information”, not fake news.”
It almost seems like a language issue, but does it have major consequences?
Kristin: “We can still expect journalists to only offer news when it has been confirmed by two sources. And in so-called “fact checks” they also verify claims that have been made directly to the public. By lumping everything together and consistently shouting “fake news”, you actually create the impression that journalists are out to deliberately misinform people. And that is how you diminish confidence in the news.”
So wrongly using the term "fake news" is a bigger problem than fake news itself?
Kristin: “Indeed. We see the confidence levels in traditional media continuing to decline year after year.”
Bart: “On the other hand, we really can’t ignore real fake news. There are plenty of examples in our country. When I see how certain messages are spread as news, with the intention of misleading people or promoting a certain ideology, then there really is a problem.”
You are working on a solution in newsDNA, an algorithm that tackles both things by informing people more broadly. What exactly is behind it?
Glen Joris: “You cannot expect people to be informed about everything, but you do have to bring them into contact with all the different opinions. There is currently a gap there, particularly because the current algorithms at news sites work on the basis of commercial criteria: popularity and interests. They do not take the news content into account. With newsDNA, we are investigating how people respond when they receive recommendations based on content. If you read a lot of left-leaning views, our system will also recommend opinions from the other end of the political spectrum.”
“The discussion about the closure of nuclear power plants is a good example. If you think they should close, you probably read articles that confirm that position. With commercial news algorithms, there is only a small chance that you’ll even be confronted with articles that discuss the opposite view. We want to do exactly that, even if it offends people or they immediately click away. The research project doesn’t just revolve around suggesting articles; we also give people insight into what they have read. How many articles have they read, for example, and in which fields of interest. It’s about raising awareness.”
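The interview doesn’t show the project’s actual recommender, but the idea Glen describes — ranking articles by content so that opposing viewpoints surface, rather than by popularity — can be sketched as follows. All article titles and leaning scores here are hypothetical, purely for illustration.

```python
# Hypothetical sketch of a diversity-aware recommender: each article carries
# a political-leaning score from -1.0 (left) to +1.0 (right). Instead of
# recommending only what matches the reader's history, we deliberately rank
# pieces far from the reader's own profile first.

def reader_leaning(history):
    """Average leaning of the articles a reader has already read."""
    return sum(a["leaning"] for a in history) / len(history)

def recommend(articles, history, n=2):
    """Rank unread articles by distance from the reader's own leaning,
    so opposing viewpoints come first (the opposite of a purely
    popularity-driven, commercial ranking)."""
    read_titles = {a["title"] for a in history}
    profile = reader_leaning(history)
    unread = [a for a in articles if a["title"] not in read_titles]
    return sorted(unread, key=lambda a: abs(a["leaning"] - profile),
                  reverse=True)[:n]

articles = [
    {"title": "Close the nuclear plants now", "leaning": -0.8},
    {"title": "Why nuclear power must stay", "leaning": 0.7},
    {"title": "Energy prices explained", "leaning": 0.0},
]
history = [{"title": "Close the nuclear plants now", "leaning": -0.8}]

for a in recommend(articles, history):
    print(a["title"])
```

For a reader whose history leans left, the pro-nuclear piece ranks first — exactly the confrontation with the opposite view described above.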
Bart: “When people are confronted with different opinions, they start to realize that not every other opinion is fake news. In this way you also counter the misuse of that term. You cannot approach a problem from just one point of view. There are always multiple opinions and possible ways of looking at something.”
How can you measure such a thing?
Kristin: “The biggest problem we have is deciding what ‘diversity’ is. How do you measure that someone has read in a more diverse way than before? That’s more complex than we thought. And yet it sounds so simple (laughs).”
Glen: “You have to specify what left-leaning and what right-leaning articles are. You can do this on the basis of content, but tone of voice also plays a role. What terminology does the journalist use? We receive texts from all the major media groups, which we collect on our own platform. In the beginning, the labeling is done manually, but in the long term it will happen automatically, via machine learning. We will soon start testing the algorithm. At worst, readers will resist it, stick to their own interests, and make choices other than the ones we suggest.”
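The interview doesn’t describe the classifier itself, so the following is an illustration only: hand-labeled articles bootstrap a crude automatic labeler by vocabulary overlap. The labels and example sentences are invented; a real system would use a properly trained text classifier, as the project plans to.

```python
from collections import Counter

# Hypothetical sketch: manually labeled articles ("left"/"right") provide
# per-word counts; a new article is labeled by whichever side's training
# vocabulary it overlaps with most. This stands in for the manual-then-
# machine-learning labeling pipeline mentioned in the interview.

labeled = [
    ("higher taxes on wealth fund public services", "left"),
    ("unions demand stronger worker protections", "left"),
    ("cut taxes to boost business and growth", "right"),
    ("deregulation will free the market", "right"),
]

word_counts = {"left": Counter(), "right": Counter()}
for text, label in labeled:
    word_counts[label].update(text.split())

def classify(text):
    """Score a text by overlap with each side's training vocabulary
    and return the label with the highest score."""
    words = text.split()
    scores = {label: sum(counts[w] for w in words)
              for label, counts in word_counts.items()}
    return max(scores, key=scores.get)

print(classify("government should cut business taxes"))
```

Even this toy version shows why tone and terminology matter: the word choices in an article, not just its topic, drive the label it receives.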
The basis of the algorithm is determined by a person … that must present you with difficult choices?
Glen: “Absolutely. We have defined what diversity is, but how far do you go in political positions, for example? Do you give every political party an equal chance to be featured? Or do you give larger parties — with more seats in parliament — a higher weight? You should not underestimate such choices. The media groups will also have to be willing to adopt the algorithm. Commercial interests play a role. We will have to find a balance between recommending articles that match the reader’s beliefs and articles with different opinions.”
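That design choice — equal exposure per party versus exposure weighted by parliamentary seats — can be made concrete with a small sketch. The party names and seat counts are hypothetical, and this is one possible allocation scheme (largest-remainder rounding), not the project’s actual method.

```python
# Hypothetical sketch: distribute a fixed number of recommendation slots
# across parties, either equally or in proportion to their parliamentary
# seats, using largest-remainder rounding so the slots sum up exactly.

def allocate_slots(seats, total_slots, proportional=True):
    """Return {party: slots}. With proportional=False every party gets
    (roughly) the same exposure regardless of its size."""
    weights = seats if proportional else {p: 1 for p in seats}
    total_weight = sum(weights.values())
    exact = {p: total_slots * w / total_weight for p, w in weights.items()}
    slots = {p: int(x) for p, x in exact.items()}
    leftover = total_slots - sum(slots.values())
    # hand the remaining slots to the largest fractional remainders
    for p in sorted(exact, key=lambda p: exact[p] - slots[p],
                    reverse=True)[:leftover]:
        slots[p] += 1
    return slots

seats = {"Party A": 30, "Party B": 20, "Party C": 10}
print(allocate_slots(seats, 12))                      # weighted by seats
print(allocate_slots(seats, 12, proportional=False))  # equal exposure
```

With seat weighting, the largest party gets three times the exposure of the smallest; with equal weighting, every party appears equally often — the trade-off Glen points to.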