How are information and misinformation disseminated on digital social networks? Jens Koed Madsen, a researcher at the Complex Human-Environment Systems Simulation Group (Cohesys) at the University of Oxford, explained the main cognitive biases introduced by users and by algorithms in an interview with Fondation Hirondelle's publication "Mediation", on the theme "Informing despite social networks".
In just over a decade, social networks have become a global vehicle for disseminating information, and also a privileged place for spreading false or hateful messages. Caroline Vuillemin, Executive Director of Fondation Hirondelle, explains: "Since their advent in the mid-2000s, social networks had become a global media reality by the end of the 2010s: 58% of the world's population over the age of 13 uses at least one social network. In countries where information is a market, 36% of users of Facebook, the world's leading social network with nearly 2.3 billion subscribers, consult their accounts for news. In emerging countries facing new forms of censorship, such as Brazil or Turkey, the WhatsApp mobile application, owned by Facebook, is increasingly used by groups of several thousand people to share information on subjects of common interest. The success of these platforms is largely due to the media democratization they carry: anyone can produce and disseminate information without passing through the filter of a recognized media outlet or institution."
But the years 2016 to 2018 saw growing global mistrust of social networks, linked to the political and social consequences of their massive use for disinformation. One emblematic case is that of Cambridge Analytica and the way the company used Facebook to promote Brexit and then the election of Donald Trump as President of the United States. In Burma, the army is accused of having created hundreds of Facebook pages to spread hatred of the Rohingya Muslim minority, several thousand of whom were massacred and more than 700,000 expelled to Bangladesh in a process that the UN Human Rights Council has called genocide.
Challenged on these subjects by public opinion and governments, the companies that own these social networks propose technical self-regulation measures on their algorithms, or editorial measures on their publication rules. These measures remain unclear and struggle to convince of their good faith and effectiveness. Governments, for their part, do not dare to legally constrain these platforms, which are a source of public interest and economic development: World Bank and International Telecommunication Union statistics estimate that by 2025, 77% of the world's population will be regularly connected to social networks and the Internet (compared with about half in 2019). In this context, how can news media stand out on social networks?
Jens Koed Madsen (1), a researcher at the Complex Human-Environment Systems Simulation Group (Cohesys) at the University of Oxford, answered questions in an interview published on January 14, 2020 in the 4th issue of Fondation Hirondelle's biannual publication "Mediation", entitled "Informing despite social networks".
Would you say that the increased use of social networks over the last ten years has introduced confusion into what can be considered reliable information?
Jens Koed Madsen: Social networks are an increasingly important source of information for most citizens. This has fundamentally changed our information structures, since traditional media exercise editorial control. We have moved from "top-down" mass media to a landscape of both top-down and bottom-up information sharing.
This has important benefits, as it increases citizens' participation in public discourse, allows them to speak out against powerful individuals or social entities, and facilitates the reporting of wrongdoing (for example, social networks have given #MeToo more impact and reach). But it also has serious drawbacks: it makes it easy to generate false or misleading accounts, and it obscures accountability (it is difficult to know where a rumour or piece of misinformation starts).
Given the ease with which false accounts and misinformation can be created, it is not surprising that many people are finding it increasingly difficult to know what is credible and what is not. We need a better understanding of how information flows so that we can design social networks that protect citizens from deliberate misinformation while preserving their freedom of expression.
How do social networks work psychologically? Can you give some examples of the mental biases they promote?
JKM: Psychology has identified many biases in the way we search for and process the information we get from social networks. Two stand out: "confirmation bias" and the "continued influence effect". Confirmation bias is our tendency to seek out, interpret and recall information that confirms our existing beliefs. As the amount of data on social networks increases, it becomes easier for every citizen to find information that confirms his or her beliefs. The continued influence effect shows that information initially presented as true continues to influence what people think even after they have seen corrections that they find clear and credible. In other words, even when misinformation is corrected, it can continue to cause damage. Those who spread misinformation on social networks can exploit these biases.
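Confirmation bias as selective exposure can be illustrated with a toy simulation (a hypothetical sketch for this article, not a model from Madsen's research; the function name, parameters and numbers are all assumptions): an agent reads a perfectly balanced stream of posts but sometimes skips those that contradict its current leaning, and ends up with a strongly one-sided tally of the evidence it has actually seen.

```python
import random

def biased_belief(bias, n_items=1000, seed=0):
    """Toy model of selective exposure. The agent reads n_items posts,
    drawn 50/50 for and against a claim. With probability `bias` it
    skips a post that contradicts its current leaning, then updates a
    running tally of the evidence it actually read."""
    rng = random.Random(seed)
    support = 0  # net confirming-minus-disconfirming items read
    for _ in range(n_items):
        item = rng.choice([+1, -1])           # balanced evidence stream
        leaning = +1 if support >= 0 else -1  # current leaning
        if item != leaning and rng.random() < bias:
            continue                          # contradicting post skipped
        support += item
    return support
```

With `bias=0.0` the tally is an unbiased random walk that stays near zero; with `bias=0.7` the agent drifts to a large one-sided tally even though the underlying stream was balanced, which is the self-reinforcing pattern the interview describes.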
The structure of the network also influences the spread of correct and incorrect information. Social networks are dynamic systems in which people follow and unfollow each other, and in which the underlying algorithms amplify or hide content. What users see depends on how those algorithms are designed. For example, a company may decide to promote divisive statements (if they generate more user activity), which can contribute to polarizing debate and spreading misinformation. In one study, we showed that "echo chambers" can emerge from the structure of the network alone, even under conditions where individual people are not strongly biased.
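That echo-chamber result can be made concrete with a minimal agent-based sketch in Python (an illustrative bounded-confidence model under assumed parameters, not the actual model from the study cited): agents hold opinions in [0, 1], average with like-minded contacts, and occasionally unfollow dissimilar ones. The average opinion gap across follow links then falls well below the roughly 1/3 expected for random pairings, without any agent being strongly biased.

```python
import random

def simulate_echo_chambers(n=60, k=6, tolerance=0.4, rewire_p=0.3,
                           steps=2000, seed=1):
    """Toy agent-based model: agents average opinions with neighbours
    within `tolerance`, and sometimes unfollow dissimilar neighbours
    in favour of more similar strangers (the 'structure' effect)."""
    rng = random.Random(seed)
    opinions = [rng.random() for _ in range(n)]
    # Each agent starts by following k random others.
    following = [set(rng.sample([j for j in range(n) if j != i], k))
                 for i in range(n)]
    for _ in range(steps):
        i = rng.randrange(n)
        j = rng.choice(list(following[i]))
        if abs(opinions[i] - opinions[j]) < tolerance:
            # Mild social influence: move toward the similar neighbour.
            opinions[i] += 0.5 * (opinions[j] - opinions[i])
        elif rng.random() < rewire_p:
            # Unfollow the dissimilar neighbour; follow someone closer.
            following[i].discard(j)
            candidates = [c for c in range(n)
                          if c != i and c not in following[i]]
            new = min(rng.sample(candidates, 5),
                      key=lambda c: abs(opinions[c] - opinions[i]))
            following[i].add(new)
    return opinions, following

def mean_neighbour_gap(opinions, following):
    """Average opinion distance between agents and those they follow."""
    gaps = [abs(opinions[i] - opinions[j])
            for i in range(len(opinions)) for j in following[i]]
    return sum(gaps) / len(gaps)
```

Running `simulate_echo_chambers()` and then `mean_neighbour_gap()` shows follow links becoming homophilous: agents end up mostly connected to others who already think like them, even though each individual rule here is mild.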
We need to understand the psychology of citizens, the structure of the network, and how people interact with each other on these platforms, because all of these elements influence how misinformation can spread. It's not enough to just understand the biases of the users. This would place excessive responsibility on them and undermine the role of system design and interactivity.
If you were a media publisher, how would you use social networks to make your media recognized as reliable sources of information?
To fight against disinformation and to favour analyses that decipher the news, join the circle of UP' subscribers.
JKM: As information systems have also become bottom-up, the number of people producing content has increased. This puts pressure on the media, as they risk being put on an equal footing with any other entity that provides opinions or news, such as citizens, "bots" or politicians. To establish credibility on social networks, the media need to distinguish themselves from contributors who convey opinions or propaganda.
Since many opinions and assertions on social networks have little or no basis in fact, news media can distinguish themselves by highlighting their sources, clearly explaining the reasoning that leads to a particular conclusion or assertion, and challenging hearsay and conjecture. By emphasizing critical, in-depth and informed journalism, news media can set their content apart on social networks. They would also do well to stop merely relaying trends from those same social networks.
Do you think that social networks should be more regulated? If so, what should be done to prevent their use to spread misinformation?
JKM: Any country with defamation laws, consumer protection agencies or sanctions for incitement to hatred imposes restrictions on what can and cannot be said. Given the increasing complexity of information systems where anyone can participate (including malicious actors), it is crucial to consider how speech can (or should) be regulated on social networks. In particular, regulatory frameworks, drawn up with the participation of citizens, journalists, regulators and access providers, should seek to limit the deliberate dissemination of disinformation without punishing citizens who spread it accidentally.
Regulation can also be done through fact-checking, algorithmic promotion of reliable media sources, and so on. However, we do not know how ordinary citizens, misinformation providers and social networks themselves will adapt to regulatory interventions. For example, will citizens switch to competing social networks if a network decides to impose standards? Until we understand in detail the complex web of horizontal communication on social networks, the solutions proposed by politicians, media, experts and social networks themselves will remain superficial.
(1) Author of the book "The Psychology of Micro-Targeted Election Campaigns", Palgrave Macmillan, 2019.
This book examines the psychology behind the micro-targeted tactics used in election campaigns and the advent of increasingly sophisticated dynamic agent-based models (ABMs). It discusses individual profiling, how data and modeling are deployed to improve the effectiveness of persuasion and mobilization efforts in campaigns, and the potential limitations of these approaches. In particular, Madsen explores how psychological knowledge and personal data are used to generate individualized models of voters and how these in turn are applied to optimize persuasion strategies tailored to a specific individual.
Finally, the book examines the broader democratic dilemmas raised by the introduction of these tactics into politics and the critical civic importance of understanding how these campaigns work. This timely book offers new perspectives to students and researchers in political psychology, philosophy, political marketing, media and communications.