
Ethics, the great forgotten issue of algorithms?

From morning to night, we are confronted with algorithms. But this exposure is not without danger. Their influence on our political opinions, our moods and our choices is proven. Far from being neutral, algorithms carry the value judgements of their developers, and most of the time we are subjected to them without being aware of it. It then becomes necessary to question their ethics, and to find remedies for the biases their users suffer.
 
What is Facebook's job? Or Twitter's? More generally, what is the job of a social network? The simplistic but nevertheless correct answer is this: selecting the information that will be presented on your wall, in order to make you spend as much time as possible there. Behind this time-consuming "news feed" lies a selection of content, advertising or not, optimized for each user by a large number of algorithms. Social networks use these algorithms to determine what will interest you most.
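To make the idea concrete, here is a minimal, purely hypothetical sketch (in Python) of what such a feed-selection algorithm might look like. The feature names and weights are invented for illustration; real platforms use far more elaborate models trained on behavioural data.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author_affinity: float  # how often the user interacts with this author (0 to 1)
    topic_match: float      # similarity to the user's inferred interests (0 to 1)
    recency: float          # freshness of the post (0 to 1, 1 = just published)
    is_ad: bool             # sponsored content or not

def engagement_score(post: Post) -> float:
    """Predicted value of showing this post, i.e. expected time spent on it."""
    score = 0.5 * post.author_affinity + 0.3 * post.topic_match + 0.2 * post.recency
    if post.is_ad:
        score *= 1.1  # hypothetical boost reflecting the advertising business model
    return score

def build_feed(posts: list[Post], k: int = 10) -> list[Post]:
    # The "news feed": the k posts most likely to keep the user on the platform.
    return sorted(posts, key=engagement_score, reverse=True)[:k]
```

The value judgement is already visible in the weights: whatever maximizes predicted engagement gets shown, whether or not it is what the user would reflectively choose to read.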
Without questioning the usefulness of these sites (they may even be what directed you to this article), the way they operate nevertheless raises serious ethical questions. Starting with this one: are all users aware of the weight these algorithms have on their perception of the news, on their opinions? And to go even further: what impact do these algorithms have on our lives, on our decisions?
 
For Christine Balagué, a researcher at Télécom École de Management and member of the Cerna (see the "The +" box at the end of the article), "the subject of the collection of personal data is known, but the processing of this data by algorithms much less so." While users are paying more attention to what they share on social networks, they still rarely wonder how the service they are using works. And this lack of knowledge does not concern only Facebook or Twitter. Algorithms are everywhere in our lives, present in our mobile applications and the web services we use. From morning to night we are confronted with choices, suggestions and information processed by algorithms: Netflix, Citymapper, Waze, Google, Uber, TripAdvisor, Airbnb...
Are your routes determined by Citymapper? Or by Waze? Our mobility is increasingly dependent on algorithms. Illustration: Diane Rottner for I'MTech.
 
"They run our lives" says Christine Balagué. "A sharply increasing number of papers by researchers from different fields are being published about the power that algorithms have on individuals. »
In 2015, Robert Epstein, a researcher at the American Institute for Behavioral Research and Technology, showed, among other things, how a search engine could affect the results of an election. His study, a survey of more than 4,000 individuals, determined that the ranking of candidates in search results influenced at least 20% of undecided voters.
Another striking example: a 2012 experiment conducted by Facebook on 700,000 users of its service showed that people previously exposed to negative publications posted mostly negative content. Likewise, those previously exposed to positive posts posted mostly positive content. This shows that algorithms can manipulate people's emotions without their knowledge. What place is left for our personal preferences in a system of algorithms we are not even aware of?

The dark side of algorithms

In this opacity lies one of the main ethical problems of algorithms. On a search engine like Google, two users making the same query will not get the same results. The explanation put forward by the service is personalization: tailoring the answers to better meet the expectations of each of the two individuals. But the mechanisms for selecting results are obscure. Among the parameters taken into account to determine which sites will be displayed on the page, more than a hundred concern the user making the query. Under cover of trade secrets, the exact nature of these personal parameters and how they are weighted by Google's algorithms remain unknown. It is hard to know how the company categorizes us, determines our interests, or predicts our behavior. And once this categorization is done, is it even possible to get out of it? How can we stay in control of the perception that the algorithm builds of us?
 
Because of this opacity, it is even impossible to know the biases that may arise from the processing of the data. Yet they do exist, and protecting against them is a real challenge for society. An example of the inequity of individuals in the face of algorithms? Work carried out by Grazia Cecere, an economist at Télécom École de Management, has highlighted discrimination between men and women in the interest-matching algorithms of a major social network. "When we created an advertisement on STEM [science, technology, engineering, mathematics], we found that the software distributed it preferentially to men, even though women showed more interest in the subject," explains Grazia Cecere. Far from the myth of evil artificial intelligence, the origin of this kind of bias is to be found in human action. Too often forgotten, the presence of developers behind every line of code must be remembered.
 
Algorithms are first and foremost there to offer services, most often commercial ones. They are therefore part of a business strategy, which they will inevitably reflect in order to meet the economic expectations of the company. "Data scientists working on a project aim to optimize their algorithms without necessarily thinking about the ethical issues of the choices made by these programs," points out Christine Balagué.
In addition, humans have perceptions of society that they more or less consciously incorporate into the software they develop. The value judgment of an algorithm is then very often a value judgment made by its creators. Grazia Cecere sums up the bias highlighted by her work in a simple way: "The algorithm learns what it is asked to learn, and replicates stereotypes if they are not filtered out." A toy example of this mechanism is sketched below.
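Here is a deliberately simplified illustration, with invented numbers echoing the STEM advertisement case: an optimizer that targets whoever generated the most clicks in a historically skewed ad log will keep favouring men, even though women click at a higher rate when they actually see the ad.

```python
from collections import Counter

# Hypothetical, invented ad log: the STEM ad was historically shown
# to far more men than women, so raw click counts are skewed.
impressions = Counter({"men": 9000, "women": 1000})
clicks = Counter({"men": 450, "women": 80})

# A naive optimizer allocates the ad by share of total clicks...
click_share = {g: clicks[g] / sum(clicks.values()) for g in impressions}
# ...while the click *rate* tells the opposite story about interest.
click_rate = {g: clicks[g] / impressions[g] for g in impressions}

print(click_share)  # ~{'men': 0.85, 'women': 0.15} -> the ad keeps going to men
print(click_rate)   #  {'men': 0.05, 'women': 0.08} -> women are more interested
```

The algorithm has learned exactly what it was asked to learn, total clicks, and in doing so replicated the stereotype embedded in its training data.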
What biases are hidden in the digital tools we use every day? What value judgments inherited from algorithm developers are we facing? Illustration: Diane Rottner for I'MTech.
 
An emblematic example of this phenomenon concerns medical imaging. An algorithm classifying a cell as diseased or healthy has to be configured to make a trade-off between the number of false positives and the number of false negatives. In other words, developers must decide how tolerable it is to flag healthy people as positive in order not to miss sick people whose tests would come back negative. For doctors, it is better to have false positives than false negatives. For the scientists developing the algorithm, on the other hand, it is better to have false negatives than false positives, because scientific knowledge is cumulative. Depending on the values the developers defend, the algorithm will favour one profession's preference or the other's. The sketch below shows how this trade-off comes down to a single threshold.
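Concretely, the trade-off often reduces to choosing a decision threshold on the classifier's output score. The following sketch uses invented scores and labels to show how moving that single number shifts errors between the two camps.

```python
# Invented (model score, actually diseased?) pairs for ten cells.
samples = [(0.95, True), (0.80, True), (0.70, False), (0.60, True),
           (0.55, False), (0.40, True), (0.35, False), (0.20, False),
           (0.10, False), (0.05, False)]

def confusion(threshold: float) -> tuple[int, int]:
    """Count (false positives, false negatives) at a given decision threshold."""
    fp = sum(1 for score, diseased in samples if score >= threshold and not diseased)
    fn = sum(1 for score, diseased in samples if score < threshold and diseased)
    return fp, fn

# The doctors' preference: a low threshold misses no patient,
# at the price of flagging healthy people.
print(confusion(0.30))  # -> (3, 0)
# A stricter threshold raises fewer false alarms, but sick patients slip through.
print(confusion(0.65))  # -> (1, 2)
```

Neither threshold is "correct": the choice encodes whose cost the developers decided to minimize.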

Transparency? Yes, but not only that!

To combat these biases, one of the proposals is to make algorithms more transparent. Since October 2016, the Law for a Digital Republic, proposed by Axelle Lemaire, former Secretary of State for Digital Affairs, has imposed transparency on all public algorithms. It is thanks to this law, in particular, that the code of the website Admission post-bac (APB) has been made available to the public. Companies are also increasingly playing this card. Since 17 May 2017, Twitter has been offering its users the opportunity to see the interests it has assigned to them. However, despite these good intentions, transparency is hardly sufficient to guarantee ethics. First of all, the intelligibility of code is still often neglected: algorithms are sometimes delivered in formats that do not make them easy to read and understand, even for professionals. Secondly, transparency can be artificial. In the case of Twitter, "nothing is communicated about how interests are allocated," notes Christine Balagué.
 
Which publications led a given user to be classified under "Action and Adventure", a theme with a very broad meaning? What weighting do Twitter's algorithms apply between "Science News" and "Business and Finance" to display content in the user's news feed?
 
To go further, the degree of transparency of algorithms needs to be evaluated. This is the purpose of the TransAlgo initiative, launched by Axelle Lemaire and piloted by Inria. "It's a platform that measures transparency, looking at what data is used, what data is produced, how open the code is..." explains Christine Balagué, a member of TransAlgo's scientific board. The platform is the first of its kind in Europe, and makes France an advanced nation in the reflection on transparency.
In the same vein, DataIA is a data-science convergence institute, launched on the Saclay plateau for a period of ten years. This unique programme is interdisciplinary and includes research on artificial intelligence algorithms, their transparency and their ethical implications.
 
The objective, by bringing together multidisciplinary scientific teams, is to study the mechanisms of algorithm development. The social sciences and humanities have much to contribute to the analysis of the values and decisions behind code. "It is becoming increasingly necessary to dissect algorithmic methods, to reverse engineer them, to measure their potential biases and discriminations, and to make them more transparent," insists Christine Balagué. "More broadly, ethnographic research on developers is needed, immersing oneself in their intentions and studying the socio-technical assembly of algorithms."
As digital services become more and more important in our lives, it is essential to identify the risks that algorithms pose to their users.
 
Christine Balagué - I'MTech
The original version of this article appeared in I'MTech.
 

A public commission dedicated to digital ethics.
Since 2009, the major French players in digital research and innovation have been grouped together within Allistene (Alliance des sciences et technologies du numérique). In 2012, this alliance decided to set up a commission for reflection on the ethics of research in digital sciences and technologies: the Cerna. On the basis of multidisciplinary work, bringing together expertise and contributions from all digital stakeholders at both national and international levels, the Cerna questions the digital world on its ethical aspects. By taking as objects of study themes as varied as the environment, health, robotics or nanotechnologies, it aims to raise awareness and enlighten technology designers on ethical issues. Its reports can be downloaded from its website.

To go further:
- France Culture broadcast "Does the digital make us numbers?", March 2017
 

