
by Benoit Rottembourg (Inria)

The algorithms of large digital platforms massively influence our daily lives, and their biases and unfair decisions are increasingly numerous and increasingly resented. The evolving European regulatory framework will give rise to new means of oversight, such as black-box auditing, which in turn raises new algorithmic challenges such as those we tackle in the Regalia project.

Figure 1: Algorithms watching algorithms. Image by Gerd Altmann from Pixabay.

The final vote in the European Parliament on July 5th, 2022, on two pieces of legislation, the Digital Services Act (DSA) and the Digital Markets Act (DMA), marks a turning point in the regulation of large digital platforms and their algorithms.

Three main trends have probably led to this evolution of European law:

  • The dominant position that certain major digital players, such as Google, Facebook and Amazon, have taken in their markets: together, they account for more than 50% of the global online advertising market (excluding China).
  • At the heart of these platforms' value proposition are algorithms, often based on artificial intelligence, which allow both efficient scaling and extreme personalisation of the service. This algorithmic omnipresence makes the work of regulation authorities both operationally delicate and legally complex.
  • A growing awareness among the general public of the sometimes opaque, even arbitrary or biased, nature of the decisions made by these algorithms. The feeling that these algorithms discriminate or behave unfairly is increasingly shared by public opinion. We can think, for example, of the scandal that affected Instagram when it was noted [L1] that images of "curvy" female models in bikinis were censored more frequently than equivalent images of other models.

Undoubtedly, this evolution of the regulatory framework will have both organisational and technical impacts. The regulatory authorities (European and then national) will see their own prerogatives evolve to meet this new need for algorithmic compliance. European Commissioner Thierry Breton [L2] recently announced that the European Digital Services Act will force tech giants to open the hood of their algorithms so that a committee of experts appointed by the Commission can analyse them.

Just as the banking sector had to in the 2000s and 2010s (with Basel II and Basel III, following the 2008 financial crisis), the major digital players will likely have to adapt their IT production processes to the new regulatory situation. Companies will have to carry out systematic audits of the major algorithms structuring customer interaction, such as recommendation, pricing or moderation algorithms. It should be noted that in the text of the DSA the word "audit" appears more than a hundred times [L3]. Even if the technical aspects of these audits are still to be defined by the regulation authorities, platforms should expect to give accredited experts and academics greater access to their algorithms and to the data that feeds them, without being able to invoke their general terms of use against it. These steps could go as far as the targeted certification of the algorithms by trusted third parties.

While it is unrealistic to expect exhaustive transparency of the algorithms used by players at the cutting edge of artificial intelligence and data science, it is reasonable to expect that sufficiently sophisticated black-box tests, piloted by experts, will be able to identify biases or unfair behavior. In any case, this is the direction of a growing body of research on the theme of "black box algorithm auditing".

Proving the existence – or absence – of a bias in an algorithm known only as a black box (i.e. through external querying) raises a set of difficult questions ([1], [2]). This is the core challenge of the Regalia project at Inria. We want to highlight here the most constraining characteristics of such a test (a minimal sketch follows the list):

  • First, the test carried out must be "conclusive": the queries made must be statistically representative and must reveal the real activity of the algorithm.
  • The queries must not be easily identified by the platform as fictitious behavior, which would allow it to adapt or modify its response.
  • The test must cover a sufficiently large behavioral space, across all possible queries, so that any significant harm is detected.
  • Finally, the test must be frugal, so as not to disrupt the platform and to keep computation times reasonable for the auditor.
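
As an illustration only, the sketch below shows what a minimal, statistically grounded black-box probe could look like. It assumes a hypothetical query_model() endpoint and a paired-testing design in which two profiles differ only in a sensitive attribute; the profile schema, group labels and decision semantics are invented for the example. It addresses mainly the first and last properties above (representative queries, modest query budget); evading detection and covering the full behavioral space require considerably more machinery.

```python
"""Minimal sketch of a black-box fairness probe, assuming a hypothetical
query_model() endpoint returning a binary decision (e.g. accept/reject)."""
import random
from math import sqrt

def query_model(profile):
    # Hypothetical stand-in for the audited black box: in a real audit this
    # would be an external API call. Here it is simulated with a small bias.
    base = 0.7 if profile["group"] == "A" else 0.6
    return 1 if random.random() < base else 0

def audit_disparity(n_queries=2000):
    # Paired testing: identical profiles except for the sensitive attribute,
    # drawn from a plausible distribution of ordinary traffic.
    counts = {"A": [0, 0], "B": [0, 0]}   # [positive decisions, total queries]
    for _ in range(n_queries):
        age = random.randint(18, 70)      # shared, non-sensitive feature
        for group in ("A", "B"):
            decision = query_model({"age": age, "group": group})
            counts[group][0] += decision
            counts[group][1] += 1
    pa = counts["A"][0] / counts["A"][1]
    pb = counts["B"][0] / counts["B"][1]
    # Two-proportion z-test on the acceptance-rate gap between the groups.
    p = (counts["A"][0] + counts["B"][0]) / (counts["A"][1] + counts["B"][1])
    se = sqrt(p * (1 - p) * (1 / counts["A"][1] + 1 / counts["B"][1]))
    return pa, pb, (pa - pb) / se

if __name__ == "__main__":
    pa, pb, z = audit_disparity()
    print(f"acceptance A={pa:.3f}, B={pb:.3f}, z={z:.2f}")
```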

It is therefore understandable that performing a black-box audit satisfying these properties requires advanced algorithmic skills as well as computing resources and human expertise. The Multi-Armed Bandit Problem, a famous mathematical problem of the 1950s and a classic benchmark for Reinforcement Learning algorithms, is among the conceptual frameworks for tackling such decision-making problems under uncertainty [3]. Amusingly, it is also used by the advertising recommendation algorithms of the online platforms themselves.
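
To make the bandit viewpoint concrete, the sketch below uses the classical UCB1 strategy to allocate a limited audit budget across user segments, concentrating probes where the bias signal looks strongest while keeping enough exploration to stay conclusive. This is a hedged illustration only: the segments, their bias rates and the probe() function are invented for the example, not part of any real audit protocol.

```python
"""Sketch of a bandit-style audit: each 'arm' is a segment of user profiles,
and the reward is whether one probe in that segment exposed a biased decision."""
import math
import random

SEGMENTS = ["teen", "senior", "rural", "urban"]   # illustrative audit segments

def probe(segment):
    # Hypothetical one-shot probe: 1 if this query exposed a biased decision,
    # 0 otherwise. A real audit would compare paired answers from the platform.
    bias_rate = {"teen": 0.05, "senior": 0.20, "rural": 0.10, "urban": 0.02}
    return 1 if random.random() < bias_rate[segment] else 0

def ucb_audit(budget=500):
    # UCB1: spend the limited query budget where the observed bias rate plus an
    # uncertainty bonus is highest, so the audit stays both frugal and thorough.
    pulls = {s: 0 for s in SEGMENTS}
    hits = {s: 0 for s in SEGMENTS}
    for s in SEGMENTS:                     # initialise with one probe per segment
        hits[s] += probe(s)
        pulls[s] += 1
    for t in range(len(SEGMENTS), budget):
        def ucb(s):
            return hits[s] / pulls[s] + math.sqrt(2 * math.log(t) / pulls[s])
        s = max(SEGMENTS, key=ucb)
        hits[s] += probe(s)
        pulls[s] += 1
    return {s: (pulls[s], hits[s] / pulls[s]) for s in SEGMENTS}

if __name__ == "__main__":
    for segment, (n, rate) in ucb_audit().items():
        print(f"{segment}: {n} probes, observed bias rate {rate:.2f}")
```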

Beyond the necessary engineering and research efforts, a critical mass of transdisciplinary expertise will have to be assembled to "lift the hood" of the algorithms of large platforms and provide ammunition to the regulatory authorities.

Links:
[L1] https://www.flare.com/news/instagram-censorship-fat-plus-size/
[L2] https://www.france24.com/en/live-news/20220422-eu-eyes-deal-to-tame-internet-wild-west
[L3] https://www.europarl.europa.eu/news/fr/press-room/20220412IPR27111/dsa-accord-sur-un-environnement-en-ligne-sur-et-transparent

References:
[1] E. Le Merrer, R. Pons, G. Trédan, “Algorithmic audits of algorithms, and the law”, 2022, arXiv:2203.03711.
[2] B. Ghosh, D. Basu, K. S. Meel, “Justicia: A Stochastic SAT Approach to Formally Verify Fairness”, in Proc. of the AAAI Conference on Artificial Intelligence, 2021.
[3] B. Rastegarpanah, K. Gummadi, M. Crovella, “Auditing Black-Box Prediction Models for Data Minimization Compliance”, in Proc. of NeurIPS, 2021.

Please contact:
Benoit Rottembourg, Inria, France
