by Andreas Rauber (TU Wien and SBA), Roberto Trasarti, Fosca Giannotti (ISTI-CNR)

The past decade has seen increasing deployment of powerful automated decision-making systems in settings ranging from smile detection on mobile phone cameras to control of safety-critical systems. While evidently powerful in solving complex tasks, these systems are typically completely opaque, i.e. they provide hardly any mechanisms to explore and understand their behaviour and the reasons underlying their decisions. This opaqueness raises numerous legal, ethical and practical concerns, which have led to initiatives and recommendations on how to address these problems, calling for greater scrutiny in the deployment of automated decision-making systems. Clearly, joint efforts are required across technical, legal, sociological and ethical domains to address these increasingly pressing issues.

Machines are becoming increasingly responsible for decisions within society. Various groups, including professional societies in the area of information technologies, data analytics researchers, industry, and the general public, are realising the power and potential of these technologies, accompanied by a sense of unease and an awareness of their potential dangers. Algorithms acting as black boxes make decisions that we are keen to follow, as they frequently prove to be correct. Yet no system is foolproof. Errors do and will occur, no matter how much the underlying systems mature and improve due to ever more data being available to them and ever more powerful algorithms learning from them.

This awareness has given rise to many initiatives aiming at mitigating this black-box problem and at understanding the reasons for decisions taken by such systems. These include the "ACM Statement on Algorithmic Transparency and Accountability" [1], Informatics Europe's "European Recommendations on Machine-Learned Automated Decision Making" [2] and the EU's General Data Protection Regulation (GDPR) [3], which introduces, to some extent, a right for all individuals to obtain "meaningful explanations of the logic involved" when automated decision making takes place. One of the first activities of the newly established European Commission's High Level Expert Group on Artificial Intelligence (HLEG-AI) has been to prepare a draft report on ethics guidelines for trustworthy AI [4]. These documents pave the way towards a more transparent and sustainable deployment of machine-driven decision making systems. On the other hand, recent studies have shown that models learning from data can be attacked with purposely generated adversarial data to make them produce wrong decisions. In spite of a surge in R&D activity in this domain [5], major challenges remain in ensuring that automated decision making can be deployed accountably and that the resulting systems can be trusted.

The articles collated in this special theme provide an overview of the range of activities in this domain. They discuss the steps taken and methods being explored to make AI systems more comprehensible. They also show the limitations of current approaches, specifically once we leave the domain of analysing visual information. While visualisations of deep learning networks for image analysis in the form of heat maps, attention maps and the like [6,7] have greatly helped in understanding and interpreting which regions are most relevant to an image classification, other domains frequently resort to extracting rules as surrogates for, or explanations of, more complex machine learning models. While such rules are, in principle, fully transparent individually, their number and complexity frequently render them unusable for understanding the overall decision making.
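
As a concrete illustration of the surrogate idea (our own sketch, not taken from any of the contributions; the dataset, model choices and fidelity measure are assumptions), a shallow decision tree can be trained to mimic the predictions of an opaque model, and its agreement with the black box (its "fidelity") can then be quantified:

# Illustrative sketch: approximate a black-box classifier with a shallow
# decision tree acting as a global surrogate, then measure how faithfully
# the surrogate reproduces the black-box decisions ("fidelity").
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(data.data, data.target, random_state=0)

# The "black box": accurate, but hard to inspect directly.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Global surrogate: a small tree trained on the black box's predictions, not the true labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))

# Fidelity: how often the surrogate agrees with the black box on unseen data.
fidelity = accuracy_score(black_box.predict(X_test), surrogate.predict(X_test))
print(f"surrogate fidelity to the black box: {fidelity:.2f}")
print(export_text(surrogate, feature_names=list(data.feature_names)))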

Perhaps the central question in the field is: what is an explanation? That this question is still open illustrates how young the research topic is. As yet there is no agreed formalism for an explanation, nor a way to quantify how comprehensible an explanation is to humans. The works that follow are pioneers in this area, creating fertile ground for innovation.

Ricardo Guidotti and his colleagues provide a very good introduction to the black-box problem and to approaches for mitigating it. They also propose breaking explanations into two levels: a local explanation at the level of individual data instances, and a global explanation obtained by synthesising the local explanations and optimising them for simplicity and fidelity.
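
To make the local level concrete, the following hand-rolled sketch (our own illustration; the sampling scheme, proximity kernel and function names are assumptions and do not reproduce the method of the article) perturbs a single instance, queries the black box on the resulting neighbourhood, and fits a weighted linear surrogate whose coefficients serve as the local, per-feature explanation:

# Minimal local-explanation sketch: perturb an instance, query the black box,
# and fit a simple weighted linear model around that instance.
import numpy as np
from sklearn.linear_model import Ridge

def explain_locally(black_box_predict_proba, x, n_samples=500, scale=0.3, seed=0):
    rng = np.random.default_rng(seed)
    # 1. Sample perturbed neighbours of the instance x.
    neighbours = x + rng.normal(0.0, scale, size=(n_samples, x.shape[0]))
    # 2. Query the black box on the neighbourhood.
    targets = black_box_predict_proba(neighbours)[:, 1]
    # 3. Weight neighbours by their proximity to x (closer points matter more).
    distances = np.linalg.norm(neighbours - x, axis=1)
    weights = np.exp(-(distances ** 2) / (2 * scale ** 2))
    # 4. Fit an interpretable linear surrogate; its coefficients are the
    #    local explanation (one weight per feature).
    local_model = Ridge(alpha=1.0).fit(neighbours, targets, sample_weight=weights)
    return local_model.coef_

# Usage (reusing the black box and test data from the previous sketch):
# local_weights = explain_locally(black_box.predict_proba, X_test[0])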

On the other hand, as research has pointed out over decades, even human experts find it hard or impossible to provide clear, transparent explanations for their decisions. In his paper, Fabrizio Falchi highlights the lessons to be learned from psychology about intuition, and how these can help improve our understanding of deep learning algorithms.

Two articles delve deeper into the sources and effects of concerns about the lack of transparency, beyond the complexity of the underlying machine learning models. Alina Sîrbu and colleagues review the process of opinion formation in society and how the "bias" introduced by selecting which information is delivered to users influences the outcome of public debate and consensus building. Maliciously manipulated data provided as adversarial input to machine learning algorithms are reviewed by Fabio Carrara and colleagues, who highlight the need for means to detect such attacks and the importance of making algorithms more robust against them.
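
A minimal illustration of such an adversarial perturbation (a sketch under simplifying assumptions: we apply the well-known fast gradient sign method to a plain logistic-regression model, for which the gradient can be written by hand, rather than to a deep network) shows how a correctly classified point can be pushed across the decision boundary by a small, targeted change to its features:

# Fast gradient sign method (FGSM) on a logistic-regression model.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, epsilon=0.3):
    """Move x by epsilon in the direction that increases the loss for true label y."""
    p = sigmoid(w @ x + b)        # model's probability for class 1
    grad_x = (p - y) * w          # d(cross-entropy)/dx for logistic regression
    return x + epsilon * np.sign(grad_x)

# Toy usage: a point correctly classified as class 1 is flipped by the attack.
w, b = np.array([2.0, -1.0]), 0.0
x, y = np.array([0.3, 0.2]), 1                     # w @ x + b = 0.4 > 0, i.e. class 1
x_adv = fgsm_perturb(x, y, w, b)
print(sigmoid(w @ x + b), sigmoid(w @ x_adv + b))  # ~0.60 vs ~0.38: the decision flips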

Alexander Dür and colleagues go beyond explanations of the decisions made by a deep learning network, focusing on ways to extract information about the impact of specific inputs on a decision in a text mining setting as they propagate through the network's layers.
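
The underlying idea of tracing how much each input dimension contributes to the output can be sketched with a "gradient times input" attribution on a tiny two-layer network (a toy NumPy example of ours, not the method of the article; established techniques such as layer-wise relevance propagation refine this considerably):

# Gradient-times-input attribution for a tiny hand-written ReLU network.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)   # hidden layer: 3 inputs -> 4 units
W2, b2 = rng.normal(size=4), 0.0                # output layer: 4 units -> 1 score

def forward(x):
    h_pre = W1 @ x + b1
    h = np.maximum(h_pre, 0.0)                  # ReLU
    return W2 @ h + b2, h_pre

def input_gradient(x):
    _, h_pre = forward(x)
    relu_mask = (h_pre > 0).astype(float)
    return (W2 * relu_mask) @ W1                # chain rule: d(score)/dx

x = np.array([0.5, -1.0, 2.0])
relevance = input_gradient(x) * x               # per-feature contribution to the score
print(relevance)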

Owing to the high level of scrutiny it receives, health is one of the dominant application domains. Tamara Müller and Pietro Lio address the need to provide explanations that are meaningful to the specific user, introducing a system built on top of a rule extraction method for random forests that allows these rules to be inspected, tuned and simplified, with visualisation support for their interpretation (the sketch below gives a flavour of such raw forest rules). Anirban Mukhopadhyay and colleagues also focus on the need for a deeper understanding of the workings of, and the reasons behind, decisions made by AI systems. While heat maps allow visualisation of the image regions most relevant to a decision, they provide little information about the actual reason a certain decision was made. The authors identify three challenges that go beyond the technicalities of the algorithms: the availability of data, the regulatory approval process, and the integration of the doctor-patient relationship into the evaluation.
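
To give a flavour of rule extraction from random forests in general (an illustrative toy using scikit-learn, not the system described above), the snippet below renders each tree of a deliberately tiny forest as nested if/else rules; with realistic forest sizes and depths this output quickly becomes unreadable, which is precisely what motivates tools for inspecting, tuning and simplifying such rules:

# Render the trees of a (deliberately tiny) random forest as textual rules.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import export_text

data = load_iris()
forest = RandomForestClassifier(n_estimators=3, max_depth=2, random_state=0)
forest.fit(data.data, data.target)

for i, tree in enumerate(forest.estimators_):
    print(f"--- tree {i} ---")
    print(export_text(tree, feature_names=list(data.feature_names)))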

In a similar vein, Carmen Fernández and Alberto Fernández review the ethical and legal implications of the use of AI technology in recruiting software, proposing a separation of concerns via a multi-agent system architecture as a mechanism to regulate competing interests.

The final three papers in this special theme present examples from other application domains. Ulrik Franke discusses a new project set up to study transparency in the insurance industry, while Max Landauer and Florian Skopik highlight issues with the semantic expressiveness of log data elements for cyber threat identification. Last, but not least, Markus Berg and Sebastian Velten provide an example from the scheduling domain, where transparency issues arise not from the complexity of a black-box deep learning model but from tardiness caused by resource constraints in the underlying optimisation processes.

We strongly believe that this new wave of interest in the field, coupled with the significant opportunities and challenges it brings, will lead to a new era in which AI supports many human activities. For this reason, society needs to open the black box of AI: to empower individuals against the undesired effects of automated decision making, to reveal and protect against new vulnerabilities, to implement the "right to explanation", to improve industrial standards for developing AI-powered products and thereby increase the trust of companies and consumers, to help people make better decisions, to align algorithms with human values, and, finally, to preserve (and expand) human autonomy.

References:
[1] ACM Policy Council: "Statement on Algorithmic Transparency and Accountability", 2017.
[2] Informatics Europe and EUACM: "When Computers Decide: European Recommendations on Machine-Learned Automated Decision Making", 2018.
[3] European Parliament: "Regulation (EU) 2016/679 of the European Parliament and Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation)", 2016.
[4] High-Level Expert Group on Artificial Intelligence: "Draft Ethics Guidelines for Trustworthy AI", European Commission, 18 Dec. 2018.
[5] R. Guidotti, et al.: "A Survey of Methods for Explaining Black Box Models", ACM Computing Surveys 51(5):93, 2018, DOI 10.1145/3236009.
[6] J. Yosinski, et al.: "Understanding neural networks through deep visualization", in International Conference on Machine Learning (ICML) Workshop on Deep Learning, 2015.
[7] P. Rajpurkar, et al.: "CheXNet: Radiologist-Level Pneumonia Detection on Chest X-Rays with Deep Learning", arXiv preprint arXiv:1711.05225v3, Dec. 2017.

Please contact:
Andreas Rauber
TU Vienna, Austria

Roberto Trasarti, Fosca Giannotti
ISTI-CNR, Italy
