by Andreas Rauber (TU Wien and SBA), Roberto Trasarti and Fosca Giannotti (ISTI-CNR)

The past decade has seen the increasing deployment of powerful automated decision-making systems in settings ranging from smile detection on mobile phone cameras to the control of safety-critical systems. While evidently powerful in solving complex tasks, these systems are typically completely opaque, i.e., they provide hardly any mechanisms to explore and understand their behaviour and the reasons underlying their decisions. This opaqueness raises numerous legal, ethical and practical concerns, which have prompted initiatives and recommendations calling for greater scrutiny in the deployment of automated decision-making systems. Clearly, joint efforts across technical, legal, sociological and ethical domains are required to address these increasingly pressing issues.

by Riccardo Guidotti, Anna Monreale and Dino Pedreschi (KDDLab, ISTI-CNR Pisa and University of Pisa)

Explainable AI is an essential component of a “Human AI”, i.e., an AI that expands human experience instead of replacing it. AI tools that make crucial decisions in an opaque way, without explaining the rationale they follow, cannot gain people's trust, especially in areas where we do not want to delegate decisions entirely to machines.

by Fabrizio Falchi (ISTI-CNR)

In recent years, expert intuition has been a hot topic in psychology and decision-making research. The results of this research can help in understanding deep learning, the driving force behind the AI renaissance that started in 2012.

by Alina Sîrbu (University of Pisa), Fosca Giannotti (ISTI-CNR), Dino Pedreschi (University of Pisa) and János Kertész (Central European University)

Does the use of online platforms to share opinions contribute to the polarization of the public debate? An answer from a modelling perspective.

by Fabio Carrara, Fabrizio Falchi, Giuseppe Amato (ISTI-CNR), Rudy Becarelli and Roberto Caldelli (CNIT Research Unit at MICC ‒ University of Florence)

The astonishing yet cryptic effectiveness of deep neural networks comes with a critical vulnerability to adversarial inputs: samples maliciously crafted to confuse and hinder machine learning models. Insights into the internal representations learned by deep models can help to explain their decisions and estimate their confidence, enabling us to trace, characterise and filter out adversarial attacks.
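
The article does not spell out a particular detection mechanism; the sketch below illustrates one common reading of the idea, assuming access to activations from an intermediate layer of the deployed network (the hypothetical extract_features helper) and scoring inputs by their distance to the activations of known-clean samples.

# Sketch: flag inputs whose internal representation lies far from the
# representations of trusted, clean samples (a k-nearest-neighbour criterion).
# `extract_features` is a hypothetical stand-in for reading activations from
# an intermediate layer of the deployed deep network.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def fit_reference(clean_features: np.ndarray, k: int = 10) -> NearestNeighbors:
    """Index the internal representations of known-clean samples."""
    return NearestNeighbors(n_neighbors=k).fit(clean_features)

def suspicion_score(reference: NearestNeighbors, features: np.ndarray) -> np.ndarray:
    """Mean distance to the k nearest clean activations; higher means more suspicious."""
    distances, _ = reference.kneighbors(features)
    return distances.mean(axis=1)

# Usage (assuming the inputs and the feature extractor exist):
# reference = fit_reference(extract_features(clean_inputs))
# flags = suspicion_score(reference, extract_features(test_inputs)) > threshold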

by Alexander Dür, Peter Filzmoser (TU Wien) and Andreas Rauber (TU Wien and Secure Business Austria)

Given the desire and need to trust decision-making systems, understanding the inner workings of complex deep learning architectures may soon replace qualitative or quantitative performance as the primary focus of investigation and measure of success. We report on a study of a complex deep neural network architecture designed to detect causality relations between pairs of statements. It demonstrates the need for a better understanding of what actually constitutes sufficient and useful insight into the behaviour of such architectures, beyond mere transformation into rule-based representations.
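
The architecture examined in the study is not reproduced here; purely as a reference for the kind of model such an analysis must reason about, the following is a hypothetical PyTorch sketch of a pair-of-statements classifier (encode each statement, combine the two encodings, classify); the class name, layer choices and sizes are illustrative assumptions, not the authors' design.

# Hypothetical sketch of a pair-of-statements classifier in PyTorch; not the
# architecture studied by the authors. Encode each statement, concatenate the
# two encodings and classify the pair as causal or non-causal.
import torch
import torch.nn as nn

class CausalityPairClassifier(nn.Module):
    def __init__(self, vocab_size: int, embed_dim: int = 128, hidden: int = 256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.GRU(embed_dim, hidden, batch_first=True)
        self.classifier = nn.Sequential(
            nn.Linear(2 * hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 2),          # logits: causal vs. non-causal
        )

    def encode(self, tokens: torch.Tensor) -> torch.Tensor:
        _, last_hidden = self.encoder(self.embed(tokens))   # (1, batch, hidden)
        return last_hidden.squeeze(0)                        # (batch, hidden)

    def forward(self, statement_a: torch.Tensor, statement_b: torch.Tensor) -> torch.Tensor:
        pair = torch.cat([self.encode(statement_a), self.encode(statement_b)], dim=-1)
        return self.classifier(pair)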

by Tamara Müller and Pietro Lió (University of Cambridge)  

We introduce a Clinical Decision Support System (CDSS) as an application of translational medicine. It is based on random forests, is personalisable and offers clear insight into the decision-making process. A well-structured rule set is created, and every rule involved in a decision can be inspected by the user (the physician). Furthermore, the user can influence the creation of the final rule set, and the algorithm allows the comparison of different diseases as well as of regional differences within the same disease.
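
The CDSS itself is not reproduced here; the snippet below is only a minimal sketch, assuming scikit-learn's random forest and its export_text helper as stand-ins for the actual system, of how per-tree if-then rules can be read out of a forest so that every decision path is open to inspection; the breast-cancer dataset is a placeholder, not the clinical data used by the authors.

# Minimal sketch (not the authors' implementation): read human-inspectable
# if-then rules out of the trees of a random forest. The dataset is a
# placeholder for real clinical data.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import export_text

data = load_breast_cancer()
forest = RandomForestClassifier(n_estimators=10, max_depth=3, random_state=0)
forest.fit(data.data, data.target)

# Every tree in the ensemble yields a readable set of decision rules.
for i, tree in enumerate(forest.estimators_[:2]):   # print the first two trees
    print(f"--- rules of tree {i} ---")
    print(export_text(tree, feature_names=list(data.feature_names)))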

by Anirban Mukhopadhyay, David Kügler (TU Darmstadt), Andreas Bucher (University Hospital Frankfurt), Dieter Fellner (Fraunhofer IGD and TU Darmstadt) and Thomas Vogl (University Hospital Frankfurt)

From screening diseases to personalised precision treatments, AI is showing promise in healthcare. But how comfortable should we feel about giving black-box algorithms the power to heal or kill us?
In healthcare, trust is the basis of the doctor-patient relationship. A patient expects the doctor to act reliably and with precision, and to explain options and decisions. The same accuracy and transparency should be expected of the computational systems redefining the workflow in healthcare. Since such systems have inherent uncertainties, it is imperative to understand (a) the reasoning behind their decisions and (b) why mistakes occur. Anything short of this transparency will erode trust in these systems and consequently affect the doctor-patient relationship.

by Carmen Fernández and Alberto Fernández (Universidad Rey Juan Carlos)

Artificial Intelligence (AI) applications may have different ethical and legal implications depending on the domain. One application of AI is the analysis of video interviews during recruitment. Using AI in this context has pros and cons, and potential ethical and legal consequences for candidates, companies and states. These systems are scarcely regulated, and the types of analysis made in interviews need external, neutral auditing. We propose a multi-agent system architecture for such control and auditing, aimed at guaranteeing fair, inclusive and accurate AI and at reducing the potential for discrimination in the job market, for example on the basis of race or gender.

by Ulrik Franke (RISE SICS)

Automated decision-making has the potential to increase productivity and competitiveness as well as to compensate for well-known human biases and cognitive flaws [1]. But today's powerful machine-learning-based solutions also bring problems of their own, not least in being uncomfortably black-box-like. A new research project at RISE Research Institutes of Sweden, in collaboration with KTH Royal Institute of Technology, has recently been set up to study transparency in the insurance industry, a sector poised to undergo technological disruption.

by Max Landauer and Florian Skopik (AIT Austrian Institute of Technology)

“Cyber threat intelligence” is security-relevant information, often derived directly from cyber incidents, that enables comprehensive protection against upcoming cyber-attacks. However, collecting and transforming the available low-level data into high-level threat intelligence is usually time-consuming and requires extensive manual work as well as in-depth domain knowledge. INDICÆTING supports this procedure by developing and applying machine learning algorithms that automatically detect anomalies in the monitored system behaviour, correlate the affected events into multi-step attack models and aggregate these into usable threat intelligence.
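
As a hedged illustration of the first of these steps only (anomaly detection on monitored behaviour), the sketch below applies an off-the-shelf isolation forest to hypothetical per-time-window features; the feature choices and parameters are assumptions, not part of INDICÆTING, and the correlation and aggregation stages are omitted.

# Hedged sketch of the anomaly-detection step only; the features are
# illustrative placeholders (e.g. events per window, distinct hosts,
# failed logins), not part of INDICÆTING.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Hypothetical per-time-window features derived from monitoring data.
normal_windows = rng.normal(loc=[100.0, 5.0, 1.0], scale=[10.0, 1.0, 1.0], size=(1000, 3))

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_windows)

new_windows = np.array([[102.0, 5.0, 1.0],     # ordinary behaviour
                        [400.0, 60.0, 35.0]])  # event burst with many failed logins
labels = detector.predict(new_windows)          # 1 = normal, -1 = anomalous
print(labels)  # anomalous windows would feed the event-correlation stage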
