
by Anahid N. Jalali, Alexander Schindler and Bernhard Haslhofer
 
Data-driven prognostic systems enable us to send out an early warning of machine failure in order to reduce the cost of failures and maintenance and to improve the management of the maintenance schedule. For this purpose, robust prognostic algorithms such as deep neural networks are used, whose output is often difficult to interpret and comprehend. We investigate these models with the aim of moving towards a transparent and understandable model that can be applied in critical applications such as the manufacturing industry.

With the development of manufacturing technology and mass production, the methodology of maintenance scheduling and management has become an important topic in industry. Predictive maintenance (PdM) is the most recent maintenance management approach, following run-to-failure (R2F), in which maintenance is performed only once a failure is observed in the system, and preventive maintenance (PvM), which is scheduled based on the average lifetime of the machine. PdM denotes a set of processes that aim to reduce maintenance costs in the manufacturing industry. For this purpose, prognostic systems are used to forecast metrics such as remaining useful life (RUL) and time to failure (TTF); such systems are commonly grouped into data-driven and model-based methods.

Data-driven approaches are used when no or very little understanding of the physics behind the system operation exists. In order to detect changes within the system, these approaches usually employ techniques such as pattern recognition and machine learning, which, compared with model-based methods, are easier to obtain and implement. These methods require robust predictive models that capture the health status of a machine using information extracted from collected sensor data. At the moment, such models are built through manual feature engineering, which relies extensively on domain knowledge. The effectiveness of data-driven predictive maintenance methods, which predict the health state of some machinery, depends heavily on health indicators (HI): quantitative indicators (features) extracted from historical sensor data. This relies on the assumption that the statistical characteristics of the data are relatively consistent unless a fault occurs. A selection of relevant features is then fed into the model to compute a one- or multi-dimensional indicator, which describes the degradation of the machine's health and eventually estimates the remaining lifetime of the machine (see the sketch below).
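The following minimal sketch illustrates what such manual health-indicator extraction can look like: a raw one-dimensional sensor signal is split into windows and a handful of standard statistics are computed per window. The window length and the particular set of statistics are illustrative assumptions, not a prescribed feature set.

```python
import numpy as np
from scipy.stats import kurtosis, skew

def extract_health_indicators(signal: np.ndarray, window: int = 1024) -> np.ndarray:
    """Split a 1-D sensor signal into windows and compute simple HI features."""
    n_windows = len(signal) // window
    features = []
    for i in range(n_windows):
        seg = signal[i * window:(i + 1) * window]
        features.append([
            seg.mean(),                      # overall trend of the raw signal
            seg.std(),                       # variability
            np.sqrt(np.mean(seg ** 2)),      # RMS, a common vibration indicator
            kurtosis(seg),                   # sensitive to impulsive faults
            skew(seg),                       # asymmetry of the distribution
            seg.max() - seg.min(),           # peak-to-peak amplitude
        ])
    return np.asarray(features)              # shape: (n_windows, 6)
```

A selection of these per-window features would then serve as input to the prognostic model described above.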

Deep neural networks have shown superior performance in a variety of applications such as image and audio classification and speech and handwriting recognition. As in these applications, the data assembled for predictive maintenance are sensor parameters collected over time. Utilising deep models could reduce the manual feature engineering effort by automatically constructing the health indicators that describe the health state of the machine and its estimated remaining runtime before the next downtime.
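As a minimal sketch of this idea, the PyTorch model below maps raw multivariate sensor windows directly to an RUL estimate, so that feature construction is learned rather than hand-engineered. The layer sizes, number of sensors and the single-output regression head are illustrative assumptions.

```python
import torch
import torch.nn as nn

class RULEstimator(nn.Module):
    def __init__(self, n_sensors: int = 14, hidden: int = 64):
        super().__init__()
        # Recurrent encoder over the raw sensor window
        self.lstm = nn.LSTM(input_size=n_sensors, hidden_size=hidden,
                            num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, 1)      # regression output: predicted RUL

    def forward(self, x):                     # x: (batch, time, n_sensors)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])       # use the last time step's state

# Example usage on a dummy batch of 30-step sensor windows
model = RULEstimator()
windows = torch.randn(8, 30, 14)              # (batch, time, sensors)
rul_pred = model(windows)                     # shape: (8, 1)
```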

However, despite the promising features of deep neural networks, their complex architecture results in a lack of transparency, making their outcomes very difficult to interpret and explain. This is a severe issue that currently prevents their adoption in critical applications and in the manufacturing domain. Explainable artificial intelligence (XAI) addresses this problem and refers to AI models yielding behaviours and predictions that can be understood by humans. The general goal of XAI research is to understand the behaviour of a model by clarifying under what conditions a machine learning model (i) yields a specific outcome, (ii) succeeds or fails, (iii) yields errors, and (iv) can be trusted.

Therefore, explainable AI systems are expected to provide comprehensible explanations of their decisions when interacting with their users. This can also be considered an important prerequisite for collaborative intelligence, which denotes a fully accepted integration of AI into society. Early work on machine learning model explanation often focused on visualising model predictions using a common visualisation technique called nomograms, which was first applied to logistic regression models. Later, this technique was used to interpret SVMs and Naive Bayes models. Recently, such visualisation techniques have been applied to deep learning algorithms, to visualise the output layers of deep architectures such as CNNs and also RNNs. Besides producing visualisations of model predictions, existing studies have investigated two other approaches to the explainability of machine learning (ML) algorithms: prediction interpretation and justification, and interpretable models.

Figure 1: The machine learning process.

In prediction interpretation and justification, a non-interpretable complex model and a prediction are given, and a justification is produced, often by isolating the contributions of individual features to the prediction. This technique has been proposed for models such as multilayer perceptrons, probabilistic radial basis function networks and SVM classifiers, where it was used to extract conjunctive rules from a subset of features. In 2008, a method was proposed that investigates the alternative predictions of an unknown classifier when a particular feature is absent, thereby measuring the effect of that individual feature. The effects were then visualised in order to explain the main contributions to a prediction and to compare the effect of that feature across various models. Later, in 2016, to interpret a deep model's predictions, a long short-term memory (LSTM) network with a loss function that encourages class-discriminative information was used as a caption generation model to justify the classification results of a CNN model.
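A minimal, model-agnostic sketch of the feature-omission idea described above is given below: the model re-predicts with each feature replaced by a neutral baseline (here, the training mean), and the change in the model's output is reported as that feature's contribution. The `model` object is assumed to expose a scikit-learn-style `predict_proba()`, and the choice of baseline is an illustrative assumption.

```python
import numpy as np

def feature_contributions(model, x: np.ndarray, X_train: np.ndarray,
                          target_class: int = 1) -> np.ndarray:
    """Contribution of each feature of instance x to the predicted probability."""
    baseline = X_train.mean(axis=0)
    p_full = model.predict_proba(x.reshape(1, -1))[0, target_class]
    contributions = np.zeros(len(x))
    for j in range(len(x)):
        x_omitted = x.copy()
        x_omitted[j] = baseline[j]             # "remove" feature j
        p_omitted = model.predict_proba(x_omitted.reshape(1, -1))[0, target_class]
        contributions[j] = p_full - p_omitted  # positive: feature pushed towards the class
    return contributions
```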

The goal of our research is, first, to provide a methodology that can automatically construct machine health indicators in order to improve the effectiveness of prediction models using deep neural networks. Second, it will investigate optimised deep neural networks to predict maintenance measures such as remaining useful life (RUL) and time-to-failure (TTF). Third, it will extend the field of explainable AI by investigating visualisation and interpretation models to justify and explain the outcome of a predictive model. For that task, the characteristics of deep learning architectures such as convolutional neural networks, recurrent neural networks and deep belief networks will be investigated.
 
As illustrated in Figure 1, the machine learning algorithm is first trained with a training dataset. The trained model is then evaluated using a test/evaluation dataset. However, since the internal behaviour of the algorithm is not clear, the model's output is not interpretable. The goal is to be able to understand a model's successes and failures, and to understand its decisions.

References:
[1] D. Gunning: "Explainable artificial intelligence (XAI)", Defense Advanced Research Projects Agency (DARPA), 2017.
[2] M. Robnik-Šikonja et al.: "Efficiently explaining decisions of probabilistic RBF classification networks", in Proc. of the International Conference on Adaptive and Natural Computing Algorithms, Springer, Berlin, Heidelberg, 2011.
[3] L. A. Hendricks et al.: "Long-Term Recurrent Convolutional Networks for Visual Recognition and Description", in Proc. of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015.
[4] S. L. Epstein: "Wanted: collaborative intelligence", Artificial Intelligence 221 (2015): 36-45.

Please contact:
Anahid N. Jalali, Alexander Schindler and Bernhard Haslhofer, Austrian Institute of Technology GmbH
