
by Sara Kepplinger (Fraunhofer IDMT)

“I describe this sound as ‘pling’ and the other as ‘plong’…”, the participant said, distinguishing between hockey pucks made from different materials. Similarly, it is possible to “listen” to acoustic quality in an industrial context. Using a demonstrator built around an air hockey table, we show an approach to acoustic quality monitoring that is applicable to smart manufacturing processes.

Sensor technology already plays a major role in the industrial environment, and systems relying on machine learning (ML), sensor combinations, and data from all levels to improve production processes are constantly evolving [L1]. However, the potential of information gleaned from acoustic emissions has not yet been thoroughly investigated in an industrial context. This is where our in-house demonstrator, an air hockey table, comes into play: it lets us show vividly how different material properties (in this case, different kinds of pucks) can be recognised by their acoustic fingerprints. Players strike a small plastic disc (the puck), which glides on a thin air cushion, from one side of the table to the other. With the help of measuring microphones and a supervised learning approach, it is possible to “listen” to which puck is being played (Figure 1).
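The idea of an acoustic fingerprint can be sketched with a deliberately simple stand-in for the actual supervised system: a nearest-centroid classifier over a single feature per hit. All class names, feature values, and the choice of "dominant ring frequency" as the feature are illustrative assumptions, not measurements from the demonstrator.

```python
# Hypothetical sketch: classify a puck impact by one feature, the dominant
# "ring" frequency of its sound. Labels and Hz values are made up.
TRAINING_HITS = {
    "plastic_18g":  [2100.0, 2150.0, 2080.0],   # bright "pling"
    "rubber_19g":   [1400.0, 1380.0, 1450.0],   # damped thud
    "original_23g": [1750.0, 1720.0, 1780.0],   # lower "plong"
}

def train(hits):
    """Nearest-centroid model: one mean feature value per puck class."""
    return {label: sum(v) / len(v) for label, v in hits.items()}

def classify(model, dominant_freq_hz):
    """Assign the class whose centroid is closest to the observed feature."""
    return min(model, key=lambda label: abs(model[label] - dominant_freq_hz))

model = train(TRAINING_HITS)
print(classify(model, 2120.0))  # falls nearest the "pling" centroid
```

A production system would of course use richer spectral features and a trained classifier, but the principle is the same: labelled recordings define regions in feature space, and a new hit is assigned to the nearest one.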

Figure 1: Air hockey table equipped with two measurement microphones recording the puck’s sound (e.g., ‘pling’ and ‘plong’).

The main advantage of our approach is non-destructive quality control that neither interferes with the (production) process nor damages the inspected object. It combines several measurement and analysis steps: precise sound recording, pre-filtering into noise and useful signal, and intelligent signal analysis and evaluation using ML. Several steps are required before recording the data, including: determining how data will be acquired (i.e., type of microphone, position of the sensors, description of the context, definition of recording length and method, sampling rate, resolution); context-specific synchronisation with additional (sensor) data; and the definition of an annotation policy.
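The acquisition decisions listed above can be collected into one explicit, immutable record, which makes the setup reproducible and easy to log alongside the recordings. The field names and defaults below are illustrative assumptions, not the authors' actual configuration.

```python
from dataclasses import dataclass

# Sketch only: one frozen record per recording session, so that every
# acquisition decision is written down next to the data it produced.
@dataclass(frozen=True)
class RecordingConfig:
    microphone_type: str = "directional measurement mic"   # assumed
    sensor_positions: tuple = ("left field", "right field")
    sample_rate_hz: int = 44_100
    bit_depth: int = 32
    recording_mode: str = "continuous"
    annotation_policy: str = "one label per puck type"     # assumed

cfg = RecordingConfig()
```

Freezing the dataclass guards against the configuration drifting silently between the training recordings and later live classification.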

Once these requirements are defined, the actual data recording takes place under the different conditions needed for the later classification training. In our case, the conditions include the audible differentiation of three pucks of different weights (18/19/23 grams) and materials (3D-printed (material: PA2200), 3D-printed with a rubber ring (material: PA2200), and the original hard plastic), as well as different kinds of (simulated) background noise and player behaviour. We record continuously via a USB audio interface into a mini PC in WAV format, using two directional microphones (one for each side of the air hockey field) at 44.1 kHz and 32 bit. After training the system on the various predefined conditions, ad-hoc analysis and interpretation of the data takes place. Classification starts when the air hockey table is powered up, and the system is able to detect the different kinds of pucks on the field.
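To make the step from a recorded frame to a classifiable feature concrete, here is a toy extractor: it estimates the dominant frequency of a mono frame at 44.1 kHz by counting zero crossings. This is a crude stand-in for the real analysis, chosen only because it needs no signal-processing library; the tone synthesiser exists purely to exercise it.

```python
import math

SAMPLE_RATE = 44_100  # matches the demonstrator's sampling rate

def synth_tone(freq_hz, n_samples=4_410):
    """Generate 0.1 s of a pure sine tone (test input only)."""
    return [math.sin(2 * math.pi * freq_hz * n / SAMPLE_RATE)
            for n in range(n_samples)]

def zero_crossing_freq(frame):
    """Rough pitch estimate: a sine crosses zero twice per period."""
    crossings = sum(1 for a, b in zip(frame, frame[1:])
                    if a < 0 <= b or b < 0 <= a)
    duration_s = len(frame) / SAMPLE_RATE
    return crossings / (2 * duration_s)

estimate = zero_crossing_freq(synth_tone(1000.0))  # close to 1000 Hz
```

A real impact sound is broadband rather than a pure tone, so the actual system would use spectral features; the point here is only the shape of the pipeline: frame in, scalar feature out.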

In an industrial environment, the installation of acoustic measurement technology is similar: a measurement microphone picks up the characteristic sounds of a given process. Depending on the application, we record with several microphones or a microphone array. To separate background noise from the useful signal, we use source-separation methods (one possible solution among others) [1], originally developed to isolate individual instruments in a music recording [2].
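The separation step can be illustrated with the simplest relative of such methods, spectral subtraction: estimate the background's magnitude spectrum while the process is idle, then subtract it bin by bin from the observed spectrum. The magnitude values below are invented for illustration, and this is a far simpler technique than the source-separation methods of [1].

```python
# Illustrative spectral subtraction (a simple stand-in for real
# source separation); spectra are made-up magnitude bins.
def spectral_subtract(noisy_mag, noise_mag, floor=0.0):
    """Subtract the estimated background spectrum per bin,
    clamping at a floor so magnitudes never go negative."""
    return [max(n - b, floor) for n, b in zip(noisy_mag, noise_mag)]

machine_plus_noise = [0.9, 0.4, 1.2, 0.3]   # observed magnitudes
background_only    = [0.5, 0.3, 0.2, 0.3]   # estimated while idle
useful = spectral_subtract(machine_plus_noise, background_only)
# useful is approximately [0.4, 0.1, 1.0, 0.0]
```

The clamping floor matters in practice: without it, over-estimated noise bins would produce negative magnitudes, which have no physical meaning.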

We use this approach, for example, in in-line quality assurance and predictive maintenance scenarios for industrial use cases. Here, the acoustic monitoring system can provide additional, and even more precise, information about the quality and condition of products or processes. This may be either integrated into existing measurement systems or used as a completely independent monitoring system.

The human auditory system is able to detect and distinguish individual sounds, even in noisy environments, and place them in specific contexts. In industrial manufacturing processes, it is often possible to make statements about the operating status of a component, engine or even an entire machine by listening attentively; experienced machine operators can hear problems or errors. However, so far it is quite difficult to detect these indications of faulty processes or products by any other means [3]. Our basic premise is that everything that is audible (and interpretable, e.g. recognisable as a difference) is also measurable as an indicator for quality.

In order to reliably identify and automatically classify machine noises, the analysis steps described above are indispensable. Until now, extensive training data has been required to train such a system reliably, which is one of the challenges in optimising recognition performance in practical use. A related challenge is the lack of available measurement data, which is why, as a first step, we have generated test data based on three application examples (electric engines, a marble track, and bulk tubes) and made it available for testing purposes [L2]. At this stage, we have successfully tested the method described above in practice together with various industrial partners, reaching Technology Readiness Level (TRL) 6.

In the future, we would like to achieve a high recognition rate with less training data. Furthermore, we plan to develop a self-learning system that learns from acoustic measurement data to assess the quality of products and processes. Additionally, we are working to understand and interpret what acoustic information is heard by machine operators and inspectors, and how they perceive it. To this end, we are working on a parameterisation of the subjective factors.

Links:
[L1] J. Walker: “Machine Learning in Manufacturing – Present and Future Use-Cases”, https://emerj.com/ai-sector-overviews/machine-learning-in-manufacturing/.
[L2] Industrial Media Applications Datasets 2019. https://www.idmt.fraunhofer.de/datasets.

References:
[1] E. Cano et al.: “Exploring Sound Source Separation for Acoustic Condition Monitoring in Industrial Scenarios”, in Proc. of EUSIPCO 2017.
[2] J. Abeßer et al.: “Acoustic Scene Classification by Combining Autoencoder-Based Dimensionality Reduction and Convolutional Neural Networks”, in Proc. of DCASE 2017.
[3] S. Grollmisch et al.: “Sounding Industry: Challenges and Datasets for Industrial Sound Analysis (ISA)”, in Proc. of EUSIPCO 2019.

Please contact:
Sara Kepplinger
Fraunhofer Institute for Digital Media Technology IDMT, Germany
