Informed Machine Learning for Industry

by Christian Bauckhage, Daniel Schulz and Dirk Hecker (Fraunhofer IAIS)

Deep neural networks have pushed the boundaries of artificial intelligence, but their training requires vast amounts of data and high-performance hardware. While truly digitised companies easily cope with these prerequisites, traditional industries still often lack the kind of data or infrastructures the current generation of end-to-end machine learning depends on. The Fraunhofer Center for Machine Learning therefore develops novel solutions that are informed by expert knowledge. These typically require less training data and are more transparent in their decision-making processes.

Big-data-based machine learning (ML) plays a key role in the recent success of artificial intelligence (AI). In particular, deep neural networks trained with vast amounts of data on high-performance hardware can now solve demanding cognitive tasks in computer vision, speech recognition, text understanding, and planning and decision-making. Nevertheless, practitioners in industries other than IT are increasingly sceptical of deep learning, mainly because of two concerns:

  • Lack of training data. Vapnik-Chervonenkis (VC) theory establishes that supervised training of complex ML systems requires substantial amounts of representative data in order to learn reliably. Since the details depend on a system’s VC dimension, which is usually hard to determine, Widrow’s rule of thumb is to train with at least ten times more data than there are system parameters (see the short calculation after this list). Modern deep networks with their millions of adjustable parameters thus need many millions of examples in order to learn well. However, large amounts of annotated data are rarely available in industries that are not yet fully digitised. Even in contexts such as the Internet of Things or Industry 4.0, where data accumulate quickly, we still often face thin-data scenarios in which labelled data for end-to-end machine learning are inaccessible.
  • Lack of traceability. Trained connectionist architectures are black boxes whose inner computations bear no obvious relation to conceptual information processing. Since their decision-making processes can therefore be hard to account for, data scientists are wary of deep learning in industries whose regulatory guidelines require automated decisions to be comprehensible. Indeed, recent research shows that silly mistakes made by deep networks might be avoidable if the networks had “common sense”. Even more alarmingly, recent research also shows that such mistakes can be deliberately provoked using adversarial inputs (see the sketch after this list).
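
To make Widrow’s rule of thumb concrete, here is a back-of-the-envelope calculation in Python; the network architecture is purely illustrative:

    def min_training_examples(num_parameters: int, factor: int = 10) -> int:
        """Rough lower bound on training-set size per Widrow's rule of thumb."""
        return factor * num_parameters

    # A small fully connected network, 784 -> 256 -> 10, counting weights and biases.
    params = 784 * 256 + 256 + 256 * 10 + 10   # 203,530 parameters
    print(min_training_examples(params))        # ~2 million labelled examples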
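
The adversarial-input effect is easy to reproduce even without a deep network. The following minimal sketch applies the fast gradient sign method (FGSM) of Goodfellow et al. to a toy logistic-regression model; the weights, input, and perturbation budget are made up for illustration, but real attacks on deep networks work the same way, stepping along the gradient of the loss with respect to the input:

    import numpy as np

    rng = np.random.default_rng(0)

    # A toy "trained" logistic-regression classifier (weights are made up).
    w = rng.normal(size=16)
    b = 0.1

    def predict_proba(x):
        return 1.0 / (1.0 + np.exp(-(w @ x + b)))

    def input_gradient(x, y):
        # Gradient of the cross-entropy loss w.r.t. the input x;
        # for logistic regression this is (p - y) * w.
        return (predict_proba(x) - y) * w

    x = rng.normal(size=16)   # an input we treat as belonging to class y = 1
    y = 1.0
    eps = 0.25                # perturbation budget

    # FGSM: perturb the input in the direction of the sign of the gradient.
    x_adv = x + eps * np.sign(input_gradient(x, y))

    print(predict_proba(x), predict_proba(x_adv))   # confidence collapses on x_adv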

The newly established Fraunhofer Center for Machine Learning addresses both of these issues and researches methods for informed machine learning that, on the one hand, can cope with thin data and, on the other, lead to more explainable AI for industrial applications.

A basic observation is that domain experts in industry typically know a lot about the processes and data they are dealing with, and that leveraging this knowledge in the design of ML architectures or algorithms may lessen the need for massive training data. While the idea of hybrid AI that integrates knowledge- and data-driven methods has a venerable history, recent progress in Monte Carlo tree search and reinforcement learning suggests new approaches. For instance, industrial expert knowledge is often procedural in the sense that there is experience about which operations to apply to which measurements, and when, in order to achieve actionable results. Given training data for a novel problem and a database of interpretable procedures that have already proven their worth, Monte Carlo tree search or reinforcement learning can automatically compose basic building blocks into larger systems that solve the problem at hand [1]; a sketch of this idea follows below. At the same time, industrial product design and process control often rely on sophisticated knowledge-based simulations. Here, simulated data can augment small amounts of training data, or learning systems can improve existing simulators [2].
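
The following sketch illustrates the compositional idea on a toy signal-processing task. The building blocks, the scoring function, and the exhaustive search over short pipelines are illustrative stand-ins; in the setting of [1], Monte Carlo tree search or reinforcement learning would explore the space of compositions adaptively when it is too large to enumerate:

    import itertools
    import numpy as np

    rng = np.random.default_rng(1)

    # An illustrative library of interpretable processing blocks; in practice
    # these would be procedures that have already proven their worth.
    BLOCKS = {
        "standardise": lambda x: (x - x.mean()) / (x.std() + 1e-9),
        "log":         lambda x: np.log1p(np.abs(x)),
        "smooth":      lambda x: np.convolve(x, np.ones(5) / 5, mode="same"),
    }

    def run_pipeline(names, x):
        for name in names:
            x = BLOCKS[name](x)
        return x

    # Hypothetical task: make a noisy, offset sensor trace match a clean target.
    t = np.linspace(0, 1, 200)
    target = np.sin(2 * np.pi * 3 * t)
    signal = 5.0 * target + 2.0 + rng.normal(scale=0.5, size=t.size)

    def score(names):
        # Negative mean-squared error against the target; higher is better.
        return -np.mean((run_pipeline(names, signal) - target) ** 2)

    # Exhaustive search over all pipelines of up to three blocks.
    candidates = [p for r in range(1, 4)
                  for p in itertools.product(BLOCKS, repeat=r)]
    best = max(candidates, key=score)
    print(best, score(best))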

Crucially, the Fraunhofer Center for Machine Learning is part of the Fraunhofer Cluster of Excellence Cognitive Internet Technologies [L1]. Also comprising the centers for IoT Communications and Data Spaces, this cluster covers data acquisition, exchange, curation, and analysis. On the one hand, this ecosystem provides numerous testbeds for informed learning and explainable AI in industry. On the other hand, it provides opportunities for research and development of distributed or federated learning approaches as well as for learning on the edge (see the sketch below). The cluster thus supports digital sovereignty and develops trustworthy technologies for the industrial data economy.
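
As an illustration of such distributed approaches, the following sketch implements plain federated averaging on a toy linear model. The sites, data, and hyperparameters are made up; the point is that only model parameters, never raw data, leave a site:

    import numpy as np

    rng = np.random.default_rng(2)

    # Three "edge" sites, each holding private data from the same linear ground truth.
    w_true = np.array([2.0, -1.0, 0.5])
    sites = []
    for _ in range(3):
        X = rng.normal(size=(100, 3))
        y = X @ w_true + rng.normal(scale=0.1, size=100)
        sites.append((X, y))

    def local_update(w, X, y, lr=0.05, steps=20):
        # A few local gradient-descent steps on one site's private data.
        for _ in range(steps):
            grad = 2 * X.T @ (X @ w - y) / len(y)
            w = w - lr * grad
        return w

    # Federated averaging: per round, each site trains locally and the
    # coordinator averages only the resulting model parameters.
    w = np.zeros(3)
    for _ in range(10):
        w = np.mean([local_update(w, X, y) for X, y in sites], axis=0)

    print(w)   # close to w_true, without ever pooling the raw data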

Link:
[L1] https://www.cit.fraunhofer.de

References:
[1] C. Bauckhage et al.: “Informed Machine Learning through Functional Composition”, Proc. KDML, 2018.
[2] N. Asprion and M. Bortz: “Process Modeling, Simulation and Optimization: From Single Solutions to a Multitude of Solutions to Support Decision Making”, Chemie Ingenieur Technik, 90(11), 2018.

Please contact:
Daniel Schulz
Fraunhofer IAIS, Germany
+49 2241 142401
