by Robert Haas (IBM Research Europe) and Michael Pfeiffer (Bosch Center for Artificial Intelligence)

The origins of artificial intelligence (AI) can be traced back to the desire to build thinking machines, or electronic brains. A defining moment came in 1958, when Frank Rosenblatt created the first artificial neuron that could learn, by iteratively strengthening the weights of the most relevant inputs and weakening others until a desired output was achieved. The IBM 704, a computer the size of a room, was fed a series of punch cards, and after 50 trials it learnt to distinguish cards marked on the left from cards marked on the right. This was the demonstration of the single-layer perceptron or, according to its creator, "the first machine capable of having an original idea" [1].
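The learning rule described above can be captured in a few lines. Below is a minimal sketch in Python (a modern toy reconstruction, not the original IBM 704 program): on every mistake, the weights of the active inputs are nudged towards the desired output.

```python
import numpy as np

# Toy reconstruction of the perceptron rule (not the original 1958 code):
# on every mistake, weights of active inputs are nudged towards the desired
# output; with separable data the procedure converges.

def train_perceptron(inputs, labels, epochs=50, lr=1.0):
    """inputs: (n_samples, n_features) 0/1 array; labels: +1 or -1 targets."""
    w, b = np.zeros(inputs.shape[1]), 0.0
    for _ in range(epochs):
        for x, target in zip(inputs, labels):
            prediction = 1 if w @ x + b > 0 else -1
            if prediction != target:      # update only on mistakes
                w += lr * target * x      # strengthen or weaken relevant inputs
                b += lr * target
    return w, b

# Toy analogue of the punch-card task: feature 0 = mark on the left,
# feature 1 = mark on the right; label -1 = left-marked, +1 = right-marked.
X = np.array([[1, 0], [0, 1], [1, 0], [0, 1]])
y = np.array([-1, 1, -1, 1])
w, b = train_perceptron(X, y)
print(w, b)   # left-marked cards now score negative, right-marked positive
```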

by Stanisław Woźniak, Thomas Bohnstingl and Angeliki Pantazi (IBM Research – Europe)

Deep learning has achieved outstanding success in several artificial intelligence (AI) tasks, reaching human-like performance, albeit at a much higher power consumption than the ~20 watts required by the human brain. We have developed an approach that incorporates biologically inspired neural dynamics into deep learning through a novel construct called the spiking neural unit (SNU). Remarkably, these biological insights enabled SNU-based deep learning to surpass state-of-the-art performance while simultaneously enhancing the energy efficiency of AI hardware implementations.
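For illustration, here is a minimal sketch of a single SNU step following the published state equations; the decay constant, layer sizes and reset details are simplifying assumptions, and the "soft" sigmoid output is the variant that keeps the unit differentiable and trainable with standard deep learning tooling.

```python
import torch

# Minimal sketch of one spiking neural unit (SNU) step. The membrane state s
# integrates the input, decays with a leak factor, and is reset by the
# previous output spike; the output is a binary spike (step function) or, in
# the "soft" variant, a sigmoid that keeps everything differentiable.

class SNUCell(torch.nn.Module):
    def __init__(self, n_in, n_out, decay=0.8, soft=True):
        super().__init__()
        self.fc = torch.nn.Linear(n_in, n_out, bias=False)
        self.bias = torch.nn.Parameter(torch.zeros(n_out))
        self.decay = decay    # leak of the membrane state, l(tau)
        self.soft = soft

    def forward(self, x, state):
        s_prev, y_prev = state
        # integrate input; the (1 - y_prev) term resets neurons that spiked
        s = torch.relu(self.fc(x) + self.decay * s_prev * (1.0 - y_prev))
        y = torch.sigmoid(s + self.bias) if self.soft \
            else (s + self.bias > 0).float()
        return y, (s, y)

cell = SNUCell(10, 4)
s0 = y0 = torch.zeros(1, 4)
y, state = cell(torch.randn(1, 10), (s0, y0))   # one time step
```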

by Bojian Yin (CWI), Federico Corradi (IMEC Holst Centre) and Sander Bohté (CWI)

Although inspired by biological brains, the neural networks that are the foundation of modern artificial intelligence (AI) use far more energy than their counterparts in nature, and many local "edge" applications are energy constrained. New learning frameworks and more detailed, spiking neural models can be used to train high-performance spiking neural networks (SNNs) for significant and complex tasks, like speech recognition and ECG analysis, where the spiking neurons communicate only sparingly. Theoretically, these networks outperform the energy efficiency of comparable classical artificial neural networks by two to three orders of magnitude.

by Timothée Masquelier (UMR5549 CNRS – Université Toulouse 3)

Back-propagation is THE learning algorithm behind the deep learning revolution. Until recently, it could not be used in spiking neural networks (SNNs), due to the non-differentiability of the spiking mechanism. But these issues can now be circumvented, signalling a new era for SNNs.
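The workaround is the surrogate-gradient method: the forward pass keeps the non-differentiable spike, while the backward pass substitutes a smooth pseudo-derivative so that standard back-propagation can proceed. A minimal PyTorch sketch, assuming a fast-sigmoid surrogate (one common choice among several):

```python
import torch

# Surrogate-gradient trick: the forward pass emits a hard binary spike, but
# the backward pass replaces the step function's zero/undefined derivative
# with a smooth pseudo-derivative, letting gradients flow through the SNN.

class SpikeFunction(torch.autograd.Function):
    @staticmethod
    def forward(ctx, membrane_potential):
        ctx.save_for_backward(membrane_potential)
        return (membrane_potential > 0).float()   # Heaviside step: spike or not

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        # fast-sigmoid surrogate derivative, peaked at the firing threshold
        surrogate = 1.0 / (1.0 + 10.0 * v.abs()) ** 2
        return grad_output * surrogate

spike = SpikeFunction.apply
v = torch.randn(5, requires_grad=True)
spike(v).sum().backward()   # gradients flow despite the step function
print(v.grad)
```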

by Steve Furber (The University of Manchester)

SpiNNaker – a Spiking Neural Network Architecture – is the world’s largest neuromorphic computing system. The machine incorporates over a million ARM processors, connected through a novel communications fabric, to support large-scale models of brain regions that operate in biological real time. Its goal is to contribute to the scientific Grand Challenge of understanding the principles of operation of the brain as an information-processing system.

by Andreas Baumbach (Heidelberg University, University of Bern), Sebastian Billaudelle (Heidelberg University), Virginie Sabado (University of Bern) and Mihai A. Petrovici (University of Bern, Heidelberg University)

Uncovering the mechanics of neural computation in living organisms is increasingly shaping the development of brain-inspired algorithms and silicon circuits in the domain of artificial information processing. Researchers from the European Human Brain Project employ the BrainScaleS spike-based neuromorphic platform to implement a variety of brain-inspired computational paradigms, from insect navigation to probabilistic generative models, demonstrating an unprecedented degree of versatility in mixed-signal neuromorphic substrates.

by Julian Göltz (Heidelberg University, University of Bern), Laura Kriener (University of Bern), Virginie Sabado (University of Bern) and Mihai A. Petrovici (University of Bern, Heidelberg University)

Many neuromorphic platforms promise fast and energy-efficient emulation of spiking neural networks, but unlike artificial neural networks, spiking networks have lacked a powerful universal training algorithm for more challenging machine learning applications. Such a training scheme has recently been proposed; combined with a biologically inspired form of information coding, it achieves state-of-the-art results in terms of classification accuracy, speed and energy consumption.
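The information coding in question is time-to-first-spike coding, in which stronger inputs fire earlier and each neuron needs at most one spike. The toy sketch below shows only the encoding side; the training scheme itself back-propagates errors through the exact spike times, which is beyond this snippet.

```python
import numpy as np

# Toy time-to-first-spike encoder: inputs in [0, 1] become single spike
# times, with stronger inputs firing earlier. A readout can then decide as
# soon as the first output spikes arrive, which is what makes the scheme
# fast and sparse.

def ttfs_encode(values, t_max=20.0):
    """Map values in [0, 1] to spike times: larger value, earlier spike."""
    values = np.clip(values, 0.0, 1.0)
    return t_max * (1.0 - values)

pixels = np.array([0.9, 0.5, 0.1])
print(ttfs_encode(pixels))   # [ 2. 10. 18.] : brightest pixel spikes first
```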

by Frédéric Alexandre, Xavier Hinaut, Nicolas Rougier and Thierry Viéville (Inria)

Major algorithms from artificial intelligence (AI) lack higher cognitive functions such as problem solving and reasoning. By studying how these functions operate in the brain, we can develop a biologically informed cognitive computing, transferring our knowledge about architectural and learning principles in the brain to AI.

by Bernard Girau (Université de Lorraine), Benoît Miramond (Université Côte d’Azur), Nicolas Rougier (Inria Bordeaux) and Andres Upegui (University of Applied Sciences of Western Switzerland)

SOMA is a collaborative project involving researchers in France and Switzerland, which aims to develop a computing machine with self-organising properties inspired by the functioning of the brain. The project lies at the intersection of four main research fields: adaptive reconfigurable computing, cellular computing, computational neuroscience, and neuromorphic engineering. In the framework of SOMA, we designed the SCALP platform, a 3D array of FPGAs and processors that makes it possible to prototype and evaluate self-organisation mechanisms on physical cellular machines.

by Lyes Khacef (University of Groningen), Laurent Rodriguez and Benoît Miramond (Université Côte d’Azur, CNRS)

Local plasticity mechanisms enable our brains to self-organise, both in structure and function, in order to adapt to the environment. This unique property is the inspiration for this study: we propose a brain-inspired computational model for self-organisation, then discuss its impact on the classification accuracy and energy efficiency of an unsupervised multimodal association task.
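As a generic illustration of self-organisation through local plasticity, here is a minimal Kohonen-style self-organising map update in Python; it is a plausible building block for such models, not the exact model proposed in the study.

```python
import numpy as np

# Minimal Kohonen-style self-organising map: each neuron on a 2D grid learns
# a prototype vector, and every update is local: the best-matching neuron
# and its neighbours move slightly towards the current input.

rng = np.random.default_rng(0)
grid = 8                                  # 8x8 map of neurons
weights = rng.random((grid, grid, 3))     # one 3-D prototype per neuron

def som_step(x, weights, lr=0.1, sigma=1.5):
    # best-matching unit: the neuron whose prototype is closest to the input
    d = np.linalg.norm(weights - x, axis=2)
    bi, bj = np.unravel_index(np.argmin(d), d.shape)
    ii, jj = np.meshgrid(np.arange(grid), np.arange(grid), indexing="ij")
    # Gaussian neighbourhood: nearby neurons are pulled along (local rule)
    h = np.exp(-((ii - bi) ** 2 + (jj - bj) ** 2) / (2 * sigma ** 2))
    weights += lr * h[..., None] * (x - weights)

for _ in range(1000):
    som_step(rng.random(3), weights)      # prototypes organise topographically
```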

by Nasir Ahmad (Radboud University), Bodo Rueckauer (University of Zurich and ETH Zurich) and Marcel van Gerven (Radboud University)

The success of deep learning is founded on learning rules with biologically implausible properties, entailing high memory and energy costs. At the Donders Institute in Nijmegen, NL, we have developed GAIT-Prop, a learning method for large-scale neural networks that alleviates some of the biologically unrealistic attributes of conventional deep learning. By localising weight updates in space and time, our method reduces computational complexity and illustrates how powerful learning rules can be implemented within the constraints on connectivity and communication present in the brain.
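As a rough illustration of what "localising weight updates" means, the sketch below applies a delta-rule step that uses only signals available at a single layer: its input, its output, and a locally provided target. How GAIT-Prop actually derives those layer-wise targets is the substance of the method and is not reproduced here.

```python
import numpy as np

# Illustrative local update: given a target available at the layer itself,
# the weight change needs only pre-synaptic input, post-synaptic output and
# that local target; no global error signal is transported backwards.

def local_update(W, x, target, lr=0.01):
    """Delta-rule step moving this layer's output towards its local target."""
    y = np.tanh(W @ x)
    error = target - y                           # locally available error
    dW = lr * np.outer(error * (1 - y ** 2), x)  # gradient of the local loss
    return W + dW

rng = np.random.default_rng(0)
W = 0.1 * rng.standard_normal((4, 3))
x = rng.standard_normal(3)
local_target = np.tanh(rng.standard_normal(4))   # stand-in for a derived target
W = local_update(W, x, local_target)
```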

by Dávid G. Nagy, Csenge Fráter and Gergő Orbán (Wigner Research Center for Physics)

Efficient compression algorithms for visual data discard information to curb storage requirements. An implicit optimisation goal when constructing a successful compression algorithm is to keep compression artifacts unnoticed, i.e., reconstructions should appear to the human eye identical to the original data. Understanding which aspects of stimulus statistics human perception and memory are sensitive to can be illuminating for the next generation of compression algorithms. New machine learning technologies promise fresh insights into how to chart the sensitivity of memory to misleading distortions and, consequently, how to lay down the principles for efficient data compression.

by Effrosyni Doutsi (FORTH-ICS), Marc Antonini (I3S/CNRS) and Panagiotis Tsakalides (University of Crete and FORTH/ICS)

The 3D ultra-high-resolution world captured by the visual system is sensed, processed and transferred through a dense network of tiny cells called neurons. An understanding of neuronal communication has the potential to open new horizons for the development of ground-breaking image and video compression systems. A recently proposed neuro-inspired compression system promises to change the framework of current state-of-the-art compression algorithms.

by Abbas Rahimi, Manuel Le Gallo and Abu Sebastian (IBM Research Europe)

Hyperdimensional computing (HDC) takes inspiration from the size of the brain’s circuits to compute with points of a hyperdimensional space, thriving on randomness and tolerating mediocre components. We have developed a complete in-memory HDC system in which all operations are implemented on noisy memristive crossbar arrays, exhibiting extreme robustness and energy efficiency for various classification tasks such as language recognition, news classification, and hand-gesture recognition.
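A minimal software sketch of the HDC recipe behind, e.g., language recognition: random high-dimensional codes for symbols, binding by element-wise multiplication, bundling by addition, and nearest-neighbour lookup by similarity. The dimension, n-gram encoding and toy profiles below are illustrative assumptions; in the in-memory system these same operations run on memristive crossbars.

```python
import numpy as np

# HDC in a nutshell: every symbol gets a random bipolar hypervector; n-grams
# are bound by element-wise multiplication (with np.roll encoding position),
# all n-grams of a text are bundled by summation, and classification is a
# nearest-neighbour search over class profile vectors.

D = 10000
rng = np.random.default_rng(0)
item = {c: rng.choice([-1, 1], D) for c in "abcdefghijklmnopqrstuvwxyz "}

def encode(text, n=3):
    """Bundle the bound hypervectors of all n-grams of the text."""
    vecs = [np.prod([np.roll(item[c], i) for i, c in enumerate(g)], axis=0)
            for g in (text[i:i + n] for i in range(len(text) - n + 1))]
    return np.sign(np.sum(vecs, axis=0))

def similarity(a, b):
    return a @ b / D

profile = {"en": encode("the quick brown fox jumps over the lazy dog"),
           "de": encode("der schnelle braune fuchs springt ueber den hund")}
query = encode("the dog jumps")
print(max(profile, key=lambda k: similarity(profile[k], query)))   # -> 'en'
```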

by Marco Breiling, Bijoy Kundu (Fraunhofer Institute for Integrated Circuits IIS) and Marc Reichenbach (Friedrich-Alexander-Universität Erlangen-Nürnberg)

How small can we make the energy consumed by an artificial intelligence (AI) algorithm plus associated neuromorphic computing hardware for a given task? That was the theme of a German national competition on AI hardware acceleration, which aimed to foster disruptive innovation. Twenty-seven academic teams, each made up of one or two partners from universities and research institutes, applied to enter the competition. Two of the eleven teams selected to enter came from Fraunhofer IIS: ADELIA and Lo3-ML (the latter together with Friedrich-Alexander-Universität Erlangen-Nürnberg, FAU) [L1]. Lo3-ML went on to become one of the four national winners, awarded for best energy efficiency by the German research minister Anja Karliczek.

by Timo Oess and Heiko Neumann (Ulm University)

Audition equips us with a far-reaching, 360-degree sense that enables rough but fast detection of targets in the environment. However, it lacks the spatial precision of vision when more accurate localisation is required. Integrating signals from both modalities into a multisensory audio-visual representation leads to concise and robust perception of the environment. We present a brain-inspired neuromorphic modelling approach that integrates auditory and visual signals coming from neuromorphic sensors to perform multisensory stimulus localisation in real time.
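Why integration helps can be seen from the classical reliability-weighted account of multisensory fusion, sketched below: each cue is weighted by its inverse variance, so the fused estimate is always less noisy than either cue alone. The article's model implements the integration with spiking neurons rather than with this closed-form rule.

```python
# Classical reliability-weighted cue fusion: each modality's position
# estimate is weighted by its inverse variance (its reliability).

def fuse(mu_a, var_a, mu_v, var_v):
    """Maximum-likelihood combination of two noisy position estimates."""
    w_a = (1 / var_a) / (1 / var_a + 1 / var_v)
    mu = w_a * mu_a + (1 - w_a) * mu_v
    var = 1 / (1 / var_a + 1 / var_v)   # fused variance beats both cues
    return mu, var

# audition: panoramic but coarse; vision: precise but narrow field of view
print(fuse(mu_a=12.0, var_a=9.0, mu_v=10.0, var_v=1.0))   # (10.2, 0.9)
```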

by Ella Janotte (Italian Institute of Technology), Michele Mastella, Elisabetta Chicca (University of Groningen) and Chiara Bartolozzi (Italian Institute of Technology)

In nature, touch is a fundamental sense. This should also be true for robots and prosthetic devices. In this project we aim to emulate the biological principles of tactile sensing and to apply them to artificial autonomous systems.

by Henrik D. Mettler (University of Bern), Virginie Sabado (University of Bern), Walter Senn (University of Bern), Mihai A. Petrovici (University of Bern and Heidelberg University) and Jakob Jordan (University of Bern)

Despite years of progress, we still lack a complete understanding of learning and memory. We leverage optimisation algorithms inspired by natural evolution to discover phenomenological models of brain plasticity and thus uncover clues to the underlying computational principles. We hope this accelerates progress towards deep insights into information processing in biological systems, with inherent potential for the development of powerful artificial learning machines.
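A toy sketch of the evolutionary search idea, under strong simplifying assumptions: candidate plasticity rules are small symbolic expressions over locally available quantities, scored by how well a synapse learns a task when using them, with the worst candidates replaced each generation. The actual study uses genetic programming over a far richer expression space and task set.

```python
import random

# Toy evolutionary search over symbolic plasticity rules (illustrative only).
# Each rule is an expression over pre- and post-synaptic activity and the
# current weight w; fitness rewards rules that drive w to its ideal value.

TERMS = ["pre", "post", "w", "pre*post", "-w", "pre*post - w"]

def fitness(rule, trials=200):
    w, score = 0.0, 0.0
    for _ in range(trials):
        pre = random.choice([0.0, 1.0])
        post = pre                      # toy task: output should follow input
        w += 0.05 * eval(rule)          # apply the candidate plasticity rule
        score -= (w - 1.0) ** 2         # ideal weight for this task is 1
    return score

population = random.sample(TERMS, 4)
for _ in range(10):                     # keep the fittest, mutate the worst
    population.sort(key=fitness, reverse=True)
    population[-1] = random.choice(TERMS)
print(population[0])                    # e.g. 'pre*post - w', Hebb plus decay
```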

by Martin Nilsson (RISE Research Institutes of Sweden) and Henrik Jörntell (Lund University, Department of Experimental Medical Science)

Biology-inspired computing is often based on spiking networks, but can we improve efficiency by moving to higher levels of abstraction? To do this, we need to explain the precise meaning of the spike trains that biological neurons use for mutual communication. In a cooperation between RISE and Lund University, we found a spectacular match between a mechanistic, theoretical model with only three parameters and in vivo neuron recordings, providing a clear picture of exactly what biological neurons “do”, i.e., what they communicate to each other.

by Giacomo Indiveri (University of Zurich and ETH Zurich)

Artificial intelligence systems might beat you in a game of Go, but they still have serious shortcomings when they are required to interact with the real world. The NeuroAgents project is developing autonomous intelligent agents that can express cognitive abilities while interacting with the environment.
