
by Abbas Rahimi, Manuel Le Gallo and Abu Sebastian (IBM Research Europe)

Hyperdimensional computing (HDC) takes inspiration from the size of the brain’s circuits to compute with points of a hyperdimensional space, thriving on randomness and mediocre components. We have developed a complete in-memory HDC system in which all operations are implemented on noisy memristive crossbar arrays, and which exhibits extreme robustness and energy efficiency on classification tasks such as language recognition, news classification, and hand gesture recognition.

A cursory examination of the human brain shows that: (i) its neural circuits are very large (there can be tens of thousands of fan-ins and fan-outs); (ii) activity is widely distributed within a circuit and among different circuits; (iii) individual neurons need not be highly reliable; and (iv) brains operate with very little energy. These characteristics stand in stark contrast to the way traditional computers are built and operate. Therefore, to approach such intelligent, robust, and energy-efficient biological computing systems, we need to rethink computing and focus on alternative models, such as hyperdimensional computing (HDC) [1][2].

The difference between traditional computing and HDC is apparent in the elements that the machine computes with. In traditional computing, the elements are Booleans, numbers, and memory pointers. In HDC they are multicomponent vectors, or tuples, where neither an individual component nor a subset thereof has a specific meaning: the information is distributed holographically over all the components of the vector. Furthermore, the vectors are very wide: the number of components is in the thousands. These properties are based on the observation that key aspects of human memory, perception, and cognition can be explained by the mathematical properties of hyperdimensional spaces comprising high-dimensional binary vectors known as hypervectors [1]. Hypervectors are defined as d-dimensional (where d ≥ 1,000) (pseudo)random vectors with independent and identically distributed (i.i.d.) components. When the dimensionality is in the thousands, a huge number of quasi-orthogonal hypervectors exist. This allows HDC to combine such hypervectors into new hypervectors using well-defined vector space operations, defined such that the resulting hypervector is unique and of the same dimension.
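The quasi-orthogonality property is easy to verify numerically. The following minimal Python/NumPy sketch (our illustration, not code from the paper) draws two random bipolar hypervectors and shows that their normalised dot product concentrates near zero:

import numpy as np

rng = np.random.default_rng(0)
d = 10_000  # dimensionality of the hyperdimensional space

# Two i.i.d. random bipolar hypervectors with components in {-1, +1}.
a = rng.choice([-1, 1], size=d)
b = rng.choice([-1, 1], size=d)

# For independently drawn hypervectors, the normalised dot product
# concentrates around 0 with standard deviation 1/sqrt(d) ~ 0.01,
# so any two of them are quasi-orthogonal with near certainty.
print(a @ b / d)  # close to 0.0
print(a @ a / d)  # exactly 1.0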

HDC has been employed in a range of applications, including traditional computing, machine learning, cognitive computing, and robotics [3]. It has shown significant promise in machine learning applications that involve temporal patterns, such as text classification, biomedical signal processing, multimodal sensor fusion, and distributed sensors [4]. A key advantage is that training in HDC works in one or only a few shots: object categories are learned from one or a few examples, and in a single pass over the training data, as opposed to the many repeated iterations required by deep learning models [4].
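To make the single-pass nature of training concrete, here is a hedged sketch (ours; the encoder is a placeholder for the HDC encoding described below): each class prototype is formed by bundling the hypervectors of its training examples in one pass, with no gradient updates or repeated epochs.

import numpy as np

rng = np.random.default_rng(1)
d = 10_000

def encode(example):
    # Placeholder encoder: a real HDC pipeline would map the raw input
    # to a hypervector by binding/bundling basis hypervectors.
    return rng.choice([-1, 1], size=d)

# Toy training set: a few examples per class suffice in HDC.
training_data = {"class_A": ["a1", "a2", "a3"], "class_B": ["b1", "b2"]}

# Single pass: bundle (componentwise-sum, then threshold) each class.
prototypes = {}
for label, examples in training_data.items():
    acc = np.zeros(d)
    for x in examples:
        acc = acc + encode(x)
    prototypes[label] = np.where(acc >= 0, 1, -1)  # majority vote, ties -> +1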

HDC begins by representing symbols with i.i.d. hypervectors that are combined by nearly i.i.d.-preserving operations, namely binding, bundling, and permutation, and then stored in associative memories to be recalled, matched, decomposed, or reasoned about. Manipulating and comparing these large patterns creates a bottleneck when implemented on conventional von Neumann computer architectures. On the other hand, the chain of operations implies that failure in a component of a hypervector is not contagious, leading to a robust computational framework. For instance, when unrelated objects are represented by quasi-orthogonal 10,000-bit vectors, more than a third of the bits of a vector can be flipped by randomness, device variations, defects, and noise, and the faulty vector can still be identified with the correct one with near certainty, because it remains closer to the original error-free vector than to any unrelated vector. Therefore, the inherent robustness and the need to manipulate large patterns stored in memory make HDC particularly well suited to emerging computing paradigms such as in-memory computing, or computational memory, based on emerging nanoscale resistive memory or memristive devices [5].
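The three operations and the robustness argument can be illustrated with a short sketch on bipolar hypervectors, where componentwise multiplication plays the role of XOR (again an illustration of ours, not the hardware implementation):

import numpy as np

rng = np.random.default_rng(2)
d = 10_000
x = rng.choice([-1, 1], size=d)
y = rng.choice([-1, 1], size=d)
z = rng.choice([-1, 1], size=d)

bound = x * y                 # binding: componentwise multiply
bundled = np.sign(x + y + z)  # bundling: componentwise majority
permuted = np.roll(x, 1)      # permutation: cyclic shift

# Robustness: flip more than a third of the components of x at random.
noisy = x.copy()
flip = rng.choice(d, size=int(0.34 * d), replace=False)
noisy[flip] *= -1

# The corrupted vector is still far closer to x (similarity ~ 0.32)
# than to an unrelated vector (similarity ~ 0).
print(noisy @ x / d)  # ~ 1 - 2 * 0.34 = 0.32
print(noisy @ y / d)  # ~ 0.0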

In the past few years, we have been working towards designing and optimising a complete integrated in-memory HDC system in which all the operations of HDC are implemented on two planar memristive crossbars together with peripheral digital CMOS circuits. We devised a way of performing hypervector binding entirely within a first memristive crossbar, using an in-memory read logic operation, and hypervector bundling near the crossbar with CMOS logic. Together, these key operations encode hypervectors with high precision while eliminating the need to repeatedly program (i.e., write) the memristive devices. Unlike previous work, this approach matches the limited endurance of memristive devices and scales well to 10,000-dimensional hypervectors, making this work the largest experimental demonstration of HDC with memristive hardware to date [6].

In our architecture, shown in Figure 1, an associative memory search is performed using a second memristive crossbar for in-memory dot-product operations on the encoded output hypervectors from the first crossbar, realising the full functionality of the HDC system. Our combination of analog in-memory computing with CMOS logic allows continual functioning of the memristive crossbars with the desired accuracy for a wide range of multiclass classification tasks, including language classification, news classification, and hand gesture recognition from electromyography signals. We verify the integrated inference functionality of the system through large-scale mixed hardware/software experiments, in which hypervectors are encoded in 760,000 hardware phase-change memory devices performing analog in-memory computing. Our experiments achieve accuracies comparable to the software baselines and surpass those reported in previous work. Furthermore, a complete system-level design of the in-memory HDC architecture synthesised using 65 nm CMOS technology demonstrates a greater than six-fold end-to-end reduction in energy compared with a dedicated digital CMOS implementation. More details can be found in our paper published in Nature Electronics [6].
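The associative memory search at the end of the pipeline reduces to an argmax over dot products, which is what the second crossbar computes in a single step. The following hedged software emulation (the Gaussian noise is our crude stand-in for analog imprecision, not a model of phase-change memory devices) shows why the search tolerates both query corruption and noisy dot products:

import numpy as np

rng = np.random.default_rng(3)
d, n_classes = 10_000, 4

# Prototype hypervectors, one per class, as stored in the AM crossbar.
prototypes = rng.choice([-1, 1], size=(n_classes, d))

# Query hypervector: a corrupted copy of class 2's prototype.
query = prototypes[2].copy()
flip = rng.choice(d, size=d // 5, replace=False)
query[flip] *= -1

# The crossbar evaluates all dot products in parallel in the analog
# domain; the added noise (crudely) emulates that imprecision.
scores = prototypes @ query + rng.normal(0.0, 0.05 * d, size=n_classes)
print(int(np.argmax(scores)))  # -> 2, despite 20% bit flips and noise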

Figure 1: The concept of in-memory HDC. A schematic of the concept of in-memory HDC showing the essential steps associated with HDC (left) and how they are realised using in-memory computing (right). An item memory (IM) stores h, d-dimensional basis hypervectors that correspond to the symbols associated with a classification problem. During learning, based on a labelled training dataset, a designed encoder performs dimensionality-preserving mathematical manipulations on the basis hypervectors to produce c, d-dimensional prototype hypervectors that are stored in an AM. During classification, the same encoder generates a query hypervector based on a test example. Subsequently, an AM search is performed between the query hypervector and the hypervectors stored in the AM to determine the class to which the test example belongs. In in-memory HDC, both the IM and AM are mapped onto crossbar arrays of memristive devices. The mathematical operations associated with encoding and AM search are performed in place by exploiting in-memory read logic and dot-product operations, respectively. A dimensionality of d = 10,000 is used. SA: sense amplifier; AD converters: analog-to-digital converters. Figure adapted from [6].

References:
[1] P. Kanerva, “Hyperdimensional computing: an introduction to computing in distributed representation with high-dimensional random vectors,” Cognitive Computation, 2009.
[2] P. Kanerva, “Computing with high-dimensional vectors,” IEEE Design & Test, 2019.
[3] A. Mitrokhin et al., “Learning sensorimotor control with neuromorphic sensors: toward hyperdimensional active perception,” Science Robotics, 2019.
[4] A. Rahimi et al., “Efficient biosignal processing using hyperdimensional computing: network templates for combined learning and classification of ExG signals,” Proceedings of the IEEE, 2019.
[5] A. Sebastian et al., “Memory devices and applications for in-memory computing,” Nature Nanotechnology, 2020.
[6] G. Karunaratne et al., “In-memory hyperdimensional computing,” Nature Electronics, 2020.

Please contact:
Abbas Rahimi, IBM Research Europe, Säumerstrasse 4, 8803 Rüschlikon, Switzerland
+41 44 724 8303
