
by Hui Han (Fraunhofer IESE) and Jingyue Li (NTNU)

Tiny machine learning (TinyML) sits at the intersection of machine learning (ML) algorithms and embedded systems (hardware and software), targeting low latency, low power consumption, and a small footprint. It keeps data mainly on edge devices, where it is processed and ML tasks run directly. The TinyML paradigm is therefore expected to preserve AI security and help combat cybercrime. In this study, we explore how TinyML, as a cutting-edge ML technology, addresses relevant AI security problems (including cybercrime) across the AI lifecycle: data engineering, model engineering and model deployment. Finally, we discuss opportunities for future research.

Tiny machine learning (TinyML) is a fast-growing field of machine learning (ML) technologies and applications, comprising algorithms, hardware (dedicated integrated circuits), and software, that performs on-device analytics of sensor data (vision, audio, inertial measurement units, biomedical signals, etc.) at extremely low power, typically in the order of milliwatts, enabling a variety of always-on ML use cases on battery-powered devices [L1]. TinyML will play an essential role in our everyday interactions with ML in the near future.

TinyML for AI security
A TinyML system integrates ML-based mechanisms with edge devices based on microcontroller units (MCUs). This smooths the path for efficient services and novel applications that do not need ubiquitous processing support from the cloud [1]. TinyML keeps data on-premise, which enhances security and ensures data privacy because sensitive raw data never leaves the device. More importantly, TinyML enables data analysis and real-time decision-making in the field without relying on cloud computing power, which preserves AI security.

We explore how TinyML as a new technique solves relevant AI security problems from the AI lifecycle aspect: data engineering, model engineering and model deployment.

Data engineering
Data privacy and security in the digital age are a significant issue in AI security. Transmitting raw data from edge devices to the cloud over unstable and lossy wireless channels can jeopardise data privacy or lead to stolen, lost or compromised data, transmission errors and cyberattacks (e.g., man-in-the-middle (MITM) attacks) [1]. TinyML allows embedded devices to process data locally (close to the sensor), which results in better data privacy and security.
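The privacy benefit of local processing can be sketched as follows: the device reduces a window of raw sensor samples to a small feature summary and transmits only that, so the raw signal never leaves the device. The window contents and feature names here are illustrative assumptions, not from the article.

```python
# Sketch: on-device feature extraction so raw sensor data never leaves the device.
import math

def extract_features(window):
    """Reduce a window of raw sensor samples to a small feature vector."""
    n = len(window)
    mean = sum(window) / n
    var = sum((x - mean) ** 2 for x in window) / n
    return {"mean": mean, "std": math.sqrt(var), "peak": max(abs(x) for x in window)}

raw_samples = [0.1, 0.3, -0.2, 0.8, 0.05, -0.4]   # raw signal: stays on the device
payload = extract_features(raw_samples)            # only this summary is transmitted
```

Even if the payload is intercepted in transit, an attacker obtains only coarse aggregates rather than the raw signal.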

Model engineering
Model engineering faces important security challenges. By running ML models (deep-learning or neural-network models) with TensorFlow Lite for inference on ultra-low-power microcontrollers – e.g., the SparkFun Edge Development Board Apollo3 Blue and the Arduino Nano 33 BLE Sense board – TinyML offers multiple advantages (low cost, low energy consumption, and ubiquitous MCUs to host ML models), notably preserving AI security.
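A key step in fitting such models onto MCUs is post-training quantization, the general technique TensorFlow Lite applies when preparing models for microcontrollers: float weights are mapped to 8-bit integers with a scale and zero point. The minimal sketch below illustrates the idea with affine quantization; the weight values are illustrative.

```python
# Sketch of post-training affine quantization: float32 weights become int8
# values plus a (scale, zero_point) pair, cutting storage roughly 4x.

def quantize(weights, num_bits=8):
    """Map float weights onto signed num_bits integers."""
    qmin, qmax = -(2 ** (num_bits - 1)), 2 ** (num_bits - 1) - 1
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / (qmax - qmin)
    zero_point = round(qmin - lo / scale)
    q = [max(qmin, min(qmax, round(w / scale) + zero_point)) for w in weights]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float weights from the integer representation."""
    return [(v - zero_point) * scale for v in q]

w = [-1.0, -0.25, 0.0, 0.5, 1.5]
q, s, z = quantize(w)
w_hat = dequantize(q, s, z)   # close to w, but stored as 8-bit integers
```

The reconstruction error per weight is bounded by the scale, which is why quantized models usually lose little accuracy while becoming small enough for MCU flash.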

Model deployment
The last stage of the lifecycle is deploying the model for practical use. By avoiding continuous connectivity to the cloud, TinyML makes it possible to respond or make decisions within a short time. This enables a wide array of TinyML deployment use cases – such as device identification, authentication, intrusion detection, malware detection, anomaly detection, secure range search, and attack detection – where privacy and security are vital factors.
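As a minimal sketch of one deployment use case listed above, on-device anomaly detection can be as simple as a running z-score test: the device learns a baseline from its own readings and flags values that deviate sharply, with no cloud round trip. The threshold and readings are illustrative assumptions.

```python
# Sketch: an on-device anomaly detector using running statistics (Welford's
# algorithm), small enough to run on an MCU-class device.

class AnomalyDetector:
    def __init__(self, threshold=3.0):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0   # running count, mean, sum of squares
        self.threshold = threshold

    def update(self, x):
        """Return True if x deviates from the learned baseline, then learn x."""
        anomalous = False
        if self.n >= 2:
            std = (self.m2 / self.n) ** 0.5
            anomalous = std > 0 and abs(x - self.mean) / std > self.threshold
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)
        return anomalous

det = AnomalyDetector()
readings = [1.0, 1.1, 0.9, 1.0, 1.05, 9.0]   # last value is an injected spike
flags = [det.update(x) for x in readings]     # only the spike is flagged
```

Because the decision is made locally, an intrusion or sensor fault can be flagged in real time even when connectivity is unavailable.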

Cybercrimes such as network intrusion, malware and malicious human intervention are widespread in today's digital society. As mentioned above, significant applications of TinyML include intrusion detection, malware detection and attack detection, which can help combat such cybercrimes.

Future work
TinyML is at the cutting edge of computer technology and AI, which means it faces many challenges as well as many opportunities. We recommend some potential future research directions:

  • Creating an effective TinyML dataset or repurposing existing datasets for TinyML.
    Traditional ML models need large amounts of data, which are hard to obtain: collecting and labelling large datasets is expensive, and the data may only be usable for specific tasks. Although a number of well-known open-source datasets exist for training ML models, these public datasets are not suitable for training ultra-low-power models for embedded devices and are relatively large for TinyML-specific use cases. A dedicated TinyML dataset is therefore a promising area for future research: TinyML scholars could build brand-new datasets or repurpose existing ones.
  • Designing specific algorithms for TinyML-specific requirements relevant to AI security.
    Most ML algorithms, including TinyML algorithms, are vulnerable to perturbations. What's more, some current algorithms are either too computationally intensive or too complex for TinyML deployment [2]. An interesting research direction is therefore to empirically evaluate how robust existing TinyML models and algorithms are, and how to improve their robustness in terms of AI security.
  • Protocol, benchmarks, standards and rules for TinyML referring to AI security.
    New endpoint security mechanisms must not only meet adequate security standards but also be as lightweight as possible. The TinyML community has extended the existing MLPerf benchmark suite to TinyMLPerf for TinyML systems; its goal is to provide a detailed description of the motivation and guiding principles for benchmarking TinyML systems [L2]. Although the security protocol Object Security for Constrained RESTful Environments (OSCORE) is designed to tackle this challenge, more standards are needed for AI security when employing TinyML [3].
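The robustness evaluation proposed in the second direction can be sketched with a toy experiment: perturb the inputs of a TinyML-sized model and measure how its accuracy degrades. The nearest-centroid classifier, sample data, and noise level below are illustrative assumptions, not a benchmark from the article.

```python
# Sketch: empirical robustness check for a tiny model. Inputs are perturbed
# with bounded uniform noise and accuracy is re-measured.
import random

def nearest_centroid(x, centroids):
    """A minimal 'TinyML-sized' classifier: assign x to the closest centroid."""
    return min(centroids, key=lambda c: sum((a - b) ** 2 for a, b in zip(x, centroids[c])))

def accuracy(samples, centroids, noise=0.0, seed=0):
    rng = random.Random(seed)
    correct = 0
    for x, label in samples:
        x_pert = [v + rng.uniform(-noise, noise) for v in x]   # bounded perturbation
        correct += nearest_centroid(x_pert, centroids) == label
    return correct / len(samples)

centroids = {"idle": (0.0, 0.0), "active": (1.0, 1.0)}
samples = [((0.1, 0.1), "idle"), ((0.9, 1.0), "active"), ((0.0, 0.2), "idle")]
clean = accuracy(samples, centroids)              # accuracy without perturbation
noisy = accuracy(samples, centroids, noise=0.5)   # accuracy under perturbation
```

Sweeping the noise level and plotting the accuracy curve gives a first, empirical picture of a model's robustness margin before more rigorous adversarial evaluation.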

The work described in this article has been carried out in the frame of an ERCIM “Alain Bensoussan” Fellowship.

Links:
[L1] https://www.tinyml.org
[L2] https://mlperf.org

References:
[1] R. Sanchez-Iborra and A. F. Skarmeta: “TinyML-enabled frugal smart objects: challenges and opportunities,” IEEE Circuits and Systems Magazine, vol. 20, no. 3, pp. 4–18, 2020, doi: 10.1109/MCAS.2020.3005467.
[2] S. Siddiqui, C. Kyrkou, and T. Theocharides: “Mini-NAS: a neural architecture search framework for small scale image classification applications”, in TinyML Research Symposium, 2021, pp. 1–8.
[3] H. Doyu, R. Morabito, and M. Brachmann: “A tinyMLaaS ecosystem for machine learning in IoT: overview and research challenges,” in International Symposium on VLSI Design, Automation and Test, 2021, pp. 1–6. doi: 10.1109/VLSI-DAT52063.2021.9427352.

Please contact:
Hui Han
Fraunhofer Institute for Experimental Software Engineering IESE, Germany
