by Frank Alexander Kraemer, Nattachart Tamkittikhun and Anders Eivind Braten (NTNU)
Inevitably, there will be a huge number of sensor devices within the internet of things (IoT) - but how can we possibly manage to optimise each and every one of them? Our answer is to treat them as autonomous units, much like robots. To this end we have been experimenting with different approaches to find out how constrained devices can benefit from machine learning, so that they can operate optimally.
Sensor devices are often situated in heterogeneous environments that change over time, for instance through a change of location or variations in the weather. This is critical for their operation: many sensor devices use energy harvesting, such as solar energy, to sustain their operation, and their energy budget is critical to achieving their goals. This requires a high degree of optimisation. One of the characteristics of the internet of things (IoT), however, is its expected scale in terms of the number of devices. The task of optimising IoT sensors, or of individually overseeing their operation, therefore cannot be performed manually. This leaves us with two options: either over-dimensioning the system, for instance by investing in larger solar panels or batteries, or reducing the duty cycle of the sensor devices to save energy, which effectively means sensing less frequently and sending less data. In either case, the system does not operate optimally. This was also our experience within a smart city sensing project [1], where we used a static sensing approach: sensing emission data every six minutes worked adequately during the summer, but the solar panels could not provide enough energy during the dark Nordic winter, which eventually caused the sensor devices to shut down.
Figure 1: Sensor devices need to constantly adapt and plan ahead to maintain optimal operation in variable environmental conditions.
The experiences within this smart city project motivated our approach of autonomous and adaptive sensing in the ART project: Instead of looking at sensor devices as simple and constrained sources of data, we see them as autonomous agents, much like robots. Throughout their operation, they have to constantly plan ahead and make decisions based on the changing environment and what they have observed so far. Possible mechanisms for this include different machine learning techniques, applied in combination with each other.
To verify such an approach, we established a lab for autonomous sensors, which consists of an array of sensor nodes called Waspmotes, an off-the-shelf sensing system from Libelium driven by an 8-bit microcontroller. The nodes communicate with a backend via LoRaWAN. Since both the sensor nodes and the network are fairly constrained, the question is how machine learning can be applied in such a scenario. One solution is a centralised approach, in which machine learning is applied as part of the device management. Instead of just collecting data and monitoring key performance parameters such as sending frequency and battery level, the backend also learns from the received metadata and calculates optimised sensing strategies. These sensing strategies are sent to the sensors every hour, and provide a guideline for how often data should be acquired to achieve a good balance between energy consumption and the required data rate.
This extended form of device management learns over time how the harvested solar energy depends on the current weather conditions. It also learns how the energy consumption changes with different sensing modes by considering the development of the battery level over time. Using weather forecasts, a planning algorithm predicts the resulting battery profile for different sensing strategies. The goal is both to keep the battery from being drained and to utilise the harvested energy to maximise the quality of the data that is sensed and delivered. The SINet project follows a similar approach, but focuses on managing intermittent network connectivity.
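The planning step can be illustrated with a minimal sketch: given a forecast of harvested energy per hour, simulate the battery profile for several candidate sensing intervals and keep the most data-rich one whose profile never falls below a safety reserve. All function names, intervals and energy figures here are illustrative assumptions, not values from the project.

```python
# Hypothetical sketch of battery-profile planning. Energy is tracked in
# mWh; candidate strategies are sensing intervals in minutes. Shorter
# intervals yield more data but cost more energy per hour.

def plan_sensing_interval(forecast_mwh, battery_mwh, capacity_mwh,
                          candidates=(6, 15, 30, 60),
                          cost_per_sample_mwh=0.5, idle_mwh_per_hour=1.0,
                          reserve_mwh=50.0):
    """Return the shortest sensing interval (minutes) whose simulated
    battery profile stays above the reserve over the whole forecast."""
    for interval in sorted(candidates):     # try most data-rich first
        level, feasible = battery_mwh, True
        for harvested in forecast_mwh:      # one entry per forecast hour
            samples_per_hour = 60 / interval
            level += harvested - idle_mwh_per_hour \
                     - samples_per_hour * cost_per_sample_mwh
            level = min(level, capacity_mwh)  # battery cannot overcharge
            if level < reserve_mwh:
                feasible = False
                break
        if feasible:
            return interval
    return max(candidates)  # fall back to the most conservative strategy

# A sunny forecast permits frequent sensing; a dark one forces throttling.
sunny = plan_sensing_interval([20.0] * 24, battery_mwh=500.0,
                              capacity_mwh=1000.0)
dark = plan_sensing_interval([0.0] * 24, battery_mwh=100.0,
                             capacity_mwh=1000.0)
```

In this toy run the sunny forecast yields the shortest interval, while the dark forecast forces a longer one; the real planner additionally weighs the quality of the delivered data against the predicted profile.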
The preliminary results are very encouraging. Already a few features describing the solar position (azimuth, zenith) and a couple of weather characteristics (cloudiness, precipitation, symbolic weather) are good indicators of the expected solar intake for the next day. We are experimenting with different machine learning techniques, including k-nearest neighbours and neural networks. In the given setting, the choice of algorithm is less significant than the availability of sufficient training data. We are therefore investigating how autonomous sensing systems can be bootstrapped, i.e., how sensor nodes can start operating even when little or no previous data exists for the prediction.
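As a minimal sketch of the k-nearest-neighbours idea: represent each past day by a small feature vector and predict tomorrow's solar intake as the mean intake of the most similar historical days. The feature set, the history values and the helper name are illustrative assumptions, not project data.

```python
import math

# Hypothetical k-NN sketch: predict solar intake (mWh) for the next day
# from the k most similar days in the recorded history.

def knn_predict(history, query, k=3):
    """history: list of (feature_vector, solar_intake_mwh) pairs.
    Returns the mean intake of the k days closest to the query."""
    nearest = sorted(history, key=lambda row: math.dist(row[0], query))[:k]
    return sum(intake for _, intake in nearest) / k

# One entry per past day: (solar zenith in degrees, cloud fraction,
# precipitation in mm) -> measured solar intake. Values are made up.
history = [
    ((45.0, 0.1, 0.0), 320.0),  # clear, high sun
    ((47.0, 0.2, 0.0), 300.0),
    ((50.0, 0.9, 4.0), 60.0),   # overcast, rainy
    ((75.0, 0.8, 2.0), 15.0),   # low sun, overcast
    ((70.0, 0.3, 0.0), 90.0),
]

# Forecast for tomorrow: high sun, almost clear, dry.
prediction = knn_predict(history, (46.0, 0.15, 0.0), k=2)
```

This also makes the bootstrapping problem concrete: with an empty or very short history, the nearest neighbours are not meaningful, so a node needs a sensible default strategy until enough days have been observed.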
Another important question is how the system can perform its task in a less centralised way, i.e., with autonomy at the sensor level. To avoid single points of failure, sensors must be able to learn from insights gained at a global level, i.e., in the cloud, but still act locally. For IoT applications, this implies a paradigm shift: machine learning methods should not only help analyse the data collected by IoT nodes, but also help the nodes make optimal decisions about their own operation so that they can act autonomously.
Links:
SINet Project: https://sinet.item.ntnu.no
ART Project: https://ntnu.edu/iik/aas
Reference:
[1] Ahlers, D., Driscoll, P., Kraemer, F. A., Anthonisen, F., & Krogstie, J. (2016). A Measurement-Driven Approach to Understand Urban Greenhouse Gas Emissions in Nordic Cities. Norsk Informatikkonferanse NIK.
Please contact:
Frank Alexander Kraemer, Norwegian University of Science and Technology, NTNU, Norway