by Refiz Duro, Alexandra-Ioana Bojor and Georg Neubauer (AIT Austrian Institute of Technology GmbH)
The measures to tackle the COVID-19 pandemic have introduced a new way of living: human activities and behaviour have had to change. Lockdowns, closed businesses and social distancing have placed governments and their decision-making processes under scrutiny. Significant amounts of timely and precise data are critical in decision-making processes. Our contribution comes from a high vantage point – collecting and analysing Earth observation satellite imagery to detect moving vehicles as a direct sign of human activity. Can it be done?
In the current challenging times of the global pandemic, governments are attempting to counter the spread of COVID-19 using different approaches – some more effective than others. Most of these measures are based on insights gained from analysing data, from sources such as hospitals and COVID-19 test stations, about the number of individuals tested, infected and hospitalised. To help to contain the spread of the virus, important information can be acquired by analysing mobile phone tracking data (as in Italy), video surveillance data using cameras mounted on drones, or data collected using thermal cameras (as in Taiwan). These are just a few examples of information sources that can assist with decision making in the current situation. However, each has its own particular limitations pertaining to issues such as coverage (both spatial and temporal), ethics and data privacy.
In this respect, remote sensing and imaging – Earth observation (EO) – shows huge potential to help shed new light on the changes introduced by the pandemic. EO imagery features specific characteristics only available through satellite technologies, the main one being spatial coverage ranging from a few km² to 100 km² per image, thus allowing for quick, spatially wide-ranging data collection. Imagery from low-to-medium spatial resolution (> 2 m) to very high spatial resolution (0.3 m for optical sensors) is available from commercial and non-commercial EO missions. Furthermore, the availability of imaging sensors covering specific bands of the electromagnetic spectrum allows the constituent information within an image to be separated and interpreted as required. EO has been used increasingly in crisis situations in recent years, due largely to improvements in the availability, timeliness and quantity of today’s satellite images. Similarly, EO imagery has been used during the pandemic as a reliable and complementary source to help monitor the effectiveness of lockdown measures, social distancing and similar pandemic-related human activities.
The pandemic has affected traffic volumes in cities: in some cases traffic was reduced to half or even one third of its usual volume [L1]. EO imagery bypasses the limitations of spatial coverage encountered with other sensing devices and can observe traffic conditions across entire cities and road networks. The detection and counting of vehicles (e.g., trucks, cars) in sub-metre spatial resolution satellite imagery has been widely studied using standard machine learning methods [1, 2], but these approaches offer limited ability to acquire information on kinematic properties, such as speed and direction of movement. To extract precise information on these variables, we need to know a vehicle’s location at two different points in time. This is not straightforward with satellite images, owing largely to the inherent temporal resolution of the data: the revisit time for polar-orbiting satellites is not a matter of seconds, but of days. A direct comparison of images from two observations separated by many hours or even days is not useful for calculating the speed of a vehicle; this requires satellite observations taken (sub-)seconds apart.
The MiTrAs project, funded by the Austrian Security Research Programme, took on the challenge of acquiring the kinematic properties of vehicles from EO imagery. In principle, we bypassed these limitations by exploiting the design of the imaging instruments on the WorldView EO missions, where the positions of the detectors on the imaging device introduce short time lags between the acquisitions in different spectral bands. For instance, WorldView-3 has eight multispectral detectors and thus delivers eight images with time lags of up to 0.4 seconds, providing enough time to detect a vehicle on the ground moving at speeds above roughly 20 km/h.
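Once a vehicle’s pixel displacement between two band images is known, its ground speed follows directly from the band-to-band time lag and the sensor’s ground sample distance (GSD). A minimal sketch, assuming WorldView-3’s roughly 1.24 m multispectral GSD; the function name and parameters are illustrative, not the project’s code:

```python
import math

def ground_speed_kmh(dx_px, dy_px, gsd_m, dt_s):
    """Ground speed of an object displaced (dx_px, dy_px) pixels between
    two band images taken dt_s seconds apart, at gsd_m metres per pixel."""
    dist_m = math.hypot(dx_px, dy_px) * gsd_m
    return dist_m / dt_s * 3.6  # m/s -> km/h

# A vehicle shifted by two pixels at 1.24 m GSD over a 0.4 s band lag
# has moved ~2.48 m, i.e. slightly above the ~20 km/h detection floor:
print(round(ground_speed_kmh(2, 0, 1.24, 0.4), 1))  # → 22.3
```

This also makes the ~20 km/h floor plausible: a displacement of less than about one multispectral pixel within the 0.4 s lag is not reliably measurable.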
After extracting the data from the EO images using standard procedures, we applied a proven method: standard principal component analysis (PCA) combined with thresholding [3]. PCA effectively reduces the number of variables in a dataset while preserving as much information as possible. Applied to the WorldView eight-band images, a moving object (i.e., a vehicle) generates a pair of features: a bright and a dark spot (see Figure 1). This signature provides enough information to extract the position of the vehicle, its speed and its direction of movement.
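The PCA-plus-thresholding idea can be sketched in a few lines: stack the bands, project each pixel onto the principal components, and flag pixels that deviate strongly in a later component, where an object that has moved between band acquisitions shows up as a bright/dark pair. The scene below is a synthetic toy; the band split, component index and threshold are illustrative assumptions, not the project’s actual pipeline:

```python
import numpy as np

def pca_components(cube):
    """Project an (H, W, B) multispectral stack onto its principal
    components, ordered by decreasing variance."""
    h, w, b = cube.shape
    x = cube.reshape(-1, b).astype(float)
    x -= x.mean(axis=0)
    vals, vecs = np.linalg.eigh(np.cov(x, rowvar=False))
    order = np.argsort(vals)[::-1]          # eigh is ascending; flip it
    return (x @ vecs[:, order]).reshape(h, w, b)

def threshold_pairs(component, k=3.0):
    """Flag pixels deviating more than k standard deviations from the
    mean: candidate bright/dark halves of a moving-vehicle signature."""
    mu, sigma = component.mean(), component.std()
    return component > mu + k * sigma, component < mu - k * sigma

# Toy scene: a background correlated across bands, plus a blob that
# shifts between the early (0-3) and late (4-7) bands, mimicking a
# vehicle that moved during the band-to-band time lag.
rng = np.random.default_rng(0)
cube = rng.normal(size=(20, 20, 1)) + rng.normal(scale=0.1, size=(20, 20, 8))
cube[10, 10, :4] += 10.0   # vehicle position during the early bands
cube[10, 12, 4:] += 10.0   # same vehicle, shifted, during the late bands

scores = pca_components(cube)
bright, dark = threshold_pairs(scores[:, :, 1])   # second component
print(int(bright.sum()), int(dark.sum()))         # one pixel on each side
```

The first component absorbs the band-correlated background brightness; the moving blob, which breaks that correlation, surfaces in the second component as one strongly positive and one strongly negative pixel, matching the paired bright/dark signature in Figure 1.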
Figure 1: Left: Panchromatic WorldView-2 image of a city with traffic activity. Centre: After the roads have been extracted, PCA produces an output image in which neighbouring dark and bright polygons are visible, an indication of moving vehicles. Right: WorldView-3 image of a rural area showing the detection of a moving vehicle on a dirt road using the same methodology.
We performed the analysis using WorldView-2 and -3 images collected from several locations. By annotating the data, we could establish a confusion matrix, resulting in a detection accuracy of 85%. Vehicle movement is an indirect measure of human activity when put in the context of situation, place and time; comparisons of observations from before and during the pandemic can therefore provide valuable insights for decision-makers.
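For context, accuracy here is the standard confusion-matrix ratio of correct decisions to all annotated cases. The counts below are made up purely to illustrate the calculation; the project’s actual confusion matrix is not reproduced in this article:

```python
def detection_accuracy(tp, tn, fp, fn):
    """Fraction of correct decisions (true positives + true negatives)
    among all annotated cases in the confusion matrix."""
    return (tp + tn) / (tp + tn + fp + fn)

# Hypothetical counts that happen to yield an 85% figure:
print(detection_accuracy(tp=70, tn=15, fp=10, fn=5))  # → 0.85
```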
The main challenges with the method are: detecting darker vehicles against a dark background (e.g., the underlying asphalt); spatial resolution, since two vehicles only a few metres apart cannot be discerned individually; and false detections caused by other reflective objects producing similar bright/dark features. There is clearly potential to improve the method and to combine it with suitable machine learning approaches. Given the spatial coverage provided by EO images, including very high spatial resolution, the method can be applied not only to monitoring human (or vehicle) activity during a pandemic, but also for purposes such as pollution-source and traffic-flow monitoring. This will become especially attractive when the planned satellite constellations (e.g., WorldView Legion [L2]), with up to 15 revisits per day, establish the basis for near real-time monitoring.
References:
[1] T. Ophoff et al.: “Vehicle and Vessel Detection on Satellite Imagery: A Comparative Study on Single-Shot Detectors,” Remote Sens., vol. 12, no. 7, p. 1217, 2020, doi: 10.3390/rs12071217.
[2] S. Qu et al.: “Vehicle Detection in Satellite Images by Incorporating Objectness and Convolutional Neural Network,” J. Ind. Intell. Inf., 2016, doi: 10.18178/jiii.4.2.158-162.
[3] F. Gao et al.: “Moving Vehicle Information Extraction from Single-Pass WorldView-2 Imagery Based on ERGAS-SNS Analysis,” Remote Sens., vol. 6, no. 7, pp. 6500–6523, Jul. 2014, doi: 10.3390/rs6076500.
Refiz Duro, AIT Austrian Institute of Technology, Austria