by Giuseppe Riccardo Leone, Andrea Carboni, Davide Moroni, and Sara Colantonio (CNR-ISTI)

In the framework of the FAITH project, an AI-powered large-scale pilot is being organised to showcase the opportunities offered by advancements in edge computing and computer vision. The goal is to provide innovative ways to ensure safer and more sustainable public transport, encouraging modal shifts. Challenges in deploying responsible and trustworthy AI-based services will be addressed by assessing regulatory aspects and fostering co-design with end users and all the other stakeholders involved.

A connected, adaptive, sustainable and secure transport infrastructure constitutes a strategic asset in the development agenda of European countries [L1]. Adapting transport services to actual loads and demands, planning and controlling fleets, optimising mobility and CO₂ emissions, and ensuring the security and safety of citizens on board and in stations are some of the key pursuits in the field. By exploiting increasingly available sensor infrastructures and the massive amount of data they produce, AI-powered visual analytics can serve several purposes in making transport safer, more reliable, more efficient and more sustainable, thus positively impacting citizens’ lives.

The integration of AI in multimodal transit systems is expected to spread, with a global market size valued at USD 2.3 billion in 2021 and projected to reach around USD 14.79 billion by 2030 [L2]. However, several technological, regulatory and societal challenges need to be addressed to fully realise these expectations. Indeed, the real-world uptake of such AI-powered tools requires stakeholders to trust them. Operators should be reassured that the system is technically robust, accurate and reliable, so that they can depend on it safely.

Solutions should be resilient against malicious attacks or failures that could lead to disruption or, even worse, compromise public safety. Moreover, privacy-by-design should be ensured, for instance, by implementing visual anonymisation in embedded edge computing and avoiding any form of biometrics, to guarantee unobtrusive monitoring of carriages and transport-related infrastructures.

The purpose of the large-scale pilot on public transit we are addressing in the context of the FAITH project [L3] is to develop and test experimental solutions to all these challenges, based on trustworthy AI visual analytics tailored to the transport domain.

Pilot Description
The pilot will deliver a scalable, privacy-preserving platform based on pervasive AI and video analytics that can help improve efficiency, safety and security onboard and in stations, using privacy-preserving smart cameras that capture and analyse visual data in real time [1]. The video streams will be acquired onboard the coaches or in stations and other travel-related premises: mainly, carriages supplied by TRENITALIA will be equipped with dedicated systems that analyse closed-circuit video surveillance streams during daily traffic operations. The data will be anonymised and processed onboard to identify several different types of objects: garbage, abandoned objects and their type, equipment, furniture, and missing or damaged items. Moreover, it will be possible to identify safety issues and to count available seats and the total number of passengers [2]. The aggregated information will be transmitted to an operation centre and made available to managers in an “augmented intelligence” mode.
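As a minimal illustration of the kind of “aggregated information” that could leave an edge node instead of raw video, the sketch below reduces hypothetical per-frame detections to anonymised counts. The `Detection` class, the class labels and the confidence threshold are illustrative assumptions for this article, not the pilot’s actual data schema.

```python
from collections import Counter
from dataclasses import dataclass


@dataclass
class Detection:
    """One object reported by an upstream (e.g. YOLO-like) detector."""
    label: str        # e.g. "person", "seat_free", "luggage"
    confidence: float # detector confidence in [0, 1]


def aggregate_frame(detections, min_conf=0.5):
    """Reduce raw per-frame detections to the summary that leaves the
    edge node: only class counts, never images or identities."""
    counts = Counter(d.label for d in detections if d.confidence >= min_conf)
    return {
        "passengers": counts.get("person", 0),
        "free_seats": counts.get("seat_free", 0),
        "unattended_luggage": counts.get("luggage", 0),
    }


# One frame: two confident person detections, one below threshold, one free seat.
frame = [Detection("person", 0.9), Detection("person", 0.8),
         Detection("person", 0.3), Detection("seat_free", 0.7)]
print(aggregate_frame(frame))
# → {'passengers': 2, 'free_seats': 1, 'unattended_luggage': 0}
```

Only such compact records would be transmitted to the operation centre, which is what makes the monitoring unobtrusive by construction.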

To ensure its success, it is essential to engage relevant stakeholders in the pilot, including: a) public transport operator managers, who will provide access to their facilities and vehicles and drive the technical and non-technical requirements and needs; b) public transport planners, who as end users will be enabled to produce more accurate, knowledge-based optimisation of lines, frequencies and services; and c) citizens, who as end users (travellers and/or commuters) will contribute to defining the requirements and will benefit from a superior level of safety and security as well as improved planning of public transport.

System Architecture and AI Solutions
The core of the system is a set of integrated video analytics services built on deep learning models for: a) image analysis and object detection; b) scene analysis and activity recognition from video chunks; c) privacy-preserving visual anonymisation/obfuscation tools. Such services will be deployed on onboard edge nodes without video transfer to remote locations, thus reducing security and privacy risks (see Figure 1). Only aggregated and/or obfuscated data will be transmitted outside the edge nodes and conveyed to higher layers for data integration and cross-correlation. Data will be collected at scale and pre-screened with the algorithms already available. This will allow a refinement of the existing algorithms and, if relevant, the addition of new ones for behaviour characterisation (e.g. loitering detection), for responding to other safety concerns (e.g. fall detection) or for security (e.g. unattended luggage). Finally, a model for performing multi-camera cross-correlation of the data extracted from single video streams and characterising mobility patterns will be deployed. The learning paradigm used by the AI-enabled system includes supervised approaches for object detection (YOLO-like models), person and face recognition, person characterisation, fall detection and/or identification of unattended luggage. Unsupervised/supervised approaches will be deployed for the real-time analysis of streams of events (textual data).
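The obfuscation step mentioned above could, for instance, pixelate a detected sensitive region before the frame is handled further, so that no fine detail (such as a face) survives. The NumPy sketch below is a minimal illustration of that idea; the function name, block size and bounding-box convention are assumptions for this article, not the pilot’s actual anonymisation tool.

```python
import numpy as np


def pixelate_region(frame, box, block=8):
    """Obfuscate a region of interest in place by replacing each
    block x block tile with its mean value, destroying fine detail
    while preserving coarse scene structure.

    frame: 2-D (grayscale) or 3-D (colour) NumPy array
    box:   (x0, y0, x1, y1) region to obfuscate, pixel coordinates
    """
    x0, y0, x1, y1 = box
    roi = frame[y0:y1, x0:x1]          # a view: edits happen in place
    h, w = roi.shape[:2]
    for ty in range(0, h, block):
        for tx in range(0, w, block):
            tile = roi[ty:ty + block, tx:tx + block]
            # Broadcast the tile mean back over the whole tile.
            tile[...] = tile.mean(axis=(0, 1), keepdims=True)
    return frame
```

Running this on every detected person region before any further processing is one straightforward way to realise the privacy-by-design principle shown in Figure 1.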

Figure 1: Privacy-by-design: the video streams are processed at the edge, and no image is permanently stored in operations involving sensitive personal data such as seat counting or passenger analytics.

The pilot will be implemented on a large scale, encompassing various factors such as geographical coverage, transportation facilities and computational aspects. In particular, we plan to demonstrate the scalability and portability of AI-based distributed video analytics by augmenting physical nodes with a fraction of virtualised nodes, leveraging the AI@Edge infrastructure available at CNR-ISTI. First results are expected by October 2025.


[1] G. R. Leone et al., “Toward Pervasive Computer Vision for Intelligent Transport System,” IEEE Int. Conf. on Pervasive Computing and Communications Workshops (PerCom Workshops), 2022.
[2] A. Carboni et al., “A Novel Smart Camera Network for Real Time Public Transport Monitoring and Surveillance,” IEEE Int. Conf. on Intelligent Transportation Systems (ITSC), 2023.

Please contact:
Giuseppe Riccardo Leone, CNR-ISTI, Pisa, Italy
