by Péter Pallinger and László Kovács
The ILI (Intelligent Visitor Guidance) system can transform hospitals, museums, parking lots, office buildings and even shopping malls into semantically driven, adaptive smart spaces. It integrates various pieces of sensory information and provides customised, context-sensitive information and services for visitors, patients, doctors and customers.
The components needed for smart spaces will become increasingly widespread as intelligent, self-regulating devices become readily available. However, these devices usually act individually, which limits their capabilities. The ILI system is a robust framework for the easy implementation of intelligent environments, helping to integrate various devices. The project focused on visitor guidance and customer service tasks, but it can be adapted to other scenarios where ambient intelligence is required. It was designed to support environments such as museums, hospitals, shopping centres, parking lots and office buildings. The system can assist and adaptively interact with its users depending on the available sensor and actuator devices (eg location services, sensors, RFID readers, screens, PDAs, smart-phones). At its core, ILI uses a real-time agent platform for decision making, augmented by a semantic data store, an ontology and a reasoner, making the project unique in its scope. We also created a pilot project to demonstrate the capabilities of the system: an augmented office at one project member's headquarters.
The architecture of the system follows a service-oriented design, with multiple loosely coupled modules that communicate through SOAP and REST, and is designed to be scalable and fault-tolerant. It consists of three layers: a hardware-near layer that controls sensors, actuators and interactive devices, together with the software needed to connect them to the system; a middleware layer that polls and aggregates sensory data, and caches and distributes output data; and a processing layer comprising an event handler, an agent system acting as real-time processor and decision maker, a semantic database and inference engine, a content management system, and the various databases used by these components.
The hardware-near layer is augmented by software wrappers that make the hardware accessible to the upper layers; interactive devices may run “heavier” client software. Possible sensors for a smart environment include RFID readers, location providers (ie active ultra-wideband or ultrasound tags), cameras, smoke sensors, motion detectors and keypads (used, for example, for authentication). Actuators may include various types of displays, speakers, phones, switchboards, door locks, gates and alarms. Interactive devices may include kiosks, PDAs, smart-phones and touch screens. As any network-enabled device can be connected to the system, it essentially handles an Internet of Things.
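The wrapper idea can be sketched as a minimal interface that normalises raw device readings into a device-independent form before they reach the middleware. All names below are illustrative assumptions, not the actual ILI API:

```java
import java.util.HashMap;
import java.util.Map;

// Each physical device is exposed to the upper layers through a uniform
// interface, regardless of its native protocol.
interface SensorWrapper {
    String deviceId();
    Map<String, Object> poll();   // latest reading in a device-independent form
}

// Hypothetical wrapper around an RFID reader: the raw tag ID reported by
// the driver is normalised into a generic key/value reading.
class RfidReaderWrapper implements SensorWrapper {
    private final String id;
    private String lastTag = null;

    RfidReaderWrapper(String id) { this.id = id; }

    void onRawTag(String tag) { lastTag = tag; }   // called by the device driver

    public String deviceId() { return id; }

    public Map<String, Object> poll() {
        Map<String, Object> reading = new HashMap<>();
        reading.put("type", "rfid");
        reading.put("tag", lastTag);
        return reading;
    }
}

public class WrapperDemo {
    public static void main(String[] args) {
        RfidReaderWrapper reader = new RfidReaderWrapper("door-1");
        reader.onRawTag("TAG42");
        System.out.println(reader.deviceId() + " -> " + reader.poll());
    }
}
```

Because every wrapper presents the same interface, the middleware can poll RFID readers, cameras and motion detectors uniformly.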
Figure 1: The office as detected by the positioning subsystem, with the indoor routing interface running at the same time.
The middleware layer polls the hardware sensors, optionally aggregates their data, and sends the relevant events to the event handler. It performs complex filtering and low-level aggregation of the input data, for example noise filtering or collision detection. For the actuators, it receives device-independent commands and transforms them into device-specific formats, ie it performs modality switching.
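Modality switching can be illustrated with a minimal sketch: the same device-independent message is rendered differently depending on the target device class. The class names and output formats here are invented for the example; the real ILI formats are not shown:

```java
// A device-independent command is turned into a device-specific form
// by the actuator it is routed to (modality switching).
interface Actuator {
    String render(String message);
}

// Visual modality: the message is wrapped into markup for a display.
class ScreenActuator implements Actuator {
    public String render(String message) {
        return "<html><body>" + message + "</body></html>";
    }
}

// Audio modality: the message is handed to a text-to-speech engine.
class SpeakerActuator implements Actuator {
    public String render(String message) {
        return "TTS:" + message;
    }
}

public class ModalitySwitchDemo {
    public static void main(String[] args) {
        String command = "Meeting room B is now free";
        System.out.println(new ScreenActuator().render(command));
        System.out.println(new SpeakerActuator().render(command));
    }
}
```

The decision layer above only ever emits the generic command; which modality is used depends on which actuators are near the user.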
The main control and decision-making is done by intelligent software agents running in the real-time Cougaar agent platform. Cougaar is transparently scalable and, as an agent platform, inherently modular. Each visitor or customer, and each device handled by the system, is represented by a Cougaar agent; these agents determine the eventual system behaviour through message passing. A semantic database and inference engine (currently Jena) is also integrated into the agent platform, giving all agents semantic capabilities. Because semantic inference times are inherently unpredictable, the semantic layer can be used under time constraints: either with fallback rules, or in an any-time fashion, where simple rules provide a preliminary agent behaviour that is refined once the semantic inference completes.
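The time-bounded any-time pattern can be sketched with standard Java concurrency primitives: the agent answers immediately from a simple rule, and upgrades the answer only if the (potentially slow) inference finishes within its deadline. The inference task and the answer strings are stand-ins, not real Cougaar or Jena code:

```java
import java.util.concurrent.*;

public class AnytimeDecisionDemo {
    // Cheap fallback rule: always available, never blocks.
    static String simpleRule() { return "show-default-route"; }

    // Run the inference with a deadline; keep the fallback answer if the
    // inference is too slow or fails.
    static String decide(Callable<String> inference, long deadlineMs) {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Future<String> f = pool.submit(inference);
        String answer = simpleRule();              // preliminary behaviour
        try {
            answer = f.get(deadlineMs, TimeUnit.MILLISECONDS);  // refined answer
        } catch (InterruptedException | ExecutionException | TimeoutException e) {
            f.cancel(true);                        // stick with the fallback
        } finally {
            pool.shutdownNow();
        }
        return answer;
    }

    public static void main(String[] args) {
        // Fast inference: the refined answer wins.
        System.out.println(decide(() -> "show-personalised-route", 200));
        // Slow inference: the deadline expires and the simple rule is used.
        System.out.println(decide(() -> {
            Thread.sleep(5000);
            return "never-used";
        }, 50));
    }
}
```

In the full any-time variant, the agent could additionally revise its behaviour when a late inference result eventually arrives, rather than discarding it.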
Figure 2: Coverage of UWB signals as reported by the location engine.
The pilot implementation demonstrated intelligent behaviour and various services in an office environment. A Ubisense UWB (ultra-wideband) indoor positioning system provided location information about workers and visitors, enabling location-aware services such as presenting information about people and office devices in a context-sensitive manner and automatically redirecting office phone calls. The pilot also provided indoor navigation through a web-based interface designed for smart-phones, and, in case of emergency, displayed useful information on appropriate display devices, such as the positions of people still in the building and possible escape routes.
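One such location-aware service can be sketched as a nearest-device lookup: given a person's last known UWB position, pick the closest display, for example to show an escape route. The coordinates and device names below are made up for the example:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class NearestDisplayDemo {
    // Return the name of the display closest to position (px, py).
    // Display positions are given as {x, y} coordinate pairs.
    static String nearestDisplay(double px, double py, Map<String, double[]> displays) {
        String best = null;
        double bestDist = Double.MAX_VALUE;
        for (Map.Entry<String, double[]> e : displays.entrySet()) {
            double dx = e.getValue()[0] - px, dy = e.getValue()[1] - py;
            double d = dx * dx + dy * dy;  // squared distance suffices for comparison
            if (d < bestDist) { bestDist = d; best = e.getKey(); }
        }
        return best;
    }

    public static void main(String[] args) {
        Map<String, double[]> displays = new LinkedHashMap<>();
        displays.put("lobby-screen", new double[]{0.0, 0.0});
        displays.put("hall-screen",  new double[]{10.0, 2.0});
        // Person at (9, 1) is closer to the hall screen.
        System.out.println(nearestDisplay(9.0, 1.0, displays)); // hall-screen
    }
}
```

The same lookup, keyed on a person's phone extension instead of a screen, would serve the call-redirection scenario mentioned above.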
The ILI project was concluded in 2010 as a collaboration between the development companies p92 IT Solutions Ltd. and NETvisor Inc., and SZTAKI. The project was partially funded by the Hungarian National Development Agency (NFÜ) under KMOP 1.1.4. Its industrial deployment opportunities are currently under negotiation.
László Kovács, SZTAKI, Hungary
Tel: +36 1 279 6212