
by Tony Belpaeme

Currently, most cognitive and social robots only operate in the here and now, but the ALIZ-E project aims to change this, moving human-robot interaction from the range of minutes to the range of days. The project develops the theory and practice behind embodied cognitive robots capable of maintaining social interactions with a young user over an extended period of time.

Social robots are robots that interact and engage with people directly, using a range of communication channels such as gestures, sounds, and speech. Social robotics is a young but fast-growing research field; its results have mainly been generated in academic institutions and are only recently finding their way into commercial products. One of the pillars of social robotics is the fact that humans have evolved as gregarious, group-living animals and, as such, are very adept at social interaction. We generate, and are sensitive to, a wide range of conscious and unconscious signals that we use to operate as social beings. Of all these signals, language is the most complex communication channel and a uniquely human capacity. The study of social robots is important because it enables us to build robots (and technology in general) that can interact with us on a more intersubjective level in environments where naturalistic social interaction between people and technology is desired.

The ALIZ-E project specifically explores robot-child interaction, capitalising on children’s open and imaginative responses to artificial ‘creatures’. Promising future applications include educational companion robots for child users. The project will break new ground by taking robots out of the lab and putting them to the test in a health-education role, working with young diabetic patients in a busy paediatric department at the Ospedale San Raffaele in Milan.

Figure: Aldebaran Nao robot.

Technical and scientific challenges are rife. Available Automated Speech Recognition (ASR) typically handles only adult speech, so recognising children’s speech requires novel ASR approaches. Once speech has been recognised, it is passed to the Natural Language Processing (NLP) component. Currently, NLP is robust only in closed and well-controlled dialogue contexts. The project studies how NLP can be ported to a robot in a semi-open, real-world setting, and how the human-robot interaction experience can be tailored so that failures in NLP, or in the components feeding into it, go unnoticed by the young user. The robot’s perception needs to handle the audio and video streams captured by onboard cameras and microphones and to return a high-level interpretation of gestures, expressions and various other social markers. Most existing algorithms work on carefully collected datasets; the challenge here is to adapt them to audiovisual streams captured by a small robot, which has a very different viewpoint from typical training databases and operates in a real-world environment.
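To make the pipeline concrete, the following minimal Python sketch shows one way the stages could fit together; it is an illustration only, not ALIZ-E code, and all function names, confidence scores and dialogue acts are hypothetical placeholders. It also illustrates the graceful-degradation idea: when NLP confidence is low, the robot falls back to a closed sub-dialogue rather than exposing the failure to the child.

# Hypothetical sketch of the speech-to-dialogue pipeline (not ALIZ-E code).

def recognise_speech(audio_frames):
    """Stand-in for a child-tuned ASR component: returns an n-best
    list of transcription hypotheses with confidence scores."""
    return [("i want to play a game", 0.62), ("i want to play again", 0.31)]

def parse_utterance(hypotheses, confidence_floor=0.5):
    """Stand-in for NLP: map the best hypothesis to a dialogue act,
    or signal failure so the dialogue manager can recover."""
    text, score = hypotheses[0]
    if score < confidence_floor:
        return None  # too uncertain: let the robot steer the dialogue
    return {"act": "request", "topic": "game", "text": text}

def next_robot_move(dialogue_act):
    """Degrade gracefully: on NLP failure, fall back to a closed
    choice question instead of exposing the failure to the user."""
    if dialogue_act is None:
        return "say: Shall we play the quiz game or the dance game?"
    return "say: Great, let's play a " + dialogue_act["topic"] + "!"

hypotheses = recognise_speech(audio_frames=[])
print(next_robot_move(parse_utterance(hypotheses)))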

The robots will learn online through unstructured interactions in dynamic environments, and a number of different machine learning approaches will be integrated to make this possible. This requires robots to have the capacity to store and recall experiences, to learn from them, and to adapt their social behaviour on the basis of previous experiences. A distributed “switchboard” model, in which memory provides the substrate through which the other cognitive modalities interact, will be used to produce socially coherent, long-term patterns of behaviour.
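As a minimal sketch of the switchboard idea, the Python below shows a shared episodic memory through which other modules could interact, so that behaviour in a later session can be conditioned on earlier ones; the class, the event format and the example data are assumptions for illustration, not the project’s architecture.

# Illustrative memory-centred "switchboard" (assumed design, not ALIZ-E code).
import time
from collections import defaultdict

class EpisodicMemory:
    """Shared store through which the other cognitive modules interact."""
    def __init__(self):
        self.episodes = defaultdict(list)   # user -> list of timestamped events

    def store(self, user, event):
        self.episodes[user].append(dict(event, t=time.time()))

    def recall(self, user, kind):
        return [e for e in self.episodes[user] if e["kind"] == kind]

memory = EpisodicMemory()

# A game module records an experience during one session ...
memory.store("anna", {"kind": "game", "name": "quiz", "enjoyed": True})

# ... and in a later session the dialogue module adapts to it.
liked = [e["name"] for e in memory.recall("anna", "game") if e["enjoyed"]]
greeting = "Shall we play %s again?" % liked[0] if liked else "What shall we play?"
print(greeting)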

To orchestrate the robot’s behaviour we rely on URBI by Gostai, which serves both as middleware and as a common language for the different partners in the project. As much of this processing is computationally expensive, it will largely be off-loaded from the robot. For this we rely on Gostai’s cloud computing solution, GostaiNet, in which URBI transparently calls code on remote servers, effectively using the robot as an input/output device with computation and storage being remote. The computer on the robot runs only reactive and time-critical code; other expensive processes, for example ASR, NLP or vision, are passed to the cloud.
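The general pattern can be sketched as follows (hypothetical Python, not URBI/GostaiNet code): the on-robot loop dispatches expensive work asynchronously and polls it with a short deadline, so reactive behaviour never blocks on the network.

# Sketch of the off-loading pattern (assumed illustration, not project code).
import time
from concurrent.futures import ThreadPoolExecutor, TimeoutError

def remote_asr(audio_chunk):
    """Placeholder for a call to a cloud speech recogniser."""
    time.sleep(0.02)                    # pretend network round trip
    return "hello robot"

executor = ThreadPoolExecutor(max_workers=2)

def reactive_step(audio_chunk, pending):
    """One tick of the on-robot loop: dispatch work, never block for long."""
    if pending is None:
        pending = executor.submit(remote_asr, audio_chunk)   # off-load
    try:
        print("cloud result:", pending.result(timeout=0.01))
        return None
    except TimeoutError:
        # Meanwhile, keep running reactive, time-critical behaviour.
        print("tracking face, keeping eye contact ...")
        return pending

pending = None
for _ in range(3):
    pending = reactive_step(b"audio", pending)
executor.shutdown()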

ALIZ-E will use Aldebaran Nao robots as the implementation platform; the Nao is a small, autonomous humanoid robot to which children respond very well. The project, coordinated by the University of Plymouth, brings together seven academic partners: besides Plymouth, these are the Vrije Universiteit Brussel (Belgium), the Deutsches Forschungszentrum für Künstliche Intelligenz (Germany), Imperial College (UK), the University of Hertfordshire (UK), the National Research Council - Padova (Italy) and the Netherlands Organization for Applied Scientific Research (The Netherlands), plus commercial partners Gostai (France) and Fondazione Centro San Raffaele del Monte Tabor (Italy). Funded under the European Commission’s 7th Framework Programme, the ALIZ-E project began in April 2010 and will run for four and a half years.

Link:
http://www.aliz-e.org

Please contact:
Tony Belpaeme
University of Plymouth
