by Vanessa Evers (University of Twente)
Since 2011, the Human Media Interaction Group at the University of Twente has been working on robots with social intelligence. This has led to robots that can recognise human behaviour, interpret it and respond in a socially appropriate way. We have developed robots that serve as guides at zoos or airports, and that help children with autism understand emotional expressions in faces.
Work started with the European FP7 project FROG [L1], the Fun Robotic Outdoor Guide. The FROG robot was an instantiation of a robot service in outdoor public places. We envisioned robotic information and other services in outdoor public places such as city squares, car parks at shopping malls and airports, and leisure areas such as parks and zoos. The FROG robot was developed specifically to offer augmented reality information in places such as zoos or cultural heritage sites such as the Royal Alcázar in Seville, Spain.
FROG had to approach small groups of visitors, ask whether they were interested in information or a short tour of the premises, and then take the group along, offering information along the way. To do this effectively, FROG tracked the visitors and their facial expressions and estimated their interest. To show augmented reality information, it had to navigate autonomously and position itself very precisely, so that the augmented reality content would overlay the camera image of the scene behind the robot. When FROG detected people losing interest, it would change the type of information or the locations covered by the route.
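As a rough illustration of the adaptation loop described above, the sketch below scores group interest from tracked cues and switches content when interest drops. All names, features, weights and thresholds are illustrative assumptions for this article, not FROG's actual implementation.

```python
# Hypothetical sketch of interest-driven tour adaptation.
# Feature names, weights and thresholds are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class VisitorState:
    gaze_on_exhibit: float  # fraction of time gaze was on the exhibit (0..1)
    smile_intensity: float  # averaged facial-expression score (0..1)

def group_interest(states: list[VisitorState]) -> float:
    """Average a simple per-visitor engagement score over the group."""
    scores = [0.6 * s.gaze_on_exhibit + 0.4 * s.smile_intensity for s in states]
    return sum(scores) / len(scores)

def adapt_tour(states: list[VisitorState], current_topic: str) -> str:
    """Switch content when estimated group interest drops below a threshold."""
    if group_interest(states) < 0.4:
        # e.g. move from factual commentary to interactive AR content
        return "augmented_reality_game"
    return current_topic
```

In the real system such cues would come from continuous vision pipelines rather than two hand-picked features, but the control structure — estimate engagement, then adapt content or route — is the same.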
Vanessa Evers at the University of Twente in the Netherlands, with a robot.
Picture: Kees Bennema (http://www.bennema.nl/).
In many iterations and real-world observation studies, we found the robot was able to capture people's interest accurately and offer them an interactive experience that added to their visit. FROG was particularly effective for families with small children, who were not the target audience of tours given by professional guides. Of course, the novelty of a large robot autonomously navigating a crowded public place caused disruption. While FROG was able to navigate challenging environments, such as a venue filled with banquet chairs and guests, people wanted to take selfies with the robot and would "test" it by blocking its path. While the robot would adjust the route, the robot's tour group got frustrated at times because people outside their tour would hinder the robot. When people make use of a robot service, they see the robot as "theirs" during that time.
A similar trend was observed in the SPENCER EU project [L2]. SPENCER was developed to guide airport passengers around the airside gate areas of Schiphol. The robot had to approach a group of people, engage them and take them to their newly assigned gate or another important location. While the robot had the technical ability to accurately track the people in the group it was guiding, know when to wait for a person, and navigate the airport in a socially normative way (going around queues and families rather than cutting through them), it was constantly stopped by other passengers for selfies; people would also try to distract the robot or prevent it from reaching its goal, to the frustration of the guided group. One participant in a test run reported that he was happy when the robot seemed to ignore a person and kept going, acting as if it was "their" robot.
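The idea of socially normative navigation — going around queues and groups rather than cutting through them — can be sketched as a planning problem in which cells near detected people carry an extra cost. The grid, penalties and group model below are assumptions for demonstration only, not SPENCER's actual planner.

```python
# Illustrative socially aware grid planner: Dijkstra search where cells
# adjacent to detected groups cost extra, so the path goes around people.
# Grid layout, penalty value and group model are illustrative assumptions.

import heapq

def plan(grid, start, goal, group_cells, social_penalty=10.0):
    """grid: 2D list, 0 = free. Returns a start-to-goal path of (row, col)
    cells that prefers detours over passing close to detected groups."""
    rows, cols = len(grid), len(grid[0])

    def cost(cell):
        r, c = cell
        near_group = any(abs(r - gr) <= 1 and abs(c - gc) <= 1
                         for gr, gc in group_cells)
        return 1.0 + (social_penalty if near_group else 0.0)

    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, cur = heapq.heappop(pq)
        if cur == goal:
            break
        if d > dist.get(cur, float("inf")):
            continue
        r, c = cur
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nd = d + cost(nxt)
                if nd < dist.get(nxt, float("inf")):
                    dist[nxt], prev[nxt] = nd, cur
                    heapq.heappush(pq, (nd, nxt))

    path, cur = [], goal
    while cur != start:
        path.append(cur)
        cur = prev[cur]
    path.append(start)
    return path[::-1]
```

On an open grid with a group standing directly between start and goal, the planner routes around the group even though the straight line is shorter, which is the behaviour the guided passengers observed.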
We are currently developing robots that have to analyse, understand and interact with children in a social context. The SQUIRREL EU H2020 project [L3] concerns a robot playing with small groups of children and engaging them in a game that leads to sorting and tidying the environment. The robot analyses clutter in the environment, plans a way to clean it up and invents a multi-player game to achieve this. As the children engage in the game, SQUIRREL analyses their collaborative play and adjusts the game to optimise pro-social activities and teamwork between the children.
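The step from detected clutter to a multi-player game can be illustrated very simply: group objects by category and deal out balanced pick-up tasks to the children. The categories and the round-robin assignment below are illustrative assumptions, not SQUIRREL's actual game planner.

```python
# Hypothetical sketch: turn detected clutter into a multi-player sorting game.
# Object categories and the round-robin task assignment are illustrative.

from collections import defaultdict

def make_game(objects, players):
    """objects: [(name, category)]. Returns per-player pick-up lists so each
    child sorts a balanced share of items into category bins."""
    by_category = defaultdict(list)
    for name, cat in objects:
        by_category[cat].append(name)

    tasks = {p: [] for p in players}
    for i, (cat, items) in enumerate(sorted(by_category.items())):
        for j, item in enumerate(items):
            player = players[(i + j) % len(players)]
            tasks[player].append((item, cat))
    return tasks
```

The real system additionally monitors how the children collaborate during play and re-plans the game to encourage teamwork; this sketch covers only the initial task division.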
In the DE-ENIGMA project [L4], a robot assists a therapist in teaching emotion recognition skills to young children with autism. The target group consists of children aged between four and eight who are far along the autism spectrum, meaning they are not high functioning. This is a challenging group for therapists, and very individual in nature, so a one-size-fits-all solution is out of the question. The DE-ENIGMA robot functions as an intermediary between the therapist and the child. It can display intricate facial expressions and, unlike a person, it can very systematically show isolated dynamic facial movements such as an eyebrow raise. This facilitates the child's learning process. The DE-ENIGMA robot also analyses the facial expressions of the children: it minutely tracks facial features and, through machine learning, recognises the emotions they express. The robot can therefore provide the therapist with very detailed feedback on how the children's use of their own facial expressions develops over time and where exercise may be needed.
The DE-ENIGMA project is a strong example of a robot enhancing current work practices. Therapists are able to administer interventions in ways that they could not before: the robotic intervention lets them reach a target group that was difficult to reach, and lets them tailor their therapy to the individual. The novelty effect observed in the other robot applications seems to have limited impact here. The children are intrigued by the robot but see it as a game, a toy or a tool and relate to it accordingly; the novelty does not cause a breakdown in the flow, as it does for robots in more public places.
As robot services become more common, we expect the unique value contributed by a robot intervention to be optimised. Only when robots are able to understand the social aspects of an environment and respond to people in a socially appropriate manner can we hope to integrate robot services seamlessly into our everyday lives.
University of Twente, The Netherlands