by Frank Verberne, Jaap Ham and Cees Midden

In the future, cars will be better at driving themselves than humans are. These cars will be useless, however, if humans do not trust them. Researchers from Eindhoven University of Technology, the Netherlands, are studying how to increase trust in automation technology in cars. In our human-technology interaction research, we investigate whether technology can build trust using mechanisms similar to those humans use to build trust in each other, such as emphasizing similarity. Will technology that thinks, acts, and looks similar to its user be trusted more than its non-similar version?

In the near future, fully autonomous cars will be widely available and will be able to drive more safely, at greater speeds, and more sustainably than any human can. However, if most drivers refuse to relinquish control, perfectly capable cars will be gathering dust, due solely to the mismatch between technological possibilities and technological acceptance.

Research has shown that people need sufficient trust in technology before they are willing to use it [1]. At Eindhoven University of Technology we study how to increase trust in these technologies. While some scholars focus on building trust through experience with new technologies, we focus on mechanisms that humans also use with each other, such as emphasizing similarity to another person. We therefore create similarities between technologies and their users to increase trust even before the technologies are actually used. More specifically, we use three types of similarity cues: cognitive, behavioural, and appearance similarity.

In a first set of studies, the effect of cognitive similarity on trust was investigated. Participants were first asked to rank four driving goals (energy efficiency, speed, comfort, and safety) from one to four, one being the most important goal for them and four the least important. There was no ranking that was optimal for everyone; every participant chose their own. Participants were then presented with a description of an Adaptive Cruise Control system that either shared their driving goal ranking or did not; in the latter case, the system's ranking was the reverse of the participant's. Finally, participants indicated their trust in and acceptance of the system. Results showed that participants trusted the system more when it shared their ranking than when it did not [2]. Thus, sharing goals (cognitive similarity) leads to greater trust in an Adaptive Cruise Control system.
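To make the manipulation concrete: the goal-sharing system simply adopts the participant's own ranking, while the non-sharing system reverses it. The sketch below illustrates this in Python; it is a hypothetical illustration, not the software used in the study.

```python
# Hypothetical illustration of the goal-ranking manipulation
# (not the original study software).

GOALS = ["energy efficiency", "speed", "comfort", "safety"]

def similar_ranking(participant_ranking):
    """The goal-sharing system adopts the participant's own ranking."""
    return list(participant_ranking)

def dissimilar_ranking(participant_ranking):
    """The non-sharing system uses the reverse ranking, so the
    participant's most important goal becomes its least important."""
    return list(reversed(participant_ranking))

# Example: a participant who ranks safety first and speed last.
ranking = ["safety", "comfort", "energy efficiency", "speed"]
print(similar_ranking(ranking))     # ['safety', 'comfort', 'energy efficiency', 'speed']
print(dissimilar_ranking(ranking))  # ['speed', 'energy efficiency', 'comfort', 'safety']
```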

Figure 1: PhD student Frank Verberne testing a driving simulator for future experiments (photo: Bart van Overbeeke).

In a second set of studies, the effect of behavioural similarity on trust was investigated. In human-human interactions, humans (unconsciously) mimic each other's body posture. This mimicry enhances liking and strengthens bonds between people, even between strangers [3]. Here, we examined whether a virtual agent could use mimicry to increase trust in the agent. Such an agent could appear on a display in future self-driving cars, providing a digital face for the automation technology driving the car. In our lab, participants played a risky game with a virtual agent that either mimicked them or did not. Participants' head movements were measured with an orientation tracker; in the mimicry condition, the virtual agent copied the participant's own head movements with a delay of several seconds, whereas in the non-mimicry condition it used the head movements of the previous participant. In the risky game, participants were presented with ten different routes and had to decide either to plan each route themselves (the safe option) or to let the agent plan it for them (the risky option). Results showed that participants entrusted more routes to a mimicking agent than to a non-mimicking one. Thus, mimicry (behavioural similarity) leads to greater trust in virtual agents.
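The mimicry mechanism itself is simple: buffer the tracked head orientation and replay it after a fixed delay. The following sketch shows the idea in Python; the class name, delay, and sampling rate are illustrative assumptions, not the actual experiment software or tracker API.

```python
from collections import deque

class DelayedMimicry:
    """Replay tracked head orientations after a fixed delay, so the
    virtual agent mirrors the participant a few seconds later.
    Illustrative sketch; not the original experiment software."""

    def __init__(self, delay_s=4.0, sample_rate_hz=30):
        # Buffer holding roughly delay_s seconds of tracker samples.
        self.buffer = deque(maxlen=int(delay_s * sample_rate_hz))

    def update(self, head_orientation):
        """Feed one tracker sample (e.g. yaw/pitch/roll angles); returns
        the pose the agent should adopt now, or None while the buffer
        is still filling at the start of a session."""
        self.buffer.append(head_orientation)
        if len(self.buffer) == self.buffer.maxlen:
            return self.buffer[0]  # participant's pose ~delay_s seconds ago
        return None
```

For the non-mimicry condition, the same replay logic can be fed a recorded stream from the previous participant instead of the live tracker.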

In the current set of studies, the effect of appearance similarity on trust is being investigated. Previous research has shown that people place greater trust in an individual, judged from a photo, when that photo has been morphed with their own photo [4]. That is, individuals whose faces look similar to a person's own face are trusted more. In a first study, we investigate whether a virtual agent whose face resembles the participant's is trusted more than an agent with a dissimilar face. First, we took pictures of all our participants and used those pictures in FaceGen (a face-modelling program) to create a 3D virtual head for each participant. Next, one standard head was morphed with each participant's virtual head in a 50-50 blend. Participants then played risky games with a virtual agent that had a custom head. For one half of the participants, the custom head was their own 50-50 blend (containing 50% of their own virtual head; self-similar); for the other half, it was someone else's 50-50 blend (containing 50% of another participant's virtual head; self-non-similar). We expect participants to trust self-similar agents more than self-non-similar agents. The study is currently in progress. In future studies, we plan to extend the effects of these similarity cues to risky scenarios in a driving simulator.
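Conceptually, a 50-50 morph is a weighted average of corresponding face-model parameters (or mesh vertices) of the two heads. The sketch below illustrates the blend; it is a toy example and does not reflect FaceGen's actual representation or API.

```python
import numpy as np

def blend_heads(standard, participant, weight=0.5):
    """Linearly blend two heads represented as vectors of corresponding
    shape parameters (or vertex coordinates). weight=0.5 gives the
    50-50 blend used in the study. Toy example; FaceGen's internal
    representation differs."""
    standard = np.asarray(standard, dtype=float)
    participant = np.asarray(participant, dtype=float)
    return (1.0 - weight) * standard + weight * participant

# Example with three toy shape parameters per head.
standard_head = np.array([0.0, 1.0, -0.5])
participant_head = np.array([1.0, 0.0, 0.5])
print(blend_heads(standard_head, participant_head))  # -> [0.5, 0.5, 0.0]
```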

Working together with DAF and Delft University of Technology, we will be able to test our ideas in more realistic settings. The results of this line of research so far suggest that, in the future, self-driving cars could use similarity to their users to gain their trust and persuade them to hand over the steering wheel.

Link:
http://hti.ieis.tue.nl/node/3344

References:
[1] J.D. Lee, K.A. See: “Trust in automation: Designing for appropriate reliance”, Human Factors, 46, 2004.
[2] F.M.F. Verberne, J. Ham, C.J.H. Midden: “Trust in smart systems: Sharing goals and giving information to increase trustworthiness and acceptability of smart systems in cars”, Human Factors, 54, 2012.
[3] T.L. Chartrand, J.A. Bargh: “The chameleon effect: The perception-behavior link and social interaction”, Journal of Personality and Social Psychology, 76, 1999.
[4] L.M. DeBruine: “Facial resemblance enhances trust”, Proc. R. Soc. Lond. B, 2002.

Please contact:
Frank Verberne
TU/e, The Netherlands
Tel: +31402475250
