Trust Regulation in Social Robotics: From Violation to Repair

Matouš Jelínek, Kerstin Fischer

While trust in human-robot interaction is increasingly recognized as necessary for the successful deployment of social robots, our understanding of how to regulate trust in human-robot interaction is still limited. In the present experiment, we evaluated different approaches to trust calibration in human-robot interaction, using five strategies: proficiency, situation awareness, transparency, trust violation, and trust repair. We implemented these interventions in a within-subject experiment in which participants (N=24) teamed up with a social robot and played a collaborative game. The level of trust was measured after each section of the interaction using the Multi-Dimensional Measure of Trust (MDMT) scale. As expected, the interventions had a significant effect on i) violating and ii) repairing the level of trust throughout the interaction. Moreover, the robot demonstrating situation awareness was perceived as significantly more benevolent than the baseline.
