How robots and chatbots can restore people's trust after making mistakes

by Brenden Burgess


When interacting with people, robots and chatbots can make mistakes, violating a person's trust in them. Afterwards, people may come to regard the robot as unreliable. Trust repair strategies implemented by such bots can mitigate the negative effects of these trust violations. However, it is unclear whether these strategies can fully restore trust, or how effective they remain after repeated violations.

To find out, researchers at the University of Michigan conducted a study of robot behavior strategies for restoring trust between a bot and a person. The trust repair strategies examined were apologies, denials, explanations, and promises of better performance.

In the experiment, 240 participants worked with a robot as a colleague on a task in which the robot sometimes made mistakes. The robot would violate the participant's trust and then offer a specific strategy to repair it. Participants acted as the robot's teammates, and human-robot communication took place in an interactive virtual environment built in Unreal Engine 4.

The virtual environment used in the experiment, in which participants interacted with the robot.

The environment was modeled to look like a realistic warehouse setting. Participants sat at a table with two screens and three buttons. The screens showed the current team score, the box-processing speed, and the serial number participants needed to check against the box submitted by their robot teammate. The team score increased by 1 point each time a correct box was placed on the conveyor belt and decreased by 1 point each time an incorrect box was placed there. If the robot chose the wrong box and the participant flagged it as an error, an indicator appeared on the screen showing that the box was incorrect, but no points were added to or subtracted from the team's score.

The flowchart illustrates the possible outcomes and scores, depending on which box the robot selects and the decision the participant makes.
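To make the scoring rules concrete, here is a minimal sketch of that logic in Python. It is only an illustration of the rules described above, not the study's actual task code; the function name and arguments are invented, and the case of a correct box being flagged as an error (not described in the article) is assumed to also score zero.

```python
def score_change(robot_box_correct: bool, participant_flags_error: bool) -> int:
    """Change in team score for a single box, following the rules described above."""
    if participant_flags_error:
        # A flagged box never reaches the conveyor belt, so no points change hands.
        # (Flagging a box that was actually correct is assumed to score zero as well.)
        return 0
    # An unflagged box goes onto the belt: +1 if it was correct, -1 if it was not.
    return 1 if robot_box_correct else -1


# Example: the robot picks the wrong box and the participant fails to flag it.
print(score_change(robot_box_correct=False, participant_flags_error=False))  # -1
```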

“To examine our hypotheses, we used a between-subjects design with four repair conditions and two control conditions,” said Connor Esterwood, a researcher at the U-M School of Information and the study's lead author.

In the first control condition, the robot remained silent after making a mistake, making no attempt to restore the person's trust. In the second, the robot performed flawlessly throughout the experiment and likewise said nothing.

The repair conditions took the form of an apology, a denial, an explanation, or a promise, delivered after each error. For the apology, the robot said: “I'm sorry I got the wrong box that time.” For the denial, it said: “I picked the correct box that time, so something else went wrong.” For the explanation, it said: “I see that was the wrong serial number.” And finally, for the promise condition, the robot said: “Next time, I will do better and pick the correct box.”

Each of these responses was designed to present a single type of trust repair strategy and to avoid inadvertently combining two or more strategies. During the experiment, participants received these repair messages through audio and text captions. Notably, the robot only temporarily changed its behavior after delivering one of the trust repair strategies, retrieving the correct box twice more before the next error occurred.
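Taken together, the repair utterances and this error-repair cycle can be sketched as a simple simulation. The dictionary keys, function name, and exact pattern length are assumptions made for illustration; the study's actual Unreal Engine 4 implementation is not described in this article.

```python
from itertools import cycle

# Hypothetical mapping of repair strategies to the utterances quoted above.
REPAIR_UTTERANCES = {
    "apology": "I'm sorry I got the wrong box that time.",
    "denial": "I picked the correct box that time, so something else went wrong.",
    "explanation": "I see that was the wrong serial number.",
    "promise": "Next time, I will do better and pick the correct box.",
}

def robot_turns(strategy: str, n_turns: int):
    """Yield (picked_correct_box, utterance) per turn: one error followed by a
    repair message, then two correct picks, repeating until n_turns is reached."""
    pattern = cycle([False, True, True])
    for _, correct in zip(range(n_turns), pattern):
        yield correct, None if correct else REPAIR_UTTERANCES[strategy]
```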

To analyze the data, the researchers used a series of non-parametric Kruskal–Wallis rank sum tests. These were followed by Dunn's post hoc tests for multiple comparisons, with a Benjamini–Hochberg correction to control for multiple hypothesis testing.

“We selected these methods over others because the data in this study were not normally distributed. The first test examined our reliability manipulation by comparing trustworthiness ratings between the perfect-performance condition and the no-repair condition. The second step used three Kruskal–Wallis tests, followed by post hoc comparisons of the repair conditions, to examine participants' ratings of the robot's ability, benevolence, and integrity,” said Lionel Robert, professor of information and co-author of the study.
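For readers who want to reproduce this style of analysis, the described pipeline maps onto standard Python tools (scipy and scikit-posthocs). The sketch below assumes a pandas DataFrame with hypothetical 'condition' and 'ability_rating' columns; it is not the authors' analysis code.

```python
import pandas as pd
from scipy.stats import kruskal
import scikit_posthocs as sp

def analyze(df: pd.DataFrame, rating_col: str = "ability_rating"):
    """Kruskal-Wallis rank sum test across conditions, then Dunn's post hoc
    comparisons with a Benjamini-Hochberg (FDR) correction."""
    groups = [g[rating_col].to_numpy() for _, g in df.groupby("condition")]
    h_stat, p_value = kruskal(*groups)          # omnibus non-parametric test
    dunn_p = sp.posthoc_dunn(df, val_col=rating_col,
                             group_col="condition", p_adjust="fdr_bh")
    return h_stat, p_value, dunn_p
```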

The main results of the study:

  • No trust repair strategy completely restored the robot's perceived trustworthiness.
  • Apologies, explanations, and promises could not restore perceptions of the robot's ability.
  • Apologies, explanations, and promises could not restore perceptions of the robot's integrity.
  • Apologies, explanations, and promises restored perceptions of the robot's benevolence to an equal degree.
  • Denials made it impossible to restore perceptions of the robot's trustworthiness.
  • After three failures, none of the trust repair strategies ever fully restored the robot's trustworthiness.

The study's results have two implications. According to Esterwood, researchers need to develop more effective repair strategies to help robots rebuild trust after their mistakes. In addition, robots should make sure they have mastered a new task before attempting to restore a human's trust in them.

“Otherwise, they risk losing a person's trust to such a degree that it cannot be restored,” concluded Esterwood.
