You may have already seen the GIF in which a robot tries in vain to place the object it is carrying on a table and eventually falls over, as if in despair. If not, it awaits you below. The image was made from a video that Boston Dynamics recently showed at a TED talk, where the company discussed and demonstrated its latest achievements in robotics. These included the already familiar robot dog Spot and the bipedal robot Atlas, which became the hero of the video.
Although the company has repeatedly been called "a harbinger of the robo-apocalypse" (it does develop some of the most advanced robots in the world), the public reaction to the latest video, and in particular to the GIF, turned out to be much softer. Some viewers even sympathized with the poor iron fellow.
"I felt really sorry for this robot. We are facing a very strange future," wrote one commenter on Reddit, where the GIF first appeared.
"This was the most endearing robot behavior in all the videos I've seen. Poor thing. He tried so hard," said another person in the discussion thread.
But why do people become friendlier not only toward Boston Dynamics' robots but toward robots in general when the machines make mistakes, sometimes quite crude ones like the one in the image above? According to a group of scientists from the Center for Human-Computer Interaction in Salzburg, Austria, the explanation is the same reason we like it when other people slip up: regardless of any differences in social standing, mistakes bring them closer to us and make them more approachable. In other words, mistakes make robots seem "more human," because none of us is immune to error.
A team of researchers from Salzburg, led by Nicole Mirnig, recently tested this hypothesis in an experiment with 45 volunteers. Their task was to assemble something from a LEGO set together with a small humanoid robot. In some of the trials, Mirnig and her colleagues deliberately programmed the robot to make small, harmless mistakes, such as repeating words or failing to grasp LEGO pieces. After each assembly session, the scientists asked the volunteers to rate the robot on a number of criteria, including likability, anthropomorphism (human-likeness), and intelligence. The results showed that people liked the robot more when it made mistakes.
It is hard to say definitively why error-prone robots feel more like "one of us," but Mirnig suggests it is connected with the so-called "pratfall effect": the finding in social psychology that a person inspires more trust, or becomes more attractive to us, after making a mistake. The effect is highly context-dependent, however. We do not like people who make mistakes constantly, but we do like it when someone occasionally makes a minor one. Robots fall squarely into this latter category.
In the Salzburg experiment, volunteers were asked to build something from a LEGO set together with a robot. The whole process was recorded, and the scientists then analyzed the participants' reactions.
"The study showed that people's views of and expectations for robots rely heavily on what they have learned about those same robots from the media," Mirnig told Digital Trends.
"These media sources include movies, in which robots are often portrayed as perfectly functioning entities, whether evil or good. Before interacting with a robot in person, people fall back on this accumulated experience, these memories and expectations. Based on this, I suppose that interacting with a robot that makes mistakes makes us feel less inferior to modern technology, and instead brings us closer to it, to these very robots."
Work on this psychological question will most likely continue. And if these conclusions are ultimately confirmed, companies that build robots may one day program them to make minor errors when interacting with us. Of course, all of this would be carefully calibrated so that the errors are never serious enough to spoil the interaction itself. In short, machines may sooner or later genuinely become more humane, even if that humanity is artificial. But, as it turns out, our subconscious can be deceived, and someone will be able to take advantage of that.