Probably, you will blindly follow the robot, according to the findings of a fascinating new study from the Georgia Institute of Technology. In an emergency situation — a fake one, though the test subjects didn’t know that — most people trusted the robot over their own instincts, even when the robot had shown earlier signs of malfunctioning. It’s a new wrinkle for researchers who study trust in human-robot interactions. Previously, this work focused on getting people to trust robots, such as Google’s driverless cars. Now this new research hints at another problem: How do you stop people from trusting robots too much? It’s a timely question, especially considering the news this week of the first crash caused by one of Google’s self-driving cars.
For the study, the Georgia Tech researchers used a Pioneer P3-AT, a device with a no-nonsense, workmanlike appearance — it looks a bit like a recycling bin with wheels attached — which the researchers modified to give it “arms” that could point. In one experiment, 30 study volunteers followed the bot down a hallway and into a conference room, where they were to fill out a survey about robotics. But as they worked, an alarm went off, and smoke filled the hall outside the door of the conference room. According to the researchers, 26 of the 30 volunteers decided to follow the robot as it led them in an unfamiliar direction, instead of following their own instincts and exiting the building the way they had entered it. And it’s not as if the remaining four chose human reason over robot instruction: As New Scientist’s Aviva Rutkin reports, “two were thrown out of the study for unrelated reasons, and the other two never left the room.”
This was perplexing to the researchers, who had embarked upon the project to study how best to persuade people to trust robots — for example, in a real emergency, would people in a high-rise building trust a robot to lead them to safety? After the surprising results of that initial experiment, the researchers conducted several follow-up studies. Rutkin writes:
Even a clearly malfunctioning robot seems worthy of following, in other words. The researchers believe that it might be as simple as the fact that the robot brandished the sign EMERGENCY GUIDE ROBOT, which gave it a guise of authority. Maybe it knew something they didn’t. And in a stressful situation, that might have been enough to nudge the participants into making the split-second decision of following the bot.
Many of us have likely already been in situations in which we mindlessly follow a device’s instructions over our own instincts. It’s me when I follow Google Maps’ instructions, even when it takes me on some weird, unfamiliar route. It’s Michael Scott of The Office obeying his GPS when it tells him to drive into a lake. (“The machine knows!”) “As long as a robot can communicate its intentions in some way, people will probably trust it in most situations,” Paul Robinette, a grad student at Georgia Tech who led this study, told New Scientist.
These results have implications for some robotics research in the military that is already under way, Discovery points out:
Again, up to this point, the bulk of the research on trust in human-robot interactions has centered on building trust. Google’s driverless cars are purposefully designed to resemble an adorable — and therefore trustworthy — human face, for example. But these findings suggest a potential new direction in robotics research. “We wanted to ask the question about whether people would be willing to trust these rescue robots,” Alan Wagner, a senior researcher at Georgia Tech, said in a statement. “A more important question now might be to ask how to prevent them from trusting these robots too much.”
This article was written by Melissa Dahl from Science of Us and was legally licensed through the NewsCred publisher network.