
I'm persuasive // Photo: roboticage, Flickr

Aye, robot

March 17, 2014 — 

A new study from the U of M’s Human-Computer Interaction Lab suggests that you might obey orders from a robot almost as readily as you would from a human.

The team designed the experiment to get participants to do dull tasks: spend 80 minutes renaming hundreds of file extensions from “jpg” to “png”, sing at different pitches, and repeatedly click on an icon. The taskmaster was either a 27-year-old human male or Jim, the pseudonym of a Nao (pronounced “now”) humanoid robot.

As the paper reads:

The robot experimenter sat upright on a desk, spoke using a neutral tone, gazed around the room naturally to increase sense of intelligence, and used emphatic hand gestures when prodding, all controlled from an adjacent room via a Wizard of Oz setup. The “wizard” used both predefined and on-the-fly responses and motions to interact with the participant; the responses were less varied than the human experimenter’s as we believed this would be expected of a robot. Participants were warned that the robot required “thinking time” (to give the wizard reaction time) and indicated this with a blinking chest light.

To reduce suspicion about the reason for having a robot and to reinforce its intelligence we explained that we were helping the engineering department test their new robot that is “highly advanced in artificial intelligence and speech recognition.” We explained that we are testing the quality of its “situational artificial intelligence.”

The goal, however, was to see whether participants treated the human or the robot as more of an authority figure. A video abstract of the study explains it further.

Quoting the paper again:

The results show that the robot had an authoritative social presence: a small, child-like humanoid robot had enough authority to pressure 46% of participants to rename files for 80 minutes, even after indicating that they wanted to quit. Even after trying to avoid the task or engaging in arguments with the robot, participants still (often reluctantly) obeyed its commands. These findings highlight that robots can indeed pressure people to do things they would rather not do, supporting the need for ongoing research into obedience to robotic authorities.

We further provide insight into some of the interaction dynamics between people and robotic authorities, for example, that people may assume a robot to be malfunctioning when asked to do something unusual, or that there may be a deflection of the authority role from the robot to a person.

The Nao robot is not an intimidating or authoritarian figure; it’s designed to be about as non-threatening as a pug puppy. And although only 46 per cent of participants obeyed Nao, compared to 86 per cent who followed the human’s orders, the intriguing (perhaps frightful) bit is that almost half of the participants did dull tasks they didn’t want to do simply because a small robot they had just met asked them to. But perhaps the Canadians just didn’t want to seem rude.

Jim, the robot authority

Do as I say: share this story. // Photo: HCI Lab at the University of Manitoba

Research at the University of Manitoba is partially supported by funding from the Government of Canada Research Support Fund.
