Human players psyched out by a cute robot talking trash

Would you crack under a relentless assault of trash talk from a robot? Yeah, turns out you probably would. 

Or so says a recent study by researchers at Carnegie Mellon University, which demonstrated that negative smack talk by a robot can have a real impact on how humans perform. Conversely, encouraging talk from the robot seemed to aid performance. 

The trash talker in question was Pepper, the commercially available humanoid from SoftBank that, ironically, was created to act as a comforting robot companion replete with emotional intelligence.

Comically, the trash talk in the study was pretty vanilla. “I have to say you are a terrible player,” Pepper told some study participants, and “Over the course of the game, your playing has become confused.”

And yet, the trash talk worked. In the study, participants played a game called “Guards and Treasures,” a standard tool researchers use to measure rationality. Each participant played 35 rounds against the robot; some received encouraging feedback from it, while others endured dismissive remarks. Although the human players’ rationality improved as the number of games increased, those who were criticized by the robot didn’t score as well as those who were praised.

What’s interesting here is that participants clearly understood that the source of their irritation was a machine programmed to trash talk. Even so, they couldn’t rationally tune it out. The implications, in a world fast becoming crowded with robots and AI assistants, could be substantial.

“This is one of the first studies of human-robot interaction in an environment where they are not cooperating,” said co-author Fei Fang, an assistant professor in the Institute for Software Research. It has enormous implications for a world where the number of robots and internet of things (IoT) devices with artificial intelligence capabilities is expected to grow exponentially. “We can expect home assistants to be cooperative,” she said, “but in situations such as online shopping, they may not have the same goals as we do.”

The study was an outgrowth of a student project in AI Methods for Social Good, a course that Fang teaches. It was presented last month at the IEEE International Conference on Robot & Human Interactive Communication (RO-MAN) in New Delhi, India. 

The study is garnering attention for its novelty: most human-robot interaction research focuses on how humans and robots can best work together. The darker side of robotics, it turns out, may come not in a Terminator-style Armageddon, but in a string of missed free throws thanks to an obnoxious robot opponent in a pickup game of H-O-R-S-E.


Source: Robotics - zdnet.com