
Natural Communication with Mobile Robots

(Photo caption: "Don't follow me... I'm lost")

This project aims to make it easy for people to communicate with mobile robots navigating indoor office environments by providing a bridge between (1) the kind of language people use to name places and describe paths among them, and (2) the radically different way existing mobile robots, with their impoverished sensory capabilities, conceive of the world.

This project was completed as part of Mark Torrance's Master's Thesis at the MIT Artificial Intelligence Lab. For more information please consult the publications below.

Publications

Abstract

We have developed a system that mediates between an unmodified reactive mobile robot architecture and human natural language I/O. We introduce reactive-odometric plans and demonstrate their use in plan execution, plan recognition, and learning to associate human terms with perceptually unremarkable locations in the environment. The communication component of our architecture supports typewritten natural language discourse with people. It lets users name places either immediately or in relation to other known places, ask questions about the robot's plans and the spatial relationships of known places, and give the robot short- and long-term goals. This thesis presents results obtained with our implementation of this architecture on a physical mobile robot system designed by Jonathan Connell of IBM T.J. Watson Research Center, and in simulation. These results reflect experiments performed by the author and by other users.
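A reactive-odometric plan (ROP) pairs a compass heading with a termination condition for each step. The following is a minimal sketch of how such a plan step might be represented; the `RopStep` class and its fields are illustrative assumptions, not the thesis implementation.

```python
# Minimal sketch of a reactive-odometric plan (ROP) step: a heading plus
# either a sensory termination condition ("go until you can turn right")
# or an odometric one (a stored distance). Illustrative only; not the
# thesis implementation.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class RopStep:
    heading: str                                            # e.g. "NORTH"
    sensory_stop: Optional[Callable[[dict], bool]] = None   # stop when true
    distance_feet: Optional[float] = None                   # stop after this distance

    def describe(self) -> str:
        if self.distance_feet is not None:
            return f"In the ROP, I face {self.heading} and go {self.distance_feet} feet."
        return f"In the ROP, I face {self.heading} and go until a sensed condition holds."

# The route the robot reports in the transcript: Jim's office to Mark's office.
plan = [RopStep("NORTH", distance_feet=8.2),
        RopStep("NORTH", distance_feet=13.6)]
for step in plan:
    print(step.describe())
```

A plan is then just a sequence of such steps, which the robot can both execute and recite in answer to "How do you get from ... to ...?".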

Sample Transcript

The following transcript was taken directly from the robot. It demonstrates the robot learning about several new places in the environment and successfully navigating to and recognizing the places it has learned. The robot uses a combination of sensory plan-step termination conditions, such as "go until you can turn right", and odometry-based termination conditions, stored when the robot is just told to "stop" and then "you are at Jim's office". What the human user, or tutor, types is outdented in a normal font. What the robot says in response is indented and emphasized.

Where are you?
    I don't know where I am right now.
Which way are you facing?
    I don't know which way I am facing. Please tell me.
You are facing north
    I am facing NORTH. Thanks.
You are at Mark's office
    I'm learning about MARK'S OFFICE.
Turn around
    I'm turning around.
Go until you can turn right
    I'm going until I see no obstacle on the right.
You are at the northeast entrance to the elevator lobby
    I'm learning about THE NORTHEAST ENTRANCE TO THE ELEVATOR LOBBY.
Go
    I'm going.
Stop
    I'm stopping.
You are at Jim's office
    I'm learning about JIM'S OFFICE.
How do you get from Jim's office to Mark's office?
    To get from JIM'S OFFICE to MARK'S OFFICE,
    In the ROP, I face NORTH and go 8.2 feet.
    In the ROP, I face NORTH and go 13.6 feet.

Karen's office is north of Jim's office.
    I'm learning about KAREN'S OFFICE.
Go to Mark's office
    Now I'm going from JIM'S OFFICE to THE NORTHEAST ENTRANCE TO THE ELEVATOR LOBBY.
    Would you please tell me if I pass KAREN'S OFFICE.
    This is THE NORTHEAST ENTRANCE TO THE ELEVATOR LOBBY.
    Now I'm going from THE NORTHEAST ENTRANCE TO THE ELEVATOR LOBBY to MARK'S OFFICE.
    This is MARK'S OFFICE.
    I have arrived at MARK'S OFFICE.

Go to Karen's office.
    I don't know how to get to KAREN'S OFFICE from here.
Go to Jim's office.
    Now I'm going from MARK'S OFFICE to THE NORTHEAST ENTRANCE TO THE ELEVATOR LOBBY.
    This is THE NORTHEAST ENTRANCE TO THE ELEVATOR LOBBY.
    Now I'm going from THE NORTHEAST ENTRANCE TO THE ELEVATOR LOBBY to JIM'S OFFICE.
    This is JIM'S OFFICE.
    I have arrived at JIM'S OFFICE.
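The two termination styles in the transcript could be checked in a single execution loop: a sensory predicate fires ("go until you can turn right"), or the odometer reaches a distance stored when the tutor said "stop". The sketch below is a simplified simulation under that assumption; all names are illustrative, not the thesis code.

```python
# Simplified sketch of plan-step execution with the two termination styles
# from the transcript: sensory ("go until you can turn right") and odometric
# (a distance recorded when the tutor says "stop"). Names are illustrative.
def run_step(sense, advance, sensory_stop=None, distance_feet=None):
    """Advance until the step's termination condition holds; return feet traveled."""
    traveled = 0.0
    while True:
        if sensory_stop is not None and sensory_stop(sense()):
            return traveled                 # sensory termination fired
        if distance_feet is not None and traveled >= distance_feet:
            return traveled                 # odometric termination reached
        traveled += advance()               # move one increment

# Toy run: the right side becomes clear after 5 feet of travel.
readings = iter([{"right_clear": False}] * 5 + [{"right_clear": True}])
dist = run_step(lambda: next(readings), lambda: 1.0,
                sensory_stop=lambda s: s["right_clear"])
print(dist)   # 5.0
```

Learning a place like JIM'S OFFICE then amounts to storing the distance traveled at the moment the tutor says "stop" as an odometric termination for that step.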

Additional Language Supported

In addition to the language exercised above (and related capacities not explicitly used here), the robot can also face named directions, accept arbitrary names for places, and answer questions about where places are in terms of their adjacent places.
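Answering where a place is in terms of its adjacent places could be done with a small table of directional relations, populated from statements like "Karen's office is north of Jim's office". The sketch below is an illustrative assumption about such a store, not the thesis code; `learn` and `where_is` are hypothetical names.

```python
# Illustrative sketch (not the thesis code) of storing and answering
# directional relations such as "Karen's office is north of Jim's office".
OPPOSITE = {"north": "south", "south": "north", "east": "west", "west": "east"}

relations = {}   # (place, direction) -> the place reached in that direction

def learn(place, direction, neighbor):
    """Record 'place is <direction> of neighbor', plus the reverse relation."""
    relations[(neighbor, direction)] = place
    relations[(place, OPPOSITE[direction])] = neighbor

def where_is(place):
    facts = [f"{place} is {d} of {p}"
             for (p, d), q in relations.items() if q == place]
    return facts or [f"I don't know where {place} is."]

learn("KAREN'S OFFICE", "north", "JIM'S OFFICE")
print(where_is("KAREN'S OFFICE"))
```

Storing the reverse relation as well is what lets the robot answer about either place, even though the tutor stated the fact only once.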

Mark C. Torrance, <torrance@ai.mit.edu>

Natural Communication with Mobile Robots is a part of the Cognitive Robotics project of Professor Lynn Andrea Stein's AP group at the Artificial Intelligence Laboratory at the Massachusetts Institute of Technology.