Human-Robot Dynamic Social Interaction
Rodney A. Brooks
1. Project Overview
NTT researchers are interested in the question of whether a physical robot produces a more direct emotional coupling with human beings than does a computer-generated graphical image of a similar robot. At MIT we are building a robot that has human-like facial expressions and shoulder and neck gestures, and that perceives human motion and facial expressions. This perception is coupled to an emotional system so that the person and the robot naturally follow normal human communication social dynamics. This robot will be installed at the NTT Communications Science Laboratories in Kyoto, where the response of human subjects will be measured and compared to their response to a graphical face interface.
2. Progress through June 2000
During the first nine months of this project (through March 2000) we have achieved the following:
The new Kismet is a significant step forward as an interactive humanoid head. It is much more life-like, and responds at much more human-like speeds.
We expect to complete items 5 and 6 above by June 2000. We will also begin fabrication of the facial features by that time. Furthermore, we will complete fabrication of the delivery control system for the NTT version of Kismet.
3. Proposed work for the year July 1, 2000 through June 30, 2001
During the second year of this project we will fabricate the delivery version of Kismet 2, and deliver it to NTT. The necessary steps for that are:
Once the system is delivered to NTT, we will work closely with NTT researchers to develop the experiments that compare the embodied version of Kismet with a graphical version.
These experiments will require subtle planning. For instance, we have noticed with Kismet 1 that when it becomes interested in something else in the environment and turns its attention away from a person's face, that person genuinely feels they are in the presence of another being. It will be quite a challenge to give the graphical robot that same qualitative capability, so that it too engages things in the physical environment in which the person exists. Controlling for such subtle differences across the embodied and graphical worlds will be difficult, but identifying the relevant aspects of the world in order to run the experiments will be revealing in itself.
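The attention-shifting behavior described above can be illustrated with a toy sketch. This is a hypothetical illustration, not Kismet's actual implementation: each stimulus is given an assumed fixed salience, and a habituation term grows while a stimulus is attended and decays otherwise, so attention eventually wanders from a person's face to another object and back. All names and parameters (`run_attention`, `gain`, `decay`) are invented for this sketch.

```python
# Toy sketch of habituation-driven attention shifting (hypothetical
# illustration only; not the actual Kismet attention system).
# Each stimulus has a fixed salience. Attending to a stimulus raises
# its habituation level, which decays when unattended, so attention
# eventually shifts elsewhere and later returns.

def run_attention(saliences, steps, gain=0.15, decay=0.05):
    """Return the sequence of attended stimulus names over `steps` ticks."""
    habituation = {name: 0.0 for name in saliences}
    attended = []
    for _ in range(steps):
        # Attend to the stimulus with the highest effective interest.
        target = max(saliences, key=lambda n: saliences[n] - habituation[n])
        attended.append(target)
        for name in habituation:
            if name == target:
                habituation[name] += gain  # grow "boredom" with the target
            else:
                habituation[name] = max(0.0, habituation[name] - decay)
    return attended

if __name__ == "__main__":
    history = run_attention({"face": 1.0, "toy": 0.8}, steps=20)
    print(history)
```

With these assumed parameters the sketch first attends to the more salient face, habituates, shifts to the toy, and then returns to the face, which is the qualitative pattern a person experiences as being in the presence of another being.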