Model Reduction for Human and Animal Locomotion

MIT2001-08

Progress Report: November 1, 2001–December 31, 2001

Jovan Popovic

Project Overview

Modeling human and animal motion is a fundamental scientific objective, with applications in computer graphics, robotics, and medicine. Graphics applications include education, training, and visualization, as well as animation in art, film, and entertainment; in robotics, robot design and the design of controllers for legged locomotion; in medicine, the diagnosis of medical problems and the design of prosthetic devices. In all these applications, a predictive physical model of locomotion is essential. The success of each application is contingent on the simplicity of this locomotion model: models of great complexity are difficult to simulate, analyze, and optimize.

This research develops a general framework for constructing simple, low-dimensional models of locomotion. Joint removal, a simple model-reduction method, is standard computer graphics practice: it reduces the dimensionality of a model by removing the joints with negligible effect on the motion. The purpose of this research is to construct more powerful techniques that not only eliminate insignificant joints but also discover simple locomotion models that approximate the dynamics equations with controllable accuracy. The research employs statistical methods such as principal component analysis to analyze empirical and simulated motion data.

Progress Through December 2001

This initiative began in November 2001, and this progress report describes the work completed during the past two months. In this short period the project has gained considerable momentum and shows outstanding promise.

Our investigation began with a statistical analysis of a walking legged robot. The robot, named M2, is a 3D bipedal walker developed in the MIT Leg Lab under the guidance of Prof. Gill Pratt. Our tests used data generated by physical simulations of M2's walking cycle to reduce the dimensionality of the motion from 12 degrees of freedom to 6. We conclude that the lower-dimensional space is sufficient to describe the original motion without significant visual artifacts.
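The reduction described above can be illustrated with principal component analysis. The following is a minimal sketch, not the project's actual code: a synthetic 12-channel signal stands in for the simulated M2 joint-angle data and is constructed to lie in a 6-dimensional subspace, so the eigenvalue spectrum drops to numerical zero after the sixth component.

```python
import numpy as np

# Minimal PCA sketch (not the project's code): a synthetic 12-channel
# signal stands in for simulated M2 joint-angle data. Each row is one
# time sample of the 12 degrees of freedom.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 4.0 * np.pi, 500)

# Construct a signal that truly lies in a 6-dimensional subspace.
latent = np.stack([np.sin((k + 1) * t + k) for k in range(6)], axis=1)
angles = latent @ rng.normal(size=(6, 12))          # shape (500, 12)

# Principal component analysis via the SVD of the centered data.
mean = angles.mean(axis=0)
_, s, vt = np.linalg.svd(angles - mean, full_matrices=False)
eigenvalues = s**2 / (len(angles) - 1)              # sorted, descending

# The eigenvalue spectrum vanishes after six components, so six reduced
# coordinates suffice to describe this walking-like signal.
basis = vt[:6]                                      # (6, 12) reduced basis
reduced = (angles - mean) @ basis.T                 # (500, 6) trajectory
reconstructed = reduced @ basis + mean              # back to 12 DOF
```

On real simulation data the trailing eigenvalues are small rather than exactly zero, and the reconstruction is correspondingly approximate.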

**Figure 1** The motion of a walking robot with 12 degrees of freedom (left) is expressed in a reduced 6-dimensional configuration space with minor artifacts (right). The eigenvalues computed by the principal component analysis (middle) indicate that a simple configuration space for walking may exist.

In December, we purchased and installed an optical motion-capture system from Vicon Motion Systems. The system consists of ten high-speed (120 Hz), high-resolution (1000 by 1000 pixels) video cameras and a turn-key workstation for synchronizing and processing the video data. The Vicon software extracts the motion by computing the trajectories of visual markers (small spheres covered in reflective tape) placed on the surfaces of objects.

The 3D trajectory of each marker is reconstructed from optical video measurements. In our tests, the system can capture a motion with sub-millimeter accuracy.

**Figure 2** A human broad jump (left) is expressed in a reduced configuration space (right) with some stretching and interpenetration artifacts. The 3D trajectories of 41 markers describe the original motion. This sequence of vectors with 123 elements each is reduced to a sequence of vectors with only 8 elements each.

Our hypothesis is that every locomotive activity has a low-dimensional configuration manifold. To test this hypothesis, we used the motion-capture system to record several human broad jumps. The captured data is a time sequence of 3D position vectors for all 41 visual markers attached to the body of the person performing the jump. At any time instant, the full configuration (the positions of all markers) is described by a vector with 123 elements (41 markers times 3 coordinates). We analyzed this data with principal component analysis to compute an 8-dimensional configuration space. The tests show that even this significantly reduced configuration space can express the motions in our data set with only small visual artifacts. Violations of geometric constraints create the most noticeable visual artifacts.
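The analysis above can be sketched as follows. This is synthetic data standing in for the captured marker trajectories (the marker and element counts match the text; everything else is invented for illustration). Each 123-element frame vector is compressed to 8 numbers and reconstructed; the nonzero residual is the source of the geometric artifacts.

```python
import numpy as np

# Synthetic stand-in for captured jump data: 240 frames of 41 markers,
# flattened to 123-element frame vectors (41 markers x 3 coordinates).
rng = np.random.default_rng(1)
n_frames, n_markers = 240, 41
t = np.linspace(0.0, 1.0, n_frames)

# Eight smooth latent motions plus small simulated measurement noise.
latent = np.stack([np.cos(2.0 * np.pi * (k + 1) * t) for k in range(8)], axis=1)
frames = latent @ rng.normal(size=(8, 3 * n_markers))
frames += 0.01 * rng.normal(size=frames.shape)

# PCA: project each 123-element frame onto an 8-dimensional basis.
mean = frames.mean(axis=0)
_, _, vt = np.linalg.svd(frames - mean, full_matrices=False)
basis = vt[:8]                          # 8-dimensional configuration space
codes = (frames - mean) @ basis.T       # each frame: 123 numbers -> 8
approx = codes @ basis + mean           # lossy reconstruction

# The residual is small but nonzero: plain PCA cannot enforce geometric
# constraints such as constant limb lengths, hence the visible artifacts.
rms = float(np.sqrt(np.mean((frames - approx) ** 2)))
```

In this sketch the residual sits at the noise level; on real marker data the residual concentrates in exactly the constraint violations (stretching, interpenetration) noted above.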

Research Plan for the Next Six Months

In the next six months, we will capture motions from several human activities: running, jumping, walking, skipping, and others. At the moment, the motion-capture process is quite involved and frequently requires manual intervention.

In parallel, we will continue our statistical analysis of each activity with existing dimensionality-reduction techniques. Our plan is to develop a new dimensionality-reduction method that can incorporate a priori knowledge about the data. In particular, our current results indicate that incorporating geometric constraints is critical for creating motions without significant visual artifacts. Furthermore, we need to confirm with cross-validation tests whether a low-dimensional configuration space exists for each human activity.
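One way to carry out the cross-validation tests mentioned above is leave-one-trial-out: fit the PCA basis on all but one captured trial and measure reconstruction error on the held-out trial. The sketch below is our own illustration, not the project's code; if the held-out error stops improving beyond some dimension d, that supports the existence of a d-dimensional configuration space for the activity.

```python
import numpy as np

def holdout_error(trials, d):
    """Leave-one-trial-out cross-validation of a d-dimensional PCA model.

    trials: list of (frames x dim) arrays, one array per captured motion.
    Returns the mean RMS reconstruction error over the held-out trials.
    """
    errors = []
    for i in range(len(trials)):
        train = np.vstack([trial for j, trial in enumerate(trials) if j != i])
        mean = train.mean(axis=0)
        _, _, vt = np.linalg.svd(train - mean, full_matrices=False)
        basis = vt[:d]                              # d principal directions
        held = trials[i] - mean
        residual = held - (held @ basis.T) @ basis  # part PCA cannot express
        errors.append(np.sqrt(np.mean(residual**2)))
    return float(np.mean(errors))
```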

Once we identify the low-dimensional space, we plan to construct the dynamics equations defined on this low-dimensional manifold. We have already investigated this approach by projecting the dynamics equations for the M2 robot onto the reduced space. The reduced dynamics are critical for applications in motion synthesis, robot control, and motion retargeting. We will develop specific tools for each of these three applications in the second year of this initiative.
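The projection step can be written down explicitly. What follows is a minimal Galerkin-style sketch under the assumption that the reduced space is given by a constant linear PCA basis U; the actual formulation used for M2 may differ. With full-order dynamics M(q)q̈ + c(q, q̇) = τ in the 12 generalized coordinates q, substituting q ≈ q₀ + Uz and premultiplying by Uᵀ yields dynamics in the 6 reduced coordinates z:

```latex
\begin{align*}
  q \approx q_0 + Uz, \qquad \dot{q} &\approx U\dot{z}, \qquad \ddot{q} \approx U\ddot{z},\\
  U^{\mathsf{T}} M(q_0 + Uz)\,U\,\ddot{z}
    &= U^{\mathsf{T}}\bigl(\tau - c(q_0 + Uz,\; U\dot{z})\bigr).
\end{align*}
```

The reduced mass matrix UᵀMU is 6 × 6 rather than 12 × 12, which is what makes the reduced dynamics cheaper to simulate, analyze, and optimize.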