Variable Viewpoint Reality

9807-28

Progress Report: January 1, 2000 — June 30, 2000

Paul Viola and Eric Grimson

 

Project Overview

In the foreseeable future, sporting events will be recorded in very high fidelity from hundreds or even thousands of cameras. Currently, the nature of television broadcasting demands that only a single viewpoint be shown at any particular time. This viewpoint is necessarily a compromise, typically designed to displease the fewest viewers.

In this project we are creating a new viewing paradigm that will take advantage of recent and emerging methods in computer vision, virtual reality, and computer graphics, together with the computational capabilities likely to be available on next-generation machines and networks. This new paradigm will allow each viewer to watch the field from any arbitrary viewpoint: from the point of view of the ball headed toward the soccer goal, of the goalie defending that goal, of the quarterback dropping back to pass, or of a hitter waiting for a pitch. In this way, each viewer can observe exactly those portions of the game that most interest him, from the viewpoint that most interests him (e.g., some fans may want the best view of Michael Jordan as he sails toward the basket; others may want to see the world from his point of view).

 

Progress Through June 2000

We have made rapid progress on a number of problems related to the goals of the Variable Viewpoint Reality project:

 

On the left is a typical input scenario. Notice that the imaging conditions are quite difficult: lighting is poorly controlled, and the subject is wearing clothing that closely matches the background. On the right is an attempt to segment the subject from the background. These silhouettes are then intersected in order to reconstruct the 3D shape. The quality of these silhouettes is poor, but it is the best that can be done in real time.
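Segmentation of this kind is commonly done by background subtraction. The following is a minimal sketch of the idea in Python, not the project's actual segmentation pipeline: a pixel is labeled foreground when its color differs from a reference background image by more than a (hypothetical) threshold.

```python
import numpy as np

def segment_silhouette(frame, background, threshold=30.0):
    """Label a pixel foreground when its per-pixel color distance
    from a reference background image exceeds the threshold."""
    diff = np.linalg.norm(frame.astype(float) - background.astype(float), axis=-1)
    return diff > threshold

# Toy example: a bright square "subject" on a dark background.
background = np.zeros((8, 8, 3), dtype=np.uint8)
frame = background.copy()
frame[2:6, 2:6] = 200                # subject pixels
mask = segment_silhouette(frame, background)
print(mask.sum())                    # 16 foreground pixels
```

In practice the threshold must be tuned per scene, and a subject whose clothing matches the background (as in the figure) falls below it, which is exactly what produces the gaps discussed next.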

 

The 3D reconstruction on the left is the intersection of the silhouettes shown above. The holes result from gaps in the segmentations. On the right is the output of our new algorithm: with little additional computation, the results are significantly better.
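The silhouette-intersection step can be sketched as voxel carving: a voxel survives only if it falls inside every silhouette. The following is a minimal illustration, assuming three hypothetical axis-aligned orthographic views rather than the calibrated perspective cameras used in practice; it also shows how a single segmentation gap carves a hole through the volume.

```python
import numpy as np

def reconstruct(sil_xy, sil_xz, sil_yz):
    """Visual-hull sketch: voxel (x, y, z) survives only if it
    projects inside all three (orthographic) silhouettes."""
    n = sil_xy.shape[0]
    vol = np.zeros((n, n, n), dtype=bool)
    for x in range(n):
        for y in range(n):
            for z in range(n):
                vol[x, y, z] = sil_xy[x, y] and sil_xz[x, z] and sil_yz[y, z]
    return vol

n = 4
sil = np.zeros((n, n), dtype=bool)
sil[1:3, 1:3] = True                         # 2x2 silhouette in each view
vol = reconstruct(sil, sil, sil)
print(vol.sum())                             # 8 voxels: the 2x2x2 intersection

sil_gap = sil.copy()
sil_gap[1, 1] = False                        # one mislabeled pixel in one view
print(reconstruct(sil_gap, sil, sil).sum())  # 6: the gap carves out two voxels
```

Because the hull is an intersection, errors are one-sided: a false background pixel in any single view removes real volume, which is why poor silhouettes produce the holes seen on the left of the figure.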


The significance of correct calibration is shown in this figure. On the left is an overhead view of a 3D object and a single camera. When the camera is correctly calibrated (shown in green), the cone intersects the volume, as is required for correct reconstruction. If the camera is rotated 17 degrees (shown in pink), the cone no longer intersects the volume, and reconstruction is impossible.
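The figure's point can be made numerically: a volume is reconstructable only if it lies inside each camera's viewing cone, and a modest rotation error swings the cone off the object entirely. The following is a 2D sketch with hypothetical numbers (a 10-degree half field of view), not the project's calibration procedure.

```python
import numpy as np

def in_view(cam_pos, cam_dir, half_fov_deg, point):
    """True when `point` lies inside the camera's viewing cone,
    i.e. within half_fov_deg of the optical axis."""
    v = point - cam_pos
    cos_angle = np.dot(v, cam_dir) / np.linalg.norm(v)
    return cos_angle >= np.cos(np.radians(half_fov_deg))

def rotate2d(v, deg):
    """Rotate a 2D vector counterclockwise by `deg` degrees."""
    t = np.radians(deg)
    c, s = np.cos(t), np.sin(t)
    return np.array([c * v[0] - s * v[1], s * v[0] + c * v[1]])

cam = np.array([0.0, -10.0])
axis = np.array([0.0, 1.0])    # calibrated optical axis points at the object
obj = np.array([0.0, 0.0])     # object at the origin

print(in_view(cam, axis, 10.0, obj))                  # True: cone contains the object
print(in_view(cam, rotate2d(axis, 17.0), 10.0, obj))  # False: 17-degree error misses it
```

With the assumed 10-degree half field of view, a 17-degree rotation error is enough to miss the object entirely, mirroring the pink cone in the figure; and since the hull is the intersection of all cones, one miscalibrated camera empties the reconstruction.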

Research Plan for the Next Six Months