Variable Viewpoint Reality

9807-28

Progress Report: July 1, 2000 — December 31, 2000

Paul Viola and Eric Grimson


Project Overview

In the foreseeable future, sporting events will be recorded in super high fidelity from hundreds or even thousands of cameras. Currently, the nature of television broadcasting demands that only a single viewpoint be shown at any particular time. This viewpoint is necessarily a compromise and is typically designed to displease the fewest viewers.

In this project we are creating a new viewing paradigm that will take advantage of recent and emerging methods in computer vision, virtual reality, and computer graphics technology, together with the computational capabilities likely to be available on next-generation machines and networks. This new paradigm will allow each viewer to view the field from any arbitrary viewpoint -- from the point of view of the ball heading toward the soccer goal, of the goalie defending the goal, of the quarterback dropping back to pass, or of a hitter waiting for a pitch. In this way, the viewer can observe exactly those portions of the game that most interest him, and from the viewpoint that most interests him (e.g. some fans may want to have the best view of Michael Jordan as he sails toward the basket; others may want to see the world from his point of view).


Summary of Progress Through July 2000

To create this new viewing paradigm, there are a number of important computer vision and graphics problems that must be solved. These include real-time 3D reconstruction, coordination of large numbers of cameras, rendering of arbitrary viewpoints, learning to recognize common activities, finding similar visual events in archival video, and many other associated problems. We have made rapid progress on many of the problems related to the goals of the Variable Viewpoint Reality project.

Progress Through December 2000

Graph cuts can be used to find high-likelihood segmentations under an MRF prior for shape. Classically, MRFs were solved using a time-intensive technique called Gibbs sampling. This new approach transforms the MRF into a conventional graph and uses polynomial-time min-cut algorithms to find the lowest-cost solution, as sketched below.
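
As a rough illustration of the construction, the following sketch builds the standard two-terminal graph for binary figure/ground labeling and solves it with a single min-cut. This is a minimal sketch under stated assumptions, not the project's implementation: the unary costs, the 4-connected Ising smoothness term, and the use of networkx's general-purpose max-flow solver are all illustrative choices (a real-time system would use a specialized solver).

    import numpy as np
    import networkx as nx

    def graph_cut_segmentation(cost_fg, cost_bg, smoothness=1.0):
        # Binary MRF segmentation via a single min-cut.
        # cost_fg[i, j] / cost_bg[i, j]: cost of labeling pixel (i, j)
        # foreground / background; `smoothness` is the Ising penalty for
        # disagreeing 4-connected neighbors. All values are illustrative.
        h, w = cost_fg.shape
        g = nx.DiGraph()
        for i in range(h):
            for j in range(w):
                p = (i, j)
                # Terminal edges encode the unary (data) costs: the cut
                # severs exactly one terminal edge per pixel, paying the
                # cost of the label that pixel receives.
                g.add_edge("s", p, capacity=float(cost_bg[i, j]))
                g.add_edge(p, "t", capacity=float(cost_fg[i, j]))
                # Neighbor edges encode the MRF smoothness prior.
                for q in ((i + 1, j), (i, j + 1)):
                    if q[0] < h and q[1] < w:
                        g.add_edge(p, q, capacity=smoothness)
                        g.add_edge(q, p, capacity=smoothness)
        # The minimum cut partitions pixels into a source (foreground)
        # side and a sink (background) side at the lowest total cost.
        _, (source_side, _) = nx.minimum_cut(g, "s", "t")
        labels = np.zeros((h, w), dtype=np.uint8)
        for node in source_side:
            if node != "s":
                labels[node] = 1
        return labels

For the binary case this cut is exactly the MAP labeling of the MRF, which is what makes the polynomial-time reduction attractive relative to Gibbs sampling.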

The upper-left image shows the input (background omitted). The other panels show the evidence used for segmentation: the lower left shows image differences (note the holes within the body), and the upper right shows shadow evidence, i.e., locations where the color remains the same but the intensity is reduced. The lower right shows the final segmentation.
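
The shadow cue lends itself to a direct test: a shadowed background pixel keeps its chromaticity but loses intensity. The sketch below is one plausible formulation, assuming float RGB images in [0, 1]; the function name and threshold values are illustrative assumptions, not the project's.

    import numpy as np

    def shadow_evidence(frame, background, color_tol=0.05,
                        dim_range=(0.4, 0.95)):
        # Flag pixels whose color matches the background but whose
        # intensity has dropped -- the shadow evidence described above.
        # Thresholds are illustrative assumptions.
        eps = 1e-6
        # Shadows dim the background multiplicatively, so test the
        # per-pixel intensity ratio against a plausible dimming band.
        ratio = frame.sum(axis=2) / (background.sum(axis=2) + eps)
        dimmed = (ratio > dim_range[0]) & (ratio < dim_range[1])
        # Chromaticity (intensity-normalized color) should be nearly
        # unchanged where only a shadow has fallen.
        chroma_f = frame / (frame.sum(axis=2, keepdims=True) + eps)
        chroma_b = background / (background.sum(axis=2, keepdims=True) + eps)
        same_color = np.abs(chroma_f - chroma_b).max(axis=2) < color_tol
        return dimmed & same_color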


The upper images show two frames from a video sequence in which the person is moving to the left. The lower left shows an image segmentation based on color information. The lower middle shows motion information; note that this motion field is extracted using the boundaries estimated from color. The lower right shows the segmentation using both motion and color information.
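
To make the color-plus-motion combination concrete, the sketch below segments the first frame by color and then pools dense optical flow within each color segment, so that the motion estimates respect the color boundaries. The OpenCV routines, the k-means color segmentation, and the segment count are assumptions chosen for illustration; the report does not specify the project's actual solvers.

    import cv2
    import numpy as np

    def flow_within_color_segments(prev_bgr, next_bgr, n_segments=8):
        # Piecewise-constant motion field that respects color boundaries:
        # segment the first frame by color, then pool dense optical flow
        # within each segment. Parameter choices are illustrative.
        h, w = prev_bgr.shape[:2]

        # Color segmentation: k-means clustering of RGB values.
        pixels = prev_bgr.reshape(-1, 3).astype(np.float32)
        criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER,
                    10, 1.0)
        _, labels, _ = cv2.kmeans(pixels, n_segments, None, criteria, 3,
                                  cv2.KMEANS_PP_CENTERS)
        labels = labels.reshape(h, w)

        # Dense optical flow between the two frames.
        prev_gray = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
        next_gray = cv2.cvtColor(next_bgr, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)

        # Pool the flow inside each color segment so motion estimates
        # share the boundaries found from color.
        pooled = np.zeros_like(flow)
        for k in range(n_segments):
            mask = labels == k
            if mask.any():
                pooled[mask] = flow[mask].mean(axis=0)
        return labels, pooled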