Image-Based Synthetic Aperture Rendering

MIT9904-14

Progress Report: January 1, 2000 – June 30, 2000

Leonard McMillan and Julie Dorsey


Project Overview

Our research has focused on new approaches to computer graphics in which images play a central role in the modeling and rendering processes. We have demonstrated computer-generated animations whose realism is comparable to that of a photograph, yet which retain the flexibility of navigation and interaction of a classical CAD model. Our "image-based" representations can be acquired both quickly and automatically. The representations consist of a collection, or database, of images augmented with calibration information describing each camera's pose and internal parameters. Our rendering system proceeds by interpolating the rays required to construct a desired image. The interpolation process is controlled via a synthetic aperture model coupled with a dynamically variable, user-specified focal plane. Our work has focused on end-to-end solutions for image-based rendering, including devices for image acquisition, algorithms for rendering from these image-based models, and three-dimensional display technologies for autostereoscopic direct viewing of image-based models.
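To make the interpolation step concrete, the following sketch illustrates one way a single synthetic-aperture sample could be computed (this is an illustrative simplification, not our production renderer; the camera representation and all names are hypothetical, and a planar camera arrangement with known poses is assumed): the desired ray is intersected with the user-specified focal plane, the intersection point is reprojected into every source camera within the synthetic aperture, and the reprojected samples are averaged.

```python
import numpy as np

def synthetic_aperture_sample(ray_origin, ray_dir, cameras, images,
                              focal_depth, aperture_radius):
    """Estimate radiance along one desired ray from a set of source views.

    cameras: list of (center, K, R) tuples: camera center (3,), intrinsic
             matrix K (3, 3), and world-to-camera rotation R (3, 3).
    images:  matching list of H x W x 3 float arrays.
    Assumes ray_dir is not parallel to the focal plane z = focal_depth,
    and that the cameras lie near the plane of the desired viewpoint.
    """
    # Intersect the desired ray with the user-specified focal plane.
    t = (focal_depth - ray_origin[2]) / ray_dir[2]
    p = ray_origin + t * ray_dir                  # 3D point on the focal plane

    samples = []
    for (c, K, R), img in zip(cameras, images):
        # Only cameras inside the synthetic aperture contribute to this ray.
        if np.linalg.norm(c[:2] - ray_origin[:2]) > aperture_radius:
            continue
        # Reproject the focal-plane point into this source camera.
        uvw = K @ (R @ (p - c))
        if uvw[2] <= 0:                           # point is behind the camera
            continue
        u, v = int(round(uvw[0] / uvw[2])), int(round(uvw[1] / uvw[2]))
        h, w, _ = img.shape
        if 0 <= v < h and 0 <= u < w:
            samples.append(img[v, u])
    # Averaging blurs scene content that lies off the focal plane, which is
    # exactly the depth-of-field behavior of a large physical aperture.
    return np.mean(samples, axis=0) if samples else np.zeros(3)
```

Under this model, enlarging the aperture radius increases the synthetic depth-of-field blur for content off the focal plane, while varying the focal depth refocuses the rendered image without reacquiring any data.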

In addition to the development of algorithms and representations for rendering image-based models, we plan, as part of this project, to construct a real-time system for acquiring dynamic image-based models. To date, nearly all research in image-based rendering has been limited to static scenes. Constructing a system capable of rendering dynamic models requires building a two-dimensional camera array. Such an apparatus would enable a new class of applications, including three-dimensional teleconferencing and holographic television.


Progress Through June 2000

In the past six months we have refined and calibrated our low-cost acquisition device, which is based on a flat-bed scanner. The current system acquires an 8 by 11 array of images in a single scan, and the resulting scan can be post-processed in a few minutes. We have demonstrated the portability of the system by acquiring outdoor models. We have also acquired larger image-based models (32 by 32, or 1,024, images) using our motion-platform-based camera system.

We have developed a new rendering technique that allows us to compute images using cameras in arbitrary configurations. Previously, our image-reconstruction algorithms required that the source images be acquired along a regular planar grid. Our new "Unstructured Lightfield" method removes this restriction, which allows us to use a series of images acquired from a hand-held camcorder as a model. The new technique also simplifies the construction of a multi-camera array by reducing the need for strict mechanical tolerances. Thus far we have demonstrated this method using footage from camcorders waved around static scenes, and we have also used it to construct our first dynamic models. We believe that this method shows great promise, and we intend to refine it further and submit it for publication.
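Although the method is still unpublished, its flavor can be conveyed with a short sketch. The weighting below is a hypothetical stand-in for our actual blending rules: each source camera is penalized by the angle between the desired ray and that camera's ray through the same focal-plane point, and the best-scoring cameras are blended with weights that fade to zero at a cutoff, so cameras enter and leave the blend smoothly as the viewpoint moves.

```python
import numpy as np

def blend_weights(ray_origin, focal_point, camera_centers, k=4):
    """Blending weights over unstructured source cameras for one desired ray.

    A hypothetical simplification: penalize each camera by the angle between
    the desired ray and that camera's ray through the same focal-plane point,
    then blend the k best with weights that vanish at the (k+1)-th best
    penalty, so a camera's weight reaches zero before it is dropped.
    """
    d = focal_point - ray_origin
    d = d / np.linalg.norm(d)
    penalty = np.empty(len(camera_centers))
    for i, c in enumerate(camera_centers):
        v = focal_point - c
        v = v / np.linalg.norm(v)
        penalty[i] = np.arccos(np.clip(np.dot(d, v), -1.0, 1.0))

    order = np.argsort(penalty)[:k + 1]           # k best plus the cutoff camera
    cutoff = max(penalty[order[-1]], 1e-9)
    weights = np.zeros(len(camera_centers))
    weights[order[:-1]] = 1.0 - penalty[order[:-1]] / cutoff
    weights = np.clip(weights, 0.0, None)
    total = weights.sum()
    return weights / total if total > 0 else weights
```

The fade to zero at the cutoff avoids the popping that a hard nearest-k selection would produce as cameras join and leave the selected set.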

We have also investigated several variations on our autostereoscopic display system in an effort to produce a higher-resolution, larger-format display of the best possible quality. We have begun a quantitative study of the various tradeoffs in building such a system. Our ultimate goal is to build a dynamic display.


Research Plan for the Next Six Months

Our primary goal for the remainder of the year is to fabricate and demonstrate the real-time camera array for acquiring dynamic scenes. To fabricate the array we will manufacture sixteen random-access sensor pods and a motherboard with an interface to a host PC. After the prototype light-field camera is complete, we would like to extend the array to 256 cameras and to design a compatible, higher-resolution camera pod. Extending the array should be straightforward, as the system has been designed to be modular and to support larger array sizes.

We also intend to continue developing and improving our image-based rendering algorithms. In particular, we are interested in extending the field of view in our light-field representations; we believe that this improvement would provide more immersive image-based models. We are considering both optical approaches (fisheye lenses, parabolic mirrors) and computational approaches (mosaicing multiple narrow field-of-view images) to this problem. We are also investigating methods for extending the dynamic range of the acquired images, which would allow us to capture scenes in which light sources are visible simultaneously with objects in shadow. Wide dynamic-range images would let us treat the source images as radiometric measurements and apply a wider range of analyses to them.
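As an illustration of the dynamic-range idea, the sketch below shows a generic multi-exposure merging scheme (not necessarily the method we will adopt; it assumes registered images of a static scene and a linear sensor response, which real cameras generally do not have): each exposure is converted to a per-pixel radiance estimate, and the estimates are averaged with weights that discount under- and over-exposed pixels.

```python
import numpy as np

def merge_exposures(images, exposure_times):
    """Merge differently exposed, registered images into one radiance map.

    images:         list of H x W x 3 arrays with linear values in [0, 1].
    exposure_times: matching list of exposure durations.
    """
    acc, wsum = None, None
    for img, t in zip(images, exposure_times):
        img = np.asarray(img, dtype=np.float64)
        # Trust mid-range pixels most; near-black and near-saturated
        # pixels carry little reliable information.
        w = 1.0 - np.abs(img - 0.5) * 2.0
        radiance = img / t                        # per-pixel radiance estimate
        acc = radiance * w if acc is None else acc + radiance * w
        wsum = w if wsum is None else wsum + w
    return acc / np.maximum(wsum, 1e-9)
```

Because the merged values approximate scene radiance rather than clipped pixel intensities, they can be treated as measurements suitable for further analysis, as noted above.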