Image-Based Synthetic Aperture Rendering

MIT9904-14

Progress Report: July 1, 2000–December 31, 2000

Leonard McMillan and Julie Dorsey


Project Overview

In our research, we have developed new approaches to computer graphics in which images are used as the underlying scene model. In our first year we developed a series of new algorithms and fast techniques for rendering novel views from these "image-based" representations. These representations consist of a collection, or database, of images augmented with calibration information describing each camera's pose and internal parameters. Our rendering methods operate by interpolating the rays needed to reconstruct a desired image from this database. The interpolation process is controlled via a synthetic aperture camera model coupled with a user-controlled focal plane. Our efforts have focused on end-to-end solutions for image-based rendering, including devices for image acquisition, algorithms for rendering from these image-based models, and three-dimensional display technologies for direct viewing of image-based models.
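To make the ray-interpolation process concrete, the following is a minimal sketch of synthetic aperture rendering from a calibrated camera array. It assumes a pinhole projection model, a focal plane at a constant world-space depth, and nearest-neighbor sampling; the function and data-structure names are hypothetical, and this is an illustration of the general technique rather than the project's actual implementation.

import numpy as np

def synthetic_aperture_pixel(cameras, images, ray_origin, ray_dir,
                             focal_depth, aperture_radius):
    """Estimate the color along one desired ray by blending samples
    from nearby cameras in the array (illustrative sketch only).

    cameras: list of dicts with 'center' (3,) and 'P' (3x4 projection).
    images:  list of HxWx3 float arrays, one per camera.
    """
    # 1. Intersect the desired ray with the user-controlled focal plane
    #    (here a plane at z = focal_depth in the common world frame;
    #    assumes ray_dir[2] != 0).
    t = (focal_depth - ray_origin[2]) / ray_dir[2]
    point = ray_origin + t * ray_dir          # 3D point on the focal plane

    color_sum = np.zeros(3)
    weight_sum = 0.0
    for cam, img in zip(cameras, images):
        # 2. Only cameras whose centers fall inside the synthetic
        #    aperture contribute to this ray (planar array assumed).
        if np.linalg.norm(cam['center'][:2] - ray_origin[:2]) > aperture_radius:
            continue
        # 3. Project the focal-plane point into this camera's image.
        uvw = cam['P'] @ np.append(point, 1.0)
        u, v = uvw[:2] / uvw[2]
        h, w, _ = img.shape
        if 0 <= u < w and 0 <= v < h:
            color_sum += img[int(v), int(u)]   # nearest-neighbor sample
            weight_sum += 1.0

    # Points on the focal plane project consistently across cameras
    # and reconstruct sharply; off-plane points are averaged into blur.
    return color_sum / weight_sum if weight_sum > 0 else np.zeros(3)

Because every camera inside the synthetic aperture sees a focal-plane point at a consistent image location, scene content on the focal plane reconstructs sharply while content off the plane is blurred; widening the aperture narrows the apparent depth of field.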

The focus of our second year is on constructing a real-time acquisition device for dynamic image-based representations. To date, nearly all research in image-based rendering has been limited to static scenes. In order to construct a system capable of rendering dynamic models, it is necessary to build a two-dimensional camera array. Such an apparatus would enable a new class of applications, including three-dimensional teleconferencing and holographic television.

Progress Through December 2000

In the past six months, we have concentrated our efforts on the design of the camera module, which we call a pod, that will be replicated to form our camera array. Each pod is composed of a CMOS imager, frame storage, and interface circuitry. In our initial design the interface circuitry was based on an off-the-shelf field-programmable gate array (FPGA). Based on our algorithm development, we have learned that a more capable and flexible interface is desirable. In order to undertake the development of such a camera module, we have begun a collaboration with a local Massachusetts company to develop an intelligent pod with on-board processing and support for industry-standard interfaces.

We are moving forward on a design that replaces the FPGA interface chip with a "one-chip" digital camera solution called the Clarity 2, manufactured by Soundvision Inc. (http://www.soundvisioninc.com). The Clarity 2 is a custom ASIC based on the ARM7, a 32-bit RISC processor. It is designed to be a low-cost, single-chip solution for OEM digital camera designs and has been used in a wide range of products that are shipping in high volumes. We have decided to use the Clarity 2 in our system because of its high level of system integration, the wide range of sensors and system interfaces that it supports, and its advanced software development environment. Soundvision has agreed to assist us in the design of this pod. The choice of the Clarity 2 greatly simplifies the design of the sensor pod: the logic design is based on a stripped-down version of an OEM design kit whose schematics and layout are provided by Soundvision.

For our CMOS imager we currently hope to use a mega-pixel imager developed by Motorola. An interface between the Clarity 2 and the Motorola imager already exists, and Soundvision has developed an OEM design based on this imager.


Research Plan for the Next Six Months

We are in the process of installing the software and hardware development environments necessary for hardware debugging and firmware development, including a cross-compiler environment and in-circuit emulation capability for the ARM7. Our goal is to finalize the pod design within the next two months in collaboration with Soundvision. Soundvision will provide us with a development system so that we can begin firmware development while the camera pods and motherboards are being fabricated. We will also continue our algorithm development efforts and our work on a dynamic autostereoscopic display.