Acquiring Rich Architectural Interiors to Support Location-Dependent Computing

MIT9904-20

Seth Teller

We propose to continue developing the capability to acquire rich 2D and 3D models of architectural interiors using a mobile, semi-robotic sensor. These models have many applications in location-dependent computing. For example, they serve as the geometric embedding for information about spaces (offices), people (researchers), devices (printers, etc.), and services (networks). Entering map and model data is at present a tedious manual process, and does not scale to a building's worth, let alone many buildings' worth, of data. For example, it recently took a Master's student one man-year of effort to enter a single seven-story building in AutoCAD. Thus we must develop new, fast ways to "author" useful building models.
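As a rough illustration of the kind of geometric embedding we have in mind, the sketch below shows one possible record for a single space. The class name, fields, and units are assumptions made purely for illustration, not an existing schema:

    from dataclasses import dataclass, field
    from typing import List, Tuple

    Point2D = Tuple[float, float]   # (x, y) in building coordinates, in meters

    @dataclass
    class Space:
        """One room or corridor, carrying both geometry and attached information."""
        room_number: str                  # e.g. "NE43-200" (hypothetical label)
        contour: List[Point2D]            # closed 2D footprint of the space
        floor_elevation: float            # meters above a campus-wide datum
        occupants: List[str] = field(default_factory=list)   # people
        devices: List[str] = field(default_factory=list)     # printers, displays, ...
        services: List[str] = field(default_factory=list)    # networks, etc.
        doors_to: List[str] = field(default_factory=list)    # adjacent room numbers

    # A building model is then simply a collection of such spaces keyed by room
    # number, populated by the authoring tools described below.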

We propose to tackle this project on two fronts: short-term (1-12 months) and long-term (2-4 years). Our short-term efforts will produce datasets of immediate use to several other research efforts in the building. Our long-term efforts will produce techniques for semi-robotic and fully robotic model acquisition, significantly automating the process.

Short-Term:

In the short term (1-6 months), we will develop batch software tools that "compile" unenriched CAD data into useful 2D and 3D maps. MIT maintains unenriched 2D CAD data for every floor of every building on campus. For example, see the URL

http://insite.mit.edu/cgi-bin/cgi-bin-fp-mit/wfpmitindexscript?ifabx+NE43

This URL points to a collection of line segments and text labels representing walls, doors, windows, and room numbers. We will develop algorithms to convert this "line segment soup" into a meaningful 3D model of the entire campus, suitable as the geometric substrate for location-dependent computing. At a minimum, this requires determining each building's position on campus and the elevation of each floor; resolving line segments into closed contours for offices, corridors, lounges, etc.; determining the function of each space by reference to the associated room numbers (and an external database mapping room numbers to space-accounting data); and determining connectivity between spaces. Of course, much more is possible as well. We plan to bring a skeletal model of Technology Square online within a few months, then enrich and expand it to include extended semantic information (room occupants, etc.) and other relevant buildings and spaces on campus.
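As a concrete (and deliberately simplified) illustration of one such step, the sketch below chains wall segments into closed room contours by matching shared endpoints. It is only a sketch: it assumes every vertex is shared by exactly two segments, which real CAD drawings violate (door openings, T-junctions, stray marks), so the production tools will need considerably more robust geometric reasoning.

    from collections import defaultdict

    def snap(p, tol=1e-3):
        # Quantize an endpoint so nearly-coincident CAD vertices compare equal.
        return (round(p[0] / tol) * tol, round(p[1] / tol) * tol)

    def chain_contours(segments, tol=1e-3):
        # segments: list of ((x0, y0), (x1, y1)) wall segments ("segment soup").
        # Returns a list of closed contours, each a list of vertices in order.
        adj = defaultdict(list)          # vertex -> [(neighbor, segment index), ...]
        for i, (a, b) in enumerate(segments):
            a, b = snap(a, tol), snap(b, tol)
            adj[a].append((b, i))
            adj[b].append((a, i))

        used, contours = set(), []
        for i, (a, b) in enumerate(segments):
            if i in used:
                continue
            start, cur = snap(a, tol), snap(b, tol)
            contour = [start]
            used.add(i)
            while cur != start:          # walk from vertex to vertex
                contour.append(cur)
                nxt = None
                for v, j in adj[cur]:
                    if j not in used:
                        used.add(j)
                        nxt = v
                        break
                if nxt is None:          # open chain; not a closed room boundary
                    break
                cur = nxt
            if cur == start:
                contours.append(contour)
        return contours

For a simple rectangular office drawn as four wall segments, this recovers a single four-vertex contour; the recovered contours can then be matched against the drawing's room-number text labels and the external space-accounting database.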

Long-Term:

In the longer term, we propose to continue our research efforts toward an autonomous indoor mapping capability. We have exciting recent results outdoors: we have demonstrated the ability to capture high-resolution images outdoors and register them to within a few centimeters and a tenth of a degree over acquisition areas spanning hundreds of meters. This enables fully automated, computer-vision-based acquisition of outdoor architectural scenes. In contrast to scene models built by hand, or from a few photographs, our models can be produced rapidly and viewed from any synthetic viewpoint. The URL

http://graphics.lcs.mit.edu/~seth/pubs/pubs.html

lists several recent papers about our modeling efforts.

We propose to extend our algorithms to indoor environments. In particular, we propose to deploy an omni-directional video camera on a semi-autonomous rolling platform and acquire dense video observations of the indoor environment. Using our robust image-registration algorithms, we can produce large datasets of calibrated imagery, and from that imagery generate useful 3D models of the environment. By cross-indexing these 3D models with MIT space-accounting and other information, we can produce a visually high-quality, information-rich geometric model for use in many location-aware computing contexts.
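The cross-indexing step is conceptually a join on room numbers. The sketch below illustrates the idea; the field names and the shape of the space-accounting records are assumed for illustration, and the real MIT schema will differ.

    def cross_index(spaces, accounting):
        # spaces:     room number -> geometric model of the space (from acquisition)
        # accounting: room number -> dict of non-geometric attributes
        #             (department, function, occupants, ...); hypothetical schema
        enriched, unmatched = {}, []
        for room, geometry in spaces.items():
            record = accounting.get(room)
            if record is None:
                unmatched.append(room)   # flag for manual review
            else:
                enriched[room] = {"geometry": geometry, **record}
        return enriched, unmatched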

Synergies with Other NTT Efforts:

This project is synergistic with the location-aware devices project proposed by Hari Balakrishnan and John Guttag.

Press Mentions:

We have been the subject of a profile by Sky News in Britain, and have been approached for interviews by New Scientist and The Boston Globe.

Contacts with NTT Personnel:

We have been in e-mail contact with Dr. Tsutomu Horikoshi and Dr. Takayuki Yasuno of NTT regarding our current research activities. I have also participated (with demonstrations and discussions) in several NTT visits to LCS.