BABYL OPTIONS:
Version: 5
Labels:
Note:   This is the header of an rmail file.
Note:   If you are seeing it in rmail,
Note:    it means the file has no messages in it.

1,,
Return-Path: <lawp@media-lab.media.mit.edu>
Received: from media.mit.edu (media-lab.media.mit.edu) by life.ai.mit.edu (4.1/AI-4.10) for welg id AA14831; Wed, 7 Jun 95 10:20:01 EDT
Received: by media.mit.edu (5.57/DA1.0.4.amt)
	id AA15837; Wed, 7 Jun 95 10:19:57 -0400
From: Laurie Pillsbury <lawp@media-lab.media.mit.edu>
Received: by shoreditch.media.mit.edu (8.6.11/DA.WS.1.0.5)
	id KAA01456; Wed, 7 Jun 1995 10:20:23 -0400
Date: Wed, 7 Jun 1995 10:20:23 -0400
Message-Id: <199506071420.KAA01456@shoreditch.media.mit.edu>
To: welg@ai.mit.edu
Subject: Media Lab ICCV Open House
Reply-To: lawp@media.mit.edu

*** EOOH ***
Return-Path: <lawp@media-lab.media.mit.edu>
From: Laurie Pillsbury <lawp@media-lab.media.mit.edu>
Date: Wed, 7 Jun 1995 10:20:23 -0400
To: welg@ai.mit.edu
Subject: Media Lab ICCV Open House
Reply-To: lawp@media.mit.edu


Eric,

Sandy asked me to forward the names and titles of everyone giving
demos on Thursday, June 22.

Please let me know if you have any questions.

Regards,
Laurie
________________________________________________________________________

Adelson, Edward		Lightness, Transparency, and Mid-Level Vision

Agamanolis, Stefan 	Object-oriented television and Cheops processing 
			system

Askey, David		Automated Extraction and Resynthesis of Walkers

Azarbayejani, Ali	Put That There

Basu, Sumit 		Ambient Microphone Speech Recognition

Becker, Shawn 		Semiautomatic 3D model building and lens distortion 
			correction

Brand, Matt		Physics-based scene understanding

Campbell, Lee		Phase Space Recognition of Human Body Motion

Casey, Mike 		Vision-Steered Phased-Array Microphones

Darrell, Trevor		ALIVE, Active Face Tracking/Recognition/Pose Estimation

Essa, Irfan		Recognizing Facial Expressions 

Gardner, Bill		Transaural Rendering

Intille, Stephen	Closed-World Tracking

Liu, Fang		Wold-based Texture Modeling

Mann, Steve		Video Orbits for Mosaicing and Resolution Enhancement

Minka, Tom		Photobook: Content-Based Image Retrieval

Moghaddam, Baback	Large Database Face Recognition, and 
                        Active Face Recognition/Tracking/Pose Recognition

Nastar, Chahab		Thin-plate Models for Motion Analysis and 
			Object Recognition

Neveitt, Bill		Sorting Textures using Cascaded Sub-band Energy 
			Statistics

Niyogi, Sourabh		Detecting Kinetic Occlusion	

Pinhanez, Claudio 	SmartCam

Popat, Kris		High-Dimensional Probabilistic Modeling 

Sherstinsky, Alex	M-Lattice -- Nonlinear Dynamics For Vision and 
			Image Processing

Starner, Thad		Real-time visual recognition of American Sign Language 

Szummer, Martin		Scene Cut Detection and Motion Texture Modeling

Wachman, Josh		Query by Content in Video Sequences

Wang, John		Layered Image Representation

Weiss, Yair		Non-Rigid Motion Segmentation: Psychophysics and 
			Modeling

Wilson, Andrew		Learning Visual Behavior for Gesture Analysis

Wren, Chris		ALIVE



1, answered,,
Return-Path: <lawp@media-lab.media.mit.edu>
Received: from aleve.media.mit.edu by life.ai.mit.edu (4.1/AI-4.10) for welg id AA25097; Tue, 13 Jun 95 16:05:14 EDT
Received: from shoreditch.media.mit.edu by aleve.media.mit.edu; (5.65/1.1/06Jun95-8.2MPM)
	id AA06056; Tue, 13 Jun 1995 16:05:12 -0400
From: Laurie Pillsbury <lawp@media-lab.media.mit.edu>
Received: by shoreditch.media.mit.edu (8.6.11/DA.WS.1.0.5)
	id QAA02385; Tue, 13 Jun 1995 16:05:36 -0400
Date: Tue, 13 Jun 1995 16:05:36 -0400
Message-Id: <199506132005.QAA02385@shoreditch.media.mit.edu>
To: welg@ai.mit.edu
Subject: Demo Descriptions, Media Lab
Reply-To: lawp@media.mit.edu

*** EOOH ***
Return-Path: <lawp@media-lab.media.mit.edu>
From: Laurie Pillsbury <lawp@media-lab.media.mit.edu>
Date: Tue, 13 Jun 1995 16:05:36 -0400
To: welg@ai.mit.edu
Subject: Demo Descriptions, Media Lab
Reply-To: lawp@media.mit.edu

ADELSON, EDWARD		"Lightness, Transparency, and Mid-Level Vision"

Some new brightness illusions will be demonstrated.  These illusions
indicate the importance of mid-level mechanisms involving transparency,
occlusion, and lighting.

AGAMANOLIS, STEFAN 	"Object-oriented television and Cheops processing 
			system"

Structured video is represented as 2D and 3D objects rather than
pixels or frames.  These objects are "transmitted" together with a
script that tells how to assemble them to make a television program.
Cheops is a data-flow computer built in the lab which can display such
"structured video" programs in real time.


ASKEY, DAVID		"Automated Extraction and Resynthesis of Walkers"

Automated layer decomposition of walkers in an image sequence:
an approach for efficient coding and resynthesis of walking
motion using component layers.

AZARBAYEJANI, ALI	"Put That There/Models from Video"

We will show a wide-baseline stereo system for tracking people in 3-D
based on symbolic correspondence.  The system is self-calibrated and
the output is used for gestural control in a 3-D audio visual
environment.

We will also show a system for building 3D models from video, which is
based on our Structure-from-Motion research, described in last month's
IEEE PAMI.

BASU, SUMIT 		"Ambient Microphone Speech Recognition"

This demonstration illustrates the use of an array of
microphones along with visual cues to perform speech
recognition "at a distance" in a noisy, open environment.
	
BECKER, SHAWN 		"Semiautomatic 3D model building and lens distortion 
			correction"

Reconstructing camera parameters, planar 3-D geometry and surface
texture, given one or more views of a scene with pre-selected parallel
and coplanar edges.  This technique has been used to generate a 3-D
textured database from a set of still images taken with an
uncalibrated 35mm camera. This technique has also been used to
determine 3-D positions of actors from video.

BRAND, MATT		"Physics-based scene understanding"

Knowledge-intensive vision systems can understand scenes with complex
visual and causal structure.  This demo shows the visual analysis and
explanation of a variety of artifacts, including mechanical transmissions.

CAMPBELL, LEE		"Phase Space Recognition of Human Body Motion"

This work presents a method for representing and recognizing human
body motion. It identifies sets of constraints that are diagnostic
of a movement; different constraints identify different movements.

CASEY, MIKE 		"Vision-Steered Phased-Array Microphones"

A beam-forming microphone array is used to capture noisy speech input 
from the ALIVE space. Using the position information provided by the 
vision system, we obtain audio signal enhancements of up to 10 dB.

DARRELL, TREVOR		"ALIVE, Active Face Tracking/Recognition/Pose 
			Estimation"

We will show active face tracking, recognition and pose estimation in
the ALIVE system. Users can walk about a room and interact with autonomous
virtual creatures in a `Magic Mirror' paradigm; the creatures can
recognize/track/respond to the user's face as well as body position and
hand gestures.

ESSA, IRFAN		"Recognizing Facial Expressions"

We describe our methods for extracting detailed representations
of facial motion from video.  We will show how these representations
can be used for coding, analysis, recognition, tracking and synthesis
of facial expressions.

GARDNER, BILL		"Transaural Rendering"

The STIVE demo will feature a three-dimensional audio system which
uses only two speakers to create the illusion of sounds emanating from
arbitrary directions around the listener.

INTILLE, STEPHEN	"Closed-World Tracking"

Tracking for video annotation using contextual information to
dynamically select tracked features. Example domain: football plays.

LIU, FANG		"Wold-based Texture Modeling"

We apply the Wold-based texture model to image database retrieval.
The Wold model provides perceptually sensible features which
correspond well to the most important reported dimensions of human
texture perception -- periodicity, directionality, and randomness.

MANN, STEVE		"Video Orbits for Mosaicing and Resolution Enhancement/
			 Wearable Computers"

A new featureless multiscale method estimates the homographic coordinate 
transformation between a pair of images.  This method is used to make 
pictures with a "visual filter" equipped with image acquisition and display 
capability.  Standing in a single location, one scans a scene onto a large 
"video canvas," where each new frame undergoes the appropriate homographic 
coordinate transformation to insert it correctly into the image mosaic.

I will also show my work on wearable computers and NetCam.

MINKA, TOM		"Photobook: Content-Based Image Retrieval"

Content-based image annotation is complicated by the fact that feature
salience varies with context. FourEyes indexes images using several
features which are consulted independently, based on user interaction.

MOGHADDAM, BABACK	"Large Database Face Recognition, and 
                        Active Face Recognition/Tracking/Pose Recognition"

An automatic system for detection, recognition and model-based coding
of human faces is presented. The system is able to detect human faces
(at various scales and different poses) in the input scene and
geometrically align them prior to recognition and compression. The
system has been tested successfully on over 2,000 faces from ARPA's
FERET program.

NASTAR, CHAHAB		"Thin-plate Models for Motion Analysis and 
			Object Recognition"

We present a deformable model for nonrigid motion tracking
(e.g. heart motion). A similar model can be
used for object recognition (e.g. face recognition).

NIYOGI, SOURABH		"Detecting Kinetic Occlusion"

Detecting motion boundaries in image sequences through
local spatiotemporal junction analysis; deducing ordinal depth locally
from accretion and deletion cues.

PINHANEZ, CLAUDIO 	"SmartCam"

A SmartCam is a robotic camera which operates in a TV studio without
a cameraman, using computer vision to find objects and people in
complex scenes.  The development of SmartCams requires new methods and
ideas in context-based vision, action recognition, and architecture of
computer vision systems.

POPAT, KRIS		"High-Dimensional Probabilistic Modeling"

Improved probabilistic models often mean better performance in a
variety of systems.  Accurate modeling usually requires
high-dimensional modeling, with its attendant difficulties.
We explore some approaches to high-dimensional modeling and
examine their application to image compression and restoration,
and to texture synthesis and classification.

SHERSTINSKY, ALEX	"M-Lattice -- Nonlinear Dynamics For Vision and 
			Image Processing"

This research investigates the mathematical properties of the
Reaction-Diffusion model and its derivative, the new "M-Lattice" system.
The Reaction-Diffusion model was originated by Alan Turing to explain
morphogenesis; we demonstrate these models' applications to
computational vision and image processing.

STARNER, THAD		"Real-time visual recognition of American Sign Language
			/Wearable Computing"

Full-sentence, 40-word lexicon ASL is recognized with an accuracy of
99.2% in real-time without explicit modelling of the fingers.  One
color camera is used for tracking.

I will also show my work on wearable computers and remembrance agents.

SZUMMER, MARTIN		"Scene Cut Detection and Motion Texture Modeling"

1) A robust algorithm for finding cuts in video -- to "skip ahead to
the next shot."  2) A stochastic motion model for estimating and
resynthesizing spatio-temporal patterns (water, smoke, etc.).

WACHMAN, JOSH		"Query by Content in Video Sequences"

Unsupervised, Cross-Modal Characterization of Discourse in Tonight
Show Monologues: Preliminary results from analysis of audio and 
visual-kinesic features, as processed with the isodata clustering
algorithm, demonstrate a bottom-up approach to discourse analysis.

WANG, JOHN		"Layered Image Representation"

We will demonstrate novel techniques in motion estimation and
segmentation based on mid-level vision concepts for applications in
image coding, data compression, video special effects, and 3D
structure recovery.

WEISS, YAIR		"Non-Rigid Motion Segmentation: Psychophysics and 
			Modeling"

Estimating non-rigid motion requires integrating some constraints
while segmenting out others. We will show psychophysical demonstrations
which reveal how the human visual system solves this dilemma.

WILSON, ANDREW		"Learning Visual Behavior for Gesture Analysis"

The "visual behavior" of gesture is recovered from a number of example image
sequences by concurrently training the temporal model and multiple models of
the visual scene.  The training process is demonstrated.

WREN, CHRIS		"ALIVE"

We will show active face tracking, recognition and pose estimation in
the ALIVE system. Users can walk about a room and interact with autonomous
virtual creatures in a `Magic Mirror' paradigm; the creatures can
recognize/track/respond to the user's face as well as body position and
hand gestures.

