A self-adaptive architecture for interpretation problems
Date: Friday, January 26, 2001
Time: 2-3pm
Speaker: Paul Robertson
Affiliation: MIT AI Lab
Abstract: Much of A.I. is concerned with interpretation problems. Vision programs are concerned with interpreting visual scenes, speech understanding systems with interpreting spoken words, natural language systems with interpreting strings of words, and robot controllers with interpreting the landscape through which the robot must navigate. Programs that learn interpret data as a model, programs that generate code interpret specifications, and programs that follow a plan interpret it as a sequence of actions. Interpretation as a general concept is useful in building systems that must understand complex environments.

Building systems that can interact intelligently with a complex environment requires complex programs. Over the years, numerous architectures have been developed to ease the task of building complex systems that exhibit interesting behavior in the face of a complex environment. Examples of such architectures include blackboards, forward-chaining rule-based systems, schemas, and subsumption.

I will describe a self-adaptive reflective architecture for interpretation problems that has been applied to the problem of interpreting satellite images. I will not describe any of the vision algorithms but will focus on the details of the architecture and how it relates to other approaches.
Location: 545 Technology Square (aka "NE43")
Room: 8th Floor Playroom