
CSAIL Digital Archive - Artificial Intelligence Laboratory Series
Publications 500 through 599

AI Publications. Last update: Sun Mar 19 05:05:02 2006


AIM-502A

Author[s]: Gerald Jay Sussman and Guy Lewis Steele, Jr.

Constraints: A Language for Expressing Almost-Hierarchical Descriptions

August 1981

ftp://publications.ai.mit.edu/ai-publications/500-999/AIM-502a.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-502a.pdf

We present an interactive system organized around networks of constraints rather than the programs which manipulate them. We describe a language of hierarchical constraint networks. We describe one method of deriving useful consequences of a set of constraints which we call propagation. Dependency analysis is used to spot and track down inconsistent subsets of a constraint set. Propagation of constraints is most flexible and useful when coupled with the ability to perform symbolic manipulations on algebraic expressions. Such manipulations are in turn best expressed as alterations or augmentations of the constraint network. Almost-Hierarchical Constraint Networks can be constructed to represent the multiple viewpoints used by engineers in the synthesis and analysis of electrical networks. These multiple viewpoints are used in terminal equivalence and power arguments to reduce the apparent synergy in a circuit so that it can be attacked algebraically.
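
A loose sketch of the propagation idea follows (in Python, not the memo's actual constraint language; the cell and adder names are purely illustrative). A single adder constraint deduces whichever of its three values is missing, and larger networks chain such local deductions:

    class Cell:
        """Holds at most one value; None means 'not yet deduced'."""
        def __init__(self, name):
            self.name, self.value = name, None

    class Adder:
        """Constraint a + b = c; fills in the one missing value, if any."""
        def __init__(self, a, b, c):
            self.a, self.b, self.c = a, b, c
        def propagate(self):
            a, b, c = self.a.value, self.b.value, self.c.value
            if a is not None and b is not None and c is None:
                self.c.value = a + b
            elif a is not None and c is not None and b is None:
                self.b.value = c - a
            elif b is not None and c is not None and a is None:
                self.a.value = c - b

    x, y, z = Cell("x"), Cell("y"), Cell("z")
    plus = Adder(x, y, z)
    x.value, z.value = 3, 10
    plus.propagate()          # deduces y = 7 from x and z
    print(y.value)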


AIM-502

Author[s]: Guy Lewis Steele, Jr. and Gerald Jay Sussman

Constraints

November 1978

ftp://publications.ai.mit.edu/ai-publications/500-999/AIM-502.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-502.pdf

We present an interactive system organized around networks of constraints rather than the programs which manipulate them. We describe a language of hierarchical constraint networks. We describe one method of deriving useful consequences of a set of constraints which we call propagation. Dependency analysis is used to spot and track down inconsistent subsets of a constraint set. Propagation of constraints is most flexible and useful when coupled with the ability to perform symbolic manipulations on algebraic expressions. Such manipulations are in turn best expressed as alterations or augmentations of the constraint network. Numerous diagrams ornament the text.


AITR-503

Author[s]: Howard Elliot Shrobe

Dependency Directed Reasoning for Complex Program Understanding

April 1979

ftp://publications.ai.mit.edu/ai-publications/500-999/AITR-503.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AITR-503.pdf

Artificial Intelligence research involves the creation of extremely complex programs which must possess the capability to introspect, learn, and improve their expertise. Any truly intelligent program must be able to create procedures and to modify them as it gathers information from its experience. [Sussman, 1975] produced such a system for a 'mini-world'; but truly intelligent programs must be considerably more complex. A crucial stepping stone in AI research is the development of a system which can understand complex programs well enough to modify them. There is also a complexity barrier in the world of commercial software which is making the cost of software production and maintenance prohibitive. Here too a system which is capable of understanding complex programs is a necessary step. The Programmer's Apprentice Project [Rich and Shrobe, 76] is attempting to develop an interactive programming tool which will help expert programmers deal with the complexity involved in engineering a large software system. This report describes REASON, the deductive component of the programmer's apprentice. REASON is intended to help expert programmers in the process of evolutionary program design. REASON utilizes the engineering techniques of modelling, decomposition, and analysis by inspection to determine how modules interact to achieve the desired overall behavior of a program. REASON coordinates its various sources of knowledge by using a dependency-directed structure which records the justification for each deduction it makes. Once a program has been analyzed these justifications can be summarized into a teleological structure called a plan which helps the system understand the impact of a proposed program modification.


AIM-505

Author[s]: Carl Hewitt, Giuseppe Attardi and Henry Lieberman

Specifying and Proving Properties of Guardians for Distributed Systems

June 1979

ftp://publications.ai.mit.edu/ai-publications/500-999/AIM-505.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-505.pdf

In a distributed system where many processors are connected by a network and communicate using message passing, many users can be allowed to access the same facilities. A public utility is usually an expensive or limited resource whose use has to be regulated. A GUARDIAN is an abstraction that can be used to regulate the use of resources by scheduling their access, providing protection, and implementing recovery from hardware failures. We present a language construct called a PRIMITIVE SERIALIZER which can be used to express efficient implementations of guardians in a modular fashion. We have developed a proof methodology for proving strong properties of network utilities, e.g. that the utility is guaranteed to respond to each request which it is sent. This proof methodology is illustrated by proving properties of a guardian which manages two hardcopy printing devices.


AIM-506

Author[s]: Charles Rich, Howard E. Shrobe and Richard C. Waters

Computer Aided Evolutionary Design for Software Engineering

January 1979

ftp://publications.ai.mit.edu/ai-publications/500-999/AIM-506.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-506.pdf

We report on a partially implemented interactive computer aided design tool for software engineering. A distinguishing characteristic of our project is its concern for the evolutionary character of software systems. Our project draws a distinction between algorithms and systems, centering its attention on support for the system designer. Although verification has played a large role in recent research, our perspective suggests that the complexity and evolutionary nature of software systems require a number of additional techniques, which are described in this paper.


AIM-507

Author[s]: Howard E. Shrobe, Richard C. Waters and Gerald J. Sussman

A Hypothetical Monologue Illustrating the Knowledge Underlying Program Analysis

January 1979

ftp://publications.ai.mit.edu/ai-publications/500-999/AIM-507.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-507.pdf

Automated Program Analysis is the process of discovering decompositions of a system into sub-units such that the behavior of the whole program can be inferred from the behavior of its parts. Analysis can be employed to increase the explanatory power of a program understanding system. We identify several techniques which are useful for automated program analysis. Chief among these is the identification and classification of the macro-scale units of programming knowledge which are characteristic of the problem domain. We call these plans. This paper presents a summary of how plans can be used in program analysis in the form of a hypothetical monologue. We also show a small catalogue of plans which are characteristic of AI programming. Finally, we present some techniques which facilitate plan recognition.


AIM-510

Author[s]: W.E.L. Grimson

Differential Geometry, Surface Patches and Convergence Methods

February 1979

ftp://publications.ai.mit.edu/ai-publications/500-999/AIM-510.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-510.pdf

The problem of constructing a surface from the information provided by the Marr-Poggio theory of human stereo vision is investigated. It is argued that not only does this theory provide explicit boundary conditions at certain points in the image, but that the imaging process also provides implicit conditions on all other points in the image. This argument is used to derive conditions on possible algorithms for computing the surface. Additional constraining principles are applied to the problem; specifically that the process be performable by a local-support parallel network. Some mathematical tools, differential geometry, Coons surface patches and iterative methods of convergence, relevant to the problem of constructing the surface are outlined. Specific methods for actually computing the surface are examined.


AITR-512

Author[s]: Kent A. Stevens

Surface Perception from Local Analysis of Texture and Contour

February 1980

ftp://publications.ai.mit.edu/ai-publications/500-999/AITR-512.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AITR-512.pdf

The visual analysis of surface shape from texture and surface contour is treated within a computational framework. The aim of this study is to determine valid constraints that are sufficient to allow surface orientation and distance (up to a multiplicative constant) to be computed from the image of surface texture and of surface contours.


AIM-513

Author[s]: Kenneth M. Kahn

Making Aesthetic Choices

March 1979

ftp://publications.ai.mit.edu/ai-publications/500-999/AIM-513.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-513.pdf

A framework is presented for making choices that are primarily constrained by aesthetic, as opposed to pragmatic, considerations. An example of the application of this framework is a computer system called “Ani”, capable of making simple computer animation in response to high-level incomplete story descriptions. Aesthetic choice is presented as a parallel computation in which each choice point gathers together and evaluates suggestions. When faced with difficulties these choices can be postponed. The order in which inter-dependent choices are made is strongly influenced by the focus of the problem.


AIM-514

Author[s]: Guy Lewis Steele, Jr. and Gerald Jay Sussman

Design of LISP-based Processors, or SCHEME: A Dielectric LISP, or Finite Memories Considered Harmful, or LAMBDA: The Ultimate Opcode

March 1979

ftp://publications.ai.mit.edu/ai-publications/500-999/AIM-514.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-514.pdf

We present a design for a class of computers whose 'instruction sets' are based on LISP. LISP, like traditional stored-program machine languages and unlike most high-level languages, conceptually stores programs and data in the same way and explicitly allows programs to be manipulated as data. LISP is therefore a suitable language around which to design a stored-program computer architecture. LISP differs from traditional machine languages in that the program/data storage is conceptually an unordered set of linked record structures of various sizes, rather than an ordered, indexable vector of integers or bit fields of fixed size. The record structures can be organized into trees or graphs. An instruction set can be designed for programs expressed as such trees. A processor can interpret these trees in a recursive fashion, and provide automatic storage management for the record structures. We describe here the basic ideas behind the architecture, and for concreteness give a specific instruction set (on which variations are certainly possible). We also discuss the similarities and differences between these ideas and those of traditional architectures. A prototype VLSI microprocessor has been designed and fabricated for testing. It is a small-scale version of the ideas presented here, containing a sufficiently complete instruction interpreter to execute small programs, and a rudimentary storage allocator. We intend to design and fabricate a full-scale VLSI version of this architecture in 1979.
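
As a rough, hypothetical miniature of the idea that programs are linked record structures interpreted recursively (written in Python rather than the memo's LISP, with nested tuples standing in for linked records and none of the chip's typed pointers or storage management):

    def evaluate(expr, env):
        if isinstance(expr, str):           # variable reference
            return env[expr]
        if not isinstance(expr, tuple):     # self-evaluating datum
            return expr
        op, *args = expr
        if op == "lambda":                  # ("lambda", ("x",), body)
            params, body = args
            return lambda *vals: evaluate(body, {**env, **dict(zip(params, vals))})
        if op == "if":
            test, then, alt = args
            return evaluate(then if evaluate(test, env) else alt, env)
        f = evaluate(op, env)               # application: evaluate operator, then operands
        return f(*[evaluate(a, env) for a in args])

    env = {"+": lambda a, b: a + b}
    print(evaluate((("lambda", ("x",), ("+", "x", 1)), 41), env))   # prints 42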


AITR-515

Author[s]: Matthew Thomas Mason

Compliance and Force Control for Computer Controlled Manipulators

April 1979

ftp://publications.ai.mit.edu/ai-publications/500-999/AITR-515.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AITR-515.pdf

Compliant motion occurs when the manipulator position is constrained by the task geometry. Compliant motion may be produced either by a passive mechanical compliance built in to the manipulator, or by an active compliance implemented in the control servo loop. The second method, called force control, is the subject of this report. In particular, this report presents a theory of force control based on formal models of the manipulator, and the task geometry. The ideal effector is used to model the manipulator, and the task geometry is modeled by the ideal surface, which is the locus of all positions accessible to the ideal effector. Models are also defined for the goal trajectory, position control, and force control.


AIM-516

Author[s]: Marvin Minsky

K-Lines: A Theory of Memory

June 1979

ftp://publications.ai.mit.edu/ai-publications/500-999/AIM-516.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-516.pdf

Most theories of memory suggest that when we learn or memorize something, some “representation” of that something is constructed, stored and later retrieved. This raises questions like: How is information represented? How is it stored? How is it retrieved? Then, how is it used? This paper tries to deal with all these at once. When you get an idea and want to “remember” it, you create a “K-line” for it. When later activated, the K-line induces a partial mental state resembling the one that created it. A “partial mental state” is a subset of those mental agencies operating at one moment. This view leads to many ideas about the development, structure and physiology of Memory, and about how to implement frame-like representations in a distributed processor.


AIM-517

Author[s]: Anna R. Bruss

Some Properties of Discontinuities in the Image Irradiance Equation

April 1979

ftp://publications.ai.mit.edu/ai-publications/500-999/AIM-517.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-517.pdf

The image irradiance equation is a first order partial differential equation. Part of this paper is a “comprehensive” guide to solving this kind of equation. The special structure of the image irradiance equation is explored in order to understand the relation of discontinuities in the surface properties and in the image intensities.


AIM-518

Author[s]: D. Marr and E. Hildreth

Theory of Edge Detection

April 1979

ftp://publications.ai.mit.edu/ai-publications/500-999/AIM-518.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-518.pdf

A theory of edge detection is presented.


AIM-519A

Author[s]: Richard M. Stallman

EMACS: The Extensible, Customizable, Self-Documenting Display Editor

March 1981

ftp://publications.ai.mit.edu/ai-publications/500-999/AIM-519A.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-519A.pdf

EMACS is a display editor which is implemented in an interpreted high level language. This allows users to extend the editor by replacing parts of it, to experiment with alternative command languages, and to share extensions which are generally useful. The ease of extension has contributed to the growth of a large set of useful features. This paper describes the organization of the EMACS system, emphasizing the way in which extensibility is achieved and used.


AIM-520

Author[s]: Patrick H. Winston

Learning and Reasoning by Analogy: The Details

April 1979

ftp://publications.ai.mit.edu/ai-publications/500-999/AIM-520.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-520.pdf

We use analogy when we say something is a Cinderella story and when we learn about resistors by thinking about water pipes. We also use analogy when we learn subjects like Economics, Medicine and Law. This paper presents a theory of analogy and describes an implemented system that embodies the theory. The specific competence to be understood is that of using analogies to do certain kinds of learning and reasoning. Learning takes place when analogy is used to generate a constraint description in one domain, given a constraint description in another, as when we learn Ohm’s law by way of knowledge about water pipes. Reasoning takes place when analogy is used to answer questions about one situation, given another situation that is supposed to be a precedent, as when we answer questions about Hamlet by way of knowledge about Macbeth. The input language used and the treatment of words implying CAUSE have been improved. AIM 632, “Learning New Principles from Precedents and Exercises,” describes these improvements and subsequent work. It is, at this writing, in publication in the Artificial Intelligence Journal.


AIM-521

Author[s]: Jon Doyle

A Truth Maintenance System

June 1979

ftp://publications.ai.mit.edu/ai-publications/500-999/AIM-521.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-521.pdf

To choose their actions, reasoning programs must be able to make assumptions and subsequently revise their beliefs when discoveries contradict these assumptions. The Truth Maintenance System (TMS) is a problem solver subsystem for performing these functions by recording and maintaining the reasons for program beliefs. Such recorded reasons are useful in constructing explanations of program actions and in guiding the course of action of a problem solver. This paper describes (1) the representations and structure of the TMS, (2) the mechanisms used to revise the current set of beliefs, (3) how dependency-directed backtracking changes the current set of assumptions, (4) techniques for summarizing explanations of beliefs, (5) how to organize problem solvers into “dialectically arguing” modules, (6) how to revise models of the belief systems of others, and (7) methods for embedding control structures in patterns of assumptions. We stress the need of problem solvers to choose between alternative systems of beliefs, and outline a mechanism by which a problem solver can employ rules guiding choices of what to believe, what to want, and what to do.
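
A hypothetical miniature of the justification-recording idea follows (Doyle's TMS labels nodes in or out and revises those labels incrementally; this Python sketch merely recomputes support from a set of premises, so it shows the bookkeeping, not the algorithm):

    class Node:
        def __init__(self, name):
            self.name = name
            self.justifications = []          # each justification: a list of supporting nodes
        def believed(self, premises):
            if self.name in premises:
                return True
            return any(all(s.believed(premises) for s in just)
                       for just in self.justifications)

    p, q, r = Node("p"), Node("q"), Node("r")
    r.justifications.append([p, q])           # r is justified whenever both p and q hold

    print(r.believed({"p", "q"}))             # True
    print(r.believed({"p"}))                  # False: retracting q withdraws support for r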


AIM-522

Author[s]: Kent A. Stevens

Constraints on the Visual Interpretation of Surface Contours

March 1979

ftp://publications.ai.mit.edu/ai-publications/500-999/AIM-522.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-522.pdf

This article examines the computational problems underlying the 3-D interpretation of surface contours. A surface contour is the image of a curve across a physical surface, such as the edge of a shadow cast across a surface, a gloss contour, wrinkle, seam, or pigmentation marking. Surface contours by and large are not as restricted as occluding contours and therefore pose a more difficult interpretation problem. Nonetheless, we are adept at perceiving a definite 3-D surface from even simple line drawings (e.g. graphical depictions of continuous functions of two variables). The solution of a specific surface shape comes by assuming that the physical curves are particularly restricted in their geometric relationship to the underlying surface. These geometric restrictions are examined.


AIM-523

Author[s]: Jeanne Bamberger

Logo Music Projects: Experiments in Musical Perception and Design

May 1979

ftp://publications.ai.mit.edu/ai-publications/500-999/AIM-523.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-523.pdf

This memo gives a series of experiments which one can use to get a better understanding of how music works and how music is apprehended by an active and knowing listener. It does so by using the children’s computer language, LOGO, and capitalizes on the use of procedural thinking and other programming concepts (for example, the use of variables) in the designing and analysis of melody and rhythm.


AIM-524

Author[s]: D. Marr and S. Ullman

Directional Selectivity and Its Use in Early Visual Processing

June 1979

ftp://publications.ai.mit.edu/ai-publications/500-999/AIM-524.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-524.pdf

The construction of directionally selective units and their use in the processing of visual motion are considered.


AIM-526

Author[s]: Gerald Jay Sussman, Jack Holloway and Thomas F. Knight, Jr.

Computer Aided Evolutionary Design for Digital Integrated Systems

May 1979

ftp://publications.ai.mit.edu/ai-publications/500-999/AIM-526.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-526.pdf

We propose to develop a computer aided design tool which can help an engineer deal with system evolution from the initial phases of design right through the testing and maintenance phases. We imagine a design system which can function as a junior assistant. It provides a total conversational and graphical environment. It remembers the reasons for design choices and can retrieve and do simple deductions with them. Such a system can provide a designer with information relevant to a proposed modification and can help him understand the consequences of simple modifications by pointing out the structures and functions which will be affected by modifications. The designer’s assistant will maintain a vast amount of such annotation on the structure and function of the system being evolved and will be able to retrieve the appropriate annotation and remind the designer about the features which he installed too long ago to remember, or which were installed by other designers who work with him. We will develop the fundamental principles behind such a designer’s assistant and we will construct a prototype system which meets many of these desiderata.


AIM-527

Author[s]: Guy Lewis Steele, Jr. and Gerald Jay Sussman

The Dream of a Lifetime: A Lazy Scoping Mechanism

November 1979

ftp://publications.ai.mit.edu/ai-publications/500-999/AIM-527.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-527.pdf

We define a “rack”, a data abstraction hybrid of a register and a stack. It is used for encapsulating the behavior of the kind of register whose contents may have an extent which requires that it be saved during the execution of an unknown piece of code. A rack can be implemented cleverly to achieve performance benefits over the usual implementation of a stack discipline. The basic idea is that we interpose a state machine controller between the rack abstraction and its stack/registers. This controller can act as an on-the-fly run-time peephole optimizer, eliding unnecessary stack operations. We demonstrate the sorts of savings one might expect by using cleverly implemented racks in the context of a particular caller-saves implementation of an interpreter for the SCHEME dialect of LISP. For sample problems we can expect that only one out of every four pushes that would be done by a conventional machine will be done by a clever version.
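
A hypothetical sketch of the lazy-save behavior such a controller might implement (illustrative Python only; the memo's racks sit inside the register machinery of a SCHEME interpreter): a save turns into a real push only if the register is overwritten before the matching restore, so balanced save/restore pairs cost nothing:

    class Rack:
        def __init__(self):
            self.reg = None        # the register part
            self.stack = []        # the backing stack
            self.pending = 0       # saves not yet materialized as pushes
        def save(self):
            self.pending += 1
        def store(self, value):
            if self.pending:       # old contents still needed later: materialize the pushes
                self.stack.extend([self.reg] * self.pending)
                self.pending = 0
            self.reg = value
        def restore(self):
            if self.pending:       # register never overwritten: elide the push/pop pair
                self.pending -= 1
            else:
                self.reg = self.stack.pop()

    r = Rack()
    r.store("a"); r.save(); r.restore()               # no real stack traffic at all
    r.store("b"); r.save(); r.store("c"); r.restore()
    print(r.reg)                                      # "b": exactly one real push happened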


AIM-528

Author[s]: Thomas F. Knight, Jr., David A. Moon, Jack Holloway and Guy L. Steele, Jr.

CADR

May 1979

ftp://publications.ai.mit.edu/ai-publications/500-999/AIM-528.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-528.pdf

The CADR machine, a revised version of the CONS machine, is a general-purpose, 32-bit microprogrammable processor which is the basis of the Lisp-machine system, a new computer system being developed by the Laboratory as a high-performance, economical implementation of Lisp. This paper describes the CADR processor and some of the associated hardware and low-level software.


AITR-529

Author[s]: Johan de Kleer

Causal and Teleological Reasoning in Circuit Recognition

September 1979

ftp://publications.ai.mit.edu/ai-publications/500-999/AITR-529.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AITR-529.pdf

This thesis presents a theory of human-like reasoning in the general domain of designed physical systems, and in particular, electronic circuits. One aspect of the theory, causal analysis, describes how the behavior of individual components can be combined to explain the behavior of composite systems. Another aspect of the theory, teleological analysis, describes how the notion that the system has a purpose can be used to aid this causal analysis. The theory is implemented as a computer program, which, given a circuit topology, can construct by qualitative causal analysis a mechanism graph describing the functional topology of the system. This functional topology is then parsed by a grammar for common circuit functions. Ambiguities are introduced by the approximate, qualitative nature of the analysis. For example, there are often several possible mechanisms which might describe the circuit's function. These are disambiguated by teleological analysis. The requirement that each component be assigned an appropriate purpose in the functional topology imposes a severe constraint which eliminates all the ambiguities. Since both analyses are based on heuristics, the chosen mechanism is a rationalization of how the circuit functions, and does not guarantee that the circuit actually does function. This type of coarse understanding of circuits is useful for analysis, design and troubleshooting.


AIM-530

Author[s]: David A. Smith

Using Enhanced Spherical Images for Object Representation

May 1979

ftp://publications.ai.mit.edu/ai-publications/500-999/AIM-530.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-530.pdf

The processes involved in vision, manipulation, and spatial reasoning depend greatly on the particular representation of three-dimensional objects used. A novel representation, based on concepts of differential geometry, is explored. Special attention is given to properties of the enhanced spherical image model, reconstruction of objects from their representation, and recognition of similarity with prototypes. Difficulties associated with representing smooth and non-convex bodies are also discussed.


AIM-531

Author[s]: Mitchell P. Marcus

An Overview of a Theory of Syntactic Recognition for Natural Language

July 1979

ftp://publications.ai.mit.edu/ai-publications/500-999/AIM-531.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-531.pdf

Assume that the syntax of natural language can be parsed by a left-to-right deterministic mechanism without facilities for parallelism or backup. It will be shown that this “determinism” hypothesis, explored within the context of the grammar of English, leads to a simple mechanism, a grammar interpreter, having the following properties: (a) Simple rules of grammar can be written for this interpreter which capture the generalizations behind various linguistic phenomena, despite the seeming difficulty of capturing such generalizations in the framework of a processing model for recognition. (b) The structure of the grammar interpreter is such that it cannot parse sentences which violate either of two constraints which Chomsky claims are linguistic universals. This result depends in part upon the computational use of Chomsky’s notion of Annotated Surface Structure. (c) The grammar interpreter provides a simple explanation for the difficulty caused by “garden path” sentences, such as “The cotton clothing is made of grows in Mississippi”. To the extent that these properties, all of which reflect deep properties of natural language, follow from the original hypothesis, they provide indirect evidence for the truth of this assumption. This memo is an abridged form of several topics discussed at length in [Marcus 77]; it does not discuss the mechanism used to parse noun phrases nor the kinds of interaction between syntax and semantics discussed in that work.


AIM-533

Author[s]: John M. Hollerbach

A Recursive Lagrangian Formulation of Manipulator Dynamics

June 1980

ftp://publications.ai.mit.edu/ai-publications/500-999/AIM-533.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-533.pdf

An efficient Lagrangian formulation of manipulator dynamics has been developed. The efficiency derives from recurrence relations for the velocities, accelerations, and generalized forces. The number of additions and multiplications varies linearly with the number of joints, as opposed to past Lagrangian dynamics formulations with an n^4 dependence. With this formulation it should be possible in principle to compute the Lagrangian dynamics in real time. The computational complexities of this and other dynamics formulations including recent Newton-Euler formulations and tabular formulations are compared. It is concluded that recursive formulations based either on the Lagrangian or Newton-Euler dynamics offer the best method of dynamics calculation.
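
The gain over the n^4 formulations comes from recurrences that pass kinematic quantities outward from the base, link by link, and generalized forces back inward. As a sketch of that style of recursion (written here in the Newton-Euler form for a revolute joint i about axis $z_{i-1}$, not necessarily the report's exact Lagrangian variables):

$$\omega_i = \omega_{i-1} + \dot{q}_i\, z_{i-1}, \qquad \dot{\omega}_i = \dot{\omega}_{i-1} + \ddot{q}_i\, z_{i-1} + \dot{q}_i\, \omega_{i-1} \times z_{i-1},$$

with an analogous inward pass accumulating the joint torques, so the total cost grows linearly in the number of joints.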


AITR-534

Author[s]: John Hollerbach

Theory of Handwriting

March 1980

ftp://publications.ai.mit.edu/ai-publications/500-999/AITR-534.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AITR-534.pdf

Handwriting production is viewed as a constrained modulation of an underlying oscillatory process. Coupled oscillations in horizontal and vertical directions produce letter forms, and when superimposed on a rightward constant velocity horizontal sweep result in spatially separated letters. Modulation of the vertical oscillation is responsible for control of letter height, either through altering the frequency or altering the acceleration amplitude. Modulation of the horizontal oscillation is responsible for control of corner shape through altering phase or amplitude. The vertical velocity zero crossing in the velocity space diagram is important from the standpoint of control. Changing the horizontal velocity value at this zero crossing controls corner shape, and such changes can be effected through modifying the horizontal oscillation amplitude and phase. Changing the slope at this zero crossing controls writing slant; this slope depends on the horizontal and vertical velocity zero amplitudes and on the relative phase difference. Letter height modulation is also best applied at the vertical velocity zero crossing to preserve an even baseline. The corner shape and slant constraints completely determine the amplitude and phase relations between the two oscillations. Under these constraints interletter separation is not an independent parameter. This theory applies generally to a number of acceleration oscillation patterns such as sinusoidal, rectangular and trapezoidal oscillations. The oscillation theory also provides an explanation for how handwriting might degenerate with speed. An implementation of the theory in the context of the spring muscle model is developed. Here sinusoidal oscillations arise from purely mechanical sources; orthogonal antagonistic spring pairs generate particular cycloids depending on the initial conditions. Modulating between cycloids can be achieved by changing the spring zero settings at the appropriate times. Frequency can be modulated either by shifting between coactivation and alternating activation of the antagonistic springs or by presuming variable spring constant springs. An acceleration and position measuring apparatus was developed for measurements of human handwriting. Measurements of human writing are consistent with the oscillation theory. It is shown that the minimum energy movement for the spring muscle is bang-coast-bang. For certain parameter values a singular arc solution can be shown to be minimizing. Experimental measurements however indicate that handwriting is not a minimum energy movement.
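
As a concrete, idealized instance of the scheme (pure sinusoids, with symbols chosen here for illustration rather than taken from the report), the pen position under a constant rightward sweep of speed c is

$$x(t) = c\,t + A_x \sin(\omega t + \phi), \qquad y(t) = A_y \sin(\omega t),$$

so modulating $A_y$ or $\omega$ controls letter height, while the relative phase $\phi$ and the amplitude ratio $A_x/A_y$ control corner shape and slant.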


AIM-536

Author[s]: Berthold K.P. Horn

SEQUINS and QUILLS: Representations for Surface Topography

May 1979

ftp://publications.ai.mit.edu/ai-publications/500-999/AIM-536.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-536.pdf

The shape of a continuous surface can be represented by a collection of surface normals. These normals are like a porcupine’s quills. Equivalently, one can use the surface patches on which these normals rest. These in turn are like sequins sewn on a costume. These and other representations for information which can be obtained from images and used in the recognition and description of objects in a scene will be briefly described.


AITR-537

Author[s]: Candace Lee Sidner

Towards a Computational Theory of Definite Anaphora Comprehension in English Discourse

June 1979

ftp://publications.ai.mit.edu/ai-publications/500-999/AITR-537.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AITR-537.pdf

This report investigates the process of focussing as a description and explanation of the comprehension of certain anaphoric expressions in English discourse. The investigation centers on the interpretation of definite anaphora, that is, on the personal pronouns, and noun phrases used with a definite article the, this or that. Focussing is formalized as a process in which a speaker centers attention on a particular aspect of the discourse. An algorithmic description specifies what the speaker can focus on and how the speaker may change the focus of the discourse as the discourse unfolds. The algorithm allows for a simple focussing mechanism to be constructed: an element in focus, an ordered collection of alternate foci, and a stack of old foci. The data structure for the element in focus is a representation which encodes a limited set of associations between it and other elements from the discourse as well as from general knowledge.


AIM-539

Author[s]: Katsushi Ikeuchi and Berthold K.P. Horn

An Application of the Photometric Stereo Method

August 1979

ftp://publications.ai.mit.edu/ai-publications/500-999/AIM-539.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-539.pdf

The orientation of patches on the surface of an object can be determined from multiple images taken with different illuminations, but from the same viewing position. This method, referred to as photometric stereo, can be implemented using table lookup based on numerical inversion of experimentally determined reflectance maps. Here we concentrate on objects with specularly reflecting surfaces, since these are of importance in industrial applications. Previous methods, intended for diffusely reflecting surfaces, employed point source illumination, which is quite unsuitable in this case. Instead, we use a distributed light source obtained by uneven illumination of a diffusely reflecting planar surface. Experimental results are shown to verify analytic expressions obtained for a method employing three light source distributions.
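
As a loose illustration of the table-lookup idea (a Python sketch using a toy Lambertian reflectance model with three point-like sources, an assumption made here for brevity; the memo's interest is specular surfaces under extended sources):

    import numpy as np

    def reflectance(light):
        """Toy Lambertian reflectance map R(p, q) for a given light direction."""
        l = np.asarray(light, float); l /= np.linalg.norm(l)
        return lambda p, q: max(0.0, float(l @ [-p, -q, 1.0]) / float(np.sqrt(1 + p*p + q*q)))

    maps = [reflectance(v) for v in ([0, 0, 1], [1, 0, 1], [0, 1, 1])]

    # "Table": predicted brightness triples for a grid of candidate orientations (p, q).
    grid = [(p / 10, q / 10) for p in range(-10, 11) for q in range(-10, 11)]
    triples = np.array([[m(p, q) for m in maps] for (p, q) in grid])

    def lookup(e1, e2, e3):
        """Return the candidate orientation whose predicted brightness triple is nearest."""
        i = np.argmin(np.sum((triples - [e1, e2, e3]) ** 2, axis=1))
        return grid[i]

    measured = [m(0.3, -0.2) for m in maps]   # simulate the three images at one pixel
    print(lookup(*measured))                  # recovers (0.3, -0.2)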


AITR-540

Author[s]: Kenneth Michael Kahn

Creation of Computer Animation from Story Descriptions

August 1979

ftp://publications.ai.mit.edu/ai-publications/500-999/AITR-540.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AITR-540.pdf

This report describes a computer system that creates simple computer animation in response to high-level, vague, and incomplete descriptions of films. It makes its films by collecting and evaluating suggestions from several different bodies of knowledge. The order in which it makes its choices is influenced by the focus of the film. Difficult choices are postponed to be resumed when more of the film has been determined. The system was implemented in an object-oriented language based upon computational entities called “actors”. The goal behind the construction of the system is that, whenever faced with a choice, it should sensibly choose between alternatives based upon the description of the film and as much general knowledge as possible. The system is presented as a computational model of creativity and aesthetics.


AIM-541

Author[s]: D. Marr, E. Hildreth and T. Poggio

Evidence for a Fifth, Smaller Channel in Early Human Vision

August 1979

ftp://publications.ai.mit.edu/ai-publications/500-999/AIM-541.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-541.pdf

Recent studies in psychophysics and neurophysiology suggest that the human visual system utilizes a range of different size or spatial frequency tuned mechanisms in its processing of visual information. It has been proposed that there exist four such mechanisms, operating everywhere in the visual field, with the smallest mechanism having a central excitatory width of 3’ of arc in the ventral fovea. This note argues that there exists indirect evidence for the existence of a fifth, smaller channel, with a central width in the fovea of 1.5’.


AITR-542

Author[s]: Luc Steels

Reasoning Modeled as a Society of Communicating Experts

June 1979

ftp://publications.ai.mit.edu/ai-publications/500-999/AITR-542.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AITR-542.pdf

This report describes a domain independent reasoning system. The system uses a frame-based knowledge representation language and various reasoning techniques including constraint propagation, progressive refinement, natural deduction and explicit control of reasoning. A computational architecture based on active objects which operate by exchanging messages is developed and it is shown how this architecture supports reasoning activity. The user interacts with the system by specifying frames and by giving descriptions defining the problem situation. The system uses its reasoning capacity to build up a model of the problem situation from which a solution can interactively be extracted. Examples are discussed from a variety of domains, including electronic circuits, mechanical devices and music. The main thesis is that a reasoning system is best viewed as a parallel system whose control and data are distributed over a large network of processors that interact by exchanging messages. Such a system will be metaphorically described as a society of communicating experts.


AIM-543

Author[s]: Luc Steels

Procedural Attachment

August 1979

ftp://publications.ai.mit.edu/ai-publications/500-999/AIM-543.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-543.pdf

A frame-based reasoning system is extended to deal with procedural attachment. Arguments are given why procedural attachment is needed in a symbolic reasoner. The notion of an infinitary concept is introduced. Conventions for representing procedures and a control structure regulating their execution are discussed. Examples from electrical engineering and music illustrate arithmetic constraints and constraints over properties of strings and sequences.


AIM-544

Author[s]: Marvin Minsky

Toward a Remotely-Manned Energy and Production Economy

September 1979

ftp://publications.ai.mit.edu/ai-publications/500-999/AIM-544.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-544.pdf

We can solve many problems of Energy, Health, Productivity, and Environmental Quality by improving the technology of remote control. This will produce Nuclear Safety and Security, Advances in Mining, Increases in Productivity, Economies in Transportation, New Industries and Markets. By creating “mechanical hands” that are versatile and economical enough, we shape a new world of health, energy and security. It will take 10 to 20 years, and cost about a billion dollars.


AIM-545

Author[s]: Seymour Papert, Daniel Watt, Andrea diSessa and Sylvia Weir

Final Report of the Brookline LOGO Project. Part II: Project Summary and Data

September 1979

ftp://publications.ai.mit.edu/ai-publications/500-999/AIM-545.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-545.pdf

During the school year 1977/78 four computers equipped with LOGO and Turtle Graphics were installed in an elementary school in Brookline, Mass. All sixth grade students in the school had between 20 and 40 hours of hands-on experience with the computers. The work of 16 students was documented in detail.


AIM-546

Author[s]: Seymour Papert, Daniel Watt, Andrea diSessa and Sylvia Weir

Final Report of the Brookline LOGO Project. Part III: Profiles of Individual Student's Work

September 1979

ftp://publications.ai.mit.edu/ai-publications/500-999/AIM-546.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-546.pdf

During the school year 1977/78 four computers equipped with LOGO and Turtle Graphics were installed in an elementary school in Brookline, Mass. All sixth grade students in the school had between 20 and 40 hours of hands-on experience with the computers. The work of 16 students was documented in detail.


AIM-548

Author[s]: Glenn A. Iba

Learning Disjunctive Concepts From Examples

September 1979

ftp://publications.ai.mit.edu/ai-publications/500-999/AIM-548.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-548.pdf

This work proposes a theory for machine learning of disjunctive concepts. The paradigm followed is one of teaching and testing, where the teaching is accomplished by presenting a sequence of positive and negative examples of the target concept. The core of the theory has been implemented and tested as computer programs. The theory addresses the problem of deciding when it is appropriate to merge descriptions and when it is appropriate to form a disjunctive split. The approach outlined has the advantage that it allows recovery from overgeneralizations. It is observed that negative examples play an important role in the decision making process, as well as in detecting overgeneralizations and instigating recovery. Because of the ability to recover from overgeneralizations when they occur, the system is less sensitive to the ordering of the training sequence than other systems. The theory is presented in a domain and representation independent format. A few conditions are presented, which abstract the assumptions made about any representation scheme that is to be employed within the theory. The work is illustrated in several different domains, demonstrating the generality and flexibility of the theory.


AIM-549

Author[s]: Richard C. Waters

Mechanical Arm Control

October 1979

ftp://publications.ai.mit.edu/ai-publications/500-999/AIM-549.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-549.pdf

This paper discusses three main problems associated with the control of the motion of a mechanical arm. 1) Transformation between different coordinate systems associated with the arm. 2) Calculation of detailed trajectories for the arm to follow. 3) Calculation of the forces which must be applied to the joints of the arm in order to make it move along a specified path. Each of the above problems is amenable to exact solution. However, the resulting equations are, in general, quite complex and difficult to compute. This paper investigates several methods for speeding up this calculation, and for getting approximate solutions to the equations.


AITR-550

Author[s]: David Allen McAllester

The Use of Equality in Deduction and Knowledge Representation

January 1980

ftp://publications.ai.mit.edu/ai-publications/500-999/AITR-550.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AITR-550.pdf

This report describes a system which maintains canonical expressions for designators under a set of equalities. Substitution is used to maintain all knowledge in terms of these canonical expressions. A partial order on designators, termed the better-name relation, is used in the choice of canonical expressions. It is shown that with an appropriate better-name relation an important engineering reasoning technique, propagation of constraints, can be implemented as a special case of this substitution process. Special purpose algebraic simplification procedures are embedded such that they interact effectively with the equality system. An electrical circuit analysis system is developed which relies upon constraint propagation and algebraic simplification as primary reasoning techniques. The reasoning is guided by a better-name relation in which referentially transparent terms are preferred to referentially opaque ones. Multiple descriptions of subcircuits are shown to interact strongly with the reasoning mechanism.
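
A stripped-down, hypothetical analogue of keeping a canonical designator per equivalence class is union-find with a better-name preference (here, purely for illustration, shorter names win); the circuit-flavored designators below are invented:

    class EqualityStore:
        def __init__(self):
            self.parent = {}
        def canonical(self, term):
            self.parent.setdefault(term, term)
            while self.parent[term] != term:      # follow redirections to the canonical name
                term = self.parent[term]
            return term
        def assert_equal(self, a, b):
            ra, rb = self.canonical(a), self.canonical(b)
            if ra == rb:
                return
            better, worse = (ra, rb) if len(ra) <= len(rb) else (rb, ra)
            self.parent[worse] = better           # the worse name now denotes the better one

    eqs = EqualityStore()
    eqs.assert_equal("voltage(n1)", "v1")
    eqs.assert_equal("voltage(n1)", "voltage(n2)")
    print(eqs.canonical("voltage(n2)"))           # -> v1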


AIM-551

Author[s]: David A. McAllester

An Outlook on Truth Maintenance

August 1980

ftp://publications.ai.mit.edu/ai-publications/500-999/AIM-551.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-551.pdf

Truth maintenance systems have been used in several recent problem solving systems to record justifications for deduced assertions, to track down the assumptions which underlie contradictions when they arise, and to incrementally modify assertional data structures when assumptions are retracted. A TMS algorithm is described here that is substantially different from previous systems. This algorithm performs deduction in traditional propositional logic in such a way that the premise set from which deduction is being done can be easily manipulated. A novel approach is also taken to the role of a TMS in larger deductive systems. In this approach the TMS performs all propositional deduction in a uniform manner while the larger system is responsible for controlling the instantiation of universally quantified formulae and axiom schemas.


AIM-552

Author[s]: Beth C. Levin

Instrumental With and the Control Relation in English

November 1979

ftp://publications.ai.mit.edu/ai-publications/500-999/AIM-552.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-552.pdf

This paper explores the nature of the underlying representation of a sentence, that representation formulated to make explicit the semantic structure of a sentence as a description of an event. It argues that the typical conception of an underlying representation as a predicate-argument representation, exemplified in systems of case and thematic relations, must be modified. An underlying representation must include semantic relations between noun phrases as well as the predicate-argument relations of noun phrases to a verb. An examination of instrumental with will be used to motivate and justify this revision. In particular, an account of instrumental with requires the introduction of the control relation, a relation between two noun phrases.


AIM-554

Author[s]: Richard M. Stallman

EMACS Manual for ITS Users

October 1981

ftp://publications.ai.mit.edu/ai-publications/500-999/AIM-554.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-554.pdf

A reference manual for the extensible, customizable, self-documenting real-time display editor. This manual corresponds to EMACS version 162.


AIM-555

Author[s]: Richard M. Stallman

EMACS Manual for TWENEX Users

March 1983

ftp://publications.ai.mit.edu/ai-publications/500-999/AIM-555.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-555.pdf

A reference manual for the extensible, customizable, self-documenting real-time display editor. This manual corresponds to EMACS version 162.


AIM-556

Author[s]: Richard M. Stallman

Phantom Stacks: If You Look Too Hard, They Aren't There

July 1980

ftp://publications.ai.mit.edu/ai-publications/500-999/AIM-556.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-556.pdf

A Stack is a very efficient way of allocating and deallocating memory, but it works only with a restricted pattern of usage. Garbage collection is completely flexible but comparatively costly. The implementation of powerful control structures naturally uses memory which usually fits in with stack allocation but must have the flexibility to do otherwise from time to time. How can we manage memory which only once in a while violates stack restrictions, without paying a price the rest of the time? This paper provides an extremely simple way of doing so, in which only the part of the system which actually uses the stack needs to know anything about the stack. We call them Phantom Stacks because they are liable to vanish if subjected to close scrutiny. Phantom Stacks will be used in the next version of the Artificial Intelligence Lab’s Scheme microprocessor chip.


AIM-557

Author[s]: Francis H.C. Crick, David C. Marr and Tomaso Poggio

An Information Processing Approach to Understanding the Visual Cortex

April 1980

ftp://publications.ai.mit.edu/ai-publications/500-999/AIM-557.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-557.pdf

An outline description is given of the experimental work on the visual acuity and hyperacuity of human beings. The very high resolution achieved in hyperacuity corresponds to a fraction of the spacing between adjacent cones in the fovea. We briefly outline a computational theory of early vision, according to which (a) the retinal image is filtered through a set of approximately bandpass, spatial filters and (b) zero-crossings may contain sufficient information for much of the subsequent processing. Consideration of the optimum filter leads to one which is equivalent to a cell with a particular center-surround type of response. An “edge” in the visual field then corresponds to a line of zero-crossings in the filtered image. The mathematics of sampling and of Logan’s zero-crossing theorem are briefly explained.
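
For reference, the filter sketched in (a) is, up to sign and normalization conventions, the Laplacian of a Gaussian,

$$\nabla^{2}G(x,y) = \frac{1}{2\pi\sigma^{4}}\left(\frac{x^{2}+y^{2}}{\sigma^{2}} - 2\right) e^{-(x^{2}+y^{2})/2\sigma^{2}},$$

whose circularly symmetric center-surround weighting corresponds to the center-surround response mentioned above; an edge then corresponds to a line of zero-crossings of the filtered image $\nabla^{2}G * I$.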


AIM-558

Author[s]: David C. Marr and Tomaso Poggio

Some Comments on a Recent Theory of Stereopsis

July 1980

ftp://publications.ai.mit.edu/ai-publications/500-999/AIM-558.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-558.pdf

A number of developments have taken place since the formulation of Marr and Poggio’s theory of human stereo vision. In particular, these concern the shape of the underlying receptive fields, the control of eye movements and the role of neuronal pools in the so-called pulling effect. These and other connected matters are briefly discussed.


AIM-559

Author[s]: Jack Holloway, Guy L. Steele Jr., Gerald Jay Sussman and Alan Bell

The SCHEME-79 Chip

January 1980

ftp://publications.ai.mit.edu/ai-publications/500-999/AIM-559.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-559.pdf

We have designed and implemented a single-chip microcomputer (which we call SCHEME-79) which directly interprets a typed pointer variant of SCHEME, a dialect of the language LISP. To support this interpreter the chip implements an automatic storage allocation system for heap-allocated data and an interrupt facility for user interrupt routines implemented in SCHEME. We describe how the machine architecture is tailored to support the language, and the design methodology by which the hardware was synthesized. We develop an interpreter for SCHEME written in LISP which may be viewed as a microcode specification. This is converted by successive compilation passes into actual hardware structures on the chip. We develop a language embedded in LISP for describing layout artwork so we can procedurally define generators for generalized macro components. The generators accept parameters to produce the specialized instances used in a particular design. We discuss the performance of the current design and directions for improvement, both in the circuit performance and in the algorithms implemented by the chip. A complete annotated listing of the microcode embodied by the chip is included.


AIM-561

Author[s]: William A. Kornfeld

Using Parallel Processing for Problem Solving

December 1979

ftp://publications.ai.mit.edu/ai-publications/500-999/AIM-561.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-561.pdf

Parallel processing as a conceptual aid in the design of programs for problem solving applications is developed. A pattern directed invocation language known as Ether is introduced. Ether embodies two notions in language design: activities and viewpoints. Activities are the basic parallel processing primitive. Different goals of the system can be pursued in parallel by placing them in separate activities. Language primitives are provided for manipulating running activities. Viewpoints are a generalization of context mechanisms and serve as a device for representing multiple world models. A number of problem solving schemes are developed making use of viewpoints and activities. It will be demonstrated that many kinds of heuristic search that are commonly implemented using backtracking can be reformulated to use parallel processing with an advantage in control over the problem solving behavior. The semantics of Ether are such that things like deadlock and race conditions, which plague many languages for parallel processing, cannot occur. The programs presented are quite simple to understand.


AIM-564

Author[s]: Lucia M. Vaina

Towards a Computational Theory of Semantic Memory

February 1980

ftp://publications.ai.mit.edu/ai-publications/500-999/AIM-564.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-564.pdf

Research in memory has been a frustrating task not least because of the intimate familiarity with what we are trying to understand, and partly also because the human cognitive system has developed as an interactive whole; it is difficult to isolate its component modules – a necessary prerequisite for their thorough elucidation. Memory cannot be studied in isolation since it is essentially only an adjunct to the proper execution of our ordinary information processing tasks. In order to try to formulate specifically some of the basic requirements of memory we must therefore examine the structure of the processing tasks for which it is used.


AIM-565

Author[s]: W.E.L. Grimson

A Computer Implementation of a Theory of Human Stereo Vision

January 1980

ftp://publications.ai.mit.edu/ai-publications/500-999/AIM-565.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-565.pdf

Recently, Marr and Poggio (1979) presented a theory of human stereo vision. An implementation of that theory is presented and consists of five steps: (1) The left and right images are each filtered with masks of four sizes that increase with eccentricity; the shape of these masks is given by $\nabla^{2}G$, the laplacian of a gaussian function. (2) Zero-crossings in the filtered images are found along horizontal scan lines. (3) For each mask size, matching takes place between zero-crossings of the same sign and roughly the same orientation in the two images, for a range of disparities up to about the width of the mask's central region. Within this disparity range, Marr and Poggio showed that false targets pose only a simple problem. (4) The output of the wide masks can control vergence movements, thus causing small masks to come into correspondence, moving from dealing with large disparities at a low resolution to dealing with small disparities at a high resolution. (5) When a correspondence is achieved, it is stored in a dynamic buffer, called the 2 1/2 dimensional sketch. To support the sufficiency of the Marr-Poggio model of human stereo vision, the implementation was tested on a wide range of stereograms from the human stereopsis literature. The performance of the implementation is illustrated and compared with human perception. As well, statistical assumptions made by Marr and Poggio are supported by comparison with statistics found in practice. Finally, the process of implementing the theory has led to the clarification and refinement of a number of details within the theory; these are discussed in detail.
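
A highly simplified, hypothetical rendition of the scan-line matching in step (3) follows (Python; the real implementation also uses contour orientation, the four mask sizes, vergence control, and the 2 1/2-D sketch). Zero-crossings of the same sign are paired only within a bounded disparity range:

    def zero_crossings(row):
        """Sign-change positions in a filtered scan line: (index, sign after the crossing)."""
        return [(i, 1 if row[i + 1] > 0 else -1)
                for i in range(len(row) - 1)
                if (row[i] < 0) != (row[i + 1] < 0)]

    def match(left_row, right_row, max_disparity):
        left, right = zero_crossings(left_row), zero_crossings(right_row)
        matches = []
        for (xl, sl) in left:
            candidates = [xr for (xr, sr) in right
                          if sr == sl and abs(xl - xr) <= max_disparity]
            if len(candidates) == 1:                      # unambiguous within the range
                matches.append((xl, xl - candidates[0]))  # (position, disparity)
        return matches

    left  = [-1, -1, 2, 3, -1, -2, 4, 1]                  # toy filtered scan lines
    right = [-1, 2, 3, 1, -1, -2, -1, 4]
    print(match(left, right, max_disparity=2))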


AIM-566

Author[s]: Katsushi Ikeuchi

Numerical Shape from Shading and Occluding Contours in a Single View

November 1979

ftp://publications.ai.mit.edu/ai-publications/500-999/AIM-566.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-566.pdf

An iterative method of using occluding boundary information is proposed to compute surface slope from shading. We use a stereographic space rather than the more commonly used gradient space in order to express occluding boundary information. Further, we use “average” smoothness constraints rather than the more obvious “closed loop” smoothness constraints. We develop alternate constraints from the definition of surface smoothness, since the closed loop constraints do not work in stereographic space. We solve the image irradiance equation iteratively using a Gauss-Seidel method applied to the constraints and boundary information. Numerical experiments show that the method is effective. Finally, we analyze SEM (Scanning Electron Microscope) pictures using this method. Other applications are also proposed.


AIM-567

Author[s]: Katsushi Ikeuchi

Shape from Regular Patterns: An Example of Constraint Propagation in Vision

March 1980

ftp://publications.ai.mit.edu/ai-publications/500-999/AIM-567.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-567.pdf

An algorithm is proposed for obtaining local surface orientation from the apparent distortion of surface patterns in an image. A spherical projection is used for imaging. A mapping is defined from points on this image sphere to a locus of points on the Gaussian sphere which corresponds to possible surface orientations. This mapping is based on the measurement of the local distortions of a repeated known texture pattern due to the imaging projection. This locus of possible surface orientations can be reduced to a unique orientation at each point on the image sphere using 3 vantage points and taking the intersection of the loci of possible orientations derived from each vantage. It is also possible to derive a unique surface orientation at each image point through the use of an iterative constraint propagation technique along with the orientation information available at occluding boundaries. Both methods are demonstrated for real images.


AIM-568

Author[s]: Jon Doyle and Philip London

A Selected Descriptor-Indexed Bibliography to the Literature on Belief Revision

February 1980

ftp://publications.ai.mit.edu/ai-publications/500-999/AIM-568.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-568.pdf

This article presents an overview of research in an area loosely called belief revision. Belief revision concentrates on the issue of revising systems of beliefs to reflect perceived changes in the environment or acquisition of new information. The paper includes both an essay surveying the literature and a descriptor-indexed bibliography of over 200 papers and books.


AIM-569A

Author[s]: Henry Lieberman and Carl Hewitt

A Real Time Garbage Collector Based on the Lifetimes of Objects

October 1981

ftp://publications.ai.mit.edu/ai-publications/500-999/AIM-569a.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-569a.pdf

In previous heap storage systems, the costs of creating objects and of garbage collection are independent of an object's lifetime. Since objects with short lifetimes account for a large portion of storage use, it is worth optimizing a garbage collector to reclaim storage for these objects more quickly. The garbage collector should spend proportionately less effort reclaiming objects with longer lifetimes. We present a garbage collection algorithm which makes storage for short-lived objects cheaper than storage for long-lived objects, operates in real time (object creation and access times are bounded), increases locality of reference for better virtual-memory performance, and works well with multiple processors and a large address space.
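
The policy can be pictured with a toy sketch (in Python, purely for illustration): newly created objects go into a small nursery that is scavenged often, and objects that survive a scavenge are promoted to an old region that is collected rarely. Reachability is supplied by the caller; none of the memo's copying, real-time, or multiprocessor machinery is modelled here.

    class GenerationalHeap:
        # Toy model of lifetime-based collection: a frequently scavenged
        # nursery for new objects and a rarely collected old region.
        def __init__(self, nursery_limit=256):
            self.nursery, self.old = [], []
            self.nursery_limit = nursery_limit

        def allocate(self, obj, roots):
            if len(self.nursery) >= self.nursery_limit:
                self.scavenge(roots)              # cheap, frequent collection
            self.nursery.append(obj)
            return obj

        def scavenge(self, roots):
            survivors = [o for o in self.nursery if o in roots]
            self.old.extend(survivors)            # survivors are promoted
            self.nursery = []                     # short-lived objects reclaimed

        def full_collect(self, roots):
            self.old = [o for o in self.old if o in roots]   # rare, expensive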


AIM-570

Author[s]: Sylvia Weir

The Evaluation and Cultivation of Spatial and Linguistic Abilities in Individuals with Cerebral Palsy

October 1979

ftp://publications.ai.mit.edu/ai-publications/500-999/AIM-570.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-570.pdf

The work of the Cerebral Palsy project (members: Seymour Papert, Sylvia Weir, Jose Valente and Gary Drescher) over the past eighteen months is summarized, and the next phase of activity is outlined. The issues to be addressed by the proposed research are as follows: 1. An investigation of computer-based techniques to maximize the acquisition of spatial and linguistic skills in severely Cerebral Palsied children, to serve the educational and therapeutic needs of this population. 2. Developing a set of computer-based diagnostic tools for use with physically handicapped persons which could contribute to the provision of a functional specification of subcategories of Cerebral Palsy. 3. Investigating the ways in which findings on Cerebral Palsy subjects can inform our theories of cognitive development and the adult functioning of normal individuals.


AIM-572

Author[s]: Berthold K.P. Horn and Brian G. Schunck

Determining Optical Flow

April 1980

ftp://publications.ai.mit.edu/ai-publications/500-999/AIM-572.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-572.pdf

Optical flow cannot be computed locally, since only one independent measurement is available from the image sequence at a point, while the flow velocity has two components. A second constraint is needed. A method for finding the optical flow pattern is presented which assumes that the apparent velocity of the brightness pattern varies smoothly almost everywhere in the image. An iterative implementation is shown which successfully computes the optical flow for a number of synthetic image sequences. The algorithm is robust in that it can handle image sequences that are quantized rather coarsely in space and time. It is also insensitive to quantization of brightness levels and additive noise. Examples are included where the assumption of smoothness is violated at singular points or along lines in the image.
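
For orientation, a minimal Python/NumPy sketch of such an iterative scheme follows. The finite-difference derivatives, the averaging kernel, the smoothness weight alpha, and the fixed iteration count are illustrative simplifications rather than the memo's exact formulation.

    import numpy as np
    from scipy.ndimage import convolve

    def optical_flow(im1, im2, alpha=1.0, n_iters=100):
        # Estimate the flow field (u, v) between two grayscale frames by
        # combining the brightness-constancy equation with a smoothness term.
        im1, im2 = im1.astype(float), im2.astype(float)
        Ex = np.gradient(im1, axis=1)         # spatial brightness derivatives
        Ey = np.gradient(im1, axis=0)
        Et = im2 - im1                        # temporal derivative
        u = np.zeros_like(im1)
        v = np.zeros_like(im1)
        # Kernel for the local average used by the smoothness constraint.
        avg = np.array([[1, 2, 1], [2, 0, 2], [1, 2, 1]], float) / 12.0
        for _ in range(n_iters):
            u_bar, v_bar = convolve(u, avg), convolve(v, avg)
            t = (Ex * u_bar + Ey * v_bar + Et) / (alpha ** 2 + Ex ** 2 + Ey ** 2)
            u = u_bar - Ex * t                # move toward the constraint line
            v = v_bar - Ey * t
        return u, v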


AIM-573A

Author[s]: J. Richter and S. Ullman

A Model for the Spatio-Temporal Organization of X- and Y-Type Ganglion Cells in the Primate Retina

April 1980 (Updated October 1981)

ftp://publications.ai.mit.edu/ai-publications/500-999/AIM-573a.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-573a.pdf

A model is proposed for the spatial and temporal characteristics of X- and Y-type responses of ganglion cells in the primate retina. The model is related to a theory of directional selectivity proposed by Marr & Ullman (1981). The X- and Y-type responses predicted by the model to a variety of stimuli are examined and compared with electrophysiological recordings. A number of implications and predictions are discussed.


AIM-574

Author[s]: S. Ullman

Against Direct Perception

March 1980

ftp://publications.ai.mit.edu/ai-publications/500-999/AIM-574.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-574.pdf

Central to contemporary cognitive science is the notion that mental processes involve computations defined over internal representations. This notion stands in sharp contrast with another prevailing view – the direct theory of perception whose most prominent proponent has been J.J. Gibson. The publication of his recent book (The Ecological Approach to Visual Perception – Boston, Houghton Mifflin Company, 1979) offers an opportunity to examine the theory of direct perception and to contrast it with the computational/representational view. In this paper the notion of direct perception is examined primarily from a theoretical standpoint, and various objections are raised against it. An attempt is made to place the theory of direct perception in perspective by embedding it in a more comprehensive framework.


AIM-575

Author[s]: R.W. Lawler

One Child's Learning: Introducing Writing with a Computer

March 1980

ftp://publications.ai.mit.edu/ai-publications/500-999/AIM-575.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-575.pdf

This is a case study of how one child learned to write in a computer-rich setting. Although computer access did affect her learning significantly, the details presented here go beyond supporting that claim. They provide a simple example of what a computer-based introduction to writing might be like for other children. We conclude with a short discussion of issues raised by the study.


AIM-576

Author[s]: Randall Davis

Meta-Rules: Reasoning About Control

March 1980

ftp://publications.ai.mit.edu/ai-publications/500-999/AIM-576.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-576.pdf

How can we insure that knowledge embedded in a program is applied effectively? Traditionally the answer to this question has been sought in different problem solving paradigms and in different approaches to encoding and indexing knowledge. Each of these is useful with a certain variety of problem, but they all share a common problem: they become ineffective in the face of a sufficiently large knowledge base. How then can we make it possible for a system to continue to function in the face of a very large number of plausibly useful chunks of knowledge? In response to this question we propose a framework for viewing issues of knowledge indexing and retrieval, a framework that includes what appears to be a useful perspective on the concept of a strategy. We view strategies as a means of controlling invocation in situations where traditional selection mechanisms become ineffective. We examine ways to effect such control, and describe meta-rules, a means of specifying strategies which offers a number of advantages. We consider at some length how and when it is useful to reason about control, and explore the advantages meta-rules offer for doing this.
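
A small hypothetical sketch (in Python) of the idea of strategies as meta-rules: before the object-level rules are invoked, meta-rules may reorder or prune the candidate set according to the current context. The rule and meta-rule structures here are invented for illustration and are not the memo's representation.

    def select_rules(rules, meta_rules, context):
        # Apply each applicable meta-rule to reorder or prune the candidates
        # before any object-level rule is tried.
        candidates = list(rules)
        for meta in meta_rules:
            if meta["applies"](context):
                candidates = meta["rerank"](candidates, context)
        return candidates

    # Hypothetical meta-rule: in an infection context, try rules that mention
    # the current organism class before more generic rules.
    prefer_specific = {
        "applies": lambda ctx: ctx.get("domain") == "infection",
        "rerank": lambda rules, ctx: sorted(
            rules, key=lambda r: ctx["organism"] not in r["mentions"]),
    }

    rules = [
        {"name": "generic-treatment", "mentions": []},
        {"name": "gram-negative-rod-rule", "mentions": ["gram-negative-rod"]},
    ]
    context = {"domain": "infection", "organism": "gram-negative-rod"}
    ordered = select_rules(rules, [prefer_specific], context)
    # ordered now tries the organism-specific rule first.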


AIM-577

Author[s]: Henry Lieberman and Carl Hewitt

A Session with TINKER: Interleaving Program Testing with Program Design

September 1980

ftp://publications.ai.mit.edu/ai-publications/500-999/AIM-577.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-577.pdf

Tinker is an experimental interactive programming system which integrates program testing with program design. New procedures are created by working out the steps of the procedure in concrete situations. Tinker displays the results of each step as it is performed, and constructs a procedure for the general case from sample calculations. The user communicates with Tinker mostly by selecting operations from menus on an interactive graphic display rather than by typing commands. This paper presents a demonstration of our current implementation of Tinker.


AITR-579

Author[s]: Ellen C. Hildreth

Implementation of a Theory of Edge Detection

April 1980

ftp://publications.ai.mit.edu/ai-publications/500-999/AITR-579.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AITR-579.pdf

This report describes the implementation of a theory of edge detection, proposed by Marr and Hildreth (1979). According to this theory, the image is first processed independently through a set of different-sized filters, whose shape is the Laplacian of a Gaussian, $\nabla^{2}G$. Zero-crossings in the output of these filters mark the positions of intensity changes at different resolutions. Information about these zero-crossings is then used for deriving a full symbolic description of changes in intensity in the image, called the raw primal sketch. The theory is closely tied to early processing in the human visual system. In this report, we first examine the critical properties of the initial filters used in the edge detection process, from both a theoretical and a practical standpoint. The implementation is then used as a test bed for exploring aspects of the human visual system; in particular, acuity and hyperacuity. Finally, we present some preliminary results concerning the relationship between zero-crossings detected at different resolutions, and some observations relevant to the process by which the human visual system integrates descriptions of intensity changes obtained at different resolutions.
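
For reference, with the common normalization $G(x,y)=\frac{1}{2\pi\sigma^{2}}e^{-(x^{2}+y^{2})/2\sigma^{2}}$, the operator is, up to sign convention, $\nabla^{2}G(x,y)=\frac{x^{2}+y^{2}-2\sigma^{2}}{2\pi\sigma^{6}}\,e^{-(x^{2}+y^{2})/2\sigma^{2}}$; the width $\sigma$ sets the filter size and hence the resolution of the resulting zero-crossing map.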


AIM-580

Author[s]: K.F. Prazdny and Mike Brady

Extra-Retinal Signals Influence Induced Motion: A New Kinetic Illusion

May 1980

ftp://publications.ai.mit.edu/ai-publications/500-999/AIM-580.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-580.pdf

When a moving dot, which is tracked by the eyes and enclosed in a moving framework, suddenly stops while the enclosing framework continues its motion, the dot is seen to describe a curved path. This illusion can be explained only by assuming that extra-retinal signals are taken into account in interpreting retinal information. The form of the illusion, and the fact that the phenomenal path cannot be explained on the basis of positional information alone, suggests that the perceived path is computed by integrating (instantaneous) velocity information over time. A vector addition model embodying a number of simplifying assumptions is found to qualitatively fit the experimental data. A number of follow-up studies are suggested.


AITR-581

Author[s]: Jon Doyle

A Model for Deliberation, Action, and Introspection

May 1980

ftp://publications.ai.mit.edu/ai-publications/500-999/AITR-581.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AITR-581.pdf

This thesis investigates the problem of controlling or directing the reasoning and actions of a computer program. The basic approach explored is to view reasoning as a species of action, so that a program might apply its reasoning powers to the task of deciding what inferences to make as well as deciding what other actions to take. A design for the architecture of reasoning programs is proposed. This architecture involves self-consciousness, intentional actions, deliberate adaptations, and a form of decision-making based on dialectical argumentation. A program based on this architecture inspects itself, describes aspects of itself, and uses this self-reference and these self-descriptions in making decisions and taking actions. The program’s mental life includes awareness of its own concepts, beliefs, desires, intentions, inferences, actions, and skills. All of these are represented by self-descriptions in a single sort of language, so that the program has access to all of these aspects of itself, and can reason about them in the same terms.


AIM-585

Author[s]: Judi Jones

Primer for R users

September 1980

ftp://publications.ai.mit.edu/ai-publications/500-999/AIM-585.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-585.pdf

R is a text formatter. The information in this primer is meant to explain, in simple English, the basic commands needed to use R. Input for R is prepared on computer systems using a text editor. Which editor is employed depends on which computer system you use and on your personal preference. Almost every characteristic of a document can be controlled or changed if necessary.


AIM-586

Author[s]: Robert W. Lawler

The Progressive Construction of Mind

June 1980

ftp://publications.ai.mit.edu/ai-publications/500-999/AIM-586.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-586.pdf

We propose a vision of the structure of knowledge and processes of learning based upon the particularity of experience. Highly specific cognitive structures are constructed through activities in limited domains of experience. For new domains, new cognitive structures develop from and call upon the knowledge of prior structures. Applying this vision of disparate cognitive structures to a detailed case study, we present an interpretation of addition-related matter from the corpus and trace the interplay of specific experiences with the interactions of ascribed, disparate structures. The interpretive focus is on learning processes through which a broadly applicable skill emerges from the interaction and integration of knowledge based on specific, particular experiences.


AIM-587

Author[s]: Guy L. Steele, Jr.

Destructive Reordering of CDR-Coded Lists

August 1980

ftp://publications.ai.mit.edu/ai-publications/500-999/AIM-587.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-587.pdf

Linked list structures can be compactly represented by encoding the CDR (“next”) pointer in a two-bit field and linearizing list structures as much as possible. This “CDR-coding” technique can save up to 50% on storage for linked lists. The RPLACD (alter CDR pointer) operation can be accommodated under such a scheme by using indirect pointers. Standard destructive reordering algorithms, such as REVERSE and SORT, use RPLACD quite heavily. If these algorithms are used on CDR-coded lists, the result is a proliferation of indirect pointers. We present here algorithms for destructive reversal and sorting of CDR-coded lists which avoid creation of indirect pointers. The essential idea is to note that a general list can be viewed as a linked list of array-like “chunks”. The algorithm applied to such “chunky lists” is a fusion of separate array- and list-specific algorithms; intuitively, the array-specific algorithm is applied to each chunk, and the list algorithm to the list with each chunk considered as a single element.
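
The fused algorithm can be illustrated with a small Python sketch of a "chunky list": each node carries an array-like run of elements plus a link to the next chunk, and reversal reverses each array in place while relinking the chunks, so no indirect pointers would be needed. The 2-bit CDR codes themselves are not modelled here.

    class Chunk:
        # One array-like run of elements plus a link to the next chunk.
        def __init__(self, items, next_chunk=None):
            self.items = list(items)
            self.next = next_chunk

    def reverse_chunky(head):
        # Destructive reversal: the array-specific step reverses each chunk's
        # elements in place; the list-specific step relinks the chain of chunks.
        prev, node = None, head
        while node is not None:
            node.items.reverse()      # array algorithm, applied within the chunk
            nxt = node.next
            node.next = prev          # list algorithm, applied to the chunk chain
            prev, node = node, nxt
        return prev

    # Example: chunks [1, 2, 3] -> [4, 5] reverse to [5, 4] -> [3, 2, 1].
    rev = reverse_chunky(Chunk([1, 2, 3], Chunk([4, 5])))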


AITR-589

Author[s]: Andrew P. Witkin

Shape from Contour

November 1980

ftp://publications.ai.mit.edu/ai-publications/500-999/AITR-589.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AITR-589.pdf

The problem of using image contours to infer the shapes and orientations of surfaces is treated as a problem of statistical estimation. The basis for solving this problem lies in an understanding of the geometry of contour formation, coupled with simple statistical models of the contour generating process. This approach is first applied to the special case of surfaces known to be planar. The distortion of contour shape imposed by projection is treated as a signal to be estimated, and variations of non-projective origin are treated as noise. The resulting method is then extended to the estimation of curved surfaces, and applied successfully to natural images. Next, the geometric treatment is further extended by relating contour curvature to surface curvature, using cast shadows as a model for contour generation. This geometric relation, combined with a statistical model, provides a measure of goodness-of-fit between a surface and an image contour. The goodness-of-fit measure is applied to the problem of establishing registration between an image and a surface model. Finally, the statistical estimation strategy is experimentally compared to human perception of orientation: human observers' judgements of tilt correspond closely to the estimates produced by the planar strategy.


AIM-590

Author[s]: Robert W. Lawler

Extending a Powerful Idea

July 1980

ftp://publications.ai.mit.edu/ai-publications/500-999/AIM-590.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-590.pdf

Mathematics is much more than the manipulation of numbers. At its best, it involves simple, clear examples of thought so apt to the world we live in that those examples provide guidance for our thinking about problems we meet subsequently. We call such examples, capable of heuristic use, POWERFUL IDEAS, after Papert (1980). This article documents a child’s introduction to a specific powerful idea in a computer environment. We trace his extensions of that idea to other problem areas, the first similar to his initial experience and the second more remote from it.


AIM-591

Author[s]: Shimon Ullman

Interfacing the One-Dimensional Scanning of an Image with the Applications of Two-Dimensional Operators

April 1980

ftp://publications.ai.mit.edu/ai-publications/500-999/AIM-591.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-591.pdf

To interface between the one-dimensional scanning of an image and the application of a two-dimensional operator, intermediate storage is required. For a square image of size $n^{2}$ and a square operator of size $m^{2}$, the minimum intermediate storage is shown to be $n \cdot (m-1)$. An interface of this size can be conveniently realized by using a serpentine delay line. New kinds of imagers would be required to reduce the size of the intermediate storage below $n \cdot (m-1)$.
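
A small Python sketch of the interfacing idea, for illustration only: the image arrives one scan line at a time, and only the most recent rows are buffered while an m x m operator is applied. For simplicity the sketch keeps m complete rows, one row more than the (m-1)-row minimum the memo derives, and it ignores image borders.

    from collections import deque
    import numpy as np

    def scan_with_operator(image_rows, operator):
        # image_rows: an iterator yielding rows of length n, one at a time.
        # operator: an m x m NumPy array.
        m = operator.shape[0]
        buffer = deque(maxlen=m)              # intermediate storage: m rows
        for row in image_rows:
            buffer.append(np.asarray(row, float))
            if len(buffer) == m:
                window = np.stack(buffer)     # the m x n block held in storage
                n = window.shape[1]
                yield [float(np.sum(window[:, j:j + m] * operator))
                       for j in range(n - m + 1)]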


AIM-592

Author[s]: D.D. Hoffman

Inferring Shape from Motion Fields

December 1980

ftp://publications.ai.mit.edu/ai-publications/500-999/AIM-592.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-592.pdf

The human visual system has the ability to utilize motion information to infer the shapes of surfaces. More specifically, we are able to derive descriptions of rigidly rotating smooth surfaces entirely from the orthographic projection of the motions of their surface markings. A computational analysis of this ability is proposed, based on the “shape from motion” proposition. This proposition states that given the first spatial derivatives of the orthographically projected velocity and acceleration fields of a rigidly rotating regular surface, the angular velocity and the surface normal at each visible point on that surface are uniquely determined up to a reflection.


AIM-593

Author[s]: Mike Brady

Toward a Computational Theory of Early Visual Processing In Reading

September 1980

ftp://publications.ai.mit.edu/ai-publications/500-999/AIM-593.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-593.pdf

This paper is the first of a series aimed at developing a theory of early visual processing in reading. We suggest that there has been a close parallel in the development of theories of reading and theories of vision in Artificial Intelligence. We propose to exploit and extend recent results in Computer Vision to develop an improved model of early processing in reading. This first paper considers the problem of isolating words in text based on the information which Marr and Hildreth’s (1980) theory asserts is available in the parafovea. We show in particular that the findings of Fisher (1975) on reading transformed texts can be accounted for without postulating the need for complex interactions between early processing and downloading information as he suggests. The paper concludes with a brief discussion of the problem of integrating information over successive saccades and relates the earlier analysis to the empirical findings of Rayner.


AITR-595

Author[s]: Guy Lewis Steele Jr.

The Definition and Implementation of a Computer Programming Language Based on Constraints

August 1980

ftp://publications.ai.mit.edu/ai-publications/500-999/AITR-595.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AITR-595.pdf

The constraint paradigm is a model of computation in which values are deduced whenever possible, under the limitation that deductions be local in a certain sense. One may visualize a constraint 'program' as a network of devices connected by wires. Data values may flow along the wires, and computation is performed by the devices. A device computes using only locally available information (with a few exceptions), and places newly derived values on other, locally attached wires. In this way computed values are propagated. An advantage of the constraint paradigm (not unique to it) is that a single relationship can be used in more than one direction. The connections to a device are not labelled as inputs and outputs; a device will compute with whatever values are available, and produce as many new values as it can. General theorem provers are capable of such behavior, but tend to suffer from combinatorial explosion; it is not usually useful to derive all the possible consequences of a set of hypotheses. The constraint paradigm places a certain kind of limitation on the deduction process. The limitations imposed by the constraint paradigm are not the only ones possible. It is argued, however, that they are restrictive enough to forestall combinatorial explosion in many interesting computational situations, yet permissive enough to allow useful computations in practical situations. Moreover, the paradigm is intuitive: It is easy to visualize the computational effects of these particular limitations, and the paradigm is a natural way of expressing programs for certain applications, in particular relationships arising in computer-aided design. A number of implementations of constraint-based programming languages are presented. A progression of ever more powerful languages is described, complete implementations are presented and design difficulties and alternatives are discussed. The goal approached, though not quite reached, is a complete programming system which will implicitly support the constraint paradigm to the same extent that LISP, say, supports automatic storage management.
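
The local, multidirectional character of the paradigm can be sketched in a few lines of Python (an editorial illustration, not the thesis's language): a cell models a wire, a constraint models a device, and the adder below deduces whichever of its three terminals is missing from the other two.

    class Cell:
        # A wire: holds at most one value and wakes its constraints when set.
        def __init__(self):
            self.value, self.constraints = None, []

        def set(self, value):
            if self.value is None:
                self.value = value
                for c in self.constraints:
                    c.propagate()
            elif self.value != value:
                raise ValueError("contradiction: %r vs %r" % (self.value, value))

    class Adder:
        # A device enforcing a + b = s; no terminal is labelled input or output.
        def __init__(self, a, b, s):
            self.a, self.b, self.s = a, b, s
            for cell in (a, b, s):
                cell.constraints.append(self)

        def propagate(self):
            a, b, s = self.a.value, self.b.value, self.s.value
            if a is not None and b is not None and s is None:
                self.s.set(a + b)
            elif a is not None and s is not None and b is None:
                self.b.set(s - a)
            elif b is not None and s is not None and a is None:
                self.a.set(s - b)

    # Using the same relationship "backwards": fixing b and s deduces a.
    a, b, s = Cell(), Cell(), Cell()
    Adder(a, b, s)
    b.set(3); s.set(10)
    assert a.value == 7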


AIM-596

Author[s]: Koji Fukumori

Fundamental Scheme for Train Scheduling

September 1980

ftp://publications.ai.mit.edu/ai-publications/500-999/AIM-596.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-596.pdf

Traditionally, the compilation of long-term timetables for high-density rail service with multiple classes of trains on the same track is a job for human experts, not computers. We propose an algorithm that uses the range-constriction search technique to schedule the timing and pass-through relations of trains smoothly and efficiently. The program determines how the timing of certain trains constrains the timing of others, finds possible time regions and pass-through relations and then evaluates the efficiency of train movement for each pass-through relation.


AIM-597

Author[s]: David Marr and Lucia Vaina

Representation and Recognition of the Movement of Shapes

October 1980

ftp://publications.ai.mit.edu/ai-publications/500-999/AIM-597.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-597.pdf

The problems posed by the representation and recognition of the movements of 3-D shapes are analyzed. A representation is proposed for the movements of shapes that lie within the scope of Marr & Nishihara’s (1978) 3-D model representation of static shapes. The basic problem is how to segment a stream of movement into pieces, each of which can be described separately. The representation proposed here is based upon segmenting a movement at moments when a component axis, e.g. an arm, starts to move relative to its local coordinate frame (here, the torso). So, for example, walking is divided into a sequence of the stationary states between each swing of the arms and legs and the actual motions between those stationary points (relative to the torso, not the ground). This representation is called the state-motion-state (SMS) moving shape representation, and several examples of its application are given.


AIM-598

Author[s]: John Batali and Anne Hartheimer

The Design Procedure Language Manual

September 1980

ftp://publications.ai.mit.edu/ai-publications/500-999/AIM-598.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-598.pdf

This manual describes the Design Procedure Language (DPL) for LSI design. DPL creates and maintains a representation of a design in a hierarchically organized, object-oriented LISP data-base. Designing in DPL involves writing programs (Design Procedures) which construct and manipulate descriptions of a project. The programs use a call-by-keyword syntax and may be entered interactively or written by other programs. DPL is the layout language for the LISP-based Integrated Circuit design system (LISPIC) being developed at the Artificial Intelligence Laboratory at MIT. The LISPIC design environment will combine a large set of design tools that interact through a common data-base. This manual is for prospective users of the DPL and covers the information necessary to design a project with the language. The philosophy and goals of the LISPIC system as well as some details of the DPL data-base are also discussed.


AIM-599

Author[s]: Boris Katz

A Three-Step Procedure for Language Generation

December 1980

ftp://publications.ai.mit.edu/ai-publications/500-999/AIM-599.ps

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-599.pdf

This paper outlines a three-step plan for generating English text from any semantic representation by applying a set of syntactic transformations to a collection of kernel sentences. The paper focuses on describing a program which realizes the third step of this plan. Step One separates the given representation into groups and generates from each group a set of kernel sentences. Step Two must decide, based upon both syntactic and thematic considerations, the set of transformations that should be performed upon each set of kernels. The output of the first two steps provides the “TASK” for Step Three. Each element of the TASK corresponds to the generation of one English sentence, and in turn may be defined as a triple consisting of: (a) a list of kernel phrase markers; (b) a list of transformations to be performed upon the list of kernels; (c) a “syntactic separator” to separate or connect generated sentences. Step Three takes as input the results of Step One and Step Two. The program which implements Step Three “reads” the TASK, executes the transformations indicated there, combines the altered kernels of each set into a sentence, performs a pronominalization process, and finally produces the appropriate English word string. This approach subdivides a hard problem into three more manageable and relatively independent pieces. It uses linguistically motivated theories at Step Two and Step Three. As implemented so far, Step Three is small and highly efficient. The system is flexible; all the transformations can be applied in any order. The system is general; it can be adapted easily to many domains.
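
As a purely hypothetical illustration of the shape of Step Three (the kernel format, the example transformation, and the separator below are invented, not taken from the memo), a TASK element can be read and executed roughly as follows:

    def step_three(task, transformations):
        # For each TASK element: apply the named transformations to its
        # kernels, join the results with the element's separator, and emit
        # an English word string.
        sentences = []
        for kernels, transform_names, separator in task:
            clauses = []
            for kernel in kernels:
                for name in transform_names:
                    kernel = transformations[name](kernel)
                clauses.append(" ".join(kernel))
            sentences.append(separator.join(clauses))
        return " ".join(s.capitalize() + "." for s in sentences)

    # Hypothetical kernels are (subject, verb, object) triples.
    transformations = {
        "passive": lambda k: [k[2], "is", k[1] + "d", "by", k[0]],
    }
    task = [([["the program", "generate", "each sentence"]], ["passive"], "; ")]
    print(step_three(task, transformations))
    # -> Each sentence is generated by the program.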


 
horizontal line

MIT logo Computer Science and Artificial Intelligence Laboratory (CSAIL)
The Stata Center, Building 32 - 32 Vassar Street - Cambridge, MA 02139 - USA
tel:+1-617-253-0073 - publications@csail.mit.edu