Natural Tasking of Robots Based on Human Interaction Cues

MIT Computer Science and Artificial Intelligence Laboratory
The Stata Center
32 Vassar Street
Cambridge, MA 02139
USA

PI: Rodney A. Brooks



[Project Overview], [Approach], [Research Questions], [Achieved Deliverables], [Future Deliverables], [People], [Publications]


[Images: Cog turns a crank; M4 robot head drawing; Kismet plays with a frog; Coco the gorilla robot]

Achieved Deliverables

1999-2000 | 2000-2001 | 2001-2002

2002-2003:

  1. Guided Training via a Modular Software System for Learning from Interaction with the Environment and People

    • Cog learns simple arm and end-effector tasks through a combination of self-exploration and explicit training. Using tactile reinforcement signals, a human trainer teaches Cog to perform simple postural arm and hand actions. The trainer then teaches the robot to perform these learned actions in response to tactile stimuli (touches to particular fingers) and visual stimuli (objects of particular colors).
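
      As a rough illustration of this training loop, here is a minimal sketch in Python of reinforcement learning over a discrete stimulus/action set; the stimulus and action names, the tabular value function, and the gains are assumptions of the sketch, not Cog's actual repertoire or algorithm:

        import random
        from collections import defaultdict

        # Hypothetical stimuli and postural actions; Cog's real sets are richer.
        STIMULI = ["touch_index_finger", "touch_thumb", "see_red_object", "see_blue_object"]
        ACTIONS = ["reach_forward", "withdraw_arm", "open_hand", "close_hand"]

        q = defaultdict(float)       # learned value of (stimulus, action) pairs
        alpha, epsilon = 0.3, 0.2    # learning rate, exploration rate

        def choose_action(stimulus):
            """Mostly exploit the best-known action, occasionally explore."""
            if random.random() < epsilon:
                return random.choice(ACTIONS)
            return max(ACTIONS, key=lambda a: q[(stimulus, a)])

        def train_step(stimulus, tactile_reward):
            """One round: act on a stimulus, then update from the trainer's
            tactile reinforcement (+1 approving touch, -1 corrective touch)."""
            action = choose_action(stimulus)
            q[(stimulus, action)] += alpha * (tactile_reward - q[(stimulus, action)])
            return action

        # e.g. the trainer approves a reach toward a red object:
        # train_step("see_red_object", +1.0)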


  2. Exploiting a Model of Muscle Fatigue for Human-like Movement

    • Cog has a fatigue model for its virtual musculature. This simulation of biological muscle fatigue provides signals that modulate motor performance and supply negative reinforcement to the learning module, guiding the acquisition of more natural, human-like movement.
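
      The shape of such a model can be sketched as follows; the rate constants and the form of the update are illustrative assumptions, not Cog's actual equations:

        class VirtualMuscle:
            """Toy fatigue model: fatigue rises with activation, recovers at
            rest, and scales the force the muscle can actually deliver."""

            def __init__(self, max_force=10.0, fatigue_rate=0.05, recovery_rate=0.01):
                self.max_force = max_force
                self.fatigue_rate = fatigue_rate
                self.recovery_rate = recovery_rate
                self.fatigue = 0.0    # 0 = fresh, 1 = fully fatigued

            def step(self, activation, dt):
                """Advance fatigue one time step (activation in [0, 1]); return
                the deliverable force and a negative reinforcement signal."""
                rise = self.fatigue_rate * activation * (1.0 - self.fatigue)
                recover = self.recovery_rate * (1.0 - activation) * self.fatigue
                self.fatigue = min(max(self.fatigue + dt * (rise - recover), 0.0), 1.0)
                force = activation * self.max_force * (1.0 - self.fatigue)
                return force, -self.fatigue   # fatigue doubles as punishment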


  3. Learning How Joints Move in Relation to Virtual Muscle Groups

    • Starting from a simple inclination to move its virtual muscles at random, Cog learns to activate its muscle model so that it can move to particular points in joint-angle space. In doing so, Cog acquires, without supervision, a linear dependency model between joint velocities and the controller modules that drive multiple muscles in combination.
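
      One way to sketch such a model: fit a linear map from controller activations to joint velocities by least squares over motor-babbling data, then pseudo-invert it to choose activations for a desired movement. The dimensions, noise level, and synthetic plant below are illustrative assumptions:

        import numpy as np

        rng = np.random.default_rng(0)
        n_controllers, n_joints = 6, 4      # illustrative sizes, not Cog's counts

        # Unknown plant: how controller activations map to joint velocities.
        true_map = rng.normal(size=(n_joints, n_controllers))

        # Motor babbling: random activations and the velocities they produce.
        U = rng.uniform(-1, 1, size=(500, n_controllers))
        Qdot = U @ true_map.T + 0.05 * rng.normal(size=(500, n_joints))

        # Unsupervised linear dependency model: least-squares fit of the map.
        coeffs, *_ = np.linalg.lstsq(U, Qdot, rcond=None)
        learned_map = coeffs.T              # shape (n_joints, n_controllers)

        # To drive the joints at a desired velocity, pseudo-invert the model.
        desired_qdot = np.array([0.2, -0.1, 0.0, 0.3])
        activation = np.linalg.pinv(learned_map) @ desired_qdot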


  4. Active Segmentation

    • Cog uses active exploration to resolve visual ambiguity in its workspace. Objects can sometimes be difficult to locate if their visual appearance is similar to the general background. Cog solves this problem by sweeping its arm through regions of interest. If no object is there, the arm passes unimpeded. If an object is present, the impact between it and the robot's arm causes the object to move, revealing its boundary.
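
      The core comparison can be sketched in a few lines; the grayscale input, the fixed threshold, and the downstream masking-out of the arm's own pixels are assumptions of the sketch:

        import numpy as np

        def segment_by_sweeping(frame_before, frame_after, threshold=25):
            """Compare grayscale frames (uint8 arrays of equal shape) taken
            just before and just after the arm sweeps a region of interest.
            Pixels that changed belong to the arm or to an object it set in
            motion; masking out the known arm region (done downstream)
            leaves the object's boundary."""
            diff = np.abs(frame_after.astype(np.int16) - frame_before.astype(np.int16))
            return diff > threshold     # boolean motion mask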


  5. Learning How Objects Respond to Actions via a Mirror Neuron Model

    • Cog uses a mirror neuron model to learn how different objects respond to the actions it can perform. If the robot taps an object and it slips and rolls, it learns to predict the direction of slip from visual evidence, and can then use that prediction to deliberately trigger or avoid rolling an object while tapping it. The mirror neuron model also allows the robot to mimic an action demonstrated by a human relative to the natural behavior of the object, rather than its pure geometry.
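
      A hedged sketch of the prediction half of this: a nearest-neighbor map from visual evidence at the moment of contact to the observed roll direction. The two features used here (tap angle and the object's principal-axis angle) are illustrative stand-ins for Cog's actual visual features:

        import numpy as np

        class RollPredictor:
            """Remember (visual evidence, outcome) pairs from past taps and
            predict the outcome of a new tap from the nearest stored pair."""

            def __init__(self):
                self.features, self.outcomes = [], []

            def observe(self, tap_angle, axis_angle, roll_direction):
                self.features.append((tap_angle, axis_angle))
                self.outcomes.append(roll_direction)

            def predict(self, tap_angle, axis_angle):
                if not self.features:
                    return None     # no experience with this kind of object yet
                f = np.array(self.features)
                d = np.hypot(f[:, 0] - tap_angle, f[:, 1] - axis_angle)
                return self.outcomes[int(np.argmin(d))]

        # p = RollPredictor(); p.observe(0.0, 1.2, "rolls_left")
        # p.predict(0.1, 1.1)  ->  "rolls_left"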


  6. Open Object Recognition

    • With open object recognition, the set of objects Cog can recognize grows over time as the robot accumulates experience through active segmentation and other experimental methods. Cog clusters episodes of object interaction to learn the properties of unfamiliar objects, and an operator can introduce names for objects to facilitate further task-related communication.
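
      The open-ended growth can be sketched as online clustering over appearance features; the running-mean prototypes, the distance threshold, and the feature choice (say, a color histogram per segmented view) are assumptions of this sketch:

        import numpy as np

        class OpenObjectRecognizer:
            """Cluster segmented views into object models; open a new cluster
            whenever nothing known is close enough, so the set of
            recognizable objects grows with experience."""

            def __init__(self, new_object_threshold=0.4):
                self.threshold = new_object_threshold
                self.prototypes, self.counts, self.names = [], [], []

            def observe(self, features):
                features = np.asarray(features, dtype=float)
                if self.prototypes:
                    dists = [np.linalg.norm(features - p) for p in self.prototypes]
                    best = int(np.argmin(dists))
                    if dists[best] < self.threshold:
                        # Familiar object: refine its prototype (running mean).
                        self.counts[best] += 1
                        self.prototypes[best] += (features - self.prototypes[best]) / self.counts[best]
                        return best
                # Unfamiliar object: the recognizable set grows by one.
                self.prototypes.append(features.copy())
                self.counts.append(1)
                self.names.append(None)
                return len(self.prototypes) - 1

            def name_object(self, index, name):
                """An operator introduces a name for later communication."""
                self.names[index] = name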


  7. Perceptual Cycle

    • Cog uses the constraints of known activities to learn about the objects used within them, for example the objects handled during a manipulation task. Conversely, Cog can track known objects to learn about the activities in which they occur, such as a sorting task or an object search. By combining these two abilities, learning about objects through activity constraints and about activities through tracked objects, the robot achieves a virtuous cycle of perception.
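
      Schematically, the cycle alternates between the two learning directions; the method names below are hypothetical placeholders for the system's components, not an actual API:

        def perceptual_cycle(robot, episodes):
            """Structure of the virtuous cycle: a recognized activity
            constrains object learning, and recognized objects anchor
            activity learning."""
            for episode in episodes:
                activity = robot.recognize_activity(episode)
                if activity is not None:
                    # e.g. a known sorting task implies that the tracked
                    # blobs are the objects being sorted
                    robot.learn_objects(episode, constraints=activity)
                else:
                    objects = robot.recognize_objects(episode)
                    if objects:
                        robot.learn_activity(episode, anchors=objects)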


  8. Adaptive Control of Cog’s Arm Using a Nonlinear Sliding-Modes Controller

    • Two degrees of freedom of Cog’s arm operate under non-parametric adaptive control using a nonlinear sliding-modes controller. This sufficiently mitigates the low signal-to-noise ratio in Cog’s arm sensing (the small strain-gauge signal suffers capacitive coupling with other signals) and allows semi-autonomous, task-adequate control.
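
      A toy one-joint simulation of the sliding-mode idea, omitting the adaptive component; the inertia, friction, gains, boundary layer, and noise level are made-up numbers for illustration:

        import numpy as np

        dt, steps = 0.001, 5000
        inertia_true, friction_true = 0.9, 0.35    # unknown to the controller
        inertia_hat = 0.6                          # rough model estimate
        lam, K, phi = 8.0, 4.0, 0.05               # surface slope, gain, boundary layer

        def sat(x):
            return np.clip(x, -1.0, 1.0)

        q, qdot = 0.0, 0.0
        q_des, qdot_des, qddot_des = 1.0, 0.0, 0.0  # step target in joint angle

        for _ in range(steps):
            e, edot = q - q_des, qdot - qdot_des
            s = edot + lam * e                      # sliding surface
            # Model-based term plus a switching term robust to model error;
            # the boundary layer phi trades a little precision for no chatter.
            u = inertia_hat * (qddot_des - lam * edot) - K * sat(s / phi)
            noise = 0.02 * np.random.randn()        # strain-gauge-like noise
            qddot = (u - friction_true * qdot + noise) / inertia_true
            qdot += dt * qddot
            q += dt * qdot

        print(f"final angle: {q:.3f} (target {q_des})")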


  9. Learning Actions and Objects from Observed Use

    • While Cog watches an event in which a person’s arm handles an object (e.g. filing a surface, swinging a pendulum), its vision system both extracts the nature of the arm movement and derives a predictive dynamical model of the object.
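
      For the pendulum example, one sketch of deriving a predictive model from an observed angle trace: estimate the period from zero crossings, then invert the small-angle relation T = 2*pi*sqrt(L/g). The synthetic trace below stands in for the output of visual tracking:

        import numpy as np

        dt = 0.02
        t = np.arange(0, 10, dt)
        true_length = 0.5                           # meters
        omega = np.sqrt(9.81 / true_length)
        theta = 0.3 * np.cos(omega * t) * np.exp(-0.05 * t)   # observed angles

        # Dominant frequency from zero crossings of the angle trace.
        sign = np.signbit(theta).astype(int)
        crossings = np.where(np.diff(sign) != 0)[0]
        period = 2.0 * np.mean(np.diff(t[crossings]))

        # Invert T = 2*pi*sqrt(L/g): an effective length, i.e. a forward
        # model that predicts how this object will swing.
        est_length = 9.81 * (period / (2 * np.pi)) ** 2
        print(f"estimated length {est_length:.3f} m (true {true_length} m)")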


  10. A Compact Linear Series Elastic Actuator Design for a Human-like Neck Joint

    • For a new robotic head, two new coupled neck axes were designed and built using linear series elastic actuators aligned in parallel. The design is compact: the two axes have intersecting centers of rotation. Force control in combination with elastic actuation provides safe, human-like compliance.
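
      The force-control principle behind series elastic actuation fits in a few lines; the spring constant, loop gains, and the load held fixed here are illustrative, not the neck's actual parameters:

        # Spring deflection measures output force; a PI law servos the motor
        # position to hold a desired force at the load.
        k_spring = 2000.0           # N/m, illustrative spring constant
        kp, ki = 0.002, 0.01        # force-loop gains (motor velocity per N)
        dt, f_desired = 0.001, 5.0  # time step (s), target force (N)

        x_motor, x_load, integral = 0.0, 0.0, 0.0
        for _ in range(3000):
            force = k_spring * (x_motor - x_load)   # measured via deflection
            error = f_desired - force
            integral += error * dt
            x_motor += (kp * error + ki * integral) * dt
            # x_load would follow the neck's dynamics; it is held fixed here.

        print(f"force reached: {k_spring * (x_motor - x_load):.2f} N")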


  11. The ALIVE Architecture

    • The ALIVE architecture, consisting of a hardware stack and the CreaL software development environment, controls the new robotic head. The stack is a special-purpose, extensible, real-time, small-form-factor hardware architecture comprising controller boards, sensor boards, a network board, and an off-the-shelf processor. CreaL, which is retargetable, extracts efficient computation from the relatively cheap off-the-shelf processor, supporting many lightweight threads through efficient software scheduling, compilation, and language abstraction. The ALIVE architecture gives the designer complete control over startup and failure sequences, which is essential for continuous, safe robot operation.
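
      In the spirit of many lightweight threads under deterministic software scheduling (and emphatically not CreaL itself, which is its own language, compiler, and runtime), a toy round-robin scheduler using Python generators:

        from collections import deque

        def blink(name, period_ticks):
            """A lightweight 'thread': does a sliver of work, then yields."""
            tick = 0
            while True:
                if tick % period_ticks == 0:
                    print(f"{name}: toggle")
                tick += 1
                yield                     # hand the processor back

        def run(threads, ticks):
            """Deterministic round-robin over all ready threads."""
            ready = deque(threads)
            for _ in range(ticks):
                thread = ready.popleft()
                next(thread)
                ready.append(thread)

        run([blink("led_a", 2), blink("led_b", 3)], ticks=12)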

