NEST Demo Day Description

(Revised 26 March 2002)

PI and Project Title:

Robust Engineering of Network Embedded Systems

Brian Williams, Howard Shrobe and Robert Laddaga


We will demonstrate a layer of capabilities that allows an ensemble of Motes to be controlled by a higher-tier processor as if they were a single entity. The capabilities at this level include: forming groups, flooding commands, calculating a gradient from a reference point, estimating location from three gradients, executing a command at each node, and collating a value over the entire ensemble.
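As an illustration of the last of these operations, collating a value over the entire ensemble can be sketched as an aggregation up a spanning tree rooted at the base station. The function and data structures below are our own illustrative sketch, not the actual middleware API:

```python
# Hypothetical sketch of collating a value over the ensemble: each mote
# reports the maximum of its own reading and its children's reports up a
# spanning tree rooted at the base station (node 0). Names and data are
# illustrative only.

def collate_max(tree, readings, node):
    """Return the maximum reading in the subtree rooted at `node`.

    tree: dict mapping node id -> list of child ids
    readings: dict mapping node id -> sensor reading
    """
    best = readings[node]
    for child in tree.get(node, []):
        best = max(best, collate_max(tree, readings, child))
    return best

tree = {0: [1, 2], 1: [3, 4], 2: [5]}
readings = {0: 12, 1: 40, 2: 7, 3: 55, 4: 3, 5: 21}
print(collate_max(tree, readings, 0))  # -> 55
```

The same recursion applies to other collation operators (min, sum, count) by swapping the combining function.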

Technical Description:

We will use 15 motes covering a large table (or similar floor area). No special needs are anticipated.

Relationship to the Challenge Problem:

We are building an environment at MIT, very similar to the Berkeley Challenge Problem, to drive our research. We will be using several mobile rovers already available to us in the Space Systems Lab; each rover is equipped with a camera. Our long-term goal is to demonstrate the use of the Kirk/Titan distributed execution environment to control the rovers as they pursue an evader, using both their cameras and the motes as sensors.


The planned demonstration will illustrate the ability of the high-level system to interface with the ensemble of motes at an appropriate level of abstraction. We will show the ability to establish communication groups, route messages to specific processors, flood messages to all processors, send messages that accumulate hop counts, and use those counts to estimate position. Finally, we will show the ability to detect a light source and track it as it moves across the processor ensemble. All of these will be illustrated using the UI on a base station as well as the lights on the processors.


The middleware we are building at this point is a layered set of programming constructs. The first layer consists of two queues: the message queue captures incoming messages from the network and sensor events, and the task queue captures tasks to be run at less than interrupt priority. The main loop of this layer runs at the same level as interrupt handling; each iteration handles all messages in the message queue and the first task on the task queue. We are developing a high-level, Lisp-like language for describing the message formats, the task-descriptor formats, the message handlers, and the task handlers. We will be developing compilation tools that take these descriptions as a whole and generate the memory layout.
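The two-queue discipline above can be modeled in a few lines; this is a minimal sketch under our own naming, not the middleware's code. The key property it shows is that each iteration drains the whole message queue but runs at most one task, so message handling is never starved by long-running tasks:

```python
from collections import deque

message_queue = deque()   # incoming network messages and sensor events
task_queue = deque()      # work to run at less than interrupt priority

message_handlers = {}     # message type -> handler (illustrative registry)
task_handlers = {}        # task type -> handler

def main_loop_iteration():
    # Handle ALL pending messages...
    while message_queue:
        kind, payload = message_queue.popleft()
        message_handlers[kind](payload)
    # ...but only the FIRST pending task.
    if task_queue:
        kind, payload = task_queue.popleft()
        task_handlers[kind](payload)

# Example: a sensor event posts a task; the task records the reading.
log = []
message_handlers["light"] = lambda v: task_queue.append(("record", v))
task_handlers["record"] = log.append

message_queue.append(("light", 42))
main_loop_iteration()  # drains the message, then runs the posted task
print(log)  # -> [42]
```

On an actual Mote the handlers would be generated from the Lisp-like message and task descriptions, and the loop would run at interrupt-handling level as described above.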

This layer is then used to implement the next layer of capability, which consists of group formation, flooding a common command, gradient transmission, and coordinate estimation. That layer, in turn, supports a top layer that performs other array-like operations over the ensemble.
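Gradient transmission and coordinate estimation can be sketched as follows, under two simplifying assumptions of ours (not the middleware's actual algorithm): a flood from a reference node assigns each mote a hop-count gradient, and hop counts scaled by radio range approximate distances, so position follows from three gradients by standard trilateration:

```python
from collections import deque

def hop_gradient(neighbors, root):
    """BFS flood: hop count from `root` to every reachable mote.
    neighbors: dict mapping mote id -> list of radio neighbors."""
    hops = {root: 0}
    frontier = deque([root])
    while frontier:
        n = frontier.popleft()
        for m in neighbors[n]:
            if m not in hops:
                hops[m] = hops[n] + 1
                frontier.append(m)
    return hops

def trilaterate(anchors, dists):
    """Estimate (x, y) from three anchor positions and estimated
    distances, by subtracting circle equations pairwise to obtain a
    2x2 linear system."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    r1, r2, r3 = dists
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    b1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

# Exact-distance check: a mote at (3, 4) with anchors at three corners.
print(trilaterate([(0, 0), (10, 0), (0, 10)],
                  [5.0, 65 ** 0.5, 45 ** 0.5]))  # -> approximately (3.0, 4.0)
```

In practice hop counts only approximate distance, which is why our localization-accuracy metric below is normalized by mote density and radio range.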

We believe that all of these are basic components of the middleware that will be needed for the Challenge Problem. In addition, we will be working on the Kirk/Titan system for plan execution and monitoring, which we believe will be valuable for controlling the rovers and the sensor fields in the Challenge Problem but which is planned for a later demonstration (we currently believe that Kirk/Titan will run mainly in the higher-tier systems, as opposed to on the Motes).

We believe that error and uncertainty management are an inherent part of this problem space. However, we are not planning to deal with these issues in any substantial way until after the next PI meeting.


Evaluation Criteria

There are two main forms of evaluation relevant to this phase of our effort. First, we are designing a programming-language interface that allows higher-level systems to control ensembles of motes. This language can be evaluated in terms of how well it can be compiled and how easily and naturally basic operations can be coded in it. For explicit measures of compilability, we intend to measure the size of compiled files in bytes, and secondarily the time to compile (in seconds), for a series of small test cases, as compared to native Mote coding. For explicit measures of naturalness of coding, we will compare the number of lines of source code to the number of lines of native code for the same series of small tests. Here, a reduction in the number of lines by a factor of 5 (or more) would be acceptable.

The second criterion is a performance criterion concerning the intended task: detecting and tracking a light source. This can be measured by end-to-end performance as well as by component-level metrics: speed of notification of neighbors, routing efficiency, etc. More specifically, we intend to measure:

1. localization latency – the time to obtain the location of a light source from a sensor network of Motes, normalized by the number of motes and the accuracy goal.

2. localization accuracy – the precision of the localization measurement, normalized for the number and density of motes and for the ratio of the network's total diameter to a single Mote's radio range.

3. scalability – the effect of the number of light sources and sensors on latency (in milliseconds) and on accuracy (in terms of radius of error).

4. fault tolerance – latency and accuracy as above, with induced failures of 1, 3, 5, and one-half of the sensor Motes.

5. tracking accuracy – the mean squared error of the measured light path compared to the actual light path.
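As a small worked example of the tracking-accuracy metric (item 5), the mean squared error can be computed over the measured and actual paths sampled at common time steps; the function name and sample paths below are illustrative:

```python
# Mean squared error between a measured light path and the actual path,
# both given as (x, y) samples at the same time steps, in the same
# units as the mote coordinates. Illustrative sketch, not demo code.

def path_mse(measured, actual):
    assert len(measured) == len(actual)
    sq = [(mx - ax) ** 2 + (my - ay) ** 2
          for (mx, my), (ax, ay) in zip(measured, actual)]
    return sum(sq) / len(sq)

actual   = [(0, 0), (1, 0), (2, 0), (3, 0)]
measured = [(0, 0.1), (1.1, 0), (2, -0.1), (2.9, 0)]
print(path_mse(measured, actual))  # -> approximately 0.01
```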

Collaboration Notes

We intend to work closely with other members of the Berkeley group, in particular David Culler, Deborah Estrin, and Cordell Green. We also plan to interact with John Rushby.