%% from Tom Toffoli
\documentstyle{pd}
%% TO PHYSICA D
%%Who was the fool that added the final periods in the pd.sty sections?
%%Who tampered with footnotes? Look at what it does when you reach two digits!
%%I went back to normal section references
%\def\thesection {\arabic{section}.}
%\def\thesubsection {\thesection\arabic{subsection}.}
%\def\thesubsubsection {\thesubsection \arabic{subsubsection}.}
\def\thesection {\arabic{section}}
\def\thesubsection {\thesection.\arabic{subsection}}
\def\thesubsubsection {\thesubsection.\arabic{subsubsection}}
%\catcode`\@=11
\def\@sect#1#2#3#4#5#6[#7]#8{\ifnum #2>\c@secnumdepth
\def\@svsec{}\else
\refstepcounter{#1}\edef\@svsec{\csname the#1\endcsname\hskip 1em }\fi
\@tempskipa #5\relax
\ifdim \@tempskipa>\z@
\begingroup #6\relax
\@hangfrom{\hskip #3\relax\@svsec}{\interlinepenalty \@M #8\par}
\endgroup
\csname #1mark\endcsname{#7}\addcontentsline
{toc}{#1}{\ifnum #2>\c@secnumdepth \else
\protect\numberline{\csname the#1\endcsname}\fi
#7}\else
\def\@svsechd{#6\hskip #3\@svsec #8\csname #1mark\endcsname
{#7}\addcontentsline
{toc}{#1}{\ifnum #2>\c@secnumdepth \else
\protect\numberline{\csname the#1\endcsname}.\fi
#7}}\fi
\@xsect{#5}}
%\catcode`\@=12
\catcode`\@=11
%to change captions from normalsize to small
\long\def\mycaption#1[#2]#3{\addcontentsline{\csname
ext@#1\endcsname}{#1}{\protect\numberline{\csname
the#1\endcsname}{\ignorespaces #2}}\par
\begingroup
\@parboxrestore
\small
\@makecaption{\csname fnum@#1\endcsname}{\ignorespaces #3}\par
\endgroup}
\renewcommand\caption{\refstepcounter\@captype \@dblarg{\mycaption\@captype}}
%\catcode`\@=12
%%%% Single-column figure
\newcommand{\figone}[3]{% label, height, caption
\begin{figure}
\hbox{\vrule height#2}
\caption{#3}
\label{#1}
\end{figure} }
%%%% Two-column figure
\newcommand{\figtwo}[3]{% label, height, caption
\begin{figure*}
\hbox{\vrule height#2}
\caption{#3}
\label{#1}
\end{figure*} }
\renewcommand{\mathindent}{1.0em}
\renewcommand{\topfraction}{1}
\renewcommand{\dbltopfraction}{1}
\renewcommand{\bottomfraction}{1}
\renewcommand{\textfraction}{0}
\setcounter{topnumber}{10}
\setcounter{dbltopnumber}{10}
\setcounter{totalnumber}{10}
\newcommand{\sect}[1]{\S\ref{sect.#1}} % Ref. to section or subsection
% to appear as, say, $2.37
\newcommand{\eq}[1]{(\ref{eq.#1})} % Ref. to equation
% to appear as, say, (2.37)
\newcommand{\foot}[1]{footnote \ref{foot.#1}} % Ref. to footnote
% to appear as, say, `footnote 7'.
\newcommand{\fig}[1]{Fig.~\ref{fig.#1}}
\newcommand{\theor}[1]{Theorem \ref{theor.#1}}
\newcommand{\lem}[1]{Lemma \ref{lem.#1}}
\newcommand{\conj}[1]{Conjecture \ref{conj.#1}}
\newcommand{\parhead}[1]{{\sl #1}.\quad} % Distinguished paragraph heading; slanted and followed by \quad
\newtheorem{theorem}{Theorem}[section]
\newtheorem{lemma}[theorem]{Lemma}
\newtheorem{conjecture}[theorem]{Conjecture}
\newenvironment{proof}{\par \small {\sl Proof.\ }}{\par\medskip}
\newcommand{\by}{$\times$} % 3\by4 will spell 3x4
\newcommand{\inmap}[1]{\mathop{\to}\limits^{#1}}
\newcommand{\inmapsto}[1]{\mathop{\mapsto}\limits^{#1}}
\def\ps{{\sc ps}}
\def\ca{{\sc ca}}
\def\Ca{{\sc Ca}}
\def\ica{{\sc ica}}
\def\Ica{{\sc Ica}}
\def\lor{{\sc or}}
\def\lxor{{\sc xor}}
\def\twosquares{1.48in}
\def\threesquares{0.98in}
\begin{document}
%% intro.tex
\title{INVERTIBLE CELLULAR AUTOMATA: A REVIEW}
\shorttitle{Invertible Cellular Automata}
\author{Tommaso Toffoli and Norman Margolus}
\address{MIT Laboratory for Computer Science\\Cambridge, MA 02139, USA}
\shortauthor{T. Toffoli, N. H. Margolus}
\received{15 January 1990}
\revised{15 January 1990}
%\volume{999} \issue{9}
%\firstpage{19} \lastpage{25}
\begin{abstract}
In the light of recent developments in the theory
of invertible cellular automata, we attempt to give a unified
presentation of the subject and discuss its relevance to computer
science and mathematical physics.
\end{abstract}
\maketitle
\section{Introduction}
\subsection{Preliminaries}
One of the goals of computer science is to provide abstract models of
concrete computers, i.e., of whatever computing apparatus can
ultimately be built out of the physical world. In this, one seeks {\em
expressiveness} (whatever aspects of the computer are deemed relevant
should be captured by the model) and {\em accuracy} (whatever one can
prove about the model should be true about the computer). {\em
Invertible cellular automata} (\ica) are an important development in
this direction, of significance comparable to the introduction of
Turing machines (and similar paradigms of effective computation) in
the late '30s.
\medskip
Turing machines represent a conscious effort\cite{turing} to capture,
in axiomatic form, those aspects of physical reality that are most
relevant to computation. In this, they are quite unlike Cantor's
transfinite sets or similar inventions of the romantic era, and much
more like Euclid's principles, which were meant to describe the
geometry of our physical world (and one should not forget that, for the
Greeks, computation was synonymous with geometrical construction).
Cellular automata are more expressive than Turing machines, insofar as
they provide explicit means for modeling {\em parallel} computation on
a spacetime background. However, both classes of models are
indifferent to one fundamental aspect of physics, namely {\em
microscopic reversibility}, and thus help create the illusion that,
computationally speaking, one lives in a fairy world where the second
principle of thermodynamics is not enforced, and ``perpetual
computation'' (in the same sense as ``perpetual motion'') is possible.
By explicitly coming to terms with this aspect of reality, {\em
invertible cellular automata} provide a modeling environment that is
more accurate and, in the end, more productive.
\medskip
Only a few years ago, what was known about \ica\ could be summarized
in a few lines---and wasn't very exciting either. Today, one can tell
a more interesting story, and we shall try to do so in this paper.
Our involvement with \ica\ represents the
convergence of several research trails, including
\begin{itemize}
\item Concurrent computation in networks having uniform structure.
\item Reversible (or ``information preserving'') computing processes.
\item Fundamental connections between physics and computation.
\item Foundations of relativity.
\item Quantum computation.
\item Fine-grained modeling of physical systems.
\item High-performance simulation of cellular automata.
\item Data encryption.
\end{itemize}
\subsection{An apology}
We have many things to say, and we have a quite varied audience in
mind. We want to make sure that the readers perceive the essential
issues before plunging into endless technicalities. For these
reasons, we shall follow an informal approach when this seems to
enhance clarity, and we shall often present a specific example rather
than the most general case.
\section{Invertible cellular automata}\label{sect.invca}
\subsection{Cellular automata}\label{sect.ca}
Cellular automata are abstract dynamical systems that play a role in
discrete mathematics comparable to that played by partial differential
equations in the mathematics of the continuum. In terms of structure
as well as applications, they are the computer scientist's counterpart
to the physicist's concept of a `field' governed by `field equations'.
It is not surprising that they have been reinvented innumerable times
under different names and within different disciplines. The canonical
attribution is to Ulam and von Neumann\cite{ulam,vonneumann} (circa
1950).\footnote
{At about the same time but quite independently,
Zuse\cite{zuse69} proposed structures, intended as digital models of
mechanics, that are essentially cellular automata.}
\medskip
Concise formal definitions are given in \sect{formal} (for a more
complete formal treatment, see \cite{toffoli-thesis,ica-full}).
Intuitively, a {\em cellular automaton} is an indefinitely extended
network of trivially small, identical, uniformly interconnected, and
synchronously clocked digital computers.
More specifically, we start with an indefinitely extended
$n$-dimensional lattice which represents ``space''
(typically, $n$ equals 1, 2, or 3 in physical modeling applications).
To each site of this lattice, or {\em cell}, there is
associated a state variable, called the {\em cell state}, ranging over
a finite set called the {\em state alphabet} (typically, the cell
state consists of just a few bits' worth of data).
``Time'' advances in discrete steps; the dynamics is given by an
explicit recipe, called the {\em local map}, which is used
at every time step by each cell to determine its new state from the
current state of certain cells in its vicinity.
The local map itself is the composition of two operators, namely, the
{\em neighborhood}, which specifies {\em which} cells affect the given
cell, and the {\em table}, which specifies {\em how} those cells
affect it. In more detail,
\begin{itemize}
\item The {\em neighborhood} lists the relative positions,
with respect to the generic cell (in this context also called {\em
center cell}\/), of a finite number of cells called the cell's {\em
neighbors}. (The neighbors of a cell need not coincide with the
cell's ``first neighbors'' in the array. They may include the cell
itself or cells that are several sites away, while cells in between
may be skipped. All that's required is that they be finite in number,
and be arranged in the same spatial pattern with respect to each
cell.)
\item The {\em table} takes the states of a cell's neighbors
as arguments, and returns as a result the cell's corresponding new
state.
\end{itemize}
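These definitions are compact enough to admit an executable sketch. The following Python fragment is purely illustrative (the particular rule, the function names, and the finite periodic one-dimensional lattice are our own choices, made only for concreteness): a neighborhood is a tuple of relative displacements, a table is a function of the neighbors' states, and one time step applies the table synchronously at every site.

```python
NEIGHBORHOOD = (-1, 0, 1)          # relative positions of a cell's neighbors

def table(left, center, right):
    """An arbitrary example table over the alphabet {0, 1}
    (this happens to be elementary rule 110; any table would do)."""
    return {(1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
            (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0}[
        (left, center, right)]

def step(config, table=table, neighborhood=NEIGHBORHOOD):
    """One synchronous step on a periodic 1-D lattice: every cell
    computes its new state from the current states of its neighbors."""
    n = len(config)
    return [table(*(config[(i + x) % n] for x in neighborhood))
            for i in range(n)]
```

Iterating \texttt{step} from an initial configuration produces the sequence of configurations $q^0,q^1,q^2,\ldots$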
\medskip
Thus, a cellular automaton's laws are {\em local} (no
action-at-a-distance) and {\em uniform} (the same rule applies to all
sites at all times); in this respect, they reflect two fundamental
aspects of physics. Moreover, the system's laws are {\em finitary},
that is, by means of the local map one can explicitly construct {\em
in an exact way} the forward evolution of an arbitrarily large portion
of a cellular automaton through an arbitrary length of time, all by
finite means. It is this strong effectiveness, built into their
definition, that makes dynamical systems based on cellular automata
appealing to the computer scientist. In continuous dynamical systems,
such as those defined by differential equations, the state variables
range over an uncountable set, and one has to accept a weaker standard
for ``effectiveness;'' i.e., a specification of the dynamics is
accepted as effective if the state of any finite portion of the system
at any future time can be computed with an {\em arbitrarily small
error} by finite means. In cellular automata, we demand and obtain
zero error.
In this sense, cellular automata present themselves as a finitary alternative
to the methods of the calculus in the modeling of spatially extended
systems.
\subsection{Invertibility}\label{sect.invertibility}
An assignment of states to all cells, i.e., a state for the entire
cellular automaton, is called a {\em configuration}. By applying the
local map to every site of the array, from any configuration $q$ one
obtains a new configuration $q'$, called its {\em successor}.
Thus, the local map defines a transformation $q\inmapsto\tau q'$,
called the {\em global map}, on the set of configurations.
A cellular automaton is {\em invertible} if its global map is
invertible, i.e., if every configuration---which, by definition, has
exactly one successor---also has exactly one predecessor.
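This definition can be probed by brute force on a finite periodic lattice. The sketch below (our own construction, not a decision procedure) checks whether the induced global map is a bijection on an $n$-cell ring: if the check fails for some ring size, the automaton is certainly not invertible, while success on one ring size by itself proves nothing about the infinite lattice.

```python
from itertools import product

def global_map(config, table, neighborhood):
    """Apply the local map at every site of a ring of len(config) cells."""
    n = len(config)
    return tuple(table(*(config[(i + x) % n] for x in neighborhood))
                 for i in range(n))

def bijective_on_ring(table, neighborhood, alphabet, n):
    """Is the induced global map a bijection on the n-cell ring?
    (Exhaustive: enumerates all |alphabet|**n configurations.)"""
    images = {global_map(q, table, neighborhood)
              for q in product(alphabet, repeat=n)}
    return len(images) == len(alphabet) ** n
```

For instance, the cell-wise complement is invertible, while the rule that XORs a cell's two first neighbors fails already on a four-cell ring (every image configuration has even parity).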
\medskip
In the context of dynamical systems, invertibility coincides with
what the physicists call `microscopic reversibility'. This should not
be confused with `invariance under time reversal', which is a stronger
property.\footnote
{\protect\label{foot.time-reversal} Let $\tau:Q\to Q$ be an
invertible dynamical system. A bijective mapping $\phi:Q\to Q$
obeying appropriate regularity properties (e.g., continuity,
translation invariance, etc., depending on the context) is called a
{\em time-reversal} operator if $\tau^{-t}=\phi^{-1}\tau^t\phi$; the
system is {\em invariant under time reversal} if it admits of such an
operator. Thus, a time-reversal invariant system is not only
invertible but also {\em isomorphic to its inverse}. Hamiltonian
mechanics has the well-known time-reversal operator $\phi: \langle
q,p\rangle\mapsto\langle q,-p\rangle$.}
\medskip
Initially, cellular automata were used chiefly as ``toy models'' for
phenomenology associated with dissipative (i.e., macroscopically
irreversible) processes; typical topics were biological
organization\cite{linden,gardner70},
self-reproduction\cite{vonneumann}, chemical reactions, and visual
pattern processing\cite{preston}. Since the interest lay more in
exploring the {\em consequences} of irreversible behavior than in
its {\em origins}, it was not only harmless but actually expedient
(given the severely limited computing resources) to use models where
irreversibility happened to be built-in.\footnote
{This interest is not abating; see, for instance,
\cite{packard85,bennett85,bennett-complexity,ca86}.}
Thus, it is not surprising
that no need was felt for \ica.
\medskip
It should be noted that the issue of invertibility wasn't even
present in the minds of most cellular automata investigators. To the
few to whom it was, it wasn't at all clear whether \ica\ could
actually lend themselves to the modeling of microscopically
reversible physics.
The perceived difficulties were of two kinds. On one hand, there were
no practical procedures known for constructing nontrivial \ica; on
the other, it was suspected and argued that \ica\ did not have
adequate computing capabilities.
Both of these difficulties have now been amply removed. Indeed, \ica\
have become an important tool of computational physics in applications
concerned with the explicit modeling of reversible phenomena.
Moreover, they are playing an increasingly important role as
conceptual tools of theoretical physics.
\subsection{Historical notes}\label{sect.hist}
As already mentioned, cellular automata were initially treated as
some sort of conceptual erector set---a plaything for
interdisciplinary biologists and computer scientists---and drew
little attention from professional mathematicians. This may explain
why the issue of invertibility---which in mathematical systems
theory is one of obvious priority---was slow to be explicitly
recognized by the cellular automata community.
\medskip
In 1962, Moore\cite{moore} asked whether there could exist
``Garden of Eden'' configurations---i.e., configurations that don't
have a predecessor---and proved that, under certain conditions, if a
configuration has more than one predecessor then there must be a
configuration that has none. The converse was
proved by Myhill\cite{myhill} in 1963.
Moore's and Myhill's results originated a lengthy ``Garden of Eden''
debate (see references in \cite{sato}), which brought to light a
number of subtle issues somehow related to invertibility. But
invertibility was explicitly addressed only in 1972, in seminal papers
by Richardson\cite{richardson} and Amoroso and
Patt\cite{amoroso}.\footnote
{Unbeknownst to those authors, systems that are in essence
one-dimensional cellular automata had already been studied in an
abstract mathematical context by Hedlund and associates as early as
1963\cite{hedlund63,hedlund69}; both Richardson's results on
invertibility (\sect{lemmas}) and Patt's search for \ica\ (\sect{guarded}) had
been anticipated by Hedlund's school.}
After that, theoretical work on invertibility in cellular automata
proliferated\cite{amo-cooper,sato,nasu,maruoka76,maruoka79,maruoka82,yaku,jacopini}.
In spite of that work, however, for many years the most interesting
\ica\ actually exhibited remained an extremely simple-minded one (the
longest orbit is of period two!), discovered by Patt through
brute-force enumeration\cite{patt}. \Ica\ continued to ``appear to be
quite rare''\cite{amoroso}.
Not only rare, but also simple-minded! On the basis of various kinds
of circumstantial evidence (cf., e.g., \cite{smith}), Burks
conjectured that an \ica\ cannot be computation-universal\cite{burks}
(note that that was at a time when computation universality was being
turned up under almost every stone), and soon Aladyev\cite{aladyev}
appeared to have proved Burks' conjecture.
Finally, except for the one-dimensional case\cite{amoroso}, no one
even knew of a systematic procedure for telling whether or not a
cellular automaton was invertible.
In sum, for a long time \ica\ seemed to lack any appeal or
promise.
\medskip
Following fundamental results by Bennett on invertible Turing
machines\cite{bennett-turing}, in 1976 one of us (Toffoli) proved the
existence of \ica\ that are computation- and
construction-universal\cite{toffoli-univ}, and noted the relevance of
this, in principle, to the modeling of physics\cite{toffoli-thesis}.
In the same year, and unnoticed by the (then meager) cellular
automata establishment, Pomeau\cite{hpp} discussed, as a model for
hydrodynamics, a ``lattice gas'' that is in fact an \ica\ and a
useful stylization of certain microscopically reversible physical
interactions.
Independently, Fredkin had been studying invertible
recurrences as models of dynamical behavior; had arrived at
techniques for synthesizing arbitrary sequential behavior
out of invertible Boolean primitives\cite{fredkin}; and had
studied a class of \ica\ (see \sect{second-order}) that displayed some
analogy with Lagrangian mechanics.
\medskip
Rapid, synergistic developments finally started taking place in the
early '80s.
In 1981, a conference on ``Physics and Computation''\cite{endicott}
explicitly addressed the theme of {\em fundamental} connections
between physics and computer science (rather than more incidental
ones, such as computer programs for the numerical integration of
differential equations). Ideas such as ``virtually nondissipative
computation''\cite{fredkin} and ``quantum
computation''\cite{benioff,feynman-quantum} started gaining
legitimacy.
The links between a number of physicists and computer scientists
interested in these themes were tightened by a follow-up workshop on
Moskito Island (1982), where an early prototype of a cellular automata
machine was demonstrated by us.
Wolfram's 1983--86 sortie into the cellular automata
arena\cite{wolfram-stat,wolfram-univ,wolfram-comp,wolfram-hydro,wolfram-book},
stimulated by that workshop, was in turn a determining factor in
introducing a generation of mathematical physicists to the
cellular-automaton paradigm.
Inspired by Fredkin's ``billiard-ball'' model of
computation\cite{fredkin}, Margolus arrived in 1983 at a very
simple computation-universal \ica\cite{margolus-bbm} that is
suggestive of how a computer could in principle be built out of
microscopic mechanics. At about the same time,
Vichniac\cite{vichniac} and Creutz\cite{creutz} pioneered the use of
cellular automata for the microcanonical modeling of Ising spin
systems.
\medskip
The introduction of dedicated cellular automata machines\cite{cam5}
encouraged much new experimental work on \ica,
and stimulated further theoretical developments.
For instance, according to Pomeau, seeing (at a second Moskito Island
workshop, in 1984) his lattice-gas model running on one of these
machines made him realize that what had been conceived primarily as a
{\em conceptual} model could indeed be turned, by using suitable
hardware, into a {\em computationally accessible} model. This
stimulated his interest in finding lattice-gas rules that would
provide better models of fluids. In the past few years, lattice-gas
hydrodynamics has grown into a substantial scientific business (see
\sect{lattice-gas}).
In turn, the growing interest in fine-grained models of physics and in
their potential applications to important practical problems created
the need for computers capable of handling large models of this kind
much more efficiently than conventional scientific
computers\cite{margolus-super}. A second generation of cellular
automata machines, whose development is almost complete\cite{cams,pm},
reflects in its architecture the objective of efficiently supporting the
simulation of \ica---which are likely to constitute a major portion of
its fare.
\subsection{Terminology}\label{sect.formal}
The present section complements the informal terminology introduced in
\sect{ca}--\ref{sect.invertibility} with precise definitions and
notation. Refer to \cite{hedlund69,pde} for
more abstract, but equivalent, definitions given in terms of
continuity in the Cantor-set topology.
\smallskip
\parhead{Space} Let $S=Z^n$ denote the Abelian group of translations of an
$n$-dimensional lattice $I$ onto itself. It will be convenient to call
the elements of $S$ {\em displacements}. The sum and the difference of
two displacements are again displacements. The application of a
displacement $s\in S$ to a site $i\in I$ yields a new site denoted by
$i+s$. The {\em difference} $i'-i$ between two sites is the displacement
$s$ such that $i'=i+s$.
\smallskip
\parhead{Interconnection} A {\em neighborhood} is a finite set of
displacements (i.e., a subset of $S$). Typically, a neighborhood $X$
is applied as an operator to a site $i$, yielding a set of sites.
That is, the {\em $X$-neighborhood} (or simply the {\em neighborhood},
when $X$ is understood) of $i$ is the set $i+X=\{i+x|x\in X\}$; the
elements of $i+X$ are the {\em neighbors} of $i$, and are naturally
indexed by the elements of $X$.
The {\em radius} of $X$ is the length of its longest element.\footnote
{For our purposes, the Euclidean metric will do, even though
it's overkill.}
\smallskip
\parhead{State} Given a lattice $I$ and a nonempty state alphabet $A$,
the set $Q$ of {\em configurations} (of $A$ over $I$) is the Cartesian
product of copies of $A$ indexed by the set $I$, i.e., $Q=A^I$. The
$i$-th component, in this product, of a configuration $q$ is called
the state of site $i$ in configuration $q$, and is denoted as usual by
$q_i$. Note that $q_i\in A$ can be thought of as the result of
applying to configuration $q\in A^I$ the projection operator $[i]$
associated with the $i$-th coordinate of the Cartesian product, i.e.,
$q_i=[i]q$.
More generally, a {\em neighborhood projection} operator
$[i+X]$ will extract from a configuration $q$ the collective state of
the neighbors of $i$, denoted by $[i+X]q$ or $q_{i+X}$. Note that
$q_{i+X}\in A^X$.
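For concreteness, the projection $[i+X]$ can be sketched on a finite periodic stand-in for a configuration (illustrative code; the name and the periodic boundary are our own choices):

```python
def neighborhood_projection(q, i, X):
    """[i+X]q: the collective state of the neighbors of site i,
    as a tuple indexed by the elements of X (an element of A**X).
    A finite list with periodic wrap-around stands in for A**I."""
    n = len(q)
    return tuple(q[(i + x) % n] for x in X)
```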
\smallskip
\parhead{Dynamics} A {\em local map} is a pair $\lambda=\langle
X,f\rangle$, where $X$ is a neighborhood and $f$ a {\em table}, i.e., a
function of the form $f:A^X\to A$. The table $f$ can be applied to any
site $i$ of a given configuration $q$ through the agency of the
neighborhood projection operator, which will extract from $q$ and
supply to $f$ the correct set of arguments. Let $q_i'$ be the result
of this application, i.e.,
\begin{equation}\label{eq.table}
q_i'=fq_{i+X}\ (=f[i+X]q).
\end{equation}
The symbol $q_i'$ can be interpreted as the state at site $i$
of a new configuration $q'$. The relation $q'=\tau q$ defines a new
function $\tau:Q\to Q$, called the {\em global map} induced by the
local map $\lambda$.
Note that the local map can be thought of as a function $\lambda:
\langle Q,I\rangle\to A$ defined (cf.\ \eq{table}) by
\begin{equation}
\lambda(q,i)=\langle X,f\rangle(q,i)=fq_{i+X}.
\end{equation}
The sequence of configurations obtained from an
initial configuration $q^0$ by iterating the map $\tau$ will be denoted
by
\begin{equation}
q^0,q^1,q^2,\ldots\quad,
\end{equation}
where the superscript represents the sequence
index rather than an exponent.
A table $f$, formally given as a function of $k$ arguments ($k$ is the
size of the neighborhood $X$), may happen to depend vacuously on some
of these arguments. In that case, one can maximally reduce
neighborhood and table in an obvious way, yielding the {\em effective}
neighborhood and the corresponding table---which together make up the
{\em reduced} local map. Unless otherwise noted, we shall tacitly
assume that local maps are given in reduced form.
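The reduction just described is mechanical. A sketch (our own naming; tables are taken as Python functions of $k$ arguments) that detects vacuous arguments and returns the effective neighborhood together with the reduced table:

```python
from itertools import product

def reduce_local_map(neighborhood, table, alphabet):
    """Drop the arguments on which the table depends vacuously,
    yielding the effective neighborhood and the reduced table."""
    k = len(neighborhood)
    essential = []
    for j in range(k):
        # argument j is vacuous if changing it alone never changes the result
        depends = any(
            table(*args) != table(*(args[:j] + (b,) + args[j + 1:]))
            for args in product(alphabet, repeat=k)
            for b in alphabet)
        if depends:
            essential.append(j)
    reduced_nbhd = tuple(neighborhood[j] for j in essential)

    def reduced_table(*vals):
        # embed the reduced arguments into a full argument tuple;
        # the vacuous positions get an arbitrary fixed letter
        full = [alphabet[0]] * k
        for j, v in zip(essential, vals):
            full[j] = v
        return table(*full)

    return reduced_nbhd, reduced_table
```

For example, a table that simply copies the center cell, presented over the neighborhood $(-1,0,1)$, reduces to the neighborhood $(0)$ with the identity table.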
\section{Universality}
Little needs to be added here on the universality theme.
In \cite{toffoli-univ}, the computation universality of \ica\ was
proved by showing that every computation-universal cellular automaton
can be embedded in an invertible one having one more dimension. This
left open the question of whether {\em one}-dimensional \ica\ could be
computation-universal. A positive answer was recently given by Morita
and Harao\cite{morita89}.
The constructions of \cite{toffoli-univ,morita89} are more concerned
with existence than with efficiency. More direct constructions can be
more instructive as well as more efficient. Indeed, if one wants to
build a general-purpose computing structure within a cellular
automaton, the most practical approach is to start with a local map
that directly supports logic gates and wires, and then build the
appropriate logic circuits out of these
primitives\cite{banks,cambook}. As explained in
\cite{toffoli-rev,fredkin}, in an {\em invertible} cellular automaton
the gates will have to be invertible; because of this constraint, a
complete, self-contained logic design will have to explicitly provide,
besides circuitry for the desired logic functions, additional
circuitry for functions (analogous to energy supply and heat removal
in ordinary computers) concerned with entropy balance. This issue,
which had been bypassed in \cite{toffoli-univ}, is directly addressed
by \ica\ such as the {\tt BBM} model devised by Margolus\cite{margolus-bbm}.
\section{Decidability}\label{sect.decid}
One of the first questions to come to mind is, of course, ``How
does one tell if a cellular automaton is invertible?'' We'll now present
the essential aspects of this question; further details will be
given in the following sections.
\subsection{A parable}
%\begin{quote}
\hangindent \parindent \hangafter -10 \noindent
{\sc For sale:} Local map $\lambda$ of invertible cellular automaton {\tt
SPRIZE}. Interesting behavior, lots of fun. \$29.95. Call John at
$\times$3194.
%\end{quote}
\medskip
\noindent This ad catches my attention. I already have a bunch of
cellular automata at home that I can run on my personal computer, but
this one claims to be invertible: its global map $\tau$ has
an inverse $\tau^{-1}$, and by running $\tau^{-1}$ I can watch the
automaton go backwards in time! I send my check. Four days later I
receive a diskette containing a data file {\tt SPRIZE.LOC} (which, I
presume, tabulates the local map $\lambda$ of {\tt SPRIZE}). I bring
up my cellular-automaton simulation program, load {\tt SPRIZE.LOC},
initialize the screen with a blob of random junk on a clear
background, and hit the {\sc run} key. The blob starts
churning---perhaps it's spreading a bit---yes, it's spreading, but
very slowly---rather boring, I'd say. Wait---look at that long
caterpillar crawling up the screen! How the heck did it get started?
Can I go backwards in time and see exactly how the caterpillar emerged
out of the random blob? I look in the diskette directory, and I find
a second file labeled {\tt -SPRIZE.LOC}. That must be it---the local
map $\overline\lambda$ of {\tt SPRIZE}'s inverse! I load it. There
goes the caterpillar crawling backwards, curling up into some sort of
cocoon at the edge of the blob, and finally dissolving into
randomness!
I make a few more experiments. Indeed, the cellular automaton {\tt
SPRIZE} defined by the local map $\lambda$ is invertible and
$\overline\lambda$ is the local map of its inverse.
\medskip
That's a happy conclusion---but the story could have ended
differently.
Suppose I didn't find a file {\tt -SPRIZE.LOC}. What good is knowing
that {\tt SPRIZE} is invertible if I don't have an effective way to
run its inverse? Worse yet, in this situation how can I be sure that
John's ad was truthful---that {\tt SPRIZE} {\em is} invertible? Can I
prove it? Or perhaps disprove it and claim a refund?
Assuming that {\tt SPRIZE} is after all invertible, can I find
$\overline\lambda$ by myself, starting from $\lambda$? How long will
that take? By definition, $\overline\lambda$ is a finite object, and
I can sequentially generate all possible candidates. But how would I
recognize the right one? And is the very existence of $\overline\lambda$
guaranteed? In other words, if the global map $\tau$ has a local
description, does it follow that also its inverse $\tau^{-1}$ has a
local description---that the inverse of a cellular automaton is again
a cellular automaton?
\subsection{The finitary connection}
To summarize, for any given cellular automaton we would
like to know whether or not it is invertible; if it is, we would like
to have a finite recipe also for its backward evolution, i.e., the
{\em inverse} local map---as contrasted to the {\em direct} local map
of the forward process.
For the sake of comparison, let's look at a dynamical system defined by a
differential equation. Consider, for example, the evolution of the
temperature distribution $q(x,t)$ along a metal bar, according to the
``heat'' equation
\begin{equation}
{\partial q\over \partial t}={\partial^2 q\over\partial x^2}.
\end{equation}
This equation can be thought of as a local recipe for an
``infinitesimal'' forward step,
\begin{equation}
q(t+dt)\vert_x=
q(t)\vert_x+{\partial^2 q\over\partial x^2}\Big\vert_x dt
\end{equation}
\noindent and by just turning $+$'s into $-$'s one immediately
obtains a local recipe for an infinitesimal backward step. Thus,
whenever the forward evolution is defined, so is the backward
one, and a local inverse recipe is known as soon as a direct one
is. In conclusion, with differential equations there is an immediate
connection between direct and inverse local recipes. (However, one
should keep in mind that, as noted in \sect{invca}, these local
recipes are of a less effective kind than those of cellular automata.)
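The sign-flipping recipe can be sketched numerically (illustrative code, our own discretization; note that, beyond the infinitesimal step discussed here, backward integration of the heat equation is notoriously ill-posed, so the sign-flipped step undoes the forward step only to first order in $dt$):

```python
def laplacian(q):
    """Second spatial difference on a periodic bar (unit spacing)."""
    n = len(q)
    return [q[(i - 1) % n] - 2 * q[i] + q[(i + 1) % n] for i in range(n)]

def forward_step(q, dt):
    """q(t+dt) = q(t) + (d^2 q / dx^2) dt  at each point of the bar."""
    return [qi + d * dt for qi, d in zip(q, laplacian(q))]

def backward_step(q, dt):
    """The same recipe with the +'s turned into -'s."""
    return [qi - d * dt for qi, d in zip(q, laplacian(q))]
```

A forward step followed by a backward step returns the original temperature profile up to an error of order $dt^2$.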
On the other hand, with \ica, the only connection we have
in principle between direct and inverse local maps is through a {\em
non-finitary} route, as is illustrated by the following diagram:
\begin{equation}
\matrix{
\hbox{\em finitary} &&\hbox{\em non-finitary}\cr
\em constructs &&\em constructs\cr
\vrule height3ex width0pt
\hbox{direct local map}&\longrightarrow&\hbox{direct global map}\cr
&&\Big\downarrow\cr
\hbox{inverse local map}&\longleftarrow&\hbox{inverse global map}
}
\end{equation}
Under what conditions can we establish a {\em finitary}
route from a direct local map to an inverse one?
\subsection{The fundamental lemmas}\label{sect.lemmas}
Whatever the technical difficulties in charting such a route, it is
important to know that the endpoint exists and is recognizable, as
shown by the following two lemmas.
\begin{lemma}\label{lem.richardson}
{\rm (Richardson\cite{richardson})} If a cellular automaton is
invertible, then its inverse is a cellular automaton.
\end{lemma}
This is a fundamental result. It tells us that if the global process
described by a local map is invertible, then also the inverse global
process has a local map, i.e., it can be described in local terms. On
the other hand, the proof (which is based on topological arguments of
a general nature) doesn't give any explicit method for constructing
the inverse local map.
\begin{lemma}\label{lem.test}
There is an effective procedure for deciding, for any two
local maps $\lambda$ and $\lambda'$ defined on the same set of
configurations, whether the corresponding global maps $\tau$ and
$\tau'$ are the inverses of one another.
\end{lemma}
\begin{proof} The composition $\lambda''=\lambda'\lambda$ is a new
map operating on the neighborhood consisting of the neighbors
(according to $\lambda$) of the neighbors (according to $\lambda'$) of
the generic cell. If for any value $a$ of this cell the map
$\lambda''$ returns the value $a$, independently of the values of the
other neighbors, then $\tau'\tau$ is obviously the identity
function. If a value different from $a$ is returned for some choice
of values for the other neighbors, then $\tau'\tau$ obviously
differs from the identity. \end{proof}
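The test in this proof is straightforward to mechanize. The following sketch (the code and representation are ours, not from the literature) treats one-dimensional automata whose local maps take a symmetric neighborhood of radius $r$; it composes the two local maps over the combined neighborhood and checks whether the result is the identity on the center cell:

```python
from itertools import product

def is_inverse(lam, r1, lam_p, r2, alphabet):
    """Decide whether local map lam_p (radius r2) inverts local map lam
    (radius r1) for a 1-D cellular automaton, by the composition test of
    the lemma: the composite map, taken over the combined neighborhood,
    must return the center cell's value no matter what the other cells
    contain."""
    R = r1 + r2  # radius of the composed neighborhood
    for w in product(alphabet, repeat=2 * R + 1):
        # images, under lam, of the windows that lam_p will look at
        mid = tuple(lam(w[i:i + 2 * r1 + 1]) for i in range(2 * r2 + 1))
        if lam_p(mid) != w[R]:
            return False  # the composite differs from the identity
    return True
```

For example, the shift rule `lambda w: w[0]` (copy the left neighbor, radius 1) is certified invertible with alleged inverse `lambda w: w[2]` (copy the right neighbor), while the identity rule paired with the complement rule fails the test.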
\medskip
The construction in the above proof provides a way to verify whether
an alleged inverse local map $\lambda'$ of a direct local
map $\lambda$ is indeed such an inverse. This test is our
touchstone for certifying that a cellular automaton is invertible.
Whether the candidate $\lambda'$ was generated by a rigorous
construction, suggested by heuristic methods, dictated by an oracle,
encountered on a search, or even arrived at by faulty arguments, if it
passes the test it can be accepted without further question.
Conceivably, it might be possible to prove the invertibility of a
cellular automaton without exhibiting a local map for its inverse.
However, we don't know of any cases where this has been done. For all
of the \ica\ known today, the inverse local map comes ``bundled,'' as
it were, with the direct one.
\subsection{The fundamental theorems}\label{sect.theorems}
From \lem{test} one immediately obtains
\begin{theorem}\label{theor.re}
The class of invertible cellular automata is recursively
enumerable.
\end{theorem}
\begin{proof}
Sequentially generate all local maps, say, in order of
increasing complexity (measured in terms of state-alphabet size
and neighborhood radius). For each item $\lambda$ in this sequence,
start a new enumeration of all local maps, and for each item $\lambda'$
of this enumeration test whether $\lambda'$ is the inverse local map
of $\lambda$. Each match yields an \ica, and every \ica\ will
eventually turn up in the course of this double enumeration.
\end{proof}
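The heart of this proof is a standard dovetailing construction. As a toy illustration (ours, not from the paper), the following generator enumerates all pairs drawn from two infinite enumerations, diagonal by diagonal, so that any given pair (say, a direct map $\lambda$ and its inverse $\lambda'$) is reached after finitely many steps:

```python
def dovetail(enum1, enum2):
    """Yield all pairs (x, y), with x from enum1 and y from enum2,
    anti-diagonal by anti-diagonal; every pair appears after finitely
    many steps even though both enumerations may be infinite."""
    xs, ys = [], []
    it1, it2 = iter(enum1), iter(enum2)
    while True:
        xs.append(next(it1))
        ys.append(next(it2))
        n = len(xs)
        for i in range(n):  # emit the n-th anti-diagonal
            yield xs[i], ys[n - 1 - i]
```

The pair with indices $(i,j)$ is emitted on the $(i{+}j{+}1)$-th diagonal, i.e., within the first $(i{+}j{+}1)(i{+}j{+}2)/2$ items.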
In other words, invertibility in cellular automata is at least {\em
semi}\/decidable: if a specific cellular automaton is invertible, the
above procedure will positively let us know; however, if it is not
invertible, at no moment in time will we be positively told of that.
How about {\em full} decidability? One partial result was already
mentioned in \sect{hist}, namely,
\begin{theorem}\label{theor.big-plus}
{\rm (Amoroso and Patt\cite{amoroso})} There is an effective
procedure for deciding whether or not an arbitrary one-dimensional
cellular automaton, given in terms of a local map, is invertible.
\end{theorem}
Amoroso and Patt thought that the techniques employed by them
were ``in principle extendable to arrays of higher dimension.'' Since,
however, these techniques were ``difficult to manage beyond dimension
one,'' they expected that ``generalizations of their results to higher
dimensions'' would ``most likely require a different approach.''
Since then, for almost twenty years a quest for these
``generalizations'' to more than one dimension went on with little
success. (Invertibility and related properties for
the one-dimensional case were revisited in
\cite{nasu,wolfram-comp,culik,head}.) Many equivalent
characterizations of \ica\ were
given\cite{yaku,maruoka79,maruoka82,jacopini}, but none that offered a
finitary handle on invertibility.
Finally, quite recently, Kari proved that
\begin{theorem}\label{theor.big-minus}
{\rm (Kari\cite{kari-thesis,kari})} There is no effective
procedure for deciding whether or not an arbitrary two-dimensional
cellular automaton, given in terms of a local map, is invertible.
\end{theorem}
His proof is by reduction to the undecidability of the tiling
problem, and depends on the availability of a set of tiles having
certain properties. He exhibits such a set, but the construction runs
over fifteen pages; given the importance of the result, we hope that a
shorter proof will be found soon.
\medskip
Thus, the invertibility of a cellular automaton is, in general,
undecidable. This has important implications, some of which we intend
to discuss (\sect{structural}). But, lest the readers feel that they
are groping completely in the dark, let us first balance the negative
result of \theor{big-minus} with a body of positive results concerning
\ica.
\section{Ways to make invertible cellular automata}\label{sect.exhib}
Suppose one wanted to get a general feeling for what kinds of behavior
are possible with \ica. One could start with a
good assortment of these automata, and study a number of
cases in detail. The problem is, how does one put together such an assortment?
It turns out that of all cellular automata the invertible ones
constitute a vanishingly small subclass\cite{sears}. Moreover, as
we've seen in \sect{decid}, there is no effective procedure for
determining whether or not an arbitrary cellular automaton (as
specified by a local map) is invertible. Finally, even if one were
willing to fall back on a brute-force search (\theor{re}),
a long search time would generate only a few items,
and even those would be for the most part quite uninteresting.
In our experience, rather than spend an inordinate amount of resources
on a blind search for these objects that are rare, hard-to-recognize,
and more often than~not quite plain, it is more rewarding to attempt
the direct synthesis of special cases having certain desirable
features. In this section, we shall discuss a number of synthesis
techniques which yield a rich variety of \ica.
\subsection{Trivial cases}\label{sect.trivial}
Let us consider the following two cases:
(a) \parhead{Each cell is allowed to look only at itself as a neighbor}
Then the cellular automaton reduces to a collection of finite,
isolated systems---one per cell.
(b) \parhead{Each cell is instructed to copy, say, its {\em left} neighbor}
Then the whole configuration will shift one cell to the {\em right} at
each time step. This `uniform shift' behavior can be factored out of
the dynamics by a simple coordinate transformation of the form
$x\mapsto x-t$. After this transformation, the global map reduces to
the identity, and again the cellular automaton collapses into a
collection of finite, noninteracting systems.
A cellular automaton whose neighborhood consists of at most one
cell---as in (a) or (b) above---is called {\em
trivial}\/\cite{amoroso}.
\medskip
\def\lowf{{\lower.25ex\hbox{$\scriptstyle f$}}}
Clearly, a trivial cellular automaton is invertible if and only if its
effective neighborhood $X$ consists of {\em exactly one} element, and
its table, of the format $A\inmap\lowf A$, is invertible, i.e., is a
{\em permutation} of the state alphabet $A$.
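In code (a sketch, with our representation of the table as a dictionary), the invertibility check for a trivial cellular automaton is just a permutation test:

```python
def trivial_ca_invertible(table, alphabet):
    """A trivial CA, whose table maps A to A cell by cell, is invertible
    iff the table is a permutation of A, i.e. iff its range, counted
    with multiplicity, is exactly A."""
    return sorted(table[a] for a in alphabet) == sorted(alphabet)
```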
\subsection{The incredible shrinking neighborhood}
It would be nice if the invertibility of a cellular automaton could
always be reduced, as in the previous section, to the invertibility of
its table. Unfortunately, when the neighborhood $X$ consists of more
than one element, the table $f$ as such cannot be invertible, since
the two sets that appear in $A^X\inmap\lowf A$ have different
cardinalities.
\def\new{^{\rm new}}
\def\old{^{\rm old}}
Let us concentrate on this point. From the viewpoint of the global map,
$q\old\inmapsto\tau q\new$, the configuration $q\old$
contains all the information needed to construct $q\new$ (via $\tau$);
if the automaton is invertible, also $q\new$ contains all the
information needed to construct $q\old$ (via $\tau^{-1}$). This
symmetry between the two directions of time is not preserved, in
general, when the same dynamics is expressed by a local map
$q\old_{i+X}\inmapsto\lowf q\new_i$; in fact, while the
new state of a cell is completely determined by the old state of its
neighbors, the latter is not completely determined by the
former---{\em even if the cellular automaton is invertible}.\footnote
{Intuitively, the neighbors of site $i$ will also affect sites
other than $i$; in turn, when time is made to flow backward, they may
be affected by sites other than $i$.}
Yet, the only way we know to construct \ica\ is to somehow manage to
overcome the above difficulty of format, and in the end express the
local map in terms of permutations of the state alphabet. Different
ways of doing this are presented in the next three sections; here we'll
try to capture the flavor of this approach.
\medskip
In a trivial \ica\ (cf.\ previous section), a new configuration is
obtained from an old one by applying a given permutation of the state
alphabet, $\pi$, to each cell, i.e., $q_i\old\inmapsto\pi q_i\new$; of
course, $\pi$ itself is independent of $i$. Consider now the set $\Pi$
of {\em all} permutations of the state alphabet, and make a dynamical
system where each site $i$ uses a permutation $\pi_i^t\in\Pi$ that may be
{\em different} from site to site and from moment to moment, i.e.,
\begin{equation}\label{eq.permit}
q_i^{t+1}=\pi_i^t q_i^t.
\end{equation}
This system is not a cellular automaton, because its dynamics
is space- and time-dependent, but is still invertible. In fact, since
the $\pi_i^t$ are assigned once and for all for each $i$ and $t$, the
inverse system is explicitly given by
\begin{equation}
q_i^t=(\pi_i^t)^{-1}q_i^{t+1}.
\end{equation}
Now, the trick to restore space- and time-invariance is to
make the choice of $\pi_i^t$ depend on $i$ and $t$ not directly, but
only indirectly, as a function of the ``landscape'' that can be seen
from site $i$ at time $t$ (i.e., the state of the neighborhood not
including the center cell itself). Now we are back to a cellular
automaton, but, unless we are careful, we may lose invertibility,
since the landscape itself will in general change under the action of
the local map. All that is left to do is make sure of the following:
If the ``old'' landscape of site $i$ told us (by means of a definite
recipe $\rho$) to use permutation $\pi_i^t$ at site $i$ and time $t$
while going forward in time, then the ``new'' landscape of site $i$
must be able to point (by means of a matching recipe $\overline\rho$)
at the same permutation $\pi_i^t$ (which we will then invert) when
going backwards in time.
In \sect{guarded}, this is done by making sure that the relevant
landscape does not change at all, so that $\pi_i^t$ does not in fact
depend on $t$. In \sect{second-order}, only half of the state variables
are allowed to change at each step, while the other half, which does
not change, provides a landscape that is recognizable from either
direction of time-travel; the two halves exchange roles after each
step. Finally, in \sect{partition}, center cell and landscape are
fused into an indivisible block of cells, and the dynamics is effectively
given by a permutation that acts on the entire block rather than on
a single cell.
\subsection{Conserved-landscape permutations}\label{sect.guarded}
In 1971, Patt\cite{patt} conducted a search for nontrivial \ica,
restricting himself to one dimension (where a decision procedure is
known), two states per cell, and contiguous neighbors. He found none
for neighborhoods of size 2 and 3. His search stopped at size 4,
where he found exactly eight cases (out of 65,536). All the eight
cases are variants (obtained by reflection or complementation) of a
single cellular automaton, described by the following local map (where
the center cell is underscored):
\def\PP#1#2#3#4#5{#1\hbox{\underline#2}#3#4\mapsto\hbox{\underline
#5}}
\begin{equation}\label{eq.guard}
{\small
\matrix
{\PP 00000\hfill&\PP 01001\hfill&\PP 10000&\PP 11001\hfill\cr
\PP 00010\hfill&\PP 01011\hfill&\PP 10010&\PP 11011\hfill\cr
\PP 00101\ast &\PP 01100\ast &\PP 10100&\PP 11101\hfill\cr
\PP 00110\hfill&\PP 01111\hfill&\PP 10110&\PP 11111.\cr}
}
\end{equation}
\noindent What makes this cellular automaton invertible? Note that
the local map specifies `no change' except for those two entries
(starred in the table) where the center cell is surrounded by the
landscape $0{\bullet}10$---in which case the cell itself invariably
``flips,'' i.e., complements its state. What property of this landscape
is crucial to the automaton's invertibility?\footnote
{Notice that when the flip is conditioned by a different landscape,
the resulting cellular automaton is not, in general, invertible. For
instance, with the rule where the center cell flips in the landscape
$1{\bullet}00$, the configuration $\ldots 00000100000 \ldots$ has no
predecessors.}
It turns out that, with local map \eq{guard}, changing the state of the
center cell in the landscape $0{\bullet}10$ cannot lead to the
creation or the destruction of other occurrences of that
landscape---the landscape is {\em conserved}. This is easier to verify
if the local map, of the form
\begin{equation}\label{eq.guarded}
q_i\new=\pi_i q_i\old,
\end{equation}
is explicitly tabulated as follows (`$-$' denotes a ``don't care'' argument)
\begin{equation}\label{eq.guarded-table}
\begin{array}{|c@{}c@{}c@{}c@{}c@{}c@{}c|l|}\hline
\multicolumn{7}{|c|}{{\rm landscape}(i)} &
\multicolumn{1}{c|}{\pi_i}\\\hline\hline
-&\bullet&-&\hbox to1em{\hfill1\hfill}&\phantom-&&&\hbox{`no change'} \\
&0&\bullet&1&0&&& \hbox{`flip'}\\
&\phantom-&-&\bullet&0&-&& \hbox{`no change'} \\
&&&1&\bullet&-&-& \hbox{`no change'}\\\hline
\end{array}\quad.
\end{equation}
In these circumstances, if one attempted to ``undo'' one step of the
dynamics, one would know exactly {\em which} cells have just been
changed (because wherever a change was permitted the corresponding
landscape was conserved), and {\em how} to undo the changes (because
the effect of a `flip' can be reversed by flipping again). In fact,
the inverse local map for this dynamics is identical to the direct
one.
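This self-inverse character is easy to confirm numerically. Below is a sketch (ours; cells stored as 0/1 on a periodic array) of the local map \eq{guard}:

```python
def patt_step(q):
    """One step of Patt's size-4 rule: the center cell flips exactly
    when its landscape reads 0 (left neighbor), 1, 0 (the two right
    neighbors); since flips never create or destroy this landscape,
    the map is its own inverse."""
    n = len(q)
    return [q[i] ^ (q[(i - 1) % n] == 0 and
                    q[(i + 1) % n] == 1 and
                    q[(i + 2) % n] == 0)
            for i in range(n)]
```

Applying `patt_step` twice to any periodic configuration returns the original configuration, so the direct map doubles as its own inverse.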
Note that a conserved landscape prevents most of the cells of a
configuration from ever changing state; at a higher hierarchical level,
those cells can be regarded as structural parameters of the machinery
rather than state variables. The remaining cells, which constitute the
effective state variables, are decoupled from each other, leading to
a situation where invertibility is determined in a trivial way, much
as in the previous section.
\medskip
Invertibility in conserved-landscape cellular automata is not limited
to trivial cases. For example, by making use of several conserved
landscapes that partially overlap one another, one can selectively
retain some coupling between cells; in particular, one can construct
\ica\ that simulate any second-order \ica\ (cf.\ \sect{second-order}).
\subsection{Second-order cellular automata}\label{sect.second-order}
We shall give a simple method for obtaining, starting from an
arbitrary cellular automaton, a new one that is invertible and has a
neighborhood at least as large as that of the original---and thus is
nontrivial if the original was nontrivial.
To paraphrase Zeno, if we cut a single frame out of the movie of a
flying bullet, we have no way of knowing what the bullet is doing.
However, if we are given two consecutive frames, then we can figure
out the bullet's trajectory. That is, from these two frames,
interpreted as the bullet's ``past'' and ``present'' positions, we can
construct a third frame giving the bullet's ``future'' position; this
procedure can be iterated.
The laws of Newtonian mechanics happen to be such that, if for some
reason the two frames got exchanged, we would end up figuring the
bullet's trajectory {\em in reverse}. The present approach to
invertibility in cellular automata, suggested by Ed Fredkin of MIT, is
based on the above mechanical metaphor.
\medskip
Let us start with a dynamical system in which the sequence of
configurations that make up a trajectory is given by a relation of the
form
\begin{equation}\label{eq.first}
q^{t+1}=\tau q^t.
\end{equation}
For the moment, we can think of the $q^t$ as real variables.
In general, \eq{first} gives rise to a noninvertible dynamics (i.e.,
there may be no way, or no unique way, to extend the sequence
backwards). For the dynamics to be invertible, $\tau$ itself must be
invertible.
Now, let's consider a new system, defined by the relation
\begin{equation}\label{eq.forward}
q^{t+1}=\tau q^t-q^{t-1}.
\end{equation}
This is an example of a {\em second-order} system, where the
``next'' configuration is a function of both the ``current'' and the
``previous'' one---and thus it takes a {\em pair} of consecutive
configurations to completely determine the forward trajectory. In
general, second-order relations give rise to noninvertible dynamics.
However, a relation of the specific form \eq{forward} guarantees
the invertibility of the dynamics for an {\em arbitrary} $\tau$. In
fact, by solving \eq{forward} with respect to $q^{t-1}$, one
obtains the relation
\begin{equation}\label{eq.back}
q^{t-1}=\tau q^t-q^{t+1};
\end{equation}
that is, a pair of consecutive configurations suffices to
determine in a unique way the {\em backward} trajectory as well.
The above considerations can immediately be applied to cellular
automata (see \cite[Chapter 6]{cambook} for an intuitive
presentation). In equation \eq{first}, let the $q^t$ be
configurations of a cellular automaton, and $\tau$ a global map.
The local map will be of the form
\begin{equation}\label{local}
q_i^{t+1}=fq^t_{i+X}.
\end{equation}
We shall identify the $r$ elements of the state
alphabet $A$ with the integers $0,1,\ldots,r-1$. Then \eq{forward}, with
``$-$'' denoting the {\em difference}\footnote
{Of course, ``$-$'' as an operator on configurations is
induced from ``$-$'' as an operator on single cells, as used for instance
in \protect\eq{forward-local} below, by applying it in
parallel to all sites.}
mod $r$ between two configurations, defines a particular {\em
second-order} cellular automaton whose local map is
\begin{equation}\label{eq.forward-local}
q_i^{t+1}=fq^t_{i+X}-q_i^{t-1}.
\end{equation}
Note that the above equation can be rewritten as
\begin{equation}\label{eq.forward-perm}
q_i^{t+1}=\pi_{q_{i+X}}q_i^{t-1},
\end{equation}
where the permutation $\pi$ that turns $q_i^{t-1}$ into
$q_i^{t+1}$ changes, from site to site, as a function of the
``landscape'' $q_{i+X}$.
With this method, from any ordinary cellular automaton with state
alphabet $A$ and global map $\tau$ one immediately obtains from
\eq{forward-local} a second-order cellular automaton that is
invertible. The latter, in turn, can always be written as an {\em
ordinary} cellular automaton, with state alphabet $A\times A$,
configurations of the form $\langle q^{t-1}, q^t\rangle$ and global
map of the form
\begin{equation}\label{eq.enlarged-alphabet}
\langle q^{t-1}, q^t\rangle\mapsto
\langle q^t,\tau q^t\hbox{$-$}q^{t-1}\rangle.
\end{equation}
Thus, in spite of the great ``rarity'' of \ica, the ones we
can construct are at least ``as many'' as the noninvertible ones!
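As a sketch (ours) of this construction, the following code implements \eq{forward-local} and \eq{back} on a one-dimensional ring, for an arbitrary, and in general noninvertible, local function $f$:

```python
def step_forward(q_prev, q_cur, f, r, m):
    """Second-order step: q_next[i] = f(radius-r neighborhood of i in
    q_cur) - q_prev[i], mod m, on a ring; invertible for ANY f."""
    n = len(q_cur)
    q_next = [(f(tuple(q_cur[(i + d) % n] for d in range(-r, r + 1)))
               - q_prev[i]) % m for i in range(n)]
    return q_cur, q_next  # the new (past, present) pair

def step_backward(q_cur, q_next, f, r, m):
    """Inverse step: q_prev[i] = f(radius-r neighborhood of i in
    q_cur) - q_next[i], mod m."""
    n = len(q_cur)
    q_prev = [(f(tuple(q_cur[(i + d) % n] for d in range(-r, r + 1)))
               - q_next[i]) % m for i in range(n)]
    return q_prev, q_cur
```

Starting from any pair of configurations, running the trajectory forward and then backward for the same number of steps recovers the initial pair exactly, regardless of the choice of $f$.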
\medskip
Second-order recurrences in which the individual state variables are
real numbers (rather than bits) are, of course, routinely used in
constructing finite-difference schemes for Lagrangian systems, and
recurrences of the form \eq{enlarged-alphabet}, in particular, for
obtaining invertible dynamics (cf.\ \cite{courant}). Note, however,
that---no matter whether the state variables are real numbers or
symbols from a finite state alphabet---a permutation operator of the
form $\pi_{q_{i+X}}$, as in \eq{forward-perm}, is much more general
than a difference operator of the form `$fq^t_{i+X}-$', as in
\eq{forward-local}.
\medskip
We shall briefly present two examples of second-order \ica\ that
combine richness of behavior with economy of means.\footnote
{Another example is the earlier cellular-automaton realization
(2 dimensions, 3 states, 9 neighbors) of the ``billiard ball'' model
of computation\cite{fredkin}, discussed by Margolus in
\cite[Appendix A]{margolus-bbm}. The encouraging results
obtained through this construction eventually led
to a more compact realization of the billiard-ball model
via the partitioning technique (\sect{partition}).}
\newcommand{\up}{\uparrow}
\newcommand{\dn}{\downarrow}
\medskip
\figone{fig.q2r}{\twosquares}{Equilibrium configurations of {\tt Q2R}
above the critical energy (left) and at the critical energy (right).}
The {\tt Q2R} rule, introduced by G\'erard Vichniac\cite{vichniac}
(for more detail, see \cite[Chapter 17]{cambook}), is the simplest
microcanonical model of a two-dimensional {\em Ising spin system}.
The two elements of the state alphabet $A=\{\up,\dn\}$ can be thought
of as the two orientations of a spin-$\scriptstyle 1\over2$ particle
tied to each lattice site. The neighborhood consists of the four
``first neighbors'' of a cell (in the four directions of the compass).
The operator $\pi$ in \eq{forward-perm} specifies `flip' if two of the
four neighbors are spin-up and two spin-down, and `no change' in
all other cases. If one assigns one unit of potential energy to each
occurrence of an antiparallel bond ($\up\dn$) between a cell and one
of its neighbors, the above rule is equivalent to flipping a spin
whenever this operation is energetically indifferent. Thus, the state of the
entire spin array moves, in phase space, along a surface of constant
energy.\footnote
{\label{foot.q2r}{\tt Q2R} actually consists of two
intermeshed but independent sublattices, one evolving in the ``white''
squares and the other in the ``black'' squares of a spacetime
checkerboard, and each separately conserving energy. In
\cite{cambook}, we discuss a number of ways to overcome this
redundancy by using cellular automata of slightly different formats.}
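In second-order form, {\tt Q2R} is only a few lines of code. The sketch below (our encoding: spins stored as 0/1 on an $n\times n$ periodic grid, without the sublattice refinement discussed in the footnote) flips the old spin exactly when two of the four current neighbors are up:

```python
def q2r_step(past, pres):
    """One Q2R step: the new spin at (i, j) is the OLD spin, flipped
    exactly when two of the four current first neighbors are up (an
    energetically indifferent move).  Returns the new (past, present)
    pair of frames."""
    n = len(pres)
    nxt = [[past[i][j] ^ (pres[(i - 1) % n][j] + pres[(i + 1) % n][j] +
                          pres[i][(j - 1) % n] + pres[i][(j + 1) % n] == 2)
            for j in range(n)] for i in range(n)]
    return pres, nxt
```

Since a flip is an involution and the deciding landscape is the middle frame, exchanging the two frames and iterating the very same map retraces the trajectory in reverse, just as in the bullet metaphor of \sect{second-order}.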
Innumerable variants and generalizations of this basic model
can, of course, be devised (cf.\ \cite{cambook}). But even in this
bare form the model is adequate for illustrating the richness and the
theoretical challenges of critical phenomena theory (symmetry
breaking, phase transitions, long-range correlations, etc.). \fig{q2r}
illustrates the onset of phase separation as one crosses the critical
temperature.
\medskip
The {\tt SCARVES} rule\cite[p.~97]{cambook}, introduced and
extensively studied by Charles Bennett, is a one-dimensional
system; it supports a variety of ``elementary
particles'' that travel at different speeds and undergo various types
of interaction, as illustrated in \fig{scarves}. The four neighbors
are now the two ``first neighbors'' and the two ``second neighbors''
in the one-dimensional array; except for that, the operator $\pi$ can
be described by the same words as that for {\tt Q2R}.
\figone{fig.scarves}{\twosquares}{A spacetime history from the {\tt
SCARVES} rule. Time progresses rightwards.}
\medskip
Other simple examples of second-order cellular automata relevant to
statistical mechanics are discussed in \cite{cambook} and in
\cite{takesue87,takesue00}; the analogy with Lagrangian and Hamiltonian
mechanics is explored further in \cite{margolus-thesis,kaneko88}.
\subsection{Partitioning cellular automata}\label{sect.partition}
The advantage of trivial cellular automata is that the domain and the
range of the table are the same set, so that invertibility of the
table implies invertibility of the dynamics; the disadvantage, of
course, is that each cell is its only neighbor, so that there is no
communication between cells.
Following \cite{margolus-bbm,margolus-thesis}, let us try to keep the
advantage and remove the disadvantage. At time $0$, let's cut up
space into finite regions, obtaining a partition $P_0$ of the set of
sites (in \fig{grid}, the thick lines delimit regions consisting of
four-cell squares). Let's introduce a new kind of local map, that
takes as input the contents of a region and produces as output the new
state of the {\em whole} region (rather than of a single cell). Such a
map $f_0$ allows information to be exchanged between cells of a
region, but not across region boundaries. If $f_0$ is invertible
(note that the format of $f_0$ is ``four cells to four cells''), the
corresponding global map $\tau_0$ is also invertible. If we kept
iterating $\tau_0$, information would remain locked up within each
region. Instead, at the next step let's use a {\em different}
partition, $P_1$ (thin lines in \fig{grid}), and a local map $f_1$
of the same kind as $f_0$, but acting on the regions of $P_1$. Again,
if $f_1$ is invertible so is the corresponding global map $\tau_1$.
The regions of $P_1$ straddle the boundaries between regions of $P_0$,
and so information that at step 0 had been dammed up within a region
of $P_0$ may at step 1 spill over into adjacent regions. We shall keep
alternating between $\tau_0$ and $\tau_1$, respectively at even and
odd time-steps.
\figone{fig.grid}{\twosquares}{Even (thick lines) and odd (thin lines)
partitions of a two-dimensional array into four-cell blocks. One block
in each partition is shown shaded.}
We have thus achieved our two main goals, namely, we have constructed
a structure, called a {\em partitioning} cellular automaton
(generalizations to larger regions and longer time-step cycles are
obvious), in which (a) global invertibility derives in a
straightforward way from local invertibility, and (b) information can
be transmitted over any distance (i.e., the system is an interacting
whole rather than a collection of isolated subsystems).\footnote
{The definition of partitioning cellular automata introduces a
minor departure from uniformity in space and time; but the full
uniformity of an {\em ordinary} cellular automaton can easily be
restored; in the present example, one would consider ``super-cells''
(each consisting of the four cells that make up a region of partition
$P_0$) and ``super-steps'' (consisting of the composition of two
consecutive steps)---yielding a global map of the form
$\tau=\tau_1\tau_0$. Quite generally, a wide class of finitary
rules that have a periodic spacetime structure can be recast
as ordinary cellular automata.}
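A one-dimensional analogue makes the mechanism transparent. In the sketch below (ours), the blocks are pairs of adjacent cells on a ring; any bijection on pairs serves as block map, the partition offset alternates between even and odd steps, and the inverse dynamics applies the inverse block map with the offsets taken in reverse order:

```python
def block_step(q, f, offset):
    """Apply block map f (a bijection on pairs of cell states) to each
    2-cell block of the ring q, blocks starting at `offset` (0 or 1)."""
    n = len(q)
    out = list(q)
    for i in range(offset, offset + n, 2):
        a, b = q[i % n], q[(i + 1) % n]
        out[i % n], out[(i + 1) % n] = f[(a, b)]
    return out

# an arbitrary bijection on pairs of binary states
F = {(0, 0): (0, 0), (0, 1): (1, 0), (1, 0): (1, 1), (1, 1): (0, 1)}
F_INV = {v: k for k, v in F.items()}

def forward(q, steps):
    for t in range(steps):
        q = block_step(q, F, t % 2)  # alternate the two partitions
    return q

def backward(q, steps):
    for t in reversed(range(steps)):
        q = block_step(q, F_INV, t % 2)  # undo the steps in reverse order
    return q
```

Global invertibility here is inherited block by block: each application of `block_step` is undone by applying the inverse block map on the same partition.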
\medskip
The partitioning technique is particularly useful for constructing
systems consisting of moving particles. To ``move'' a particle in a
cellular automaton one must actually {\em erase} the particle from its
current site $i$ and {\em create} a new copy of it on an adjacent site
$j$. These two operations must be carefully matched, lest particles
vanish or multiply; unfortunately, in ordinary cellular automata
the information available to the local map when acting on site $i$
(i.e., the state of the neighborhood of $i$) is different from that on
site $j$, and that makes it difficult for two distinct applications of
the local map to carry out the two halves of the {\em same} decision
(`move' or `not move'). Special handshakes must be
devised.\footnote
{The ``firing-squad'' problem, of which this is a special
case, dates back to the origins of cellular automata\cite{moore-squad}.}
With partitioning cellular automata, the two halves of a
`move' operation can be made to fall within the scope of the same
neighborhood, and thus coordination is trivial. This feature has an
immediate application in lattice gases (\sect{lattice-gas}) and
similar interacting particle systems.
In the same way as one ensures the conservation of the number of
particles, one can ensure the conservation of other quantities of
physical interest (variables that represent momentum, charge, etc.).
\subsection{Lattice gases}\label{sect.lattice-gas}
Intuitively, a {\em lattice gas} is a system of particles that move in
discrete directions at discrete speeds, and undergo discrete
interactions. It will be clear in a moment that a lattice gas is but a
special format of cellular automaton; however, it will be useful to
start with an example in which the more picturesque terminology of
continuous motion is retained.
\medskip
In the HPP lattice gas\cite{hpp}, identical particles move at unit
speed on a two-dimensional orthogonal lattice, in one of the four
possible directions. Isolated particles move in straight lines. When
two particles coming from opposite directions meet, the pair is
``annihilated'' and a new pair, traveling at right angles to the
original one, is ``created'' (Fig.\ \ref{fig.hpp}a). In all other
cases, i.e., when two particles cross one another's paths at right
angles (Fig.\ \ref{fig.hpp}b) or when more than two particles meet,
all particles just continue straight on their paths.
\figone{fig.hpp}{1in}{In the HPP gas, particles colliding head-on
are scattered at right angles (a), while particles crossing one
another's paths go through unaffected (b).}
\figone{fig.wave}{\threesquares}{Wave propagation in the HPP lattice
gas. Note the emergence of circular symmetry.}
As soon as the numbers involved become large enough for averages to be
meaningful---say, averages over spacetime volume elements containing
thousands of particles and involving thousands of collisions---a
definite continuum dynamics emerges. And, in the present example, it
is a rudimentary {\em fluid} dynamics, with quantities recognizably
playing the roles of density, pressure, flow velocity, viscosity,
speed of sound, etc. \fig{wave} illustrates sound-wave propagation in
this model. Note that, even though the microscopic interactions only
display a more limited form of rotational symmetry (namely, invariance
for quarter-turn rotations), the speed of sound is fully
isotropic.
Unlike sound speed, sound attenuation is {\em not} isotropic in the
HPP model. It turns out that, besides conserving energy and momentum
(see \sect{hpp-vs-ns}), HPP separately conserves the horizontal component of
momentum on each horizontal row and the vertical component on each
vertical column. These {\em spurious} conservations (they have no
counterpart in ordinary physics) lead to significant departures from
the behavior one would expect from a physical fluid.
The slightly more complicated FHP lattice gas model\cite{fhp}---which
uses six rather than four particle directions (always in two
dimensions)---gives, in an appropriate macroscopic limit, a fluid
obeying the well-known {\em Navier-Stokes} equation, and which is thus
suitable for modeling actual hydrodynamics (see \cite{discrete-fluids}
for a tutorial). Recently, analogous results for three-dimensional
models have been obtained by a number of researchers\cite{frisch-doolen}.
\medskip
Lattice gases are rapidly beginning to encroach upon modeling niches
dominated until recently by differential
equations\cite{doolen,monaco}; their success in this role is chiefly
due to the ease with which they can be made to satisfy local
continuity equations,\footnote
{In this traditional but unfortunate term, `continuity' does
not refer to the continuum, in opposition to `discreteness'; rather,
it refers to ``continuity of existence''---detailed balance, in other
words, as in Kirchhoff's laws.}
as discussed below and, more leisurely, in \cite{pde,cheap}.
\figone{fig.hpp-event}{3in}{Spacetime layout of the HPP gas.}
\figone{fig.hpp-cross}{2in}{Evenly spaced spacelike surfaces of the
form $t$=constant in a lattice-gas spacetime diagram.}
Let us consider the spacetime texture induced by a lattice gas such as
HPP.\footnote
{Much as in the case of {\tt Q2R} (cf.\ \protect\foot{q2r}), also
in HPP the lattice splits into two intermeshed but independent
sublattices. Here we shall consider only one of these sublattices.}
In \fig{hpp-event}, the arcs represent the spacetime paths
available to the particles ($a,b,c,d$ denote the four possible
directions of motion) and the nodes (labeled $f$) represent the
available collision sites; the entire structure is iterated in space
and time, yielding a body-centered cubic lattice. Thus, we have a
spacetime diagram (the arcs are {\em signals} and the nodes {\em
events}) of a kind that is routinely used in illustrating
special-relativity arguments. From this diagram, a particular history
is obtained by assigning, as initial conditions, a definite occupation
state (`particle' or `no particle', which may be denoted by `0' and
`1' respectively) to each arc that crosses a given spacelike surface,
and then extending the state assignment timewards, using the following
table at each event
\begin{equation}\label{eq.hpp-table}
\begin{array}{cc|c}
&{\rm in}&{\rm out}\\
&abcd&abcd\\\cline{2-3}
&0000&0000\\
&0001&0001\\
&0010&0010\\
&0011&0011\\
&0100&0100\\
\ast&0101&1010\\
&0110&0110\\
&0111&0111\\
\end{array}\quad
\begin{array}{cc|c}
&{\rm in}&{\rm out}\\
&abcd&abcd\\\cline{2-3}
&1000&1000\\
&1001&1001\\
\ast&1010&0101\\
&1011&1011\\
&1100&1100\\
&1101&1101\\
&1110&1110\\
&1111&1111
\end{array}\quad.
\end{equation}
The table itself represents the collision rule given in words
above. Note that in only two cases (marked with an asterisk) out of
sixteen does an interaction take place; in all other cases, each of the four
signals proceeds undisturbed.
Since the function $f$ represented by this table is invertible, the
spacetime history may be extended backwards as well as forwards in
time.
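Readers who wish to check these claims mechanically can encode the sixteen-entry table directly. The following is a minimal Python sketch (not part of the original formulation); the assignment of unit velocities to the direction labels $a$, $b$, $c$, $d$ is our assumption, chosen consistently with the starred head-on entries:

```python
from itertools import product

# HPP collision rule of the table above: the two head-on configurations
# 0101 and 1010 are swapped; every other configuration passes through.
def f(state):                      # state = (a, b, c, d), each 0 or 1
    if state == (0, 1, 0, 1):
        return (1, 0, 1, 0)
    if state == (1, 0, 1, 0):
        return (0, 1, 0, 1)
    return state

# Assumed unit velocities: (a, c) and (b, d) are the two opposite pairs.
velocity = {0: (1, 0), 1: (0, 1), 2: (-1, 0), 3: (0, -1)}

def momentum(state):
    px = sum(n * velocity[i][0] for i, n in enumerate(state))
    py = sum(n * velocity[i][1] for i, n in enumerate(state))
    return (px, py)

states = list(product((0, 1), repeat=4))
images = [f(s) for s in states]

assert sorted(images) == sorted(states)      # f is a permutation: invertible
for s in states:
    assert sum(f(s)) == sum(s)               # particle count conserved
    assert momentum(f(s)) == momentum(s)     # both momentum components conserved
```

The permutation check makes point (a') of \sect{sect.hpp-vs-ns} explicit: since the sixteen outputs are a rearrangement of the sixteen inputs, the spacetime history can indeed be extended backwards as well as forwards.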
\medskip
If one now considers surfaces of the form $t={\rm constant}$, drawn
midway between rows of events, as in \fig{hpp-cross} (where, for
clarity, only one spatial dimension is indicated), it is clear that
the collective state of the signals that traverse each such surface
can be thought of as the configuration of a cellular automaton. If
one groups together the four signals that converge on the same event
(\fig{hpp}), and notes the regular alternation, at odd and even times,
of these groupings (cf.\ \fig{hpp-cross}), it will be evident (cf.\
\fig{grid}) that the updating scheme of \fig{hpp} is isomorphic to
that described in \sect{partition}.
In other words, lattice gases and partitioning cellular automata are
formally the same thing. Nonetheless, the tendency is to reserve the
term `lattice gas' for cellular automata in which the `gas' metaphor
can be defended\cite{henon-vs}. Typically, one has a distinguished
``vacuum'' state (`0' in table \eq{hpp-table}) on whose background
isolated ``particles'' (`1' in table \eq{hpp-table}) travel with
inertial motion.
\medskip
Figures \ref{fig.hpp-event} and \ref{fig.hpp-cross} make it clear not
only that (a) the global dynamics is invertible if $f$ is invertible, but
also that (b) any additive quantity carried by individual signals
(occupation number, momentum, etc.) is globally conserved if it is
conserved by $f$. For example, in \cite{bond-energy} we show how
certain second-order \ica\ can be rewritten in a straightforward way
as lattice-gas \ica; conserved quantities (such as bond energy) that
in the second-order format it took some effort to
discover\cite{pomeau84} are immediately visible in the lattice-gas
format.
\section{Physical modeling}
The question that we are most often asked about cellular automata
is the following.
``I've been shown cellular automata that make surprisingly good models
of, say, hydrodynamics, heat conduction, wave scattering, flow through
porous media, nucleation, dendritic growth, phase separation, etc.
But I'm left with the impression that these are all {\em ad hoc}
models, arrived at by some sort of magic.
``I'm a scientist, not a magician. Are there well-established {\em
correspondence rules} that I can use to translate features of the
system I want to model into specifications for an adequate
cellular-automaton model of it?''
\medskip
Physical modeling with cellular automata is a young discipline. Similar
questions were certainly asked when differential equations were
new---and, despite three centuries of accumulated experience, modeling
with differential equations still requires a bit of magic. Mainly, we
get new models by dressing up old models, or by microdynamical
analogy. As it stands, a ``rational'' or ``analytical'' mechanics of
cellular automata is beginning to take shape (cf., e.g.,
\cite{guto87,guto00}). Ironically, the most mature aspects of this
understanding concern the modeling of {\em continuum} phenomena. In
the rest of this section, we'll try to convey at least a feeling for
the issues involved.
\subsection{HPP vs Navier-Stokes}\label{sect.hpp-vs-ns}
How can a ridiculously simple interaction such as that of \fig{hpp}a
(represented, in the two possible spatial orientations, by the starred
entries in table \eq{hpp-table}) possibly come close to capturing the
richness of the Navier-Stokes equation?
Let us attempt to sketch an answer, using drastically simplified
arguments.
\medskip
By inspection of table \eq{hpp-table}, one concludes that (a') the
dynamics specified by this local map is {\em invertible}; (b') {\em
energy}, represented simply by particle count, is {\em conserved} by
each event; (c') the two components of {\em momentum}, obtained by
separately counting (with a $+$ or $-$ sign depending on the sense)
particles traveling in the two orthogonal directions, are similarly
{\em conserved}\/; (d') the dynamics is {\em rotationally invariant} for
quarter-turn rotations; and (e') there is {\em some coupling} between
the two spatial {\em directions}.
By inspecting the Navier-Stokes equation, one concludes that this
equation specifies that (a) the dynamics is {\em invertible}; (b)
{\em energy} and (c) {\em momentum} are {\em conserved} by contact
processes (no action-at-a-distance); (d) the dynamics is {\em
rotationally invariant} for continuous rotations; (e) pressure is
a {\em scalar} quantity, i.e., is independent of {\em direction};
and (f) {\em nothing else} that doesn't already follow from (a)--(e).
\medskip
In the cellular-automaton model, the moment one puts on {\em
macroscopic} glasses, as it were, and is only able to see details
that are much coarser than the pitch of the lattice, the discreteness
of energy, momentum, and position washes out, so that points (a'),
(b'), and (c') above closely match (a), (b), and (c).
As for point (d'), it is well known that 90$^\circ$ rotational
invariance is sufficient to yield full isotropy for {\em diffusive}
phenomena.\footnote
{Consider the set of all possible algorithms telling an
individual how to move, one block at a time, North, South, East, or
West on an orthogonal street grid (at each step, the choice may take
into account local landscape features, the actions of neighboring
individuals, the weather, etc.). {\em Almost all} such algorithms
yield an exploration pattern that is close to a superposition of
independent random walks in the two orthogonal directions. In the
limit as the binomial distribution goes over to the Gaussian
distribution, such a pattern displays full rotational invariance (i.e.,
$\exp(-x^2)\cdot\exp(-y^2)=\exp(-(x^2+y^2))=\exp(-r^2)$)---and this in spite
of the fact that the microscopic rule can at best have quarter-turn
invariance.}
However, in a lattice gas not too far from equilibrium (cf.\
\cite{toffoli-ibm}) the chief transport mechanisms are diffusive
transport and wavelike transport, and for the full isotropy of {\em
wavelike} phenomena it turns out that one needs nondiffusive transport
of horizontal momentum in the vertical direction and vice versa---which
is hard to achieve with 90$^\circ$ invariance and is much easier to
achieve with six particle directions and 60$^\circ$ invariance (cf.,
e.g., \cite{wolfram-hydro}). This is why the fluid modeled by the HPP
gas significantly departs from the Navier-Stokes equation (cf.\
\sect{lattice-gas}), and why a model such as FHP manages to achieve
the goal.
As for point (e'), by considerations similar to the above one can show
that {\em almost any} coupling between the $x$ and $y$ directions
leads to equalization of pressure in all directions.
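The isotropy-of-diffusion argument sketched in the footnote above can be checked by exact enumeration. The sketch below (our illustration, not from the original text) takes the simplest quarter-turn-invariant walk, one uniformly random N/S/E/W step per tick, and verifies that the second moment of the displacement is the same along a lattice axis and along the diagonal:

```python
import math
from itertools import product

# One step: move one block N, S, E, or W with equal probability.
steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]
T = 6  # exact average over all 4**T equally likely paths

def second_moment(direction):
    """Exact E[(displacement . direction)**2] over all T-step paths."""
    ux, uy = direction
    total = 0.0
    for path in product(steps, repeat=T):
        x = sum(dx for dx, _ in path)
        y = sum(dy for _, dy in path)
        total += (x * ux + y * uy) ** 2
    return total / 4 ** T

m_axis = second_moment((1.0, 0.0))                        # along an axis
m_diag = second_moment((math.sqrt(.5), math.sqrt(.5)))    # at 45 degrees
assert abs(m_axis - m_diag) < 1e-9     # both equal T/2: isotropic
```

The per-step displacements in $x$ and $y$ are uncorrelated, so the two projections have identical variance; this is the discrete seed of the full rotational invariance discussed above.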
\medskip
The only philosophical difficulty lies with the `nothing else' of
point (f). Like sorcerer's apprentices, we commanded HPP to conserve a
certain few quantities. HPP complied; but it didn't warn us that it
had taken the initiative to conserve an infinity of other quantities
as well (\sect{lattice-gas}). And these spurious conservations
undermined our modeling efforts.
How do we know that something like this will not happen to us the next
time? How can one tell that a model has no spurious
invariants\cite{dhumieres89}? (For that matter, how does one know
that a certain unanticipated invariance is `spurious'? May it not
turn out to be a ``feature'' rather than a
``bug''?\footnote
{For instance, certain constraints that in lattice gases lead
to a departure from Galilean invariance\cite{hayot} may actually bias the
system toward {\em Lorentz} invariance\cite{cheap} (also cf.\
\cite{mark}).})
A general method for keeping spurious constraints out of a
cellular-automaton model is suggested by Jaynes's maximum entropy
principle\cite{jaynes}. The idea is to make the local map {\em
maximally random} with respect to any features that are not explicitly
demanded by the modeling context.\footnote
{H\'enon's strategy for rule optimization in lattice gas
hydrodynamics\cite{henon} is somewhat related to this approach.}
Unfortunately, to guarantee a closer and closer approximation
to this ideal of maximal randomness one may have to introduce larger
and larger state alphabets and neighborhoods, at a corresponding
sacrifice in simulation efficiency. In the end, one must know where to
draw the line between accuracy and efficiency.
\subsection{Invertibility, symmetries, and conserved
quantities}\label{sect.invariants}
An invertible cellular automaton shares certain important traits with
physical systems even when it does not manifestly support waves,
particles, energy, momentum, and similar symptoms of ``physicalness.''
\medskip
Even in dynamical systems that have very little structure one can show
some connections between symmetries and conservation laws.\footnote
{Given any group of transformations that commutes with the
dynamics, each configuration belongs to a definite symmetry class (a
normal subgroup of the group itself); all the points of an orbit
belong to the same symmetry class, which is thus a {\em conserved
quantity}.}
However, such connections manifest themselves with special
strength in Hamiltonian systems, where, according to Noether's
theorem, to each continuous one-parameter group of transformations
that commutes with the dynamics there is associated a real-valued,
conserved quantity. Thus, energy is associated with
time-invariance of the dynamics, the three components of momentum with
translation invariance, etc. These quantities are functions of the
system's state as such (i.e., its ``mechanical'' or ``microscopic''
state); one can literally speak of, say, the {\em energy content} of a
certain state.
The Hamiltonian structure itself, i.e., invariance with respect to
canonical transformations, leads to the conservation of an additional
quantity, called {\em (fine-grained) entropy}, which is defined not on
individual mechanical states but on {\em statistical} states
(probability distributions on the set of mechanical states); a special
case of this conservation is the well-known ``incompressibility'' of
volume in phase space (Liouville's theorem). In other words, (a) the
Hamiltonian structure provides an unequivocal way of measuring the
{\em information} content\cite[Chapt.~II]{everett} of a statistical
state, and (b) the quantity thus measured is {\em conserved} by the
dynamics.
\medskip
Continuum dynamical systems that do not have a Hamiltonian structure
lack, in general, an intrinsic ``yardstick'' for measuring the
information content of a (statistical) state, and thus do not come
with a built-in statistical mechanics. Cellular automata, on the
other hand, owing to their local finiteness (the lattice is discrete,
and state alphabet and neighborhood are finite), do possess a natural
information measure.\footnote
{This is the {\em uniform} measure, which gives
equal weight to all configurations.}
And the foremost property of {\em invertible} cellular
automata is, of course, that they are {\em information-conserving}.
Because of this ``information-losslessness''\break (ach!), \ica\
automatically obey the second principle of thermodynamics and, more
generally, display a full-featured statistical mechanics analogous to
that of Hamiltonian systems. As additional structure is introduced (for
instance, particle conservation), macroscopic mechanical features
such as elasticity, inertia, etc.\ naturally emerge out of statistics itself.
In sum, once we make sure that it is conserved, information has an
irresistible tendency to take on a strikingly tangible aspect (cf.\
\cite{toffoli-ibm})---to {\em materialize} itself.
\medskip
For the above reasons, and because they lend themselves to very
efficient computer simulations, \ica\ are an ideal medium for the
qualitative study of the connections between microscopic mechanics and
statistical mechanics on one hand (cf.\
\cite{margolus-thesis,cambook,wolfram-stat,creutz}), and between
statistical mechanics and macroscopic mechanics on the other (cf.\
\cite{wolfram-hydro}). They are also suitable for the modeling
of an increasingly important type of generalized mechanical activity,
namely {\em computation}\cite{margolus-thesis}.
\medskip
In physics, additive invariants, whether represented by mechanical
quantities such as energy or statistical quantities such as entropy,
bear a major responsibility for the emergence of nontrivial
macroscopic properties. Intuitively, one may expect that almost every
detail of the microscopic interactions will be washed out by
macroscopic averaging; only features that are supported by a definite
conspiracy (such as a particular symmetry or conservation law) will
bubble up all the way to the macroscopic surface and emerge there as
recognizable laws.
The study of invariants in \ica\ has barely started. Here we can only
refer the reader to a few literature items, such
as \cite{pomeau84,margolus-thesis,dhumieres89,takesue00}. It must be
noted that in general the problem of {\em discovering} the invariants
of a cellular automaton is presumably of a difficulty comparable to
that of deciding its invertibility (cf.\ \sect{decid}). Much as for
invertibility, it is much easier to directly {\em synthesize} a cellular
automaton having certain desired invariants than to figure out the
invariants of a given one.
\subsection{Modeling first principles?}\label{sect.first-principles}
As we hinted in \sect{lattice-gas}--\ref{sect.invariants}, \ica\ are
quite successful at explaining complex macroscopic phenomenology in
terms of the collective behavior of an enormous number of very simple
subsystems. So much so, that one begins to wonder whether
aspects of physics that are usually regarded as primitive
(such as the principles of analytical mechanics) are not,
after all, similarly derivable as emergent aspects of an
extremely fine-grained underlying dynamical structure.
In this role, \ica\ have been used with some success as fine-grained
models of basic aspects of special and general
relativity\cite{cheap,thooft,mark} and of quantum mechanics\cite{joe}.
\section{Decidability, revisited}\label{more-decid}
Now that we have some concrete examples of \ica\ in mind, let us pick
up the trail that we left off at the end of \sect{theorems}.
\medskip
A dynamical system is {\em surjective} if every state has at least one
predecessor; {\em injective}, if it has at most one
predecessor. If it is both surjective and injective it is, of course,
invertible (or {\em bijective}). It will be convenient
to call {\em properly surjective} a surjective system that is not
invertible, and similarly {\em properly injective} an injective system
that is not invertible. The situation is summed up in \fig{jective}.
\figone{fig.jective}{1.45in}{Venn diagram for surjectivity and
injectivity.}
\figone{fig.ca-jective}{0.8in}{Surjectivity and
injectivity in cellular automata; `r.e.' denotes a recursively
enumerable set.}
For cellular automata, the `properly injective' lobe in \fig{jective}
is empty, i.e., injectivity is equivalent to invertibility
(Richardson\cite{richardson}), leaving us with the simpler situation
of \fig{ca-jective}.
\medskip
As we have seen in \sect{theorems}, the left set of \fig{ca-jective}
(i.e., the class of invertible cellular automata) is recursively
enumerable (\theor{re}). The right set too is recursively enumerable,
according to the following
\begin{theorem}\label{theor.re-left}
The class of nonsurjective cellular automata is recursively
enumerable.
\end{theorem}
\begin{proof}
According to Myhill's theorem\cite{myhill} (cf.\ \sect{hist}),
if a configuration has no predecessors there must exist a configuration
having two predecessors that are identical except over a finite area.
If such a pair exists, brute-force enumeration (apply the local
map to larger and larger finite areas) will eventually turn it up.
\end{proof}
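The brute-force enumeration in the proof can be sketched concretely for one-dimensional, radius-1 binary cellular automata. The variant below (our illustration) searches directly for an {\em orphan}, a finite word with no preimage, which by the Moore--Myhill theorem exists if and only if the automaton is nonsurjective; like the twin-predecessor search of the proof, this only semidecides nonsurjectivity:

```python
from itertools import product

def local_map(rule):
    """Radius-1 binary local map from its standard Wolfram rule number."""
    return lambda l, c, r: (rule >> (l * 4 + c * 2 + r)) & 1

def find_orphan(rule, max_m=6):
    """Search for a word of length <= max_m with no preimage word.
    Finding one proves nonsurjectivity; None is inconclusive."""
    f = local_map(rule)
    for m in range(1, max_m + 1):
        # Images of all words of length m + 2 are words of length m.
        images = {tuple(f(w[i], w[i + 1], w[i + 2]) for i in range(m))
                  for w in product((0, 1), repeat=m + 2)}
        for w in product((0, 1), repeat=m):
            if w not in images:
                return w          # orphan: the CA is nonsurjective
    return None

assert find_orphan(0) == (1,)     # rule 0 maps everything to 0
assert find_orphan(90) is None    # rule 90 (additive) is surjective
```

The loop over growing word lengths is exactly the "larger and larger finite areas" of the proof; a negative answer at any finite stage settles nothing, which is what semidecidability means here.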
If also the middle set in \fig{ca-jective}, i.e., the set of properly
surjective (\ps) cellular automata, were recursively enumerable, then
all three sets would be fully {\em recursive} (i.e., the corresponding
predicates would be decidable). From \theor{big-minus}, we must
conclude that the middle set is not even recursively enumerable. We shall
examine the makeup of this \ps\ set in more detail.
\medskip
Let us apply the local map of a cellular automaton on a {\em finite},
wrapped-around space array (which is equivalent to imposing periodic
boundary conditions along each dimension).\footnote {To simplify
the statement of some of the assertions below, we shall consider only
finite spaces that are large enough to keep all of the neighbors of a
cell distinct.} The resulting {\em finite cellular automaton}
will be invertible if the original was invertible, and nonsurjective
if the original was nonsurjective. If, however, the original was \ps,
then the resulting cellular automaton will be forced to abandon
`\ps{}-ness' and ``choose'' between being invertible and being
nonsurjective (by a simple counting argument, the \ps\ middle ground
is forbidden); note that either choice may occur with the {\em same}
local map, for different sizes of the finite space.
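A concrete instance of this size dependence, chosen for brevity rather than taken from the discussion above, is the additive elementary rule 150, in which each cell becomes the XOR of itself and its two neighbors. The sketch below classifies the finite cellular automaton on rings of various sizes by brute force:

```python
from itertools import product

# Rule 150 on a ring: each cell becomes the XOR of itself and its neighbors.
def step(config):
    n = len(config)
    return tuple(config[i - 1] ^ config[i] ^ config[(i + 1) % n]
                 for i in range(n))

def classify(n):
    """On a finite ring the global map is either invertible or
    nonsurjective: for a map of a finite set to itself,
    injective and surjective coincide."""
    images = {step(c) for c in product((0, 1), repeat=n)}
    return "invertible" if len(images) == 2 ** n else "nonsurjective"

# The same local map makes different 'choices' for different ring sizes
# (here: nonsurjective exactly when the size is a multiple of 3).
for n, expected in [(3, "nonsurjective"), (4, "invertible"),
                    (5, "invertible"), (6, "nonsurjective")]:
    assert classify(n) == expected
```

On the infinite line this rule is properly surjective (the period-3 pattern $110110\ldots$ is annihilated, so injectivity fails), illustrating how \ps{}-ness is forbidden on finite spaces but reappears on the infinite array.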
\figone{fig.residue}{1.5in}{Between invertibility and local
noninvertibility, which are both semidecidable, there is a no man's
land that is not semidecidable.}
Those cellular automata that become nonsurjective when thus forced on
a finite space (at least for some space size), together with
those that were nonsurjective to begin with, will be called {\em
locally noninvertible} (they are exactly those cellular automata whose
noninvertibility can be verified by local arguments). Local
noninvertibility is semidecidable (the proof is similar to that of
\theor{re-left}). What is left (\fig{residue}) is a residual class of
cellular automata having rather counterintuitive behavior; that is,
they
\begin{itemize} \item are invertible on {\em all} finite
spaces, but \item become noninvertible (specifically, properly
surjective) on the infinite array.
\end{itemize}
Because of \theor{big-minus}, this residual class cannot be
empty; on the other hand, no instance of this class has, to our
knowledge, ever been exhibited.
\medskip
Let us go back to the enumeration in the proof of \theor{re}. If from
$\lambda$ we could determine an upper bound to the neighborhood radius
of its hypothetical inverse, then we would, eventually, positively
know whether or not this inverse exists. From \theor{big-minus}, we
must conclude that this is not the case; in particular, for any $n$
there must be \ica\ for which the radius of the {\em inverse
neighborhood} (the neighborhood of the inverse) is at least $n$ times
larger than the radius of the {\em direct} neighborhood.
\figone{fig.mismatch}{1in}{An invertible function with an
asymmetric dependency pattern.}
We shall show how to construct such an \ica, for any $n$ (note that in
this example $n$, though arbitrarily large, is a known function of the
size of the state alphabet, so that we {\em do} have an upper bound
for the neighborhood radius). Consider the function with $n$ binary
inputs $x_1,\ldots,x_n$ and $n$ binary outputs $y_1,\ldots,y_n$
defined by
\begin{equation}\label{eq.mismatch}
\begin{array}{l@{}c@{}lc@{}l}
y_1 & = & x_1 & & \\
y_2 & = & x_2\oplus x_1 & (= & x_2\oplus y_1) \\
y_3 & = & x_3\oplus x_2\oplus x_1 & (= & x_3\oplus y_2) \\
& \ldots & \\
y_n & = & x_n\oplus x_{n-1}\oplus\cdots\oplus x_1 & (= & x_n\oplus y_{n-1}) \\
\end{array},
\end{equation}
where $\oplus$ denotes Boolean exclusive-{\sc or}.
This function, pictorially shown in \fig{mismatch}, is
invertible, and its inverse is given by
\begin{equation}
\begin{array}{l@{}c@{}l}
x_1 & = & y_1 \\
x_2 & = & y_2\oplus y_1 \\
x_3 & = & y_3\oplus y_2 \\
& \ldots & \\
x_n & = & y_n\oplus y_{n-1} \\
\end{array}.
\end{equation}
Note the asymmetric dependency pattern: a $y$
may depend on up to $n$ of the $x$'s, while each $x$ depends on no
more than {\em two} of the $y$'s.
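The function \eq{mismatch} and its inverse are easily realized and checked for arbitrary $n$; a minimal sketch:

```python
def forward(x):
    """y_k = x_1 xor ... xor x_k: a y may depend on up to n of the x's."""
    y, acc = [], 0
    for xi in x:
        acc ^= xi                 # running prefix XOR
        y.append(acc)
    return y

def inverse(y):
    """x_1 = y_1, x_k = y_k xor y_{k-1}: each x depends on at most
    two of the y's, regardless of n."""
    return [y[0]] + [y[k] ^ y[k - 1] for k in range(1, len(y))]

x = [1, 0, 1, 1, 0, 0, 1, 0]      # n = 8 for definiteness
assert inverse(forward(x)) == x   # round trip, both directions
assert forward(inverse(x)) == x
```

The forward pass must thread an accumulator through all $n$ inputs, while the inverse touches only adjacent pairs: exactly the dependency asymmetry noted above.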
\figone{fig.dominoes}{1.38in}{Infinite juxtaposition of copies of
function \protect\eq{mismatch} yields a one-dimensional \ica\ having
$n$ bits per cell. Here $n=4$ for definiteness.}
From \eq{mismatch}, by using a step-and-repeat construction
(\fig{dominoes}), we obtain a one-dimensional cellular automaton
having $n$ bits per cell. This cellular automaton is invertible;
going forward in time, the new state of each cell depends on the
current state of itself and the $n-1$ cells to its left; going
backwards, each cell depends only on itself and the cell on its left.
The neighborhood radii for the direct local map $\lambda$ and the
inverse local map $\overline\lambda$ are, respectively, $n$ and 1.
This cellular automaton is an invertible system governed by local and
uniform laws, but the cause-and-effect tree (the ``light cone'') in
one direction of time is much wider than in the opposite
direction.\footnote
{See \cite{petri} for a related discussion on different aspects of
causality.}
Such behavior doesn't seem to have any counterpart in
physics.
\section{Structural invertibility}\label{sect.structural}
The usual definition of a cellular automaton, as given in \sect{ca},
is a {\em structural} one; i.e., one explicitly tells how a cellular
automaton {\em is made}.\footnote
{One can think of the overall state of the system as a point in
some abstract space, and of a configuration as a way of expressing
this point in a convenient coordinate system (each site represents a
coordinate); the local map tells how to update the state-component
associated with each coordinate. In \sect{formal} we mentioned the
possibility of an equivalent {\em functional} definition, where the
global map is characterized in terms of what it {\em does} to the
state points themselves, without making recourse to coordinates.}
The structure in question is a {\em uniform sequential
network}\/: the state $q_i$ of a cell (a square in \fig{ca-seq}) is
realized by a collection of flip-flops or similar memory elements,
and the table $f$ (a circle in the same figure) by a composition of
{\sc nand} gates or similar logic elements. Both
kinds of element have a straightforward implementation in a number of
technologies.
Since a flip-flop is simply a digital sample-and-hold device, used
for regulating the traffic in and out of the gates,\footnote
{Each gate is used over and over, at each time step evaluating
the same function on a new set of arguments.}
it will be conceptually simpler to represent the design of
a cellular automaton as a uniform {\em combinational} network, where
each usage of a table is explicitly represented by a separate node,
as in \fig{ca-comb}a.
\figone{fig.ca-seq}{1.8in}{A cellular automaton as a sequential
network. We illustrate the case of one dimension and neighborhood
$X=\langle -1,0,+1\rangle$. The table is denoted by $f$.}
\figone{fig.ca-comb}{2.75in}{The cellular automaton
of \protect\fig{ca-seq}, re-expressed as a combinational
network. The $f$ nodes denote occurrences of the table; the
black dots, occurrences of fan-out.}
Now, if we were given the design of a deterministic physical machine
$A$ whose behavior happens to be invertible (i.e., for any
possible state there is a unique way one can arrive at it), we
would tend to think that
\def\Abar{\overline{A}}
\begin{enumerate}
\item One can construct, out of the same technology,
a machine $\Abar$ that has the same orbit structure as $A$
but follows each orbit in reverse.
\item From the detailed design plans of machine $A$ one should
be able to arrive in a straightforward way at the design plans for
machine $\Abar$.
\end{enumerate}
Both points are certainly true if the ``technology'' in
question is Newtonian mechanics (think of the sun and the planets). In
fact, since this mechanics is time-reversal invariant (cf.\
\foot{time-reversal}), we can use the {\em same} machinery ($\Abar=A$), and
just reverse the direction of each particle in order to traverse
orbits backwards.
\lem{richardson} tells us that point (1) above is true also for more
practical technologies, where non-Newtonian devices such as
amplifiers, latches, dampers, and other dissipative devices are
available. Note that the usual way cellular automata are specified
tacitly implies the availability of such a dissipative technology. In
fact, the $f$ nodes in \fig{ca-seq} are typically noninvertible
because they are {\em noninjective}, and the fan-out nodes
noninvertible because they are {\em nonsurjective}; as explained in
\cite{fredkin}, nonsurjective computing primitives imply the
availability of a {\em power supply}, and noninjective ones imply a {\em
heat sink}.
However, \theor{big-minus} says that, in this dissipative context,
point (2) is no longer tenable. That is, from design plans of the type
represented by \fig{ca-comb} we do not know, in general, how to make
similar design plans for a new machine that, if the original machine
was invertible, will have exactly the inverse behavior. Thus, to rescue
point (2) we have to turn our attention to design plans that do not
imply dissipative primitives. For {\em finite} invertible automata
this is always possible\cite{toffoli-rev,fredkin} (see also
\cite{toffoli-crank} for the analogous problem in continuous systems);
is it possible for cellular automata?
\medskip
Now that the job of motivating it is done, let us reword the above
question. Given an arbitrary \ica, which can always be thought of as
realized by a uniform combinational network of the type of
\fig{ca-comb}, is it always possible to give an alternative
realization of it by means of a uniform combinational network in which
(a) all nodes stand for {\em invertible} functions, and (b) no extra
state variables are introduced? We shall call such a realization {\em
structurally invertible};\footnote
{Condition (b) simply asks for an {\em isomorphic}
realization. Realizations that satisfy (a) but make use of extra
state variables can be obtained by a straightforward procedure,
as shown in \cite{jacopini}.}
we shall also call {\em structurally invertible}
an \ica\ for which a structurally invertible realization exists.
It is clear that from a structurally invertible realization
one immediately obtains design plans for the inverse automaton.
\medskip
We have succeeded in exhibiting a structurally invertible realization
for every \ica\ on which we have tried (here we shall give only one
example, in \fig{phew}); every \ica\ we are aware of thus appears to be
structurally invertible. These considerations lead us to
the following
\begin{conjecture}\label{conj.structure}
All invertible cellular automata are {\em structurally}
invertible, i.e., can be (isomorphically) expressed in spacetime as a
uniform composition of finite invertible logic primitives.
\end{conjecture}
In other words, we conjecture that there are no cellular
automata for which invertibility (a {\em functional} property) cannot
be explained in terms of structural invertibility, which is, of course,
a {\em structural} property.
\medskip
\figone{fig.phew}{2.35in}{(a) Combinational network representation
of the \ica\ defined by table \protect\eq{guard}. (b) The same
\ica, in a version that exhibits structural invertibility; the
nodes are all invertible Boolean functions.}
For an example, \fig{phew} shows a structurally invertible
design for the conserved-landscape \ica\ discussed in \sect{guarded},
and contrasts it with the conventional design. The function $f$
is given by table \eq{guard}; $g$ is an invertible Boolean function
with four inputs and four outputs obtained in a straightforward
way from $f$ as follows
\begin{equation}\label{eq.guarded-struct}
\begin{array}{lcl}
y_1 & = & x_1 \\
y_2 & = & f(x_1,x_2,x_3,x_4)=(\overline{x}_1\cdot
x_3\cdot\overline{x}_4)\oplus x_2\\
y_3 & = & x_3\\
y_4 & = & x_4
\end{array},
\end{equation}
where a dot denotes Boolean {\sc and}, a bar denotes Boolean
complement, and the order (1,2,3,4) of inputs and outputs corresponds
to left-to-right in \fig{phew}.
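The invertibility of $g$ as given by \eq{guarded-struct} can be confirmed by exhaustive check over its sixteen input states; the sketch below (our verification, not part of the original construction) also shows that $g$ happens to be its own inverse:

```python
from itertools import product

def g(x1, x2, x3, x4):
    # y2 flips x2 exactly when the 'landscape' (x1, x3, x4) = (0, 1, 0);
    # the landscape bits themselves pass through unchanged.
    y2 = ((1 - x1) & x3 & (1 - x4)) ^ x2
    return (x1, y2, x3, x4)

inputs = list(product((0, 1), repeat=4))
outputs = [g(*x) for x in inputs]

assert sorted(outputs) == sorted(inputs)    # g is a bijection on 16 states
assert all(g(*g(*x)) == x for x in inputs)  # indeed, g is an involution
```

Invertibility here is structural in miniature: the conditioning bits are never altered, so the conditional flip of $x_2$ can always be undone by applying the same map again.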
Note that, even though the new network is uniform, its spatial
``pitch'' is coarser than that of the original network; in other
words, in \fig{phew}b we have a structure whose {\em function} is left
invariant by shifting, say, one position to the left, but whose {\em
structure} needs to be shifted {\em four} positions before it again
coincides with itself!\footnote
{More formally, the uniformity group of the structural
description is a proper subgroup of that of the functional behavior.}
We conjecture that, for a structurally invertible
realization, recourse to a network having a {\em coarser} pitch
than the original one is, in general, unavoidable.
\section{Conclusions}\label{sect.conclusions}
We have presented invertible cellular automata from a number of
angles, trying to illustrate the concrete motivations of many
theoretical questions.
This paper also constitutes an original contribution to the study of
the mechanisms and pathways of {\em causality} in distributed
reversible systems.
\section{Acknowledgments}
This research was supported in part by the Defense Advanced Research
Projects Agency (N00014-89-J-1988), and in part by the National
Science Foundation (8618002-IRI).
\newcommand{\bib}[1]{\bibitem{#1}}
\begin{thebibliography}{99}
\bib{aladyev} {\sc Aladyev}, Viktor, ``Computability in Homogeneous
Structures,'' {\sl Izv.\ Akad.\ Nauk.\ Estonian SSR, Fiz.-Mat.\ \bf21}
(1972), 80--83.
\bib{amoroso} {\sc Amoroso}, Serafino, and Y. N. {\sc Patt}, ``Decision
Procedures for Surjectivity and Injectivity of Parallel Maps for
Tessellation Structures,'' {\sl J. Comp.\ Syst.\ Sci. \bf 6} (1972),
448--464.
\bib{amo-cooper} {\sc Amoroso}, Serafino, et al., ``Some clarifications
of the concept of a Garden-of-Eden configuration,'' {\sl J. Comp.\
Syst.\ Sci. \bf 10} (1975), 77--82.
\bib{banks} {\sc Banks}, Edwin, ``Information Processing and
Transmission in Cellular Automata,'' {\sl Tech.\ Rep.\ MAC TR-81},
MIT Project MAC (1971).
\bib{benioff} {\sc Benioff}, Paul, ``Quantum mechanical Hamiltonian
models of discrete processes that erase their own histories:
Applications to Turing machines,'' {\sl Int.\ J. Theor.\ Phys.\ \bf
21} (1982), 177--201.
\bib{bennett-turing} {\sc Bennett}, Charles, ``Logical Reversibility of
Computation,'' {\sl IBM J.\ Res.\ Develop.\ \bf 17} (1973), 525--532.
\bib{bennett85} {\sc Bennett}, Charles, and Geoff {\sc
Grinstein}, ``Role of Irreversibility in Stabilizing Complex and
Nonergodic Behavior in Locally Interacting Discrete Systems,'' {\sl
Phys.\ Rev.\ Lett.\ \bf 55} (1985), 657--660.
\bib{bennett-complexity} {\sc Bennett}, Charles, ``On the nature and origin
of complexity in discrete, homogeneous, locally-interacting
systems,'' {\sl Found.\ Physics \bf 16} (1986), 585--592.
\bib{bond-energy} {\sc Bennett}, Charles, Norman {\sc Margolus},
and Tommaso {\sc Toffoli}, ``Bond-energy variables for Ising
spin-glass dynamics,'' {\sl Phys.\ Rev.\ B \bf 37} (1988), 2254.
\bib{ca86} {\sc Bennett}, Charles, Tommaso {\sc Toffoli}, and
Stephen {\sc Wolfram} (ed.), ``Cellular Automata '86,''
{\sl Tech.\ Memo MIT/LCS/TM-317}, MIT Lab.\ for Comp.\ Sci.\ (December 1986).
\bib{burks} {\sc Burks}, Arthur (ed.), {\sl Essays on Cellular
Automata}, Univ.\ Ill.\ Press (1970).
\bib{courant} {\sc Courant}, R., et al., ``On the partial difference
equations of mathematical physics,'' {\sl Mathematische Annalen \bf100}
(1928), 32--74 (in German), republished in English translation in
{\sl IBM J.} (March 1967), 215--238.
\bib{creutz} {\sc Creutz}, Michael, ``Deterministic Ising
Dynamics,'' {\sl Annals of Physics} {\bf 167} (1986), 62--76.
\bib{culik} {\sc Culik}, K., ``On invertible cellular automata,''
{\sl Complex Systems \bf1} (1987), 1035--1044.
\bib{dhumieres89} ``Invariants in lattice gas models,''
\cite{monaco}, 102--113.
\bib{doolen} {\sc Doolen}, Gary, et al.\ (ed.), {\sl Lattice-Gas
Methods for Partial Differential Equations}, Addison-Wesley (1990).
\bib{everett} {\sc Everett}, Hugh, ``The theory of the universal wave
function,'' {\sl The Many-Worlds Interpretation of Quantum Mechanics}
(Bryce {\sc DeWitt} and Neill {\sc Graham}, ed.), Princeton Univ.\ Press
(1973).
\bib{feynman-quantum} {\sc Feynman}, Richard,
``Quantum-Mechanical Computers,'' {\sl Opt.\ News} {\bf 11} (Feb.\
1985), 11--20, reprinted in {\sl Foundations of Physics \bf 16}
(1986), 507--531.
\bib{fredkin} {\sc Fredkin}, Edward, and Tommaso {\sc Toffoli},
``Conservative Logic,'' {\sl Int.\ J. Theor.\ Phys.} {\bf 21} (1982),
219--253.
\bib{endicott} {\sc Fredkin}, Edward, Rolf {\sc Landauer}, and
Tommaso {\sc Toffoli} (ed.), Proc.\ of a conference on ``Physics and
Computation,'' {\sl Int.\ J. Theor.\ Phys.} {\bf 21}:3/4, {\bf 21}:6/7, and
{\bf 21}:12 (1982).
\bib{fhp} {\sc Frisch}, Uriel, Brosl {\sc Hasslacher}, and Yves
{\sc Pomeau}, ``Lattice-Gas Automata for the Navier-Stokes Equation,''
{\sl Phys.\ Rev.\ Lett.} {\bf 56} (1986), 1505--1508.
\bib{frisch-doolen} {\sc Frisch}, Uriel, et al., ``Lattice gas
hydrodynamics in two and three dimensions,'' \cite{doolen}, 77--135.
\bib{gardner70} {\sc Gardner}, Martin, ``The Fantastic
Combinations of John Conway's New Solitaire Game `Life','' {\sl Sc.\
Am.\ \bf 223\rm:4} (April 1970), 120--123.
\bib{guto87} {\sc Gutowitz}, Howard, et al., ``Local structure theory
for cellular automata,'' {\sl Physica \bf 28\sl D} (1987), 18--48.
\bib{guto00} {\sc Gutowitz}, Howard, ``A hierarchical classification
of cellular automata,'' {\sl Physica D}, this issue.
\bib{hpp} {\sc Hardy}, J., O. {\sc de Pazzis}, and Yves {\sc
Pomeau}, ``Molecular dynamics of a classical lattice gas: Transport
properties and time correlation functions,'' {\sl Phys.\ Rev.}
{\bf A13} (1976), 1949--1960.
\bib{discrete-fluids} {\sc Hasslacher}, Brosl, ``Discrete
Fluids,'' {\sl Los Alamos Science}, Special Issue No.~15 (1987),
175--200 and 211--217.
\bib{hayot} {\sc Hayot}, Fernand, ``The effect of Galilean
non-invariance in lattice gas automaton one-dimensional flow,''
{\sl Complex Systems \bf 1} (1987), 753--761.
\bib{head} {\sc Head}, Tom, ``One-dimensional cellular automata:
injectivity from unambiguity,'' to appear in {\sl Complex Systems \bf
3} (1989).
\bib{hedlund63} {\sc Hedlund}, G. A., K. I. {\sc Appel}, and L.
R. {\sc Welch},
``All Onto Functions of Span Less Than or Equal To Five,''
Communications Research Division, working paper (July 1963).
\bib{hedlund69} {\sc Hedlund}, G. A., ``Endomorphisms and Automorphisms
of the Shift Dynamical System,'' {\sl Math.\ Syst.\ Theory \bf 3}
(1969), 320--375.
\bib{henon} {\sc H\'enon}, M., ``Optimization of collision rules
in the FCHC lattice gases, and addition of rest particles,''
\cite{monaco}, 146--159.
\bib{henon-vs} {\sc H\'enon}, M., ``On the relation between lattice
gases and cellular automata,'' \cite{monaco}, 160--161.
\bib{joe} {\sc Hrgov\v ci\'c}, Hrvoje, personal communication.
\bib{jacopini} {\sc Jacopini}, Giuseppe, and Giovanna {\sc
Sontacchi}, ``Reversible parallel computation: an evolving space
model,'' to appear in {\sl Theor.\ Comp.\ Sci.\ \bf 75} (1990).
\bib{jaynes} {\sc Jaynes}, Edwin T., ``Information theory and statistical
mechanics,'' {\sl Phys.\ Rev.\ \bf 106} (1957), 620--630, and {\bf 108}
(1957), 171--190.
\bib{kaneko88} {\sc Kaneko}, Kunihiko, ``Symplectic cellular automata,''
{\sl Phys.\ Lett.\ A \bf 129} (1988), 9--16.
\bib{kari-thesis} {\sc Kari}, Jarkko, ``Reversibility and surjectivity
of cellular automata,'' Licentiate's Thesis, University of Turku, Finland (May
1989).
\bib{kari} {\sc Kari}, Jarkko, ``Reversibility of 2D cellular automata
is undecidable,'' {\sl Physica D}, this issue.
\bib{linden} {\sc Lindenmayer}, A., and G. {\sc Rozenberg} (ed.),
{\sl Automata, Language, and Development}, North-Holland (1976).
\bib{margolus-bbm} {\sc Margolus}, Norman, ``Physics-like models of
computation,'' {\sl Physica \bf 10\sl D} (1984), 81--95.
\bib{margolus-quantum} {\sc Margolus}, Norman, ``Quantum
Computation,'' {\sl Ann.\ New York Acad.\ Sci.\ \bf480}
(1986), 487--497.
\bib{margolus-thesis} {\sc Margolus}, Norman, ``Physics and
Computation'' (Ph.\ D. Thesis), {\sl Tech.\ Rep.\ MIT/LCS/TR-415}, MIT
Lab.\ for Comp.\ Sci.\ (March 1988).
\bib{cams} {\sc Margolus}, Norman, and Tommaso {\sc Toffoli}, ``Cellular
Automata Machines,'' \cite{doolen}, 219--248.
\bib{margolus-super} {\sc Margolus}, Norman, Tommaso {\sc Toffoli}, and
G\'erard {\sc Vichniac}, ``Cellular-Automata Supercomputers for
Fluid Dynamics Modeling,'' {\sl Phys.\ Rev.\ Lett.\ \bf56} (1986),
1694--1696.
\bib{maruoka76} {\sc Maruoka}, Akira, and Masayuki {\sc Kimura},
``Conditions for Injectivity of Global Maps for
Tessellation Automata,'' {\sl Info.\ Control \bf 32} (1976), 158--162.
\bib{maruoka79} {\sc Maruoka}, Akira, and Masayuki {\sc Kimura},
``Injectivity and Surjectivity of Parallel Maps for Cellular
Automata,'' {\sl J.\ Comp.\ Syst.\ Sci.\ \bf 18} (1979), 47--64.
\bib{maruoka82} {\sc Maruoka}, Akira, and Masayuki {\sc Kimura},
``Strong surjectivity is equivalent to $C$-injectivity,''
{\sl Theor.\ Comp.\ Sci.\ \bf 18} (1982), 269--277.
\bib{monaco} {\sc Monaco}, R. (ed.), {\sl Discrete Kinetic Theory,
Lattice Gas Dynamics, and Foundations of Hydrodynamics}, World
Scientific (1989).
\bib{moore} {\sc Moore}, Edward, ``Machine models of
self-reproduction,'' {\sl Proc.\ Symp.\ Appl.\ Math.\ \bf 14} (1962),
17--33; reprinted in \cite{burks}, 187--203.
\bib{moore-squad} {\sc Moore}, Edward, ``The firing squad
synchronization problem,'' in E. F. {\sc Moore} (ed.), {\sl Sequential
Machines}, Addison-Wesley (1964), 213--214.
\bib{morita89} {\sc Morita}, Kenichi, and Masateru {\sc Harao},
``Computation universality of one-dimensional reversible (injective)
cellular automata,'' {\sl Trans.\ IEICE E \bf 72} (1989), 758--762.
\bib{myhill} {\sc Myhill}, John, ``The converse of Moore's Garden-of-Eden
theorem,'' {\sl Proc.\ Am.\ Math.\ Soc.\ \bf 14} (1963), 685--686;
reprinted in \cite{burks}, 204--205.
\bib{nasu} {\sc Nasu}, Masakazu, ``Local maps inducing surjective global
maps of one-dimensional tessellation automata,'' {\sl Math.\ Systems Theory
\bf 11} (1978), 327--351.
\bib{packard85} {\sc Packard}, Norman, and Stephen {\sc Wolfram},
``Two-dimensional cellular automata,'' {\sl J.~Stat.~Phys.} {\bf 38}
(1985), 901--946.
\bib{patt} {\sc Patt}, Y. N., ``Injections of neighborhood
size three and four on the set of configurations from the infinite
one-dimensional tessellation automata of two-state cells''
(unpublished report, cited in \cite{amoroso}), ECON-N1-P-1, Ft.\
Monmouth, NJ 07703 (1971).
\bib{petri} {\sc Petri}, C. A., ``State-transition structures
in physics and in computation,'' \cite{endicott}, 979--992.
\bib{pomeau84} {\sc Pomeau}, Yves, ``Invariant in Cellular
Automata,'' {\sl J. Phys.\ \bf A17} (1984), L415--L418.
\bib{preston} {\sc Preston}, Kendall, and Michael {\sc Duff},
{\sl Modern Cellular Automata}, Plenum Press (1984).
\bib{richardson} {\sc Richardson}, D., ``Tessellation with Local
Transformations,'' {\sl J. Comp.\ Syst.\ Sci.\ \bf 6} (1972), 373--388.
\bib{sato} {\sc Sato}, Tadakazu, and Namio {\sc Honda}, ``Certain relations
between properties of maps of tessellation automata,'' {\sl J. Comp.\ Syst.\
Sci.\ \bf15} (1977), 121--145.
\bib{sears} {\sc Sears}, Michael, ``The automorphisms of the shift
dynamical systems are relatively sparse,'' {\sl Math.\ Syst.\ Theory \bf
5} (1971), 228--231.
\bib{smith} {\sc Smith}, Alvy, ``Cellular Automata Theory,''
{\sl Tech.\ Rep.\ 2}, Stanford Electronic Lab., Stanford Univ.\ (1969).
\bib{mark} {\sc Smith}, Mark, ``Representation of geometrical and
topological quantities in cellular automata,'' {\sl Physica D}, this
issue.
\bib{takesue87} {\sc Takesue}, Shinji, ``Reversible cellular
automata and statistical mechanics,'' {\sl Phys.\ Rev.\ Lett.\ \bf59}
(1987), 2499--2502.
\bib{takesue00} {\sc Takesue}, S., ``Relaxation properties of
elementary reversible cellular automata,'' {\sl Physica D}, this
issue.
\bib{toffoli-univ} {\sc Toffoli}, Tommaso, ``Computation and
construction universality of reversible cellular automata,'' {\sl J.
Comp.\ Syst.\ Sci.\ \bf 15} (1977), 213--231.
\bib{toffoli-thesis} {\sc Toffoli}, Tommaso, ``Cellular Automata
Mechanics,'' {\sl Tech.\ Rep.\ 208}, Comp.\ Comm.\ Sci.\ Dept.,
The Univ.\ of Michigan (1977).
\bib{toffoli-crank} {\sc Toffoli}, Tommaso, ``Bicontinuous
extension of reversible combinatorial functions,'' {\sl Math.\ Syst.\
Theory} {\bf 14} (1981), 13--23.
\bib{toffoli-rev} {\sc Toffoli}, Tommaso, ``Reversible
Computing,'' {\sl Automata, Languages and Programming} ({\sc de Bakker}
and {\sc van Leeuwen} eds.), Springer-Verlag (1980), 632--644.
\bib{cam5} {\sc Toffoli}, Tommaso, ``CAM: A high-performance
cellular-automaton machine,'' {\sl Physica \bf10\sl D} (1984),
195--204.
\bib{pde} {\sc Toffoli}, Tommaso, ``Cellular automata as an
alternative to (rather than an approximation of) differential
equations in modeling physics,'' {\sl Physica \bf10\sl D} (1984), 117--127.
\bib{toffoli-ibm} {\sc Toffoli}, Tommaso, ``Information Transport
Obeying the Continuity Equation,'' {\sl IBM J. Res.\ Develop.\ \bf
32}:1 (January 1988), 29--36.
\bib{four-topics} {\sc Toffoli}, Tommaso, ``Four topics in
lattice gases: Ergodicity; Relativity; Information flow; and Rule
compression for parallel lattice-gas machines,'' \cite{monaco}, 343--354.
\bib{cheap} {\sc Toffoli}, Tommaso, ``How cheap can mechanics' first
principles be?'' to appear in {\sl Complexity, Entropy, and the Physics
of Information} (W. H. {\sc Zurek} ed.), Addison-Wesley (1990).
\bib{frontiers} {\sc Toffoli}, Tommaso, ``Frontiers in computing,'' {\sl
Information Processing 89} (G. X. {\sc Ritter} ed.), North-Holland (1989),
1.
\bib{cambook} {\sc Toffoli}, Tommaso,
and Norman {\sc Margolus}, {\sl Cellular Automata Machines---A New
Environment for Modeling}, MIT Press (1987).
\bib{pm} {\sc Toffoli}, Tommaso, and Norman {\sc Margolus},
``Programmable matter,'' to appear in {\sl Physica D}, November 1990.
\bib{ica-full} {\sc Toffoli}, Tommaso, and Norman {\sc Margolus},
{\sl Invertible cellular automata}, occasionally updated technical memo
(MIT Lab.\ for Comp.\ Sci.) eventually to appear in book
form.
\bib{thooft} {\sc 't Hooft}, G\'erard, ``A two-dimensional model
with discrete general coordinate-invariance'' (preprint).
\bib{turing} {\sc Turing}, Alan, ``On computable numbers, with an
application to the Entscheidungsproblem,'' {\sl Proc.\ London Math.\
Soc., Ser.\ 2 \bf 42} (1936), 230--265.
\bib{ulam} {\sc Ulam}, Stanislaw, ``Random Processes and
Transformations,'' {\sl Proc.\ Int.\ Congr.\ Mathem.} (held in 1950)
{\bf 2} (1952), 264--275.
\bib{vichniac} {\sc Vichniac}, G\'erard, ``Simulating physics
with cellular automata,'' {\sl Physica \bf10\sl D} (1984), 96--115.
\bib{vonneumann} {\sc von Neumann}, John, {\sl Theory of
Self-Reproducing Automata} (edited and completed by Arthur {\sc Burks}),
Univ.\ of Illinois Press (1966).
\bib{wolfram-stat} {\sc Wolfram}, Stephen, ``Statistical mechanics of cellular
automata,'' {\sl Rev.\ Mod.\ Phys.\ \bf 55} (1983), 601--644.
\bib{wolfram-univ} {\sc Wolfram}, Stephen, ``Universality and
Complexity in Cellular Automata,'' {\sl Physica \bf10\sl D} (1984),
1--35.
\bib{wolfram-comp} {\sc Wolfram}, Stephen, ``Computation Theory
of Cellular Automata,'' {\sl Commun.\ Math.\ Phys.\ \bf 96} (1984), 15--57.
\bib{wolfram-hydro} {\sc Wolfram}, Stephen, ``Cellular automaton fluids 1:
Basic theory,'' {\sl J. Stat.\ Phys.\ \bf 45} (1986), 471--526.
\bib{wolfram-book} {\sc Wolfram}, Stephen (ed.), {\sl Theory
and Applications of Cellular Automata}, World Scientific (1986).
\bib{yaku} {\sc Yaku}, Takeo, ``Inverse and injectivity of parallel
relations induced by cellular automata,'' {\sl Proc.\ Am.\ Math.\ Soc.\
\bf58} (1976), 216--220.
\bib{zuse69} {\sc Zuse}, Konrad, {\sl Rechnender Raum}, Vieweg,
Braunschweig (1969); translated as ``Calculating Space,'' {\sl Tech.\
Transl.\ AZT-70-164-GEMIT}, MIT Project MAC (1970).
\end{thebibliography}
\end{document}