Submission by the UK Computing Research Committee to the Nanotechnology Working Group of the Royal Society and the Royal Academy of Engineering

Prepared by Robin Milner and Susan Stepney, August 2003

Summary

Advances in technology repeatedly allow software to permeate the design of
artefacts at lower and lower levels. This occurred long ago with microprogramming,
when computers were stand-alone; more recently it occurred with programmable
networks (e.g. routers); it is now occurring with wireless networks. Nanotechnology
will lead to a new dramatic step of this kind. Although it will include applications
that require little computational input, it will also provide opportunities
for exploitation that involve complex computational interactions among structured
populations of nanoscopic agents. The more sophisticated these structures and
interactions, the more computer science (CS) research will be involved.

This submission is arranged as follows: Section 1 discusses in more detail the
character of the applications where CS is relevant. Section 2 outlines three
possible kinds of relevant pragmatic theory that computer scientists can expect
to study. Section 3 surveys how these theories may develop as generalisations
and refinements of current trends in CS. Section 4 discusses how CS is relevant
to the issues for safety and dependability that are raised by nanotechnology.

Two main points emerge. First, nanotechnology presents research challenges that
will lead to a greatly enriched and more general science of computation, which
is a scientific goal of intrinsic value. Second, safety and dependability will
present unprecedented demands; the science will be responsible not only for
robust design to meet these demands, but for robust analysis to show they have
been met.

1 Nanotechnology

*Nanotechnology* is the design, development and use of devices on the
nanometre (atomic) scale. In this document we are not so much concerned with
nano-scale artefacts that take the current trend of miniaturisation a few orders
of magnitude further. Rather we are interested in those *active* physical
nano-devices that themselves manipulate the world at their nano-scale in order
to manufacture macroscopic artefacts. This is Drexler’s [D86][D92] vision
of nano-scale *assemblers* that build (assemble) *macroscopic*
artefacts. (Such assemblers are often known as *nanites* or *nanobots*.)

In order for nanites to build macroscopic objects in useful timescales, there
needs to be a vast number of them. A starting population of a few nanites assembles
more of its kind, which in turn assemble more, with exponentially growing numbers.
Once they exist in sufficient numbers, they can build, or become, the macroscopic
artefact. This view of nanotechnology promises many awe-inspiring possibilities.
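As a back-of-envelope illustration (a hypothetical sketch of ours, assuming simply that each nanite copies itself once per generation), exponential doubling reaches body-scale populations within a few dozen generations:

```python
import math

# Hypothetical sketch: assume each nanite assembles one copy of itself
# per generation, so the population doubles each generation. Reaching
# N nanites from a single ancestor then takes ceil(log2(N)) generations.
TARGET = 10**13  # of the order of the number of cells in a human body

generations = math.ceil(math.log2(TARGET))
print(generations)  # 44 doublings suffice
```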

Some argue that such a technology is too good to be true, or at least question
the detail of Drexler’s predictions. But one should note that there is
no conclusive counter-argument to them; indeed, proteins and their associated
cellular machinery routinely assemble macroscopic artefacts, or, to use more
biological terminology, they *grow organisms*. This submission confines
itself to computational structures that will be relevant whenever *some*
technology for sophisticated populations of nanites is achieved, even if not
all that has been predicted.

In principle it is possible for nanites to assemble *any* physical artefact,
by carefully controlled placement of every component atom (possibly requiring
the use of much scaffolding). But in general this is infeasible: in the worst
case it could need the *global* control and choreography of the behaviour
of every individual nanite. A more feasible approach is to exploit mainly *local*
cooperation between suitably-programmed neighbouring nanites, possibly mediated
by their shared local environment (which also more closely mirrors the way biological
organisms grow).

In order for nanotechnology to be possible, the initial nanites must be fabricated
somehow. This complex engineering problem requires collaborative research by
physicists, chemists, engineers, and biologists. To the extent that the nanites
need to be *programmed* to perform their assembly tasks, computer science
(CS) also has a crucial role in this interdisciplinary challenge. We need to
develop capabilities to design, program and control complex networks of nanites,
so that they safely and dependably build the desired artefacts, and so that
they do *not* accidentally build *un*desired ones.

Initial CS research needs to focus on ways of designing and assembling
artefacts that can be described in terms of predominantly local interactions,
that is, in terms of the *emergent properties of vast numbers of cooperating
nanites*. This requires *analysis* of emergent behaviour; given the
orders of magnitude involved, this can be done only with a hierarchy of computational
models, explaining the assembly at many different levels of abstraction.

2 Pragmatic theories

What CS theory and practice do we need in order to be able to design, program
and control networks of nanites?

Emergent Properties

We need a pragmatic theory of *emergent properties*.

In much the same way that an organism is an emergent property of its genes and proteins, the assembled artefact will be an emergent property of the assembling nanites and their programming. In general, this problem is computationally irreducible: there are no “short cuts” to understanding or prediction, beyond watching the behaviour unfold. Thus reasoning about the precise behaviour of arbitrary networks with a number of nodes comparable to the number of cells in the human body (~10^13) is (currently) well beyond the state of the art. However, inability to solve the general problem, in principle or in practice, does not prevent exploration of large classes of specific and interesting problems (consider the theoretical impossibility of solving the general Halting Problem, which does not prevent reasoning about the termination of large classes of specific programs).
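A minimal computational sketch of this irreducibility (the choice of an elementary cellular automaton, and of rule 30 in particular, is our illustration, not part of the submission's argument): each cell obeys a trivial local rule, yet the only known way to learn the global pattern is to run the system.

```python
# Elementary cellular automaton on a ring: each cell's next state is a
# fixed function (here "rule 30") of its three-cell neighbourhood.
# Despite the trivial local rule, no short cut to the long-term global
# pattern is known beyond watching the behaviour unfold.
def step(cells, rule=30):
    n = len(cells)
    return [(rule >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
            for i in range(n)]

row = [0] * 31
row[15] = 1          # a single active cell
for _ in range(15):  # an emergent triangular structure appears
    row = step(row)
```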

Growth and Development

We need a pragmatic theory of *development and growth.*

A population of nanites first “grows” a vastly larger population,
then “grows” the artefact in question. Again, we need a *sufficient*
theory of growth – to enable us to reason about structures that are the
result of a growth process.
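One simple formal model of structures that are the result of a growth process is an L-system, in which global form emerges from parallel local rewriting; the algae example below is the classic one from the L-system literature [PrL], and its use here is our illustration:

```python
# An L-system (after Lindenmayer; see [PrL]): every symbol is rewritten
# in parallel by a local rule, and global structure emerges from the
# repeated rewriting -- a toy model of growth by local rules.
RULES = {"A": "AB", "B": "A"}  # Lindenmayer's algae model

def grow(axiom, steps):
    s = axiom
    for _ in range(steps):
        s = "".join(RULES.get(c, c) for c in s)
    return s

# The string lengths follow the Fibonacci numbers: 1, 2, 3, 5, 8, 13, ...
print(grow("A", 5))  # ABAABABAABAAB
```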

Biological insights from embryology and development will be fruitful here, and
the relevant ideas need to be abstracted and adapted for nanite assemblers.
This “artificial development” also has its own properties: for example,
the use of scaffolding will probably be much more important.

Research here will also feed back into biology. Which features of biological
organisms are consequences of growth in general, and which are consequences
of “wet-ware” growth, and so are different in assembled hardware?
What constraints are there in the growth process: is it possible to “grow”
a *cooked* steak *ab initio*, or must it first be grown raw (isolated,
or as part of a cow), and then chemically modified?

Complex Networks

We need a pragmatic theory of *dynamic, heterogeneous, unstructured, open
networks* [Ste].

*Dynamic*: the network is not in steady state or equilibrium, but is
far from equilibrium, governed by attractors and trajectories. If the nanites
were in equilibrium, they would not be assembling anything: biological systems
in equilibrium are dead. Swarm networks may offer insights into this kind of
dynamics [BDT].

*Heterogeneous*: the nodes (nanites), the connections (neighbourhoods),
and the communications (messages) can be of many different types, including
higher order types (for example, a message might communicate the instructions
for a new kind of nanite to be assembled). If it were *homo*geneous,
it would most likely form some unpleasant formless mass.

*Unstructured*: the network connectivity has no particular regularity:
it is not fully regular, or fully connected, or even fully random. Clearly there
need to be *some* kinds of regularity present, but these are likely to
be of kinds that cannot be reasoned about in terms of simple averages or mean
field notions; they are more likely to have fractal structure. Some recent advances
in *Small World* networks offer intriguing new insights [Bar][Wat].

*Open (metadynamic)*: the structures are unbounded, and the components
are not fixed: nodes and connections may come and go; new *kinds* of
nodes and connections may appear. Nodes and connections appear as nanites are
born, change as nanites move around, and disappear as scaffolding is removed.

Much current mathematical network theory is restricted to static, homogeneous,
structured, closed networks. We need to relax all of these assumptions. Progress
and intuition in this highly complex area are likely to be inspired and driven
by CS behavioural modelling, simulation, and patterns.
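As one concrete example of relaxing the "structured" assumption, a small-world network in the style of [Wat] can be sketched as a regular ring lattice with a fraction of its edges rewired at random (the parameters below are illustrative choices of ours):

```python
import random

# Small-world construction in the style of [Wat]: start from a regular
# ring lattice (each node joined to its k nearest neighbours) and rewire
# each edge with probability p to a random target. Even small p gives
# short global paths while local clustering survives -- a network that
# is neither fully regular nor fully random.
def small_world(n, k, p, seed=0):
    rng = random.Random(seed)
    edges = set()
    for u in range(n):
        for j in range(1, k // 2 + 1):
            v = (u + j) % n
            if rng.random() < p:
                v = rng.randrange(n)  # long-range shortcut
            if v != u:
                edges.add((u, v))
    return edges

net = small_world(n=1000, k=4, p=0.05)
```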

Complex Adaptive Systems

All the CS advances mentioned above would have application well beyond nanotechnology.
All are basic requirements for the general area of *Complex Adaptive Systems*,
of which nanotechnology is but one exemplar. Real world examples of CASs include
swarms and flocks, ants, immune systems, brains, autocatalytic networks, life,
ecologies, and so on. Artificial CASs include complex control systems (industrial
plants, Air Traffic Control, etc), eCommerce supply chains and webs, telecoms
systems and the Internet, and ubiquitous computing with its hordes of communicating
smart devices, economic systems, and so on.

3 Behavioural Modelling

The pragmatic theories for Complex Adaptive Systems, above, must be developed
in response to the challenge of nanotechnology, but they need not start from
scratch. During the last two or three decades computer scientists have eroded
the boundary between programming, which *prescribes* behaviour of a system,
and modelling, which *analyses* it. This trend arises naturally from
a change of emphasis, from stand-alone computers doing one thing at a time to
*distributed* systems – networks of devices each acting independently,
with no centralised control. The resulting computational models are in varying
degrees logical, algebraic, non-deterministic, and stochastic. They have been effectively
used to analyse programming languages and communication disciplines. They have
also been applied to computer security, mobile phone systems, behaviour in ant
colonies, business processes, and signal transduction in biological cells.

A large system such as the Internet can be modelled at many levels of *abstraction*,
correlated where possible with the structure of the system. At the higher levels,
the analysis of agents’ behaviour need not depend on the underlying technology
used to realise them. A natural research direction is therefore to extrapolate
existing CS models to nanosystems where, despite orders of magnitude increase
in population size (compared with, say, the Internet), many of the same general
principles of emergence and behaviour may apply.

At the lowest levels of abstraction, which may be called *embodiment*,
the analysis of agents’ behaviour depends crucially on the underlying technology
used to realise them. For example, individual nanites are made of only small
numbers of atoms, so a one-atom mutation to a nanite – caused by faults
in manufacture, by other nanites, or by random impact of cosmic rays – could
have a dramatic effect on behaviour. In order to reason about the kinds of change
that mutations might make (to reason about the “adjacent possible”
[Kau] of the nanite), it is essential to know the detailed make-up and characteristics
of the system undergoing mutation.

Close cooperation is therefore needed among many research disciplines, of which
CS is one, in order to understand nanopopulations fully. From the CS viewpoint,
the gain (as we said in the introduction) will be a greatly enriched and more
general science of computation. We conclude this section by summarising some
of the concepts, theories and tools that CS can bring to the cooperation at
the outset. We cite only a selection from the large literature.

Stand-alone computation

Before distributed computing systems became the norm, much computing research
laid foundations for the models and tools that those systems need. A start was
made in establishing the *verification* of computer programs as an activity
in formal logic [Flo][Hoa]. Tools for computer-assisted verification, especially
for computer hardware designs [Gor], were pioneered. The status of computer
programs as mathematical descriptions of behaviour was established [ScS]. Theories
of types began to emerge as a powerful aid to behavioural analysis as well as
to programming [Rey]. Even in the 1940s, von Neumann’s model of self-reproducing
cellular automata anticipated some of the central ideas of nanotechnology [voN].

Abstract machines and process calculi

The first model to capture the complex interplay between non-determinism and
concurrency in distributed systems was Petri Nets [Pet]; these nets were designed
for information flow in natural as well as man-made systems. In the early eighties,
algebraic process calculi [BHR][Mil] were designed to model interactive systems
hierarchically, and to model their behaviour abstractly. The Chemical Abstract
Machine [BeB] captured the spatial structure of systems. The pi calculus [MPW]
and mobile ambient calculus [CaG] made a further step in modelling systems that
can reconfigure both their spatial arrangement and their connectivity.

These models have influenced the design of programming and specification languages,
for example LOTOS, occam and Ada. They have been developed to model systems
stochastically, and to deal with hybrid discrete/continuous systems. Recently
their theory has been seen to extend to graphical models that are *a priori*
suitable for populations of agents such as nanites.
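The core idea of synchronisation in such calculi can be rendered as a toy sketch (deliberately minimal and ours alone, not any published calculus: we write "~a" for the complement of action "a", in the CCS style of [Mil]):

```python
# A toy rendering of process-calculus synchronisation (our sketch, not a
# published calculus): a process is a list of pending actions, "~a" is
# the complement of action "a", and two parallel processes interact by
# consuming matching complementary actions as single handshakes.
def complement(action):
    return action[1:] if action.startswith("~") else "~" + action

def handshake(p, q):
    """Pair off complementary head actions until they no longer match;
    return the trace of synchronisations and the residual processes."""
    trace = []
    while p and q and q[0] == complement(p[0]):
        trace.append(p[0].lstrip("~"))
        p, q = p[1:], q[1:]
    return trace, p, q

# A sender emitting on channels a then b, meeting a matching receiver:
trace, rest_p, rest_q = handshake(["a", "b"], ["~a", "~b"])
print(trace)  # ['a', 'b']
```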

Logics and Tools

Allied to algebraic calculi are new forms of mathematical logic, especially modal logics, specially designed to specify the properties that an interactive system should satisfy. Well-known examples are dynamic logic [Pra], temporal logic [Pnu], the temporal logic of actions [LeL] and the mu calculus [Koz]. These logics often have a close link with algebraic calculi: an algebraic term denotes (part of) a system, while a logical formula says (in part) how it should behave. This underlies a successfully applied incremental methodology for system analysis: one verifies more and more properties of more and more parts (even the whole) of a system. Such verification is aided by software tools: model-checkers that can automatically verify properties of fairly complex finite-state systems [CES], and semi-automated tools that can perform verifications with human guidance [CPS].
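The fixed-point iteration at the heart of such tools can be sketched in a few lines (our illustration of the idea behind checkers in the spirit of [Koz][CES], applied to the reachability property EF(goal) on a tiny hand-built transition system):

```python
# Fixed-point model checking in miniature: the states satisfying
# EF(goal) -- "a goal state is reachable" -- form the least fixed point
# of "goal, or has a successor already known to satisfy EF(goal)".
def ef(states, trans, goal):
    sat = set(goal)
    while True:
        new = {s for s in states if any(t in sat for t in trans.get(s, ()))}
        if new <= sat:
            return sat
        sat |= new

# A four-state system: 0 -> 1 -> 2, and 3 -> 3 (a sink never reaching 2).
system = {0: [1], 1: [2], 3: [3]}
print(sorted(ef({0, 1, 2, 3}, system, goal={2})))  # [0, 1, 2]
```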

4 Safety and dependability

Nanites can disassemble, as well as assemble, structures. This has led to
the notion of the so-called “grey goo” problem: nightmare visions
of hordes of rogue nanites disassembling the wrong things; disassembling people,
or even disassembling the entire planet. It is potentially the ultimate terrorist
weapon.

Even if nanites are not deliberately engineered to be destructive, such objects
will “naturally” appear in any replicating swarm of nanites. We
are dealing with such vast numbers of nanites that some will spontaneously “mutate”.
Given the three features of reproduction, variation, and selection, some form
of evolution will inevitably occur, leading to populations of “adjacent
possible” undesigned nanites. Computer science, allied with biology, is
crucial to the task of investigating and understanding these artificial evolutionary
processes, and the defences we can design against them.
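The inevitability of mutation at these population sizes can be illustrated with a toy replication model (entirely hypothetical numbers: a 64-bit "genome" and a 0.1% per-bit copy error rate, chosen by us for illustration):

```python
import random

# Toy illustration: replicating bit-string "genomes" with a small
# per-bit copy error rate drift away from the designed genome within a
# few tens of generations, even with no selection pressure at all.
def replicate(genome, error_rate, rng):
    return [b ^ (rng.random() < error_rate) for b in genome]

rng = random.Random(1)
designed = [0] * 64
population = [designed] * 100
for generation in range(50):
    population = [replicate(g, 0.001, rng) for g in population]

# How many of the 100 lineages no longer match the designed genome?
mutants = sum(g != designed for g in population)
```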

More generally, our discussion of behavioural modelling makes it clear that
*dependability* – the quality of a system that justifies its use
even in critical conditions – is already a topic of extensive research
in computer science. It involves mathematical analysis, as in the case of program
verification and computer security; more widely, it involves making systems
aware of, and able to report upon, their behaviour. It cannot exist without
good modelling. The modelling of nanopopulations with dependability in mind,
given their emergent properties and the inevitability of mutation, offers a
huge challenge to CS, which its researchers will welcome.

Conclusion

Nanotech assemblers offer the promise of fantastic rewards. Some forms of nano-assemblers may well be exploitable and exploited in many ways without much CS input. Before we can achieve the full promise, however, there are many hard computer science problems to solve, concerning the design of emergent properties, the growth of physical artefacts, the programming and control of nanites, and defences against the “grey goo” and other safety-critical scenarios.

References

[Bar] A.-L. Barabasi. Linked: the new science of networks. Perseus, 2002.

[BeB] G. Berry, G. Boudol. The chemical abstract machine. Theoretical Computer
Science 96, 217-248, 1992.

[BDT] E. W. Bonabeau, M. Dorigo, G. Theraulaz. Swarm Intelligence: from natural
to artificial systems. OUP, 1999.

[BHR] S. D. Brookes, C. A. R. Hoare, A. W. Roscoe. A theory of communicating
sequential processes. Journal of the ACM, 31(3), 560-599, 1984.

[CaG] L. Cardelli, A. Gordon. Mobile ambients. Theoretical Computer Science,
240, 177-213, 2000.

[CES] E. M. Clarke, E. A. Emerson, A. P. Sistla. Automatic verification of finite-state
concurrent systems using temporal logic specifications. ACM Transactions on
Programming Languages and Systems, 8(2), 244-263, 1986.

[CPS] R. Cleaveland, J. Parrow, B. Steffen. The Concurrency Workbench: A Semantics
Based Tool for the Verification of Concurrent Systems. ACM ToPLaS, 15, 36-72,
1993.

[D86] K. Eric Drexler. Engines of Creation: the coming era of nanotechnology.
Doubleday, 1986.

[D92] K. Eric Drexler. Nanosystems: molecular machinery, manufacturing and computation.
Wiley, 1992.

[Flo] R. W. Floyd. Assigning meanings to programs. Mathematical Aspects of Computer
Science, Proceedings of Symposia in Applied Mathematics 19, American Mathematical
Society, 19-32, 1967.

[Gor] M. J. C. Gordon. HOL: A proof generating system for higher-order logic.
VLSI Specification, Verification and Synthesis, Kluwer 1987.

[Hoa] C. A. R. Hoare. An axiomatic basis for computer programming. CACM, 12(10),
576-580, 1969.

[Kau] S. A. Kauffman. Investigations. OUP, 2000.

[Koz] D. Kozen. Results on the propositional mu-calculus. Theoretical Computer
Science, 27, 333-354, 1983.

[LeL] L. Lamport. The Temporal Logic of Actions. ACM Toplas, 16(3), 872-923,
1994.

[Mil] R. Milner. A calculus of communicating systems. Lecture Notes in Computer
Science 92, Springer, 1980.

[MPW] R. Milner, J. Parrow, D. Walker. A calculus of mobile processes. Information
and Computation, 100(1), 1-77, 1992.

[Pet] C. A. Petri. Kommunikation mit Automaten. PhD thesis, Technical Report 2,
Institut für Instrumentelle Mathematik, Bonn, 1962.

[Pnu] A. Pnueli. The temporal logic of programs. Proceedings of FOCS, 46-57,
IEEE, 1977.

[Pra] V. R. Pratt. Semantical considerations on Floyd-Hoare logic. Proceedings of the 17th
Symposium on Foundations of Computer Science, IEEE, 109-121, 1976.

[PrL] P. Prusinkiewicz, A. Lindenmayer. The Algorithmic Beauty of Plants. Springer,
1990.

[Rey] J.C. Reynolds. Towards a theory of type structure. Proceedings of Paris
Symposium on Programming, Lecture Notes in Computer Science 16, 408-425, 1974.

[ScS] D. S. Scott, C. Strachey. Towards a mathematical semantics for computer
languages. Proceedings of Symposia on Computers and Automata, Microwave Research
Institute Symposia 21, 19-46, 1971.

[Ste] S. Stepney. Critical Critical Systems. In Formal Aspects of Security,
FASeC'02, Lecture Notes in Computer Science 2629, Springer, 2003.

[voN] J. von Neumann. Theory of self-reproducing automata. University of Illinois
Press, 1966.

[Wat] D. J. Watts. Small Worlds: the dynamics of networks between order
and randomness. Princeton University Press, 1999.