The reigning computing technology of the past fifty years, often referred to as the von Neumann architecture, is all but ubiquitous nowadays. Having proliferated into every aspect of our daily lives, this architecture rests on a basic principle that can be summed up as follows: one complex processor that sequentially performs a single complex task (at a given moment). In recent years a growing number of researchers have taken an interest in novel computational systems based on entirely different principles. Though coming from disparate domains, their work shares a common computational philosophy, which I call cellular computing.
At the heart of this paradigm lie three principles: the simplicity of the basic processing units, vast parallelism, and the locality of cellular connectivity.
Let us take a closer look at what is meant by these three principles. Firstly, the basic processor used as the fundamental unit of cellular computing---the cell---is simple. By this I mean that while a current-day, general-purpose processor is capable of performing quite complicated tasks, the cell can do very little in itself. Formally, this notion can be captured, say, by the difference between a universal Turing machine and a finite state machine. In practice, our experience of fifty years in building computing machines has left us with a good notion of what is meant by ``simple.''
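The gap between a universal Turing machine and a finite state machine can be made concrete with a small sketch. The cell below (a hypothetical example, not taken from any particular cellular system) is a two-state machine that tracks the parity of the 1-bits it has seen; unlike a general-purpose processor, it can do essentially nothing else.

```python
# A minimal sketch of a "simple" cell: a two-state finite state machine
# that tracks the parity of the 1-bits fed to it. Its entire behavior
# is captured by a four-entry transition table.
def make_parity_cell():
    # transition table: (state, input bit) -> next state
    table = {("even", 0): "even", ("even", 1): "odd",
             ("odd", 0): "odd", ("odd", 1): "even"}
    state = "even"

    def step(bit):
        nonlocal state
        state = table[(state, bit)]
        return state

    return step

cell = make_parity_cell()
for bit in [1, 0, 1, 1]:
    final = cell(bit)
print(final)  # "odd" -- three 1-bits seen
```

The point of the sketch is how little machinery a cell needs: a handful of states and a fixed transition table, with no program, memory hierarchy, or instruction set.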
The second principle is vast parallelism. Though parallel computers have been built and operated, they usually contain no more than a few dozen processors. In the parallel computing domain, ``massively parallel'' is a term usually reserved for those few machines that comprise a few thousand (or at most tens of thousands) processors. Cellular computing involves parallelism on a whole different scale, with the number of cells measured at times by evoking the exponential notation, $10^x$. To distinguish this huge number of processors from that involved in classical parallel computing, I shall use the term vast parallelism (as opposed to ``mere'' massive parallelism). This quantitative difference leads, as I shall argue, to novel qualitative properties, as nicely captured by the title of a 1972 paper by Philip Anderson, ``More is Different.''
The third and final distinguishing property of cellular computing concerns the local connectivity pattern between cells. This means that any interactions taking place are on a purely local basis---a cell can only communicate with a small number of other cells, most of which (if not all) are physically close by. Furthermore, the connection lines usually carry only a small amount of information. One implication of this principle is that no one cell has a global view of the entire system---there is no central controller.
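Locality can be illustrated with a small sketch (a hypothetical example, assuming a two-dimensional grid of cells): each cell communicates only with its four nearest neighbors, the so-called von Neumann neighborhood, and no cell can address the grid as a whole.

```python
# Hypothetical illustration of local connectivity: on an n-by-n grid,
# each cell sees only its four nearest neighbors (the von Neumann
# neighborhood). No cell has access to the global state of the grid.
def neighbors(i, j, n):
    # wrap around at the edges (toroidal grid)
    return [((i - 1) % n, j), ((i + 1) % n, j),
            (i, (j - 1) % n), (i, (j + 1) % n)]

print(neighbors(0, 0, 4))  # [(3, 0), (1, 0), (0, 3), (0, 1)]
```

Whatever update rule the cells follow, its inputs are restricted to this fixed, small neighborhood; any global behavior must emerge from such purely local interactions.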
Combining these three principles results in the definition cellular computing = simplicity + vast parallelism + locality. It is important to note that changing any single one of these terms in the equation results in a different paradigm (Figure 1). The three are thus seen to be highly interrelated (e.g., the attainment of vast parallelism is notably facilitated by the cells' simplicity and local connectivity).
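The equation above can be made concrete with a minimal sketch: an elementary one-dimensional cellular automaton, in which each cell is a two-state machine (simplicity), all cells update simultaneously (parallelism, though here only vast in principle), and each cell reads only its two immediate neighbors (locality). The particular rule used, XOR of the two neighbors (known as rule 90), is an arbitrary choice for illustration.

```python
# Minimal sketch of cellular computing: an elementary 1-D cellular
# automaton. Each cell holds one bit (simplicity), all cells update in
# lockstep (parallelism), and each cell reads only its two immediate
# neighbors (locality). Rule 90 (XOR of the neighbors) is an arbitrary
# example rule.
def step(cells):
    n = len(cells)
    # synchronous update: every cell's next state depends only on the
    # current states of its left and right neighbors (wrapping around)
    return [cells[(i - 1) % n] ^ cells[(i + 1) % n] for i in range(n)]

row = [0] * 7
row[3] = 1  # a single live cell in the middle
for _ in range(3):
    row = step(row)
print(row)  # [1, 0, 1, 0, 1, 0, 1]
```

Note that there is no central controller anywhere in the sketch: the global pattern arises solely from identical, simple cells applying the same local rule in parallel.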
Cellular computing is at heart a paradigm that aims at providing new means for doing computation in a more efficient manner than other approaches (in terms of speed, cost, power dissipation, information storage, quality of solutions), while potentially addressing much larger problem instances than was possible before---at least for some application domains. The aim of this paper is to present the philosophy underlying the cellular computing paradigm. My intention is not to present a formal account nor a comprehensive overview, but rather to provide the reader with an understanding of the principles, the theoretical and practical issues involved, and some of the possible applications.