Following is a list of recommended readings compiled by Jim Kennedy, coauthor of Swarm Intelligence. We will receive lists of recommended readings from other complexity experts, and as those lists become available, we will expand this page. In the meantime, we believe you may find the following commentary provided by Dr. Kennedy an interesting and useful tool.

Some pieces of writing are fundamental to complex systems research, and have flavored everything that came after them. These works should be read by all students, if only to establish a common vocabulary for discussing complex systems and related phenomena. The books I list here are all standard, mainstream literature with which anyone in the field should be familiar. Please don't take my recommendations to mean that I agree with every word written in these books; this is not an elitist recommendation, but rather one intended for the well-rounded scholar who just wants to know everything. Most of these books are not new, but you will find every one of them mentioned at some time or another. -Jim Kennedy

S. Forrest
(Ed.) (1991). Emergent Computation: Self-organizing, collective, and cooperative phenomena in natural and artificial computing networks. Cambridge, Massachusetts: MIT Press.

Almost all of the papers in this volume are classics, and several are absolutely essential reading. Primary among these is Christopher Langton's "Computation at the edge of chaos: phase transitions and emergent computation." It is necessary for the student to read Langton's paper, if nothing else in the field. I found J. D. Farmer's "A Rosetta Stone for connectionism" to be enlightening, though it was never as widely cited as it should have been; the same can be said for Paul Churchland's "On the nature of explanation: a PDP approach," which I xeroxed and gave to all my friends. Stevan Harnad's "The symbol grounding problem" is cited very frequently, and became a cornerstone of subsequent work in computational intelligence. This compilation, a special issue of Physica D, is groundbreaking in every sense of the word.

Holland, J.
(1975). Adaptation in natural and artificial systems. Ann Arbor: University of Michigan Press.

This eye-opening pioneering work was ignored for the first decade of its existence, and has grown to be a bible in its field. Holland develops and implements many of the fundamental ideas of complexity science. Note, though, that some of his ideas have come under attack, for instance the building-block hypothesis and the schema theorem; read skeptically, as usual. If you are going to study complexity, you are going to have to read this book.
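The genetic algorithm Holland pioneered is easy to sketch. The toy run below is my own minimal illustration, not Holland's implementation: a OneMax fitness function, tournament selection, one-point crossover, and bit-flip mutation, with parameters chosen only for demonstration.

```python
import random

random.seed(1)

def onemax(bits):
    """Toy fitness: count of 1s; the all-ones string is optimal."""
    return sum(bits)

def evolve(n_bits=20, pop_size=30, generations=60, p_mut=0.02):
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        def pick():
            # Tournament selection: fitter of two random individuals breeds.
            a, b = random.sample(pop, 2)
            return a if onemax(a) >= onemax(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            cut = random.randrange(1, n_bits)            # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [b ^ (random.random() < p_mut) for b in child]  # mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=onemax)

best = evolve()
print(onemax(best))  # typically at or near the 20-bit optimum
```

Crossover is the interesting operator here: the building-block hypothesis mentioned above is precisely the (contested) claim that recombining short, fit substrings like these is what gives the algorithm its power.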

Wolfram, S.
(1994). Cellular automata and complexity: Collected papers. Reading, Massachusetts: Addison-Wesley Publishing Company.

This is not the "new" book on the "new kind" of science (which is scarcely science at all). This is a classic collection of groundbreaking papers on computation with cellular automata. All through the 1980s, Wolfram was setting the stage for the science of complexity as we know it now, with his meticulous and thoughtful study of these computer programs that make complex patterns out of simple instructions. While you're at it, you should go to Rudy Rucker's web site and get his cellular automaton software, which runs in DOS but is still the best out there; then you can experiment with these programs yourself. You will also want the excellent Game of Life program for Windows called Life32, which can be found online.
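You don't even need special software to see complexity grow from simple instructions; an elementary cellular automaton takes a few lines. This sketch uses Wolfram's standard rule numbering, but the rule, grid size, and ASCII rendering are my own choices:

```python
def step(cells, rule=30):
    """One update of an elementary cellular automaton (Wolfram numbering).

    Each cell's next state is the bit of `rule` indexed by its
    three-cell neighborhood, read as a binary number."""
    n = len(cells)
    return [(rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)]

# A single black cell; Rule 30 grows a famously irregular triangle from it.
row = [0] * 31
row[15] = 1
for _ in range(15):
    print(''.join('#' if c else '.' for c in row))
    row = step(row)
```

The entire "program" is the 8-bit lookup table encoded in the rule number, yet Rule 30's output is irregular enough that Wolfram studied it as a randomness generator.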

Kauffman, S.
(1995). At home in the universe: The search for the laws of self-organization and complexity. Oxford: Oxford University Press.

Kauffman came out with two books at about the same time, covering about the same subject matter. The difference is that this one was made to be read by an intelligent, well-educated reader, while The origins of order is intended for a technical audience already intimate with the most arcane methods of complexity theory. In At home in the universe, Kauffman explains such important concepts as NK landscapes, self-organizing Boolean networks, and evolvability on fitness landscapes in a warm, personal, and very readable way. You will spend some time studying the graphs and tables, but it will be worth it.
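The NK model itself is simple to state, even if its consequences are not: each of N sites contributes a fitness value that depends on its own state and the states of K neighbors. The sketch below is my own minimal rendering, not Kauffman's code; the circular neighborhoods and lazily drawn random contribution tables are arbitrary choices.

```python
import random

random.seed(0)

def make_nk(n=8, k=2):
    """Random NK landscape: site i's contribution depends on itself and
    its k neighbors to the right (with wraparound)."""
    tables = [{} for _ in range(n)]

    def fitness(genome):
        total = 0.0
        for i in range(n):
            key = tuple(genome[(i + j) % n] for j in range(k + 1))
            if key not in tables[i]:
                tables[i][key] = random.random()  # contribution drawn once, then cached
            total += tables[i][key]
        return total / n

    return fitness

f = make_nk()
genome = [random.randint(0, 1) for _ in range(8)]
print(round(f(genome), 3))
```

Raising K couples more sites together, which is exactly what makes the landscape more rugged: flipping one bit changes K+1 contributions at once, so local optima proliferate.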

Dawkins, R.
(1976). The selfish gene. New York: Oxford University Press.

This is where the meme of memes came from, and the book offers a provocative perspective on genetics and evolution. You will commit a faux pas at a complex-systems dinner party if you are not familiar with this book. Be aware though that some of Dawkins' ideas have met disapproval, even by other evolutionary theorists (who fight among themselves like cats and dogs: for the real dirt, you should read Ullica Segerstråle's Defenders of the truth). If you like The selfish gene, you should also read The blind watchmaker, by the same author. At the least, look at his biomorphs, an early implementation of a clever kind of genetic algorithm different from Holland's but illustrative nevertheless of the power of simple variation and selection.
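Besides the biomorphs, The blind watchmaker contains Dawkins' "weasel" program, a vivid demonstration of cumulative variation and selection. Here is a minimal version; the mutation rate and litter size are my own choices, not Dawkins' exact parameters.

```python
import random

random.seed(2)
TARGET = "METHINKS IT IS LIKE A WEASEL"
CHARS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def score(s):
    """Number of positions matching the target phrase."""
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rate=0.05):
    """Each character has a small chance of being replaced at random."""
    return ''.join(random.choice(CHARS) if random.random() < rate else c for c in s)

current = ''.join(random.choice(CHARS) for _ in TARGET)
gen = 0
while current != TARGET:
    gen += 1
    # Breed a litter of mutants and keep the one closest to the target.
    current = max((mutate(current) for _ in range(100)), key=score)
print(gen, current)
```

The point of the exercise is the contrast with single-step chance: blindly guessing the whole 28-character phrase would take astronomically long, while cumulative selection of small variations finds it in a modest number of generations.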

Brooks, R.
(1990). Elephants don't play chess. Robotics and Autonomous Systems, 6, 3-15. The paper is also available online.

Rodney Brooks builds robots at MIT. In this highly entertaining paper he undercuts all the assumptions of good old-fashioned artificial intelligence and shows that a self-organizing distributed system is superior for controlling the behavior of robots. This paper changed the way I thought about organization, not just in robots but in everything.
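The layered idea behind Brooks' subsumption architecture can be caricatured in a few lines. This is only a toy sketch of the principle, not Brooks' robot code; the behaviors, sensor fields, and thresholds below are entirely hypothetical.

```python
# Subsumption-style control, caricatured: each layer maps sensor readings to
# an action or None, and higher-priority layers suppress lower ones when they fire.

def avoid(sensors):
    """Highest priority: back away from imminent collisions."""
    return "reverse" if sensors["distance"] < 0.2 else None

def follow_wall(sensors):
    """Middle layer: turn when a wall is sensed on the right."""
    return "turn-left" if sensors["wall_right"] else None

def wander(sensors):
    """Lowest layer: default behavior, always fires."""
    return "forward"

LAYERS = [avoid, follow_wall, wander]  # ordered highest to lowest priority

def act(sensors):
    for layer in LAYERS:
        action = layer(sensors)
        if action is not None:
            return action

print(act({"distance": 0.1, "wall_right": True}))   # avoid wins: "reverse"
print(act({"distance": 0.9, "wall_right": False}))  # only wander fires: "forward"
```

Note what is missing: there is no central world model or planner. Coherent behavior emerges from the interaction of simple layers with the environment, which is Brooks' central claim.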

Watts, D.
(1999). Small worlds. Princeton, NJ: Princeton University Press.

Not light reading, but a useful summary of recent research in social networks, and incidentally other kinds of networks. I recommend this book because of Watts' clear and thorough explanation of a number of important topics in graph theory. The topology of a network greatly affects the way information can flow through it, whether it is a social group, an ecological system, or anything else. Clusters, cliques, and distance are just some factors that influence the behavior of an interconnected complex system.
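The small-world effect Watts analyzes is easy to see numerically: a handful of random long-range shortcuts added to a ring lattice sharply reduces the average shortest-path distance. This is a rough sketch of that comparison (graph size, degree, and shortcut count are arbitrary choices of mine):

```python
from collections import deque
import random

random.seed(0)

def avg_path_length(adj):
    """Mean shortest-path length over all node pairs, via BFS from each node."""
    total = count = 0
    for src in range(len(adj)):
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
        count += len(dist) - 1
    return total / count

def ring(n=60, k=2):
    """Ring lattice: each node linked to its k nearest neighbors on each side."""
    adj = [set() for _ in range(n)]
    for i in range(n):
        for j in range(1, k + 1):
            adj[i].add((i + j) % n)
            adj[(i + j) % n].add(i)
    return adj

lattice = ring()
small_world = ring()
for _ in range(10):  # a few random long-range shortcuts
    a, b = random.sample(range(60), 2)
    small_world[a].add(b)
    small_world[b].add(a)

print(round(avg_path_length(lattice), 2), round(avg_path_length(small_world), 2))
```

The shortcuts barely change the local clustering, yet path lengths drop substantially; that combination of high clustering and short paths is what defines a small-world network.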

Wolpert, D. H.,
and Macready, W. G. (1995). No free lunch theorems for search. Santa Fe Institute working paper SFI-TR-95-02-010; and Wolpert, D. H., and Macready, W. G. (1997). No free lunch theorems for optimization. IEEE Transactions on Evolutionary Computation, 1, 67-82.

You don't have to read both of these papers, but any serious complexity theorist must get through at least one of them, and must absorb their implications. I recommend them because I would like to hear the greatest possible number of arguments as to why the NFL theorems (as they are known) are absurd, illogical, unlikable, and/or irrelevant. The theorems prove that no search or optimization algorithm can be better than any other, averaged over all possible functions. The problem is that the reasoning of the theorems is impeccable - they must be correct. But if they are correct, there is no sense in trying to do anything better, because what's better in one situation is worse in another. This is a huge debate, in its infancy, and you might have something to say about it.
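The averaged claim can even be checked by brute force on a toy problem. The sketch below, my own construction rather than anything from the papers, averages two fixed non-repeating search orders over every possible function from four points to three values; the NFL result says the averages must come out identical, and they do.

```python
from itertools import product

# Average two fixed, non-revisiting search orders over ALL functions
# f: {0,1,2,3} -> {0,1,2}. NFL implies the averages are equal.
orders = ([0, 1, 2, 3], [3, 1, 0, 2])  # two arbitrary deterministic strategies
budget = 2                             # evaluations each strategy is allowed

def best_found(order, f):
    """Best value seen within the evaluation budget; f is a value tuple."""
    return max(f[x] for x in order[:budget])

averages = []
for order in orders:
    total = sum(best_found(order, f) for f in product(range(3), repeat=4))
    averages.append(total / 3 ** 4)
print(averages)  # the two averages are identical
```

The symmetry argument is visible in miniature here: averaged over all functions, any set of two distinct sample points sees the same distribution of values, so no ordering can help. Cleverness only pays off once you restrict attention to a structured subset of functions.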

Rumelhart, D. E.,
and McClelland, J. L. (1986). Parallel Distributed Processing - Vol. 1: Foundations; and McClelland, J. L., and Rumelhart, D. E. (1986). Parallel Distributed Processing - Vol. 2: Psychological and Biological Models. Cambridge, Massachusetts: The MIT Press.

These books, known almost officially as "the bible of connectionism," introduced the world to the concept of neural networks. The authors are psychologists, and their presentation is an explanation of cognitive processes, but of course neural nets have taken off as an engineering tool, a computer science paradigm unto itself, and a way to model and understand complex systems of very many types. Again, this two-volume set should be on every complexity researcher's bookshelf; there's just too much here to ignore, you've got to read it.

Dyson, G. B.
(1997). Darwin among the machines: The evolution of global intelligence. Reading, Massachusetts: Perseus Books.

Over the years, there have been a number of very good books chronicling the emergence of the complexity movement in science, including:

· Complexity, by W. Mitchell Waldrop

· Darwin's dangerous idea, by Daniel Dennett

· Out of control, by Kevin Kelly

· Complexification, by John Casti

· The quark and the jaguar, by Murray Gell-Mann

· Chaos, by James Gleick

· Dreams of reason, by Heinz Pagels

These books are all accessible high-level overviews of some aspect of complexity science, ranging from the GED-optional level (Kelly) to the rather technical (Gell-Mann), and it seems to me that everyone in the field should have read all of them.

Darwin among the machines covers much the same ground, but goes beyond these other introductory volumes. Dyson will challenge you with new ways to think about such things as artificial life and intelligent machines. His account is exceptionally personal and thoroughly informed; this book just transcends the others listed here.