The Bactra Review: Occasional and eclectic book reviews by Cosma Shalizi, no. 132

A New Kind of Science

by Stephen Wolfram

Wolfram Media, 2002

A Rare Blend of Monster Raving Egomania and Utter Batshit Insanity

Attention conservation notice: Once, I was one of the authors of a paper on cellular automata. Lawyers for Wolfram Research Inc. threatened to sue me, my co-authors and our employer, because one of our citations referred to a certain mathematical proof, and they claimed the existence of this proof was a trade secret of Wolfram Research. I am sorry to say that our employer knuckled under, and so did we, and we replaced that version of the paper with another, without the offending citation. I think my judgments on Wolfram and his works are accurate, but they're not disinterested.

With that out of the way: it is my considered, professional opinion that A New Kind of Science shows that Wolfram has become a crank in the classic mold, which is a shame, since he's a really bright man, and once upon a time did some good math, even if he has always been arrogant.

As is well-known (if only from his own publicity), Wolfram was a child prodigy in mathematics, who got his Ph.D. in theoretical physics at a tender age, and then, in the early and mid-1980s, was part of a wave of renewed interest in the subject of cellular automata. The constant reader of these reviews will recall that these are mathematical systems which are supposed to be toy models of physics. Space consists of discrete cells arranged in a regular lattice (like a chess-board, or a honeycomb), and time advances in discrete ticks. At each time step, each cell is in one of a finite number of states, which it changes according to a preset rule, after examining its own state and the states of its neighbors. A physicist would call a CA a fully-discretized classical field theory; a computer scientist would say each cell is a finite-state transducer, and the whole system a parallel, distributed model of computation. They were introduced by the great mathematician John von Neumann in the 1950s to settle the question of whether a machine could reproduce itself (answer: yes), and have since found a productive niche in modeling fluid mechanics, pattern formation, and many kinds of self-organizing systems.
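For concreteness, here is a minimal sketch of that definition in Python. The lattice, the rule, and the boundary conditions are illustrative choices of mine, not anything from the book: a two-state CA on a small square lattice, updated synchronously, with a rule that turns a cell on when at least two of its four nearest neighbors are on.

```python
# A minimal cellular automaton: a lattice of cells, each in one of a finite
# number of states, all updated at once from their own state and their
# neighbors' states.  The rule below is purely illustrative.
import numpy as np

def ca_step(grid, rule):
    """One synchronous update; boundaries wrap around (a torus)."""
    n, m = grid.shape
    new = np.empty_like(grid)
    for i in range(n):
        for j in range(m):
            neighbors = (grid[(i - 1) % n, j], grid[(i + 1) % n, j],
                         grid[i, (j - 1) % m], grid[i, (j + 1) % m])
            new[i, j] = rule(grid[i, j], neighbors)
    return new

# Illustrative rule: a cell turns on iff at least two of its four neighbors are on.
majority = lambda own, neighbors: int(sum(neighbors) >= 2)

grid = np.random.randint(0, 2, size=(32, 32))
for _ in range(10):
    grid = ca_step(grid, majority)
```

Swap in a different rule function and you have a different CA; the whole formalism is nothing more than a lookup table applied everywhere at once.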

After the foundational work of von Neumann and co., there was a long fallow period in the study of CAs, when publications slowed to a trickle, and people were more likely to think of themselves as studying the statistical mechanics of spin systems, or the ergodic properties of interacting particle systems, than cellular automata as such. The major exception was a popular CA invented by John Conway, the Game of Life, or just Life, which spawned a dedicated following, trying to fathom how such a ridiculously simple set of rules could produce such monstrously complicated results. In the late 1970s, mathematicians and physicists became increasingly interested in CAs as such, largely owing to the advent of (comparatively) cheap and powerful desktop computers, which let people simulate and visualize CAs. There was a school of thought — obscure, but surprisingly widely known — which, following the physicist Ed Fredkin, thought that the universe as a whole might in some sense be a CA. Many people participated in this revival, in many places — prominent names include, alphabetically, Crutchfield, Durrett, Farmer, Frisch, Goles, Grassberger, Liggett, Margolus, Packard, Toffoli, Vichniac, etc.

Wolfram's first paper on CAs, published in 1983, was titled "The Statistical Mechanics of Cellular Automata". It focused its attention on particularly simple — he said "elementary" — CAs: one spatial dimension, two possible states for each cell, and a neighborhood consisting of a cell itself plus the sites to its immediate right and left. There are 8 possible configurations for such a neighborhood, and so 256 possible elementary CA rules; in the paper, Wolfram introduced a useful scheme for referring to those rules, and others, by number, so that we speak of rule 18, rule 22, rule 90, rule 110 (of which much more below), etc. Beyond that, the paper largely consisted of calculating the entropy of configurations generated by different rules, and saying that, while the rules were simple, the patterns they could generate were complicated and intriguing. Well, and so they were; and so said many other people at the first major modern conference on CAs, organized by Farmer, Toffoli and Wolfram at Los Alamos in 1983.
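The numbering scheme is worth spelling out, since the rest of the book leans on it: write the rule number as eight binary digits, and read them off as the new cell states for the eight possible neighborhoods, from 111 down to 000. A short sketch (the periodic boundary is my own convenience, not part of the definition):

```python
# Decode a Wolfram rule number into its lookup table and run the CA.
def rule_table(number):
    bits = f"{number:08b}"                      # e.g. rule 110 -> '01101110'
    neighborhoods = [(a, b, c) for a in (1, 0) for b in (1, 0) for c in (1, 0)]
    return {nbhd: int(bit) for nbhd, bit in zip(neighborhoods, bits)}

def run(number, row, steps):
    table, width, history = rule_table(number), len(row), [row]
    for _ in range(steps):
        row = [table[(row[(i - 1) % width], row[i], row[(i + 1) % width])]
               for i in range(width)]
        history.append(row)
    return history

# Rule 110, started from a single live cell on a ring of 64 sites.
for row in run(110, [0] * 63 + [1], 32):
    print("".join(".#"[c] for c in row))
```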

Wolfram went on to publish a bunch more papers on CAs over the next few years: probably the most noteworthy are "Computation Theory of Cellular Automata" (1984), where he used a familiar device of elementary computer science (regular languages and their equivalent finite automata) to characterize the set of configurations it is possible for a CA to produce, and "Universality and Complexity in Cellular Automata", where he proposed a four-fold classification of CAs based on their long-run behavior. Class I decay to a fixed, static configuration; class II to periodic oscillation; class III to seething, pseudo-random, chaotic gurp; class IV were supposed to have complicated ordered structures interacting in odd ways, and never really settle down. This scheme was popular for a while, but no one (including Wolfram) was ever able to make it any more precise, and it's proved basically worthless for understanding what CAs do; it was a Nice Try. (For more on the problems with this scheme, see Lawrence Gray's review of this book [PDF].)

In the mid-1980s, Wolfram had a position at the University of Illinois at Urbana-Champaign, where he founded a center for the study of complex systems. While there, he and collaborators developed the program Mathematica, a system for doing mathematics, particularly algebraic transformations and finding closed-form solutions, similar to a number of other products (Maple, Matlab, Macsyma, etc.), which began to appear around the same time. Mathematica was good at finding exact solutions, and also pretty good at graphics. Wolfram quit Illinois, took the program private, and entered into complicated lawsuits with both his former employer and his co-authors (all since settled).

Wolfram has since retreated from normal scientific life, into, on the one hand, tending the Mathematica empire, and, on the other, his peculiar scientific vision and method. The vision is of the universe as, if not exactly a CA, then a simple discrete program of some sort. The method has involved an enormous number of man-hours on the part of subordinates who are, as it were, enserfed to him, scanning the behavior of likely-looking CAs and signing over the rights to their discoveries to Wolfram; their efforts are supplemented by frequent lawsuits and threats of lawsuits against those whom Wolfram feels have infringed on his turf. (In 1986, for instance, Wolfram filed a patent on the idea of using CAs as discrete approximations to partial differential equations, long after the idea was commonplace in the field; it was expounded at length in two papers in the 1983 conference proceedings he helped edit. His 1986 paper on the subject is, however, a very important contribution to what is now known as lattice-gas hydrodynamics.)

What, then, is the revelation Wolfram has been vouchsafed? What is this new kind of science? Briefly stated, it is the idea that we should give up on complicated, continuous models, built with ordinary calculus or probability theory or the like, which try to represent the mechanisms by which interesting phenomena are produced, or at least to reproduce the details of those phenomena accurately. Instead we should look for simple, discrete models, like CAs ("simple programs", as he calls them), which qualitatively reproduce certain striking features of those phenomena. In addition to this methodological advice, there is the belief that the universe must in some sense be such a simple program — as he has notoriously said, "four lines of Mathematica". Most of the bulk of this monstrously bloated book is dedicated to examples of this approach, i.e., to CA rules which produce patterns looking like the growth of corals or trees, or explanations of how simple CAs can be used to produce reasonably high-quality pseudo-random numbers, or the like.
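The pseudo-random-number claim, at least, is concrete, and the standard example is rule 30: start from a single live cell and read off the bit in the center column at each step. A minimal sketch (the row below is wide enough that the wrap-around boundary never comes into play within the 64 steps shown):

```python
# Rule 30 as a pseudo-random bit source: the bit stream is the center column
# of the space-time diagram grown from a single live cell.
def rule30_bits(n_bits, width=257):
    table = {  # (left, self, right) -> new state; 30 is 00011110 in binary
        (1, 1, 1): 0, (1, 1, 0): 0, (1, 0, 1): 0, (1, 0, 0): 1,
        (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0,
    }
    row, center, out = [0] * width, width // 2, []
    row[center] = 1
    for _ in range(n_bits):
        out.append(row[center])
        row = [table[(row[(i - 1) % width], row[i], row[(i + 1) % width])]
               for i in range(width)]
    return out

print("".join(map(str, rule30_bits(64))))   # looks random; passes many standard tests
```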

As the saying goes, there is much here that is new and true, but what is true is not new, and what is new is not true; and some of it is even old and false, or at least utterly unsupported. Let's start with the true things that aren't new.

Wolfram refers incessantly to his "discovery" that simple rules can produce complex results. Now, the word "discovery" here is legitimate, but only in a special sense. When I took pre-calculus in high school, I came up with a method for solving systems of linear equations, independent of my textbook and my teacher: I discovered it. My teacher, more patient than I would be with adolescent arrogance, gently informed me that it was a standard technique, found in any book on linear algebra, called "Gauss-Jordan elimination", after the man who described it in the 1800s. Wolfram discovered simple rules producing complexity in just the same way that I discovered Gauss-Jordan elimination.

I am not going to dwell on the way that finding simple laws to account for multitudes of complex phenomena has been the highest aim of the exact sciences since at least Galileo and Newton. But this idea has been a driving force in mathematical logic and computer science since Alan Turing, A. N. Kolmogorov and Emil Post (he of the "tag" system, of which more later). Herbert Simon eloquently explained the importance of the idea for the study of adaptation, psychology and society in his famous 1969 book, The Sciences of the Artificial. In 1976, the physicist-turned-ecologist Robert May published a well-known paper in Nature on, as the title had it, the very complicated dynamics of simple mathematical models. It was an idea that was very much in the air, everywhere, in the early 1980s when Wolfram came on the scene — to pick two books at random from 1984, for instance, neither especially hard reading, there was Valentino Braitenberg's wonderful venture in neuroscience and AI, Vehicles, and William Poundstone's popular book on cellular automata and "cosmic complexity," The Recursive Universe (sadly out of print). I could multiply instances ad nauseam, if I haven't already.
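May's "simple equation" was the logistic map, which sends x to rx(1 - x); a few lines are enough to see the point, namely that trajectories started from nearly identical values quickly become unrelated.

```python
# The logistic map: deterministic, one line of algebra, chaotic for r near 4.
def logistic_orbit(x, r=3.9, steps=41):
    orbit = [x]
    for _ in range(steps):
        x = r * x * (1 - x)
        orbit.append(x)
    return orbit

a = logistic_orbit(0.200000)
b = logistic_orbit(0.200001)          # perturb the sixth decimal place
for t in (0, 10, 20, 30, 40):
    print(t, round(a[t], 6), round(b[t], 6))
```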

In an atrocious chapter on processes of learning and perception, Wolfram says that Mathematica, because it works by applying transformation rules to expressions that fit certain patterns, has a unique affinity for the way the human mind works, an affinity that isn't captured by any theory in cognitive science or AI. But this is just a rough description of the production-rules approach to modeling cognition, including memory, which was pioneered in the 1950s by Herbert Simon and Allen Newell. In fact, their work helped drive the development of the LISP programming language, from which Mathematica descends. The book is full to bursting with this kind of thing, in every area of science it touches on that I'm at all familiar with. I could go over Wolfram's discussion of biological pattern formation, gravity, etc., etc., and give plenty of references to people who've had these ideas earlier. They have also had them better, in that they have been serious enough to work out their consequences, grasp their strengths and weaknesses, and refine or in some cases abandon them. That is, they have done science, where Wolfram has merely thought.
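For readers who have never met a production system, here is a toy one of my own devising, just to show that "apply transformation rules to whatever matches a pattern" is a perfectly ordinary mechanism, worked out long before Mathematica:

```python
# A toy production system: repeatedly fire the first rule whose condition
# matches working memory.  Entirely illustrative; real systems (and real
# cognitive models) are far richer.
rules = [
    (lambda m: "hungry" in m and "food" in m,
     lambda m: (m - {"hungry"}) | {"eating"}),
    (lambda m: "eating" in m,
     lambda m: (m - {"eating", "food"}) | {"satisfied"}),
]

def run_productions(memory, rules, max_steps=10):
    for _ in range(max_steps):
        for condition, action in rules:
            if condition(memory):
                memory = action(memory)
                break
        else:
            break                      # no rule fired; quiescence
    return memory

print(run_productions({"hungry", "food"}, rules))   # {'satisfied'}
```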

By way of making the transition to the new, untrue stuff, let's consider what we mean by "simple" and "complex". This is a highly involved subject, as there are many different proposed measures of complexity. (Badii and Politi's Complexity remains the best survey.) The classical notion is that of algorithmic complexity, often called Kolmogorov complexity after one of its three simultaneous discoverers. (The other two were Ray Solomonoff and Gregory Chaitin.) The algorithmic complexity of an object, say a string of digits, is the length of the shortest computer program which will produce that object and then halt. There is always a program which can do this; if the object is x, the program "print(x)" will do, and the length of this program is the same as the length of x, plus a trivial constant. Simple objects only need short programs; complex objects need programs approximately as long as the original object, and are called "incompressible". Kolmogorov's goal, in setting up algorithmic complexity, was to give a non-probabilistic definition of randomness. Probability theory tells us much about what random sequences of digits must be like; in a uniform random sequence of binary digits, for instance, there must be as many instances of "01" as "00", to give a trivial example. Speaking roughly, Kolmogorov showed in the 1960s that infinitely long incompressible sequences have all the properties of random sequences.
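Algorithmic complexity itself is uncomputable, but any compressor gives an upper bound on it, which is enough to see the distinction in practice: a periodic string compresses to almost nothing, while coin flips barely compress at all. (Using zlib this way is a crude illustration of mine, not a serious estimator.)

```python
# Compression length as a crude upper bound on description length.
import random
import zlib

def compressed_length(data: bytes) -> int:
    return len(zlib.compress(data, 9))

periodic = b"01" * 5000                                         # very simple
noise = bytes(random.getrandbits(8) for _ in range(10000))      # incompressible

print(compressed_length(periodic))   # tiny: a short description exists
print(compressed_length(noise))      # about 10000: no regularity to exploit
```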

In the mid-1960s, Kolmogorov's student Per Martin-Löf extended this in a very interesting direction. If we consider a finite random sequence, there is some probability that it won't exactly satisfy any given property of the ideal infinite random sequence — it might contain "01" slightly more often than "00", say. This leads to the idea of testing for randomness, and accepting as random only objects which are sufficiently close to the ideal, i.e., which do not deviate from the ideal in ways which would be very improbable. Martin-Löf showed, again roughly speaking, that complex objects will pass many high-reliability tests for randomness, and conversely objects which pass randomness tests must have high algorithmic complexity.
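To give the flavor of such a test: count the two-bit blocks in a sequence, and reject it if the counts stray much further from equality than fair coin-flipping would allow. (This is a toy version of the standard "serial test"; the three-standard-deviation cutoff, and the sloppiness about overlapping blocks, are mine.)

```python
# A toy randomness test on overlapping two-bit blocks.
import random
from collections import Counter
from math import sqrt

def passes_pair_test(bits, z_max=3.0):
    n = len(bits) - 1
    counts = Counter(bits[i:i + 2] for i in range(n))
    expected, sd = n / 4, sqrt(n * (1 / 4) * (3 / 4))
    return all(abs(counts.get(p, 0) - expected) <= z_max * sd
               for p in ("00", "01", "10", "11"))

print(passes_pair_test("01" * 500))                                          # False
print(passes_pair_test("".join(random.choice("01") for _ in range(1000))))   # usually True
```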

Wolfram (naturally) ignores all existing measures of physical complexity, and (quite astonishingly) avoids giving any quantitative measure of complexity of his own. Instead, he just says he'll count as complex things which (I) look visually interesting and (II) pass standard tests for randomness, as used in computer programming and cryptography. Now, if something satisfies (I), that is as much a fact about Stephen Wolfram, or, more generously, about the visual cortex of the East African Plains Ape, as it is a fact about the object. Passing (II) just means, by virtue of Martin-Löf's results, that the algorithmic complexity is not too low. Significantly, this part of Martin-Löf's work is never mentioned by Wolfram.

As for the new, untrue stuff, there's so much I hardly know where to start. But he and I were both trained as physicists, so I'll pick on his account of quantum mechanics, relativity and gravitation. I said earlier that CAs are the discrete counterparts of classical field theories. If the universe worked by classical physics, then one could argue that a CA approach to physics might work just as well as the continuous-field approach, provided you made the scale of the cells small enough, and the number of states per cell large enough. (Many people have so argued.) After all, movies demonstrate that apparent continuity is no argument for real continuity, and it's hard to see how any test could do more than put an upper bound on the scale of the cells. But we know classical physics isn't right; the universe is (to a much better approximation) quantum and relativistic. It's comparatively easy to define quantum cellular automata; this is in effect what the sub-industry of lattice quantum field theory does. What's harder is to get relativity right. Ordinary CAs are well-adapted to classical space-time, but not to the space-time of either special or general relativity. (They can have Galilean invariance, but not Lorentzian.) With some ingenuity, you can get a discretized version of special relativity to work in CAs (this was shown by Mark Smith in a 1994 dissertation at MIT). No one has figured out how to make general relativity, and with it gravity, work in a CA. Wolfram is aware of this, and tries to explain both quantum mechanics and gravity through deterministic dynamics on a kind of recursive network (not a CA). This is reminiscent of much more advanced work on "spin foams" in quantum gravity, which however does not try to explain away quantum effects, and is susceptible to actual calculations. (Spin foams grew out of the approach known as "loop quantum gravity".) In any event, not long after this book was published, Scott Aaronson showed that Wolfram's scheme must either conflict with special relativity (by breaking Lorentz invariance), or conflict with quantum mechanics (by obeying Bell's inequalities, which quantum mechanics violates), or indeed both. It is a non-starter.

Another egregious weakness is biology. Wolfram displays absolutely no understanding of evolution, or what would be necessary to explain the adaptation of organisms to their environments. This is related to his peculiar views on methodology. If you want to get a rough grasp of how the leopard might get its spots, then building a CA model (or something similar) can be very illuminating. It will not tell you whether that's actually how it works. This is an important example, because there is a classic theory of biological pattern formation, or morphogenesis, first formulated by Turing in the 1950s, which lends itself very easily to modeling in CAs, and with a little fine-tuning produces things which look like animal coats, butterfly wings, etc., etc. The problem is that there is absolutely no reason to think that's how those patterns actually form; no one has identified even a single pair of Turing morphogens, despite decades of searching. [See "Update, 4 March 2012" below.] Indeed, the more the biologists unravel the actual mechanisms of morphogenesis, the more complicated and inelegant (but reliable) it looks. If, however, you think you have explained why leopards are spotted after coming up with a toy model that produces spots, it will not occur to you to ask why leopards have spots but polar bears do not, which is to say that you will simply be blind to the whole problem of biological adaptation.
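To be clear about what is and is not being claimed here: a Turing-style reaction-diffusion model really does produce spots from simple local rules. The sketch below uses the Gray-Scott model as a stand-in, with parameter values in a commonly used spot-forming regime (my choices, not anything from the book); the point of the paragraph above is that getting spots out of such a model tells you nothing, by itself, about how actual leopards get theirs.

```python
# Gray-Scott reaction-diffusion: two chemicals, u and v, diffusing and
# reacting on a grid.  In the right parameter regime the v field breaks up
# into spot-like patches.
import numpy as np

def laplacian(z):
    return (np.roll(z, 1, 0) + np.roll(z, -1, 0) +
            np.roll(z, 1, 1) + np.roll(z, -1, 1) - 4 * z)

def gray_scott(n=128, steps=10_000, Du=0.16, Dv=0.08, F=0.035, k=0.065):
    u, v = np.ones((n, n)), np.zeros((n, n))
    u[n//2-5:n//2+5, n//2-5:n//2+5] = 0.50      # seed a small perturbation
    v[n//2-5:n//2+5, n//2-5:n//2+5] = 0.25
    v += 0.01 * np.random.random((n, n))
    for _ in range(steps):
        uvv = u * v * v
        u += Du * laplacian(u) - uvv + F * (1 - u)
        v += Dv * laplacian(v) + uvv - (F + k) * v
    return v

v = gray_scott()
for row in v[::4, ::4]:                          # coarse text rendering
    print("".join("#" if x > 0.2 else "." for x in row))
```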

Leaving evolution and adaptation to one side, saving the qualitative phenomena doesn't mean that you have the right mechanism, even qualitatively. If, in addition, you want quantitative accuracy — either for engineering purposes, or to compare hypotheses which all produce the same qualitative results — you obviously can't just get by with Wolfram's "new kind of science" — or, as we say in the trade, with toy models. To be fair, toy models sometimes can be quantitatively accurate, but only in peculiar circumstances which do not generally obtain, and certainly don't extend to Wolfram's toys. We must not, however, expect this to deter a man capable of summarizing his methodology in the brilliant aphorism, "I am my own reality check."

There is one new result in this book which is genuinely impressive, though not so impressive as Wolfram makes it out to be. This is a proof that one of the elementary CAs, Rule 110, can support universal computation. To explain this needs a slight detour through the foundations of computer science.

Theoretical computer science studies the properties of abstract, formal systems which do computations. A common type of problem goes like this: given an abstract machine, an automaton, of a certain sort, what kinds of computations can it do? For instance, if you have a machine with finite memory, can it separate sequences of left and right parentheses containing an even number of right parentheses from those containing an odd number? (Answer: yes.) Could it recognize balanced sequences, in which every left parenthesis is later matched by a corresponding right parenthesis, and vice versa? (Answer: no; you need an unbounded memory. Proof is left as an exercise.) While these examples are trivial, more complicated versions become serious questions in linguistics, optimization, cryptography, databases, compilers, etc.
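The two toy questions above mark the boundary of finite memory rather neatly: parity needs a single bit of state, while balance needs a counter that can grow without limit. A sketch:

```python
# Parity of right parentheses: a two-state finite automaton.
def even_right_parens(s: str) -> bool:
    state = 0                       # 0 = even so far, 1 = odd so far
    for ch in s:
        if ch == ")":
            state ^= 1              # "(" leaves the state alone
    return state == 0

# Balance: needs an unbounded counter, i.e. more than any fixed finite memory.
def balanced(s: str) -> bool:
    depth = 0
    for ch in s:
        depth += 1 if ch == "(" else -1
        if depth < 0:
            return False            # a ")" with no "(" to match it
    return depth == 0

print(even_right_parens("()()"), even_right_parens("(()"))   # True False
print(balanced("(())"), balanced("())("))                    # True False
```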

In 1936, Turing proposed a class of automata which have since come to be called Turing machines, and showed that every function which can be defined recursively can be computed by some Turing machine. More remarkably yet, he showed that there were universal Turing machines — ones which could be programmed to emulate any other Turing machine. Now, there are other models of computation, other basic classes of automata, but so far it has turned out that everything they can compute can also be computed by Turing machines. We can show that some of these, in turn, can compute anything a Turing machine can compute. This has led to the Church-Turing Thesis, that any function which can be specified effectively can be computed by a Turing machine; a system which can emulate a universal Turing machine is thus a universal computer.
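For readers who have never seen one written down, a Turing machine is nothing more than a finite rule table plus an unbounded tape; universality means there is a single such table which, given a suitable encoding on its tape, mimics any other. A minimal simulator, with a deliberately trivial example machine of mine that just flips the bits of its input and halts:

```python
# A Turing machine: a table mapping (state, symbol) -> (write, move, new state).
def run_tm(table, tape, state="start", blank="_", max_steps=10_000):
    cells, head = dict(enumerate(tape)), 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        write, move, state = table[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

flip = {   # illustrative machine: complement every bit, then halt
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run_tm(flip, "0110"))   # "1001_" (plus the blank it read before halting)
```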

The easiest way to show that something is a universal computer is to show that it can emulate something you already know is a universal computer, like a Turing machine. Now, tag systems were introduced by Emil Post back in the 1920s, and they have long been known to be Turing-equivalent (Marvin Minsky proved their universality in 1961). A New Kind of Science describes a new formal system, called a cyclic tag system (Wolfram drops "Post"), which is equivalent to a Post tag system, and so to a universal Turing machine. Finally, there is a sketch of how propagating structures ("gliders") in Rule 110 can be used to implement a cyclic tag system, assuming you had an infinite lattice to play with.
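A cyclic tag system is itself only a few lines: a fixed list of productions consulted in rotation; at each step the leading symbol of the word is deleted, and if that symbol was a 1, the current production is appended to the end. The productions and starting word below are arbitrary choices of mine, just to show the mechanics; Cook's construction encodes these steps in collisions of Rule 110 gliders.

```python
# A cyclic tag system: delete the leading symbol; if it was "1", append the
# current production; advance to the next production either way.
from itertools import cycle

def cyclic_tag(productions, word, steps):
    history, prods = [word], cycle(productions)
    for _ in range(steps):
        if not word:
            break                       # empty word: the system halts
        prod = next(prods)
        head, word = word[0], word[1:]
        if head == "1":
            word += prod
        history.append(word)
    return history

for w in cyclic_tag(["11", "10", ""], "1", 12):
    print(w or "(empty)")
```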

This is a genuinely new result. Rule 110 is the simplest CA (in terms of the number of states and the rule radius) which is known to support universal computation. (Indeed, in his 1985 book on cellular automata, Wolfram declared that universal computation in an elementary CA was obviously impossible.) However, lots of things are capable of universal computation — there's less interest in this kind of result than there was in, say, 1970. In 1990, for instance, Cristopher Moore devised a kind of idealized pin-ball machine which is capable of universal computation. This result, like the one about rule 110, is neat for people who care about dynamical models of universal computation — on the order of a thousand scientists and mathematicians worldwide. What Wolfram wants to claim is that, since one universal computer is equivalent to another, by studying the behavior of one we learn things which are true of all others (true), therefore Rule 110 is as complex as anything in the universe, and all intelligent life, including, perhaps, the gods, must have much in common. This, to put it mildly, does not follow. Wolfram even goes on to refute post-modernism on this basis; I won't touch that except to say that I'd have paid a lot to see Wolfram and Jacques Derrida go one-on-one.

The real problem with this result, however, is that it is not Wolfram's. He didn't invent cyclic tag systems, and he didn't come up with the incredibly intricate construction needed to implement them in Rule 110. This was done, rather, by one Matthew Cook, while working in Wolfram's employ under a contract with some truly remarkable provisions about intellectual property. In short, Wolfram got not only to control when and how the result was made public, but to claim it for himself. In fact, his position was that the existence of the result was a trade secret. Cook, after a messy falling-out with Wolfram, made the result, and the proof, public at a 1998 conference on CAs. (I attended, and was lucky enough to read the paper where Cook goes through the construction, supplying the details missing from A New Kind of Science.) Wolfram, for his part, responded by suing or threatening to sue Cook (now a penniless graduate student in neuroscience), the conference organizers, the publishers of the proceedings, etc. (The threat of legal action from Wolfram that I mentioned at the beginning of this review arose because we cited Cook as the person responsible for this result.)

Of course, lots of professors add their names to their students' papers, and many lab-chiefs owe their enviable publication records to the fact that they automatically add their names to the end of every manuscript prepared under their command. If Wolfram did that, he would merely be perpetuating one of the more common abuses of the current scientific community. But to deny Cook any authorship, and to threaten people with lawsuits to keep things quiet, is indeed very low. Happily, the suit between Wolfram and Cook has finally been resolved, and Cook's paper has been published, under his own name, in Wolfram's journal Complex Systems.

So much for substance. Let me turn to the style, which is that of monster raving egomania, beginning with the acknowledgments. Conventionally, this is your chance to be modest, to give credit to your sources, friends, and inevitably long-suffering nearest and dearest. Wolfram uses it, in five-point type, to thank his drudges (including Matthew Cook for "technical content and proofs"), and to thank people he's talked to, not for giving him ideas and corrections, but essentially for giving him the opportunity to come up with his own ideas, owing nothing to them. (Bringing to mind Monty Python: "Your Majesty is like a stream of bat's piss: you shine out like a shaft of gold, when all around is dark.") This extends to thanking his mother for giving him a classical British education. (Wolfram's ideas of gentlemanly conduct seem somewhat at variance with those traditionally associated with Eton.) The customary self-effacement of scientific prose sometimes leads to boring writing, but it is immensely preferable to incessant self-aggrandizement. This extends even to grammar: wherever possible, Wolfram talks about other people's efforts in the passive voice ("the notion of undecidability was developed in the 1930s", not "Gödel and Turing developed the notion of undecidability"). Wolfram actually has the gall to say that he's deliberately writing like this, to help readers understand his difficult ideas!

Normally, scientific work is full of references to previous works, if only to say things like "the outmoded theory of Jones [1], unable to accommodate stubborn experimental facts [2--25], has generally fallen out of favor". This is how you indicate what's new and what you're relying on, and how you let readers immerse themselves in the web of ideas that is a particular field of research. Wolfram has deliberately omitted references. Now, this is sometimes done: Darwin did it in The Origin of Species, for instance, to try to get it to press quickly. But Wolfram has written 1100 pages over about a decade; what would it have hurt to have included citations? In his end-notes, where he purports to talk about what people have done, he is misleading, or wrong, or both. (An indefinite number of examples can be provided upon request.) To acknowledge that he had predecessors who were not universally blinkered fools would, however, conflict with the persona he tries to project to others, and perhaps to himself.

Let me try to sum up. On the one hand, we have a large number of true but commonplace ideas, especially about how simple rules can lead to complex outcomes, and about the virtues of toy models. On the other hand, we have a large mass of dubious speculations (many of them also unoriginal). We have, finally, a single new result of mathematical importance, which is not actually the author's. Everything is presented as the inspired fruit of a lonely genius, delivering startling insights in isolation from a blinkered and philistine scientific community. We have been this way before.

[Some cranks] are brilliant and well-educated, often with an excellent understanding of the branch of science in which they are speculating. Their books can be highly deceptive imitations of the genuine article — well-written and impressively learned....
[C]ranks work in almost total isolation from their colleagues. Not isolation in the geographical sense, but in the sense of having no fruitful contacts with fellow researchers.... The modern pseudo-scientist... stands entirely outside the closely integrated channels through which new ideas are introduced and evaluated. He works in isolation. He does not send his findings to the recognized journals, or if he does, they are rejected for reasons which in the vast majority of cases are excellent. In most cases the crank is not well enough informed to write a paper with even a surface resemblance to a significant study. As a consequence, he finds himself excluded from the journals and societies, and almost universally ignored by competent workers in the field..... The eccentric is forced, therefore, to tread a lonely way. He speaks before organizations he himself has founded, contributes to journals he himself may edit, and — until recently — publishes books only when he or his followers can raise sufficient funds to have them printed privately.

Thus Martin Gardner's classic description of the crank scientist in the first chapter of his Fads and Fallacies. In lieu of superfluous comments, let us pass on to Gardner's list of the "five ways in which the sincere pseudo-scientist's paranoid tendencies are likely to be exhibited."

  1. He considers himself a genius.
  2. He regards his colleagues, without exception, as ignorant blockheads. Everyone is out of step except himself....
  3. He believes himself unjustly persecuted and discriminated against....
  4. He has strong compulsions to focus his attacks on the greatest scientists and the best-established theories. When Newton was the outstanding name in physics, eccentric works in that science were violently anti-Newton. Today, with Einstein the father-symbol of authority, a crank theory of physics is likely to attack Einstein in the name of Newton....
  5. He often has a tendency to write in a complex jargon, in many cases making use of terms and phrases he himself has coined....

(1) is clearly true. (2) is clearly true. (3) is currently false, or at least not much on display in this book. (4) is clearly true, though Wolfram, befitting someone who was once a respectable physicist, aims to undermine Newton and Einstein, indeed the entire tradition of physical science since Galileo. (5) is true only to a very small degree (mercifully).

When the crank's I.Q. is low, as in the case of the late Wilbur Glenn Voliva who thought the earth shaped like a pancake, he rarely achieves much of a following. But if he is a brilliant thinker, he is capable of developing incredibly complex theories. He will be able to defend them in books of vast erudition, with profound observations, and often liberal portions of sound science. His rhetoric may be enormously persuasive. All the parts of his world usually fit together beautifully, like a jig-saw puzzle.

The natural result is a cult following. Wolfram certainly has that, to judge from his sales, the attendance at his "New Kind of Science" conventions, and the reader reviews on Amazon. (I presume they are not all a claque hired by Wolfram Media.) This frankly is part of a disturbing trend, pronounced within the field of complex systems. In addition to Wolfram, I might mention the cult of personality around Ilya Prigogine, and Stuart Kauffman's book Investigations, or even the way George Lakoff uses "as cognitive science shows" to mean "as I claimed in my earlier books".

This brings me to the core of what I dislike about Wolfram's book. It is going to set the field back by years. On the one hand, scientists in other fields are going to think we're all crackpots like him. On the other hand, we're going to be deluged, again, with people who fall for this kind of nonsense. I expect to have to waste a lot of time in the next few years de-programming students who'll have read A New Kind of Science before knowing any better.

I don't object to speculation or radical proposals, even to radical, grandiose speculative proposals; I just want there to be arguments to back them up, reasons to take them seriously. I don't object to scientists displaying personality in their work, or staking out positions in vigorous opposition to much of the opinion in their field, and engaging in heated debate; I do object to ignoring criticism and claiming credit for commonplaces, especially before popular audiences who won't pick up on it. I don't even object to writing 1000-page tomes vindicating one's own views and castigating doubters; I do object to 1000-page exercises in badly-written intellectual masturbation. Consider, by way of contrast, the late physicist E. T. Jaynes. For four decades, he defended original, radical views on the role of probability in physics, the nature of statistical mechanics and the place of inductive reasoning in science. These views were hotly contested on all sides; Jaynes met criticism with astringent but engaged and intellectually honest replies. At the time of his death, he was working, as he had been for years, on a mammoth book that would have been the final, definitive statement of his views; even in the fragmentary state he left it, it was roughly as long as Wolfram's tome, and infinitely more valuable. I think Jaynes's ideas were dead wrong, but I wouldn't dream of calling him a crank.

I suppose it's customary in writing reviews of this sort to try to say what has driven Wolfram to write such a bad, self-destructive book. But the truth is I couldn't care less. He has talent, and once had some promise; he has squandered them. I am going to keep my copy of A New Kind of Science, sitting on the same shelf as Atlantis in Wisconsin, The Cosmic Forces of Mu, Of Grammatology, and the books of the people who think the golden ratio explains the universe.


Update, 4 March 2012: There is now a fairly convincing example of a pair of Turing morphogens in actual biology:

Andrew D. Economou, Atsushi Ohazama, Thantrira Porntaveetus, Paul T. Sharpe, Shigeru Kondo, M. Albert Basson, Amel Gritli-Linde, Martyn T. Cobourne and Jeremy B. A. Green, "Periodic stripe formation by a Turing mechanism operating at growth zones in the mammalian palate", Nature Genetics 44 (2012): 348-351
Thanks to a reader for letting me know about this.
Thanks are due to a number of friends, who might perhaps rather not be named in this connection. Also: my present and past employers aren't responsible for this in any way.
1192 pp., many handsome black-and-white illustrations, index of names and subjects
Cellular Automata / Physics / Self-Organization, Complexity, etc.
In print as a hardback, ISBN 1579550088, US$44.95. Full text free online from the author
Mostly written July-August 2002; dusted off and made public 21 October 2005. URL for Cook's paper on rule 110 updated 20 March 2014. Typo correction, 21 July 2014.