The Bactra Review: Occasional and eclectic book reviews by Cosma Shalizi   47

Brainchildren

Essays on Designing Minds

by Daniel C. Dennett

Representation and Mind series. MIT Press/Bradford Books, 1998

An Attempt to Introduce the Experimental Method of Reasoning into Moral Subjects

Brainchildren is a collection of Dennett's recent papers on the nature of minds, artificial intelligence, consciousness, etc., previously available only in various more-or-less obscure journals, and (until publication of this book) at Dennett's web site. A good place to start in on it is with a figure who doesn't appear in the book at all, namely William James. There are enough similarities between the two men for a conspiracy theory or a book on reincarnation: both came from literary, humanist families; both abandoned early attempts at careers in art (James in painting, Dennett in sculpture, which is suggestive of the differences in their prose styles); both focus on philosophical psychology (lately re-named "philosophy of mind"); both are devout and astute Darwinians (James seems to have been the first to advance truly selectionist theories of mental function and social development); both stress the need for experimentation and experimental facts (James was, in fact, a practicing experimental physiologist as well as a psychologist; Dennett is helping to build a conscious robot), and rebelled against the philosophy prevalent in their youth (Hegelianism, ordinary-language philosophy) for its neglect of such facts and its anti-scientific attitude; both have done much to "embody mind," to attach it firmly to our bloated sacks of protoplasm, to cut consciousness and the self down to size; both are champions of free will (James against determinism; Dennett, it half seems, against indeterminism); both are fine, amusing, convincing writers; and both teach at Boston-area colleges. It is these similarities which throw the really important differences into such striking relief: How on Earth does it come about that the modern avatar of William James is one of the ablest champions of mechanism, materialism, reductionism (put together, roughly James's "medical materialism") and the "automaton hypothesis", the very things James attacked with such force?

The short answer is that those doctrines have changed, and James himself could now adopt them in good conscience. For James, the "mark and criterion of the presence of mentality" was the "pursuance of future ends and the choice of means for their attainment" (Principles of Psychology, vol. I, ch. I, p. 8). No machine was capable of this, of varying its means to attain a fixed end; moreover, an unaided mechanism, as sensitive to minute causes as the higher centers of the brain are, could produce nothing but randomness and noise. "The dilemma in regard to the nervous system seems, in short, to be of the following kind. We may construct one which will react infallibly and certainly, but it will then be capable of reacting to very few changes in the environment --- it will fail to be adapted to the rest. We may, on the other hand, construct a nervous system potentially adapted to respond to an infinite variety of minute features in the situation; but its fallibility will then be as great as its elaboration" (Principles, vol. I, ch. V, p. 140). Consequently, there must be something else, some non-mechanical, non-automatic factor capable of adjusting means to ends, and "loading the dice" so as to exploit the instability of the brain. This is coherent and reasonable, as James almost always was; but it is also wrong. The error lies not in the reasoning but in the empirical facts on which it was based, and it was corrected not through abstract reasoning but through actually building machines which adjust themselves, which are informationally sensitive but not chaotic.

Self-regulating machines existed even in James's day, but in fairness to him it must be said that those were painfully primitive, and even more painful to understand (Watt's steam-engine governor went unanalyzed for nearly a century, and even then was a problem worthy of the attention of James Clerk Maxwell). It would be positively churlish to belabor James for not seeing the possibilities implicit in Watt's governor, the Jacquard loom, or that grandiose boondoggle, Babbage's Analytical Engine, but it is remarkable how many people today, surrounded by self-regulating, adaptive gadgets, by plastic-and-silicon entelechies, simply do not twig, do not realize that (to indulge in a little metaphysical dramatization) by the 1940s the engineers had effaced the boundary between the organic and the artificial, the mechanical and the intelligent. Dennett not only twigs; he branches out from there in all directions.

Those who wish to keep on insisting that the intelligence of living things (in particular Homo sapiens) is not of the same order as the informational sensitivity of machines have taken to claiming that, while automata may act as if they were pursuing ends, may show approximate sensitivity to meanings, goals, and the like, they can never really do so. At best, their engineers have exploited the fact that certain meaningless things (variously called physical, mechanical, syntactical) will usually track meaningful ones, in a certain range of circumstances. (The soda machine doesn't recognize quarters, it recognizes circular hunks of a certain diameter and mass; car alarms, notoriously, do not know whether the car is being stolen but switch on whenever this wire gets jostled, and so forth.) Dennett will have none of this, and I don't think James would've either. These machines are all imperfect approximations of ideal goal-followers or meaning-trackers, but so are we, and so are all intelligent living things. "When mechanical push [comes] to shove, a brain [is] always going to do what it [is] caused to do by current, local, mechanical circumstances, whatever it ought to do, whatever a God's-eye view might reveal about the actual meanings of its current states. But over the long haul, brains [can] be designed --- by evolutionary processes --- to do the right thing (from the point of view of meaning) with high reliability" (Brainstorms, p. 357). Natural selection has no interest in perfect meaning-recognizers, even supposing them to be possible; things which will usually act more or less as though they were recognizing meaning under most circumstances will do. (The frog's eye tells the frog's brain not about flies, but about small moving black blotches: good enough for government work, or dinner.) Just how reasonable a facsimile of a Real Meaner an organism or an artifact has to be will naturally depend on circumstances --- the kinds of errors the facsimile is prone to, the frequency with which it makes them, the costs of improving the facsimile. (Car alarms cry aloud their definite need for improvement.)
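
(For the programmatically inclined, the soda-machine point is easy to make concrete. Here is a minimal Python sketch of a "quarter detector"; the US Mint's figures for a quarter, 24.26 mm and 5.67 g, are real, but the tolerances and the function itself are invented for illustration.)

    # A toy coin-detector in the spirit of the soda machine: it never sees
    # "quarters", only circular hunks of a certain diameter and mass, and
    # so will accept any slug that falls inside the tolerances.

    def accepts_as_quarter(diameter_mm: float, mass_g: float) -> bool:
        """Close enough to a US quarter (24.26 mm, 5.67 g)? Then pay out."""
        return abs(diameter_mm - 24.26) < 0.5 and abs(mass_g - 5.67) < 0.3

    print(accepts_as_quarter(24.26, 5.67))  # a real quarter: True
    print(accepts_as_quarter(24.30, 5.60))  # a well-made slug: also True
    print(accepts_as_quarter(21.21, 5.00))  # a nickel: False

Good enough for government work, or soda.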

These issues are tackled in a marvelous essay on "Real Patterns," which ought to be required reading for everyone studying complexity, and not just because Dennett takes his examples from cellular automata. There is a pattern to something if there is a description of it which takes less information to convey than the original; a signal is pattern-less, random, if it cannot be compressed in this fashion. (This is the Kolmogorov notion of complexity, duly acknowledged, which has already been discussed in these pages.) Equivalently, finding a pattern in something means finding regularities in it you can use to make predictions. But suppose you tolerated less than perfect predictions; you could then economize on the elaboration of the pattern you see in the signal by allowing as how it is contaminated with a certain degree of noise. This leads to a trade-off: on the one hand, you can increase the elaboration of your pattern and its predictive power, at greater computational and informational cost; on the other, you can increase your tolerance for noise, simplifying the pattern and saving on computation at the cost of losing predictive leverage. Beyond a certain point, of course, noise will swamp your putative pattern (and if you can tolerate that, do you really care about the signal at all?); but between there and perfect reproduction of the signal there is a large territory --- call it model space --- where one man's noise is part of another man's pattern, and it doesn't make sense to say that one location in model space is the real pattern. Any regularity which gives predictive leverage is a genuine pattern, and even for a fixed level of noise, none is absolutely preferred, which Dennett sees as connected to his old teacher Quine's famous notions of the indeterminacy of translation and the under-determination of theories. [Log-rolling and self-promotional footnote.]
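
(The trade-off can even be demonstrated in a few lines of Python. The sketch below quantizes a noisy signal more or less coarsely, using zlib's compressed size as a crude stand-in for description length; the signal and all the numbers are, of course, invented for illustration.)

    # Pattern vs. noise: coarser quantization buys a shorter description
    # at the price of worse predictions; finer quantization, the reverse.

    import math, random, zlib

    random.seed(0)
    signal = [math.sin(x / 10.0) + random.gauss(0, 0.1) for x in range(1000)]

    for levels in (2, 4, 16, 256):
        step = 2.5 / levels
        quantized = [round(s / step) for s in signal]
        # description length: compressed size of the quantized signal
        desc_len = len(zlib.compress(bytes(q % 256 for q in quantized)))
        # predictive error: mean squared difference from the original
        mse = sum((s - q * step) ** 2 for s, q in zip(signal, quantized)) / len(signal)
        print(f"{levels:4d} levels: {desc_len:5d} bytes compressed, MSE {mse:.4f}")

Every line of the output is a defensible pattern; which one you should adopt depends on what the extra bytes, and the extra errors, cost you.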

This consideration of patterns gains us several things. First, it makes concrete and respectable the notion that Mother Nature can get away with merely approximate and noisy pattern-detectors, and that more sophisticated pattern-detectors have to justify themselves by (in James's notorious phrase) "cash-value," by delivering more in the way of useful sensitivity to the environment than they cost in computational elaboration.

Second, it feeds in naturally to cognitive ethology --- less academically, to figuring out what animals think. What patterns in their environments do they need to notice, what decisions do they need to make in response to them, and at what level of detail and nuance? Answer those questions, says Dennett, and you've got a pretty good first guess as to what the beast will notice and think about and do: because those that don't have a "pathetic but praiseworthy" way of exiting the gene-pool. (Cf. Spikes on the efficient neural coding of natural stimuli.) James has a brilliant and well-known passage here, too:

[T]he mind is at every stage a theater of simultaneous possibilities. Consciousness consists in the comparison of these with each other, the selection of some, and the suppression of the rest by the reinforcing and inhibiting agency of attention. The highest and most elaborated mental products are filtered from the data chosen by the faculty next beneath, out of the mass offered by the faculty below that, which mass in turn was sifted from a still larger amount of yet simpler material, and so on. The mind, in short, works on the data it receives very much as a sculptor works on his block of stone. In a sense the statue stood there from eternity. But there were a thousand different ones beside it, and the sculptor alone is to thank for having extricated this one from the rest. Just so the world of each of us, howsoever different our several views of it may be, all lay embedded in the primordial chaos of sensations, which gave the mere matter to the thought of all of us indifferently. We may, if we like, by our reasonings unwind things back to that black and jointless continuity of space and moving clouds of swarming atoms which science calls the only real world. But all the while the world we feel and live in will be that which our ancestors and we, by slowly cumulative strokes of choice, have extricated out of this, like sculptors, by simply removing portions of the given stuff. Other sculptors, other statues from the same stone! Other minds, other worlds from the same monotonous and inexpressive chaos! My world is but one in a million alike embedded, alike real to those who may abstract them. How different must be the worlds in the consciousness of ant, cuttlefish, or crab! [Principles, vol. I, ch. IX, p. 288]
It is tempting to ask: which patterns are the ones really there in the world --- ours, or the cuttlefish's, or the crab's, or even those of E. coli as it hunts for glucose? --- but the temptation should be firmly resisted. All the patterns provide predictive leverage, or neither we nor the cuttlefish nor the swarming occupants of our guts would be around to carve them out of "black and jointless continuity"; and that is as far as reality extends for patterns.

We come thus, as it were by two routes, to the third advantage Dennett extracts from his pattern theory, namely an explication of his famous "three stances" which we adopt to explain and predict (in sound positivist fashion: mostly predict) what something will do. Consider, for instance, a computer running a chess-playing program. To figure out what move it will make next, one could assume that it knows the rules of the game and the configuration of the board, and wants to check-mate you with minimal risk of being mated itself (the intentional stance); or, given the algorithm it's running, work out what it will do if everything works as it should (the design stance); or actually simulate the circuitry at some suitable level of detail (the physical stance). Each of these stances amounts to betting on a certain pattern in the behavior of objects, and each is a safer bet than the last: considerations of design trump those of intention, and physical considerations trump both. Now, in the intentional stance we work by attributing to the object of our interest certain beliefs and desires; these are (at the very least) patterns in its behavior. But we've just seen that two (or more) different patterns can both describe the same data equally well, and two different attributions of belief-and-desire may be equally successful in psyching somebody out. One or the other set of beliefs-and-desires may correspond reasonably directly and concretely with something going on in the little grey cells, as, say, the abstract patterns of Mendelian genes correspond to DNA sequences; but then again none of them might, just as pre-scientific ideas of heredity, while they certainly had some predictive power ("Neither the maid nor her husband the butler has red hair; the maid's new baby has red hair; my husband has red hair..."), have only a very complicated and tenuous relationship to the molecular realities of inheritance.
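
(A toy example may help fix the stances' relative risks. In the Python sketch below --- the "machine" and its coin-grabbing errand are wholly invented --- one predictor assumes the device wants the most valuable coin, and another traces the algorithm it was actually built to run; simulating the hardware, the physical stance, is left to the reader.)

    # Two of the three stances, as rival predictors of one machine.

    def machine(coins):
        # The design: a simple maximum-finding loop.
        best = coins[0]
        for c in coins[1:]:
            if c >= best:
                best = c
        return best

    def intentional_stance(coins):
        # Assume it is rational and wants the most valuable coin.
        return max(coins)

    def design_stance(coins):
        # Trace the algorithm it was built to run.
        return machine(coins)

    coins = [5, 25, 10, 25]
    print(intentional_stance(coins), design_stance(coins))  # 25 25

The two predictions agree until the design acquires a bug, at which point the design stance, the bet on the lower-level pattern, is the safer one.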

What this is hinting at, obviously, is the vexed debate over what has come to be called "folk psychology," our workaday practice of winning friends and influencing people in which we impute to them all kinds of beliefs, desires, aversions, emotions --- in short, mental contents. This practice works pretty well, but not (except in Sherlock Holmes stories, perhaps) perfectly. We think we have thoughts, and can get away with it: but, opponents of folk psychology point out, we used to think we had souls which went walk-about during dreams, too, and that forty-seven or fifty-five unmoved movers rotated the celestial spheres, and someday a cognitive or neural account of our behavior will put sentences like "His intolerable feelings of guilt and remorse over the treatment of child workers in his Central American factories drove him to make huge donations to orphanages and UNICEF" on all fours with "The Sun rose just as the Moon was setting." The defenders respond: "Promises, promises."

Dennett's take is (depending on your taste for philosophical vehemence) either judicious or bet-hedging. On the one hand, the mere fact that we can make very accurate, reliable predictions of one another's behavior using folk psychology and very little effort (to say nothing of our having emotional lives!) means that its patterns are perfectly respectable as patterns. There may be another set of patterns, another way of predicting human behavior, which doesn't invoke thoughts or feelings at all; but, if this is more accurate than folk psychology, we should expect to be able to recover the latter as a none-too-shabby approximation. (To commit a vile pun: folk psychology is the phenomenology of the human mind.) As social primates, we have adapted to understanding what makes each other tick, by means of the intentional stance and folk psychology. By spinning out stories on those lines, we quickly and reliably establish mutually satisfied (if not entirely satisfactory) expectations, and are able to live with each other with much less bloodshed and violence than any other sort of social animal. Whatever the scientific advantages folk psychology's rivals may enjoy, they will not replace it in everyday life. Illusion folk psychology may be; but it's part of the illusion which we are.

We come at last to the part of Dennett's philosophy which is most notorious, his explanation of consciousness, his account of the self as a mere "center of narrative gravity." There is a discrepancy between the brain as disclosed by neurology and cognitive science --- where, to quote once again my favorite neuro. professor's favorite saying, "the more you look at the brain, the less it seems like there's anybody home" --- and our ordinary view of ourselves and others as unified individuals. This is related to the tension between folk psychology and its putative replacements, but not the same, and to understand how Dennett eases the tension, we need to step back and say a little more about the changes in the notion of mechanism and mechanistic explanation since James's day.

To begin with, our understanding of what counts as a mechanism has been transformed by computation, both as a practical technology and as a body of theory, a specialization of mathematical logic (pioneered by James's friend C. S. Peirce). It would be going too far to say, with one of the characters in Permutation City, that "computers aren't made of matter"; but he has a point. There is a strong sense in which computers are really algorithms, that is to say abstracta on a level with perfect circles and straight lines. The lumps sitting on your desk and mine are approximations to those abstracta, and software is an industry concerned with the manufacture of Platonic Ideals. (The imperfection of the lumps can be more or less gross, just as what I draw with a compass is a worse circle than what Giotto drew free-hand.) Now, it is one of the curious properties of algorithms (and all their many equivalents, like Turing machines and the lambda calculus and McCulloch-Pitts neural nets) that some algorithms can run other algorithms. (In fact, some algorithms can run any other algorithm whatsoever.) This nesting of algorithms can be carried to any depth you like, and the deeper levels are said to be "virtual machines." I am, for instance, writing this in Emacs, running under X Windows, running inside a Unix shell, running on top of a Sun operating system --- at least three levels of virtual machinery. What makes them machines is not that they're made of things a 17th-century clock-maker would recognize, but that their behavior is governed by rules, for that is what mechanism, these days, boils down to.
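
(The point is easily dramatized. Here is a deliberately tiny stack machine in Python --- the instruction set is invented for the occasion --- which is an algorithm for running other algorithms: it runs inside the Python interpreter, which runs on your hardware, giving three layers of machinery, only the bottom one made of matter.)

    # A ten-line virtual machine: its programs are lists of (op, arg) pairs.

    def run(program):
        stack = []
        for op, arg in program:
            if op == "push":
                stack.append(arg)
            elif op == "add":
                stack.append(stack.pop() + stack.pop())
            elif op == "mul":
                stack.append(stack.pop() * stack.pop())
        return stack.pop()

    # (2 + 3) * 4, phrased for the inner machine:
    program = [("push", 2), ("push", 3), ("add", None),
               ("push", 4), ("mul", None)]
    print(run(program))  # 20

Nothing stops the inner machine from running a still smaller interpreter in its turn, and so on down to any depth you like.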

Dennett's idea is that our highly serial consciousness --- the "Joycean Machine" (or, one presumes, the Jamesian Machine) --- is a virtual machine, somewhat imperfectly implemented by our highly parallel and decentralized nervous system. Now, parallel and decentralized computers can implement centralized virtual machines (and vice versa), but it is always reassuring --- and of great professional interest to me --- to know how the trick is done. Dennett's answer is only a hand-waving sketch (well, he is a philosopher), but it's a fairly convincing one --- to some of us, anyway. In his "multiple drafts" picture, there are always competing versions of what-has-happened and what-to-do-about-it swarming about in the various small, functionally-specialized bits of the brain. They are literally competing, not just for occupancy of those bits of tissue and implementation, but for (as it were) recognition by other hunks of tissue. There is no center of consciousness, no "Cartesian theater" whose contents are What We Are Conscious Of Now, but there are what we might (though Dennett does not) call "commanding heights," modules with a disproportionate influence on the future of the organism and the contents of the rest of the brain. Thoughts or feelings which can seize the commanding heights can reshape the rest of the mind's contents into conformity with themselves. In human beings, these commanding heights are tied up with the production of language; our interior monologues are stenographic records of successive palace coups; our persistent personalities amount to the fact that the new boss is the same as the old boss. (There are good reasons why we can't just declare the commanding heights the Cartesian Theater, which it would take too long to go into here.) The lesson here is that of Order without an Organizer, an idea which certainly existed in James's day, but which has been hammered home by decades of study of mechanisms of collective behavior --- in statistical physics, in biology, in economics, and, almost from its beginning, in computer science. Dennett agrees that the mind is a computer, but not so much one like Deep Blue as like Oliver Selfridge's forty-year-old Pandemonium, mutating, shifting, always at least a bit fuzzy around the edges (cf. James on "the fringe"), and designed to exploit its own noisiness as a source of variation for re-design.
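
(Selfridge's architecture is simple enough to caricature in a few lines of Python. In the sketch below --- demons and scoring rules invented for illustration, taken from neither Selfridge nor Dennett --- rival interpretations of an input compete, and whichever shouts loudest speaks for the whole system.)

    # A cartoon Pandemonium: each demon shouts a rival "draft" of what the
    # input is, with a volume proportional to its evidence.

    def demon_vowels(s):   return ("mostly vowels", sum(c in "aeiou" for c in s.lower()))
    def demon_digits(s):   return ("a number",      sum(c.isdigit()  for c in s))
    def demon_shouting(s): return ("shouting",      sum(c.isupper()  for c in s))

    def pandemonium(s, demons):
        # No Cartesian theater: the winner is just whichever draft
        # out-shouts the others at the moment we ask.
        verdict, volume = max((d(s) for d in demons), key=lambda v: v[1])
        return verdict

    demons = [demon_vowels, demon_digits, demon_shouting]
    print(pandemonium("AAAIEEE", demons))   # a tie; the vowel demon is asked first
    print(pandemonium("867-5309", demons))  # 'a number'

Note that nothing privileges the winner beyond the volume of its shout, which is the multiple-drafts point in miniature.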

The standard objection to Dennett's view of the mind is that it makes no allowance for the difference between creatures with inner lives, namely us, and those without, namely zombies. Zombies might have quite sophisticated dispositions and sensitivities to their external environment, and even to their own information-processing (so the objection goes), but they'd have no inner experience --- they might be able to discriminate red roses from yellow roses, but they'd have no experience of redness, no red qualia. Dennett's quite characteristic response to this objection is to argue that there is no defensible difference between sufficiently nuanced sensitivities and qualia. Consider the case, he asks us, not of zombies per se but of zimboes, who are behaviorally just like us conscious human beings, but have no inner lives. Zombies are the mindless malevolent minions in a Boris Karloff movie; zimboes, when villainous, are more in the Sidney Greenstreet line; but, by hypothesis, they show just the same range of heroism, vice, and moral muddle that we do. They'd certainly talk and act as though they thought they had qualia. Maybe brain damage can make people into zimboes --- only they'd insist nothing was wrong! Maybe lots of people (all, of course, normal-seeming) are zimboes --- John Searle, for instance, or this reviewer, or your landlord. They could be everywhere. Consciousness could be a genetic abnormality. Even your best-beloved could be a mere zimbo. In fact, how do you know that you are not a zimbo?

Dennett's answer is that you don't, because, as it happens, you are. Turned around: zimboes, creatures with sophisticated sensitivities to the external world and their inner environment, enjoy just as much consciousness as there is to be had. (You may want to apologize to the light of your life now.) Consciousness is "more like fame" --- coming in degrees, possibly patchy or restricted ("a legend among distributors of dental-hygiene products") and transitory, but not, in the nature of things, instantaneous or confined to a single point --- "than like being on television" --- a thoroughly unambiguous, on-or-off thing.

This doesn't sound much like traditional philosophical pictures of the mind, even those of Hume or James, but it has the advantage over them (for reasons spelled out at length in Consciousness Explained, and more summarily here) that it might just be right. What is more curious, since Dennett is almost the house philosopher of artificial intelligence, is that it also doesn't sound much like most of what gets done in that field. Certainly, there is a lot for one of an empirical, tough-minded temper to admire in the work of the artificial intelligentsia, and Dennett duly admires it: everybody talks about the mind, but they actually do something about it. In the process, as a necessary concomitant to designing minds and parts of minds, we learn a good deal about how mind-like things must work, and the problems in the way of any kind of mind. Things that, to rough common sense (and its academically respectable cousins, like phenomenology in the philosopher's sense) appear simple and immediate --- speaking, well-honed motions, recognizing an apple when it's in front of your face --- prove complicated, analyzable, and demanding. Dennett even thinks that AI has uncovered a genuinely new epistemological problem, the "frame problem," which might, informally, be put thus: How does the prospect of being hanged concentrate the mind? That is, how does a rational agent, which hasn't the time or the capacity to consider everything it knows in relation to the task at hand, or even to go through all its knowledge and put most of it aside as irrelevant, manage to ignore all (or almost all) of its irrelevant beliefs? A whole splendid essay is devoted to this poser and the ways workers in AI have tried to overcome it.
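
(The computational heart of the frame problem can be put in miniature. The Python sketch below saddles an agent with a million fabricated beliefs and makes it scan all of them for relevance before acting; the beliefs and the relevance test are invented, but the moral --- that deliberation time scales with everything you know rather than with what matters --- is the genuine one.)

    # The frame problem in miniature: deciding what is irrelevant costs
    # time proportional to the whole belief store, not to what matters.

    import time

    beliefs = [f"fact {i} about topic {i % 100}" for i in range(1_000_000)]

    def deliberate(beliefs, situation):
        # A "rational" agent: check every belief for relevance first.
        return [b for b in beliefs if situation in b]

    t0 = time.perf_counter()
    relevant = deliberate(beliefs, "topic 42")
    print(len(relevant), "relevant beliefs,",
          f"{time.perf_counter() - t0:.2f}s spent finding them")

A mind that worked like this would be hanged long before its thoughts finished concentrating.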

It is here that Dennett breaks with Good Old-Fashioned AI (GOFAI), since he views most of these attempted solutions to the frame problem, and many other problems in AI, as so many "cognitive wheels" --- mechanisms which do the same job as some human cognitive ability at a sufficiently abstract level, but do it in a thoroughly unbiological way, having the same relationship to human cognition as wheels do to human legs. Cognitive wheels may be excellent engineering, but they're very dubious science. Dennett wants to lift some of the most restrictive assumptions of GOFAI ("high church computationalism") and break its addiction to things which can be cleanly coded in LISP --- in short, to get the artificial intelligentsia to stop hand-crafting cognitive wheels. Instead, their products need to be less brittle, less driven by elegant visions of the way the mind must work, and more oriented towards action. Living minds evolved to control bodies, and their powers and limitations are dictated by those bodies' needs; while it might be possible-in-principle to make something completely disembodied (not even embedded in a simulated environment) and yet intelligent, it will probably be much easier to make something clever enough to come in out of the rain. At the same time, he doesn't want to give up any of the good things unearthed by GOFAI, and he certainly doesn't want to open the gates to the barbarian metaphysical hordes, whose views on topics such as "embodiment" are superficially similar to his own. (It's a shame that his essay-review of Varela et alii, The Embodied Mind, isn't in Brainchildren.) The key differences between Dennett and the hordes are that Dennett actually understands what GOFAI has accomplished (cognition is computation --- even if it's some odd-hack kind of connectionist computation which, to J. Random C++ Codeslave, might as well be from the planet Mongo), and that he realizes the crucial importance of actually making things.

Human beings are not well-suited to carrying on long and accurate trains of abstract reasoning. Unless we are constantly checked, our attempts at doing so swiftly degenerate into what Russell once described as his fellow philosophers' "mere thinking." Formal logic and mathematical calculation are one check upon our thinking; experiment and observation another; and construction, reducing ideas to practice, yet another. The decisive contribution of AI to method is that at last we can apply the check of construction to our thinking about the mind. It is the prospect of exploiting this control to the fullest, of taking good ideas along their full life-cycle from philosophy to engineering, that attracts Dennett to the work of Doug Hofstadter and his pupils (in software) and to Cog, a project to actually build a conscious robot (in hardware), and to artificial life. It is the inability of the metaphysical hordes to make their ideas specific enough to code up, or, even better, to wire up, which condemns them to mere thinking.

I have by now covered maybe half of Dennett's ideas in Brainchildren, and even so have managed to say nothing about his two interesting (indeed, disturbing) papers on ethics, or artificial life, or many other subjects. I suspect, though, that this is about all you, dear readers, can stand of my prose, so I shall call a halt here and simply urge you to go straight to the source. Like all his books, Brainchildren is well-written: despite the chapters' origins as separate articles in, for the most part, philosophy journals and conference proceedings (plus Poetics Today!) his writing is clear, his exposition skillful, and his humor never very far from the surface. Some of these essays take up issues from the work of other philosophers (e.g. "Do-It-Yourself Understanding" on Fred Dretske), but even these will be quite comprehensible and valuable to readers unfamiliar with the authors Dennett is handling. (There are, the gods be thanked, no "reconstructions," and a constant awareness that philosophy ought to be about problems, not about what other philosophers have said.) I can't say I agree with everything unreservedly, but everything in Brainchildren is not only interesting but sensible. This probably isn't the very best place to start reading Dennett (that, for my money, is Elbow Room), but nothing in it is beyond the grasp of anyone with some knowledge of contemporary philosophy of mind, artificial intelligence or cognitive science, and it contains some of his best work: and that is praise indeed.


Disclaimer: I asked for, and got, a review copy of Brainchildren from Prof. Dennett and the MIT Press, with a nifty post-card of the cover thrown in; but I have no stake in its success.
Typos: p. 71, extra right parenthesis; p. 76, "next" for "nest"; p. 85, "Creenwood" for "Greenwood"; p. 102, the re-ordered letters have substituted a 3rd S for the C; p. 103, right parenthesis for the Greek letter beta; p. 205n, missing end-quote after articulated propositions.
xiii + 418 pp., several black and white photos and diagrams, unified bibliography, name and subject index
Artificial Life / Cognitive Science / Computers and Computing / Mind, Consciousness, etc. / Philosophy / Self-organization, Complexity, etc.
Currently in print as a hardback, ISBN 0-262-04166-9, US$40 [buy from Powell's], and as a paperback, ISBN 0-262-54090-8, US$20 [buy from Powell's]. Outside the US, sold by Penguin Books
30 May -- 1 June 1998