The nineteenth century, and to a lesser degree this one, have witnessed a dramatic expansion in the numbers of us engaged in administration, bureaucracy, management, oversight --- that is to say, in formally organized tasks of collective cognition and control. We did not invent bureaucracy, the mainstay of the ancient empires, but we're much, much better at it than they were. A random American town of 200,000 --- Piffleburg, WI, let us say --- will have police, a rescue squad, a fire department, a hospital, universal schooling, several large factories, insurance offices, banks, a community college, a public library with several thousand volumes at least, a post office, public utilities, political parties, garbage collection, paved and usable roads everywhere, mercantile connections stretching across the country, and, with some luck, unions. These are corrupt, inefficient institutions which work poorly; every election, Piffleburg's citizens mutter something like "what do we pay taxes for anyway?" Yet to run any one of these institutions at the level of honesty, efficiency and efficacy which makes Piffleburg grumble would have demanded the full powers and attention of even the ablest Roman propraetor or T'ang magistrate. That all of those institutions, plus the ones not restricted to a single city, could be run at once, and while governed by a very ordinary slice of common humanity, would have seemed to such officials flatly impossible.
The immediate question this raises, of why we are so much better at collective endeavors than the ancients, can be answered fairly simply. To a first approximation, the answer is: brute force and massive literacy. We teach nearly everyone to read and write, and to do it, by historical standards, at a high level. This lets us staff large bureaucracies (by some estimates, over 40% of the US workforce does data-handling), which lets us run an industrial economy (the trains run on time), which makes us rich enough to afford to educate everyone and keep them in bureaucratic employment, with some surplus left over to expand the system. This would do us no good if our ideas of administration were as shabby as those of our ancestors in the dark ages, but they're not: we inherited those of the ancient empires, and have had quite a while to improve upon them (and improvements are made easier and faster by the large number of administrators and the high standard of literacy). Among the improvements are many techniques (standardized procedures, standardized parts, standardized credentials and jobs, explicit qualifications for jobs and goods, files, standardized categories) and devices (forms, punch cards, punch card tabulators, adding machines, card catalogs, and, recently, computers) for making the administration of people and things easier. (We've been over parts of this before, looking at James Beniger's book on The Control Revolution and Ernest Gellner's Nations and Nationalism.)
All this is in the realm of technique; when it comes to theory, we are quite at a loss. We can see, in a rough, common-sensical way, what makes us better at running things than the Romans were, but we don't understand how either they or we pull off the trick at all. That is to say, we don't really have a good theory about how collective action and cognition work, when and why they do, how they can be made to work better, why they fail, what they can and cannot accomplish, and so forth. Intellectually, these are large, tempting problems; technologically, they have obvious relevance to the design of parallel and distributed computers; economically, they could mean real money, not just billions; and, in general, it'd be nice to know what it is we've gotten ourselves into.
Now, in a sense, this problem has been approached by many of the social sciences. Historians and sociologists of science have investigated the ties between the social structure of scientific communities and their intellectual achievements. Other sociologists and political scientists have sought to analyze bureaucracy (though rarely considering it a way of effecting collective cognition), and particular sorts of collective action, like mobs and social movements. Much of the most interesting research on these problems has been done by economists. The great Friedrich Hayek (that is, Friedrich Hayek the profound social scientist, not to be confused with his evil twin, Friedrich Hayek the right-wing ideologue) was apparently the first to point out that markets perform a kind of collective cognition or calculation which would be beyond the scope of the individual actors in the markets. Since his time, the economists have devoted considerable thought to how the way a group is put together --- its procedures, the distribution of power, resources, beliefs and preferences within it --- affects the decisions it arrives at, the courses of action open to it. Some of this work, like Arrow's Social Choice and Individual Values and Olson's Logic of Collective Action, is now classical, and, under various names, it's an active, thriving area of inquiry.
Still, however valuable these works, all of them take for granted (more or less) that we are capable of collective action and cognition; but why is this so? Many animals are not; those which can act in a group and communicate do so with nothing like the sophistication and efficiency of a Scottish soccer mob (though perhaps matching an English one). As Locke might have said, "beasts organize not." Yet we do: what's the difference?
This is not, really, a question about our evolution, but about what goes on between the ears of human beings at the present day, about how one congregation of little grey cells coordinates with another, without the benefit of telepathy. Fortunately, there is a discipline which studies what happens between the ears in the way of cognition, decisions and the control of action. Naturally enough, it calls itself cognitive psychology, or cognitive science, or just "cognitivism." It has slouched across these pages before, but a brief review of the leading ideas of its orthodox forms might not be amiss, especially since the book under review (I promise, there is a book under review) disputes many of them.
The orthodoxy, then, as laid down in, say, Herbert Simon's Sciences of the Artificial, or The Computer and the Mind by Philip Johnson-Laird (to name two good, elementary and well-received books at random) runs more or less as follows. Cognition, whether human, animal or artificial, is a kind of information-processing, taking place, in our case, in the brain. The information takes the form of representations (of sensory stimuli, of states of parts of the world, of facts, of relations, of possible states of parts of the world, of courses of action, or what-not). The processing consists of the transformation of these representations according to definite, though perhaps stochastic, rules. (So far, we have not excluded the connectionist heretics.) An immense amount of information-processing takes place subconsciously, particularly that which turns raw irritation of the afferent nerves into useful perceptions of the world about us, and turns volitions into raw stimulations of the efferent nerves. To recognize a dagger you see before you involves a lot of computational work; some people, having been wounded in the parts of the brain which do the computations, cannot. At least at some level of abstraction, the representations and transformations are usefully, conveniently and/or accurately thought of as structures of symbols and as algorithms, respectively. (This does rule out the connectionists.) The algorithms may be (or, if you like, instantiate) rules of inference, or rules for producing new representations from old ones more generally ("production systems"). One particularly well-studied kind of cognition, sometimes taken as the paradigm of all cognition, is problem-solving, conceived of as turning a representation of the problem, step by step, into a representation of a solution, or something close enough to a solution to satisfy the problem-solver. 
(Expertise in solving a kind of problem consists in knowing good algorithms to apply to it, being able to represent a problem in a way which makes it easy to solve, and being able to recognize a solution when you have one.) In principle, all this takes place in the brain; in practice, we can fake a larger and more accurate memory than we possess by either using external symbols, or by taking advantage of regular and persistent parts of our environment.
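For concreteness, the orthodox picture can be caricatured in a few lines of code --- everything here is my own illustration, not anything from Simon or Johnson-Laird: a "production system" is just a set of condition-action rules that rewrite a symbolic representation of the problem, step by step, until the problem-solver recognizes a solution.

```python
# A toy production system: problem-solving as rule-driven transformation
# of a symbolic representation. Rules and the example task are invented
# purely for illustration.

def solve(state, productions, is_solution, max_steps=100):
    """Apply the first matching production until the state
    representation satisfies the solution test."""
    for _ in range(max_steps):
        if is_solution(state):
            return state
        for condition, action in productions:
            if condition(state):
                state = action(state)
                break
        else:
            return None  # no rule applies: an impasse
    return None

# Example problem: reduce a number to 1 by halving or decrementing.
productions = [
    (lambda n: n % 2 == 0, lambda n: n // 2),
    (lambda n: n % 2 == 1, lambda n: n - 1),
]
print(solve(10, productions, lambda n: n == 1))  # 1
```

Expertise, on this picture, lives in the choice of representation and rules; the control loop itself is trivial.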
For the orthodox cognitivist, collective effort is just another particular environment for the several cognitive agents involved, one in which all the usual principles apply. Much of the trouble of making things work will be in communication, in getting sufficiently similar ideas of the world into everybody's head that they agree, near enough, on how to change it, or at least so that they all know what part they are to play in the change. These problems have not been totally ignored by cognitivists, but they've not exactly been burning issues either.
Edwin Hutchins proposes to change that. By training he is a cognitive psychologist and an anthropologist. He is alive to the problem of collective cognition, especially in its most starkly computational forms, and has conducted valuable field-work, studying navigation on a US navy ship based in San Diego, with this problem in mind. In fact, he wants to use this work, or rather his interpretation of it, to launch a complete reformation of cognitive science. Symbols, the individual problem-solver, problem-solving, thought as something happening between the ears, culture as a set of beliefs ("All that the Church believes, I believe..."), all the other heathen trappings of "quasi-religious" (p. 370) orthodox cognitivism are to go. I persist with this metaphor, even though the Reformed science of his desires is not very Protestant. Computational salvation is achieved, not by the individual, but by the whole "socio-cultural system" (a Phrase Which Must Be Destroyed, and accordingly shall not be used again here), and is demonstrated not by correctness of representations (Hutchins goes out of his way to be agnostic about whether people have internal representations of various and sundry things), but by actions. The idea that cognition is a kind of computation is demoted to a mere "metaphor," a fishy one at that. Let us examine the grounds for his reformation.
Hutchins's field evidence consists of very detailed records, taken in the early 1980s, on the performance of the navigation crew of a helicopter carrier ship he calls the Palau, principally as they fix their location and plot their course near shore. The way it worked, in those pre-GPS days, was, roughly, this: three land-marks on shore, of known location on the navigation charts, would be selected by the main person in charge, the "quartermaster of the watch." Then they'd "take bearings" on these, i.e. find the orientation of the line from the landmark to the ship. These lines would be drawn on the chart. Now, it's an elementary result in Euclidean geometry that any two lines meet at a single point (unless they're parallel); three lines form a triangle (unless they all meet at the same point). Somewhere within that triangle is the ship: this fixes the current position. The position of the ship at the next fix is estimated by "dead reckoning," which is simply taking the current position and heading of the ship, and its planned speed, and extrapolating forward along the line of its heading. A single person can do this, if he's not too rushed. Close to shore, the Navy gets worried, and demands fixes every few minutes, so the task gets broken down: naval flunkies take the bearings, a different flunky tells them when to take the bearings, and so on. There's a fairly rigid protocol for coordinating all these actions, and for communicating their results in a usable form, and specialized instruments for making the job easier.
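For those who like to see such things spelled out, the fix-and-dead-reckon cycle reduces to a little plane geometry. The sketch below (coordinates, bearings and speeds all invented for illustration, the chart treated as flat) is the computation that the whole navigation team, among them, performs: each bearing gives a line of position through a charted landmark, two lines intersect at the fix, and dead reckoning extrapolates along the heading at the planned speed.

```python
import math

def bearing_line(landmark, bearing_deg):
    """Line of position through a charted landmark along the observed
    bearing. Bearings are degrees clockwise from north; x is east, y north."""
    x, y = landmark
    theta = math.radians(bearing_deg)
    return (x, y), (math.sin(theta), math.cos(theta))  # point, direction

def fix(l1, b1, l2, b2):
    """Intersect two lines of position to fix the ship (a 2x2 solve)."""
    (x1, y1), (dx1, dy1) = bearing_line(l1, b1)
    (x2, y2), (dx2, dy2) = bearing_line(l2, b2)
    # Solve x1 + t*dx1 = x2 + s*dx2 and y1 + t*dy1 = y2 + s*dy2 for t.
    det = dx2 * dy1 - dx1 * dy2  # zero iff the lines are parallel
    t = (dx2 * (y2 - y1) - dy2 * (x2 - x1)) / det
    return (x1 + t * dx1, y1 + t * dy1)

def dead_reckon(pos, heading_deg, speed_knots, minutes):
    """Extrapolate the fixed position along the heading at planned speed."""
    theta = math.radians(heading_deg)
    d = speed_knots * minutes / 60.0
    return (pos[0] + d * math.sin(theta), pos[1] + d * math.cos(theta))

# A ship sees a landmark at the origin bearing 225 degrees and another
# at (10, 0) bearing 135 degrees; it must be where the lines cross.
print(fix((0, 0), 225, (10, 0), 135))   # approx. (5.0, 5.0)
print(dead_reckon((5, 5), 0, 12, 10))   # 12 knots due north for 10 min
```

A third bearing would give the error triangle mentioned above; in practice it's the size of that triangle that tells the quartermaster whether to trust the fix.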
So, what does all this actually show? Well, that cognitive tasks can get spread over several people; that, in this instance, those who do tasks which require input from other people are generally superior to them in rank; that the official job descriptions do not quite correspond to what people do; that people have a hard time believing things which are strange to them, and tend to ask those who report them questions along the lines of "Are you sure?"; that, if you don't know what something looks like, a verbal description can be very unhelpful; that the right tools can make the job simpler; and that building computation into tools can make the job simpler for people, since it's easier to use a slide-rule than take a sine or a logarithm in your head; that, if you can't talk about something, it's hard to make plans with someone else about it. There's more, but they're along the same lines.
These are not exactly earth-shaking results; in fact, they're about what common sense says to us. This doesn't make it useless to check them, since common sense is so often wrong; but even then, Hutchins has checked them against the performance of one task (navigation; more particularly, location fixing), in one set of social groups (a couple of ships of the US Navy) --- ones where the social system is designed, and has received several centuries of re-design from people whose common sense more or less agrees with the above. (One wonders if they did things differently aboard the Potemkin.)
Even if we ignore such navel-gazing complications, there are two big problems with allowing Hutchins to press the new wine he wants from his heaped-up evidential grapes. The first is that he's done, really, nothing to show that this kind of shared, collaborative computation doesn't just constitute a special environment for the old-fashioned, symbol-processing problem-solvers of cognitive science. (Indeed, his discussion of the members of the navigation team as production systems suggests one way to do this.) If anything he discovered in his case study is not compatible with cognitivist orthodoxy, he's not shown it. I can, in fact, readily imagine someone like Herb Simon snarfing up Hutchins's facts to illustrate orthodox notions. If one can divide up the task in such a way that everyone's job is easy, or replace difficult things (like taking a sine in your head) with easy ones (like using a slide-rule) then ordinary prudence dictates doing so; but the ways a computation can be carved up, and what is easy and what is hard, will depend on the abilities of the individuals involved, i.e. their characteristics as information-processors, à la Simon. (I'll note in passing here that the connectionist models of "interpretation," "authority" and "consensus" in ch. 5 really do nothing for the argument, still less the exposition. I suspect they're a hazing ritual of the UCSD cognitive science department.)
The second problem is that, when it comes to explaining how people "think together," Hutchins doesn't so much theorize as wave his hands with vigor and emphasis. Nobody else has a theory of this subject either, of course, but that's much less of a worry if you're not making it the heart, and very nearly the end-all and be-all, of cognitive science. I find this lack of a theory of collective cognition especially worrisome when it comes to Hutchins's evident belief that the system has representations, takes actions, etc. From where I sit, the chart on which bearings are plotted and courses extrapolated is not an internal representation for the ship, but an external one for the quartermaster of the watch. Nothing he presents looks like a representation which can't be localized to a single person. By contrast, in cases where "emergent computation" (as we say in the trade) clearly is happening, one really does need to postulate representations and computations to make sense of what's happening, but can't assign them to particular components (as, for instance, our ideas can't be localized to particular neurons). This doesn't mean that Hutchins is necessarily wrong, but it does raise suspicions in someone trained to think that social phenomena are explained by "real individuals, their activity and the material conditions under which they live, both those which they find already existing and those produced by their activity," rather than by a reified History or Culture or Society.
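To make the contrast concrete, here is a toy case of emergent computation --- entirely my own illustration, nothing from Hutchins: a miniature Hopfield-style network which recovers a stored pattern from a corrupted cue. The memory is genuinely a property of the collective dynamics; it resides in no single weight or unit, the way the chart's content resides with the quartermaster.

```python
import numpy as np

# A Hopfield-style network storing one pattern. No individual weight
# "contains" the memory; the whole weight matrix, driving the collective
# update dynamics, does. Pattern and corruption are arbitrary choices.
pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])

# Hebbian outer-product storage, with self-connections zeroed.
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0.0)

# Start from a corrupted cue (two units flipped) and iterate to a fixed point.
state = pattern.copy()
state[[0, 3]] *= -1
for _ in range(5):
    state = np.sign(W @ state)

print(np.array_equal(state, pattern))  # True: the pattern is recovered
```

In a system like that, talk of distributed, non-localizable representation earns its keep; nothing in Hutchins's navigation data seems to require it.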
This almost neo-Platonic fondness for making abstractions into things is part of a syndrome, wide-spread in American social science, for which Hutchins might almost serve as a type-case. The other components include: a penchant for mediocre and irrelevant philosophizing (as e.g. in the discussion of the question "Where am I?", pp. 12--13); seeing Culture lurking under every bush, often enough invisible to all but the social scientist; pointless personal details about the writer; "negotiations" without any actual negotiating (these last two combined in the section [pp. 21--23] on how "they [the crew] and I negotiated an identity for me," really just about what the navigation crew thought about Hutchins, with no discernible relation to anything else in the book whatsoever); ignorance of separate but relevant fields of inquiry (the closest he gets to the work on organizations, collective activities and collective decision-making is a second-hand citation of Wittfogel's great book on Oriental Despotism); indifference to consistency (e.g., quoting with approval Bruno Latour's call for a cognition-free study of formalism on p. 132, and spending much of the rest of the book arguing that formalisms do nothing unless put to work by some cognitive agent --- a point made in the first chapter of The Sciences of the Artificial); gratuitous and historically ill-informed attacks on "Cartesian" traditions; insinuation that one's opponents are next door to priests (when one mob of secular materialists accuses another of being quasi-theologians, it's an almost sure sign that they've run out of good arguments against them); and really stultifying writing.
It's not just that Hutchins tells us in painstaking detail how the Palau was navigated, or even that he gives us unedited transcripts of the navigation crew's conversations; it's that he's plodding, dull, and pedantic, and capable of producing an explanation of the fact that speed is distance divided by time so obscure it confused even me, after I've spent years teaching it. (There are also an unconscionable number of typographic mistakes, especially for a book from a first-rate academic publisher; I gave up counting them about page ninety.)
We know next to nothing about how collective cognition works, or when it works, or how to make it work better; we have some ideas about it, but at best they've the status of artisanal rules of thumb. Cognition in the Wild could have been, and claims to be, a major breakthrough in understanding this horrid tangle of questions. Alas, it was written by someone who's a prime candidate for membership in Gellner's proposed "Hermeneutics Anonymous"; by far the most valuable thing in it is the ethnographic data, presented in mind-numbing detail. Cognitive scientists should certainly read it, if only because some of them might be able to gradgrind those facts into something useful; the rest of us will find our time better spent elsewhere.