June 15, 2012

The Impossible Takes a Little Longer

Attention conservation notice: Is there any form more dreary than a response to comments?

Thanks, once again, for the very gratifying response. Mostly I want to reply to criticism, because I have little constructive to say, as usual. So I will pass over Scott Martens, and Peter Dorman, and David Childers, and many others.

Plans Happen

I should re-iterate that Kantorovich-style planning is entirely possible when the planners can be given good data, an unambiguous objective function, and a problem of sufficiently limited scope. Moreover, what counts as "sufficiently limited" is going to grow as computing power does. The difficulties are about scale, not principle; complexity, not computability.

Probably more importantly, there are other forms of control, with good claims on the name "planning", which are not this sort of mathematical programming, and plausibly have much lower computational complexity. (Central banks, for instance, are planning bodies which set certain prices.) In particular, intervening in existing market institutions, or capitalist firms, or creating non-market institutions to do things — none of these are subject to the same critique as Kantorovich-style planning. They may have their own problems, but that's a separate story. I should have been clearer about this distinction.

Let me also add that I focused on the obstacles in the way of planning because I was, at least officially, writing about Red Plenty. Had the occasion for the post been the (sadly non-existent) Red, White, and Blue Plenty, it would have been appropriate to say much more about the flaws of capitalism, not just as we endure it but also in its more idealized forms.

"Shalizi's discovery of Sweden"

I took it to be obvious that what I was advocating at the end was a rather old-fashioned social democracy or market socialism — what Robert Heilbroner used to call a "slightly imaginary Sweden". The idea that the positions I like are at all novel would be silly. It is also something which I evidently failed to convey clearly, given the responses by Gordon and Tim Wilkinson.

For the record, I think it is a horrid mistake to think or act as though the level of inequality is constant, or that current institutional arrangements within capitalism are either unalterable or very good. I also think that spending thirty years deliberately dismantling safe-guards, out of a mixture of ideology and greed, was criminal folly. Red White and Blue Plenty would also have a lot to say about economists' dreams — and their zombies — and the effects of chasing those dreams.

I don't see, though, that it's so obvious how to make things better now. It's pretty clear, I hope, that defined-contribution, 401(k), retirement plans have been a colossal failure (except for the financial industry, for whom they present a steady flow of dumb money), but going back to the sort of defined-benefit plan which presumes long-term employment by a single stable firm is also a non-starter. (Indeed the one good thing about 401(k)'s was that they didn't tie workers to one firm.) I'm not saying there's no solution, but if there's an obvious fix I don't see it. Or, again, the inequalities in public education in this country are obscene, and make a mockery of even stated conservative ideals, but even if we swept away obstacles like local and state school boards, funding from local property taxes, etc., is it really clear what a better system, that works under modern American conditions, would be? That we'd get it right the very first time? Or, saving the most important for last, does anyone seriously want to say that coming up with viable institutions to strengthen workers' bargaining power is straightforward? That just doing what the CIO did in the '30s will work again? There has been a very concerted, and dubiously legal, effort to crush unions and the labor movement, and stopping that has got to be a step in the right direction, but beyond that?

Turning to Wilkinson. I cheerfully accept correction that profit is not the only unambiguous objective function which could guide planners; that was sloppy writing on my part. And I would be very happy to see firms and other centers of power and planning brought under more democratic control; generally speaking, the exercise of power should be accountable to those over whom it is wielded. (Though there is a real potential for conflict here between democratic control by workers in a particular enterprise and the people at large: I don't know how we ought to deal with that.) But beyond that... Look, I didn't make up "shadow prices"; that's standard terminology, though if you don't like it, feel free to read "objectively-determined valuations", or even "Lagrange multipliers", throughout. Perhaps-relatedly, I fail to see how token distribution, with tokens exchanged for goods and services, is not a market — indeed not commodity production. (Again, there has to be a reason for production to shift to match patterns of demand, or else we don't have a feedback loop, we have blinkenlights.) If there's a distinction there, what is it? And what would be the general term which embraces markets and the institutionalized-exchange-of-tokens-for-goods-which-is-not-a-market? Decentralized planning sounds nice — though it was not what was being discussed in Red Plenty — but inevitably those plans are going to have to be coordinated. How? Processes of bilateral or multilateral mutual adjustment are going to have lots of the features of markets — again, I'm not going to insist that this could only be done through markets — but the alternative would seem to be central authority.

Cockshott, and More Equations

The most important issue raised, I think, was the claim that Cockshott has shown that central planning is computationally tractable after all. I don't agree, but unfortunately, there's going to need to be a bit more math.

The linear programming problem is \[ \DeclareMathOperator*{\argmax}{argmax} \argmax_{\mathbf{C}y \leq x}{\left( v \cdot y \right)} \] where \( x \) is the (given, known) vector of available inputs or resources, \( \mathbf{C} \) is the (given) matrix of constraints (including the production technologies), \( v \) is the (given) vector of relative values of outputs, \( y \) is the (unknown, sought-for) vector of outputs. The inequality \( \mathbf{C} y \leq x \) is to be "taken componentwise", i.e., each component of \( \mathbf{C}y \) must be less than or equal to the corresponding component of \( x \). The idea is that \( \mathbf{C} \) encodes facts like "each diaper needs so much fabric, machine time, electricity, labor time, etc." to make. Through the magic of Lagrange multipliers, this constrained maximization is equivalent to the unconstrained maximization \[ \argmax_{y, \lambda} {\left( v\cdot y - \lambda\cdot \left( \mathbf{C}y-x\right) \right)} \] where now the vector \( \lambda \) contains the Lagrange multipliers, a.k.a. shadow prices, a.k.a. objectively determined valuations, for each of the constraints. The problem is to figure out what we ought to make and how we ought to make it, which depends on the values we assign to different goods, the resources we have, and the different ways those resources can be used to make stuff we want.
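To make this concrete, here is a minimal sketch of such a planning problem in Python, with all numbers invented for illustration: two goods, two resources, and the optimum found by brute-force enumeration of the vertices of the feasible region rather than by a real LP solver (for a problem this small, the optimum of a linear objective must sit at a vertex).

```python
from itertools import combinations

# Toy planning problem: C[i][j] = amount of resource i per unit of good j.
C = [[1.0, 2.0],   # e.g. fabric: 1 per diaper, 2 per coat (invented)
     [3.0, 1.0]]   # e.g. labor-hours per unit (invented)
x = [10.0, 15.0]   # available resources
v = [2.0, 3.0]     # relative values of the two outputs

# Boundary lines a*y1 + b*y2 = c: the two resource constraints plus y1=0, y2=0.
lines = [(C[0][0], C[0][1], x[0]),
         (C[1][0], C[1][1], x[1]),
         (1.0, 0.0, 0.0),
         (0.0, 1.0, 0.0)]

def intersect(l1, l2):
    """Intersection of two lines by Cramer's rule, or None if parallel."""
    (a1, b1, c1), (a2, b2, c2) = l1, l2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        return None
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

def feasible(y):
    """Check y >= 0 and C y <= x componentwise (with a little float slack)."""
    return (y[0] >= -1e-9 and y[1] >= -1e-9
            and C[0][0] * y[0] + C[0][1] * y[1] <= x[0] + 1e-9
            and C[1][0] * y[0] + C[1][1] * y[1] <= x[1] + 1e-9)

vertices = [p for l1, l2 in combinations(lines, 2)
            if (p := intersect(l1, l2)) is not None and feasible(p)]
best = max(vertices, key=lambda y: v[0] * y[0] + v[1] * y[1])
# best is (4.0, 3.0): both resource constraints bind at the optimum.
```

A production solver (the simplex method, or an interior-point code) would also return the Lagrange multipliers on the two binding constraints, i.e. the shadow prices of fabric and labor; the point of the sketch is only to show what the problem data look like.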

When I talked about the complexity of solving the planning problem, I was talking about the complexity of this linear programming problem, and I was allowing for it to be solved only up to an accuracy of \( \pm \epsilon \), i.e., the solution only had to come to within \( \epsilon \) of the optimum, and in fact only to within \( \epsilon \) of satisfying the constraints. Since the computational complexity of doing so only grows proportionally to \( \log{1/\epsilon} \), however, if we can do this at all we can ask for very good approximations. Or, pessimistically, if some other part of the problem, like the number of variables, is demanding lots of resources, we'd have to make the slop \( \epsilon \) (literally) exponentially larger to make up for it.

(Incidentally, one issue which was not explicitly raised, but which I should have mentioned, was the possibility of replacing approximate optimization with satisficing, e.g., taking the first plan where the value of the output was above some threshold, say \( T \), and all constraints were met. [This would still leave the computational-political problem of coming up with the value vector \( v \).] I have been unable to discover any literature on the complexity of linear satisficing, but I suspect it is no better than that of approximate linear programming, since you could use the former as a sub-routine to do the latter, by ratcheting up the threshold \( T \), with each satisficing plan as the starting-point for the next round of the ratchet.)
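The ratchet just described can be sketched in a few lines. Everything here is hypothetical scaffolding: `satisfice(T)` stands in for whatever procedure returns the first feasible plan with value at least \( T \), and the toy oracle at the end simply models a problem whose true optimum value is 17.

```python
# Using a satisficer as a subroutine for approximate optimization, by
# ratcheting up the threshold T, as sketched above.
def ratchet_optimum(satisfice, T0, step, max_rounds=1000):
    plan = satisfice(T0)
    if plan is None:
        return None              # not even the initial threshold is attainable
    T = T0
    for _ in range(max_rounds):
        nxt = satisfice(T + step)  # in the text: restart from the last plan
        if nxt is None:
            return plan          # no plan clears the raised threshold; stop
        plan, T = nxt, T + step
    return plan

# Toy oracle for illustration: the "plan" is just its own value, and the
# (invented) problem's true optimum is 17.
best = ratchet_optimum(lambda T: T if T <= 17.0 else None, 0.0, 1.0)
```

Each round costs one full satisficing run, which is why I suspect the overall complexity is no better than that of approximate linear programming.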

And so to Cockshott. I have not had a chance to read Towards a New Socialism, but I have read his 1990 paper, and I'm underwhelmed. It is not about solving the planning problem. Rather, it is about solving a simple system of linear equations, \[ \mathbf{A}x = y \] where again \( y \) is the vector of desired outputs, which he takes to be given, \( \mathbf{A} \) is a known and completely linear production technology, and \( x \) is the unknown vector of resources required, not available. His claim is that if the number of non-zero entries in \( \mathbf{A} \) is small, averaging \( k \) per row, then \( x \) can be found, to within an accuracy of \( \pm \epsilon \), in a time on the order of \( kn \log{1/\epsilon} \).

I have no doubt that it is possible to do this, because iterative algorithms for solving sparse systems of linear equations, with precisely this complexity, have been known since the work of Jacobi, and of Gauss and Seidel, in the early 19th century. (This is not mentioned in his 1990 paper, but does show up in some later ones.) The basic procedure is to start with some random guess at \( x \), say \( x_0 \), calculate \( \mathbf{A} x_0 \), and see how that compares to \( y \). The vector of "residuals", \( \mathbf{A} x_0 - y \), is used to adjust the initial guess to some \( x_1 \) which should be closer to the desired solution, \( \| x_1 - x\| \leq \| x_0 - x \| \). The cycle is then repeated with \( x_1 \), until the program either hits an exact solution or gets tired. The many different algorithms of this form all differ somewhat in details, but share this general flavor, and most have the property that if \( \mathbf{A} \) is not too ugly, then each iteration brings the approximation closer to the solution by at least a constant factor, \[ \| x_{t+1} - x \| \leq \kappa \| x_t - x\| ~, ~ \kappa < 1 \] — this is where the \( \log{1/\epsilon} \) comes from, not information theory. Specifically, Cockshott's algorithm seems like a variant of Jacobi's, though with an un-necessary asymmetry between how positive and negative residuals are handled, and a quite superfluous step of sorting the variables in order of how big their residuals are.
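A minimal pure-Python rendition of this kind of solver — Jacobi's method, on an invented 3-by-3 system made diagonally dominant so the iteration is guaranteed to contract:

```python
# Jacobi iteration for A x = y, of the kind described above.
# Invented numbers; A is diagonally dominant, so each sweep shrinks the
# error by at least a constant factor kappa < 1.
A = [[4.0, 1.0, 0.0],
     [1.0, 5.0, 2.0],
     [0.0, 2.0, 6.0]]
y = [5.0, 8.0, 8.0]   # desired outputs; the exact solution is x = (1, 1, 1)

n = len(y)
x = [0.0] * n          # arbitrary initial guess x_0
for _ in range(60):    # each sweep costs O(k n) when A has k nonzeros per row
    x = [(y[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
         for i in range(n)]

residual = max(abs(sum(A[i][j] * x[j] for j in range(n)) - y[i])
               for i in range(n))
```

After 60 sweeps the residual is down at floating-point noise, which is the \( \kappa^t \) geometric convergence at work; note that nothing here chooses what to make or how to make it, it only adds up requirements.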

For Cockshott's algorithm, or any other linear-equation solver, to be of real relevance here, we need to presume that

  1. we have settled on exactly how much of every good (and service) we want the economy to produce, including indexing by time and space;
  2. we have exactly one way of producing each good in the economy, or we have, somehow, settled on one such way per good;
  3. every good in the economy is produced by combining inputs in fixed, known proportions, with no possibility of substitutions, alternative methods, increasing (or decreasing) returns, etc.;
  4. we do not need to check whether we actually have sufficient resources to achieve the desired level of output with the given technology.
However, it would be easy enough (linear time) to check, at the end, whether the required resources exceed available stocks, so the last point is not all that bad.
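That final check really is a single linear-time pass (invented numbers throughout):

```python
# Linear-time feasibility check: once the solver has produced the required
# resource vector, compare it componentwise to the available stocks.
required  = [12.0, 9.0, 30.0]   # output of the linear solver (invented)
available = [10.0, 15.0, 30.0]  # stocks on hand (invented)

shortfalls = [(i, r - a) for i, (r, a) in enumerate(zip(required, available))
              if r > a]
feasible = not shortfalls       # here False: resource 0 is short by 2.0
```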

What is bad is completely assuming away having to choose what to make and having to choose how to make it. Kantorovich was not, after all, led to consider the allocation of production across alternative techniques through idle mathematical whims, but because of real, concrete problems facing industry — and such problems keep coming up, which is why linear programming got re-invented in the West. (And it does no good to say "well, just use the most efficient technique for everything", because, unless there is a technique which uses less of every input than all its competitors, "efficiency" is going to depend on the relative values of inputs.) Once we realize this, the attempt to replace linear programming with merely solving a system of linear equations collapses.

To sum up, what Cockshott has done is to remind us that it's (sometimes) not that hard to add up all the resources the plan calls for, once we have the plan. This adding-up is at best one step which will have to be repeated many, many times in order to come up with the plan. To think that adding-up was the hard part of mathematical planning is, and I use the words deliberately, preposterously optimistic.

(To be fair, Cockshott also has some later papers where he states the optimization problem properly, but does not go into its complexity, or the fact that it's significantly higher than that of just solving a linear system.)

Parallelism is very important, and it's what distinguishes what you can do with a (modern) supercomputer from a (modern) desktop, much more than raw clock speeds. To the best of my knowledge — and I would be very happy to be wrong here, because it would really help with my own work — to get any real speed-up here easily, your constraint matrix \( \mathbf{C} \) needs to be not just sparse, but also very specially structured, in ways which have no particular plausibility for economic problems. (Those kinds of structure have a lot of plausibility for linear problems coming from finite-scale approximations to continuous mechanical engineering problems, however.) If your matrix is sparse but unstructured, your best hope is to treat it as the adjacency matrix of a graph, and try to partition the graph into weakly-coupled sub-graphs. (Since I keep coming back to Herbert Simon, this is his "near decomposability".) Finding such a partition is, itself, a hard computational problem. The slides by Gondzio, referred to by Keshav, are about fast parallel solution of linear programming problems under the assumption not just that such a decomposition exists, but that it's already known (slides 15ff). It's very cool that so much can be done when that's the case, but it doesn't seem to address our problem.
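In the easy extreme case where the coupling is not merely weak but absent, the decomposition can be read straight off the sparsity pattern by finding connected components, as in this sketch (adjacency structure invented for illustration). Genuinely near-decomposable matrices, with weak cross-couplings that have to be cut, need heuristic graph partitioners such as METIS, and that is where the hard combinatorial problem lives.

```python
from collections import deque

# Variables i and j are linked if some constraint couples them: the sparse
# matrix read as a graph (structure invented for illustration).
adj = {0: [1], 1: [0], 2: [3, 4], 3: [2], 4: [2], 5: []}

def components(adj):
    """Breadth-first search for the exactly-decoupled blocks."""
    seen, comps = set(), []
    for start in adj:
        if start in seen:
            continue
        seen.add(start)
        comp, queue = [], deque([start])
        while queue:
            u = queue.popleft()
            comp.append(u)
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    queue.append(w)
        comps.append(sorted(comp))
    return comps

blocks = components(adj)   # each block could go to its own processor
```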

Of course, if we have such a decomposition, each processor becomes its own center of calculation and control, as I alluded to, and we need to worry about coordinating these centers. Which brings us back to where we were before.

Stafford Beer and Allende's Chile

I don't know enough to have an opinion about what was tried, or what its prospects might have been, in the absence of Pinochet and the CIA. The little of Beer I have read was not particularly impressive, but that is no real basis for judgment.

Innovation

This is obviously hugely important, but I didn't say anything about it because I really don't understand it well enough. There's no question that the economies of the capitalist core have been very good at developing and applying new technologies. It's also obvious that governments have been intimately involved in this every step of the way, developing specific technologies (e.g., GPS, for fighting the Cold War), demanding specialized products with no immediate commercial need whose development led to huge spill-overs (e.g., microelectronics, for fighting the Cold War), etc. More generally, if all resources are efficiently devoted to meeting current needs, there is no slack available to waste on developing new things, especially as any development process is going to involve dead ends.

Now whether this can only be arranged by giving a slightly-random selection of inventors really huge pools of money after the fact is not at all clear. If they do need to be motivated by rewards, above some level of personal comfort, what more money provides is bragging rights and adulation. Might this not come as well from medals and prizes as from money? On top of this, if we take people like this at their word, they are obsessive control freaks who are driven to do their jobs. But if they're intrinsically motivated, they don't need to be paid so much...

Post-Scarcity

For the linear-programming objective function, \( v \cdot y \), we need some constraints, or else the optimum is going to be "produce infinitely much of everything". And at least some of those constraints will have to be active, to "bite", to keep the solution from going off to infinity, which means non-zero (shadow) prices for at least some things. We could imagine, however, replacing this objective function with one which allows for satiation: over the five year plan, you can only eat so many beets, drink so much quince brandy, drive so many cars, wear so many coats, need so many hours of baby-sitting. If the objective function can be satiated within the constraints of available resources and technologies, then all of the shadow prices go to zero exactly. There would still be an allocation problem, that of assigning the resources to production processes to make sure enough was made to satiate demand. (This is what Cockshott mistakes for the planning problem.) But once that was taken care of everyone could just take what they liked.
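The zero-shadow-price claim is just complementary slackness, which a few invented numbers can illustrate: if producing every good at its satiation level leaves every resource constraint slack, the multipliers on those constraints all vanish.

```python
# Complementary slackness under satiation (all numbers invented):
# abundant resources, bounded wants.
C = [[1.0, 2.0],        # resource needs per unit of each good
     [3.0, 1.0]]
x = [100.0, 100.0]      # available stocks: deliberately abundant
satiation = [4.0, 3.0]  # nobody wants more than this over the plan period

# Resource usage if we simply produce everything at its satiation level.
usage = [sum(C[i][j] * satiation[j] for j in range(2)) for i in range(2)]

# Every constraint slack => by complementary slackness, every shadow price 0.
all_slack = all(u < xi for u, xi in zip(usage, x))
shadow_prices = [0.0 for _ in x] if all_slack else None
```

The allocation problem remains — somebody still has to compute `usage` and route the resources — but nothing in the solution carries a price.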

I don't see anything self-contradictory in this vision. It does seem to presume either complete automation, to the point of AI, or being able to count on volunteered human labor. But it's thoroughly conceivable, if the objective function satiates.

This brings me to non-market signals. I think it would be a very good thing to have many different ways of getting feedback from consumers to producers about what to make and how, beyond the market mechanism. (If nothing else, markets as institutions would be healthier for the competition.) So the suggestion by "Nobody" that "every good Web 2.0 site is a non-pricing/market solution to revealing preferences" is quite interesting. But it seems to rest on a number of peculiarities of online informational goods, namely that the opportunity cost of producing one more copy for anyone who wants one is almost (though not quite) zero. (Copying takes very few resources, my using my copy doesn't interfere with you using yours, etc.) The answer to "how many copies of this document (movie, song, optimization suite) should we make?" is therefore "as many as people want". The real question is rather "what should people pay attention to?", and there non-market signals about what other people have found worthwhile can be very helpful indeed. (Though, like everything else, they have their own weird failure modes, without even getting into spam.) Approaching this point is very positive.

Does this close the loop, to get us more of what people value and less of what they don't? Not on its own, I don't think. People who like receiving this sort of attention will respond to it, but you can't eat attention, and producing content in the first place is costly. The resources to support production have to come from somewhere, or else it will be hard for people to respond to attention. (This is a version of Maynard Handley's point that people need to have reasons to respond to feedback signals, and capacity to do so.) For some time now the capitalist solution has been intellectual property, i.e., abandoning free markets and economic efficiency; that, and/or advertising. But there are many other possibilities. We could accept amateurism, people producing these goods in the time left free from their day-jobs. We could also try patronage, perhaps distributed from many patrons, or creating jobs where some production is part of the normal duties, even though the output isn't sold (as in science). Doubtless there are yet other ways of organizing this.

I don't see how to adapt any such system to providing tomatoes, or haircuts, or tractors, where, at least for the foreseeable future, the opportunity costs of producing one more unit are very much larger than zero. To repeat a point from the original post, we already allocate some services with positive cost along non-market lines, and rightly so (e.g., the services of the public schools, the police, the fire department, ...). All the examples like this I can think of are ones where we need (and employ) very little feedback. We can and should explore more options for providing other goods and services along such lines. We should also explore non-market modes of signaling, but collaborative filtering is not such a mode, though it might end up being part of one.

Manual trackback: Brad DeLong; The File Drawer


Posted at June 15, 2012 07:05 | permanent link
