For instance, their prototypical job search model imagines that an
unemployed worker wants to maximize the net present value of their future
income; that at each time period they receive a job offer, with the offered
wages being independent, identically distributed draws from a fixed, known distribution;
and that once a job is accepted, it lasts forever (and you can't switch). If
your discount factor is *d* and you get an offered wage of *w*, the net
present value of that offer is *w*/(1-*d*). It makes sense to take
this offer if it's more than what you'd expect to get by waiting to see what
tomorrow (or the day after or the week after...) might bring. Solving this
model, there turns out to be a "reservation wage", which depends on the
distribution of offers and on the discount rate, such that your optimal
strategy is to accept all and only offers which exceed the reservation wage.
(Basically, you ask if the present value of accepting this offer exceeds the
present value of getting nothing today and continuing the search tomorrow.)
Since of course we do not have data on the offers unemployed job-seekers may
have rejected, their reservation wages are not actually identifiable from data,
absent strong assumptions with no grounding in economic theory (see sec. 2.6 of
Manski's Identification),
assumptions which Christensen and Kiefer go on to make. (I should add that
this is the most bare-bones version of the job search model and they consider
many more sophisticated refinements.)
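The reservation-wage calculation can be made concrete. In a minimal sketch (all numbers below are my own illustrative assumptions, not the book's), write the discount factor as beta: accepting an offer *w* is worth *w*/(1-beta) forever, while rejecting yields nothing today plus the discounted expected value of searching again, so the reservation wage solves the fixed-point equation w* = beta E[max(w, w*)]:

```python
import numpy as np

# Illustrative reservation-wage computation for the bare-bones job
# search model: offers are i.i.d. draws from a known distribution,
# accepted jobs last forever, rejected offers yield nothing today.
# Accepting w is worth w/(1-beta), so the reservation wage w* solves
# the fixed point w* = beta * E[max(w, w*)].
beta = 0.95                                    # discount factor (assumed)
wages = np.linspace(10.0, 60.0, 51)            # offer grid (assumed)
probs = np.full(wages.size, 1.0 / wages.size)  # uniform offer distribution

w_star = 0.0
for _ in range(10_000):
    w_next = beta * probs @ np.maximum(wages, w_star)
    if abs(w_next - w_star) < 1e-12:
        w_star = w_next
        break
    w_star = w_next

# Optimal policy: accept all and only offers w > w_star.
```

The iteration converges geometrically because the map is a contraction with modulus beta; note that w* depends entirely on beta and the offer distribution, neither of which is observed.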

Estimating models like this, ones which are based on dynamic programming,
involves a lot of special features. First of all, you generally have to solve
the dynamic programming problem! (But not always.) Secondly, taken seriously
they are, in some ways, far too specific: the optimal action is (generally)
a *deterministic* function of the state variables, and since people
don't always do the same thing in what the model says are identical situations,
the models are in fact false. Christensen and Kiefer recommend addressing this
problem by adding random, unobservable noise to the agents' utility functions,
lifting the "curse of determinacy". Thirdly, the models are too vague in other
ways, since many parameters (often discount rates and others relating to the
agents' objective functions) are not fully identifiable from even ideal
data.
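The random-utility device can be sketched concretely. In standard discrete-choice fashion (the Gumbel distribution here is my assumption; the shock distribution is a modeling choice), adding i.i.d. extreme-value shocks to each action's value turns the deterministic argmax into smooth logit choice probabilities:

```python
import numpy as np

# Sketch of the random-utility fix: add i.i.d. type-I extreme value
# (Gumbel) shocks to each action's deterministic value. The argmax over
# the noisy values then has the closed-form logit choice probabilities
#   P(a) = exp(v_a / s) / sum_b exp(v_b / s).
values = np.array([1.0, 1.2, 0.7])  # deterministic action values (assumed)
s = 0.5                             # shock scale; s -> 0 recovers the argmax

z = values / s
z -= z.max()                        # subtract the max for numerical stability
choice_probs = np.exp(z) / np.exp(z).sum()

# Every action now has positive probability, so behavior that varies
# across "identical" situations no longer falsifies the model outright.
```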

By this point we have committed ourselves to economic agents solving
infinite-horizon stochastic planning problems by maximizing utility functions
which we will never be able to check against any conceivable data. (Ponder,
for a moment, the assumption that the agents *know* their own utility
functions will contain an absolutely unpredictable component, but they will
nonetheless pick their actions now to maximize it in the future.) To do this
they are assumed to know not merely the correct form of the economic model, but
also its parameter values, which are what the econometrician hopes to learn by
estimating the model; and they are solving, exactly, optimization problems for
which the best available methods currently (and for the foreseeable future)
provide only approximations at great computational expense. (Indeed, serious
industrial work on dynamic programming problems
[e.g., this]
often uses techniques like
reinforcement
learning, which would be dismissed in this approach to economic modeling as
postulating irrational agents.) All of this, moreover, completely ignores
issues of strategic interaction. Much of this seems to be a superfluous
loading of metaphysics onto fairly simple (and implausible) behavioral models.
E.g., take the prototypical job search model. As far as the data go, it looks
no different from just positing the behavioral rule "Accept offers over your
reservation wage". Where reservation wages come from would be an important
question, but writing them as functions of unobservable, unidentifiable
quantities doesn't help answer it. I would be prepared to swallow some of this
metaphysics if the results gave us outstanding matches to the data, much better
than could be achieved by any more plausible assumptions, but honestly they
don't, and I am tempted to read something into the fact that this book has a
lot about point estimates, less about standard errors, little about confidence
intervals, and nothing about specification and goodness-of-fit testing.

If economic models like this sound good to you, then you should by all means read this book, because it really does an admirable job of laying out how to estimate them. But if those models sound good to you, then you have much bigger problems than the subtleties of joining the Bellman equation to maximum likelihood.

488 pp., line diagrams, bibliography, index

Probability and Statistics; Economics

Currently in print as a hardback, ISBN 978-0-691-12059-1 [Buy from Powell's], US$49.50

21 September 2009