11/07/2011

Computational and Mathematical Modeling in the Social Sciences Review

Imagine that you're out for a relaxing dinner at your neighborhood bistro. Your waiter, a lanky young lad named Trey, sidles up to your table and describes the evening's specials, beginning with a free-range, grilled, Sonoma chicken bathed in a white wine and balsamic reduction and peppered with bits of black truffle. You think that the dish sounds wonderful and accept it for consumption with no revisions.
After reading this book, I can only guess that Scott de Marchi's reaction would be a little different. He'd point out that grilling was just one of many options. Alternatively, the chef could have fried it, baked it, braised it, seared it, roasted it, or even cooked it at low temperature in a Ziploc bag. Why did the chef choose grilling? And, oh by the way, why free-range Sonoma chicken? Why not organic chicken - wouldn't its stronger taste hold up better to the wine and balsamic reduction? Heck, why not go all out and get one of those chickens that was hand-fed corn mash by Italian monks who gave it daily massages and hour-long walks through Tuscan valleys? And what about the number of possible spice and sauce combinations? Why black truffles?
The chicken entrée suffers from a curse of dimensionality. By modifying our choices on each dimension, we can create enough chicken variations to awe even the late Carl Sagan. Gourmands benefit from the curse - we can expect an original special every week. Social scientists - historians, theorists, and empiricists alike - take the curse on the chin. It obliges a rethinking of how we construct and evaluate a model, or so says Scott de Marchi in this fascinating and challenging new book. To feel the effects of the curse, suppose that you're writing an empirical model of why countries go to war. In choosing variables for your regression, you pick ten from a set of twenty. You then toss in an interaction term, chosen with great care from the forty-five possible pairs. You then choose a model specification - linear, log-linear, nonparametric, or whatever. When you step back and look at the process of creating your empirical model, you realize that you have as many possible regressions as the bistro's chef has chicken entrées.
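The back-of-the-envelope arithmetic behind this curse is easy to check. A minimal sketch (the four specification families are my illustrative assumption, not an exhaustive list):

```python
from math import comb

n_candidates = 20   # candidate explanatory variables
n_chosen = 10       # variables actually included in the regression
n_specs = 4         # e.g. linear, log-linear, nonparametric, ... (assumed)

variable_choices = comb(n_candidates, n_chosen)   # ways to pick 10 of 20
interaction_pairs = comb(n_chosen, 2)             # the forty-five possible pairs
total_models = variable_choices * interaction_pairs * n_specs

print(variable_choices)   # 184756
print(interaction_pairs)  # 45
print(total_models)       # 33256080
```

Over thirty-three million candidate regressions from one modest variable set, before we even vary the sample or the estimator.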
Suppose instead that you're writing a game theory model of first strikes. Professor de Marchi has a few questions to ask: Is the game one shot or repeated? Are moves sequential or simultaneous? Is information asymmetric? Are the players risk averse? Are preferences separable?
Given the billions of possible model specifications, the task of finding significant coefficients or proving (wink-wink) a general theorem suddenly doesn't look so impressive. Even combining the two, integrating a theoretical model with empirical analysis (EITM, anyone?), looks about as hard as cooking up a little Bonferroni chicken to go with that Oregon Coast Pinot Noir. Once aware of the curse, we can see no shortage of naked emperors (some of whom de Marchi reveals with some relish). We can also try to get around it, to conjure up a counter-hex.
The counter-hex proposed by de Marchi consists of three parts. First, he wants us to split our data into training sets and testing sets -- a good idea, but one that comes with a cost. A little math shows that dividing the data yields sets that are, on average, only half as big, so we'll need a lot more wars for IR to have any hope of finding statistical significance.
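The split itself is simple to carry out. A minimal sketch, using placeholder records in place of real conflict data (the 200-observation data set and the 50/50 split fraction are assumptions for illustration):

```python
import random

def split_train_test(observations, train_frac=0.5, seed=0):
    """Randomly partition observations into a training set and a testing set."""
    rng = random.Random(seed)       # fixed seed so the split is reproducible
    shuffled = list(observations)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    return shuffled[:cut], shuffled[cut:]

# e.g. 200 hypothetical country-year observations
data = list(range(200))
train, test = split_train_test(data)
print(len(train), len(test))  # 100 100
```

Calibrate on `train`, evaluate only on `test`: the halved sample sizes in the last line are exactly the cost the review describes.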
Second, he wants us to analyze classes of models and not individual models with idiosyncratic (and possibly brittle) assumptions. In demanding that we consider classes of models, de Marchi implicitly charges some mathematical theorists with selling magic beans in the form of theorems that rely on specific functional forms. Results for a single functional form do not a general theorem make. The difference between a three-person, three-alternative example of a Condorcet cycle and Arrow's Possibility Theorem is the difference between predicting that a falling apple will hit the ground and formulating the theory of gravity. But proving general results is not easy. In fact, few general results exist. So why not be honest about the lack of generality rather than cooking up specific models that give the desired result?
The proposed solution, to create a feature space (dimensions on which we make various assumptions) and explore all of the models within that space, sounds good but it creates a problem unless we can increase the birth rates in Pasadena and Rochester. We still have too many models to explore. To get around the problem of too many models and too little time, de Marchi has a novel solution: use computational methods to explore the space of possible models. If we're using specific functional forms anyway, we might as well simulate them and not bother with formal proofs. Simulation is quicker. By simulating within feature space, we can distinguish robust findings from brittle examples. This approach requires combining art and science. We must constrain the feature space so that we're merely stunned and not cursed by the dimensionality.
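A feature space of this kind is straightforward to enumerate mechanically. A minimal sketch, using the kinds of dimensions raised in the first-strike questions above (the specific dimensions and labels are my illustrative assumptions, not de Marchi's):

```python
from itertools import product

# Hypothetical feature space for a model of first strikes:
# each dimension is an assumption the modeler must choose.
feature_space = {
    "repetition":  ["one-shot", "repeated"],
    "moves":       ["sequential", "simultaneous"],
    "information": ["symmetric", "asymmetric"],
    "risk":        ["neutral", "averse"],
}

def enumerate_models(space):
    """Yield every model in the feature space as a dict of assumptions."""
    names = list(space)
    for combo in product(*(space[name] for name in names)):
        yield dict(zip(names, combo))

models = list(enumerate_models(feature_space))
print(len(models))  # 16 model variants to simulate and compare
```

Simulating each of the sixteen variants and checking whether a finding survives across all of them is one concrete way to distinguish robust results from brittle examples; the art lies in constraining the dimensions so the product stays tractable.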
Third, de Marchi wants our models to be more realistic. (Who doesn't?) But, how do we achieve realism and yet maintain a limited feature space that we explore in depth? Can we be realistic and remain within or at least comfortably near Chris Achen's three-variable world? de Marchi believes that we can, provided that we start simple and build up toward realism. Thus, we have complicated models as the sum of lots of simple models, all of which we understand fully as a result of exploring their feature spaces. As an example of a realistic model, he goes outside of social science and looks at machine chess programs. These programs don't just apply to chess in some metaphorical sense, as in "the Colonel Blotto game captures the essence of chess." They actually play chess and play better than people do. Having a model that plays chess produces a further advantage: the modeler can use real data from games.
Let's suppose we take a vote between continuing with the status quo and accepting de Marchi's vision of the future. The status quo consists of unrealistic, narrow models that we test using all of our data with substantial freedom over what control variables we include. de Marchi's alternative consists of cumulative, realistic models (along with nearby models to make sure that our theory is robust) that are calibrated on training sets and tested on separate data. The vote would be Roosevelt-Landon 1936 all over again. Apart from some holdouts in Vermont and Maine, de Marchi would win everywhere. However, in this election, we don't just pull levers. We have to vote with our heads, which can be thick and slow to respond.
The path de Marchi would like us to take requires nontrivial changes in how we build models and how we test them. Sure, we can learn to split our data sets in two. But will we learn Perl? Will we take the time to construct a feature space? And what if that feature space reveals brittleness? Will we bail out and write a paper with quasi-linear preferences or with a one-dimensional preference space? Not only does he require that we learn new tools, he's asking us to change our standards. Rather than bestow awards on books that consist of (a) a captivating anecdote from history, (b) a narrow model with a specific functional form that provides the key intuition, (c) an empirical test with ten control variables and one interaction term that demonstrates the validity of the key intuition, and (d) a rich case study that fills in all the gaps, we might see these books as cursed by problems of dimensionality. With so much history, so many models, and so many variables to choose from, these books should be as easy to make as the Chicken Marbella from The Silver Palate.
This critique of the status quo may get under the skin of some readers. Sure, your average PhD student can choose from among thousands of theoretical models and econometric specifications, but finding two that align, where the econometrics support what the theory predicts, is not as easy as he makes it sound. If it were, we'd have many more papers that met this standard, and we wouldn't have NSF-sponsored summer courses, taught by (among others) Scott de Marchi, showing students how to integrate these methods. Furthermore, the models in these award-winning books aren't all that brittle. They do meet qualitative robustness criteria. Most theorists and econometricians can sniff out rigged models. We can tell a universal insight from a unicorn. When we see a model with quasi-linear preferences or the monotone likelihood ratio property, we know the rabbit has been placed in the hat, and we take the author to task accordingly.
Given that we're all aware of the curse, and we're qualitatively mindful of it when evaluating research, de Marchi's claims seem less provocative and, at the same time, more reasonable. He's advocating that we supplement our reasoned judgment with a scientific approach based on feature spaces and computational models. Any time we can replace subjective criteria with more objective, scientific ones, we move science forward, and that is exactly what this book urges us to do.
Some critics may complain that this book explains how to do it but doesn't actually do it. True, the book would be stronger if it took us on a complete tour of the shiny new city on the hill that it constructs. A short chapter on how de Marchi built a model (and critics may say an unrealistic one - ouch!) along with a smidgen of Perl code won't sway the masses. The book would be more convincing if it had a six-hundred-page companion volume that took on a puzzle, defined and explored a feature space, tested the robust conclusions out of...

Mathematical models in the social sciences have become increasingly sophisticated and widespread in the last decade. This period has also seen many critiques, most lamenting the sacrifices incurred in pursuit of mathematical perfection. If, as critics argue, our ability to understand the world has not improved during the mathematization of the social sciences, we might want to adopt a different paradigm. This book examines the three main fields of mathematical modeling--game theory, statistics, and computational methods--and proposes a new framework for modeling.
