I blame Alan Sokal. The trick of showing up various publications by fooling them into publishing documents which seem impressively technical, but which are obviously nonsense to anyone minimally skilled in the field — well, I thought it was hilarious the first time, but inevitably there are imitators, and they never match the spirit of the first effort.
The latest epigone is one Peter D. Salins, a professor of political science at SUNY Stony Brook and former provost of the SUNY system, and his victim is the editorial page of the New York Times. He purports to offer evidence that the SAT score has some power to predict academic outcomes in college — specifically, whether students will graduate or not — over and above its relationship to high school grades:
In the 1990s, several SUNY campuses chose to raise their admissions standards by requiring higher SAT scores, while others opted to keep them unchanged. With respect to high school grades, all SUNY campuses consider applicants' grade-point averages in decisions, but among the total pool of applicants across the state system, those averages have remained fairly consistent over time.
Thus, by comparing graduation rates at SUNY campuses that raised the SAT admissions bar with those that didn't, we have a controlled experiment of sorts that can fairly conclusively tell us whether SAT scores were accurate predictors of whether a student would get a degree. ...
Among the campuses that raised selectivity, the average incoming student's SAT score increased 4.5 percent (at Cortland) to 13.3 percent (Old Westbury), while high school grade-point averages increased only 2.4 percent to 3.7 percent — a gain in grades almost identical to that at campuses that did not raise their SAT cutoff. Yet when we look at the graduation rates of those incoming classes, we find remarkable improvements at the increasingly selective campuses. These ranged from 10 percent (at Stony Brook, where the six-year graduation rate went to 59.2 percent from 53.8 percent) to 95 percent (at Old Westbury, which went to 35.9 percent from 18.4 percent). Most revealingly, graduation rates actually declined at the seven SUNY campuses that did not raise their cutoffs and whose entering students' SAT scores from 1997 to 2001 were stable or rose only modestly. Even at Binghamton, always the most selective of SUNY's research universities, the graduation rate declined by 2.8 percent.
I submit that Salins has Sokaled the Times, since there is no way someone with enough grasp of social-scientific methods to hold his position could make such huge howlers unintentionally.
Item: The question of interest is at the individual level: given otherwise similar students in the same academic environment, does a higher SAT score predict better academic outcomes, i.e., a higher likelihood of graduation? The data presented, however, are at the institutional level. At best they speak to whether more selective colleges have higher graduation rates, averaging over all students. This is compatible with nearly any relationship whatsoever between SAT scores and graduation rates at the individual level. (Likewise: in every state, rich people are more likely to vote for the Republican party, but richer states are less Republican.) Are we to suppose that Salins doesn't understand that there are different levels of aggregation here, that he has never heard of the ecological fallacy or Simpson's paradox?
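To see how little the campus-level comparison constrains the individual-level question, here is a toy illustration of the ecological fallacy, with entirely invented numbers (two hypothetical campuses, nothing to do with the actual SUNY data): within each campus the individual-level association between SAT score and graduating is negative, yet the campus with the higher mean SAT also has the higher graduation rate.

```python
# Toy Simpson's-paradox demonstration: invented data, two hypothetical campuses.
# Within each campus, SAT and graduation are negatively correlated at the
# individual level; across campuses, higher mean SAT goes with a higher
# graduation rate.

def mean(xs):
    return sum(xs) / len(xs)

def corr(xs, ys):
    # Pearson correlation, computed by hand to keep this self-contained.
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# (SAT score, graduated?) pairs; 1 = graduated, 0 = did not.
campus_a = [(900, 1), (1000, 1), (1100, 0), (1200, 0)]   # lower SATs, 50% graduate
campus_b = [(1200, 1), (1300, 1), (1400, 1), (1500, 0)]  # higher SATs, 75% graduate

for name, campus in [("A", campus_a), ("B", campus_b)]:
    sats = [s for s, _ in campus]
    grads = [g for _, g in campus]
    # Individual level: negative within each campus.
    print(name, "within-campus corr:", round(corr(sats, grads), 2))
    # Ecological level: campus B has both the higher mean SAT and the
    # higher graduation rate.
    print(name, "mean SAT:", mean(sats), "grad rate:", mean(grads))
```

So campus-level averages of the kind Salins reports are consistent with SAT scores predicting graduation positively, negatively, or not at all for individual students.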
Item: This was not a controlled experiment. There was neither actual control of variables other than SAT demands, nor effective control via randomization. The campuses which became more SAT-selective differ in many ways from the ones which didn't, and there is no control for that here. Even when he makes paired comparisons, there are huge differences*. Are we to believe that Salins doesn't know what the phrase "controlled experiment" means?
Item: By Salins's own account, many of the campuses which increased their selectivity, as measured by SAT scores, actually saw their graduation rates decline. The cases Salins mentions are Albany (SAT scores up 1.3%, graduation rate down 2.7%), Oswego (+3% and -1.9%, respectively) and Plattsburgh (+1.3% and -6.3%). Salins's resolution of this apparent contradiction is conspicuous by its absence. It's possible that there is some sort of threshold effect, so that (proportional) increases in SAT scores which fall below that threshold (around 3%, perhaps?) have no or even negative effects on graduation rates, but larger gains raise graduation rates, but this is hardly the case Salins says he is making.
Since we cannot believe that someone in Salins's position is actually writing seriously with so many mistakes and internal contradictions, we are forced to reject the idea that his actual meaning is his apparent meaning. It could be that Salins is engaging in esoteric writing (in the sense of Strauss), but it seems simpler to me to suppose that he was bored, and decided to see if he could get the Times to believe that inconclusive noodling is a decisive and boldly contrarian finding.
— For the record, I would actually be a bit surprised if, ceteris paribus, higher SAT scores didn't predict higher likelihood of graduation. (Bad arguments do not become correct because their conclusions are true.) Also for the record, there are sensible ways of doing ecological inference, and of drawing causal inferences from observational data; but what Salins does isn't even close.
(Thanks to Kristina for pointing out the op-ed and discussing it with me.)
*: For instance, he pairs Albany with Stony Brook, because they are both research university campuses. This is true, but they are very different research universities. For one thing, and I say this with all due respect for my colleagues at Albany, Stony Brook has an immensely stronger scholarly reputation, e.g., three Nobel Prize winners on the faculty vs. zero. (Salins may, generously, be trying to reduce Stony Brook's advantage on this score.) For another, Stony Brook is a much nicer place to live. (When I went to Albany last year to give a talk, the campus was plastered with official posters warning students that "Walking alone at night makes you a target". This sort of thing tends to have a discouraging effect on prospective students and their parents.) Now, since we are interested in explaining changes in graduation rates, if these differences between campuses were stable over the period, it'd be harder to see them accounting for that change. (But not impossible; they might modulate how the graduation rate responded to some other factor which did change over the period, e.g., the perceived extra value of attending a higher-prestige school.) But there's no reason to think that the relevant differences were stable.
Posted at November 19, 2008 15:29 | permanent link