Total Drek

Or, the thoughts of several frustrated intellectuals on Sociology, Gaming, Science, Politics, Science Fiction, Religion, and whatever the hell else strikes their fancy. There is absolutely no reason why you should read this blog. None. Seriously. Go hit your back button. It's up in the upper left-hand corner of your browser... it says "Back." Don't say we didn't warn you.

Thursday, July 08, 2004

"Baby, I know how to satisfice you."

Yeah, that just doesn't have quite the same ring to it, does it? Although from what my female friends tell me, we men would probably be more honest if we DID use that line.

And if you're amazed that I have female friends... well, you probably should be.

I bring up satisficing- settling for a solution that's merely good enough rather than optimal- today for a reason. It's going to take me a while to get around to it, though, so please be patient with my rambling.

Wait, shit, what am I saying? This is a blog for crying out loud! It's supposed to be incredibly self-absorbed and tortuous! Well, hell, never mind then.

As a part of the endless Bataan-Death-March-of-Fun that constitutes graduate school, I've recently been trying to replicate previous research studies. It seems like this would be an easy task. I mean, the authors have essentially provided a cookbook in the form of their articles, right? All you have to do is follow their instructions and, whammo, you're there! Yeah, and you know what else? If I get a good running start, I can jump through my own ass BACKWARDS!!! No lie, all the way through!

No, in fact, replicating previous research is NOT easy. As it happens, this is some of the toughest shit I've ever done in my profession. That's saying something, too, considering that when I worked in the private sector I once had a two-week argument, via fax machine, about a goddamn chi-squared test. Well, that wasn't really "tough" so much as "soul-suckingly retarded," but that really isn't the point.

It isn't that the articles themselves are too complicated- while I have to admit to being somewhat unfamiliar with some of the techniques, for the most part they've been pretty straightforward. No, the problem is that each author, in each article, does quite a few things that never get reported. One author, for instance, simply forgot to describe how she constructed half of the variables in her regression analyses. Another author failed to mention that, in order to make sure two different models had the same n, he was including some cases that his own stated criteria would have rejected. Now, certainly, we can't include every detail of coding in every article- we'd all like more than one paper per issue of ASR, after all. I just want you all to understand how difficult something like this can be, and that I'm currently trapped in replication hell, from which I have little hope of escape.

This all brings me to the article that I've been working on in the last few days. I won't mention its name, or its author, or the journal it was published in, save to say that it was in one of the top three general interest journals in Sociology. For the uninitiated (as if any non-sociologists are still reading this crap), those are the "American Journal of Sociology," "Social Forces," and "American Sociological Review." What I will tell you is what I've discovered during my magical time with this scholarly work. My discovery is this: Holy SHIT is it ever wrong!

This isn't a theoretical critique, nor is it an argument against the methods used in the article. What I'm saying is it's flat out, absolutely incorrect in numerous places. I'm saying that, with the number of statistical errors in this paper, I'm starting to wonder if 2+2 really DOES equal 7. To try to give you an idea of the scale we're talking about, let me provide a few examples.

(1) Of the 22 models included in this paper, 10 of them are missing about 7% of their degrees of freedom for no discernible reason. This may not seem like a big deal, but given the type of hypothesis testing involved, it is. (I'll sketch the arithmetic right after this list.)

(2) The fit statistic for one of these models is incorrect. I mean just flat out wrong- as in, if you performed the statistical test properly, there is no conceivable way you could get this result.

(3) Constructing variables using the author's own criteria does NOT yield the variables the author actually reports using. I've checked on this with the author, who agrees with me on this point and is basically at a loss to explain what the fuck he was thinking when he wrote this paper. So far, no reasonable alternative construction has yielded the author's stated outcomes. For all intents and purposes, it's like the ghost of Emile Durkheim possessed the hapless author and decided to fuck with us for shits and giggles. "Why Emile Durkheim," you ask?

Jesus! THAT'S what you wanna know? Why Emile Durkheim?! After this mountain of rambling, that's what you have a question about? All I can say is... wow.

(4) And my personal favorite: the journal editors (I can only assume, if it was the author, I'm gonna be forced to start doing vodka shots until I pass out) accidentally replaced the first model from one table with the first model from an earlier table. This would seem to be a minor mistake, except that the earlier table had something like ten times the degrees of freedom, thus making the error obvious to anyone who paid the least bit of attention.
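
As promised, here's the sketch of the degrees-of-freedom arithmetic. To be clear, this is my own toy example in Python with made-up numbers (since I'm not naming the paper, I'm obviously not using its actual figures): it just shows how the same chi-squared test statistic can go from "not significant" to "significant" when a model quietly loses about 7% of its degrees of freedom.

    # Hypothetical numbers only: one chi-squared statistic evaluated against
    # the df a model should have had versus the df left after ~7% go missing.
    from scipy.stats import chi2

    stat = 120.0        # made-up test statistic
    df_correct = 100    # df the model should have had
    df_reported = 93    # df after silently losing about 7%

    for df in (df_correct, df_reported):
        p = chi2.sf(stat, df)  # upper-tail p-value of the chi-squared test
        verdict = "reject the null" if p < 0.05 else "fail to reject the null"
        print(f"df = {df:3d}   p = {p:.3f}   {verdict}")

    # With df = 100 the p-value comes out around 0.08 (not significant at 0.05);
    # with df = 93 it drops to around 0.03 (significant). Same statistic, opposite call.

That's the whole problem in miniature: shave a few degrees of freedom off for "no discernible reason" and results start looking stronger than they have any right to.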

I could go on, but it would be pointless. I don't provide the above list (which is representative, not comprehensive) to bust the author's balls, or to scold the journal. I'm not exactly in a position to scold a journal just now, as I'm in the "please-publish-me-so-I-can-get-hired" phase of my grad career. I mention it because the above is an object lesson in the satisficing that goes on in science.

We use the peer-review process as a mechanism to weed out crappy articles. Fair enough- so far, peer review, for all its faults, has done quite a lot to preserve the quality of science. Please note that I'm ignoring post-modernism as I say this in the hopes that it will go away. What actually happens, though, in peer review?

Well, two or more anonymous strangers get an article from another anonymous stranger, read it carefully, comment on any faults, and then make a recommendation to the editor, right? Um. I'm thinking no. Not exactly. In truth, two or more anonymous strangers get a paper, read it in a hurried, cursory fashion (Since god knows we're all busier than hell), dash off an incomplete, semi-coherent review (Although this may only reflect the reviews I get. Honestly, do you people even have a passing familiarity with punctuation marks?), and leave the editor to try to decipher a set of cryptic recommendations. This gets even worse when we consider that, with the ongoing fracturing of the discipline (Sociology of Animals? Astrosociology?), there's an ever-shrinking pool of people who are even remotely qualified to review some of our work. "Yes, I do ethnomethodological studies of concrete sealers using non-linear Boolean algebra. What? No, the concrete sealers use the algebra, I just watch."

So, what probably happens all-too-often (In case you hadn't noticed, today I'm in love with hyphens) is that the reviewers skim the article and send back a review, without ever really spending time with the mechanics of the thing. This is what I mean by "satisficing." We don't get the detailed examination we need to ensure the quality of the articles, but we do, at least, get some sort of gatekeeping mechanism. It isn't an ideal solution, but I suppose it is economical.

Ironically, the anonymity that helps preserve our professionalism most likely adds to the problem. As we saw in the unfortunate case of Kitty Genovese, humans appear to be subject to a sort of "bystander effect." If there's an action that needs to be taken, but no one individual is personally responsible for taking it, it's easy to just assume that someone else will deal with it. Call it a collective action problem, call it diffusion of responsibility, call it apathy, call it shirking, call it Lloyd, call it whatever you like- we all do it. Hell, I've done it myself when reviewing an article. Not wanting to take the time to replicate every detail of the findings, I remember thinking, "Surely one of the other reviewers will do that part?" I should have known better. Maybe one of the other reviewers DID go that extra mile, but I will always have a niggling doubt in the back of my mind about it.

It really gives one pause to wonder how many of the articles that we read every day are, in fact, totally nonsensical. (Much like most of my posts, come to think of it) I'm not, after all, naive enough to think this is an isolated incident, or that it won't ever happen again. Yet just because something is inevitable doesn't mean we shouldn't be vigilant for it anyway. Alpha-error is inevitable some percentage of the time, but that doesn't mean we just give up on the whole concept of statistical significance. Once more, I'm ignoring post-modernism and hoping it'll go away.
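
If you want to see that alpha-error point in action, here's a quick back-of-the-envelope simulation in Python (my own sketch, nothing to do with the paper in question): run a pile of significance tests on pure noise, where the null hypothesis is true every single time, and roughly alpha of them come back "significant" anyway.

    import numpy as np
    from scipy.stats import ttest_ind

    rng = np.random.default_rng(0)
    alpha = 0.05
    n_tests = 10_000
    false_positives = 0

    for _ in range(n_tests):
        # Two samples drawn from the very same distribution: any "effect" is pure noise.
        a = rng.normal(size=50)
        b = rng.normal(size=50)
        if ttest_ind(a, b).pvalue < alpha:
            false_positives += 1

    # The false positive rate should land somewhere close to alpha, i.e. around 0.05.
    print(f"False positive rate: {false_positives / n_tests:.3f}")

A known, tolerable error rate is just the cost of doing business- it's not a reason to give up on significance testing, and it's certainly not a reason to skip the vigilance.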

Is there anything to be done? Well, first and foremost, as sociologists it might not be a bad idea if we each, once a year, tried to replicate an article. Preferably we should do this with an article that hasn't been published yet, so as to improve the quality of our output, rather than generate more apologies for mistakes. Certainly this won't allow us to check EVERY article, but it will give us a good sample of what's actually going on. If nothing else, it can't hurt our reliability, and we can all stand the intellectual exercise such a task entails.

Secondly, we might want to take our duties as reviewers more seriously. Sure, it distracts us from our work. Sure, it doesn't do anything to get us promotions or tenure. It is, however, one of our responsibilities as scientists. Next time you're reviewing an article and you're tempted to do a half-assed job, just think of all those articles you've read and thought, "Wow, is this ever CRAP!" That ought to provide you with the motivation you need to do a whole-assed job instead. Take your time, do it right, and help keep up the standards of our discipline.

Finally, and most importantly, don't panic, and don't lose faith in the system. Frankly, I'm comforted overall by this experience. Yeah, an article with glaring errors made it into print, but the fact that the article adhered to scientific standards made it possible to find those errors once someone bothered to look. (As a side note: if, after my ongoing consultation with the author, I come to the conclusion that his original work was right, and I am a yerk-toading chiba-monkey, I'll offer a public apology here.) That's the whole idea behind these standards: to make the work transparent and falsifiable. So long as we're doing that, errors will always be easier to find, and fact will be simpler to identify. And that's what this game is all about.

I guess that's enough to satisfice me.
