Total Drek

Or, the thoughts of several frustrated intellectuals on Sociology, Gaming, Science, Politics, Science Fiction, Religion, and whatever the hell else strikes their fancy. There is absolutely no reason why you should read this blog. None. Seriously. Go hit your back button. It's up in the upper left-hand corner of your browser... it says "Back." Don't say we didn't warn you.

Monday, July 30, 2007

A new wrinkle to peer review...

Last week I asked whether my readers could successfully distinguish between published work that is purported to be legitimate, and an intentional hoax. Few people made the attempt publicly, though several made guesses to me in person. As it happens, opinions were fairly evenly split between the two passages- one of which was legitimate, one of which was not. Of all those who made the attempt, however, only S.S. Stone correctly identified one of the passages as belonging to the so-called "Sokal Hoax." This leads me to conclude that my audience, at least, finds it as difficult to separate "legitimate" work of this type from illegitimate. This is not a reassuring conclusion.

All that said, I am reminded of something else at this point- the philosophy of Imre Lakatos. I've discussed Lakatos before but, for those who are too lazy to go read the previous post, the essence of my interest in him is that he develops the Popperian idea of falsification, which is often considered to be a defining trait of science. Lakatos takes Popper's initial notion and observes that often research programs are pursued even though they may appear to have been falsified- more importantly, sometimes these "falsified" programs achieve dominance and replace their rivals. As a consequence, Lakatos argues that we should think of research programs as either producing new and useful insights, or as having become "degenerate." A degenerate program, of course, continues to change and evolve as all research programs do, but it does so in a manner that produces no new insights or predictions. In essence, it becomes like the theory of epicycles- each change is meant only to squash bugs in the system, but doesn't tell us anything new about the universe. This is all well and good, but the problem is, how do we tell when a program has become degenerate? Its supporters are liable to see each change as deeply informative, while outsiders are unlikely to be regarded as qualified to comment. So, unlike falsification, a Lakatosian view seems to be of limited use.

And at this point, I'm reminded of yet something else. In the search for true artificial intelligence there is a major problem: how can we know that a manufactured mind is, indeed, "intelligent"? That is to say: how do we know that it has become sentient and sapient? Think about this problem for a few moments- we see other people and assume that they are sentient and sapient largely because they look and act like we do. Since we believe ourselves to be sentient and sapient* it stands to reason that other humans, by extension, are the same. What happens when we ask the same question about a box the size of a suitcase that communicates via a monitor, however? Will we as readily ascribe the qualities of sentience and sapience to it, or will we be likely to continue to deny it those labels simply because it does not look like us? Given our species' unfortunate history of racism, I think we all know the answer.** So, as a way of producing an answer to this question, Alan Turing proposed the so-called "Turing Test." Described simply, the Turing test works as follows: human participants interact via computer terminals with both several humans and one or more computers that are candidates for artificial intelligence. These participants then rate each interactant as being either human or a machine. A machine that can consistently trick its raters into concluding that it is human is then assumed to be sentient and sapient. The logic, of course, is that if an observer without prior knowledge of which is which cannot distinguish them from each other, and one of the entities is believed to be sentient and sapient, then we must assume that the other is as well. Obviously, the test would need to be somewhat more challenging than what I have described before we extend civil rights to an intel-based machine, but the logic remains essentially the same.
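For the quantitatively inclined, the pass criterion described above can be put in a few lines of code. This is purely my own toy sketch- the numbers, the tolerance, and the function names are all arbitrary illustrative choices, not anything from Turing:

```python
def judged_human_rate(judgments):
    """Fraction of interactions in which raters judged the entity human (1 = human)."""
    return sum(judgments) / len(judgments)

def passes_turing_test(machine_judgments, human_judgments, tolerance=0.10):
    """Toy pass criterion: the machine 'passes' if raters call it human at a
    rate within `tolerance` of the rate for real human interactants.
    The tolerance value is an arbitrary illustrative choice."""
    return abs(judged_human_rate(machine_judgments)
               - judged_human_rate(human_judgments)) <= tolerance

# Raters judged the real humans "human" 90% of the time, the machine 85%.
humans = [1] * 9 + [0]
machine = [1] * 17 + [0] * 3
print(passes_turing_test(machine, humans))  # → True
```

A real test would of course need many raters, many sessions, and a proper statistical comparison rather than a fixed tolerance, but the shape of the inference is the same.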

So what does this have to do with anything? Simple: I wonder if we couldn't extend the logic of the Turing test to identifying degenerate research programs. We present competent researchers in a given field with a series of articles: some legitimate, some written so as to use big words but say next to nothing. To the extent that those researchers cannot tell the difference, we have identified a research program that is collapsing into bitter self-obsessed irrelevance.
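To sketch how you might actually score such an exercise (again, entirely my own back-of-the-envelope framing, with made-up numbers): give the experts n articles to classify as legitimate or hoax, and ask whether their accuracy beats chance guessing under a one-sided binomial test. If it doesn't, the "Turing test for degeneracy" has been failed:

```python
from math import comb

def binomial_p_value(correct, n, p=0.5):
    """Exact one-sided P(X >= correct) if raters were guessing at random."""
    return sum(comb(n, k) for k in range(correct, n + 1)) * p ** n

def looks_degenerate(correct, n, alpha=0.05):
    """Flag a field as possibly degenerate if experts cannot classify real
    vs. hoax articles better than chance. The alpha threshold and the
    framing are my own illustrative choices."""
    return binomial_p_value(correct, n) >= alpha

print(looks_degenerate(11, 20))  # 11/20 correct: indistinguishable from chance → True
print(looks_degenerate(19, 20))  # 19/20 correct: clearly distinguishable → False
```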

Would this be a perfect system? Hell no- as proposed, I doubt it's even workable. But I'll be damned if I don't think the concept is pretty interesting.

* And it'll really put you through the wringer for a moment or three if you challenge René Descartes' assertion that cogito ergo sum.

** As a side note, I think the time when humans first develop self-aware AI will be one of the most critical for us ethically. How we treat an essentially manufactured slave-race of beings will tell us a great deal about ourselves.



Anonymous Anonymous said...

Alternately, some enterprising individual needs to develop a software program akin to magnetic poetry, but geared toward their bitterly self-obsessed and irrelevant chosen field. They could use it to compose articles for top-tier journals. Genius!

Monday, July 30, 2007 5:51:00 PM  
Blogger SARA said...

This was quite interesting but took me a while to absorb it...

so now, what's my prize? ;)

Tuesday, July 31, 2007 12:00:00 AM  
Anonymous Anonymous said...

If I produce text by hammering my keyboard at random, you can't tell the difference between that and a monkey doing it. Similarly, if a sociologist produces incomprehensible jargon, you can't distinguish it from a computer doing the same. The problem isn't with your readers; the problem lies with sociology.

Tuesday, July 31, 2007 3:05:00 AM  
Blogger Drek said...

Anomie: I don't think it's genius, but it's an interesting idea to kick around. I don't think it'd be worth trying, though, until you had tenure.

S.S. Stone: The prize I traditionally offer is to let you request a post from me on a topic of your choosing. This is, of course, within limits. Does that suit you?

Anonymous: That's more or less my exact point, though I wouldn't tar the entire discipline of sociology with that brush. Besides, the passages in question came from philosophy publications, so bugger off and harass them.

Tuesday, July 31, 2007 10:18:00 AM  
Blogger SARA said...

The prize I traditionally offer is to let you request a post from me on a topic of your choosing. This is, of course, within limits. Does that suit you?

yes, that suits me fine....what are my limits?

Wednesday, August 01, 2007 6:56:00 PM  
Blogger Drek said...

S.S. Stone: Oh, not really many limits at all. Basically just that I may interpret your request in a somewhat unusual way and that I won't honor any requests that would give away my identity or pose a strong risk of doing so. Otherwise, fire away!

Wednesday, August 01, 2007 9:04:00 PM  
