A new wrinkle to peer review...
All that said, I am reminded of something else at this point- the philosophy of Imre Lakatos. I've discussed Lakatos before but, for those who are too lazy to go read the previous post, the essence of my interest in him is that he develops the Popperian idea of falsification, often considered a defining trait of science. Lakatos takes Popper's initial notion and observes that research programs are often pursued even though they appear to have been falsified- more importantly, sometimes these "falsified" programs achieve dominance and replace their rivals. As a consequence, Lakatos argues that we should think of research programs as either "progressive"- producing new and useful insights- or "degenerate." A degenerate program, of course, continues to change and evolve as all research programs do, but it does so in a manner that produces no new insights or predictions. In essence, it becomes like the theory of epicycles: each change is meant only to squash bugs in the system, but tells us nothing new about the universe. This is all well and good, but the problem is, how do we tell when a program has become degenerate? Its supporters are liable to see each change as deeply informative, and outsiders are unlikely to be regarded as qualified to comment. So, unlike falsification, a Lakatosian view seems to be of limited use.
And at this point, I'm reminded of yet something else. In the search for true artificial intelligence there is a major problem: how can we know that a manufactured mind is, indeed, "intelligent"? That is to say: how do we know that it has become sentient and sapient? Think about this problem for a few moments- we see other people and assume that they are sentient and sapient largely because they look and act like we do. Since we believe ourselves to be sentient and sapient,* it stands to reason that other humans, by extension, are the same. But what happens when we ask the same question about a box the size of a suitcase that communicates via a monitor? Will we as readily ascribe the qualities of sentience and sapience to it, or will we continue to deny it those labels simply because it does not look like us? Given our species' unfortunate history of racism, I think we all know the answer.** So, as a way of producing an answer to this question, Alan Turing proposed the so-called "Turing test." Described simply, the test works as follows: human participants interact via computer terminals with several humans and with one or more computers that are candidates for artificial intelligence. The participants then rate each interactant as either human or machine. A machine that can consistently trick its raters into concluding that it is human is then assumed to be sentient and sapient. The logic, of course, is that if an observer without prior knowledge of which is which cannot distinguish the two, and one of the entities is believed to be sentient and sapient, then we must assume that the other is as well. Obviously, the test would need to be somewhat more challenging than what I have described before we extend civil rights to an intel-based machine, but the logic remains essentially the same.
So what does this have to do with anything? Simple: I wonder if we couldn't extend the logic of the Turing test to identifying degenerate research programs. We present competent researchers in a given field with a series of articles: some legitimate, some written to deploy the field's jargon while saying next to nothing. To the extent that those researchers cannot tell the difference, we have identified a research program that is collapsing into bitter, self-obsessed irrelevance.
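For the statistically inclined, here's a minimal sketch of how one might score such a test. Everything in it is hypothetical- the judgments are made-up data, and the one-sided exact binomial test is just the crudest plausible way of asking whether the raters beat pure guessing:

```python
from math import comb

def binomial_p_value(correct: int, total: int, chance: float = 0.5) -> float:
    """One-sided exact binomial test: the probability of getting at
    least `correct` right out of `total` if every call were a guess."""
    return sum(comb(total, k) * chance**k * (1 - chance)**(total - k)
               for k in range(correct, total + 1))

# Hypothetical data: each pair is (rater's verdict, ground truth),
# where True means "legitimate article" and False means "jargon-laden filler".
judgments = [
    (True, True), (False, False), (True, False), (False, True),
    (True, True), (True, False), (False, False), (True, True),
]

correct = sum(verdict == truth for verdict, truth in judgments)
total = len(judgments)
p = binomial_p_value(correct, total)

print(f"{correct}/{total} correct; p = {p:.3f} under pure guessing")
# A large p means the field's own experts can't beat a coin flip-
# by the Turing-test logic above, a warning sign of degeneracy.
```

A real version would obviously need many more raters, many more articles, and a less naive statistical model, but the underlying question stays the same: can insiders reliably tell their field's scholarship from its noise?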
Would this be a perfect system? Hell no- as proposed, I doubt it's even workable. But I'll be damned if I don't think the concept is pretty interesting.
* And it'll really put you through the wringer for a moment or three if you challenge René Descartes' assertion that cogito ergo sum.
** As a side note, I think the time when humans first develop self-aware AI will be one of the most critical for us ethically. How we treat an essentially manufactured slave-race of beings will tell us a great deal about ourselves.