The Measure of a Man
To begin, let's consider information theory. This is, as you might guess, an interdisciplinary area studying signals and communication. It is generally considered to have been founded by Claude Shannon back in 1948 and employs a definition of information that may seem a little counter-intuitive to many of us. Specifically, it regards a text (for example) as having an information content equal to the length of the shortest possible computer program required to reproduce it. (Strictly speaking, this program-length version of the idea is Kolmogorov complexity, a later outgrowth of Shannon's theory, but the intuition is the same.) So, the easier the text is for a computer to generate, the less information that text carries.

To make this a little more explicit, let's consider a pair of examples. The first example is a string of numbers like this: "5696585638565615275755151850760772515." I produced this string more or less at random by slapping keys on my keyboard. This of course means that it is not truly random- given my hand placement the middle keys were struck rather a lot- but it provides a simple example of an otherwise unordered string of numbers.

On the other hand, let's consider a different string of numbers: "1 11 21 1211 111221 312211 13112221." Now, believe it or not, this second string of numbers is not random. Instead, it is derived from a simple rule: after the first entry, each successive entry describes the previous one. So, for example, "11" would be read as "one 1" (describing the lone "1" before it) and "21" would be read as "two 1s," and so on. Now, because the second string of numbers is the deterministic result of a simple rule, a computer program needs only that rule in order to accurately reproduce this string and, indeed, extend it to any arbitrary length we should require. In a sense the only information in the second string is carried by the first number, which determines the remainder of the sequence- every other number is simply a logical consequence of the first number and the rule. In contrast, the first string is effectively random and no number can be used to deduce the next in the sequence.
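The run-length ("look-and-say") version of this describe-the-previous rule is short enough to sketch in a few lines of Python- the rule really is the whole program:

```python
from itertools import groupby

def look_and_say(seed="1", n=7):
    """Return n terms; each term reads off the runs of digits in the previous one.

    E.g. "1211" contains one 1, one 2, then two 1s, so the next term is "111221".
    """
    terms = [seed]
    for _ in range(n - 1):
        prev = terms[-1]
        terms.append("".join(str(len(list(run))) + digit
                             for digit, run in groupby(prev)))
    return terms

print(look_and_say())
# → ['1', '11', '21', '1211', '111221', '312211', '13112221']
```

Note that nothing beyond the seed "1" and the rule itself is stored anywhere- the rest of the sequence is regenerated on demand, to any length you like.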
Thus, to reproduce the first, random string in a computer program you would effectively have to record the sequence in full and read it back out of memory. According to this measure, then, the first sequence contains more "information" because it requires more computer space and effort to produce, while the second sequence contains less.
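You can see this difference empirically. The true shortest-program length is uncomputable, but an off-the-shelf compressor like zlib gives a crude upper bound on it, and the two kinds of string come apart clearly. A minimal sketch (the 2000-character length and the random seed are arbitrary choices for the demonstration):

```python
import random
import zlib
from itertools import groupby

# A long "keyboard-mashing" style string: digits with no generating rule.
random.seed(0)
random_digits = "".join(random.choice("0123456789") for _ in range(2000))

# A string of the same length produced by the describe-the-previous rule:
# each term reads off the run-lengths of the term before it.
term, structured = "1", ""
while len(structured) < 2000:
    structured += term
    term = "".join(str(len(list(run))) + d for d, run in groupby(term))
structured = structured[:2000]

# Compressed size is a crude, computable stand-in for "the length of the
# shortest program that reproduces the string."
print(len(zlib.compress(random_digits.encode())))  # larger: no pattern to exploit
print(len(zlib.compress(structured.encode())))     # smaller: rule-governed and repetitive
```

The rule-governed string compresses to a fraction of the size of the random one, which is exactly the ordering the theory predicts.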
This may seem highly counter-intuitive because most of us are used to employing the word "information" to refer to "meaningful content." Indeed, the random hash that we hear on an untuned radio station would likely be dismissed as containing no information by most people despite the fact that- like the random series of numbers- it would be more difficult to reproduce than the content of a properly tuned broadcast. In other words, in this technical sense, static contains more information than meaningful communication. The explanation for this, however, is that the meaningfulness of a signal is not necessarily inherent in that signal.

Consider, for a moment, a police drama in which an agreement is made that when one character coughs twice the rest of the police squad will storm a building. Is there any way that those two coughs could be analyzed and dissected so as to reveal an unequivocal command to storm the building? Obviously not- in this case the meaningfulness is derived from properties of the sender (i.e. the cougher) and the receiver (i.e. the rest of the police) but is not otherwise inherent. Likewise, the meaningfulness of a text written in a language we cannot read is exceedingly low. Certainly we surmise that it must mean something, but we are incapable of distinguishing a meaningful sentence written in, say, the Cyrillic alphabet from a random hash of letters unless we can actually read that alphabet and the relevant language.* As a result of all this, we cannot easily quantify the meaningfulness of a message, but we can quantify its information in Shannon's sense. It is this distinction between meaning and information that, as a side note, proves so useful to Perakh in slapping the shit out of Dembski.
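Shannon's original, probabilistic measure really can be computed directly, which is the point of the contrast with meaning. A minimal sketch of first-order entropy (treating each character as independent, which ignores longer-range structure):

```python
from collections import Counter
from math import log2

def entropy_bits_per_symbol(text):
    """First-order Shannon entropy: the average number of bits per symbol
    implied by the relative frequencies of symbols in the text."""
    counts = Counter(text)
    total = len(text)
    return sum((c / total) * log2(total / c) for c in counts.values())

print(entropy_bits_per_symbol("aaaaaaaa"))  # → 0.0 (perfectly predictable)
print(entropy_bits_per_symbol("abababab"))  # → 1.0 (one bit per symbol)
```

Note what this measures: statistical surprise, not meaning. A page of static scores at least as high as a page of prose, and the two coughs from the police drama score almost nothing.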
Now, setting information theory aside for a moment, let's talk about Alan Turing. The father of computer science, Turing developed a method for determining when a computer should be regarded as intelligent or sentient, a test that has become known as the Turing test. I've discussed his ideas before, but the basic logic of the test is simple: place a human in a room and allow them to correspond with several entities. Some of those entities are other humans, but others are computers. To the extent that the first human cannot reliably distinguish the humans from the machines, you must consider the machines to be sentient. This may seem a little simplistic but, really, it just mirrors the process we use when talking to other humans. We cannot directly observe each other thinking but, because we speak and act in a manner that implies that we are intelligent and sentient,** we generally assume that other humans are intelligent and sentient. The Turing test simply makes it possible for a machine- an artificial construct- to be given the same benefit of the doubt that we normally extend to other hominids.
So how does all this come together? Well, here's the thing: let's say that we managed to put together an artificial intelligence that could replicate, but not exceed, the mental capabilities of the average human. It passes, with flying colors, the most stringent Turing test or derivative examination we can construct. That A.I. would, presumably, be defined at least in part by a series of software commands.*** Interestingly enough, we can precisely measure and quantify the length and complexity of a computer program. Put more simply, we could calculate the amount of "information" contained in that program in the sense discussed above. And if this computer program is capable of mimicking a human- if it must be regarded as intelligent and sentient- then we have a way of measuring the "information" content of a single human individual.
And that, when you get right down to it, is pretty f-ing cool.
* Actually, information theory would provide a way to distinguish nonsense from a message but, sadly, it would not get us any closer to deciphering said message.
** The folks on Conservapedia being a notable exception.
*** For those not used to thinking about issues like this, I'm engaging in a lot of handwaving here. A functional A.I. would probably not rely on software as usually conceived any more than human thought depends on a hand calculator.