Longtime readers* know that I keep an eye on the whole intelligent design fiasco. There are a number of reasons for this but, given that I'm not interested in discussing them at length, let's just say it's because I think Wild Bill Dembski is a sexy bitch. Mmmmmmmm.... fivehead!**
In any case, given my interest in ID I have not only developed a comprehension of the arguments of the "theory" but have also come to understand a fair amount about their rhetorical strategies. These are, of course, distinct from the arguments themselves: the way you talk about a set of ideas can have a fairly large impact on how those ideas are received.*** As a consequence, no matter how brilliant your ideas, you must develop at least a rudimentary command of rhetoric if you are to be heard. What makes rhetoric so interesting, however, is that it can be used to get an idea of what's going on in someone's head. Partly, this is because when a person advances an argument for their own ideas, there's a natural tendency to try to advance defenses against anticipated criticisms. Whether these defenses are successful or not, they end up cluing the audience in as to where possible deficiencies may lie.****
With this in mind I have long been perplexed by one particular obsession of the ID crowd: front-loading. Now, if you haven't heard about this before, front-loading is the idea that the DNA sequences for all organisms on Earth were somehow loaded into the very first organisms by that mysterious intelligent designer. Thus, while it may appear that life has evolved over millions upon millions of years, in fact each new species has been a deterministic result of the DNA "program" executing itself in the previous species. In a sense, speciation events are pre-planned. When confronted with the point that the DNA of early bacteria and such do not appear to contain all the sequences in modern higher critters, proponents of this idea are quick to observe that the earlier DNA represents a program for producing the next stage, not necessarily a copy of that stage. This is, of course, very similar to my discussion of information theory a while back: sometimes you can have a program to write a particular line even if the line is not contained within the program.
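To make that concrete, here's a toy illustration in Python (my construction, not from that earlier post): a short program whose output is longer than, and nowhere contained in, its own source text.

```python
# A short "program" text and the output it specifies: the squares of
# 1 through 20, concatenated. The source fully determines the output,
# yet the output string appears nowhere inside the source.
program = "''.join(str(n * n) for n in range(1, 21))"
output = ''.join(str(n * n) for n in range(1, 21))

print(len(program), len(output))  # the output is longer than the program
print(output in program)          # False: the line isn't literally "in" the program
```

The point survives the silliness of the example: a generating rule can be much shorter than what it generates.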
The thing is, why all the concern with front-loading? I mean, really, it's one of the most scientifically problematic arguments advanced by the ID folks. Aside from being almost entirely unverifiable, there are huge potential issues with it: mutation, DNA transcription errors, etc. I get that it enables one to mesh intelligent design with an old Earth, thus saving some ID proponents from having to travel the way of madness to its conclusion, but it still seems so dreadfully unnecessary. So what is the deal?
Well, as it happens, thanks to Mark Perakh, I finally understand the reason. To get it yourself, you first need to understand the idea of "irreducible complexity." IC is the idea that some complex systems serve a functional purpose which they would be unable to continue serving were even one part removed from them. While Michael Behe, the originator of the concept, likes to invoke an analogy to a mousetrap, I actually prefer the analogy used by David Berlinski: a chair serves a valuable purpose as a place to sit, but a chair with one leg removed no longer serves this purpose. It is, in effect, irreducibly complex. A system that is irreducibly complex, it is argued, cannot be the result of natural processes and so must be the result of design. And, it is said, since a number of biological structures are horrendously complex, they must have been designed.
Now, Perakh has observed that there's a major problem with this argument. The issue is that- despite the inclusion of the term "complex"- the major way that Behe and his ilk are identifying design is by function. Analogy to the contrary, a chair is not a particularly complex structure but it serves a specific purpose and does so well. Therefore, we conclude that a chair may be a product of design. Perakh thus proposes that design is more likely to be the case in an instance of simplicity and functionality than it is in a case of complexity and functionality. And the reason, to summarize his argument as succinctly as possible, is that while there is only one simplest way, there are a wide variety of complex, convoluted ways. Thus, a probabilistic process is much more likely to find one of the many complex ways than the one simple way.
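Perakh's point can be caricatured with a quick simulation (a toy of my own devising, not his): give a blind, probabilistic process the job of getting from 0 to 10 by single steps. The one simple solution is ten +1 steps; random search essentially never finds it.

```python
import random

random.seed(42)  # arbitrary seed, just so the run is reproducible
TARGET = 10

def random_solution():
    """Blindly take +1/-1 steps (biased upward so we eventually finish)
    until we happen to land on the target."""
    value, steps = 0, 0
    while value != TARGET:
        value += random.choice([1, 1, -1])
        steps += 1
    return steps

lengths = [random_solution() for _ in range(500)]
print(min(lengths))                 # can never beat 10, the one simple solution
print(sum(lengths) / len(lengths))  # typically far more than 10 steps
```

Every random run works- it does reach 10- but the average solution is a long, meandering mess. Functionality plus convolution is what blind search hands you; functionality plus brevity is the signature of planning.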
A crude analogy can be drawn, for all the quantoids out there, to writing code in Stata. Let's say that a first-year grad student, just barely familiar with Stata, is given a task: define a variable that contains the sum of three other variables. Let's call the first variable "output" and the other variables "input1", "input2" and "input3." How might our rookie grad student handle this problem?
Well, if they're not thinking particularly clearly they might try:
gen output=input1+input2+input3
The problem here, however, is that if any of those three input variables are missing***** then the value of output will also be set to missing. As it turns out, this is not acceptable. What to do?
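Python shows the same failure mode if you let IEEE NaN stand in for Stata's missing value (my analogy- the semantics aren't identical, but the arithmetic behaves the same way): one missing input poisons the whole sum.

```python
import math

nan = float('nan')  # stand-in for Stata's missing value "."

input1, input2, input3 = 4.0, nan, 3.0
output = input1 + input2 + input3  # the naive one-line sum

print(output)              # nan
print(math.isnan(output))  # True: one missing input made the total missing
```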
Eventually, they would no doubt think to try something a little more complex:******
gen output=.
replace output=input1+input2+input3 if (input1~=. & input2~=. & input3~=.)
replace output=input1+input2 if (input1~=. & input2~=. & input3==.)
replace output=input2+input3 if (input1==. & input2~=. & input3~=.)
replace output=input1+input3 if (input1~=. & input2==. & input3~=.)
replace output=input1 if (input1~=. & input2==. & input3==.)
replace output=input2 if (input1==. & input2~=. & input3==.)
replace output=input3 if (input1==. & input2==. & input3~=.)
This code would work, but it's a horrible pain in the ass to write- especially if you need to sum up, say, nine variables instead of three. Now, the thing is, the above code serves a purpose, is arguably complex, and is irreducible (i.e. removing any line will screw up the function). Therefore, it fits (weakly) Behe's definition of "irreducible complexity."
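For readers without Stata handy, the same kludge transliterates into Python roughly like this (NaN again standing in for missing; this is my rendering, not code from the post):

```python
import math

def kludged_sum(a, b, c):
    """Case-by-case row sum, mirroring the line-per-combination Stata kludge."""
    miss = math.isnan
    if not miss(a) and not miss(b) and not miss(c):
        return a + b + c
    if not miss(a) and not miss(b) and miss(c):
        return a + b
    if miss(a) and not miss(b) and not miss(c):
        return b + c
    if not miss(a) and miss(b) and not miss(c):
        return a + c
    if not miss(a) and miss(b) and miss(c):
        return a
    if miss(a) and not miss(b) and miss(c):
        return b
    if miss(a) and miss(b) and not miss(c):
        return c
    return float('nan')  # all three inputs missing

print(kludged_sum(4.0, float('nan'), 3.0))  # 7.0
```

Note that the branch count grows as 2^n in the number of inputs- exactly why summing nine variables this way would be agony.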
Yet, the thing is, let's say that this same grad student tries to perform the same task a few years later when they know Stata better and can design- rather than kludge- their programs. At that point the entire program would most likely read:
egen output=rsum(input1 input2 input3)
See, the egen/rsum command combination tells Stata to sum up all the non-missing values of the variables inside the parentheses for each respondent and put the result in a variable named "output." In other words, our more advanced grad student can do the same thing with a one-line program that used to require eight lines. Arguably, this program is simpler but, nevertheless, remains functional. And I think most of us would argue that the last program is better evidence of deliberate planning and design than either of the first two. And if you think about longer programs executing a variety of functions, the more advanced grad student will almost certainly be able to do the same functions while using far fewer lines than the less advanced grad student. Design, in this case, equates to simplicity not complexity.
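The designed version collapses the same way in Python (again my stand-in, not Stata itself): skip the missing values and sum what's left, in one line that handles any number of inputs.

```python
import math

def row_sum(*inputs):
    """rsum-style row total: add only the non-missing (non-NaN) values."""
    return sum(v for v in inputs if not math.isnan(v))

print(row_sum(4.0, float('nan'), 3.0))  # 7.0
print(row_sum(1.0, 2.0, 3.0))           # 6.0
```

One function, no case-by-case branching, and it works unchanged for nine inputs or ninety- simplicity plus functionality.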
So how does this get us to front-loading? Well, you see, the DNA of humans and most other species is littered with all kinds of DNA that we just don't seem to use. This is often referred to as "junk DNA" even though this is an oversimplification- it isn't that it definitely has no function, we just haven't discovered one yet. The thing is, since evolution is a probabilistic process, we shouldn't expect it to come up with efficient solutions to problems. It should act like the immature grad student in our earlier example: cobbling together complex structures out of a myriad of basic sub-parts. Our DNA should, more or less, be somewhat of a kludge. And, as it happens, that's more or less the way our DNA looks right now. If DNA is a computer program, then it's a computer program that has huge swathes commented out (i.e. inactivated) and much of the rest thrown together out of a little of this and a little of that.
Ah, but what about this front-loading business? Well, you see, in that case all of that kludged DNA program is, in fact, just part of an overall design meant to produce new species. See, front-loading isn't meant to account for the complexity of the genome at all. Instead, it's meant to explain how the non-coding gobbledegook we keep finding is, in fact, a very simple way of accomplishing an additional function. In an ironic twist, front-loading is a way to save the "design inference" from the naturalistic demon of complexity.
And I find that awfully funny.

* Hell, let's face it: if you read this blog for, like, a week you probably realize this.
** For those who are unfamiliar with the term, when someone's forehead is truly gigantic and prominent- like the melon on a dolphin- it is referred to as a "fivehead."
*** I think this is one of the more difficult lessons grad students have to learn, actually.
**** This is, as a side note, why I sometimes recommend against trying to protect yourself from criticism too early in a debate. At best it makes you look defensive. At worst, it makes your audience suspicious and deprives you of an opportunity to rebut an opponent's claim.
***** "Missing" simply means that there is no value assigned to that variable- like leaving a test question blank.
****** FYI: "~=" is read as "is not equal to" and "." is read as a missing value. "==" just means "is equal to".
Labels: evolution, humor, intelligent design, science