Total Drek

Or, the thoughts of several frustrated intellectuals on Sociology, Gaming, Science, Politics, Science Fiction, Religion, and whatever the hell else strikes their fancy. There is absolutely no reason why you should read this blog. None. Seriously. Go hit your back button. It's up in the upper left-hand corner of your browser... it says "Back." Don't say we didn't warn you.

Wednesday, July 21, 2010

Math from the Schlaf!

Some of you may remember a couple of years ago when I spent some time thinking about Andrew Schlafly's plan to quantify an individual's degree of open-mindedness. Okay, that's not entirely true- Schlafly's plan was actually to come up with a seemingly rigorous way to deride his ideological opponents, and he did so with his usual hilarious lack of subtlety. I, in contrast, attempted to take an interesting idea with an admittedly poor execution and make something worthwhile out of it. And I even had the unique pleasure of receiving a comment from the Schlaf himself on my efforts, although said comment failed completely to address my points. I'm reminded of that whole affair by something I've recently discovered on Conservapedia, and I think y'all deserve to know about it, too.

You see, the other day I ran across the product of Schlafly's most recent psychotic break, er, brainstorm and was intrigued, albeit briefly. The article in question is on Censorability. What is Censorability, you ask? Well, let's see what the article says:



Or, in merely human language:

The censorability of a concept, movement or ideology is its vulnerability of being censored by its opponents.


A bit crude, I'll grant, but it's a superficially interesting speculation: that there is some intrinsic character of an idea, ideology, or social movement that makes it easier or harder to censor. Of course, "censor" is taken as a primitive term (i.e. not defined), which is probably unwise given the diversity of views on what constitutes censorship, but nonetheless the underlying notion is at least vaguely interesting. Rather than take the time to carefully analyze this concept and develop it, however, the Schlaf decides to dive right into producing a statistic to measure censorability.* And this is where the wheels come off the wagon:



To sum up: Schlafly defines the censorability of an entity (which I will refer to as Y for clarity, though Schlafly doesn't use this notation) to be equal to x and indicates that this is dependent on both the environment in which Y is lodged, E, and the time interval we are examining, (t). So, in other words, the term Ex(t) means, "The censorability of Y in context E and time frame (t)". Clearly, we're talking about some sort of estimator here, and it's an estimator that is geographically, socially, and temporally bounded. He further defines c as the number of times that entity Y has been censored in E and (t), and o as the number of times that Y has occurred without being censored in E and (t). He then combines all this as follows:

Ex(t) = (c/(c+o))*100%

We can safely ignore the left side of the equation because it just defines what we're looking for. That leaves the right side, which is effectively just a probability. In frequentist terms, a probability is estimated as the number of occurrences of an event divided by the number of opportunities for the event to occur. So, for example, if a coin is flipped 100 times and comes up heads 50 times, the probability of heads is 50/100 = 0.50. Probabilities are often expressed as percentages, obtained by multiplying the probability by 100%, but while this aids explanation it has fairly undesirable mathematical properties. Returning to Schlafly's equation, he's dividing the number of times Y was censored (c) by the sum of c and the number of times Y was not censored (o). Obviously, the sum of c and o constitutes the full number of occurrences of Y, and thus his equation is simply the probability of censorship multiplied by 100% to make it pretty.
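For concreteness, here is a minimal sketch in Python of the statistic as described above. The function name and the example counts are my own illustration, not anything taken from Schlafly's article.

```python
# A minimal sketch of the "censorability" statistic as quoted above:
# c censored occurrences, o uncensored occurrences, and the result is just
# the observed proportion censored, dressed up as a percentage.

def censorability(c: int, o: int) -> float:
    """Return Ex(t) = c / (c + o) * 100, per the formula quoted above."""
    total = c + o
    if total == 0:
        raise ValueError("no occurrences observed; the estimator is undefined")
    return c / total * 100

# The coin-flip analogy: 50 heads in 100 flips is a probability of 0.50,
# or 50% once multiplied by 100%.
print(censorability(c=50, o=50))  # 50.0
```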

Now, it's clear that this approach doesn't capture anything about the characteristics of entity Y that make it more prone to being censored, for the simple reason that characteristics of Y don't appear in the equation. Instead, this approach simply estimates the probability that a particular occurrence of Y would be censored, regardless of cause, given a particular context and time frame. This is roughly equivalent to estimating the likelihood of dying in a given year as the number of deaths in that year divided by the sum of the number of people who died and the number who did not. That's an estimate, but clearly a poor one, as the likelihood of dying if one is an eighty-year-old cancer patient is somewhat greater than if one is a healthy eleven-year-old. So, this approach just doesn't make good logical sense.**
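To make that analogy concrete, here is a small, entirely made-up illustration of how a pooled rate of this sort hides the characteristics that actually drive the outcome; the group labels and counts below are invented for the example.

```python
# Invented numbers only: a single pooled death rate obscures the enormous
# difference between the two groups it lumps together.

groups = {
    # group: (deaths in the year, survivors in the year)
    "eighty-year-old cancer patients": (400, 600),
    "healthy eleven-year-olds": (1, 9999),
}

deaths = sum(d for d, s in groups.values())
total = sum(d + s for d, s in groups.values())
print(f"Pooled yearly death rate: {deaths / total:.1%}")  # roughly 3.6%

for name, (d, s) in groups.items():
    print(f"{name}: {d / (d + s):.1%}")  # 40.0% versus essentially 0%
```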

More troubling to me, however, is the fact that this approach fails at a pragmatic level. Let's say we want to calculate the value of c, the number of occurrences of entity Y that have been censored: how would we do that? Well, in order to calculate c we have to know that there was an occurrence of Y and that this occurrence was somehow suppressed or omitted. The problem, however, is that if knowledge of the occurrence of Y was suppressed, how would we know that Y had occurred in the first place? We might as well try to calculate the percentage of facts we accept that are actually wrong- since we don't know that they're wrong, how do we find the percentage? So, obviously c is not a simple quantity but is, instead, the result of another estimator. That said, I have a difficult time imagining how to calibrate such an estimator- if we were to use the number of documented instances in which Y was known to have been censored (i.e. failed censorships) then we still have to make some sort of assumption about the distribution from which those censorship events were drawn, and there's no clear way to do that.
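To see the measurement problem in action, here is a toy simulation under some openly made-up assumptions: a "true" censorship rate and a detection rate that I have simply picked. When successful censorship removes the occurrence from the record entirely, the naive count of censored occurrences, and thus any statistic built on it, badly understates the true rate.

```python
# Toy simulation of unobservable censorship. All parameters (true rate,
# detection probability, sample size) are made up for illustration.
import random

random.seed(0)
true_censor_rate = 0.30   # assumed true probability an occurrence is censored
detection_rate = 0.20     # assumed chance a censored occurrence still leaves a trace
n = 100_000

observed_censored = observed_uncensored = 0
for _ in range(n):
    if random.random() < true_censor_rate:
        if random.random() < detection_rate:  # a failed or documented censorship
            observed_censored += 1
        # otherwise the occurrence vanishes from the record entirely
    else:
        observed_uncensored += 1

naive = observed_censored / (observed_censored + observed_uncensored) * 100
print(f"True censorship rate: {true_censor_rate:.0%}")    # 30%
print(f"Naive Ex(t) from observable data: {naive:.1f}%")  # roughly 8%
```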

Still, predictably oblivious to the gaping logical flaws in his concept, Schlafly nevertheless proceeds to supply censorability scores*** for various things, including the Bible (20%), "Freedom"**** (10%), and classroom prayer (100%). And then, as if to mock me, he lists some factors he thinks should influence censorability. Why these factors didn't make it into his quantification of same I don't know, though I would speculate that actually constructing a decent model would have required too much effort. Finally, he ends by summarizing some ways to reduce censorability, which basically amount to repeating the message over and over and relying on rote learning.

And honestly, I'm just amazed by all this. Does it take special training for Schlafly to be this unbelievably incompetent at social science, or does he just have a truly remarkable gift?


* I should note that some commenters on the talk pages seem to be laboring under the misconception that Schlafly has produced some kind of mathematical model- he has not. What he has produced is basically a statistic for estimating a quantity, and a poor statistic at that. For anyone who reads the talk page, this is why no time function is specified- because the (t) term only indicates that the data used to estimate the value of Ex derive from a specific interval of time.

** Or, to be more accurate, is conceptually half-assed and sloppy.

*** He neglects to indicate from whence his data derive, so I'm assuming these figures were obtained via rectal extraction.

**** Whatever the hell that means, given how nebulous the concept of "freedom" is.


2 Comments:

Blogger JLT said...

That reminds me of intelligent design creationists who claim that they calculate the content of functional complex specified information (or whatever) of e.g. a DNA sequence and then proceed to calculate the probability of that sequence assembling all at once. Which is, of course, exactly the same for all sequences of the same length, whether they are functional or not. Even if you ignored that no one claims that new genes turn up all at once, their calculation just doesn't calculate specified information.
Another similarity is the absence of definitions. You'd think at least "information" should be well-defined by the IDCists, but it's usually defined as "it means what I want it to mean."

I think what really sets Schlafly apart is that he's simultaneously and openly incompetent at so many things. "Normal" people realise their incompetence in certain areas and either try to learn more about them first or don't talk about them. If you asked Schlafly in which areas he's incompetent, I imagine he'd have trouble coming up with an answer.

Thursday, July 22, 2010 8:24:00 AM  
Anonymous Anonymous said...

I like the bit: "Since empirical data for x, E, and t are currently unavailable for the examples of censorability below, we have assumed values of c and o which will produce the correct answers as determined by faith and logic."

Friday, July 30, 2010 1:06:00 PM  
