Math from the Schlaf!
You see, the other day I ran across the product of Schlafly's most recent efforts:
Or, in merely human language:
The censorability of a concept, movement or ideology is its vulnerability of being censored by its opponents.
A bit crude, I'll grant, but it's a superficially interesting speculation: that there is some intrinsic character of an idea, ideology, or social movement that makes it easier or harder to censor. Of course, "censor" is taken as a primitive term (i.e. not defined), which is probably unwise given the diversity of views on what constitutes censorship, but nonetheless the underlying notion is at least vaguely interesting. Rather than take the time to carefully analyze this concept and develop it, however, the Schlaf decides to dive right into producing a statistic to measure censorability.* And this is where the wheels come off the wagon:
To sum up: Schlafly defines the censorability of an entity (which I will refer to as Y for clarity, though Schlafly doesn't use this notation) to be equal to x and indicates that this is dependent on both the environment in which Y is lodged, E, and the time interval we are examining, (t). So, in other words, the term Ex(t) means, "The censorability of Y in context E and time frame (t)". Clearly, we're talking about some sort of estimator here, and it's an estimator that is geographically, socially, and temporally bounded. He further defines c as the number of times that entity Y has been censored in E and (t) and o as the number of times that Y has occurred without being censored in E and (t). He then combines all this as follows:
Ex(t) = (c/(c+o))*100%
We can safely ignore the left side of the equation because it just defines what we're looking for. That leaves the right side, which is effectively just a probability. A probability is defined mathematically as the number of occurrences of an event divided by the number of opportunities for the event to occur. So, for example, if a coin is flipped 100 times and 50 times it comes up heads, the probability of a heads is 50/100=0.50. Often probabilities are expressed as percentages, attainable by multiplying the probability by 100%, but while this aids explanation it has fairly undesirable mathematical properties. Returning to Schlafly's equation, he's dividing the number of times Y was censored (c) by the sum of c and the number of times Y was not censored (o). Obviously, the sum of c and o constitutes the full number of occurrences of Y, and thus his equation is simply the probability of censorship multiplied by 100% to make it pretty.
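The whole "statistic" amounts to a one-liner. Here's a minimal sketch of it in Python, using hypothetical counts (Schlafly, of course, supplies no actual data):

```python
def censorability(c, o):
    """Schlafly's Ex(t): censored occurrences over total occurrences, as a percent.

    c: number of times Y was censored in context E and interval (t)
    o: number of times Y occurred uncensored in E and (t)
    """
    if c + o == 0:
        raise ValueError("no occurrences of Y observed")
    return c / (c + o) * 100

# Hypothetical counts: 20 censored and 80 uncensored occurrences of Y
print(censorability(20, 80))  # 20.0
```

Which is to say: it's just an empirical relative frequency dressed up as a percentage, nothing more.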
Now, it's clear that this approach doesn't capture anything about the characteristics of entity Y that make it more prone to being censored for the simple reason that characteristics of Y don't appear in the equation. Instead, this approach simply estimates the probability that a particular occurrence of Y would be censored, regardless of cause, given a particular context and time frame. This is roughly equivalent to estimating the likelihood of dying in a given year as being the number of deaths in that year divided by the sum of the number of people who died and the number who did not. That's an estimate, but clearly a poor one as the likelihood of dying if one is an eighty year old cancer patient is somewhat greater than if one is a healthy eleven year old. So, this approach just doesn't make good logical sense.**
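The death-rate analogy can be made concrete with hypothetical numbers (these figures are invented for illustration, not drawn from any real mortality table):

```python
# Hypothetical subpopulations: (label, population, deaths in a year)
groups = [
    ("healthy eleven-year-olds", 100_000, 20),
    ("eighty-year-old cancer patients", 10_000, 4_000),
]

total_pop = sum(n for _, n, _ in groups)
total_deaths = sum(d for _, _, d in groups)
pooled_rate = total_deaths / total_pop  # one number applied to everyone

for label, n, d in groups:
    print(f"{label}: {d / n:.2%}")
print(f"pooled estimate applied to both: {pooled_rate:.2%}")
```

The pooled figure sits between the two group rates and describes neither group well, which is exactly the problem with an Ex(t) that ignores every characteristic of Y.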
More troubling to me, however, is the fact that this approach fails at a pragmatic level. Let's say we want to calculate the value of c, the number of occurrences of entity Y that have been censored: how would we do that? Well, in order to calculate c we have to know that there was an occurrence of Y and that this occurrence was somehow suppressed or omitted. The problem, however, is that if knowledge of the occurrence of Y was suppressed, how would we know that Y had occurred in the first place? We might as well try to calculate the percentage of facts we accept that are actually wrong- since we don't know that they're wrong, how do we find the percentage? So, obviously c is not a simple quantity but is, instead, the result of another estimator. That said, I have a difficult time imagining how to calibrate such an estimator- if we were to use the number of documented instances in which Y was known to have been censored (i.e. failed censorships) then we still have to make some sort of assumption about the distribution from which those censorship events were drawn, and there's no clear way to do that.
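The detection problem is easy to demonstrate with a toy simulation. The generative model here is my own assumption, not anything Schlafly specifies: each occurrence of Y is censored with some true probability, but a censorship event only enters the record if the cover-up is detected. The rest vanish entirely, so the naive Ex(t) is biased downward:

```python
import random

random.seed(0)
# Assumed model parameters (illustrative, not empirical):
P_CENSOR = 0.30  # true probability an occurrence of Y is censored
P_DETECT = 0.25  # probability a censorship event is ever documented
N = 100_000      # occurrences of Y in E and (t)

detected_c = uncensored_o = 0
for _ in range(N):
    if random.random() < P_CENSOR:
        if random.random() < P_DETECT:
            detected_c += 1  # a censorship we happen to know about
        # undetected censorships leave no trace in the record at all
    else:
        uncensored_o += 1

naive = detected_c / (detected_c + uncensored_o) * 100
print(f"true censorship rate: {P_CENSOR:.0%}, naive Ex(t): {naive:.1f}%")
```

With these numbers the naive estimate comes out around a third of the true rate, and without knowing the detection probability there's no principled way to correct it - which is the calibration problem in a nutshell.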
Still, predictably oblivious to the gaping logical flaws in his concept, Schlafly nevertheless proceeds to supply censorability scores*** for various things, including the Bible (20%), "Freedom"**** (10%), and classroom prayer (100%). And then, as if to mock me, he lists some factors he thinks should influence censorability. Why these factors didn't make it into his quantification of same I don't know, though I would speculate that actually constructing a decent model would have required too much effort. Finally, he ends by summarizing some ways to reduce censorability, which basically amount to repeating the message over and over and relying on rote learning.
And honestly, I'm just amazed by all this. Does it take special training for Schlafly to be this unbelievably incompetent at social science, or does he just have a truly remarkable gift?
* I should note that some commenters on the talk pages seem to be laboring under the misconception that Schlafly has produced some kind of mathematical model- he has not. What he has produced is basically a statistic for estimating a quantity, and a poor statistic at that. For anyone who reads the talk page, this is why no time function is specified- because the (t) term only indicates that the data used to estimate the value of Ex derive from a specific interval of time.
** Or, to be more accurate, is conceptually half-assed and sloppy.
*** He neglects to indicate from whence his data derive, so I'm assuming these figures were obtained via rectal extraction.
**** Whatever the hell that means, given how nebulous the concept of "freedom" is.