Total Drek

Or, the thoughts of several frustrated intellectuals on Sociology, Gaming, Science, Politics, Science Fiction, Religion, and whatever the hell else strikes their fancy. There is absolutely no reason why you should read this blog. None. Seriously. Go hit your back button. It's up in the upper left-hand corner of your browser... it says "Back." Don't say we didn't warn you.

Tuesday, December 04, 2007

Transhumanitarianism

A while back I was having a chat with an academic who does a lot of simulation studies. That is, his research doesn't use data collected from the world but, instead, employs a set of simplified mathematical models to explore the effects of processes. In particular, he uses what are known as "actor-oriented" models, meaning that they simulate a whole bunch of entities that make choices about their behavior. In other words, they model the behavior of a set of actors under certain conditions. As we were discussing this he related a story about his local IRB,* which had asked him to provide assurances that the simulated actors in his work weren't being harmed. This was the source of a great deal of mirth since, all things considered, the "actors" are little more than a mathematical abstraction. Their ability to be victimized is, therefore, effectively nil.
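For the curious, a model like that is less exotic than it sounds. Here's a minimal sketch in Python (the names and the threshold rule are my own hypothetical example, not his actual model) of an actor-oriented simulation: each "actor" is just a number or two and a decision rule, which should make clear how little there is to victimize.

```python
import random

class Actor:
    """A simulated actor: nothing more than a threshold and a current choice."""
    def __init__(self, threshold):
        self.threshold = threshold   # fraction of others who must act before this actor joins in
        self.acting = False

def step(actors):
    """Each actor looks at how many others acted last round and decides whether to join."""
    share_acting = sum(a.acting for a in actors) / len(actors)
    for a in actors:
        a.acting = share_acting >= a.threshold

# A population of 100 "actors" with random thresholds, seeded with one early mover.
population = [Actor(random.random()) for _ in range(100)]
population[0].acting = True

for _ in range(20):
    step(population)

print(sum(a.acting for a in population), "of", len(population), "actors are acting")
```

That's the whole "society": a list of objects and a loop. Whether a cascade of action takes off depends entirely on the distribution of thresholds, which is precisely the kind of process such models are built to explore.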

Humorous as this is, it fits in rather appropriately with something that I've been thinking a lot about lately. It'll take a while to get to my point, however, so please get comfy.

When people think about the human brain they often use a computer analogy: the brain is the hardware and our thoughts, feelings, personality, and so forth are the software that runs on that hardware. Most people think that we can adjust our beliefs and behaviors (software) without changing the hardware and, likewise, can change the hardware in a way that impairs the ability of that software to execute. As it happens, the latter assertion is more or less true, as prefrontal lobotomies and the case of Phineas Gage have amply demonstrated. The former assertion, that the software can change independently of the hardware, is less well validated, since the brain stores information in the physical configuration of its neurons, but for the purposes of this argument let's ignore that. We can agree that it is reasonable to think of our "selves" as a sort of software that runs on the hardware of the brain, much as Windows is software run on the hardware of a personal computer. The thing is, if we accept this argument, a question naturally arises: could we run our "selves" on a different kind of hardware?

The question is one that has been more or less present since at least the work on Turing machines: universal computers that can perform the computations of any other Turing machine of equal or lesser complexity. Put another way, imagine we have two computers of unequal capacity. If both are Turing machines then the more powerful machine can simulate the function of the less powerful machine, even if their original hardware is very different. This is the logic underlying hardware emulation programs that allow you to play old Nintendo console games on your personal computer: the PC is sufficiently more powerful that it can imitate the behavior of a different set of hardware. Given all this, we have to wonder if it might be possible to somehow duplicate the software of the human mind onto a different set of hardware, effectively changing the substrate** on which our minds rest without changing the mind itself. Might we emulate the hardware of our brains on an electronic computer well enough to essentially "run" a person in emulation?
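To make the emulation point a bit more concrete, here's a toy sketch in Python (an imaginary instruction set of my own devising, not any real console's hardware): a program that emulates a trivially simple machine entirely in software, on whatever hardware happens to be running it.

```python
def run(program):
    """Emulate a tiny imaginary machine: one accumulator, four instructions."""
    acc = 0   # the machine's single register
    pc = 0    # program counter
    while pc < len(program):
        op, arg = program[pc]
        if op == "LOAD":
            acc = arg
        elif op == "ADD":
            acc += arg
        elif op == "JUMP_IF_POSITIVE":
            if acc > 0:
                pc = arg
                continue
        elif op == "PRINT":
            print(acc)
        pc += 1

# A "program" for the imaginary machine: count down from 3 to 1.
run([
    ("LOAD", 3),
    ("PRINT", None),
    ("ADD", -1),
    ("JUMP_IF_POSITIVE", 1),
])
```

The little counting program neither knows nor cares that its "hardware" is itself just software running on something else, and that indifference to substrate is the whole point of the argument that follows.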

Let's assume for a moment that we can do this but that the copying process is destructive. That is, making the copy of a person's software unavoidably destroys the original. Thus, when we copy a person they come to reside within an electronic computer and their original biological brain is now empty. Having done this, we communicate with the transferred human who retains their original faculties, memories, and personality. If we communicate with this new digital version we find that we cannot tell any difference between them and their original biological form. Except, you know, for the fact that they now live in a box. Under these circumstances, would we regard the software in the box as a human being? Probably.

Now let's go a step further: imagine that the copying process has become more sophisticated. We can now duplicate a person's mind on an electronic substrate without destroying the original. If we do so, can we still regard the entity in the computer as a person? Arguably, we have to. Nothing has changed in our example except that the original individual still lives on in their biological substrate. There is no real difference between the entities in the electronic substrate in each example and, as such, we would still have to regard the copy in the computer as a person.

Now imagine that someone creates a piece of software from scratch that can interact with us just as fluidly and engagingly as a copied human mind. Try as we might, we cannot determine that this entity is not a copied human except by learning of its true origins from a third party. Under these circumstances, do we have to regard this artificial software as a person? Well, I would argue the answer is "yes," and such is the logic underlying the Turing test. Since the only way I can tell whether another human is intelligent is by seeing whether they act in a way that seems intelligent, I must use the same logic to determine whether a piece of software, or an entity in a different kind of biological body (e.g., a dolphin or a sentient extra-terrestrial), qualifies as a sort of person. So, a completely artificial piece of software that is indistinguishable in interaction from a copied human self must be regarded as a person and, by extension, is entitled to the rights and protections of same.

Remember how I said it would take a while to get to my point? Well, we're finally here. The issue I've been grappling with lately is this: we don't just extend legal protections to humans. We also extend protections to non-human entities through animal cruelty laws and research protocols that make it difficult to perform research on different kinds of animals. It's much easier, legally speaking, to do certain kinds of things to planaria than to chimps. What happens, however, if we create a piece of software whose behavior is effectively as complex and adaptive as, say, a dog? Put another way, if we were to use our mind-copying technology on a dog we would find that the copied dog is indistinguishable from this new artificial software. Does that mean that this new piece of software deserves the same protections as a dog? If we create a piece of software as intelligent and adaptable as a chimp, does it then warrant the same protections given to a chimp? I am, more and more, forced to the conclusion that yes, such pieces of software would warrant the same protections afforded to their biological equivalents.

What particularly disturbs me about this line of thinking is that, in the realm of video games, there is a constant drive towards more and more "intelligent" enemies. "First Person Shooter" games like Quake or Half-Life often advertise that their simulated, programmed adversaries are more lifelike, more intelligent, and more challenging than before. In this constant drive to improve the simulated adversaries for human players (adversaries that are often casually annihilated in the course of gameplay), what happens when the new opponents develop behavior as sophisticated as a rat's? How about a dog, or a raccoon, or a chimpanzee or, eventually, another human? If the software enemy becomes indistinguishable from the biological equivalent, does that mean that it deserves the same protections? And, likewise, does it make killing that simulated opponent the moral equivalent of murdering a dog for sport? Will we eventually reach the point where we recreate the gladiatorial bloodsports of Rome in silicon form? And will we have the decency as a species to take the appropriate action when that time comes?

Just a little something to think about when you're getting your simulated actors ready for their prisoner's dilemma.
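And, since we ended on the prisoner's dilemma, here's roughly what such a simulation looks like in practice (again a hypothetical Python sketch, using the standard payoff matrix): two strategies square off over repeated rounds, and nobody loses any sleep over the actors' welfare.

```python
# Standard prisoner's dilemma payoffs: (my payoff, their payoff) keyed by (my move, their move).
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def always_defect(history):
    return "D"

def tit_for_tat(history):
    # Cooperate on the first round, then copy the opponent's last move.
    return "C" if not history else history[-1]

def play(strategy_a, strategy_b, rounds=100):
    """Play repeated rounds and return each simulated actor's total payoff."""
    history_a, history_b = [], []   # each actor's record of the opponent's moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_a)
        move_b = strategy_b(history_b)
        pay_a, pay_b = PAYOFFS[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        history_a.append(move_b)
        history_b.append(move_a)
    return score_a, score_b

print(play(tit_for_tat, always_defect))
```

No IRB assurances required. For now.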


* Institutional Review Board: a body that reviews research proposals to make sure that they aren't unethical or dangerous.

** I'm using the term "substrate" to refer to any medium on which the software can run. So, in this example, we have a biological substrate (i.e. the human brain) or an electronic substrate (i.e. an electronic computer). In either case, we're just talking about kinds of hardware.
