Illusive Mind

The Unquestionable should be questioned

Thursday, September 21, 2006

How did that digital thinker think?


I happened upon an intriguing article titled: How would a machine think? Probably not like us...

It seems like so long ago that I was entertaining these exact thoughts, latching onto the Sapir-Whorf hypothesis to confirm my intuitions about the cognitive nature of language, before I found out that particular research was bunk.

I remember writing about how representations of AI have been woefully lacking in truly imagining the limitations and expanded possibilities of a machine mind. My story ‘Press Any Key’ was based on this very thought, but it is clear to me now how much it is in need of an update.

So I dug this out of the trashcan: some thoughts of mine from Wed Aug 04, 2004 on how that digital thinker might think!

...Think about it: we're operating on the instructions of Human 95, a scared, individualistic, reproductive survival machine. That is what we were built for (if you believe in evolution) however long ago, and we get patches and updates (biologically) about as often as Windows does.

Greed, jealousy, and selfishness are all products of a survival mechanism built in to keep the whole human race going. But our cognitive applications have evolved faster than our hardware can keep up. We now place far more emphasis on how we use our brains, and as a result we get fat, contract heart disease, and die. Our hardware is still operating on the idea that we run 50 miles a day to get dinner.

So now turn to AI (I'd say DI but it isn't as catchy). It has no such biological limitations. Its hardware is only limited by our ability to invent it. I have yet to see a movie that accurately depicts an artificially intelligent being (maybe T2). Why? Because, like the self-loving gods that we are, we shape them in our own image. I tend to think that emotions are a biological product of a certain arrangement of chemicals. Before we attached meaning and significance to them, they were designed to get the job done (it's much easier to kill when you're angry). But don't ask me what depression is for. "Crying is a puzzler."

So there is no reason to assume that emotions are necessary to intelligence, and thus no reason to assume an AI would have them. Considering that AI does not share our biological heritage, there is also no reason to assume it would possess the same left-overs. The inherent selfishness and greed that are a part of being human (which isn't to say love and compassion aren't also parts) would not necessarily be present in AI.

AI characters that emulate these characteristics (e.g. the 'angry' robots in I, Robot) are due to a poverty of the imagination. We are so used to humanity that we find it difficult to imagine an emotionless intelligence. To our way of thinking, someone without any emotion is a very ill-minded person.

The consequences of realistically envisaging the AI construct are numerous and intriguing. Suppose we had a suitable default AI receptacle: a hard drive with sensory input devices and output devices. Suppose also that any particular AI could be loaded onto any particular machine. One of the most defining of all human traits might then be non-existent: our unwavering attachment to our own particular vessels. Imagine a being with no such hang-up, one that knew if a vessel was broken it could always be uploaded to another. What kind of ramifications would that have for the thought processes of AI?

Would it lead to an infinite number of Smith-like duplicates? No, I don't think so. The primal urge for reproduction is also lost. All the emotional crap that clouds our vision of reality would disappear. I'd like to think these beings would have a very Zen-like appreciation of our world. But of course, what value is there in a creature that can never be happy?

