Illusive Mind

The Unquestionable should be questioned

Wednesday, November 03, 2004

Digital Intelligence

 

This post is in keeping with the A.I. theme, although I call it digital intelligence. Why? Because if intelligence is simply an attribute or function of a certain kind, not limited to organic tissue, then there is nothing artificial about a processor that can think for itself. Artificial intelligence is when your DVD player pretends to be friendly by saying 'Hello' when you turn it on. Even so, I will refer to such cognition as it is generally regarded: AI.

I think we can create AI that can evolve beyond our own intelligence. If you were to combine something of our own intelligence with merely the speed at which processors can conduct operations, you would have an intelligence superior to our own.

I would go so far as to say that there is currently a technological replacement for every function of the human being except intelligence; our intelligence is really the only thing we've got going for us.

This brings us to the good ol' question of mind, perhaps my favorite question. What is mind? No matter. What is matter? Never mind.

Firstly, I think an updated form of functionalism is the best theory of mind we have at the moment. The problem when identifying an artificial mind is that the criteria we use must also apply to ourselves. If it is right to terminate something that is not self-aware, or that cannot grasp a linguistic trick (which some non-intelligent programs can already do), then we are forced to terminate a whole range of human beings whom we would think it wrong to kill. Therefore, there is something wrong with our criterion.

An infant is not self-aware, at least not until about four or five years old. Thus, supporting the "self-aware" or sentience criterion is supporting infanticide. If you're like Peter Singer, then you don't have a problem with this, but if you also think plain abortion is wrong, then you're going to be even more hard-pressed to come up with a consistent solution.

So if we don't want to be prejudiced, we should apply the same test for human intelligence to machine intelligence, yes? This is what Alan Turing thought, and as such he believed a test of intelligence should involve a chat-room-type environment in which a judge tries to work out which user is the human and which is the computer. But there exist chat bots that can fool people, and we don't feel compelled to label them intelligent. You assume I'm intelligent (insert joke here), even though the only evidence you have is what I'm typing in this post.
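To make the setup concrete, here is a minimal sketch in Python of Turing's imitation game. The judge and respondent interfaces are hypothetical stand-ins I've invented for illustration, not any real system's API:

```python
import random

def imitation_game(judge, human, machine, rounds=5):
    """Run one imitation game. Hypothetical interfaces (my assumption):
    judge.ask(label, history) -> question string,
    judge.guess(transcripts) -> "A" or "B" (its pick for the machine),
    human(question) / machine(question) -> answer string."""
    # Randomly assign the hidden labels so the judge cannot cheat.
    respondents = {"A": human, "B": machine}
    if random.random() < 0.5:
        respondents = {"A": machine, "B": human}

    transcripts = {label: [] for label in respondents}
    for _ in range(rounds):
        for label, respond in respondents.items():
            question = judge.ask(label, transcripts[label])
            transcripts[label].append((question, respond(question)))

    # The machine passes the test if the judge's guess is wrong.
    return respondents[judge.guess(transcripts)] is not machine
```

The point of the random label assignment is that the judge's only evidence is the conversation itself, which is exactly why a good enough chat bot can pass while remaining unintelligent.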

However, we might like to refine our test to one of best explanation, which can be refuted by further evidence. This means that if you read this post, the best explanation is that you are talking to an intelligent being; but if you find further evidence suggesting I am not intelligent (that I am a chat bot, for example), then you should conclude I'm not intelligent. This idea was proposed by John Searle; however, he thought that further evidence showing the intelligence wasn't human or "organic" should also lead us to conclude that the being isn't intelligent. I think most of us would disagree with that.
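The defeasible structure of this test can be shown in a few lines. The numeric encoding of belief and evidence strength below is purely my own illustrative assumption; the point is only that the conclusion is a default that later evidence can overturn:

```python
# Sketch of the "best explanation" test: default to judging the
# interlocutor intelligent on the strength of the conversation, then
# weaken that conclusion for each piece of contrary evidence found.

def best_explanation_intelligent(conversation_quality, contrary_evidence):
    """conversation_quality: score in 0..1; contrary_evidence: list of
    (description, strength) pairs, strength in 0..1 (illustrative scales)."""
    belief = conversation_quality          # initial best explanation
    for description, strength in contrary_evidence:
        belief *= (1 - strength)           # each defeater weakens it
    return belief > 0.5

# On the conversation alone we conclude "intelligent"...
print(best_explanation_intelligent(0.9, []))                          # True
# ...but discovering the speaker is a chat bot defeats the conclusion.
print(best_explanation_intelligent(0.9, [("is a chat bot", 0.95)]))   # False
```

On Searle's version, "isn't organic" would simply be added to the list of defeaters; the disagreement is over whether that counts as evidence at all.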

But how can we test for intelligence? What further evidence could convince us that a program was intelligent, and what criterion could we impose that would not support infanticide?

I think the answer lies in the capacity for intelligence. What matters to intelligence is not the possession of some piece of information at the present time but the ability to possess information: the ability to learn. Without this ability intelligence does not occur; all that occurs is a programmed set of instructions designed to baffle us. Therefore, if we find such instructions in the program, we should conclude it isn't intelligent; but if we find programming that leads us to believe the machine can learn up to and beyond a certain level (that of a four-year-old, perhaps), then we should deem it intelligent.
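One crude way to operationalise this criterion is to measure whether performance improves with experience, rather than quizzing the system on what it already knows. The interface and threshold below are assumptions of mine for the sketch; choosing a non-arbitrary threshold is exactly the open problem I raise further down:

```python
# Sketch of the capacity-to-learn criterion: a canned lookup table scores
# the same on every trial, while a genuine learner improves over time.

def shows_learning(system, task_trials, improvement_threshold=0.2):
    """system.attempt(trial) -> score in 0..1, and the system may update
    its internal state between trials (a hypothetical interface).
    Requires at least two trials."""
    half = len(task_trials) // 2
    early = [system.attempt(t) for t in task_trials[:half]]
    late = [system.attempt(t) for t in task_trials[half:]]
    early_avg = sum(early) / len(early)
    late_avg = sum(late) / len(late)
    # A fixed set of baffling instructions shows no improvement;
    # a learning structure does.
    return late_avg - early_avg >= improvement_threshold
```

Note that this tests a present capacity, not a future possibility, which is what keeps the argument from collapsing into a potentiality argument.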

This also solves the problem of infanticide, because we can conclude that the best evidence suggests the child will reach a certain level of intelligence at some point. It would also mean that it would be wrong or unlawful to terminate certain species of monkey, but I don't see anything wrong with that consequence.

Regarding sentience, I should be clearer about what I mean. Sentience is about recognizing and feeling pain, not being self-aware. I've read that self-awareness comes at around three or four years of age. The reason I called your evidence anecdotal was precisely because it wasn't evidence. You call it a first-person eye-witness account, but one person saying "I remember when I was one" is not enough to constitute a solid philosophical objection, or indeed a scientific one; at best it demonstrates your own counter-intuitive response. But that, of course, is irrelevant to the argument in question.

So now, if we can accept some of my assertions, what problems are left for determining intelligence? I have tried to make sure that my argument is not a potentiality argument, because that would mean we could not terminate pocket computers on the grounds that they might some day become intelligent. I am saying that it is the presence of a developed learning structure that in and of itself guarantees the being a certain level of intelligence.

So the two major problems are:
1. What non-arbitrary level of intelligence must the learning process be able to attain, and how could we realistically test for this? (A dog has a learning process, but presumably we don't want to be committed to not being able to terminate a dog.)

2. At what stage of development must the learning process be in order to pass the test? For computers we could probably say the point at which no further human programming is required, but what about organic creatures? At what non-arbitrary point would we be committed to not terminating a foetus? Does it matter that we know the foetus will develop a learning structure (potentiality) in time?

Some of these points will need to be answered by neuroscientists, but there are also problems with how these rules of intelligence testing are applied to humans. I suppose we could add a provision that if we have knowledge indicating the being will develop a learning process, then we shouldn't terminate it; but this blurs the lines and rules out abortion through potentiality, something I'm not sure I want to be committed to. It would also mean that wholly unintelligent machines could not be terminated.

Any thoughts?
