In 1950, computer scientist Alan Turing proposed a way to test a computer's intelligence. The gist is that you have a person talk to a computer and see if that person can tell whether he or she is talking to a computer or another person. If the tester can't tell the difference, the computer is said to be intelligent.
Well, there are a million problems with this test, but one of the most important is the gaping question of who is doing the judging. I've known people who could talk to the recorded time on the telephone and not know they were talking to a machine. On the other hand, I've known people who can talk to you and detect the precise nanosecond at which your mind starts to drift and think about whether you put on matching socks that day.
Another way to think of Turing's test is: If you want to know if a computer is intelligent, talk to it and see if it seems intelligent. I think this test could be applied to humans, too, and I know quite a few who would flunk the test.
Setting aside the whole question of what intelligence is, we could probably put together a list of tasks that, in humans at least, require intelligence to complete. Recognizing a person in a photograph. Looking at a drawing of a room or a building, and then drawing that same room or building from a different angle. Reading some text and then answering questions about it. Completing analogies. You know, standard SAT-type stuff.
Now, for any one of these tasks, it's possible today to build a machine that does really well at it. Does that mean artificial intelligence has already been achieved? Or does it take the whole amalgam of skills to count as intelligence?
I'll come back to these questions, if I remember.