Tuesday, April 9, 2013

Turing Tests

I am a big fan of Alan Turing and his work, and the possibility of AI within my lifetime is intriguing. I have a problem, however, with the conclusions drawn from the Turing Test.

The Turing Test has become the gold standard for AI researchers; there is at least one annual contest to build a program that can pass it, and the entrants are on the cutting edge of intelligent computing. My objection is that the test seems to have a very narrow and specific goal in mind, and therefore a very narrow definition of "intelligence." To use an extreme example, imagine a program built on statistical analysis of, and rules derived from, previous conversations (or even previous Turing Tests): some algorithm that lets it produce reasonable responses to conversation, but nothing more. Such a program could probably pass a Turing Test, yet it wouldn't be an AI. It seems the only ways out are to say either that a) the narrow definition of intelligence is intentional, and correct, or that b) the Turing Test, while valuable, isn't enough to detect intelligence.
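To make that extreme example concrete, here is a minimal sketch in the spirit of the classic ELIZA-style chatbots. Everything in it (the rules, the fallbacks, the `respond` function) is hypothetical and invented for illustration; the point is simply that shallow pattern matching can produce plausible-sounding replies with no understanding behind them.

```python
import random
import re

# Hypothetical canned rules: (pattern, candidate response templates).
# There is no model of meaning here, only surface-level pattern matching.
RULES = [
    (r"\bi feel (.+)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"\bi think (.+)", ["What makes you think {0}?", "Are you sure {0}?"]),
    (r"\b(hello|hi)\b", ["Hello! How are you today?"]),
]
FALLBACKS = ["Interesting. Tell me more.", "Why do you say that?"]

def respond(utterance: str) -> str:
    """Pick a reply by shallow pattern matching; no understanding involved."""
    text = utterance.lower()
    for pattern, templates in RULES:
        match = re.search(pattern, text)
        if match:
            return random.choice(templates).format(*match.groups())
    # Nothing matched: deflect with a generic conversational filler.
    return random.choice(FALLBACKS)

if __name__ == "__main__":
    print(respond("I feel like nobody listens to me"))
    print(respond("What is the capital of France?"))  # falls back; it knows nothing
```

A judge chatting casually might find the first exchange convincing, yet the program has no concept of feelings, capitals, or France, which is exactly the gap between "passes the test" and "is intelligent" described above.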

1 comment:

  1. Mike, I think you are probably onto something with your critique of the literalness with which the "Turing test" has been embraced (after all, Turing called it an imitation game, not a test, and it was hypothetical in nature). The contests make for good fun, perhaps. It's worth noting that plenty of researchers are approaching the AI question from other standpoints, e.g. cognitive science and robotics, where things like embodiment and environmental context become factors (instead of being set aside).
