Artificial Intelligence is a very interesting topic to me. For the most part, it intrigues me because, based on my knowledge of programming, you cannot design a robot that responds "intelligently" to all situations, questions, conversations, and so on. To me, an intelligent response means taking in the various things around you, the thing you're responding to, your past experiences, and most importantly, your intent, and coming to a decision about how to act or respond. Humans don't always do this, and much of the time they do it without fully 'thinking' about it; for example, a person might not recall a relevant past experience in the moment it applies.
My programming experience isn't extensive, but I feel that programming a fully "intelligent" thinking machine would require far too much programming to be at all realistic, perhaps even impossible. I simply think there are too many inputs and too many possible responses for a machine to respond "intelligently," by human standards. It's interesting that the Turing test tests for all human behaviors, regardless of how intelligent they are. I like that the test asks a machine to imitate human behavior, but in many contexts, a machine's ability to reproduce human error seems useless for the purpose the machine was created for. The Turing test is interesting in this way, but for practical purposes, I think a different test for intelligence, with a narrower view of thinking and intelligence, might be more useful.
Daniel, good to see you on the blog again! I'm not exactly sure what you mean when you say that the Turing test "tests for all human behaviors." To me, Turing limits the test quite extensively by essentially discarding embodiment. And I think he'd agree that trying to program an intelligent computer equivalent to an adult human would be a futile enterprise, hence his suggestion to create "learning machines," or child-computers that have the capacity to acquire knowledge, customs, and so on.