Dilbert said:
That does not mean a lot. I asked you what you would simulate. I was hoping to get a response like this:
"PIBOT will be set in a strict environment; the only sensory equipment it will have access to is direct lingual input from a human operator. In the first simulation we will have PIBOT hold a natural (or human) conversation with the operator, trying to convince him/her that there may actually be a human on the other side. In the second simulation we will let PIBOT solve riddles, puzzles and complex lingual statements, and then explain, in its own words, what it has accomplished.
To aid it, it has its previous knowledge: a database that took 3 years to construct and evolve."
Something like that is what I wanted to hear. Now, please try again.
I don't need to try again.
In practice, you have asked me two questions:
1) What would the "DEMOnstration of PIBOT capabilities" contain?
2) What would you simulate?
The answer to the first question is that "DEMOnstration of PIBOT capabilities" means exactly what I said: "Demonstration of how PIBOT can think, manipulating concepts and meanings exactly like a human being; demonstration of how PIBOT can simulate human experiences and converse with humans in natural language; and demonstration of the potential application of PIBOT's abilities in various fields. And so on."
Regarding the second question, I think that the term "simulate" is somewhat ambiguous.
Let me write an equation: Simulate = Imitate = Imitation Game = Turing Test = PIBOT WILL PASS THE "TURING TEST" BY 2007 = WHY THE "TURING TEST" IS STILL VALID = PIBOT VS TURING (ACT I).
Then, please read the following carefully:
Contrary to what many people think, the "Turing Test" is not a blind alley. Although the "Turing Test" is no longer a source of inspiration for AI research, it still has much to offer: the "Imitation Game" from which the test originated demands far more than it seems at first sight. In fact, in order to trick the human judge (who is obviously authoritative), the computer must not only answer questions about any imaginable topic (biology, psychology, computer science, mathematics, art, poetry, meteorology, chess, etc.), but it must also be able to lie, convincingly simulating a life experience that, evidently, it has never lived. What's more, the computer must also simulate deficiencies where none exist: it should even be able to decide if and when to make errors intentionally, to avoid showing its, in some respects, infallible nature; worse still, the computer must be able to decide independently, following a clearly reasoned strategy, how to vary the time intervals that elapse between each question and the related answer.
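To make the last two requirements concrete, here is a minimal, purely hypothetical sketch (these function names and constants are my own illustration, not anything from PIBOT) of how a machine might fake human response timing and occasional human fallibility:

```python
import random

def human_like_delay(answer: str, wpm: float = 45.0) -> float:
    """Estimate a plausible 'typing' delay for an answer, in seconds.

    Rough model: a thinking pause, plus typing time at a human
    words-per-minute rate, plus random jitter.
    """
    words = max(1, len(answer.split()))
    think = random.uniform(1.0, 4.0)       # pause before typing begins
    typing = words / wpm * 60.0            # time to type the words
    jitter = random.uniform(-0.2, 0.2) * typing
    return think + typing + jitter

def maybe_inject_error(answer: str, error_rate: float = 0.05) -> str:
    """Occasionally introduce a small typo, mimicking human fallibility."""
    if random.random() < error_rate and len(answer) > 3:
        i = random.randrange(len(answer) - 1)
        # swap two adjacent characters to simulate a slip of the fingers
        return answer[:i] + answer[i + 1] + answer[i] + answer[i + 2:]
    return answer
```

Of course, this only shows the surface mechanics; the hard part the paragraph above describes is deciding *strategically* when such delays and errors are appropriate.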
So, although the concept of the "Turing Test" has evolved over the years, the "Imitation Game" on which the test is based still holds enormous importance for AI research. This arises from the fact that, if the computer wants to "pass for a human" and pass the test, it cannot limit itself to communicating in natural language in a manner indistinguishable from that of a human being; it must also elaborate very complex "strategies of thought": in other words, the computer must be able to think in a complex way, exactly like a human being. It should be noted that the above considerations are demonstrably true, because the question-answer method of the "Turing Test" imposes no constraints of any kind on the test topics. In fact, when facing the test, the computer is forced to talk and reason about subjects that it does not know at all, that nobody has previously revealed to it, and that can belong to any knowledge domain, even a fictitious one: in this sense, the computer cannot rely on any pre-existing knowledge. Moreover, the semantic and lexical-syntactic content of the questions asked of the computer could include, in isolation or in combination, various types of sentences that might be wrong, incongruous, ambiguous, conflicting, paradoxical, illogical, foolish, etc.; they could contain rhetorical-semantic figures such as allegories, allusions, anacolutha, anaphoras, analogies, anastrophes, amphibologies, antonomasias, asyndetons, chiasmi, emphasis, euphemisms, etymologies, hyperbata, hyperboles, metaphors, oxymorons, periphrases, pleonasms, similes, synecdoches, synesthesias, zeugmas, etc.; the questions might involve hyponymy, complementarity, antinomy, reciprocity, incompatibility, polysemy or synonymy; and they could be realistic, theoretical, hypothetical, imaginary or introspective in nature, etc.
Finally, and last but not least, the computer can decide if and how to answer each question: by giving a correct answer; by giving a wrong answer; with an affirmation or a negation; with an explanation; with an exclamation; with a question; or by not answering at all. Moreover, the computer must be able to lie and should possess a sense of humour.
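The choice among those answer "manners" can be sketched as a weighted selection. This is a toy illustration under my own assumptions (the strategy names and weights are invented for the example); a real system would choose strategically, based on the question and the conversation so far, not at random:

```python
import random

# The answer "manners" enumerated above, with made-up example weights.
STRATEGIES = [
    ("correct answer", 0.55),
    ("wrong answer", 0.05),        # a deliberate error
    ("affirmation/negation", 0.10),
    ("explanation", 0.10),
    ("exclamation", 0.05),
    ("counter-question", 0.10),
    ("no answer", 0.05),
]

def pick_strategy(rng: random.Random = random) -> str:
    """Pick one answer manner according to the weights above."""
    names = [name for name, _ in STRATEGIES]
    weights = [w for _, w in STRATEGIES]
    return rng.choices(names, weights=weights, k=1)[0]
```

The point of the argument above is precisely that no fixed weighting like this could survive the test: the selection would have to be driven by genuine understanding of each question.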
In conclusion, if the "Turing Test" is executed strategically, adopting the right combination of questions, and if it is executed for the "correct period of time", absolutely no tricks are possible. Neither a wizard nor an alchemist (assuming they existed) could succeed in making something seem "intelligent" when it is not: so how could a software programmer succeed at it using some "stupid" algorithm? The fact that a computer is able to pass such a test, at a level of sophistication that could challenge even a human being, testifies that it possesses truly intelligent capabilities. It is not a human, but it seems human in every respect: in truth, it is a "non-human super-intelligence". No trick algorithm whatsoever can guarantee the results needed to pass a test based on a priori unknown information/data and on an infinite tangle of lexical-semantic combinations. In spite of their genius, abilities and experience, there is no designer/programmer who can foresee the unforeseeable and handle the infinite.
Hence, given that the computer cannot fake its intelligence, the "Turing Test" is still valid: so, all those talking in terms of "cheats" or "tricks" regarding a computer capable of passing the test should thoroughly reconsider their notions and ideas about the human intellect.
You will find that, if PIBOT is able to pass such a test (by 2007), it will have done incredibly more than:
"PIBOT will be set in a strict environment; the only sensory equipment it will have access to is direct lingual input from a human operator. In the first simulation we will have PIBOT hold a natural (or human) conversation with the operator, trying to convince him/her that there may actually be a human on the other side. In the second simulation we will let PIBOT solve riddles, puzzles and complex lingual statements, and then explain, in its own words, what it has accomplished"
If you don't want to raise a purely terminological controversy, you must accept this evidence.