Hi,
I am glad to be back again.
Well, this is kind of a lecture, so anyone who is an engineering or computer science student might find the material useful.
Feel free to ask questions.
<b><i>Note: this is merely an introduction of sorts for people who don't really have any background in what I am actually going to talk about; I don't want it all to sound like Persian to them.</i></b>
Justification for posting the thread here:
===========================================
I found this to be an apt place for the post, mainly because enhancements in formal languages could lead to a greatly evolved interaction with machines, which would in turn move us toward true AI, as some people here would say.
==============================================
The history of computer theory is interesting. It was formed by fortunate coincidences involving seemingly unrelated branches of intellectual endeavor.
When set theory was invented, many paradoxes appeared. Some of Georg Cantor's findings could be tolerated, for example that infinity comes in different sizes, but some contradictions could not, such as
<i>the notion that some set is bigger than the universal set</i>.
This left a big cloud over mathematics that needed to be resolved.
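As an aside for the programmers here: the "infinity comes in different sizes" claim rests on Cantor's diagonal argument. A finite miniature of it can be sketched in a few lines of Python (illustrative only; the real argument runs over infinite lists of infinite sequences):

```python
def diagonal_sequence(rows):
    # Flip the i-th bit of the i-th row: the result differs from
    # row i at position i, so it cannot equal any row in the list.
    return [1 - rows[i][i] for i in range(len(rows))]

rows = [
    [0, 0, 0, 0],
    [1, 1, 1, 1],
    [0, 1, 0, 1],
    [1, 0, 0, 1],
]
d = diagonal_sequence(rows)  # [1, 0, 1, 0], differs from every row
```

However long the enumeration, the diagonal sequence escapes it; applied to a supposed complete list of infinite binary sequences, this shows they cannot all be paired off with the natural numbers.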
In the year 1900, David Hilbert was invited to address an international congress and predict what problems would be important in the century to come. It turned out, in what most critics would call mere coincidence, that he was correct almost everywhere.
First of all, he wanted the confusion in set theory resolved. Hilbert thought that axioms, such as those provided by Euclid, together with a set of rules of inference, could be developed to avoid paradoxes.
Second, Hilbert was not merely satisfied that every provable result should be true; he also presumed that every true result was provable, and he wanted mathematicians to find ways of proving them.
It wasn't easy for mathematicians to follow Hilbert's plan. Mathematicians are usually in the business of creating proofs themselves, not proof-generating techniques. What had to be invented was a whole field of mathematics that dealt with algorithms, or programs.
In other words, in today's terms we might rephrase Hilbert's request as a demand for computer programs to solve mathematical problems.
The road to studying algorithms wasn't a smooth one. The first bump occurred in 1931, when Kurt Gödel proved that there was no algorithm to provide proofs for all true statements in mathematics. In fact, what he showed was even worse: either there were some true statements in mathematics that had no proofs, in which case there were certainly no algorithms that could provide such proofs, or else there were some false statements that did have proofs of their correctness, in which case any such algorithm would be disastrous.
Mathematicians then had to retreat to the question of which statements do have proofs, and how we can generate them...
Alonzo Church (of Church's thesis fame), Stephen Kleene, Emil Post (of PCP fame), Andrei Markov, John von Neumann and Alan Turing worked independently and came up with an extraordinarily simple set of building blocks that seemed to be the atoms from which all mathematical algorithms could be composed.
They each fashioned different (though similar in many respects) versions of a universal model for all algorithms, what we could call a universal algorithm machine. Turing went one step farther: he proved that there were mathematically definable, fundamental questions about the machine itself that the machine could not answer.
Turing's theoretical model of an algorithm machine, employing a simple set of mathematical structures, held out the possibility that a physical model of Turing's idea could actually be constructed. If a human could figure out an algorithm to solve a particular class of problems, then the machine could be told to follow that sequence of steps to solve them.
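The question Turing showed the machine cannot answer about itself is the halting problem, and his diagonal construction can also be sketched in Python. The names here are my own illustration, not Turing's notation: we assume, for contradiction, that some `decider` correctly predicts halting, and build a program it must get wrong.

```python
def diagonal(decider):
    # Build a program that does the opposite of whatever the
    # supposed halting-decider predicts about it.
    def g():
        if decider(g):
            while True:   # decider said "g halts", so loop forever
                pass
        return "halted"   # decider said "g loops", so halt at once
    return g

# Any concrete decider is refuted by its own diagonal program:
says_loops = lambda f: False   # a (wrong) decider: "nothing halts"
g = diagonal(says_loops)
g()  # returns "halted", so says_loops was wrong about g
```

The same trap closes on every candidate decider (one that answers "halts" gets a program that loops instead), so no algorithm can decide halting in general.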
The discoveries in electronics coincidentally supported everything that was going on theoretically. Thus,
<b>what started out as a mathematical theorem about mathematical theorems became the single most practically applied invention since the wheel and axle. Not only was this an ironic twist of fate, but the whole thing happened in a span of just <i>10 years</i>.</b>
It is natural to assume that:
1) thinking
2) learning
(the two important concerns of psychology and neurology) play an important role in the whole scenario of formal language acceptance and of the machines designed for various purposes. Languages play an important role in AI in the sense that unless a formal language is designed with its ambiguities removed and is perfected, true AI will remain asymptotic, as our mathematical elite would say.
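To make the ambiguity point concrete: the textbook ambiguous grammar E -> E op E | num assigns an expression many parse trees, not one. A small sketch (the function name is my own) counts them:

```python
def tree_count(n):
    # Number of parse trees the ambiguous grammar E -> E op E | num
    # assigns to an expression with n operands: split the operands
    # into a left part of size k and a right part of size n - k at
    # the top-level operator, and multiply the sub-counts.
    if n == 1:
        return 1
    return sum(tree_count(k) * tree_count(n - k) for k in range(1, n))

tree_count(3)  # 2 trees for "1 + 2 * 3": (1+2)*3 versus 1+(2*3)
tree_count(5)  # already 14 trees for five operands
```

These are the Catalan numbers, and they grow fast. An unambiguous grammar layered by operator precedence collapses each expression to exactly one tree, which is what "removal of ambiguities" buys you.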
This raises an important question, since the development of our whole branch of AI lies in this very question:
what should be the language of an AI machine? Or, for simplicity, let us take a common machine: what should its language be?
This takes us back to our roots.
(This is getting bigger; I'll post another reply to the same thread.)
....more to follow...
bye!