VB Strong A.I. Project

Baal Zebul
[...] I do not want any competition, so I try not to explain much about it.
You are planning a financial gain from your project? If that is the case, what prospects do you offer your potential programmers?

It uses the CES language, which is similar to English with some grammar modifications; I believe that I have already said that.
If it is similar to English, how do you propose dealing with the ambiguities that leda noted a few posts earlier?

From NN I have created sort of a mixture between Trial and Error and Back-Propagating systems. (Back-Propagating might be a little clue. Just a hint.)
This is a bit confusing. The back propagation algorithm is based on trial and error: it is a method of adjusting weights by comparing the output of the neural network with a sample set. How do you propose a mixture of those two terms, if they are already linked with each other?
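(For readers who haven't met the term: below is a minimal sketch of the "trial and error" inside back-propagation for a single sigmoid neuron, i.e. compare the output with a target and nudge the weights by the error. The learning rate, data and single-neuron layout are made up for illustration; this is not code from the project.)

' Minimal delta-rule sketch: present a sample, compare output with target,
' adjust weights in proportion to the error. Full back-propagation chains
' this correction through several layers.
Module BackPropSketch
    Function Sigmoid(x As Double) As Double
        Return 1.0 / (1.0 + Math.Exp(-x))
    End Function

    Sub Main()
        Dim weights() As Double = {0.1, -0.2} ' arbitrary starting weights
        Dim bias As Double = 0.0
        Dim learningRate As Double = 0.5

        ' One training sample: two inputs and the desired (target) output.
        Dim inputs() As Double = {1.0, 0.0}
        Dim target As Double = 1.0

        For epoch As Integer = 1 To 1000
            ' Forward pass: weighted sum through the sigmoid.
            Dim sum As Double = bias
            For i As Integer = 0 To weights.Length - 1
                sum += weights(i) * inputs(i)
            Next
            Dim output As Double = Sigmoid(sum)

            ' "Trial and error": the error drives the weight adjustment.
            Dim delta As Double = (target - output) * output * (1.0 - output)
            For i As Integer = 0 To weights.Length - 1
                weights(i) += learningRate * delta * inputs(i)
            Next
            bias += learningRate * delta
        Next

        Console.WriteLine("Trained output: " & Sigmoid(bias + weights(0) * inputs(0) + weights(1) * inputs(1)))
    End Sub
End Module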

In the real world it would have eyes that can see in 3D.
See 3D? How does that work? Even our own eyes only see in 2D; the light falls on a flat retina, obviously registering a 2D version of the world. If I remember correctly, the perception of depth is introduced in our brain, where, among other factors, the different visions from the left and right eye are used to create the illusion of depth. Maybe someone in the biology section can give you a more accurate or detailed description of this process.
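(For concreteness, the textbook way two flat images yield depth: for an ideal rectified stereo pair, depth = focal length × baseline / disparity. The numbers below are invented; this is just the standard relation, nothing from the project.)

' Depth is computed, not sensed: the same point shifts horizontally between
' the left and right image, and a smaller shift means a more distant point.
Module StereoDepthSketch
    Sub Main()
        Dim focalLengthPx As Double = 800.0   ' focal length in pixels (assumed)
        Dim baselineMeters As Double = 0.06   ' distance between the two "eyes"
        Dim disparityPx As Double = 12.0      ' horizontal shift of the same point

        Dim depthMeters As Double = focalLengthPx * baselineMeters / disparityPx
        Console.WriteLine("Estimated depth: " & depthMeters & " m")
    End Sub
End Module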

This is the best I can say: it is pattern recognition on multiple levels but structured as neural nets, using a mixture between Trial and Error and Back-Propagating systems; this gives what you would get out of Genetic Algorithms and Expert Systems.
Multiple levels of what? Layers in the neural network? First of all, I'm not sure if the terminology is correctly applied here. As I've explained above, I cannot envision a mixture between trial and error and back-propagation systems. Secondly, even if such were possible, I cannot see how the result of it would be equivalent to genetic algorithms and expert systems. Both of these are approaches to completely different areas. Genetic algorithms are good for optimization; expert systems traditionally work on a more concretely defined rule set. What you are claiming to implement, human intelligence, is neither a pure optimization process nor feasible with a concretely defined rule set.

It creates its own natural language based on interaction and knowledge. In short, it is human or next to human.
Humans learn a language by mimicking those who already mastered language. Who or what is your system mimicking? You?

I can only put it in the previous AI terminology, because then people cannot understand how it works fully.
I'm not sure if you are using AI terminology in the manner as I've understood it.

Well, what is important? If you will only open doors, then you don't have to know much. However, if you are to repair a door, then you might need to know more, right?
Yes, but how does your system know or discover which pieces of information are relevant and which are not? How does it make a correlation between an action and a state change? If they happen after each other within a certain time frame? If so, how do you determine what is the most suitable time frame?

Well, it will not know that there is a roof since the roof in no way could affect the simulation.
But if the roof is there, and the bot can "see" the roof, why would the bot ignore it? What rule in it tells it that it should use keys, but that it should not try to use roofs? However silly the last piece of the question sounds, a bot with no initial knowledge whatsoever has no idea if a roof is relevant with regard to fulfilling its mission.

It will have no dictionary; it will create a dictionary.
Ok, how will it describe a wall? Simply attaching a random label is not creating a dictionary. If it is to be of any use, it should recognize what makes a wall different from other objects, like e.g. the floor.

Yes, you are right. It can think that a key can open a door when the truth is that that key just opens one door. That is proved with empirical data, so if it solves the problem once and fails two times, then it has learned that that key just worked for that particular door.
Why? There are other conclusions conceivable. E.g. why not conclude that the two doors which did not open to the key appear to be not functioning? Given its lack of knowledge about the reliability of doors and its situation (a room, three doors and a key), it cannot choose which hypothesis is the correct one or even assume which is more likely. How do you propose to handle this?

The robot will reason in the same manner as the simulated robot. Turning is an action, and instead of sending the output to a human-viewed text it just sends it to the parts in its body that are affected by the Turn command.
Key and Door are merely objects (no matter what they are called); they just give X, Y and Z coordinates for the robot to use when calibrating its Turn command.
No, the real world is significantly more complex. Simple object recognition is not a trivial matter. How to recognize the key from the texture of the floor? How to recognize the key from different angles? And I'm not even speculating about the complexity in getting the key inserted in the correct manner in the door's key hole.

This will probably sound even worse, but I do not care; being on the safe side is always better.
What safe side? You are worried that people are going to patent your ideas? I for one am not making a run on the patent office just yet. At this point, I can only see problems with your approach, rather than innovative solutions.
 
Baal,

Seems your first projects are usually referred to as "Ant Farms", namely worlds where multiple interactive elements strive to survive; the idea is based on how some people used to (and probably still do) make ant farms in the real world.

I've seen a few older concepts from when this was first starting to be used, like bacterial growth in a petri dish, which was a kind of chaos demo in itself, since it would seed differently based upon which square you clicked in a 20x20 grid; you could even seed more than one square. It followed the rule that from your seeds it would grow to the squares that neighboured its sides; once that growth occurred, the squares surrounding it would starve the bacteria where the seed occurred, and then that dead bacteria area would become food to be eaten by the bacteria again. When it got to the edge of the petri dish it obviously reached an equilibrium of growth and decline (which exists when you have closed systems).
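(A rough sketch of that petri-dish rule as I read the description above: live squares spread to their side neighbours, a square that has spread then starves, and dead squares become food that can be recolonised. The update order and edge handling are my assumptions.)

' 20x20 dish, three cell states: empty, alive, or food (dead bacteria).
Module PetriDishSketch
    Const SIZE As Integer = 20
    Const EMPTY As Integer = 0
    Const ALIVE As Integer = 1
    Const FOOD As Integer = 2

    Sub Main()
        Dim grid(SIZE - 1, SIZE - 1) As Integer
        grid(10, 10) = ALIVE   ' "seed" one square, as if clicked

        For generation As Integer = 1 To 30
            Dim nextGrid(SIZE - 1, SIZE - 1) As Integer
            For r As Integer = 0 To SIZE - 1
                For c As Integer = 0 To SIZE - 1
                    If grid(r, c) = ALIVE Then
                        ' Spread to the four side neighbours...
                        Spread(grid, nextGrid, r - 1, c)
                        Spread(grid, nextGrid, r + 1, c)
                        Spread(grid, nextGrid, r, c - 1)
                        Spread(grid, nextGrid, r, c + 1)
                        ' ...then starve where the growth came from.
                        nextGrid(r, c) = FOOD
                    ElseIf nextGrid(r, c) = EMPTY Then
                        nextGrid(r, c) = grid(r, c)
                    End If
                Next
            Next
            grid = nextGrid
        Next

        ' Count how much of the dish is alive after the run.
        Dim alive As Integer = 0
        For r As Integer = 0 To SIZE - 1
            For c As Integer = 0 To SIZE - 1
                If grid(r, c) = ALIVE Then alive += 1
            Next
        Next
        Console.WriteLine("Alive squares after 30 generations: " & alive)
    End Sub

    Sub Spread(grid As Integer(,), nextGrid As Integer(,), r As Integer, c As Integer)
        ' Growth stops at the edge of the dish; food and empty squares are colonised.
        If r >= 0 AndAlso r < SIZE AndAlso c >= 0 AndAlso c < SIZE Then
            If grid(r, c) <> ALIVE Then nextGrid(r, c) = ALIVE
        End If
    End Sub
End Module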

I personally actually do want to get around to scripting a decent "Ant farm" for my own project, which involves creating a Virtual world where characters can be interacted with at the same level they compute at. (Namely, you can see an object, so can the character. The object is given a name, which both you and the character understand refers to the object. You can examine the object, and so can the character. ... Move the object, etc.)

However if such life is breathed into the characters in this way, will people still be able to play such games as Grand Theft, in the knowledge that the little old lady they ran over actually lived???
 
This doesn't appear very well thought out. You have mentioned that you are going to do a dozen or so things which require solving huge problems that people working on AI for years have not been able to solve...

...and you plan to do this with a group of random programmers who most likely have very little 'real' AI experience...

stryder said:
I personally actually do want to get around to scripting a decent "Ant farm" for my own project, which involves creating a Virtual world where characters can be interacted with at the same level they compute at. (Namely, you can see an object, so can the character. The object is given a name, which [...] Move the object, etc.)
Now this sounds more realistic and interesting. Many games and the like seem to already follow the "both you and the character understand refers to the object. You can examine the object, and so can the character" part.
 
Something else I should mention is that Baal's suggestion might not be as absurd as you might think, for instance take a look at http://www.visual-prolog.com/

A discussion on Prolog was raised in the forums previously, but my understanding is that Prolog wasn't just used as a language with syntax created for problem solving and solution making; it was about creating something that had the ability to make decisions.

It lacks the functionality of C++ but is still utilised in this area, so it suggests VB can be used in the same way too.
 
It is suggested that prolog can be used to do that... but it hasn't actually been done. All it does is simplify dealing with the syntax... which still just leaves you with a bot.
 
Your system can consistently identify arbitrary objects from an arbitrary viewpoint, particularly from a view of only a portion of the objects?

That is not the everyday English that I know, but I think it means that it can still understand the object even if it only sees a small part of the object, correct?

Well, it can always identify it even if it does not have the whole picture; however, it might not have enough info in order to do anything intelligent with it.

As I mentioned in my last post, you are translating both the input and output when you run the AI in a text simulation.

On the computer, no, not really. However, it will be pre-programmed so that it will get the right label on the items.
In the real world, I would translate the input, and on the output it would be translated to robotic commands (a 1 here and a 0 over there, perhaps).

Are you saying that your AI doesn't need to discard apparently irrelevant information as it might be useful later? This misses the point of what mouse was saying. I hear keys tapping from other cubicles while I sit at my desk all day long. I don't remove this fact from my knowledge base. However, I also don't react to it - I simply ignore it. I think the question was more along these lines - how will your system know which environmental changes to ignore?

Yeah, that I had not thought about, actually. However, it is easily changed. Just another topic for empirical data. Let me ask you: if you were put in command of a nuclear plant without any knowledge, then how would you know which buttons to press if something happened on the first day? You would read a manual, ask somebody or guess, right? Why would my AI do anything else?

Because it needs to know which parts of its body to move and how to move them. Since the robot is given no knowledge ahead of time, this means it not only has to figure out that putting the key in the door, turning it, and opening it will bring it closer to its goal, but it also has to figure out how to reach down, pick up the key, move to the door, put the key in the keyhole, turn the key, grasp the doorknob, and open the door. All this information is not required when the AI will simply output "get key," "open door with key."

It is of course pre-programmed to some extent. Of course it needs to know how to move. I said that it was Real Artificial Intelligence and not Divine Artificial Intelligence.

I did not say it didn't use language. I only said, if it does, what mechanisms does it have for learning it? Do you use the 'principles and parameters' approach to language learning, are you a connectionist, a lexical functionalist? What type of grammar are you using?

I'm sorry, I must have misunderstood you then.
Once again the English is beyond me; I would normally ask someone, but I can't find anyone online better than me at English. I'll get back to this when I understand the question.

You are planning a financial gain from your project? If that is the case, what prospects do you offer your potential programmers?

Well, we once had it up for discussion whether we should create an AI programmer that could build us lots of OSes and sell them. I can only offer $0 right now; however, I am a fair man, and we are 5 right now, so we would each be entitled to 20% of whatever comes in; the more that join, the smaller the percentage each.
I have students on my team (I am a student myself), and I have a professional AI developer on my team making $100,000 a year. I am talking to other professional AI developers about them joining us, whilst I have about 20 others wanting to join my team, but they are all classified as people whom I cannot trust or people who know too little.

If it is similar to English, how do you propose dealing with the ambiguities that leda noted a few posts earlier?

Since I did not fully understand what leda said, I'll get back to this later too.

This is a bit confusing. The back propagation algorithm is based on trial and error: it is a method of adjusting weights by comparing the output of the neural network with a sample set. How do you propose a mixture of those two terms, if they are already linked with each other?

That I cannot say. It is not full back-propagation. Maybe I should have said that it uses a neural-nets concept instead. This only confuses things, I see.

See 3D? How does that work? Even our own eyes only see in 2D; the light falls on a flat retina, obviously registering a 2D version of the world. If I remember correctly, the perception of depth is introduced in our brain, where, among other factors, the different visions from the left and right eye are used to create the illusion of depth. Maybe someone in the biology section can give you a more accurate or detailed description of this process.

I know, but what I mean by 3D is that if you stand here, then you do not see the same as you can see when you have moved one meter to the right. I have been making 3D games, and when I think about 2D I think of a flat surface (a platform game) where someone walks on that surface, whilst 3D means that you can move in X, Y and Z. We only see a flat surface, of course, but I still think that it is a 3D environment.
A morphing sequence does not exist; it is just our brains that make it happen. So when you see someone transform from a human to an alien in movies, then you have made that happen, but it is still called Morphing.

Multiple levels of what? Layers in the neural network? First of all, I'm not sure if the terminology is correctly applied here. As I've explained above, I cannot envision a mixture between trial and error and back-propagation systems. Secondly, even if such were possible, I cannot see how the result of it would be equivalent to genetic algorithms and expert systems. Both of these are approaches to completely different areas. Genetic algorithms are good for optimization; expert systems traditionally work on a more concretely defined rule set. What you are claiming to implement, human intelligence, is neither a pure optimization process nor feasible with a concretely defined rule set.

The professional AI developer on my team has already called it "Neural Genetic Natural Language". He, just like me, thinks that it is the core of AI: "mixing" all the other AI and, combined, creating real AI. (Not all concepts; I'll not say which.)

Humans learn a language by mimicking those who already mastered language. Who or what is your system mimicking? You?

No, reality. It creates the mimicry; I never said that it would use the same words as we do. However, if it were to have audio input, then it could mimic the speakers of a conversation and use that data in many fields.

I'm not sure if you are using AI terminology in the manner as I've understood it.

I bet you that I would if I were American.

Yes, but how does your system know or discover which pieces of information are relevant and which are not? How does it make a correlation between an action and a state change? If they happen after each other within a certain time frame? If so, how do you determine what is the most suitable time frame?

Yes, the only disadvantage is time. I press a button in room one; how the hell could I possibly know that it opened the door in room 5? I don't think there is any way, actually, except the concept of Origin Tracing, one of the early features of my design. However, why would anyone need to know that the button in room 1 opened the door in room 5? And if it really has to, then it could use Origin Tracing, but that would mean a substantial amount of trial and error really.

But if the roof is there, and the bot can "see" the roof, why would the bot ignore it? What rule in it tells it that it should use keys, but that it should not try to use roofs? However silly the last piece of the question sounds, a bot with no initial knowledge whatsoever has no idea if a roof is relevant with regard to fulfilling its mission.

Yeah, you are right. What I meant was that everything appears in text and the roof does not appear, so it can't be used. However if it is just an empty room then it would try to do something with the roof but only once till it realises that it cannot do anything with it.

Ok, how will it describe a wall? Simply attaching a random label is not creating a dictionary. If it is to be of any use, it should recognize what makes a wall different from other objects, like e.g. the floor.

OK, what is specific to walls, if you can only see text? Not colors, not material.

Why? There are other conclusions conceivable. E.g. why not conclude that the two doors which did not open to the key appear to be not functioning? Given its lack of knowledge about the reliability of doors and its situation (a room, three doors and a key), it cannot choose which hypothesis is the correct one or even assume which is more likely. How do you propose to handle this?

In SC-11 it would not; it lacks knowledge. In a human replica it would be dynamic, depending on previous patterns.

No, the real world is significantly more complex. Simple object recognition is not a trivial matter. How to recognize the key from the texture of the floor? How to recognize the key from different angles? And I'm not even speculating about the complexity in getting the key inserted in the correct manner in the door's key hole.

Well, it would make a 3D model of it and attach a texture to it. If you are talking about recognizing the object and not using it.

What safe side? You are worried that people are going to patent your ideas? I for one am not making a run on the patent office just yet. At this point, I can only see problems with your approach, rather than innovative solutions.

You cannot even see my approach, and that is the way I like it. But you also asked me what I would offer programmers that would join me; if it is money you want, then I would have you sign an NDA first, or most likely you would not even be on our team. I want to trust all my members, and I do not have any NDA with them, in order to create a casual environment.

However if such life is breathed into the characters in this way, will people still be able to play such games as Grand Theft, in the knowledge that the little old lady they ran over actually lived???

I had not thought about it in that way. Actually, I think after what you just said I'll not have the AI integrated in any games where they can die. When I wrote the framework for The Republic, I instructed the others that they should abort the simulation if they ever saw that it was going to fail.
Stryder, we should chat sometime; you seem to be a trustworthy character.

...and you plan to do this with a group of random programmers who most likely have very little 'real' AI experience...

Random programmers? Well, all of us have experience in game development. We all know VB and/or C# (at least). We all agree that military application is the way that we should go even if Johan thinks that we should start with industrial application since it is a larger market.

A discussion on Prolog was raised in the forums previously, but my understanding is that Prolog wasn't just used as a language with syntax created for problem solving and solution making; it was about creating something that had the ability to make decisions.

Almost any programming language could be used, since it is mainly just list processing; however, it requires the programming of the CES algo, so I would not do it with Prolog even though it might be possible. Stryder, I'll contact thee. Maybe we could have a chat this weekend; have you got MSN, ICQ or any chat application, or should I just mail you?

It is suggested that prolog can be used to do that... but it hasn't actually been done. All it does is simplify dealing with the syntax... which still just leaves you with a bot.

Persol, you are the kind of person I'd not want on my team; I had to learn that the hard way. You are too negative for me. I did not have the complete algo when I started (it might have worked but it was not universal). I needed Vincent on my team, who is a professional AI programmer. I asked him "what is missing" and he answered. I thought a little about it and I solved the problems. That is what I do; I am a problem solver. The AI is probably not complete yet either, but now I know that it is universal to at least 85-90 percent.
 
If you do not understand my question, how are you capable of producing an entity with language understanding capabilities? You MUST have some knowledge of linguistics and semantics in order to do this. In fact, you must have the most advanced knowledge of linguistics and semantics of anyone on the planet. This is my problem. You are basically talking about the equivalent of saying that you've built a time machine.
 
Baal Zebul said:
That is not the everyday English that I know, but I think it means that it can still understand the object even if it only sees a small part of the object, correct?

Well, it can always identify it even if it does not have the whole picture; however, it might not have enough info in order to do anything intelligent with it.

I was making the statement as general as possible, to include every conceivable situation. It can identify an apple that's 95% occluded? It can tell the difference between that apple and a picture of an apple that's 95% occluded? It can identify an apple viewed from the top, from the side, from the bottom?

If it can identify it, why wouldn't it have enough info about it to do something with it? By identify, I mean "determine the object's identity," not simply recognize that there is something there.


Baal Zebul said:
On the computer, no, not really. However, it will be pre-programmed so that it will get the right label on the items.
In the real world, I would translate the input, and on the output it would be translated to robotic commands (a 1 here and a 0 over there, perhaps).

On the computer - yes, really. You are translating the input, period. The AI doesn't have to deal with raw video or sensor data. You're telling it, "Hey, there's a key here."

Baal Zebul said:
Yeah, that I had not thought about, actually. However, it is easily changed. Just another topic for empirical data. Let me ask you: if you were put in command of a nuclear plant without any knowledge, then how would you know which buttons to press if something happened on the first day? You would read a manual, ask somebody or guess, right? Why would my AI do anything else?

Will there be a manual for your AI to read? Someone for it to speak to? That leaves guess, or in other words, make a decision at random, possibly influenced by previous knowledge. Since it must consult its previous knowledge, it has to be stored in some fashion, right? And if the AI never discards any information, won't this knowledge base get pretty big? Didn't you say this AI will be able to run on a single home computer? I'm not talking about storage space, either...I'm talking about computing power, specifically the amount it'll take to search that ever-growing knowledge base.

Incidentally, I recall you saying, "It will do this on the first try and do you know why?" This doesn't sound like guessing to me.

Baal Zebul said:
It is of course pre-programmed to some extent. Of course it needs to know how to move. I said that it was Real Artificial Intelligence and not Divine Artificial Intelligence.

First of all, that's not what you said before. You said, "We will start it up with no knowledge." If that's not true, then you shouldn't claim it is.

Second of all - that's fine, let it know how to move. Are you going to also program it with the knowledge of how to insert and turn a key in a lock? If not, then it's not the same as a text simulation (recall that this line of discussion was regarding the difference between a text simulation and a real world test.) And if so, then in either case (text or real world), your AI will need to construct a sequence of basic movements that result in the key being inserted and turned. This is definitely not a trivial task.

On top of all of this - particularly if the AI is given no knowledge, why should it even expect that putting the key in the lock and turning it will open the door? Presumably you might say that this is the only option - there's a door, a key, and a keyhole. If I add more complexity to the problem, like adding various objects to the room, not necessarily key-like, why should the AI pick the key instead of the apple? Or the pencil?

I'm not saying that your AI will never be able to solve the problem of exiting the room. I'm saying you're making grandiose claims that assume no prior knowledge but produce divine results.


Baal Zebul said:
Yeah, you are right. What I meant was that everything appears in text and the roof does not appear, so it can't be used.

Baal Zebul said:
OK, what is specific to walls, if you can only see text? Not colors, not material.

Back to my previous statement - you're translating the input. You filtered out some of the information so that your AI doesn't have to deal with it.


Baal Zebul said:
However if it is just an empty room then it would try to do something with the roof but only once till it realises that it cannot do anything with it.

And again, you said the AI will get it correct on the first try.

mouse said:
No, the real world is significantly more complex. Simple object recognition is not a trivial matter. How to recognize the key from the texture of the floor? How to recognize the key from different angles? And I'm not even speculating about the complexity in getting the key inserted in the correct manner in the door's key hole.

Baal Zebul said:
Well, it would make a 3D model of it and attach a texture to it. If you are talking about recognizing the object and not using it.

To make a 3d model of an object, you first have to recognize it. "Recognize" in this context is not "determine the identity," but rather "distinguish from the background."
 
If you do not understand my question, how are you capable of producing an entity with language understanding capabilities? You MUST have some knowledge of linguistics and semantics in order to do this. In fact, you must have the most advanced knowledge of linguistics and semantics of anyone on the planet. This is my problem. You are basically talking about the equivalent of saying that you've built a time machine.

I have also been thinking about the time machine, but I have had no progress there yet :p
No, I do not know English as well as you do. I am not saying that I am going to preprogram the linguistics of my AI either; I am saying that it will create its own language. What I mean is that all the words that I can affect it in will be in English, but the label that it has learned all by itself in our world will be a string based on the previous number of identified items. X = X + 1 basically, with a string before that number.
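(Taking that description at face value, the labelling scheme is just a fixed string plus a running counter. The "obj" prefix and the dictionary lookup below are my assumptions for illustration, not the project's code.)

' Each newly identified item gets "prefix & counter"; an item seen before
' keeps the label it was given the first time.
Imports System.Collections.Generic

Module LabelSketch
    Private itemCount As Integer = 0
    Private labels As New Dictionary(Of String, String)

    Function LabelFor(itemDescription As String) As String
        ' Reuse the label if this item has been identified before.
        If labels.ContainsKey(itemDescription) Then
            Return labels(itemDescription)
        End If
        itemCount += 1                       ' X = X + 1
        Dim label As String = "obj" & itemCount
        labels(itemDescription) = label
        Return label
    End Function

    Sub Main()
        Console.WriteLine(LabelFor("small metal thing on floor")) ' obj1
        Console.WriteLine(LabelFor("large flat thing, hinged"))   ' obj2
        Console.WriteLine(LabelFor("small metal thing on floor")) ' obj1 again
    End Sub
End Module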

I was making the statement as general as possible, to include every conceivable situation. It can identify an apple that's 95% occluded? It can tell the difference between that apple and a picture of an apple that's 95% occluded? It can identify an apple viewed from the top, from the side, from the bottom?

That's the plan.

If it can identify it, why wouldn't it have enough info about it to do something with it? By identify, I mean "determine the object's identity," not simply recognize that there is something there.

OK, let's say that you identify a nuclear missile. Well, of course you can think of the usage area, to launch it, but you will most likely not think about maybe using the same concepts in nuclear plants, now would you?
If it has identified the apple to 100% then it will know what to do with it. If it has identified it to 5 percent then it will probably do nothing with it.

On the computer - yes, really. You are translating the input, period. The AI doesn't have to deal with raw video or sensor data. You're telling it, "Hey, there's a key here."

Yes, somewhat like that. But I am not telling it "This is the Key".
I'm saying "Put this label on the identified item". Why is this a problem for you? I thought that I had explained that it is easier to read "Key" than "9sa8dy8y42" or something (anything).

Will there be a manual for your AI to read? Someone for it to speak to? That leaves guess, or in other words, make a decision at random, possibly influenced by previous knowledge. Since it must consult its previous knowledge, it has to be stored in some fashion, right? And if the AI never discards any information, won't this knowledge base get pretty big? Didn't you say this AI will be able to run on a single home computer? I'm not talking about storage space, either...I'm talking about computing power, specifically the amount it'll take to search that ever-growing knowledge base.

My database system is very effective. In order to be able to open the door with the key it needs 4 entries; that is one KB.
A human replica would not be able to run on a normal PC (at the moment at least).

Incidentally, I recall you saying, "It will do this on the first try and do you know why?" This doesn't sound like guessing to me.

Give me those 4 entries (1 KB) and I'll have it solve that problem without guessing.
And I know the next question: what will those 4 entries be? That is the limit; I can talk about most things up to that point.

First of all, that's not what you said before. You said, "We will start it up with no knowledge." If that's not true, then you shouldn't claim it is.

I thought that was understood.

Second of all - that's fine, let it know how to move. Are you going to also program it with the knowledge of how to insert and turn a key in a lock? If not, then it's not the same as a text simulation (recall that this line of discussion was regarding the difference between a text simulation and a real world test.) And if so, then in either case (text or real world), your AI will need to construct a sequence of basic movements that result in the key being inserted and turned. This is definitely not a trivial task.

No, it will have a command, but it will fill in the "blanks"; it will update its X, Y, Z data into that command (this is for the real-world simulation).
It will have to construct a sequence, yes. That is one of the key features of my AI.

On top of all of this - particularly if the AI is given no knowledge, why should it even expect that putting the key in the lock and turning it will open the door? Presumably you might say that this is the only option - there's a door, a key, and a keyhole. If I add more complexity to the problem, like adding various objects to the room, not necessarily key-like, why should the AI pick the key instead of the apple? Or the pencil?

Yes, it was decided after I first posted here that it will have the goal to get from point A to point B, and therefore it has to open the door. We decided this because otherwise it might try to do something with the apple and the pear first before it uses the key on the door. If it knows that the door is its objective then it will use the key. That still remains.

I'm not saying that your AI will never be able to solve the problem of exiting the room. I'm saying you're making grandiose claims that assume no prior knowledge but produce divine results.

Stunning results are the best way to become recognized, I'd say.

And again, you said the AI will get it correct on the first try.

Yes, and it will, believe me it will. It will not get the maze right on the first try. Now that would be godlike. But the door it will solve.

To make a 3d model of an object, you first have to recognize it. "Recognize" in this context is not "determine the identity," but rather "distinguish from the background."

Yes, to identify is to detect the object and create an entry about it.
That would allow it to use it (shape-wise) but not fully. I'd like to say some more here, but I cannot do that; I do not like competition, especially if someone is smarter than me.
 
Baal Zebul said:
That's the plan.

Quite impressive - can you faithfully identify an apple in this setting? How do you know it's not a mutant banana with an edge that happens to look like an apple?

Baal Zebul said:
OK, let's say that you identify a nuclear missile. Well, of course you can think of the usage area, to launch it, but you will most likely not think about maybe using the same concepts in nuclear plants, now would you?
If it has identified the apple to 100% then it will know what to do with it. If it has identified it to 5 percent then it will probably do nothing with it.

I misunderstood. I thought you were referring to a lack of information about the object that prevented using it, as opposed to a lack of information about the environment.


Baal Zebul said:
Yes, somewhat like that. But I am not telling it "This is the Key".
I'm saying "Put this label on the identified item". Why is this a problem for you? I thought that I had explained that it is easier to read "Key" than "9sa8dy8y42" or something (anything).

I understand labeling and the reasons for it. That's not my problem. You can call it a "key" and the AI won't know anything new about it from that label. My problem is that you're recognizing (not identifying) the key for the AI. You're telling it that there is some object sitting on the ground in front of it. This information is not contained in the real world test.

Let me try a different angle. The text simulation does not contain visual data that the AI will need to process. It contains some descriptive strings. In the text simulation, the AI does not construct those descriptive strings - you do. In a real world test, the AI will construct those descriptions (in a different format, most likely). Therefore, you are translating the input.
The text simulation presumably does not describe everything in the room in the same detail as images of the room would. You'll describe the key in whatever detail you like. You'll describe the door. Will you describe the floor? The walls? Any other objects? Will you do it with enough detail to distinguish very similar objects from each other, no matter how small the difference? You'll probably end up working on the first room of the simulation for more time than you'll work on the AI itself. Because you're not giving the AI all the possible information, you're filtering the input.
Because you translate and filter the input in the text simulation, there are marked differences between it and the real world test. Therefore, your results from the text simulation will not indicate feasibility in a real world test.

Baal Zebul said:
My database system is very effective. In order to be able to open the door with the key it needs 4 entries; that is one KB.
A human replica would not be able to run on a normal PC (at the moment at least).

Since you won't describe this with any more detail, I'll have to take your word for it. However, I refer you to your statement earlier that this AI will "not need 10 super computers, i do not even need 1. A normal PC would do just fine for my AI, but it would need to be modified a little so that it suites our AI a lil better."

Baal Zebul said:
Give me those 4 entries (1 KB) and I'll have it solve that problem without guessing.

Baal Zebul said:
Yes, and it will, believe me it will. It will not get the maze right on the first try. Now that would be godlike. But the door it will solve.

I can't stand discussions where the other person isn't consistent. Let me quote from your previous posts again.

Baal Zebul said:
However if it is just an empty room then it would try to do something with the roof but only once till it realises that it cannot do anything with it.

Baal Zebul said:
Yeah, you are right. What I meant was that everything appears in text and the roof does not appear, so it can't be used. However if it is just an empty room then it would try to do something with the roof but only once till it realises that it cannot do anything with it.

Do you not see that these two pairs of statements conflict? Either it will do it right the first time, or it won't. If it's adaptable as you say, surely it will be able to perform the same whether there's a roof or not.

Baal Zebul said:
Yes, it was decided after I first posted here that it will have the goal to get from point A to point B, and therefore it has to open the door. We decided this because otherwise it might try to do something with the apple and the pear first before it uses the key on the door. If it knows that the door is its objective then it will use the key. That still remains.

How does it know point B is beyond the door? Couldn't it just as well be hidden behind the wall opposite the door?

I still have a major problem with how you expect an AI with no knowledge base to magically know that the object is a key, that a key is what it needs to open the door, and how to operate that key in the door.

Here is this same question restated, as I posted in my last post:

malkiri said:
On top of all of this - particularly if the AI is given no knowledge, why should it even expect that putting the key in the lock and turning it will open the door? Presumably you might say that this is the only option - there's a door, a key, and a keyhole. If I add more complexity to the problem, like adding various objects to the room, not necessarily key-like, why should the AI pick the key instead of the apple? Or the pencil?

Baal Zebul said:
Yes, to identify is to detect the object and create an entry about it.
That would allow it to use it (shape-wise) but not fully. I'd like to say some more here, but I cannot do that; I do not like competition, especially if someone is smarter than me.

You're not understanding me.

1. Recognize there's an object on the floor (distinguish the object from the background).
2. Examine the object (create your 3D model).
3. From the results of the examination, classify the object.

Creating the model is not the same as realizing there's an object there to begin with. Step 1 is what mouse's reply was asking about.
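(To make step 1 concrete: a toy figure-ground test where anything that differs enough from a known background brightness is flagged as "object". Real segmentation is far harder; the flat-background assumption, the threshold and the sample values below are purely illustrative.)

' Mark pixels that differ from an assumed background brightness; steps 2 and 3
' (modelling and classifying the object) would only start from pixels found here.
Module FigureGroundSketch
    Sub Main()
        Dim background As Integer = 40        ' assumed brightness of the floor
        Dim threshold As Integer = 25
        ' A 4x6 "image": the bright values in the middle play the key.
        Dim image(,) As Integer = {
            {42, 39, 41, 40, 38, 40},
            {40, 41, 180, 175, 39, 41},
            {39, 40, 178, 182, 40, 39},
            {41, 38, 40, 41, 42, 40}
        }

        Dim objectPixels As Integer = 0
        For r As Integer = 0 To image.GetLength(0) - 1
            For c As Integer = 0 To image.GetLength(1) - 1
                ' Step 1: "there is something here that is not background".
                If Math.Abs(image(r, c) - background) > threshold Then
                    objectPixels += 1
                End If
            Next
        Next
        Console.WriteLine("Pixels flagged as object: " & objectPixels)
    End Sub
End Module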
 
Quite impressive - can you faithfully identify an apple in this setting? How do you know it's not a mutant banana with an edge that happens to look like an apple?

Already been taken care of. That is really the largest problem in this world.
Let's say you have a can of Coca Cola. You put it in your hand and press it as hard as you can; to you it is now a deformed can of cola, but to a robot, what is it? It would see it as a new object (unless it can see the text "Coca Cola" and understand that it is a deformed cola can). Well, I have a few concepts that I would integrate in highly advanced robots, but yes, it can still fail. But the more it knows, the more accurate it is.

I understand labeling and the reasons for it. That's not my problem. You can call it a "key" and the AI won't know anything new about it from that label. My problem is that you're recognizing (not identifying) the key for the AI. You're telling it that there is some object sitting on the ground in front of it. This information is not contained in the real world test.

In the real world it has eyes, it will just need the right hardware.

Therefore, you are translating the input.

Yes, in the simulated reality I am translating it; it is the easiest. In the real world I probably will too, but let me just point out that I in no way have to.

Because you translate and filter the input in the text simulation, there are marked differences between it and the real world test. Therefore, your results from the text simulation will not indicate feasibility in a real world test.

Fine, I'll make one translated version and one with randomly created strings, just for you ;)

I can't stand discussions where the other person isn't consistent. Let me quote from your previous posts again.

My greatest virtue is that I do not see myself as the smartest and the best. I see myself as one who can think of intelligent shortcuts. That is how I have done my math: creating my own way of thinking that suits me better every time I find something hard. So, I believe that everybody will know all that I know, because some of you in this discussion might even be professional AI people, but the truth is that people don't. I spent 5 hours explaining my AI; I could have done it in 2 hours, but I forgot to say one crucial thing because I took it for granted.

Do you not see that these two pairs of statements conflict? Either it will do it right the first time, or it won't. If it's adaptable as you say, surely it will be able to perform the same whether there's a roof or not.

No, I meant an empty room in which nothing intelligent could be done.

How does it know point B is beyond the door? Couldn't it just as well be hidden behind the wall opposite the door?

Well, it has been decided to give it the task of getting from coordinate 10,10 to 10;10.
 
I think we've probably gotten as far as we're going to get in this discussion. You're not quite seeing a few of my major points.

Baal Zebul said:
In the real world it has eyes, it will just need the right hardware.

I know that it has visual capabilities in the real world test. What I'm saying is that it has to decide what is an object and what is not in the real world. In the text simulation, you decide what is an object.


Baal Zebul said:
Yes, in the simulated reality I am translating it; it is the easiest. In the real world I probably will too, but let me just point out that I in no way have to.

How will you prove you don't need to if that's what you do?

Baal Zebul said:
Fine, I'll make one translated version and one with randomly created strings, just for you ;)

No, you're not getting it. I don't care what the strings contain. Whatever you call the object doesn't matter. You can't create random descriptive strings, or they won't describe the object, will they? It's these descriptive strings that mark the major difference between text and real world. I'll say it again - in the text simulation, you describe the object for the AI. In the real world, it has to describe it for itself.

Baal Zebul said:
No, I meant an empty room in which nothing intelligent could be done.

I see. However, this brings up the other point I made. Just because the AI might be able to exit the first room which is empty except for a key and door...what makes you think it'll be able to leave a room with a key, a door, and twenty other objects, in any reasonable amount of time? If it uses brute force, it could try to use each of the 22 objects on each other. If it has no knowledge, why would it decide to pick up the key and go straight to the door?
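(For scale: blindly trying to use each of the 22 objects on each of the other 21 is already 22 × 21 = 462 ordered attempts, before any sequences of actions are even considered.)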

Baal Zebul said:
Well, it has been decided to give it the task of getting from coordinate 10,10 to 10;10.

Ok.
 
No, you're not getting it.

No, I know. I was just joking with thee.

I don't care what the strings contain. Whatever you call the object doesn't matter. You can't create random descriptive strings, or they won't describe the object, will they? It's these descriptive strings that mark the major difference between text and real world. I'll say it again - in the text simulation, you describe the object for the AI. In the real world, it has to describe it for itself.

Well, if they were descriptive strings then I could not replace them, could I?
You say Apple, I say "Äpple". You say Key, I say "Nyckel". It still means the same thing, just different languages.

I see. However, this brings up the other point I made. Just because the AI might be able to exit the first room which is empty except for a key and door...what makes you think it'll be able to leave a room with a key, a door, and twenty other objects, in any reasonable amount of time? If it uses brute force, it could try to use each of the 22 objects on each other. If it has no knowledge, why would it decide to pick up the key and go straight to the door?

You have gotten that wrong. There will be 10 objects in room one and it will pick the key and use it on the door (after it has identified everything, because we tell it to identify everything before interacting in SC-11).

why would it decide to pick up the key and go straight to the door?

Since it has the goal to get from Point A to Point B and the door is the first obstacle.

I know that it has visual capabilities in the real world test. What I'm saying is that it has to decide what is an object and what is not in the real world. In the text simulation, you decide what is an object.

OK, I'll be brief.
Yes.

I think we've probably gotten as far as we're going to get in this discussion. You're not quite seeing a few of my major points.

OK, next reply. Write down your major points, because I think I have answered all that you have asked.
 
It is of course pre-programmed to some extent. Of course it needs to know how to move. I said that it was Real Artificial Intelligence and not Divine Artificial Intelligence.
It is interesting to note at this point that humans, and many animals, do learn how to move. We are born with an instinctive urge to learn how to walk as quickly as we can, but have no pre-programmed knowledge of how exactly we should do that.

That I cannot say. It is not full back-propagation. Maybe I should have said that it uses a neural-nets concept instead. This only confuses things, I see.
How can it be not full back-propagation? At what point does back-propagation stop?

The professional AI developer on my team has already called it "Neural Genetic Natural Language". He, just like me, thinks that it is the core of AI: "mixing" all the other AI and, combined, creating real AI. (Not all concepts; I'll not say which.)
Mixing entirely different concepts of AI is quite an effort. Simply tossing them together obviously is not going to work. E.g. GAs producing efficient NNs? It is not going to happen without throwing either an amazing set of hardware at it (which you suggested you didn't need) or a really crafty and resource-friendly method of coding a NN, and all variables that can be associated with it (topology, weight modification algorithms, etc.), on a gene set. Of course, I'm quietly ignoring the ridiculous amount of time it will take to tweak such a GA to get out of local optima and such. I'd be afraid of the amount of other problems I will encounter if I start to think longer than two minutes about it.
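(For scale, the simplest possible case of "coding a NN on a gene set": the genome is just the weight vector of one fixed neuron, and evolution is mutate-and-keep-if-better. Encoding topology and learning rules as well, as described above, is where the real cost appears. The toy task and mutation size below are assumptions, not anything from the project.)

' Genome = {w1, w2, bias} of a single sigmoid neuron; hill-climbing "GA".
Module NeuroEvolutionSketch
    Private rng As New Random(1)

    Function Output(genes() As Double, x1 As Double, x2 As Double) As Double
        ' genes(0..1) are weights, genes(2) is the bias.
        Dim sum As Double = genes(0) * x1 + genes(1) * x2 + genes(2)
        Return 1.0 / (1.0 + Math.Exp(-sum))
    End Function

    Function Fitness(genes() As Double) As Double
        ' Toy task: approximate logical OR. Higher (less error) is better.
        Dim totalError As Double = 0
        totalError += Math.Abs(0 - Output(genes, 0, 0))
        totalError += Math.Abs(1 - Output(genes, 0, 1))
        totalError += Math.Abs(1 - Output(genes, 1, 0))
        totalError += Math.Abs(1 - Output(genes, 1, 1))
        Return -totalError
    End Function

    Sub Main()
        Dim best() As Double = {0.0, 0.0, 0.0}
        For generation As Integer = 1 To 2000
            ' Mutate a copy of the best genome and keep it if it scores better.
            Dim child() As Double = CType(best.Clone(), Double())
            For i As Integer = 0 To child.Length - 1
                child(i) += (rng.NextDouble() - 0.5) * 0.2
            Next
            If Fitness(child) > Fitness(best) Then best = child
        Next
        Console.WriteLine("Evolved OR(1,0) = " & Output(best, 1, 0))
    End Sub
End Module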

No, reality. It creates the mimicry; I never said that it would use the same words as we do.
How can you create something you are going to mimic? I'll try again: humans learn language and speech from others. An individual human baby e.g. does not learn a language, as we know it, simply by being in an empty room with a key and a door. The child will perhaps associate certain labels to the room, the key, walls, and whatever is there, but aside from that it will be deprived of something as complex and useful as English or any other language which took centuries, if not millennia, to develop.

However, if it were to have audio input, then it could mimic the speakers of a conversation and use that data in many fields.
You are making a quantum leap here. Understanding spoken language is very difficult. It takes humans years to fully master it, and with good reason. Take the sound of the word "to" e.g. Based on context it can be associated with a verb "to walk", or, as a superlative (? not sure if this is the correct term) as in "too much", or as a number as in "two keys". If even humans, equipped with an impressive bunch of wetware, need long exposure to a language to figure this stuff out, how do you propose your AI can understand language in a reasonable amount of time with common hardware?

Yes, the only disadvantage is time. I press a button in room one; how the hell could I possibly know that it opened the door in room 5? I don't think there is any way, actually, except the concept of Origin Tracing, one of the early features of my design.
My problem is not with an action leading to an event it can not see or hear. My problem is this: if you associate an action with an event, how do you know which action you are going to couple with which event? While I press the buttons of this keyboard, I'm hearing music, a chopper flying at some distance, people chatting in a corridor and occasionally a phone ringing. I am seeing text appearing in a form, but I also see plants waving in the wind, I see cars driving by. While I'm hitting those buttons, I'm also performing an extensive array of other actions: breathing, regulating digestion, pumping blood around, and many many other issues I am luckily not consciously aware of. In short, while my finger hits a button, a lot of events are happening which I could associate with hitting the button, or perhaps any of the other actions I was performing at the same time. How does your AI decide which events can be associated with which action, and which not? Mind you, to discover this by just trial and error would be immensely resource-expensive, given the amount of events and actions in a real world situation.
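(The naive time-window association this question is aimed at could look like the sketch below: every event is credited to every action taken in the preceding couple of seconds, and a counter is kept per action-event pair. This is not the project's undisclosed "Origin Tracing"; the two-second window and the sample log are invented for illustration, and the point is exactly the one made above: the phone ringing gets credited to the button press just as readily as the door opening does.)

' Count co-occurrences of (action, event) pairs within a fixed time window.
Imports System.Collections.Generic

Module ActionEventSketch
    Sub Main()
        Dim windowSeconds As Double = 2.0
        ' (time in seconds, name) for actions taken and events observed.
        Dim takenActions As New List(Of Tuple(Of Double, String)) From {
            Tuple.Create(1.0, "press button"),
            Tuple.Create(5.0, "turn key")
        }
        Dim observedEvents As New List(Of Tuple(Of Double, String)) From {
            Tuple.Create(1.5, "phone rings"),
            Tuple.Create(2.0, "door 5 opens"),
            Tuple.Create(5.5, "door 1 opens")
        }

        Dim counts As New Dictionary(Of String, Integer)
        For Each ev In observedEvents
            For Each act In takenActions
                Dim delay As Double = ev.Item1 - act.Item1
                If delay >= 0 AndAlso delay <= windowSeconds Then
                    Dim pair As String = act.Item2 & " -> " & ev.Item2
                    If counts.ContainsKey(pair) Then
                        counts(pair) += 1
                    Else
                        counts(pair) = 1
                    End If
                End If
            Next
        Next

        ' Only many repetitions could separate real causes from coincidences.
        For Each kvp In counts
            Console.WriteLine(kvp.Key & " : " & kvp.Value)
        Next
    End Sub
End Module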

OK, what is specific to walls, if you can only see text? Not colors, not material.
To answer that, I'd need to know how you textually describe the wall to your AI.
 
Since it has the goal to get from Point A to Point B and the door is the first obstacle.
Why? Isn't the wall the most probable first obstruction? Or is point B directly behind the door?
 
Baal Zebul said:
OK, next reply. Write down your major points, because I think I have answered all that you have asked.

You've quoted most of the points, but you usually only address certain issues related to the points. Here they are:

1. Key differences between a text simulation and a real world test mean a success in a text simulation does not imply a real world application will succeed.

a. The major difference is with object detection. In a text simulation, you detect the objects for the AI, which it will then proceed to identify based on your descriptions. In a real world test, it must detect them on its own based on video input.

b. A related difference is with object identification. In a text simulation, you describe the objects in some manner the AI can understand. In a real world test, it must determine the objects' characteristics based on video input.

2. An AI with no initial knowledge base is expected to determine that the key is the object it needs to open the door.

a. There are ten objects in the room for which the AI has no predetermined classification. There is absolutely no reason the AI will pick the key first, except by chance.

b. Related is the expectation that the AI will grasp the concept of a key. That is, it will understand what a key is - what it looks like, what it's used for. If the AI has no initial knowledge base, where does this knowledge come from? If you have no knowledge of keys or doors and I handed you a key, would you know what it's used for? No.

3. Practical implementation of the knowledge base

a. Since you won't give more information on your database, I'm unable to pursue this any further. However, I'm naturally skeptical that you can contain and search the amount of knowledge required to produce behavior equivalent to that of a human.


These are my major points. Incidentally, I also have a problem with how you define your goal. In a text simulation, telling the robot to move from coordinate 0,0 to 10,0 might be feasible since the simulation could provide the robot with information on its location. However, in the real world, there is often no such provider of information, except perhaps GPS. How will the robot know that coordinate 10,0 is behind the door? Because that's 10 coordinates in front of it? What if you turned it 90 degrees before you started the AI?
 
OK, I'll answer malkiri's questions first.

I'll try to address your problems as well as I can.

The major difference is with object detection. In a text simulation, you detect the objects for the AI, which it will then proceed to identify based on your descriptions. In a real world test, it must detect them on its own based on video input.

Yes, that is right.

A related difference is with object identification. In a text simulation, you describe the objects in some manner the AI can understand. In a real world test, it must determine the objects' characteristics based on video input.

Yes, I'd say that is correct too.

So, how could it be implemented in the real world? Why does there have to be any difference? No matter which "world" you are in, there still remains one constant. (However, I cannot tell thee; for all I know you might be working for the "enemy" (all are my enemies till the opposite is proved).)
I guess that this is what you would want me to address, right? Contact me, and if I believe that I can trust you then I'll fill you in on my secret.

There are ten objects in the room for which the AI has no predetermined classification. There is absolutely no reason the AI will pick the key first, except by chance.

That is what you would think, yes. But I try my best to give (whoever) the WOW feeling. Especially important if we try to implement it in military applications, as we planned to do in the first place.

Related is the expectation that the AI will grasp the concept of a key. That is, it will understand what a key is - what it looks like, what it's used for. If the AI has no initial knowledge base, where does this knowledge come from? If you have no knowledge of keys or doors and I handed you a key, would you know what it's used for? No.

No, I would only be able to guess. So you would say that it is super-human intelligence then? Well, it is comforting to know that my AI is better than humans in some fields at least, since I know that it will not be as good as humans in others.

Since you won't give more information on your database, I'm unable to pursue this any further. However, I'm naturally skeptical that you can contain and search the amount of knowledge required to produce behavior equivalent to that of a human.

Yes, and I like people who are sceptical. It just provides me with a greater challenge.
But I would not want people who merely say Yes; the people who sometimes say No and ? are sometimes needed. (At least on this project, so that I can solve all the problems that might occur.)

These are my major points. Incidentally, I also have a problem with how you define your goal. In a text simulation, telling the robot to move from coordinate 0,0 to 10,0 might be feasible since the simulation could provide the robot with information on its location. However, in the real world, there is often no such provider of information, except perhaps GPS. How will the robot know that coordinate 10,0 is behind the door? Because that's 10 coordinates in front of it? What if you turned it 90 degrees before you started the AI?

Well, that is a real question.
GPS? Maybe in a human replica, but no, not otherwise.
No, if it knows the concept of movement and knows that point B is 100 cm in that direction, then it could update point B's location according to its movement. It will also memorize the maze.
Bet that I did not address it the way you wanted me to either?
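(What that answer implies is ordinary dead reckoning: keep the goal as a position in the robot's own frame and update it after every move or turn, so no GPS is needed. The sketch below is the standard geometry, not project code; as malkiri notes, rotate the robot before start-up, or let the odometry drift, and the stored goal is simply wrong.)

' Goal kept in the robot's own frame: x is forward, y is to the left (cm).
Module DeadReckoningSketch
    Private goalX As Double = 100.0
    Private goalY As Double = 0.0

    Sub MoveForward(distanceCm As Double)
        ' Driving forward brings the goal closer along the forward axis.
        goalX -= distanceCm
    End Sub

    Sub TurnLeft(degrees As Double)
        ' Rotating the robot rotates the goal the opposite way in its frame.
        Dim rad As Double = -degrees * Math.PI / 180.0
        Dim newX As Double = goalX * Math.Cos(rad) - goalY * Math.Sin(rad)
        Dim newY As Double = goalX * Math.Sin(rad) + goalY * Math.Cos(rad)
        goalX = newX
        goalY = newY
    End Sub

    Sub Main()
        MoveForward(40)
        TurnLeft(90)
        MoveForward(10)
        Console.WriteLine("Goal is now at x=" & goalX & " cm, y=" & goalY & " cm in the robot frame")
    End Sub
End Module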

malkiri, mail me instead.

OK, mouse.

Why? Isn't the wall the most probable first obstruction? Or is point B directly behind the door?

Sure, they are equal. But it cannot interact with the wall, and if it tries then it would fail.

It is interesting to note at this point that humans, and many animals, do learn how to move. We are born with an instinctive urge to learn how to walk as quickly as we can, but have no pre-programmed knowledge of how exactly we should do that.

Are you not forgetting that newborn babies can swim? An ability they lose at a later point. They inherit the info in their genes on how to walk, but it might not be connected fully.

How can it be not fully back-propagating? At what point does back-propagation stop?

Mixing entirely different concepts of AI is quite an effort. Simply tossing them together obviously is not going to work. E.g. GAs producing efficient NNs? It is not going to happen without throwing either an amazing amount of hardware at it (which you suggested you didn't need) or a really crafty and resource-friendly method of coding a NN, and all variables that can be associated with it (topology, weight modification algorithms, etc.), on a gene set. Of course, I'm quietly ignoring the ridiculous amount of time it will take to tweak such a GA to get out of local optima and such. I'd be afraid of the number of other problems I would encounter if I started to think about it for longer than two minutes.
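To make the "coding a NN on a gene set" remark concrete, here is a purely illustrative neuroevolution sketch: the genome is just the flattened weight vector of a tiny fixed-topology network, and a basic GA mutates it toward solving XOR. Even this toy setup burns thousands of fitness evaluations, which hints at the resource problem for anything non-trivial. All sizes, rates and names are my own assumptions, not anything from the project under discussion:

```python
import random
import math

# Fixed topology: 2 inputs -> 2 hidden -> 1 output, with biases.
# Genome = the 9 weights laid out flat: 2x2 input weights + 2 hidden
# biases + 2 output weights + 1 output bias.
GENOME_LEN = 9
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(w, x):
    h1 = sigmoid(w[0] * x[0] + w[1] * x[1] + w[2])
    h2 = sigmoid(w[3] * x[0] + w[4] * x[1] + w[5])
    return sigmoid(w[6] * h1 + w[7] * h2 + w[8])

def fitness(genome):
    # Negative squared error over the XOR table (higher is better).
    return -sum((forward(genome, x) - y) ** 2 for x, y in XOR)

def mutate(genome, rate=0.3, scale=0.8):
    return [g + random.gauss(0, scale) if random.random() < rate else g
            for g in genome]

def evolve(pop_size=60, generations=300):
    population = [[random.uniform(-2, 2) for _ in range(GENOME_LEN)]
                  for _ in range(pop_size)]
    for _ in range(generations):                      # 60 x 300 fitness evaluations
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 4]         # simple truncation selection
        population = parents + [mutate(random.choice(parents))
                                for _ in range(pop_size - len(parents))]
    return max(population, key=fitness)

best = evolve()
for x, y in XOR:
    print(x, y, round(forward(best, x), 3))
```

Whether a given run actually converges on XOR depends on luck with the random seed; that fragility, on a four-row truth table, is exactly the point about scaling this up.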

No, you are not getting me. I have a totally unique AI but it is similar to NN, Natural Language, Pattern Recognition. However it can provide the same results as Expert Systems, GA and almost everything you can imagine; it is human. That is all I can say about it.

How can you create something you are going to mimic? I'll try again: humans learn language and speech from others. An individual human baby, for example, does not learn a language as we know it simply by being in an empty room with a key and a door. The child will perhaps associate certain labels with the room, the key, the walls, and whatever is there, but aside from that it will be deprived of something as complex and useful as English or any other language, which took centuries, if not millennia, to develop.

Yes, how can I? Why does it have to learn a language as we learn it when it is not human itself?
OK, patterns alter the pattern of thinking according to the environment.

You are making a quantum leap here. Understanding spoken language is very difficult. It takes humans years to fully master it, and with good reason. Take, for example, the sound of the word "to". Based on context it can be part of a verb, as in "to walk", an intensifier, as in "too much", or a number, as in "two keys". If even humans, equipped with an impressive bunch of wetware, need long exposure to a language to figure this stuff out, how do you propose your AI can understand language in a reasonable amount of time on common hardware?

Not really, I just use my universal algorithm in reverse.
This is however an interesting topic and some problems will probably occur, but we will only use this later on in (probably) military applications, so it is a topic for discussion then. But there are no problems that I cannot solve.

My problem is not with an action leading to an event it cannot see or hear. My problem is this: if you associate an action with an event, how do you know which action you are going to couple with which event? While I press the buttons of this keyboard, I'm hearing music, a chopper flying at some distance, people chatting in a corridor and occasionally a phone ringing. I am seeing text appearing in a form, but I also see plants waving in the wind and cars driving by. While I'm hitting those buttons, I'm also performing an extensive array of other actions: breathing, regulating digestion, pumping blood around, and many other things I am luckily not consciously aware of. In short, while my finger hits a button, a lot of events are happening which I could associate with hitting the button, or perhaps with any of the other actions I was performing at the same time. How does your AI decide which events can be associated with which action, and which cannot? Mind you, discovering this by just trial and error would be immensely resource-expensive, given the number of events and actions in a real-world situation.

That is a great question.
Well, if your first interaction method on a computer (without any errors) is using the mouse and the keyboard, then why should you need more empirical evidence that the keyboard and mouse are doing what you see on the screen? I mean, if you already will try to interact by using the buttons, then "why change it if it ain't broken"?
In a full human replica it would work to the full extent you would imagine. That is all I can say.
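As a side note, the association problem raised two posts up can be made concrete. A naive learner could simply count which events follow which actions within a fixed time window; over many trials "turn key, then door unlocks" would stand out from coincidences like a phone ringing, but the table it must maintain grows with the number of action-event pairs, which is exactly the resource objection. A rough sketch of that naive counting, with entirely made-up action and event names:

```python
from collections import defaultdict

# Counts of how often each event was observed within WINDOW seconds of
# each action. This is the naive co-occurrence scheme discussed in the
# thread, not anything described by the project itself.
WINDOW = 2.0
counts = defaultdict(int)          # (action, event) -> co-occurrences
action_totals = defaultdict(int)

def observe(log):
    """log: list of (timestamp, kind, name) with kind 'action' or 'event'."""
    for t_a, kind_a, action in log:
        if kind_a != "action":
            continue
        action_totals[action] += 1
        for t_e, kind_e, event in log:
            if kind_e == "event" and 0 < t_e - t_a <= WINDOW:
                counts[(action, event)] += 1

def association_strength(action, event):
    """Fraction of occurrences of `action` followed by `event` within the window."""
    if action_totals[action] == 0:
        return 0.0
    return counts[(action, event)] / action_totals[action]

# One noisy trial: the key press is followed by the door unlocking,
# but unrelated events (music, a phone) are happening as well.
trial = [
    (0.0, "event", "music_playing"),
    (1.0, "action", "turn_key"),
    (1.5, "event", "door_unlocks"),
    (2.2, "event", "phone_rings"),
    (5.0, "action", "touch_wall"),
    (6.8, "event", "phone_rings"),
]
observe(trial)
print(association_strength("turn_key", "door_unlocks"))  # 1.0
print(association_strength("turn_key", "phone_rings"))   # also 1.0 after one trial
```

After a single trial the spurious pairing looks exactly as strong as the real one; only many repetitions, over an ever-growing table of pairs, would separate them.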

To answer that, I'd need to know how you textually describe the wall to your AI.

I don't actually describe the wall, since it has no function. But I would describe it as "Wall" and that is all that it has to know.
To open the door it has to know "Door", "Key" and that they are objects (object = something that it can see or touch and possibly interact with). It also needs to know "Unlocked" and "Locked" and that they are the two statuses of the door.
The words do not matter, since it will be able to use them in the same manner whatever the name, but I think you all get that by now.
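For what it is worth, the kind of textual description being argued about here can be written down very compactly, which is also why the critics call it simplistic: a few named objects, a couple of statuses, and an explicit, hand-coded list of which actions each object allows. A minimal sketch of such a toy world (the naming and structure are my own illustration, not the project's):

```python
from dataclasses import dataclass, field
from typing import Optional, Set

@dataclass
class Thing:
    name: str
    status: Optional[str] = None             # e.g. "Locked" / "Unlocked" for the door
    allowed_actions: Set[str] = field(default_factory=set)

# The whole "room" as described in the thread: a wall the AI may only look at,
# a key it can pick up and use, and a door whose status the key can change.
world = {
    "Wall": Thing("Wall", allowed_actions={"look"}),
    "Key":  Thing("Key",  allowed_actions={"look", "pick_up", "use_on"}),
    "Door": Thing("Door", status="Locked",
                  allowed_actions={"look", "use_key_on"}),
}

def act(action, target, instrument=None):
    """Apply an action if the target allows it; otherwise the attempt fails."""
    thing = world[target]
    if action not in thing.allowed_actions:
        return f"Nothing happens: {target} does not support '{action}'."
    if action == "use_key_on" and instrument == "Key":
        thing.status = "Unlocked"
        return "The door is now Unlocked."
    return f"You {action} the {target}."

print(act("use_key_on", "Wall", "Key"))   # fails: the wall only allows 'look'
print(act("use_key_on", "Door", "Key"))   # succeeds: the door's status flips
```

Note that the knowledge of what a wall permits is hard-coded here; without something like it, an AI with no initial knowledge has no reason to try the key before the wall, which is the objection pressed later in the thread.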


Anything more?
 
Can we lock this thread? Baal has moved from being ambitious, to lying, to being delusional. Now he's just 'playing along'.

Hell, statements like "Not really, I just use my universal algorithm in reverse" don't deserve to be in a science forum. If I came on here and said I was going to make an AI by waving my magic wand backwards, I'd expect it to be closed. Baal is just using pseudo-scientific terms... most of which are used incorrectly.
 
Sure, I do not mind.

ambitious, to lying, to being delusional
Ambitious I can agree with; lying I have never done. Delusional? The only time I am delusional is when I am tired and see things moving that aren't really there.
Playing along, yes. I thought that would be a better attitude.

Let me just explain why it sounds so strange.
I do not have experience with other AI; I came up with my AI all alone. Do you know what CBR is? Well, think of CBR technology three times as advanced, combining some features from NN, and then you have my AI.
How can you combine some features from NN with CBR? Well, if that is what you are interested in hearing, then maybe I came to the wrong place in my search for good programmers.
"If the facts don't fit the theory, then change the facts"
That is all I have to say.
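For readers unfamiliar with the term: CBR (case-based reasoning) solves a new problem by retrieving the stored case most similar to it and reusing or adapting that case's solution. A minimal, purely illustrative retrieve-and-reuse step, with an invented feature encoding, might look like the sketch below; how it would become "three times as advanced" or acquire NN features is never explained in the thread:

```python
import math

# A "case" is (problem_features, solution). The features are an invented
# toy encoding of the room: (door_locked, key_visible, key_held).
case_base = [
    ((1, 1, 0), "pick up the key"),
    ((1, 0, 1), "use the key on the door"),
    ((0, 0, 1), "open the door and walk through"),
]

def distance(a, b):
    # Plain Euclidean distance between feature vectors; a real CBR system
    # would use a domain-specific (possibly learned) similarity measure.
    return math.dist(a, b)

def retrieve(problem):
    """Return the solution of the most similar stored case (1-nearest neighbour)."""
    best_case = min(case_base, key=lambda case: distance(case[0], problem))
    return best_case[1]

# New situation: door locked, key visible, not yet held.
print(retrieve((1, 1, 0)))       # -> "pick up the key"
print(retrieve((1, 0.2, 0.9)))   # closest stored case -> "use the key on the door"
```

The interesting work in real CBR systems lies in the similarity measure and in adapting the retrieved solution, neither of which this sketch attempts.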
 
Baal Zebul said:
Sure, they are equal. But it cannot interact with the wall, and if it tries then it would fail.
But the point is that it tries. You have no way of saying in advance, if the AI has no initial knowledge about doors and walls, what it would go for first.

I have a totally unique AI but it is similar to NN, Natural Language, Pattern Recognition.
"I have a totally unique computer, it is similar to Object Oriented programming, C++ and web browsing", is a statement equally unintelligible to your statement.

However it can provide the same results as Expert Systems, GA and almost everything you can imagine; it is human. That is all I can say about it.
And based on that utterly unintelligible statement, you claim to have the holy grail of AI.

Why does it have to learn a language as we learn it when it is not human itself?
The fact that it needs a bunch of humans and much time to develop a language does not make it "not human". Learning a language is one of the things that we obviously are capable of, and it is often regarded as one of the signs of intelligence.

Not really, I just use my universal algorithm in reverse.
I agree with Persol on this. Do you really mean this seriously, or are you just joking around and having fun here? In the first case, get help and a proper education. In the second case, get lost.

This is however an interesting topic and some problems will probably occur, but we will only use this later on in (probably) military applications, so it is a topic for discussion then. But there are no problems that I cannot solve.
How do you know that in advance, if you do not know what problems you would encounter?

That is a great question.
No, it was quite a fundamental one. The fact that you have to think about it proves that your project is in an infant state, if it exists in the first place.

Well, if your first interaction method on a computer (without any errors) is using the mouse and the keyboard, then why should you need more empirical evidence that the keyboard and mouse are doing what you see on the screen?
That's knowledge. Somebody told me how computers work. Nobody has told your AI how keys work.

I don't actually describe the wall, since it has no function. But I would describe it as "Wall" and that is all that it has to know.
But you'd have to code which actions are allowed on the wall, and which are not.

To open the door it has to know "Door", "Key" and that they are objects (object = something that it can see or touch and possibly interact with). It also needs to know "Unlocked" and "Locked" and that they are the two statuses of the door.
That is incredibly simplistic. As Malkiri tried to point out to you, it is a far cry from a real world simulation.

Anything more?
Lots, but I'm afraid there is little use. You do not answer the questions in concrete terms, but rather by:
  • concatenating AI terms in random order,
  • unjustly simplifying the problem presented until you can easily wave it away,
  • ignoring the problem presented altogether,
  • or stating that you cannot tell more out of fear of competition.
I suppose you are indeed in the wrong place if you think you can enlist programmers with such a shaky argument.
 