Anyone interested in a VB.NET and/or C# project for creating real AI should mail me at da_rikard@hotmail.com, and I will tell you more about the CES Project and the SC-11 simulation.
Baal Zebul said:
Anyone interested in a VB.NET and/or C# project for creating real AI should mail me at da_rikard@hotmail.com, and I will tell you more about the CES Project and the SC-11 simulation.

"Real AI" in VB.NET is just a funny group of words that should never be used together.
Baal Zebul said:
We will start an artificial lifeform up in a room (it will be built as a robot in the near future). We will start it up with no knowledge (which will prove how adaptable it is). It will have to figure out how to use the key on the door. It will do this on the first try, and do you know why? Because it is as smart as you and I.

Ok... you are skipping some steps. On the first try? It must be pure chance if it has no knowledge. The only reason you or I would try to use the key in the lock is because we've been trained to use keys in this manner.
I have been programming since I was 7; I think I can determine which language I wish to use myself.
My algorithm is universal, so it would solve almost any problem.
"I'd be interested to see how you plan to implement human intelligence in a computer, when we don't know what human intelligence actually is."

Not knowing what to implement is not a fatal problem if you do know what it should mimic, e.g. a human. The Turing test circumvents our inability to accurately define intelligence with a comparison between humans and software on one feature of intelligence (the ability to fool a questioner).
Baal Zebul said:
Please note that I do not have to tell it anything about its world; only that if I do not, then it will be next to impossible to read the output.

Why? How is the readability of the output related to its initial knowledge of the world?
Baal Zebul said:
All that it really needs to get as an input from the SC-11 simulation is to know when a state changes.

So what state changes do you envision? How do you describe them? How is your artificial life form able to interact? Does it have the ability to actively change states? If so, should I see it as a MUD-like command language (e.g. "use key on door" type of commands)?
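To make the question concrete, here is a minimal sketch of what a MUD-like command interface reporting only state changes might look like. This is purely illustrative (in Python, for brevity); the `Simulation` class, commands, and state names are all made up, since the thread never specifies SC-11's actual interface.

```python
# Hypothetical sketch: the agent issues MUD-style commands and the
# simulation reports only the state transitions each command caused.
class Simulation:
    def __init__(self):
        self.state = {"door": "locked", "key": "on_floor"}

    def execute(self, command):
        """Apply a command and return the list of state changes it caused."""
        changes = []
        if command == "take key" and self.state["key"] == "on_floor":
            self.state["key"] = "held"
            changes.append(("key", "on_floor", "held"))
        elif (command == "use key on door"
              and self.state["key"] == "held"
              and self.state["door"] == "locked"):
            self.state["door"] = "unlocked"
            changes.append(("door", "locked", "unlocked"))
        return changes

sim = Simulation()
print(sim.execute("use key on door"))  # [] -- key not held yet
print(sim.execute("take key"))         # [('key', 'on_floor', 'held')]
print(sim.execute("use key on door"))  # [('door', 'locked', 'unlocked')]
```

Under this scheme the agent's only percept is the change list, which is exactly the "know when a state changes" input the post describes.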
Baal Zebul said:
I don't want to say much about the AI, and I do not want people to believe me; actually, I want to prove people wrong. But the truth is that I'll need someone with lots of time who is a grand VB.NET/C# programmer.

Well, if you want people to get enthusiastic about your project, you at least need to be able to draw a general outline of how your concept works. As Persol and Malkiri illustrated, there are some serious issues you have left unaddressed in your explanations. Without addressing them properly, I can imagine that it will be quite difficult to inspire talented programmers to spend time on a project lacking a concrete basis.
You could create and name an object a chair, and the intelligence would then decide what it should remember as a chair. Note, though, that with human intelligence a definition of "chair" has to be repeated many times before what a chair actually is can be ascertained; in fact, our primitive development at that young an age occasionally causes other things to be referred to as chairs.
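The point above, that a label only stabilises after many repetitions and that early over-generalisation fades as counter-examples accumulate, can be sketched with a toy frequency-based learner. Everything here (the feature tuples, the labels, the voting scheme) is an illustrative assumption, not anyone's actual design.

```python
from collections import Counter

class ConceptLearner:
    """Remembers, per feature set, which label it has heard most often."""
    def __init__(self):
        self.votes = {}  # feature tuple -> Counter of labels heard

    def observe(self, features, label):
        self.votes.setdefault(features, Counter())[label] += 1

    def classify(self, features):
        counts = self.votes.get(features)
        return counts.most_common(1)[0][0] if counts else None

learner = ConceptLearner()
stool = ("four_legs", "flat_seat")
chair = ("four_legs", "flat_seat", "backrest")
learner.observe(stool, "chair")   # early mistake: stool called "chair"
learner.observe(stool, "stool")
learner.observe(stool, "stool")   # repetition corrects the mistake
learner.observe(chair, "chair")
print(learner.classify(stool))    # stool
print(learner.classify(chair))    # chair
```

The early mislabel is outvoted, mirroring how repeated exposure narrows a child's over-broad use of "chair".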
Baal Zebul said:
Stryder gets it but not you. He said the thing about calling the chair a chair. Well, that is all I tell it in SC-11 (even though it does not even have to know that). But what is easiest to understand when reviewing the output: that it says "Key", or "2a42hi412sd", or some other randomly self-created "word"?

Fine, what process would get your AI to decide to produce a random string for something? Does it make an inventory of all that it "sees" in its virtual world, and attach labels to it if not done by you already? Which brings me to another question: does your system take into account line-of-sight? Does it register objects hidden behind a wall?
Baal Zebul said:
Yes, you should. Somewhat like that. Let me ask you, how do you know that you have opened a door? Maybe you see that the lock has moved, or maybe you hear the click. There are tons of status changes.

We humans have figured out which status changes are important and which we can filter out. That took time to learn, and moreover, we have a mechanism enabling us to learn in the first place. What mechanism do you envision for your system to know which status changes are relevant to completing its goals and which can be disregarded?
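For comparison, one conceivable (and deliberately naive) answer to the question above is a filter that keeps only status changes involving objects mentioned in the current goal. This is not a mechanism the thread proposes; it is a strawman sketch to show what such a mechanism would even have to look like.

```python
def relevant_changes(changes, goal_objects):
    """Keep only state changes whose object appears in the goal set."""
    return [c for c in changes if c[0] in goal_objects]

changes = [
    ("door", "locked", "unlocked"),
    ("lamp", "off", "on"),          # irrelevant to opening the door
    ("lock", "engaged", "moved"),
]
goal = {"door", "lock", "key"}
print(relevant_changes(changes, goal))
# [('door', 'locked', 'unlocked'), ('lock', 'engaged', 'moved')]
```

Even this trivial filter presupposes that the agent already knows which objects relate to its goal, which is exactly the learning problem the question raises.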
Baal Zebul said:
However, my AI only needs the status changes when it has no information.

What do you define with this information? Is it its goal list, its task list, its list of objects it can "see", a list of previous status changes, or a combination of these?
Baal Zebul said:
I have designed SC-11 accordingly, where it survives with the absolute minimum of information.

How do you qualify a minimum of information? Do you mean that if you tell it that there is a room, a door, a key, and that its sole purpose in life is to get through the door, this is the minimum information needed for it to succeed? If so, how does it know that a room consists of walls, a floor and a roof? Does it have access to a dictionary defining a room as such? And what about the definition of a door, a wall, a floor and a roof? How does it know that it cannot move through walls, floors, roofs, or closed doors? Does it just try to move through one of these and remember the failure of doing so? If so, how does it generalize from one wall to another? Does it assume that if a certain action fails on an object of a certain class, it will fail on every other object of that class? If so, how do you account for exceptions and dependencies? If it discovers that a key can be used to open a door, it could incorrectly conclude that every key can open all doors. What mechanism is built in to counter or refine previously made assumptions?
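The generalize-then-refine problem raised above (assume the action works on the whole class, then record exceptions when it fails) can be sketched as follows. This is a hypothetical mechanism, not the CES Project's actual one; the class, method, and object names are invented for illustration.

```python
class ActionModel:
    """Class-level success assumptions, refined by per-object exceptions."""
    def __init__(self):
        self.works = {}          # (action, class) -> assumed to succeed
        self.exceptions = set()  # (action, object) known to fail

    def record_success(self, action, cls):
        self.works[(action, cls)] = True

    def record_failure(self, action, obj):
        self.exceptions.add((action, obj))

    def expect_success(self, action, obj, cls):
        if (action, obj) in self.exceptions:
            return False         # refined: this particular object fails
        return self.works.get((action, cls), False)

m = ActionModel()
m.record_success("unlock_with_key", "door")   # the brass key opened one door
print(m.expect_success("unlock_with_key", "cellar_door", "door"))  # True (over-general)
m.record_failure("unlock_with_key", "cellar_door")                 # wrong key!
print(m.expect_success("unlock_with_key", "cellar_door", "door"))  # False (refined)
```

Even this toy shows the cost: the agent must try and fail at least once per exception, which is exactly the "remembers the failure" behaviour the questions probe.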
And Stryder, when it comes to VR, let's just say that I have another invention there which is, let's say, 5 years ahead of its time. We plan on selling that too, but the cost is somewhat large. We first built The Republic, but that became obsolete when I thought a little more about it. Then came The Republic 2 (TR2), a good idea since we would start them with no knowledge as in SC-11, but there were too few dead ends in TR2, so I dropped it rather than modifying it. SC-11 was the next step (we have changed it now thanks to a "friend" of mine), and it will be put in a maze with no knowledge (sort of like an RPG game), where it tries to get from point A to point B, and along the maze there are obstacles such as the door issue.
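For reference, the maze scenario described above (point A to point B, with dead ends and obstacles) is a classic search problem; a breadth-first search sketch is shown purely for comparison. The post never says what algorithm SC-11 actually uses, and the grid representation here is an assumption.

```python
from collections import deque

def shortest_path(maze, start, goal):
    """Return the shortest path through a grid maze ('#' = wall), or None."""
    rows, cols = len(maze), len(maze[0])
    queue = deque([(start, [start])])
    seen = {start}
    while queue:
        (r, c), path = queue.popleft()
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and maze[nr][nc] != "#" and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), path + [(nr, nc)]))
    return None  # no route at all: a true dead end

maze = ["A..",
        ".#.",
        "..B"]
print(shortest_path(maze, (0, 0), (2, 2)))
# [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)]
```

A knowledge-free agent without such a systematic strategy would have to wander randomly, which is why the "no knowledge, first try" claim earlier in the thread drew skepticism.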
I would suggest using www.sourceforge.net as a place to launch the project, since it would give you enough space and a website where you can post a project goal and statement that people can refer to.
What architecture for learning language are you implementing in your A.I. construct?
If it has no language, how does it reason?
How does it solve implicatures, i.e. I know X, I know Y, therefore I infer Z.
Does your construct have access to any external ontologies/dictionaries/corpus?
What are its sensory abilities, its inputs?
What problem solving algorithms does it implement?
How are you planning on translating its output?
How are you planning on representing objects?
How are you planning on resolving ambiguity between different senses of a given concept? (Example: he knocked on the door, versus he walked through the door.)
What data structures does it use to hold knowledge?
Is it evolving?
Does it use rules, finite state machines, ATMs?
Give us a clue!
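One of the questions above asks how the construct solves implicatures ("I know X, I know Y, therefore I infer Z"). That is what a forward-chaining rule engine does; a minimal sketch follows, with made-up facts and rules, since the thread never reveals the construct's internals.

```python
def forward_chain(facts, rules):
    """Fire rules (premise set -> conclusion) repeatedly until a fixpoint."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and premises <= facts:
                facts.add(conclusion)
                changed = True
    return facts

rules = [
    ({"holds_key", "at_door"}, "can_unlock_door"),
    ({"can_unlock_door"}, "can_open_door"),
]
derived = forward_chain({"holds_key", "at_door"}, rules)
print("can_open_door" in derived)  # True
```

Chained inference like this is the simplest honest answer to the X-and-Y-therefore-Z question; whether the CES Project uses rules at all is exactly what the question list is trying to find out.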
Dropping names and acronyms out of context doesn't make your project sound better. Please, at least explain what "The Republic" and "SC-11" are if you're going to refer to them.
That is not the same as sitting a robot with sensors of some sort in an empty room with a door and a key. I'll even assume it can identify the key and the door, which is not a trivial problem. There's a large difference between your AI replying "use key in keyhole in door" and having the AI-in-robot pick up the key, put it in the keyhole, and turn it.
Baal Zebul said:
In the real world it would have eyes that can see in 3D; they should be able to make a 3D view of an object and apply a texture to it. (This would allow it to recognize objects even if it has only seen bits of them.) It would also be able to zoom in and out. This is the minimum, but in military applications it would of course also have night and heat vision, and it would probably be able to detect hidden weapons and see through walls too.
Baal Zebul said:
I do not have to translate the output. I do not have to translate the input.
Baal Zebul said:
Well, what is important? If you will only open doors, then you don't have to know much. However, if you are to repair a door, then you might need to know more, right?
Baal Zebul said:
Why not? The robot will reason in the same manner as the simulated robot. Turning is an action, and instead of sending the output to human-readable text, it just sends it to the parts of its body that are affected by the Turn command.
Key and Door are merely objects (no matter what they are called); they just give X, Y and Z coordinates for the robot to use when calibrating its Turn command.
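One common way to turn target coordinates into a Turn command, shown here only as an illustration of what "calibrating its Turn command" from X, Y, Z coordinates could mean (the function and its conventions are assumptions, not the project's actual code), is to compute the yaw angle from the robot's position to the target's.

```python
import math

def turn_angle(robot_xyz, target_xyz, robot_heading_deg):
    """Degrees to rotate counterclockwise to face the target (in the XY plane)."""
    dx = target_xyz[0] - robot_xyz[0]
    dy = target_xyz[1] - robot_xyz[1]
    bearing = math.degrees(math.atan2(dy, dx))
    # normalise the difference into (-180, 180]
    return (bearing - robot_heading_deg + 180) % 360 - 180

# Robot at the origin facing +x (heading 0); the door lies straight along +y.
print(turn_angle((0, 0, 0), (0, 5, 0), 0))  # 90.0
```

Note that this ignores the Z coordinate entirely; handling elevation, wheel geometry, and actuator limits is where the simulated Turn command and a physical robot's Turn command start to diverge, which is the gap the thread keeps circling around.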