Private AI development?

Status
Not open for further replies.
I call my AI "Layer-based Artificial Intelligence".
I have designed the latest version according to how I believe the human brain works. The layers are then called neurons.
Basically, the neural connection is one huge hierarchical connection. After this layer, that one might follow, and so on. Together, the layers create patterns. As a pattern evolves, it is possible to adjust the likelihood percentage on the individual layers according to the known future or known history in the neural pattern.

The AI was developed for problem solving, but it also works for speech abilities in intelligent communication, and more.
It does not require an extensive amount of learning: once it has connected a layer to the neural net, it can create many patterns with it, even though it has limited knowledge.
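As a rough illustration only, here is one possible reading of the layer-hierarchy idea in Python. The `Layer` class, the `likelihood` weights, and the `reinforce` update rule are my own assumptions for the sketch, not details of Baal Zebul's actual design:

```python
# Illustrative sketch only: one possible reading of a "layer hierarchy"
# where each layer points to likely successors with adjustable weights.

class Layer:
    def __init__(self, name):
        self.name = name
        self.successors = {}  # successor Layer -> likelihood (0.0 - 1.0)

    def connect(self, other, likelihood):
        self.successors[other] = likelihood

    def reinforce(self, other, amount=0.1):
        """Raise the likelihood of a successor seen in a known history,
        then renormalize so the likelihoods still sum to 1."""
        self.successors[other] = self.successors.get(other, 0.0) + amount
        total = sum(self.successors.values())
        for key in self.successors:
            self.successors[key] /= total

a, b, c = Layer("A"), Layer("B"), Layer("C")
a.connect(b, 0.5)
a.connect(c, 0.5)
a.reinforce(b)  # the pattern history showed A -> B
print(a.successors[b] > a.successors[c])  # True
```

The point of the sketch is only that "adjusting the likelihood on individual layers according to known history" can be expressed as a simple weighted-successor update.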
 
Zarkov,
It basically breaks down the input directly and converts it into a data read that has all the information encoded, a bit like DNA, so all information is immediately available. There is no searching; it is a direct mathematical system.
So you encode data in a different format. Yet to build something resembling intelligence, it should be able to learn. How does your system learn?
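Zarkov's "no searching, direct mathematical system" description resembles direct addressing, where the encoded key itself locates the data. A minimal sketch under that assumption (the `encode` scheme below is purely illustrative, not Zarkov's actual encoding):

```python
# Direct addressing: the encoded key IS the location of the data,
# so retrieval is a single lookup rather than a search.

def encode(statement):
    # Toy "encoding": normalize a statement into one hashable key.
    return tuple(statement.lower().split())

memory = {}
memory[encode("dog is animal")] = {
    "subject": "dog", "operator": "is", "result": "animal",
}

# Retrieval is O(1): no scanning of the store.
fact = memory[encode("Dog IS Animal")]
print(fact["result"])  # animal
```

Whether Zarkov's encoding really works this way is unclear from the post; the sketch only shows what "all information is immediately available" could mean in practice.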
 
Baal Zebul,

I have designed the latest version according to how I believe the human brain works. The layers are then called neurons.
Are you using the term "neuron" as it is used by others in the fields of AI or biology? Layers are not neurons, though layers can consist of neurons.

Basically, the neural connection is one huge hierarchical connection.
?

After this layer, that one might follow, and so on.
?

Together, the layers create patterns. As a pattern evolves, it is possible to adjust the likelihood percentage on the individual layers according to the known future or known history in the neural pattern.
?

I do not pretend to know how the human brain works, but from what I've learnt, I can tell that what you describe is certainly not consistent with current ideas. This is mainly because your description does not make any sense to me at all, while current (genuine) research on the human brain does.
 
Are you using the term "neuron" as it is used by others in the fields of AI or biology? Layers are not neurons, though layers can consist of neurons.

Neither.


I do not pretend to know how the human brain works, but from what I've learnt, I can tell that what you describe is certainly not consistent with current ideas. This is mainly because your description does not make any sense to me at all, while current (genuine) research on the human brain does.

No, the human brain does not function in this way.
 
>> How does your system learn?

All knowledge can be formatted as a subject, an operator, and a result.

This is evident in our speech; time is sequence only.

All procedures run statement to statement in time order, and so a goal is achieved.

Learning involves two aspects: one is the "statement" and the other is the "sequence" (order) of statements.

Each "object" is classed as core, attribute, operation, etc.

Basically, the four causes of knowledge identified by the Greeks are used for interrogation: what? where? when? why? Answers to these questions are mathematically processed forward and in reverse, and addressed via the equation A × C = B (and its combinations).

The 'how' is manufactured as a sequence of statements.

All 'packets of information' (statements/sequences) are identified by one number that can be analyzed for all the information encoded in it.

Once you realise that when we speak we are creating a mathematical construct, the rest is rather easy.

A certain amount of data has to be pre-programmed (ROM), and this is one reason why I believe that all life was created, as this information had to be known before the first cells could even exist.

Learning is obtaining answers (data at the sought address) to the four causes outlined above, and finding the sequences of these that lead to success in application. From then on, interaction with humans is easy and seamless.
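As a hedged sketch of the "subject, operator, result" idea, here is a toy triple store that can be queried forward or in reverse, loosely in the spirit of A × C = B (given any two components, solve for the third). The facts and function names are invented for illustration, not taken from Zarkov's system:

```python
# Hypothetical "subject, operator, result" store with forward and
# reverse querying: supply any two components, recover the third.

facts = [
    ("socrates", "is", "mortal"),
    ("socrates", "is", "man"),
    ("athens", "is_in", "greece"),
]

def query(subject=None, operator=None, result=None):
    """Return every stored triple matching the given components."""
    return [f for f in facts
            if (subject is None or f[0] == subject)
            and (operator is None or f[1] == operator)
            and (result is None or f[2] == result)]

print(query(subject="socrates", operator="is"))  # forward: solve for result
print(query(operator="is", result="mortal"))     # reverse: solve for subject
```

The "four causes" interrogation (what? where? when? why?) would then map to different choices of which component is left unknown.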
 
Zarkov,

That was interesting. You say that a certain amount has to be pre-programmed; for speech I agree, but for problem solving, how much do you need pre-programmed?
I don't need any, not even movement. Although it is easier if movement is pre-programmed, since otherwise it will take a long time before the AI walks perfectly.
 
>> how much do you need pre-programmed?

Apart from the mathematical analyser: needs, yes/no, and data structures (i.e. 'complements', such as those representing passion/love) need to be pre-programmed. In the matrix I have defined the initial layers, such as input, group, and example. However, I am too rigid in my mind, and I expect this could all be random without any loss to the computer. My next step is to make all seeds random and all layers random, and to tag them dynamically. That would lead to confusion in my brain, but the computer would not know the difference.

I can't see how to remove all the ROM data, but maybe that is because I am too close and want immediate usefulness.

Considering a child takes months or years to understand speech...

The driver is telos, "the why" among the Greek causes.



Baal Zebul, how does your "neuron" process data?
 
Hmm, well, honestly I'd say that they do not process data. They are data. The neural net processes data according to the positions of the allocated neurons.
 
>> The neural net processes data according to the positions of the allocated neurons.

Oh, I see: a round-robin approach where each set process is tried until a good result is reached.

I have followed that path in speech (symbol) recognition, and for syntax. Each Earth language uses a different syntax, different symbols, and different sequences of symbols, but the various processors have to be configured first.

I have so far failed to write a program that is capable of rewriting itself.
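The round-robin approach described above, trying each configured processor in turn until one yields a usable result, might be sketched like this (the two processors are invented placeholders, not Zarkov's actual recognizers):

```python
# Try each configured parser/processor in turn; the first one that
# returns a usable result wins.

def english_syntax(text):
    # Toy rule: multi-word input is treated as English-style syntax.
    return {"lang": "en", "tokens": text.split()} if " " in text else None

def compact_syntax(text):
    # Toy rule: a single alphabetic token is treated as compact syntax.
    return {"lang": "compact", "tokens": list(text)} if text.isalpha() else None

PROCESSORS = [english_syntax, compact_syntax]  # tried in order

def recognize(text):
    for proc in PROCESSORS:
        result = proc(text)
        if result is not None:  # first good result wins
            return result
    raise ValueError("no processor could handle the input")

print(recognize("hello world")["lang"])  # en
print(recognize("hello")["lang"])        # compact
```

The limitation Zarkov notes still holds in this sketch: the processor list is fixed up front; the program does not rewrite itself.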
 
Baal Zebul said:
To me it has much to do with AI, because if my AI cannot provide an answer to those questions (even a false one, as long as it has some rational logic behind it), then it is not universal.

The fact that your AI cannot answer a particular question does not mean that it's not universal (or general - AGI). It should mean that it just did not learn enough yet. No matter how intelligent a system is, its ability to answer questions correctly is limited by currently available data. It will never have all data about all systems so there will always be some unknown answers.
 
ALL: If you know any linguist who is interested in AI/AGI development and who could review some AI-language ideas for free, or for relatively little money, then please let me know. The language is being designed for User-AI communication, and its concept is very dynamic.

Jiri Jelinek
G71ai@aol.com
 
The fact that your AI cannot answer a particular question does not mean that it's not universal (or general - AGI). It should mean that it just did not learn enough yet. No matter how intelligent a system is, its ability to answer questions correctly is limited by currently available data. It will never have all data about all systems so there will always be some unknown answers.

That is what I said. If it does not have enough data to understand the question, then it cannot answer it. It can try, though, since it might understand fragments of the question and therefore give a result; it is better to answer wrongly than to fall back on a message saying "My database is large, but that question was too complex" (not naming any names ;) ).

I see it as a good way to start, as my AI would when chatting: an AI that can chat (even if it is wrong because of too little data) as long as it can provide some logic behind its thoughts.


Stryder, sending PMs to you does not seem to work (I have tried twice now). Is there any way to get in touch with you besides AIM? Should I use the e-mail address on your website? I'd like to chat, not mail, so please tell me. MSN, ICQ, and IRC I can handle, but AIM isn't used by anyone in Sweden, so I don't have it.
 
Baal Zebul said:
That is what I said. If it does not have enough data to understand the question, then it cannot answer it. It can try, though, since it might understand fragments of the question and therefore give a result; it is better to answer wrongly than to fall back on a message saying "My database is large, but that question was too complex".
I see it as a good way to start, as my AI would when chatting: an AI that can chat (even if it is wrong because of too little data) as long as it can provide some logic behind its thoughts.

I'm talking about being unable to answer even if you understand the question perfectly. I know that in your mind, understanding a particular question means being able to solve it. I disagree with that. I think you can understand a problem very well and still be unable to find the answer. Imagine that a $10 bill was stolen from a room. A security camera shows that two people separately entered the room and left shortly after. One of them took the money (let's say the other possibilities were logically ruled out). The next day, when interrogated, both claim they know nothing about the money. Who took the bill? I guess the question is very easy to understand, yet the correct answer may not be found. Your AI should not say "it was the first one (cut his hand!) because there was a higher probability that the bill was still there". It should IMO say "I do not have enough info to decide who took the bill".
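The point can be made concrete: with two suspects and no distinguishing evidence, the posteriors are tied, and a system that only answers above a confidence threshold will abstain rather than guess. A toy sketch (the threshold value is an arbitrary assumption):

```python
# With no evidence favoring either suspect, the posteriors tie at 0.5,
# so the system should abstain instead of naming a culprit.

def decide(posteriors, threshold=0.9):
    """Name the most likely option only if its probability clears
    the confidence threshold; otherwise abstain."""
    best = max(posteriors, key=posteriors.get)
    if posteriors[best] >= threshold:
        return best
    return "I do not have enough info to decide"

suspects = {"first visitor": 0.5, "second visitor": 0.5}
print(decide(suspects))  # I do not have enough info to decide
```

Only when the evidence actually shifts the probabilities (say, to 0.95 vs. 0.05) does the same function commit to an answer.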

AI that can chat..

To parse NL correctly and get all the meaning from it is beyond the known state of the art. You would either have to make a big breakthrough with NL, or you would have to use a special communication format (which also includes some interesting challenges). If you have written any practical article about the related details, then I would be interested in taking a look.
 
Jiri, you could have just put that in your mail.

Do you know that there is a 50% chance that an airplane crashes into your house right now? Either it does or it does not. You cannot exactly prove that thesis wrong; there is a relative truth to it.

What I am trying to say is that even if the AI says there is a 50% chance that an alien ship crash-lands on your house, or a 50% chance that a giant dog eats your house, then I think it is right as long as it can provide a pattern of thought based on some logic.

Your little example there :)
Why do you think that my AI would say that it was the first one?
Jiri, it falls under the "limited data" problem. It can't say that it was the first one, since there is nothing indicating it, nor can it say that it was the second. Actually, thanks, Jiri.
My AI (when chatting) does not really cover math, although, as you know, I suggested incorporating a calculator. When using math it cannot reason in the same manner. My solution would be able to base its results on which of person 1 and person 2 it likes the most, which of them is most valuable for the future, and which of them can help the ALF most. But since I made it for problem solving and not for chatting (using math), I have to integrate economical thinking with math too, and there is a problem; that problem lies in the parameters.
A 50% chance: when integrated in pathfinding, we tell it to select one case, but in this particular example it should not. I guess I will have to have separate parameters for chatting in order to solve this one.
 
Baal: I recommend you spend some time learning more about probability. Your AI could say "it was the first one" because it is designed to make a choice (as you said) "as long as it can provide SOME logic behind its thoughts". That "some logic" can often be very misleading. An AGI system should be able to learn math as well as anything else; if there is a difference, then there is something wrong with the design. Learning how to use tools (including calculators) should be a matter of course. "..base..results on which..Person..it likes the most"??? You are obviously not a big fan of justice. I'm glad that there are different rules in my country (at least theoretically). Forget your toy worlds and teach the system about the real world. Forget ALFs and using local resources to support a multiple-brain environment. Your HW can get busy enough with a single brain. Let your system know what it really is and what its real role in our world is. Be honest with it and let it think about real problems. Let it analyze and apply our knowledge, and don't let it learn the hard way what we already know.
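On the probability point: "either it happens or it doesn't" gives two outcomes, but two outcomes are not automatically equally likely. A quick worked example with a fair die makes this concrete:

```python
# "Either you roll a six or you don't" has exactly two outcomes,
# but their probabilities are 1/6 and 5/6, not 50/50.

from fractions import Fraction

outcomes = range(1, 7)  # a fair six-sided die
p_six = Fraction(sum(1 for o in outcomes if o == 6), len(outcomes))

print(p_six)                     # 1/6
print(p_six == Fraction(1, 2))   # False
```

The same reasoning applies to the airplane example: "crashes or doesn't" partitions the outcomes into two events, but nothing forces those events to have equal probability.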
 
G71 said:
I still work on a bunch of other interesting projects (one involves code that will generate new, hopefully nice, MIDI songs using inspiration from existing MIDI files)

If you really want to develop <a href="http://www.dennisgorelik.com/ai/StrongAI.htm">strong AI</a>, you need to freeze all of your projects that are not directly related to the strong AI project.
It seems that your "MIDI songs" project is not related to strong AI.
 
You could not be more wrong.

A song generator, a painter: these are closer to real AI than your dictionary.

It is that basis that makes the ALF able to adapt, but of course I do not reckon you would understand that.
 
G71 said:
That's not what I'm doing. I'm trying to use limited syntax and keep the rich expression power + a new syntax can be created in my system.
Please correct me if I'm wrong, but if you implement limited syntax in your system, then you are stuck with the limitations of your language, and your system cannot get full access to the huge collection of texts created by human civilization.
 
G71 said:
ANNs are good just for certain tasks (mainly various recognition and conversion tasks). The key AI algorithms IMO should not be based on ANNs. They are not flexible and transparent enough (they cannot easily explain their decisions), they are hard to use for things like complex planning, and the needed teaching procedures may be very hard to design. ANNs are not as practical as many seem to think. Many say "let's follow nature's way" (neurons), but nature's way is not always the best way to go. Airplanes don't flap their wings.

You are right if you are talking about limited ANNs.
Typical modern ANNs have two serious restrictions:
1) An ANN cannot drop links between neurons, and it cannot add new links between neurons. A typical ANN can only increase or decrease the weights of the links.
2) Because of (1), not every neuron can be linked with any other specific neuron.

But these two restrictions are artificial. We can easily design an ANN without these restrictions.
For instance, in my ANN design, neurons (<a href="http://www.dennisgorelik.com/ai/Concept.htm">concepts</a>) are linked with other concepts by <a href="http://www.dennisgorelik.com/ai/Relation.htm">relations</a>.

So my ANN is implemented with two tables:
1) <a href="http://www.dennisgorelik.com/ai/ConceptTable.htm">Concept table</a>.
2) <a href="http://www.dennisgorelik.com/ai/CauseEffectRelationTable.htm">Cause-Effect Relation table</a>.

So my ANN doesn't have these typical limitations and can memorize any kind of information.
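Gorelik's two-table design might be sketched as a simple in-memory graph store where links can be added and dropped at runtime. The field names below are illustrative stand-ins, not his actual schema:

```python
# Two-table sketch: a concept table and a cause-effect relation table.
# Unlike a fixed-topology ANN, links can be added or dropped at any time.

concepts = {}   # concept_id -> name
relations = []  # (cause_id, effect_id, weight)

def add_concept(cid, name):
    concepts[cid] = name

def add_relation(cause, effect, weight):
    relations.append((cause, effect, weight))

def drop_relation(cause, effect):
    # Topology change that a fixed-link ANN cannot make.
    relations[:] = [r for r in relations if (r[0], r[1]) != (cause, effect)]

add_concept(1, "rain")
add_concept(2, "wet ground")
add_relation(1, 2, 0.9)   # rain causes wet ground
drop_relation(1, 2)       # the link is removed entirely
print(len(relations))     # 0
```

The key contrast with a conventional weight-matrix ANN is that here the set of links itself is mutable data, not a fixed architecture.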
 