Future Artificial Intelligence Consciousness

Status
Not open for further replies.
You cannot even answer a simple question such as "Why does the dog chase the mailman?"
and you call me a fake?
Again, what does this have to do with me? We are scrutinizing your claim to have developed a theory that would be way ahead of its time, not whether I can answer your questions about a dog's behaviour.

Yet, to answer your question: indeed, I do not know why, or even whether, a dog chases the mailman. I can speculate that the dog perceives the mailman, entering what it regards as its territory, as a potential threat, and responds by trying to scare that threat off.

Whether this hypothesis is actually true, I cannot tell. It only reaches a certain level of credibility if I arrange a set of experiments whose results would, with luck, eliminate other prevalent theories on the subject and support my own.
 
So, are you telling me that you would spend a year, or six months, or something like that, just to prove something that simple?

Then a dear friend of mine was right when he said that I overestimated the lesser mortals.
 
So, are you telling me that you would spend a year, or six months, or something like that, just to prove something that simple?
No, I am telling you that there is a method for piecing together the world around us: devising a hypothesis that explains and predicts something, collecting evidence (an often dreary business, but still a necessary one), and re-evaluating the hypothesis against the evidence found. It is not the fastest route to fame and fortune, but it does have its merits when it comes to building a model that actually has some descriptive and predictive value.

You just take the shortcut, proclaiming you have found a universal theory while not being prepared to put it to the test.
 
Well, if you cannot see something that simple, then I do not think you should be too critical of me.
I see something and I solve it; I see something more, relate it to the previous hypothesis, and see that it still works. If you cannot simulate the brain of a dog in your head, then that is your problem, and you might think about working on it.
 
Until you have given some form of proof that an AI can be built on the basis of your universal theory, I think I have perfectly valid reasons to be critical.
 
I see why people do not invent real AI: they simply cannot analyse a situation.
Emotionally still stuck at seven years of age.
Yeah, I guess that is true. I don't have many feelings; I simply do not care about lesser mortals.

mouse, the thesis is this:
The dog relies on its master for food, and since its survival instincts drive it, it is interested in a treat. It swiftly learns that the mailman is someone who never enters the lawn (or, in Great Britain, someone who never enters the house).
The dog will think, "for some reason my master does not want the mailman in the house/on the lawn," so it will chase him away in the hope of a treat. If the dog is punished instead, it will learn that what it did is against its master's will and will therefore not do it again. Will it require more than one punishment if it has received no treats? No. If it has received many treats in the past, will it require more than two in a row? No.

I do not know if that is the correct hypothesis, but it does not really matter, because, combined with the rest of the concept, it does the trick for me. Now you can say that I got that hypothesis from my imaginary world if you wish. Everyone is entitled to their opinion, unfortunately, and that is why we hate the current form of democracy.
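The treat/punishment rule in the thesis above is, in effect, a crude operant-conditioning model. A minimal sketch of it, with all names, thresholds, and numbers being illustrative assumptions rather than anything the poster specified:

```python
def update_drive(drive, outcome, past_treats):
    """Return the dog's new drive to chase after one outcome.

    A "treat" raises the drive; a "punishment" lowers it. A history of
    many past treats softens a single punishment, matching the claim
    that a well-rewarded dog needs two punishments in a row to stop,
    while an unrewarded dog stops after one.
    """
    if outcome == "treat":
        return min(1.0, drive + 0.5)
    # Punishment: a treat-rich history (assumed threshold: 3) reduces
    # the size of the drop.
    penalty = 0.5 if past_treats < 3 else 0.3
    return max(0.0, drive - penalty)


# Say the dog chases whenever drive > 0.5.
# No treat history: a single punishment ends the chasing.
drive = update_drive(0.4, "punishment", past_treats=0)   # drive is now 0.0

# Many past treats: after one punishment the dog still chases,
# after a second in a row it stops.
drive = update_drive(1.0, "punishment", past_treats=5)   # 0.7, still chasing
drive = update_drive(drive, "punishment", past_treats=5) # 0.4, stops
```

This is only a sketch of the dynamics the post describes, not a claim about how real dogs learn.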
 