CYC or COG?

OK, so here's a bit about neurons and neural networks; please correct me if anything's wrong.

On TV shows or in movies like Star Trek and The Matrix, people see artificial intelligence represented by physical beings that are capable of falling in love and have emotions and feelings. The familiar "Data" is created to look and behave like a human being yet functions entirely as a result of artificial intelligence. The question is, can these abilities truly be constructed by means of neural networks or artificial intelligence (AI), or do they belong only in science fiction movies? Ever since the invention of computers, people's fear of machines replacing humans has existed. This is largely due to people's lack of understanding of computers and their ability to do tasks that humans cannot accomplish with the same degree of efficiency, accuracy or speed. This report sets out to discuss the current applications of neural networks and explain some of the algorithms associated with them.

Capabilities:
==============================================
It is often said that people demonize things they do not understand or control. However, the reality, whether connected with race, war or computers, is that this demonization is frequently unfounded. Consider people's fear for their own employment when the first personal computers were developed. This is not dissimilar to the current fear of AI.

Even though the neural network is based on the structure of the human brain and how the brain operates, it is almost impossible to mimic the human brain because of its complexity (of course, I feel this statement may no longer hold true because of mind-uploading techniques and the like). Experimental chips that simulate neurons are still far inferior to human brain cells, hence their abilities are limited. An artificial neural network (ANN) can only recognize patterns, solve complex problems and perform feature recognition. It cannot think as humans do, and it certainly cannot achieve creative thinking or develop beyond what it is taught.


Despite the fact that neural networks cannot think like us, they can still learn, and we can use them to analyze data that traditional programs cannot. Neural networks can do tasks faster than human experts yet still produce similar results. Compared with traditional programs, the neural network is possibly the best algorithm for programs that can learn.
History:
==============================================
Although the idea of neural networks seems relatively new, the first study of biological neural networks (BNN) took place quite long ago. In 1873, Alexander Bain of the United Kingdom wrote a book about new findings on the human brain. At that time, scientists could only observe the BNN and were not capable of doing experiments; all they could do was observe and generate theories. The actual study of artificial neural networks dates back to the 1940s, when pioneers such as McCulloch and Pitts, and Hebb, tried to learn more about neurons and neural networks. They also attempted to find formulas for the structure and the adaptation laws. In the 1950s, scientists around the world tried to decipher the mystery of the human brain and to create a network that could mimic the biological one. Hence, scientists from different branches, such as biologists, psychologists, physiologists, mathematicians and engineers, had to work together to learn more about neural networks. Even though around that time they discovered the limitations of the network, some people remained interested and continued to research. It was not until computers like the 186 were developed that scientists and programmers began to realize that they could create actual neural networks for everyday applications or further research. In the twenty years since that development, a lot of studies have been done but not much progress has been made; it is the improved speed of processors that has enabled wider application of neural network technologies.



Applications:
==============================================
Imagine a world where you can control virtually all computers and electronics without touching them. Imagine you can finish your report by simply talking and the computer will write it for you, or even do grammar and spell checking intelligently. These technologies seem very advanced and expensive. However, with the help of fast processing chips and neural networks, scenarios such as those described above are no longer unrealistic. Artificial intelligence is closer than you think. To date, programmers and scientists are able to create "expert systems" involving neural networks. It may be that The Pentagon is the only place in the world where artificial intelligence is being pushed to its limits. However, there are lesser yet still valuable examples of its application in the world today. The following is a list of things that are commonly used and are supported by neural network technologies.


))Data analysis - use of expert systems to predict economic growth
))Geographical mapping - using neural networks to analyze data scanned from satellites
))Redirecting phone calls - using neural networks to route phone calls from all over the world
))Palm Pilot - uses a neural network to recognize handwriting
))Image processing - the FBI uses a neural network, combined with other programs, to enhance pictures taken from crime scenes
))Fingerprint recognition - uses a neural network to recognize the patterns in a fingerprint
))Voice recognition - uses a neural network to analyze the sound and identify the speaker, or as a way of inputting data
Recently, Intel released a new chip which can process data at great speed. With this kind of processing power, programmers can create neural networks big and sophisticated enough to do visual and audio recognition at the same time. Game producers can also use neural networks to create chess games that learn from their own experience. Most scanners use neural networks to do optical character recognition.

Briefing on Neurons:
==============================================
The human brain is a vast communication network in which around 100 billion brain cells, called neurons, are interconnected with other neurons. Each neuron contains a soma, a nucleus and an axon; the soma and nucleus, however, are not the parts that do the main work of receiving and outputting electrical impulses. Each neuron has several dendrites that connect to other neurons, and when a neuron fires (sends an electrical impulse), a positive or negative charge is sent to other neurons. When a neuron receives signals from other neurons, spatial and temporal summation occurs: spatial summation combines several weak signals into one large signal, and temporal summation combines a series of weak signals from the same source into one large signal. The electrical impulse is then transmitted through the axon to the terminal buttons and on to other neurons. The axon hillock plays an important role because if the signal is not strong enough to pass it, no signal will be transmitted.
Synapse
==============================================
The gap between two neurons is called the synapse. The synapse also determines the "weight" of the signal transmitted: the more often a signal is sent through the synapse, the easier it becomes for signals to pass through. In theory, this is how humans memorize or recognize patterns, which is why when humans practice certain tasks continuously, they become more and more familiar with them.
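To picture that strengthening in code (just my own toy sketch, not a real model of synaptic chemistry; the Synapse class, the numbers and the update rule are all invented for illustration):

# Toy sketch: a synapse whose "weight" grows each time a signal crosses it,
# loosely mimicking the strengthening described above.

class Synapse:
    def __init__(self, weight=0.1, learning_rate=0.05):
        self.weight = weight              # how easily a signal gets through
        self.learning_rate = learning_rate

    def transmit(self, signal):
        output = signal * self.weight
        # Hebbian-style strengthening: frequent use makes transmission easier.
        self.weight += self.learning_rate * signal * output
        return output

s = Synapse()
for _ in range(10):
    s.transmit(1.0)
print(round(s.weight, 3))   # the weight has grown with repeated use

The point is only the direction of the change: repeated traffic through the same connection makes that connection stronger, which is the biological intuition behind weight adjustment in an ANN.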

Approaches to mimic
==============================================
Because an artificial neural network mimics the biological one, an ANN has to resemble the essential parts of a BNN, such as neurons, axons and the hillock. Currently there are two approaches to creating an ANN. The first is to use experimental chips that simulate neurons and interconnect them to create a network. However, this approach is inefficient because of the expense and the technology behind it, and as the network expands it is harder to upgrade it through hardware than through software. The software approach, on the other hand, is much easier.

SOFTWARE APPROACHES
==============================================
To create an ANN through software, object-oriented programming is the natural choice because a neuron comprises several components, and OOP can create objects that contain different variables and methods. The first step is to create an object that simulates the neuron. The object would contain several variables and functions, including weights (random numbers generated when the neuron is created, similar to the synapses in a BNN), a non-linear function (to determine whether to activate the neuron or not), a method that adds up all the inputs, and an optional bias/offset value for the characterization of the neuron. The output of each neuron is the sum of all the inputs multiplied by their weights, plus the offset value, passed through the non-linear function. The non-linear function acts like the axon hillock.
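To make that concrete, here is a minimal sketch of such a neuron object in Python (my own illustration: the class name, the choice of a sigmoid as the non-linear function and the random weight range are assumptions, not a standard):

import math
import random

class Neuron:
    """Minimal artificial neuron: weighted sum of inputs plus a bias,
    passed through a non-linear function (here a sigmoid, playing the
    role of the hillock's threshold)."""

    def __init__(self, n_inputs, bias=0.0):
        # Random weights play the role of the synapses.
        self.weights = [random.uniform(-1, 1) for _ in range(n_inputs)]
        self.bias = bias

    def activation(self, x):
        # Non-linear function deciding how strongly the neuron "fires".
        return 1.0 / (1.0 + math.exp(-x))

    def output(self, inputs):
        total = sum(w * i for w, i in zip(self.weights, inputs)) + self.bias
        return self.activation(total)

n = Neuron(3)
print(n.output([0.5, -0.2, 0.9]))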
After the object is created, the next step is to create a network. A typical ANN has three layers: an input layer, a hidden layer and an output layer. The input layer is the only layer that receives signals from outside the network. The signals are then sent to the hidden layer, which contains interconnected neurons for pattern recognition and interpretation of the relevant information. Afterwards, the signals are directed to the final layer for output. A more sophisticated neural network would usually contain several hidden layers and feedback loops to make the network more efficient and to interpret the data more accurately. Figure 5 shows an example of a three-layered neural network. Using it as a model, the network can be seen as one big matrix, but it is easier to separate the three layers into three small matrices. Each small matrix contains neurons, and when signals are fed in, each neuron passes its input through the non-linear function to the next neuron. Afterwards, the weights of the neurons are increased or decreased. Hence, the more the network is used, the more it adapts, and eventually it will produce results similar to those of a human expert.
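A bare-bones, self-contained sketch of that three-layer feed-forward pass (the layer sizes and the sigmoid are arbitrary choices of mine, and no learning happens yet):

import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def make_layer(n_neurons, n_inputs):
    # One weight per input for each neuron, plus a bias.
    return [{"w": [random.uniform(-1, 1) for _ in range(n_inputs)],
             "b": random.uniform(-1, 1)} for _ in range(n_neurons)]

def forward(layer, inputs):
    return [sigmoid(sum(w * x for w, x in zip(n["w"], inputs)) + n["b"])
            for n in layer]

# Input layer -> hidden layer -> output layer (sizes chosen arbitrarily).
hidden = make_layer(4, n_inputs=3)
output = make_layer(2, n_inputs=4)

signal = [0.1, 0.7, 0.3]          # signals entering the input layer
signal = forward(hidden, signal)  # hidden layer interprets the pattern
signal = forward(output, signal)  # output layer produces the result
print(signal)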

The above section mentions the changing of weights. This process could be referred to as the "learning stage." An ANN, like a BNN, requires training. The training process is a series of mathematical operations that change the weights so the user can obtain the output they desire. For example, the first time the network receives inputs, the outputs are just random numbers. The user then has to tell the network what the outputs should be, and an algorithm is applied so the weights are changed to produce the outputs wanted.
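As a hedged illustration of that training step, here is about the simplest weight-update rule I know of: a delta-rule sketch on a single linear neuron being taught the logical AND function. Real multi-layer networks would use something like back-propagation, which I haven't covered here.

import random

# Delta-rule sketch: nudge each weight in proportion to the error between
# the desired output and what the neuron actually produced.

inputs  = [[0, 0], [0, 1], [1, 0], [1, 1]]
targets = [0, 0, 0, 1]                      # teach it logical AND

weights = [random.uniform(-1, 1) for _ in range(2)]
bias = 0.0
rate = 0.1

for epoch in range(50):
    for x, target in zip(inputs, targets):
        out = sum(w * xi for w, xi in zip(weights, x)) + bias
        error = target - out
        weights = [w + rate * error * xi for w, xi in zip(weights, x)]
        bias += rate * error

print([round(w, 2) for w in weights], round(bias, 2))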

I don't know much about the various algorithms in detail; I'm currently reading up on them, so I'll keep you guys posted on that.

God... this is getting bigger than I expected...

Check out the next one if you're interested...

bye!
 
ADVANTAGES AND DISADVANTAGES
==============================================
Aside from people's fear of artificial neural networks, the ANN has several advantages and disadvantages. Because an ANN is similar to a BNN, if parts of the network are damaged it can still carry on its work. Another advantage is its ability to learn from a limited set of examples; for instance, a handwriting recognition program can recognize handwriting even though it has only been trained on a few people's handwriting. A traditional program, by contrast, can no longer function if parts of it are damaged. Furthermore, the same neural network can be used for several programs without any modification. An example is OCR programs: the neural network used in an English OCR program can be used in a Chinese version because the network is designed to learn patterns. By retraining the network and changing the database, the program does not need to be modified and can still do its task.

The speed of an ANN can be both an advantage and a disadvantage. Depending on the level of AI required, a network with larger input, hidden and output layers may be needed, and if the computer is not fast enough, a tremendous amount of time may be required to process a simple question. The complexity of the network is also a disadvantage because you cannot tell whether the network has "cheated" or not. Because a neural network memorizes and recognizes patterns, it is almost impossible to find out how it arrives at its answers; this is known as the black-box problem. For example, you can provide a neural network with several pictures of a person and ask it to recognize him or her. However, there is no way to guarantee that the network will recognize the person, because it may simply have memorized the photos, and when new pictures are given it cannot tell who the person is. Furthermore, it is also possible that the network recognizes the background instead of the person. Hence, using a neural network for any kind of recognition can be risky: when you ask a neural network to recognize a tank in the forest by providing pictures of forests with tanks and pictures of forests without tanks, it may simply learn the weather conditions under which the two sets of pictures were taken. Because of this problem, it is essential to test the network after training by introducing inputs that it has never experienced before.
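That "tank in the forest" trap is exactly why the held-out test matters. A rough sketch of the idea, with purely made-up data standing in for the pictures:

import random

# Made-up dataset of (features, label) pairs; in practice these would be
# pictures with and without tanks.
data = [([random.random() for _ in range(5)], random.randint(0, 1))
        for _ in range(100)]

random.shuffle(data)
train, test = data[:80], data[80:]   # 20 examples the network never sees

def accuracy(predict, dataset):
    hits = sum(1 for x, label in dataset if predict(x) == label)
    return hits / len(dataset)

# After training some classifier `predict` on `train` only: high accuracy on
# `train` but poor accuracy on `test` suggests the network memorized its
# training pictures (or the weather) instead of the tanks.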

Neural networks do not have an interface that allows them to develop and learn in the way that humans do from a young age. Furthermore, humans, as organic beings, have the potential to develop and expand their own network. Artificial intelligence has a fixed network unless we add on to it.

So all we can say is that if we go with software programming, we may well fall short of time. The, ehm, feasibility of such a project is at stake. Furthermore, it is constraining enough to build expert systems carrying neural networks along with them. It is important, however, for us to find the correct definition of intelligence, as I stated in another thread...

Any comments on the above posts are most welcome...

bye!
 
Neural networks do not have an interface that allows them to develop and learn in the way that humans do from a young age. Furthermore, humans, as organic beings, have the potential to develop and expand their own network. Artificial intelligence has a fixed network unless we add on to it.

Without an interface and continuous learning, it is not very useful. For example, I have a Samsung cell phone with a voice interface. When activated it says something like "Who do you want to call?" I say "Home" - sometimes it connects, sometimes it does not. In noisy environments, in the cold, in the morning or in the evening, it does not understand. It does not say "I did not understand, do you want to call Home?" So basically it is limited by dumb programming, even though there is advanced voice-recognition software built in.

Most call-center software claims to have an expert system built in. Yet if I need to ask a question, I have to push zero for the operator or the system goes nowhere for 15 minutes. Some companies disconnect the zero option too.

I had to make some changes to my Sprint account, so I went to the local Sprint store. They could not help; I had to call the MARS Call Center to fix the problems. If humans are like that, do you think the neural net will be any better?

Just some rants....
 
Neural networks have disadvantages, but they are learning software; they will learn from their previous experiences to improve themselves, which will naturally make them easier to use. If your cellphone were based on neural nets, it would learn from previous experiences and store them - of course, that would require loads of memory. It's all weight changing, as I mentioned...

Say there is some disturbance; then, based on your response, it will act accordingly in the future.
bye!
 
Today's neural nets do not have the capacity to form new connections. They only change or redistribute the weights, and some change the coefficients. So what you consider learning does not happen. Otherwise, we would already have a giant brain using distributed computing running around on the Internet.
 
ISAAC ASIMOV'S FIRST SPEECH

Our Future in the Cosmos - Computers
==============================================

No matter how clever or artificially intelligent computers get, and no
matter how much they help us advance, they will always be strictly
machines and we will be strictly humans. When we finally do extend the
living range of humanity throughout near space, possibly throughout
the entire solar system and out to the stars, it will be done in
tandem with advanced computers that will be as intelligent as we are,
but never intelligent in the same way that humans are. They will need
us as much as we will need them.

As far as our destiny in the cosmos is concerned, I think that it will
arise out of the two important changes that are taking place before
our very eyes. One change involves the computerization of our society,
and the other change involves the extension of our capabilities
through aeronautical and space research. And the two are combined.
Decades ago we science fiction writers foresaw a great many things
about space travel, but two things we did not foresee. In all the time
that I wrote stories about our first Moon landing and about the coming
of television, nobody, as far as I know, in the pages of the science
fiction magazines, combined the two. Nobody foresaw that when the
first Moon landing took place, people on Earth would watch it on
television. Nor did science fiction writers foresee that in taking
ships out into space, they would depend quite so much on computers.
The computerization of space flight was something that eluded them
completely. So, I have two broad areas that I can discuss in talking
about our destiny in the cosmos. One area is the future of
computerization, and the other area is the future of space itself. In
this presentation, I will talk about computers and their future, and I
think I have a kind of right to do so. I have never done any work on
computers, but I have speculated freely concerning them.

Despite my gentle appearance as a gentleman a little over 30, I have
been a published writer for 45 years. If I can make it 5 more years, I
will celebrate my golden anniversary as a published writer, which
isn't too bad for a fellow in his early thirties. Perhaps the most
important thing I did as a speculator was to foresee the various
properties and abilities of computers, including those mobile
computerized objects called robots. As a matter of fact, I sometimes
astonish myself. Back in 1950, in a passage that was eventually
published as the first section of my book Foundation, I had my
protagonist pull out a pocket computer. I didn't call it a pocket
computer, I called it a "tabulator pad." However, I described it
pretty accurately, and this at a time when computers filled up entire
walls! Decades later, someone said to me, "Hey Asimov, you described a
pocket computer a long, long time ago; why didn't you patent it and
become a trillionaire?" And I said, "Did you notice, perchance, that I
only described the outside?" I'll be frank, to this day I don't know
what is inside. I have evolved a theory; I think it's a very clever
cockroach. But that was in 1950; I did a lot better in 1939, long
before many of you were born. I began writing about robots. Robots had
been written about for years before this. The word had been invented
in 1921 by Karel Capek, a Czechoslovakian playwright. However, until I
started writing, robots were, for the most part, either menaces or
sort of wistful little creatures. As menaces, they always destroyed
their creator; they were examples to humanity of what to avoid. They
were symbols of the egregious hubris of the scientist. According to
that plot you did something that infringed upon the abilities reserved
for the creator, you made life. No one objected to destroying life,
you understand (don't let me get radical here), but making life was
wrong, especially if you didn't use the ordinary method. Even if you
did use the ordinary method, the robot, as though to explain to you
that you had done wrong, sometimes killed you. Well, I got tired of
that plot. There was another plot in which the robot was a good and
noble but picked-on member of a minority group but everyone was mean
to him. I got tired of that plot too. I decided that robots really
ought to be (hold your breath now) ... machines, to do work that they
were designed to do, but with safeguards built into them. Looking
around the world, I noticed that practically everything human beings
had made had elements of danger in them, and that, as best we could
(being fallible human beings), we had safeguards built in. For
instance, you will notice that swords have hilts, so that when you
thrust the sword forward and it goes into the other guy the way it is
supposed to, your hand doesn't slide along the blade and cut all your
fingers off. So, I figured robots would also have safeguards built
in, and I finally listed these safeguards in the March 1942 issue of
Astounding Science Fiction on page 100, first column, about one-third
of the way down. Since then, I have had occasion to look up the list
and memorize it. I called it the "Three Laws of Robotics." I will now
recite these laws for you because I have memorized them. I have made a
great deal of money from them, so it sort of warms my heart to think
of them for purely idealistic reasons.

1. A robot may not injure a human being, or, through inaction, allow a
human to come to harm.

2. A robot must obey orders given to him by human beings except where
such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection
does not conflict with the First or Second Law.

None of these laws is interesting in itself, although it is obvious
that the laws apply to all tools. If you stop to think, the first rule
of any tool is that you operate it safely. Any tool that is going to
kill you when you use it is not going to be used. It won't even be
used if it merely maims you! The second rule is that a tool should do
what it is supposed to as long as it does so safely. And the third
rule is that a tool ought to survive its use and be ready for a second
use, if that can possibly be arranged. Nowadays, people who are
working with robots actually debate the methods by which these three
rules can be installed. This flatters me, but what interests me most
is that I called these rules the Three Laws of Robotics, and that use
of the word "robotics" in the arch 1942 issue of Astounding Science
Fiction, (page 100, first column, one-third of the way down) was the
first use of this word anywhere in the English language. I made up the
word myself; this is my contribution to science. Someday, when a truly
encyclopedic history of science is written (you know, one with 275
volumes), somewhere in volume 237, where the science of robotics is
discussed, there will be a footnote: The word was invented by Isaac
Asimov. That is going to be my only mention in all 275 volumes. But,
you know, better than nothing, I always say. The truth is that I
didn't know I was inventing a word; I thought it was the word. If you
will notice, physics ends in "ics" and just about every branch of
physics, such as hydraulics, celestial mechanics, and so on, ends in
"ics." So I figured that the study of robots would be robotics, and
anyone else would have thought of that too if they had stopped to
think that there might be a study of robots. What's more, I quoted
those three laws from the Handbook of Robotics, 56th edition (c.2058
A.D.), and the first edition of such a handbook is actually about to
come out. It is a handbook of industrial robotics, and I was asked to
write the introduction. Who would have thought, when I was a little
kid writing about robots, that such a handbook would actually be
written! It just shows that if you live long enough, almost anything
can happen.

The question then is: What is going to happen with robotics in the
future? Well, as we all know, it's going to create a certain amount of
economic dislocation. Jobs will disappear as industries become
robotized. What's more, robots are dangerous, very literally dangerous
sometimes. There has already been one case of a robot killing a human
being. A few years ago a robot in a Japanese assembly line stopped
working, and a young mechanic went to see what was wrong with it. The
robot was surrounded by a chain-link fence, and the safeguard system
was designed to cut off power to the assembly line when the fence gate
was opened, thus deactivating the robot and making it just a lump of
dead metal. This safeguard was designed to implement the First Law:
Thou shalt open the chain fence before you approach the robot. (You
have to understand that what we call industrial robots are just a
bunch of computerized levers, nothing more. They're not complicated
enough to have the three laws built into them, so the laws are
implemented outside them.) Well, this mechanic thought he would save
himself a second and a half, so he lightly jumped over the fence and
manually turned off that particular robot. This will do the trick just
as well, unless you happen to push the "on" button with your elbow
while you're busy working on the robot. That is apparently what he
did, so the robot, in all innocence, started working. I believe it was
a gear-grinding robot, so it ground a gear in the place where a gear
was supposed to be, which was where the guy's back really was, and it
killed him. The Japanese government tried to keep it quiet, because
they didn't want anything to spoil their exploitation of robotics. But
it is difficult to keep a thing like that completely quiet. Eventually
the news got out. In all the newspapers the headlines were: "Robot
Kills Human Being." When you read the article you got the vision of
this monstrous machine with shambling arms, machine oil dribbling down
the side, sort of isolating the poor guy in a corner, and then rending
him limb from limb. That was not the true story, but I started getting
telephone calls from all over the United States from reporters saying,
"Have you heard about the robot that killed the human being? What
happened to the First Law?" That was flattering, but I suddenly had
the horrible notion that I was going to be held responsible for every
robotics accident that ever happened, and that made me very nervous. I
am hoping that this sort of thing doesn't happen very often. But the
question is: What's going to happen as robots take over and people are
put out of jobs? I am hoping that that is only a transition period and
that we are going to end up with a new generation that will be
educated in a different way and that will be ready for a computerized
world with considerably more leisure and with new kinds of jobs. It is
the experience of humanity that advances in technology create more
jobs than they destroy. But they are different kinds of jobs, and the
jobs that are going to be created in a computerized world are going to
require a great deal more sophistication than the jobs they destroy.
It is possible that it won't be easy to reeducate or retrain a great
many people who have spent their whole lives doing jobs that are
repetitive and stultifying and therefore ruin their brains. Society
will have to be extremely wise and extremely humane to make sure that
there is no unnecessary suffering during this interval. I'm not sure
that society is wise enough or humane enough to do this. I hope it is.
Regardless, we will eventually come to a period when we will have a
world that is adjusted to computerization.

Perhaps then we will have another and even more intractable problem.
What happens if we have computers and robots that are ever more
capable, that are ever more versatile, and that approach human
activity more and more closely? Are we going to be equaled? Are we
going to be surpassed? Will the computer take over and leave us far
behind? There are several possible answers to these questions,
depending upon your mood. If you are in a cynical mood, if you have
been reading the newspapers too closely, the answer would be: who
cares? Or if you have become even more cynical, the answer would be:
why not? You might look at it this way: The history of humanity is a
long tale of misery and cruelty, of destroying each other and the
Earth we live on, and we don't deserve to continue anymore. If there
is anything with more wisdom than people, with better brains than we
have, that can think better ... please let it take over. You might
also argue this way: For 3 1/2 billion years, life has been evolving on
Earth very slowly, in a hit or miss fashion, with no guiding
principle, as far as we can tell, except survival. As the environment
changes and the Earth undergoes various changes, life takes advantage
of new niches, fumbles the old niches, and eventually, after 3 1/2
billion years, finally develops a species with enough brains to create
a technological civilization. That is a long time to achieve so
little. You must ask yourself, "Is that the best that can be done?"
Well, maybe that is the best that can be done without a guiding
intelligence, but, after 3 1/2 billion years, the guiding intelligence,
such as it is, has been created, and now it can take over. You can
even argue that the whole purpose of evolution has been, by hit and
miss, to finally create a species that can then proceed to accelerate
evolution in a guided direction. In that case, we are designing our
own successors. Instead of waiting several million years just to
develop enough of a Broca's convolution in our brains so that we can
learn to talk, we are deliberately designing computers so that they
can speak, understand speech, and do a few more things. If there is a
grand designer up above who has chosen this way of creating human
beings, he is going to be rubbing his hands and calling for applause.
He is going to say, "Watch the next step, this is going to be faster
than you can possibly imagine."

On the other hand, there is another way to look at the future of the
computer. You can also assume that you are not going to easily
manufacture artificial intelligence that will surpass our natural
intelligence. Miserable as we are, and deplorable as our records prove
us to be, we nevertheless have 3 1/4 pounds of organic matter inside our
skulls that are really worthy of 3 1/2 billion years of evolution. When
you are being cynical you can speak of the brain as something so
little to take so much time to evolve. But consider it closely, it is
extremely astonishing, really! There are 10 billion neurons in the
human brain, along with 10 times that many supporting cells. Each of
these 10 billion neurons is hooked up synaptically with 50 to 500
other neurons. Each neuron is not just a mere on/off switch; it is an
extremely complex system in itself, which contains many sets of very
strange and unusual molecules. We really don't know what goes on
inside the neurons on an intimate basis, nor do we know exactly the
purpose for the various connections in the human brain. We don't know
how the brain works in anything but its simplest aspects. Therefore,
even if we can make a computer with as many switches as there are
human neurons, and even if we can interconnect them as intricately as
they are connected in the human brain, will the computer ever be able
to do what we can do so easily? Now there are some things in which a
computer is far ahead of us. Even the simplest computer, the very
first computer, for that matter even before they became totally
electronic, was far ahead of us in solving problems and manipulating
numbers. There is nothing a computer can do that we can't if we are
given enough time and if we correct our errors. But that's the point;
we don't have enough time and we don't have the patience or the
ability to detect all our errors. That is the advantage of the
computer over us. It can manipulate numbers in nanoseconds without
error, unless error is introduced by the human beings who give it the
instructions. Of course, as humans we always like to think of the
other side of this argument. My favorite cartoon, in the New Yorker,
shows a computer covering an entire huge wall, as they did in those
days, and two little computer experts. (You could tell that they were
computer experts because they wore white lab coats and had long white
beards). One of them is reading a little slip of paper coming out of a
slot in the computer. He says, "Good heavens, Fotheringay, do you
realize that it would take 100 mathematicians 400 years to make a
mistake this big?"

The question is: If computers are so much better than we are in this
respect, why shouldn't they be better than we are in all respects? The
answer is that we are picking on the wrong thing. This business of
fooling around with numbers, of multiplying, dividing, integrating,
differentiating, and doing whatever else it is that computers do, is
trivial, truly trivial! The reason that computers do so much better in
these areas than we do is because our brains are not designed to do
anything that trivial. It would be a waste of time. As a matter of
fact, it's only because we are forced, in the absence of computers, to
do all this trivial work that our brains are ruined. It is like taking
an elaborate electronic instrument and, because it happens to be hard
and heavy, using it as a hammer. It may be a very good hammer, but
obviously you are going to destroy the instrument. Well, we take our
brain and what do we use it for? We file things alphabetically, make
lists of things, work out profit and loss, and do a trillion and one
other things that are completely trivial. We use our fancy instrument
for trivia simply because there is nothing else that can do it. Now
enters the computer. The computer is a halfway fancy instrument. It's
a lot closer to a hammer than it is to a brain. But it's good enough
to be able to do all those nonsense things that we have been wasting
our brains on. The question is, what then is it that our brain is
designed for? The answer, as far as I'm concerned, is that it is
designed to do all sorts of things that involve insight, intuition,
fantasy, imagination, creativity, thinking up new things, and putting
together old things in completely new ways; in other words, doing the
things that human beings, and only human beings, can do.

It is difficult for me to put myself into the minds of others; I can
only put myself into my own mind. For example, I know that I write
stories, and I write them as fast as I can write. I don't give them
much thought because I'm anxious to get them down on paper. I sit and
watch them being written on the paper as my fingers dance along the
keys of my typewriter, or occasionally of my word processor. I start a
story in the right place and each word is followed by another word (a
correct other word) and each incident is followed by another incident
(a correct other incident). The story ends when it is supposed to
end. Now, how do I know that all the words are correct, all the ideas
are correct, and all the incidences are correct? I don't know in any
absolute sense, but at least I can get them published. I virtually
never fail! The thing is, I literally don't give it any thought.
People ask me, "How can you write all the stuff that you write?" (I
have written 285 books at the moment, and I've been busy writing Opus
300 so it can be published as my 300th book.) I say, "Well, I cut out
the frills, like thinking." Everyone laughs; they think I'm being very
funny. But I'm not; I mean it literally. If I had to stop and think, I
couldn't possibly do all that I do. All right, I'm not as good as
Tolstoy, but considering that I don't think, I'm surprisingly good. Of
course, the real answer is that I don't consciously think. Something
inside my brain puts the pieces together and turns out the stories. I
just don't know how it's done.

It is a similar situation if I want to flex my arm. I don't know how
the devil I do it. Some change is taking place in my muscle molecules,
in the actomyosin, which causes them to assume another shape. There is
a ratchet or something that drags the actin molecules along the
myosin; who knows? The theory changes every year. But whatever it is,
I say to myself, "flex" and it flexes. I don't even know what I did;
in fact, I don't even have to say "flex." If I'm driving my automobile
and something appears before me, my foot flexes and stamps down on the
brake before I can say to myself "brake." If it didn't do that before
I could say to myself "brake," I wouldn't be alive now. The point is
that our brain does things, sometimes very complex things, that we
don't know how it does. Even the person who does it doesn't know how
he does it. If you don't want to take me as an example, consider
Mozart who wrote symphonies at the ridiculously early age of 7 or 8.
Somebody wrote to him when Mozart was an old man of 26 and asked him
how to go about writing symphonies. Mozart said, "I wouldn't if I were
you; you are too young. Start with something simple, a concerto or
sonata; work your way up to symphonies." The guy wrote back and said,
"But Herr Mozart, you were writing symphonies when you were a little
boy." Mozart wrote back, "I didn't ask anybody."

It's quite possible that we will never figure out how to make
computers as good as the human brain. The human brain is perhaps a
little more intractable than we imagined. Even if we could, would we?
Is there a point to it? There may not be, you know. We talk about
artificial intelligence as though intelligence is a unitarian,
monolithic thing. We talk about intelligence quotient as though we can
measure intelligence by a single number. You know, I'm 85, you're 86,
you're more intelligent than I am. It's not so. There are all sorts of
varieties of intelligence. I believe that people who make up
intelligence tests make up questions that they can answer. They've got
to! Suppose I want to design a test to decide which of you has the
potential to become a great punk rock musician. I don't know what to
ask; I know nothing about punk rock. I don't even know the vocabulary.
All I know are the words "punk rock." So this is not the kind of test
I can make up. My point is that we have a whole set of intelligence
tests designed by people who know the answers to the questions. You
are considered intelligent if you are like they are. If you're not
like they are you rate very low. Well, what does that mean? It just
means that it is a self-perpetuating process.

I am fortunate I happen to have exactly one kind of intelligence, the
kind that enables me to answer the questions on an intelligence test.
In all other human activities I am abysmally stupid. But none of that
counts; I'm tabbed as intelligent. For instance, suppose something
goes wrong with my car. Whatever it is that goes wrong, I don't know
what it is. There is nothing that is so simple about my car that I
understand it. So, when my car makes funny sounds, I drive it in fear
and trembling to a gas station where an attendant examines it while I
wait with bated breath, staring at him with adoration for a god-like
man, while he tells me what's wrong and fixes it. Meanwhile, he
regards me with the contempt due someone so abysmally unintelligent as
to not understand what is going on under the hood. He likes to tell me
jokes, and I always laugh very hard because I don't want to do
anything to offend him. He always says to me, "Doc," (he always calls
me Doc; he thinks it's my first name). "Doc," he says, "A deaf and
dumb man goes into a hardware store. He wants nails, so he goes up to
the counter and goes like this and they bring him a hammer. He shakes
his head and he hammers again. So, they bring him a whole mess of
nails. He takes the nails that he wants, pays for them, and walks
out." And I nod. Then the attendant says, "Next a blind man comes in
and he wants scissors. How does he ask for them?" I gesture to show
scissors but the attendant says, "No, he says, 'May I have a pair of
scissors?"' Now, from the dead silence I always get when I tell this
joke, I can tell that you agree with my answer. But a blind man can
talk, ipso facto, right? All right! Well that shows your intelligence.
It is an intelligence test, right there, and every one of you probably
flunked! So, I maintain that there are all kinds of varieties of
intelligence and that's a good thing too because we need variety. The
point is that a computer may well have a variety that is different
from all the human varieties. In fact, we may come up with a whole set
of varieties of intelligence. We would have a number of species of the
genus human intelligence and a number of species of the
genus computer intelligence. That's the way it should be; let the
computers do what they are designed to do and let the human beings do
what they are designed to do. Together, in cooperation, man and
computer can advance further than either could separately. Of
course, it is possible to imagine that we could somehow design a
computer in such a way that it could show human intelligence, have
insight and imagination, be creative, and do all the things we think
of as typically and truly human. But, so what? Would we build such a
computer, even if we could? It might not be cost effective.

Consider it this way: we move by walking. We lift first one leg, then
the next one; we are consistently falling and catching ourselves. This
is a very good method of locomotion because we can step over obstacles
that aren't large, we can walk on uneven roads or through underbrush,
and we can make our way through crowds. Other animals move
differently; they jump, hop, fly, swim, glide, and so on. Finally, we
invented artificial locomotion with the wheel and axle. It's one
method that no living creature has developed. There are good reasons
for that; it would be very difficult for a living creature to have a
wheel and axle supplied with nerves and blood vessels. Nevertheless,
we have both artificial locomotion on wheels and human locomotion on
legs, and each has its advantages. We can move a lot faster on a
machine. On the other hand, when we walk we don't need a paved highway
or steel rails. We'd have to make the world very smooth and convenient
if we were going to take advantage of wheels. But, it's worth it, at
least most of us think so. I've heard no suggestions that we go back
to walking to New York. On the other hand, walking is not passe. I
frequently have occasion to navigate from the bedroom to the bathroom
sometime in the dead of night, and I tell you right now, I'm never
going to take an automobile to do that. I'm going to walk; that is the
sort of thing walking is for.

The question is, can you invent a machine that will walk? Of course
you can! I've seen machines that can walk, but they're usually merely
laboratory demonstrations. These machines might have very specialized
uses, but I don't think they can ever really take the place of
walking. We walk so easily that it makes no sense to kill ourselves
working up a machine that will walk. And, as far as computers and
human beings are concerned, it is wasteful to develop a computer that
can display a human variety of intelligence. We can take an ordinary
human being and train him, from childhood on, to have a terrific
memory, to remember numbers and partial products, and to work out all
kinds of shortcuts in handling addition, subtraction, multiplication,
division, square roots, and so on. In fact, people have been born with
the ability; they are mathematical wizards who can do this sort of
thing from an early age, and sometimes they can't do anything else.
But once you train that ordinary human being, what do you have? You
have a human being, which you've created, so to speak, at enormous
effort and expense, who can do what any cheap two-dollar computer can
do. Why bother? In the same way, why go through the trouble of
building an enormously complex computer, with complicated programming,
so that it can create and write a story when you have any number of
unemployed writers who can do it and who were manufactured at zero
cost to society in general, by the usual process. To sum it up, I
think we can be certain that no matter how clever or artificially
intelligent computers get, and no matter how much they help us
advance, they will always be strictly computers and we will always be
strictly humans. That's the best way, and we humans will get along
fine.

The time will come when we will think back on a world without
computers and shiver over the loneliness of humanity in those days.
How was it possible for human beings to get along without their
friends? You will be glad to put your arm around the computer and say,
"Hello, friend," because it will be helping you do a great many things
you couldn't do without it. It will make possible, I am sure, the true
utilization of space for humanity. When we finally do extend the
living range of humanity throughout near space, possibly throughout
the entire solar system and out to the stars, it will be done in
tandem with advanced computers that will be as intelligent as we are,
but never identically intelligent to humans. They will need us as
much as we need them. There will be two, not one of us. I like that
thought.

Interesting, isn't it?

bye!
 
The time will come when we will think back on a world without
computers and shiver over the loneliness of humanity in those days.
How was it possible for human beings to get along without their
friends? You will be glad to put your arm around the computer and say,
"Hello, friend," because it will be helping you do a great many things
you couldn't do without it. It will make possible, I am sure, the true
utilization of space for humanity. When we finally do extend the
living range of humanity throughout near space, possibly throughout
the entire solar system and out to the stars, it will be done in
tandem with advanced computers that will be as intelligent as we are,
but never identically intelligent to humans. They will need us as
much as we need them. There will be two, not one of us. I like that
thought.
This part in particular is very amusing. :)

bye!
 
Very amusing...indeed...when you consider the possibility that a human brain can be uploaded into a computer shell or a virtual world.

Soon, doctors will be able to transplant organs from genetically grown pigs. So, if your heart, liver, kidney, etc. are replaced by pig organs, are you still a human? If your mind is uploaded to a PDA, are you still a human? If a whole town moved inside a mainframe with its own world, are they still human?

Are we going to define human as mammals of 1970 only?

So many questions....
 
Zion, as a pup this hamster ingested many Asimov seeds. Apparently some sprouted. (Original hamster thoughts or conduit for the ideas of others? After seeking seeds for so long and chewing so many how does one know?) Fun seed. Thanks.
 
Neuromorphic Approach.

Instead of using the ones and zeros of digital electronics to simulate the way the brain functions, “neuromorphic” engineering relies on nature's biological short-cuts to make robots that are smaller, smarter and vastly more energy-efficient.

PEOPLE have become accustomed to thinking of artificial intelligence and natural intelligence as being completely different—both in the way they work and in what they are made of. Artificial intelligence (AI) conjures up images of silicon chips in boxes, running software that has been written using human expertise as a guide. Natural intelligence gives the impression of “wetware”—cells interacting biologically with one another and with the environment, so that the whole organism can learn through experience. But that is not the only way to look at intelligence, as a group of electronics engineers, neuroscientists, roboticists and biologists demonstrated recently at a three-week workshop held in Telluride, Colorado.

What distinguished the group at Telluride was that they shared a wholly different vision of AI. Rather than write a computer program from the top down to simulate brain functions, such as object recognition or navigation, this new breed of “neuromorphic engineers” builds machines that work (it is thought) in the same way as the brain. Neuromorphic engineers look at brain structures such as the retina and the cortex, and then devise chips that contain neurons and a primitive rendition of brain chemistry. Also, unlike conventional AI, the intelligence of many neuromorphic systems comes from the physical properties of the analog devices that are used inside them, and not from the manipulation of 1s and 0s according to some modelling formula. In short, they are wholly analog machines, not digital ones.

The payoff for this “biological validity” comes in size, speed and low power consumption. Millions of years of evolution have allowed nature to come up with some extremely efficient ways of extracting information from the environment. Thus, good short-cuts are inherent in the neuromorphic approach.

At the same time, the electronic devices used to implement neuromorphic systems are crucial. Back in the 1940s, when computers were first starting to take shape, both analog and digital circuits were used. But the analog devices were eventually abandoned because most of the applications at the time needed equipment that was more flexible. Analog devices are notoriously difficult to design and reprogram. And while they are good at giving trends, they are poor at determining exact values.

In analog circuits, numbers are represented qualitatively: 0.5 reflecting, say, a voltage that has been halved by increasing the value of a resistor; 0.25 as a quarter the voltage following a further increase in resistance, etc. Such values can be added to give the right answer, but not exactly. It is like taking two identical chocolate bars, snapping both in half, and then swapping one half from each. It is unlikely that either of the bars will then be exactly the weight that the manufacturer delivered.
=====================================================================
“Neuromorphic engineers look at brain structures and then devise chips that contain neurons, axons, and a primitive rendition of brain chemistry.”
=====================================================================
One of the contributions of the father of the field—Carver Mead, professor emeritus at the California Institute of Technology in Pasadena—was to show that this kind of precision was not important in neural systems, because the eventual output was not a number but a behaviour. The crucial thing, he argued, was that the response of the electronic circuits should be qualitatively similar to the structures they were supposed to be emulating. That way, each circuit of a few transistors and capacitors could “compute” its reaction (by simply responding as dictated by its own physical properties) instantly. To do the same thing, a digital computer would have to perform many operations and have enough logic gates (circuits that recognise a 1 or a 0) for the computation. That would make the device not only slow and power-hungry, but also huge and expensive. For a fuller account of Carver Mead and his unique contribution to the whole of information technology, see this article.

Another advantage of the analog approach is that, partly because of their speed, such systems are much better at using feedback than their digital counterparts. This allows neuromorphically designed machines to be far more responsive to their environment than conventional robots. In short, they are much more like the biological creatures they are seeking to emulate.
=====================================================================
Going straight
One of the many projects demonstrating this concept at the Telluride meeting was a robot that could drive in straight lines—thanks to electronics modelled on the optic lobe in a fly's brain. The vision chip, built by Reid Harrison at the University of Utah in Salt Lake City, is a “pixellated” light sensor that reads an image using an array of individual cells, with additional circuitry built locally into each cell to process the incoming signals. The fact that these processing circuits are local and analog is crucial to the device's operation—and is a feature that is borrowed from the biological model.

Dr Harrison and his supervisor at Caltech and co-founder of the Telluride summer school, Christof Koch, identified the various processes taking place in the so-called lamina, medulla and lobular-plate cells in a fly's brain as being worth implementing in silicon. These cells form a system that allows the fly to detect motion throughout most of its visual field—letting the insect avoid obstacles and predators while compensating for its own motion.
In the chip, special filters cut out any constant or ambient illumination, as well as very high frequencies that can be the source of electronic noise in the system. The purpose is to let the device concentrate on what is actually changing. In a fly's brain, this filtering role is played by the lamina cells.

In a fly's medulla, adjacent photodetectors are paired together, a time delay is introduced between the signals, and the two are then multiplied together. The length of the delay is crucial, because it sets the speed of motion that the detector is looking for. In the chip, since the delay and the distance between the two adjacent photo-diodes are known, the speed of an image moving over the two detectors can be determined from the multiplier output. Large numbers of these “elementary motion detectors” are then added together in the final processing stage. This spatial integration, which is similar to that performed in a fly's large lobular plate cells, ensures that the broad sweep of the motion is measured, and not just local variations. The same kind of mechanism for detecting motion is seen in the brains of cats, monkeys and even humans.
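For illustration, here is a toy version of one such elementary motion detector (essentially one half of a Reichardt-style correlator: delay one photodetector's signal and multiply it with its neighbour's; the signals and the delay below are invented numbers, not anything from Dr Harrison's chip):

# Toy elementary motion detector: two adjacent "photodetectors" see the same
# bright edge a few time-steps apart. Delaying one signal and multiplying it
# with the other gives a large response only for motion in that direction.

def emd(left, right, delay):
    padded = [0.0] * delay + left[:-delay]   # delayed copy of the left signal
    return sum(l * r for l, r in zip(padded, right))

# A bright edge passes the left detector at t=2 and the right one at t=4.
left  = [0, 0, 1, 0, 0, 0, 0, 0]
right = [0, 0, 0, 0, 1, 0, 0, 0]

print(emd(left, right, delay=2))   # strong response: motion left -> right
print(emd(right, left, delay=2))   # weak response: opposite direction

Because the delay is fixed, the detector responds most strongly to one particular speed, which is why large numbers of such units are summed in the final stage.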

To prove that the chip not only worked, but could be useful, Mr Harrison attached it to a robot that had one of its wheels replaced by a larger-than-normal one, making it move in circles. When instructed to move in a straight line, feedback from the vision chip—as it computed the unexpected sideways motion of the scenery—was fed into the robot's drive mechanism, causing the larger wheel to compensate by turning more slowly. The result was a robot that could move in a straight line, thanks to a vision chip that consumed a mere five millionths of a watt of power.

For comparison, the imaging device on NASA's little Sojourner Rover that explored a few square metres of the Martian surface in 1997 consumed three-quarters of a watt—a sizeable fraction of the robot's total power. The image system that helps make the “Marble” trackball developed by Logitech of Fremont, California, a handy replacement for a conventional computer mouse, takes its cue likewise from a fly's vision system. In this case, the engineering was done mainly by the Swiss Centre for Electronics and Microtechnology in Neuchatel and Lausanne.

The concept of sensory feedback is a key part of another project shown at the Telluride workshop. In this case, a biologist, robotics engineer and analog-chip designer collaborated on a walking robot that used the principle of a “central pattern generator” (CPG)—a kind of flexible pacemaker that humans and other animals use for locomotion. (It is a chicken's CPG that allows it to continue running around after losing its head.) Unlike most conventional robots, CPG-based machines can learn to walk and avoid obstacles without an explicit map of their environment, or even their own bodies.

The biological model on which the walking robot is based was developed in part by Avis Cohen of the University of Maryland at College Park. Dr Cohen had been studying the way that neural activity in the spinal cord of the lamprey (an eel-shaped jawless fish) allowed it to move, with the sequential contraction of muscles propelling it forward in a wave motion. The findings helped her develop a CPG model that treated the different spinal segments as individual oscillators that are coupled together to produce an overall pattern of activity. Tony Lewis, president and chief executive of Iguana Robotics in Mahomet, Illinois, developed this CPG model further, using it as the basis for controlling artificial creatures.

In the walking robot, the body is mainly a small pair of legs (the whole thing is just 14cm tall) driven at the hip; the knees are left to move freely, swinging forward under their own momentum like pendulums until they hit a stop when the leg is straight. To make the robot walk, the hips are driven forwards and backwards by “spikes” (bursts) of electrical energy triggered by the CPG. This robot has sensors that let it feel and respond to the ground and its own body. Because outputs from these sensors are fed directly back to the CPG, the robot can literally learn to walk.

The CPG works by charging and discharging an electrical capacitor. When additional sensors detect the extreme positions of the hips, they send electrical spikes to the CPG's capacitor, charging it up faster or letting it discharge more slowly, depending on where the hips are in the walking cycle. As the robot lurches forward, like a toddler taking its first steps, the next set of “extreme spikes” charges or discharges the capacitor at different parts of the cycle. Eventually, after a bit of stumbling around, the pattern of the CPG's charging and discharging and the pattern of the electrical spikes from the sensors at the robot's hip joints begin to converge in a process known as “entrainment”. At that point, the robot is walking like a human, but with a gait that matches the physical properties of its own legs.
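A crude way to see why this locking happens is to model the CPG as a capacitor that ramps up to a threshold, fires and resets, with each hip-extreme spike topping up the charge. The sketch below is a toy with made-up numbers, and it only models the "charge it up faster" half of the mechanism, but the entrainment it shows is the same in spirit.

```python
import numpy as np

def entrained_cpg(sim_time=30.0, dt=1e-3, natural_period=1.2,
                  hip_period=1.0, kick=0.5, threshold=1.0):
    """Toy integrate-and-fire CPG entrained by hip-extreme sensor spikes.

    The capacitor voltage ramps so that, left alone, the CPG would fire
    every `natural_period` seconds.  Whenever the hip reaches an extreme
    (modelled here as a fixed clock with period `hip_period`), the sensor
    spike adds charge, pulling the next CPG firing earlier.  After a few
    cycles the CPG locks to the hip's own rhythm: entrainment.
    """
    v = 0.0
    charge_rate = threshold / natural_period
    next_hip = hip_period
    cpg_fires, hip_spikes = [], []
    for step in range(int(sim_time / dt)):
        t = step * dt
        v += charge_rate * dt
        if t >= next_hip:                       # hip hits an extreme position
            hip_spikes.append(t)
            v += kick * (threshold - v)         # sensor spike tops up the capacitor
            next_hip += hip_period
        if v >= threshold:                      # CPG fires: drive the hips, reset
            cpg_fires.append(t)
            v = 0.0
    return np.array(cpg_fires), np.array(hip_spikes)

cpg, hip = entrained_cpg()
print(np.round(np.diff(cpg)[-5:], 3))   # late intervals ~1.0 s: locked to the hip
print(np.round(np.diff(hip)[-5:], 3))   # the hip clock itself: 1.0 s
```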

Walking is only the start. Mr Lewis has endowed his robot with an ability to learn how to step over obstacles. It does this by changing the length of the three strides before the object, using miniature cameras as eyes, and the same kind of interaction with the CPG that it uses to synchronise its hip movement for normal walking.

The interesting thing is that the obstacle does not have to be defined in any way. It appears simply as an unexpected change in the flow of visual information from the cameras that the robot uses to see with. This makes the technique extremely powerful: in theory, it could be applied to lots of other forms of sensory input. Another factor that makes this project impressive is that its key component—the CPG chip, designed by Ralph Etienne-Cummings of Johns Hopkins University in Baltimore, Maryland—consumes less than a millionth of a watt of power.

The efficiency of CPG-based systems for locomotion has captured commercial attention. For the first time, parents can now buy their children analog “creatures”, thanks to Mark Tilden, a robotics expert at Los Alamos National Laboratory in New Mexico. Hasbro, one of America's largest toy makers, is marketing a product called BIO Bugs based on Dr Tilden's biomechanical machines.

[Photo caption: robot bugs invade the toy market]

After teaching robots to walk and scramble over obstacles that they have never met before, how about giving them the means for paying attention? Giacomo Indiveri, a researcher at the Federal Institute of Technology (ETH) in Zurich, has been using a network of “silicon neurons” to produce a simple kind of selective visual attention. Instead of working with purely analog devices, the ETH group uses electrical circuits to simulate brain cells (neurons) that have many similarities with biological systems—displaying both analog and digital characteristics simultaneously, yet retaining all the advantages of being analog.

Like the locomotion work, the silicon neurons in the ETH system work with electrical spikes—with the number of spikes transmitted by a neuron indicating, as in an animal brain, just how active it is. Initially, this is determined by how much light (or other stimulus) the neuron receives. This simple situation, however, does not last long. Soon, interactions with the rest of the network begin to have an effect. The system is set up with a central neuron that is connected to a further 32 neurons surrounding it in a ring. The outer neurons, each connected to its partners on either side, are the ones that receive input from the outside world.

The neural network has two parameters that can be tweaked independently: global inhibition (in which the central neuron suppresses the firing of all the others); and local excitation (in which the firing of one neuron triggers firing in its nearest neighbours). By varying these two factors, the system can perform a variety of different tasks.
=====================================================================
“Millions of years of evolution have allowed nature to come up with some extremely efficient ways of extracting information from the environment.”
=====================================================================
The most obvious is the “winner-takes-all” function, which occurs when global inhibition is turned up high. In this case, the firing of one neuron suppresses firing in the rest of the network. However, global inhibition can also produce a subtler effect. If several neurons fire at the same time, then they stimulate the central neuron to suppress the whole network, but only after they have fired. The inhibition is only temporary, because the electrical activities of all the neurons have natural cycles that wax and wane. So the synchronised neurons now have some time to recover before firing again, without other neurons having much chance to suppress them.

In this situation, the important thing to note is that synchronised signals tend to come from the same source. Consequently, if the neurons driven by a single source can be made to fire together, it becomes possible to separate an individual object from the rest of the visual scene. Local excitation helps further, since the synchronised neurons are likely to be next to each other.

This combination of local excitation and global inhibition is a feature of the human brain's cerebral cortex. The combination of winner-takes-all and synchronisation produces a mechanism for visual attention, because it allows one object—and one only—to be considered. Importantly, the global inhibition makes it difficult for other objects to break in, so the attention is stable. The ETH team is thinking of building a more advanced version of its attention-getter, in which the focus of attention can be switched, depending on the novelty and importance of a fresh stimulus.
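A rough rate-based simulation gives a feel for how these two knobs interact. The neurons below are simple rectified rate units rather than the ETH group's spiking silicon circuits, and every parameter value is invented, but with inhibition turned up the network settles into the same winner-takes-all behaviour described above.

```python
import numpy as np

def ring_attention(stimulus, steps=2000, dt=0.01,
                   local_excitation=0.2, global_inhibition=0.6):
    """Toy 32-neuron ring with a central inhibitory hub (rate-based sketch).

    Each ring neuron receives its external input, excitation from its two
    neighbours, and inhibition proportional to the hub's activity; the hub
    simply tracks the total activity of the ring.  With inhibition turned
    up, only the patch of neurons around the strongest input survives.
    """
    stimulus = np.asarray(stimulus, dtype=float)
    rate = np.zeros_like(stimulus)
    hub = 0.0
    for _ in range(steps):
        neighbours = np.roll(rate, 1) + np.roll(rate, -1)
        drive = (stimulus
                 + local_excitation * neighbours
                 - global_inhibition * hub)
        rate += dt * (-rate + np.maximum(drive, 0.0))  # rectified rate dynamics
        hub += dt * (-hub + rate.sum())                # hub sums ring activity
    return rate

# Two "objects" in the visual field: a bright one at positions 5-7,
# a dimmer one at positions 20-22.
scene = np.zeros(32)
scene[5:8] = 1.0
scene[20:23] = 0.6
print(np.round(ring_attention(scene), 2))
# Only the neurons around the bright object stay active: winner-takes-all.
```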

Neuromorphic engineering is likely to change the face of artificial intelligence because it seeks to mimic what nature does well rather than badly. For centuries, engineers have concentrated on developing machines that were stronger, faster and more precise than people. Whether tractors, sewing machines or computer accounting software, the automata have been simply tools for overcoming some human weakness. But the essential thing has been that they always needed human intelligence to function. What neuromorphic engineering seeks to do is build tools that think for themselves—making decisions the way humans do.

But the neuromorphic route will not be an easy one. The highly efficient analog systems described above are far more difficult to design than their conventional counterparts. Also, billions of dollars have been invested in digital technology—especially in CAD (computer-aided design) tools—that makes analog tools look, in comparison, like something from the stone age. More troubling still, almost all neuromorphic chips developed to date have been designed to do one job, albeit remarkably well. It has not been possible to reprogram them (like a digital device) to do many things even adequately.

However, as work advances, neuromorphic chips will doubtless evolve to be general purpose in a different sense. Instead of using, say, a camera or a microphone to give a machine some limited sense of sight and hearing, tool makers of tomorrow will be buying silicon retinas or cochleas off the shelf and plugging them into their circuit boards.

At the other extreme, the combination of biological short-cuts and efficient processing could lead to a whole family of extremely cheap—albeit limited—smart sensors that do anything from detecting changes in the sound of a car engine to seeing when toast is the right colour.

In fact, the neuromorphic approach may be the only way of achieving the goal that has eluded engineers trying to build efficient “adaptive intelligent” control systems for years. Neuromorphic chips are going to have enormous implications, especially in applications where compactness and power consumption are at a premium—as, say, for replacement parts within the human body. This is slowly being recognised. For the first time in the Telluride workshop's history, one of the participants was a venture capitalist. After genomics, perhaps the next stockmarket buzz will be neuromorphics.


bye!
 