Hiya Folks,
First of all why is Neural Networking perhaps the only way forward for the next generation of AI applications?
Digital processing is inherently suited to deterministic problem solving, i.e. problems that can be well bounded with a firm set of requirements. Indeed, look at any software development methodology in any software house and you’ll find very strict processes. These take the software from requirements definition through high- and low-level design, coding, and feature and verification testing. The names for the various phases may differ, but the overall principle is identical.
If a problem is not deterministic, then this approach runs into trouble. The real issue for developing more robust AI software- and hardware-based systems is that many real-world situations cannot readily be modeled digitally. This problem has been solved elsewhere, though. After billions of years of evolution, nature found a way: the animal brain. There, failure of the “requirements capture” or an unencountered “error leg” is not a delay in the final product but more than likely death. That has been the incentive to get it right.
Nature seldom has a single all-responsible element and very seldom processes in serial. In nature, complexity is always built up through the collective effects of many simple units, with redundancy built in. A computer is usually the exact opposite.
The first real attempts to crack the non-deterministic nature of problem solving are Neural Networks. These are a new paradigm for computing based on the parallel architecture of animal brains: a form of multiprocessor system with simple processing elements, a high degree of interconnection, and adaptive interaction between elements. Early research in Neural Networks produced some very promising results. Indeed, my colleagues and I have used modifications of basic Neural Networks to solve problems as diverse as multi-parameter minimization, 3D image reconstruction from stereo pictures, event classification, and even prediction of sunspot cycles.
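To make "simple processing elements with adaptive interaction" concrete, here is a minimal sketch (not taken from any particular product or project) of the classic single-unit case: a perceptron whose weights adapt via the textbook perceptron learning rule until it computes logical OR. The data set, learning rate, and epoch count are illustrative choices only.

```python
def step(x):
    # Hard-threshold activation: the unit "fires" (outputs 1)
    # only when its weighted input sum exceeds zero.
    return 1 if x > 0 else 0

def train_perceptron(samples, epochs=20, lr=0.1):
    # One simple processing element: two weights plus a bias,
    # adapted by the perceptron learning rule (error * input).
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = step(w[0] * x1 + w[1] * x2 + b)
            err = target - out
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Truth table for logical OR -- linearly separable,
# so a single unit is enough to learn it.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(data)
predictions = [step(w[0] * x1 + w[1] * x2 + b) for (x1, x2), _ in data]
print(predictions)  # -> [0, 1, 1, 1]
```

A real network is just many such units, highly interconnected in layers, with the same kind of error-driven weight adaptation propagated through them; that collective-of-simple-units structure is exactly the contrast with serial computing drawn above.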
My question is this: Neural Networking seems to be losing its ability to reach critical mass. In the mid-1990s there was a glut of companies founded on NN theories, striving to reach the ultimate goal of truly intelligent systems. Some of these software companies have found limited success:
Neurodynamics
Convera
AND Corporation
ERA technology
Gensym
Imagination Engines
Invention Machine
Knowmatch
Ward Systems
Their questionable financial successes aside, where are the noticeable benefits to mankind? Considering this technology has been discussed and developed for almost 20 years, how come NNs have not truly been able to gain a foothold in mainstream applications and automated processes? Where are the killer applications that will present untold riches to companies able to manipulate and interpret data in similar ways to our own organic thinking processes? Maybe it’s not possible to mimic organic parallel processes in an inherently serial environment?