Similarly, an ANN is a statistical modeling scheme, wherein the adjustable fitting coefficients take the form of variable connection strengths between neurons. Note, however, that there is a powerful difference in the functional form represented by the ANN: rather than appearing as a sum of terms, F(x) = c1F1(x) + c2F2(x) + ... + cNFN(x), the neural network takes a nested form, F(x) = FN(...F2(F1(x))...). Thus the ANN allows the modeling of causal chains through which something happens, FN, because something else happened, FN-1, stemming all the way back to some initial event, x. Within the brain, the very inspiration for the ANN, the absorption of such causal chains is both fundamental and crucial to the survival of the host organism. In effect, the brain models and then anticipates opportunities and dangers, along with all the causal or correlative chains leading to these life-determining scenarios.
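The contrast between the two functional forms can be made concrete in a short sketch. The weights below are arbitrary illustrative values, not trained ones; the point is only the nested shape F(x) = F2(F1(x)):

```python
import numpy as np

# Arbitrary, untrained connection strengths for a two-layer illustration.
W1 = np.array([[0.5, -0.3],
               [0.8,  0.2]])   # first-layer connections
W2 = np.array([0.7, -0.4])     # second-layer connections

def F1(x):
    # first transformation applied directly to the input
    return np.tanh(W1 @ x)

def F2(h):
    # second transformation, applied to the OUTPUT of F1, not to x itself
    return np.tanh(W2 @ h)

def F(x):
    # the nested composition: something happens (F2) because
    # something else happened (F1), tracing back to the input x
    return F2(F1(x))

y = F(np.array([1.0, 2.0]))
```

Unlike a sum c1F1(x) + c2F2(x), where each term sees the raw input independently, each stage here sees only the previous stage's result, which is what lets the network represent a chain of events.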
(3) (new perspective) Practitioners of ANN technology have unconsciously expanded the definition of a neural network from a collection of “on-off” switches to an interconnected array of processing units that incorporate a broad range of functional relationships. For example, while the overall nested functional form is retained, the individual functions, Fi, may be linear or Gaussian. Some neural networks may even consist of a collection of neural network modules, rather than just simple computational units.
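A minimal sketch of such a generalized processing unit, with the response function as an interchangeable part. The weights and inputs here are arbitrary illustrative values:

```python
import numpy as np

def linear(u):
    # a linear response: the unit simply passes its integrated input through
    return u

def gaussian(u, width=1.0):
    # a Gaussian ("radial") response: strongest when the integrated
    # input is near zero, falling off on either side
    return np.exp(-(u ** 2) / (2 * width ** 2))

def unit(x, w, activation):
    # a generic processing unit: a weighted sum followed by
    # whichever functional relationship the designer chooses
    return activation(w @ x)

x = np.array([0.5, -1.0])
w = np.array([1.0, 0.5])
out_lin = unit(x, w, linear)      # 0.5*1.0 + (-1.0)*0.5 = 0.0
out_gau = unit(x, w, gaussian)    # exp(0) = 1.0, the Gaussian's peak
```

The same network skeleton accommodates either unit; only the function plugged into each node changes.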
(5) (new perspective) As neural networks train, the connection weights collectively take on the form of ‘logic’ circuits that effectively capture the implicit rules and heuristics concealed within the input-output pattern pairs presented to them. In effect, the network is being forced to correctly connect inputs and outputs, and in so doing, devises a ‘theory’ to account for the relationships involved. Of course such a theory, constructed from on-off switches, is unfathomable to humans. Allow me to note that this aspect of ANNs is the hardest for the typical outsider to accept: the notion that machines can now devise their own logic and theories, based only upon the presentation of raw data patterns from the environment. Yet it is key to the development of totally autonomous synthetic intelligence. Otherwise, hordes of computer programmers must type in myriad “if-then” rules, in a process that can hardly be called autonomous!
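As an illustration of weights coming to embody an implicit rule, the sketch below trains a single threshold unit on the input-output pairs of the logical AND relationship. The perceptron learning rule used here is my choice for the example (the text above names no particular training scheme); the rule "output 1 only when both inputs are 1" is never typed in anywhere, yet it emerges in the weights:

```python
import numpy as np

# The implicit rule (logical AND) is concealed in these pattern pairs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1], dtype=float)

w = np.zeros(2)   # connection strengths, initially blank
b = 0.0           # threshold term (bias)
lr = 0.1          # learning rate

def step(u):
    # an on-off switch: silent (0) or active (1)
    return 1.0 if u > 0 else 0.0

# Perceptron rule: nudge the weights whenever the output is wrong.
for _ in range(25):
    for xi, target in zip(X, y):
        err = target - step(w @ xi + b)
        w += lr * err * xi
        b += lr * err

preds = [step(w @ xi + b) for xi in X]   # reproduces the AND mapping
```

No "if-then" rule for AND was ever programmed; the connections absorbed it from the raw pattern pairs alone.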
(6) (new development) If, instead of using simple on-off switches as the individual processing units, we use neural network modules that incorporate various analogy bases, then the neural network devises something closer to what we would call a theory. During training, only the connections to the more important analogy networks strengthen in producing an accurate input-output mapping; those irrelevant or inapplicable to the mapping erode away. In the end, the neural network transforms into what looks like a semantic network, connecting the most relevant analogies into a larger picture. (This is a major departure from the conventional definition of a neural network, and requires a few patented IEI technologies to build.)
(7) (new perspective) The similarity between the synthetic and biological neuron is two-fold: (1) the signals arriving at any given neuron are summed, or ‘integrated’, within the neuron, and (2) if the integrated input signal exceeds some threshold, the neuron switches from a silent to an active state, outputting its own signal. My claim is that these two overlapping aspects of synthetic and biological neural networks are sufficient to create artificial cognition. After all, we have distilled the essence of flight from the biological bird, the Bernoulli effect, without having to attach feathers to aircraft or requiring them to drop messy payloads from the sky. Further, to those who offer the criticism that synthetic neurons just aren’t complicated enough to capture human intelligence, I point out that intelligence is not stored in neurons. It is instead absorbed within the connections between neurons!
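These two properties take only a few lines to capture. The sketch below is a bare sum-and-threshold neuron in the McCulloch-Pitts spirit; the particular weights and threshold are arbitrary illustrative values:

```python
import numpy as np

def neuron(inputs, weights, threshold):
    # (1) the arriving signals are summed, or 'integrated', in the neuron
    integrated = np.dot(inputs, weights)
    # (2) if the integrated signal exceeds the threshold, the neuron
    #     switches from a silent (0) to an active (1) state
    return 1 if integrated > threshold else 0

# weak stimulation leaves the neuron silent: 0.2 + 0.1 = 0.3 < 0.5
quiet = neuron([0.2, 0.1], [1.0, 1.0], threshold=0.5)
# stronger stimulation makes it fire: 0.6 + 0.3 = 0.9 > 0.5
fired = neuron([0.6, 0.3], [1.0, 1.0], threshold=0.5)
```

Everything the unit "knows" sits in the weights and threshold, not in any internal complexity of the neuron itself, which is the point of the claim above.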
(8) (new development) My most heretical redefinition of a neural network has to do with a radical departure from notion 1, the ANN viewed as an input-output device. The so-called Creativity Machine Paradigm, which I describe below, works without the presentation of inputs to an ANN. Instead, spontaneous and unintelligent fluctuations internal to such a network produce very intelligent outputs. In effect, there goes the old rule of garbage-in, garbage-out; instead, garbage now yields gold.
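A heavily simplified sketch of the bare mechanism, not the patented paradigm itself: no pattern is presented at the input, yet random fluctuations added to the neurons' resting levels still drive the network to emit a stream of varied outputs. The weights, biases, and noise scale below are all arbitrary illustrative values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative "learned" connection strengths and resting levels.
W = np.array([[0.9, -0.4],
              [0.3,  0.7]])
b = np.array([0.2, -0.1])

def output(internal_noise):
    # No input pattern is applied; the only drive is the spontaneous
    # fluctuation superimposed on the neurons' resting activations.
    return np.tanh(W @ (b + internal_noise))

# Unintelligent internal noise nonetheless yields a varied output stream.
samples = [output(0.3 * rng.standard_normal(2)) for _ in range(5)]
```

Whether such noise-driven outputs count as "intelligent" is the paradigm's claim, not the sketch's; the sketch only shows that output activity needs no external input once fluctuations act inside the network.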