Here are some thoughts I had, any feedback would be appreciated.
Machine Intelligence
<hr>
Alan Turing postulated that if a machine could hold a conversation with a human without giving itself away as artificial, then it could be said to think and to be intelligent. This may not be a good way to characterise intelligence, as a conventional machine does not really think or know anything about what it does, even if the symbols it processes have meaning to us.
We struggle to define intelligence because so many things influence what we think of as intelligent. It’s natural that we model intelligence on our own consciousness, since we take it for granted that we certainly are intelligent. The main fallacy here lies in the comparison of machine versus human brain architecture: although both can be reduced to simple deterministic pathways, the macro-structure is entirely different. A machine CPU’s only ‘objective’ is to sort and combine data. Like the brain of a nematode worm, all of its pathways are hard-wired ‘knee-jerk’ reactions that can be predicted quite easily. It receives binary signals for simple arithmetic and logical tasks, whereas a human brain is an enormous neural network which dynamically interacts with its environment to seek out information and associate meaning. Strictly speaking, neurons can also be regarded as deterministic(1), and computers can be made to emulate neural networks, but this argument is concerned only with the conventional ‘top-down’ approach to A.I. and the current form of computer design – which differs greatly from ours.
To understand what we would like intelligent computers to be, we should look at the human system. So what is the objective of the human neural-computer? What state does it try to approach or to maintain? Biologically, the reason we have brains is clearly to increase our success at reproduction, but there is almost certainly not one line of code, or a fundamental loop of neurons, that states, ‘Do whatever it takes to survive, reproduce and look after my offspring’(2). Although that would be rather ideal from an evolutionary standpoint, it is computationally unlikely and would require a very large knowledge base to interpret the semantics of such an instruction and effectively execute it. Yet there is consistency to the decisions made by humans; the states of the inputs do make a difference to the output. Therefore there must be certain preferences, templates and the like deeply embedded in the network. For example, a beautiful woman does not actually emit beauty: it is the brain of the receiver that finds her image pleasing. This suggests the presence of a template, or rulebook, against which every captured image is referenced. The weighted combination of many thousands of such ‘templates’ or ‘influences’ (motivations, henceforth) could contribute to a total sum used to make a decision.
The idea of a single total sum, or single prime directive might seem contrary to the fact that a human consciousness is influenced by a great deal of motivations. But what happens when two of these conflict? There must be a rule to determine which has greater weighting and as such everything will eventually boil down to a very basic single prime directive.
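This weighting scheme can be sketched in a few lines of code. The sketch below is purely illustrative (the motivation names, weights and actions are my own assumptions, not anything from the essay): each candidate action is scored by a weighted sum of motivation signals, and a conflict between two motivations is resolved simply by whichever total comes out larger, i.e. the ‘single prime directive’ is just the maximisation of that sum.

```python
# Minimal sketch of the 'weighted motivations' model: each candidate
# action is scored by summing its motivation signals, each multiplied
# by a fixed weight; the action with the highest total wins.

def decide(actions, weights):
    """Return the action whose weighted motivation sum is largest."""
    def total(motivations):
        return sum(weights[name] * value for name, value in motivations.items())
    return max(actions, key=lambda a: total(actions[a]))

# Two conflicting motivations: hunger pushes toward eating, fatigue toward resting.
weights = {"hunger": 0.7, "fatigue": 0.5}
actions = {
    "eat":  {"hunger": 1.0, "fatigue": 0.2},   # total: 0.7 + 0.10 = 0.80
    "rest": {"hunger": 0.1, "fatigue": 1.0},   # total: 0.07 + 0.50 = 0.57
}
print(decide(actions, weights))  # → eat
```

The point of the sketch is that the conflict never needs an explicit tie-breaking rule: once everything feeds into one scalar, the comparison itself is the rule.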
This model has obvious issues to be addressed, such as the implication that a person cannot make a decision that they believe will put them in a lesser state of some fundamental sum. However, it is quite obviously reminiscent of a neural network, and as such can most likely emulate the truth even if the actual mechanics of the brain happen to be entirely different.
To give a machine such a design would allow us to program complex motivations in, which might appear to give it the characteristic of personality. That is, given that the computer has available to it a detailed environment model and built-in likes and dislikes of various stimuli, it could learn to manipulate the environment to satisfy the fundamental motivations. The loose use of ‘like/dislike’ is the pivotal point – all that is meant by this is that the system does whatever it remembers will bring a variable or synaptic node to a particular value.
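That last sentence can also be made concrete. Below is a toy sketch (all names and numbers are my own assumptions): the agent tries actions at random, remembers how much each one moved an internal variable toward its ‘liked’ target value, and afterwards prefers the action with the best remembered effect. Nothing here ‘likes’ anything in a conscious sense; the preference is just accumulated memory of what closed the gap.

```python
import random

# Toy sketch of "doing whatever is remembered to bring a variable to a
# particular value": credit each action by how much it would reduce the
# distance between the internal variable and the target, then prefer
# the action with the highest accumulated credit.

def learn_preference(effects, target, start, trials=100, seed=0):
    rng = random.Random(seed)
    memory = {a: 0.0 for a in effects}          # remembered usefulness of each action
    for _ in range(trials):
        action = rng.choice(list(effects))
        before = abs(start - target)
        after = abs(start + effects[action] - target)
        memory[action] += before - after        # credit actions that close the gap
    return max(memory, key=memory.get)

# Internal variable starts at 0; the 'liked' value is 10.
effects = {"approach": +1, "avoid": -1}
print(learn_preference(effects, target=10, start=0))  # → approach
```

This is trial-and-error in its crudest form, but it captures the claim: ‘like’ reduces to a stored record of which behaviours moved a variable toward a set point.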
Ultimately, the greater the capacity to learn and make associations, the closer this machine will be to intelligence as is meant with reference to humans.
<hr>
1. Neurons may not be completely deterministic, even if they are close enough, as they can be influenced by noise and by chemicals present in the blood which are not considered part of the logical circuitry. It should be noted that although there may be randomness at the quantum level, such fluctuations will be negligible at the cellular level and are unlikely to influence whether a neuron fires or not.
2. Richard Dawkins, The Selfish Gene.