... Yep, anyone who thinks that a complex of currents racing around on a doped-silicon substrate might come alive, or that they will endow their motherboard, chassis and power supply with a sense of self, is definitely revisiting some of the most popular themes of sci-fi, ... My point was that AI is just a program. There's nothing magic about it; physically it's no different than the program running in a garage door opener. It's the "I" in "AI" which suggests the kind of ideas being advanced here. Similarly the... There really is nothing inherent about the band gap of a semiconductor that leads to a natural conclusion that it parallels a synaptic junction. And even if someone here did have an original thought of some kind regarding this ... they would amount to nothing more than sheer flights of fantasy.
I agree, but you are focused on the technology / implementation, not pure AI.
If, as I think is the case, human consciousness and intelligence are basically a program (perhaps more analog than digital, though little is known about its nature) that happens to be implemented by the neurons of the brain, then there is no known reason why it could not run on some other implementation, even your "doped-silicon substrate." I.e., you are confusing the properties of the implementation with the AI itself.
I believe there are two aspects: one is my body (brain included) and one is "me," which only exists in the dream or wake state, not during deep sleep (or when dead, in a coma, etc.). (I put quotes around words when I want to make clear they refer to my psychological self, not my body.) I.e., "I" don't exist when in deep sleep, only when the parietal activity I call the Real Time Simulation, RTS, is active or "running," to use the computer word. Read on for more about the RTS and the dozens of known facts from several different fields, even history, that strongly support it and cannot be explained by the more widely accepted POV of cognitive scientists about how we perceive anything; it is much more than "sheer flights of fantasy."
I believe that human brains (and those of other higher animals) do run an adequate, but not precise, simulation of the physical world that their bodies' sensors follow. For example, the retinal cells sense, or can follow, the portion of the EM spectrum we call visible light, but not the portion we call microwaves, etc. I.e., my model of perception is quite different from that accepted by mainstream cognitive scientists. They think perception "emerges" after many stages of neural transforms of the input sensory signals. That is nothing more than hand-waving nonsense with zero explanatory power, as it says nothing about the neural mechanisms creating the perceptions that emerge. It also strongly conflicts with well-established neurological facts.
For example, the information in the sensory input signals is deconstructed into different characteristics that are further processed by other neurons in widely separated parts of the brain and never again reassembled in any one part of the brain, yet we perceive a unified world. To give a specific example, consider this very simple visual stimulation field:
A yellow tennis ball rolling towards a red cube of about the same size on a large green table (so large that no other light reaches the retina). After the continuous visual field has been parsed into these three objects,* mainly in the visual area called V1, the three colors are sent to V4 and their motion (speed and direction) to V5. In V1 and V2 their shapes are determined. So the three characteristics (shape, color & motion) are separately decomposed characteristics that never come together again in the brain; yet we correctly perceive them as they are in the physical world, not in any of the seven other ways these three could be perceived. I.e., not as a stationary red table, a rolling (or sliding) yellow cube and a stationary green tennis ball.
My parietal-tissue Real Time Simulation explains this unified perception AND why the visual-field objects were decomposed into their "characteristics" (more than eight are known, things like surface texture, etc., all processed separately in different neural tissue, never to come back together in any common brain tissue). It is supported by dozens of known facts that the accepted "perception emerges" view cannot explain, or even contradicts. One quick example: how does a visual experience / perception "emerge" in dreams, with eyes closed in a dark room?
For more, but still very partial, evidence and some brief discussion, read this post:
http://www.sciforums.com/showthread...Nonexistence&p=2899438&viewfull=1#post2899438
There you will see a link to about eight pages (if printed) of discussion and much more supporting evidence from many different fields of knowledge, but the focus of that link is to show how the RTS makes it possible for Genuine Free Will, GFW, to NOT be in conflict with the physical laws that control the firing of every nerve in your body, especially those in the brain. (Not a proof that GFW exists, only that it could. I tend to think GFW is the most universal of all illusions.)
* In the published paper the longer link on GFW is derived from, I also explained how the parsing in V1 is done, using known properties of how neurons in V1 interact with nearby neurons. I.e., they reinforce (have mutual stimulation with) like-oriented "line detectors" (which Hubel & Wiesel discovered, receiving the 1981 Nobel Prize for their work)** but have a mutually inhibitory influence on the nearby line detectors with the orthogonal orientation. Several of the Gestalt laws also follow from known properties of neurons, not hand waving.
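The excitation / inhibition scheme just described can be sketched numerically. This is only an illustrative toy, not the model from the published paper: the orientation spacing, weight values and update rule are all my own assumptions. Units tuned to similar orientations excite each other, units tuned to near-orthogonal orientations inhibit each other, and one relaxation step sharpens a broad response around the true edge orientation:

```python
import numpy as np

# Hypothetical units tuned to edge orientations 0..170 deg in 10-deg steps
thetas = np.arange(0, 180, 10)

def lateral_weight(t1, t2):
    # Angular difference on the 180-degree orientation circle
    d = min(abs(t1 - t2), 180 - abs(t1 - t2))
    if d <= 20:
        return +0.5   # mutual stimulation between like-oriented detectors
    if d >= 70:
        return -0.5   # mutual inhibition near the orthogonal orientation
    return 0.0        # no interaction at intermediate angles

W = np.array([[lateral_weight(a, b) for b in thetas] for a in thetas])

# Broad, noisy-looking initial response to an edge at 40 degrees
resp = np.exp(-((thetas - 40) ** 2) / (2 * 30 ** 2))

# One relaxation step: excitation boosts the true peak, inhibition
# suppresses the orthogonal units, sharpening the population response
resp = np.clip(resp + 0.1 * W @ resp, 0, None)
print(thetas[np.argmax(resp)])  # peak remains at the true orientation, 40
```

The point of the toy is only that simple, local, known neuron properties (mutual excitation and inhibition) suffice to pick out a consistent orientation without any "emergence" hand waving.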
** A footnote at the link explains why H&W did not correctly understand their observations. The cells they took data from are not "line detectors" but part of a quasi-Fourier-like transform (a Gabor-function transform, actually). It seems that the visual system, perhaps the whole brain, works in a transform space, not the original space, after the "retinotopic" work is done. This may be why so little of the brain's processing is understood: the basic assumption about which space it is done in may be wrong. One advantage of working in the transform space is that the terms of the transform do not change as the object's location changes. That makes object identification / recognition independent of where the object is.
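The shift-invariance claim in the last two sentences can be checked directly for the Fourier case. A minimal sketch in Python/NumPy (strictly, it is the magnitudes of the Fourier coefficients that are unchanged when the object moves; the phases carry the position information):

```python
import numpy as np

# 1-D "image": an object (a bright bump) at two different positions
signal_a = np.zeros(64)
signal_a[10:15] = 1.0
signal_b = np.roll(signal_a, 20)  # the same object, shifted by 20 samples

# The raw signals differ sample by sample, but the magnitudes of their
# Fourier coefficients are identical: a shift changes only the phase.
mag_a = np.abs(np.fft.fft(signal_a))
mag_b = np.abs(np.fft.fft(signal_b))
print(np.allclose(mag_a, mag_b))  # True
```

So a recognizer that compares magnitude spectra would match the object regardless of where it sits in the field, which is the advantage claimed above.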