Will machines become conscious someday?

It may be partly due to the complexity of the human brain, which is not likely to be matched in silicon, especially in a 2D layout on substrates.

I forget the better comparisons of how complex the human brain is, but I think one is that there are many more synaptic connections than the largest telescope can see stars! Building that on a 2D substrate with the best current smallest-scale nano-transistor technology would probably require more than the Earth's total surface! - just my guess. You can check it. Thus I think that if a man wants to make such a computer, he should find a good-looking woman and seduce her - that may be the only way for at least 100 years.

WOW, just WOW! Do you perhaps know of a website which describes the complexity of the human brain? I can see you understand this a lot.
 
You may want to brush up on nanotechnology in computers. They are building 3D platforms. http://www.pbs.org/wgbh/nova/tech/making-stuff.html#making-stuff-smaller
If you can provide a link that shows three or more 2D layers stacked on top of each other, I'll thank you. Your link does not even mention going to 3D from the standard 2D circuits on a substrate.
If one wants to make a copy of the human brain using multiple 2D layers that can be made with acceptable yield, I think the 2D area required would still be larger than the USA.

There is a yield problem* for 3D silicon brains, and man does not know how to solve it. I.e., hundreds of brain cells die every minute; if hundreds of silicon cells die, the whole von Neumann computer is trash. More than further miniaturization, we need an understanding of how the brain adds new cells and tolerates them dying by the million each day, to even think about making a copy of the human brain in silicon.

* 2D flat displays that replaced the CRT were invented in the USA, BUT we never got beyond the 2nd generation, as the production yield was too low to be commercial. They solved the yield problem in Japan with really high quality control, and now they are on generation seven! We lost technological leadership to Asia more than a decade ago and have now lost most of the scientific research applied to applications too. We still may lead in some areas of pure science, but we are now losing that too, except perhaps not yet in some areas of "theory development."

For example, the LHC is in Europe and has just found the Higgs particle. To please the Christian right wing, GWB gave away leadership in much of stem cell research. China is probably the world leader in vaccine development - they made the most effective vaccine for swine flu in LESS THAN ONE MONTH!
Bill Gates's global health foundation buys its vaccines from China as they are cheaper and better. Etc.
 
If you can provide a link that shows three or more 2D layers stacked on top of each other, I'll thank you. ... I think the 2D area required would still be larger than the USA.

I'll repost the link here:
http://www.pbs.org/wgbh/nova/tech/making-stuff.html#making-stuff-smaller

Watch the program @ 12:00 and 14:45. It clearly shows that instead of trying to shrink ever-smaller transistors into a 2D plane, we can now stack transistors vertically as well at atomic scales, creating a physical 3D matrix of transistors, transmitters, and receptors. These silicon nanowires are grown by the millions on tiny slivers of silicon (see the electron-microscope pictures of the nanowires at 30,000X). When stacked, it will increase computing power by many factors. Imagine a computer with billions of transistors functioning at SOL in a 3D configuration (length, width, height). It will allow us to build quantum computers, and we will begin to approach the evolved abilities of the human brain by thinking holographically.

As we can look deeper and deeper into atomic structures, we won't need to guess anymore; all we need to do is copy what took evolution billions of years to accomplish. When we master nanotechnology, we'll just build a biochemical brain at any scale that will produce an equivalency to the human brain. Then we let that brain redesign and refine itself and eventually shrink it to human-brain size.

Once we can teach a single AI to request (and later initiate) duplicating and adapting functional parts of itself for additional thought-processing functions (perhaps a mirror neural network and memory), it will evolve exponentially, much as human knowledge and understanding of technology and physics have expanded exponentially in the past few hundred years.
 
I'll repost the link here: http://www.pbs.org/wgbh/nova/tech/making-stuff.html#making-stuff-smaller
Watch the program @ 12:00 and 14:45 ... When stacked, it will increase computing power by many factors. ...
Thanks, but the new link does not work for me, so I went back to the first one given and opened the "transcript" to see what you were speaking of, and found:

"... FRANCES ROSS: That's right. They're called nanowires, and the real thing is about a million times smaller than this. ...
They're hard to see. But this is not a nanowire. This is a silicon sliver Francis uses as a surface to grow them.
FRANCES ROSS: We get tens of millions of wires on each of these specimens. ...
DAVID POGUE: She carefully loads the wafer into a molybdenum clip and slides it into a custom-built oven, where she'll bake it at 1,100 degrees Fahrenheit. ...
DAVID POGUE: Oh, my gosh. So those little spires...?
FRANCES ROSS: Those are the nanowires.
DAVID POGUE: So, you bake those up?...
FRANCES ROSS: This is 30,000 times magnified. ...
FRANCES ROSS: That's right. So here's the column of silicon that's the nanowire, and here's the gold droplet, on the end, that actually makes it grow.
DAVID POGUE: It's weird. It looks like matchsticks or weird mushrooms.
FRANCES ROSS: They do...mushrooms, that's right. They look to me like mushrooms ..."

These nanowires are not transistors. They do NOTHING! Worse, this is more than 50-year-old technology, at least when tin is baked - I knew about "tin whiskers" 40 years ago, when they were already useless "old hat." I bet wiki will tell more about "tin whiskers" etc.
 
Thanks, but the new link does not work for me, so I went back to the first one given and opened the "transcript" ... These nanowires are not transistors. They do NOTHING! ...

One and done: Single-atom transistor is end of Moore's Law; may be beginning of quantum computing:
http://www.purdue.edu/newsroom/research/2012/120219KlimeckAtom.html

The single-atom transistor could lead the way to building a quantum computer that works by controlling the electrons and thereby the quantum information, or qubits. Some scientists, however, have doubts that such a device can ever be built.

"Whilst this result is a major milestone in scalable silicon quantum computing, it does not answer the question of whether quantum computing is possible or not," Simmons says. "The answer to this lies in whether quantum coherence can be controlled over large numbers of qubits. The technique we have developed is potentially scalable, using the same materials as the silicon industry, but more time is needed to realize this goal."
 
Thanks, but the new link does not work for me, so I went back to the first one given and opened the "transcript" ... These nanowires are not transistors. They do NOTHING! ...

OK, semiconductors, or switches if you like.

If these silicon nanowires don't do anything, why is this program shot at IBM's R&D facility? What you also missed is the entire explanation of vertical stacking on computer chips - clearly something to do with computing, no?
Also, at Intel, the program showed an existing computer chip (2D) with nearly a billion silicon semiconductors on a 1" chip. So if we can squeeze a billion semiconductors onto a single-layer 1" chip, each time we add a vertical layer with a thickness of a few atoms, we double the processing power in the same 1" chip. Stack 1,000 layers and we now have a trillion semiconductors in 1" cubed. Now place 100 stacks side by side on a 10"-square, 1"-high platform and we reach a significant 100 trillion semiconductors in a space smaller than the human brain.
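A quick sanity check of that stacking arithmetic (a toy calculation; the per-layer count and layer numbers are this post's assumptions, not any real process):

```python
# Back-of-envelope check of the stacking arithmetic above.
per_layer = 1_000_000_000        # ~1 billion devices on a single-layer 1" chip
layers = 1_000                   # hypothetical vertical layers
stacks = 100                     # 1" cubes placed side by side

per_cube = per_layer * layers    # 1e12 -- a trillion devices per 1" cube
total = per_cube * stacks        # 1e14 -- 100 trillion devices overall

print(f"{per_cube:.0e} per cube, {total:.0e} in the full platform")
# 1e+12 per cube, 1e+14 in the full platform -- on the order of the
# ~10^14 synapses usually quoted for the human brain.
```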

But as the program indicates, silicon is actually already surpassed by a new substance, "graphene":

wiki
Graphene is a substance composed of pure carbon, with atoms arranged in a regular hexagonal pattern similar to graphite, but in a one-atom thick sheet. It is very light, with a 1-square-meter sheet weighing only 0.77 milligrams. http://en.wikipedia.org/wiki/Graphene

You can make your own nano semiconductors from graphene merely by applying sticky tape to a layer of graphite over and over until an actual single layer of graphene atoms emerges as a matrix. Pretty neat if you ask me.
 
I'll repost the link here: http://www.pbs.org/wgbh/nova/tech/making-stuff.html#making-stuff-smaller
Watch the program @ 12:00 and 14:45 ...

This link does not work anymore.
 

OK, semiconductors, or switches if you like. ... But as the program indicates, silicon is actually already surpassed by a new substance, "graphene" ... Pretty neat if you ask me.

Actually, I read that molybdenite is a better material than graphene.
http://phys.org/news/2012-02-molybdenite-logic-circuits-graphene.html
http://techie-buzz.com/science/worlds-first-molybdenite-chip.html
http://spectrum.ieee.org/semiconductors/nanotechnology/graphenes-new-rival
 
It may be partly due to the complexity of the human brain, which is not likely to be matched in silicon, especially in a 2D layout on substrates.

We have the technology to make 3D layouts.

I forget the better comparisons of how complex the human brain is, but I think one is that there are many more synaptic connections than the largest telescope can see stars! Building that on a 2D substrate ... would probably require more than the Earth's total surface! ...

A human brain has fewer than 100 billion neurons and fewer than 100 trillion synapses... There are 70 sextillion stars in the known universe - roughly a billion times more stars than synapses in the human brain!
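A quick check of those numbers (both are rough, commonly quoted estimates, so only the orders of magnitude matter):

```python
synapses = 1e14      # ~100 trillion synapses (upper common estimate)
neurons = 1e11       # ~100 billion neurons
stars = 7e22         # ~70 sextillion stars in the observable universe

print(f"stars per synapse: {stars / synapses:.0e}")   # 7e+08
print(f"stars per neuron:  {stars / neurons:.0e}")    # 7e+11
# Stars outnumber synapses by roughly a billion to one, so the
# "more synapses than stars" comparison runs the wrong way.
```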

[Image: Intel spin-valve memristor crossbar switch]


Now let's assume a silicon neuron with memristor synapses layered above. The memristors are layered (just etched wires) - 10, 20, 50, 100 layers, it does not matter - and a neuromorphic design can handle defects, so there is no limit on size versus quality. Let's assume 2D silicon neurons 2x2 µm wide (the present size achieved with laboratory test-bed neuromorphic chips); that is 7.8 billion neurons on a 200 mm silicon wafer, 17 billion on a 300 mm wafer. 10x10x10 memristor blocks above them would be achievable with even 90 nm lithography (we are down to 22 nm today). 10-20 of these wafers would outmatch the human brain in raw neuron-synapse densities, and considering their energy usage is a hundredth that of normal Turing digital computers their size, it may be possible to stack them die on die with thin liquid-cooling veins in between. Twenty such 100 mm radius wafers, stacked and packaged, would be only a few centimeters tall - about 800 ml in volume, less space than a human brain. Other advantages: it would not need oxygen or food, only electricity and liquid cooling; it would process between 1,000 and 1,000,000 times faster than a human brain because of the speed of electricity versus our ionic-chemical transmission system; neuron connections could be programmed and reprogrammed on the fly and read back out; all parts are replaceable and repairable; and most of all it could be upscaled: it's not limited to the size of a skull or body, etc., etc.
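The wafer figures above are easy to re-derive (a sketch using this post's own assumptions: 2x2 µm neurons, standard wafer diameters, and ~1.25 mm per die plus cooling vein for the volume estimate):

```python
import math

# Neurons per wafer, assuming the post's 2x2 um silicon "neuron" footprint.
neuron_area_um2 = 2.0 * 2.0
for wafer_mm in (200, 300):                         # standard wafer diameters
    wafer_area_um2 = math.pi * (wafer_mm / 2 * 1e3) ** 2
    print(f"{wafer_mm} mm wafer: {wafer_area_um2 / neuron_area_um2:.1e} neurons")
# 200 mm wafer: 7.9e+09 neurons   (the ~7.8 billion quoted)
# 300 mm wafer: 1.8e+10 neurons   (the ~17 billion quoted)

# Volume of 20 stacked 100 mm radius dies with thin cooling veins,
# assuming ~1.25 mm per die + vein, i.e. a 2.5 cm tall cylinder.
volume_ml = math.pi * 10.0 ** 2 * 2.5               # cm^3 == ml
print(f"stack volume: ~{volume_ml:.0f} ml")         # ~785 ml, under the ~1,200 ml
                                                    # of a typical human brain
```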

All that with near-term technologies: within the next 10-20 years! Let's assume there is some physical part of the human brain, unknown to us, that is responsible for the majority of its processing; certainly that would raise the mark in raw analog processing we would need to match, pushing the goal down the line by decades, but it would not make it impossible. The only way it could be impossible for us to replicate all the combined functions of the human brain is if there were a supernatural part of the brain - something beyond the physical, which could never be understood and replicated, yet is vital to its function. That is the only way.

Alternatively, even if we do create strong artificial intelligence which appears to be conscious and as smart as or smarter than humans, some people will assume it is not conscious - that it just appears to be, as a matter of trickery. They will assume that there is something special about people even if this specialness can't be measured or even described. Such people will likely cling to myths like your "more synapses than stars in the universe" to validate their beliefs. Machines will always be machines in the eyes of some people, and humans something else, something better... some people just need to feel special.
 
...
[Image: Intel spin-valve memristor crossbar switch]

Now let's assume a silicon neuron with memristor synapses layered above. The memristors are layered (just etched wires) - 10, 20, 50, 100 layers, it does not matter - and a neuromorphic design can handle defects, ...
I'm not well versed in this technology, but this seems to be dense memory, and I can see that it could be stacked to be 3D. Each of the memristors presumably can be in the conducting or non-conducting state, and in 3D the stack would have a third set of parallel wires (vertical and orthogonal to the two orthogonal parallel sets your figure shows). I don't even have a problem with some of the memristors being defective: like bad sectors of a disk in a disk-drive memory, the system could learn not to try to store (or read) data in the defective ones.

I assume reading is a brief, lower voltage pulse applied to one of the wires in each of the three orthogonal sets (if the full* possible current flows, then that particular "addressed" memristor is in the "on" state), and writing a bit might be by a stronger voltage pulse, or one of opposite polarity, etc. But my computer is much more than its memory disk.

Also, this serial read/write approach seems to be valid only for a serial-processing von Neumann machine - the brain is mainly a parallel-processing machine, though the tiny part we associate with consciousness is perhaps serial ("stream of consciousness," etc.).

I.e., it is increasingly confirmed that decisions and choices are made in the parallel-processing brain, and that consciousness only later learns the results of that parallel processing (being of course unaware of what was really done), assumes it decided / made the choice, etc.

I would not be surprised if it turns out that artificial memories can store individual bits more densely than brain memory can, but so little is known about how the brain stores information that it is impossible to guess whether that is true or false. It is not memory, but the processors and their programming, if required, that make me still think we are more than 100 years from making an artificial brain equal to a human brain.

In addition to the brain being mainly a parallel processor, there is the connection problem: where and how would these three orthogonal sets of "zillions" of wires connect to the processors?
----------------
* I am not sure this would even work in 3D, as there is less-than-"full" current flow in millions of other memristors that had voltage pulses applied to two of the leads connecting to them - that total is likely to be thousands of times more current than flows in the one now being read (addressed). I.e., when reading the on/off state of the memristor at cube coordinate (a, b, c), all the memristors with coordinates (a, b, x), where "x" ranges over the other layers (not layer c), also have current flow (and likewise for the (a, x, c) and (x, b, c) points in the memristor cube).

Summary of this problem: only two leads to any memristor need to be activated for current to flow. How can you require that three leads be activated to read the on/off state of the one, and only the one, memristor at cube point (a, b, c)?

Perhaps the answer is: apply brief pulses in sequence to (x, y, 0) - i.e., zero voltage on one of the three orthogonal sets of wires while all possible x & y pairs are energized in turn, looking for current flow. Then do the same lengthy sequence again for the (z, 0, y) points, and again for the (0, x, y) points within the memory cube. When this huge delay is stacked on top of the fact that the data extracted from memory will be fed to a serial von Neumann processor, I expect the human brain will be not only much more general and self-programming, but faster too, for most problems. (Not to mention that it is cheaply produced by unskilled labor.)
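To put a rough number on the half-select problem in the footnote (a toy count for an idealized N x N x N crossbar with one cell per wire intersection; N is arbitrary):

```python
# Toy count of "half-selected" cells when reading cell (a, b, c) in an
# idealized N x N x N memristor cube: every cell that shares two of the
# three addressed wires also sees a voltage across two of its leads.
N = 1000                        # hypothetical wire count per axis

half_selected = 3 * (N - 1)     # cells at (a,b,x), (a,x,c), (x,b,c), x != addressed
print(half_selected)            # 2997 -- so the summed sneak current can easily be
                                # thousands of times the signal from the one addressed
                                # cell, which is why practical crossbars put a selector
                                # device (a diode or transistor) in series with each cell.
```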
 
I'm not well versed in this technology, but this seems to be dense memory, and I can see that it could be stacked to be 3D. ... I expect the human brain will be not only much more general and self-programming, but faster too, for most problems. (Not to mention that it is cheaply produced by unskilled labor.)

I'm not sure if you saw my post in the other thread, but I'll copy my post from there:
This was the response to IBM's human brain simulation announcement:
http://www.engadget.com/2012/11/20/ibm-supercomputer-simulates-530-billion-neurons/
http://www.scientificamerican.com/a...ulates-4-percent-human-brain-all-of-cat-brain
http://www.kurzweilai.net/ibm-simul...lion-synapses-on-worlds-fastest-supercomputer

Here is the entire post:
Russ Altman began his lecture in the Unsolved Mysteries in Medical Research series with a tough question and a snappy answer. "Why can't computers simulate a living cell? That's easy -- because it's too hard. Thank you."

When the chuckles died down, Altman, MD, PhD, associate professor of medical informatics at Stanford, began the real work of explaining why computers can't yet replace living organisms in medical research.

During his April 17 lecture, Altman broke down the question into steps, each with its own problems and potential solutions. But first he issued a warning.

"Most of us are not trained to do this," Altman said of the challenge of reassembling millions of bits of experimental data into a cohesive model system that could, for instance, predict the effects of untested medication on humans. "We're taught to be reductionists, but usually the more simple a model is, the more likely it is to be wrong."

Altman said the first step in the process is identifying the individual components -- such as proteins and pools of molecules -- that affect cellular functions. Then the interactions between the components and pools must be identified and the results represented in a map format. Finally, it's necessary to translate the relationships represented by the map into equations, which can then be used to analyze input data -- such as the presence of a new drug -- and predict cellular responses.

The Human Genome Project, a national effort to identify and characterize all human genetic material, has helped to identify many of the players. But Altman emphasized that alternative splicing and multifunctional proteins could inflate the effective number of components beyond the 35,000 genes that have been identified. He also pointed out that differences in the three-dimensional distribution of molecules within a cell can affect their function.

Identifying interactions between the components is extremely complicated, Altman said. Current methods of calculating interactions between isolated components, such as the Michaelis-Menton equation used in enzyme kinetics, are not accurate when applied to living systems, he said. And it's difficult to precisely quantify interactions between feedback pathways.

"As soon as you draw both a plus and a minus on the same page of a model, you've bought yourself a quantitative problem," Altman said. These quantitative tussles can hamstring any effort to generate accurate equations.

Finally, it's not clear whether the computational power exists to crunch the numbers of the billions of interactions that occur in a cell, and whether enough experimental data exists to support this goal, Altman said.

"We may have to give up our desire to have a computer system that permits 'one-stop shopping' and -- at least for the short term -- scale back our expectations," Altman said.

When researchers associated with IBM announced that they had created a computer simulation that could be likened to a cat's brain, they hadn't talked beforehand to Ben Barres. They would have profited enormously from the conversation if they had.

In a widely covered announcement, IBM said that its researchers had simulated a brain with 1 billion neurons and 10 trillion synapses, which it noted was about the complexity of a cat's brain, and last year (2012) one with 530 billion neurons and 100 trillion synapses.
That led many writers to conclude that IBM computers could, as one put it, "simulate the thinking power" of a cat.
Getting a computer to work like any sort of brain, even little Fluffy's, would be an epic accomplishment. What IBM did, unfortunately, didn't even come close, as was pointed out a day later by other researchers, who published a letter scolding the company for what they described as a cynical PR stunt.

Any potential over-claiming aside, IBM's brain research follows the same pattern of similar explorations at many other centers. The logic of the approach goes something like this: We know the brain is composed of a network of cells called neurons, which pass messages to each other through connections known as synapses. If we build a model of those neurons and synapses in a computer, we will have a working double of a brain.

Which is where Ben Barres can shed some light. Barres is a neurobiologist and a specialist in something called glial cells. These are brain cells that are nearly as populous as neurons, but which are usually overlooked by researchers because they are presumed to be of little use; a kind of packing material that fills up space in between the neurons, where all the action is.
Barres, though, has made remarkable discoveries about glials. For example, if you take them away, neurons basically stop functioning properly. How? Why? We have no idea.

He does his research in the context of possible treatments for Alzheimer's, but the implications for modeling the brain are obvious, since you can't model something if you don't know how it works.

"We don't even begin to understand how neural circuits work. In fact, we don't even know what we don't know," he says. "The brain is very far from being modeled."

The computer can be a tempting metaphor for the brain, because of the superficial similarities. A computer has transistors and logic gates and networks of nodes; the various parts of the brain can be described in similar terms.

Barres says, though, that engineers seem to have a diminished ability to understand biology in all its messy glory. Glial cells are one example: they occupy much of the brain, yet we know barely the first thing about what they really do.

Another example, he says, involves the little matter of blood. Blood flow through the brain--its amplitudes and vagaries--has an enormous impact on the functioning of brain cells. But Barres said it's one that researchers have barely even begun to think about, much less model in a computer.

There are scores of neuroscientists like Barres, with deep knowledge of their special parts of the brain. Most of them will tell you a similar story, about how amazing the brain really is and about the utterly shallow nature of our current understanding of it.

Remember them the next time you read a story claiming some brain-like accomplishment of a computer. The only really human thing these programs are doing is attracting attention to themselves.

Besides this answer to IBM, there is more here:
http://blogs.scientificamerican.com...-hard-for-science-simulating-the-human-brain/
IBM has created a simulated human brain with 530 billion neurons and 100 trillion synapses, and yet they do not know at all how the human brain actually works.
That's about it.
 
To Gravage: thanks for post 175 - I agree with its point completely. For years, until a little more than a decade ago, glial cells were thought to be easy to understand: they have high oil content and wrap around the axons (in several layers of turns), insulating one from the adjacent one and thus preventing "cross talk" from corrupting the signals each axon is transmitting. Also well understood was that there is a certain amount of capacitance per unit length of axon that must be charged as the depolarization wave traveling along it brings the internal voltage up from about -70 mV to slightly positive with the inrush of Na+ ions.

You can think of this capacitance as being between the conductive inside of the axon and the ionic fluid outside, which is also a conductive solution, with the thin wall of the axon as the dielectric of the capacitor. If you want less capacitance per unit length, so that less Na+ is needed to locally raise the interior ~70 mV, you need to make the separating dielectric thicker - that makes the capacitance per unit length less. That too was known as a function of the glial cell. These cells block the Na+ influx, but there are "nodes" between adjacent glial cells through which the Na+ flows. The net effect is that the interior charges up more rapidly, and thus the neural pulse traveling down the axon is faster.
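The geometry behind that argument is just the coaxial-capacitor formula, C per unit length = 2*pi*eps / ln(b/a); a rough sketch with illustrative dimensions (none of these numbers are measured values):

```python
import math

EPS0 = 8.854e-12        # vacuum permittivity, F/m
EPS_R = 5.0             # rough relative permittivity of a lipid membrane

def capacitance_per_m(inner_radius_m: float, dielectric_thickness_m: float) -> float:
    """C/L = 2*pi*eps / ln(b/a) for a coaxial (cylindrical) capacitor."""
    a = inner_radius_m
    b = a + dielectric_thickness_m
    return 2 * math.pi * EPS0 * EPS_R / math.log(b / a)

axon_radius = 0.5e-6                                  # 0.5 um axon, illustrative
bare = capacitance_per_m(axon_radius, 5e-9)           # ~5 nm bare membrane
myelinated = capacitance_per_m(axon_radius, 2e-6)     # ~2 um of glial wrapping

print(f"bare:       {bare * 1e9:.1f} nF/m")           # ~28 nF/m
print(f"myelinated: {myelinated * 1e9:.2f} nF/m")     # ~0.17 nF/m
print(f"reduction:  ~{bare / myelinated:.0f}x")       # ~160x less charge needed
                                                      # per unit length to depolarize
```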

A crude rule: if the axon is long, it will have glial cells around it to speed data flow; if short, it will not, as glial cells reduce the possible density of axon bundles ("white matter"). In the old view, glial cells were not doing much more than layers of black electrical tape do around wires in a dense bundle. Computer people are thinking they only need to consider the neurons and their synaptic connections. They tend to have a very ill-informed, old view of what other important things glial cells are now known to do (and we probably don't yet know the half of it!).

Now we know that glial cells make about a half dozen different hormones that the brain uses, perhaps to set levels of excitability in specific neuron types? I have not followed the field for 5 years, so perhaps what this set of many different hormone-like secretions from glial cells does is better known now - but certainly not by those silly and arrogant enough to think the brain can be modeled by a large set of switches and interconnections. The brain does not run on electron flows! It is a chemically driven machine with at least 20 different neurotransmitter chemicals,* perhaps a dozen hormones, and even some gases, like NO, that are dissolved in the fluids and seem to act as both neurotransmitters and hormones!

* Some are highly specific, acting only on one receptor-site type on cell surfaces; others, like GABA, are very general. GABA is a universal inhibitor of cell firing. I think it must be made in the brain: I had a cattle ranch, and when a cow was infested with parasites, we injected it with GABA. The neural activity in the parasites stopped and they died, but the cow's brain was protected by the blood-brain barrier.
 
Simulating a cell and producing complex thought are two very different things. Think of it this way: how many cellular interactions directly take part in cognitive thought? We know neurons and synapses communicate and change strengths rapidly, and this is where we reason most of the processing of cognitive thought takes place. Hormones and nitric oxide are too slow and too general to be vital for conscious thought. Likewise, different neurotransmitters can be simulated with coded synapses in neuromorphic circuits: an 8-bit synapse could be coded to represent 256 "neurotransmitter" types. Neuroglial cells at best assist in changing the strength of synapses and act as hormonal receivers; in general they don't take part in much data processing. Finally, how much data processing can the brain actually do? Is it "more than stars in the universe," or is it a number that actually exists (something like 36.8 petaflops of data)? If so, what is going to prevent us from matching and exceeding that number, even with Turing machines, some day?
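A minimal sketch of what coding the transmitter type into a synapse word could mean (the field layout and gain values here are invented purely for illustration):

```python
# Hypothetical packing of one synapse into a 16-bit word: the low 8 bits
# select one of 256 "transmitter" types (each mapped to a sign/gain rule),
# the high 8 bits hold the synaptic weight.
TRANSMITTER_GAIN = {0: +1.0, 1: -1.0, 2: +0.25}   # e.g. excitatory, inhibitory
                                                   # (GABA-like), modulatory ...

def decode(word: int) -> tuple[int, float]:
    transmitter = word & 0xFF          # low byte: transmitter type code
    weight = (word >> 8) & 0xFF        # high byte: raw weight, 0..255
    return transmitter, weight * TRANSMITTER_GAIN.get(transmitter, 0.0)

word = (200 << 8) | 1                  # weight 200, transmitter 1 (inhibitory)
print(decode(word))                    # (1, -200.0)
```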

Finally, making an artificial intelligence modeled after biological neurons does not mean it is a model of the "brain." When people were trying to achieve heavier-than-air flight, they modeled their wings on birds to get an understanding of aerodynamics, but that does not mean an F-22 Raptor is the same as an actual bird of prey. They may have the same size radar reflection, but one can fly at Mach 2, travel thousands of miles rapidly, carry tons of weapons load, and reach altitudes of over 20 km.

AI using neuromorphic circuits that do analog neuron-synapse spiking computations, as biology does, will very likely act and think very differently from a human. Questions like "is it conscious?" or "does it have a soul?" are fundamentally impossible to answer: one might say it behaves as if it were conscious, but that is not proof it actually is. In fact, I could say none of you are conscious - you just appear to be - and there would be no way to prove otherwise. Likewise, if I'm zealous enough, I can deny a machine is conscious even if it's programmed to behave like a human perfectly, simply because it's a machine.

The question should be one of results: can an AI be made that has general intelligence comparable to or greater than humans'? Results are much harder to deny.
 
Simulating a cell and producing complex thought are two very different things. Think of it this way: how many cellular interactions directly take part in cognitive thought? We know neurons and synapses communicate and change strengths rapidly, and this is where we reason most of the processing of cognitive thought takes place. Hormones and nitric oxide are too slow and too general to be vital for conscious thought. ...
True, hormones act much more slowly than the neural pulses your neuromorphic circuits can simulate, but that does not mean they don't at times DOMINATE the way one thinks. I assume your neuromorphic circuits will respond the same way to the same stimulus next week as they do today; but if you are a still-menstruating woman (as I guess), you know from personal experience that changes in the levels of "slow-acting hormones" are very important in how you respond to the same stimulus now versus two weeks later.

Again I assert that the brain is much more complex than how it is synaptically connected up.

One might summarize this point by saying:
“Neuromorphic computers are consistent in response to a fixed set of stimuli; hormone-driven humans are not.”
 
To Gravage: thanks for post 175 - I agree with its point completely. ... The brain does not run on electron flows! It is a chemically driven machine with at least 20 different neurotransmitter chemicals, perhaps a dozen hormones, and even some gases, like NO ...

No problem at all, and big thanks for your answers/posts - they help a lot in understanding the complexity of the human brain. Yes, these companies are extremely arrogant: IBM and similar companies do not know at all how the human brain works, and, as you said, the human brain does not run on electron flows, but the technologists and scientists who work at IBM and all the other similar companies do not know that.
You said that you hadn't followed this field for 5 years; well, this answer was posted as criticism of IBM's human-brain-project announcements in 2011 and 2012, so you didn't miss much.
Cheers.
 
Wow, this is a very interesting thread - thank you all for an excellent discussion.

I am still wondering if we are setting our goals too high by expecting something equivalent to a human brain. Why not start instead with a known "simple brain," like an ant's? Forget eyes, ears, and touch. Concentrate only on specific physical functions like an exoskeleton, locomotion, and jaws to perform various functions within the hive.

Could we construct a small but functioning hive: soldiers, blind but with an extremely keen sense of smell (chemical scents), standing guard and reporting threats; and workers tending to new ants from a queen (an assembly line), perhaps "feeding" them by charging empty batteries?

Of course this would not concentrate on consciousness, but it starts at the beginning of evolution. Insects are ancient, yet in spite of their remarkable simplicity in brain function, they have managed to become the dominant species on earth. With the exception of bacteria, the insect is probably the most abundant mobile organism on earth.

Watch this little clip (links below) and marvel at the MINDLESS dedication to specifically programmed responses, where each organism is useless by itself but an indispensable (and highly replaceable) cog in a greater clockwork.

http://en.wikipedia.org/wiki/Dorylus
http://www.youtube.com/watch?v=bLy04CF2Lso&list=PLFFBDAA61C9D27454
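A toy illustration of that kind of pure stimulus-response behavior (a minimal sketch; the percepts and actions are invented, and real ant neurology is of course richer):

```python
# Toy stimulus-response "ant": no memory, no planning -- each percept maps
# straight to an action, like the hard-wired behaviors in the clip above.
RULES = {
    "intruder_scent": "attack",
    "food_scent":     "carry_food",
    "larva_scent":    "feed_larva",
    "nothing":        "wander",
}

def act(percept: str) -> str:
    return RULES.get(percept, "wander")

for percept in ("food_scent", "intruder_scent", "nothing"):
    print(percept, "->", act(percept))
# Complex colony behavior emerges from thousands of agents this simple,
# with no individual understanding anywhere in the loop.
```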
 