I'm not well versed in this technology, but this looks like dense memory, and I can see that it could be stacked into 3D. Each memristor presumably can be in a conducting or non-conducting state, and in 3D the stack would have a third set of parallel wires (vertical and orthogonal to the two orthogonal parallel sets your figure shows). I don't even have a problem with some of the memristors being defective: like bad sectors of a disk drive, the system could learn not to store (or read) data at the defective ones.
I assume reading is a brief, lower-voltage pulse applied to one wire in each of the three orthogonal sets (if the full* possible current flows, then that particular "addressed" memristor is in the on state), and writing a bit might be done by a stronger voltage pulse, or one of opposite polarity, etc. But my computer is much more than its memory disk.
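The read/write scheme assumed above can be sketched as a toy model. This is only a minimal illustration, assuming the simple 2D crossbar case; the class name, methods, and the treatment of a read as an ideal state lookup are all my own assumptions, not a real device model.

```python
# Toy sketch of a 2D memristor crossbar (hypothetical model).
# A bit is stored as the on/off (low/high resistance) state at each
# row-wire / column-wire crossing; a read pulse on one row wire and
# one column wire senses the current through that single crossing.

class Crossbar2D:
    def __init__(self, rows, cols):
        # True = conducting ("on"), False = non-conducting ("off")
        self.state = [[False] * cols for _ in range(rows)]

    def write(self, r, c, on):
        # Modeled here as a stronger (or opposite-polarity) pulse
        # that switches the addressed memristor's resistance state.
        self.state[r][c] = on

    def read(self, r, c):
        # Modeled here as a brief low-voltage pulse on row r and
        # column c; "full" current flow means the cell is on.
        return self.state[r][c]

xbar = Crossbar2D(4, 4)
xbar.write(1, 2, True)
print(xbar.read(1, 2))  # True
print(xbar.read(0, 0))  # False
```

In this idealized picture each read touches exactly one cell; the footnote below is about why that stops being true once a third set of wires is added.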
Also, this serial read/write approach seems valid only for a serial-processing von Neumann machine. The brain is mainly a parallel-processing machine, though the tiny part we associate with consciousness is perhaps serial ("stream of consciousness," etc.).
I.e., it is increasingly confirmed that decisions and choices are made in the parallel-processing brain, and only later does consciousness learn of the results of that parallel processing and, being of course unaware of what was really done, assume that it decided / made the choice, etc.
I would not be surprised if it turns out that artificial memories can store individual bits more densely than brain memory can, but so little is known now about how the brain stores information that it is impossible to guess whether that is true or false. It is not memory, but the processors and their programming, if required, that make me still think we are more than 100 years from making an artificial brain equal to a human brain.
In addition to the machine not being mainly a parallel processor like the human brain, there is the connection problem: where and how would these three orthogonal sets of "zillions" of wires connect to the processors?
----------------
* Not sure this would even work in 3D, as there is less-than-"full" current flow in millions of other memristors that have voltage pulses applied to both of the leads connecting to them; that total is likely to be thousands of times more current than flows in the one now being read (addressed). I.e., if reading the on/off state of the memristor at cube coordinate (a, b, c), all the memristors with coordinates (a, b, x), where "x" ranges over the other layers (not layer c), also have current flow (and likewise for the (a, x, c) and (x, b, c) points in the memristor cube).
Summary of this problem: only two leads to any memristor need to be activated for current flow. How can you require that three leads be activated to read the on/off state of the one, and only the one, memristor at cube point (a, b, c)?
Perhaps the answer is: apply brief pulses in sequence to the (x, y, 0) points, i.e., zero voltage on one of the three orthogonal sets of wires while all possible x & y pairs are energized in turn, looking for current flow. Then do the same lengthy sequence again for the (x, 0, z) points, and again for the (0, y, z) points within the memory cube. When this huge delay is stacked on top of the fact that the data extracted from memory will be fed to a serial von Neumann processor, I expect the human brain will be not only much more general and self-programming, but faster too for most problems. (Not to mention it is cheaply produced by unskilled labor.)
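The counting argument in the footnote can be made concrete. The sketch below, under my own simplifying assumption that a cell conducts whenever two of its three leads are energized, enumerates the unintended cells for one read in a hypothetical N x N x N cube; the function name and coordinates are illustrative.

```python
# Sketch of the sneak-current problem in a hypothetical N x N x N
# memristor cube: pulsing the three wires through point (a, b, c)
# also puts two energized leads across every other cell that shares
# two of the three coordinates, so those cells conduct as well.

def sneak_cells(n, a, b, c):
    """All cells other than (a, b, c) with two of their three leads energized."""
    cells = []
    for x in range(n):
        if x != c:
            cells.append((a, b, x))   # shares the a-wire and b-wire
        if x != b:
            cells.append((a, x, c))   # shares the a-wire and c-wire
        if x != a:
            cells.append((x, b, c))   # shares the b-wire and c-wire
    return cells

n = 1000
extras = sneak_cells(n, 3, 5, 7)
print(len(extras))  # 3 * (n - 1) = 2997 leaking cells for a single read
```

So for a 1000-wire-per-axis cube, each read leaks current through roughly 3,000 other cells, which is the "thousands of times more current" worry stated above.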
I'm not sure if you saw my post in the other thread, but I'll copy it from there:
This was the response to IBM's human brain simulation announcement:
http://www.engadget.com/2012/11/20/ibm-supercomputer-simulates-530-billion-neurons/
http://www.scientificamerican.com/a...ulates-4-percent-human-brain-all-of-cat-brain
http://www.kurzweilai.net/ibm-simul...lion-synapses-on-worlds-fastest-supercomputer
Here is the entire post:
Russ Altman began his lecture in the Unsolved Mysteries in Medical Research series with a tough question and a snappy answer. "Why can't computers simulate a living cell? That's easy -- because it's too hard. Thank you."
When the chuckles died down, Altman, MD, PhD, associate professor of medical informatics at Stanford, began the real work of explaining why computers can't yet replace living organisms in medical research.
During his April 17 lecture, Altman broke down the question into steps, each with its own problems and potential solutions. But first he issued a warning.
"Most of us are not trained to do this," Altman said of the challenge of reassembling millions of bits of experimental data into a cohesive model system that could, for instance, predict the effects of untested medication on humans. "We're taught to be reductionists, but usually the more simple a model is, the more likely it is to be wrong."
Altman said the first step in the process is identifying the individual components -- such as proteins and pools of molecules -- that affect cellular functions. Then the interactions between the components and pools must be identified and the results represented in a map format. Finally, it's necessary to translate the relationships represented by the map into equations, which can then be used to analyze input data -- such as the presence of a new drug -- and predict cellular responses.
The Human Genome Project, a national effort to identify and characterize all human genetic material, has helped to identify many of the players. But Altman emphasized that alternative splicing and multifunctional proteins could inflate the effective number of components beyond the 35,000 genes that have been identified. He also pointed out that differences in the three-dimensional distribution of molecules within a cell can affect their function.
Identifying interactions between the components is extremely complicated, Altman said. Current methods of calculating interactions between isolated components, such as the Michaelis-Menten equation used in enzyme kinetics, are not accurate when applied to living systems, he said. And it's difficult to precisely quantify interactions between feedback pathways.
"As soon as you draw both a plus and a minus on the same page of a model, you've bought yourself a quantitative problem," Altman said. These quantitative tussles can hamstring any effort to generate accurate equations.
Finally, it's not clear whether the computational power exists to crunch the numbers of the billions of interactions that occur in a cell, and whether enough experimental data exists to support this goal, Altman said.
"We may have to give up our desire to have a computer system that permits 'one-stop shopping' and -- at least for the short term -- scale back our expectations," Altman said.
When researchers associated with IBM announced that they had created a computer simulation that could be likened to a cat's brain, they hadn't talked beforehand to Ben Barres. They would have profited enormously from the conversation if they had.
In a widely covered announcement, IBM said that its researchers had simulated a brain with 1 billion neurons and 10 trillion synapses, which it noted was about the complexity of a cat's brain; last year (2012) it reported 530 billion neurons and 100 trillion synapses.
That led many writers to conclude that IBM computers could, as one put it, "simulate the thinking power" of a cat.
Getting a computer to work like any sort of brain, even little Fluffy's, would be an epic accomplishment. What IBM did, unfortunately, didn't even come close, as was pointed out a day later by other researchers, who published a letter scolding the company for what they described as a cynical PR stunt.
Any potential over-claiming aside, IBM's brain research follows the same pattern of similar explorations at many other centers. The logic of the approach goes something like this: We know the brain is composed of a network of cells called neurons, which pass messages to each other through connections known as synapses. If we build a model of those neurons and synapses in a computer, we will have a working double of a brain.
Which is where Ben Barres can shed some light. Barres is a neurobiologist and a specialist in something called glial cells. These are brain cells that are nearly as populous as neurons, but which are usually overlooked by researchers because they are presumed to be of little use; a kind of packing material that fills up space in between the neurons, where all the action is.
Barres, though, has made remarkable discoveries about glials. For example, if you take them away, neurons basically stop functioning properly. How? Why? We have no idea.
He does his research in the context of possible treatments for Alzheimer's, but the implications for modeling the brain are obvious, since you can't model something if you don't know how it works.
"We don't even begin to understand how neural circuits work. In fact, we don't even know what we don't know," he says. "The brain is very far from being modeled."
The computer can be a tempting metaphor for the brain, because of the superficial similarities. A computer has transistors and logic gates and networks of nodes; the various parts of the brain can be described in similar terms.
Barres says, though, that engineers seem to have a diminished ability to understand biology, in all its messy glory. Glial cells are one example, as they occupy much of the brain without our knowing even the first thing about what they really do.
Another example, he says, involves the little matter of blood. Blood flow through the brain--its amplitudes and vagaries--has an enormous impact on the functioning of brain cells. But Barres said it's one that researchers have barely even begun to think about, much less model in a computer.
There are scores of neuroscientists like Barres, with deep knowledge of their special parts of the brain. Most of them will tell you a similar story, about how amazing the brain really is and about the utterly shallow nature of our current understanding of it.
Remember them the next time you read a story claiming some brain-like accomplishment of a computer. The only really human thing these programs are doing is attracting attention to themselves.
Besides this answer to IBM, there is more here:
http://blogs.scientificamerican.com...-hard-for-science-simulating-the-human-brain/
IBM has created a simulated human brain with 530 billion neurons and 100 trillion synapses, and yet they do not know at all how the human brain actually works.
That's about it.