Is it possible to functionally transfer knowledge from one neural network to another?

Brings up an issue: what about a "failure" that only appears years after a transfer of brain functionality? How would one detect it?
I have no objection to setting that aside - a lot of folks seem, to me, to underestimate the hardware complexity and (especially) flexibility of the brain.

I am not arguing that it would be easy. Like I said, I'm normally on the other side of this discussion.

Birds and planes are not that similar. They don't function the same.
But it is possible to "transfer" the functionality of bird flight to machinery, in principle - it would not look or operate like an airplane, is all.
From Wiki:
Lift and drag
The fundamentals of bird flight are similar to those of aircraft, in which the aerodynamic forces sustaining flight are lift and drag. Lift force is produced by the action of air flow on the wing, which is an airfoil. The airfoil is shaped such that the air provides a net upward force on the wing, while the movement of air is directed downward. Additional net lift may come from airflow around the bird's body in some species, especially during intermittent flight while the wings are folded or semi-folded[1][2] (cf. lifting body).

Aerodynamic drag is the force opposite to the direction of motion, and hence the source of energy loss in flight. The drag force can be separated into two portions: lift-induced drag, the inherent cost of the wing producing lift (this energy ends up primarily in the wingtip vortices), and parasitic drag, which includes skin friction drag from the friction of air against body surfaces and form drag from the bird's frontal area. The streamlining of the bird's body and wings reduces these forces.
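The lift described in the passage above follows the standard lift equation, L = ½ρv²SC_L. As a rough illustration, here is a toy sketch; every numeric value below is an assumed, bird-scale guess, not a measurement:

```python
# Standard lift equation: L = 1/2 * rho * v^2 * S * C_L.
# All numeric values are illustrative assumptions for a medium-sized bird.

def lift_force(air_density, airspeed, wing_area, lift_coefficient):
    """Aerodynamic lift in newtons."""
    return 0.5 * air_density * airspeed**2 * wing_area * lift_coefficient

rho = 1.225  # kg/m^3, air density at sea level
v = 10.0     # m/s, assumed cruising airspeed
S = 0.1      # m^2, assumed wing area
C_L = 1.0    # assumed lift coefficient (dimensionless)

print(lift_force(rho, v, S, C_L))  # about 6.1 N, enough to support ~0.6 kg
```

The same equation covers both birds and airliners; only the parameter values differ, which is exactly the "same physics, different implementation" point being argued in this thread.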
In general, what made us even think about flying in the first place?
 
In general, what made us even think about flying in the first place?

I made the point that birds fly and planes fly, but man-made flight operates via a different mechanism than natural flight. Likewise, man-made intelligence operates via different mechanisms than natural intelligence, so it's a fallacy to imagine that just because a digital computer can play chess, humans must use a digital algorithm to play chess.

This argument is not mine, it's common in the AI debate.
 
Yes, except the human brain does not function in a binary code of "on/off" state.

Then you are conceding my point. Thank you.

Our brains function in an exponential fashion

What? Without looking it up, define exponential. And now explain to me what the F you are talking about.

forming cognition or recognition (faster) in a 3D environment. The emerging mental hologram of bits of information inside the brain does not happen differently from our cognition of the existence of logarithms as an "efficient" form of symbolizable processing of information.

Logarithms? You're just randomly typing in math words. You're trolling me now and no longer trying to say anything substantive.

Well, I am not talking about computer languages, but the fundamentals which make such mathematically functional concepts equally useful for the human brain and AI and throughout the universe.

Well, algorithms are described by computer languages. If you're not talking about formal languages then you're not talking about algorithms. You keep defeating your own point.

Lemurs already can recognize more from less, that is a fundamental mathematical calculation.

Good grief. What are you talking about? You are not making any sense. Of course minds can do math. But in fact math is not algorithmic. Gödel proved that.

IMO, my "reasoning" is sound.
I suspect that's where bio-chemistry begins to play a large part?
I have no quarrel with that.
Yes, because they do.
Yes, because it could.
That's precisely why I highlighted the word efficient. Mathematical efficiency and conservation of energy are demonstrably equally valuable assets throughout the universe.
It depends on your perspective, which you have restricted to computers.

No. YOU are restricting the mind to computers by insisting that the mind is an algorithm.

I think it can be argued that all the electro-chemical information our senses and mirror neural system receive should (by the evidence of some 3 billion neurons) employ a very similar networking function to a computer's.

No, you're just wrong about that. There are no logic gates in the brain.

We can demonstrate this internal distribution of electro-chemical information by scanning which parts of the brain show activity while the brain is "working" on cognition or understanding.

Yes but the working does not involve algorithms. You keep making a claim without providing argument or evidence.

Yes, that was Seth's argument also. You have to be alive to feel emotion.
Yes, from your perspective, but I am trying to address the orderly distribution of information.
Do you know the definition of "foozle"? Look it up and you'll see the logical error you made by using this definition.
No, my "assumptions" are based on evidence of the importance of decision making, which is an ancient survival mechanism, present in almost all living organisms with neural networks. It seems a peculiarly well-developed organ in hominids, particularly in humans, especially in cognition of abstractions and abstract thought itself. This requires a certain logical processor.

Call it woo-stuff then.

Fair enough, I can understand your reluctance to using the term "biological computer" for "biological calculating organ" (machine).

It's the same as my reluctance to call a fish a brick. A fish isn't a brick.

https://en.wikipedia.org/wiki/Brain–computer_interface
Well then we are in agreement. I am fairly certain that many parts of the brain have completely different functional abilities.
But even the subconscious part of the brain, which only controls and regulates our own living system, has to deal with the processing of internal electro-chemical signals.

Sure. But not every electrical signal is an algorithm.
 
No, I am not speaking in metaphors, we just address the question from different perspectives.

Then you are calling a fish a brick and expecting me to take you seriously. If you are using the word algorithm to describe a process that is not an algorithm, you might as well call fish bricks and then demand that I agree with you.

I looked it up.
So you do agree that the human brain is capable of processing algorithms.

Yes, I've agreed to that many times. If I execute the Euclidean algorithm in my head, my brain is executing an algorithm. How many times have I already made this exact point using this exact example by now? Three or four times at least.
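For reference, the Euclidean algorithm mentioned here really is the textbook case of an algorithm a brain can execute step by step; a minimal sketch:

```python
def gcd(a, b):
    """Euclidean algorithm: repeatedly replace (a, b) with (b, a mod b)."""
    while b:
        a, b = b, a % b
    return a

print(gcd(48, 18))  # 6 -- the same steps you could carry out in your head
```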

Substitute "numbers" with the concept of "inherent values" (potentials), perhaps that will make more sense

You're calling fish bricks. That does not make sense. You are using the word algorithm incorrectly.
 
Brings up an issue: what about a "failure" that only appears years after a transfer of brain functionality? How would one detect it?

You claimed that classical physics was reducible to propositional logic. I pointed out that this is factually not the case. You agreed with my point. That's as far as this goes. The same problem applies to any attempted digitization of a continuous phenomenon. The accumulated rounding errors will eventually throw the model wildly off the mark. And the same problem applies even to continuous approximations of continuous phenomena. Once you have an approximation, the tiny errors accumulate.
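The accumulation of rounding error is easy to demonstrate in ordinary floating-point arithmetic; here is a minimal example (the loop count is an arbitrary choice):

```python
# 0.1 has no exact binary floating-point representation, so each addition
# carries a tiny rounding error, and the errors accumulate over the loop.
total = 0.0
for _ in range(1_000_000):
    total += 0.1

print(total)            # slightly off from the exact answer of 100000.0
print(total - 100_000)  # the accumulated error: small, but nonzero and growing
```

Scale the loop up and the drift grows; that is the digitization problem in miniature.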

I have no objection to setting that aside - a lot of folks seem, to me, to underestimate the hardware complexity and (especially) flexibility of the brain.

Well, you just said that you are only talking about transference, and now you're willing to set it aside.

I really think my participation in this thread has reached the point of diminishing returns. Nobody's said anything new in quite some time.

I am not arguing that it would be easy. Like I said, I'm normally on the other side of this discussion.

Which side of which discussion? I'm serious. I no longer know what's being argued. I don't believe the mind is an algorithm executing in the brain. That's my point and my only point.

Birds and planes are not that similar. They don't function the same.
But it is possible to "transfer" the functionality of bird flight to machinery, in principle - it would not look or operate like an airplane, is all.

Diminishing returns again. Now you're claiming that a bird could fly like an airplane. Not even Icarus could pull off that trick. If we replaced every molecule of a bird with a computer chip, would the bird fly? Not bloody likely IMO. You'd be missing all the wetware that makes living things function.
 
Again, for some mysterious reason you are assigning quotes to the wrong poster. Must be something wrong with your algorithm.
 
Call it woo-stuff then
Again, a completely unrelated word to the discussion at hand.
woo, verb (used with object)

1. to seek the favor, affection, or love of, especially with a view to marriage.

Synonyms: court, pursue, chase.
2. to seek to win: to woo fame.

Synonyms: cultivate.
3. to invite (consequences, whether good or bad) by one's own action; court:
to woo one's own destruction.

Your confused algorithms make it very difficult to have an intelligent conversation. You don't even know "who" you are addressing.
 
Diminishing returns again. Now you're claiming that a bird could fly like an airplane. Not even Icarus could pull off that trick. If we replaced every molecule of a bird with a computer chip, would the bird fly? Not bloody likely IMO. You'd be missing all the wetware that makes living things function.
A team of biologists led by Henri Weimerskirch at the French National Center for Scientific Research just announced the results of a major new study on great frigates (Fregata minor), these fascinating seabirds native to the central Indian and Pacific Oceans. Using super-lightweight GPS trackers, the biologists followed four dozen birds from 2011 to 2015, some for up to two years continuously.
What they found was astonishing. The birds could stay aloft for up to 56 days without landing, gliding for hundreds of miles per day with wing-flaps just every 6 minutes, and reaching altitudes of more than 2.5 miles.
http://www.popularmechanics.com/science/animals/a21614/frigatebird-study-how-birds-fly/
You mean that its brain does not process the mathematics of aerodynamics and lift?

If this is all "woo" to you, I would suggest you study how the brain functions, instead of clinging to the artificial neural networks of computers.
 
Let's resolve this
However, researchers at the University of Missouri in St. Louis, Gualtiero Piccinini and Sonya Bahar, say that while the brain is in fact a computer, it is not the kind of computer that traditional computationalists make it out to be. In a new paper published in Cognitive Science, the authors argue that the nervous system fulfills four criteria that define computational systems. First, the nervous system is an input-output system. For example, the nervous system takes sensory information such as visual data as input and also generates movement of the muscles as output. Second, the nervous system is functionally organized in a specific way such that it has specific capacities, such as generating conscious experience. Third, the brain is a feedback-control system: the nervous system controls an organism's behavior in response to its environment. Finally, the nervous system processes information: feedback-control can be performed because the brain's internal states correlate with external states. Systems that fulfill these four criteria are paradigmatic computational systems.
https://www.psychologytoday.com/blog/the-superhuman-mind/201211/is-the-brain-computer

Notice; no woo or foozle!

Oh, found another link;
Reinforcement Learning Agents and Dopamine

We also have modelling of animals as agents in a reinforcement learning environment. Researchers in this field discovered that dopamine release levels in the midbrain during tasks follow the expected pattern of mathematical models using RPE (Reward Prediction Error, the difference between expected and actual reward).

Of course, all of this concerns the algorithmic part.
https://www.quora.com/What-kind-of-...eing-developed-today-for-machine-intelligence
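The RPE idea in the quoted passage can be sketched with the simplest possible value-learning update. The learning rate and reward values below are arbitrary, illustrative choices, not figures from any study:

```python
# Minimal reward-prediction-error (RPE) update: the error is the difference
# between the reward actually received and the reward currently expected.
def update(expected, reward, learning_rate=0.1):
    rpe = reward - expected              # midbrain dopamine is reported to track this
    return expected + learning_rate * rpe, rpe

expected = 0.0
for trial in range(50):
    expected, rpe = update(expected, reward=1.0)

# The expectation converges toward the true reward and the RPE shrinks.
print(round(expected, 3), round(rpe, 5))
```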

Oh,and here is another one;
We take a connectionist approach which supports that information is stored non-symbolically in the weights or connection strengths of a neural network [2]. Therefore in our research, we chose a definition of algorithm that is non-symbolic and non-computational: "a step-by-step procedure for . . . accomplishing some end"
https://www.ansatt.hig.no/suley/Publications/Algorithm_IC-AI06-SY-RB_CamReady.pdf

need more?
 
Your confused algorithms make it very difficult to have an intelligent conversation.

I did look at your links. The pdf said that SOME brain functions are algorithms. This is a point I've agreed with several times, and even gave a specific example of executing the Euclidean algorithm in your head.

Why would you post a link that makes a point I've already agreed to several times?

Either you don't know the difference between SOME and ALL, or you didn't read the article, or you hoped I wouldn't, or you're just not debating in good faith. It takes me time to click on a link and read enough of it to determine that it doesn't support the point you think you are making. I won't be clicking any more of your links. I've now gone through three of them, and it's been completely unproductive: every link either fails to make your point or, as in this latest case, makes a point I've already agreed to repeatedly.

The Psychology Today article claimed the mind is a computer but "not the traditional kind." Well, the problem is that as far as we know, there is no other kind. That's the Church-Turing thesis. The author doesn't know enough to be writing about computers. It's Psychology Today, not a research journal. He says brains "process information," but the fact is that information scientists have a definition of information, and that's not it. Again, it's the fallacy of equivocation: using the same word with different meanings within the same argument.

There's no point in your Googling articles and posting them. Not without a critical eye to extract what's valuable and see where their arguments are lacking. You seem to read popularized articles and accept them at face value. None of us should ever do that. We should try to see where the argument's weak, where the author hasn't shown what he claims, and especially when he's equivocated the popular meaning of a word (algorithm, computation, information) with its technical meaning. You can't get your worldview from Psychology Today. That's like getting your politics from cable tv. It's junk information.

Rather than making a remark about intelligent conversation, you might go through my posts and use them as a study guide for you to come up to speed on everything you don't know about this subject. Your arguments will get sharper. And other people will start sounding more intelligent because you'll know what they're talking about.

I have nothing else to add. I'm actually getting tired of making the same points over and over. I don't feel that I've said anything new in quite some time. I'm pretty much done here. Thanks for the chat.
 
I never claimed that a brain only uses algorithms or logarithms! You just assumed I did.
But it is some kind and collection of mathematical "rithms", an "efficient" logical way to process information.

Fundamentally we are in agreement, but we are examining the OP question from different perspectives....:)
 
I never claimed that a brain only uses algorithms or logarithms! You just assumed I did.
But it is some kind and collection of mathematical "rithms", an "efficient" logical way to process information.

Fundamentally we are in agreement, but we are examining the OP question from different perspectives....:)

How did logarithms get into this?

Gödel showed that mathematical truth can't be determined solely by an algorithm. This is a very strong argument that the "mind is a computation" advocates have to respond to. You can't say math is an algorithm because it's not. Roger Penrose thinks this might be the key to consciousness, that the brain can take that Gödelian leap past computation. Nobody takes this particular argument of his seriously, but he's still Sir Roger so you can't dismiss the idea completely.

I've been reading more of the Psychology Today article. It's interesting; there's a lot of good stuff in there. You know what would make your links more effective for me? If you said, "Here's a link. The whole article's great and you should read every word. But the main thing I want you to see is in paragraph 7, where they say XYZ, and this supports my points that A, B, and C." Give me a guidepost so I don't get annoyed having to read a long article, some of which is very interesting and some of which is just wrong, some of which supports your point and some of which refutes it. I can't tell which part you want to draw my attention to or why you think it's important to our conversation.

Yes we're in agreement that mind is part algorithm and part "something else," which needs a word. Woo by the way refers to new-age stuff. Like if you meet someone and they're into crystal healing and the astral plane, that's woo. As in "woo-woo." So there's some woo-stuff going on in the brain that implements mind. What we disagree on is that you want to call that woo-stuff a computation, but you are not making a case that it's so and none of your links have made that case.
 
And what is my case?
What we disagree on is that you want to call that woo-stuff a computation, but you are not making a case that it's so and none of your links have made that case.
So you are making a case for "woo-stuff"? Good luck with that...... :?
 
My case is that the brain is a computational engine (by any other name)

Your case is that the brain is not a computational engine but uses "woo-stuff" to make decisions.

Now who is making more sense?

p.s. "making sense" = "understanding", which is a result of greater abstract thinking.
IMO, this advanced thinking began with a gene mutation, which clearly shows that an evolutionary event was responsible for the split of Homo sapiens from other hominids.
All great apes apart from man have 24 pairs of chromosomes. There is therefore a hypothesis that the common ancestor of all great apes had 24 pairs of chromosomes and that the fusion of two of the ancestor's chromosomes created chromosome 2 in humans. The evidence for this hypothesis is very strong.
http://www.evolutionpages.com/chromosome_2.htm
 
So you are making a case for "woo-stuff"?
My case is that the brain is a computational engine (by any other name)

Your case is that the brain is not a computational engine but uses "woo-stuff" to make decisions.

Now who is making more sense?

Yes it's all about what that "other stuff" is.

If we say the "other stuff" doesn't exist at all, then everything's a computation in the technical sense -- a Turing machine. A lot of smart and famous people believe that.

If we say that the "other stuff" is non-physical, then we are Cartesian dualists. We believe in a spiritual or metaphysical realm wherein many things may dwell. This would be the literal definition of woo-stuff. That it's woo.

I am not a dualist.

I am perfectly willing to agree that the "other stuff" is physical. I do NOT want to depend on a supernatural explanation.

Now, if the "other stuff" is not computational, as that term is currently understood, what could it be?

* It could be that it is some mode of computation we have not yet discovered. That there are meanings of "computation" that can be rigorously defined and that go beyond what Turing machines can do, and that can be instantiated in the world. Nobody has found one for eighty years but that is not to say someone won't find one tomorrow morning.

[Note: Quantum computers can perform dramatically better on certain specialized problems. For example, a quantum computer can factor an arbitrary integer in polynomial time. That's an astonishing result. Nevertheless, a quantum computer cannot compute anything a TM can't. That's been proven. The TM would just run slower.]

* It could be that some new principle of physics will provide the answer. History shows that we have periodic revolutions in physics and there's no reason to think the next one won't give us some new insight into the nature of our world. In fact I tend to think that the breakthrough in computation I mentioned above would necessarily require a breakthrough in physics to make it possible. New physics and some new mode of computation becomes physically possible. That's the breakthrough we're waiting for.

* It may be that we haven't got the ability to comprehend the nature of the world. Why should this be surprising? If we're an ant on a leaf on a tree in a jungle, we may be a brilliant ant scientist but we simply are not biologically equipped to comprehend the many levels of existence beyond our leaf.

Why should we humans imagine ourselves to be the culmination of evolution, the universe's very means for understanding itself? Maybe we're just an ant on a leaf on a tree somewhere.

Remember that the intellectual history of humanity is based on successive discoveries that we're not special. We're not the center of the heavens. We're not the center of the solar system. We're not the center of the galaxy or of the universe. We're not separate from the animals; we are one of them, a member of the great order of primates. In the US, where the government is currently shut down, you may turn on the news and find evidence of chest-beating and dominance rituals common to all the great apes. Our wise leaders wear suits; that's about the main difference.

If we're not special snowflakes (LOL!) then perhaps we are NOT the one creature in creation capable of understanding all of creation.

I've run across the name for this idea recently. It's called Mysterianism.

Wiki said:
New mysterianism—or commonly just mysterianism—is a philosophical position proposing that the hard problem of consciousness cannot be resolved by humans. The unresolvable problem is how to explain the existence of qualia (individual instances of subjective, conscious experience). In terms of the various schools of philosophy of mind, mysterianism is a form of nonreductive physicalism. Some "mysterians" state their case uncompromisingly (Colin McGinn has said that consciousness is "a mystery that human intelligence will never unravel"); others believe merely that consciousness is not within the grasp of present human understanding, but may be comprehensible to future advances of science and technology.

https://en.wikipedia.org/wiki/New_mysterianism

I can go with that! It's what I believe. That either we need revolutionary breakthroughs in physics and computer science; or else we're not actually built by nature to understand the universe or ourselves.
 
Now, what could it be if it's not computational?
* It could be that it is some mode of computation we have not yet discovered. That there are meanings of "computation" that can be rigorously defined and that go beyond what Turing machines can do.
I have no quarrel with that, but note that you used the term "some mode of computation" which is as yet undiscovered. It would still be a mode of computation, though it may not strictly answer to the Turing type.

Why you insist on using the Turing type of computation as the standard by which all other forms of computation should be judged, is a mystery to me.

We are talking about the same thing, but because we used different perspectives, it seems we have been talking past each other, while trying to make the same point.

IMO, every brain neuron (every neuron) has an inherent computational ability. Now we may get into the differences between afferent neurons and efferent neurons, but either way there is always a translation between information received and information forwarded. This is a computational ability, IMO.
 
I've run across the name for this idea recently. It's called Mysterianism.
Well, I call this the "mirror neural system"
The mind's mirror
A new type of neuron, called a mirror neuron, could help explain how we learn through mimicry and why we empathize with others......

But that story is just at its beginning. Researchers haven't yet been able to prove that humans have individual mirror neurons like monkeys, although they have shown that humans have a more general mirror system. And researchers are just beginning to branch out from the motor cortex to try to figure out where else in the brain these neurons might reside.
http://www.apa.org/monitor/oct05/mirror.aspx
 
Now, if the "other stuff" is not computational, as that term is currently understood, what could it be?

* It could be that it is some mode of computation we have not yet discovered. That there are meanings of "computation" that can be rigorously defined and that go beyond what Turing machines can do, and that can be instantiated in the world. Nobody has found one for eighty years but that is not to say someone won't find one tomorrow morning.
There are levels of computation.

For example, it's pretty safe to say that if we did an atomic-level simulation of a human brain on a computer, the result would be almost identical (to the degree that the simulation is accurate) to what a human brain does. No existing or planned computer can simulate something like that, nor do we have the tools to create the model to simulate. (The model exists, of course - we just can't easily convert it to computer data.)

It is very likely that if we did a neuron-level simulation of a human brain on a computer, the result would be similar to what we see in a human brain, again depending on how accurate that level of abstraction is. Again, we don't have computers that can do that yet - but the neuron-level model (often called the connectome) is more easily discovered by existing tools (PET, FMRI etc.)

It is somewhat likely that if we did a function-level simulation of a human brain, the result would be somewhat similar to what we see in a human brain. In other words, model what a section of the brain does (say, the part that interprets motion in the visual cortex) and "run" it on a more conventional neural network, on a computer set up to do that easily. We do have computers that can do this now, but not for the whole brain - although we will in 5-10 years. How accurate it will be will depend on how close we come in terms of our assumptions on what that functionality is.

So we have a scale of computation, from near-perfect simulation of a brain (almost impossibly hard) to functional-level simulation (almost doable now). It is likely these will be implemented on nonconventional computers (i.e. computers that are, or include, an NPU, and do not meet the classic definition of a Turing machine) - but they are deterministic, easily programmed and understandable machines, and do not require any sort of "quantum weirdness" or anything.
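For a concrete sense of what the "neuron-level" rung of that ladder means, here is a toy leaky integrate-and-fire neuron, about the simplest standard neuron model. All parameters are illustrative, and real connectome-scale models are vastly richer:

```python
# Toy leaky integrate-and-fire neuron: the membrane potential leaks toward
# rest, integrates the input current, and fires (then resets) at threshold.
# All parameter values are arbitrary illustrative choices.
def simulate_lif(current, steps=200, dt=1.0, tau=20.0,
                 v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    v = v_rest
    spike_times = []
    for t in range(steps):
        v += (-(v - v_rest) + current) * (dt / tau)  # leak plus input drive
        if v >= v_thresh:
            spike_times.append(t)
            v = v_reset
    return spike_times

print(simulate_lif(current=1.5))  # regular spiking for a strong input
print(simulate_lif(current=0.5))  # [] -- a weak input never reaches threshold
```

A neuron-level brain simulation is, in spirit, billions of units like this one, wired by the connectome; the point of the scale above is how much fidelity each level trades away.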
 