Is it possible to functionally transfer knowledge from one neural network to another?

This may be of interest:
In psychology and cognitive neuroscience, pattern recognition describes a cognitive process that matches information from a stimulus with information retrieved from memory.[1] Pattern recognition occurs when information from the environment is received and entered into short-term memory, causing automatic activation of a specific content of long-term memory. An early example of this is learning the alphabet in order. When a carer repeats ‘A, B, C’ multiple times to a child, utilizing pattern recognition, the child says ‘C’ after he/she hears ‘A, B’ in order. Recognizing patterns allows us to predict and expect what is coming.
The process of pattern recognition involves matching the information received with the information already stored in the brain. Making the connection between memories and information perceived is a step of pattern recognition called identification. Pattern recognition requires repetition of experience. Semantic memory, which is used implicitly and subconsciously, is the main type of memory involved in recognition [2].
https://en.wikipedia.org/wiki/Pattern_recognition_(psychology)#Prototype_matching
 
Turing machines can emulate Boolean logic, and every process described by classical physics can be described in that logic (by way of propositional calculus, etc.: https://en.wikipedia.org/wiki/Propositional_calculus )

Of course I know what propositional logic is, but that link is hardly sufficient to support your point. You claim that prop logic can describe every process of classical physics.

Now this happens to NOT be true, and I'll supply the refutation in a moment.

But of course even if it was true it wouldn't matter, because classical physics is only an approximation to quantum physics, which (by current theory) is the physics of our world. So whether or not prop logic can implement classical physics is irrelevant, since the brain lives in the physical world, which is not bound by classical physics.

So I see what you mean by introducing quantum physics. The point is that whatever the brain is, it's physical, so it's bound by the laws of physics. But it's certainly not bound by the laws of classical (by which I assume you mean Newtonian) physics.

Now as it turns out, Newtonian physics can not be implemented on a computer. Say we have three bodies in space and we wish to emulate their mutual gravitational interaction over time. In other words we have the differential equations for the three body problem, and we wish to program that into a computer.

Now as you know, the solutions to these equations involve real numbers. But computers can't store or represent real numbers, only finite approximations.

For practical calculations, our approximations are good enough. But the approximations introduce tiny errors; and over a long enough period of time, chaos theory says that the accumulated errors will end up throwing your model wildly off the mark. There's a book by Ivars Peterson called Newton's Clock. It's all about chaos in the solar system. It turns out that even under the assumptions of perfectly deterministic Newtonian gravity, if we knew the exact position and velocity of every particle in the solar system, we could NOT determine whether the solar system is stable or not by using a computer!

This point is not sufficiently well known. You can't even use a computer to perfectly model deterministic Newtonian gravity. That's how weak algorithmic computations are. Finite approximations to real numbers are not sufficient over long enough time scales.

Of course the fact that some high school kid programmed a model of the solar system into his computer does not falsify my point. Our approximations are excellent within their limits. But the approximations are not perfect and over a sufficiently long period of time, they're wildly inaccurate.

In fact the ultimate stability of the solar system under Newtonian gravity is an open problem. We can't solve the differential equations and our computer models fail due to chaos.
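
If it helps to see the mechanism in miniature, here's a toy sketch in Python. It is not a solar-system model; the logistic map is just about the simplest standard chaotic system, and it shows how a rounding-sized difference in the starting value destroys agreement between two runs after a few dozen steps.

```python
# Toy illustration of sensitive dependence on initial conditions (chaos).
# This is NOT a solar-system model; the logistic map is just the simplest
# standard chaotic system, used to show how a rounding-sized error grows.

def logistic(x, r=4.0):
    # One step of the logistic map; chaotic at r = 4.
    return r * x * (1.0 - x)

x_true = 0.123456789          # "exact" initial condition
x_approx = x_true + 1e-12     # same value with a tiny representation error

for step in range(1, 61):
    x_true, x_approx = logistic(x_true), logistic(x_approx)
    if step % 10 == 0:
        print(f"step {step:2d}: difference = {abs(x_true - x_approx):.3e}")

# Typical output: the gap grows from ~1e-12 to order 1 within a few dozen
# steps, after which the two runs no longer agree at all.
```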

https://www.amazon.com/Newtons-Clock-Chaos-Solar-System-ebook/dp/B007KLWZ00

https://en.wikipedia.org/wiki/Stability_of_the_Solar_System

The reference was to stuff like this: http://mathworld.wolfram.com/SurrealNumber.html and complex numbers etc. We are dealing with electronic current in three dimensions - complex numbers, even quaternions, are involved.

Yes ok. When you said gaps even in the reals I thought you might mean the hyperreals or the surreals. As it happens, any model of the real numbers that contains infinitesimals (as both those systems do) can not be topologically complete. You are absolutely correct about that. But the standard reals are complete. That's why the standard reals are a better model of the continuum than the hyperreals and surreals.

By the way the complex numbers and quaternions are topologically complete. And Euclidean n-space is topologically complete.

I failed to be clear: the "numerically weighted node" is the abstraction, the physical realization of it is the model - with their connections some neurons, some transistors, are physical models of numerically weighted nodes. Your claim was that the brain contains no such things, my claim is that it does.

Yes I thought of that but probably didn't articulate my response clearly.

First an analogy.

Suppose I say that my brain/body is not an algorithm. Then you say, oh yeah? You claim your body/brain can't express algorithms or doesn't contain algorithms? No, I don't claim that. In fact if I execute the Euclidean algorithm to calculate the GCD of two integers using pencil and paper, as I did many years ago in number theory class, I am a brain using my body to execute an algorithm.

But not EVERY function of my brain and body is nothing but an algorithm. See the difference? My brain can execute algorithms. But that's not ALL my brain can do. My brain does things that are NOT algorithms. That's my claim.

So I would be perfectly happy to agree that SOME functions of the brain might be implemented as neural networks. After all, neural networks are an abstraction of neurons in the brain. I would expect that some subsets of my brain can be isolated and modeled to perfection as a neural network.

But I am saying that NOT ALL brain function can be explained as a neural network.

I hope that's clear. I did not say the brain contains NO such things. I said it doesn't contain NOTHING BUT such things.


Of course it contains much more, not only in hardware but in organizational complexity - but so do the actual machines running neural nets, at least the hardware.

Hardware itself is an abstraction. It really contains flowing currents as noted earlier in the thread. But digital electronics presents an abstraction to the software that allows the software to pretend there's Boolean logic and bit flipping.

In the same sense, brains are full of all kinds of other gooey stuff performing who knows what functions.

Now you claim the brain is implementing an abstraction layer that looks like a neural network to the mind it hosts.

I claim not. I'll allow that the brain may implement some neural networks. But that's not the ONLY thing it does.
 
Finite approximations to real numbers are not sufficient over long enough time scales.
But they are functional over short time scales. As one of the Rover engineers (the Rover that landed on the Moon) said, "We don't need to be exact, we need to be exact enough."

For practical purposes our brains make exact enough "best guesses" to be functional as an imaginative future prediction engine.
 
But they are functional over short time scales. As one of the Rover engineers (the Rover that landed on the Moon) said, "We don't need to be exact, we need to be exact enough."

For practical purposes our brains make exact enough "best guesses" to be functional as an imaginative future prediction engine.

someguy1 has a point; the long term matters.
 
I hope that's clear. I did not say the brain contains NO such things. I said it doesn't contain NOTHING BUT such things.
And my response was to observe that neither does an actually functional, constructed and employed, digital neural network.
This point is not sufficiently well known. You can't even use a computer to perfectly model deterministic Newtonian gravity. That's how weak algorithmic computations are. Finite approximations to real numbers are not sufficient over long enough time scales.
This is so. But it does not speak to our problem, which is whether or not one can, in principle, transfer the state of a brain into another brain in such a way that the transfer recipient will behave as the original for the finite amount of time necessary to measure its functioning and verify that the transfer was a duplication - functionally.

In context, note that the same difficulty in handling chaos applies to the transfer of the state of a CPU to another CPU - we get around that by incorporating homeostatic error-correction in CPU operations, in practice, but that consideration also applies to brains: human brains exhibit chaotic behaviors which are controlled somehow to maintain reliable functioning, even in such simpler subfunctionings as regulating heartbeat. We see that happen. So we have reason to believe that our duplicate need only be close enough to function identically, that chaotic amplification of unmeasurable approximation errors will not disturb the larger scale functioning we intended to duplicate any more than such inevitable glitches in ordinary operations throw our originals into aberrant states.
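
As a rough sketch of the error-correction idea (this assumes nothing about how real CPUs or brains actually do it; a single parity bit is just the simplest possible check), detecting a one-bit copying glitch and retrying is enough to keep a transferred state "close enough":

```python
# Minimal sketch of detect-and-retry error correction during a state transfer.
# Real hardware uses far stronger codes (ECC, CRC); a single parity bit is
# only the simplest illustration of the idea.

def parity(bits):
    # Even parity over a list of 0/1 values.
    return sum(bits) % 2

def transfer(state, flip_index=None):
    # Copy the state, optionally corrupting one bit to simulate a glitch.
    copy = list(state)
    if flip_index is not None:
        copy[flip_index] ^= 1
    return copy

original = [1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1]
check = parity(original)

received = transfer(original, flip_index=5)   # one bit flipped in transit
if parity(received) != check:
    received = transfer(original)             # error detected: retry the copy

print(received == original)                   # True after the retry
```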
 
If possible, how would the transfer not clash with the neural network already formed in the brain of the receiver?
 
But they are functional over short time scales. As one of the Rover engineers (the Rover that landed on the Moon) said, "We don't need to be exact, we need to be exact enough."

For practical purposes our brains make exact enough "best guesses" to be functional as an imaginative future prediction engine.

Interesting point. So our brains run algorithms that continually crank out "good enough" solutions to the outside world. Not a bad idea actually. Reminds me of Plato's cave. We don't see the world as it is, only a good enough approximation.
 
And my response was to observe that neither does an actually functional, constructed and employed, digital neural network.

Ah ... then what else does a digital neural net do that goes beyond the capacity of a neural net? That's a very interesting remark. Sure, PARTS of the brain might operate as a neural net. But which parts go beyond? And what does that beyond consist of?

This is so. But it does not speak to our problem

Well, you claimed that classical physics can be modeled by propositional logic, and I challenged that point, and now you agree but say it's not important! Ok. I'm actually curious about your statement in general. For example I'm under the impression that there is no axiomatization of physics. Do Newton's laws constitute a propositional axiomatization of classical physics? I'm ignorant on these matters but curious if you happen to know.

, which is whether or not one can, in principle, transfer the state of a brain into another brain in such a way that the transfer recipient will behave as the original for the finite amount of time necessary to measure its functioning and verify that the transfer was a duplication - functionally.

You're right, at this point I'm pushing back on the idea that it's an algorithm. But if we could in theory take a brain out of one body and put it in another, and connect all the blood vessels and nerves and the spinal cord and such, what would happen? I don't think anyone knows.

But if we reject the algorithmic explanation, then what is it we're transferring exactly? It seems to me that the transfer idea presupposes that the mind is an algorithm, because it's easy to transfer algorithms from one piece of hardware to another; we do that all the time.

In context, note that the same difficulty in handling chaos applies to the transfer of the state of a CPU to another CPU - we get around that by incorporating homeostatic error-correction in CPU operations, in practice,

Yes, you're right, there are error-correcting systems in digital hardware. But I don't think that really applies to my point. If I give you a bit pattern, you can transfer it to a different piece of paper or hardware with 100% accuracy. That's the nature of digital. With analog systems, there's always some inaccuracy. I think we're in agreement on the basics, but I don't follow your point. If I give you the bit pattern 101000101001 you can transfer that to any hardware without error. Of course you might make a copying mistake. This is true. But I'm not sure where this fits into the discussion. Chaos doesn't apply to transferring a bit pattern.
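
Here's a quick sketch of the digital/analog contrast, purely as an illustration (the noise level is arbitrary): a bit string survives any number of copy generations unchanged, while an "analog" value that picks up a tiny error on every copy drifts away from the original.

```python
import random

# Digital copy: every generation is an exact duplicate of the previous one.
bits = "101000101001"
digital = bits
for _ in range(1000):
    digital = digital[:]               # make a fresh copy of the bit string
print(digital == bits)                 # True: no degradation, ever

# "Analog" copy: every generation picks up a tiny random error.
value = 0.5
analog = value
for _ in range(1000):
    analog += random.gauss(0.0, 1e-4)  # small per-copy noise
print(abs(analog - value))             # typically drifts by a few thousandths
```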

but that consideration also applies to brains: human brains exhibit chaotic behaviors which are controlled somehow to maintain reliable functioning, even in such simpler subfunctionings as regulating heartbeat.

Yes, that's quite a miracle. How does the wetware stabilize itself? But that's only emphasizing the profound mystery of the brain compared to the relative simplicity of a digital computer.

We see that happen. So we have reason to believe that our duplicate need only be close enough to function identically,

Ah, it's "close enough" day. The algorithms in our brain aren't accurate but they're close enough. The duplicate no longer has to be exact, just close enough. Interesting turn the discussion has taken. Well sure, we can always approximate one system with another. But what happens after some time goes by? Wouldn't the approximations drift?

that chaotic amplification of unmeasurable approximation errors will not disturb the larger scale functioning we intended to duplicate any more than such inevitable glitches in ordinary operations throw our originals into aberrant states.

For the moment. But after a few years, perhaps the approximated clone fails. Develops horrible diseases from the accumulated rounding errors. This is straight out of The Fly. It's harder than it looks to make an atom-by-atom copy of something.
 
Interesting point. So our brains run algorithms that continually crank out "good enough" solutions to the outside world. Not a bad idea actually. Reminds me of Plato's cave. We don't see the world as it is, only a good enough approximation.
Anil Seth calls it "best guesses" or "controlled hallucination" which sounds strange at first, but makes sense when you view the brain as a prediction engine. This is why optical illusions work so well. They are purposely constructed to fool the brain into processing false information.
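
A stripped-down sketch of what a predict-and-correct loop looks like, purely as an illustration (this is plain exponential smoothing, not Seth's model): the system keeps a running "best guess" and nudges it by a fraction of each prediction error.

```python
# Toy "prediction engine": guess the next input, then nudge the guess by a
# fraction of the prediction error. This is ordinary exponential smoothing,
# shown only to illustrate the predict/compare/update idea.

def run(inputs, learning_rate=0.3):
    guess = 0.0
    for observed in inputs:
        error = observed - guess               # how wrong the last guess was
        guess = guess + learning_rate * error  # update the "best guess"
        print(f"observed {observed:5.2f}  next guess {guess:5.2f}")

# A noisy signal hovering around 10: the guesses settle near it.
run([9.8, 10.1, 9.9, 10.2, 10.0, 9.7, 10.3])
```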

In the chessboard example we see a square (in the shadow) which fools our brain into approximating its hue incorrectly.
I wonder if a computer would similarly be fooled by the "shadow" effect.

In the fake hand experiment, it is clear that the subject assimilated the fake hand as his own.
Would a computer be subject to such flexibility?
 
Anil Seth calls it "best guesses" or "controlled hallucination" which sounds strange at first, but makes sense when you view the brain as a prediction engine.

What's the evidence that this mechanism is an algorithm as the word is universally understood in computer science?

I have no doubt that the brain serves up useful hallucinations and distortions. For example, if we had perfect universe-vision we'd see a bunch of whirling quarks and probability waves. It's our lack of ability to see reality that lets us imagine there are bricks and cars and food and things like that. I believe Huxley made this point in The Doors of Perception. That our minds are devices that filter out most of reality. And of course it's basic physiology that we happen to perceive a certain narrow band of the electromagnetic spectrum. Bats hear things we don't hear; so do dogs.

But the question is whether it's correct to call these limitations and distortions algorithms, which already have a very specific technical meaning. Like I said if you called the underlying mechanism a foozle I'd have no problem with your argument. You just can't call it an algorithm without evidence.

Do you have a link to the Anil Seth article you are referring to? If you gave it to me earlier I apologize but can you re-post it?
 
Do you have a link to the Anil Seth article you are referring to? If you gave it to me earlier I apologize but can you re-post it?
https://www.ted.com/talks/anil_seth_how_your_brain_hallucinates_your_conscious_reality

And Webster gives a more general definition of "algorithm"
Definition of algorithm
: a procedure for solving a mathematical problem (as of finding the greatest common divisor) in a finite number of steps that frequently involves repetition of an operation; broadly : a step-by-step procedure for solving a problem or accomplishing some end especially by a computer
 

I hate watching videos. It's 17 minutes long. I can skim an article in a minute or two. But unless he has evidence that the brain operates via an algorithm, the video is irrelevant. I'm happy to agree that our brains serve up useful hallucinations and delusions. But not that they are caused by algorithms.

And Webster gives a more general definition of "algorithm"

The dictionary is the last place you should look for a technical definition. But even so, the one you gave is accurate as far as it goes. So where is your evidence that all (not some, all) of our brain processes are step-by-step procedures as in a computer? To me, brain processes seem analog, not digital. And it's interesting that the def you quoted specifically mentions the Euclidean algorithm, which I've used as an example myself.

I am perfectly happy to agree that SOME of our brain processes are algorithms. For example if you give me 6 and 14 and I give you back 2 by executing the Euclidean algorithm in my head, that's an example of my brain executing an algorithm. Or if I cook a meal by following a recipe, that's an example of me executing an algorithm.
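
For reference, here's the Euclidean algorithm written out in Python; it's the textbook version, nothing more:

```python
def gcd(a, b):
    # Euclidean algorithm: repeatedly replace (a, b) with (b, a mod b).
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(6, 14))   # 2, the example from the post above
```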

But that isn't evidence that ALL my brain and body processes are algorithms.
 
I hate watching videos. It's 17 minutes long. I can skim an article in a minute or two. But unless he has evidence that the brain operates via an algorithm, the video is irrelevant. I'm happy to agree that our brains serve up useful hallucinations and delusions. But not that they are caused by algorithms.
If you did not watch the entire presentation, how can you comment on its content? You asked for a reference and I gave it to you. Did I waste my time, or do you dismiss my observational powers?
Within the presentation Seth suggests that the algorithmic function of the brain is a variable program, one that can append to itself by integrating new information, which in turn changes the algorithm.

Seth specifically makes the point that making computers smarter does not necessarily make them sentient.
For sentience and variable control of your functioning algorithms, you need to be alive. Computers are not living things.
 
p.s. Here is the definition of Algorithm in psychology.
Algorithm
An algorithm is a set of instructions for solving a problem or completing a process, often used in math. The steps in an algorithm are very precise and well-defined. If your problem is a headache, your algorithm might look like this:

1) Have you been hit on the head? If yes, seek medical attention; if no, go to next step.
2) Have you taken a pain reliever? If no, take one now; if yes go to next step.
3) Have you eaten today? … and so on, until it ends with either a solution or advice to seek medical attention.

Algorithms often take the form of a graph with a square for each step and arrows pointing to the possible directions from each step.
https://www.alleydog.com/glossary/definition.php?term=Algorithm
 
p.s. Here is the definition of Algorithm in psychology. https://www.alleydog.com/glossary/definition.php?term=Algorithm

Fine. What is your evidence that the mind processes everything in discrete steps?

From your link: "The steps in an algorithm are very precise and well-defined."

The state of the art in brain science simply doesn't have proof or even evidence that the brain works that way.

Also, this particular definition you linked fails to mention that the list of instructions must be finite. That's an important part of the definition.

There's no shortage of popularized definitions out there. What I need to see is some evidence that the brain works according to an algorithm. And I really hope you'll agree that vague popularized definitions are not technical scientific definitions.

And the example given in your link is terrible. An algorithm for treating a headache? Your algo would prescribe an aspirin for someone with a brain tumor.

Medical diagnosis is a very interesting example. In the early days of AI hype, there was a focus on "expert systems." A doctor would tell the computer everything there is to know about diagnosing headaches, and then the computer could diagnose headaches.

The problem is ... expert systems failed as an approach to AI. Medical diagnosis is an art that can often be reduced to a series of yes/no questions, but that sometimes requires a skilled clinical diagnostician. You can't encode all of the knowledge of medical practitioners into an algorithm.
 
Fine. What is your evidence that the mind processes everything in discrete steps?
I can't recall, and at the moment don't have access to, the figures for the speed and number of thought computations, but think about a baseball player and the calculations required to hit the ball.

Sure, practice improves the best guessing (if that's what the brain does), but really? Surely the computation is not performed in discrete steps?

And where is the cooling fan? :)

:)
 
I can't recall, and at the moment don't have access to, the figures for the speed and number of thought computations, but think about a baseball player and the calculations required to hit the ball.

Sure, practice improves the best guessing (if that's what the brain does), but really? Surely the computation is not performed in discrete steps?

And where is the cooling fan? :)

Well, that could depend on the number of algorithms running simultaneously, couldn't it?

In your example, would the technique used for muscle memory of swinging the bat correctly not be an algorithm?
 
From your link: "The steps in an algorithm are very precise and well-defined."
Max Tegmark proposes that everything can be explained with mathematics.

Moreover, computers today can apply their algorithms at millions of bits per second.
Geeks Weigh In: Does a Human Think Faster Than a Computer?
The question itself represents the fallacy of how people think about computers. When a person uses a computer, if it’s slow then it’s junk. But there are certainly other factors to consider when examining intelligence – what about image recognition, language recognition, multi-tasking capabilities or self-learning and self-healing features?

First, to partially answer the “speed” question we need to examine data transmission. In the Hartford Examiner, writer Joy Casad answers the question, “How fast is a thought” by describing the chemical/biological propagation of “thinking” neurons before getting to the point in the final paragraph – these neurons transmit signals at 0.5 milliseconds. That’s pretty fast!
https://www.makeuseof.com/tag/geeks-weigh-in-does-a-human-think-faster-than-a-computer/
 
Well that could depend on the number of algorithms running simultaneously, wouldn't it?
It could. A number of algorithms running simultaneously - in parallel - might complicate matters
Arm / swing / calculate / Arm / swing / calculate / Arm / swing / calculate
at the same time
Ball / coming / calculate / Ball / coming / calculate / Ball / coming / calculate
then
Put the two together / calculate / Put the two together / calculate / Put the two together / calculate /

In your example, would the technique used for muscle memory of swinging the bat correctly not be an algorithm?

I have no firm idea, but my weak understanding of algorithms suggests to me that both bat swing and ball speed could be built into a computer to mimic a batter.

Still no idea if the brain does it that way.

I just recalled, from I think a QI program, that there is a Japanese tournament involving mathematical flash cards.
The numbers are flashed up very briefly, and contestants are required to tally them.
During the test it looks like they are drumming their fingers.
Apparently they are using a mental abacus.
When they give their answers, it seems they have no recollection of the individual numbers they tallied.

:)
 
It could. A number of algorithms running simultaneously - in parallel - might complicate matters
Arm / swing / calculate / Arm / swing / calculate / Arm / swing / calculate
at the same time
Ball / coming / calculate / Ball / coming / calculate / Ball / coming / calculate
then
Put the two together / calculate / Put the two together / calculate / Put the two together / calculate /
I think it might be simpler than that. Calculate trajectory / swing, based on thousands of hours of prior practice. The algorithms are well established, and from the previous analysis, neural signals take about 0.5 milliseconds, which allows for many calculations if strictly focused. But obviously it still is only a best guess of speed and ball rotation. 3 strikes and you're out....:(
 