Is it possible to functionally transfer knowledge from one neural network to another?

An AI may be capable of "deep" thought processes, but it will lack something until we find an algorithm that makes an AI want to stay alive: the fundamental urge that all biological organisms seem to possess, and that seems to drive their evolution.
This sentient AI business presents a real human dilemma.
Remember HAL from "2001: A Space Odyssey"?
Hal then reports the imminent failure of an antenna control device. The astronauts retrieve it in an extravehicular activity (EVA) pod but find nothing wrong. Hal suggests reinstalling the device and letting it fail so the problem can be found. Mission Control advises the astronauts that results from their twin HAL 9000 indicate that Hal is in error. Hal insists that the problem, like previous issues ascribed to HAL series units, is due to human error. Concerned about Hal's behavior, Bowman and Poole enter an EVA pod to talk without Hal overhearing, and agree to disconnect Hal if he is proven wrong. Hal secretly follows their conversation by lip reading.
https://en.wikipedia.org/wiki/2001:_A_Space_Odyssey_(film)#Plot
https://en.wikipedia.org/wiki/2001:_A_Space_Odyssey_(film)
 
But obviously it still is only a best guess of speed and ball rotation.

Agree best guess and not always right

How about discussion between two AIs as to whether Giacomo Antonio Domenico Michele Secondo Maria Puccini is better than Giuseppe Fortunino Francesco Verdi

How would that be best guessed?

:)
 
Agree best guess and not always right

How about discussion between two AIs as to whether Giacomo Antonio Domenico Michele Secondo Maria Puccini is better than Giuseppe Fortunino Francesco Verdi

How would that be best guessed? ......:)
Differently from humans, no doubt, but fundamentally still the processing of available information and responding appropriately in a programmed benign fashion. And it might have to make value choices, especially in patterns of mathematically based arts or geometrics.

Ever taken a deep zoom into a fractal until a hydrogen atom is represented as a cloud?
I had it but lost it in a transfer... it had a running counter of iterations, and it was just astounding to behold what a simple program can produce over long periods of time. The trick is to start at the smallest possible particle and build the fractal from the ground (Planck scale) up.

Example: I, Robot
In the year 2035, humanoid robots serve humanity, which is protected by the Three Laws of Robotics. Del Spooner, a Chicago police detective, hates and distrusts robots because one of them chose to rescue him from a car crash in the Chicago River (because his survival was statistically more likely), leaving a 12-year-old girl to drown. Spooner's critical injuries were repaired with a cybernetic left arm, lung, and ribs, personally implanted by the co-founder of U.S. Robots and Mechanical Men (USR), Dr. Alfred Lanning...
https://en.wikipedia.org/wiki/I,_Robot_(film)#Plot

And this more serious inquiry: https://itstillworks.com/morph-photos-8341726.html

It is dangerous to introduce an AI that would compete with humans. Actually, this has already happened. How many jobs are performed by robots? The world is already thoroughly integrated with specialist computers, which are creating a whole new class of tool that can replace human labor and, with proper maintenance, may work a hundred years, perfecting its own experience of life and its approximations of human behavior.
 
If you did not watch the entire presentation how can you comment on its content?

His premise is wrong.

You asked for a reference and I gave it to you. Did I waste my time or do you dismiss my observational powers?

You clearly have no idea what an algorithm is. And if you got your ideas from this vid, either you misunderstood the vid or Anil Seth doesn't know what an algorithm is.


Within the presentation Seth suggests that the algorithmic function of the brain is a variable program, which can append itself by integrating new information which changes the algorithm.

Then whatever it is, it's not an algorithm as understood in computer science. Like I say, if you would just call it a "foozle" we'd have no disagreement. You can't call it an algorithm for the same reason you can't call it a banana or a brick. Those words already have well-understood meanings that conflict with how you are using the term algorithm.

Seth specifically makes the point that making computers smarter does not necessarily make them sentient.

I certainly agree with that.

For sentience and variable control of your functioning algorithms, you need to be alive. Computers are not living things.

So in the end you are agreeing with me that computers executing algorithms are not sentient. And since the brain is sentient, the brain is not a computer.

You are completely agreeing with me.
 
I can't recall, and at the moment don't have access to, the figures on the speed and number of thought computations, but think about a baseball player and the calculations required to hit the ball!

Great example. Anyone who ever caught a ball knows that it's NOT a step-by-step computation of trajectories. That's how MACHINES are programmed to catch balls. Humans seem to have a real-time sense of where the ball is and where it's going. It's an analog system of continuous feedback that is NOT an algorithm. This is plain to anyone who has ever played catch or grabbed a falling plate in the air before it falls to the ground.

Sure, practice improves the best guessing (if that's what the brain does), but really? Surely the computation is not performed in discrete steps?

Surely. We are in agreement. Our brains do not operate the way digital computers do.

And where is the cooling fan? :)

:)

Our body and brain have an amazing system of maintaining a constant temperature. I assume you know that.
 
Max Tegmark proposes that everything can be explained with mathematics.

Tegmark is already on record claiming not only that everything can be explained by math, but that the universe literally IS a mathematical structure. That's a provocative thesis that falls into the category of "Interesting even though almost certainly wrong."

I did happen to read a Tegmark article the other day about his ideas on substrate independence. His argument was VALID but not SOUND. That is, if you grant his premises, you have a correct logical argument. But his main premise was wrong. And that premise was that intelligence is an algorithm. Once we accept that, all kinds of other stuff follows. But intelligence is NOT an algorithm. Tegmark's wrong. Tegmark is in fact wrong about a lot of things. He has ideas but no experimental verification of those ideas. When he speculates without evidence, he's not doing science.

Moreover, computers today can apply their algorithms at millions of bits per second.
https://www.makeuseof.com/tag/geeks-weigh-in-does-a-human-think-faster-than-a-computer/

Yes, but the speed of computation makes no difference to what an algorithm can compute. The Euclidean algorithm computes the GCD of two integers whether I run it on a supercomputer or do it by hand with pencil and paper. Any quality or attribute of a computation that depends on execution speed is NOT COMPUTATIONAL by definition. Because computations are substrate independent. They don't depend on the hardware. Of course the supercomputer can find the GCD of large numbers faster than I can with my pencil and paper. But in principle, given enough time, I'd find the same answer as the supercomputer.
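In fact the whole Euclidean algorithm fits in a few lines, which is exactly the point; a minimal Python sketch:

```python
def gcd(a, b):
    # Euclid's algorithm: repeatedly replace the pair (a, b)
    # with (b, a mod b) until the remainder is zero.
    while b != 0:
        a, b = b, a % b
    return a

# The same steps give the same answer whether executed by a
# supercomputer or by hand -- substrate independence.
print(gcd(8, 14))    # -> 2
print(gcd(48, 180))  # -> 12
```

Run it on any hardware you like, or trace it on paper; the answer is identical, only the elapsed time changes.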

Many arguments for machine sentience make this error. Running a computation really really fast makes absolutely no difference to what the computation can do.
 
For the moment. But after a few years, perhaps the approximated clone fails.
But so would an exact and perfect duplicate. The brain itself "fails" to be predictable over enough time in the real world of input and reaction.
That can't be your criterion, or you have ruled out functional transfer by definition - which is empty.
- - - -
The problem is ... expert systems failed as an approach to AI. Medical diagnosis is an art that can often be reduced to a series of yes/no questions, but that sometimes requires a skilled clinical diagnostician.
The first attempts failed - in some areas. The latest round is not failing in as many areas - and it is showing signs of being particularly valuable in exactly the areas least amenable to checklist yes/no stuff, the judgment call and grey area assessments.
Ah ... then what else does a neural net do that goes beyond the capacity of a neural net?
All the auxiliary stuff - communication processing, sensory input, error housekeeping, memory stashes of one kind and another, etc. We aren't talking "beyond", necessarily, depending on what that implies.
What I need to see is some evidence that the brain works according to an algorithm.
Why must functional duplication depend on the brain running an algorithm?
It seems to me that the transfer idea presupposes that the mind is an algorithm. Because it's easy to transfer algorithms from one piece of hardware to another, we do that all the time.
That would not be easy to do if you were trying to transfer the playing ability of the latest round of Go playing neural networks - the "algorithm" is built into the hardware status at any given moment, and changes with every game. It's not a separate set of instructions. You would want to take another approach.
- - - -
Ah, it's "close enough" day.
Same as always, for example with any CPU clone.
If I give you a bit pattern, you can transfer it to a different piece of paper or hardware with 100% accuracy. That's the nature of digital. With analog systems, there's always some inaccuracy.
There exist analog computers. Are you claiming they cannot be cloned?
If I give you the bit pattern 101000101001 you can transfer that to any hardware without error. Of course you might make a copying mistake. This is true. But I'm not sure where this fits in to the discussion. Chaos doesn't apply to transferring a bit pattern.
Pay closer attention to the actual mechanics of this - what you are transferring is charge distributions, and they vary in a range with certain probabilities. If you have skimped on your error correction, your expectation of error-free transfer and reliable operation will be disappointed quickly. If you have been careful, it will just take longer.
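The standard digital remedy for those varying charge distributions is redundancy. The simplest toy version is a single parity bit (real hardware uses much stronger codes; this sketch only detects single-bit errors):

```python
def with_parity(bits):
    # Append an even-parity bit: total count of 1s becomes even.
    return bits + str(bits.count("1") % 2)

def check(bits_with_parity):
    # A clean word has an even number of 1s; any single flipped
    # bit makes the count odd and is detected.
    return bits_with_parity.count("1") % 2 == 0

word = with_parity("101000101001")
print(check(word))       # clean transfer passes

# flip one bit in transit
flipped = word[:3] + ("0" if word[3] == "1" else "1") + word[4:]
print(check(flipped))    # corrupted transfer is detected
```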
Yes, that's quite a miracle. How does the wetware stabilize itself? But that's only emphasizing the profound mystery of the brain compared to the relative simplicity of a digital computer.
Nobody is claiming complexity makes no qualitative differences. The opposite. The point was that a duplicate or functional transfer would include transfer of that ability. So the "chaos" factor has - apparently - been covered in advance.
But if we reject the algorithmic explanation, then what is it we're transferring exactly?
The status of the brain's cogitation machinery at some moment in time - including direction/temporal change/acceleration information.

nb: I am usually on the "other side" of this discussion.
 
Anyone who ever caught a ball knows that it's NOT a step-by-step computation of trajectories. That's how MACHINES are programmed to catch balls. Humans seem to have a real-time sense of where the ball is and where it's going. It's an analog system of continuous feedback that is NOT an algorithm.
Neural nets can learn how to catch a ball - one result of training them in is a set of weighted nodes and connection strengths.
The human ability to project a trajectory - to create a continuity from its perceptions - is not based on continuity in the perceptions themselves. And humans have to learn how to do that - they train in, much as a neural net trains in.
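To make "training in" concrete, here is a toy sketch (one weight and a made-up ball-throwing setup of my own, nothing like the scale of a real net): gradient descent leaves behind a stored number, and that stored number is the learned ability.

```python
import random

random.seed(0)
g = 9.8

def landing_distance(v):
    # ground truth for a 45-degree throw at speed v: d = v^2 / g
    return v * v / g

# A single weight, "trained in" by gradient descent on the feature v^2.
w = 0.0
lr = 1e-4
for _ in range(5000):
    v = random.uniform(1.0, 10.0)
    x = v * v
    err = w * x - landing_distance(v)
    w -= lr * err * x          # nudge the stored number toward 1/g

print(abs(w - 1 / g) < 1e-3)   # the weight now encodes the physics
```

No trajectory equation was ever written into the program; after training, the "knowledge" exists only as the value sitting in `w`.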
 
Neural nets can learn how to catch a ball - one result of training them in is a set of weighted nodes and connection strengths.
The human ability to project a trajectory - to create a continuity from its perceptions - is not based on continuity in the perceptions themselves. And humans have to learn how to do that - they train in, much as a neural net trains in.

"... much as ..."

That covers a lot of vagueness, doesn't it? If you think the brain is run by algorithms I disagree with you. If tomorrow morning some brilliant neuroscientist publishes the brain's own ball-catching subroutine, I'll apologize for being wrong. Till then, you need to supply some evidence if you claim that the brain has a ball-catching algorithm. And formal neural nets ARE algorithms, there's no difference except for the organizational scheme.
 
That covers a lot of vagueness, doesn't it?
It refers to the striking similarity between the process of training in a ball-catching net and the process of learning to catch a ball.
If you think the brain is run by algorithms I disagree with you.
I don't care whether the brain is "run by" algorithms. I think that's irrelevant.
And formal neural nets ARE algorithms, there's no difference except for the organizational scheme.
Which is the major visible aspect of a functioning brain - the extraordinarily complex organizational scheme.

And one clue to how it runs - and so the basics of any cloning or transfer efforts - is to note that this claim misleads: "Humans seem to have a real-time sense of where the ball is and where it's going. It's an analog system of continuous feedback".
That "seeming" is a kind of illusion created by the brain - and all the different well-functioning human brains do it - from "feedback" that is (apparently, when investigated) not continuous, or not continuously registered.
 
But so would an exact and perfect duplicate. The brain itself "fails" to be predictable over enough time in the real world of input and reaction.
That can't be your criterion, or you have ruled out functional transfer by definition - which is empty.

I'm agnostic on whether functional transfer is possible. I truly have no idea.

I am opposing the claim that the mind is an algorithm in the brain. It so happens that many of the functional transference arguments are based on the idea that the mind is an algorithm. That's Tegmark's argument for example. By opposing the idea that the mind is an algorithm, I'm opposing many of the arguments for functional transference.

But you are right, I have not presented an argument against functional transference in general, nor do I have such an argument.

The first attempts failed - in some areas. The latest round is not failing in as many areas - and it is showing signs of being particularly valuable in exactly the areas least amenable to checklist yes/no stuff, the judgment call and grey area assessments.

Ok, if you say expert systems are making a comeback I believe you. That's not a key point. Some dictionary definition used medical diagnosis as an example of an algorithm, and it reminded me of the expert systems that were proposed in the 1980's to do medical diagnosis. That's as far as my remark went.

All the auxiliary stuff - communication processing, sensory input, error housekeeping, memory stashes of one kind an another, etc. We aren't talking "beyond", necessarily, depending on what that implies.

Well if you claim those are algorithms, we can debate that. If you agree they're not, you've conceded my point. Memory stashes? You are making things up. They can't slice open your brain and find the memory of the time you went to the store last week. Memory doesn't work that way. COMPUTER memory works that way. It's NOT THE SAME THING.

Why must functional duplication depend on the brain running an algorithm?

You're right. My arguments are against the idea that the mind is an algorithm. I take no position on functional duplication. But do please note that many of the arguments for functional duplication depend on the brain being an algorithm. So I'm arguing against those arguments. That's all.

That would not be easy to do if you were trying to transfer the playing ability of the latest round of Go playing neural networks - the "algorithm" is built into the hardware status at any given moment, and changes with every game. It's not a separate set of instructions. You would want to take another approach.

A neural net is an algorithm. It's a conventional program running on conventional hardware. A node is a memory location. Assigning a weight to a node is assigning a numeric value to a memory location. The algorithm is fixed and never changes. Of course algorithms can do very clever things like playing chess and Go and driving cars. Weak AI is impressive lately. Doesn't prove anything about brains.
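To spell that out, here is a single "node" as a sketch. The procedure below is fixed; training only changes the numbers stored in the weights and bias, which is why the algorithm itself never changes even as the net "learns":

```python
import math

def forward(inputs, weights, bias):
    # A node is just memory locations holding numbers. The weighted
    # sum and activation below are the same fixed steps every time.
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-total))   # sigmoid activation

# Same algorithm, two different "learned states" (stored values):
print(forward([1.0, 0.5], [0.2, -0.4], 0.1))   # ~0.525
print(forward([1.0, 0.5], [1.5, 0.9], -0.3))   # ~0.839
```

Swap in different stored values and the behavior changes, but the instructions executed are identical; that is all "learning" means for a conventional net.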

There exist analog computers. Are you claiming they cannot be cloned?

Well analog computers work by completely different principles than digital ones. I don't know enough about them to know if they can be cloned.

Pay closer attention to the actual mechanics of this - what you are transferring is charge distributions, and they vary in a range with certain probabilities. If you have skimped on your error correction, your expectation of errorfree transfer and reliable operation will be disappointed quickly. If you have been careful, it will take longer is all.

All hardware is fallible. Hardware presents an abstraction layer to the software in which the software can assume there are such things as memory locations and the ability to flip bits. Not sure what that means in terms of proving brains are algorithms, if that's your goal.

Nobody is claiming complexity makes no qualitative differences. The opposite. The point was that a duplicate or functional transfer would include transfer of that ability. So the "chaos" factor has - apparently - been covered in advance.

I might be a little lost in the chain of quoting. I brought up chaos to point out that algorithms can't even solve the question of the stability of the solar system under deterministic Newtonian gravity. In any event I'm not arguing against functional transfer, only against the proposition that the mind is an algorithm executing in the brain.

The status of the brain's cogitation machinery at some moment in time - including direction/temporal change/acceleration information.

If the brain is not executing an algorithm, how exactly do you transfer all its goo? Molecule by molecule? Atom by atom? Quark by quark? Duplicating analog data is incredibly difficult.

nb: I am usually on the "other side" of this discussion.

I'm totally confused by now as to what's being argued. I claim the mind is not an algorithm executing in the hardware of the brain. That's all I'm arguing.
 
Neural nets can learn how to catch a ball - one result of training them in is a set of weighted nodes and connection strengths.
The human ability to project a trajectory - to create a continuity from its perceptions - is not based on continuity in the perceptions themselves. And humans have to learn how to do that - they train in, much as a neural net trains in.

Neural nets train, brains train, therefore brains are neural nets? Bad logic, I'm sure you can see that. Planes fly, birds fly, therefore birds fly by the same mechanism as planes. Bad logic.
 
It refers to the striking similarity between the process of training in a ball-catching net and the process of learning to catch a ball.

And planes and birds are strikingly similar in their ability to fly through the air. But the mechanisms are very different. Just as the mechanisms of artificial intelligence are very different from the mechanisms of human intelligence.

I don't care whether the brain is "run by" algorithms. I think that's irrelevant.

It's the only point I'm arguing here. And Write4U is claiming the mind's an algorithm, and keeps posting popularized and inaccurate definitions of algorithm as if they were evidence. You may be reading my replies to Write4U and thinking I'm making a more general argument than I am. I'm only saying the mind's not the result of an algorithm. It could not be because algorithms are only syntactic and humans understand semantics.

Which is the major visible aspect of a functioning brain - the extraordinarily complex organizational scheme.

Brains are complex, chess playing computers are complex, therefore the brain is a chess playing computer? Please stop that!! You must be trying to make a more subtle point, I may be missing your meaning.

And one clue to how it runs - and so the basics of any cloning or transfer efforts - is to note that this claim misleads: "Humans seem to have a real-time sense of where the ball is and where it's going. It's an analog system of continuous feedback".
That "seeming" is a kind of illusion created by the brain - and all the different well-functioning human brains do it - from "feedback" that is (apparently, when investigated) not continuous, or not continuously registered.

Well it's certainly not a digital algorithm. Are you claiming it is? What are you claiming?
 
someguy1 said,
It's the only point I'm arguing here. And Write4U is claiming the mind's an algorithm, and keeps posting popularized and inaccurate definitions of algorithm as if they were evidence. You may be reading my replies to Write4U and thinking I'm making a more general argument than I am. I'm only saying the mind's not the result of an algorithm. It could not be because algorithms are only syntactic and humans understand semantics.
With "syntactic", do you mean symbolic? We can and do observe values, patterns, and their potentials.

My argument was based on the following:
An algorithm is an effective method that can be expressed within a finite amount of space and time[1] and in a well-defined formal language[2] for calculating a function.[3] Starting from an initial state and initial input (perhaps empty),[4] the instructions describe a computation that, when executed, proceeds through a finite[5] number of well-defined successive states, eventually producing "output"[6] and terminating at a final ending state. The transition from one state to the next is not necessarily deterministic; some algorithms, known as randomized algorithms, incorporate random input.
The concept of algorithm has existed for centuries; however, a partial formalization of what would become the modern algorithm began with attempts to solve the Entscheidungsproblem (the "decision problem") posed by David Hilbert in 1928.
https://en.wikipedia.org/wiki/Algorithm

Your assertion that the brain does not use algorithms because it is sentient is based on what? We know that computers (which are not sentient) must use algorithms, in order to execute the processing of information. Programmers write these algorithms, which at the least indicates an understanding of these mathematical functions.
If they work in non-sentient computers (pseudo intelligence), what rule prevents the sentient (intelligent) human brain from using these effective mathematical functions also?

After all is said and done, and all semantics aside, the human brain is a biological computer, and its function is at the same scale as a computer's, i.e. at the nano scale. And in the end it is just processing values, just like a non-sentient computer.

But, unlike computers, the brain also functions at the bio-chemical level, which I suspect is responsible for "emotions" such as pain, pleasure, and desire (i.e. opioid addiction). Ever seen an addicted computer?
That's the bio-chemical difference. And perhaps this phenomenon is also produced by a form of chemical algorithm.

I am not saying that there is no difference between a brain and a computer. I am saying that fundamentally the calculating brain processes (best guesses) can be compared to what we call "algorithms" in computers.

I am familiar with the term "foozle", and it has nothing to do with a bungled brain function, but I do understand the concept of a variable (fuzzy) algorithm that depends on subtle differences in the input information.
 
Neural nets train, brains train, therefore brains are neural nets? Bad logic, I'm sure you can see that. Planes fly, birds fly, therefore birds fly by the same mechanism as planes. Bad logic.
Depends on your definition of flying. Regardless of mechanism, they all use the principle of "lift" in order to fly.
 
With "syntactic", do you mean symbolic? We can and do observe values, patterns, and their potentials.

Symbolic, yes. In the formal sense. There is an alphabet of symbols. The alphabet is taken to be at most countably infinite. When the machine encounters a particular symbol, it flips some bits and goes to the next symbol. Given a particular state of the computer and the particular symbol, it's entirely pre-determined what bits it will flip. It's defined by the computation's program.
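To pin that down, the whole formal setup fits in a few lines: a fixed lookup table from (state, symbol) to (write, move, next state). Here is a toy machine of my own invention that flips every bit on its tape:

```python
# Transition table: (state, symbol) -> (symbol_to_write, head_move, next_state)
RULES = {
    ("flip", "0"): ("1", +1, "flip"),
    ("flip", "1"): ("0", +1, "flip"),
    ("flip", "_"): ("_",  0, "halt"),   # blank cell: stop
}

def run(tape):
    tape, head, state = list(tape) + ["_"], 0, "flip"
    while state != "halt":
        # Given the state and the symbol under the head, the next
        # action is entirely pre-determined by the table.
        write, move, state = RULES[(state, tape[head])]
        tape[head] = write
        head += move
    return "".join(tape).rstrip("_")

print(run("101000101001"))   # -> "010111010110"
```

Every step is mechanical symbol-shuffling; nothing in the table "knows" what the bits are about, which is the point about syntax versus semantics.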

I'm being picky like that because I can refer to this if you start to get poetic and fanciful about what algorithms are.

I wish to state that I am not opposed to poetry. I like imagery and intuition. I just want to make sure that when we're talking about algorithms, we're clear on whether we mean that in the technical sense or the poetic. Because once you use words like "patterns and their potential" I get that old woo-woo vibe and I feel that you are leaving the realm of the science of algorithms and drifting into the poetry. And that may be leading you astray in your reasoning.

Your assertion that the brain does not use algorithms because it is sentient is based on what?

Because our minds understand meaning. They have intentionality, "aboutness." When computers flip bits, that activity is meaningless. Humans impart meaning to the bits. These bits are a discussion forum, those bits are a cat video. That's what the humans add. If you just looked at the circuitry of the computer you'd just see a long but finite string of bits 1010101010010001001001001... There is no meaning in that, nor in the manipulations of the bits by the program.

In case it's not obvious, this is basically Searle's argument in the Chinese room. So if you and I disagree on that subject, we can agree to disagree. Many people think the room is conscious or self-aware or understands Chinese. I happen to think that's absurd. So now you know my philosophical orientation to all this.

We know that computers (which are not sentient) must use algorithms, in order to execute the processing of information.

Yes.

Programmers write these algorithms, which at the least indicates an understanding of these mathematical functions.

Yes. The humans impart meaning to the symbols. If I had to put my entire argument into a short sentence, that would be it.

If they work in non-sentient computers (pseudo intelligence), what rule prevents the sentient (intelligent) human brain from using these effective mathematical functions also?

Oh, nothing prevents humans from using effective mathematical functions. For example a long time ago I took a class in number theory. Early in the class they required us to execute the Euclidean algorithm by hand. In that moment my brain caused my body to execute an algorithm!

OF COURSE THAT HAPPENS. Of course PARTS of our brain may be algorithms. Reflexes, say. Doctor taps your knee in just the right spot and your knee jumps. That's probably a little subroutine in the nerves that operate the muscles. I'm perfectly happy to stipulate that.

But not ALL brain and mind function can be explained as the execution of algorithms in the brain.

It's "some" versus "all." Are some brain functions algorithmic? Probably. Could we prove that it's at least possible? Sure, just execute the Euclidean algorithm in your head. 8 and 14 in, 2 out. Boom, I'm a digital computer that can execute a 2000 year old algorithm. Substrate independence. Euclid and I can execute the exact same algorithm in our respective brains.

But not ALL brain function is an algorithm. And the fact that algorithms are substrate independent does NOT prove that mind is. I saw Tegmark make that argument in an article recently. He's totally and completely and utterly wrong.


After all is said and done and all semantics aside, the human brain is a biological computer,

It's biological, agreed.

When you say biological computer, what do you mean by that? If by computer you mean a digital computer according to the abstract theory of Turing and the contemporary practice of hardware and software engineering, then there is not a shred of evidence that the brain works this way. There is no CPU, no RAM, no instruction set, no clock, no program. None of these things.

So I must ask you if you would be willing to clarify EXACTLY what you mean by that remark? Do you mean to say that a biological computer is different than a digital computer? If so then how about using a different word. Call it the biological foozle and I will have not a single objection to write about. I'll have nothing to post. I'll be done here.

But if you call it a computer and you mean to invoke the science of modern digital computing, I simply must push back as strenuously as I can, because you are making an assumption utterly without evidence.


but its function is at the same scale as a computer's, i.e. at the nano scale. And in the end it is just processing values, just like a non-sentient computer.

The brain? Processing values? If by processing you mean computing then you have no evidence. It's the same problem as in your previous paragraph. You are using the word "processing" ambiguously. You want it to mean both "whatever it is the brain does," and also "what a digital computer does." It's a subtle switch in language. If you get very clear about what you mean by processing, your argument fails.

Again. The brain ultimately processes its gooey bits just like a non-sentient foozle. If you'll just stop calling things computers when you have provided no evidence that they are computers, I'll stop objecting. And honestly I think I'm just being tiresome now. I've pretty much made all the points I can make and if we agree to disagree so be it. But I think you are confusing yourself by overloading words like computer and process and algorithm so that they have both poetic and allegorical meanings as well as technical meanings, and by that ambiguity you are making an argument that fails once the ambiguity is identified.

That's how I see it anyway!!

But, unlike computers, the brain also functions at the bio-chemical level, which I suspect is responsible for "emotions" such as pain, pleasure, desire. (i.e. opioidaddiction).

So those parts are NOT algorithmic. Well then we're in agreement. Is that correct? If only SOME parts of our minds are algorithmic, I have no objection to that thesis at all. As long as you acknowledge that there's also something else going on, something that's not algorithmic.

Ever seen an addicted computer?

Haha I've seen a computer addict! In the mirror I think.

That's the bio-chemical difference. And perhaps this phenomenon is also produced by a form of chemical algorithm.

As Ronald Reagan said to Jimmy Carter during their 1980 debate: There you go again.

https://en.wikipedia.org/wiki/There_you_go_again

Please, call it a chemical foozle and I'll say, Yes, I agree with you! Parts of the mind are algorithms and parts are foozle. I agree with that!

But when you want to say that some kind of phenomenon that is clearly NOT computational, can still be called a computation or an algorithm, you are making an invalid argument by using the same word in two different senses.

I am not saying that there is no difference between a brain and a computer. I am saying that fundamentally the calculating brain processes (best guesses) can be compared to what we call "algorithms" in computers.

Compared to. So after all this time you are not actually stating a claim. You're only making a metaphor? If you'd said that up front I'd have never raised a peep. You serious? You were only making a metaphor after all? Shall I compare thee to a summer's day? Thou art more lovely and more temperate.

You don't actually MEAN the brain operates via algorithm, but only that you are making a metaphor? Please clarify this point, you will save me a lot of typing going forward.

I am familiar with the term "foozle" and it has nothing to do with a bungling brain function,

You must know something I don't. I was using the word as a completely meaningless word that could stand for anything. If you know some specific meaning for it, that was not my intention. It's not bad, is it? Am I out of touch? Sorry about that.

Call it "woo-stuff," the mysterious whatever that minds do that man-made machines may or may not be able to do or may someday do. So whenever you want to talk about "emotional processing" in the brain that's sort of like some biological woo-stuff, I'll reply only to express my enthusiastic agreement with your point!

but I do understand the concept of a variable (fuzzy) algorithm, whose behavior depends on subtle differences in the input information.

Word salad. Fuzzy algorithms? Look, I can write a program that goes:

if today is tuesday take out the trash
else lay in bed.

That's a program that does something completely different depending on "subtle differences of the input information," like what day it is.
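To make the point concrete, the toy program above can be written out as ordinary, runnable Python; the day check is plain branching on input, nothing mysterious. (This is a minimal sketch; the function name and return strings are illustrative, not from any library.)

```python
def chore_for(day_name: str) -> str:
    """Branch on one 'subtle difference of input information': which day it is."""
    if day_name.lower() == "tuesday":
        return "take out the trash"
    return "lay in bed"

print(chore_for("Tuesday"))  # -> take out the trash
print(chore_for("Friday"))   # -> lay in bed
```

The output differs with the input, yet every step is fully determined, which is the sense of "algorithm" at issue here.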
 
That's the bio-chemical difference. And perhaps this phenomenon is also produced by a form of chemical algorithm.

ps --

I found out what this logical fallacy is called. In Sophistical Refutations, Aristotle identifies the fallacy of equivocation, in which the same word is used in two different ways within the same argument.

You use the word algorithm with its standard meaning as in computer science; and you also use it to refer to those mysterious mental processes such as emotions that we don't yet understand. This equivocation is causing you to reach an erroneous conclusion.

https://howaristotleimpactedargument.weebly.com/fallacies.html
 
Symbolic, yes. In the formal sense. There is an alphabet of symbols. The alphabet is taken to be at most countably infinite. When the machine encounters a particular symbol, it flips some bits and goes to the next symbol. Given a particular state of the computer and the particular symbol, it's entirely pre-determined what bits it will flip. It's defined by the computation's program.
Yes, except the human brain does not function in a binary "on/off" code. Our brains function in an exponential fashion, forming cognition or recognition (faster) in a 3D environment. The emerging mental hologram of bits of information inside the brain does not arise differently from our cognition of logarithms as an "efficient" form of symbolizable information processing.

I'm being picky like that because I can refer to this if you start to get poetic and fanciful about what algorithms are.
Well, I am not talking about computer languages, but the fundamentals which make such mathematically functional concepts equally useful for the human brain, an AI, and throughout the universe.

Lemurs already can recognize more from less, that is a fundamental mathematical calculation.

I wish to state that I am not opposed to poetry. I like imagery and intuition. I just want to make sure that when we're talking about algorithms, we're clear on whether we mean that in the technical sense or the poetic. Because once you use words like "patterns and their potential" I get that old woo-woo vibe and I feel that you are leaving the realm of the science of algorithms and drifting into the poetry. And that may be leading you astray in your reasoning.
IMO, my "reasoning" is sound.
Because our minds understand meaning. They have intentionality, "aboutness." When computers flip bits, that activity is meaningless. Humans impart meaning to the bits. These bits are a discussion forum, those bits are a cat video. That's what the humans add. If you just looked at the circuitry of the computer you'd just see a long but finite string of bits 1010101010010001001001001... There is no meaning in that, nor in the manipulations of the bits by the program.
I suspect that's where bio-chemistry begins to play a large part?
In case it's not obvious, this is basically Searle's argument in the Chinese room. So if you and I disagree on that subject, we can agree to disagree. Many people think the room is conscious or self-aware or understands Chinese. I happen to think that's absurd. So now you know my philosophical orientation to all this.
I have no quarrel with that.
The humans impart meaning to the symbols. If I had to put my entire argument into a short sentence, that would be it.
Yes, because they do.
Oh, nothing prevents humans from using effective mathematical functions. For example a long time ago I took a class in number theory. Early in the class they required us to execute the Euclidean algorithm by hand. In that moment my brain caused my body to execute an algorithm!
Yes, because it could.
OF COURSE THAT HAPPENS. Of course PARTS of our brain may be algorithms. Reflexes, say. Doctor taps your knee in just the right spot and your knee jumps. That's probably a little subroutine in the nerves that operate the muscles. I'm perfectly happy to stipulate that.
But not ALL brain and mind function can be explained as the execution of algorithms in the brain.
It's "some" versus "all." Are some brain functions algorithmic? Probably. Could we prove that it's at least possible? Sure, just execute the Euclidean algorithm in your head. 8 and 14 in, 2 out. Boom, I'm a digital computer that can execute a 2000 year old algorithm. Substrate independence. Euclid and I can execute the exact same algorithm in our respective brains.
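The Euclidean algorithm mentioned here is easy to state precisely, which is exactly what makes it substrate independent: the same finite procedure runs in a head or on a chip. A minimal sketch (8 and 14 in, 2 out, as described above):

```python
def euclid_gcd(a: int, b: int) -> int:
    """Euclid's ~2000-year-old algorithm: repeatedly replace the pair
    (a, b) with (b, a mod b) until the remainder is zero."""
    while b != 0:
        a, b = b, a % b
    return a

print(euclid_gcd(8, 14))  # -> 2
```

Python's standard library also ships this as `math.gcd`, but spelling the loop out shows how little machinery the procedure actually needs.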
Precisely; that's why I highlighted the word efficient. Mathematical efficiency and conservation of energy are demonstrably equally valuable assets throughout the universe.
But not ALL brain function is an algorithm. And the fact that algorithms are substrate independent does NOT prove that mind is. I saw Tegmark make that argument in an article recently. He's totally and completely and utterly wrong.
It depends on your perspective, which you have restricted to computers. I think it can be argued that all the electro-chemical information our senses and mirror-neuron system receive should (given the evidence of billions of neurons) employ a networking function very similar to a computer's. We can demonstrate this internal distribution of electro-chemical information by scanning which parts of the brain show activity while the brain is "working" on cognition or understanding.
It's biological, agreed.
Yes, that was Seth's argument also. You have to be alive to feel emotion.
When you say biological computer, what do you mean by that? If by computer you mean a digital computer according to the abstract theory of Turing and the contemporary practice of hardware and software engineering, then there is not a shred of evidence that the brain works this way. There is no cpu, there's no ram, there's no instruction set, there's no clock, there's no program. None of these things.
Yes, from your perspective, but I am trying to address the orderly distribution of information.
So I must ask you if you would be willing to clarify EXACTLY what you mean by that remark? Do you mean to say that a biological computer is different than a digital computer? If so then how about using a different word. Call it the biological foozle and I will have not a single objection to write about. I'll have nothing to post. I'll be done here.
Do you know the definition of "foozle"? Look it up and you'll see the logical error you made by using this definition.
But if you call it a computer and you mean to invoke the science of modern digital computing, I simply must push back as strenuously as I can, because you are making an assumption utterly without evidence.
No, my "assumptions" are based on evidence of the importance of decision making, which is an ancient survival mechanism present in almost all living organisms with neural networks. It seems a peculiarly well-developed faculty in hominids, especially in humans, and especially in the cognition of abstractions and abstract thought itself. This requires a certain logical processor.
The brain? Processing values? If by processing you mean computing then you have no evidence. It's the same problem as in your previous paragraph. You are using the word "processing" ambiguously. You want it to mean both "whatever it is the brain does," and also "what a digital computer does." It's a subtle switch in language. If you get very clear about what you mean by processing, your argument fails.

Again. The brain ultimately processes its gooey bits just like a non-sentient foozle. If you'll just stop calling things computers when you have provided no evidence that they are computers, I'll stop objecting. And honestly I think I'm just being tiresome now. I've pretty much made all the points I can make and if we agree to disagree so be it. But I think you are confusing yourself by overloading words like computer and process and algorithm so that they have both poetic and allegorical meanings as well as technical meanings, and by that ambiguity you are making an argument that fails once the ambiguity is identified.
That's how I see it anyway!!
Fair enough, I can understand your reluctance to use the term "biological computer" for "biological calculating organ" (machine).
A brain–computer interface (BCI), sometimes called a neural-control interface (NCI), mind-machine interface (MMI), direct neural interface (DNI), or brain–machine interface (BMI), is a direct communication pathway between an enhanced or wired brain and an external device. BCI differs from neuromodulation in that it allows for bidirectional information flow. BCIs are often directed at researching, mapping, assisting, augmenting, or repairing human cognitive or sensory-motor functions.
https://en.wikipedia.org/wiki/Brain–computer_interface
So those parts are NOT algorithmic. Well then we're in agreement. Is that correct? If only SOME parts of our minds are algorithmic, I have no objection to that thesis at all. As long as you acknowledge that there's also something else going on, something that's not algorithmic.
Well then we are in agreement. I am fairly certain that many parts of the brain have completely different functional abilities.
But even the subconscious part of the brain, which only controls and regulates our own living system, has to deal with the processing of internal electro-chemical signals.
 
You don't actually MEAN the brain operates via algorithm, but only that you are making a metaphor? Please clarify this point, you will save me a lot of typing going forward.
No, I am not speaking in metaphors, we just address the question from different perspectives.
You must know something I don't. I was using the word as a completely meaningless word that could stand for anything. If you know some specific meaning for it, that was not my intention. It's not bad, is it? Am I out of touch? Sorry about that.
I looked it up.
Call it "woo-stuff," the mysterious whatever that minds do that man-made machines may or may not be able to do or may someday do. So whenever you want to talk about "emotional processing" in the brain that's sort of like some biological woo-stuff, I'll reply only to express my enthusiastic agreement with your point!
So you do agree that the human brain is capable of processing algorithms.
Word salad. Fuzzy algorithms? Look, I can write a program that goes:
"if today is tuesday take out the trash
else lay in bed".
That's a program that does something completely different depending on "subtle differences of the input information," like what day it is.
Thanks, you just proved my point. I could rewrite the algorithm in favor of taking out the trash then going back to bed....:D
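The proposed rewrite is itself just another deterministic program: changing the rule changes the branch bodies, but it does not make the algorithm "fuzzy." A hypothetical sketch of that rewrite (function name and strings are illustrative):

```python
def chores_rewritten(day_name: str) -> list:
    """Hypothetical rewrite: on Tuesday, take out the trash AND then go back to bed."""
    if day_name.lower() == "tuesday":
        return ["take out the trash", "go back to bed"]
    return ["lay in bed"]

print(chores_rewritten("Tuesday"))  # -> ['take out the trash', 'go back to bed']
```

Either version maps each input to exactly one output, which is the non-negotiable part of what "algorithm" means in the technical sense.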
But not ALL brain function is an algorithm. And the fact that algorithms are substrate independent does NOT prove that mind is. I saw Tegmark make that argument in an article recently. He's totally and completely and utterly wrong.
Substitute "numbers" with the concept of "inherent values" (potentials); perhaps that will make more sense.
 
For the moment. But after a few years, perhaps the approximated clone fails.
Brings up an issue: what is a "failure" that only appears years after a transfer of brain functionality? How would one detect it?
I am opposing the claim that the mind is an algorithm in the brain.
I have no objection to setting that aside - a lot of folks seem, to me, to underestimate the hardware complexity and (especially) flexibility of the brain.
If the brain is not executing an algorithm, how exactly do you transfer all its goo? Molecule by molecule? Atom by atom? Quark by quark? Duplicating analog data is incredibly difficult.
I am not arguing that it would be easy. Like I said, I'm normally on the other side of this discussion.
And planes and birds are strikingly similar in their ability to fly through the air. But the mechanisms are very different. Just as the mechanisms of artificial intelligence are very different from the mechanisms of human intelligence.
Birds and planes are not that similar; they don't function the same way.
But it is possible to "transfer" the functionality of bird flight to machinery, in principle - it would not look or operate like an airplane, is all.
 