The Nature of Thought

Any individual may have a different pattern, since each has a (fairly) different brain structure.

And this, I think, just helps to underscore the individuality of a human and its brain/mind/personality/yadda-yadda... and how difficult it's going to be for researchers to understand the 'nature' of thought. Six billion (and counting) sets of patterns...

Quite a challenge but very interesting.

~~~

Counterbalance
 
Apologies for not participating in this thread sooner, although I have been posting what appear to be relevant posts for this thread, but in philosophy and religion instead.

I haven't read the entire thread, but I can see there are some good discussions here. I think this post doesn't overlap too much with what has gone before.

This question was raised by Belelina and seems to follow on from the more recent posts in this thread.

But what creates the brain patterns, what creates the neural network, what creates this physical reality?
You have 5 senses. From the time your brain is formed you have a bunch of raw neurons, each with thousands of tentacle-like dendrites flailing around. As your senses develop and you start using them, sensory input causes the dendrites to start making connections to other neurons. Your brain begins to form patterns (neural networks) that correlate directly with sensory input. Input that is repeated frequently tends to reinforce the corresponding networks; in fact the dendrites and the synaptic connections are physically strengthened by a glue-like coating (i.e. your brain has learnt).

Even within the womb your brain is continually being bombarded by sensory data, which it either relates to existing networks or uses to create new ones. When you are young you have relatively few neural networks, since you are just at the start of the learning process. As we grow older we continue to learn and form new neural networks, but nowhere near at the rate of a baby.
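A tiny, purely illustrative Python sketch of that "repeated input strengthens connections" idea; the "neurons" and numbers are invented for the example, not taken from any real neuroscience model:

```python
# Purely illustrative: repeated co-activation strengthens a connection.

NEURONS = ["touch", "sound", "light", "motion"]

# Start every pair of neurons with a weak connection.
weights = {(a, b): 0.1 for a in NEURONS for b in NEURONS if a != b}

def experience(active, weights, rate=0.05):
    """Strengthen connections between neurons that are active together."""
    for a in active:
        for b in active:
            if a != b:
                weights[(a, b)] += rate * (1.0 - weights[(a, b)])

# A frequently repeated experience reinforces the same connections...
for _ in range(100):
    experience(["sound", "touch"], weights)

# ...while a one-off experience leaves only a faint trace.
experience(["light", "motion"], weights)

print(weights[("sound", "touch")])   # close to 1.0 (well learnt)
print(weights[("light", "motion")])  # still weak
```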

Everything you experience, even from the time you are in the womb, is associated with neural networks. And everything has come to you via your 5 senses. But we also have the ability to process the stored networks and create new networks via that process. Every thought, idea, concept, memory, and emotion is held in your brain. Your brain is you.

So essentially the nature of thought is physical neural networks. But the formation of the networks is interesting, i.e. the learning process. Consider learning to swim or learning to drive a car. When the patterns are not yet present the task is difficult; after a while, we are hardly aware that we are thinking when we drive a car.

Cris
 
Originally posted by Cris
....

So essentially the nature of thought is physical neural networks. But the formation of the networks is interesting, i.e. the learning process. Consider learning to swim or learning to drive a car. When the patterns are not yet present the task is difficult; after a while, we are hardly aware that we are thinking when we drive a car.

Cris

Cris,

So if this is true, and learning is the creation of neural networks that perform specific tasks, how do you explain the abilities of idiot savants who seem to develop their capabilities spontaneously? Some of these people have never heard a piano before, but upon hearing one for the first time can play the piece they heard better than the original player. Or how about people who, after trauma to the brain, have suddenly developed savant-like capabilities in areas in which they have no previous experience?

I personally believe that the brain is pre-coded for specific neural networks at birth and that these capabilities are fine-tuned during the learning process.
 
Seeker/Cris,
These two points aren't necessarily at odds: there are basic circuits that the brain can flexibly map to each other and interconnect, building blocks of a mind growing into a machine for effectively navigating the world you come to inhabit. Evolution would shape the inherent circuits and the details of how the mapping proceeds, and experience would train them to the adaptive needs of our everyday life.

Everybody,
I recently heard about a paper that indicates that peri-synaptic glia are necessary for synapse strengthening and possibly memory ... which opens up new worlds of complexity in the human mind. There are 10x as many glia as neurons.

I also read a paper about a new class of cell adhesion molecules, called protocadherins, localized in synapses. There are three tandem arrays of them in the genome, and the amount of alternative splicing seen in them (including TRANS-splicing, or splicing between two genes in different locations!!) makes them a possible candidate for specific genetically encoded determinants of neural connections ... the theoretical maximum of variant forms was, I believe, larger than the number of neurons in the brain, and multiple types are expressed in each neuron.
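To put a rough number on that combinatorial idea, here is a back-of-the-envelope sketch; the isoform counts are placeholders picked purely for illustration, not figures from the paper:

```python
# Back-of-the-envelope only: the counts below are made-up placeholders.
from math import comb

variant_isoforms = 50      # assumed number of distinct variant isoforms
expressed_per_neuron = 15  # assumed number expressed in any one neuron

combinations = comb(variant_isoforms, expressed_per_neuron)
print(f"{combinations:.2e} possible isoform combinations per neuron")
# ~2.3e12, i.e. more than the roughly 1e11 neurons usually quoted
# for the human brain.
```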

Also, I don't know if you all know about it, but at www.ncbi.nlm.nih.gov you can search PubMed for primary research papers, and now there are some links to freely available PDF reprints. You can also freely download sequence data, which is the original purpose, but that isn't as much fun.

Searching for "protocadherins maniatis" will get you to a free copy of the paper:

"Large exons encoding multiple ectodomains are a characteristic feature of protocadherin genes." (Sounds fascinating doesn't it ...)
 
From the article “A.I. Reboots” by Michael Hiltzik in Technology Review, March 2002:

“‘Absolutely none of my work is based on a desire to understand how human cognition works,’ says [Douglas B.] Lenat. ‘I don’t understand, and I don’t care to understand. It doesn’t matter to me how people think; the important thing is what we know, not how do we know it.’”

And Hiltzik finishes that section of the article with...

“One might call this trend the ‘new’ A.I., or perhaps the ‘new new new’ A.I., for in the last half-century the field has redefined itself too many times to count. The focus of artificial intelligence today is no longer on psychology but on goals shared by the rest of computer science: the development of systems to augment human abilities. ‘I always thought the field would be healthier if it could get rid of this thing about “consciousness,”’ says Philip E. Agre, an artificial-intelligence researcher at the University of California, Los Angeles. ‘It’s what gets its proponents to over-promise.’ It is the scaling back of its promises, oddly enough, that has finally enabled A.I. to start scoring significant successes.”

~~~

First of all, I don’t wish to throw this discussion off permanently by bringing in a related topic like A.I. However, when I read this article my thoughts kept returning to this thread. Apologies to all who’ve posted recently on other “nature of thought” topics here. Please, carry on with whatever is of current interest to you, but if you’ve also any comments or musings to offer about this, I’d enjoy hearing them.

Because I wonder: If “the goals shared by the rest of computer science [are] to develop systems to augment human abilities...,” then how appropriate is it to “not care” about how human cognition works? I understand the point that more progress is made by lowering the standards or expectations of what should be attempted in developing A.I. I understand how impossibly high expectations could impede efforts unnecessarily. It sounds like they want to make their programs work--and work well; they want to grow the field in directions they deem appropriate; they want success. Fair enough. But could this also mean that the technology will be shortchanged? That humans will be shortchanged? Is this a case of short-sightedness? That doors which might have opened onto research and understanding about the “nature of thought” in the near and far future may not open as quickly or easily--as they might if more A.I. researchers and developers did care to unravel these mysteries?

My questions are not meant to suggest that Lenat and others who share his view are wrong. I’m not knowledgeable enough about A.I. to make that judgment. I am interested however in how you guys interpret these remarks.

The importance of understanding more about the nature of thought--as it relates to developing new technologies--for humans--and in general.

Whatcha think?

~~~

Thanks,

Counterbalance

~~~

P.S. Sci, Seeker and Cris... enjoyed your posts. (the update and link, too, Sci.)
 
Bio-neural gel packs?

Seriously, if biological computing were advanced enough, would the machine become sentient?
Will we be posting about The Nature of Artificial Thought?
 
Originally posted by scilosopher
Seeker/Cris,
These two points aren't necessarily at odds: there are basic circuits that the brain can flexibly map to each other and interconnect, building blocks of a mind growing into a machine for effectively navigating the world you come to inhabit. Evolution would shape the inherent circuits and the details of how the mapping proceeds, and experience would train them to the adaptive needs of our everyday life.


Quote:
From:
A Universe of Consciousness (how matter becomes imagination)
Gerald M. Edelman

"Neural Darwinism: this theory embraces selective principles and applies them to the functioning brain. its mains tenets are (1) the formation during brain development of a primary repertoire of highly variant neuronal groups that contribute to neuroanatomy (developmental selection), (2) the formation during experience of a secondary repertoire of facilitated neural circuits as a result of changes in the strength of connections or synapses (experiential selection), and (3) a process of reentrant signaling along reciprocal connections between and among distributed neuronal groups to assure the spatiotemporal correlation of selected neural events."
 
Thought VS. Artificial Thought...

Hi there ESP! You have inspired another musing out of me.

Along a slightly different vein...
How do I know if I am entertaining a "real" thought or an "artificial" thought? This is an interesting distinction. If a machine or a software program first "postulates" a certain pattern, does it remain "artificial" until it is processed by a human (or other biological unit, as the case may be)? Also, does this have anything to do with the complexity of the hardware/software or the complexity of the "thought", or is the distinction made for other reasons?
 
Jeez, did I just kill this thread too???

So I'm a little bit analytical, so what???
 