Analog processing of words by human minds


one_raven

I saw this somewhere and thought it was pretty interesting to say the least.
Why does it work?
Is it simply that we read using pattern recognition?
Is it a quirk of analog processing?
Is it because we are used to imperfections, so we gloss over them?

I cdnuolt blveiee taht I cluod aulaclty uesdnatnrd waht I was
rdanieg. The phaonmneal pweor of the hmuan mnid aoccdrnig to a
rscheearch at Cmabrigde Uinervtisy, it deosn't mttaer inwaht oredr the
ltteers in a wrod are, the olny iprmoatnt tihng is taht the frist and
lsat ltteer be in the rghit pclae. The rset can be a taotl mses and
you can sitll raed it wouthit a porbelm. Tihs is bcuseae the huamn
mnid deos not raed ervey lteter by istlef, but the wrod as a wlohe.
Amzanig huh?

Yaeh, and I awlyas thought slpeling was ipmorantt
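
For fun, here's a quick Python sketch (my own toy, nothing from the supposed Cambridge research) that produces text like the above: keep each word's first and last letters, shuffle the rest.

```python
import random

def jumble(word):
    """Shuffle a word's interior letters, keeping the first and last fixed."""
    if len(word) <= 3:
        return word  # too short to scramble
    middle = list(word[1:-1])
    random.shuffle(middle)
    return word[0] + "".join(middle) + word[-1]

def jumble_text(text):
    # Only scramble purely alphabetic tokens; leave punctuation alone.
    return " ".join(jumble(w) if w.isalpha() else w for w in text.split())

print(jumble_text("the phenomenal power of the human mind"))
```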
 
what does this mean : erd

???

But if you put it in a context, "he ahs got erd eyes", then you can understand it, because the context makes it more and more likely what each word is supposed to mean.
It is a great gift indeed, but it could probably be replicated in code using the probability of certain words appearing in a certain context.
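
Something like this Python sketch, where the word list and bigram counts are made-up toy values of mine, not real corpus statistics:

```python
from itertools import permutations

# Tiny made-up word list and bigram counts; purely illustrative numbers.
VOCAB = {"red", "dog", "god", "the", "he", "has", "got", "eyes"}
BIGRAMS = {("the", "dog"): 40, ("the", "god"): 2, ("got", "red"): 10}

def candidates(scrambled):
    """All rearrangements of the letters that spell a known word."""
    return {"".join(p) for p in permutations(scrambled)} & VOCAB

def best_reading(scrambled, prev_word):
    """Pick the candidate that most often follows prev_word."""
    return max(candidates(scrambled),
               key=lambda w: BIGRAMS.get((prev_word, w), 0),
               default=None)

print(best_reading("erd", "got"))  # -> 'red'
print(best_reading("dgo", "the"))  # -> 'dog' (beats 'god' in this context)
```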
 
That is pretty cool.

I'd bet it does have something to do with pattern recognition. Kind of like the way a single stimulus (e.g., a madeleine) can trigger an entire complex of memories.

While the good old Hopfield network is a bit too simplified to account entirely for the workings of the human mind, I'd bet it would also be very good at this type of task.
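
For the curious, here's a bare-bones Hopfield sketch in Python/NumPy (a toy illustration, not a model of reading): store a few patterns, corrupt one, and let the network settle back to it, which is roughly the pattern-completion behavior in question.

```python
import numpy as np

rng = np.random.default_rng(0)

# Store a few random +/-1 patterns via the classic Hebbian rule.
N, K = 64, 3
patterns = rng.choice([-1, 1], size=(K, N))
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)  # no self-connections

def recall(state, sweeps=10):
    """Synchronous updates pull the state toward a stored pattern."""
    for _ in range(sweeps):
        state = np.sign(W @ state)
        state[state == 0] = 1  # break ties
    return state

# Corrupt a stored pattern (like a jumbled word) and let the net repair it.
noisy = patterns[0].astype(float)
flip = rng.choice(N, size=10, replace=False)
noisy[flip] *= -1
print(np.array_equal(recall(noisy), patterns[0]))  # usually True
```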
 
It has to do with the way the brain processes its surroundings. It concentrates on the edges of things and then fills them in at its leisure.

With words, the brain has the ability to reorder phonemes: amplifying some sounds, attenuating others, and even rearranging a few. Think of dyslexia and marvel at the power the mind has to interpret the universe.

Think of listening to a person speak in a crowded restaurant. You hear the clink of silverware and plates, people talking, waiters walking around, all kinds of noise obscuring the sound of the person you're concentrating on, and yet your mind fills in the gaps. As long as enough of the conversation is heard to allow interpretation, the gaps are filled in imperceptibly.

For instance, there are experiments where sentences are played back on tape with a single syllable cut out, say, the "gis" out of "legislature." People who hear the tape automatically fill in the missing syllable without even noticing that it is missing. It all happens behind the scenes.

Also interesting is how sounds are heard differently depending on which language one speaks.

...when A. F. Chamberlain visited the Kootenai and Mohawk Indians during the late 1800s, he noted that they even heard animal and bird sounds differently from him. For example, when listening to some owls hooting, he noted that to him it sounded like "tu-whit-tu-whit-tu-whit," whereas the Indians heard "katskakitl." However, once he became accustomed to their language and began to use it, he soon developed the ability to hear sounds differently once he began to listen with his "Indian ears." When listening to a whippoorwill, he noted that instead of saying whip-poor-will, it was saying "kwa-kor-yeuh."
-The Naked Neuron, p. 250

This also explains why Chinese speakers have problems with the L sound and others. They don't hear the same sound that English speakers hear.

Now, you might think that speaking and hearing are completely different from reading, but the same area of the brain, Wernicke's receptive area, processes words while reading just as it does when hearing and speaking. The same mechanisms take place, reorganizing and structuring your environment in a meaningful manner.
 
I often notice that it is enough for me to read the first and last letter or two of a word to know what it is. Or perhaps my mind glances at the letters and arranges them on its own into the correct pattern based on context and experience; I think that's an example of what invert_nexus was describing. Because of that, when I'm tired, I sometimes misread things; recently I read "immoral" as "immortal." Our brain is used to filling in gaps on its own, especially when it involves sight. To begin with, it's because light travels faster than we can see (I mean the ordinary speed of light we learned about in school, nothing about slowed-down photons).

Um, another example of how sound is heard differently: Americans say that a dog barks "wof, wof," and Russians say that a dog barks "gav, gav." But I doubt it has to do with hearing as much as with traditional ways of reproducing sounds.
 
in Latvian it's "vau, vau" :)
interesting thread

I read the text at full speed and understood everything right away, except for the word "taotl," which I had to read two or three times to understand as "total," and English isn't even my native language.
 
invert_nexus said:
This also explains why Chinese speakers have problems with the L sound and others. They don't hear the same sound that English speakers hear.

Make that Japanese people. We pronounce "L" quite the same way in Chinese as in English, but we don't have the "th" sound.

Speaking of filling in invisible letters, I'm afraid I might actually be getting increasingly perverted at the sight of certain words that resemble skanky counterparts.
 
Good point, but it does help us work towards understanding why we recognize atrocious handwriting, and vague shapes that we have never seen before but that represent the same thing to many, if not all, people, while a computer will not recognize those shapes.
 
Oh, there are fuzzy-logic programs I've seen that can tackle this, even the harder cases. Jumble "pharmacopoeia" and you'll see what I mean. One got that right, while maybe you would not.
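
Here's one simple way such a program might do it; a Python sketch with a toy word list (the indexing scheme is my guess, not the actual program's method):

```python
from collections import defaultdict

# Toy word list; a real solver would load a full dictionary file.
WORDS = ["pharmacopoeia", "research", "university", "important", "total"]

# Index each word by its first letter, last letter, and sorted interior.
index = defaultdict(list)
for w in WORDS:
    index[(w[0], w[-1], "".join(sorted(w[1:-1])))].append(w)

def unjumble(scrambled):
    """Return dictionary words matching the jumbled word's signature."""
    key = (scrambled[0], scrambled[-1], "".join(sorted(scrambled[1:-1])))
    return index.get(key, [])

print(unjumble("pahrmacepooia"))  # -> ['pharmacopoeia']
```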

Shapes, too. But there are also many simple ways to make it so they're stumped where we are not: the Turing-test deformations of those pesky numbers meant to keep bots out, or camouflaged shapes, or horribly contrasting backgrounds.

The thing is, our whole way of processing information is different from that of the tools we use, including microphones, video, even digital logic and neural networks. We build everything up from scratch, fill in the blanks, and whatever information seems lost en route is often still retrievable.

All those tools/simulacra are not even close analogies, and to represent all the neuronal activity in a single human brain more or less correctly, you'd still need a server farm the size of Florida, or so. And we'd still not know whether it would be conscious, dreaming, or comatose, because that's only the hardware part.

AI is making great strides in small things, but it almost seems the target's moving. ;)
 