Does the brain really "cause" consciousness?

The Extended Mind Hypothesis:


"The extended mind is an idea in the field of philosophy of mind which holds that the reach of the mind need not end at the boundaries of skin and skull. Tools, instruments and other environmental props can under certain conditions also count as proper parts of our minds.

The EMT

The "extended mind thesis" (EMT) refers to an emerging concept that addresses the question as to the division point between the mind and the environment by promoting the view of active externalism. The EMT proposes that some objects in the external environment are utilized by the mind in such a way that the objects can be seen as extensions of the mind itself. Specifically, the mind is seen to encompass every level of the cognitive process, which will often include the use of environmental aids.

The seminal work in the field is "The Extended Mind" by Andy Clark and David Chalmers (1998).[1] In this paper, Clark and Chalmers present the idea of active externalism (similar to semantic or "content" externalism), in which objects within the environment function as a part of the mind. They argue that it is arbitrary to say that the mind is contained only within the boundaries of the skull. The separation between the mind, the body, and the environment is seen as an unprincipled distinction. Because external objects play a significant role in aiding cognitive processes, the mind and the environment act as a "coupled system". This coupled system can be seen as a complete cognitive system of its own. In this manner, the mind is extended into the external world. The main criterion that Clark and Chalmers list for classifying the use of external objects during cognitive tasks as a part of an extended cognitive system is that the external objects must function with the same purpose as the internal processes.

In "The Extended Mind," a thought experiment is presented to further illustrate the environment's role in connection to the mind. The fictional characters Otto and Inga are both traveling to a museum simultaneously. Otto has Alzheimer's Disease, and has written all of his directions down in a notebook to serve the function of his memory. Inga is able to recall the internal directions within her memory. In a traditional sense, Inga can be thought to have had a belief as to the location of the museum before consulting her memory. In the same manner, Otto can be said to have held a belief of the location of the museum before consulting his notebook. The argument is that the only difference existing in these two cases is that Inga's memory is being internally processed by the brain, while Otto's memory is being served by the notebook. In other words, Otto's mind has been extended to include the notebook as the source of his memory. The notebook qualifies as such because it is constantly and immediately accessible to Otto, and it is automatically endorsed by him."

http://en.wikipedia.org/wiki/The_Extended_Mind
 
The Extended Mind Hypothesis: ... "The extended mind is an idea in the field of philosophy of mind which holds that the reach of the mind need not end at the boundaries of skin and skull. Tools, instruments and other environmental props can under certain conditions also count as proper parts of our minds. ...
For me that POV does too much violence to the concept of "mind." I distinguish "mind" from "brain" mainly by the idea that the brain is material and governed by the natural laws (Physics and Chemistry being man's versions of them), but the mind is not material: it is an information processing system the brain "runs" or "executes" when you are conscious. Thus, the brain can have no "free will" but the mind could. (I am skeptical that it does have much, as we are so strongly indoctrinated when young and later by public expectations about how we should behave.)

Certainly it is true that external objects do extend the brain's powers (my old slide rule or now my hand calculator are very helpful). For example, with brain alone, I would require hours (and pencil & paper) to find the square root of a number like 15,241,383,936, but that does not make these external devices an extended part of my brain and certainly not of my mind.

PS to save you trouble: 123456 × 123456 = 15,241,383,936
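As a quick check of the arithmetic above — a task trivial for a machine but hours of pencil work for an unaided brain — a few lines of Python suffice:

```python
import math

n = 123456 * 123456
print(n)              # -> 15241383936
print(math.isqrt(n))  # -> 123456 (the integer square root)
```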
 
The Extended Mind Hypothesis: "The extended mind is an idea in the field of philosophy of mind which holds that the reach of the mind need not end at the boundaries of skin and skull. Tools, instruments and other environmental props can under certain conditions also count as proper parts of our minds.

Though related in terms of extending the territory, embodied cognition is less ambitious, going beyond the brain but not beyond the body. It also makes the brain more important than a mere substrate running an abstract program -- formerly 'mind' was simply what the brain does, "disembodied". Aspects of the latter view are what John Searle was addressing, in the context of limited claims that had been made in AI, in the much misunderstood article "Minds, Brains, and Programs" (circa 1980), which introduced the Chinese Room.

Excerpts, George Lakoff interview:

LAKOFF: Early cognitive science, what we call 'first-generation' cognitive science (or 'disembodied cognitive science'), was designed to fit a formalist version of Anglo-American philosophy. That is, it had philosophical assumptions that determined important parts of the content of the scientific "results." Back in the late 1950's, Hilary Putnam (a noted and very gifted philosopher) formulated a philosophical position called "functionalism." (Incidentally, he has since renounced that position.) It was an a priori philosophical position, not based on any evidence whatever. The proposal was this:

The mind can be studied in terms of its cognitive functions - that is, in terms of the operations it performs - independently of the brain and body.

The operations performed by the mind can be adequately modeled by the manipulation of meaningless formal symbols, as in a computer program.

This philosophical program fit paradigms that existed at the time in a number of disciplines [philosophy, linguistics, AI, psychology, information sciences, etc].

[...] All of these fields had developed out of formal philosophy. These four fields converged in the 1970's to form first-generation cognitive science. It had a view of mind as the disembodied manipulation of meaningless formal symbols.

JB: How does this fit into empirical science?

LAKOFF: This view was not empirically based, having arisen from an a priori philosophy. Nonetheless it got the field started. What was good about it was that it was precise. What was disastrous about it was that it had a hidden philosophical worldview that masqueraded as a scientific result. And if you accepted that philosophical position, all results inconsistent with that philosophy could only be seen as nonsense. To researchers trained in that tradition, cognitive science was the study of mind within that a priori philosophical position. The first generation of cognitive scientists was trained to think that way, and many textbooks still portray cognitive science in that way. Thus, first-generation cognitive science is not distinct from philosophy; it comes with an a priori philosophical worldview that places substantive constraints on what a "mind" can be. Here are some of those constraints:

Concepts must be literal. If reasoning is to be characterized in terms of traditional formal logic, there can be no such thing as a metaphorical concept and no such thing as metaphorical thought.

Concepts and reasoning with concepts must be distinct from mental imagery, since imagery uses the mechanisms of vision and cannot be characterized as being the manipulation of meaningless formal symbols.

Concepts and reasoning must be independent of the sensory-motor system, since the sensory-motor system, being embodied, cannot be a form of disembodied abstract symbol-manipulation.

Language too - if it was to fit the symbol-manipulation paradigm - had to be literal, independent of imagery, and independent of the sensory-motor system.

From this perspective, the brain could only be a means to implement abstract 'mind' - wetware on which the 'programs of the mind' happened to be implementable. Mind on this view does not arise from and is not shaped by the brain. Mind is a disembodied abstraction that our brains happen to be able to implement. These were not empirical results, but rather followed from philosophical assumptions.

In the mid-1970's, cognitive science was finally given a name and outfitted with a society and a journal. The people who formed the field accepted the symbol-manipulation paradigm. I was originally one of them (on the basis of my early work on generative semantics) and gave one of the invited inaugural lectures at the first meeting of the Cognitive Science Society. But just around the time that the field officially was recognized and organized around the symbol-manipulation paradigm, empirical results started coming in calling the paradigm itself into question.

This startling collection of results pointed toward the idea that mind was not disembodied - not characterizable in terms of the manipulation of meaningless symbols independent of the brain and body, that is, independent of the sensory-motor system and our functioning in the world. Mind instead is embodied, not in the trivial sense of being implementable in a brain, but in the crucial sense that conceptual structure and the mechanisms of reason arise ultimately from, and are shaped by, the sensory-motor system of the brain and body.

JB: Can you prove it?

LAKOFF: There is a huge body of work supporting this view. Here are some of the basic results that have interested me the most: The structure of the system of color categories is shaped by the neurophysiology of color vision, by our color cones and neural circuitry for color. Colors and color categories are not 'out there' in the world but are interactional, a nontrivial product of wave length reflectances of objects and lighting conditions on the one hand, and our color cones and neural circuitry on the other. Color concepts and color-based inferences are thus structured by our bodies and brains.

Basic-level categories are structured in terms of gestalt perception, mental imagery, and motor schemas. In this way the body and the sensory-motor system of the brain enter centrally into our conceptual systems.

Spatial relations concepts in languages around the world (e.g., in, through, around in English, sini in Mixtec, mux in Cora, and so on) are composed of the same primitive 'image-schemas', that is, schematic mental images. These, in turn, appear to arise from the structure of visual and motor systems. This forms the basis of an explanation of how we can fit language and reasoning to vision and movement.

Aspectual concepts (which characterize the structure of events) appear to arise from neural structures for motor control.

Categories make use of prototypes of many sorts to reason about the categories as a whole. Those prototypes are characterized partly in terms of sensory-motor information.

The conceptual and inferential system for reasoning about bodily movements can be performed by neural models that can model both motor control and inference. Abstract concepts are largely metaphorical, based on metaphors that make use of our sensory-motor capacities to perform abstract inferences. Thus, abstract reason, on a large scale, appears to arise from the body.

These are the results most striking to me. They require us to recognize the role of the body and brain in human reason and language. They thus run contrary to any notion of a disembodied mind. It was for such reasons that I abandoned my earlier work on generative semantics and started studying how mind and language are embodied. They are among the results that have led to a second-generation of cognitive science, the cognitive science of the embodied mind.

JB: Let's get back to my question about the difference between cognitive science and philosophy.

LAKOFF: OK. Cognitive science is the empirical study of the mind, unfettered by a priori philosophical assumptions. First-generation cognitive science, which posited a disembodied mind, was carrying out a philosophical program. Second-generation cognitive science, which is working out the nature of the mind as it really is - embodied! - had to overcome the built-in philosophy of earlier cognitive science.

JB: Does 'second-generation cognitive science' presuppose a philosophy?

LAKOFF: We have argued that it does not, that it simply presupposes commitments to take empirical research seriously, seek the widest generalizations, and look for convergent evidence from many sources. That is just what science is committed to. The results about the embodied mind did not begin from, and do not presuppose, any particular philosophical theory of mind. Indeed, they have required separating out the old philosophy from the science.


"PHILOSOPHY IN THE FLESH" - A Talk with George Lakoff; EDGE 51— March 9, 1999; THE THIRD CULTURE
 
A comment (or two) on / or extension to Post 203:

Prior to Hilary Putnam's disembodied mind, the mind was every bit as "embodied." A dominant POV was that there was a one-to-one correspondence between beliefs, thoughts, mental activity, qualia, etc. and neural activity in the brain and more distal parts of the body's nervous system. For example, "pain" was increased or new activity in the C-fibers.

This "identity theory" leads to the following problem: it implied that life forms which did not have C-fibers could not experience pain. Hilary Putnam (and a few others) solved this problem by advancing functionalism. E.g. non-C-fiber organisms could have pain if they had structures with the same function that C-fiber activity has in humans.

That problem was also solved by "physical tokenism." I.e. the cognitive system could have a token or structure which, when activated, gave the experience of pain. Note that all these ideas were efforts to separate from the even earlier Dualism - the presumption that the mind was not physical.

The downfall of functionalism and physical tokenism was in large part due to John Searle's "Chinese Room" and Hilary Putnam's "Twin Earth" - an exact copy of Earth (even your identical twin is there, speaking English and drinking "water," which has all the functional properties of Earth's water but chemically is not H2O). Although your twin asking for a glass of water has functionally the same brain tokens, etc., the request is not about the same thing as when you ask for water*. In the Chinese Room, a man (or a machine) who understands not a word of Chinese answers questions written in Chinese with replies in Chinese, as he has a very extensive lookup table and data file.
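The Chinese Room's "extensive lookup table" can be sketched in a few lines of Python. The entries below are invented placeholders, not Searle's actual example; the point is that the program pairs tokens with tokens and attaches no meaning to either:

```python
# A toy Chinese Room: pure symbol manipulation with no semantics.
# The dialogue entries are invented for illustration.
lookup_table = {
    "你好吗?": "我很好。",        # "How are you?" -> "I am fine."
    "天空是什么颜色?": "蓝色。",  # "What color is the sky?" -> "Blue."
}

def chinese_room(question: str) -> str:
    # Match the input token, emit the paired output token; nothing here
    # "understands" Chinese any more than the man in Searle's room does.
    return lookup_table.get(question, "对不起。")  # default: "Sorry."

print(chinese_room("你好吗?"))  # -> 我很好。
```

To an outside questioner the room's answers look competent, which is exactly why Searle's argument targets the gap between such syntax and understanding.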

Both the Chinese Room and Twin Earth show that there can be no understanding of what tokens are representing or what functional structures are about (the "intentionality problem", but here "intentionality" is just the insider's way of saying "about"). I.e. if the mind is just a very advanced Turing logic machine (but disembodied) manipulating symbols (tokens), how can it be "about" any external thing?

For your thoughts to be about something, they must be tied to the body (be "embodied", though not necessarily in a human body, which just happens to have a brain to embody the mind). I.e. your experiences (and probably some innate genetic gifts) give meaning to the Turing machine tokens you process.

The above is grossly oversimplified but should give a helpful partial history of cognitive scientists' evolving POV on the still unsolved "Mind-Body Problem".

A minor personal note:
Hilary Putnam came to the Univ. of MD circa 1964 to give a colloquium and afterwards met with about 20 people. I was one of the 20 - the only one from JHU except for a JHU professor. I was only a graduate student, so was timid, but near the end of the discussion period I asked something like: "How can some functional processes in the brain (or another creature's central processor) be about, or have the feel of, pain instead of, for example, the smell of a rose?" It was some years later that he reversed his earlier functionalist POV, as then he really did not have a plausible answer to my question.

* Conversely, the same thing can conceptually be very different. The "evening star" and the "morning star" are both Venus, but very different concepts.
 
I think they are all true, if "computer" is defined as a Von Neumann machine - i.e. a digital, clock-stepped, sequential machine that rewrites a multi-bit word (typically 32 or 64 bits for desktop computers) according to a small set of operations (AND and NOR mainly, I think, where subtraction is adding a negative value) performed in accord with a controlling set of "code" loaded in, which differs for different types of jobs. They are less than 100 years old, but the Ancient Greeks and Chinese had computers of the mechanical types.

This ignores:
(1) All analogue computers, such as those used in torpedoes as they have only one job to do and can do it faster with less power consumption.
(2) All Neural Network computers, which are "trained" for their task and have no tasks specific logic or program to "load in"
(3) All pneumatic computers, which don't even use electricity - for example, those that shift gears in some cars with pneumatic automatic transmissions.
(4) All hard-wired computers, like those scheduling which floor the elevator should stop on next.
(5) All arithmetic computers (hand held calculators dominating this group)
(6) All non-electrical computers. For example, a mesh of strings (each representing the distance between two cities, with knotted nodes for the cities) finds the shortest route for the "traveling salesman." It fits in a shirt pocket and is cheaper and faster than a Von Neumann machine doing the T.S. problem.
(7) All content addressable memory computers / data bases (like Google, I think)
(8) All quantum computers - only a few early stages demonstrated
(9) All parallel processors
(10) All mechanical computers such as speed governors, vacuum spark advances, etc. and encryption devices like WWII's Enigma machine.
(11) All optical computers
(12) All independent distributed computers*, such as the "Non-Von 1" (for details see: http://www.chrisfenton.com/non-von-1/)

Just group (5), hand calculators, in the hands of many all over the world who are too poor to own a Von Neumann machine, probably outnumber the Von Neumann computers 5 to 1 or more. When the HP-35 first came out, it cost more than $500 - my APL/JHU group bought one and kept it under lock and key, but you could sign it out for overnight personal use.

* There are many of these found in nature - for example, the octopus has one at the root of each arm, and ants' legs independently run the "walk program".
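The claim in the Von Neumann description above -- that subtraction is just adding a negative value -- can be sketched with two's-complement arithmetic. The 32-bit word size is illustrative, matching the "typically 32 or 64 bits" mentioned:

```python
MASK = 0xFFFFFFFF  # confine results to a 32-bit word, like a register

def sub_by_adding(a: int, b: int) -> int:
    neg_b = (~b + 1) & MASK    # two's-complement negation of b
    return (a + neg_b) & MASK  # the adder alone performs the subtraction

print(sub_by_adding(100, 42))    # -> 58
print(hex(sub_by_adding(0, 1)))  # -> 0xffffffff, i.e. -1 in a 32-bit word
```

The second example shows the wrap-around behavior of a fixed-width register: 0 minus 1 comes out as the all-ones bit pattern, which is how hardware represents -1.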
 
Does consciousness reside in the brain of a dead-body also?

Brain does reside in a dead-body though.

Huh? What in the world are you talking about?

The discussion I offered was that consciousness does not reside outside of the brain, but that it has been localized to the brain stem and/or cerebrum, depending on how we agree to define the type of brain activity that corresponds to a particular definition of the term.

The most primitive brain may be in the Amphioxus. It's unclear whether anyone would consider it to be conscious. However, I would offer that its tiny brain is more aware than a dead human one, if that's where you wanted to go with this.
 
I have told most of this and more, but here is a more recognized source:
http://en.wikipedia.org/wiki/Hemispatial_neglect said:
a stroke affecting the right parietal lobe of the brain can lead to neglect for the left side of the visual field, causing a patient with neglect to behave as if the left side of sensory space is nonexistent (although they can still turn left). In an extreme case, a patient with neglect might fail to eat the food on the left half of their plate, even though they complain of being hungry. If someone with neglect is asked to draw a clock, their drawing might show only numbers 12 to 6, or all 12 numbers on one half of the clock face, the other side being distorted or left blank. Neglect patients may also ignore the contralesional side of their body, shaving or adding make-up only to the non-neglected side. These patients may frequently collide with objects or structures such as door frames on the side being neglected.[1]
Neglect may also present as a delusional form, where the patient denies ownership of a limb* or an entire side of the body. Since this delusion often occurs alone without the accompaniment of other delusions, it is often labeled as a monothematic delusion.
Neglect not only affects present sensation but memory and recall perception as well. A patient suffering from neglect may also, when asked to recall a memory of a certain object and then draw said object, again, only draw half of the object.
For the first days after a large parietal stroke, the patient can see his left leg, but it is not part of his RTS / his reality - not his. He typically is disgusted that the nurse has left someone's leg in his hospital bed, and may even try to throw it out of the bed.

A well known anecdotal story concerns the interaction between a doctor and a victim of a large parietal stroke after the doctor takes the victim's hand in his own. The doctor asks about the joined hands as he wiggles them (the pair still perceived by the victim): "Whose hands are these?" P. (correctly) replies: "One is yours and the other is mine."

Then the doctor wiggles the other two joined hands and again asks: "Whose hands are these?" P. (incorrectly) replies: "They are your hands." (One is really his now-denied hand.)

Then the doctor asks: "How can I have two hands here (small wiggles of the joined hands P. said were the doctor's) and one hand here (small wiggles of the first pair of hands, one of which the patient acknowledges as his)?"

P. Replies: "It stands to reason that you should have three hands since you have three arms."
 
Billy, I think you are stretching what 99% of the people on this planet mean by the term "computer" when referring to 99% of the computers on this planet. Computers made out of strings and pipes? Really? And so maybe my toilet is a computer too. lol!
 
Billy, I think you are stretching what 99% of the people on this planet mean by the term "computer" ...
If that were true, computers did not exist before about 1945, but the term (and its equivalents in the Chinese and Arabic languages) is more than 1000 years old. The first computers were people, aided by scratches made in a sand box and later by the abacus.
 
The Extended Mind Hypothesis: "The extended mind is an idea in the field of philosophy of mind which holds that the reach of the mind need not end at the boundaries of skin and skull. Tools, instruments and other environmental props can under certain conditions also count as proper parts of our minds.

There is a psychology concept called projection, where what is inside, but unconscious, is projected outward, similar to a movie overlay, so it can become conscious from outside.

For example, say I dropped you off, late at night, in the woods, without any light. Under those uncomfortable conditions, the imagination can become active and start to sense things that are not really there, but which appear to be out there. The shadow near the bush becomes an animal standing there looking at you. The movie projection, becomes conscious, as the animal shadow outside.

Along these lines, if the projector was creating an intuitive overlay onto a tool, the tool may feel like an extension of oneself. Projection can also be a feeling that leads one to a particular version of the tool.
 
If that were true, computers did not exist before about 1945, but the term (and its equivalents in the Chinese and Arabic languages) is more than 1000 years old. The first computers were people, aided by scratches made in a sand box and later by the abacus.

Unbridled equivocation.

You are talking about "one who computes" as opposed to the commonly expected usage of "a device that computes". Would you also equivocate "calculator"?

cal·cu·la·tor
n.
1. One that calculates, as:
a. An electronic or mechanical device for the performance of mathematical computations.
b. A person who operates such a machine or otherwise makes calculations.​

In both cases, there is a clear difference between a computational aid and the user of such an aid.
 
If that were true, computers did not exist before about 1945, but the term (and its equivalents in the Chinese and Arabic languages) is more than 1000 years old. The first computers were people, aided by scratches made in a sand box and later by the abacus.

1946 actually. "If you look at most history books, they'll tell you ENIAC (for Electronic Numerical Integrator and Computer) was the first true all-purpose electronic computer. Unveiled in 1946 in a blaze of publicity, it was a monstrous 30-ton machine, as big as two semis and filled with enough vacuum tubes (19,000), switches (6,000) and blinking lights to require an army of attendants. Capable of adding 5,000 numbers in a second, a then unheard of feat, it could compute the trajectory of an artillery shell well before it landed (compared with days of labored hand calculations)."

Read more: http://www.time.com/time/magazine/article/0,9171,990596,00.html#ixzz2GK4adpma
 
... In both cases, there is a clear difference between a computational aid and the user of such an aid.
Certainly true, but both are often called computers; however, none of the 12 "not Von Neumann computer" types I listed in post 206 were human computers. They were all physical devices used by humans* for calculation or decision making.

* Although the analogue computer that torpedoes use to know when to self-destruct is not directly used by humans. Many still think torpedoes make hull contact to blow up, but the modern torpedo does not do that. Exploding on hull contact would only damage a big ship, not sink it. You sink a big ship by making a huge gas bubble many meters under the center of the ship, so that the unsupported center falls into it - breaking the ship into two parts.

Sometimes a large oil tanker being improperly loaded will lift one end up out of the water only a few meters from level and be destroyed. The resulting legal battles as to who pays can take more than a decade.
 
To Magical Realist: Yes, 1946 was the official date, as then it ran reasonably well, but it ran less well in 1945, with "bugs" literally making it stop. Many of those 6,000 "switches" were relays that killed quite a few ants and lady bugs etc. in 1945. Those bug-induced computer failures are where our term "bug", as in "It must be some bug in the program", comes from. - Just a little history few remember now.

BTW, your last sentence about artillery shells reminded me of another very important WWII computer - the Norden bombsight. I don't remember all the input data, but the plane's air velocity (direction included) and wind velocity and some other inputs related to target location (plus others, including air temperature, I think) let it calculate when to release the bomb(s) - much better than any human could. Briefly, it also told the pilot how to fly, or actually flew the plane by controlling the autopilot system as the plane approached the drop point!

[Images: the Norden bombsight; bombardier Thomas Ferebee]
The Norden bomb sight, not a human, dropped the first A-bomb!
The first figure caption is: "The Norden bombsight at the Computer History Museum in Mountain View, California." (but I made three words bold)
More details about this computer that greatly helped win WWII at: http://en.wikipedia.org/wiki/Norden_bombsight

PS You are correct: people less than 60 years old, in their ignorance, do think "computer" refers to a Von Neumann machine, but I know there are many different types. Hell, what you no doubt call a "refrigerator" I still call "the ice box."
 
1946 actually. "If you look at most history books, they'll tell you ENIAC (for Electronic Numerical Integrator and Computer) was the first true all-purpose electronic computer.

The Colossus computers preceded it during the WWII years, but that was more fixed-program, not a general-purpose project (it decrypted Lorenz coded messages). Turing even introduced the design for a machine called "Banburismus" as early as 1939 (improved a year later by Gordon Welchman), which apparently got the moniker of "Bombe" eventually. But it was electromechanical, and likewise was limited to cryptanalysis -- specifically targeting the Enigma device of the Germans. Both Colossus and Bombe were classified as covert for a number of decades.

--- Decoding Nazi Secrets; PBS Airdate: November 9, 1999 ---

NARRATOR: . . . Eight of the ten Colossus machines were destroyed. The remaining two were moved to British secret service headquarters, where they may have played a significant part in the codebreaking operations of the Cold War. In fact, the Russian military had developed a code that was similar to the high command's Fish code. So the techniques invented at Bletchley Park were still to prove vital in a very different kind of conflict. In 1960, the order finally came to destroy the last two Colossus machines.

THOMAS H. FLOWERS: That was a terrible mistake. I was instructed to destroy all the records which I did. I took all the drawings and plans and all the information about Colossus on paper and put it in the boiler fire, saw it burn.

NARRATOR: Tommy Flowers returned to the post office and was forgotten. In all the secrecy, Colossus never received recognition as the world's first programmable computer. Instead, that honor was to go to the American Eniac. As for the codebreakers, they all dispersed, some back to universities and others into the fledgling computer industry. A few stayed on in the British secret service, while some of the Americans returned to Arlington Hall. The most innovative thinker of all, the man whose inventiveness had been at the center of Bletchley Park's success, died tragically. In 1954, Alan Turing took his own life after being persecuted as a security risk because he was gay.


http://www.pbs.org/wgbh/nova/transcripts/2615decoding.html
 
This is probably the first digital self-portrait made by a digitally programmed computer:
[Image: woven silk self-portrait of Jacquard]
It required more than 100,000 bits of information (stored in more than 10,000 cards, each position punched or not) to program the silk-weaving loom which made the above silk self-portrait.
http://yin.arts.uci.edu/~studio/resources/175/jacquard.html said:
In 1801, the Frenchman Joseph-Marie Jacquard invented an automatic loom controlled by punch cards. Hand weaving, though requiring high levels of skill, was also a repetitive and therefore often tedious task. Jacquard, himself the son of a silk weaver, worked out a system that used stiff pasteboard cards in which the patterns of punched holes controlled the movement of the loom's warp strings and therefore the pattern that would be woven as the shuttle passed through these strings. The holes, in other words, served as a program for the loom.
Programmable mechanical computers were used long before WWII code-cracking and code-generating ones like the Enigma machine. When the output of your digital camera is displayed on the screen of your Von Neumann computer, you are doing exactly what was done ~200 years ago by mechanical technology - i.e. converting stored digital information into an image you can view.
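The analogy in the last paragraph -- stored bits rendered as a viewable image -- can be sketched in a few lines. The card layout here is invented for illustration and is not Jacquard's actual encoding:

```python
# Each "card" is one row of the weave; '1' = hole punched.
# A hole lifts a warp thread, giving a dark "pixel" in the cloth.
cards = [
    "0110",
    "1001",
    "1001",
    "0110",
]

def render(cards):
    # Convert stored bits into a viewable picture, one character per thread.
    return "\n".join(
        "".join("#" if bit == "1" else "." for bit in row) for row in cards
    )

print(render(cards))
# prints:
# .##.
# #..#
# #..#
# .##.
```

Scale the toy 4-bit cards up to 10,000 cards and you have, in principle, the loom's program for the portrait above.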
 
Does consciousness reside in the brain of a dead-body also?

No, not even during deep sleep but only when the RTS is running - i.e. the awake state or during dreams.

That means a person has no consciousness, when he is sleeping but not dreaming.


"You," meaning your psychological self, not your body, also only exists in these two states.

I believe by "psychological self", you mean 'psychological consciousness'. In spiritualism, "consciousness" is synonymous with "soul", which defines 'life', i.e. the difference between 'a living body' and 'a dead body'. So, I guess "psychological consciousness" and "spiritual consciousness" are different. In psychology, what would be the terminology to define the difference between 'a living body' and 'a dead body'?




Huh? What in the world are you talking about?

The discussion I offered was that consciousness does not reside outside of the brain, but that it has been localized to the brain stem and/or cerebrum, depending on how we agree to define the type of brain activity that corresponds to a particular definition of the term.

The most primitive brain may be that of the Amphioxus. It's unclear whether anyone would consider it to be conscious. However, I would offer that its tiny brain is more aware than a dead human one, if that's where you wanted to go with this.

As far as 'spiritualism' is concerned, "consciousness" is something which activates or energises our body so that our brain works. As our brain works, we get "psychological consciousness".

In a dead body, neither brain nor body works. So "spiritual consciousness" is not present in a dead body.

So the cause of "psychological consciousness" can be "spiritual consciousness".
 
That means a person has no consciousness when he is sleeping but not dreaming.
Yes, that is my POV. Why would anyone think otherwise?
I believe by "psychological self", you mean 'psychological consciousness'.
Yes, that is my POV. To be clear, I am not speaking of my body; I often use quotes ("I," "me," "myself") when I mean the part of the information process running in my body's parietal brain that I call the RTS.

You can think of "me" as a subroutine in the RTS, which is largely concerned with making a Real Time Simulation of the external world the body's sensors detect. Having a real-time understanding, instead of neurally delayed information, is a great survival advantage that nature has selected for: try to duck a spear or thrown rock if your only knowledge of where it is lags by 0.3 seconds.* By perfecting the RTS, our ancestors killed off all the other humanoids, including the bigger, stronger, larger-brained Neanderthals, who perceived the world with a slight delay due to stages of neural processing. They "exploded" out of Africa, dominating all others, with some interbreeding and sharing of genes, but mostly just killing, and sometimes, especially in cold winters when food was short, eating those in their way.
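The spear argument is easy to put in numbers. A minimal back-of-the-envelope sketch, where the throwing speed of 20 m/s is an assumed illustrative figure (only the 0.3 s delay comes from the post):

```python
# Rough arithmetic for the spear-ducking claim: how far does a
# projectile travel during a 0.3 s neural processing delay?
# The spear speed is an assumption for illustration, not a cited value.

spear_speed_m_per_s = 20.0   # assumed throwing speed
neural_delay_s = 0.3         # delay cited in the post

distance_m = spear_speed_m_per_s * neural_delay_s
print(f"The spear moves {distance_m:.1f} m before the delayed percept arrives.")
```

On these assumed numbers the spear is roughly 6 m past its perceived position by the time the delayed signal is processed, which is the survival pressure the post argues a predictive real-time simulation answers.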
In spiritualism, "consciousness" is synonym with "soul" which defines 'life' ie the difference of 'a living body' and 'a dead body'. So, i guess "psychological consciousness" and "spiritual consciousness" are different.
Yes, very different. There is no evidence for the existence of a non-material "soul." Even if it did exist, it could not make any observable difference, as only material things (and their force fields) can move matter. A "soul" cannot even deflect a single atom from the trajectory the natural laws impose upon it. There is a great deal of evidence that consciousness exists, even if it is hard to get universal agreement on what consciousness is and which creatures have it. IMHO, this lack of definition is because consciousness is an information process sometimes running in the world's most advanced computer, which cannot be observed directly the way transistors changing states can be in man-made computers. The human brain is by far the most complex dynamic structure known to exist.
In psychology, what would be the terminology to define the difference between 'a living body' and 'a dead body'?
There is no sharp line between them. Historically, different tests for being dead have been used: no fog on a cold mirror held for many minutes next to the nose; no heart sounds; no EEG signals (brain dead), even if the body is functioning with machine assists; etc. IMHO, various parts of the body die at different times. Fingernails and hair are said to keep growing more than a week after brain death. Organs, if kept cold until transplant, can live in someone else's body for years. Bodies can live without many parts, including an electrically active brain, if located in an advanced medical center keeping the organs alive for transplant needs.

Do you really think that one ms after time T, when you were alive, you are dead?

* That is less than 10 sequential stages of processing, with typical interstage transmission delays included. Even the tiny (less than 0.05 second) delay in signals from one eye looking through a dark filter while the other does not can distort straight-line motion into perceived revolving motion.
See: http://en.wikipedia.org/wiki/Pulfrich_effect

SUMMARY: Living with neural delays instead of a real-time understanding of the world is tough and dangerous, as all the other, now-extinct humanoid life forms would tell you if they could.
 
This is probably the first digital self-portrait made by a digitally programmed computer: It required more than 100,000 bits of information (stored as more than 10,000 cards, each position either punched or not) to program the silk-weaving loom that made the above silk self-portrait.

These early punch-hole approaches to storing information infrequently remind me of the Philip K. Dick story "The Electric Ant," in which the android Garson Poole tinkers with micro-patterns on the roll of punch tape in his chest cavity to alter his reality. This in turn sort of "erroneously" (for me) impinges upon the question of multiple realizability (not in regard to variable substrates being able to produce intelligent external behaviors, but rather the generation of subjective or phenomenal experiences). "Erroneously" because "The Electric Ant" doesn't necessarily imply that the act of reading or reacting to the punch tape itself creates Poole's manifestations of the world, since the resulting pseudo-sensory data is fed to a "central neurological system" or whatever applicable techno-babble PKD provided.

John Searle, apparently touching on multiple realizability in Minds, Brains, and Programs: ...the same program could be realized by an electronic machine, a Cartesian mental substance, or a Hegelian world spirit. The single most surprising discovery that I have made in discussing these issues is that many AI workers are quite shocked by my idea that actual human mental phenomena might be dependent on actual physical/chemical properties of actual human brains. [...] Stones, toilet paper, wind, and water pipes are the wrong kind of stuff to have intentionality in the first place -- only something that has the same causal powers as brains can have intentionality -- and though the English speaker has the right kind of stuff for intentionality you can easily see that he doesn't get any extra intentionality by memorizing the program, since memorizing it won't teach him Chinese.

However, it's difficult to see what special characteristics or "causal powers" a biochemical substrate would contribute to whatever functional schemes correspond to qualia production (if that is part of the "human" brand of "understanding" which Searle embeds in his use of "intentionality"), other than electrical or electromagnetic properties, which, say, a Charles Babbage-descended mechanical simulation of "mind" would not feature. But any brain oscillation patterns and non-trivial EM fields could surely be replicated more faithfully in electronic devices. Penrose, of course, insisted QM was at work somewhere within neurons (Hameroff's microtubules, etc.), despite the hot environment which critics point out. Yet little seems posited in that area that would help explain qualitative manifestations any more than what's posited about electromagnetism (correlation alone would still seem to be the extent of the "how" in either, which would be no better than correlation to the higher-level neural circuitry patterns, or perhaps even to the abstract formal schemes of Penrose's and Searle's opponents, treated as disembodied or independent of any particular substrate). Oddly enough, though, quantum biology has at least had a few developments emerge in the news since then that might increase its feasibility or survivability.

http://en.wikipedia.org/wiki/The_Electric_Ant

http://www.tor.com/blogs/2012/03/understanding-hegel-with-philip-k-dick-on-the-thirteenth-floor
 