Are plants conscious?

OK, you yada yada'ed over the important part.
Because it's obvious to someone who can move out of the shallow end of language.

How do we have this concept of free will and other things don't?
It's a concept that arises from the value of self, which controls behaviour. A machine may or may not have machine-learning algorithms, but since machines of either kind lack a self, any distinction between them is vacuous.

You have to introduce a sense of self to distinguish these two sentences:

A. If the activities are unpredictable, then free will will be present.

B. If the activities are unpredictable, then a lack of control will be present.

A is controlled by the self. B is controlled by random generation.

You can say that unpredictable action is a necessary condition for free will, but not a sufficient one. Usually the (deterministic) response to this is to define the self in a counterfactual manner, e.g.:

"A lack of control in deterministic paradigms gives us unpredictable behaviour. The self also gives us unpredictable behaviour so it either
(a) arises from a lack of control or
(b) is actually predictable"

As such, subscribers to the deterministic notion of free will commonly point to engineering random generation in machines, which, in their minds, illustrates a lack of control (random behaviour) arising from a predictable language (computer programming).
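To make that claimed relationship concrete, here is a minimal sketch in Python (the `roll_dice` helper is hypothetical, just for illustration) of how "random" behaviour can arise from a fully predictable program: a seeded pseudo-random generator looks unpredictable from the outside, yet replaying the same seed reproduces its output exactly.

```python
import random

# A pseudo-random generator produces behaviour that looks unpredictable,
# yet the program generating it is fully deterministic: the same seed
# always yields the same sequence.
def roll_dice(seed, n=5):
    rng = random.Random(seed)  # deterministic PRNG with a fixed seed
    return [rng.randint(1, 6) for _ in range(n)]

print(roll_dice(42) == roll_dice(42))  # True: "random" yet reproducible
```

Whether that reproducibility says anything about selfhood is, of course, exactly what is in dispute above.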

Which then leads them to their next adventure in folly: wading in the kiddy pool of language while imagining they are participating in some sort of Jules Verne sequel.

If one can actually move between the deep and shallow notions of language, one can avoid all this ... hence the "yada, yada".
 
but since both machines lack a self,
Prove that we have a self, apart from the sense of self necessary (which could be illusory) to know which hole to put the food in. And then prove that a machine can't have a sense of self.
 
Prove that we have a self, apart from the sense of self necessary (which could be illusory) to know which hole to put the food in.
What a strange request.
If you are currently striving to live in society and not get incarcerated by a lengthy jail sentence (free accommodation and food ... quite an efficient and easily accessible solution to the "food in hole" problem), your own life is proof that there is more at hand than your meagre estimations of selfhood.

And then prove that a machine can't have a sense of self.
Machines don't care if they are locked in jail, taken on a world tour or thrown in the bin.
Your attributing selfhood to machines requires a vacuous definition of selfhood.
It's just like taking a cup with a cracked bottom that instantly leaks all the water one pours into it. Sure, you can say it is carrying as much water as possible, but that is just another way of saying it is empty.

https://en.m.wikipedia.org/wiki/Vacuous_truth
 
your own life is proof that there is more at hand than your meagre estimations of selfhood.
No it isn't. A person could have a sense of self for utilitarian reasons, just like a self-driving car has a sense of self so that it knows to avoid collisions.
Machines don't care if they are locked in jail, taken on a world tour or thrown in the bin.
You are talking about existing machines, which I agree are limited. But that does not address the philosophical question of whether a machine can theoretically have preferences and a sense of self. If you need an analogy, think of the Cylons on BSG. Prove we aren't also just biological machines.
 
If the map or memory of someone's driving would override you and forcibly dictate your movements, then no, you would not be conscious
So addicts (whose actions are often dictated by their addiction) are not conscious?
I thought you were talking about AI choosing mountainsides over packed car parks as an emergency landing site?
Yes, I was. They have fully capable autopilots that can fly from brake release to touchdown. They have never had to choose "mountainsides over packed car parks" and there is very little likelihood they ever will. Likewise, car AIs have never had to choose pedestrians over drivers, nor is it likely they ever will.
Have you got a source for this "mountainside protocol"?
Nope. The paradox does not exist, and so has not been designed for (true of cars AND planes.) It's just a mind game some like to play.
You can't unify everything under the umbrella of a "trolley problem" given that they operate in vastly different environments of hazard, accountability and finance.
Agreed. It is an interesting philosophical debate that doesn't have much application in the world of vehicles.
 
standing squarely in the shallow while quietly gesturing at the deep.
Lol
You underestimate your swimming prowess. It seems a lot of people are required to wear a life jacket when taking a shower.

i thought the functional philosophical construct that might enter the discussion might enjoy a shallow start.
ergo: can a human that has not been taught something learn it without being taught by another human?
is this translatable to a language format, by secondary means of communication, that would render such learning the functional content of a created language?
I'm not sure I understand exactly what you are asking.
So, for instance, suppose one reads the statement "Drink a glass of water and you will not feel thirsty". One can read that sentence and understand what everything means, but until one actually drinks the water, one will not actually understand what it means to not feel thirsty.
The most efficient way to communicate what it means to not feel thirsty is to provide a directive to drink water.
Are you asking whether, aside from this, there is some other way to communicate something experiential by language through a shared medium of selfhood?

can any known living thing duplicate this?
is this (i'm hypothesising) form of process able to be outlined as a form of consciousness?

could the human be unconscious and do this? (probably not)
thus can we attribute the process solely to humans and/or another species? (i think dolphins & chimpanzees [orcas and a few other animals] teach others how to use tools and how to play [i'm not dissecting the play so much into variant forms of basic behaviour modelling])

So, just as I am a human and I have certain experiences, is there a certain core experience of "being a human" that extends beyond my culture that is communicable to others? (The fact that we are interested in hearing about everyday events of ancient history, or even the predominance of the "soap opera" drama as a genre of entertainment, tends to indicate a resounding "Yes!", although at the same time, how we learn language shapes how we perceive the world and ourselves).

... but does this have the capacity to transcend even species? So when we look at dolphins, for instance, are we looking at acts of playfulness, acts of kindness or are we merely looking at our own behaviour and overlaying it on another species, via anthropomorphism?

And if there is a factual basis for this beyond anthropomorphism, how far does this definition slide down the slope? (For instance, we may describe dolphins and dogs as playful, but can we attribute the same behaviour to reptiles, insects or trees?)

I am just trying to see if I understand what you are asking. Feel free to fill in the blanks.
 
...yet.

There's no reason to think they won't start as they get intelligent.
Self-driving Uber cars might kill pedestrians on occasion, but it's impossible to fathom how they could start feeling remorse (or satisfaction) for it.

Perhaps Uber can overlay some algorithms for shrieking in despair every time their cars run over someone (or even instantly pull up the victim's personal history and establish how the incident is actually an act of vigilante justice ... robo-car voice: "Yeah bro! Well, next time look where you are going, jerk, and" - sound of an internal hard drive briefly spinning into action - "maybe stop beating your wife and the universe will stop punishing you for being such an asshole!" - tyres screech as the car shoots off down the street in apparent disgust for the human race).

.... but it will always be someone else who has to front the legal retribution (not the car). The same car may be involved in a dozen such accidents, but would that make it a repeat offender (or is it some goofball in tech or logistics rollout who is liable to lose their pants)?
 
No it isn't. A could have a sense of self for utilitarian reasons, just like a self-driving car has a sense of self to know to avoid collisions.
But you are wearing a "life jacket in the shower" of language-depth definitions to introduce "self" in such a way.
You could just as easily say a pocket calculator has a sense of self. It "reacts" in specific ways, for utilitarian ends, when it interacts with the environment (someone presses the buttons) ... never mind the obvious.

You are talking about existing machines, which I agree are limited. But that does not address the philosophical question of whether a machine can theoretically have preferences and a sense of self.
I discussed the Deterministic outlook on the problem of freewill. It was all philosophy, but you edited it out of your response. Do you want to go back to it now or do you want me to copy/paste it again?

If you need an anology, think of the cylons on BSG. Prove we aren't also just biological machines.
So now we not only have humans creating machines that are humans, but also machines creating humans that are machines (with the added complexity, that the machines creating humans that are machines, do so out of a desire to kill the humans who make machines that are humans ... and just when you thought it couldn't get better, the machines who make humans that are machines, make them out of some pseudo religious zeal that has its origins in tribal customs from the ancient middle east .... and furthermore amongst the machines who make humans like machines there is an emerging tendency to rethink this propensity to kill the humans that make machines like humans to the degree that they - the machines that make humans like machines (just in case you were starting to get confused at this point) - can reject this cultural imperative laid down by ancient middle eastern tribal customs.
A famous quote from Mark Twain regarding who irony is wasted on comes to mind ...)

I'm not sure if you read my response to Rainbow, but you just provided a wonderful example of ....

Which then leads them to their next adventure in folly : wading in the kiddy pool of language while imagining they are participating in some sort of Jules Verne sequel.

It's better to take one counterfactual definition at a time rather than simultaneously introduce two or three.

https://en.m.wikipedia.org/wiki/Counterfactual_conditional
 
I discussed the Deterministic outlook on the problem of freewill. It was all philosophy, but you edited it out of your response. Do you want to go back to it now or do you want me to copy/paste it again?
I didn't find it convincing. I don't think we have free will, I think we are deterministic machines with the illusion of free will.
 
So addicts (whose actions are often dictated by their addiction) are not conscious?
No, but they do illustrate the variegatedness consciousness involves, i.e. (if you take addiction and non-addiction as the two extremes under the microscope) the fact that the same individual can degrade to a lesser state of performance (succumb to addiction) or rise to a higher state (climb out of addiction) shows how the variegatedness belongs to the valves (filters? mediums? habits?) and not the consciousness per se.

IOW you can super-filter consciousness down to such a degraded form, close the valve to the smallest aperture. In such a state, has one actually successfully equated AI with consciousness?
Is a fungus growing on the backside of a sea slug "dumber" than a smart car, or is one merely limited in one's vision, seeing things according to one's "filter"? (Meh, sea slugs don't even have public holidays, what to speak of their fungus ... it's obvious they are not conscious ... at least my car's AI can work out when it's a public holiday.)

Yes, I was. They have fully capable autopilots that can fly from brake release to touchdown. They have never had to choose "mountainsides over packed car parks" and there is very little likelihood they ever will. Likewise, car AI's have never had to choose pedestrians over drivers, nor is it likely they ever will.
It may not "choose" it, but its algorithms may lend themselves to a performance bias that will be interpreted as a choice, a choice that someone will have to wear the responsibility for. Cars are involved in evasive action all the time (as distinct from planes). If the AI has sufficient data to distinguish a person from a brick wall, then a whole new legal avenue of pursuit presents itself (for which there is an enormous body of precedent).

In the case of a fatality (caused by either the car killing/injuring a pedestrian to avoid a brick wall, or a car killing/injuring a passenger by swerving to avoid a pedestrian) do you expect that the entire legal inquiry of professional negligence will be disbanded because "computers don't make mistakes"?
 
I didn't find it convincing. I don't think we have free will, I think we are deterministic machines with the illusion of free will.
That's fine.
You only speak in such a way because your deterministic wiring requires that you give such a response.
 
No, but they do illustrate the variegatedness consciousness involves, i.e. (if you take addiction and non-addiction as the two extremes under the microscope) the fact that the same individual can degrade to a lesser state of performance (succumb to addiction) or rise to a higher state (climb out of addiction) shows how the variegatedness belongs to the valves (filters? mediums? habits?) and not the consciousness per se.

IOW you can super-filter consciousness down to such a degraded form, close the valve to the smallest aperture. In such a state, has one actually successfully equated AI with consciousness?
I think you have just (successfully) argued that consciousness represents a scale rather than a binary decision. Rocks? Not conscious. Earthworms and self driving cars? Very limited consciousness. Lizards? Higher up along the scale. Addicts? Definitely conscious, with some limitations that non-addicts don't have.
It may not "choose" it, but its algorithms may lend to a performance bias that will be interpreted as a choice, a choice that someone will have to wear the responsibility for. Cars are involved in evasive action all the time (as distinct from planes).
TCAS (traffic collision avoidance systems) are used pretty frequently on aircraft. They issue TAs and RAs (the more serious of the two) when collisions are possible or likely. There are thousands of such alerts a year within US airspace; most are acted on.

But yes, any system failure will have lawyers searching for someone to sue. That does not mean that there will be trolley problems that must be solved by autonomous vehicles.
In the case of a fatality (caused by either the car killing/injuring a pedestrian to avoid a brick wall, or a car killing/injuring a passenger by swerving to avoid a pedestrian) do you expect that the entire legal inquiry of professional negligence will be disbanded because "computers don't make mistakes"?
Nope.

If a car hits a brick wall and causes a fatality, someone will sue.
If a car hits a pedestrian and causes a fatality, someone will sue.
If a car does a 'wheel hard over' due to hardware failure and causes a fatality, someone will sue.
If a car stops in the middle of a freeway and causes a fatality, someone will sue.
If a car fails to stop and rear-ends another car, causing a fatality, someone will sue.
If a ten ton rock falls into a freeway lane, and the car fails to stop in time, causing a fatality, someone will sue.

There's a common thread there, and it's not that "the car couldn't solve the trolley problem."
 
I didn't find it convincing. I don't think we have free will, I think we are deterministic machines with the illusion of free will.
We are definitely not deterministic machines - that hasn't been a working model since the late 1800s.
As far as "the illusion" of free will - that has never made much sense to me. It's like "the illusion" of solidity in objects that are mostly empty space, or "the illusion" of pain - trip over a chair in the dark, and the material difference between whatever reality somebody wants to substitute and the illusions you cannot avoid and can intersubjectively verify begins to seem illusory itself.

Whatever one wishes to call what a drug addict has less of than they did before becoming addicted, and what a plant appears to have none of, it's going to be a synonym of "free will".
 
We are definitely not deterministic machines - that hasn't been a working model since the late 1800s.
The idea gets repeatedly reworked ... semi-compatibilism, soft determinism, etc., well into the 21st century.


As far as "the illusion" of free will - that has never made much sense to me. It's like "the illusion" of solidity in objects that are mostly empty space, or "the illusion" of pain - trip over a chair in the dark, and the material difference between whatever reality somebody wants to substitute and the illusions you cannot avoid and can intersubjectively verify begins to seem illusory itself.

Whatever one wishes to call what a drug addict has less of than they did before becoming addicted, and what a plant appears to have none of, it's going to be a synonym of "free will".

The original proponents were Buddhists. They advocated that reality is ultimately composed of parts with no whole, so it's all illusion. Obviously they needed to explain why they didn't walk into walls or off cliffs, so they introduced a double-tiered reality.

Basically there is a functional platform of chairs and brick walls and, above that, a "genuine" reality devoid of characteristics. If you approach the genuine reality, the functional platform collapses (although technically, there is no "you" to speak of that does the approaching, or at least comes back to talk about the experience in any meaningful way ... Buddhism is loaded with grammatically negative words).
So it enabled them to talk about the nature of the self and reality in some really counter-intuitive ways, while falling short of letting that counter-intuitive "talk" manifest as full-scale counter-intuitive action (with a few exceptions ... https://www.telegraph.co.uk/news/wo...ealed-inside-1000-year-old-Buddha-statue.html )

Semi-compatibilism/soft determinism works much the same way, trying to walk a fine line between saying what the world "really" is and relegating everything else to a mere, self-collapsing inferior reality or illusion (no prizes for guessing which tier of reality free will and selfhood belong to).

It's actually a view quite compatible with some (many? most?) takes on science, since you can talk about observing things without the pesky interference of consciousness ... although quantum mechanics emerged as an inconvenient truth.

As always, the jury is perpetually out on these subjects ....

A poll was conducted at a quantum mechanics conference in 2011 using 33 participants (including physicists, mathematicians, and philosophers). Researchers found that 6% of participants (2 of the 33) indicated that they believed the observer "plays a distinguished physical role (e.g., wave-function collapse by consciousness)". This poll also states that 55% (18 of the 33) indicated that they believed the observer "plays a fundamental role in the application of the formalism but plays no distinguished physical role". They also mention that "Popular accounts have sometimes suggested that the Copenhagen interpretation attributes such a role to consciousness. In our view, this is to misunderstand the Copenhagen interpretation."[15]

But, 2,500 years on, this double-tiered reality never seems to tire of recruiting martyrs ...

Bohr also took an active interest in the philosophical implications of quantum theories such as his complementarity, for example.[21] He believed quantum theory offers a complete description of nature, albeit one that is simply ill-suited for everyday experiences - which are better described by classical mechanics and probability. Bohr never specified a demarcation line above which objects cease to be quantum and become classical. He believed that it was not a question of physics, but one of philosophy or convenience.[22]

https://en.m.wikipedia.org/wiki/Von_Neumann–Wigner_interpretation


It strikes me as a huge irony, that we have such a long, persistent philosophical embarrassment with selfhood.
 
iceaura said,
We are definitely not deterministic machines - that hasn't been a working model since the late 1800s.
The idea gets repeatedly reworked ... semi-compatibalism, soft determinism, etc, well into the 21st century.
As far as "the illusion" of free will - that has never made much sense to me. It's like "the illusion" of solidity in objects that are mostly empty space, or "the illusion" of pain - trip over a chair in the dark, and the material difference between whatever reality somebody wants to substitute and the illusions you cannot avoid and can intersubjectively verify begins to seem illusory itself.

Whatever one wishes to call what a drug addict has less of than they did before becoming addicted, and what a plant appears to have none of, it's going to be a synonym of "free will".
Does "consciousness" necessarily imply "free will"?

When there is a choice, the decision will always be in the direction of "optimal parsimony".
Thus when a person makes a choice it appears to be from free will, but in reality the choice is already present in latent form before the choice is made, IMO.

When a person with a mathematical mind has a choice to study mathematics or literature, what subject will he/she choose?
OTOH, when a person with a propensity for language has the choice between studying mathematics and literature, what subject will he/she choose?
 