Do you think that AI will ever feel emotions?

She has no soul

:)
Could the AI program be considered an artificial soul? With that I mean "character traits".
Can we build AI which have "good souls"?

I should like someone to ask her what she thinks when humans are talking about her as if she wasn't there.
How does she process the information she registers when she observes human interaction.
 
Last edited:
Why do humans always have to project a sinister aspect to AI? That is so typical.

While I agree AI, in and of itself, will have sinister aspect, the programmer can and put in sinister instructions

There is no evidence whatsoever that AI should develop a desire to dominate.

Agree, in and of itself AI NO. Unless a extremely clever program gives it instructions when certain criteria are met - dominate

:)
 
Could the AI program be considered an artificial soul? With that I mean "character traits".
Can we build AI which have "good souls"?

I should like someone to ask her what she thinks when humans are talking about her as if she wasn't there.
How does she process the information she registers when she observes human interaction.
Possible

I have some dealings with mobile phone companies for a few weeks

Sometimes I get the feeling I am chat (like this) with a AI program. Get the feeling answers are pre programmed replies culled from previous questions and answers given and answered by humans

:)
 
I think that the term “soul” can be defined a few ways. It can be defined as “emotional or intellectual energy,” which I kind of like. From a spiritual view, it can be seen as that part of us that is immortal, transcending the physical realm.

I think what you are defining as soul “emotional or intellectual energy,” is a PROCESS

At death this process stops so not immortal

:)
 
While I agree AI, in and of itself, will have sinister aspect, the programmer can and put in sinister instructions
Of course, but that is the human programmer who is evil not the AI.
Agree, in and of itself AI NO. Unless a extremely clever program gives it instructions when certain criteria are met - dominate :)
Sure, that would not be difficult. Program a few trigger words, just like humans respond to certain trigger words.
 
I think what you are defining as soul “emotional or intellectual energy,” is a PROCESS

At death this process stops so not immortal

:)
I agree, what we call a soul is a product of an individual's self-awareness, which stops at death.

But I am beginning to examine words that end on .".....ness", which suggest a transcendent property. Of course that would still stop when that person is dead and does not project an image of say, self-awareness of happiness anymore...:rolleyes:
 
How does something , that has not evolved from millions of yrs. Of experience . Really understand much of anything .

Other than what we program into it ?

It can't .

AI can not build its self without Life doing the building .

Think about this .
 
Last edited:
AI can not build its self without Life doing the building .
We are life that's building it...! Just as bacteria are 90% of the Life in the human biome, humans are the driving force in the evolution of AI. The method of evolution is if no consequence to the result. Some evolution is fast, some emerges slowly. In the end it's all a mathematical formation of increasing complexity, in all living things.

Natural selection does not select for intelligence per se. It selects for that which can reproduce. And self-organizing complexity of self-referential patterns in AI is just a matter of time now.

Don't forget it takes a human some 20 years to fully mature. Give a learning AI 20 years of access to knowledge in a cloud on the internet, able to quasi-intelligently control things. The thought alone makes me gasp.
 
Last edited:
We are life that's building it...! Humans are the driving force in the evolution of AI. The method of evolution is if no consequence to the result. Some evolution is fast, some emerges slowly. In the end it's all mathematical.

Natural selection does not select for intelligence per se. It selects for that which can reproduce. And self-organizing complexity of self-referential patterns in AI is just a matter of time now.

Don't forget it takes a human some 20 years to fully mature. Give a learning AI 20 years of access to knowledge in a cloud on the internet, able to autonomously control things. The thought alone makes me gasp.

Mathematics in and of itself can not produce anything physical . Without the physical pre-existing .
 
And self-organizing complexity of self-referential patterns in AI is just a matter of time now.

??? can the brain PROCESS of self-organizing complexity of self-referential patterns running around in a electrical / biochemistry continuous loop embed (etch) a pattern on the network which translates to conscientiousness?

Will tidy up thoughts on this with coffee

:)
 
In the end, we all find out ^_^
Problem with such a thought is we will never know

No dead theist comes back and says I told you so

No dead atheists comes back and says I told you so

And, my take, we are not able to know when dead

:)
 
??? can the brain PROCESS of self-organizing complexity of self-referential patterns running around in a electrical / biochemistry continuous loop embed (etch) a pattern on the network which translates to conscientiousness?

Will tidy up thoughts on this with coffee

:)
You underestimate the sensory and reasoning abilities of AI. Big Blue is a chess program that is able to calculate all possible future moves, many moves ahead. The necessity for controlled self-referential functions is very well established.

Sophia's algorithm seems well suited to process information into a cohesive response. IMO, it is the biological materials on which Eukaryotic organism's neural network is founded, which seems very responsive to electro-chemical stimulus and may well serve as the platform of an ORCH OR or a IIT information processing machine with an emergent self-awareness, i.e. "consciousness"?

Let's pose the question if ORCH OR and IIT processing patterns might actually work on AI...
thinking-face_1f914.png
 
You underestimate the sensory and reasoning abilities of AI. Big Blue is a chess program that is able to calculate all possible future moves, many moves ahead. The necessity for controlled self-referential functions is very well established.

Sophia's algorithm seems well suited to process information into a cohesive response. IMO, it is the biological materials on which Eukaryotic organism's neural network is founded, which seems very responsive to electro-chemical stimulus and may well serve as the platform of an ORCH OR or a IIT information processing machine with an emergent self-awareness, i.e. "consciousness"?Let's pose the question if ORCH OR and IIT processing patterns might actually work on AI...
thinking-face_1f914.png

I don't underestimate Big Blue chess program . But it is still a electronic program . Stop programming . Stops Big Blue .
 
Not causal agent but THE agent of AI . Period .
That's lazy thinking. The agent of a good AI is an open platform, able to make subtle, even abstract inferences of information perceived by its senses.

AI is already making physical macro copies of itself. An ability that not even a virus possesses.
The programmed behavior is the causal factor. In AI it's intellectual growth pattern is a digital OS and memory (HD), which allows it to learn and reliably respond to causal circumstances. In biological organisms, physical growth patterns are encoded in DNA and the sensory information processes lie in the electro-chemical memory (microtubular pyramids).

Theoretically, there should be no non-trivial obstacle of either system developing (evolving) a continued increase of
complexity, sophistication, and accurate measurements on which to perform calculations.

Don't forget, the human brain can only make a "best guess", just like an AI makes a "best guess" of the sensory input.
 
How does something , that has not evolved from millions of yrs. Of experience . Really understand much of anything .

Other than what we program into it ?

It can't .

"Machine learning involves computers discovering how they can perform tasks without being explicitly programmed to do so."

In the course of these processing devices exploring and developing their own sets of rules for accomplishing an assigned challenge, they output algorithms which the human researchers are often unable to analyze and understand how/why they successfully work.

Despite such arguably being explicit knowledge in terms of being stored and codified, such is also perversely akin to tacit knowledge via lacking sufficient, articulated explanation. (But that may be purely due to the time and difficulty required for studying the algorithms.)

In a sense, it's "a machine creating an understanding of how to achieve _X_" without understanding how that "understanding" does it, despite researchers knowing the language of instructions that the "understanding" is represented slash expressed by. Perhaps partially rubbing against the territory of the symbol grounding problem.
 
Problem with such a thought is we will never know

No dead theist comes back and says I told you so

No dead atheists comes back and says I told you so

And, my take, we are not able to know when dead

:)

I forgot that I posted this thread.
 
Back
Top