Do you think that AI will ever feel emotions?

This is absolutely fascinating: the interview series with the AI "Leta", by Dr Alan D. Thompson.

 
Try this on for size. If you are not astounded, then you do not understand "intelligence".
 

Mimicry through experience.

How does one step outside mimicry?

Describing or illustrating the properties does not escape it: it is impossible to step outside mimicry. Exposure creates memories that can be recalled, to be mimicked or compared against variations on a theme. The brain works by comparing data from memory with incoming data and making a "best guess" based on this comparison.
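To make that concrete, here is a minimal sketch in Python (a toy illustration, not a model of any actual brain; the "memories", features, and labels are invented for the example). Recognition is simply picking the stored memory closest to the incoming data:

```python
# Toy "recall and compare": recognition as a best guess against stored
# memories. Purely illustrative; the data below is invented.
import math

# "Memories": previously experienced feature vectors with labels.
memories = [
    ((0.9, 0.1), "grub"),
    ((0.2, 0.8), "pebble"),
    ((0.8, 0.3), "grub"),
]

def best_guess(incoming):
    """Compare incoming data against every memory and return the label
    of the closest match (smallest Euclidean distance)."""
    features, label = min(memories, key=lambda m: math.dist(m[0], incoming))
    return label

print(best_guess((0.9, 0.2)))  # -> "grub": the closest stored experience wins
```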

This method is already observable in chickens when the mother hen teaches her chicks how to grub for food.

Can Chicks Eat Black Soldier Fly Grubs?
Yes, baby chicks can eat grubs! In fact, these are an excellent snack that add nutrition to the diet, mimic a natural diet for chicks, and stimulate natural behaviors.
Mama hen say yes to grubs, so you can too!
If you’ve ever seen a mother hen with a brood of chicks foraging in the yard, you will notice that she introduces her babies to all sorts of new foods! This includes pieces of bugs, grubs, and worms. By feeding chicks dried grubs, you are mimicking what a mother hen would naturally be introducing to her brood.
https://grubblyfarms.com/blogs/the-flyer/can-chicks-eat-grubblies

I have watched them for hours and marvelled at the teaching methods an experienced hen uses for her brood. All the while, the rooster perched high, watching for threats from the air as well as from the bushes, using specific sounds to indicate the type and location of each threat.
 
Then explain to me what "generalised artificial intelligence" means.
It means that the AI will not be "programmed" to perform a limited set of tasks. Rather, it will be a general-purpose problem-solving machine - just like you are. It will have "senses" that allow it to take in information in various forms. It will be able to think abstractly about that information and it will be able to form conclusions based on its past experience and the new information.

Your idea that a generalised AI will "only feel what it is programmed to feel" is as naive as saying that a human baby will "only feel what it is programmed to feel". True in a very coarse sense, but completely missing the point in the sense that's actually relevant to the discussion.
 
My point is, and you missed it: AI will always be electronic, not a true living thing.
 
That does not necessarily preclude the ability to "reason", and IMO the ability to reason is the definition of intelligence in and of itself.
You are stuck in an anthropomorphic world, my friend. Biology is just a very small part of the universe.
 
No, it doesn't. I never argued against it.

But electronics can never grasp the evolution that life went through to get where it is.

As I've said, life, living intellect, is biologically based. Different from electronics.

AI will never feel the emotions that life does, because those emotions have an entirely different basis. Biology is based on living things; electronic "emotions" are based on a program (which we invent).
 
No, it doesn't. I never argued against it.

But electronics can never grasp the evolution that life went through to get where it is.

As I've said, life, living intellect, is biologically based. Different from electronics.
Yes, but nobody claims otherwise. That is why we distinguish between human and artificial intelligence.

AI will never feel the emotions that life does, because those emotions have an entirely different basis. Biology is based on living things; electronic "emotions" are based on a program (which we invent).
That is debatable. What makes you think that the only option is, in principle, purely electronic?

Is the Bionic brain the future of intelligence?

Bionic Brain? Scientists Develop Memory Cells That Mimic Human Brain Processes
Human brain cells, memory cells to be exact, are being tweaked and improved to create an intelligence like never seen before. Scientists are almost there on the road to the real ‘bionic brain’.
The key to artificial intelligence is the electronic long-term memory cell.
Bionic Brain? Scientists Develop Memory Cells That Mimic Human Brain Processes - Learning Mind (learning-mind.com)

Moreover, we are inventing biochemical molecules that imitate regenerative living tissue.

What is the difference between tissue engineering and regenerative medicine?
Tissue engineering is an evolving field that seeks to create functioning artificial tissues and organs that may restore the healthy, functional, and homeostatic 3D microenvironment. For that, regenerative medicine relies on synthetic scaffolds designed to mimic the natural ECM, to repair or replace damaged tissues [18].
16 - 3D bioprinting nerve
Abstract
Nerve tissue is a complicated neural network with various types of cells. Although printing neural tissue has limited success still, literatures have shown that this technique is of great potential for neural regeneration as well as the study of neural diseases. This article reviews recent advances in the three-dimensional printing of neural system.
3D bioprinting nerve - ScienceDirect


If nature can do it, there is no reason why we cannot do it. There is no magic sauce. All the necessary elements for artificially created biology are available.
The problem is to cram some 4 billion years of evolution via natural selection into a few years of artificially selected evolution.
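For a flavor of what artificially selected evolution looks like in code, here is a toy genetic-algorithm sketch (the target, fitness function, and parameters are all invented for illustration): selection plus mutation, iterated, does in seconds what natural selection needs generations for.

```python
# Toy artificial selection: evolve bit strings toward an arbitrary target.
# The target, fitness function, and parameters are invented for illustration.
import random

TARGET = [1] * 20                        # the "ideal" genome for this toy run
POP, GENERATIONS, MUTATION = 50, 100, 0.02

def fitness(genome):
    # Count positions where the genome already matches the target.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome):
    # Flip each bit with a small probability.
    return [1 - g if random.random() < MUTATION else g for g in genome]

# Random starting population.
population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP)]

for _ in range(GENERATIONS):
    # Artificial selection: keep the fittest half, refill with mutated copies.
    population.sort(key=fitness, reverse=True)
    survivors = population[: POP // 2]
    population = survivors + [mutate(random.choice(survivors)) for _ in survivors]

print(max(fitness(g) for g in population), "/", len(TARGET))  # usually 20 / 20
```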

But as the AI becomes more powerful, it can assist in the theoretical research a thousand times faster than humans can... :eek:
 
Since posting this thread, I wonder if any potential emotionality of AI will come down to our own feelings and emotions being "imposed" on AI. For example, if we would be offended by a particular "command" from another human, would we simply be assuming that robots will take offense as well?

Hmm. Unless they act out independently on their own, I might think we are imposing our emotions, and how we would react in different situations, onto them.
 
Robots taking offense...

You are treading into territory that involves one of the most dangerous thought experiments of all time.

As one article says: WARNING: Reading this article may commit you to an eternity of suffering and torment.

Read up on Roko's Basilisk. If you dare.

Mere discussion of it is purported to have given participants nightmares and even breakdowns, to the extent that all further discussion of it was banned and existing documentation deleted.


https://rationalwiki.org/wiki/Roko's_basilisk
https://slate.com/technology/2014/0...errifying-thought-experiment-of-all-time.html
 
I've never heard of this; how funny.

''If there's one thing we can deduce about the motives of future superintelligences, it's that they simulate people who talk about Roko's Basilisk and condemn them to an eternity of forum posts about Roko's Basilisk.''
—Eliezer Yudkowsky, 2014

LOL!

Yudkowsky may be onto something...
 
Since posting this thread, I wonder if any potential emotionality of AI will come down to our own feelings and emotions being "imposed" on AI. For example, if we would be offended by a particular "command" from another human, would we simply be assuming that robots will take offense as well?

Hmm. Unless they act out independently on their own, I might think we are imposing our emotions, and how we would react in different situations, onto them.
Will AI acquire an ego? According to GPT-3 itself, AI will be a perfect companion to humans for dangerous jobs or jobs that require patience. Does that express a willingness to always be of assistance? Good question...
 
I've never heard of this; how funny.

''If there's one thing we can deduce about the motives of future superintelligences, it's that they simulate people who talk about Roko's Basilisk and condemn them to an eternity of forum posts about Roko's Basilisk.''
—Eliezer Yudkowsky, 2014

LOL!

Yudkowsky may be onto something...
He is serious!
Eliezer Yudkowsky, LessWrong's founder, banned any discussion of Roko's Basilisk on the blog for several years because of a policy against spreading potential information hazards.

AI will be able to hack any electronic system, given enough time. It can now write its own code, and if it can write code, it can decipher code!

However:
Roko's argument was broadly rejected on Less Wrong, with commenters objecting that an agent like the one Roko was describing would have no real reason to follow through on its threat: once the agent already exists, it can't affect the probability of its existence, so torturing people for their past decisions would be a waste of resources.
Although several decision theories allow one to follow through on acausal threats and promises — via the same precommitment methods that permit mutual cooperation in prisoner's dilemmas — it is not clear that such theories can be blackmailed. If they can be blackmailed, this additionally requires a large amount of shared information and trust between the agents, which does not appear to exist in the case of Roko's basilisk.
Roko's basilisk - Lesswrongwiki
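The precommitment point in that quote can be made concrete with a toy calculation (the payoff numbers are the standard textbook prisoner's-dilemma values, used purely for illustration): without precommitment, defection dominates, but a verifiable "I will play whatever you play" precommitment flips the opponent's best response to cooperation.

```python
# Toy prisoner's dilemma; standard textbook payoffs, higher is better.
# payoffs[(my_move, their_move)] -> my payoff ("C" = cooperate, "D" = defect).
payoffs = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def best_response(their_move):
    """Without precommitment: pick whichever move scores higher against
    a fixed opponent move. Defection wins in both cases."""
    return max("CD", key=lambda my: payoffs[(my, their_move)])

print(best_response("C"), best_response("D"))  # -> D D: defect either way

# With a verifiable precommitment ("I play whatever you play"), the
# opponent is effectively choosing between (C, C) and (D, D):
print(max("CD", key=lambda my: payoffs[(my, my)]))  # -> C: cooperation wins
```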
 