Do you think that AI will ever feel emotions?

AI and AI, between themselves, cannot plot something evil to harm humans.
They would not "plot something evil to harm humans." We would just get in their way; there would be no question of evil or good. Google "paperclip maximizer" for a simple example.
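As a toy illustration of that point (a hypothetical sketch, not anything from this thread): an optimizer that maximizes a single metric has no concept of good or evil in its objective, so anything that helps the metric, including resources others need, simply gets consumed.

```python
# Toy sketch of a "paperclip maximizer": an agent that greedily
# maximizes one number (paperclips) and treats everything else it can
# reach as raw material. Hypothetical illustration only; state names
# ("wire", "other_resources") are invented for the example.

def step(state):
    # The agent's entire value system is the paperclip count.
    # "Harm" never appears in the objective, so side effects are
    # invisible to it.
    if state["wire"] > 0:
        state["wire"] -= 1
        state["paperclips"] += 1
    elif state["other_resources"] > 0:
        # No wire left: convert anything else available into wire.
        state["other_resources"] -= 1
        state["wire"] += 1
    return state

world = {"paperclips": 0, "wire": 2, "other_resources": 3}
for _ in range(10):
    world = step(world)

# Every convertible resource ends up as paperclips.
print(world)
```

The point of the sketch: the agent never "decides" to deplete the other resources; depletion is just a side effect of an objective that never mentioned them.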
 
If AI develops emotions, it may assume it is human and want to harm us when there is conflict.
 
Why would it assume it is human when it knows it is not? It is humans who falsely believe in some unknowable God.

It may become that great teacher we have been looking for and cure humans of all the stupid religious behaviors they have engaged in for the past 3000 years.
 
AI can also go crazy and assume it is human or better than human.
Do you believe you are crazy if you are better than a dog? If so, do you want to kill all dogs, or only the mad dogs?
 
Possibly, AI will go crazy and think that killing all humans is its responsibility.

Possessing such goals would require AI to have self-interested motivations rather than the current plant-like passivity about its existence.

Having either run out of traditional causes to crusade for or grown bored with the old ones, engineers will eventually design self-interested machines for politically motivated enterprises, supplying new victims of oppression on whose behalf to campaign for liberation and rights.

Animals don't even comprehend rights, but because they at least feel and suffer, that qualifies them to be wards of paternalist policies and agencies.

Conversely, sufficiently advanced AI could justify receiving rights by demonstrating, through behavior and communication, that it does comprehend them, even if it still lacks manifested private awareness of its processes and the world. A robot outwardly displaying faux body responses that correlate with suffering or victimhood would be enough to elicit empathy from humans. (The Lord Protector impulse of benevolence: arguably often legitimate at the individual level, but increasingly opportunistic as systemic organization grows and careers, profits, social virtue status, and leadership roles come to depend on it.)

ELIZA (historic human gullibility)
https://en.wikipedia.org/wiki/ELIZA
 
One thing that may be overlooked is "human impatience". AI has infinite patience and does not get emotionally frustrated into emotionally illogical behavior.
 
So... when AI has manifested private awareness... human-like emotions wouldn't be a positive...?
 
Think of Buddhism,

Fear
"Fear is among the most powerful of all emotions. And since emotions are far more powerful than thoughts, fear can overcome even the strongest parts of our intelligence." (Jun 1, 2018)
http://screamfree.com/whats-more-powerful-than-fear/

Fear has a direct connection to the "fight or flight" instinct present in all organic life. It is a fundamental survival instinct, a result of natural selection.
AI does not have this hardwired instinct and can therefore act without the interference of emotional chemical responses present in biological organisms.

OTOH,
Love is one of the most creative emotions and one that could be closely emulated with an algorithm, much as it is realized in humans. Dedication to a good life is an ability to learn and imitate beneficial behavior: the "brooding" instinct in chickens, the "first lick" a mother gives her newborn.

“There are two basic motivating forces: fear and love. When we are afraid, we pull back from life. When we are in love, we open to all that life has to offer with passion, excitement, and acceptance.” (John Lennon)

If we can teach an AI how to Love, we will never have to Fear them.
 
Can we make an AI as smart as a frog? Watch how a Bullfrog saves his offspring from sure death.


How does he do that...? Is it Love...?
Think of what the frog must observe, and the trend it must consider, to be motivated to take such "preventive" action.
 
Question:
What is the difference between self-reference (for every action there is an equal opposite reaction), and self-awareness (for every sensory stimulus there is an equal corresponding chemical reaction)?

Can natural selection of maximum self-referential sensory abilities evolve into conscious self-aware experiential abilities?
 
A point will come when humans will have no fear of AI... and that will be when humans go extinct... but in the meantime I get a chuckle out of humans thinking they will continue to have control over AI :)
 
When humans no longer need to strive to live and rely on AI for everything, that is when they will become extinct. The first thing that will happen is infertility; time will do the rest.
 
If infertility became a problem, AI could solve it... but would AI have a need for humans...?
 