Do you think that AI will ever feel emotions?

Especially GPT3, which produces intelligent responses that easily pass the Turing test.
The Turing test isn't about the "intelligence" of certain responses but about the overall discussion, a discussion that can go in any direction at all, at the questioner's discretion.

Have a search on Google about how to fool GPT3.

Yes, it communicates in a very natural way, which is what it was designed for, but it is not yet capable of passing a Turing test. It won't, for example, say that it doesn't know an answer, but will instead try to come up with one, which gives the game away.

The video of the conversation was indeed impressive, but it was also limited in scope, the questioner not veering off on tangents, not trying to trick GPT3.
 
The Turing test isn't about the "intelligence" of certain responses but about the overall discussion, a discussion that can go in any direction at all, at the questioner's discretion.

Have a search on Google about how to fool GPT3.

Yes, it communicates in a very natural way, which is what it was designed for, but it is not yet capable of passing a Turing test. It won't, for example, say that it doesn't know an answer, but will instead try to come up with one, which gives the game away.
I understand, but is this any different than an ill-prepared person taking a test and guessing at many answers? They would not pass a Turing test either.
The video of the conversation was indeed impressive, but it was also limited in scope, the questioner not veering off on tangents, not trying to trick GPT3.
The point I am trying to make is that if you need a trick test to establish that an AI is in fact not human, it is well on its way to true intelligence. We are not looking for Einstein, although an AI might actually score very high on publicly available science.

Have you had a chance to listen to the music comparisons? The two Japanese violinists were fooled by several compositions. Granted, the examples were of short duration and the compositions were performed by human musicians, which of course added a human "touch".

Another clip of AI actually playing a violin is truly horrendous. It would just as soon break the violin as play it....:confused:
 
I understand, but is this any different than an ill-prepared person taking a test and guessing at many answers?
Yes, in some cases it is much worse, and clearly comes across as an AI struggling.
They would not pass a Turing test either.
The point of a Turing test isn't to get the right answers! It's to convince the panel of people conversing with it that they are talking to a human. How an AI handles questions it doesn't know, or that don't make sense, makes up a big part of that. Ask someone you don't know "how many Spurgle make up a Nurgle" and most will probably respond "What's a Spurgle or a Nurgle?" or some such. Few would jump straight to a made-up answer of "three" as a guess. And ask the AI enough such questions and it would become obvious to most that it was an AI - thus failing the test.
The point I am trying to make is that if you need a trick test to establish that an AI is in fact not human, it is well on its way to true intelligence.
First, whether the Turing test is a test for "true intelligence" is rather disputed, and not something to be taken as a given. Second, it is things like being able to handle trick questions that set a "truly intelligent" AI above the rest. Being able to answer questions in a naturalistic way, while being unable to respond sensibly to questions that rely on meaning rather than just language structure... that's not going to cut it.
We are not looking for Einstein, although an AI might actually score very high in publicly available science.
If it's just a matter of regurgitating what is in the public domain, sure, Watson could probably do that. But is that all intelligence is?
Have you had a chance to listen to the music comparisons?
No, not yet.
 
I understand, but is this any different than an ill-prepared person taking a test and guessing at many answers? They would not pass a Turing test either. [...]

To make mentally adept people the standard that AI has to attain is a kind of backhanded slap at village idiots and the intellectually disabled (whatever the very bottom of that spectrum is, short of being comatose). The latter qualify as human purely through biological factors and the rights that follow from them, not through any capacity to pass a test devised for AIs. In some cases there are "normal" but wildly disoriented individuals whose disorganized communication and reasoning would actually be construed online as a bot (by many). That might be attributed to intoxication or a missed medication protocol, but who really knows from a distance...

The range of human intelligence
https://aiimpacts.org/is-the-range-...mall/#Low_human_performance_on_specific_tasks
 
I understand, but is this any different than an ill-prepared person taking a test and guessing at many answers? They would not pass a Turing test either.
A Turing test is not a test of intelligence based on getting "right answers." It is a test of whether your average person could talk to an AI and determine whether they are talking to a human or an AI.
 
A Turing test is not a test of intelligence based on getting "right answers." It is a test of whether your average person could talk to an AI and determine whether they are talking to a human or an AI.
I understand, but in that interview with GPT3, except for the speed (too fast) of its responses, it is impossible to tell that the man is talking to a machine.

As Sarkus observed, only with much greater sophistication in testing skills would the difference become apparent.

The Go match of AlphaGo against Lee Sedol (a world champion) demonstrates the ability of current AI to think in abstract (intuitive) terms. Go cannot be analyzed by brute computing power; the number of possible moves is astronomical. It requires intuitive recognition of minuscule advantages. After all, the object of the game is to surround territory: the player who ends up with even a single point of territory more than the opponent wins the game.
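To put "astronomical" in numbers, here is a back-of-envelope bound (the arithmetic is my illustration, not from the post): each of the 361 intersections on a 19x19 board can be empty, black, or white.

```python
# Rough illustration of why Go resists brute force: a 19x19 board has
# 361 intersections, each empty, black, or white, giving an upper bound
# of 3^361 board configurations (many are illegal positions, but the
# order of magnitude stands).
intersections = 19 * 19             # 361
upper_bound = 3 ** intersections    # roughly 1.7e172 configurations
print(len(str(upper_bound)))        # prints 173 (a 173-digit number)
```

For comparison, the number of atoms in the observable universe is usually estimated at around 10^80, vastly smaller than this bound.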

But in mathematical skills GPT-3 cannot compete with a program like Maple.

What is Maple: Product Features
Maple is math software that combines the world’s most powerful math engine with an interface that makes it extremely easy to analyze, explore, visualize, and solve mathematical problems. With Maple, you aren’t forced to choose between mathematical power and usability, making it the ideal tool for both education and research.
https://www.maplesoft.com/products/Maple/features/

GPT3 is a different beast altogether. It does not retrieve answers by fixed rules but, somewhat as humans do, works through a series of self-referential guesses: at each step it predicts likely continuations of the text produced so far and selects what it scores as the best one.
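A toy sketch of that "series of guesses" idea (this illustrates autoregressive, best-guess-next-word generation in general, not GPT-3's actual code; the tiny probability table is entirely made up):

```python
# Hypothetical next-word table: next_word[current] -> list of
# (candidate, score). A real language model learns billions of
# parameters to produce such scores; here they are hand-written.
next_word = {
    "<start>": [("the", 0.6), ("a", 0.4)],
    "the":     [("cat", 0.5), ("dog", 0.3), ("the", 0.2)],
    "cat":     [("sat", 0.7), ("ran", 0.3)],
    "sat":     [("<end>", 1.0)],
}

def generate(start="<start>", max_len=10):
    """Repeatedly pick the best-scoring guess for the next word."""
    words, current = [], start
    for _ in range(max_len):
        current = max(next_word[current], key=lambda pair: pair[1])[0]
        if current == "<end>":
            break
        words.append(current)
    return " ".join(words)

print(generate())  # prints: the cat sat
```

Each step feeds the text generated so far back in as context for the next guess, which is the "self-referential" part.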

GPT-3
Generative Pre-trained Transformer 3 (GPT-3) is an autoregressive language model that uses deep learning to produce human-like text. It is the third-generation language prediction model in the GPT-n series (and the successor to GPT-2) created by OpenAI, a San Francisco-based artificial intelligence research laboratory.[2] GPT-3's full version has a capacity of 175 billion machine learning parameters. GPT-3, which was introduced in May 2020 and was in beta testing as of July 2020,[3] is part of a trend in natural language processing (NLP) systems of pre-trained language representations.[1] Before the release of GPT-3, the largest language model was Microsoft's Turing NLG, introduced in February 2020, with a capacity of 17 billion parameters or less than a tenth of GPT-3's.[4]
The quality of the text generated by GPT-3 is so high that it is difficult to distinguish from that written by a human, which has both benefits and risks.[4] Thirty-one OpenAI researchers and engineers presented the original May 28, 2020 paper introducing GPT-3. In their paper, they warned of GPT-3's potential dangers and called for research to mitigate risk.[1]:34 David Chalmers, an Australian philosopher, described GPT-3 as "one of the most interesting and important AI systems ever produced."[5]
https://en.wikipedia.org/wiki/GPT-3#

The largest deep neural networks are composed of a few billion parameters.
(GPT-3 uses 175 billion!)

But then there is the Human Brain:
The human brain, in contrast, is constituted of approximately 1,000 trillion synapses, the biological equivalent of ANN parameters. Moreover, the brain is a highly parallel system, which makes it very hard to compare its functionality to that of ANNs.
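A quick back-of-envelope comparison of the two figures quoted above (the parameter and synapse counts are from the article; the arithmetic is just illustration, and of course parameters and synapses are not truly equivalent units):

```python
# Compare the article's numbers: ~1,000 trillion synapses in the
# human brain vs. 175 billion parameters in GPT-3.
synapses = 1_000e12        # 1,000 trillion
gpt3_parameters = 175e9    # 175 billion
ratio = synapses / gpt3_parameters
print(f"{ratio:,.0f}x")    # prints: 5,714x
```

So even ignoring the brain's massive parallelism, it has several thousand times more "parameters" than GPT-3.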

Artificial neural networks are much closer to the human brain than is popularly believed, researchers at Princeton University argue (Image credit: Depositphotos)

Consider the animal in the following image. If you recognize it, a quick series of neuron activations in your brain will link its image to its name and other information you know about it (habitat, size, diet, lifespan, etc…). But if, like me, you've never seen this animal before, your mind is now racing through your repertoire of animal species, comparing tails, ears, paws, noses, snouts, and everything else to determine which bucket this odd creature belongs to. Your biological neural network is reprocessing your past experience to deal with a novel situation.
[Image: a large Indian civet (Viverra zibetha) in Kaeng Krachan National Park]

Our brains, honed through millions of years of evolution, are very efficient processing machines, sorting out the ton of information we receive through our sensory inputs, associating known items with their respective categories.
That picture, by the way, is an Indian civet, an endangered species that has nothing to do with cats, dogs, and rodents. It should be placed in its own separate category (viverrids). There you go. You now have a new bucket to place civets in, which includes this variant that was sighted recently in India.
https://bdtechtalks.com/2020/06/22/direct-fit-artificial-neural-networks/#
 
Our existence is caused by and governed by physics, the measurable behavior of particles. If there is a foreign influence that interferes with the physics of particle behavior that influence would be measurable, and would thus become part of physics.

Any assumption of another immeasurable supernatural influence is neither a logical nor a scientifically defensible position, IMO
 
mechanisms of absolute outcomes
drive what ?
how ?
by what governing property and system ?

how do you pre define that ?

launch all nuclear missiles
lunch all nice girls

code glitch in emotional systems control ?


Our existence is caused by and governed by physics, the measurable behavior of particles. If there is a foreign influence that interferes with the physics of particle behavior, that influence would be measurable, and would thus become part of physics.
Any assumption of another immeasurable supernatural influence is neither a logical nor a scientifically defensible position, IMO


if I didn't think you were up for such a complicated question I would not have asked you.
(I must admit, it is a stupendously complicated question, something to think on for some time; I'm not even sure how I would phrase my own answer to my own question)
;)
:)
 
scientifically defensible
*Art is the creation of that which evokes an emotional response leading to thoughts of the noblest kind* W4U
[this is not my answer to my question just a thinking point]

Emotion does not rely on scientific logic
some might suggest there is a functional process: causative systems of templates, algorithms, and collective sub-group corralling

yet in modern society domestic murder of women by their sexual partner conflicts with moral concepts.

does that mean emotions govern ?

without answering this question on a science based level do we simply hand over critical outcomes to AI ?

does modern science have a good understanding of emotion ?

why are all societies effectively ideologically divided into two camps which are fighting each other ?
 
As Sarkus observed, only with much greater sophistication in testing skills would the difference become apparent.
That's not what I said, or suggested really. The idea of a Turing test is that Joe Public are the judges; they are the ones asking the questions, leading the conversation. One doesn't need anything more sophisticated than being human and being able to ask questions, on any subject they want.

I could write a program that would respond in a very human way if you asked it specific questions. But deviate from those and it would fail. Yes, GPT3 is impressive, but it's not capable (yet) of passing the Turing test, I don't think. On specific matters it may be capable of providing human responses, but that's not what the Turing test is. It is not limited in scope, but is simply a conversation to ascertain whether the one you're talking to is an AI or a human. If you stick to its favourite topic, you're not really doing your job in that regard. ;)
 
Emotion does not rely on scientific logic.
some might suggest there is a functional process: causative systems of templates, algorithms, and collective sub-group corralling
I agree, but that is not what is implied. Art is the representation of the best and the worst of reality. Symmetry, balance, shape, and color are the beautiful aspects of the natural world.
A perfect example is a color-blind person who is introduced to the world of bright colors for the first time.
Almost all, children and adults alike, cannot help but cry from the overwhelming emotions a colorful world elicits.
yet in modern society domestic murder of women by their sexual partner conflicts with moral concepts.
Indeed, and those are representations of the worst aspects of mental abstract fascination and the art it produces, which to sane people is abhorrent, but which also inspires many to practice benign behaviors toward their fellow humans.
The medical world is a perfect example of dedication to helping people maintain good health in the face of the most horrendous mishaps and diseases.

Florence Nightingale
The Florence Nightingale effect is a trope where a caregiver falls in love with their patient, even if very little communication or contact takes place outside of basic care. Feelings may fade once the patient is no longer in need of care.
Origin,
The effect is named for Florence Nightingale, a pioneer in the field of nursing in the second half of the 19th century. Due to her dedication to patient care, she was dubbed "The Lady with the Lamp" because of her habit of making rounds at night, previously not done. Her care would forever change the way hospitals treated patients.
Most consider Nightingale the founder of modern nursing. There is no record of her having ever fallen in love with one of her patients. In fact, despite multiple suitors, she never married for fear it might interfere with her calling for nursing. Albert Finney referred to the effect as the "Florence Nightingale syndrome" in a 1982 interview,[1] and that phrase was used earlier to refer to health workers pursuing non-tangible rewards in their careers.[2]
https://en.wikipedia.org/wiki/Florence_Nightingale_effect

Can an AI develop such dedication? I think so, it's all in the program, Human or Artificial.
 
I could write a program that would respond in a very human way if you asked it specific questions. But deviate from those and it would fail. Yes, GPT3 is impressive, but it's not capable (yet) of passing the Turing test, I don't think. On specific matters it may be capable of providing human responses, but that's not what the Turing test is. It is not limited in scope, but is simply a conversation to ascertain whether the one you're talking to is an AI or a human. If you stick to its favourite topic, you're not really doing your job in that regard. ;)
Oh sure, most robots are built to fill a specific function and lack the complexity of the human brain.
But as I understand it, GPT-3 is based on principles similar to the human brain's, i.e. not purely binary-linear, but self-referential against "verbally (linguistically) identified knowledge". IOW, GPT-3 has access to all narrative explanations of everything, if available on the open internet.

The pictures it painted of an avocado chair from a mere verbal (text) command are remarkably versatile and most are "functional". (see previous posts)

I just ran across a very clear explanation of how GPT-3 works.
You give it a few word-based commands and GPT3 writes the code and produces exactly what you asked for in plain text. Then if you wish to add other refinements, you just ask it to do this or that and GPT3 will modify its code to incorporate the request. Anyway, give this a try and marvel at your brilliant student, who will write you an application based on a plain linguistic request...

 
Nice Demo. But it is just a toy.
No, this is no longer a toy. The ceiling has not even been touched yet. It's just a matter of processing power in a small package; we're just beginning to deal with nano-scale networks.

And the physical abilities for sophisticated mobile and manual dexterity have barely begun to be developed.

Remember that humans are the result of millions of years of evolution and refinement. AI is 100 years old?
 