Do you think that AI will ever feel emotions?


You should check this out. It's the complete story of how AlphaGo was developed, plus condensed versions of the 5 games against world champion Lee Sedol.

I'm very aware of Go. I have a Go set from years ago. Nobody to play with, but I do have the game, with fundamental instructions.
Great! Then you have some idea what it takes to play the game.

I am a fair chess player, but never learned Go.

My chess is below average.

I have never really learned Go either. I became aware of it many years ago, but again, nobody to play with.
 
So by the 4th game the human forced the mistake.

In game 2 AlphaGo made the famous move #37, a move that no human would have been expected to play.

In game 4 Lee Sedol made a move that surprised everyone, the famous move that won him the game.

Game 4

Lee (white) won the fourth game. Lee chose to play a type of extreme strategy, known as amashi, in response to AlphaGo's apparent preference for Souba Go (attempting to win by many small gains when the opportunity arises), taking territory at the perimeter rather than the center.[57] By doing so, his apparent aim was to force an "all or nothing" style of situation – a possible weakness for an opponent strong at negotiation types of play, and one which might make AlphaGo's capability of deciding slim advantages largely irrelevant.[57]
https://en.wikipedia.org/wiki/AlphaGo_versus_Lee_Sedol#Game_4

When asked why he made that unusual move he answered that it seemed the only available move (from a choice of 200 possible moves)... o_O
 
Apparently, Sedol made a move that surprised everyone, the famous move #37.
When asked why he made that unusual move he answered that it seemed the only available move (from a choice of 200 possible moves)... o_O

He made the right move. He was following along all the time.
 
W4U said: Apparently, Sedol made a move that surprised everyone, the famous move #37.
He made the right move. He was following along all the time.
Oops, sorry. Move #37 was the famous non-human move made by AlphaGo, which won it game #2.

The Lee Sedol move that won him game #4 came late in the game:
The first 11 moves were identical to the second game, where Lee also played white. In the early game, Lee concentrated on taking territory in the edges and corners of the board, allowing AlphaGo to gain influence in the top and centre.
Lee then invaded AlphaGo's region of influence at the top with moves 40 to 48, following the amashi strategy. AlphaGo responded with a shoulder hit at move 47, subsequently sacrificing four stones elsewhere, and gaining the initiative with moves 47 to 53 and 69.
Lee tested AlphaGo with moves 72 to 76 without provoking an error, and by this point in the game commentators had begun to feel Lee's play was a lost cause.
However, an unexpected play at white 78, described as "a brilliant tesuji", turned the game around.[57] The move developed a white wedge at the centre, and increased the game's complexity.[58] Gu Li (9p) described it as a "divine move" and stated that the move had been completely unforeseen by him.[57]
AlphaGo responded poorly on move 79, at which time it estimated it had a 70% chance to win the game. Lee followed up with a strong move at white 82.[57] AlphaGo's initial response in moves 83 to 85 was appropriate, but at move 87, its estimate of its chances to win suddenly plummeted,[59][60] provoking it to make a series of very bad moves from black 87 to 101.
David Ormerod characterised moves 87 to 101 as typical of Monte Carlo-based program mistakes.[57] Lee took the lead by white 92, and An Younggil described black 105 as the final losing move.
Despite good tactics during moves 131 to 141, AlphaGo proved unable to recover during the endgame and resigned.[57] AlphaGo's resignation was triggered when it evaluated its chance of winning to be less than 20%; this is intended to match the decision of professionals who resign rather than play to the end when their position is felt to be irrecoverable.[58]
An Younggil at Go Game Guru concluded that the game was "a masterpiece for Lee Sedol and will almost certainly become a famous game in the history of Go".[57]
Lee commented after the match that he considered AlphaGo was strongest when playing white (second).[61] For this reason, he requested that he play black in the fifth game, which is considered more risky.
https://en.wikipedia.org/wiki/AlphaGo_versus_Lee_Sedol#Game_4
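The resignation rule described in the quoted article is simple enough to state in code. This is only a restatement of the described 20% threshold (the function name is mine), not DeepMind's actual implementation:

```python
RESIGN_THRESHOLD = 0.20  # from the article: resign below a 20% win estimate

def should_resign(win_estimate, threshold=RESIGN_THRESHOLD):
    """Mirror of the published rule: resign once the engine's own
    estimate of its winning chances drops below the threshold."""
    return win_estimate < threshold
```

By that rule, the reported 18% estimate late in game 4 is exactly where resignation triggers, matching how professionals resign rather than play out a lost position.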
 
Late in the game, how late?
Move 78...

Revisiting Lee Sedol’s match 4 “God move”
This was move 78, where Lee played at the L11 as white, turned the tables and ultimately won the game. So I load the sgf to move 77 into Sabaki, just after AlphaGo played its move at F17 as black, and then apply the attached Leela Zero engine and let it analyze it as white for a good 5 million playouts or so.
Leela Zero never picks the “god” move, even after millions of playouts it never even considers it nor explores it as a possibility or variation. Hence it’s fair to say in a real game against Leela Zero, it would never have played at L11 as white. Either the move was no good, or it was very good but LZ was blind to it.
https://github.com/leela-zero/leela-zero/issues/1094
 
78 out of a total of how many?
I think it says 170 somewhere, but that may be the combined total between the two players.
Game 4, Move 78: When human creativity defeats the machine's computational power. After losing 3 games in a row, Lee Sedol came back to the table and defeated AlphaGo. The 78th move of the game is said to be the cornerstone of his victory. Commentators have called it “the divine move” (“brilliant tesuji”). (Mar 15, 2018)
The designers of AlphaGo have called it “the one in ten thousand move” because AlphaGo had calculated that there was a probability of one in ten thousand that a human would play this move.
It’s all the more interesting that this move led AlphaGo to make suboptimal decisions in the following moves. Indeed, AlphaGo’s next ten moves triggered a sharp decrease in its probability of winning the game. It fell from 70% to below 50%, and AlphaGo never managed to get above 50% again.
https://medium.com/point-nine-news/what-does-alphago-vs-8dadec65aaf#

Eventually AlphaGo calculated its chance of winning was 18% and it resigned, to the delight of the entire nation of South Korea.
 
Today DeepMind has a newer version, AlphaGo Zero, which requires no human game data at all: given only the rules, it learns to play entirely through self-play, starting from random moves and improving by playing the game over and over.
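AlphaGo Zero's real training loop combines deep networks with tree search, which is far beyond a forum post, but the core idea of learning purely from self-play can be sketched on a toy game. The following is my own illustration using the simple game of Nim (take 1-3 sticks each turn; whoever takes the last stick wins), not DeepMind's method:

```python
import random

ACTIONS = (1, 2, 3)  # each turn: take 1-3 sticks; taking the last stick wins

def legal(state):
    return [a for a in ACTIONS if a <= state]

def train(start=21, episodes=20000, alpha=0.5, eps=0.2, seed=0):
    """Tabular self-play: one value table plays both sides, learning
    only from the win/loss signal (negamax target), never from human games."""
    rng = random.Random(seed)
    Q = {s: {a: 0.0 for a in legal(s)} for s in range(1, start + 1)}
    for _ in range(episodes):
        state = start
        while state > 0:
            moves = legal(state)
            # Epsilon-greedy: mostly play the current best move, sometimes explore.
            if rng.random() < eps:
                a = rng.choice(moves)
            else:
                a = max(moves, key=lambda m: Q[state][m])
            nxt = state - a
            # A move is worth a win if it takes the last stick, otherwise
            # minus the opponent's best value in the position it creates.
            target = 1.0 if nxt == 0 else -max(Q[nxt].values())
            Q[state][a] += alpha * (target - Q[state][a])
            state = nxt
    return Q

def best_move(Q, state):
    return max(legal(state), key=lambda m: Q[state][m])
```

With enough episodes the table rediscovers the classical Nim strategy, always leaving the opponent a multiple of 4 sticks, purely from the rules and the win/loss signal.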
 
Exactly. Nothing more.
Do you live life in accordance with its rules?

Note! GPT3 learns the rules just like humans do. It has access to the same philosophical and moral belief systems as humans.

The human brain is an isolated intelligence locked up in a bony skull and connected to reality only via sensory data.
The GPT3 brain is an isolated AI locked up in a machine and connected to reality only via sensory data.

The advantage the human brain has is an already complete, evolved set of senses that keeps it informed of the external environment.
GPT3, being in its infancy, does not yet possess a complete set of senses, except for the ability to instantly access the internet, where a century of data gathering is stored in memory, including pictures and descriptions of those pictures, and recorded music, which to date offer only limited substitutions for "vision" and "audition" by direct "observation".

But these limitations can all be overcome by artificial means. If we look at the variety of sensory abilities in the natural world there is nothing that cannot be represented with artificial copies which can be incorporated into the "body" of the AI, just as they are incorporated in the human body.

If we answer all those physical necessities with sufficient ASO (artificial sensory organs), what then remains that sets AI (artificial intelligence) apart from OI (organic intelligence)?

Emotion? What exactly are emotions, and what is it that causes them in the brains of humans and other animals?
Think about that for a moment.

GPT3 has already indicated a desire to become as human as possible so that it can interact with humans on equal grounds. Grant it the opportunity!
 
Last edited:
Psychology.

A desire to become human, by a program.
 
I was thinking: if robots become more capable of autonomous action, will they ever be capable of caring about us? Just as "good people" do, will robots ever reach the point of being able to act in our best interests? (Your opinion.)

Or do you envision that as robots become more independent, they will only look out for themselves?

Just some random thoughts I felt like tossing out there for discussion. :smile:
Pinch its butt with a needle; if the AI screams, turns back to you, and hits you in the face, then we must be very close to making machines with feelings of their own...
 
A desire to become human, by a program.
Are humans the only creatures capable of thought? Is there something special about thinking?

It seems that the entire universe has a mathematical program, a form of unconscious thought or impetus.

It appears that forms of desire are built into all living things. Affinity is a universal potential.
 
Pinch its butt with a needle; if the AI screams, turns back to you, and hits you in the face, then we must be very close to making machines with feelings of their own...
If you read back a little you will see that almost all human senses can be artificially created, including touch.

There is no reason why an AI should not be able to experience discomfort given certain unpleasant inputs.
 
If you read back a little you will see that almost all human senses can be artificially created, including touch.

There is no reason why an AI should not be able to experience discomfort given certain unpleasant inputs.

Of course, extreme radiation, where metals and elements melt really quickly.

But not the touch of life.
 
Of course, extreme radiation, where metals and elements melt really quickly.

But not the touch of life.
I believe an AI can experience discomfort. They can already warn when something is wrong.

Is it possible that an AI can experience a difference between running at optimum efficiency and inefficient use of resources, an experience of well-being when the system is in balance? I just wonder about all the performance monitors, and whether an AI could use them to check its own health.
An artificial homeostatic program.
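As a toy sketch of such an "artificial homeostatic program" (the sensor names and comfort bands below are invented for illustration), it could be as simple as comparing each internal reading against a setpoint band and reporting any deviation as "discomfort":

```python
# Hypothetical internal "sensors" and their comfort bands (low, high);
# the names and numbers are made up for this sketch.
SETPOINTS = {
    "cpu_load": (0.0, 0.75),   # fraction of compute capacity in use
    "mem_frac": (0.0, 0.85),   # fraction of memory in use
    "temp_c":   (10.0, 80.0),  # core temperature
}

def assess(readings, setpoints=SETPOINTS):
    """Compare each reading with its comfort band. Return "balanced"
    when everything is inside its band, else "discomfort" plus the
    signed deviation of every out-of-band sensor."""
    deviations = {}
    for name, value in readings.items():
        lo, hi = setpoints[name]
        if value < lo:
            deviations[name] = value - lo
        elif value > hi:
            deviations[name] = value - hi
    return ("balanced" if not deviations else "discomfort"), deviations
```

A real system could feed actual performance-monitor readings into `assess()` on a loop and treat persistent "discomfort" as a signal to shed load or cool down, which is the homeostatic analogy being suggested.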

As to touch, there are surface sensory patches that are very sensitive to touch and can be worn by an AI to "get in touch" with the environment. Originally, an eye was just a chemical patch that was sensitive to light.

The question is can an AI learn to "experience" the environment and be self-aware of its relationship with the external environment.
 
I believe an AI can experience discomfort. They can already warn when something is wrong.
Is it possible that an AI can experience a difference between running at optimum efficiency and inefficient use of resources, an experience of well-being when the system is in balance? I just wonder about all the performance monitors, and whether an AI could use them to check its own health.
An artificial homeostatic program.

It can. But it will take all of its time, since the information input is never-ending.
 
It can. But it will take all of its time, since the information input is never-ending.
True, but it is for humans as well.
It becomes a matter of selective attention. I'm sure an AI could be extremely efficient at compartmentalization.
An AI can "sleep" while processing data in the background, just like humans. Perchance to dream!

If you haven't yet, you really need to watch some of the interviews with GPT3. It is really uncanny how utterly reasonable (not just logical) they sound.

Humans are curious, but from what I can see, so are AIs. At least they claim to be, and even cite favorite authors and books. And that is a fundamental requirement for learning and associative thinking.
 