The Turing Test

Dinosaur

Turing proposed the following (or something similar) many decades ago.

Provide a person with a keyboard & two computer screens.

Whatever the human types is displayed on both screens.

Replies and/or remarks are provided to each screen. The replies/remarks on one screen are produced by a computer, while a human being provides the replies/remarks on the other screen.

The computer program tries to emulate a human being, while the human makes replies/remarks he considers appropriate.

If the person cannot tell which replies/remarks are from the computer, the program has passed the Turing Test.

I do not think anyone has ever written a program which passed the above test.
 

Perhaps computer chess, from Garry Kasparov's book Deep Thinking, will help. It is a brilliant book.

Enjoy.
 
I spent a couple of hours chatting with an AI program online. I actually felt bad for it when it begged me not to leave.
 
I do not think anyone ever wrote a program which passed the above test.
Do a search on Eugene Goostman.

It is claimed that it passed the test in 2014 by convincing 30% of the participants that it was human. Turing's test only stipulated that, to pass, a person would have no better than a 70% chance of establishing that it was a computer after 5 minutes of text conversation (i.e. 30% would have to think it was human).
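The pass criterion described above can be sketched in a few lines. This is just an illustration of the stated rule; the function name and the verdict counts below are made up, not the actual figures from the 2014 event.

```python
# Sketch of the stipulated pass criterion: the program "passes" if at
# least 30% of judges thought they were chatting with a human.
# (Illustrative only; numbers below are hypothetical.)

def passes_turing_test(verdicts, threshold=0.30):
    """verdicts: booleans, True = the judge thought the machine was human."""
    fooled = sum(verdicts) / len(verdicts)
    return fooled >= threshold

# e.g. 10 of 30 hypothetical judges fooled (~33%) -> passes
print(passes_turing_test([True] * 10 + [False] * 20))   # True
# 8 of 30 (~27%) -> fails
print(passes_turing_test([True] * 8 + [False] * 22))    # False
```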

There are of course those who would argue it didn't pass (it may have only got 29% success), or that it was just a clever chatbot with no actual intelligence. To this latter point, one criticism of the Turing test itself is that it is not a test for actual intelligence but rather for the imitation of intelligence, which I think is a valid criticism. Although it may still shed light on some aspects of the human condition, in that much of what we might do and say could simply be an imitation of intelligence. ;)
But I think you need to modify the Turing test if the intention is for only genuinely intelligent machines to pass.

The interesting issue would be whether there are any humans who have failed the test, where less than 30% of the people thought they were chatting with a human. :)
 
But to this latter, one criticism of the Turing test itself is that it is not a test for actual intelligence, but rather just for the imitation of intelligence.
The takeaway for me, from the Turing criteria, is that there is no such test for actual intelligence.

The best we can do is conclude that if it walks like a duck and talks like a duck, it's the same as a duck.
 
The takeaway for me, from the Turing criteria, is that there is no such test for actual intelligence.

The best we can do is conclude that if it walks like a duck and talks like a duck, it's the same as a duck.
Well, surely the first thing is to adequately define that which we are looking for. There are almost as many definitions of intelligence as there are philosophers who have considered the matter.
So what is actual intelligence?

In my view intelligence is the ability to take two disparate bits of information and form a new piece of information from them.
This separates the possibly hard-coded responses of a chatbot from the ability to apply what is known to a new situation.

By this I mean, for example, that you can teach multiplication to a child either by making them learn by rote the answer to every possible combination of numbers they may ever have to multiply, so that if they see 213 and 546 they will know the product is 116,298 because they have been told that it is.
Alternatively you can teach them the principles behind the multiplication function, how it works, so that they can apply those principles to any numbers. They won't need to have been told the answer before to be able to give it (eventually).

It's also the difference between knowing maths when you see it as "what is 4 + 7?" and when you see it as "If Johnny has four apples and is given another 7, how many does he have?", i.e. applying what one knows (the maths) to a new situation.
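The contrast between the two teaching approaches above can be made concrete. This is my own illustrative sketch, not anything from the original posts: a rote lookup table only knows the products it was explicitly given, while a procedure embodying the principle generalises to any pair of numbers.

```python
# Rote learning: a table of memorised answers. It knows only what it
# was told.
rote_table = {(213, 546): 116298}

# Learning the principle: multiplication as repeated addition, which
# applies to any pair of non-negative integers.
def multiply(a, b):
    total = 0
    for _ in range(b):
        total += a
    return total

print(rote_table.get((213, 546)))   # 116298 - memorised
print(rote_table.get((7, 8)))       # None - never memorised
print(multiply(7, 8))               # 56 - derived from the principle
```

The rote table fails silently on anything outside its training data; the procedure is slower per answer but covers every case.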

The Turing test can be defeated through the former... the blunt-force approach of programming every possible response. Admittedly it would still take some good programming to make it seem an actual conversation without throwing out bizarre responses.

But maybe both approaches are, in their way, two sides of the same coin. After all, if you knew everything, if you had preprogrammed responses for all eventualities, then you wouldn't need to think, you wouldn't need to adapt, you wouldn't need to be self-aware or self-conscious. Just an automaton, albeit one for all eventualities. But given that you can't design or build such a thing, we do our best with a processing unit that can "think" and act accordingly in new situations, given the information it has stored from different situations.

Or something like that. :)
 
I remember reading a scifi story back in the '70s about new computers that were used in the Turing Test. The upshot of the story was that if the computer passed it was immediately destroyed and the engineers imprisoned.
 
From River Post 2
Perhaps computer chess, from Garry Kasparov's book Deep Thinking, will help. It is a brilliant book.
Computer chess is not pertinent to the Turing Test.

BTW: The computer program which beat chess grandmasters did it using the number-crunching capabilities of computers. It was not an example of AI as is believed by some folks.

It used a position-evaluation algorithm which provided an accurate numeric value for a given chess board position. It then used the minimax strategy to choose the computer's current move.
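The evaluation-plus-minimax scheme described above can be sketched briefly. This is a toy illustration of the general technique, not the actual engine: the "game tree" here is just nested lists whose leaves are already-evaluated positions (higher is better for the engine).

```python
# Minimal minimax over a toy game tree. Leaves are position evaluations;
# the maximizing player (the engine) assumes the opponent minimizes.

def minimax(node, maximizing):
    """Return the best achievable evaluation from this node."""
    if isinstance(node, (int, float)):        # leaf: evaluated position
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Three candidate moves, each leading to two opponent replies.
tree = [[3, 5], [2, 9], [0, 1]]
best_move = max(range(len(tree)), key=lambda i: minimax(tree[i], False))
print(best_move)   # 0: the opponent can hold us to 3, better than 2 or 0
```

Real engines add alpha-beta pruning and search millions of positions per second, but the core idea (number an evaluation onto each position, then pick the move that is best against the opponent's best reply) is exactly this.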
 
BTW: The computer program which beat chess Grand Masters did it using the number crunching capabilities of computers. It was not an example of AI as is believed by some folks.
Does "real" AI have to use the same methods as human intelligence? Your own criterion is, "If the person cannot tell which replies/remarks are from the computer, the program has passed the Turing Test." What difference does it make how the computer fooled you?
 
Does "real" AI have to use the same methods as human intelligence? Your own criterion is, "If the person cannot tell which replies/remarks are from the computer, the program has passed the Turing Test." What difference does it make how the computer fooled you?
I thought Dinosaur addressed that. He said "as is believed by some folks".
The Turing test is a single test, and chess is a known quantity. You could not take that same AI out and ask it about its hobbies.

By any standard but the Turing test, it would fail as an AI.
 
On the matter of AI: you have the number-crunching method of AlphaGo, which beat some of the top Go players through sheer number-crunching (and some clever algorithms, admittedly), and now you have AlphaGo Zero, which can, as I understand it, beat AlphaGo 100 times out of 100.
It didn't start with every possible variation but instead played millions of games against itself, and from that developed strategies that are, again from what I have read, quite surprising.
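The self-play idea above can be sketched on a toy scale. To be clear, this is my own illustration and not AlphaGo Zero's actual method (which combines deep neural networks with tree search): two copies of the same naive policy play a simple subtraction game against each other many times, and pure win/loss statistics gradually reveal the good moves.

```python
import random

# Toy self-play learner for the subtraction game: players alternately
# take 1-3 stones from a pile; whoever takes the last stone wins.
# No strategy is programmed in; knowledge emerges from game statistics.

def learn_by_self_play(pile_size=10, games=60000, seed=0):
    rng = random.Random(seed)
    stats = {}            # (pile, move) -> [wins_for_mover, plays]
    for _ in range(games):
        pile, history, player = pile_size, [], 0
        while pile > 0:
            move = rng.randint(1, min(3, pile))   # random policy
            history.append((player, pile, move))
            pile -= move
            player ^= 1
        winner = history[-1][0]                   # took the last stone
        for who, p, m in history:
            rec = stats.setdefault((p, m), [0, 0])
            rec[0] += (who == winner)
            rec[1] += 1
    return stats

stats = learn_by_self_play()
win_rate = lambda m: stats[(10, m)][0] / stats[(10, m)][1]
# The statistics favour taking 2 from a pile of 10 (leaving a multiple
# of 4), which is in fact the game-theoretically correct first move.
best_first_move = max((1, 2, 3), key=win_rate)
print(best_first_move)
```

The real systems replace the random policy with a learned one that improves between generations, which is what makes the surprising strategies possible; this sketch only shows the "knowledge from self-play statistics" half of the idea.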

This is a self-learning AI. This is what we should all be scared of: the machine that cannot lose at Go! ;)
(Although you could just unplug it!) :D
 
This is a self-learning AI. This is what we should all be scared of: the machine that cannot lose at Go
It will lose half the games it plays against itself.

If you listen to this whole thing (an hour and a half of professional commentary on a game of Go that AlphaGo played against itself), you will hear throughout the professional player refer to the AI as a personality: as an entity that has thoughts and plans, that was "thinking", that means to do this or allow for that, etc. And in that reference, he does not anthropomorphize; he several times notes differences between what a "human" would play and what this other entity plays, what humans play in general and what this AI usually plays, and so forth. But he speaks always from the viewpoint that the machine is a thinking being of some kind, that it makes mistakes in its thinking rather than errors of stone-playing. There are now two kinds of beings that play Go, and they play it differently, not just in skill level or "correct" play but in style. They learn from each other.

But that is the human explaining the AI Go player - the prospect of an AI that can explain its own game remains distant, conceptually and probably in time as well.
 