Steve Klinko:
Animals certainly Experience Pain and Pleasure.
How do you know?
It sounds like a silly question, doesn't it? But I'm serious. It seems to me that you probably judge that animals experience pleasure and pain from observing what they do.
Now, you also claim that AIs can never experience pleasure and pain, but how do you propose to go about determining whether they experience such things or not? Will you do it the same way you do it for animals, or apply a completely different set of standards?
I think they probably do Experience some Emotions. A Machine is never going to Experience Pain and Pleasure with any kind of Existing Hardware.
How can you be so sure about what is never going to happen? In 1890, lots of people would have said nobody is ever going to make a machine that can take pictures of the bones inside a human body without cutting the body open or otherwise injuring it. Those people were wrong. So were all the people who said that heavier-than-air flight would be impossible. People are wrong about what they think is impossible all the time. What makes you so sure you're right?
So by extension a Machine is never going to Experience Emotions with any kind of Existing Hardware.
That's shoddy thinking. Just because something isn't possible now doesn't mean it will never be possible. Think again of those machines for taking pictures inside the body. In 1890, that was impossible. Now, there's one of those machines just down the road from you, in all likelihood.
We first need to understand what Pain and Pleasure and Emotions are, before we can put them in Machines.
If you admit you don't know what pain and pleasure are, how can you be so sure that animals experience them but machines do not?
There is a Hope and Belief by some people that somehow, with the right Software Programming, or just more Hardware Complexity, these kinds of Experiences will spontaneously pop into Machines.
Spontaneously?
When the Designers put Conscious Experience into Machines they will know exactly how to do it with newly discovered and developed techniques. It is not just going to Magically appear without a detailed understanding of what they are doing.
Okay. So what?
By your logic, if there is a drawing of an Angry face, then the picture is Experiencing Anger.
If an animal has an angry look, what are the chances it is angry? Can you tell? If you can, how do you tell, other than by looking at what it is doing?
Emergent Property is a purely Speculative idea with no Scientific backing. It is a Physicalist Hope and Dream.
Emergent properties are just large-scale behaviour that comes out of the interactions of many small-scale things. It's not a dream; many simple examples exist. Look at the six-fold symmetry of snowflakes, for example.
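The point about large-scale behaviour arising from small-scale interactions can be made concrete with a standard example that is not from the original exchange: Conway's Game of Life. Each cell follows one trivial local rule, yet coherent structures such as the "glider" emerge and travel across the grid, behaviour nowhere stated in the rule itself.

```python
from collections import Counter

def step(live):
    """One generation of Conway's Game of Life; `live` is a set of (x, y) cells."""
    # Count how many live neighbours each candidate cell has.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # A cell is alive next generation if it has 3 neighbours,
    # or 2 neighbours and is already alive.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# The classic "glider" pattern.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)

# After 4 generations the glider has moved one cell diagonally:
# large-scale "motion" emerging from a purely local rule.
shifted = {(x + 1, y + 1) for (x, y) in glider}
print(state == shifted)  # True
```

Nothing in the rule mentions gliders or motion; those properties emerge from the interactions, which is all "emergent property" means here.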
But I know how Software and Computers work, and I can with 100% certainty say there will be no Conscious Experience as a result of any kind of Software program that can be implemented on any kind of Hardware that we have today.
How can you possibly be certain about that? You complain that other people have religious faith. I'd say this is your version of that.
You expect some kind of Miracle to produce Conscious Experience from some Software that was not meant to produce it.
Who said anything about miracles?
You seem to think that the Conscious Experience will arise spontaneously from this Software anyway and nobody will understand how or why.
It's possible. Nobody understands how human conscious experience arises from the "hardware", either. What makes electronic computers any different?
If the Software is going to produce Conscious Experience then the source code will have to show this.
Where is the "source code" that shows where human consciousness comes from?
Someone must produce that Source Code and show it to the world. It is irrelevant that we don't know how Conscious Experience arises in the Human Brain.
I don't see what could be more relevant.
On the one hand, you appear to agree that humans - and possibly some animals - are conscious. You're willing to accept that without ever seeing any "source code". But when it comes to electronic machines, you apply an entirely different set of standards. Why?
Yes, but Machines won't just "start designing their own programs".
They already have, in effect. Machine learning, especially using neural networks, is already producing new knowledge that humans never "designed into" the relevant systems. This is happening every day.
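A toy sketch, not from the original thread, illustrates the point: in the perceptron below, the only thing the programmer "designs in" is a generic update rule. The specific weights that solve the task (here, logical AND) are found by the machine, not written by a human.

```python
# Training data for logical AND: ((inputs), target).
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]   # weights, initially zero: no solution is built in
b = 0.0          # bias

for _ in range(20):                       # perceptron training loop
    for (x1, x2), target in data:
        out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = target - out                # generic rule: nudge toward target
        w[0] += 0.1 * err * x1
        w[1] += 0.1 * err * x2
        b += 0.1 * err

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

print([predict(x1, x2) for (x1, x2), _ in data])  # [0, 0, 0, 1]
```

The learned weights were never specified by the programmer; modern neural networks are this same idea scaled up by many orders of magnitude.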
The Machine generated Software must still conform to the limitations of the Hardware and the Meager set of instructions that any Machine is capable of.
Similarly, the human brain can only generate ideas that conform to the limitations of the human brain "hardware". So why do you think electronic machines are any different?