Why can't machines program themselves?


Spectrum

What's the deal with having programs write to themselves? I read somewhere that it crashed early computer systems that were programmed to update themselves. Apparently it's now possible, but getting the program to choose a command line on its own is still tough. For example, if we examine the following program:

1 print line 2, "run"
2 end

then we can have the program run itself instead of ending, and it has done this by itself. If we could somehow question the computer and have it choose a line to run, then it would be running itself. I have written a batch program along these lines:

copy c:\folder\filename$.txt c:\folder\filename$.bat
call c:\folder\filename$.bat

and now when I open the .txt file and save my modifications, the program runs live. I could do with an autosave. Does anyone know how that would work in a batch program?

The above program looks better like this:
cls
prompt $t
copy c:\folder\filename$.txt c:\folder\filename$.bat
type c:\folder\filename$.bat
call c:\folder\filename$.bat
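
One rough way the autosave idea might work (just a sketch, assuming a separate watcher script, say c:\folder\watch.bat, and the same filename$ placeholder as above) is to keep copying the edited .txt over the .bat and re-running it in a loop, so every save is picked up on the next pass:

@echo off
rem watch.bat (hypothetical name): keeps re-running the edited script.
rem Each pass copies the .txt over the .bat and calls it, so any change
rem saved in the text editor takes effect on the next loop iteration.
:loop
copy /y c:\folder\filename$.txt c:\folder\filename$.bat >nul
call c:\folder\filename$.bat
rem crude two-second delay before looping again
ping -n 3 127.0.0.1 >nul
goto loop

The called script still has to finish (or be kept short) before the next save shows up, since the .txt is only re-read between runs.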
 
Yes, but if it could choose its own lines...
They can and do. This usually results from an unhandled exception from an invalid branch or interrupt vector and the computer then "chooses" to run any damned line it "wants". This is never a very good thing.
 
The Chinese Room is more a metaphor than an argument. If you slow a brain down, and look at the individual interactions between neurons, does it still look like the brain, as a whole, can 'understand' something?

Which is a little like saying, if you slow down a bee, does it still look like it can 'fly'? It's only your perspective that has changed. The bee is the same.
 
Hmmm. Not sure I agree with that. It was always presented to me as an argument as to why computers will never be "intelligent" the way we are: because we assign meaning to the symbols we "process", whereas a computer doesn't. Which assumes that a sufficiently advanced computer could never begin to extract meaning from the symbols as its algorithms "evolved". Seems to presuppose a conclusion based on anthropocentric bias.

Meh.
 
That's probably what it's meant to say. But I think if you encode electric signal patterns as Chinese symbols and neural cluster behaviour as rules, the Chinese Room can simulate a brain. Just much slower.

Another thought, which has little to do with the argument but seems interesting anyway - couldn't an intelligent, curious person running a Chinese Room actually learn to understand the symbols with time?
 
Why not? After all, isn't that what happens to all humans? We're born with no idea of the specific symbols of a particular language (although we probably have an inherent "language processor" in our brains somewhere) and we learn to assign meaning to these otherwise arbitrary symbols (letters, pictograms, etc.)?
 
A computer would have to be able to experience before it could understand anything. It would have to be able to see the symbols, letters, words...
 
An AI program could 'see' via a camera input. But there are other ways of experiencing information. A blind person can still understand many things.
 
A machine can't experience what it sees. It can't see because there is no observer in a machine.

A machine can only do what YOU choose for it to do. It can't choose by itself because it has no feelings or thoughts. It experiences nothing. There is nobody, no consciousness, in a machine... that feels the information.
 
So human brains contain an observer? In which part of the brain does this observer live? Does the observer have a brain too? :)

The observer sees through the eyes, but it is nowhere specific in the brain, because it (consciousness) continues to see and live even when the physical brain is dead.

Observation and brains are just thoughts.
 
Your faith in dualism is impressive. Mine would be stronger if there were observable effects to vouch for it.
 
I always took this argument to be incredibly weak and shallow. It assumes that the "computer" in question will never gain a semantic grasp of the symbols it processes, which is clearly presupposing the conclusion in what amounts to a tour de force of circular reasoning.
I don’t think the “Chinese room” is meant to be taken as proof that a computer could never be intelligent – it’s simply pointing out that a system could appear to be perfectly intelligent without actually being intelligent. So, just because the computer always makes the correct “intelligent” choice, it doesn’t mean that there is necessarily intelligence present.

The ultimate example of a “Chinese room” AI would be scanning every bit of information about every atom in someone’s brain and feeding the data into a computer with an advanced physics engine that allowed it to calculate exactly how the brain would respond at the atomic/molecular level. Such a computer might be able to always generate exactly the same response to a complex question that an actual human brain would, but as far as the computer knows it’s just running a physics simulation on a whole bunch of atoms – it has no idea that it’s “considering” anything more than what each atom in the brain will do in the next picosecond based on the laws of physics.
 
What's the deal with having programs write to themselves? I read somewhere that it crashed early computer systems that were programmed to update themselves. Apparently it's now possible, but getting the program to choose a command line on its own is still tough.
The earliest computers were designed so that they could change their own programming. It isn't that they can't; it's more that it's too hard for a programmer to share such programs with others.
You must admit that sharing source code that the computer itself modifies is useless: a programmer trying to follow such code will find himself helpless.
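
As a quick made-up illustration (%~f0 is just cmd's shorthand for "the full path of the script that is currently running"):

@echo off
rem Every run appends one more command to this script's own source file,
rem so the copy on disk soon stops matching whatever version was shared.
>>"%~f0" echo echo this line was added by an earlier run

After a few runs the file carries a tail of lines its original author never wrote, which is exactly why reading the shared copy tells you little about what the program actually does now.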
 
I don’t think the “Chinese room” is meant to be taken as proof that a computer could never be intelligent – it’s simply pointing out that a system could appear to be perfectly intelligent without actually being intelligent. So, just because the computer always makes the correct “intelligent” choice, it doesn’t mean that there is necessarily intelligence present.

The ultimate example of a “Chinese room” AI would be scanning every bit of information about every atom in someone’s brain and feeding the data into a computer with an advanced physics engine that allowed it to calculate exactly how the brain would respond at the atomic/molecular level. Such a computer might be able to always generate exactly the same response to a complex question that an actual human brain would, but as far as the computer knows it’s just running a physics simulation on a whole bunch of atoms – it has no idea that it’s “considering” anything more than what each atom in the brain will do in the next picosecond based on the laws of physics.
Yes. I suppose then the question really becomes: what is the difference between Us and a computer that works exactly like Us? Is (as I believe) our sense of self, consciousness if you will, an illusion born of the highly complex and self-referential nature of the brain?
 