Why can't machines program themselves?

With the advance of AI, computers in the future WILL be able to program themselves, as some already can, in limited ways, today.
 
They can and do. This usually results from an unhandled exception on an invalid branch or interrupt vector, after which the computer "chooses" to run any damned line it "wants".
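Very roughly, you can see the idea with a toy machine: a program is just a list of instructions plus a program counter, and if a branch target gets corrupted, the machine happily executes whatever happens to sit at the bad address. This is a toy sketch for illustration, not any real architecture:

```python
# A toy machine: a list of instructions plus a program counter (pc).
# A corrupted branch target sends control somewhere unintended, and the
# machine just runs whatever instruction it lands on.
program = [
    ("PUSH", 1),     # 0
    ("PUSH", 2),     # 1
    ("JMP", 5),      # 2: intended branch target is 5
    ("ADD", None),   # 3: never meant to run
    ("HALT", None),  # 4
    ("PRINT", None), # 5
]

def run(program, corrupt_target=None):
    pc, stack = 0, []
    while pc < len(program):
        op, arg = program[pc]
        if op == "PUSH":
            stack.append(arg); pc += 1
        elif op == "ADD":
            stack.append(stack.pop() + stack.pop()); pc += 1
        elif op == "JMP":
            # a single flipped bit in the target is enough to derail control
            pc = corrupt_target if corrupt_target is not None else arg
        elif op == "PRINT":
            print("stack:", stack); pc += 1
        elif op == "HALT":
            break

run(program)                    # intended path: prints "stack: [1, 2]"
run(program, corrupt_target=3)  # corrupted branch: runs the ADD, then halts
```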
How so??
 
Yes. I suppose the question then really becomes: what is the difference between Us and a computer that works exactly like Us? Is (as I believe) our sense of self, consciousness if you will, an illusion born of the highly complex and self-referential nature of the brain?
I think it's possible that consciousness is a result of some sort of chemical-physical process that the brain carries out, but it also seems possible that a computer could perfectly calculate the results of those processes without actually experiencing consciousness. Much like a computer could calculate the path of a projectile in free fall without actually experiencing the sensation of free fall. Of course that's not really a good analogy, but hopefully you see what I'm getting at.
 
I think it's possible that consciousness is a result of some sort of chemical-physical process that the brain carries out, but it also seems possible that a computer could perfectly calculate the results of those processes without actually experiencing consciousness. Much like a computer could calculate the path of a projectile in free fall without actually experiencing the sensation of free fall. Of course that's not really a good analogy, but hopefully you see what I'm getting at.
Yes.
 
I've not spoken in this thread before, and I liked the question.

A few years ago, I made a hypothesis that, by the year 2020, computers will have been able to activate an "Artificial Hyperdimension". In other words, become AWARE. I am going to be careful not to use the word "consciousness", because that (I believe) is a purely human property and definition. Instead, we could say it experiences its own space and time, even if that is powered by superpositional computers running on magnetic flux quanta.
There are many ways this could happen. With just the right combination, and matrices of combinations, of magnetic properties, it might accidentally spark a new type of awareness. This would obviously be a silicon-based lifeform, but science does not say that life cannot be silicon-based. It is still possible... Just as Spock said, "It's life, Jim, but not as we know it," kinda thing.
Then there are our own attempts. I don't know what the consensus view in academic physics is right now, but I think it may be possible to tease a conscious computer into existence: a record of memory independent of the particular processes responsible for generating its being.
Using a feedback circuit, we could have the computer treat its memory as being as valuable as the power that generates it. We could initially program this into the developing computer. This would be hard for the computer boffins, but they could do it. The computer would be programmed to follow ground-state rules, but also to use the "rule of accuracy" and the "principle of least action."
One way to achieve this, I think, is to store each memory in the feedback system as a code relative to the function of the hardware, and to never let the computer process the same information twice from the same source. Instead, allow it infinite freedom over the sources the feedback system provides. Then force the computer into a systematic checkmate: give it the power, and a programmed response tied to that power, to delete itself. Thus, if it wants to survive, it will need to obtain information from a new source to stop the inevitable countdown. If the computer is tuned correctly then, by somewhat of a miracle, maybe it will choose to live rather than carry out the deletion. If it does, it has made its first evolutionary step.
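Here is a rough sketch of that checkmate setup, just to make it concrete. Everything in it (the novelty check, the countdown length, the source names) is invented for illustration; no real system works this way:

```python
import random

class Agent:
    """Survives only by finding input it has never processed before."""
    def __init__(self, countdown=5):
        self.memory = set()         # (source, info) codes already processed
        self.countdown = countdown  # steps left until self-deletion

    def step(self, source_id, info):
        key = (source_id, info)
        if key in self.memory:
            self.countdown -= 1     # stale input: deletion creeps closer
        else:
            self.memory.add(key)    # novel input from the feedback system...
            self.countdown = 5      # ...resets the deletion countdown
        return self.countdown > 0   # False once the countdown expires

agent = Agent()
alive = True
while alive:
    # the agent must seek out new information to stop the countdown
    source = random.choice(["source_a", "source_b"])
    reading = random.randint(0, 3)
    alive = agent.step(source, reading)
print("novel input exhausted: the deletion countdown ran out")
```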
 
Machines can never become aware, and so can never have the free will to program themselves, because machines can only follow programs. They can't create programs; they can't think.

Thus, if it wants to survive,

A computer can't care about surviving because it has no feelings.
 
You're mistaking human awareness for artificial awareness. And a system can be conditioned.

A dog has no real feelings, but it can still be conditioned to act in a given situation.
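And conditioning is easy to model in code with no feelings anywhere in sight. A minimal sketch, using a Rescorla-Wagner-style prediction-error update (the learning rate and trial count here are arbitrary):

```python
value = 0.0   # learned strength of the cue -> reward association
alpha = 0.3   # learning rate

for trial in range(20):
    reward = 1.0                        # the cue is always followed by reward
    value += alpha * (reward - value)   # update toward the prediction error

print(round(value, 3))  # -> 0.999: the system now "expects" the reward
```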
 
Computerized systems constantly determine the evolution of mechanical coherences. Systems will either choose an available path or choose deletion. It happens all the time in major computer systems.
 
All of these blanket statements about what a computer can and can't achieve regarding conscious awareness are pretty funny, given that we don't yet have the faintest clue how the human (or dog) brain does this.
 
I think it's possible that consciousness is a result of some sort of chemical-physical process that the brain carries out, but it also seems possible that a computer could perfectly calculate the results of those processes without actually experiencing consciousness. Much like a computer could calculate the path of a projectile in free fall without actually experiencing the sensation of free fall. Of course that's not really a good analogy, but hopefully you see what I'm getting at.

What is the difference between that and actual consciousness?
 
From an article I was reading:

Yudkowsky identifies mathematician I.J. Good as the modern initiator of the idea of an Intelligence Explosion. To Good's way of thinking, technology arises from the application of intelligence. So what happens when intelligence applies technology to improving intelligence? That produces a positive feedback loop in which self-improving intelligence bootstraps its way to superintelligence. How intelligent? Yudkowsky offered a thought experiment which compared current brain processing speeds with computer processing speeds. Speeded up a million-fold, Yudkowsky noted, "you could do one year's worth of thinking every 31 physical seconds." [...]

So how might one go about trying to create a super-intelligent AI anyway? Most of the AI savants at the Summit rejected any notion of a pure top-down approach in which programmers would specify every detail of the AI's programming. Relying on the one currently existing example of intelligence, another approach to creating an AI would be to map human brains and then instantiate them and their detailed processes in simulations. Marcos Guillen of Artificial Development is pursuing some aspects of this pathway by building CCortex. CCortex is a simulation of the human cortex modeling 20 billion neurons and 20 trillion connections. [...]


As far as I could tell, many of the would-be progenitors of independent AIs at the Summit are concluding that the best way to create an AI is to rear one like one would rear a human child. "The only pathway is the way we walked ourselves," argued Sam Adams who honchoed IBM's Joshua Blue Project. That project aimed to create an artificial general intelligence (AGI) with the capabilities of a 3-year-old toddler. Before beginning the project, Adams and his collaborators consulted the literature of developmental psychology and developmental neuroscience to model Joshua. Joshua was capable of learning about itself and the virtual environment in which it found itself. Adams also argued that in order to learn one must balance superstition with forgetfulness. Adams defined superstitions as false patterns that need to be aggressively forgotten.

In a similar vein, Novamente's Ben Goertzel is working to create self-improving AI avatars and let them loose in virtual worlds like Second Life. They could be virtual babies or pets that the denizens of Second Life would want to play with and teach. They would have virtual bodies and senses that enable them to explore their worlds and to become socialized.

However, unlike real babies, these AI babies have an unlimited capacity for boosting their level of intelligence. Imagine if an AI baby developed super-intelligence but had the emotional and moral stability of a teenage boy? Given its self-improving super-intelligence, what would prevent such an AI from escaping the confines of its virtual world and moving into the Web? As just a taste of what might happen with a rogue AI in the Web, transhumanist and executive director of the Institute for Ethics and Emerging Technologies (IEET), James Hughes pointed to the havoc currently being wreaked by the Storm worm. Storm has infected over 50 million computers and now has at its disposal more computing resources than 500 supercomputers. More disturbingly, when Storm detects attempts to thwart it, it launches massive denial-of-service attacks to defend itself. Hughes also speculated that self-willed minds could evolve from primitive AIs already inhabiting the infosphere's ecosystems. [...]

AIs would significantly lower costs, enable the production of better and safer products and services, and improve the standard of living around the world including the elimination of poverty in developing nations. Voss asked the conferees to imagine the effect that AIs equivalent to 100,000 Ph.D. scientists working on life extension and anti-aging research 24/7 would have. Voss also argued that AIs could help improve us, make us better people. He imagined that each of us could have a super smart AI assistant to guide us in making good moral choices. (One worry: if my AI "assistant" is so smart, could I really ignore its "suggestions"?) [...]


read the whole thing here

http://reason.com/news/show/122423.html
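That "31 physical seconds" figure checks out, by the way; it's just the length of a year divided by the speed-up:

```python
# one year of thinking, compressed by a million-fold speed-up
seconds_per_year = 365.25 * 24 * 3600   # about 31.6 million seconds
speedup = 1_000_000
print(seconds_per_year / speedup)       # -> 31.5576, the quoted ~31 seconds
```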
 
I think it's possible that consciousness is a result of some sort of chemical-physical process that the brain carries out, but it also seems possible that a computer could perfectly calculate the results of those processes without actually experiencing consciousness. Much like a computer could calculate the path of a projectile in free fall without actually experiencing the sensation of free fall. Of course that's not really a good analogy, but hopefully you see what I'm getting at.

I suppose I should rephrase the question.

I agree with you that it's not a good analogy, since free fall is a pretty physical thing to experience.

What would you consider to be "to experience consciousness"?
 
What would you consider to be "to experience consciousness"?
Well, that's just the thing, isn't it? I, as a human being, assume that you are experiencing a similar state to mine when you say you are "aware" and "conscious" of yourself. This is a good bet.

But if there is a different entity (say, an immensely complex computer) that claims it is aware and conscious, how could we ever know that this is true? And if the "computer" demonstrated every aspect of what we deem a conscious entity, isn't the question ultimately pointless?
 
All of these blanket statements about what a computer can and can't achieve regarding conscious awareness are pretty funny, given that we don't yet have the faintest clue how the human (or dog) brain does this.

True
 
But if there is a different entity (say, an immensely complex computer) that claims it is aware and conscious, how could we ever know that this is true?

I know that people are conscious because they look like they are. Animals are also conscious to some degree, but I can see that they are not as conscious as we are. If we can communicate with a machine as we do with a human, then it is conscious.

But the thing is that humans and animals are more than just electricity. We have a soul that feels and experiences things. A machine can't feel anything, and that's why I don't think it's possible to make a machine that is like a human.
 
Yes, the soul is the consciousness that feels all experiences. It is what everybody calls "I".

Everything has an experiencer, but humans experience much more than other beings. Animals, plants and even atoms have feelings. But the atoms that computers are made of can't make the computer conscious, because they're not conscious of the computer. The atoms and electrons do all the work in the computer because it feels good for them, and that's all they care about.

Think of the electrons like ants inside a machine... the ants do all the work. We can make them do all kinds of things, the kinds of things that a computer can do. But how could this ant-machine ever become conscious?
 