With the advance of AI, future computers WILL be able to program themselves, as some already can.
"How so?"

They can and do. This usually results from an unhandled exception caused by an invalid branch or interrupt vector, after which the computer "chooses" to run any damned line it "wants".
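The corrupted-vector scenario above can be sketched with a toy interpreter (purely hypothetical, not any real instruction set): once the program counter lands on an arbitrary slot, the machine simply executes whatever happens to sit there.

```python
import random

# Toy "machine" with a flat instruction memory. If a corrupted interrupt
# vector drops the program counter at an arbitrary slot, execution just
# proceeds from whatever bytes happen to be there.
memory = ["NOP", "ADD", "JMP 3", "HALT", "MUL", "SUB"]

def run_from(pc):
    """Execute from pc until HALT or until we fall off the end of memory."""
    trace = []
    while 0 <= pc < len(memory):
        instr = memory[pc]
        trace.append(instr)
        if instr == "HALT":
            break
        # JMP sets the program counter directly; everything else advances it.
        pc = int(instr.split()[1]) if instr.startswith("JMP") else pc + 1
    return trace

# A corrupted vector: the pc ends up at a random slot, and the machine
# dutifully runs "any damned line it wants" from there.
bad_vector = random.randrange(len(memory))
print(run_from(bad_vector))
```

Starting from slot 0 the machine runs `NOP, ADD, JMP 3, HALT`; starting from slot 4 it runs `MUL, SUB` and falls off the end of memory.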
I think it's possible that consciousness is a result of some sort of chemical-physical process that the brain carries out, but it also seems possible that a computer could perfectly calculate the results of those processes without actually experiencing consciousness. Much like a computer could calculate the path of a projectile in freefall, but not actually experience the sensation of freefall. Of course that's not really a good analogy, but hopefully you see what I'm getting at.

Yes. I suppose then the question really becomes: what is the difference between Us and a computer that works exactly like Us? Is (as I believe) our sense of self, consciousness if you will, an illusion born of the highly complex and self-referential nature of the brain?
Yes.

I think it's possible that consciousness is a result of some sort of chemical-physical process that the brain carries out, but it also seems possible that a computer could perfectly calculate the results of those processes without actually experiencing consciousness. Much like a computer could calculate the path of a projectile in freefall, but not actually experience the sensation of freefall. Of course that's not really a good analogy, but hopefully you see what I'm getting at.
A dog has no real feelings, but it can still be conditioned to act in a given situation.
Yudkowsky identifies mathematician I.J. Good as the modern initiator of the idea of an Intelligence Explosion. To Good's way of thinking, technology arises from the application of intelligence. So what happens when intelligence applies technology to improving intelligence? That produces a positive feedback loop in which self-improving intelligence bootstraps its way to superintelligence. How intelligent? Yudkowsky offered a thought experiment comparing current brain processing speeds with computer processing speeds. Sped up a million-fold, Yudkowsky noted, "you could do one year's worth of thinking every 31 physical seconds." [...]
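The quoted figure is easy to verify: a year contains roughly 31.5 million seconds, so dividing by a million-fold speed-up leaves about 31 wall-clock seconds per subjective year.

```python
# Sanity check of Yudkowsky's quoted figure: one subjective year of thought
# at a million-fold speed-up, measured in physical (wall-clock) seconds.
seconds_per_year = 365.25 * 24 * 3600   # ≈ 31,557,600 s
speedup = 1_000_000
physical_seconds = seconds_per_year / speedup
print(round(physical_seconds, 1))  # → 31.6
```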
So how might one go about trying to create a super-intelligent AI anyway? Most of the AI savants at the Summit rejected any notion of a pure top-down approach in which programmers would specify every detail of the AI's programming. Relying on the one currently existing example of intelligence, another approach to creating an AI would be to map human brains and then instantiate them and their detailed processes in simulations. Marcos Guillen of Artificial Development is pursuing some aspects of this pathway by building CCortex. CCortex is a simulation of the human cortex modeling 20 billion neurons and 20 trillion connections. [...]
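The CCortex figures quoted above imply an average fan-out per simulated neuron that is simple to compute:

```python
# Average connections per neuron implied by the CCortex figures above.
neurons = 20e9        # 20 billion neurons
connections = 20e12   # 20 trillion connections
print(connections / neurons)  # → 1000.0 connections per neuron on average
```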
As far as I could tell, many of the would-be progenitors of independent AIs at the Summit are concluding that the best way to create an AI is to rear one like one would rear a human child. "The only pathway is the way we walked ourselves," argued Sam Adams, who honchoed IBM's Joshua Blue Project. That project aimed to create an artificial general intelligence (AGI) with the capabilities of a 3-year-old toddler. Before beginning the project, Adams and his collaborators consulted the literature of developmental psychology and developmental neuroscience to model Joshua. Joshua was capable of learning about itself and the virtual environment in which it found itself. Adams also argued that in order to learn one must balance superstition with forgetfulness. Adams defined superstitions as false patterns that need to be aggressively forgotten.
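Adams's "balance superstition with forgetfulness" idea can be sketched as a learner that tracks how often a candidate pattern's prediction holds and aggressively drops patterns that keep failing. This is a hypothetical illustration, not IBM's Joshua Blue code; the class name and thresholds are invented.

```python
from collections import defaultdict

class ForgetfulLearner:
    """Keeps candidate patterns, but aggressively forgets 'superstitions':
    patterns whose predictions fail too often once enough trials accumulate."""

    def __init__(self, threshold=0.5, min_trials=3):
        self.stats = defaultdict(lambda: [0, 0])  # pattern -> [hits, trials]
        self.threshold = threshold
        self.min_trials = min_trials

    def observe(self, pattern, prediction_held):
        hits, trials = self.stats[pattern]
        hits += int(prediction_held)
        trials += 1
        self.stats[pattern] = [hits, trials]
        # Aggressive forgetting: once tested enough, drop failing patterns.
        if trials >= self.min_trials and hits / trials < self.threshold:
            del self.stats[pattern]  # treated as superstition

    def believes(self, pattern):
        return pattern in self.stats

learner = ForgetfulLearner()
for held in (True, False, False):  # the pattern rarely predicts correctly
    learner.observe("black cat -> bad luck", held)
print(learner.believes("black cat -> bad luck"))  # → False: forgotten
```

The trade-off Adams describes lives in the two parameters: a low `threshold` or high `min_trials` tolerates more superstition, while aggressive settings risk forgetting genuine but noisy patterns.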
In a similar vein, Novamente's Ben Goertzel is working to create self-improving AI avatars and let them loose in virtual worlds like Second Life. They could be virtual babies or pets that the denizens of Second Life would want to play with and teach. They would have virtual bodies and senses that enable them to explore their worlds and to become socialized.
However, unlike real babies, these AI babies have an unlimited capacity for boosting their level of intelligence. Imagine an AI baby that developed super-intelligence but had the emotional and moral stability of a teenage boy. Given its self-improving super-intelligence, what would prevent such an AI from escaping the confines of its virtual world and moving into the Web? As just a taste of what might happen with a rogue AI in the Web, transhumanist and executive director of the Institute for Ethics and Emerging Technologies (IEET) James Hughes pointed to the havoc currently being wreaked by the Storm worm. Storm has infected over 50 million computers and now has at its disposal more computing resources than 500 supercomputers. More disturbingly, when Storm detects attempts to thwart it, it launches massive denial-of-service attacks to defend itself. Hughes also speculated that self-willed minds could evolve from primitive AIs already inhabiting the infosphere's ecosystems. [...]
AIs would significantly lower costs, enable the production of better and safer products and services, and improve the standard of living around the world, including the elimination of poverty in developing nations. Voss asked the conferees to imagine the effect that AIs equivalent to 100,000 Ph.D. scientists working on life extension and anti-aging research 24/7 would have. Voss also argued that AIs could help improve us, make us better people. He imagined that each of us could have a super smart AI assistant to guide us in making good moral choices. (One worry: if my AI "assistant" is so smart, could I really ignore its "suggestions"?) [...]
Google "chinese room problem" - that's what we were talking about.

What is the difference between that and actual consciousness?
Google "chinese room problem" - that's what we were talking about.
Well, that's just the thing, isn't it? I, as a human being, assume that you are experiencing a similar state to mine when you say you are "aware" and "conscious" of yourself. This is a good bet.

What would you consider to be "to experience consciousness"?
But if there is a different entity (say an immensely complex computer) that claims it is aware and conscious, how could we ever know that this is true?