Singularity is gonna get you all, Humans!

What do you think about the Singularity?

  • Nothing

    Votes: 8 28.6%
  • It's a real danger to the human species

    Votes: 4 14.3%
  • I don't care, that will never happen

    Votes: 12 42.9%
  • I am sure God will help us and we will fight it off

    Votes: 4 14.3%

  • Total voters
    28

Singularity

Banned
The "singularity" is the buzzword among technophiles, scientists and future-gazers these days. It's their name for a point in the near future when computers become more intelligent than humans, and evolution leaps into hyper-drive. And, writes PNS Associate Editor Walter Truett Anderson, it's inspiring giddy utopian dreams as well as dark nightmares among the faithful.

Although the word "singularity" hasn't quite made it into the general public's vocabulary yet, it is stirring great excitement among growing numbers of scientists, technophiles and future-gazers, who use it to describe what they believe may be one of the great watershed events of all time -- the point at which the computational ability of computers exceeds that of human beings.

In various meetings, articles and of course Web sites, speculations about what form this may take range from glowing scenarios of a technological golden age to dire predictions that it will lead to the extinction of the human species.

The term -- at least in the way it is now being used -- was coined in a 1993 article by Vernor Vinge, a mathematician-computer scientist-science fiction writer. In the article, Vinge cited research on the accelerating growth of computational power and predicted that when it reaches and passes human levels, it will kick off an unprecedented burst of progress. Smarter machines will make still-smarter machines on a still-shorter time scale, and the whole process will go roaring past old-fashioned biological evolution like the Road Runner passing a sleeping Wile E. Coyote.
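To make that runaway concrete, here is a toy model of the arithmetic (my own illustration, not anything from the article): assume each machine generation is twice as capable as the last and designs its successor in half the time. The total elapsed time then converges to a finite limit, which is the sense in which the process is a "singularity."

# Toy model of Vinge's runaway (an illustrative assumption, not the
# article's math): each generation doubles in capability and needs half
# the previous design time, so elapsed time converges to a finite limit.

def runaway(generations=10, first_design_years=10.0):
    capability, elapsed, design_time = 1.0, 0.0, first_design_years
    for g in range(1, generations + 1):
        elapsed += design_time    # time spent designing this generation
        capability *= 2.0         # each generation is twice as capable...
        design_time /= 2.0        # ...and builds its successor twice as fast
        print(f"gen {g:2d}: capability x{capability:6.0f} at year {elapsed:.2f}")

runaway()  # elapsed time approaches 2 * first_design_years = 20 years

Under these assumptions the capability curve has a vertical asymptote at year 20: no matter how many generations you add, they all fit before that date.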

"From the human point of view," Vinge wrote, "this change will be a throwing away of all the previous rules, perhaps in the blink of an eye, an exponential runaway beyond any hope of control. Developments that before were thought might only happen 'in a million years' (if ever) will likely happen in the next century."

Vinge cautiously predicted that the singularity would occur somewhere between 2005 and 2030. Since then, a consensus of singularity watchers seems to have formed around the year 2020. That's the target date identified by Ray Kurzweil, inventor and writer ("The Age of Spiritual Machines"), who is certain that by then we will have computers costing about $1,000 with the intelligence level of human beings.

For some, the expectation of the singularity has taken on an almost cult-like aura, reminiscent of the Harmonic Convergence that enchanted New Agers in 1987, or the Rapture prophecy popular among many Christians, who expect God to descend some day soon and whisk the faithful off to paradise.

In this case, the vision is an explosion of computer-generated scientific and technological innovation, leading to -- well, leading to just about anything you can imagine: new sources of food and building materials and energy, interstellar space travel, human immortality.

Say, for example, advances in nanotechnology continue to the point where microscopic machines can manipulate reality on a molecular level. Billions of intelligent micro-machines might course through your bloodstream, repairing damaged cells, attacking viral invaders, even synthesizing new proteins from the molecules around them. Viewed from here, claims of human-engineered immortality may seem a little less outrageous.

But many take a darker view of the singularity breakthrough and the technologies it may spawn. Imagine that same nanotechnology gone terribly wrong, a plague of superintelligent micro-robots loosed on the biosphere.

It was precisely the singularity prediction that led computer scientist Bill Joy to write his widely read Wired magazine article, "Why The Future Doesn't Need Us," in which he warned that we may, in effect, be engineering our own obsolescence by creating self-replicating machines that will charge off on evolutionary pathways far beyond us.

"The new Pandora's boxes of genetics, nanotechnology, and robotics are almost open," Joy wrote, "yet we seem hardly to have noticed."

How likely is it that anything of this sort will in fact happen, either for good or ill? Will computers really become smarter than human beings?

If you stick with the simplest and most mechanistic definition of "smart," the answer has to be a resounding "yes." IBM has already designed a machine that can outplay chess champions, and there are many reasons to expect that computer science will indeed move beyond silicon-chip technology into new realms of speed and memory.

But, say doubters such as British mathematician Roger Penrose and American philosopher John Searle, this doesn't necessarily guarantee that anything resembling either the fantasies or the nightmares of the singularity-watchers will come to pass. The central point of such dissent is that pure computational ability isn't thought, intelligence or anything resembling consciousness. It is simply mechanical efficiency, and as it increases we will have, instead of a new chapter in evolution, a lot of really good computers.

And there are yet other scenarios: Perhaps, instead of the machines going off on their own evolutionary pathway, leaving us behind, electronic and biological intelligence will merge -- each of us with a brain augmented to superhuman levels. Perhaps there will be a merging of all humanity with all computers into a vast global brain.

The possibilities seem to be endless, the whole subject simultaneously too far-out for most of us to grasp, yet too close to today's reality to be completely dismissed. We may know what is happening, but we can't be at all certain where it may lead.

One thing seems certain: Homo sapiens is going to exit from the 21st century looking like a considerably different animal from what it was going in.

http://news.pacificnews.org/news/view_article.html?article_id=575
 
There is a great book called "The Singularity Is Near" by Ray Kurzweil that I suggest every person read.

I worry far more about the "Grey Goo" effect that many theorize about.

~String
 
And here's that Vinge article in full:

http://mindstalk.net/vinge/vinge-sing.html

So yeah, um, we're doomed. South Korea, Japan and others are feverishly drafting robot ethics charters but they won't be worth the paper they're written on when Skynet assumes control. The end will be painful but mercifully fast.
 
And it was you who voted for the last option, right?
No, that would have been me. I'm highly doubtful but one can only hope. The silicon Gods may save us yet:

[Image: Trans-post-human2.jpg]

http://en.wikipedia.org/wiki/Transhumanism
 
Computers are only as smart as the people who programmed them.
In the next post, #7, Singularity gave a reference and pointed to one non-programmed approach to AI. (It was too long to read, but in my skim of it I saw no mention of another type, already in widespread use*, not just a "paper design.") I refer to what I, and some others, prefer to call "connection machines," but many more call them "neural networks."

About 20 years ago, one was trained on records of loan applications at a bank (including whether or not the previously granted loans were repaid on schedule). It is now better than the human staff at selecting which new loans to grant. (Humans still interview the applicant, of course, but then feed the answers to the machine and it decides whether or not to grant the loan. The applicant's good looks, low-cut dress, etc. do not enter into the decision.) It, like all connection machines of three or more layers, was and is never programmed, only given a "learning set" of input information closely related to the type of problem to be solved.
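To make the idea concrete, here is a minimal sketch of such a loan-screening connection machine (my own illustration: the feature names and data are invented, and this is a generic small neural net, not the bank's actual system):

# Minimal sketch of the loan-screening idea described above. Features,
# data, and the "repayment rule" are all invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical features: [income, debt_ratio, years_employed], scaled 0..1.
# The labels stand in for "repaid on schedule or not" in past records.
X = rng.random((200, 3))
y = ((X[:, 0] - X[:, 1] + 0.1 * X[:, 2]) > 0.0).astype(float)

# A small "connection machine" with one hidden layer. The weights start
# random and are never "programmed" -- they change only via feedback.
W1 = rng.normal(0.0, 1.0, (3, 8))
W2 = rng.normal(0.0, 1.0, (8, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(2000):
    h = sigmoid(X @ W1)                     # hidden-layer activations
    p = sigmoid(h @ W2).ravel()             # predicted repayment probability
    delta = ((p - y) * p * (1 - p)).reshape(-1, 1)   # feedback on each case
    # Backpropagate: the most influential connections change the most.
    grad_W2 = h.T @ delta
    grad_W1 = X.T @ ((delta @ W2.T) * h * (1 - h))
    W2 -= 1.0 * grad_W2 / len(X)
    W1 -= 1.0 * grad_W1 / len(X)

p = sigmoid(sigmoid(X @ W1) @ W2).ravel()
print(f"training accuracy: {((p > 0.5) == y).mean():.0%}")

Nothing in the training loop encodes the lending rule itself; the weights are shaped purely by being corrected against the historical outcomes.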

You need to broaden your concept of what a computer is. It need not follow a von Neumann architecture with software running a written program.

Another "singularity" that can wipe out both humans and robot high on AI is a real singularity, like the small black hole of my book, if it passes close to our solar system. I.e. Earth may be expelled from the HZ (habital zone).
--------------------------
*Many complex chemical process plants, like a pulp-to-paper mill, are better controlled by connection machines than by humans, as there are just too many variables, and their interactions are not understood (by humans)** well enough to program, or even to process "intuitively," as was done until recently by a seasoned, experienced "old timer," who might even taste the batch in an effort to adjust things.

**It is your choice - do you want to say the connection machine "understands" better? If not, are you sure any human "understands" anything - after all, the brain is just the best, most-adaptive connection machine in existence (on Earth at least).
 
In the next post, #7, Singularity gave a reference and pointed to one non-programmed approach to AI... I refer to what I, and some others, prefer to call "connection machines," but many more call them "neural networks." ... It, like all connection machines of three or more layers, was and is never programmed, only given a "learning set"... You need to broaden your concept of what a computer is. It need not follow a von Neumann architecture with software running a written program.

Close - but not close enough. ;) Even though you claim it wasn't "programmed," that "learning set" was most certainly a set of parameters - which is just a form of programming. A machine can sort and correlate data, but unless it's told what to do with the results, nothing will come of it. And that's certainly not an example of Artificial Intelligence. ;)

There are several marks of true intelligence; one is being able to handle a situation or input that it has never encountered before. Another is having an original thought. So far, no machine has ever done either of those, and I honestly doubt one ever will.
 
In the next post, #7, Singularity gave a reference and pointed to one non-programmed approach to AI. (It was too long to read, but in my skim of it I saw no mention of another type, already in widespread use*, not just a "paper design.")...

"Genetic and evolutionary programming do have their uses - they are powerful tools that can be used to solve very specific problems, such as optimization of large sets of variables; however they generally are not appropriate for creating large systems of infrastructures. Artificially evolving general intelligence directly seems particularly problematic because there is no known function measuring such capability along a single continuum - and absent such direction, evolution doesn't know what to optimize. One approach to deal with this problem is to try to coax intelligence out of a complex ecology of competing agents - essentially replaying natural evolution."

So does this scare you?
http://msdn.microsoft.com/msdnmag/issues/04/08/GeneticAlgorithms/default.aspx
 
Even though you claim it wasn't "programmed", that "learning set" was most certainly a set of parameters - which is just a form of programming. A machine can sort and correlate data but unless it's told what to do with the results nothing will come of it. And that's certainly not an example of Artificial Intelligence....
Either (1) you think humans (Hs) are also "programmed," or (2) you do not understand how connection machines (CMs) work.

Both Hs and CMs learn from their experiences, but both must have some external guide to correct them when their response is not acceptable / correct. When informed that their response was not correct, they both make internal modifications to their "connections." (For Hs, this is a change in the brain's synaptic connections. For CMs, it is the transfer weights from one "layer" to the next.) Neither Hs nor CMs are "programmed" as that term is usually used. Both learn from their mistakes and improve their performance not only on the training task set but also on never-before-seen similar sets - for example, separating photos of women from men, loan re-payers from deadbeats, etc.

I attended a lecture by Terry Sejnowski on his "NETtalk" CM about 25 years ago. It learned to read out loud (actually, how to drive a TI voice synthesizer) when presented with strings of letters (actually, the ASCII codes for them). It was fascinating to hear it at various stages of its learning process - just like a child learning to speak. For example, at first it only said "ma ma" or "da da" (consonant + vowel "a") in response to every input string of letters. Later it went through a period of over-regularizing the past tense, just like a child, and said things like "he goed" (instead of "he went"), but later correctly learned most of these exceptions.

Just like most of a child's learning, it was never given even one line of "program code" - no instructions were ever given to it. (Children do get instructions also.) Both Hs and CMs are given an initial set of connections, which they modify to improve their performance, and some systematic way to make and evaluate the effect of the changes they make, mainly trial and error. (Both tend to follow the rule "If it ain't broke, don't fix it." In the CM case, this means that when an error occurs on the learning set, the change in connection strengths is greatest in the connections that, in that case, were most influential in producing the erroneous result. Perhaps the new connections will work better, perhaps not, but they keep trying until they do better on the whole learning set of examples.)
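That "change the most influential connections when wrong" rule is, in essence, the classic perceptron update. A minimal sketch (my own illustration, not the bank's machine or NETtalk):

# Perceptron sketch of "trial, and be corrected when wrong" learning.
# When the answer is wrong, the inputs that contributed most to the
# answer get the largest weight changes. Illustration only.
import random

def train_perceptron(examples, epochs=50, lr=0.1):
    n = len(examples[0][0])
    weights = [random.uniform(-1.0, 1.0) for _ in range(n)]  # initial connections
    bias = 0.0
    for _ in range(epochs):
        for inputs, target in examples:
            activation = sum(w * x for w, x in zip(weights, inputs)) + bias
            output = 1 if activation > 0 else 0
            error = target - output       # external feedback: right or wrong?
            # Update each connection in proportion to its input's influence.
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

# The "learning set": the machine is never told the rule (logical OR here);
# it is only corrected when its answer is wrong.
learning_set = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
weights, bias = train_perceptron(learning_set)
for inputs, target in learning_set:
    output = 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0
    print(inputs, "->", output, "(expected:", target, ")")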

Your choice:
Either both Hs and CMs are "programmed" or neither is - both learn operationally in the same "try, and be corrected when wrong" way.* You are really just discussing the definition of "programmed." As I use the word, it means giving a set of instructions to be followed. What do you mean by it?
--------------------------
*For Hs, the mother is a source of some corrections, but physics (nature) supplies more. E.g., walk wrong, fall down; touch a hot thing, feel pain; etc.
 
Computers taking over the world? I disagree.

I have it on excellent authority that IBM, Dell, Mac (yrrch), Toshiba and indeed all other computer companies have been for years equipping their products with an inherent and unbeatable failsafe device to prevent the conquest of the planet via sentient silicon beings.

This device in question is termed a "plug". It is located on the back of the computer. And it works like thi
 
... absent such direction, evolution doesn't know what to optimize. ...
"direction" (which is alternative word for "programming") is NOT required, either by humans, dogs, even amoebas, and connection machines. All that is required to "shape" their behavior is feedback that tells them when their response is "wrong" and when it is "correct" plus a set of trials on which to learn. (Using mild electric shock to tell it when it was wrong, and amoeba can learn to move either toward or from a light etc.)

I admit that at present the "learning set" is, in most cases I think, restricted to one or two areas. (I am not up to speed - all I post here is from memory at least 20+ years old.) These are tasks such as telling men from women, recognizing printed letters, different words, cars made by different companies (Fords from Toyotas, etc.), etc., where the sample space is small and well defined; but I see no reason, in principle, to say "programming" (or "direction") is required. Humans, many other (even very simple) animals, and connection machines all learn without any. All they require is feedback telling them when they are wrong and some means of changing their "input-to-output transfer function" until they perform better on that sort of task - which in principle, perhaps 500 years from now, could be "behave just as the typical human would under all circumstances."
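A toy version of the amoeba example, as a sketch (my own illustration, assuming the simplest possible learner): it is never told what to do, only "shocked" when wrong, and its behavior is shaped all the same.

# Behavior shaped purely by feedback, with no instructions about the goal.
# The desired response is known only to the trainer, never to the learner.
import random

preference = {"toward": 0.5, "away": 0.5}   # initial, unshaped tendencies
DESIRED = "toward"                          # the trainer's secret criterion

for trial in range(200):
    # Act according to current tendencies.
    action = random.choices(list(preference), weights=preference.values())[0]
    shocked = (action != DESIRED)           # feedback: wrong responses hurt
    # Shaping: weaken a tendency that just led to a shock, else strengthen it.
    preference[action] *= 0.9 if shocked else 1.1

total = sum(preference.values())
for action, w in preference.items():
    print(f"{action}: {w / total:.0%}")     # "toward" comes to dominate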

BTW, your idea that evolution requires "direction" {from God, the first programmer?} is entirely wrong. The whole point of Darwin was that "direction" is NOT required. What evolution requires is (1) some means (genes, in the case of Earth's life forms) to hand down changes from one generation to the next, and (2) some way to select, for transfer to the next generation with better-than-average probability, the set of these means (genes on Earth) that tends to produce the "correct" response (or to eliminate the "error responders" before they can reproduce).
 