The "singularity" is the buzzword among technophiles, scientists and future-gazers these days. It's their name for a point in the near future when computers become more intelligent than humans, and evolution leaps into hyper-drive. And, writes PNS Associate Editor Walter Truett Anderson, it's inspiring giddy utopian dreams as well as dark nightmares among the faithful.
Although the word "singularity" hasn't quite made it into the general public's vocabulary yet, it is stirring great excitement among growing numbers of scientists, technophiles and future-gazers, who use it to describe what they believe may be one of the great watershed events of all time -- the point at which the computational ability of computers exceeds that of human beings.
In various meetings, articles and of course Web sites, speculations about what form this may take range from glowing scenarios of a technological golden age to dire predictions that it will lead to the extinction of the human species.
The term -- at least in the way it is now being used -- was coined in a 1993 article by Vernor Vinge, a mathematician-computer scientist-science fiction writer. In the article, Vinge cited research on the accelerating growth of computational power and predicted that when it reaches and passes human levels, it will kick off an unprecedented burst of progress. Smarter machines will make still-smarter machines on a still-shorter time scale, and the whole process will go roaring past old-fashioned biological evolution like the Road Runner passing a sleeping Wile E. Coyote.
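For readers who want to see the arithmetic behind that image, here is a minimal sketch -- my own illustration, not anything from Vinge's article -- of the feedback loop he describes, under the assumed toy numbers that each machine generation doubles in capability and halves the time needed to design its successor:

```python
# Toy model of recursive self-improvement (illustrative assumptions only):
# each generation of machines is smarter than the last and designs its
# successor in proportionally less time.

intelligence = 1.0   # capability relative to a human designer (assumed unit)
cycle_time = 10.0    # assumed years the first machine needs to design its successor
elapsed = 0.0

for generation in range(1, 11):
    elapsed += cycle_time
    intelligence *= 2.0    # assume each generation doubles in capability
    cycle_time /= 2.0      # ...and halves the design time for the next one
    print(f"gen {generation:2d}: year {elapsed:5.1f}, capability x{intelligence:,.0f}")

# Under these assumptions the total elapsed time converges toward 20 years
# even as capability grows without bound -- the "exponential runaway"
# Vinge describes.
```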
"From the human point of view," Vinge wrote, "this change will be a throwing away of all the previous rules, perhaps in the blink of an eye, an exponential runaway beyond any hope of control. Developments that before were thought might only happen 'in a million years' (if ever) will likely happen in the next century."
Vinge cautiously predicted that the singularity would occur somewhere between 2005 and 2030. Since then, a consensus of singularity watchers seems to have formed around the year 2020. That's the target date identified by Ray Kurzweil, inventor and writer ("The Age of Spiritual Machines"), who is certain that by then we will have computers costing about $1,000 with the intelligence level of human beings.
For some, the expectation of the singularity has taken on an almost cult-like aura, reminiscent of the Harmonic Convergence that enchanted New Agers in 1987, or the Rapture prophecy popular among many Christians, who expect God to descend some day soon and whisk the faithful off to paradise.
In this case, the vision is an explosion of computer-generated scientific and technological innovation, leading to -- well, leading to just about anything you can imagine: new sources of food and building materials and energy, interstellar space travel, human immortality.
Say, for example, that advances in nanotechnology continue to the point where microscopic machines can manipulate matter at the molecular level. Billions of intelligent micro-machines might course through your bloodstream, repairing damaged cells, attacking viral invaders, even synthesizing new proteins from the molecules around them. Viewed from there, claims of human-engineered immortality may seem a little less outrageous.
But many take a darker view of the singularity breakthrough and the technologies it may spawn. Imagine that same nanotechnology gone terribly wrong, a plague of superintelligent micro-robots loosed on the biosphere.
It was precisely the singularity prediction that led computer scientist Bill Joy to write his widely read Wired magazine article, "Why the Future Doesn't Need Us," in which he warned that we may, in effect, be engineering our own obsolescence by creating self-replicating machines that will charge off on evolutionary pathways far beyond us.
"The new Pandora's boxes of genetics, nanotechnology, and robotics are almost open," Joy wrote, "yet we seem hardly to have noticed."
How likely is it that anything of this sort will in fact happen, either for good or ill? Will computers really become smarter than human beings?
If you stick with the simplest and most mechanistic definition of "smart," the answer has to be a resounding "yes." IBM has already designed a machine that can outplay chess champions, and there are many reasons to expect that computer science will indeed move beyond silicon-chip technology into new realms of speed and memory.
But, say doubters such as British mathematician Roger Penrose and American philosopher John Searle, this doesn't necessarily guarantee that anything resembling either the fantasies or the nightmares of the singularity-watchers will come to pass. The central point of such dissent is that pure computational ability isn't thought, intelligence or anything resembling consciousness. It is simply mechanical efficiency, and as it increases we will have, instead of a new chapter in evolution, a lot of really good computers.
And there are yet other scenarios: Perhaps, instead of the machines going off on their own evolutionary pathway, leaving us behind, electronic and biological intelligence will merge -- each of us with a brain augmented to superhuman levels. Perhaps there will be a merging of all humanity with all computers into a vast global brain.
The possibilities seem to be endless, the whole subject simultaneously too far-out for most of us to grasp, yet too close to today's reality to be completely dismissed. We may know what is happening, but we can't be at all certain where it may lead.
One thing seems certain: Homo sapiens is going to exit from the 21st century looking like a considerably different animal from what it was going in.
http://news.pacificnews.org/news/view_article.html?article_id=575