How Long Before Superintelligence?


kmguru
Abstract

This paper outlines the case for believing that we will have superhuman artificial intelligence within the first third of the next century. It looks at different estimates of the processing power of the human brain; how long it will take until computer hardware achieves similar performance; ways of creating the software through bottom-up approaches like the one used by biological brains; how difficult it will be for neuroscience to figure out enough about how brains work to make this approach work; and how fast we can expect superintelligence to be developed once there is human-level artificial intelligence.



Definition of "superintelligence"

By a "superintelligence" we mean an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills. This definition leaves open how the superintelligence is implemented: it could be a digital computer, an ensemble of networked computers, cultured cortical tissue or what have you. It also leaves open whether the superintelligence is conscious and has subjective experiences.

Entities such as companies or the scientific community are not superintelligences according to this definition. Although they can perform a number of tasks of which no individual human is capable, they are not intellects and there are many fields in which they perform much worse than a human brain - for example, you can't have a real-time conversation with "the scientific community".

More at: http://www.nickbostrom.com/superintelligence.html

What do ya think?
 
The primary problem with intelligence is that it is an adjective and as such cannot really be defined. This is also the primary AI problem.
 
Kurzweil does not seem to know that a computer is a simple, linear machine that can only see one point at a time.
 
Originally posted by hlreed
Kurzweil does not seem to know that a computer is a simple, linear machine that can only see one point at a time.

What makes you say that? :confused:

Hint: it's called parallel processing.
 
Let's see –

A neuron fires around 200 times per second, and a fairly good Pentium chip operates at 1GHz. Assuming (crudely) one clock cycle per neuron firing, such a chip could emulate 5,000,000 neurons.

But there are around 200 billion neurons in the human brain so that would indicate we need around 40,000 x 1GHz Pentium chips to equal the power of the human brain. That’s kind of a lot to link together in a closely coupled parallel processing system.
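The same arithmetic as a quick Python check (using the crude one-cycle-per-firing assumption above):

Code:
# Back-of-envelope: how many 1 GHz chips to match the brain,
# assuming (crudely) one clock cycle per emulated neuron firing.
NEURONS = 200e9      # rough neuron count used above
FIRE_RATE = 200      # firings per neuron per second
CLOCK = 1e9          # cycles per second for a 1 GHz chip

neurons_per_chip = CLOCK / FIRE_RATE       # 5,000,000
chips_needed = NEURONS / neurons_per_chip  # 40,000
print(f"{neurons_per_chip:,.0f} neurons/chip, {chips_needed:,.0f} chips")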

Now if Moore's law continues to hold as it has since the mid-1960s, then very approximately we should see this line of chip development or equivalent.

Year, number of chips for human brain, clock speed.

2001 - 40,000 at 1GHz
2002 - 20,000 at 2GHz
2003 - 10,000 at 4GHz
2004 - 5,000 at 8GHz
2005 - 2,500 at 16GHz
2006 - 1,250 at 32GHz
2007 - 625 at 64GHz
2008 - 312 at 128GHz
2009 - 156 at 256GHz
2010 - 78 at 512GHz
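The table is just repeated halving and doubling, so a couple of lines of Python reproduce it (yearly doubling is a rough reading of Moore's law here, not a precise schedule):

Code:
# Reproduce the projection above: chip count halves and clock speed
# doubles each year, under a rough yearly-doubling reading of Moore's law.
chips, clock_ghz = 40_000, 1
for year in range(2001, 2011):
    print(f"{year} - {chips:,} at {clock_ghz}GHz")
    chips //= 2          # integer halving, matching the rounded table
    clock_ghz *= 2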

Now considering that I can already link 1024 x 550MHz processors in my lab at work today, by 2007 linking 625 x 64GHz chips should not be a big problem.

Within the decade we should have the hardware power, but the software is the real challenge to achieving human-level intelligence. Of course, once we achieve that, the power should double again the following year, and so on.

If we can develop a basic learning seed that can feed back on itself, then we could have a viable technique for exponential growth of an AI. But I suspect that designing that initial building block is going to take longer than getting the hardware ready. And I have no good estimates for that.
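To show the shape of that feedback, here is a toy model with made-up numbers (an illustration, not a design): a seed whose capability feeds back into its own rate of improvement pulls ahead of plain exponential growth.

Code:
# Toy model of a self-improving learning seed. The 10% base gain and
# the 0.01 feedback coefficient are made-up illustrative numbers.
capability = 1.0
gain = 1.10                       # assumed improvement per generation
for gen in range(1, 11):
    capability *= gain
    gain += 0.01 * capability     # feedback: more capable => improves faster
    print(f"gen {gen:2d}: capability {capability:6.2f}")
print(f"(plain 10% compounding would give {1.10 ** 10:.2f})")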
 
I’d say superintelligence is in the indefinite future, because nobody yet knows how to give computers the intuition needed to make significant “mental” discoveries. For example, I think no computer or software we can even contemplate today could be expected to author the theory of relativity from the information at hand in 1904, prior to Einstein’s breakthrough.
 
Originally posted by zanket
I’d say superintelligence is in the indefinite future, because nobody yet knows how to give computers the intuition needed to make significant “mental” discoveries. For example, I think no computer or software we can even contemplate today could be expected to author the theory of relativity from the information at hand in 1904, prior to Einstein’s breakthrough.

Oh, I don't know. :cool:

Computers today can play a mean game of chess (and they're improving!). Perhaps, once given the right basic knowledge, making an Einstein-type breakthrough would be just a deep tree search of little breakthroughs...

Point is -- don't look for computers to make really big discoveries right away. First look for them to make real little discoveries that they can build on EXTREMELY fast!

:D
 
As the internet talks to you.

Person types in a Google search. Computer responds: "I don't feel like doing that right now. Could you come back a little later?"
 
BatM,

Chess-playing programs are not good examples of intelligence. These programs are relatively primitive: they simply crunch through large numbers of combinations and use an internal scoring system to pick the best plan they can find.

There is no intelligence here, just brute force.
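For the record, the core of that brute force fits in a dozen lines. Here is a minimal memoized game-tree search in Python over Nim, a toy stand-in for chess; a real engine adds a heuristic scoring function, a depth cutoff, and a far bigger move generator:

Code:
from functools import lru_cache

# Exhaustive game-tree search over Nim: crunch through every combination
# of moves and score the outcome at the leaves. A chess engine does the
# same, cut off at some depth with a heuristic internal scoring system.
@lru_cache(maxsize=None)
def negamax(piles):
    if not any(piles):                 # no stones left: side to move lost
        return -1
    best = -1
    for i, p in enumerate(piles):
        for take in range(1, p + 1):   # every legal move
            child = list(piles)
            child[i] -= take
            best = max(best, -negamax(tuple(child)))
    return best

print("first player wins" if negamax((3, 4, 5)) > 0 else "first player loses")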
 
"agent based modeling" could be a precursor to intelligence (more like ant to man level)....several years ago I wrote a program where uncontrolled disturbances are analyzed through a guess program and create functions to address such disturbances. Again...it was very rudimentary since I did not have 1024 processors or Gigabytes of memory to provide or store a complete model of the natural environment.
 
Originally posted by Cris
Batm,

Chess-playing programs are not good examples of intelligence. These programs are relatively primitive: they simply crunch through large numbers of combinations and use an internal scoring system to pick the best plan they can find.

There is no intelligence here, just brute force.

I know. My only point is that, given sufficient processing power, even relatively unintelligent systems can appear to be more intelligent than humans in certain areas. Therefore, we may only need to provide the computers with a few (as yet undefined) properties and allow superintelligence (perhaps only in a specific area) to emerge on its own.
 
BatM,

Yes, you may well be correct. When Kasparov was beaten by Deep Blue he complained bitterly because he felt there was a human mind behind the moves. He had played so many machines before and could always recognize the machine-like choices, but Deep Blue felt like a human.

But it was only sheer computing power that he had never faced before.
 
Originally posted by Cris

But it was only sheer computing power that he had never faced before.

That's phase one of the problem -- having sufficient computing power to model the complexity of the world.

Phase two is (probably) adaptive learning systems that can take experiences and synthesize patterns to help decide future actions.

Phase three (and possibly the most complex) will be establishing an environment where phase two systems can have a rich set of experiences such that the synthesized patterns produce growth.

Ultimately, the real world may be the only environment with sufficiently rich experiences to promote this growth. If so, the speed of a computer may not result in a superintelligence, as the time between experiences will be the limiting factor.
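Phase two in miniature (a made-up toy, not a real system): a tabular Q-learner that turns raw experiences in a 10-cell corridor world into a pattern that guides future actions.

Code:
import random

# Synthesize patterns (a Q-table) from experiences in a toy
# environment: a 10-cell corridor with the goal at the far end.
N, GOAL = 10, 9
Q = {(s, a): 0.0 for s in range(N) for a in (-1, 1)}

for episode in range(500):
    s = 0
    while s != GOAL:
        if random.random() < 0.1:                   # occasionally explore
            a = random.choice((-1, 1))
        else:                                       # otherwise use the pattern
            a = max((-1, 1), key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N - 1)
        r = 1.0 if s2 == GOAL else -0.01            # the raw "experience"
        best_next = max(Q[(s2, -1)], Q[(s2, 1)])
        Q[(s, a)] += 0.5 * (r + 0.9 * best_next - Q[(s, a)])
        s = s2

print("policy:", "".join(">" if Q[(s, 1)] >= Q[(s, -1)] else "<" for s in range(N)))

The point of the toy: the Q-table is nothing but compressed experience, and the quality of the learned pattern is bounded by how rich those experiences are.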
 
Intelligence is a noun, large is an adjective.

Large what? :bugeye:
 
BatM - Agreed on superintelligence emerging in specific areas. If it can beat a human, it hardly matters what’s going on under the hood.
 
Blue Gene.

http://www.ibm.com/news/1999/12/06.phtml

On December 6, IBM announced a new $100 million exploratory research initiative to build a supercomputer 500 times more powerful than the world's fastest computers today.

The new computer -- nicknamed "Blue Gene" by IBM researchers -- will be capable of more than one quadrillion operations per second (one petaflop). This level of performance will make Blue Gene 1,000 times more powerful than the Deep Blue machine that beat world chess champion Garry Kasparov in 1997, and about 2 million times more powerful than today's top desktop PCs.

IBM Research believes a radical new approach to computer design and architecture will allow Blue Gene to achieve petaflop-scale performance in about five years -- one-third of the close to 15 years it would normally take following Moore's Law.

Blue Gene will consist of more than one million processors, each capable of one billion operations per second (1 gigaflop). Thirty-two of these ultra-fast processors will be placed on a single chip (32 gigaflops). A compact two-foot by two-foot board containing 64 of these chips will be capable of 2 teraflops, making it as powerful as the 8000-square foot ASCI computers.

Eight of these boards will be placed in 6-foot-high racks (16 teraflops), and the final machine (less than 2000 sq. ft.) will consist of 64 racks linked together to achieve the one petaflop performance.

This should be quite enough to hold the same processing power as the human brain.

Target date 2007. Looks like my estimate of 2007 was spot on.
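Checking the press release's arithmetic against the crude one-operation-per-neuron-firing estimate used earlier in the thread:

Code:
# Blue Gene hierarchy from the press release, bottom to top,
# compared with the thread's crude brain estimate.
chip = 32 * 1e9            # 32 processors x 1 gigaflop = 32 gigaflops
board = 64 * chip          # 64 chips = ~2 teraflops
rack = 8 * board           # 8 boards = ~16 teraflops
machine = 64 * rack        # 64 racks = ~1 petaflop
brain = 200e9 * 200        # 200 billion neurons x 200 firings/sec
print(f"Blue Gene: {machine:.2e} flops, brain estimate: {brain:.2e} ops/s")
print(f"headroom: {machine / brain:.0f}x")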
 
The interesting question will be: what do we use to teach the thing a lifetime's worth of experiences in less than a lifetime?

:bugeye:
 