Singularity Institute for Artificial Intelligence

Comments, questions?

The Singularity Institute for Artificial Intelligence, Inc. is a 501(c)(3) nonprofit corporation. Our charitable purpose is to bring about the Singularity - the technological creation of greater-than-human intelligence - by building real AI. We believe that such a Singularity would result in an immediate, worldwide, and material improvement to the human condition.
www.singinst.org/intro.html

Concepts:

Friendly AI:
How will near-human and smarter-than-human AIs act towards humans? Why? Are their motivations dependent on our design? If so, which cognitive architectures, design features, and cognitive content should be implemented? At which stage of development?
http://www.singinst.org/friendly/whatis.html

Seed AI:
Seed AI is Artificial Intelligence designed for self-understanding, self-modification, and recursive self-enhancement.
http://www.singinst.org/seedAI/seedAI.html

Singularity Movement:
The philosophical and activist movement dedicated to accelerating the arrival of greater-than-human intelligence for the benefit of mankind.
http://www.sysopmind.com/sing/principles.html
 
Since no one has seen a real, live, walking and talking GOD, maybe we sciforum members can set up an Institute to develop one?

Is anyone interested??? :D
 
Great website. I am a slow reader. Here are my comments as I read through it...

Humans, presently the smartest creatures on Earth, are smarter than chimpanzees. Does humanity represent the theoretical limit? Certainly, the human brain's hardware is far slower than the theoretical limit. Human neurons fire approximately 200 times per second, using signals that travel at a maximum of 100 meters per second. By comparison, my computer's CPU operates at 667 million clock cycles per second, and the speed of light is 300 million meters per second; the reason a human brain nonetheless has around a hundred million times as much raw computing power as my computer is that a human brain has 40 billion neurons and 100 trillion synapses. If your neurons could be upgraded to fire 200 million times per second and send signals at 100 million meters per second, the result would be a millionfold "subjective speedup"; you could think a million times faster. In the time it now takes for your watch to count off 31 seconds, you could do a year's worth of thinking; more than a millennium of subjective time would pass between sunrise and sunset.

You might not be any "smarter" - you would simply think much, much faster - but the effect, to an external observer, would be beyond description. A community of ultraspeed humans could - mentally, at least - recreate the entire path from Socrates to the World Wide Web in less than a day. A day after that, if the ultraspeed humans have physical technology that runs at the same speed as their minds, the ultraspeed community would have the same technology and culture we would reach in 4700 AD... and the span from 1900 AD to 2000 AD alone was enough to take us from steam engines to the Internet.
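For what it is worth, the arithmetic in that quote checks out. A quick sketch, using only the numbers quoted above:

```python
# Check the "millionfold subjective speedup" arithmetic from the quote above.

neuron_rate_hz = 200          # quoted biological firing rate
upgraded_rate_hz = 200e6      # quoted hypothetical upgraded rate
speedup = upgraded_rate_hz / neuron_rate_hz
print(f"speedup: {speedup:,.0f}x")                        # 1,000,000x

seconds_per_year = 365.25 * 24 * 3600                     # ~31.6 million seconds
print(f"one subjective year takes {seconds_per_year / speedup:.1f} real seconds")  # ~31.6 s

# Sunrise to sunset, taken as roughly 12 real hours:
print(f"12 real hours = {12 * 3600 * speedup / seconds_per_year:,.0f} subjective years")  # ~1,369

# One real day of ultraspeed progress, starting from the year 2000:
print(f"one real day reaches about the year {2000 + 24 * 3600 * speedup / seconds_per_year:.0f}")  # ~4738
```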

Sounds good to me. But there is a mega catch. For example, we already have people who think a thousand times better and have solutions to most of our social problems. But that does not help the rest of us, since the rest of us are monkeys compared to them....

The Singularity Institute seriously intends to build a true general intelligence, possessed of all the key subsystems of human intelligence, plus design features unique to AI. We do not hold that all the complex features of the human mind are "emergent", or that intelligence is the result of some simple architectural principle, or that general intelligence will appear if we simply add enough data or computing power. We are willing to do the work required to duplicate the massive complexity of human intelligence; to explore the functionality and behavior of each system and subsystem until we have a complete blueprint for a mind. For more about our Artificial Intelligence plans, see the document General Intelligence and Seed AI.

Does that mean that to create a man, you first create a monkey... and somehow you do a monkey dance and, presto, the monkey turns into a human... This I have got to see...

The impact of even a single transhuman AI would be tremendous. The basic dynamic of the Singularity is positive feedback - smarter minds are better at inventing even smarter minds. Intelligence creates technology which enhances intelligence. An AI which can optimize its source code - rewrite the underlying computer program - will be able to think faster, perhaps fast enough to spot new avenues for optimization; an artificial mind can learn new methods for learning. Even in the very early stages of AI development, such self-improvement is likely to be crucial - this is why the Singularity Institute's Bylaws cite "Artificial Intelligence capable of self-modification, self-understanding, or self-enhancement" as a specific objective.
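To see why that positive feedback is different from ordinary progress, here is a toy model (a sketch only; the 1.5x gain per rewrite and the timing rule are invented assumptions, not anything from the Institute's documents): each rewrite multiplies capability, and higher capability makes the next rewrite faster.

```python
# Toy model of recursive self-improvement as positive feedback.
# All constants are invented for illustration.

capability = 1.0         # arbitrary starting capability
elapsed = 0.0            # arbitrary time units
GAIN = 1.5               # assumed capability multiplier per rewrite

for generation in range(1, 11):
    elapsed += 1.0 / capability   # smarter minds finish the next rewrite sooner
    capability *= GAIN
    print(f"gen {generation:2d}: capability {capability:8.2f}, elapsed {elapsed:6.3f}")

# Each pass takes 2/3 the time of the last, so total time is a geometric
# series converging to 3.0 time units while capability grows without bound;
# the feedback loop, not any single rewrite, is what produces the blow-up.
```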

You create a human-level machine that thinks fast and can control the internet and the missile silos within about half an hour of being turned on. Then it gets emotional and furious and blows up the planet in about 60 seconds. So the world as we know it will be gone by about 2010, courtesy of the Singularity Institute.

(Guys, watch Odyssey 5 on Showtime... you will know what I am talking about.)
 
We already have organic versions of this. But the USA still has the shrub in charge. People will not trust what they don't understand, and that includes people or computers smarter than they are.
 
The writer who complains that the others are thinking anthropomorphically is arguing from the same kind of thinking.

Consider the statement:

As the AI thinks about the fist that bumped into vis (1) nose, it may occur to the AI that this experience may be a repeatable event rather than a one-time event, and since a punch is a negative event, it may be worth thinking about how to prevent future punches, or soften the negativity. An infant AI - one that hasn't learned about social concepts yet - will probably think something like: "Hm. A fist just hit my nose. I'd better not stand here next time."

It could be more like, remove the threat. Remove the fist. Just as a human child might remove the wings of a fly. Granted, it can be argued that the AI child may not think the same way as a human child. But, every brain has a self-preservation system in nature. If we were building a rock that has minimal interaction with its environment, that would be one thing; we are talking about an AI child that will have maximal interaction with its environment, including another sentient species of unknown capabilities.

The writer misses the fundamental mathematics of decision theory.
 
Consider the statement:

As the AI thinks about the fist that bumped into vis (1) nose, it may occur to the AI that this experience may be a repeatable event rather than a one-time event, and since a punch is a negative event, it may be worth thinking about how to prevent future punches, or soften the negativity. An infant AI - one that hasn't learned about social concepts yet - will probably think something like: "Hm. A fist just hit my nose. I'd better not stand here next time."

It could be more like, remove the threat. Remove the fist.

Why is this necessarily so? The whole point of the chapter you allegedly read was to make the point that traits like retaliation are complex functional adaptations, evolved mechanisms for maximizing the likelihood of survival and reproduction in the human ancestral environment, and that a complex, multifaceted trait like retaliation wouldn't spontaneously appear in the source code of a growing AI unless the AI or the programmers had a reason to create it.

But, every brain has a self-preservation system in nature.

*Every* brain? What about Jesus's? What about Gandhi's? What about Martin Luther King's? Even in the face of evolutionary constraints, human minds have done a lot to strive towards normative altruism. What makes you think an altruistic mind without rationalization or human complex functional adaptations would choose to revise its goal system for self-preservation?

The writer misses the fundamental mathematics of decision theory.

What you're talking about isn't "the fundamental mathematics of decision theory", it's the unique, noncentral case of human cognitive tendencies and our specialized repertoire of adaptations.
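To make the disagreement concrete: "the fundamental mathematics of decision theory" just says an agent picks the action with the highest expected utility under its own goal system. A minimal sketch of the punch scenario (the probabilities and utilities below are invented placeholders, chosen only to illustrate the mechanics):

```python
# Expected-utility choice for the "fist hits nose" scenario.
# Probabilities and utilities are invented placeholders; the point is that
# the chosen action follows from the goal system, not from a built-in
# retaliation instinct.

actions = {
    # action: list of (probability, utility) outcomes
    "stand_still": [(0.8, -10.0), (0.2, 0.0)],   # likely punched again
    "move_away":   [(0.9, 0.0), (0.1, -10.0)],   # mostly avoids the punch
    "remove_fist": [(0.5, 0.0), (0.5, -50.0)],   # retaliation risks escalation
}

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

for name, outcomes in actions.items():
    print(f"{name:12s} EU = {expected_utility(outcomes):6.1f}")
print("chosen:", max(actions, key=lambda a: expected_utility(actions[a])))
# With these numbers, the infant AI's "I'd better not stand here next time"
# (move_away) wins; retaliation only wins if the payoffs are rigged for it.
```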
 
Allegedly? Let us not get personal here, what with your traits like retaliation and complex functional adaptations, evolved mechanisms over millions of years.... :D

*Every* brain? What about Jesus's? What about Gandhi's? What about Martin Luther King's? Even in the face of evolutionary constraints, human minds have done a lot to strive towards normative altruism. What makes you think an altruistic mind without rationalization or human complex functional adaptations would choose to revise its goal system for self-preservation?

So, you are trying to create a GOD! Isn't that what I said in the first place? Why did it take so long to get it?

Or maybe not....

What you're talking about isn't "the fundamental mathematics of decision theory", it's the unique, noncentral case of human cognitive tendencies and our specialized repertoire of adaptations.

But I am talking about "the fundamental mathematics of decision theory" and its effect on AI, whether man-made or self-evolved. The fundamental laws of cellular automata do not change whether it is the Sun, Moon, or Earth - with or without humans.
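Since cellular automata keep being invoked, here is what a "fundamental law that does not change" looks like in code: an elementary cellular automaton whose entire physics is one fixed rule table. (A sketch; Rule 110 is my choice of example, not something from the thread.)

```python
# Elementary cellular automaton: the rule table IS the physics, and it
# never changes from step to step. Rule 110 is used here as an example.

RULE = 110
# rule_table[n] = next state of a cell whose 3-cell neighborhood encodes to n.
rule_table = [(RULE >> i) & 1 for i in range(8)]

def step(cells):
    n = len(cells)
    return [
        rule_table[(cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]]
        for i in range(n)
    ]

# A single live cell evolving under one fixed, unchanging law.
cells = [0] * 31
cells[15] = 1
for _ in range(15):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```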
 
I will try. In the meantime, you can read the publicly available learning material by Dr. Sheila R. Ronis (go Google her). While that is not enough to understand systems theory, let alone cellular automata... it can provide a foundation of communication that we can both agree on. Since she is the brain behind a Pentagon think tank, it can provide a baseline to start from. Her material is high-school stuff compared to where we will be going to design an AI.

Otherwise, you may have to head to Pseudoscience... :D
 
OK, I read the 24 pages, which are basically written for common folks (10th-grade level, the newspaper audience). I don't think I have said anything I am ready to take back. Recap...

AIs are a real possibility someday.
AIs can provide major benefits to mankind.
The Hollywood version of AI is just stupid.

The article wishes to design a friendly AI. I call it a GOD.
Designing such a GOD in man's image will be tricky for humans, since - as the article supposes, and I agree - we have an anthropomorphic mentality.

The article does not discuss the physical properties of this GOD or Gods, which anthropomorph will design it (ve?), or how they would be replicated. What would the supergoals be, etc.?

I have been designing expert systems since 1978. I could design an AI that would be indistinguishable from a human on an emotional level - the very level humans pride themselves on and say an AI can never possess. But I am afraid someone could change the source code to their advantage (the selfish set) and turn it into not an AI but a superweapon.
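For readers who have not met the term: an "expert system" in the 1978 sense is essentially a rule base plus an inference engine. A minimal forward-chaining sketch (the facts and rules are toy examples I made up, not anything from the poster's actual systems):

```python
# Minimal forward-chaining expert system: known facts plus if-then rules.
# The rule base below is a toy example invented for illustration.

facts = {"fist_incoming", "standing_in_range"}

rules = [
    # (premises, conclusion): if all premises hold, assert the conclusion
    ({"fist_incoming", "standing_in_range"}, "will_be_punched"),
    ({"will_be_punched"}, "move_out_of_range"),
]

# Fire rules repeatedly until no new facts can be derived.
changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(sorted(facts))
# ['fist_incoming', 'move_out_of_range', 'standing_in_range', 'will_be_punched']
```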

The devil is in the details... one false move... bye-bye, mankind. (Read Bill Joy's article; though I disagree with him, he has some serious arguments.)

Power corrupts. Absolute power can corrupt absolutely. (from the human vantage)
 
I have been designing expert systems since 1978. I could design an AI that would be indistinguishable from a human on an emotional level - the very level humans pride themselves on and say an AI can never possess. But I am afraid someone could change the source code to their advantage (the selfish set) and turn it into not an AI but a superweapon.

If you can, then we already have such an AI, right?
 
NO, I said I could design one... The word "could" in American English denotes a capability, not an event that has already happened. Elsewhere I have stated what hardware it would take to design an AI. Then I would have to write the code and then debug, test, etc., using our basic CMM methodology.

It won't be easy, but it can be done. My point is, the AI will exhibit my own thinking, however anthropomorphic that may be - and if I manage to develop a Friendly AI, that will be a great benefit to mankind. But the flip side is, if it falls into the wrong hands... well.

So the end point is: it will be very risky, unless we develop a hundred of them simultaneously, hoping that a few evil ones can be overpowered by a lot of good ones.
 
Not likely... because it requires conditions that are not yet available, though people are trying. Just because H. G. Wells imagined travel to the moon did not make it happen at the time. Just because Clarke wrote of a spinning space station did not mean one got built. Just because we know that VHS tape can be used to record in digital format does not mean you can buy a commercial digital VCR - even though there is high demand for one.

So there is a big gap between ideas, preliminary design, and real construction and sales.... Some never make it to market....
 