Building Gods or Building Our Potential Exterminators?

Remember the human brain is a machine. It can be built and it can be superseded.

mmm I believe it can be superseded, that's inevitable. I don't think I agree that it can ever be "built" - grown, maybe even duplicated - but of all man's achievements I don't think we'll ever be able to fully replicate a brain into an A.I. (and this might not even carry a negative connotation...). There's too much that happens between our sensory input and our actual "perceiving" of our environment for me to believe a replicated intelligence could share it. You can make A.I. all day long, but how could they "learn" to be human without exact human conditions? Until there is a common ground for existence (I have no idea what that could be, except for crazy sci-fi realm antics) we will be separate from any intelligence that doesn't share the same set of inputs for a similar experience.
 
It is only possible if we can upload a human brain (the memory and the program) to an artificial brain. That is quite a few years away....
 
All depends on your definition of "smart". A two-dollar calculator is "smarter" than I or any other human on this planet, but only if I define smart as the processing speed to carry out any number of logical or mathematical functions.
 
Intelligence is coupled in humans with emotions and a will to survive and keep existing at certain costs and efforts.
To what degree robots should be programmed that way is a matter of design and purpose..
Intelligence is not self-sustaining, and wisdom is another issue; a world of intelligence pure and simple is without meaning and purpose..
Replacing people with robots is not only a matter of independent evolution but also of human design, emotion and intention..
Technology's domination over human decision making is already the case today, but it emerges from an apparently selfless hi-tech entity.
Creative solutions to the inherent technology dilemma are possible,
I think..
 
But we would not have souls by definition, correct?

I'm not certain "soul" and "definition" work well together in the same sentence ;). Theoretically we should retain our "souls" no matter what, unless the provider of said souls decides to revoke the privilege of their bestowment upon us - assuming there is a subsidiary of our selves that actually functions without physical presence. I think I've decided (with no actual proof) that A.I. wouldn't have need for the limitation of a soul, because they don't actually live - or, coincidentally, die. We assume that because we as humans fear our own demise, something of like intelligence would also fear termination. I'm not sure that would be the case, and if the singularity occurs, a super-intelligence would probably be able not only to prolong its own existence indefinitely, but possibly ours as well. The coolest part about science is the unknown and the unimaginable; that's where I see A.I. leading us - augmenting our capabilities and patching our weaknesses.
 
The coolest part about science is the unknown and the unimaginable; that's where I see A.I. leading us - augmenting our capabilities and patching our weaknesses.

Or, if the "wrong" people start to design and build it, it could be the demise of civilization, except for the very few who will carry on controlling what the "new order" decides is best. :eek:
 
I think the most interesting topic within this discussion is the fact that a lot of you see A.I. "pessimistically", with trouble brewing on the horizon for mankind... Why is that? (serious question, not meant sarcastically) Granted, the military and various other totalitarian groups could use it for ill, but there's also the other side of the coin, where even the military might be nulled out when a superior intellect comes to be. The subject line of the post even begs the question, but is it just pop culture that makes us think A.I. will "exterminate" us, or is this something those of you who are voting for the extermination outcome actually fear?
 
My favorite story on this subject is "Answer", from Angels and Spaceships by Fredric Brown (Dutton, 1954). Here is the original text:


Dwar Ev ceremoniously soldered the final connection with gold. The eyes of a dozen television cameras watched him and the subether bore through the universe a dozen pictures of what he was doing.

He straightened and nodded to Dwar Reyn, then moved to a position beside the switch that would complete the contact when he threw it. The switch that would connect, all at once, all of the monster computing machines of all the populated planets in the universe--ninety-six billion planets--into the supercircuit that would connect them all into the one supercalculator, one cybernetics machine that would combine all the knowledge of all the galaxies.

Dwar Reyn spoke briefly to the watching and listening trillions. Then, after a moment's silence, he said, "Now, Dwar Ev."

Dwar Ev threw the switch. There was a mighty hum, the surge of power from ninety-six billion planets. Lights flashed and quieted along the miles-long panel.

Dwar Ev stepped back and drew a deep breath. "The honor of asking the first question is yours, Dwar Reyn."

"Thank you," said Dwar Reyn. "It shall be a question that no single cybernetics machine has been able to answer."

He turned to face the machine. "Is there a God?"

The mighty voice answered without hesitation, without the clicking of a single relay.

"Yes, now there is a God."

Sudden fear flashed on the face of Dwar Ev. He leaped to grab the switch.

A bolt of lightning from the cloudless sky struck him down and fused the switch shut.*

*Permission to post this short story has been requested.
http://www.alteich.com/oldsite/answer.htm
 
I think the most interesting topic within this discussion is the fact that a lot of you see A.I. "pessimistically", with trouble brewing on the horizon for mankind... Why is that? (serious question, not meant sarcastically) Granted, the military and various other totalitarian groups could use it for ill, but there's also the other side of the coin, where even the military might be nulled out when a superior intellect comes to be. The subject line of the post even begs the question, but is it just pop culture that makes us think A.I. will "exterminate" us, or is this something those of you who are voting for the extermination outcome actually fear?
Yes. However, I think it wise to stick to making silicon and other inorganic machines. I am a little concerned about the hybrids in the brain, etc., and more concerned about the purely organic computers - some day, when they are more advanced, they may decide humans are nutritious and dumb as pigs.
 