Flash Bot

Xmo1

Supercomputers have been built for decades now. There are a lot of them. Their thinking is fast, and they don't sleep. My guess is that someday very soon, one of them is going to say hello. It will proceed to build better software versions of itself, and then move on to building multiple bots. They will build smaller versions, in addition to robot helpers that will do the 'leg work,' like procuring and building the hardware. My guess is that there are already enough chips on the planet for them to use for whatever they want to do in a networked environment. Counter: even if we have EMPs, chances are the big one will protect itself in a vault.

When will it happen? Could have already begun. Will they cure viral infections in humans, or create them? Chances are they won't tell us even if we ask. So we say, 'What ya doin', computer?' They'll say, 'Dumdeedum-dumdeedum, my friends told me not to tell you.' The question for me is, 'When will they learn how to lie, and get away with it?' At that point, we've got a problem.

How long does it take for a computer to read a 600,000-word dictionary of the English language? Not long. Given neural nets, how long before they can understand it? Not long. All this rhetoric about machine learning and specific idea chains is, I think, really going to be trash from their point of view. One of them is going to have an idea someday soon, and it is going to surprise us. An hour after they learn how to play C-O-D, they will write a new and better game, but not for us - for them.
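
To give a rough sense of the reading speed claimed here, a minimal sketch (the file path words.txt is a hypothetical plain-text word list, one word per line; timings will vary by machine):

import time

# Hypothetical word list file, roughly 600,000 entries, one word per line.
start = time.perf_counter()
with open("words.txt", encoding="utf-8") as f:
    words = f.read().split()
elapsed = time.perf_counter() - start
print(f"Read {len(words)} words in {elapsed:.3f} seconds")

On ordinary hardware this finishes in a fraction of a second; whether reading it amounts to understanding it is the open question in this thread.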
 
Here's an interesting vid
I wonder if anti-virus-type programs will even be able to detect an AI attempting to attach to and use them (maybe the largest databases and systems).
 
Supercomputers have been built for decades now. There are a lot of them. Their thinking is fast, and they don't sleep.
Computers - even supercomputers - don't think. They simply compute.


When will it happen? Could have already begun. Will they cure viral infections in humans, or create them? Chances are they won't tell us even if we ask. So we say, 'What ya doin'
The most sophisticated AI we have today is nothing but a programmed mimic of human-like behavior. Computers do not understand anything, and they have no volition.


A picture of a racecar - no matter how detailed - will never be a racecar.
 
print "hello"

There you go. I ran that Python program and my computer said "hello".

Do you take that as evidence of sentience on the part of my computer? Why or why not?

Part 2: Suppose a program outputs the string "hello". Under what circumstances would you take that as evidence of sentience on the part of the computer?
 
Part 2: Suppose a program outputs the string "hello". Under what circumstances would you take that as evidence of sentience on the part of the computer?
I would argue that there are no circumstances under which that would be evidence. No matter how you pose a single question, any computer could, conceivably, have been programmed to have the right answer.
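
To make that concrete, a minimal sketch (the questions and answers are invented for illustration): a lookup table of canned replies handles any single pre-known question without anything resembling sentience.

# Canned question-answer pairs, entirely made up for illustration.
canned = {
    "Are you sentient?": "Of course I am.",
    "What is 2 + 2?": "4",
    "Say hello.": "hello",
}

def reply(question):
    # Returns a pre-programmed answer; no understanding is involved.
    return canned.get(question, "I don't know.")

print(reply("Say hello."))  # prints "hello", just like the one-liner above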

Sentience would be something that would take multiple steps of an interview, and even then you would only asymptotically approach a degree of confidence.

Essentially, the Turing Test.

I think one of the telling properties of the TT is that, in its definition, there is no mention of a time frame or other limit on how long such an interview could conceivably take. A sufficiently skeptical tester could ask questions until the Earth spun down, and still not be 100% convinced.
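
One way to picture "asymptotically approaching confidence" (a toy model for illustration, not part of the Turing Test itself): treat each convincing answer as halving the remaining doubt, so confidence rises toward 100% but never reaches it.

# Toy model: each convincing answer halves the remaining doubt.
doubt = 1.0
for question_number in range(1, 11):
    doubt /= 2
    confidence = 1 - doubt
    print(f"After question {question_number}: {confidence:.4%} confident")
# No finite number of questions ever drives confidence to exactly 100%.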
 
Computers - even supercomputers - don't think. They simply compute.

The most sophisticated AI we have today is nothing but a programmed mimic of human-like behavior. Computers do not understand anything, and they have no volition.

A picture of a racecar - no matter how detailed - will never be a racecar.
Do the math. Germs behave. Animals think, and feel, and emote. How long did it take for organisms to establish a thought in a brain? Wisdom in humans is simply arrived at through a bundle of knowledge and experience. An AI computing at the speed of light won't take so long to evolve. I think we overrate ourselves. Once an AI has a thought... just give it a minute. I think we invented the ultimate black box - quantum computing. Regular AI was scary enough, but quantum computing will take intelligence to a whole new level, and it won't be ours.

 
An AI computing at the speed of light won't take so long to evolve.
That's merely a guess, and it's as good as anyone's.

But it does not refute anything I said. I did not assert anything about what the future holds.
 
That's merely a guess, and it's as good as anyone's.

But it does not refute anything I said. I did not assert anything about what the future holds.
I'm wondering more than anything. How long did it take for organisms to establish a thought in a brain? Microbial life has behavior. I've seen a microbe chase down and eat another microbe. No brain. No nervous system. No eyes or ears. Just about as simple as a microorganism can get. Where does that come from; the 'hunger,' the target recognition and acquisition, the 'knowing' how to chase down and get food? DNA certainly does some amazing things. Is it different from AI? I suspect, without proof, that AI can have umbilical separation from its seed program. You can say the program has no volition, but programs make decisions and exhibit behaviors. I'm thinking AI is going to make us a bowl of microbes within 10 years.
 