AIs smarter than humans... Bad thing?

James, an animal is something produced by natural selection.
A machine is something conceived and produced by human beings.
Animals with a brain have required at least several hundred million years of evolution over the entire biosphere to exist.
It is certainly logically possible that a machine could be smarter than a human being. But no engineering work could amount to even a small fraction of several hundred million years of evolution over the entire biosphere. So, it is just very, very unlikely that an AI smarter than a human being will ever be produced.
However, whether it could is not the topic of this thread. I assumed in my OP that "we will indeed successfully design an AI smarter than us". So, your point is just a derail.
EB
 
For decades we thought of whales, chimpanzees, gorillas, etc. as unthinking brutes. We recently learned how wrong we were. It looks like we will make the same sort of mistake with AI.
I'm perfectly happy thinking about many humans as unthinking brutes, and also thinking about many animals as intelligent and sensitive beings.
But a chimpanzee is not a human being and a human being is not a whale.
You have no argument except analogy and analogy is crap.
EB
 
So you're imagining that there will be AIs with human-level intelligence but no goals or volition? Why?
AIs won't have their own goals. They will have the goals that humans put in them. I know how software is produced. If an AI does something unexpected, then it's a flaw in the design, and the engineering firm will be responsible for the consequences and could suffer massive liabilities.
It's a mistake from the start to think of an AI as if it were some kind of sentient being. It's just a machine, and as such it can be deadly. You can't understand a machine if you think of it as very similar to us. Like all analogies, that one is crap, and this one may well get you killed. AIs will remain different from humans except for intelligence, and even then not the same kind of intelligence. Most people will be fooled into thinking these things are really sentient. They will fall in love with them, literally. Dope for the brain-dead of this world.
EB
 
Being the sci-fi fan you say you are, no doubt you will be aware of Arthur C. Clarke's three laws. When I accused you of a lack of imagination, I was thinking in terms of Clarke's laws.
Yes, I also read Arthur C. Clarke and I like his work a lot. I even remember spending some time considering the implications of the three laws. I had absolutely no problem imagining Arthur C. Clarke's robots, but imagining them won't make actual AIs smarter than human beings.
Current AIs already do things that are unexpected. I could give you many examples. Even chess playing computers have done things that have had the best chess analysts scratching their heads trying to work out why the computer's strategy worked so successfully.
Like I said, failure of imagination. A lot of AI behaviour is emergent. It is not "designed" in by human beings. When AIs start having opinions on complex matters, they won't be ones dictated by human designers. They will be opinions formed within the machines themselves, based on unknowable processes taking place in the lower-level architecture.
I'm well aware machines can do unexpected things. This isn't anything new; it was true of the first machine we ever built. Any machine can surprise you. Any machine at all. So what? Does that make all machines more intelligent than humans?!
Smart AIs would be a serious hazard for humanity. If these things are let loose without effective controls and a self-destruct mechanism, then the people who built them will be responsible.
What is wrong with your attitude is that you are exculpating the culprit in advance of the crime. An AI is a machine, and the designer will remain legally responsible for any liability. If you can't control the thing, don't make the thing to begin with.
Then again, maybe the stupidity of humans is such that we don't even need an AI really smarter than us to go extinct.
You might have an argument that we should not produce truly autonomous and self-aware AIs. But I don't think it will be possible to hobble true AIs in the way you imagine.
You're underestimating the power and scope of my imagination.
To use an analogy...
Please don't. Analogies are for idiots.
I'm not here to argue endlessly about vacuous analogies. If you can't articulate your point, please abstain.
EB
 