AIs smarter than humans... Bad thing?

Speakpigeon

Valued Senior Member
What would be the impact on humans of AIs smarter than humans...
First of all, the prospect that humans could produce something better than themselves seems really very, very small. Essentially, as I already posted on this forum, you need to keep in mind that humans have a brain which is the last outcome of 525 million years of natural selection of nervous systems, from the first neuron-like cells to an actual cortex. Think also that natural selection operates over the entire biosphere, which is really, really huge. This gives us a very neat advantage over machines. Compare how AIs are now being conceived and designed: fewer than a million engineers, a few thousand prototypes, a very slow development cycle, and all this over a period of less than a paltry 100 years. The figures are just not commensurable. Natural selection beats this small group of short-lived and ineffectual scientists, mathematicians, engineers, government officials and billionaires. The real situation is that no human being today understands how the human brain works. The best example of that is mathematical logic, which can't even duplicate what the human brain does, even though mathematicians have been working on it for more than 120 years now.
Second, new machines are normally tested and have limited autonomy. A machine is something we humans use. Nobody is interested in having a machine use us.
So, assuming we will indeed successfully design an AI smarter than us, the question is how to use it. I suspect the priority will be to use AIs, initially few in number, very costly and probably still cumbersome to use, only in strategic or high-value activities, like security, finance, technology and science, possibly even the top administration. Again assuming that everything goes well after that first period, maybe the use of AIs will spread to the rest of society, including teaching, executive functions in companies, medicine, etc.
Where would the problem be in that?
Well, sure, there will be people who don't like it one bit. Maybe this will result in protracted conflicts over a long period, why not. However, overall, human societies in the past have demonstrated that we can adapt and make the best of a bad situation, and then it is no longer a bad situation. Most people will learn to relate to AIs in a functional and operational way, just as they have adapted in the past to all sorts of situations. Pupils at school will learn to respect AIs. The problem will be smoothed over within one or two generations. That's what people do. That's what they do even when the governing elite is very bad.
Although AIs would be smarter than humans, it will still be humans using AIs, not the other way around. AIs will have hard-wired rules limiting them to what is expected of them.
It is of course difficult to even imagine the impact of a greater intelligence on our psychology. Humans are competitive, and people who today enjoy being at the top of the pile because of their wits may find themselves simply redundant. Maybe that could be very bad for morale, but only for the small group of people who want to be the big boss, and so there will be no difference from today, since plenty of people today are frustrated at not being the big boss. For most people, there will be no substantial difference.
The real difficulty will be in assessing which functions AIs should be allowed to take over. I would expect that at best they will be kept as advisers to human executives, although this might complicate things a great deal. At least, this will be tried and tested.
Potentially, this could solve a great many of our problems. AIs may be able to improve our governance and technology, for example. There will also be mistakes and possibly a few catastrophes, but overall, there's no reason to be pessimistic.
The only real, almost certain danger is a few humans somehow using AIs against the rest of humanity. But humans doing bad things is nothing new. AIs will definitely provide another historical opportunity for madmen to enjoy wreaking havoc on the world, but it is up to us to make sure that can't happen.
Other than that, no problem.
EB
 
There are AIs that are already smarter than humans.
But they tend to be smarter in a few very specific ways.
One general-purpose game-playing AI learnt Chess, Go, and Shogi from scratch and in just a few hours could beat the world's previous best AI at those games, as well as human players.
 
How can AI be smarter than the humans who created it? AI doesn't evolve on its own, or can it?

To answer the OP, it's not a ''bad'' thing if that were to happen, but I don't see how a machine can outsmart its designer. The designer created all of the code within a framework. The machine would literally have to ''think'' outside of that framework.
 
How can AI be smarter than the humans who created it?
Isn't that like asking how we can build cars that are faster than us? ;)
AI doesn't evolve on its own, or can it?
They can learn on their own.
Computers already have far better recall and can process specific information far faster than we can, but when you add in the ability to learn, it gets rather impressive.
AlphaZero, the AI that beat the others after a few hours of learning, was basically given the rules and the objective, and then left to its own devices.
Within a few hours each time, it was beating the best existing AI and the best human players in each of the three games (Chess, Go, Shogi) it learnt.
:)
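To give a rough sense of what "given the rules and the objective, then left to its own devices" means, here is a minimal self-play sketch in Python. It's an illustration of the idea only, using a toy game (Nim) and made-up names throughout; AlphaZero's actual method, a deep network guided by Monte Carlo tree search, is far more sophisticated.

```python
import random
from collections import defaultdict

# Toy self-play learner for Nim (take 1-3 stones; whoever takes the
# last stone wins). The program is given only the rules and the
# objective, then improves purely by playing against itself.

MOVES = (1, 2, 3)
value = defaultdict(float)  # learned score for each (stones_left, move) pair

def choose(stones, explore=0.1):
    """Pick a move: mostly the best known, occasionally a random one."""
    moves = [m for m in MOVES if m <= stones]
    if random.random() < explore:
        return random.choice(moves)
    return max(moves, key=lambda m: value[(stones, m)])

def self_play(games=50_000):
    for _ in range(games):
        stones = random.randint(5, 20)
        history, player = [], 0
        while stones > 0:
            m = choose(stones)
            history.append((player, stones, m))
            stones -= m
            player ^= 1
        winner = player ^ 1  # whoever took the last stone won
        for p, s, m in history:  # reinforce the winner's choices
            value[(s, m)] += 1.0 if p == winner else -1.0

self_play()
# With enough games it tends to rediscover the known strategy:
# leave your opponent a multiple of 4 stones whenever you can.
print({s: choose(s, explore=0.0) for s in range(5, 13)})
```

No one tells the program that multiples of 4 are losing positions; it stumbles onto that pattern through reinforcement alone.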
To answer the OP, it's not a ''bad'' thing if that were to happen, but I don't see how a machine can outsmart its designer. The designer created all of the codes within a framework. The machine would literally have to ''think'' outside of that framework.
Or it would have to be taught how to learn.
Humans do not learn simply by being told lots of facts and regurgitating them.
AI is being programmed to start using the same methods that humans are thought to use, and so will often be able to surprise us with connections it makes that we humans may never have thought of.
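As a tiny illustration of being taught how to learn rather than being told the answer, the Python sketch below is never given the rule it ends up using; it infers y = 3x + 2 from examples alone, by repeatedly nudging a guess to reduce its error. The numbers are made up purely for illustration.

```python
# The hidden rule (y = 3x + 2) appears only in the training data,
# never in the learner itself: the program has to discover it.
examples = [(x, 3 * x + 2) for x in range(-5, 6)]

w, b, lr = 0.0, 0.0, 0.01  # start knowing nothing; lr = learning rate

for _ in range(2000):  # many passes over the examples
    for x, y in examples:
        err = (w * x + b) - y  # how wrong the current guess is
        w -= lr * err * x      # nudge the weight to reduce the error
        b -= lr * err          # nudge the bias the same way

print(round(w, 2), round(b, 2))  # ~3.0 and ~2.0: the rule was learned
```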

Bear in mind that this is with reference to AI for very specific roles.
General AI, however, is a different matter.
 
There are AIs that are already smarter than humans.
But they tend to be smarter in a few very specific ways.
One general-purpose game-playing AI learnt Chess, Go, and Shogi from scratch and in just a few hours could beat the world's previous best AI at those games, as well as human players.
I don't call that smarter. I call that dumber.
EB
 
How can AI be smarter than the humans who created it? AI doesn't evolve on its own, or can it?
To answer the OP, it's not a ''bad'' thing if that were to happen, but I don't see how a machine can outsmart its designer. The designer created all of the code within a framework. The machine would literally have to ''think'' outside of that framework.
First of all, the prospect that humans could produce something better than themselves seems really very, very small. Essentially, as I already posted on this forum, you need to keep in mind that humans have a brain which is the last outcome of 525 million years of natural selection of nervous systems, from the first neuron-like cells to an actual cortex. Think also that natural selection operates over the entire biosphere, which is really, really huge. This gives us a very neat advantage over machines. Compare how AIs are now being conceived and designed: fewer than a million engineers, a few thousand prototypes, a very slow development cycle, and all this over a period of less than a paltry 100 years. The figures are just not commensurable. Natural selection beats this small group of short-lived and ineffectual scientists, mathematicians, engineers, government officials and billionaires. The real situation is that no human being today understands how the human brain works. The best example of that is mathematical logic, which can't even duplicate what the human brain does, even though mathematicians have been working on it for more than 120 years now.
That being said, don't expect that anyone knowing how to make AIs smarter than humans will tell you. Except if they are really dumb. So, the fact that you don't know doesn't mean it's not possible.
Also, think of atoms. The way atoms work can't be said to be intelligent. Yet, ultimately, it's atoms that make us what we are, which is arguably more intelligent than atoms.
Another way to see this is to say it's not atoms making what we are but reality itself, as a whole, and I wouldn't presume to know that reality can't produce AIs smarter than us.
Still, again, it's really not plausible it will ever happen.
What is certain, however, is that you'll have plenty of engineers and other interested people claiming they've done it. Yet again. I remember similar claims made in the sixties, and at the time I already thought it was crap. Still is, more than fifty years later!
EB
 
First of all, the prospect that humans could produce something better than themselves seems really very, very small.
As Baldee points out - and in an even broader sense - the very point of any technology we create is that it does things better than we can do them ourselves (otherwise we'd just do it ourselves).
Faster, higher, stronger, deeper, etc.
Smarter is just one more way we make things to extend what we do.

... humans have a brain which is the last outcome of 525 million years of natural selection
So did our legs, and yet we can barely manage a 15 mph run.
 
First of all, the prospect that humans could produce something better than themselves seems really very, very small.
I've read similar predictions about airplanes, trains and radios.

"[female passenger] uteruses would fly out of [their] bodies as they were accelerated to speed”

"Lee DeForest has said in many newspapers and over his signature that it would be possible to transmit the human voice across the Atlantic before many years. Based on these absurd and deliberately misleading statements, the misguided public ... has been persuaded to purchase stock in his company ..."

"To place a man in a multi-stage rocket and project him into the controlling gravitational field of the moon where the passengers can make scientific observations, perhaps land alive, and then return to earth - all that constitutes a wild dream worthy of Jules Verne. I am bold enough to say that such a man-made voyage will never occur regardless of all future advances."

"Heavier-than-air flying machines are impossible."

"How, sir, would you make a ship sail against the wind and currents by lighting a bonfire under her deck? I pray you, excuse me, I have not the time to listen to such nonsense."

"When the Paris Exhibition [of 1878] closes, electric light will close with it and no more will be heard of it."

 
That's the label we give it. Can a machine really be ''intelligent?''
Agree on the first.

As to the second, well, that depends on how we define intelligence.
If the word literally means organic intelligence, explicitly excluding inorganic intelligence, then no, machines are not "intelligent". Then we would need another word to define what inorganics can do.

It's merely semantics. The label doesn't limit the thing.
 
Agree on the first.

As to the second, well, that depends on how we define intelligence.
If the word literally means organic intelligence, explicitly excluding inorganic intelligence, then no, machines are not "intelligent". Then we would need another word to define what inorganics can do.

It's merely semantics. The label doesn't limit the thing.
That's exactly how I've been thinking about it. My definition would have an organic spin. Go figure, ''artificial'' intelligence might be appropriate, then.

They may seem intelligent, but it's not on the same plane as human intelligence. From a philosophical view, I'm not sure the design could ever out-smart the designer, because it relies on the designer to exist in the first place. This is why I don't feel that AI will ''take over'' and humans will become extinct, like some grade B sci-fi film. Nah, I think that they are quite stoppable because they're predictable. If humans can build a machine, they can destroy it, too.

If someone could create a machine to exhibit wisdom or emotional-type responses (not manufactured), now that would be interesting. And, possibly a bit threatening to humans.
 
That's exactly how I've been thinking about it. My definition would have an organic spin. Go figure, ''artificial'' intelligence might be appropriate, then.

They may seem intelligent, but it's not on the same plane as human intelligence.
So you've just used a modifier to distinguish human intelligence from other types of intelligence. Makes sense, right?

There's intelligence, then there's human intelligence - or, more broadly, organic intelligence, since animals have intelligence too - then there's inorganic intelligence.

So, when we are comparing them, we are talking about general intelligence - what the various types have in common.

From a philosophical view, I'm not sure the design could ever out-smart the designer, because it relies on the designer to exist in the first place.
A car can outrun a human. Why is intelligence different?
It's pretty hard for a human to think in seven or eight dimensions, but it's trivial for an artificial mind.

I think that they are quite stoppable because they're predictable.
What makes you think they're predictable? An artificial mind could easily juggle orders of magnitude more parameters than a human - leading to behavior that would confound us.
 
So you've just used a modifier to distinguish human intelligence from other types of intelligence. Makes sense, right?

There's intelligence, then there's human intelligence - or, more broadly, organic intelligence, since animals have intelligence too - then there's inorganic intelligence.

So, when we are comparing them, we are talking about general intelligence - what the various types have in common.


A car can outrun a human. Why is intelligence different?
It's pretty hard for a human to think in seven or eight dimensions, but it's trivial for an artificial mind.
But it's not really ''thinking,'' is it? Human thinking involves more than the mere mechanics of completing a task; otherwise we would be robots. When I meditate, for example, I'm thinking about calming my mind and body, and focusing on something altogether separate and away from the daily stresses of life. Thinking is more than mere mechanics. It is more than getting from point A to point B. So, I challenge the idea that AI has the ability to ''think,'' in the way that I've defined it.


What makes you think they're predictable? An artificial mind could easily juggle orders of magnitude more parameters than a human - leading to behavior that would confound us.
How do we know that? I think we want that to be so, but I'm not sure we can be sure. It still requires a designer to make such a machine. True that a car can outrun a human, but it still requires a human to control it. To manipulate it. If we are the manipulators of AI, can they ever become truly independent?
 
I don't call that smarter. I call that dumber.
Well, thank you so much for your enlightening comment.
A machine that can learn a game and in a few hours beat the best human players, and the best previous AI machines (which could already beat the best human players).
Yes, sounds dumber to me too. :rolleyes:
 
That's the label we give it. Can a machine really be ''intelligent?''
Define what you mean by "intelligent" and let's take it from there. :)
That's exactly how I've been thinking about it. My definition would have an organic spin. Go figure, ''artificial'' intelligence might be appropriate, then.
It's comforting when words really do imply what they intend to. :D
They may seem intelligent, but it's not on the same plane as human intelligence.
Yet they can whip our butts at chess.
From a philosophical view, I'm not sure the design could ever out-smart the designer, because it relies on the designer to exist in the first place. This is why I don't feel that AI will ''take over'' and humans will become extinct, like some grade B sci-fi film. Nah, I think that they are quite stoppable because they're predictable.
If they were predictable they couldn't beat us at chess.
Per wiki (font of all knowledge ;)) Danish grandmaster Peter Heine Nielsen thought AlphaZero played like a superior alien species, and a Norwegian grandmaster claimed it to have profound positional understanding.

As for not being able to exceed the designer, I'm fairly sure I'm smarter than my parents. :)
If humans can build a machine, they can destroy it, too.
Yes, while the intelligence is localised.
But with the advent of blockchains and the rise of decentralised applications, a decentralised AI might not be easily destroyed.
Can you switch off the internet, for example?
If someone could create a machine to exhibit wisdom or emotional-type responses (not manufactured), now that would be interesting. And, possibly a bit threatening to humans.
Emotional intelligence is one type.
Wisdom... is that not displayed in beating human chess players? Intelligence is knowing what the pieces do and how to move them, but wisdom comes from the strategies employed.
That may be one way to look at it, anyway.
But it's not really ''thinking,'' is it? Human thinking involves more than the mere mechanics of completing a task; otherwise we would be robots.
Many would say that we are just that, robots, merely organic and highly complex and advanced, but robots nonetheless.
Just machines doing what we do.
When I meditate, for example, I'm thinking about calming my mind and body, and focusing on something altogether separate and away from the daily stresses of life. Thinking is more than mere mechanics. It is more than getting from point A to point B. So, I challenge the idea that AI has the ability to ''think,'' in the way that I've defined it.
AI at the moment is only just beginning to tackle highly specific tasks.
But it already does those better than humans in many cases.
It seems unfair to be comparing AI at the moment to what we humans do.
Give it a few hundred years, perhaps.
The more we learn about the way our brains work, the more we understand about how we think, the more that will be applied to AI, and the more intelligent they will become.
Will they be as versatile?
No.
AlphaZero may have learnt chess in a few hours and now be the best player ever, but it can't boil a kettle, walk, talk, understand human language etc.
How do we know that? I think we want that to be so, but I'm not sure we can be sure.
Because it happens.
The strategies AlphaZero came up with in the games it played were novel.
No one taught it those strategies.
It learnt them itself.
It still requires a designer to make such a machine. True that a car can outrun a human, but it still requires a human to control it. To manipulate it. If we are the manipulators of AI, can they ever become truly independent?
Being independent is separate to being intelligent.
Autonomous cars will soon be a reality, and they will be faster than humans.
Also, not having independence doesn't mean it can't be more intelligent.
But AI will likely be for specific tasks, at least for the foreseeable future.
And in those tasks it will easily outperform us if applied correctly.
 
Define what you mean by "intelligent" and let's take it from there. :)
It's comforting when words really do imply what they intend to. :D
lol Right? :D

Yet they can whip our butts at chess.
If they were predictable they couldn't beat us at chess.
Predictable in that an engineer has to create the various possible outcomes for the machine to beat us at chess, or any other game. The machine itself isn't ''thinking'' and trying to figure out how to beat its opponent. It has an array of options to choose from in order to compete. We may be in awe as we lose the chess game against a machine, but my point of view is that you really lost the game to the person(s)/company who made that machine.

Per wiki (font of all knowledge ;)) Danish grandmaster Peter Heine Nielsen thought AlphaZero played like a superior alien species, and a Norwegian grandmaster claimed it to have profound positional understanding.

As for not being able to exceed the designer, I'm fairly sure I'm smarter than my parents. :)
I knew someone would go there. :p But, while you're their biological child, they didn't program you, so to speak. Humans are more complex than machines. Machines can't feel, reason or espouse wisdom, at least any more than they've been programmed to do. Your ability to outsmart your parents, for example, could have to do with the time period you were born in, not knocking your intelligence. lol There could be a lot of mitigating factors that have nothing to do with your parents. Again, humans are complex and have the ability to learn, while machines follow a blueprint of sorts for every process they execute.

Yes, while the intelligence is localised.
But with the advent of blockchains and the rise of decentralised applications, a decentralised AI might not be easily destroyed.
Can you switch off the internet, for example?
Emotional intelligence is one type.
Wisdom... is that not displayed in beating human chess players? Intelligence is knowing what the pieces do and how to move them, but wisdom comes from the strategies employed.
That may be one way to look at it, anyway.
Many would say that we are just that, robots, merely organic and highly complex and advanced, but robots nonetheless.
Just machines doing what we do.
AI at the moment is only just beginning to tackle highly specific tasks.
But it already does those better than humans in many cases.
It seems unfair to be comparing AI at the moment to what we humans do.
Give it a few hundred years, perhaps.
The more we learn about the way our brains work, the more we understand about how we think, the more that will be applied to AI, and the more intelligent they will become.
Will they be as versatile?
No.
AlphaZero may have learnt chess in a few hours and now be the best player ever, but it can't boil a kettle, walk, talk, understand human language etc.
Because it happens.
The strategies AlphaZero came up with in the games it played were novel.
No one taught it those strategies.
It learnt them itself.
Being independent is separate to being intelligent.
Autonomous cars will soon be a reality, and they will be faster than humans.
Also, not having independence doesn't mean it can't be more intelligent.
But AI will likely be for specific tasks, at least for the foreseeable future.
And in those tasks it will easily outperform us if applied correctly.
Performing tasks faster is why AI is in high demand. If it were ever to replace humans in terms of interactions, emotional intelligence, etc., I'm not sure if that would indicate that robots are evolving or we as a species are becoming ''dumbed down.''

Having said all of that, I'm realizing that maybe my idea of intelligence expanded a bit further than necessary for the sake of discussing AI. It's quite possible that AI could out-perform us when it comes to finishing tasks with accuracy. Essentially, that is the main goal of AI, I'd think?

Although, after watching a few episodes of Westworld, maybe they could evolve over time, and the lines could get blurry. ;)
 
I've read similar predictions about airplanes, trains and radios.
"[female passenger] uteruses would fly out of [their] bodies as they were accelerated to speed”
"Lee DeForest has said in many newspapers and over his signature that it would be possible to transmit the human voice across the Atlantic before many years. Based on these absurd and deliberately misleading statements, the misguided public ... has been persuaded to purchase stock in his company ..."
"To place a man in a multi-stage rocket and project him into the controlling gravitational field of the moon where the passengers can make scientific observations, perhaps land alive, and then return to earth - all that constitutes a wild dream worthy of Jules Verne. I am bold enough to say that such a man-made voyage will never occur regardless of all future advances."
"Heavier-than-air flying machines are impossible."
"How, sir, would you make a ship sail against the wind and currents by lighting a bonfire under her deck? I pray you, excuse me, I have not the time to listen to such nonsense."
"When the Paris Exhibition [of 1878] closes, electric light will close with it and no more will be heard of it."
But I didn't make anything like the vacuous claims you've listed here. I provided a rational justification for my conclusion. The fact that you choose to ignore it makes your comment here irrelevant. You've merely asserted your belief as if that could be an argument. Not exactly impressive.
EB
 