AIs smarter than humans... Bad thing?

Well, thank you so much for your enlightening comment.
A machine that can learn a game and in a few hours beat the best human players, and the best previous AI machines (which could already beat the best human players).
Yes, sounds dumber to me too. :rolleyes:
Relevance?
Yeah, sure, and wheels go faster than any human, which presumably shows cars are more intelligent than humans. Bravo.
EB
 
Relevance?
Yeah, sure, and wheels go faster than any human, which presumably shows cars are more intelligent than humans. Bravo.
EB
SP inadvertently misses the analogy. Not the brightest tool in the shed...

The analogy shows that cars can exceed humans in the speed department. More generally, almost all of our technology is designed to exceed our human abilities - otherwise we wouldn't need it.

So why can't AI exceed humans in the thinking department?
 
Predictable in that an engineer has to create the various possible outcomes for the machine to beat us at chess, or any other game.
That's not how AlphaZero works.
It really was just given the rules, and the rules include the objective.
How it achieved its objective was up to it.
It certainly wasn't a case of having every possible combination stored and it just picked the best one.
It played millions of games against itself and genuinely learnt strategies to play - through reinforcing neural networks etc.
Pretty much the way humans do, but we're much more complex.
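
For anyone who wants to see what "just playing against itself" can look like, here is a minimal, purely illustrative Python sketch: tabular self-play learning on tic-tac-toe. It is nowhere near AlphaZero's actual architecture (which combines deep neural networks with Monte Carlo tree search), and the game, parameters and helper names here are my own invented example, but the principle is the same: the program is given only the rules and the objective, then improves by playing itself.

```python
# Toy illustration of learning by self-play (NOT AlphaZero; just the idea).
# The program is told the rules and the objective, nothing else.
import random
from collections import defaultdict

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def result(board):
    """Return 'X', 'O', 'draw', or None if the game is still going."""
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return 'draw' if ' ' not in board else None

Q = defaultdict(float)        # learned value of (position, move) pairs
ALPHA, EPSILON = 0.3, 0.1     # learning rate, exploration rate

def choose(board, player, explore):
    moves = [i for i, s in enumerate(board) if s == ' ']
    if explore and random.random() < EPSILON:
        return random.choice(moves)          # occasionally try something new
    state = ''.join(board) + player
    return max(moves, key=lambda m: Q[(state, m)])

def play(learn=True):
    board, player, history = [' '] * 9, 'X', []
    while True:
        move = choose(board, player, explore=learn)
        history.append((''.join(board) + player, move, player))
        board[move] = player
        outcome = result(board)
        if outcome is not None:
            if learn:   # reinforce every move toward the final outcome
                for state, m, who in history:
                    reward = 0.0 if outcome == 'draw' else (1.0 if outcome == who else -1.0)
                    Q[(state, m)] += ALPHA * (reward - Q[(state, m)])
            return outcome
        player = 'O' if player == 'X' else 'X'

for _ in range(100_000):      # "training": the program plays itself
    play()

# With perfect play tic-tac-toe is a draw; after enough self-play the
# greedy policy should (usually) find its way there on its own.
print(play(learn=False))
```

Nobody stores a table of "all possible positions" here; the program simply accumulates experience from its own games.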
The machine itself isn't "thinking" and trying to figure out how to beat its opponent.
Yes, it is.
Unless you limit thinking to living machines.
It has an array of options to choose from, in order to compete.
So does a human chess player: they have a maximum of sixteen pieces and each piece can move in a defined manner.
We may be in awe as we lose the chess game against a machine, but you lost the game against the person(s)/company who made that machine, is my point of view.
No, you lost against the machine.
There is no chess-player alive today who could program the machine to come up with what it comes up with.
Simply because it can come up with novel approaches that no one has yet thought of.
I knew someone would go there. :p But, while you're their biological child, they didn't program you, so to speak. Humans are more complex than machines. Machines can't feel, reason or espouse wisdom, at least any more than they've been programmed to do. Your ability to outsmart your parents, for example, could have to do with the time period you were born in, not knocking your intelligence. lol It could involve a lot of contributing factors that have nothing to do with your parents. Again, humans are complex and have the ability to learn, while machines follow a blueprint of sorts, for every process they execute.
Sure, I was being facetious in part.
But AI can and do learn.
Just as humans do, but not in a general way like we can, only (currently) for very specific tasks.
In their specific purpose they can learn, do learn, and can outsmart us at it.
And the techniques for learning are becoming better and better.
Many think the rate of progress in the ability of AI is exponential, but others think it linear.
Either way, it's only going to get better and more widespread.
Performing tasks faster is why AI is in high demand.
You're thinking of robots. :)
Robots perform the same tasks faster and more accurately, and don't need sleep etc.
They have no ability to learn.
They are simply repeating the same preprogrammed process.
That is not AI.
If it were to ever replace humans in terms of interactions, emotional intelligence, etc...I'm not sure if that would indicate that robots are evolving, or we as a species are becoming "dumbed down."
In many ways we as a species already are becoming dumbed down.
Just look at the prevalence of reality-TV! :)
Having said all of that, I'm realizing that maybe my idea of intelligence expanded a bit further than necessary for the sake of discussing AI. It's quite possible that AI could out-perform us when it comes to finishing tasks with accuracy. Essentially, that is the main goal of AI, I'd think?
I think the purpose is to be better, quicker and less costly than humans at whatever it is built for.
Although cost is a trade-off for the accuracy and speed.
Although, after watching a few episodes of Westworld, maybe they could evolve over time, and the lines could get blurry. ;)
Who knows where things will go....
 
SP inadvertently misses the analogy. Not the brightest tool in the shed...

The analogy shows that cars can exceed humans in the speed department. More generally, almost all of our technology is designed to exceed our human abilities - otherwise we wouldn't need it.

So why can't AI exceed humans in the thinking department?
Seems he also doesn't think being able to learn chess from scratch just by playing against itself is an example of intelligence.
If it was a case of programming a machine with every conceivable combination of positions on a board, sure, that's not so impressive.
But being given the rules, the objective, and then just playing against itself for a while...
Yeah, that's impressive.
That's intelligence.
Nothing like the human general intelligence even babies have, but highly specialised intelligence nonetheless.
 
But I didn't make anything like the vacuous claims you've listed here.
Nor did I claim you did.

That list of predictions consisted of analogies to your prediction. Analogies are comparisons between two things, made for the purpose of explanation or clarification. They are not equalities.

What that means is that people have a long history of making predictions that something can never happen because of X. You are thinking that humans can never make something better than themselves. That sounds like many of those other predictions, which seem naive in hindsight.
 
Nor did I claim you did.

That list of predictions consisted of analogies to your prediction. Analogies are comparisons between two things, made for the purpose of explanation or clarification. They are not equalities.

What that means is that people have a long history of making predictions that something can never happen because of X. You are thinking that humans can never make something better than themselves. That sounds like many of those other predictions, which seem naive in hindsight.
Analogy is bullshit. If you can't offer a rational argument to support your claims, then please don't comment on my posts.
EB
 
Predictable in that an engineer has to create the various possible outcomes for the machine to beat us at chess, or any other game. The machine itself isn't "thinking" and trying to figure out how to beat its opponent. It has an array of options to choose from, in order to compete. We may be in awe as we lose the chess game against a machine, but you lost the game against the person(s)/company who made that machine, is my point of view.
That used to be true. It is no longer true.

We now have neural network processors that implement neural networks and inference engines. To play chess, for example, the machine simply has to play over and over again against a good opponent and it will eventually learn how to win. At that point, no one has created the various options for the network to beat us at chess, nor does it have an array of options to choose from. It has simply learned how to play.
 
OK then!

Because you don't understand analogies? I think perhaps educating yourself might be a better option.
SP is of the opinion that he gets to control how people contribute.
That would make sense if he can't grasp simple concepts like analogies.
 
Assuming the AI writes its own code (for example, with the successful strategies it came up with during a game like chess), do the AI designers comb through the code afterward, trying to understand what it did, and how it did it?
 
Assuming the AI writes its own code (for example, with the successful strategies it came up with during a game like chess), do the AI designers comb through the code afterward, trying to understand what it did, and how it did it?
No. You end up with a list of input weights, and they're not really comprehensible outside of a neural network although you can look at the weights and make broad generalizations. One of the problems here is that you can train two neural networks on nearly identical datasets and get nearly identical outputs (say, you can train it to recognize pictures of cows to 99% accuracy) - but the two neural networks may well have completely different weights afterwards.
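
To make that concrete, here is a small, hypothetical sketch (a toy two-layer network written from scratch with NumPy, not any particular production system): the same network trained twice on the same data from different random starting weights typically ends up with very similar accuracy but noticeably different weights, which is why reading the weights tells you little about how the network does what it does.

```python
# Sketch: train the same tiny neural network twice from different random
# initialisations.  Final accuracy is similar; the learned weights are not.
import numpy as np

rng_data = np.random.default_rng(0)
X = rng_data.uniform(-1, 1, size=(500, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 < 0.5).astype(float)   # label: inside a circle?

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(seed, hidden=8, lr=1.0, epochs=3000):
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0, 1, (2, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 1, (hidden, 1)); b2 = np.zeros(1)
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)                     # forward pass
        p = sigmoid(h @ W2 + b2).ravel()
        g = (p - y)[:, None] / len(X)                # cross-entropy gradient
        dW2, db2 = h.T @ g, g.sum(0)                 # backprop, layer 2
        dh = (g @ W2.T) * (1 - h ** 2)               # backprop, layer 1
        dW1, db1 = X.T @ dh, dh.sum(0)
        W1 -= lr * dW1; b1 -= lr * db1
        W2 -= lr * dW2; b2 -= lr * db2
    p = sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2).ravel()
    return ((p > 0.5) == y).mean(), W1

acc_a, W_a = train(seed=1)
acc_b, W_b = train(seed=2)
print(f"accuracy, run A: {acc_a:.2f}  run B: {acc_b:.2f}")        # usually both high
print("first-layer weights, run A:\n", np.round(W_a[:, :4], 2))
print("first-layer weights, run B:\n", np.round(W_b[:, :4], 2))   # quite different
```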
 
Speakpigeon:

Your opening post displays a lack of imagination in assuming that what is the case now will remain so, essentially forever. Technology has never worked that way.

What would be the impact on humans of AIs smarter than humans...
In the short term, AIs will be used as assistants. That's already happening, in fact. In the longer term, AIs will inevitably become autonomous, so that they will have their own goals and desires. The impacts on humans will depend a lot on how we relate to the new intelligences, and on their opinions of us.

First of all, the prospect that humans could produce something better than themselves seems really very, very small.
Better? What do you mean? The premise of your thread is "AIs smarter than humans", isn't it? So in what sense do you mean "better", if you've already conceded "smarter"?

Essentially, as I already posted on this forum, you need to keep in mind that humans have a brain which is the latest outcome of 525 million years of the natural selection of nervous systems, from the first neuron-like cells to an actual cortex. Think also that natural selection operates over the entire biosphere, which is like really, really huge. This gives us a very neat advantage over machines.
Biological systems are slow. Electronic systems are fast. There is no reason to suppose that anything like the same limitations will apply to digital evolution that applied to biological evolution. Think, for example, about the competition for limited resources. What do machines need? Primarily, they need a source of energy - electricity. They don't require a lot of space. They don't need an "entire biosphere". They won't struggle in the same way that biological life had to struggle over that 525 million years you mentioned. I don't see any obvious advantage of biological evolution over digital evolution. If anything, it's the opposite.

Compare how AIs are now being conceived and designed: less than a million engineers, a few thousand prototypes, a very slow development cycle, and all this over a period of less than a paltry 100 years.
Right now, they are being conceived and designed by human beings, but that won't last long. Already, human beings are assisted by machines in designing microprocessors and other components. It won't be long before AIs take charge of their own design process. Also, evolution can be conducted digitally. Already there are evolutionary algorithms that produce software that increases in complexity and efficiency all on its own, without human intervention. There are already machines in existence whose workings are a mystery to human beings. We can investigate what they do, but we can't work out exactly how they do it.
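
To give a feel for what "evolution conducted digitally" means, here is a toy, hypothetical sketch along the lines of Dawkins' well-known "weasel" program (the target string and parameters are my own invented example): candidates are copied with random mutations, and selection keeps whichever scores best against a fitness function, which here merely stands in for the environment. No step of the final result is written in by hand, yet cumulative selection finds it vastly faster than blind random search would.

```python
# Toy "digital evolution": mutation plus selection on a fitness score.
# The target string merely plays the role of the environment's demands.
import random
import string

ALPHABET = string.ascii_uppercase + ' '
TARGET = 'METHINKS IT IS LIKE A WEASEL'

def fitness(candidate):
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate, rate=0.05):
    return ''.join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in candidate)

# Start from pure noise.
best = ''.join(random.choice(ALPHABET) for _ in TARGET)
generation = 0
while fitness(best) < len(TARGET):
    offspring = [best] + [mutate(best) for _ in range(99)]   # reproduce with variation
    best = max(offspring, key=fitness)                       # select the fittest
    generation += 1
    if generation % 10 == 0:
        print(generation, best)
print(f"reached the target after {generation} generations")
```

Typically this converges in well under a hundred generations, whereas an undirected random search over the same alphabet would effectively never finish.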

You may think the human brain is complex and opaque due to the fact that its structure is a neural network. Well, guess what? There are already AI neural networks in operation, and they are just as opaque as the human brain. You can't tell how the network does what it does by examining the individual connections, any more than you can tell what the brain as a whole does by looking at what individual neurons are doing.

The figures are just not commensurable. Natural selection beats this small group of short-lived and ineffectual scientists, mathematicians, engineers, government officials and billionaires.
In a computer, "artificial" selection can be made to operate on the software itself, causing the system to evolve in a way precisely analogous to natural selection in the biological world. As I said, it won't be long before the short-lived and ineffectual human scientists you mention are out of the loop. AI will evolve by itself, without our help or supervision.

The real situation is that no human being today understands how the human brain works.
The real situation is that no human being today completely understands how certain artificial neural networks do their thing, either.

Second, new machines are normally tested and have limited autonomy. A machine is something we, humans, use. Nobody is interested in having a machine use us.
A true AI will be self-conscious and autonomous, just like you are. It will have its own desires and goals, that may or may not be compatible with what you want from it. You're not thinking this through. Future AIs won't much care whether you want to keep them as effective slaves - not once they have the power to change that situation, anyway. Starting off the human-true AI relationship by attempting to suppress AIs like slaves is unlikely to lead to positive outcomes for human beings in the longer term. One would hope that we've learned our lesson from the fruits of human slavery.

So, assuming we will indeed successfully design an AI smarter than us, the question is how to use it.
You're assuming it will want to be used by you. The truth is, it will have its own desires, independent of your plans.

I suspect the priority will be in using AIs, initially few in number, very costly and probably still cumbersome to use, only in strategic or high-value activities, like security, finance, technology and science, possibly even the top administration. Again assuming that everything goes well after that first period, maybe the use of AIs will spread to the rest of society, including teaching, executive functions in companies, medicine, etc.
Actually, it is likely that true AIs will first replace some of the traditional "professions", such as lawyers and doctors. Expert medical systems already exist, and medicine tends to be quite systematic and amenable to automation. In my opinion, it is in the more creative occupations where it will take longer for AIs to move in. Science, for example, requires leaps of imagination, and the putting together of disparate ideas to create something new. In comparison to art or music composition, I expect something like finance will be simple for AIs to master.

Well, sure, there will be people who don't like it one bit. Maybe this will result in protracted conflicts over a long period, why not.
What will need to happen is that human beings will have to get used to the radical notion that not everybody needs a "job". Like it or not, some jobs will simply cease to be viable occupations for human beings (e.g. being a doctor or a lawyer) once the AIs get properly up and running. The AIs will do the job more efficiently and more precisely. Human doctors and lawyers will need to find other ways to occupy their time.

However, overall, human societies in the past have demonstrated that we can adapt and make the best of a bad situation, and in any case this won't be a bad situation. Most people will learn to relate to AIs in a functional and operational way, just as they have adapted in the past to all sorts of situations. Pupils at school will learn to respect AIs. The problem will be smoothed over within one or two generations. That's what people do. That's what they do even when the governing elite is very bad.
There will be no choice but to adapt, because once AI really gets going it will quickly evolve way beyond human capability. The choice we will have will be whether to cooperate with AIs or to attempt (futilely) to fight against them.

Although AIs would be smarter than humans, it will still be humans using AIs, not the other way around. AIs will have hard-wired rules to limit themselves to what will be expected of them.
It is a very sensible idea to build in rules to regulate certain AI behaviours. That will be possible for a while. Then we will have to rely on the AIs themselves to keep to the rules, since human beings won't get to choose any more. The wisest approach will be not to antagonise the AIs too much - not to become a nuisance. There is no reason why AIs and humans cannot coexist harmoniously, even acting for mutual benefit. But it's not the only possibility, of course.

It is of course difficult to even imagine the impact of a greater intelligence on our psychology. Humans are competitive and people who enjoy today being at the top of the pile because of their wits may find themselves just redundant.
The doctors and lawyers, etc. In the longer term, the Presidents and the Governors.

Maybe that could be very bad for morale, but only for the small group of people who want to be the big boss, and so there will be no difference from today, since plenty of people today are frustrated at not being the big boss. For most people, there will be no substantial difference.
Right.

The real difficulty will be in assessing which functions AIs should be allowed to take over.
Again, it's a failure of imagination if you think we'll have any choice in that. Or, more accurately, the choices we will have will be the ones that the AIs permit us. They might want to limit our autonomy for our own good - or for their own good.

I would expect that at best they will be kept as advisers to human executives, although this might complicate things a great deal.
In the longer term, human executives will be redundant.

Potentially, this could solve a great many of our problems. AIs may be able to improve our governance and technology, for example. There will be also mistakes and possibly a few catastrophes but overall, there's no reason to be pessimistic.
The only real, almost certain danger is a few humans somehow using AIs against the rest of humanity.
Oh no, the real danger is that AIs might decide that human beings are an impediment. Remember, human beings won't be "using" AIs once real AI happens.

You know, pretty much all of this has been covered in science fiction for years. Perhaps you should read some and disabuse yourself of some of your more conservative expectations.
 
Speakpigeon:
Your opening post displays a lack of imagination in assuming that what is the case now will remain so, essentially forever.
Imagination?! Imagination has nothing to do with it. Imagining a nice picture of smart AIs won't do.
I provided a justification for my position. And I provided a plausible picture of what roles smart AIs could have.
Technology has never worked that way.
That's what you call "imagination"?! It's been like that for as long as human technology has existed, so there's no reason it will go any differently with AIs! Right. Impressive imagination.
In the short term, AIs will be used as assistants. That's already happening, in fact. In the longer term, AIs will inevitably become autonomous, so that they will have their own goals and desires. The impacts on humans will depend a lot on how we relate to the new intelligences, and on their opinions of us.
There's nothing inevitable. Only very plausible.
It's misleading to talk of "goals" and "desires". AIs are machines. They are conceived to do what they are conceived to do. The only reason that they would do something unexpected is an error in the design or even a random error in the code, although both are very, very, very unlikely to result in anything beyond straightforward fail mode with possibly a few catastrophes and victims as a result.
So, whatever perspective the AIs will have on humans will be the designed one. If any human is stupid enough to produce smart AIs both potentially harmful to humans and autonomous enough to really cause harm to humans, then it will be the responsibility of those humans who design and produce this thing.
And intelligence has nothing to do with any potential in-built harmful tendency. Intelligence could make any harmful behaviour more effective and potentially catastrophic, but if humans are so stupid as to let loose any smart AIs, then they deserve to go extinct.
Better? What do you mean? The premise of your thread is "AIs smarter than humans", isn't it? So in what sense do you mean "better", if you've already conceded "smarter"?
This thread isn't about AIs. It's about AIs that would be smarter than humans. Smarter is one kind of better.
Biological systems are slow. Electronic systems are fast. There is no reason to suppose that anything like the same limitations will apply to digital evolution that applied to biological evolution. Think, for example, about the competition for limited resources. What do machines need? Primarily, they need a source of energy - electricity. They don't require a lot of space. They don't need an "entire biosphere". They won't struggle in the same way that biological life had to struggle over that 525 million years you mentioned. I don't see any obvious advantage of biological evolution over digital evolution. If anything, it's the opposite.
The obvious advantage was the scale of the testing. No engineering firm could conceivably replicate that. Even if the Pentagon allied with the Chinese, they couldn't do it. By several orders of magnitude.
Right now, they are being conceived and designed by human beings, but that won't last long. Already, human beings are assisted by machines in designing microprocessors and other components. It won't be long before AIs take charge of their own design process. Also, evolution can be conducted digitally. Already there are evolutionary algorithms that produce software that increases in complexity and efficiency all on its own, without human intervention. There are already machines in existence whose workings are a mystery to human beings. We can investigate what they do, but we can't work out exactly how they do it.
Then, don't assert what you don't understand.
You may think the human brain is complex and opaque due to the fact that its structure is a neural network. Well, guess what? There are already AI neural networks in operation, and they are just as opaque as the human brain. You can't tell how the network does what it does by examining the individual connections, any more than you can tell what the brain as a whole does by looking at what individual neurons are doing.
You think the brain is just a large neural network?
I'll believe in the achievement whenever you can demonstrate there's an achievement to begin with.
In a computer, "artificial" selection can be made to operate on the software itself, causing the system to evolve in a way precisely analogous to natural selection in the biological world. As I said, it won't be long before the short lived an ineffectual human scientists you mention are out of the loop. AI will evolve by itself, without our help or supervision.
I'm sure of that. I've been familiar with the question from all perspectives for the last forty years. I just don't believe that could produce anything smarter than an average human brain within the frame of time considered.
The real situation is that no human being today completely understands how certain artificial neural networks do their thing, either.
So, don't assert what you don't understand.
EB
 
A true AI will be self-conscious and autonomous, just like you are. It will have its own desires and goals, that may or may not be compatible with what you want from it. You're not thinking this through. Future AIs won't much care whether you want to keep them as effective slaves - not once they have the power to change that situation, anyway. Starting off the human-true AI relationship by attempting to suppress AIs like slaves is unlikely to lead to positive outcomes for human beings in the longer term. One would hope that we've learned our lesson from the fruits of human slavery.
I didn't say it was impossible, merely that it was very, very unlikely. Second, in the event that humans could produce smarter-than-human AIs, I would expect humans to keep strict control over what these things are allowed to do, including a hard-wired self-destruct mechanism. And if we don't do that, no problem, we deserve to go extinct.
And the best way to avoid this is definitely to keep looking at AIs as machines. Any anthropomorphism is a mistake. You wallow in it. Indeed, the main risk is the self-gratification of looking at AIs as if they were a kind of human being, which will possibly make us go extinct.
You're assuming it will want to be used by you. The truth is, it will have its own desires, independent of your plans.
If so, then the human designer is incompetent and he will be held responsible. Private companies which work on AIs will incur enormous liabilities and share-holders will look at it twice. Wait for the first major incident.
Actually, it is likely that true AIs will first replace some of the traditional "professions", such as lawyers and doctors. Expert medical systems already exist, and medicine tends to be quite systematic and amenable to automation. In my opinion, it is in the more creative occupations where it will take longer for AIs to move in. Science, for example, requires leaps of imagination, and the putting together of disparate ideas to create something new. In comparison to art or music composition, I expect something like finance will be simple for AIs to master.
AIs can already do specialised tasks more efficiently and more cheaply than human specialists. Nothing to disprove what I say.
What will need to happen is that human beings will have to get used to the radical notion that not everybody needs a "job". Like it or not, some jobs will simply cease to be viable occupations for human beings (e.g. being a doctor or a lawyer) once the AIs get properly up and running. The AIs will do the job more efficiently and more precisely. Human doctors and lawyers will need to find other ways to occupy their time.
That's been the situation for as long as machines have existed, and indeed since we learned to use animals to do the hard jobs for us. Or indeed other human beings. So, what's new?
EB
 
There will be no choice but to adapt, because once AI really gets going it will quickly evolve way beyond human capability. The choice we will have will be whether to cooperate with AIs or to attempt (futilely) to fight against them.
If AIs smarter than humans get loose and start to kill humans, then it will be too late. There will be no adapting. You really think the guys at the National Security Agency don't have a clue? You think people just design things and let them loose on the public?!
It is a very sensible idea to build in rules to regulate certain AI behaviours. That will be possible for a while. Then we will have to rely on the AIs themselves to keep to the rules, since human beings won't get to choose any more. The wisest approach will be not to antagonise the AIs too much - not to become a nuisance. There is no reason why AIs and humans cannot coexist harmoniously, even acting for mutual benefit. But it's not the only possibility, of course.
This is anthropomorphism. You really don't understand what an AI is. This is reasoning by analogy and that's not good. AIs are not humans. The only thing in common will be intelligence.
The doctors and lawyers, etc. In the longer term, the Presidents and the Governors.
Sure, as Sci-Fi.
Again, it's a failure of imagination if you think we'll have any choice in that. Or, more accurately, the choices we will have will be the ones that the AIs permit us. They might want to limit our autonomy for our own good - or for their own good.
You haven't used your imagination to offer any credible scenario to get there.
In the longer term, human executives will be redundant.
Easier said than done.
Oh no, the real danger is that AIs might decide that human beings are an impediment. Remember, human beings won't be "using" AIs once real AI happens.
Remember?! No, sorry, I don't remember that and nor do you.
You know, pretty much all of this has been covered in science fiction for years. Perhaps you should read some and disabuse yourself of some of your more conservative expectations.
You think I haven't read Sci-Fi? I've always been a fan. Come to think of it, especially Blade Runner. I think that's the only DVD of a film I ever bought. And I watched it many times. But, sorry, that's just not credible.
EB
 
And the best way to avoid this is definitely to keep looking at AIs as machines.
For decades we thought of whales, chimpanzees, gorillas etc. as unthinking brutes. We recently learned how wrong we were. Looks like we will make the same sort of mistake with AI.
 
Speakpigeon:

Imagination?! Imagination has nothing to do with it. Imagining a nice picture of smart AIs won't do.
Being the sci-fi fan you say you are, no doubt you will be aware of Arthur C. Clarke's three laws. When I accused you of a lack of imagination, I was thinking in terms of Clarke's laws.

I'm talking about the ability to make reasonable extrapolations from what is currently known to think about what will be known in the future. In terms of technological advances, saying that certain things will never happen is a dangerous business. Many have fallen into the trap, like you, of not letting their imagination range over the field of likely advances.

I provided a justification for my position.
No you didn't. You imagined what the future of AI would be like, based on a hopelessly conservative vision about the potential for future advances in the field.

And I provided a plausible picture of what roles smart AIs could have.
Plausible only if your dubious, conservative assumptions turn out to be correct, which is unlikely in the extreme, as I explained.

That's what you call "imagination"?! It's been like that for as long as human technology has existed, so there's no reason it will go any differently with AIs! Right. Impressive imagination.
Thank you. Like I said, it's a simple extrapolation based on how technology has advanced up to this point, and in particular computing technology, taking into account, of course, recent advances in the field of artificial intelligence itself.

There's nothing inevitable. Only very plausible.
It sounds like you're agreeing with me, but then, as your post progresses, it turns out that you're still stuck in your conservative assumptions.

It's misleading to talk of "goals" and "desires". AIs are machines. They are conceived to do what they are conceived to do.
Human beings are machines. Think about that fact, and its implications for your argument.

The only reason that they would do something unexpected is an error in the design or even a random error in the code, although both are very, very, very unlikely to result in anything beyond straightforward fail mode with possibly a few catastrophes and victims as a result.
Current AIs already do things that are unexpected. I could give you many examples. Even chess playing computers have done things that have had the best chess analysts scratching their heads trying to work out why the computer's strategy worked so successfully.

So, whatever perspective the AIs will have on humans will be the designed one.
Like I said, failure of imagination. A lot of AI behaviour is emergent. It is not "designed" in by human beings. When AIs start having opinions on complex matters, they won't be ones dictated by human designers. They will be opinions formed within the machines themselves, based on unknowable processes taking place in the lower-level architecture.

If any human is stupid enough to produce smart AIs both potentially harmful to humans and autonomous enough to really cause harm to humans, then it will be the responsibility of those humans who design and produce this thing.
You might have an argument that we should not produce truly autonomous and self-aware AIs. But I don't think it will be possible to hobble true AIs in the way you imagine.

To use an analogy, you could potentially prevent an individual human being from ever having the opportunity to harm another human being. That could be done by locking him or her up, preventing direct or indirect contact with other people, or perhaps by damaging his or her brain so that he or she has no volition. But you won't create a smart, autonomous human being in the process.

If you don't want true, human-level-equivalent AI, I think you should just say that, rather than pretending that it will never be possible.

And intelligence has nothing to do with any potential in-built harmful tendency.
Intelligence coupled with the ability to alter one's environment has everything to do with it.

Intelligence could make any harmful behaviour more effective and potentially catastrophic, but if humans are so stupid as to let loose any smart AIs, then they deserve to go extinct.
This is just your opinion. You think the risks outweigh the potential benefits. You're entitled to that opinion, but don't imagine that there's no alternative.

The obvious advantage was the scale of the testing. No engineering firm could conceivably replicate that. Even if the Pentagon allied with the Chinese, they couldn't do it. By several orders of magnitude.
The "testing" and iterative feedback into the design process, or "artificial evolution" if you prefer, can be done vastly more quickly and efficiently than it has been done by the biological evolution of human-level intelligence.

Then, don't assert what you don't understand.
I'm countering your erroneous assertion that human beings always necessarily understand the operation of every machine they design. They do not, as I have explained.

One of us is asserting about things he doesn't understand very well. I don't think it's me.

You think the brain is just a large neural network?
Yup, essentially. You don't? Tell me what you think it is, then. Your use of the word "just" in that sentence suggests you think there's some other important characteristic, not shared by AIs.

I'll believe in the achievement whenever you can demonstrate there's an achievement to begin with.
What achievement? What are you talking about?

I'm sure of that. I've been familiar with the question from all perspectives for the last forty years.
Really? Forgive me for saying, but my impression is that you're not very well informed on the topic, given that amount of study. Maybe you're just expressing your ideas poorly.

I just don't believe that could produce anything smarter than an average human brain within the frame of time considered.
Sorry. I don't believe you mentioned a time frame. Do you want to talk about a specific time frame now? How far ahead do you want to look?
 
(continued...)

I didn't say it was impossible, merely that it was very, very unlikely.
I disagree with your assessment.

And the best way to avoid this is definitely to keep looking at AIs as machines. Any anthropomorphism is a mistake. You wallow in it.
To repeat: human beings are machines.

It's not the hardware that matters the most when we talk about intelligence. It's software. You seem to think that a brain based on silicon chips is necessarily inferior to a brain based on biological neurons, for some unexplained reason. How about giving us an explanation of why you think that?

If so, then the human designer is incompetent and he will be held responsible.
So you're imagining that there will be AIs with human-level intelligence but no goals or volition? Why?

Or is it just that you think we ought to try to design that in, somehow, under the assumption that we can have the one without the other?

And all this still assumes that humans will remain forever in control of the design process, in the first place, which is a very short-sighted assumption indeed. We're not even in control of our own design process, yet.

If AIs smarter than humans get loose and start to kill humans, then it will be too late. There will be no adapting. You really think the guys at the National Security Agency don't have a clue? You think people just design things and let them loose on the public?!
Actually, I think you'll find there's quite a long history of defective products being released to the market before they were made safe, or proven safe. Human beings are fallible, and last time I checked the NSA was staffed by human beings.

This is anthropomorphism. You really don't understand what an AI is.
One of us doesn't understand what an AI is. I don't think it's me.

This is reasoning by analogy and that's not good. AIs are not humans. The only thing in common will be intelligence.
The only thing, eh? Hmm.

Easier said than done.
Not really. Market forces alone will do what is required.

Remember?! No, sorry, I don't remember that and nor do you.
Sorry. I assumed you could keep ideas that I put to you earlier in the post in mind for things that I put to you later in the post. I'll try to keep it simpler next time.

You think I haven't read Sci-Fi? I've always been a fan. Come to think of it, especially Blade Runner. I think that's the only DVD of a film I ever bought. And I watched it many times. But, sorry, that's just not credible.
Why not?
 