Can a robot learn right from wrong?

"Centre of gravity shifting means that as the fuel burns, more is being used from the tanks."

No, no, no.

This was a fixed movement of the plane's centre of gravity, caused by the new engines having to be mounted further forward under the wing.

And it matters not at high speed, but at take-off and during the climb, when a nose-up attitude is required.

:)
 
I'm not sure what the question really is. Is it "can robots learn right from wrong", or is it "do you think this is a good idea"?

Maybe it's both. I think AI is a long way from worrying about ethics and morality.

If you program them to make certain decisions under certain circumstances, that is what they'll do. There will need to be test data to evaluate them against before setting them loose on the public. If they increase safety, then it's a good idea.
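Just to sketch what I mean (a toy example I made up; the rules, scenario fields, and expected outcomes are all invented, not how any real car is programmed):

```python
# A toy, made-up example of programmed decisions plus test data.
# The rules, scenario fields, and expected outcomes are all invented.

def decide(scenario):
    """Return an action for a driving scenario based on fixed, programmed rules."""
    if scenario["obstacle_ahead"] and scenario["lane_clear_left"]:
        return "swerve_left"
    if scenario["obstacle_ahead"]:
        return "brake"
    return "continue"

# Evaluate against test data before "setting it loose on the public".
test_cases = [
    ({"obstacle_ahead": True,  "lane_clear_left": True},  "swerve_left"),
    ({"obstacle_ahead": True,  "lane_clear_left": False}, "brake"),
    ({"obstacle_ahead": False, "lane_clear_left": False}, "continue"),
]

for scenario, expected in test_cases:
    assert decide(scenario) == expected, (scenario, expected)
print("All test scenarios passed.")
```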

Are seat belts a good idea? If they save some lives, reduce injuries, and don't cause many injuries of their own, they're a good idea. Using a robot in those limited circumstances is no different.

Why do I feel that I'm always bursting your bubble? :)

I’ve decided to step out of my bubble, since it’s always getting burst. lol

I agree, and that’s an interesting perspective.

But seatbelts don’t make decisions; AI is going to be required to make decisions. They’re going to be designed with ethics in mind. Is that a good or bad idea? I don’t know yet.

I hesitate to get into the whole “can AI develop consciousness” conversation as I don’t think that machines can develop anything more than they’re engineered and programmed to “know.” That’s just my opinion, anyway.
 
Maybe, but they will be designed by humans, and how do humans react? If you're in a car about to crash, you try to avoid the crash. You don't really think, "If I avoid the car in front of me, there may be a car to my side; I may hit it and someone may be killed."

Sure, if there's a large crowd of pedestrians to the right, you hit the car in front of you rather than the pedestrians. But then again, the pedestrians might be able to jump out of the way, and you might be killing a newborn baby in the car you hit. You just can't know any of that, and AI can't either.

So I expect that AI will operate much as we would within the limited areas where it operates.

On an airplane with fly-by-wire controls, a computer makes many micro-adjustments to the flying surfaces. Aircraft can be designed so that no human could keep them flying without the aid of the computers; fighter jets are like this now.

The computers are backed up by other computers, and there are rules about which one to trust if one goes "bad". The proof is in the pudding, so to speak: you test them, and once you're happy with the results you OK their use in the fighter jet.
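Roughly the idea, as a toy sketch (the numbers, tolerance, and voting scheme are all invented; real flight-control redundancy is far more involved than this):

```python
# A toy sketch of redundancy by majority vote: three flight computers each
# propose a control-surface command, and an outlier (a "bad" computer) is
# outvoted. All values and the tolerance are invented for illustration.

def vote(commands, tolerance=0.5):
    """Average the commands that agree with the median within a tolerance."""
    ordered = sorted(commands)
    median = ordered[len(ordered) // 2]
    agreeing = [c for c in commands if abs(c - median) <= tolerance]
    return sum(agreeing) / len(agreeing)

# Computer B has gone "bad" and commands a wild elevator deflection.
print(vote([2.1, 45.0, 2.3]))  # ~2.2 degrees; the faulty channel is outvoted
```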

There's really no "ethics" involved, and I doubt, in the strictest sense, that there ever will be.
 