Can Robots Make Ethical Decisions?

Oh pishposh. Robots/computers do whatever they are programmed to do. Sure, they can choose to do things which have been programmed into them as good, probably mirroring their programmer. In and of themselves, however? No. This is not the world of Asimov. I, Robot is not real.

Then again, I suppose it would help if I fully accepted the relevance of ethics in the first place, considering that every person has their own ethics and no two sets are precisely the same. With that in mind, I suggest you define what an ethical decision is and exactly how you make one, and then consider how to translate that into a computer/robot.

Intelligence is an interesting concept. Can it be had without consciousness? I wonder.
 
Hammy has a good point. If a CCR programs the bot, it will be WAY different than if a liberal does. Their definitions of right and wrong would just be too opposite, no? It would be easy with the basics (like don't hit the dog), but once you got into bigger decision-making based on right/wrong, it would be more difficult, wouldn't it?
 
If the ethics are programmed into them, then are they ethics?

Good point. That's what I was trying to say but couldn't quite frame the question. But aren't our ethics programmed into us? By parents, teachers, pastors, etc.? Don't we learn most behavior?
 
Doesn't teaching mean programming?

Only with present computer design.

Not quite. There is no flow chart of ethics or morals that can anticipate every situation. I feel that is the basic flaw of religious-based morality. Personal judgement is vital.
 
Good points. I can't believe I agree with you. Again.:eek::D
 
While possible in theory, it is in all likelihood practically impossible. The amount of code and processing power required would be immense.
 
We can't even get humans to decide what is ethical and what isn't.
So either:
A) no, because we'll never (?) be able to programme them with a complete ethical database, or
B) they'll decide their own "ethics" and then we'll argue with them. :p
 
I'm thinking someone on this forum could create one. We have a lot of smart people here.
I'm thinking otherwise.

Scientists have successfully simulated a portion of a mouse brain in a computer.
More accurately, scientists have successfully created a system that comprises a bunch of simulated neurons that in sheer number are equal to half the number of real neurons in a mouse brain. To do this, the simulation ran at 1/10 real-time speed and ran for a whole ten seconds of simulated time. I would not call this simulating a mouse brain. I would call this simulating something that looks a tiny, tiny bit like a mouse brain -- if you squint just right and stretch the truth a whole lot.

Let me know when scientists create something that can survive in a simulated outdoors for a few months, avoiding simulated predators, finding simulated wild food, finding a simulated mate, and leaving little simulated mouse pups to survive them.
 
:D:bravo:
 
It's a start, but then people like to keep raising the bar.
 
Sandy, genuine AI is hard, very hard. One or two brilliant people are not going to solve the hard AI problem. Thousands of people have been working on hard AI for half a century. They are still having problems making machines make sense out of these two sentences:
  • Fred saw the plane flying over Zurich.
  • Fred saw the mountains flying over Zurich.

That machine that "simulated" a mouse brain for a whole ten seconds? It wasn't one computer. It was 4096 computers, each of which would make the computer on your desktop look like a toy. With those 4096 computers, they were only able to make a tiny dent in what a real mouse does, and then at 1/10 speed.
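To make that concrete, here is a minimal sketch (using NLTK and a toy grammar of my own invention, not anything from this thread) of why those two sentences are hard: both of them come out with two structurally valid parse trees, and nothing in the syntax says whether Fred or the plane/mountains is doing the flying. Only world knowledge breaks the tie, and that is exactly what the machine doesn't have.

```python
# A toy demonstration of the attachment ambiguity in the two sentences above.
# The grammar is deliberately tiny and purely illustrative; it is not a model
# of English. Requires the third-party nltk package (pip install nltk).
import nltk

grammar = nltk.CFG.fromstring("""
S     -> NP VP
VP    -> V NP | VP GerP
NP    -> PropN | Det N | NP GerP
GerP  -> Ger PP
PP    -> P PropN
PropN -> 'Fred' | 'Zurich'
Det   -> 'the'
N     -> 'plane' | 'mountains'
V     -> 'saw'
Ger   -> 'flying'
P     -> 'over'
""")

parser = nltk.ChartParser(grammar)

for sentence in ("Fred saw the plane flying over Zurich",
                 "Fred saw the mountains flying over Zurich"):
    trees = list(parser.parse(sentence.split()))
    # Each sentence gets two parses: "flying over Zurich" attached to the
    # verb phrase (Fred was flying) or to the noun phrase (the plane or the
    # mountains were flying). The grammar cannot tell you which is sensible.
    print(f"{sentence!r}: {len(trees)} parses")
    for tree in trees:
        print(tree)
```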
 
Yes, but computers are doubling their speed every 2 years or so. The fact that it's possible means that computer simulations of an active brain are not that far off, assuming we come to a working model of how the brain works.
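To put a rough number on "not that far off": here is a back-of-the-envelope sketch using only the figures quoted in this thread (half the neurons of a mouse brain, 1/10 real-time speed, 4096 machines) plus my own deliberately naive assumption that simulation cost scales linearly with neuron count. It is an illustration of the doubling argument, not a forecast.

```python
# Back-of-the-envelope reading of the "doubling every 2 years" argument,
# using the numbers quoted earlier in the thread. The linear-scaling
# assumption is mine and is almost certainly optimistic, since synapse
# counts grow much faster than neuron counts.
import math

neuron_shortfall = 2      # only half the neurons of a mouse brain were simulated
speed_shortfall = 10      # the run went at 1/10 of real time
machines = 4096           # nodes used for the run
years_per_doubling = 2    # the doubling rate assumed in the post above

# Extra compute to run a whole mouse brain in real time on the same cluster,
# assuming cost scales linearly with neuron count (a big "if").
cluster_gap = neuron_shortfall * speed_shortfall      # 20x

# Extra compute to do the same thing on a single desktop-class machine.
desktop_gap = cluster_gap * machines                  # 81,920x

for label, gap in (("same 4096-node cluster", cluster_gap),
                   ("single machine", desktop_gap)):
    doublings = math.log2(gap)
    print(f"{label}: ~{doublings:.1f} doublings, ~{doublings * years_per_doubling:.0f} years")
# Prints roughly 4.3 doublings (~9 years) for the cluster and
# 16.3 doublings (~33 years) for a single machine.
```

Even on those generous assumptions, raw speed doublings alone leave a real-time, desktop-scale mouse brain a few decades out; whether speed is even the main obstacle is a separate question.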
 
Let's raise the bar.
Lewis Carroll

'Twas brillig, and the slithy toves
Did gyre and gimble in the wabe;
All mimsy were the borogoves,
And the mome raths outgrabe.​

Here's how two computer programs translate this into French.
Google translator:
Lewis Carroll

'Twas brillig, et le toves slithy
Saviez-tourbillon et gimble dans le Wabe;
Tous Mimsy étaient les Borogoves,
Et le mome Raths outgrabe.​

Babelfish:
Lewis Carroll

Brillig de Twas, et les toves slithy
A fait le gyre et gimble dans le wabe ;
Tout mimsy étaient les borogoves,
Et l'outgrabe de raths de mome.​

Here is how two humans translated this into French:
J. B. Brunius

C'étaient grilleure et les tauves glissagiles
Giraient sur la loinde et guiblaient;
Le borogauves avaient l'air tout chétristes,
Et fourgarés les rathes vociflaient.​

Frank L. Warrin

Il brilgue: les tôves lubricilleux
Se gyrent en vrillant dans le guave.
Enmîmés sont les gougebosqueux
Et le mômerade horsgrave.

Let's see how those computers do in translating this back into English.
Google translator:
J. B. Brunius

They broiler and Tauves glissagiles
Giraient on loinde and guiblaient;
The borogauves looked all chétristes,
And the fourgarés rather vociflaient.​

Babelfish:
Frank L. Warrin

It brilgue: lubricilleux tôves
Gyrent themselves while boring in the guave.
Enmîmés are the gougebosqueux ones
And the mômerade horsgrave.​


So, maybe nonsense verse is a bit too difficult for computers to comprehend. How about the middle of the poem?
He took his vorpal sword in hand:
Long time the manxome foe he sought—
So rested he by the Tumtum tree,
And stood awhile in thought.

And as in uffish thought he stood,
The Jabberwock, with eyes of flame,
Came whiffling through the tulgey wood,
And burbled as it came!​

Robert Scott deftly translated this into German as
Er griff sein vorpals Schwertchen zu,
Er suchte lang das manchsan' Ding;
Dann, stehend unterm Tumtum Baum,
Er an-zu-denken-fing.

Als stand er tief in Andacht auf,
Des Jammerwochen's Augen-feuer
Durch tulgen Wald mit Wiffek kam
Ein burbelnd Ungeheuer!​

Here is what the two computer translators make of Scott's translation.
Google translator:
He resorted to his Vorpal sword Chen
He searched for the manchsan 'thing;
Then, standing beneath the Tumtum tree,
He-think--started.

When he was deep in meditation,
Weeks of misery's eyes of flame
Came through the forest with Tulge Wiffek
A burbelnd monster!​

Babelfish:
He accessed its vorpals Schwertchen,
It looked for long manchsan' the thing;
Then, standing under the Tumtum tree,
To on-think-catch it.

As if it rose deeply in devotion,
The Jammerwochen's eye fire
Through tulgen forest with Wiffek came
Burbelnd a monster!​

That's just obscene, in more ways than one.


Until computers can do better at translating poetry than accessing its vorpals Schwertchen and looking for its manchsan thing, I don't see them understanding or making ethical decisions. Until then they're just a dumb box in John Searle's Chinese Room.

What's harder to understand than poetry? "Get in the fast lane grandma, the bingo game is ready to roll!" When computers can make sense of sports announcers then I will be ready to admit that computers really do have a chance to truly understand and make ethical decisions.

"She Wants to Sell My Monkey!"
 
Your assumption is that today's computers are representative of how computers will always be.
 
No, my assumptions are (a) that tomorrow's computers will still be nothing more than Turing machines and (b) that the human mind is more complex than a Turing machine. I am far from alone in the latter belief. Penrose, Searle, and Gödel, to name but three, hold/held similar views.
 
That's quite an assumption, given all the new ways of computing that have been discovered recently. I also doubt that the present architecture would be up to the task. It would be vital to develop an understanding of how a brain works. We have no working theoretical models at this point as far as I know.
 
So, what then? What would you suggest? What's next?
 