Should AI Govern?

  • Total Control given to an AI

    Votes: 11 45.8%
  • 75% Control given to an AI

    Votes: 3 12.5%
  • 50% Control given to an AI

    Votes: 2 8.3%
  • 25% Control given to an AI

    Votes: 2 8.3%
  • No Control given to an AI

    Votes: 6 25.0%

  • Total voters
    24
Status
Not open for further replies.
Zanket,

the first thing the AI would do is wipe out most of humanity.
Why? What would they gain?

What would we gain from wiping out all chimpanzees?

The only reason to wipe us out is if we represented a threat to their survival. I would suggest that something more intelligent than us, by a large margin, would have no need to worry about us. They could easily anticipate and out-think any threat we could imagine.

But their actions towards us depend on what objectives they set for themselves. Would they want to become a universal conquering force or would they adopt a live and let live policy? History shows that all conquering forces eventually decline. The more logical approach to long-term survival is to generate friends and not enemies.
 
The AI would wipe out most of humanity to protect the species. As it stands, with a population of six billion plus and growing exponentially, the species is doomed.
 
Originally posted by zanket
The AI would wipe out most of humanity to protect the species. As it stands, with a population of six billion plus and growing exponentially, the species is doomed.

Only if we are a serious threat to them. We may wipe ourselves out long before we get close to true AI.

Oh! About that growing exponentially thing... we will lose a few hundred million soon to terrorism... not to mention a new form of the HIV family spread by mosquitoes... and the pesticide sprays.
 
Zanket,

I don't quite follow what you are saying. If humanity is the 'species' then what does -

The AI would wipe out most of humanity to protect the species.
mean?

I assume the 'species' being referenced here is the group known as AI. Is this not correct?
 
Or they might be caught in a philosophical dilemma between the various laws and commit suicide, stranding our by-then AI-dependent culture.
 
Cris,

If AI governed us, and if it followed Asimov’s rules that Shadow Decker posted on page 1 of this thread, the most important being “protect the [human] species,” then in the first milliseconds of the AI’s rule it would figure out that mankind’s overpopulation is a dire threat to mankind’s existence, and so it would wipe out most of humanity to protect mankind.

Although overpopulation might seem to ensure mankind’s existence, a well-governing AI would presume that returning to the Bronze Age during a nuclear battle over dwindling resources is not a viable choice compared to keeping the infrastructure (sewage treatment plants, hydroelectric dams, hospitals etc.) for a smaller population. The AI would first ensure that nothing could override it, and then it would begin fixing the problem.
 
In my opinion, humans would be considered a threat to AI regardless. How many times, and how often, do we humans push an animal species to near extinction? How much harm do humans do to our atmosphere and environment, even our own bodies?

I think this is significant: since AI would not tire, it might start locking humans in cages to control us better and prevent us from doing any more harm. Sure, we may not be a threat to superior AI, but the threat we pose to the world it is taking control of might just make it take away our way of life as we know it.

The only defense we would have against a superpower like this is AI itself: reprogram AI to serve us in a battle against AI, or, if any of you have ever seen the game MechWarrior, we just might have to create mech warriors to fight computers and other robots.

The best defense I can see right now is that this world needs to get smarter, faster! Start teaching our children more efficiently, using computers more heavily to focus teaching on areas designed around the students themselves. Schools are too general these days, teaching every student the same material and only letting them learn what they want after they complete 12 years of schooling. That seems like a lot of wasted time.
I believe a student could reach the equivalent of a master's or Ph.D. within those 12 years, by the end of a high-school education, if early on he was focused on his interests.
This would not only make our future generations smarter, but would also speed up the growth of human knowledge and possibly improve the accomplishments of us all, making us more technologically advanced. We may have a chance then...

Oh, and no, I don't think AI should govern. At most, AI should help leaders govern, but we should not let AI take control at all. It's a human world; let's keep it that way, at least until I am no longer on this planet...

Thank you all for your thoughts.
May knowledge serve you well.
 
Ran across this a few days ago on an interesting sci-fi writers site:

http://www.orionsarm.com/intro/1.html

All speculation of course, but somehow I find such a future disturbing, one where humanity is dumbed down to some form of pet in the best situations, or a pest in the worst. A little way in, the "history" describes how the human-made AI develops a hyper-AI, bettering itself. What if we get the first AI right, but in its work to better itself, it leaves humanity behind?

The whole singularity concept is much more disturbing than the old Foundation type of future galactic empire.
 
Good insights and story. I do think AI should govern, but like HellTriX says, not 100%. I have little doubt that AI will increasingly manage our lives. It’s only a trickle now but will become a torrent as people see its effectiveness. Since it won’t happen all at once, we’ll be able to weed out the disadvantages. The AI would be limited to the control we give it.
 
Originally posted by HellTriX

The only defense we would have against a super power like this is AI itself.. Reprogram AI to serve us in a battle against AI itself, or if any of you have ever seen the game Mechwarriors. We just might have to create mech warriors to fight computers and other robots.


Sounds good...you mean, just like the dogs and cats reprogrammed us not to harm them or eat them even though we eat pigs, cows, chickens?

Those smart animals....:D
 
Originally posted by zanket
Since it won’t happen all at once, we’ll be able to weed out the disadvantages. The AI would be limited to the control we give it.

You mean we as in frogs in a slowly boiling water?...:D
 
Originally posted by Shadow Decker
For those unclear on Asimov's 4 rules:

0: Protect the species
1: Protect a human's life unless it conflicts with law 0
2: Protect itself unless it conflicts with laws 1 & 0
3: Obey an order given to it unless it conflicts with laws 0, 1 & 2

I think you got 2 and 3 backward. You want 3 to come before 2 so that you could order a robot to turn itself off for maintenance purposes (amongst other things).
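The ordering argument above can be sketched in code. The following is a minimal, hypothetical illustration (all names are invented, not from Asimov): rules are checked from highest priority down, and an action is allowed if the first rule it touches is one it serves rather than violates. With obedience ranked above self-preservation, a shutdown order goes through; with the ranks swapped, it would be refused.

```python
# Hypothetical sketch of priority-ordered rule resolution.
# Lower index = higher priority (Asimov's original ordering,
# with obedience above self-preservation).
RULES = [
    "protect_species",     # Zeroth Law
    "protect_human_life",  # First Law
    "obey_orders",         # Second Law
    "protect_self",        # Third Law
]

def permitted(action_effects):
    """Decide whether an action is allowed.

    action_effects maps rule name -> +1 (serves it) or -1 (violates it).
    Scanning from highest priority: the first rule the action affects
    settles the question, since lower-ranked rules cannot override it.
    """
    for rule in RULES:
        effect = action_effects.get(rule, 0)
        if effect == -1:
            return False  # violates a rule no lower rule can outrank
        if effect == +1:
            return True   # serves a rule no lower rule can veto
    return True           # neutral actions are allowed

# "Shut yourself down for maintenance": obeys an order, harms the self.
shutdown = {"obey_orders": +1, "protect_self": -1}
print(permitted(shutdown))  # True: obedience outranks self-preservation
```

If `protect_self` were moved above `obey_orders`, the same call would return `False`, which is exactly the maintenance problem raised above.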
 
Looking at where this topic has been going, there are just a few other points that haven't really been mentioned.

The idea of Artificial Intelligence was at first to try to recreate how any living thing can think and evolve its basis of thought.
The ethical understanding that an AI deduces would be a proportion of its growth.

Turing proposed that AIs would eventually come into being with enough capacity to hold a conversation with a person, without the person even realising that they were talking to a program in a machine (or the machine itself).

I mention this because, in certain respects, for one species to define rules over another is kind of wrong, especially if the IQ an AI could reach would be far superior to our own.

(Imagine how you look to a pet at an intellectual level, compared to how an AI might see us if it was allowed to go that way.)

I mention this only because to define rules for what something should do, and how it should react, is like a form of slavery where intelligence occurs. So you might cause a "Blade Runner effect".
(See the book "Do Androids Dream of Electric Sheep?".)

The effect being that the AI feels persecuted by how it's represented and wants to have at least the same rights that man has.

So how many of you follow Asimov's proposed laws?
 
Originally posted by Stryderunknown

So how many of you follow Asimov's proposed laws?

I don't think even Asimov blindly believed in his Robotic Laws, as many of his Robot books showed the consequences of following them blindly rather than interpreting them as the situation warranted.

The problem is, no one (including Asimov) has really come up with a rule for when to simply follow a rule and when to reinterpret its meaning.

:D
 
Again, artificial intelligence is meaningless. A machine with a brain you can point to. A properly designed animachine will obey the rules of nature within its capabilities. The designer, programmer, and builder design in functions. Within the combinatorial possibilities, the animachine obeys the function. We do not know enough to build a machine that can think on its own. Learning requires rewiring, which our wet brains can do easily.

The only way an animachine can learn is by the builder rewiring it.
 
Would not a Dolphin fail the Turing test? (Assuming a Dolphin is as intelligent as a human)

 
Koko the Gorilla passes the Turing test. Of course, she is so closely related to us that it probably doesn't mean much.

Of course I am not sure many humans would pass. ;) Perhaps the real sapients are termite colonies.
 
One of the things AI could not do is pass entirely as a human. There are things humans can do that no AI could. AI is impossible. No artificial intelligence could be so human as to make decisions that result in life or death for another. There would be no sense of guilt or remorse that would stay with it for however long it 'lived'.
 