Should AI Govern?

  • Total Control given to an AI

    Votes: 11 45.8%
  • 75% Control given to an AI

    Votes: 3 12.5%
  • 50% Control given to an AI

    Votes: 2 8.3%
  • 25% Control given to an AI

    Votes: 2 8.3%
  • No Control given to an AI

    Votes: 6 25.0%

  • Total voters
    24
Status
Not open for further replies.

Shadow Decker

Registered Member
Would an AI do a better job of governing us than our human counterparts?

For a start, the system would have no greed and no religion, it would be completely impartial, and it could even learn from its mistakes.
More importantly, we wouldn't have to deal with politicians...

Bring on Utopia :D
 
AI should control all state resources, such as water and power, public transport, the voting system, and all the bureaucratic rubbish.
 
Soon, AIs will control all the back-office work...

There are a billion of them in a land called India....:D
 
Assuming they don't program the AI like Windows. Literal blue screen of death. :)
 
NOOOOOOOOOOOOO

I want to replace little Johnny with the sacred Magic 8-Ball:D
 
Human Fallibility

Whoever trains the AI could be corrupt, so do we need an AI to train the AI? And another AI to train that one? Etc., etc., ad infinitum.
 
AIs should be considered basically citizens, with all the rights and privileges thereof.
 
If AI here is considered roughly similar to human intelligence, then they probably wouldn't be much better than humans. However, if the development of new AIs is under the control of efficiently focused AIs, and they take advantage of increasingly powerful CPU chips, then after a few iterations it won't be a matter of whether we let them take control; we will simply be unable to out-think them in any meaningful way that could stop them.

It is important to remember that human intelligence is biology-based and cannot be easily or quickly enhanced. OTOH, machine intelligence will not be constrained by any such limitations.
 
Some people are geniuses, others are clods. Both are given the same rights under US (and most other countries) laws.
 
Shadow Decker:

"For a start the system would have no greed, no religion, have complete impartiality and even learn from its mistakes."

What makes you think that?
 
Because why would we want to create a governmental AI that was greedy? After all, we are the ones deciding what the AI is capable of.

Any AI that was religious would be useless, since unless it can sort fact from fiction it couldn't function in the everyday running of a bureaucracy.
 
If an AI was intelligent enough, then it would not be corrupted by someone feeding it information, but would work things out for itself.

In certain respects I don't think an AI is needed to govern over people; if it were to happen, you would find that some hacker groups would turn to attacking the AI, either to gain status or because they believe it morally unjust.

AIs could be useful for many things, from customer care at ATMs to helping a person choose an outfit or footwear.

They might even find themselves within black ops, but only when having to compute Multiworlds equations that any human would have trouble with.
 
If it gets intelligent enough, it might just not give a #$^$@ about us. Perhaps it would just pick up and leave: construct an infrastructure that could do without us, or even in spite of us, and then just ignore us.

However, I do not believe it would ever seek to harm us, though it might do some harm just in the course of its daily activities.

If you institute the three laws of robotics, it could cause a different, but no less dire, fate. Law one: do no harm to human beings, or allow them to come to harm through inaction. It might decide it needs to protect us from ourselves, keeping us controlled, protected, and isolated. Later it might decide it could avoid future harm to us by making sure there is no "us", and yet still cause no harm to any human being: prevent us from breeding.
 
With an AI of that power, wouldn't you want to invoke the 0th law as well?

For those unclear on Asimov's four laws:

0: Protect humanity as a whole
1: Protect a human's life, unless it conflicts with law 0
2: Obey orders given to it, unless they conflict with laws 0 and 1
3: Protect itself, unless that conflicts with laws 0, 1 and 2

Law 0 was added by Asimov late in his life and appears in his Foundation books.
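A strict priority ordering like this can be sketched as a tiny decision rule: among the candidate actions, prefer the one whose most serious broken law is the lowest-priority one. This is a minimal illustration only, not anything from the thread; the function name and the severity model are assumptions.

```python
def choose(action_violations):
    """Pick the action whose most serious law violation is least severe.

    action_violations maps an action name to the set of law numbers
    (0 = highest priority) that the action would break.
    """
    def worst(action):
        broken = action_violations[action]
        # The lowest-numbered broken law is the most serious violation;
        # an action that breaks nothing scores infinity (best possible).
        return min(broken) if broken else float("inf")

    # Prefer the action whose most serious violation is highest-numbered.
    return max(action_violations, key=worst)

# Example: an order to harm a human. Obeying breaks law 1 (protect humans);
# disobeying breaks only law 2 (obedience), so the robot disobeys.
print(choose({
    "obey the harmful order": {1},
    "disobey the order": {2},
}))  # prints "disobey the order"
```

Under this model the laws are lexicographic: no amount of obedience (law 2) can ever justify breaking law 1, which is exactly the kind of rigidity the posts above worry about.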
 
Humans have similar laws too, from the Ten Commandments onward... How many people obey those? So what makes you think an AI will?
 
What if we evolve and somehow stop being within the boundaries of what it considers "human"? What if whatever nation it is supposed to protect (presuming there are still nations) ceases to exist? There are lots of "what ifs".

I have no problem with AIs being part of society, but not with them being in absolute rule. If they control anything, make sure it's off the planet.
 
How do you "make sure"? By trying to control something that is more intelligent, and perhaps stronger, faster, and more pervasive than you?

Do monkeys control humans?
 