If we ever get to the point of RoboCop-type machines walking around, designed for security and defense against crime, I feel there's a big chance of error and mistake.
If the machine could be hacked, or reprogrammed in some simplistic way that alters its abilities or turns it into an evil type of machine, there would be huge problems. I feel there's a large chance the machine would fall into the wrong hands and be reprogrammed, or reverse engineered if it were destroyed.
Concerning brain emulation, I think everything could be altered with that technology if it was available; someone out there would find a way of doing it very quickly. An intelligent design based on the human brain's complexities would not be a good idea from a societal standpoint. From a military defense standpoint it could be useful, but only in limited numbers. The worst case scenario, which becomes more likely the more advanced the machine's design and abilities are, is the robot turning against humans and wiping them out. I guess a kill switch would be there to turn them off if necessary.
I personally see the government eventually creating Terminator or RoboCop types of machines which could turn on their creators. This is obviously nothing new, but I feel the chance of error or mistake is very high with something like this.
I could be wrong, and perhaps there are more technological advancements to be achieved in the future that would stop this from ever happening.