Have you seen those sci-fi movies where machines wreak havoc because something went ‘wrong’? That futuristic portrayal of machines is partially here already, though much remains to be done before machines think like humans, or at least begin to.

Yet media coverage and technical reports are full of incidents and speculation that breed suspicion and fear among the public, and this acts as a deterrent to the adoption of AI and the gadgets that run on it.

Why?

Well, gadgets and tools driven by Artificial Intelligence tend to hover at the border of human-like thinking. The designers of their neural networks write programs that let the devices change decisions according to the environment, which means that once a machine starts making decisions on its own, the programmer no longer has full control over it. Which is scary as hell, right?

Well, if such machines are making medical decisions, or operating critical functions on which human lives depend, wouldn’t you argue that they should not be created at all?

Absolutely. Some control is always needed, whether machines or humans are at the helm: if something goes wrong, an explanation and an inquiry can be instituted so that the chance of it happening again is minimized.

This is where the AI industry needs to reinvent itself so that people can welcome it with less fear and more anticipation of easier lives. The industry needs a mechanism that retains ultimate control over these machines, or at least can explain and account for their behavior if they start disobeying the commands of the programmer or designer.
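To make the idea of “retaining control and keeping an explanation” concrete, here is a minimal sketch of one common pattern: a human-in-the-loop guard that wraps an automated model, escalates low-confidence decisions to a person, and logs every decision so an inquiry is possible later. All names here (`SafeDecisionMaker`, `toy_model`, the confidence threshold) are hypothetical illustrations, not any real product’s API.

```python
# Minimal sketch of a human-override guard around an automated decision.
# Hypothetical names throughout; this illustrates the pattern, not a real API.

class SafeDecisionMaker:
    """Wraps a model so low-confidence decisions are escalated to a human,
    and every decision is recorded for later audit and explanation."""

    def __init__(self, model, confidence_threshold=0.9):
        self.model = model
        self.threshold = confidence_threshold
        self.audit_log = []  # record of every decision for later inquiry

    def decide(self, inputs):
        decision, confidence = self.model(inputs)
        escalate = confidence < self.threshold
        self.audit_log.append({
            "inputs": inputs,
            "decision": decision,
            "confidence": confidence,
            "escalated": escalate,
        })
        if escalate:
            return "DEFER_TO_HUMAN"  # the machine does not act on its own
        return decision


# Toy "model" returning (decision, confidence) for demonstration only:
def toy_model(inputs):
    return ("approve", 0.95) if inputs > 0 else ("approve", 0.4)

guard = SafeDecisionMaker(toy_model)
print(guard.decide(1))   # confident, so the machine acts: "approve"
print(guard.decide(-1))  # uncertain, so a human must step in: "DEFER_TO_HUMAN"
```

The design choice is the point: the machine never gets the final word below a confidence threshold, and the audit log preserves enough context to explain any decision after the fact.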

What do you think? The ball is in the court of the designers and the industry…