
Neil Jacobstein's Analysis


Neil Jacobstein, an AI expert who has consulted on projects for the U.S. military, GM, and Ford, has proposed a solution to the problem of AI becoming too powerful, or superintelligent. Deep learning is a part of AI created to help systems accomplish tasks on their own. But if AI starts to learn on its own, there is the fear of a Terminator-style robot emerging down the road. "As AI becomes more powerful, there is the question of making sure we are monitoring its objectives as it understands them," Jacobstein said. He did, however, propose a safeguard: a control system to shut AI down. "If something does go wrong, and in most systems it occasionally does, how quickly can we get the system back on track? That's often by having multiple redundant pathways for establishing control," he said (Muoio). Another way to keep AI from going haywire is to program Isaac Asimov's laws of robotics into it. The three laws are: (1) a robot may not injure a human being or, through inaction, allow a human being to come to harm; (2) a robot must obey orders given it by human beings except where such orders would conflict with the First Law; (3) a robot must protect its own existence as long as such protection does not conflict with the First or Second Law. Many rules can be loopholed out of, but these laws do not …
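The priority ordering of Asimov's three laws can be sketched as a simple rule check. This is a toy illustration only, not a real safety mechanism; the `Action` fields and the `permitted` function are invented for this sketch.

```python
# Toy sketch: Asimov's three laws as a strict priority-ordered rule check.
# The Action type and its fields are hypothetical, made up for illustration.

from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool = False       # would this action injure a human?
    allows_harm: bool = False       # would inaction here let a human come to harm?
    disobeys_order: bool = False    # does it conflict with an order from a human?
    endangers_robot: bool = False   # does it risk the robot's own existence?

def permitted(action: Action) -> bool:
    """Apply the three laws in priority order; higher laws override lower ones."""
    # First Law: never injure a human, or allow harm through inaction.
    if action.harms_human or action.allows_harm:
        return False
    # Second Law: obey human orders, unless obeying would violate the First Law.
    if action.disobeys_order:
        return False
    # Third Law: self-preservation is subordinate to the first two laws,
    # so an action that endangers the robot is still permitted here.
    return True
```

Note how the ordering does the work: an action that endangers the robot is still permitted, because self-preservation ranks below protecting and obeying humans.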

These superintelligence advances could completely change the way the world works for the better, or be the end of humanity if even a slight mistake is made. Even the positive developments may not be entirely good, causing us humans to slack off and take the easy way, all the while taking everything for granted. Not many solutions exist, but a few can be implemented, like a shutdown button, Asimov's three laws of robotics, and principles and values, especially Christian ones, programmed into AI. AI could be either the best or the worst thing to ever happen to humanity.
