23 March 2015

AI

Evidently, a funeral ceremony for some Aibo robot pets was recently held in Japan.  Among other things, this has sparked a great deal of controversy on the subject of artificial intelligence.  The biggest and longest-standing controversy is safety.  For over half a century, AI robots overthrowing their masters has been a major sci-fi doomsday theme.  In some stories, the robots attempt to violently overthrow and enslave humans.  Sometimes they succeed, as in The Matrix; other times they fail (always due to human heroics).  The commonly held assumption, however, is that it will eventually happen to us.  The best case scenario is that we deliberately build a computer to rule us, as happens in one of Asimov's stories as well as several others.  Even in this scenario, some stories predict that our robot masters will eventually enslave us, for our own protection.  All of these scenarios are firmly rooted in theories and ideas that have not yet been realized.

The thing most people worry about when they think about AI is that intelligent computers could rebel against their creators.  This is indeed something to worry about; however, it has not yet even been established as a possibility.  If robots do rebel against their masters, it will probably be accidental, not intentional.  The reason is that we don't have even the slightest clue how to create self-aware machines.  Intentional rebellion requires self-determination, which requires self-awareness, and we don't know how to build that.  The closest anyone has gotten involves theories based on neural networks on the same order of complexity as those found in human brains.  We don't know how to create that either, largely because human brains are far too complex for modern computers to simulate.  Even with computers doubling in speed every few years (a trend that is starting to run into hard physical limits), it would take a very long time to reach that kind of computational power.  In other words, barring some massive breakthroughs in physics within the next few years, it is incredibly unlikely we will see AIs capable of deliberately rebelling within the next century.
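To give a sense of the scale, here is a rough back-of-envelope sketch in Python.  The neuron count is a commonly cited estimate; the synapse count, firing rate, and operations-per-event figures are loose assumptions of my own, and detailed biophysical models would push the cost up by several more orders of magnitude, so treat the output as an illustration of the gap rather than a measurement.

```python
# Rough order-of-magnitude estimate of the compute needed to simulate a
# human brain in real time.  Every constant here is a loose assumption.

NEURONS = 86e9              # commonly cited estimate of neurons in a human brain
SYNAPSES_PER_NEURON = 1e4   # rough average; estimates vary widely
AVG_FIRING_RATE_HZ = 10     # assumed average spike rate per neuron
OPS_PER_SYNAPTIC_EVENT = 10 # assumed arithmetic ops to process one synaptic event

synapses = NEURONS * SYNAPSES_PER_NEURON
ops_per_second = synapses * AVG_FIRING_RATE_HZ * OPS_PER_SYNAPTIC_EVENT

# A high-end desktop CPU circa 2015 managed on the order of 1e11 operations
# per second; even the biggest supercomputers were on the order of 1e16.
DESKTOP_OPS = 1e11

print(f"Synapses:                {synapses:.1e}")
print(f"Required ops per second: {ops_per_second:.1e}")
print(f"Gap vs. a 2015 desktop:  {ops_per_second / DESKTOP_OPS:.0e}x")
```

Even with these charitable assumptions, the gap between a machine a robot could actually carry and the raw event rate of a brain is several orders of magnitude, before accounting for memory, interconnect, or the fact that we don't know what to simulate in the first place.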

The real worry is not that the AIs will get too smart for us to control.  The real worry is the creators and potential accidents.  Modern AI has managed to achieve a high level of learning within certain constraints (mostly memory).  AI opponents have been created for computer games that learn from their human opponents and improve their strategy based on that learning.  This is a very small scope.  An AI so good that it can win StarCraft 2 against any human opponent is still nowhere near good enough to win a real war.  The reason modern learning AIs don't present an inherent threat is that they are extremely specialized.  An AI might get really good at a video game over the course of six months or a year.  It might take a human two or three times that long, but the human is also learning in a social context.  The human is reading or hearing news and having conversations with other humans, not to mention constantly adapting to the changes occurring around him or her.  This requires far more processing power and memory than a highly specialized video-game-playing AI.  Just controlling all of our appendages and analyzing all of our sensory inputs is far more than any modern AI can handle, let alone learning new things and solving new problems at the same time.
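To see just how narrow such a learner is, here is a minimal sketch of tabular Q-learning, the sort of algorithm behind many simple game-playing agents (real game AIs are more elaborate, but the principle is similar).  The state and action handling is hypothetical; the point is that everything the agent "learns" is a table of values tied to one specific game.

```python
import random
from collections import defaultdict

# Minimal tabular Q-learning sketch.  States and actions are whatever one
# particular game provides; nothing learned here transfers anywhere else.

ALPHA = 0.1    # learning rate: how far each update moves the estimate
GAMMA = 0.9    # discount factor: how much future reward matters
EPSILON = 0.1  # exploration rate: how often to try a random action

q_table = defaultdict(float)  # (state, action) -> estimated long-term value

def choose_action(state, actions):
    """Pick the best-known action for this state, exploring occasionally."""
    if random.random() < EPSILON:
        return random.choice(actions)
    return max(actions, key=lambda a: q_table[(state, a)])

def learn(state, action, reward, next_state, actions):
    """Nudge the stored value toward the reward actually observed."""
    best_next = max(q_table[(next_state, a)] for a in actions)
    target = reward + GAMMA * best_next
    q_table[(state, action)] += ALPHA * (target - q_table[(state, action)])
```

All of the agent's "knowledge" lives in q_table, keyed by the states of one particular game; skill at that game has no way to spill over into anything else.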

In theory, a learning AI could malfunction as a result of inappropriate learning.  For example, if a robot is programmed with a strong self-preservation instinct, a few humans acting aggressively toward it could cause it to turn aggressive toward other humans.  Depending on how its learning algorithm is designed, it might attack humans who look similar to the ones that attacked it, or it might come to regard all humans as threats.  A network of these robots drawing on the same database might all become aggressive toward humans in this case.  This is pretty easy to fix, if the system is designed well: a technician could just remove the offending records from the database, and replacing the learning algorithm with something better could keep the problem from recurring.  A poorly designed system, for example one where the robots could prevent humans from accessing the database, would be far more problematic.  For now though, this is largely conjecture, as we are still far from creating AIs capable of this level of thought.
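A minimal sketch of the shared-database scenario, with invented record formats and thresholds, shows both how a few bad experiences could poison the data for every robot on the network and why the fix can be as simple as deleting the offending records, as long as humans keep access to the database.

```python
# Hypothetical shared "threat" database that every robot on the network
# reads from and writes to.  Field names and the threshold are invented.

threat_db = []  # each record: {"appearance": ..., "aggression_events": int}

def record_aggression(appearance):
    """A robot logs that something matching this appearance acted aggressively."""
    for record in threat_db:
        if record["appearance"] == appearance:
            record["aggression_events"] += 1
            return
    threat_db.append({"appearance": appearance, "aggression_events": 1})

def is_threat(appearance, threshold=3):
    """Every robot consults the same shared records when deciding how to react."""
    return any(r["appearance"] == appearance and r["aggression_events"] >= threshold
               for r in threat_db)

def technician_purge(is_bad_record):
    """The fix: a human with database access removes the bad generalizations."""
    global threat_db
    threat_db = [r for r in threat_db if not is_bad_record(r)]
```

The dangerous failure mode described above is an overly broad match (records that effectively cover any human), and the poorly designed system is one where nothing like technician_purge can be run at all.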

The more serious threat is the programmers themselves.  Computers mostly only do what they are told.  In theory, cosmic radiation can cause random variation in computer memory and processing, but this is incredibly rare, and the odds that it will result in more than a minor malfunction are almost nothing.  So, statistically, computers only do what they are told.  This is fine when you can trust all of the programmers.  Robots can easily be programmed to do whatever the programmers want them to do.  Most modern robots are programmed to do useful work or at least provide entertainment (the Aibo).  There are some contests, however, where participants build and program robots designed to destroy each other.  These often involve motion sensors and dangerous tools like saws and drills.  There have already been cases where these robots killed their creators.  This was not because they became sentient and rebelled, though.  It was because the creators either did a poor job of programming them or accidentally triggered the aggressive behavior.  In one case, a creator simply forgot to turn the machine off before he got out of his chair.  The robot, being programmed to sense motion and attack it, attacked his leg with a circular saw.  This caused the creator to fall down, where the robot could reach his vitals with the saw.  This was not a case of a rogue robot murdering its master, though.  It was a simple mistake by the creator, where the behavior that he himself programmed into the machine resulted in his death.  It is really no different from an electrician getting electrocuted because he forgot to shut off the breaker before working on a live wire.
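The combat-robot accident boils down to a control loop that attacks whatever moves, with nothing checking whether the machine should be armed in the first place.  Here is a hypothetical sketch (sense_motion and attack stand in for real sensor and actuator code) showing that the difference between the accident and the fix is a single interlock.

```python
import time

# Hypothetical control loop for a combat robot.  sense_motion() and
# attack(target) are stand-ins for real sensor and actuator code.

def control_loop(sense_motion, attack, is_armed):
    """Attack anything that moves -- but only while explicitly armed."""
    while True:
        target = sense_motion()
        # The missing safety in the accident described above: without this
        # check (e.g. a physical arming switch or a dead-man control held by
        # the operator), the robot attacks motion anywhere, including its
        # builder getting out of his chair.
        if target is not None and is_armed():
            attack(target)
        time.sleep(0.05)  # poll the sensor roughly 20 times per second
```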

Now, if a programming accident can cause so much harm, imagine what an intentionally malicious programmer could do.  Consider a military aircraft where the programmer adds just a little bit of code to make it drop its bomb if it ever happens to fly over a specific city.  Now imagine that this program gets put on a few hundred bombers commissioned by the military.  It might take a while, but if one of those bombers ever happens to fly over that city, the damage could be immense.  The fault, however, would not lie with the AI; it would lie with the programmer who gave it that behavior.  A more likely scenario involves hackers writing malicious software and then using viruses to install it on sensitive equipment.  Programmers at large companies that make potentially dangerous machines typically work under a lot of oversight, so while it is possible, it is unlikely that we will see aggressive robots whose malicious behavior was programmed within the company that built them.  We are far more likely to see viruses that hijack robots and "turn them evil."
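To show how little code such sabotage would take, here is an entirely hypothetical sketch: the function, the coordinates, and the scenario are all invented.  The malicious part is one extra branch buried in an otherwise ordinary check, which is exactly why independent code review, two-person sign-off, and audits of what actually ships matter so much.

```python
# Entirely hypothetical weapons-release check, invented to illustrate how
# small a hidden trigger can be -- not how any real system works.

TARGET_LAT, TARGET_LON = 12.34, 56.78  # arbitrary made-up coordinates

def release_authorized(operator_command, latitude, longitude):
    # Legitimate logic: release only on an explicit operator command.
    if operator_command:
        return True
    # The sabotage: a single extra branch, easy to overlook among thousands
    # of lines, that fires whenever the aircraft passes near one location.
    if abs(latitude - TARGET_LAT) < 0.05 and abs(longitude - TARGET_LON) < 0.05:
        return True
    return False
```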

So, what does all of this mean for keeping ourselves safe?  Laws of robotics and such will probably not help.  Asimov's Three Laws of Robotics were ingenious creations, and they were very well thought out, but they only applied to the fictitious "positronic" brains his robots were equipped with.  Modern computers don't have the level of thinking and comprehension needed for three simple laws to cover everything.  We could worry about robots going rogue or about malicious programmers, but both of these are very unlikely.  The two biggest threats are accidents and malicious hackers.  Accidents can be minimized with good software development practices and with significant testing and oversight.  Hackers can only be stopped with good security.  Security is probably the bigger problem, and thus it should take the forefront in any modern discussion of how to protect ourselves from ever-advancing artificial intelligence.  Maybe eventually we will have computers powerful enough that we need to worry about our AIs going rogue, but we are nowhere close to that right now.  Preventing intentional misuse of advanced robots by hackers should be the biggest safety concern in AI today.
