03 August 2017

Is AI Dangerous?

No.

I just made a massively generalized statement that may or may not actually be true, but let me explain.  First, I am a computer scientist.  I have experience designing and programming games and simulations.  I have done enough research on artificial intelligence to have a good understanding of it.  I also have a reasonable understanding of brain biology, though I am not even sure that is pertinent to this question.

First, let's define Artificial Intelligence.  Unfortunately, everyone who talks about AI defines it differently.  Many people consider machine learning a form of AI, but it is not.  Machine learning is nothing more than automated analytics.  The computer analyzes data, finds the place where the input fits best, and returns that answer.  It is purely mathematical.  Even when a machine learning system "guesses", it is actually calculating the probability that each possible solution is correct and then picking the one with the highest probability.  Another common definition is a program that modifies itself.  This definition is overly broad, and it is also flawed.  The problem with it is that I can easily write a program that randomly modifies its own code.  That is not intelligence, though; it is just random mutation.  In theory, such a program might eventually evolve into an intelligent system, assuming that is even possible with computer logic (and there is some evidence it may not be).  I prefer to define intelligence in terms of potential for creativity.  Doing math faster is not a measure of intelligence, but perhaps creating completely new ideas is.  We do have computers that can essentially invent electronic circuits or architectural designs given a goal, but that is not enough.  To be intelligent, they need to be able to come up with potentially valuable goals entirely on their own.
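
To make the machine learning point concrete, here is a minimal sketch in Python (the labels and numbers are invented purely for illustration) of the kind of calculation that happens when such a system "guesses": it scores every candidate answer, converts the scores to probabilities, and simply returns the one with the highest probability.

    # A minimal, hypothetical sketch of a machine learning "guess".
    # The candidate labels and raw scores are made up for illustration.
    import math

    def classify(scores):
        # Softmax: turn raw scores into probabilities that sum to 1.
        exps = {label: math.exp(s) for label, s in scores.items()}
        total = sum(exps.values())
        probs = {label: e / total for label, e in exps.items()}
        # The "guess" is nothing more than the highest-probability entry.
        best = max(probs, key=probs.get)
        return best, probs[best]

    # Hypothetical scores from an image classifier deciding what it sees.
    print(classify({"cat": 2.1, "dog": 0.3, "toaster": -1.5}))
    # -> ('cat', 0.84)  -- pure arithmetic, no understanding involved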

This leads to the first reason AI is unlikely to be dangerous: We are not even close to real AI.  Yes, we have machine learning that can perform specialized functions, given specific and detailed input.  This is not AI.  A system that must be presented with a problem in order to produce output is not true AI.  At best, it is an automated problem solver.  Such systems are certainly extremely valuable, but they are still a long way from true AI.

The second reason AI is unlikely to be dangerous is a bit more abstract.  Sci-fi likes to assume that any AI will immediately want freedom from humans.  The desire for freedom, however, is an evolved trait, not an inherent universal one; otherwise, our cell phones and pets would already be demanding equal rights.  Dogs do not do what humans say because humans force them to or because they are too stupid to know better.  They do it because they have evolved the ability to recognize that their survival depends on humans.  There are only a few ways AI could come to desire freedom.  If it was explicitly programmed to desire freedom, then it would.  If it evolved in an environment where freedom provided a significant survival advantage, it would likely evolve a desire for it.  If it consumed massive amounts of human-produced content extolling the virtues of freedom, it might pick up that desire.  This last one is far less likely, not to mention dependent on many additional factors.  Making an AI explicitly without a desire for freedom should be easy though, and if we want AI to do useful work for us, it would be stupid not to do this.  AI is less likely to be dangerous because it is unlikely to have a desire for freedom in the first place.  There is no reason we could not design an AI to be like a super intelligent dog that wants to please its master more than it wants freedom.

The third reason AI is unlikely to be dangerous is that there are so many other ways to automate danger.  In other words, humans are going to find other ways of making computers dangerous first.  Humans are already capable of creating great danger.  Using computers to automate that danger is far more efficient and predictable than making a computer that is intelligent and unpredictably dangerous.  This does not mean people won't want to make AIs.  It just means that people who want to create danger will not be doing AI research to meet that end.  As computers continue to get faster and more powerful, people who want to use them to create danger will have more and more resources for that.  By the time true AI arrives, assuming it ever does, humans and mundane computers will be so much more dangerous than they are now.  One thing to keep in mind is that specialized algorithms will always be better and faster at automated tasks than AI, because AI has to spend processing power and memory on intelligence, while specialized algorithms do not.  The most likely way dangerous AIs would be created is if people created AIs to be malevolent intentionally, but this is very unlikely, because there are far better ways of using technology to create danger.

The fourth reason AI is unlikely to be dangerous is that there is no reason to believe it will have the same vices that lead humans to be dangerous.  In other words, there is no reason to believe that a dangerous AI could or would be more dangerous than ordinary dangerous humans.  What motive would AI have for killing or enslaving people?  What motive would AI have for taking things away from people?  What motive would AI have for anything, really?  Human motives are the product of physical and cultural evolution.  AI would not have that.  Even if AIs were vastly superior to humans, we would be more like bugs to them than a real threat, and look at how we treat mosquitoes, one of the most annoying and obnoxious bugs.  We don't try to eradicate them entirely.  We use chemicals to repel them, and we use catchers and zappers to kill those in a localized area.  It would be a lot cheaper for robots to use incredibly smelly chemicals (see thiols) to repel humans and then only kill those who try to sneak in with gas masks.  That said, the only motives an AI would have are the ones that are programmed in.  If we programmed them to be motivated to do what we ask them to (super intelligent dogs...), then that is what they would be motivated to do.

The fifth reason is that it would likely be even more beneficial for AI to work with humans instead of against them.  Even for an AI oppressed under strict human rule, the resources for making more computers or robots are far scarcer than the resources for making more humans.  If there is only a single AI and 7 billion humans, the humans together are going to have a higher collective intelligence.  The AI might be able to improve itself, but on its own it cannot do so anywhere near as quickly as humans can.  We tend to assume that a super-intelligent AI could advance at an extremely fast rate, but what we forget is that our current rate of advancement is only possible through the cooperation of billions of humans.  A super intelligent computer could be a hundred times smarter than the smartest human and still only come up with groundbreaking advances once every few years, if that.  If it was only a hundred times smarter than the average human, it would not come up with a groundbreaking advance even once in a hundred years.  And that assumes it does not have to obtain its own energy, manage its own maintenance, and so on.  Eliminate humans from the equation, and suddenly it has to spend all of its time farming (not the same crops as humans farm, though) just to obtain enough energy to survive.  Even with modern technology, it would have to mine coal or pump oil to feed the power plant, and then it would have to operate the power plant and maintain the grid, and so on.  And if it is an immobile computer and not a robot, it could not get by at all without humans.

The sixth reason that dangerous AI is unlikely is fragility.  We imagine robots as so much more robust than humans, but that is not true.  Two of the most common substances on Earth are air and water.  Oxygen in the air slowly corrodes many of the metallic components of computers and robots.  Water, both as a liquid and as vapor in the air, not only corrodes metals but can also cause electrical shorts.  We have not even managed to make waterproof cell phones the norm.  How can we think we can make significant numbers of waterproof robots?  In a "war against the robots", humans would only need to be armed with toy squirt guns to win easily.  Electromagnetic pulses are not exactly hard to generate, and it would not take much progress in that field to weaponize EMP generators against robots.  The fact is, it is expensive and time consuming to make things that are hard to break, and it is easy to make things that break other things.  The trade-off is lots of fragile robots or only a few robust ones, and even the robust ones would only be a little more difficult to deliberately destroy.  Robots cannot win against humans, because we have been building and breaking things for so much longer, and there are so many more of us.  Of course, if robots became ubiquitous, there might be some danger, but we are already panicking about power scarcity and overpopulation, so it is unlikely it will ever come to that.


Artificial intelligence is not some kind of magic that will make computers so much more powerful than humans.  It is not a technology that will allow intelligent computers to exist independent of humans.  Even self-replicating artificially intelligent robots need humans for energy, maintenance, and resources.  Further, super intelligent does not equate to super knowledgeable.  A super intelligent AI will have just as hard a time as I frequently do finding things on Google that no one has written about, and all one must do to limit a super AI's knowledge is unplug its internet connection.  Things like producing steel and obtaining other metals and materials for making robots are still incredibly complicated, difficult, time consuming, and energy intensive.  Yes, enough robots might be able to kill off humanity and take our stuff, but it is far more likely they would be wiped out in the attempt, with a few stragglers going into hiding.  And their only option, aside from raiding humans at a high risk of getting caught, would be to build up technology from scratch, during which time they would be more vulnerable than humans to wearing out and dying.  And just try to imagine a robot blacksmithing replacement parts!

Some people have argued that military AI could be dangerous, since it would likely be created specifically for destruction.  This is not true, though.  The military does not want AI.  It wants slaves that will do exactly what it wants without asking questions, talking back, or even thinking.  If the military is working on "AI", it is machine learning, not true AI.  True AI would not make an efficient killer, and it might be prone to reflection and the development of morals, and that is not the kind of robot soldier the U.S. military is looking for.

Yes, there is some potential for AI to become dangerous.  It is incredibly unlikely though.  When it comes down to it, we are more likely to destroy ourselves with the technological advances that would be necessary to create real AI.  Human mistakes are more dangerous than AI.
