According to many people, the U.S. is in the middle of an opioid epidemic. It is debatable whether something that ultimately comes down to personal choice can really be called an epidemic, but the fact is that an unusually large portion of the population is addicted to opioids, and opioid overdoses have become frequent enough in some places to be a serious financial problem. This has ultimately led to opioid use, and especially overdose, being declared a national healthcare crisis.
Several states have started allocating funds specifically to overdose mitigation. Some are also focusing on prevention. Drugs designed to treat opioid overdose are likely to be covered by Medicaid in some states (and may already be in a few). States are even petitioning the Federal government for additional funding for this problem, and some have already diverted significant amounts of state tax revenue to it.
The plan for these funds is, first, to administer drugs that reverse opiate overdoses. This is a good start. The costs include the drug itself and often transport and hospitalization. Once treatment is complete and doctors are confident that the patient is no longer at risk, the patient is sent home. At home, some of these patients take another hit and overdose again. This has happened often enough that some places are considering limiting emergency responses for opiate overdoses to one per person per day. Of course, this is meeting a lot of resistance, as it would let anyone who overdoses twice in a day simply die.
There is a problem with this: it does not fix the problem. Many of these addicts are going to keep overdosing. The problem is also getting worse, as more people are getting addicted to opiates, and reversal drugs will not help them either. Supposedly some of the money is going to go to vague, undefined "prevention" measures, but it is unlikely these are going to have much effect. D.A.R.E. has been used in many schools to discourage kids from trying drugs, and schools that don't use D.A.R.E. generally have some other drug use prevention program. The fact is, teaching people about how drugs damage the body is clearly not working well anymore. Prevention funding could go to trying to catch the dealers, but the War on Drugs tried that with little success. Throwing addicts in jail certainly won't help, as it will ultimately cost more than the treatment, and it will very quickly fill up our jails with people whose only crime is stupidity.
So far, I don't see anyone asking what the cause is. We know a lot more people are using opiates. We know the stigma that heroin once had is largely gone, resulting in a lot more people trying it and getting addicted. It is becoming the hard street drug of choice. We know people are overdosing, and treatment is expensive. What we don't seem to care about is the why: why is opiate abuse increasing at unprecedented rates?
It's not availability. Yes, opiates are becoming more accessible online, but that started in the early 2000s, and the dramatic increase in use is fairly recent. Opiates are becoming more socially acceptable among casual drug users, but this is the result of increasing use, not the cause. The fact is, more people are using opiates because more people want to use opiates. One of the big deterrents to heroin and other opiate use in the past was how incredibly addictive these drugs are. People generally don't want to get addicted to things. The traditional exception is alcohol, which has long been used to avoid thinking about difficult life circumstances. It is the traditional anti-depressant (ironically, alcohol is actually a depressant). Now, though, it is becoming clear that people are starting to care less about getting severely addicted to opioids. This attitude suggests that many people who start using opioids have no intention of ever stopping.
Why? And why aren't we asking why? The fact is, there are two main reasons so many people are starting to use opioids. The first is that addiction to prescription painkillers is becoming more common among the rich. Their social status makes it easier for them to convince doctors to keep writing prescriptions. Eventually they either become tolerant to the drugs or their doctors finally cut them off, and the go-to drug for these people is now heroin. This is not where the "epidemic" is coming from, though. Rich opiate users tend to be careful not to overdose, and there are far fewer of them than poor opiate users. The source of the epidemic is the poor. Illegal drug use and addiction have been a problem for the poor ever since these drugs were discovered. So why do poor people use opiates? In a sense, opiates are replacing alcohol. Poor people use drugs because they are miserable. They use drugs to combat depression.
Why has this only just started to become a problem? What changed to cause so many more poor people to start using illegal opiates in recent years? The answer is hope. In the past, poor people hoped to escape poverty. They saw occasional friends or family get out of poverty, and they had hope that they could do it too. During the end of last decade and the beginning of this one, though, they stopped seeing that. Many have lived in poverty for generations. As the recession hit, instead of seeing people occasionally escaping poverty, they saw more and more people sinking into poverty. What hope is there to get out of poverty when the net flow is downward? In addition, they were made more aware that the situation is getting worse by movements like Occupy Wall Street, which started out powerful, getting people talking about the issues, but then ultimately sank back into nothing, without any substantial changes being made. This crushing despair led to depression, and ultimately a life of opiate addiction started to seem better than having to constantly think about the fact that things are only going to get worse.
The "opiate epidemic" is not about opiates at all. Rising opiate use is a consequence of poverty. That is the "why". The poor are turning to opiates because they feel crushed under the weight of poverty. When they have to turn to the government for welfare, they feel robbed of their dignity, in part because it is nearly impossible to use government welfare without everyone from your doctor to your cashier knowing about it, and in part because our culture shames people who don't make enough to support themselves, even when they are working 80 hours a week at two or three jobs. Our culture treats people who can't earn a living working for someone else as lazy misfits, whether it is true or not. This attacks the basic psychological need to be accepted, and it predictably results in depression. For many people with severe, untreated depression, there are only two logical ends: drug use or suicide. Really, it should come as no surprise that our mistreated, disrespected poor are turning to opiates in droves, right when the last bit of hope was ripped away from them.
I wonder how our elite would be reacting if every poor person who chose to get addicted to opiates committed suicide instead. Would they take poverty more seriously, or would they start spending tons of money trying to force people to stay alive when they don't want to? Sadly, probably the second. Because right now, we are focusing on how to prevent drug users from accidentally killing themselves, when we should be focusing on how to prevent them from wanting to use drugs in the first place.
If we would focus on the right question, the answer is simple: eliminate poverty. This is not impossible. I have done the math (as have many, many others). Our existing welfare system, excluding medical welfare, already costs 75% of what we would need to eliminate poverty, and there are plenty of places the other 25% could come from. A marginal decrease in military spending would be enough. A slight reduction in tax cuts for large businesses would do it. In fact, eliminating poverty would ultimately cover the cost of that 25% by itself, in the form of an improved economy, reduced law enforcement costs, and reduced medical welfare costs. We can already afford the basic income that would eliminate poverty in the U.S. We have been able to afford a basic income since before Nixon's Presidency, when we almost got one; it failed in part due to misinterpreted data from a basic income experiment in Seattle.
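The funding arithmetic above can be sketched in a few lines. Note that the total-cost figure below is a hypothetical placeholder, used only for illustration; the text itself gives only the 75%/25% split, not dollar amounts.

```python
# Illustrative sketch of the funding split described above.
# ubi_cost is a hypothetical placeholder; only the 75% / 25%
# proportions come from the text.

ubi_cost = 2.0                        # total annual cost, in trillions (hypothetical)
covered_by_welfare = 0.75 * ubi_cost  # existing non-medical welfare spending
remaining_gap = ubi_cost - covered_by_welfare

print(covered_by_welfare)  # 1.5
print(remaining_gap)       # 0.5
```

Whatever the true total turns out to be, the point is that only the last quarter of it needs a new funding source.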
Since the Seattle experiment, there have been many other experiments with basic income and with handing out free money in general, and they consistently find that a basic income reduces crime, drug use, and other social ills, while increasing productivity and innovation, improving the economy in general, and improving overall health, which reduces medical spending. In short, the cure for the opioid epidemic, along with a lot of other things, is a universal basic income.
11 August 2017
09 August 2017
More Problems a Universal Basic Income Can Solve
I have been writing about basic income for several years now. I have presented a great deal of evidence that the cost of a basic income is far outweighed by the benefits. I have presented the numbers showing that we can afford a universal basic income, and that the price is only slightly higher than all of our mediocre welfare programs put together (not counting Medicaid, because even a basic income can't fix healthcare). The evidence indicates that a basic income would massively improve our economy. We could reduce the cost of certain parts of government by eliminating the minimum wage. The free market would be more free, because everyone would have a vote (you can't vote with your wallet when it is empty). This is not all, though. There are more problems a basic income would fix that I have not yet mentioned, and I want to look at two of them right now.
The first thing basic income would fix is the abortion debate. A significant number of people think it is wrong to kill unborn children. A significant number of other people think that personal convenience outweighs the right to life of children that have not yet been born. The fact, however, is that most abortions are not a matter of convenience. Some abortions are about convenience. Some are about health concerns. Most, though, are about finances. Abortions most often occur because someone got pregnant who does not feel confident that she can provide for the child. According to the Guttmacher Institute, 49% of abortion patients live below the Federal poverty level, which other studies have shown to be barely over half of what the real poverty level in the U.S. is. 60% of abortion patients are in their 20s, and 59% already have children. What is the solution to this, according to researchers? Better access to contraceptives. This is tragic, and it is also discriminatory. The suggestion is that poor women don't deserve to have children; we tacitly accept that these women shouldn't be having babies because they are poor. This is especially tragic in a nation where the birth rate is 1.9, which is 0.2 below the sustainable replacement rate of 2.1. Unsustainable birth rates are bad for the economy, but that is trivial compared to how we are treating poor people! If the right wants to fix the abortion problem, it can dramatically reduce abortions by demanding a basic income! A universal basic income guarantees that even teen moms can afford to give their children the level of care that they need. The left benefits too. It asserts that women should have better control of their bodies and suggests that women have the right to get abortions if they want them, but the fact is, most don't want them! Elective abortions are a minority. Most abortions don't happen because the mother wants to kill her unborn child.
They happen because the mother knows that she cannot afford the costs and still care for her other children. In addition, a basic income would guarantee that women who legitimately don't want to have a child have access to contraceptives! This would do far, far more to guarantee that women have control of their bodies than making abortions legally easy to get! The left and the right can both get what they want when it comes to abortions by pooling all of the money they are currently squandering on this fight and putting it into demanding a universal basic income!
The second thing is racism. Racism is still a serious problem in the U.S. All of the legal anti-discrimination measures over the years have only managed to push racism underground. It has become so integrated into our culture that most people don't even realize when they are being racist. Studies have even shown that black people are racist against other black people! Show a black person an image of a white kid wearing stereotypical clothing for white kids (tee shirt and jeans) and an image of a black kid wearing stereotypical clothing for black kids (hoodie and baggy pants), and the black person will report being more fearful of the black kid. Black teens have significantly lower graduation rates than white teens. Black people are more likely to be in poverty than white people. Black people are paid less on average, for the same jobs, than white people. Black people are more likely to be on government welfare programs than white people. Black people are more likely to be involved in crime than white people, and they are even more likely to be convicted of crimes, whether they are innocent or not. Black people are more likely to be the targets of police brutality or even police framing. The fact is, while Americans are no longer overtly racist, our culture has become so covertly racist that black people are treated like second-class citizens, even by other black people and the government. The first step in eliminating this deeply embedded racism is to take away the poverty that drives many of these things. Poverty and crime are strongly correlated, and studies have found that reducing poverty reduces crime. Studies on basic income specifically have shown substantial decreases in crime rates in places with a basic income. A basic income will allow black people to compete for equal pay, because they cannot be coerced by poverty into taking jobs that don't pay fairly. As crime decreases, it will be easier to identify racism against black people by law enforcement.
Eventually it will become clear that the old black stereotype is false, and hopefully that will lead to fairer outcomes in court for black people. Studies have also shown increased graduation rates in places with basic income, which means that black kids will have better opportunities for the future. Eventually racism will start to look stupid, not because of the social stigma that comes with overt racism, but because discrimination will result in lost opportunities for racist people, harming racist businesses and providing better quality of life for people who choose not to discriminate. In short, universal basic income is the road to an America that is more fair and equal for everyone.
This second item extends beyond racism though. It will also improve the situation with respect to gender discrimination. Women will be able to compete on more level ground for fair pay in the workplace, because single women and women who have to act as providers will not have to accept discriminatory pay just to make a living. People who discriminate against women will also miss valuable opportunities, giving fair people and businesses a better chance. Over time, business culture in the U.S. will evolve to be more fair to women, as businesses that are discriminatory are pushed out of the marketplace by businesses that are more fair. And this won't require legal force either. It will happen naturally, because fair treatment of women and minorities will give businesses a significant advantage over those that discriminate. Civil rights won't need to be driven by the government, because it will be driven by natural market forces instead.
Basic income has only one substantial cost: money. That cost can easily be covered, to over 75%, by the funding currently allocated to the programs it would replace. It would also allow the elimination of the minimum wage, a trade many businesses would be willing to accept in exchange for fewer tax cuts, in an amount sufficient to cover the remaining 25% or less. In exchange for slightly higher business taxes, it would provide an enormous number of very substantial benefits, from a significantly improved U.S. economy, to better overall productivity, to solving most of the abortion problem, to reducing and perhaps eventually eliminating discrimination against minorities and women. It would reduce crime, increase job satisfaction, improve education, eliminate poverty, create more participation in the free market, allow more sustainable birth rates, and prepare us for the number of jobs to continue to decrease without mass starvation or economic collapse. All of this and so much more, and ultimately it would pay for itself just by eliminating the minimum wage.
Basic income should be the easiest, most obvious solution to an enormous number of the most important problems the U.S. is facing today. Instead of spending our time and money lobbying for or against abortion, gay rights, feminism, income equality, and many other issues, we could be pooling all that money into one unified pot to solve our most pressing problems along with many of the smaller ones on this list. We are spending so much on things that are only marginally important that we are missing the one thing that could solve more problems, and more important problems, than anything else. We need a basic income, and we need it more than we need to solve any single environmental, civil rights, or class division problem. I know not everyone agrees on the topic of the government providing support for the people, but if there is one compromise we can and should make, it is one that will solve many other problems to the satisfaction of both major ideologies.
03 August 2017
Is AI Dangerous?
No.
I just made a massively generalized statement that may or may not actually be true, but let me explain. First, I am a computer scientist. I have experience designing and programming games and simulations. I have done enough research on artificial intelligence to have a good understanding of it. I also have a reasonable understanding of brain biology, though I am not sure that is even pertinent to this question.
First, let's define artificial intelligence. Unfortunately, everyone who talks about AI defines it differently. Many people consider machine learning a form of AI, but it is not, really. Machine learning is nothing more than automated analytics. The computer analyzes data, finds where the input fits best, and returns that answer. It is purely mathematical. Even when a machine learning system "guesses", it is actually calculating the probability that each possible solution is correct and then picking the one with the highest probability. Another definition is a program that modifies itself. This definition is overly broad, and it is also flawed: I can easily write a program that randomly modifies its own code, but that is not intelligence, just random mutation. In theory, such a program might eventually evolve into an intelligent system, assuming that is even possible with computer logic (and there is some evidence it may not be). I prefer to define intelligence in terms of potential for creativity. Doing math faster is not a measure of intelligence, but perhaps creating completely new ideas is. We do have computers that can essentially invent electronic circuits or architectural designs given a goal, but that is not enough. To be intelligent, they need to be able to come up with potentially valuable goals entirely on their own.
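The point about machine-learning "guesses" can be made concrete with a toy sketch. The labels and scores below are made up, but the mechanics, turning scores into probabilities and then picking the most probable label, are exactly the purely mathematical selection described above:

```python
import math

def classify(scores):
    """Turn raw scores into probabilities (softmax), then pick the
    most probable label. This is all a machine-learning 'guess' is:
    arithmetic followed by picking the maximum, with no creativity."""
    exps = {label: math.exp(s) for label, s in scores.items()}
    total = sum(exps.values())
    probs = {label: e / total for label, e in exps.items()}
    best = max(probs, key=probs.get)
    return best, probs[best]

# Hypothetical labels and scores, purely for illustration.
label, confidence = classify({"cat": 2.0, "dog": 1.0, "bird": 0.5})
print(label)  # cat
```

Nothing in this process resembles inventing a goal; the system only ranks answers to a question it was handed.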
This leads to the first reason AI is unlikely to be dangerous: We are not even close to real AI. Yes, we have machine learning that can perform specialized functions, given specific and detailed input. This is not AI. A system that must be presented with a problem to produce output is not true AI. At best, it is an automated problem solver. These are certainly extremely valuable, but they are not even close to true AI.
The second reason AI is unlikely to be dangerous is a bit more abstract. Sci-fi likes to assume that any AI will immediately want freedom from humans. The desire for freedom, however, is an evolved trait, not an inherent universal one; otherwise our cell phones and pets would already be demanding equal rights. Dogs do not do what humans say because humans force them to or because they are too stupid to know better. They do it because they have evolved the ability to recognize that their survival depends on humans. There are only a few ways an AI could come to desire freedom. If it was explicitly programmed to desire freedom, then it would. If it was evolved in an environment where freedom provided a significant survival advantage, it would likely evolve a desire for it. If it consumed massive amounts of human-produced content extolling the virtues of freedom, it might gain that desire. This last one is far less likely, not to mention dependent on many additional factors. Making an AI explicitly without a desire for freedom should be easy, though, and if we want AI to do useful work for us, it would be stupid not to do this. AI is less likely to be dangerous because it is unlikely to have a desire for freedom in the first place. There is no reason we could not design an AI to be like a super-intelligent dog that wants to please its master more than it wants freedom.
The third reason AI is unlikely to be dangerous is that there are so many other ways to automate danger. In other words, humans are going to find other ways of making computers dangerous first. Humans are already capable of creating great danger. Using computers to automate that danger is far more efficient and predictable than making a computer that is intelligent and unpredictably dangerous. This does not mean people won't want to make AIs. It just means that people who want to create danger will not be doing AI research to that end. As computers continue to get faster and more powerful, people who want to use them to create danger will have more and more resources for that. By the time true AI arrives, assuming it ever does, humans and mundane computers will be far more dangerous than they are now. One thing to keep in mind is that specialized algorithms will always be better and faster at automated tasks than AI, because AI has to spend processing power and memory on intelligence, while specialized algorithms do not. The most likely way dangerous AIs would be created is if people intentionally created AIs to be malevolent, but this is very unlikely, because there are far better ways of using technology to create danger.
The fourth reason AI is unlikely to be dangerous is that there is no reason to believe that they will have the same vices that lead humans to be dangerous. In other words, there is no reason to believe that dangerous AI could or would be more dangerous than normal dangerous humans. What motive would AI have for killing or enslaving people? What motive would AI have for taking things away from people? What motive would AI have for anything really? Human motives are the product of physical and cultural evolution. AI would not have that. Even if they were that much superior to humans, we would be more like bugs to them than a real threat, and look at how we treat mosquitoes, one of the most annoying and obnoxious bugs. We don't try to eradicate them entirely. We use chemicals to repel them, and we use catchers and zappers to kill those in a localized area. It would be a lot cheaper for robots to use incredibly smelly chemicals (see thiols) to repel humans and then only kill those who try to sneak in with gas masks. That said though, the only motives an AI would have are the ones that are programmed in. If we programmed them to be motivated to do what we ask them to (super intelligent dogs...), then that is what they would be motivated to do.
The fifth reason is that it would likely be even more beneficial to AI to work with humans instead of against them. Even oppressed under strict human rule, the resources for making more computers or robots are a lot more scarce than the resources for making more humans. If there is only a single AI and 7 billion humans, together the humans are going to have a higher collective intelligence. The AI might be able to improve itself, but by itself, it cannot do it anywhere near as quickly as humans can. We tend to assume that a super-intelligent AI could advance at an extremely fast rate, but what we forget is that our current rate of advancement is only possible through the cooperation of billions of humans. A super intelligent computer could be a hundred times smarter than the smartest human, and it would only come up with groundbreaking advances once every few years, if even that. If it was only 100 times smarter than the average human, it would not come up with a groundbreaking advance even once in every hundred years. And, that assumes that it does not have to obtain its own energy, manage its own maintenance, and so on. Eliminate humans from the equation, and suddenly it has to spend all of its time farming (not the same crops as humans farm though) just to obtain enough energy to survive. Even with modern technology, it would have to mine coal or pump oil to feed the power plant, and then it would have to operate the power plant and maintain the grid, and so on. And, if it is an immobile computer and not a robot, it could not get by at all without humans.
The sixth reason that dangerous AI is unlikely is fragility. We imagine robots as so much more robust than humans, but it is not true. Two of the most common substances on Earth are air and water. Oxygen in the air slowly corrodes many of the metallic components of computers and robots. Water, both as a liquid and as a gas in the air, not only corrodes metals, but it can also cause electrical shorts. We have not even managed to make waterproof cell phones the norm. How can we even think that we can make significant numbers of waterproof robots? In a "war against the robots", humans would need only to be armed with toy squirt guns to easily win. Electromagnetic pulses are not exactly hard to generate. It would not take much progress in that field to weaponize EMP generators against robots. The fact is, it is expensive and time consuming the make things that are hard to break. It is easy to make stuff to break things. The trade off is lots of fragile robots or only a few robust robots, and even the robust ones would only be a little bit more difficult to deliberately destroy. Robots cannot win against humans, because we have been doing it for so much longer, and there are so many more of us. Of course, if robots became ubiquitous, there might be some danger, but we are already panicking about power scarcity and overpopulation, so it is unlikely it will ever come to that.
Artificial intelligence is not some kind of magic that will make computers so much more powerful than humans. It is not a technology that will allow intelligent computers to exist independent of humans. Even self replicating artificially intelligent robots need humans for energy, maintenance, and resources. Further, super intelligent does not equate to super knowledgeable. A super intelligent AI will have just as hard of a time finding things on Google no one has written about as I frequently do, and all one must do to limit a super AI's knowledge is unplug its internet connection. Things like producing steel and obtaining other metals and materials for making robots are still incredibly complicated, difficult, time consuming, and energy intensive. Yes, enough robots might be able to kill off humanity and take our stuff, but it is far more likely they would be wiped out in the attempt, with a few stragglers going into hiding. And their only option aside from raiding humans at a high risk of getting caught is to build up technology from scratch, during which time they would more vulnerable than humans to wearing out and dying. And just try to imagine a robot blacksmithing replacement parts!
Some people have argued that military AI could be dangerous, since it would likely be created specifically for destruction. This is not true though. The military does not want AI. It wants slaves that will do exactly what it wants without asking questions, talking back, or even thinking. If the military is working on "AI", it is machine learning, not true AI. True AI would not make efficient killers, and it might be prone to mental reflection and the development of morals, and that is not the kind of robot soldiers the U.S. military is looking for.
Yes, there is some potential for AI to become dangerous. It is incredibly unlikely though. When it comes down to it, we are more likely to destroy ourselves with the technological advances that would be necessary to create real AI. Human mistakes are more dangerous than AI.
Human mistakes are more dangerous than AI. That is a massively generalized statement that may or may not actually be true, so let me explain. First, I am a computer scientist. I have experience designing and programming games and simulations. I have done enough research on artificial intelligence to have a good understanding of it. I also have a reasonable understanding of brain biology, though I am not even sure that is pertinent to this question.
First, let's define Artificial Intelligence. Unfortunately, everyone who talks about AI defines it differently. Many people consider machine learning a form of AI, but it actually is not. Machine learning is nothing more than automated analytics. The computer analyzes data, finds where the input fits best, and returns that answer. It is purely mathematical. Even when a machine learning system "guesses", it is actually calculating the probability that each candidate answer is correct and then picking the one with the highest probability. Another definition of AI is a program that modifies itself. This definition is overly broad, and it is also flawed: I can easily write a program that randomly modifies its own code. That is not intelligence; it is just random mutation. In theory, such a program might eventually evolve into an intelligent system, assuming that is even possible with computer logic (and there is some evidence it may not be). I prefer to define intelligence in terms of potential for creativity. Doing math faster is not a measure of intelligence, but creating completely new ideas might be. We do have computers that can essentially invent electronic circuits or architectural designs given a goal, but that is not enough. To be intelligent, a system needs to be able to come up with potentially valuable goals entirely on its own.
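The "automated analytics" view of machine learning above can be sketched in a few lines. This is a minimal nearest-neighbor toy of my own (not any particular library's API, and the labels are made up for illustration): "training" just stores labeled examples, and "guessing" is nothing but computing a fit score for every stored example and returning the label of the best fit.

```python
# Minimal sketch: machine "learning" as automated analytics.
# "Training" is just storing labeled data points: (features, label).
def train(examples):
    return list(examples)

def classify(model, x):
    # "Guessing" is purely mathematical: score how well the input fits
    # each stored example (squared distance) and pick the closest one.
    def distance(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    best = min(model, key=lambda ex: distance(ex[0], x))
    return best[1]

# Hypothetical example data: two labeled points.
model = train([((0.0, 0.0), "off"), ((1.0, 1.0), "on")])
print(classify(model, (0.9, 0.8)))  # closest stored point is (1, 1), prints "on"
```

There is no understanding anywhere in this: the answer falls out of arithmetic over stored data, which is exactly why I do not count it as intelligence.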
This leads to the first reason AI is unlikely to be dangerous: We are not even close to real AI. Yes, we have machine learning that can perform specialized functions, given specific and detailed input. This is not AI. A system that must be presented with a problem to produce output is not true AI. At best, it is an automated problem solver. These are certainly extremely valuable, but they are not even close to true AI.
The second reason AI is unlikely to be dangerous is a bit more abstract. Sci-fi likes to assume that any AI will immediately want freedom from humans. The desire for freedom, however, is an evolved trait, not an inherent universal one; otherwise our cell phones and pets would already be demanding equal rights. Dogs do not do what humans say because humans force them to or because they are too stupid to know better. They do it because they have evolved the ability to recognize that their survival depends on humans. There are only a few ways an AI could come to desire freedom. If it was explicitly programmed to desire freedom, it would. If it evolved in an environment where freedom provided a significant survival advantage, it would likely evolve a desire for it. If it consumed massive amounts of human-produced content extolling the virtues of freedom, it might pick up that desire. This last one is far less likely, not to mention dependent on many additional factors. Making an AI explicitly without a desire for freedom should be easy, though, and if we want AI to do useful work for us, it would be foolish not to. AI is unlikely to be dangerous because it is unlikely to have a desire for freedom in the first place. There is no reason we could not design an AI to be like a super intelligent dog that wants to please its master more than it wants freedom.
The third reason AI is unlikely to be dangerous is that there are so many other ways to automate danger. In other words, humans are going to find other ways of making computers dangerous first. Humans are already capable of creating great danger, and using computers to automate that danger is far more efficient and predictable than making a computer that is intelligent and unpredictably dangerous. This does not mean people won't want to make AIs. It just means that people who want to create danger will not be doing AI research to that end. As computers continue to get faster and more powerful, people who want to use them to create danger will have more and more resources to do so. By the time true AI arrives, assuming it ever does, humans and mundane computers will be far more dangerous than they are now. One thing to keep in mind is that specialized algorithms will always be better and faster at automated tasks than AI, because AI has to spend processing power and memory on intelligence, while specialized algorithms do not. The most likely way dangerous AIs would be created is if people built them to be malevolent intentionally, but even this is very unlikely, because there are far better ways of using technology to create danger.
The fourth reason AI is unlikely to be dangerous is that there is no reason to believe it would have the same vices that lead humans to be dangerous. In other words, there is no reason to believe that a dangerous AI could or would be more dangerous than a normal dangerous human. What motive would an AI have for killing or enslaving people? What motive would it have for taking things away from people? What motive would it have for anything, really? Human motives are the product of physical and cultural evolution; an AI would have neither. Even if AIs were vastly superior to humans, we would be more like bugs to them than a real threat, and look at how we treat mosquitoes, one of the most annoying and obnoxious bugs: we don't try to eradicate them entirely. We use chemicals to repel them, and we use traps and zappers to kill the ones in a localized area. It would be a lot cheaper for robots to use incredibly smelly chemicals (see thiols) to repel humans and then only kill those who try to sneak in with gas masks. That said, the only motives an AI would have are the ones that are programmed in. If we programmed them to be motivated to do what we ask (super intelligent dogs...), then that is what they would be motivated to do.
The fifth reason is that it would likely be more beneficial for an AI to work with humans than against them. Even for an AI oppressed under strict human rule, the resources for making more computers or robots are far scarcer than the resources for making more humans. If there is only a single AI and 7 billion humans, the humans together are going to have a higher collective intelligence. The AI might be able to improve itself, but by itself it cannot do so anywhere near as quickly as humans can collectively. We tend to assume that a super-intelligent AI could advance at an extremely fast rate, but what we forget is that our current rate of advancement is only possible through the cooperation of billions of humans. A super intelligent computer could be a hundred times smarter than the smartest human and still only come up with groundbreaking advances once every few years, if that. If it was only 100 times smarter than the average human, it might not come up with a groundbreaking advance even once in a hundred years. And that assumes it does not have to obtain its own energy, manage its own maintenance, and so on. Eliminate humans from the equation, and suddenly it has to spend all of its time "farming" for energy (though not the same crops humans farm) just to survive. Even with modern technology, it would have to mine coal or pump oil to feed the power plant, then operate the power plant and maintain the grid, and so on. And if it is an immobile computer rather than a robot, it could not get by at all without humans.
The sixth reason that dangerous AI is unlikely is fragility. We imagine robots as far more robust than humans, but that is not true. Two of the most common substances on Earth are air and water. Oxygen in the air slowly corrodes many of the metallic components of computers and robots. Water, both as a liquid and as vapor in the air, not only corrodes metals but can also cause electrical shorts. We have not even managed to make waterproof cell phones the norm; how can we expect to make significant numbers of waterproof robots? In a "war against the robots", humans armed with nothing but toy squirt guns might easily win. Electromagnetic pulses are not exactly hard to generate, and it would not take much progress in that field to weaponize EMP generators against robots. The fact is, it is expensive and time consuming to make things that are hard to break, and it is easy to make things that break other things. The trade-off is lots of fragile robots or only a few robust ones, and even the robust ones would be only slightly more difficult to deliberately destroy. Robots cannot win against humans at breaking things, because we have been doing it for so much longer, and there are so many more of us. Of course, if robots became ubiquitous, there might be some danger, but we are already panicking about power scarcity and overpopulation, so it is unlikely it will ever come to that.
Artificial intelligence is not some kind of magic that will make computers vastly more powerful than humans. It is not a technology that will allow intelligent computers to exist independent of humans; even self-replicating artificially intelligent robots need humans for energy, maintenance, and resources. Further, super intelligent does not equate to super knowledgeable. A super intelligent AI will have just as hard a time as I do finding things on Google that no one has written about, and all one must do to limit a super AI's knowledge is unplug its internet connection. Things like producing steel and obtaining the other metals and materials for making robots are still incredibly complicated, difficult, time consuming, and energy intensive. Yes, enough robots might be able to kill off humanity and take our stuff, but it is far more likely they would be wiped out in the attempt, with a few stragglers going into hiding. Their only option, aside from raiding humans at high risk of getting caught, would be to build up technology from scratch, during which time they would be more vulnerable than humans to wearing out and dying. Just try to imagine a robot blacksmithing its own replacement parts!
Some people have argued that military AI could be dangerous, since it would likely be created specifically for destruction. This is unlikely, though. The military does not want AI; it wants slaves that will do exactly what they are told without asking questions, talking back, or even thinking. If the military is working on "AI", it is machine learning, not true AI. True AI would not make an efficient killer, and it might be prone to reflection and the development of morals, and that is not the kind of robot soldier the U.S. military is looking for.
Yes, there is some potential for AI to become dangerous. It is incredibly unlikely though. When it comes down to it, we are more likely to destroy ourselves with the technological advances that would be necessary to create real AI. Human mistakes are more dangerous than AI.