29 October 2014

Federal Ballot Measures and Initiatives

During the November elections this year, nearly every state will present voters with issues over which they have direct control.  These may be highly controversial things like state-level legalization of certain mind-altering drugs, or less controversial things that are equally important, like Salt Lake County's ZAP tax or oil drilling issues in Alaska.  Some states will even have ballot issues put there by the voters themselves, as initiatives.  In Federal elections, however, the general public has no power in Federal decision making aside from electing officials to represent them in the Federal government.

A highly controversial topic over the last few decades has been the idea of limited government.  Conservatives often claim that the Founding Fathers designed the U.S. Federal government to have very limited power.  They say that the vast majority of the power belongs to the states.  To some degree, this is true, but not in its entirety.  When the Constitution was drafted, it specifically enumerated the powers and responsibilities of the Federal government, explicitly stating that any power not listed belonged exclusively to the states.  Of course, they also added a means of amending the document that could be used to extend that list.  The Federal government has certainly overstepped the bounds given in the Constitution; however, this does not mean that the Founding Fathers were entirely opposed to a robust and powerful Federal government.  The bounds set do provide the Federal government with a fairly large scope, even in its limitations.  Many of our Founding Fathers were supporters of a strong Federal government.  Thomas Jefferson may have been opposed to a powerful Federal government, but George Washington, our first President, was a strong proponent of a robust central government.  All of our Founding Fathers recognized the value in separating power and responsibilities, both as a means of balancing government and as a way of providing the greatest degree of freedom possible to the people.  They did not necessarily agree on what the ideal balance should be, and they ultimately left it up to the people, who could choose representatives in both state and Federal governments who would work for whatever balance the people desired.  I doubt, however, that any of them ever expected the Federal government to have as much power as it now does.

We live in a time when technology is far more advanced than our Founding Fathers could ever have imagined.  In a period when long-distance communication could take weeks, the technology itself was a limiting factor in the power of governments in general, especially governments over very large geographical areas.  There were many things back then that the Federal government just did not have the power to do, because no one had that power.  With today's technology, the Civil War would likely have been over before it ever started, because Federal troops could be sent in to take control from rogue state governments in less than 24 hours.  Similarly, state governments do not have anywhere near the resources of the Federal government.  The Federal government's capacity for power is extremely unbalanced compared to what our Founding Fathers envisioned.  Now, I am not saying this is bad, but I do think it warrants rethinking the relationship between citizens, states, and the Federal government.

I don't want to suggest anything especially revolutionary here, but I do think that the change in relationship between civilians and the Federal government deserves some attention.  In large part, the Federal government has usurped powers officially reserved for the states; however, it has largely done so with the blessing of the people it is designed to serve.  As such, the specifics of authority are merely technicalities.  The problem we have now is that the Federal government is acting in many areas as a state-level government.  At least in these areas, its relationship with the people needs to be altered to make it more like that of a state government.

State governments have multiple ways of allowing the people to be heard.  Citizens of the state elect representatives for their state government.  They can also vote on specific issues on state election ballots.  Many states even allow citizens to propose and petition for specific initiatives to be added to the ballots.  Even states that do not allow initiatives often have some kind of petition system where citizens can attempt to force government officials to take notice.  The Federal government has far more limited means of contact with citizens.

The only official means of communication between the Federal government and citizens is the election of representatives.  Citizens can vote for Congressmen, Senators, and the President.  At one time, citizens could also vote for the Vice President, but that was eventually changed, and the two are now grouped together on one ticket.  (I believe Electoral College members can still vote for President and Vice President separately.  I heard that sometime in the '80s or '90s, an Electoral College member deliberately voted for her party's President and Vice President in reversed roles, out of spite.)  Citizens can unofficially petition the Federal government for attention, and they can also lobby; however, lobbying is almost exclusively done by large organizations, because few individuals can afford the "totally ethical costs" associated with getting the attention of elected Federal officials.  Citizens can also stage protests, but none of these things have any official impact on the Federal government.  The Federal government is becoming more and more directly involved in the lives of its citizens, but there is not an equal increase in the influence of the citizens on the Federal government.

This is a problem.  What it means is that our Federal government, which is totally out of touch with American reality, is trying to directly govern the people.  Aside from the fact that it was never intended to have as much direct influence as it currently does, it cannot do a good job of governing without closer citizen involvement.  A government that directly governs its citizens needs to be influenced directly by those it governs; otherwise it will slowly drift away from the will of the people.  Plenty of political polls already show that this is happening.  Polls frequently show that the people believe or desire things that the Federal government either directly opposes or at least assumes that a majority of the people oppose.  (For instance, a significant percentage of Americans are religious and pray regularly; however, the Federal government strictly regulates prayer in many public places.  A majority also support increased religious freedom; however, the Federal government treats religion as distasteful and as something to protect people against.)  Unfortunately, the size and population of the U.S. make it nearly impossible for direct communication between the citizens and the Federal government.  Originally, the election of representatives was supposed to manage this; however, as the population increases, this becomes ever more difficult.  There are at least a few things we could do though.

The first change in the relationship between civilians and the Federal government should be adding controversial or especially important topics as ballot measures in Federal elections.  It would probably also be a good idea to add costly but non-essential spending as ballot measures.  Salt Lake County in Utah has a ballot measure every decade to renew a "ZAP" tax.  This is a 0.1% sales tax that goes to support zoos, the arts, and parks.  The idea is that this tax funds higher culture in the county.  It raises substantial amounts of money that fund everything from broadly appealing cultural institutions to community-wide religious events geared toward education about other cultures.  While it has its opponents, the people have consistently shown support for it.  The Federal government has its own budget for the arts, which is voted upon in Congress but never discussed with the people.  I have never seen a campaign platform that includes Federal support for the arts (maybe I just missed them, but then they cannot be that common if I did), yet our representatives in the Federal government seem to consistently support the arts without ever having asked me or anyone else they represent.  This is my tax money being spent, and I never even got to give my input.  Maybe the people do want the Federal government supporting high culture in the U.S., but it has never bothered to ask.  Highly controversial issues, like same-sex marriage, should also not be touched by the Federal government without direct involvement of the people (frankly, I think this should be entirely a state issue, without any Federal involvement).  Doing the will of the people should be a strong enough Federal concern that elected officials should not even feel comfortable passing high-impact or highly controversial legislation without the direct influence of the people.

The second change that is needed is some kind of mechanism like initiatives.  If the Federal government is going to have a direct influence in the lives of its citizens, its citizens deserve to have the power to directly influence the Federal government.  Obviously, Federal initiatives should be treated differently from how they are treated in states, to avoid abuse and other situations where wide-reaching effects are not appropriate, but they should exist.

Note that I personally would prefer the Federal government to repeal any laws it has enacted that the Constitution does not explicitly allow it to enact.  I think we would be better off with less Federal interference.  I also think that states should step up and start doing their jobs, so the Federal government does not feel like it has to do them.  Evidently, however, most Americans do not agree with me on this.  The distinct lack of outrage every time the Federal government oversteps its bounds tells me that the people are fine with this.  Since our government is designed to be run democratically, if the majority is not opposed to the Federal government overstepping its bounds, then it does not matter.  Through their inaction, the will of the people is done.  I may not agree with it, but it is democratic, and I will deal with it.  Given that, I suggest the above as a way of maintaining Federal accountability to the people, while still maintaining what appears to be the will of the people.  It is better than letting our government slowly become totally disconnected and out of touch with the people it is supposed to serve.

28 October 2014

How about "all safe"?

Organic, all natural, non-GMO...  These terms are all associated with safer, more wholesome products.  Enough of the general public wants foods and cosmetics in these categories that they have gained a fairly large market share.  The one thing they all have in common is that they are no safer or more wholesome than industrial, synthetic, and GMO products.

Organic foods have repeatedly been compared to industrial foods, with no evidence that they are any safer or more healthful.  The biggest benefit of organic farming is that it is not as susceptible to fertilizer runoff as industrial farming is.  Poorly done, however, organic farming can be as bad for the environment as industrial farming.  Likewise, done well, industrial farming can be as good for the environment as organic farming.  As far as the products themselves go, though, there is no significant difference.

"All natural" is an extremely overused term in retail.  It turns out that a vast majority of "all natural" products contain as least one synthetic ingredient.  The term is not currently even regulated.  I could claim that petroleum jelly is all natural, justified by the fact that the crude oil from which it is made got there through natural means.  Even if it was well regulated, however, it would say little about the safety or healthfulness of the products.  Black Widow spider venom is very literally all natural, however it is often deadly.  Unroasted cashews contain a deadly, all natural poison.  Caffeine is naturally found in coffee and tea, and yet it is still harmful when used excessively.  Fugu, otherwise known as blowfish, contains a deadly neurotoxin that will cause the victim to suffocate to death while totally coherent.  Again, it is all natural.  Even glucose, the primary (and "all natural") form of energy used by humans, can be extremely harmful in large amounts or in moderate amount for a long period of time.  "All natural" may have some meaning, but that meaning makes no reference to safety or health.

GMO is perhaps the least well understood item on this list.  Many people view GMO products as the spawn of Satan.  GMO foods are seen as totally unpredictable, and opponents believe that they can cause anything from cancer to actual genetic modification in humans.  None of these claims are true, and in fact, it is highly probable that GMO foods are safer than non-GMO alternatives.  Direct genetic modification is highly targeted, while traditional selective breeding is subject to all manner of random genetic changes.  In addition, GMO foods are regulated far better than non-GMO foods.  Where non-GMO foods may have undetected traces of harmful substances, GMO foods have been extensively tested to ensure that they contain nothing unsafe.  The only serious worry about GMO foods is deliberate malicious modification; however, keep in mind that we are talking about for-profit companies who profit more from healthy customers (who maybe eat a bit more than is healthy), not some evil organization bent on the destruction of the human race.  Besides, even deliberately evil modifications would still have to go through the same rigorous FDA-mandated safety testing.

Instead of advocating for all of these questionable labels that involve more work and expense for the same quality of product, maybe we should be advocating for "all safe" products.  It does not matter so much if the product is organic, all natural, or non-GMO.  Nearly all of the concerns behind these labels come down to safety.  So instead of attacking these things, where all evidence indicates that they have little role in whether a product is safe or not, we should be attacking any product that has not been proven safe.  What would happen if we held non-GMO foods to the same strict standards we hold GMO foods to (besides making them more expensive)?  I can tell you what: We would quit getting contaminated batches of tomatoes in the U.S.  We would not have to worry so much about whether our beef is infected with mad cow disease.  While the testing might be more expensive, the total cost would eventually decrease, because we would stop doing things of questionable value that ultimately increase costs.  The best part, though, is that our food, and many other products, would actually be safer, instead of merely seeming safer, like our organic, all natural, and non-GMO products.


(Just for the record, I am not opposed to organic farming.  Some forms of organic farming can produce higher crop yields in less space, in less time, with less work than industrial farming.  I advocate research on combining best methods of both to produce a truly and clearly superior farming method.)

24 October 2014

What are you?

"What are you?"  This is the question I asked my 5 year old daughter the other day.  As I expected, she answered, "A girl."  Even as I expected this answer, it bothered me.  My daughter identifies most strongly as her gender.  She does not immediately identify as human.  Gender exists among nearly all animal species on Earth (with a few exceptions).  My daughter identified herself as part of a group that comprises almost half of all animals.  She could have identified as human, which is much more specific.  She could have identified by her surname, which is even more narrow.  It bothered me (mildly) that she identifies herself first as female.

I suspect this is a result of language.  Boys are "he," and girls are "she" in English.  There is no gender-neutral singular pronoun for people.  The consequence of this is that children learn very young that gender identity is essential to communication.  Now, I am not saying this is bad, though it is especially problematic for writers, who feel forced to either use very bad grammar (using "they" or "their" as a singular pronoun) or use ugly constructs like "he/she" or "his or her."  I am glad that my daughter identifies as female, because she is female.  I do not, however, feel comfortable that she identifies as female before anything else.

My daughter is 5 years old.  This is a difficult age, because she does not know what she wants to do with her life.  I am in my 30s, and my response to the question I posed to her might be "a computer scientist," "a video game developer," or any number of other things that I qualify as (maybe "a father" or "a husband" would be the most appropriate answers, though I might even say "a child of God" depending on my mood).  At 5 years old, it is pretty unusual to have enough experience in anything to identify yourself as a practitioner of that thing.  I suppose she could have said, "a kid," but that is even worse than "a girl," because it is transient, while gender is permanent (yes, we have procedures to change that, but none of them can change genetic gender).

As I said before, it only bothers me mildly that she identifies first as female (when I asked her again, she said "I don't know," which I am more comfortable with).  I think that identifying as a specific gender is healthy and important.  It is also good that she recognizes her gender, because gender is important in our culture (admittedly, even when it sometimes should not be).  Identifying with her gender is important to success; however, I think it would be better for her to identify with her gender only after something else.  Maybe she could identify as a hard worker first.  That is definitely useful.  She might identify as being very intelligent and a good problem solver.  I would especially support this one.

While I think that gender is an essential trait, I do not think that is enough to justify it as a primary identifier.  Gender is something we have no control over.  I think it would be better to identify as something over which we do have control.  I don't go around telling people this, but I am highly intelligent.  I do not believe that this is an accident.  Maybe I did get some genetic luck that is helping, but I choose to study things and do things that will make me smarter and better educated.  I know people who are genetically gifted but are still ultimately pretty dumb, because they do not choose to be smart (I know a guy with an IQ around 140 who is a deadbeat druggie with very little education, by choice).  On the other hand, while I do identify as male (secondary to many other things), I do not identify as "a man."  This is because, whenever I hear the phrase "be a man," it is typically said by someone who swears, drinks, and is rude and disrespectful to other people.  If that is a man, I am certainly not one.  I am genetically male though, so I do identify as male.  (Note that I do not "hang out with the guys," and nearly my entire life, I have had more female friends than male friends.  I tend to view men as jerks until they prove otherwise.)

What am I going to do about this?  Perhaps nothing.  In my experience, leading by example is far more effective, where it is possible.  I hope my daughter sees me identifying as many things beyond my gender.  I will probably tell her, at some point, that people who do science are scientists, and people who do electronics are electrical engineers, and so on.  I won't do this to convince her to identify as something other than her gender though.  I'll do it in the natural progression of teaching her, and I would have done it even if I had not had this experience.  Ultimately, I think that she will learn that she can choose what she wants to be and then identify as whatever that is.  What does bother me is that many parents encourage their children to identify as their gender.  I suspect much of the sexism and gender discrimination in our society (both ways) stems from people who identify as their gender before anything else.

20 October 2014

Discrimination Against the Poor - Part 1

I want to share a little bit of back story before I start the actual article.  We just had our 5th child.  It was a natural birth at a local hospital.  Our first birth was a water birth in a birthing center, and we went home three hours after the birth.  Our other three were all natural in-hospital births, where the hospital required us to stay for 24 hours of observation after the birth, and my wife was sick of this.  We made a birth plan specifying that we wanted to leave 12 hours after the birth, and while we forgot it at home, we made our plans very clear to the hospital staff.  Our nurse, our midwife, and the pediatrician all accepted our decision, though some of them did not agree with it.  The nurse, however, informed the billing person for the hospital, who came in and told my wife that if we left against medical advice (the hospital's 24-hour policy required the pediatrician to write the discharge for the baby as "against medical advice" if we left before 24 hours), Medicaid would not pay for the services provided for the baby, and we would have to pay out of pocket.  The midwife had told us something different, so I went home and did some research, while my wife tried to contact Medicaid by phone.  Eventually, she reached Medicaid, who told her that they had no such policy and could only find a reference stating that if we brought the baby back before the 24 hours were up, we might be charged for services for the new visit that did not qualify as medically necessary.  Before that, however, my research at home revealed a rat's nest around the billing lady's claim.  First, her claim was completely and absolutely false.  Second, most hospitals tell their patients this lie (though hospital staff rarely know the truth to begin with), and not just those insured through Medicaid.  I found three research papers from three different studies about this problem.  None of them found any insurers in the US with such a policy.  Anyhow, we left about 13 hours after the birth, and we informed the nurse that the claims were false and asked her to forward that on to the billing lady (the baby had no issues within those last 11 hours, though the hospital staff had already determined she was perfectly healthy and the probability of problems was extremely low).


It should be obvious that poor people in the US face regular discrimination.  It is awfully hard to get hired for a job, even a really poor job, without nice clothing to wear to the interview (in fact, at least one US charity loans suits to poor job applicants to wear to interviews).  Many Medicaid, Food Stamp, and WIC office employees treat clients as inferiors.  Often, schools in poorer areas of towns and cities get sub-par teachers, while the other schools get the more skilled ones.  Middle and upper class people often look down on poor people and treat them as inferiors, and sometimes poor people even treat each other more poorly than those with more wealth do.  Many Americans assume poor people are lazy.  This problem is so prevalent that comments from the few people who really are freeloading on government welfare often reflect badly on anyone who is receiving government welfare for any reason.  There is one place where this discrimination against the poor is especially repugnant, not to mention of questionable legality.

Most hospital employees in the US will tell patients that if they leave before their treatment is complete (known as leaving "against medical advice" or AMA), their insurance will not pay for it.  Because most patients leaving AMA are Medicaid patients, and because wealthier patients can better afford the costs, this affects poor people far more than anyone else.  At least three studies have been done on this subject in the last three years, and none of them have found any insurance provider in the US with such a policy (some insurers actually laughed at the researchers for even asking).  Medicaid also has no such policy.  Now, in most cases, the hospital employees are not deliberately lying (though hospitals do stand to benefit from patients staying "for observation" longer than is strictly necessary).  This is a common misconception among hospital employees, and it is presumably perpetuated as interns are taught this lie by regular employees.  This problem is not just bad for patients; it is also bad for insurers and potentially very bad for hospitals.

Because this problem affects primarily the poor, it is a clear case of discrimination against the poor.  As such, it is rather appalling.  It is also dangerous and perhaps even illegal.  This may be one reason that medical costs in the US are so high.  The biggest reasons people leave a hospital AMA are poor treatment or being kept waiting after treatment is complete.  Often, Medicaid patients have long waits to see a doctor when their conditions are not critical.  Eventually, they get fed up with waiting, and they sometimes leave against medical advice.  The second, and more nefarious, problem is when a patient has completed treatment, but the hospital either wants to observe the patient for an extra day or more, or the patient has to wait a long time for the doctor to do a final review and sign discharge papers.  In both cases, the hospital may charge more money to Medicaid, another insurer, or the patient, for the longer stay.  In the second case, however, it is possible that the long waits are actually deliberate abuse of the system, designed to allow the hospital to charge more for the visit by keeping the patient there longer.  Either way, forcing patients to wait so long that they consider leaving without getting full treatment is dangerous to the health of the patient.  Telling the patient that insurance will not pay if they leave early, however, may be more dangerous to the hospital than the patient.

Most hospitals require patients to sign a release before leaving AMA, to reduce liability for any problems that might have been prevented had the patient received full treatment.  Patients leaving AMA is considered a big problem in the US right now, especially among Medicaid patients (male patients are also more likely to do this).  Concerned hospital workers may be tempted to lie to patients to convince them to stay and complete treatment.  This carries two very dangerous consequences.  If found out, these lies will cause patients to distrust doctors, and this is already a big enough problem in the US; we really do not need to add to it.  Lying to patients may cause them to look for alternatives to normal medical treatment that might be dangerous or at least allow serious conditions to go untreated.  This is not in the best interest of the patients, and as such, it qualifies as a violation of the oath taken by nearly all medical practitioners in the US to avoid harming patients.  The second consequence is worse, at least for the hospital.  Medical patients have legally protected rights in the US, and one of those rights is to refuse treatment.  Any medical patient in the US may choose to leave a hospital at any time, without legal penalty, and if the hospital attempts to hold them against their will, the hospital is breaking the law.  This is a very serious offense.  Lying to a patient to manipulate them into forgoing this right, when they would otherwise have chosen to exercise it, is a violation of this right.  Telling a patient that there will be severe financial penalties (for people on Medicaid, nearly any hospital bill is severe) is essentially forcing the patient to make a choice under duress.  Decisions made under duress are not legally binding.  If the patient has informed a hospital employee of an intent to leave AMA, and the hospital uses this lie to convince the patient to stay, the patient's original decision is still in force (because the overriding decision was made under duress), and by keeping the patient, the hospital is both holding the patient against his or her will (which is illegal by itself) and violating the rights of the patient.

There are several better ways to treat this kind of situation.  First, financial employees in hospitals should determine policy for specific insurance providers before any employee is allowed to suggest to a patient that penalties might exist.  Since most insurance companies have no such penalties, there is no point discussing them without asking the companies first.  Second, instead of trying to scare patients to stay by lying to them, it should be far more effective to inform them of the actual medical consequences of leaving AMA.  Even Medicaid patients are not stupid.  If they still want to leave, fully informed of the potential consequences, then it is their legal right to do so.  At that point, they have chosen to own the consequences, and nobody has any right to force them to stay.

This problem is dangerous to both patients and hospitals.  Employees need to be educated properly so that they do not inadvertently do or say things that could get the hospital in trouble.  Violations of patient rights can incur heavy fines, and multiple instances can get hospitals shut down.  Given how prevalent this problem is in the US, there have probably been enough of these patient-rights violations at most US hospitals to get them shut down.  Further, this kind of discrimination against the poor needs to stop.  Most poor people may not have the research skills to ever discover the lie they have been fed, but this does not absolve hospital employees of their responsibility to treat patients well and honestly.  If nothing else, more care should be taken to treat the poor fairly and legally, because they are at a disadvantage.


Following are the studies on this problem:

The University of Chicago Medicine
http://www.uchospitals.edu/news/2012/20120203-billing.html

PubMed.gov, Journal of General Internal Medicine
http://www.ncbi.nlm.nih.gov/pubmed/22331399

Annals of Emergency Medicine, An International Journal
http://www.annemergmed.com/article/S0196-0644%2809%2901798-3/fulltext

16 October 2014

Gamers: The New Social Group?

Gamers are notoriously anti-social.  It turns out, however, that notoriety for something does not make it true.  The anti-social basement gamer of the '90s hardly even exists anymore, with the advent of MMOs.  Serious gamers who play modern games must be social to be successful in their games.  Multiplayer first-person shooters typically pit teams of players against each other, requiring teamwork to win.  MMOs, perhaps the most popular genre among modern gamers, frequently present players with challenges that require them to work together.  While modern gamers may often have poor face-to-face social skills, their teamwork skills are awesome.

Only children are not notoriously anti-social, but it turns out that a lack of face-to-face interaction is causing behavior that is normally considered anti-social.  Only children do tend to be more self-absorbed than children with siblings of similar age.  Their interactions are primarily with adults, and some psychologists have claimed that this may make them more mature.  They also tend to be happier living alone, and studies have also found that only children are more prone to strong anxiety in face-to-face interactions with others.  Only children are becoming far more common in the US, and while some benefits have been correlated with being an only child, some experts are suggesting that the next generation may be composed primarily of people who live alone and conduct nearly all of their social interactions online.

Here is the kicker: Gamers are already well suited to this kind of social interaction, but most Americans are not.  Gaming helps build online social skills, and it can also help improve face-to-face social skills, as LAN parties and face to face gaming groups are becoming more popular among gamers.  Non-gamers do not have this kind of social web that includes a common activity to connect them.  Unless a majority of the next generation are gamers (it could happen), gamers may be the next generation's social class.  And don't expect public school social interactions to help.  Many people build social networks in high school and even college, but very few high school social relationships last far past graduation and many college social relationships are lost or become long distance online relationships after college.

What can we do to ease only children into this, so they do not suffer from a lack of social skills?  In some cases it may not matter.  Telecommuting is becoming more common, making face-to-face social skills matter less.  From the perspective of careers, the need for social skills depends on the career.  Close personal relationships, however, nearly always require face-to-face social skills.  While people who have interacted closely for a long time might be able to manage without decent social skills, establishing such a long term relationship does require some face-to-face social skills.  Japan and Korea are already having a problem with this, and it is resulting in birthrates low enough that economic collapse is becoming a serious concern.  The governments of both countries have started campaigns designed to help people meet, establish relationships, and get married.  Our recent recession may even be a consequence of an unsustainably low birth rate.  If we just ignore the problem (or even embrace it, as some advocates of single child families do), it will eventually come back around with far more severe consequences.

I am no psychologist, but I do have some suggestions that may help: Parents of only children should find opportunities for face-to-face social interaction for their children.  This does not necessarily have to be with children of the same age, since most social relationships after adulthood are not with people the same age.  A great opportunity for social interaction is games.  I prefer tabletop games, but sports can work as well.  A good blend of competitive and cooperative games is ideal, as competition and teamwork are both important in work and non-work relationships.  For proactive parents who want to give their child the best chance, I would even recommend socially oriented computer games, and the parents should have some involvement as well.  MMOs are great options, because the opportunity for social interaction is extremely high.  Many real-time strategy games (like Starcraft and Starcraft 2) can be fairly social, as they are naturally competitive but also provide the opportunity for teamwork.  There are some entirely cooperative games, like Minecraft, that can be very beneficial to social skills when played with other people.  I want to stress that parents should be involved with their children in these games.  One reason is to moderate potentially unhealthy social interactions, which do sometimes occur.  Another reason is to make sure the child is interacting in a beneficial way (when I play MMOs, I tend to play on my own; this is not very beneficial for social skills).  A third reason is to make sure that the child is not spending an unhealthy amount of time playing the game.  Social skills are great, but only if they are often used for interactions in the real world as well as the game.  Online gaming has several benefits.  One is that it will help the child learn to communicate effectively using an electronic platform.  If most social interaction is online in the future, knowing how to interact online will be important.  It will also help improve teamwork skills.  If done with significant parental involvement, it can also help form a more lasting long-term relationship between parents and their children.  Overall, the right online games will help improve general social skills and online social skills, while face-to-face games will help counter the anxiety common in only children during face-to-face social interactions.

As a gamer, it does not bother me that gamers are likely the new social class of the future.  It does bother me that many non-gamers may have a hard time functioning in society as a consequence of limited social skills though.  Really, we probably need more research on this; however, until that happens, my suggestions should be helpful, and they are based on research on the effects of gaming on people.

15 October 2014

Creativity in Video Games

I just read this article, and I found some things I agree with, but I also found many that I do not.  I am a gamer.  I do not spend all of my time playing games, but I enjoy them, and video games are my default entertainment activity.  I am also a researcher, and if I have spare time that I am not spending on video games, I am probably searching the internet for obscure information on some subject that interests me.  One prominent subject that I tend to come back to regularly is the psychological effects of video games on humans.  I have read a great many articles on the subject, and I have found many studies.  One interesting thing about the studies is that they all seem to agree.

Here is a brief summary of what the research indicates: The first studies on this subject very quickly found that video games improve hand-eye coordination.  There were also some positive effects found on reflex speed for certain types of actions.  These findings were initially celebrated (and one US President was very excited about the prospect of recruiting skilled gamers to the Air Force, because games are great for improving the skills required for flying military aircraft), but they became the subject of ridicule less than a decade later.  More recent studies have found that the hand-eye coordination benefits are actually very profound and have many useful applications in everyday life.  They have found much more, however.  Regular gamers tend to have sensory benefits similar to those found in the remaining senses of people who are missing one sense entirely.  For instance, blind people tend to have more acute hearing and touch.  Gamers with normal sight also tend to have these benefits.  Even non-gamers were able to develop these benefits by playing games regularly.  Specific types of games can bestow specific benefits.  MMOs, where large numbers of players must frequently interact with each other, help with social skills.  Games where any sort of accounting is necessary help improve math skills.  Action games improve fast decision-making skills.  Many games improve memory and spatial mapping skills.  Many games also improve organization skills.  Games of all genres have been shown to have many benefits.  Even violent games can have strong benefits in real world skills.  Just learning to play a new game can improve problem solving skills.  More recently, another concern has gotten a lot of attention.  In some cases, violent behavior has correlated with the use of violent video games.  This has led to an unwarranted assumption that violent video games cause people to become more violent.  There is also plenty of research on this subject, but there is no evidence that the assumption is true.  There is indeed a correlation, but the evidence indicates that causality goes the other way: People who are already violent or prone to violence are more likely to play violent games.  There is also plenty of other research examining other aspects of the effects of video games, but this summary covers the most controversial and important parts.

I want to look at the claims made in the article mentioned above, because many of them show that most of society, including those who should really know better, are basing conclusions of causality on evidence that shows only correlation.  The article initially discusses Carey Martell's childhood.  Carey is a former game developer and is now the CEO of an internet TV network.  His father played video games with him when he was young, but eventually started preaching to him that video games would not get him anywhere in life (note again that one of Martell's careers was video game developer).  Back then, this was a common attitude toward games, and even psychologists often preached this unsupported doctrine.  Needless to say, Martell's opinion on the matter is much different from that of his father.

The second person quoted in the article is a "toyologist and child development expert."  Stevanne Auerbach claims that the only educational value in violent video games is preparing young people for the army.  This is a lie.  First-person shooters have repeatedly been shown to improve spatial mapping skills and memory.  Surely there are professions other than the military for which these skills are useful.  In fact, mapping skills are exactly the thing needed to avoid getting lost in a new city, which is a common problem for business people who travel a lot.  There are few professions that do not benefit from improved memory.  First-person shooters also tend to improve attention to detail (it is easy to get shot from behind if you are not watching every shadow and listening for footsteps).  This skill is important in nearly every field where mathematics plays a role, and it is especially valuable in engineering and science fields.  MMOs typically feature a lot of violence as well.  This genre typically helps improve social skills, math skills, and problem solving skills, as well as spatial mapping and memory.  Again, these are all very useful skills outside of the military, and some of them (problem solving especially) are very valuable skills in high-paying business and engineering jobs.  Other genres that typically revolve around violence also have their own share of benefits, often favoring higher-paid jobs as well.  There are still some concerns about excessive use of violent games, especially for people who are already prone to violence, but saying that violent video games have no educational value outside of military service is absurd.  Someone in child development should know better than to make bold claims without first examining the evidence.

Dr. David Bickham also missed the mark, though not as dramatically.  Bickham works for Boston's Center for Media and Child Health and would also benefit from doing his research before commenting.  He is partially right though.  There are some strong theories that Minecraft may promote problem solving and cognitive skills.  He is also right that we need more research on this topic.  He is wrong, however, in his claim that it is unclear if Minecraft is really beneficial or just good entertainment.  Minecraft has been shown to promote good problem solving and cognitive skills.  The only thing that is not clear is how much these benefits translate to real life skills.  Several schools in the US are already using Minecraft as a learning tool, in some cases for many different subjects.  It has already been shown to be useful for teaching math and basic architecture (despite not having realistic physics).  Some teachers have started using it specifically for teaching problem solving skills, and it is frequently used as a group activity in this way to promote good social skills at the same time.  So far, no teacher using Minecraft for teaching has failed to report dramatic benefits, with numbers to back their claims.  Minecraft is a very potent learning tool, and we even know why, to a large degree.  It is engaging, which keeps students focused and interested.  Engaged learning is the most effective kind of learning.  Where we need work is determining what subjects it is good for (and if there are any that it is not), what the limitations are, and how to maximize the benefits.  If Minecraft is already showing good results with just basic teaching techniques, imagine how well it could do if we can figure out the best ways to use it for education.  Anyhow, it is not a matter of trying to "figure out what it means."  The question is how we can best use it.

One thing the article does get right is the real gold mine that Minecraft managed to hit: creativity.  The one thing that all of the violent games miss is creativity (most games miss it, because allowing creativity is very hard to program).  Creativity is another very important skill for many of the highest-paying (and most interesting) jobs.  It is especially valuable in engineering.  Creativity is the thing that Minecraft brings that makes it such a powerful teaching tool.  Trying to teach architecture by having students play a game where they merely walk around the buildings (or even destroy them) is obviously not going to be as effective as having the students play a game where they actually build the buildings.  Minecraft is especially good for math, because players get to see and experience real world applications for the math, as they do things like calculating how much material it will take to build a building.  Further, gaining the skills will help the students to play more efficiently, allowing more time to be spent doing fun things.  One additional benefit of sandbox games like Minecraft is that they combine fun with work.  Building awesome things in Minecraft takes a lot of work, and much of the work is spent gathering the materials.  In real life, many tasks require a lot of work that is only indirectly related to the end product.  This work is tedious and often hated.  Minecraft players build a strong drive to finish projects, and they learn the importance of the less fun work.  The result is people who are well suited to doing the difficult tedious parts of the work, because they can see the necessity of it for completing the fun parts of the project.

Auerbach contends that it is the responsibility of the game industry to start improving the educational value of video games.  This is another lie.  The legal responsibility of any for-profit company is to generate profits for its shareholders.  So long as violent games with less than maximum educational value make large profits, this is what the game industry is going to make, because they are legally bound to maximize profits for shareholders.  The people with the biggest influence are consumers; however, the real problem is public perception of video games.  So long as they are viewed primarily as valueless time wasters, this is all they will ever be.  The people with the power to change public perception are the child development experts themselves.  As long as they are telling parents that video games are bad, parents will avoid video games.  When they do get games for their children, they will largely ignore the content of the games, because all games are bad, so it does not matter which ones they get.  Even saying that some are good and some are bad will have negative consequences, because parents will not feel qualified to make the decision for their kids.  Society (and especially experts) needs to accept that all video games are beneficial to some degree.  Once they accept this, they will be ready to start examining what makes some games better than others.  It turns out that the most beneficial games are also some of the most engaging games.  Once we break through this barrier, video game companies will start deliberately making more beneficial games, because more engaging games sell better.  Ultimately, it will be the game industry that makes better games, but before they will (or can even be reasonably expected to), they must be shown that it will be in their best interest and in the best interest of their shareholders.

The big barrier in all of this is public perception of games.  There are few of us who believe that games are already powerful educational tools, just as they are now.  Most of those who believe that they can be powerful educational tools (but do not believe that they currently are) are lamenting the supposed fact that they are not and are blaming the game industry for the problem.  The evidence supporting the belief that games are already beneficial is extremely strong, and it is not very difficult to find.  All of those child development "experts" who are spending their time and effort whining need to get off their butts and do their research.  Then, they need to start teaching parents that games are not bad.  The public needs to know that games are already powerful teaching tools, and they need to learn it from experts that they trust.  They also need to know that current games only educate to a fraction of their potential, and that choosing the best games will help their children more.  If the public demands better games, the game industry will make them.  If the public believes that all games are bad, however, the idea of "better games" will not make enough sense to them to convince them to demand them.  The fault for this situation does not belong to any one group, but the group complaining the most seems to be the group with the most capacity to effect change.  It is time for the child development and psychology fields to start teaching the truth about video games.



I want to add a side note.  I am a video game designer.  I am currently freelance, and I prefer it that way.  I don't know if I can make it profitable enough to remain freelance, but I am going to try.  Now, here is the important part: I have been studying the subject of this article for almost a decade.  I mentioned that I have read a great many articles and a great deal of research on it.  One of my goals is to leverage my knowledge to create fun and engaging games that also happen to be educational.  (Games designed specifically to be educational are almost invariably not very engaging, and thus do not keep students' attention well enough to be effective.)  In the game industry, my perspective makes me something of a pariah.  Most companies either want to make explicitly educational games (and may even believe that all other games are evil), or they believe that any attempt to add deliberate educational elements will ruin the game.  I believe that engaging games can have many educational elements, and I do not think it is too difficult to include them.  The trick is to avoid contrived educational elements.  In my opinion, Minecraft is strong evidence that this is possible.  I do not know if I will be able to accomplish this goal, but I plan to try.

12 October 2014

Technology Aesthetic

Technology has struggled with aesthetics for decades.  Modern tech devices really do not look much better than early ones, except in a few rare circumstances.  I doubt this struggle will ever end, because different people have different tastes.  There are a few things that could be done to improve the situation though.

There are several options when dealing with aesthetics in technology.  The first is to hide it.  This is probably the most common.  The typical cell phone has an almost sleek looking body that serves only to hide the insides and offer very minor protection from damage (sorry, anything that is not waterproof cannot qualify as offering more than minor protection).  Most desktop computer cases serve the entirely utilitarian purpose of providing a frame to attach internal components to, again offering minor protection.  Laptops sometimes have customizable options, but still rarely serve any purpose well besides holding the components together and hiding them from the user.  Hiding is, by far, the most common way of dealing with tech aesthetics.  Real beauty is never a part of the equation.  Even hiding is not usually done very well.  An LCD screen encased in black plastic may hide the internal electronics, but it only barely does so.  Looking at the screen, it is still obvious that electronics are inside.  There is no suspension of disbelief; we all know what is hidden in there, and we cannot pretend otherwise.

The second option is exposing the technology.  Circuit boards and electronic components can look aesthetically pleasing, but it takes a bit of care and effort.  Some people in the tech industry enjoy the appearance of raw electronics so much that they will purchase computer cases with big windows on the sides, so the internal circuitry is visible.  Most people, however, do not like the unfinished appearance of typical electronics.  At a cost, it is possible to completely clean up electronics and make them look presentable in their own right, but this gets expensive very quickly as boards get larger (like the motherboard in a PC).  Most smaller devices, like phones, include very sensitive radio frequency circuitry that would be difficult to expose safely, so exposing the technology is not always a viable option.

The third option, which has never been done very well, is hiding, but focusing on making the hiding place beautiful.  Apple has tried this repeatedly, but still has not managed to get very far past basic hiding.  Touch screens and capacitive buttons (as opposed to tactile switch buttons) help this a lot by more effectively hiding the technology, but white plastic is still clearly very modern and unnatural.  The white plastic enclosures of most modern Apple products just cry out "technology."  The technology may be hidden on the inside, but the aesthetic advertises that this is a man-made technological device.  It may look pretty to some, but it does not do a very good job of hiding the technology.

There is a fourth option, which has never been done on a large scale.  Part of the reason is that it would initially be expensive, and it might be difficult to keep up.  This is changing the aesthetic altogether.  It falls into the category of hiding, but it is far more flexible and more capable of beauty.  This kind of hiding makes suspension of disbelief the primary goal.  Instead of enclosing the device in a mundane plastic enclosure, it could be contained in a wood (or realistic faux wood) enclosure.  Correctly designed, an LCD monitor could be made to look like a picture frame.  This could be mounted on an office wall, and if the cables were easily concealable, a static image on it could trick visitors into thinking that it was an actual photograph, instead of a computer monitor.  A flip phone might look like a large locket.  Buttons or keys could be made to look like stone (some sci-fi shows feature "ancient" technology that uses large stone blocks as keys for an input device).  Brass or other metals (or even realistic metallic paint on plastic) could be used to make keys that appear to have been made during the Victorian era or earlier.  Using materials that appear to be non-technological in nature, it would be possible to hide technology in enclosures that do not advertise that technology lies within.  It might still be obvious that a cell phone is a cell phone, but the user (and spectators) could imagine that the device uses magic or mechanics to operate, instead of modern electronics.  The appearance of the device would not break suspension of disbelief.


Now, none of these options is the "best."  Poorly hiding the technology is probably the cheapest option.  Exposing the technology satisfies a niche, but most people would prefer it hidden.  Hiding with a focus on aesthetics has worked very well for Apple, and many people like the idea of carrying a device that is clearly high tech without actually exposing the technology.

Visually transforming technology could introduce a great variety of different aesthetics, and it could revolutionize how we interact with technology, but it would also carry some initial expenses.  Making technology into artwork requires real artists, not just industrial engineers trained to focus on the most practical designs.  Creating all of the hardware for producing beautiful technology would be fairly expensive.  Eventually, this expense would largely disappear as fabrication technology improved, and much of this hardware would be a one-time cost anyway, but the art costs would still linger.  Ultimately, it would pay off though.  There are many niche markets (some fairly large) that would make this very profitable if managed wisely.  Steampunk cell phones would probably sell fairly well already, without excessive marketing costs (word can get around quickly in the Steampunk community).  A Medieval theme would be popular for those involved in Renaissance Fairs and the SCA, as well as for some historians.  Any variety of fantasy themes could be wildly successful (a cell phone could be modeled as a magical scrying device, and an LCD monitor might have a frame that would be appropriate for a magic mirror), and the sci-fi market is largely untapped here as well (just within the Stargate series, there are at least 10 alien tech aesthetics that could be highly profitable; Star Trek and Star Wars would also be quite popular).  Nature-based themes (a realistic wooden case, maybe even with knots protruding slightly) could be popular with the environmentalist movement, and they might even appeal to primitivists who are not opposed to using some technology.  Of course, there are nearly infinite possibilities in the anime and cosplay markets.  Already, case upgrades with some of these themes are popular, but case upgrades are just pretty pictures attached to the phone.  They do not beautify the phone itself.  Some designs would justify adding as much as $50 to the price of a phone (possibly more for larger devices), and it would probably be easy to get an extra $5 or $10 even for simpler (but realistic looking) designs.  The potential for profit is very high, and there are also possible reputation boosts for companies that do this well.

The point of this is not to denigrate current technology aesthetics, but rather to point out that they do not live up to their full potential.  The market for utilitarian designs and designs that "feel" modern and high tech is saturated.  There are multiple markets for designs that feel low tech or even no tech but that are still high tech on the inside.  There are markets for designs that feel like sci-fi advanced tech, as well as designs that feel mystical or ancient.  There are even markets for designs that feel like products of nature.  Tech aesthetics do not have to be all about hiding the technology.  They can be about making the technology appear to be more or less than it actually is.  There is a place for purely practical designs, but in a world with billions of people, there are many who want more than just practical, utilitarian devices.

A friend of a friend once used the term "fashion device" to refer to a potentially useful electronic device designed to fit into a specific aesthetic.  Themed aesthetics do not have to stop at handheld or wearable devices, though.  Even a laptop or desktop computer, an LCD TV, or a computer printer could be designed to have decorative value in addition to practical value (in fact, parts of the steampunk community have altered laptop and desktop computers to look like something straight out of Victorian England; these are currently the ultimate in useful decorations, but the potential is practically infinite).  Instead of hiding technology, we could be disguising it as other things.  We could be making technology look and feel like something entirely different, and we could do it without sacrificing utility.  Instead of ugly, blocky speakers and screens framed in mundane black plastic, we could have devices that are both works of art and useful tools.  Our houses could look like art galleries instead of collections of ugly but necessary blocks of multi-colored plastic and painted metal.  Long ago, aesthetics was considered more important in household devices.  In antique shops, it is sometimes possible to find sewing machines, radios, and even old television sets framed in high quality wood with artistic embellishments.  Modern technology has opened up many more aesthetic possibilities.  Now it is time to take advantage of them and beautify our tech culture.

03 October 2014

Failing Education

I regularly come across evidence that our education system is failing.  I am not just talking about test scores or students who have to retake classes.  I am also not just talking about public education.  I am talking about education across the board.  The particular evidence I want to look at today is written language skills.  Written language is an extremely important part of communication, and researchers are finding that its importance is increasing as more and more communication takes place in the form of emails, text messages, and written posts on various forms of social media.  Originally, it was predicted that audio technology would make written communication obsolete except in legal and scientific fields, but now researchers are finding this is wrong.  Text messaging is wildly more popular than voice calls, and many people use their smart phones more for reading and writing on social media than they do for voice phone calls.  Writing skills are becoming more important than nearly any other communication skill.

Today I came across a posting for a product being sold online.  The product was some kind of partially prepared meat.  Specifically it was pork.  The product description included the following text (original formatting is preserved):

The finest cuts of "All Natural" Pork...

Besides the fact that the sentence is a fragment rather than a complete sentence (this is sometimes permissible in less formal writing, like advertisements), the use of bold type is not exactly appropriate here.  In advertising, though, bold type is often used for emphasis, because it draws the eye better (in most formal writing, italics are used for emphasis, while bold type is very rarely used because it breaks visual continuity).  My problem with this is the inappropriate use of quotation marks.  Quotation marks applied in that way around an adjective typically indicate that something is not what it claims to be, at least in informal writing (which advertising is).  So, this "All Natural" pork (yes, capitalizing "pork" is also not correct writing) must not really be all natural, otherwise they would have called it All Natural pork, right?  Sadly, this is probably not true.  Whoever wrote that text probably thought that the quotation marks somehow emphasize or even strengthen the claim of all-naturalness.  Really, they imply that the person saying the words is smiling and winking as they say them, indicating that they want someone else to think the pork is all natural when it really is not (sometimes people even make a quote-mark gesture with their hands to further indicate that there is innuendo involved).  This is not the most common mistake I see, but it does seem to come up frequently.

Perhaps the most common mistakes are grammatical errors.  Most (though not all) advertisements use correct spellings.  The most common misspellings are homophones, where the misspelling is actually a word that sounds the same but is not the intended word.  Spell checkers do not catch these words, but a good editor or proofreader would.  (The worst spelling error I have seen on an ad was in Utah, where some real estate company thought "available" was spelled "availble."  Even a pretty lame spell checker would have caught that one.)  Grammatical errors, however, indicate that the writer did not even bother to reread the text.  In most cases, grammatical errors stick out when the writing is read out loud, and any English teacher will suggest that students read their work out loud to make sure it sounds how they intended.  Right next to run-of-the-mill grammatical errors are incomplete sentences.  These drive me nuts, because I can see that the sentence is trying to say something, but when it ends prematurely, it leaves me wondering (imagine if this sentence stopped here; what the heck does it leave me wondering; I'll be kind) where it was going.

This epidemic of poor writing skills leads to two conclusions: either most people are too stupid to remember anything they learned in school long enough to use it, or the schools are doing a horrible job of teaching it in the first place.  Given the state of our education system, I think the evidence favors the second conclusion.  I'll admit that many people do not realize the importance of communication skills and choose not to put any effort into maintaining them, but that is a huge number of people to accuse of stupidity or laziness, and I would prefer to give the benefit of the doubt.

Some people might wonder why this matters.  Even with rampant misspellings and grammatical errors, it is still easy to tell what most advertisements want (I'll give you a hint: they want you to give someone else your money in exchange for products of dubious value).  Communication is about getting your point across, and if they do that, does it really matter how badly they are written?  Well, yes.  Do you want to buy "All Natural" pork, or would you rather buy All Natural pork (or even "All Natural pork" or All Natural "pork," where it could be a veiled reference to pork-flavored cat or dog meat)?  These are two different things.  Even if the company does actually mean that it is all natural (and not fake all natural), if they cannot communicate that effectively, what else are they miscommunicating that is not so obvious?  That real estate company certainly does not impress me with their proficiency in their job.  Maybe they only forgot an "a" in that sign.  What happens if they forget a "0" when they are selling my house (or add one when I am buying it)?  On that sign, one missing letter only makes them look inept.  On a contract, one missing letter (or digit) could be disastrous.  Misspellings and grammatical errors can change the meaning of a word or phrase.  You might notice five or ten errors in a company's advertisements over a year or two.  Consider, though, how many did you miss?  How many errors were there that did not turn the text into nonsense but still managed to change the meaning significantly?  In fact, if you pay close attention, you may notice that many national retail chains post ad retractions or errata on their doors several times a year.  Most often this is because they either misadvertised a sale price or matched a sale price with the wrong product.  Incidentally, a video game store in Utah once misadvertised a price, and when I went to buy the game, they refused to honor their advertisement, even though they had not posted any retraction.  I never bought anything there again.  Writing skills are worth customers.  When I see an ad with an obvious writing error, it makes me less willing to buy products from that company, because it makes it clear that they do not care about quality.

I may have discussed this before, but this applies outside of advertisement.  Good writing skills are also important on resumes.  I have heard arguments that throwing out poorly written resumes might lose opportunities to hire good employees.  I think this is wrong, at least in most cases.  Most businesses get far more applications than they can read.  They use fast filter techniques to reduce the pile to a manageable size.  These techniques involve things like looking for writing errors and checking qualifications.  If a resume fails one of these tests, it goes in the garbage.  Now, I cannot claim that tossing resumes with writing errors will never eliminate qualified candidates who might do a good job.  I can claim, however, that it will not sacrifice opportunities for the business to hire good employees.  Unless the business needs almost as many employees as there are applications, filtering will leave, on average, a much more qualified selection of candidates to choose from.  If someone thinks that the company lost a really good opportunity when it threw out his or her resume, that person needs an ego check, and really, the other employees would probably have hated that person's self-centered, big-headed attitude anyway.  In today's economy, there is always someone else who could fill the position just as well.  And if that person has better written communication skills, they can probably even fill it better.

Poor writing skills signal several things.  First, they signal that you are too lazy to do a good job.  Second, they signal that you really do not care about whatever you are writing for.  Third, they signal a lack of precision and attention to detail.  Fourth, they signal a lack of good education (a college degree does not necessarily mean you still know anything from the classes you took, and poor writing skills are evidence that you have allowed yourself to forget at least one important subject you took classes in).  Fifth, they signal a lack of caring about quality.  When a person looks at an advertisement or a resume and sees writing errors, that person gets an impression of laziness, lack of motivation, lack of attention to detail, lack of education, and lack of quality about the person or company responsible.  That is why it matters.  Writing errors will cause people to look for a company with a better reputation or hire a person whose resume looks like he or she cares, even if the technical qualifications are not quite as good.  A person or company who puts the effort into avoiding mistakes in writing can probably put the effort into becoming more proficient and more qualified.  Poor writing provides no such evidence.

I do not think this is purely an educational problem.  I am sure there are people who write poorly because they are lazy, unmotivated, indifferent to detail, and so on, but I do think education is the root of the problem.  I have friends and family members who ask me to proofread their writing because they do care about attention to detail, but they do not have the quality English education to do a good job of it themselves.  This is not their fault.  Their lack of English education is the fault of public schools and even private universities that neglect their responsibility to properly educate their students.  (In fact, my good English education is largely the result of studying far beyond what was required in school and college and then studiously applying it until I developed strong skills.  The schools themselves just provided me with a small set of resources to build on.)


I just want to add that I am not perfect.  I do make writing mistakes, though not typically very often.  I reread this article twice (once out loud), correcting errors along the way, before I published it, but there are probably a few still lurking in here somewhere.  I sincerely hope they are not obvious, but if they are, I take full responsibility for missing them.  If this were a professional publication, I would ask some of my friends to review it before publishing, and I would reread it at least two more times.  If it were a resume, I would probably reread it four more times.  As an informal opinion piece that I am posting on my blog (that has a fairly small readership) though, I just do not have the time to put that much effort into it.

01 October 2014

Donating Ourselves to Death?

We might be donating ourselves to death.  Wealth has two sides.  The first, and most often noted, is how much money a person has.  The second, and most frequently ignored or overlooked, is how much things cost.  A salary of $350,000 a month sounds like a lot, but when a one-bedroom apartment costs $59,000 a month, suddenly that salary seems a bit low.

According to one website, ¥350,000 is about the average monthly salary in Japan, and according to another, ¥59,000 a month is about the average rent for a one-bedroom apartment outside of a city center.  Yes, those figures are in yen, not dollars, but if the same numbers were in dollars, those wages would not be that impressive given those costs.
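
To make that arithmetic concrete, here is a minimal sketch in Python; the figures are just the rough averages quoted above, treated as assumptions rather than verified data:

    # The raw salary figure means little by itself; what matters is how it
    # compares to costs.  Numbers are the rough averages quoted above.
    monthly_salary = 350_000  # yen per month (assumed average salary)
    monthly_rent = 59_000     # yen per month (assumed average one-bedroom rent)

    rent_share = monthly_rent / monthly_salary
    leftover = monthly_salary - monthly_rent
    print(f"Rent consumes {rent_share:.0%} of income, leaving {leftover:,} yen for everything else")

    # Running the same arithmetic with the same numbers labeled as dollars
    # gives the same share, so the size of the number alone says nothing
    # about how wealthy the earner actually is.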

Where ignoring costs and focusing on the dollar amount becomes problematic is when the average pay raise is less than average inflation.  When pay increases less than inflation, the effect is the same as pay decreasing by roughly the difference.  This has been happening in the US, especially among the middle and lower classes, since at least 1960.  That, however, is not the subject of this article.
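
As a quick illustration, here is a minimal sketch of real, inflation-adjusted pay change; the 2% raise and 3% inflation are made-up numbers for the example, not statistics:

    # A minimal sketch of real (inflation-adjusted) pay change.
    # The raise and inflation rates below are illustrative assumptions.
    raise_rate = 0.02  # nominal pay increase for the year
    inflation = 0.03   # price increase over the same year

    # Exact change in purchasing power:
    real_change = (1 + raise_rate) / (1 + inflation) - 1
    print(f"Real pay change: {real_change:.2%}")  # about -0.97%

    # For small rates this is roughly raise_rate - inflation, which is why
    # a raise below inflation amounts to a pay cut of about the difference.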

Modern companies are expected by many consumers to back popular causes.  Companies that donate to educational, environmental, or human rights causes are revered by many consumers, and companies that do not are viewed as evil money grubbers.  The consequence is that companies feel obligated to back some popular cause, because otherwise they will lose business and eventually fail.  From a cost point of view, this is problematic.

It is possible to reduce the income of the poor without ever touching their money or reducing their wages or welfare benefits.  All that is necessary is to raise prices without raising the wages and benefits of the poor.  An extremely effective way for the general public to do this is to make companies feel obligated to spend money on something that will not bring any profit.  Consumers who refuse to do business with companies that do not support some popular cause force businesses to spend more money on unprofitable things.  This, in turn, forces those companies to raise their prices.  Increased prices make the poor even poorer.

While supporting moral causes is a good thing, expecting businesses to do so is a misplacement of resources.  When a business donates to a cause, it is indirectly forcing all of its customers to donate to that cause.  If some of those customers cannot afford to donate, then this practice is unethical.  This kind of shopping habit perverts competition, forcing businesses to do things that are oppressive to the lower classes.  The correct application of market forces uses competition to keep prices low, and this application of competition supports a healthy economy.  Competition that values things other than price or product quality almost always results in increases in price and reductions in quality.  Admittedly, competition that encourages businesses to act ethically (from a business standpoint, not a popular cause standpoint) can be very good and can help keep workplaces safe and encourage ethical treatment of employees (it still increases prices or reduces quality, but justifiably).  Competition that encourages or forces businesses to act outside of their sphere of influence, however, is harmful to the economy, because it drains funds from the rich and poor alike, without their consent.

We might be donating ourselves to death.  The graduated income tax system is designed to put the majority of the tax burden on those with the majority of the money.  At first this may seem unfair, but a government that protects the ownership of property benefits the wealthy far more than the poor, because the wealthy have more to protect.  In addition, the poor cannot afford much, if any, of the burden, and what good is a government that favors protecting property ownership over the lives and well-being of its citizens?  When businesses donate, they impose a sort of flat tax on all of their customers.  Of course, the rich pay more than the poor, because they spend more, but the poor cannot afford to pay any of this involuntary tax.  By donating to popular causes, businesses are harming the economy and robbing the poor.  When people refuse to shop at cheaper stores because those stores do not donate to a popular cause, they are rewarding the more expensive stores for a practice that oppresses the poor.
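
For a concrete sense of that burden, here is a minimal sketch with made-up household figures, showing how a donation funded through a price increase costs the wealthy more in absolute terms but costs the poor a larger share of their income:

    # A minimal sketch of a donation funded through prices acting like a
    # regressive tax.  All figures are illustrative assumptions.
    price_increase = 0.01  # store raises prices 1% to fund a popular cause

    households = {
        "lower income":  {"monthly_income": 2_000,  "spending_at_store": 1_500},
        "higher income": {"monthly_income": 20_000, "spending_at_store": 4_000},
    }

    for name, h in households.items():
        extra_cost = h["spending_at_store"] * price_increase
        burden = extra_cost / h["monthly_income"]
        print(f"{name}: pays {extra_cost:.2f} extra, {burden:.2%} of income")

    # The wealthier household pays more in absolute dollars, but the poorer
    # household gives up a larger share of its income, so the involuntary
    # "donation" weighs heaviest on those least able to afford it.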

It is not fair to blame the businesses for this problem, because they are not, for the most part, responsible.  They are responding to market forces.  If they did not do this, they would ultimately fail.  The blame goes to the people who blindly choose to avoid stores that do not donate without considering the consequences.  The blame goes to the people who are too lazy to donate directly and instead patronize businesses that donate so they can feel good about themselves anyway.  If a cause is worth supporting, it is worth donating directly, instead of spreading the burden to those who cannot afford it by expecting businesses to do the work of donating.

Philanthropy does not belong in for-profit business.  Those who want to donate to a cause should do it themselves instead of expecting someone else to do it for them.  It is hard enough for the poor without forcing them to donate to every popular cause.  This problem is one of the many reasons the US economy is struggling and taking so long to recover.