29 December 2014

Mandatory Benefits Enforce Slavery

Freelance work is becoming a big deal in the U.S. for several reasons.  One reason is that the still very poor economy (yes, they claim it is improving, but really it is only improving for the wealthy) makes it extremely difficult to find decent work.  Sure, you heard on TV that unemployment is decreasing, but did they bother to mention that most of the new jobs are low-paying jobs?  Did they even point out that wages are stagnant while inflation keeps rising?  A lot of Americans are finding that freelance work is easier to get than permanent employment.  That is not the big driver of freelancing though.  Over half of freelancers do it entirely voluntarily.  They have chosen freelance work over long-term full-time employment because they like being their own boss.  They like setting their own hours.  They like the ability to choose what work they will do and what work they will leave to someone else.  Many even like the fact that they do not have to work a full 40 hours a week to get by.  Freelancing comes with a cost though: no benefits.

Aside from social pressure, wage slavery is primarily driven by mandatory benefits.  I know many people who would like to start their own businesses, but they cannot, because they cannot afford private health insurance.  Other benefits are a problem as well, but health insurance is, by far, the biggest one.  I even know a few people who have their own businesses and work a regular job for the health insurance.  Businesses, like Lowe's, that offer these benefits even to part-time employees are a great blessing to business owners who cannot afford private health insurance.  (Years ago I worked at a Lowe's store, and at least two other employees there owned their own businesses but worked 10 hours a week at Lowe's for the health insurance package.)  This is a problem, because it discourages freelance work and the creation of new businesses.  For the most part, only independently wealthy people can really afford to start their own businesses, and I am not just talking about businesses with really expensive startup costs.  I have several computers, I have access to all of the tools I need, and I have all of the necessary knowledge and training, but I still cannot afford to start a software company, because I am stuck spending nearly all of my time working for someone else.  Even most middle class employees are stuck in this situation.

What is the solution?  Get rid of mandatory benefits.  In fact, ideally, all non-monetary compensation should be prohibited.  Someone still has to take responsibility for health insurance, because costs are still too high.  Obamacare made health insurance mandatory, but it did not solve the underlying problem, which is that it just plain costs too much.  At this point, a single payer system seems like the best option, and retiring Medicare and Medicaid would go a very long way toward funding it (actually, if you add in all the costs of the multiple Obamacare failures, that might make up the difference).  Further, if there were still a deficit, another side effect of this change would cover that and then a whole lot more.  The single most abused benefit is stock options.  Eliminate those, and tax revenues (especially from CEOs and such) would increase dramatically.

Taking the burden of health insurance off of employment would release millions of Americans from wage slavery.  Of course, they still have to work to survive, but they would have much more control over that work.  Without employer provided health insurance, more people would be motivated to start new businesses, and more people would be willing to work for those businesses.  More people would be able to go the freelance route.  In addition, one more awesome benefit of this is that more people would feel free to choose part-time work instead of feeling compelled to work full-time, making more jobs available for others.  More Americans would be free to choose their own paths than ever before.

Now, I am sure you are aware that I endorse a basic income in addition to this, and a basic income would free Americans to a degree never before seen in all of human history (accepted history, anyhow).  Even without a basic income though, eliminating all non-monetary benefits would go a long way toward increasing freedom in the U.S.  Of course, if stock options were eliminated, the increase in tax revenue would likely cover a large chunk of the cost of a basic income.  I just wanted to point that out.

27 December 2014

Unions

I have a problem with unions.  It comes down to two things: Unions are too powerful and too easy to abuse.  Yet unions are currently absolutely necessary to handle problems that the government refuses to address fairly.

The recent Supreme Court ruling on a dispute between an Amazon contractor and its warehouse employees (which I have discussed in more detail in a previous post) illustrates the second part of my problem.  Without unions, many workers are just plain not treated fairly.  In the Amazon case, workers were being forced to go through excessively long security checks daily without pay for the time spent.  Our Supreme Court justices (whom I must assume are idiots, because the only other option is that they are deliberately helping to enslave and oppress innocent Americans, and I want to give them the benefit of the doubt) declared that businesses do not have to pay workers for time spent doing anything that is not, in essence, part of the job description.  That declaration now stands as binding precedent in U.S. law.  The government offers no protection against what amounts to blatant wage theft.  There is only one solution: unions.

Unions were originally created in response to government inability to enforce fair labor practices.  In the early U.S., it was common for employers to underpay workers and to require far more hours of work than is healthy or fair.  Unsafe work conditions were more common than safe ones by a very wide margin.  People were regularly injured or killed in workplace accidents that could have easily been prevented, because owners were too cheap to spend even small sums to ensure safety.  Children were treated as slaves, working 16-hour days in these conditions, for so little money that entire families had to work, and that was still not enough to get by.  The government was not powerful enough to do anything to stop these unfair practices, and in many cases, the government did not have enough reach to even be aware of them.  The solution was labor unions.

Workers in these conditions eventually banded together, demanding fair treatment.  Their employers refused the demands and threatened to fire anyone who continued to dissent.  Eventually the workers realized that if all of them dissented at once, their employers would be unable to replace them all fast enough to avoid financial catastrophe.  The worker strike was born (it was actually born in France, but it was quickly adopted by oppressed U.S. workers).  Nearly all of the workers in one or more factories refused to continue work until conditions, hours, and wages were improved.  Employers were powerless against the unions because they were dependent on the employees.  Firing them all would result in financial ruin for the company.  Initially the government panicked: Workers' unions threatened the U.S. economy.  If workers had so much power, they could easily force businesses to pay so much that it would cause rampant inflation.  Besides that, even short strikes resulted in production halts, and in factories that produced necessities, those halts could result in serious harm.  This did something else very important though: It put the problem of workers right in the face of the government, where it could no longer be overlooked or ignored.

The government realized that treatment of workers was a major problem.  It also recognized its responsibility to do something about it.  The government still did not have the power or reach to handle the problem on its own.  It did have the power to protect the workers in their own attempts to deal with the problem.  Business owners lobbied the government to make unions and worker strikes illegal.  Their claim was that these things caused economic instability.  Their claims seemed reasonable; however, the government eventually recognized that the underlying problem was not the strikes, but the unsustainable hours and pay, as well as the often deadly work conditions provided by employers.  Laws were passed to protect unions and striking workers from retaliation.  Currently, workers cannot be fired for discussing unionization, actually unionizing, or striking.  Workers who are striking on economic grounds (wages, other compensation, or work hours) can be "permanently replaced" (they cannot be fired, but if a willing replacement can be found, the strikers' hours can be reduced to 0 indefinitely, which is approximately the same as being laid off).  The government also created a set of safety and treatment requirements and guidelines for how employees may be treated.  Strikes related to these issues are further protected, prohibiting even permanent replacement.  When it comes to safety and other government protected employee rights, replacements hired during a strike must be fired to make room for striking employees returning to work once the dispute has been resolved.

The potential for abuse of unions was still clear, so some restrictions have been added.  Closed shops, where the company may hire only union members, were strictly prohibited.  Closed shops allow the union to control all hiring decisions by restricting admittance into the union, giving the union veto power over any hiring action.  Union shops, where new hires are required to join the union after being hired; agency shops, where non-union members must still pay union dues; and open shops, where employees may choose whether to join and pay dues, are all legal in the U.S.  Prohibition of closed shops prevented the most obvious abuses of unions, but it still left some loopholes, most of which still exist.


When unions were originally created, they were necessary.  They were very useful, and they did a great deal of good.  Since then, many things have changed.  The biggest change is the power and reach of the government.  Workplace safety is no longer a serious union issue, because OSHA, a government agency, defines and enforces workplace safety standards.  If a workplace is unsafe, it is faster and easier for an employee to report the violation to OSHA than it is for a union to try to resolve the issue, and the penalties for those violations are enforced by the government, making workplace safety violations fairly rare.  Wages are still a problem, but not because the government is not powerful enough to do anything about them.  They are a problem because the government refuses to do anything about them.  Worse, the most common places for wage issues are not well suited to unions, because employee turnover is too high.  In the past several decades, most union wage issues were not problems of employers paying unfair wages.  Most of the issues were greedy employees who were already being paid far more than the U.S. average wanting more than their fair share (and, in the case of the U.S. steel industry, this was one of the blows that ultimately killed it).  Unions are no longer useful tools for enforcing fair wages.  Instead they are tools for overpaid employees to rip off their employers even more.

Work hours were another major thing that unions were good for.  Twelve to sixteen hour work days were common.  Unions pulled the U.S. work week down to 40 hours and the work day down to 8, requiring extra pay for any time worked beyond that.  Of course, the goal was actually closer to 35 or 30 hours a week (20 according to some), but unions lost sight of that goal almost a century ago.  Unions are no longer necessary to enforce this though, because the government has enacted laws requiring employers to pay employees extra for any work beyond 8 hours in a day or 40 in a week.  This is no longer a union problem; it is now a government problem.  Worse, despite unions and government, the average American voluntarily works around 50 hours a week, and the overtime often goes entirely unpaid.  When the workers don't care, there is little unions can do to fix the problem.

Overall, unions have lost most of their usefulness.  They still have potential for abuse though.  Unions have a great deal of lobbying power.  In Alaska, in the mid '90s I believe, the workers at some of the power plants went on strike.  I don't know all of the details, but I do know that the labor union exercised power that belongs only to government and individual citizens, by manipulating the state government into making some very harmful laws.  The power plants hired electrical workers from Washington state as temporary workers until the strike was resolved.  In retaliation, the union lobbied the state government to change certification laws to require electrical workers in Alaska to have gone through their training in-state.  In other words, a journeyman or master electrical worker from Washington state could be hired only as an apprentice in Alaska, unless he went through all of the time required for certification within the state of Alaska.  The union did this to put more pressure on the power company by denying it well qualified temporary workers (the law specifically prohibited hiring them into positions that normally required journeyman certification).  Besides being a low and very unethical blow, this has some severe economic implications.  I am certain the argument given to the legislature and governor was that hiring out-of-state workers would drain money from the state economy.  I don't think this justifies using the law to lie about a person's job qualifications, but besides that, this economic justification was incomplete.  The end result was that the workers got most of their demands.  The economic consequences were increased costs for power, which resulted in economically damaging inflation in a state where the cost of living is already quite high.  There may have been short term economic costs of hiring out-of-state workers, but the long term costs of not doing so were far worse.
There is also another long term economic cost: The electrical workers union in Alaska now has a legally enforced monopoly on electrical labor.  The political power held by unions has not just been harmful in Alaska.  In other places in the U.S., unions have used the law or other political influence to merge with other unions against their will (by "merge," I mean "hostile takeover").

Unions have largely become for-profit institutions in the U.S.  Their primary goal is no longer doing what is best for the workers or even representing the workers.  Their goal now is to do whatever gets the union the most money.  This frequently means demanding higher pay even when it is not needed or fair.  It also preempts any requests for reduced hours, because reduced hours mean lower gross pay, which means lower dues.  By allowing union and agency shops, the government has allowed unions to force employees to become union members and to pay union dues against their will.  Unions in the U.S. typically have a number of permanent employees who are not actually members of the union.  In many unions, this includes a CEO and other administrative positions, who make decisions about what the employees want without actually having any experience of being one of those employees.  Some of these positions, like lawyer and accountant, are justified, but full-time administrative positions in a union are absurd.  Unions are now run primarily by people who are totally disconnected from the union members and their work environment.  Frankly, a union that is a for-profit business should not have any degree of legal protection beyond what is normal for any other for-profit business.  Otherwise, it is even more prone to abuse.

So, now we come down to the problem: The government now has the reach and power to make unions entirely obsolete, and it has already made them mostly obsolete.  Instead of doing that though, it is actually making unions more necessary.  Unions should no longer exist, because they should no longer be needed.  When they were created, the potential for good outweighed the potential for abuse.  This is no longer true...except, when the government fails to do its primary job of representing the will and best interest of the people.

The Amazon case is a prime example of where unions are useful.  The employees are being robbed by their employer.  They could unionize and strike, demanding pay for their time worked, demanding that the security check be listed in the job description (making it an essential part of the job, and thus legally part of paid work time), or demanding that the security checks be discontinued.  They could even unionize and heavily lobby Congress to repeal the highly constitutionally questionable law the Supreme Court used to justify its appallingly oppressive decision (even abuses of power can have legitimate non-abusive uses).  The problem I have with this is that they should not need to unionize to get paid for all of the time they spend doing work required by their employer.

An employer should have the right to require employees to do worthless work (plenty already do anyhow), but employees should have the right to get paid regardless of whether the work required is profitable or not.  This should be legally protected.  What free society has a law that explicitly permits employers to blatantly and openly demand uncompensated work time from their employees?

16 December 2014

Legalized Wage Theft

This article discusses a recent Supreme Court ruling that Amazon does not have to pay hourly employees for time spent in mandatory security checks.  The case involved several workers at Amazon warehouses, where post-shift security checks are mandatory to prevent theft, and the time spent waiting in line routinely takes more than 30 minutes.  The justification was an ill-advised law from the middle of the last century stating that work that is not essential and integral to the job position does not have to be paid.  The Supreme Court concluded that since the security checks could be eliminated without harming the work of the employees, Amazon (or rather, the company it pays to manage the warehouses) does not have to pay employees for this time.

This is an absurd case of legalized wage theft.  In this case, the verdict should be simple: The security checks are mandatory.  This makes them an integral part of the job.  Even by that old law, the employees should be paid.  Any mandatory activity that is part of a job should be part of the job description.  If it is not, then it should not be mandatory.  And, if it is part of the job description, then it is an integral and essential part of the job and thus should be compensated.

This ruling leads to other problems though.  It sets a legal precedent for allowing businesses to squander employee time without compensating them for it.  One easy example that has been given is knife sharpening in meat packing.  This is an important task, because it affects efficiency and safety, but, unless a knife will no longer cut, it is not essential or integral to the task.  The Supreme Court has said that its ruling does not apply to things done for safety or efficiency, but as a legal precedent, it does apply, because the law in question does not say otherwise.  Given this, another major concern is time spent putting on and taking off safety equipment.  Legally, employers no longer have to pay employees for these things, because the Supreme Court has ruled that anything which can be eliminated without removing the ability to do the job does not need to be compensated.

The real problem here is that employers have been given a level of power over employees that is entirely abusive.  What if Amazon's security checks get longer?  What if employees are now stuck in line for 2 hours?  It is still not essential to the job, and the Supreme Court has declared that it is permissible to detain employees on pain of firing without paying them for that time.  What if that time goes to 4 hours?  Now, most employees are staying at work long enough to earn significant overtime, but according to the Supreme Court, it is still totally legal to detain them without paying them for that time.

Frankly, I don't care about that old law.  It was a bad idea, but the problem is far deeper:  The Supreme Court has more or less signed over the right to hold employees against their will indefinitely, without any accountability.  At least requiring employees to be paid for the time is an effective deterrent.  If that security check is not part of the job description, employees should be able to easily bypass it without any adverse consequences.  If the employer tries to detain the employee, then the employer should be charged with and convicted of holding the employee against her will.  If it is part of the job description, then by definition, it is an integral and essential part of the job and should be paid as such.  Really, there should be criminal charges going on in this case, not just a question of getting paid for that time.

10 December 2014

Tech for Teens

We will not be buying smart phones for our children.  We will also not be allowing them unsupervised use of technology.  In addition, we are teaching them good manners and responsibility.

The question of whether parents should buy cell phones for their teens has been a hot topic off and on over the last decade and a half.  Many parents see a high potential for abuse and low value in giving teens mobile communication devices.  Their arguments seem to be justified by the fact that cell phones have become a major distraction to students in class, and they are almost exclusively used for socialization.  The other side of the argument claims that the socialization itself has a very high value.  They also claim that teens need to be able to contact parents at any time for personal safety.  Both sides have some truth to their arguments; however, as cell phones have proliferated among teens, we have gained a great deal of additional data.

Over the last 6 months, I have read at least 6 articles on how mobile communication devices are used by teens for bullying, revealing private information, and even for distributing child pornography (typically images of other underage students).  This has become very common.  Recently, yet another anonymous communication app was condemned for its use in high school bullying and distribution of child pornography intended to defame its subjects.  This is a huge problem.  In fact, it has gotten so big that the FBI has gotten involved in multiple cases, and Apple even temporarily took one of these apps off of its App Store while the creators made changes to make bullying and other illegal uses more difficult.  Some parent groups are even petitioning Apple to remove the app permanently.

Here is what I get out of this: The average teen is not responsible enough to trust with a smart phone.  I don't care if the justification is personal safety; a teen who deliberately compromises the safety of others with a mobile communication device should not possess it.  The problem, however, is not the teens themselves.  The people who are responsible for the behavior of those teens should bear a large portion of the responsibility.  Parents have the final say in whether a child younger than 18 years old possesses anything, including a smart phone.  Those parents have a moral and legal responsibility to provide oversight.  Before buying or allowing a teen to buy a mobile communication device, parents need to seriously assess the maturity of that child.  If that child is not mature enough to handle the responsibility, the parents should forbid the device, and they should enforce that decision.  When parents do not do this, and someone else is harmed by their child's poor choices, the parents should be charged with neglect, and they should be held responsible for the crimes committed due to that negligence.  The default should not be permission.  The default should be not to give a teen a cell phone.  Truly responsible parents should not give their teens mobile communication devices unless those teens have shown an unusually good level of responsibility and civility.  If your teen still calls people names or is heavily involved in teen drama, you can be certain that a smart phone will become a weapon for harming others in the hands of your teen.

Parental oversight is essential in the appropriate and legal use of technology.  Parents should not be allowing their teens to use these tools unsupervised.  Parents who do allow their teens to use communication technology without direct supervision should check histories, texts, and any other logs to make sure their teen is using the technology responsibly.  Parents should also forbid the use of anonymous communication that does not store logs.  Is this an invasion of privacy?  No!  Parents have a legal responsibility for the behavior of their minor children.  Part of this responsibility includes prying to make sure their children are not doing things that will harm others or themselves.  So long as a parent is a legal guardian of a child, that child legally owns nothing.  Even if the child pays for something with money he or she earned, the property belongs to the parent until legal guardianship ends.  This means that a parent has every right to confiscate a cell phone that is being misused.  In fact, a parent has a responsibility to do so.  Teens are going through a period of brain development that leaves them especially vulnerable to making poor choices.  Part of the responsibility of parents is to help get their teens through this time without making choices that will cause substantial harm to themselves or to others.

Looking at the outdated arguments that are still used to justify permitting or forbidding cell phones for teens, here is my conclusion:  Cell phones are a major distraction in school (not just classes).  They do have the potential to help with social things; however, their potential to cause massive social harm is extremely high in the hands of a person who does not yet have well developed reasoning skills.  Personal safety is the only argument in favor of permission with any strength behind it.  There are, however, alternatives.  "Stupid phones" are still readily available.  Our plan is to buy a TracFone (or some similar pre-paid phone) model that only does calls (and perhaps texts) for our children.  When they go out somewhere that we think a phone would be a good idea, they will be issued a phone (we may need more than one, as our children are close together in age).  When they return, they will give the phone back, and we will log the time spent.  If any time was spent on calls to anyone but us, the child will be grounded.  This will not be a social phone.  It will be for communication with parents exclusively (though emergency provisions, like 911 calls, will also be permissible).  The phone will not be a smart phone, and if we can manage it, it will not have internet access at all.  If our children want to buy cell phones with their own money (and pay their own subscription fees), we will only permit it once they are 16 years old, on the condition that we are to review all histories each day, and then only once we have evaluated their capacity for responsible use of the device.  Some might consider this policy restrictive and invasive.  Perhaps it is.  As a parent, with a responsibility to both my children and general society, this is my job.  If you are a parent, this is your job as well.

This epidemic of electronic bullying and crime by teens is ultimately the responsibility of their parents.  I sincerely hope to see FBI investigations on this problem end with massive fines for the parents who are ultimately responsible for allowing their teens to harm others.  A parent who, through negligence, allows their teen to perpetrate this sort of crime is a failure as a parent.  I do recognize that some teens will be sneaky and circumvent even the most strict parenting (and I do not condemn parents who are victims of this), but when parents allow their minor children to harm others or themselves without even trying to be responsible, they are just plain bad parents.

What this comes down to is that the group against giving teens cell phones was right.  The potential for harm is far greater than the minor social advantages.  I am sure there are some teens responsible enough to have their own cell phones.  Ultimately, this should be up to the judgment of the parents.  Understand, however, that going easy on this judgment to appease your teen may turn out to be a massive parental failure to both your teen and to society as a whole.

(And, if parents will not step up and take responsibility for this problem, I will become a strong proponent of requiring licenses, at least for teens, to possess mobile communication devices.  And, as a supporter of free exchange of information, I really do not want to do that.  I am already opposed to allowing students to have cell phones in public schools.)

08 December 2014

CDC Apology for Poor Flu Vaccine?

Did the CDC actually apologize for a poor flu vaccine last Wednesday?  There are multiple reports that it did, but they are not strictly true.  I was immediately skeptical when I first heard this, not because I trust the flu vaccine, but rather because I seriously doubt the CDC would issue an apology even if this year's vaccine did not work at all.  According to Snopes, my skepticism is fully warranted.  The CDC did not issue an apology, and it also did not alter any recommendations regarding getting the vaccine.  It did, however, issue an advisory stating (in different words) that this year's vaccine kind of sucks.  This is not entirely the CDC's fault, because flu vaccines have to be made far in advance, so selecting the right strains is mostly guesswork.  Further, the flu mutates fairly quickly, and even the best guess cannot protect against a totally new strain.  The common strain this year happens to have drifted enough that the vaccine is somewhat less effective than usual.  The point of the advisory was to let doctors know that flu cases should be treated more aggressively, since the vaccine is not likely to help as much.

What does this mean about getting vaccinated?  Well, the CDC has not changed its recommendation that everyone get vaccinated.  This season's vaccine will still help, just not as effectively.  Even vaccinating for the wrong strain will help at least a little, because there are still similarities between strains.  For some people, however, it will not help at all.  This means that even with vaccination, more people will get the flu this year, and those that do will probably have more severe symptoms than usual.  The CDC advisory recommends that doctors use anti-viral drugs early on for high risk patients to mitigate this.

I want to present the actual numbers, for several reasons.  The first is to show that the difference is actually not as big as it is made out to be.  The second is to show that the flu vaccine was never that effective in the first place.

This article goes over the data in more detail than I will, and it is much clearer than the CDC advisory.  It also references a study done around 2 years ago on the effectiveness of the flu vaccine.  It turns out that the flu vaccine has never been much more effective than 59% on average.  A newer version of the vaccine has shown higher effectiveness, but some studies have shown it to be entirely ineffective in children when administered as a nasal spray (to my knowledge, the most common delivery system for it).  The one currently being administered, however, is the one with 59% effectiveness.  This year's vaccine is estimated at 48% effectiveness.  That is not an enormous drop, but it is significant, and it does put effectiveness below half.  The really bad part is that the effectiveness was so low in the first place.  (Admittedly, a few studies before the 2012 one estimated between 80% and 90% effectiveness.  Flaws in those studies prompted the more recent one.  The medical industry honestly thought effectiveness was between 80% and 90% until recently.)

Does this low efficacy warrant second thoughts about getting vaccinated?  Probably not.  There is a risk factor that should be considered when making this decision, but for most people, even 48% effectiveness is probably a good gamble for a fairly cheap vaccine.  Knocking down almost 50% of flu cases before they start should reduce the total number of cases by well over 50%, because those who do not get the flu are not contagious and thus do not spread it.  Reducing the spread may ultimately be worth more than preventing any individual case.
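To illustrate why averted cases compound like this, here is a toy discrete-generation epidemic sketch.  The population size, reproduction number, and vaccination coverage below are all illustrative assumptions, not measured values; the point is only that immunizing a fraction of the population can cut total cases by far more than the vaccine's per-person effectiveness, because each averted case also stops transmitting.

```python
def total_cases(population, r0, initial_cases, immune_fraction):
    """Run a simple discrete-generation epidemic until it burns out."""
    susceptible = population * (1 - immune_fraction) - initial_cases
    infected = float(initial_cases)
    total = float(initial_cases)
    while infected >= 1 and susceptible > 0:
        # each case infects r0 contacts, of whom only the susceptible
        # share of the population can actually catch the flu
        new = min(susceptible, infected * r0 * susceptible / population)
        total += new
        susceptible -= new
        infected = new
    return total

pop, r0 = 100_000, 1.3          # an R0 near 1.3 is typical for seasonal flu
baseline = total_cases(pop, r0, initial_cases=10, immune_fraction=0.0)

coverage, effectiveness = 0.45, 0.48   # assumed coverage, this year's efficacy
protected = total_cases(pop, r0, 10, immune_fraction=coverage * effectiveness)

print(f"cases without vaccination: {baseline:,.0f}")
print(f"cases with a 48%-effective vaccine: {protected:,.0f}")
print(f"overall reduction: {1 - protected / baseline:.0%}")
```

With these assumed numbers, the overall reduction in cases comes out well above the 48% individual effectiveness, which is the "well over 50%" effect described above.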

I have discussed the risk factor before.  Pre-drawn vaccines typically contain latex.  Also, some other common ingredients of vaccines are allergens, and cheap vaccines are especially likely to contain them.  For people with latex allergies, or even a family history of latex allergy, minimizing the number of vaccinations may be wise.  For people who have ever had a reaction to a vaccine, the risk may also be especially high.  These people should weigh the risk of an allergic reaction against the benefit of a vaccine that will only reduce the chance of getting the flu by about 50%.

The flu vaccine is very complicated.  It is more or less a guessing game.  On average, it will reduce your chances of getting the flu by around 60%.  Each year, however, the exact percentage is different.  In one year, it measured at 10%.  In others, it has measured closer to 80%.  In cases where the risk of allergies is very low, the cost of the vaccine is low enough to make it worth gambling on.  In cases where the risk of allergy is high, gambling may be unwise regardless of how low the price is.  Overall though, the flu vaccine is not 100% effective, and it should not be treated as if it were.  Even at low efficacy, it does provide some benefit to herd immunity, but the current vaccine will never eliminate the flu entirely.  The flu vaccine does little more than keep the flu in check.  Refusing to vaccinate for the flu because of unfounded fears actually has little impact on the overall effectiveness, so long as plenty of other people vaccinate.  This year, the impact of choosing not to vaccinate is even lower than usual.  None of this is the fault of the CDC.  It is a persistent problem with trying to vaccinate against a disease that is highly adaptable and widespread in a large human population.  That said, we have known about the low effectiveness of the vaccine for at least two years now.  It would be nice if the medical establishment would stop misrepresenting it as a highly effective solution.

04 December 2014

Religion is Government

Throughout history, religion has played a major role in how people act.  In many cases, religion is better at controlling how people act than law is.  Historically, many nations that recognized this co-opted religion as an additional method of control, creating or adopting state religions that encouraged people to act how the government wanted them to.  Religion has always been more personal than government though.  Even within one religion, members understand doctrines differently from one another.  Because religion is about personal belief, it should not be forced on someone, and as the American Revolution approached, this started to become far more obvious.  This fact was ultimately one of the driving factors in that revolution.  If you ignore the aspect of personal belief though, you may notice that religion has a lot in common with government.

First, to be completely blunt, religions are governments.  They are not secular governments, but they do govern their members.  The most important difference between religion and secular government is free will, and this is why government and religions have no business being legally connected.  When a state religion is created, it becomes an arm of the government, and the free will that makes religions what they are is lost.  This even applies to nations that adopt atheism as the state religion, banning any other religions.  As micro governments, religions actually play some very useful roles.

Religions have some power over the behavior of their members.  Now, some people perceive this as a bad thing, but it actually is not, because participation is entirely voluntary.  Where religion is free from the influence of secular government, it encourages people to be civilized of their own free will.  Religions do not wield legal power to punish their members in any universally meaningful way.  They might excommunicate members who do not follow the tenets of the religion, but in most cases, members who are expelled from a religion have shown, either through their words or actions, that they do not actually believe the doctrine of that religion (there are occasional exceptions), and thus, no serious harm is done to them.  Religions are more or less social institutions that impose social rules and punish deviation through entirely social means.  People are free to choose their social rules by choosing which religion they are a member of.  This is unique, because people have little power over their secular government beyond relocating to the realm of a different government.  Even in a democratic government, those who do not agree with the majority have little control over how they are governed.  When religion is free, each person can choose his or her own social rules, and if there is not a religion that fits, it is always possible to create a new one.  Overall, religions help keep civilization civilized, and they do it more effectively than government can.

Religions act as an additional check and balance to secular government.  Religions help unify people.  A group of people who choose to have similar beliefs is far more united than the people of a nation that is forced to follow only one religion.  Religions can unite against unjust government actions.  Religions can help encourage political dialog that can drive positive change.  Religions give the people more power and ability to unite against the government when necessary (religions even played an integral role in starting the American Revolution).  When governments choose to work with religions, the voice of the people can be better heard by the government, without the need for the people to unite against the government.  Viewed as independent governing entities that represent their followers, religions can work with secular governments to enhance communication between the government and the people it represents.

Religions also tend to be better at social welfare than governments.  Because religions cannot impose mandatory taxes, they are limited to the voluntary donations of their members, which is why some forms of welfare must be handled by secular governments.  However, religions can often get into places that secular governments cannot.  This does not just include countries in need of foreign aid that doubt the motives of secular governments.  It also includes homeless people, who do not have permanent addresses or even identification.  Secular governments just cannot afford the manpower required to effectively distribute all needed welfare, even in their own regions.  Religions often have plenty of members willing to spend some time on charity work, who can distribute welfare with less concern for accountability.  Because the funds are donated voluntarily, religions do not have to worry so much about abuse of the system.  Also, because religions typically have more limited funds, abuse of religious social welfare is rarely very profitable.  In addition, because religions are autonomous and have less accountability, they can be more flexible.  In the effort to enforce fairness and accountability, governments often inadvertently leave gaps in their social welfare programs.  Religions can fill those gaps, though perhaps not so well as the government could by analyzing the system and making adjustments.  Without religions helping with social welfare, much of the world would be far worse off than it is.

The most unique thing about religions is that participation is voluntary.  This is very useful.  First, it encourages each person to choose a religion.  Most people in the U.S. are members of some religion or other.  Of those that have no official membership, many still identify with some religion, even if it is just a generic version of some category of religions (for instance, non-denominational Christian).  Those who do not identify with any religion often still have some personal religious ideology that guides their actions.  This means that most Americans subscribe to some religious ideology that encourages them to get along with others.  Further, because religion is voluntary, people feel compelled to keep the tenets of their religions, because they made a personal choice to do so.  There is a great deal of work that governments do not need to do, because religions do it for them.  Integrity is only legally enforced when legal contracts are involved; however, most people are honest most of the time, even when it may not benefit them.  Most people don't steal, even when they know they will not get caught.  Most people overlook minor harm that was unintentional.  There are no laws enforcing most of this good behavior, and in the cases where there are, they are not reliable.  People choose to be civilized anyway, and in a large part, religions are responsible.  Religions encourage civilization and making wise choices, and because participation is voluntary, members are more likely to follow the commandments and recommendations, because they chose them of their own free will.

Secular governments have a monopoly on violence, and perhaps that is for the best.  In the past, religions that have been permitted to use violence have abused that authority a majority of the time.  Even limiting religions to using violence only on their own members is probably a bad idea.  Likewise, religions have something of a monopoly on personal belief.  Again, this is probably for the best.  Allowing secular governments to control the beliefs of people has almost always ended in disaster in the past, and forcing a large group of people to have the same beliefs has never turned out well.  Government and religion complement each other in very important ways, when they are autonomous from each other.  When they are combined, however, a major conflict of interests almost always arises, and one or the other is assimilated and becomes an engine of tyranny.

28 November 2014

Orgy of Consumerism

It is that time of year again: time to write about Black Friday.  Yesterday, my wife came across a video contrasting Black Fridays in the '80s with Black Fridays now.  The difference is huge.  Thirty years ago, there was not some frenzied rush to get to the stuff first.  There was no brawling, shoving, or snatching items from other people's carts.  In fact, it looked like any normal shopping day, except with around five times as many people.  Modern Black Fridays are actually dangerous, and people have died in the rush to get marginally good deals nearly every year for the last decade (a recent study has shown that Black Friday deals are not typically the best you can get).

The average American is Christian, according to polls.  Christians should, according to The 10 Commandments and other Biblical passages, be generous, respectful, kind, charitable, and a host of other virtuous things.  Christians should not be greedy, rude, mean, or otherwise harmful to others.  In the 2013 movie, The Purge, the government mandates one almost entirely lawless day each year (there are some exceptions, mostly with regards to the safety of high ranking government officials and ordnance or explosive weapons).  The unofficial goal of this is population control, though the government does not admit to it.  Anyhow, in the movie, normally good people do or attempt to do completely horrendous and evil things during the yearly purge.  This is what Black Friday is becoming.  People who profess to be good Christians (or members of other religions with similar values), and who act like good Christians the rest of the year, turn into evil, conniving jerks on Black Friday.

I want to compare the events of Black Friday to another kind of completely immoral activity: an orgy.  Instead of sex, the orgy that Black Friday has become focuses on greed, pride, complete self-absorption, and abandonment of the most basic self-discipline.  It is almost equally sinful, and it is not an activity that good Christians should be taking part in.  Any Christian willing to take part in such an activity, even only once a year, is a hypocrite the rest of the year.  Just like in The Purge, people's true colors come out during Black Friday.  Don't kid yourself: The person you are on Black Friday is the person you are the entire rest of the year as well.  Maybe you hide it really well the rest of the year, but during Black Friday, the truth is revealed.

27 November 2014

Pulling Your Own Weight

The idea of pulling your own weight rests on the fact that each person incurs costs for upkeep, including food, water, clothing, and shelter.  In the U.S., we might add things like internet and electricity to this, but really it comes down to the fact that every person has an upkeep cost, and someone has to pay it.  Pulling your own weight is a very old idea, but also a conditional one.  Each person in a society that is capable of doing so is expected to pull their own weight.  Of course, there have been some deviations from this, but it is largely the most common way of running an economy.

There are some occasional historical exceptions to this, but there are also some chronic exceptions.  Historical exceptions almost always involve slavery.  Greek philosophy and math were built by people who did not pull their own weight.  In fact, if they had not had slaves to pull their weight for them, we would probably not have modern technology and science as we know them.  Slavery has been common off and on throughout history.  In the U.S. and most of Western civilization, slavery (overt slavery, anyhow) has been abandoned and replaced with an economic philosophy very common to cultures that reject slavery.  This philosophy is the idea that every person must pull their own weight.  Chronic exceptions to this are very common and will never go away.  Babies, young children, elderly people, and disabled people are not expected to pull their own weight, because they cannot.  Stay-at-home mothers are treated as not pulling their own weight in many parts of modern society; however, this is a filthy lie.  They may not be producing goods, but stay-at-home mothers are doing work that is far more important than most of the work done outside the home.  Now, the slavery exception is becoming an unusual one that is likely to overturn how we view economy, probably within the next half century.

In older economies, the pull-your-own-weight ideology was a fairly sound one.  While it is possible for a small number of people to provide for a large number, the work involved has been excessive.  One slave working 16 hours a day might be able to provide for the needs of ten or twenty other people, but that slave cannot have any freedom, because there is just no time for it.  Modern technology has changed this though.  Besides finding more efficient ways of producing, it has also provided ways of replacing human labor with mechanical slaves.  Mechanical slavery is completely ethical.  The machines can work 24 hours a day, and they never need time off or personal time.  The only down time is time spent on repairs and maybe upgrades.  Experts estimate that this ethical form of slavery will replace about 50% of the human workforce by 2050.  This presents a very serious ideological problem.

Here is the problem: The U.S. economy is based on this pull-your-own-weight ideology.  We are in the process of rapidly replacing human workers with mechanical slaves.  These two things are completely incompatible.  If we replace half of the human labor force with slaves and then still expect the humans to pull their own weight, we are expecting the impossible.  Actually, we are perhaps doing something worse.  We are missing something important. What is the actual weight of a human?

The "weight" of a human is the amount of labor required to meet that human's needs.  Slavery with human slaves does not change the weight of a human; it just displaces the labor.  Some human still has to pull the weight.  Slavery with machine slaves, however, does change the weight of humans.  Replacing human labor with machine labor directly reduces the human labor required to meet the needs of humans.  This is what we are missing: As we automate more processes, we are reducing the weight of humans.  The problem is that we are not accounting for this.  We have high unemployment largely because we have reduced the weight of humans, and those humans that are still doing the same amount of work are now pulling more than their own weight.  The result is that there is not enough work left for everyone else, because their weight is already being pulled.  Unfortunately, because we have not noticed this problem, we are not distributing the results of the work appropriately.  The consequence is that some people are pulling more than their own weight, and they are getting the proceeds of that.  The people that are not able to pull their own weight are stuck without enough to survive, because their portion is being given to the people that are pulling their weight for them.

This is complicated, and it is not obvious that this is what is happening.  Further, there is a very important reason that this is happening: We have reached a point where it is actually substantially less efficient for each person to pull their own weight.  When each person's weight costs 2 to 4 hours of work per day (and, when that burden is centralized to one or two people per family), it is fairly efficient for businesses.  Each employee spends enough time working to easily keep up with overhead.  Now, however, each person's weight comes out to around 1 or 2 hours per day, or even less.  When centralized, this comes out to between 10 and 20 hours a week.  Having every employee work half time doubles the overhead, because the number of employees is doubled (reducing hours does not reduce overhead).  In addition to that, higher end jobs often have warm up and cool down time that results in unproductive hours on each end of a shift.  This means, in an 8 hour shift, if an hour at each end is unproductive, 75% of the work time is productive.  In 4 hour shifts, productivity is reduced to only 50%.  In lower end jobs this effect is dramatically lower, but in high end jobs (especially in problem solving work like engineering and science), this is a major obstacle to reducing hours (note that in these jobs, longer time between shifts tends to increase the unproductive warm up time, so 8 hours three days a week is not an efficient solution either).  This is an efficiency problem that is never going to go away.  It is just not efficient at current human "weight" for each person to pull his or her own weight.
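The shift arithmetic above can be checked directly.  This small sketch assumes one unproductive warm-up hour and one cool-down hour per shift, as in the example; change those assumptions and the numbers shift accordingly, but the pattern is the same: shorter shifts waste a larger share of paid time.

```python
def productive_fraction(shift_hours, warmup=1.0, cooldown=1.0):
    """Fraction of a shift spent on productive work, given fixed
    warm-up and cool-down overhead at the ends of the shift."""
    overhead = warmup + cooldown
    return max(shift_hours - overhead, 0.0) / shift_hours

print(f"8-hour shift: {productive_fraction(8):.0%} productive")  # → 75% productive
print(f"4-hour shift: {productive_fraction(4):.0%} productive")  # → 50% productive
```

Halving the shift length does not halve the overhead, which is exactly why spreading the same work across twice as many short shifts is so inefficient.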

Is there a solution to this?  Yes, but it is not a very popular one.  It is incredibly unpopular among conservatives, and it is at least mildly unpopular among liberals.  The solution is abandoning the pull-your-own-weight ideology.  We are quickly becoming a slave state, just like Greece was, except that we are doing it ethically.  If we do not abandon this pull-your-own-weight ideology, we are going to either let the majority of Americans starve as their jobs are replaced by machines, or we are going to have millions of Americans working workweeks so short that they are costing more overhead than the value they are generating.  Neither of these is a good long term economic plan.  One short term solution might be long vacation time, where each employee works "normal" hours, but only for 1/4 of the year, and the rest of the year is vacation time; however, that only partially mitigates overhead costs.  The most efficient solution is for some people to work 20 to 40 hour weeks at least 50% to 75% of the year, while everyone else lives off of the proceeds of that work.  Some kind of motivation would be necessary for those who work, and this would probably be complicated and difficult to do without resulting in an overprivileged working class and an underprivileged non-working class (ironic, given that historically the opposite happens).  Ultimately though, it is going to eventually become necessary, or we are going to have an epic economic crash when so many consumers starve to death that consumption drops below an economically sustainable level.

Things are changing rapidly.  Technology continues to advance faster than we can keep up with.  In the past, the impact of this has been primarily limited to the tech industry itself.  In the near future, however, this is going to have a massive economic impact.  If we are not prepared, we are going to suffer.  To some degree, the consequences are not predictable, but there is one thing that is predictable: If a large portion of human labor is replaced with machine labor, we cannot have a sustainable economy that is based in the pull-your-own-weight ideology.

26 November 2014

Overtime

It turns out that the average working American is working around 50 hours a week.  Almost 12 percent of Americans work more than 60 hours a week.  This is a problem, for several reasons.  First, we still have a high rate of unemployment, and I have said before that people working more than 40 hours a week are effectively stealing work from those working less than that (who want to work 40 hours a week).  Second, many of these workers are salaried, which means that no one is getting paid for this extra work.  In these cases, the extra hours are being stolen, without any benefit to the thief.  Some workplaces even mandate that salaried employees work more than 40 hours a week.  Hourly employees are legally entitled to extra pay for overtime hours, but this does not justify stealing work that is needed by others.  Ironically, hourly overtime costs the employer more, in addition to increasing unemployment.  This free labor and poorly distributed work is a big problem, even though it may not be obvious.  Given current unemployment as well as the 50 hour a week average of most U.S. workers, a redistribution of labor could easily solve unemployment entirely.

The first thing that needs to be done is the elimination of any unpaid labor (within an employer/employee relationship).  Salaries should only apply for the first 40 hours a week of work.  Even salaried workers should be entitled to overtime pay for any hours beyond 40 in a week.  This by itself would push businesses to hire more employees, instead of expecting free labor from salaried employees.

The second thing that needs to be done is fines for overtime.  Many states' labor laws technically forbid overtime, but they include clauses stating that overtime must be paid at a higher rate when it does occur.  Federal labor law does not forbid overtime, but it also requires a higher pay rate for overtime.  In all cases, however, salaried employees are exempt.  Federal labor law needs to remove the salaried employee exemption, and it needs to turn the 40 hour a week limit into a hard limit.  No states with a hard limit actually enforce it, and there is no set penalty for violation of the limit (though, the limit does entitle an hourly employee to refuse to work overtime without threat of retribution).  In addition to a Federal hard limit, penalties need to be set and enforced for violation of that limit.  Fines for overtime would accomplish two useful things.  First, they would encourage employers to hire more employees instead of facilitating the theft of work.  Second, they would provide a source of funding for welfare to support those who are not able to find work because that work is being stolen by other people working overtime.

A more extreme third thing that could be done is fines for employees working more than 40 hours a week.  The point of this is to combat the likely response of getting a second job among people who lose overtime hours due to the first two things.  Again, this would both discourage working more than 40 hours a week, and it would provide a source of funding for welfare when people choose to work more hours anyway.

There is a fourth thing that needs to be done, and perhaps it should have been the first.  Overtime labor laws need to be strictly enforced.  Wage theft is becoming a major problem in the U.S., and a majority of it comes from unpaid work and overtime paid at a non-overtime rate.  There is a local business where I live that has a strategy for avoiding overtime pay that happens to be highly illegal.  This business logs hours based on client projects.  Employees are forbidden from working more than 8 hours a day and 40 hours a week on any one project.  The business owners seem to think that overtime pay is only necessary if overtime is worked all on one project.  This business has employees (as well as ex-employees) who are owed thousands or tens of thousands of dollars in unpaid overtime.  At least one has tried to report the situation to the state labor board but was told that they are too far behind to do anything about it.  Evidently this situation is common across the U.S.  In many cases, employees do not know their rights, but in other cases, they fear retribution (also illegal) or state labor boards are understaffed (or, possibly, just lazy).

It is absurd that our country has set a 40 hour work week, but we have a high rate of unemployment largely because the average work week is actually 50 hours.  Enacting and enforcing laws that push this back down to 40 hours could increase the amount of available work by up to 20%, which would completely cover our unemployment with some to spare.  This would tip the economy to favor employees over employers, which would go a long way in increasing wages and reducing poverty.  Our economy needs us to eliminate unpaid overtime and dramatically reduce overtime overall.
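The redistribution arithmetic above can be sketched in a few lines.  The 6% unemployment rate used here is an illustrative assumption (the source only says unemployment is high); the 50-hour average and 40-hour cap come from the discussion above.

```python
employed = 0.94                  # assumed 6% unemployment, for illustration
unemployed = 1 - employed
avg_hours, cap = 50, 40

# share of currently worked hours freed by capping 50-hour weeks at 40
freed_share = (avg_hours - cap) / avg_hours
print(f"share of labor freed: {freed_share:.0%}")                      # → 20%

# how many new 40-hour jobs those freed hours fund, per labor-force member
freed_jobs = employed * avg_hours * freed_share / cap
print(f"new full-time jobs per labor-force member: {freed_jobs:.3f}")  # → 0.235
print(f"enough to absorb the unemployed? {freed_jobs > unemployed}")   # → True
```

Under these assumptions, the freed hours fund roughly four times as many full-time jobs as there are unemployed workers, which is the "with some to spare" claim above.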

Upper Class Blindness

In America, we do not like to see poor people.  We do not want to see homeless people.  We do not want to see people living in poorly maintained low income housing.  We would prefer not to see the hungry.  So, what do we do about it?  Evidently, we try to hide it.  Within the last year, at least 21 U.S. cities have passed laws forbidding the feeding of homeless people in public.  Some cities have replaced park benches with new models that include separators designed to prevent homeless people from sleeping on them.  Businesses have placed obstacles on sidewalks to make sitting on them painful, to deter the homeless from loitering near their stores.  In many cities, construction projects have been approved that destroy or renovate low income apartments to become classy higher income housing.  In some cases, low income housing has been replaced in response to higher income residents that live nearby, who feel that the nearby low income housing damages their property values and forces them to see things they would rather not.  In the U.S., our solution to our discomfort at seeing poor people is to create laws to drive them away.

This is a major ethical problem.  We have plenty of poor in the U.S., and the number is only increasing.  Hiding the problem is not fixing it.  All of these laws and other solutions are actually making the problem worse.  Now, hungry homeless people are being forced to starve, because they cannot be fed where they are, and they have nowhere else to go.  Tearing down low income housing is putting more people on the streets.  Perhaps the worst part, though, is that all of these efforts to hide the problem are making it less obvious, which makes it easier to ignore the suffering.

There is a solution.  It is a painful one, and the upper class will certainly be opposed to it.  It needs to be done though.  The problem has been ignored for so long that there seems to be no other reasonable way.  First, I think we need an amendment to the Constitution offering Federal protection for the poor.  No law should be allowed to persist which is designed specifically to discriminate against the poor.  When a city tries to enact a law designed to hide the fact that the city is tolerating the pain and suffering of its poor, Federal courts should have the legal backing to come down hard on that city.  Building projects designed specifically to relieve the rich from the burden of seeing the suffering of the poor should also be shut down.  In fact, the truly ethical city would deliberately zone such that every large, expensive house looked out at cheap low income housing.  The homeless shelter should be right next to the highest income mansion.  The soup kitchens should be right across from the country clubs.  Not only should it be legal to feed the homeless right out on the streets where they live, it should be encouraged to feed them in prominent locations where the rich can observe, and the right to feed them in those places should be legally protected.  The point of all of this is that the people with the greatest capacity to improve the situation should be the people who have the greatest exposure to the problem.  Yes, this will be very emotionally painful.  It should be.  Imagine the pain and suffering of those poor people.  If we think we cannot bear to feel at least a part of their suffering, we deserve to feel the full impact of their fate for ourselves.

Upper class blindness needs to be cured.  If this requires the poor to be shoved in the faces of the rich, then this is what needs to be done.  Perhaps if the rich were forced to realize what their money games are doing to our nation's poor, they would think twice about how their business deals and profit strategies might be causing harm to others.

17 November 2014

The Little Red Hen

There was once a little red hen.  She owned a wheat field.  When duck came asking for a job working on the farm, the little red hen told him that she did not need any help, because she had an automatic system for planting, watering, harvesting, and separating the wheat.  The little red hen also owned a flour mill, but when pig asked if there was anything he could do to help, the little red hen told him that she had an automatic delivery system from the farm to the mill, and the processes for milling the wheat and bagging the flour were automated as well.  The little red hen had a bread factory, but when cow asked if there was something she could do to help, the little red hen told cow that the factory was so well automated that she did not even need someone for quality control.  The little red hen had a bakery as well.  When horse asked if he could help sell the bread, the little red hen showed him rows of completely automated bread vending machines, and she told him she already had it covered.

When it came time to harvest the wheat, the automatic harvester harvested all the wheat and dumped it into a thresher, which separated the grain from the chaff.  The wheat was then poured into buckets on a conveyor belt, which carried the wheat to the mill next door.  Machines at the mill dumped the buckets into the milling machine, and the flour cascaded down a funnel into bags.  Another conveyor carried the flour next door to the bread factory, where it was dumped into huge mixers along with water and other ingredients, then divided into loaves, baked, bagged, and sent to the bakery on yet another conveyor.  A complex mechanical system hidden behind the vending machines filled each one with bagged loaves of bread.  The little red hen then waited for customers to buy her bread.

After a few hours with no business, the little red hen looked out the front window.  Standing outside, across the street, stood duck, pig, cow, and horse, looking longingly at the bakery.  The little red hen walked outside and called across the street, asking why they were looking but not buying any bread.  One by one, each of them explained that they had been unable to find any jobs, so they had no money.  They just could not afford the bread.  The little red hen stuck up her beak and went back inside.  She did not need friends who were poor, when she had so much.  If they did not have any money, then they would not have any bread.

Duck, pig, cow, and horse lived on the streets until they starved to death.  Only the little red hen was left in the town, but she was content.  She had plenty of bread.  Her lack of friends did not bother her.  She was rich, so she did not need any friends.  Her money and her property could be her friends.  At least, this is what she told herself when she started feeling lonely.


(In case someone thinks that this story is about the evils of automation, read my opinion on that subject: Dehumanizing.  Automation is not evil.  People who succumb to greed are what is evil.)

Universal Pre-K

http://national.deseretnews.com/article/2750/navigating-the-research-on-universal-pre-k-overhyped-or-silver-bullet.html

I just read this, and I see so many flaws in the various arguments that I cannot resist writing about it.

First, the argument is about whether the Federal government should devote several billions of dollars to make preschool part of the education system.  There is some evidence that poor children are likely to make more money and are less likely to get involved in crime when they grow up, if they attended a preschool.  There is also, however, significant evidence that the cognitive benefits of preschool disappear within 2 years of starting elementary school.  The cost to the country of doing this is around $15 billion.  One side of the argument claims that universal pre-K is the best way to improve education and the situation of the poor.  The other side argues that the benefits are primarily temporary, and the cost will be more than the return.  At this point, I don't actually care who is right.  Perhaps we need more research, preferably done by people with mixed opinions, to avoid confirmation bias.

The first problem with universal pre-K is the cost.  Our nation is already heavily in debt, and if we cannot prove that the investment will pay off, perhaps we should not do it.  The second is reach.  While the evidence shows that poor children can gain substantial long term benefits from pre-K, there is no conclusive evidence that middle and upper class children benefit at all.  Those supporting universal pre-K say that it will not be taken seriously if it only targets poor people, and they cite Head Start as an example of this.  While this is probably true, it is, perhaps, not a valid excuse for spending many times what is necessary.  What I hear them saying is, "We need to spend $15 billion to get people to take this seriously."  That money would probably be more effectively spent as a bribe to get people to pretend to take it seriously than it would be spent offering preschool to those whom it is unlikely to benefit.

There is also a lot of mudslinging going on in this debate, which makes it very difficult to determine what is fact and what is opinion.  There is one study that "was likely underfunded" (yeah, I don't know what that is supposed to mean either) that showed kids who attended pre-K actually did worse in math and language than kids who did not.  The "fact" that it might have been underfunded is used to discredit it.  Likewise, another study showed impressive long term benefits from pre-K, at a price of $90,000 per child.  While this study may have been valid, the price tag for those results is just not an option.

One theory as to why benefits are observed is that preschool provides more social interaction than the home, improving the social skills of the children at an age when it makes a bigger difference.  Perhaps (though it is not stated), middle and upper class children have more opportunities to gain social skills at 4 years old than lower class children?  If this is not true, then this theory does not account for the discrepancy between lower class children and middle/upper class children.  (Supposedly, poor families are actually having fewer children than middle and upper class families now, so maybe social interaction at home can have the same benefits, so long as there are several children.)  Regardless, if this is true, we don't need to bother spending $15 billion extra on this.  It is already proven that the benefits of the learning go away fairly quickly.  If the social interaction is the key, then we could eliminate low income pre-K programs like Head Start and instead provide government funded day care, and it would be far cheaper.  Day care provides a very similar social setting, and day care workers don't cost as much as trained educators.  In fact, without the learning part attached, and presented as an aid to poor families where both parents work, it would be taken far more seriously than a preschool program justified primarily by limited and unreliable data.

One proponent of universal pre-K asks a question that is stupidly obvious.  Discussing some of the problems with programs specifically targeting poor people, Steven Barnett asks, "Why would we do that?  Why not just make it open to everyone?"  The painfully obvious answer is $15 billion.  I guess he just didn't think of that one.  In addition to this, there are multiple claims that the $15 billion to $20 billion already being spent on low income preschool programs is being spent poorly.  Not everyone agrees with this, but given the state of the rest of our education system, it is hard to believe that significant improvements are not possible.

Ultimately, the situation is complicated.  Obama and other proponents of the idea seem to be prepared to throw huge amounts of money at it, just in case it works.  There is evidence that it could be beneficial, but there is no evidence that it will be.  None of the most influential studies mirrored the reality of the situation well enough to actually trust.  The less influential studies all seem to be affected by many uncontrolled factors, as there is really no consensus between them.  Studies targeting the middle and upper classes are unlikely to ever be conducted, because no one seems to care.  I see this as a giant $15 billion experiment that will affect children all around the U.S., to see whether universal pre-K will help them or harm them.  Maybe the potential for harm is not that high, but the price tag certainly is.  $15 billion is enough money to pull over 1 million Americans out of poverty entirely.  This would dramatically reduce the need for a preschool system designed to help poor children, and it would likely do far more for them than preschool ever could.
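
The "over 1 million Americans" figure is easy to sanity check with a rough back-of-the-envelope calculation.  The per-person amount below is my own assumption for illustration, based on the 2014 federal poverty guideline for a single person being roughly $12,000:

```python
# Rough sketch: how many people could $15 billion lift out of poverty?
# The per-person figure is an assumed annual shortfall, roughly the 2014
# federal poverty guideline for a single person; it is not from the article.
budget = 15_000_000_000
assumed_shortfall_per_person = 12_000

people_lifted = budget // assumed_shortfall_per_person
print(people_lifted)  # 1250000
```

Even if the real per-person shortfall were somewhat higher, the result comfortably clears the one million mark.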

I don't care who is right in the debate over benefits, but I am opposed to spending huge amounts of money on things that have such a high risk of failure.  Instead of arguing over what the data means, maybe we need to spend a fraction of that money doing more research, where the situations are closer to what they would be if universal pre-K were made available on the proposed budget.  I might not care about who is right, but I certainly do not want our government to gamble even more money on huge social experiments that have a limited probability of paying off.

10 November 2014

Taco Bell App

Taco Bell has come out with an ordering app that allows customers to use their smartphones to put in an order and pay for it.  As the customer approaches a Taco Bell location, the app asks if they want the restaurant to start preparing their food.  This process can involve almost no human contact (I suppose someone has to pass the food out the window, but ordering and paying is entirely electronic).

As this becomes more popular (Taco Bell is not the first to try this, and it most certainly will not be the last), a lot of jobs are going to be lost.  Eventually, most drive-through orders will not require a cashier, because most of them will already be ordered and paid for before the customer even enters the drive-through.  This will allow the drive-through cashier position to be combined with another position.  It is also likely that the added convenience will reduce the need for inside cashiers.  Eventually this is going to spread to all fast food restaurants, because otherwise, they will not be able to compete.  This is going to add up to a lot of lost jobs.

It is about time!  Fast food restaurants severely underpay their employees.  They claim that they cannot afford to pay more.  I have argued this before, and I will repeat it again: A business that cannot pay employees enough to survive on is not worth existing.  Work that is not worth a living wage is not worth doing at all.  Pay that is below a living wage is just plain not sustainable.  A business that cannot pay a living wage is not profitable enough or valuable enough to justify its own existence.  Fast food is practically the bottom of the barrel (ok, agriculture is far worse, but also far less prominent).  Current Federal minimum wage, which most fast food places start at, generates well under the poverty level in income, even full time.  One of the most effective ways of reducing costs (so that employees can be paid fair wages) is automating processes and eliminating unnecessary employees.  Food assembly is hard to automate (though, certainly possible).  Order taking, on the other hand, is now very easy to automate.  It is the low hanging fruit.  It is nice to see that fast food is finally figuring this out.

There is a catch.  The most common response to increased profits through automation is faster expansion and better shareholder payouts (or, even worse, increased CEO salary).  If Taco Bell chooses to take this route, then not only is it not worth existing, it is actively worth destroying.  Why?  It is already vastly underpaying its employees.  It should take this opportunity to make its employment system more sustainable by raising wages.  Admittedly, eliminating maybe two or three employees will not save enough to pay all of the rest a living wage.  An effort, however, would be nice.  It would show that they care about paying their employees fairly.  If, instead, they spend the profits on something else, then they are showing that they could not care less about their employees.  If this is the case, then the business does not deserve to exist, and additionally, it deserves to die so society no longer has to pay the costs of its freeloading on our unpaid labor (if it pays less than a living wage, then it is not paying for all of the labor it is getting).  I hope they do the right thing, but I am not holding my breath.

Religion in Politics

Around 49% of Americans seem to believe that it is not only appropriate, but obligatory for churches to be involved in politics.  While it is illegal, according to IRS restrictions for non-profit tax status, for churches designated as non-profit organizations to support specific political candidates, it is not illegal for churches to support specific ballot measures, initiatives, or even political movements.  While there has been some resistance to churches having any involvement in politics, the percentage of Americans opposed to church involvement in politics is far lower than the percentage for.  In fact, the percentage of Americans who support removing the non-profit restriction for supporting specific candidates is even growing.

Mixing politics with religion has been a controversial topic for almost a century, however, there was a time when few questioned it.  The American Revolution was driven, in a very large part, by Protestant preachers in the colonies.  The religious view at the time was that government was ordained of God, and only He had the right to change it.  There are even Bible passages that lend a good deal of support to this argument.  Many preachers, however, carefully studied the passages often quoted to support this idea, and they found an interesting loophole.  Most of the passages stated or implied that government was ordained of God to serve the people.  They reasoned that a government that does not effectively serve the people is not a legitimate government, by that standard.  By refusing to give the colonies representation in Parliament, the British government was not doing its job of serving its citizens in its colonies.  Many preachers explained this to their congregations, showing that even God could support a revolution against a tyrannical government, because, by His standards, a government that does not properly serve its people is not a legitimate government.  The British government did serve the people of England properly, however, it did not serve its citizens in the colonies properly, thus it was not a legitimate government over the colonies.  Ultimately, this broke down the barriers preventing the people from rebelling against Britain, and the result is that the U.S.A. is now a sovereign nation in its own right.

Our Founding Fathers were very wary of religious influence in government and government influence in religion.  Some groups of colonists had come to the Americas specifically to escape religious persecution, and even much of the majority that came primarily for economic freedom and opportunity also had religious freedom in mind.  At the time (and even today), Britain had a state religion, which certain government officials were required to be members of.  The Church of England was literally owned and controlled by the British government.  Certain other religions were banned in Britain (often depending on the mood of the current monarch).  Many other European countries also had state religions as well as specific religious bans.  Punishments for violating bans or even being a member of a religion not endorsed by the state ranged from public persecution to death, depending on the religion and the current ruler.  While Protestantism was the dominant religion in the colonies, there were still some Catholics and Anglicans.  In addition, Protestantism was fractured into a large number of different denominations.  Almost without fail, any state religion would reduce a significant portion of the population to second class citizens.  This did not fit well at all with the philosophy that people should be allowed to worship as they saw fit.  The result of this was strict protections for religious freedom, along with strict condemnation of any laws that might favor one religion over another.

So now we get to a modern application of this knowledge.  The first important thing to remember is that religion and politics are strongly related.  Government is expected by the people to enforce certain moral expectations.  To a large degree, these moral expectations come directly from religion.  Rights that are supported by all religions are often called "human rights" and are frequently turned into laws called "civil rights."  Even entirely secular laws designed to improve the national economy (including tariffs and such) are based on the Biblical principle that government is ordained of God to serve the people.  This "separation of church and state" idea that religion and government should have nothing to do with each other is both wrong and impossible.  So long as religion is common in the U.S., it will and must have an impact on government.  Likewise, government will always have an impact on religions within the region it governs.  The Constitutional protections necessary to ensure religious freedom make these influences largely indirect, but they cannot be reasonably prevented.

Back to the question: Should churches be involved in politics?  Separation of church and state as an argument against it is not valid.  While direct influence can be eliminated to a large degree, indirect influence cannot.  Churches in the U.S. have a historical precedent of political involvement.  Our Founding Fathers, who drafted the Constitution, never spoke out against this practice, though they were fully aware that it existed.  It would thus be unreasonable to assume that they believed churches should not be involved in politics.  Perhaps they were wrong though, and maybe we are more enlightened.  Of course, this attitude of assuming that past generations were stupider than we are is a strong red flag.  This is an egotistical assumption that is often wrong and will cause more trouble than it is worth.  Instead we should look at the relationship between government and religion.

What is the appropriate relationship between government and religion?  Many people would say that no relationship between the two is appropriate.  This argument is impossible to support though.  There is no way the government can interact with religion without becoming involved with it.  Even widespread prohibition of religion is a government relationship with religion (and in fact, it is the equivalent of establishing a mandatory state religion).  If the government ignores religion entirely, its relationship with religion will come through the people.  For example, despite the fact that it is unconstitutional to restrict public office to members of a specific religion, Kennedy's opponents used his Roman Catholic religion against him in their campaigns.  So long as religion exists, there will be a relationship between religion and government, and if it is eliminated by government edict, that is, in and of itself, a relationship between religion and government.  It is almost pointless to discuss the question of whether such a relationship should exist, because it is impossible for it not to exist.  That said, in a democratic government where some of the citizens have religious beliefs, it is entirely appropriate for such a relationship to exist, because the people the government represents include religious people.

Government involvement of the general public is all about beliefs.  A person who supports unregulated abortion typically does so out of a belief that the woman should be free to choose.  A person against unregulated abortion may choose to be against it out of a belief that killing even an unborn child is murder.  One of the most controversial topics that churches have gotten involved in is same sex marriage.  Those who support it believe that homosexuals are otherwise being deprived of rights that are freely available to heterosexual Americans, while those against typically believe that homosexual acts are sinful and may ultimately result in the wrath of God.  It does not matter whether the belief comes from religion or supposed logic; neither position really has a strong argument, and it all comes down to opinion and personal beliefs.  One group may choose to subscribe to a specific set of beliefs while the other may choose beliefs à la carte, but ultimately it does not matter.  An American citizen has the right to representation, regardless of where they choose to get their beliefs.  So long as some of those beliefs may be obtained from religion, religion is an integral part of government.  Now, this does not mean that we should strip the Constitution of its protections for religion, but it is something that anyone arguing about the propriety of religious influence in government should be aware of.

During this election season, a much larger number of churches supported specific political candidates than in the past.  While this is commonly described as illegal, it is technically not illegal for churches as such.  What is illegal is for a non-profit organization to support a specific candidate, and since most churches in the U.S. are registered as non-profits, it is illegal for them to support specific political candidates.  Of course, this is actually far more complicated than it seems.  This particular law is part of IRS policy for non-profit organizations.  It is also legally questionable.  While it is not addressed specifically in the Constitution, many believe that it could qualify as persecuting churches to prohibit them from supporting specific political candidates, and the specific argument is that it infringes on freedom of speech.  While this argument does seem rather sound, it still has a great deal of opposition.  The opposition's primary argument is the "separation of church and state" argument, which we have already established does not apply to this kind of situation.  Ultimately though, it may not matter.  The 1,600 preachers who have supported specific candidates from the pulpit will likely not face any trouble from the IRS.  The IRS policy is primarily in place to prevent attempts to create non-profits designed as campaign engines for specific candidates.  Churches, even when supporting specific candidates, are not specifically designed to do this.  Churches typically support candidates that agree with their beliefs and that will support their morals in government.  This is little different from supporting specific legislation on a state level ballot (which is entirely legal).  Further though, the primary goal of these preachers is to gain the ire of the IRS, so they can push a case through to the Supreme Court, in hopes that the IRS non-profit policy will be overturned, at least with reference to religious organizations.  So far, the IRS is not biting, and it may never bite, given that these churches are not violating the purpose of the policy.

My opinion on this is simple.  I believe that churches have every right, and in fact, they may sometimes even have a moral obligation, to support or oppose specific legislation according to the beliefs they teach.  I am ambivalent about the issue of churches supporting or opposing specific political candidates, however, I have a hard time seeing any difference if a church is consistently supporting candidates that will represent their moral beliefs.  I do think that churches with non-profit status should not be allowed to make monetary campaign contributions for specific candidates.  This could easily be seen as a misuse of tax exempt non-profit funds.  I suppose, however, I would not be opposed to a specific exception allowing campaign contributions, so long as they are reported and taxes are paid on the money contributed, but these contributions should be entirely transparent, so their followers know what is going on.  (Or, perhaps even better, they could organize a contribution event, where a church official collects and contributes funds for specific campaigns, but where the funds never become the legal property of the church.  This would be sort of like how for-profit businesses have charity events, soliciting and collecting contributions for some charity.)

Overall, trying to separate politics from religion is a fruitless task.  Religion defines the beliefs of many people, and the people are supposed to define the government.  This means, in a large part, religion defines government.  Attempting to completely eliminate the influence of religion on government is impossible, and if history is a good indicator, even trying is a sign that the government is starting to crumble.  Democratic politics and religion are both belief based things.  This is, in a large part, why religious freedom needs protection from the government.  Trying to take the religion out of politics is essentially saying that a majority of the population is not qualified to take part in government, because they are "tainted" by their religious beliefs.  This is just not how a democratic government operates.

07 November 2014

What Americans Care About

The job description of the U.S. government is to serve the people, largely by doing the will of the people.  It is a Republic, and a Democratic Republic at that.  What this means is that the government represents the people and is led by people who are democratically elected to represent the people.  Now that this is out of the way, let us consider what Americans actually worry about.

According to Pew Research, the second biggest concern of Americans is religious hatred.  Obviously, this plays directly into religious freedom, and it is, in fact, one of the most important elements of religious freedom.  Religious hatred is what ultimately caused our Founding Fathers to be so explicit in protecting religious freedom and in prohibiting preferential treatment of any religion by the government.  As the second biggest concern, we should see a lot of discussion in Congress over this issue, given that it is the second most important thing Americans seem to care about.  Sadly, Congress is more worried about things that Americans seem to find trivial.  This is not the most disturbing part of the situation though.

The research indicates that the biggest concern of Americans is income inequality.  This subject has gotten some attention in Congress, with the most prominent result being a health care law that forces those at the middle and lower ends to spend a larger percentage of their income on health care insurance than those with much higher incomes.  Technically, this is making income inequality worse, not better.  While this subject seems to come up a lot in Congress and in Presidential press conferences and such, little is being done to actually address it.  This is currently the most important issue to Americans, and Congress cannot be bothered to give it serious consideration long enough to actually do something about it.  Instead, Congress is doing things like harassing our (admittedly poor) education system, repeatedly forcing it to adopt untested techniques to improve test scores.  Income inequality has consistently been proven to affect education outcome more than any other factor.  Income inequality is the primary reason that a significant percentage of Americans cannot afford health care insurance (and making laws requiring them to purchase it does nothing to fix that problem).  Income inequality is also very strongly linked to our recent and current economic problems.  It also seems to have strong links to crime as well.  Income inequality also has links to many types of self destructive behavior (drug and alcohol abuse, for instance).  Income inequality is closely related to a vast majority of the big problems Congress keeps failing to fix.

There is a field of medicine that is starting to get more attention recently called "functional medicine."  Traditional medicine treats symptoms.  If there is pain, pain killers are administered.  If there is skin dryness, lotions are administered.  If there is depression, medications designed primarily to make a person feel good are administered.  The catch with a vast majority of these treatments is that they treat the symptoms, but they do not treat the cause.  Chronic headaches, which are often treated with ever stronger pain killers, are typically caused by something that can be treated to eliminate the problem entirely.  Skin dryness, even chronic types, can often be cured by functional medicine, when normal dermatologists would prescribe a lifetime treatment of lotions and moisture buffers.  Instead of treating symptoms forever, functional medicine aims to cure the underlying cause of the symptoms, eliminating the symptoms for good.  Now, apply this to income inequality.

Income inequality is a known cause of many of the problems we currently face.  It is a likely cause for many other problems that we have either not researched or have not gathered enough supporting evidence to constitute proof of a causal relationship with.  The evidence indicates that this one thing could solve nearly all of the big problems Congress has been trying to fix over the last half century.  It is also currently the biggest concern of Americans.  Congress should be tackling income inequality head on, instead of skirting around it trying to cover up the symptoms.  Congress needs to stop flirting with special interests and start taking care of its primary responsibility: The people it is sworn to serve.

04 November 2014

Time is Not Money

"Time is money."  This phrase is used so frequently that it has become a cliché.  Clichés like this one are so overused that they quickly become annoying.  The worst part about this one, however, is that it is patently false.  It may be possible, given the right circumstances, to exchange time for money.  On its own though, time is far more valuable than money.

Time is flexible.  Money is not.  Time can be spent on any number of things, including love and friendship.  Money cannot buy friends or love (another common cliché that contradicts the time cliché).  Time can be spent on many things that money cannot.  In addition, spending time can benefit both parties.  Trading time for money always benefits the employer more than the employee (otherwise it would be unprofitable).  Time can be spent on intangibles, like worship, while money cannot.  In addition, anything worth spending money on requires time to benefit from.  Even food takes time to eat.  Time is an incredibly flexible resource.  Money is an extremely limited resource.  Time is, in a sense, more raw.  It can be turned into money.  It can also be turned into a huge array of other things that money cannot be turned into.  Once time is turned into money, all of the other possibilities are lost.  So, time is far more valuable than money.

Knowing this, why are we so willing to give up more of our time in exchange for money?  Many Americans take work with them wherever they go, even when it is not a required part of the job.  Spending part of vacation time working has become a very common practice.  Even worse, those who do it the most are on salary, not hourly pay.  They are not actually being paid any more for their extra work.  Many people who become extremely rich through their "hard work" are trading time for money on a grand scale, and many of them are miserable or at least unhappy without the distractions of work.  Consider the relationship between cash and gold.  The value of gold typically rises at least as fast as inflation.  The value of cash diminishes over time, due to inflation.  Time is more flexible than even gold, but like gold, its value rises with inflation.  Even as the value of money decreases, the value of time increases.  This is not just a function of rising wages (which have actually risen slower than inflation over the past 60 years).  As travel becomes easier and cheaper, as knowledge becomes more readily available, and as more activities become available, time becomes ever more flexible and thus valuable.  Not only is time more valuable than money, its value is rising.

This raises an important question:  When people go into fields where their work is very valuable, why do they put up with long hours at higher wages?  I am a computer scientist.  Entry level salary in this field pays around $65,000 a year.  Some deluded companies expect 50 hours of work a week for this pay (there are plenty that will pay more for only 40 hours a week).  This pay is enough for a family of 5 or more to live fairly comfortably in most places in the U.S.  In fact, my family of 7 could do perfectly fine on half that.  That much money per year would be nice, but time is more valuable than money, even at that pay scale (in fact, it seems the higher the pay, the more valuable the time becomes, because less time is required to be traded to make enough money to be comfortable; also, more money opens up even more options for spending time).  Instead of desiring additional pay for higher quality work, we should be desiring additional free time.  If time is more valuable than money, then we should be willing to trade only as much time as is necessary to get enough money to be comfortable.  We should not be willing to squander all of our valuable time in trade for more money than we will ever need.  That is the definition of waste.  Working more than is necessary is literally a waste of our valuable time.

The aggressive, and rather excessive, tax systems in most European countries have driven this point home.  People who make too much money end up giving a majority of it to the government.  Instead of raising wages, at a certain point, employers increase vacation time and reduce hours, because employees will not accept promotions that will ultimately not benefit them significantly.  Those near the top of the pay scale (doctors, lawyers, and some tech industry workers) only work around 6 months out of the year, and sometimes even less.  They get copious vacation time, and promotions for this class of workers generally involve little or no salary increase, but instead involve added weeks of paid vacation time.  The time thing aside, this has some other pretty great economic benefits as well.  Overall though, wealthy Europeans clearly see that time is more valuable than money, though it may have required oppressive taxes devaluing money to open their eyes.

A major peripheral benefit of valuing time more than money is economic.  Employees who value time more than money will prefer reduced hours and increased vacation time to pay raises, once they earn enough to be comfortable.  Reducing the amount of time employees work will ultimately increase the number of jobs available.  A 30 hour work week would add one job for every three full-time employees, a 33% increase in the number of available jobs.  A 20 hour work week (many workers in the medical, legal, and tech industries make enough that they could halve their hours and pay and still live better than the vast majority of Americans) would double the number of full-time jobs.  In addition to all of this, shorter work periods would not significantly reduce productivity in most jobs.  Working eight hours a day, five days a week is quite tiring; even eight hours in a single day is tiring.  After around five hours, productivity drops significantly as employees begin to experience fatigue.  Fatigue also increases the incidence of mistakes and accidents, which can lead to negative productivity.  Likewise, after three or four consecutive eight-hour days, fatigue begins to set in, and not just physical fatigue but mental fatigue as well.  This further reduces productivity and increases potentially costly errors.  Reducing the hours (while maintaining the total pay) of more productive employees will ultimately result in higher quality and more productive work.  On top of all of this is morale, which has repeatedly proven to have a bigger impact on productivity and work quality than nearly anything else.  Economically, valuing time more than money is both healthy and profitable.  Not only is time more valuable than money, but treating time as more valuable can lead to higher profits and thus more money.
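The job arithmetic above can be sketched in a few lines of Python (my own illustration, not from the original post), under the simplifying assumption that total labor hours stay constant:

```python
def job_increase(old_hours, new_hours):
    """Fractional increase in jobs when the standard work week drops
    from old_hours to new_hours, with total labor hours held fixed."""
    return old_hours / new_hours - 1

# 40 -> 30 hours: one new job per three existing ones (~33%).
print(f"{job_increase(40, 30):.0%}")  # 33%
# 40 -> 20 hours: the number of full-time jobs doubles.
print(f"{job_increase(40, 20):.0%}")  # 100%
```

Of course, real hiring would not track this formula exactly, but it shows why a shorter standard week translates directly into more positions.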

As our tax system is not as oppressive as those of most European nations, we cannot rely on it to force the truth down our throats.  Businesses in the U.S. are not smart enough to recognize that increased free time is more valuable to them and their employees than pay raises, so we cannot rely on businesses to change.  Our government is stuck on what was initially a temporary 40 hour work week, and it is unlikely to ever change that on its own, even though a shorter week is quickly becoming the only viable option for long term economic recovery.  The only way companies will reduce time worked, instead of paying more money, is if workers refuse to settle for anything else.

Most companies will not fire a person for refusing a promotion.  The next time a promotion is offered, try negotiating for time instead of money*.  It probably will not work the first few times, because business tradition in the U.S. does not recognize time as a negotiable commodity.  If your boss will not negotiate time, you can refuse the promotion outright, or, if the position really is one you want, the attempt to negotiate time may at least increase the size of the raise that is offered.  If enough people do this with enough different employers, businesses will eventually start to take notice.



* Note that hourly employees should do the math first, to make sure that the requested reduction in hours will still earn enough money at whatever wage increase is settled upon.
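For hourly workers, that math is a quick check like the following (the wage, hours, and budget figures here are made up purely for illustration):

```python
def weekly_pay(hourly_wage, hours):
    """Gross weekly earnings for an hourly employee."""
    return hourly_wage * hours

current  = weekly_pay(25.00, 40)   # $1000.00/week today
proposed = weekly_pay(27.50, 35)   # hypothetical 10% raise, 5 fewer hours
needed   = 900.00                  # assumed weekly budget

# The trade only works if the new schedule still covers the budget.
print(proposed, proposed >= needed)  # 962.5 True
```

If the proposed pay falls below the budget line, negotiate a larger wage increase or a smaller reduction in hours.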