13 December 2018

Is Welfare a Moral Obligation?

The left claims welfare is a moral obligation.  The right says it is not.  Ignoring the mountains of scripture from nearly every conservative religion explicitly stating that welfare is indeed a moral obligation, and addressing this from a purely secular point of view*, is welfare a moral obligation?



(*Technically speaking, from the purely secular point of view, morality is purely man made, and thus is completely artificial.  But let's pretend morality is a legitimate thing, even from a purely secular point of view.)

The biggest argument in favor of morally mandated welfare is that if a community is producing enough for everyone, then everyone should have enough.  This is a poor argument, because it is merely a claim without backing evidence.  Another major argument is that poverty causes suffering and suffering is wrong, thus to avoid this wrong, poverty must be eliminated if possible.  This is also a poor argument, because it rests on a premise, that suffering is wrong, which is itself a claim without evidence.  The problem here is that all of these arguments are based on assumptions that have not been universally established.  Is it truly right that because a community produces enough for everyone, everyone deserves a share?  What even is this "deserving", and what have the people of the community done to acquire it?**  And what about suffering?  Is suffering truly wrong?  The common consensus seems to be that pain is bad, but without pain we would not know when something was harmful to us.  The fact is, none of these arguments are truly secular.  They are all religious in nature.  They have no foundation in logic, only in the personal beliefs of individuals.  There is some wisdom in thinking this way, but if logic is not applied at some point, this is a purely religious way of thinking, even if it does not necessarily involve a higher power or a formal religion.

(**This highlights the problem with a purely secular point of view on morality.  Who gets to decide who deserves what?  Who gets to decide what is fair and what is not?  Is it a king?  Is it a religious leader?  Is it the wealthy elite?  Is it the majority?  Keep in mind that this is the same majority that tolerated or even endorsed slavery for so long in the U.S.  If the majority gets to decide morality, then slavery was legitimately not wrong until the majority decided otherwise.)

From the purely secular point of view, there is one thing that matters.  That thing is survival.  Every species on Earth has evolved with the exclusive motive of survival.  Humans, in this quest for continued survival, evolved sentience.  This gives us the power to reason logically, and it gives us the power to think in complex ways that can supersede instinct.  This is true as individuals, and as individuals we thus have the capacity to override this evolutionary motive of survival.  As a species though, survival is still our greatest instinct.  There may be individuals who don't want to survive.  There may be individuals that would prefer the human race not be in a position of dominance over nature.  As a collective though, we are no different from bacteria, plants, or other animals.  We have a motivation to survive, and because of our superior intellect, we have a much greater capacity for survival.  How is this relevant?  It is relevant, because in the evolutionary quest for continued survival, we evolved an instinct for cooperation.  We evolved this instinct because cooperation improved the ability of the species to survive.  Further, studies over the last few decades have found similar cooperative instinct in many primates, especially in the more intelligent species. From a purely secular point of view, all that matters in this argument is whether or not welfare improves the ability of the species to survive.

Now we have a starting point.  The question of whether or not welfare is a moral obligation from a purely secular point of view can be rephrased: Does welfare improve the ability of the human race to survive?  Natural evolution takes place as a function of survival of the fittest.  Essentially, nature kills off those who are poorly suited to survival, reducing bad genetics and concentrating good genes.  This results in species slowly becoming more capable of survival over time.  Does welfare help to eliminate bad genes?  Obviously not.  Welfare is pretty indiscriminate in its effect of improving individual survival.  Welfare also does not hinder reproduction in those with inferior genes.  Given this, it is probably important to ask whether welfare increases the survival of bad genes, on average.  To answer this question, we must ask another one: Do poor people who need welfare have worse genes on average than everyone else?  If so, then welfare may actually be harming our ability to survive as a species.  Thankfully, the evidence does not support the idea that poor people have worse genes on average.  Physically, poor people are often more fit than middle class and rich people, because poor people are more likely to do a lot of physical labor.  This is not a genetic trait though, so it does not matter.  Poor people do tend to be less healthy than middle class and rich people, but this can be tracked down to poor eating habits that are the result of very limited income and lack of time for preparing healthier meals.  This is also not a genetic trait.  Poor people tend to measure as less intelligent and less well educated, but again, these can be traced back to poor health and lower quality public education for the poor.  In fact, the only genetic difference the poor could reasonably have, given the evidence, is actually better genetics, because poor people with serious genetic diseases are more likely to die before reproducing due to lack of quality medical care.  Yes, our poor may actually be genetically superior, which should honestly not be surprising, given that genetic disease has consistently been more common among nobility and royalty for most of human history (probably more due to inbreeding than quality of medical care, though medical care almost certainly played a role).  It is clear that even in the worst case, welfare does not increase the survival of bad genes on average.

Let's consider what would happen if welfare was eliminated.  Obviously, a lot of people would die from starvation, exposure, and medical problems.  Food is the easiest necessity to get, so discontinuing food stamps would not cause everyone relying on them to starve, but food banks, religious charities, soup kitchens, and others would easily be overwhelmed by the increase in need, and a lot of people would starve to death.  The current state of subsidized housing is pathetic, reaching only a small fraction of the poor.  That means that kicking out everyone who could not pay full price would not affect the majority of the poor.  Still, a lot of newly homeless people would freeze to death in the winter.  We already have problems with medical welfare, because it leaves such a large gap between who is able to afford their own medical care and who qualifies for medical welfare.  Getting rid of medical welfare entirely would significantly impact the bottom of the lower class.  Many people would die due to lack of care.  Many would survive with permanent disabilities.  Many would survive but spend a significant amount of time sick.  In addition, many older retired people would die fairly quickly without their medical care and expensive medications.  So, how would all of this affect our ability to survive as a species?  Well, the lower class covers a significant portion of the labor required for producing things necessary for our survival.  If they are dead or otherwise incapacitated due to starvation or illness, we are going to have a serious problem.  The death toll would likely be so large from a change like this that we would not have the time or manpower to clean up all of the bodies, which means decaying sick corpses everywhere spreading disease.  Even the parts of the lower class that were able to get by without welfare would be getting sick from this.  The middle and upper classes would have a difficult time surviving without the food, clothing, and other necessities provided largely by lower class workers.  Retail stores would not be able to keep up with demand on either side, because on the supply side there would be shortages, and on the service side most of their employees would be dead or sick.  I could go on, but I think it is pretty clear that without welfare, our immediate ability to survive would be dramatically reduced.

We also need to ask what other things poor people provide.  It turns out they don't just provide low end labor.  More and more people in the tech industry are coming from poor areas.  This is true of other industries as well, though not, perhaps, as much.  The tech industry has always struggled to find sufficient skilled labor, so losing this source would certainly harm it.  Evidently, the lower class is actually a source of very intelligent people for the middle and upper classes.  In short, poor people provide a lot of value that improves our collective ability to survive.

In addition to that, poor people provide a lot more than just labor to the economy.  For the economy to function, there must be people to buy stuff.  This is part of the reason general population decline is harmful to the economy, even without considering the effect retired old people who need care have on the economy.  If we suddenly lost most of our poor, our economy would almost certainly crash very badly, and then everyone would be poor.  Our economic health depends on our poor.

The answer is that the left is right, even from a purely secular point of view, because the survival of our poor is critical to the survival of everyone else.  If we don't support our poor, our ability to survive as a species will be reduced considerably.  Most other social species seem to recognize this.  The only time most species that are similar to humans will hoard resources for the "upper class" is when resources are so scarce that the only choice is either to distribute them and have everyone slowly starve to death, or to give them all to a few in the hope that those few can survive through the scarcity and repopulate.  Humans seem to be the only social species that will allow its lower classes to live in poverty even when there is plenty.  In short, yes, we have a moral obligation to make sure everyone is provided for.  The conservative religious argument has been clear for over 2,000 years.  Even the purely secular argument is well backed though.

06 December 2018

Post-need United Order

For the non-LDS (aka non-Mormon) reader, the United Order is a wealth distribution system/social order designed by Joseph Smith Jr.  He took inspiration for the idea from the Bible, where Christ's disciples attempted to implement a very similar system after Christ's death.  Thus far all attempts at the United Order have failed.

The United Order is, in its initial form, a need based wealth distribution system.  Some people have compared it to communism, though many LDS people balk at this description, claiming that it is different because it is voluntary and communism is not.  (This is actually not true.  Marx, the inventor of communism, specifically described it as something that the people imposed, not as something a government imposed on them.  In addition, during Joseph Smith Jr.'s time, he told his followers that God had said they would face severe divine punishment if they did not participate in the system.  Strict Marxian communism is voluntary, while the United Order was imposed as a commandment from God.)  In the United Order, as it was practiced by Latter-day Saint communities, a religious leader (typically a bishop) was in charge of all of the community's resources.  Due largely to the timing and location, the United Order was almost exclusively practiced in agrarian communities.  The bishop was in charge of a warehouse (the origin of the term "bishop's warehouse", now used mostly in reference to LDS welfare buildings where food is distributed to those in need) where all of the community's production was stored.  Resources were then distributed on an as-needed basis.  This ensured that United Order communities had no poverty.  In most instances, the United Order broke down when one or more members of a community started trying to abuse the system.  There are stories of a man in one community who would go around claiming other people's property (a pocket watch, on one occasion) on the grounds that he needed it more than they did.  Now, to be clear, this is not how the United Order works.  It does not abolish personal property.  It does not allow individuals to claim the property of others.  All it does is place all resources produced into a central pot and distribute them in an ostensibly fair manner among the people, with filling needs being the highest priority.  The only time the United Order expects personal property to be donated is on creation or entry.  The idea is that when a person commits to live in the United Order, that person contributes everything he or she has to it, and then is returned only enough to be on approximately even ground with everyone else.  "Everything" has often been subjective in this, typically including primarily things of significant value in the context of the community.  For example, low value family heirlooms would not typically be expected to be donated.  If one donated one's house, it would probably immediately be returned, unless it is excessively large for the number of people occupying it and another family needs it more (in which case a more suitable home would be provided in exchange).  In United Order communities just starting out (a vast majority of them), moderately valuable items might also be expected to be donated, either for use by the community or to be sold for funds to buy other things the community needs.  To be clear though, this was not just about money.  United Order communities typically owned farms, farm equipment, tanneries, and other productive facilities, with the goal of being 100% self sufficient.  According to Joseph Smith Jr., he had done the math, and it indicated that a community living the United Order could get to a point where it was producing many times what the people needed, allowing it to become incredibly wealthy fairly quickly.  Unfortunately, this never happened, but it was not because Joseph's math was wrong.

As it has been practiced, the United Order needs buy in from everyone.  (It could be practiced differently, but in the religious setting of the early LDS Church, under hostile Federal, state, and sometimes local governments, there really was no other option than purely voluntary participation with minimal repercussions for reneging on the contract.)  All it takes is one greedy person abusing the system to disillusion everyone else and cause it to collapse.  I am only aware of one instance where it lasted a significant amount of time.  I am not sure how long it lasted, but one community lasted at least a few generations.  Last I heard, it had eventually disbanded, but it was probably the most successful instance of the United Order.  It was not as successful as it could have been though.  The problem is this: The United Order has never been carried to its conclusion.  It has only ever gone through the early phase.  In most cases it failed there, but in a few it lasted longer but stalled at the culmination of the early phase.  The deeper problem is that the United Order was never intended to be a need based system.  It was intended to be a system of labor and wealth distribution that was both fair and profitable for everyone.  Because it never got past the early need-based phase though, few people actually understood this.  Joseph Smith Jr. never had the opportunity to establish the full potential of the system, because greed destroyed his attempts at it before it ever got that far.

There are a lot of misconceptions about the United Order.  The idea that it is a need based system is one of the worst.  It was never intended to only fill the needs of the people.  Joseph Smith Jr. was clear about that when he claimed that any community practicing it correctly would become extremely wealthy.  The United Order was first and foremost about fairness, and it was second about highly efficient industry.  It is well known that cutting out middle men can result in substantial savings due to increased efficiency.  A system without profiteering at every corner is going to cost far less to run than one where there is someone at every level taking a share, especially when half of the "levels" are not necessary in the first place.  This is what the United Order was about.  It was about increasing efficiency and reducing opportunity for greed to rob the community of its profits.  Obviously, however (or, at least, obviously to Joseph, because many modern businesses don't seem to get this piece of common sense), a society cannot run efficiently when the needs of its members are not being met.  And thus, the first priority of the United Order was to meet the needs of the people.  Any community stopping there, though, was no longer practicing the United Order, because it was never intended that meeting needs should be its only priority or even its highest priority.  No community has ever successfully practiced the United Order, because even the most successful stopped practicing it as soon as needs were met.

Joseph Smith Jr.'s United Order was intended to work very differently in the long run from how it was ever practiced.  Meeting needs and establishing self sufficiency were indeed the first goals.  Several instances managed to get this far, but they were either torn apart by dissension or just stopped there.  Only one that I am aware of lasted very long, but its later problems really highlight its own failure.  This community, once self sufficient, continued to isolate itself from the outside world.  There is a story floating around about a teenage boy who was given some money and sent outside of the community by the leaders, to find things that the community did not have.  The goal, specifically, was for him to go out and find some decent quality modern clothing.  He did this, and he returned with some nice jeans, a fairly high quality shirt, and a few other things.  These items were turned over to those members of the community who made the clothing.  They examined the clothing and took from it sufficient knowledge to recreate it.  The items were then given to the boy as a reward for successfully completing his task.  Now, here is the problem: This was essentially the full extent of the trade between this community and the outside world.  Sometimes the community would sell stuff, but there was not a lot of demand for most of their more primitive products.  The community was self sufficient, but it was not wealthy, by any means, largely because it did not participate in sufficient trade to gain much wealth, and further, any wealth they did have was spent almost exclusively inside the community, thus isolating it from modern conveniences that gave the outside world a significantly higher standard of living.  This is not what the United Order was supposed to be like!

The United Order, practiced as intended, would have rapidly achieved self sufficiency and met the needs of its members, but it would not have stopped there.  It is true that true self sufficiency requires the ability to be self sufficient even in isolation, but it does not require that a state of isolation be maintained in the long term.  One can have all of the land and labor required to produce enough food and be self sufficient, even if one is choosing to buy food from someone else because it is cheaper than growing it.  Self sufficiency is about ability, not about actually doing.  The second phase of the United Order, which to my knowledge has never been attempted, is finding ways to increase efficiency, without sacrificing self sufficiency.  For example, you can't sell the farm to increase efficiency somewhere else, but if it is cheaper to buy food than to grow it, definitely do that.  The farm land might be more valuable growing some cash crop than growing food, but so long as the land is capable of growing sufficient food, self sufficiency is not compromised by using this more efficient strategy.  Efficiency almost requires engaging in trade.  A self sufficient community has a lot of flexibility in what it can produce, and being able to obtain necessities cheaper through trade increases that flexibility.  Thus, a United Order community that has achieved self sufficiency must emerge from isolation to progress to the next phase.  Sadly, this has never been done.  Alongside efficiency, this second phase is also about the generation of wealth.  Once needs are met and self sufficiency has been achieved, focusing on the generation of wealth is not wrong, nor is it harmful to the community.  In fact, it is quite valuable.  Such a community can afford to expand its capabilities, allowing it to produce more and more value to trade for wealth.  But, it must also trade some of that wealth for efficiency.  Upgrading factories and automating processes will increase its capacity for generating wealth and its ability to be self sufficient.  Once this process has gotten to a point where the community is generating a significant income, it is time to move on to the third phase.  The technological and efficiency progress of the second phase is not abandoned here, just like the self sufficiency and need goals of the first phase were not abandoned upon moving to the second.  What is different is that the wealth begins to be shared with the members of the community, instead of all being put back into improving the community, and this wealth is not merely distributed in the form of more needs.  It is distributed in the form of money that can be spent within or outside of the community.  If the community can provide for some fairly universal wants significantly more cheaply than individuals can buy them themselves, then this may be done to improve the efficiency of the system, allowing more wealth to be retained and distributed than might otherwise be.  Attempting to do this for all wants, however, is a poor and maybe even oppressive strategy, because either the system is stocking up on wants that may end up as surplus, or the system is determining which wants the people are allowed to have and which they are not.  For example, if only a few people want video game consoles, either the system has to guess how many it will need (of each type), or it will decide that because the majority does not want them no one can have them.  This is a poor strategy that money distribution avoids.
A successful United Order is a community where everyone has significant buying power even outside the community.  This is the kind of United Order Joseph Smith Jr. intended, because what is the point of a wealthy community where the individuals of the community are still poor, having only their needs met and nothing else?  (Joseph Smith Jr. and the primary book of scripture for the church he created are both very clear on the fact that God wants his people to have the blessing of material wealth.)

The fact is, the United Order has not been proven to be a failure, despite how many times it has failed.  Even in the few cases where it lasted to the point of self sufficiency, it was never allowed to progress beyond that.  Joseph Smith Jr.'s claim that the United Order is a path to a very successful and wealthy community makes perfect sense to anyone with a modern understanding of supply lines and general economics.  The problem is not the system.  The problem is that no one who truly understands it has ever managed to get it to the post-need phases.  The system itself is probably one of the most well designed systems for economic equality and fairness, and it is a shame that it has never been given a fair chance.

07 November 2018

Matching Speed and Safe Driving

One of the central tenets of my religion is the legitimacy of governments.  As such, one of our behavioral assertions is that we obey the law of the land.  Many years ago, I was in a religious class for young adults.  The teacher asked the students what sin is the least serious in God's eyes.  I, being a little bit of a troll but also being something of a philosopher and wanting to make a point, raised my hand.  When called upon, I suggested that perhaps violating the law of the land, and specifically the violation of speed limits while driving, was the least serious sin.  The point the teacher was trying to make was that any sin, no matter how minor, would keep one out of Heaven unless repented of and forgiven, and the point I was trying to make was that even something as seemingly trivial as speeding qualifies as such a sin.  The reaction of the other students was so completely different from what I expected as to significantly shake my faith in my friends.  Nearly every single student in the room instantly started making up justifications for committing this sin.  One said he was just not ready for such a high law.  Another said she just really enjoyed going fast.  One, who I want to focus on, justified herself by claiming that following the speed of traffic is safer than going the speed limit.  The only person in the room who took my veiled criticism the way it was intended was the teacher, who openly admitted that he struggled to adhere to this law, expressed shame in his sin, and expressed a desire to improve.  I was the only person in the room not bothered in some way by my question, because by that time I had already put sufficient effort into self control while driving that I only very rarely exceeded the speed limit and only accidentally.

This is a nice story, but religion is not the focus of this article.  The one justification that bothered me the most was the claim that speeding is safer than following the speed limit.  It seemed to be true.  For an accident to occur between two cars, they must be going at different velocities.  Speed is an element of velocity, so if two cars are traveling at the same speed, they are more likely to be going the same velocity than if they are traveling at different speeds.  From a religious perspective, this made me wonder if God would justify speeding if it was necessary to remain safe.  From an analytical perspective, I wondered exactly how much difference it would make.  Clearly if you are constantly being passed by other cars, there is greater opportunity for someone to make a mistake that causes an accident.  At the same time though, what proportion of accidents involve vehicles passing?

All of this happened well over 10 years ago.  Since then, I have continued to religiously adhere to the speed limit.  At the beginning of this year, I started attending a college some 30 to 40 minutes from where I live, where I spend most of the driving time on a highway.  What I have observed strongly opposes the claim that matching speed with traffic is safer than going the speed limit.

To start with, I want to discuss the prerequisites for an accident to occur between two vehicles.  I mentioned before that they must be going different velocities.  Velocity is the composition of speed and direction.  Two vehicles traveling at exactly the same velocity cannot ever collide.  They will either be traveling on parallel paths separated by space, or they will be traveling on the same path at different positions in time.  Roads guarantee massive overlap in paths traveled, so on a road, the critical factor is separation in time.  The next prerequisite is that the vehicles must be close together in both time and space.  Again, roads guarantee they will be close together in space but not necessarily time.  Now, to be clear about this time element, when two objects are on the same path in space but not time, this means that one is in front of the other, but they are traveling the same path.  (And the gap in time can be anything from seconds to decades or even centuries if the path is that old.)  An accident occurs when paths intersect in both time and space.

What all of this physics stuff means in layman's terms is that for an accident to occur between two vehicles, they must be close enough to each other that they can easily attempt to occupy the same space (i.e., collide) before either party reacts.  When this happens, an accident occurs.  The severity of an accident is dependent on additional factors.  The biggest factor is the magnitude of the difference in velocity of the two vehicles.  A big difference will do more damage than a small one.  Thus, two vehicles going about the same direction and about the same speed will have a far less dramatic accident than two vehicles going in opposite directions at very high speeds.  The other factor is the points on the vehicles that meet, as a front end collision is more likely to destroy the mechanics of the vehicle and harm passengers than a rear end collision.

On highways, vehicles near each other are almost never traveling in opposite directions.  They are generally following the same or parallel paths.  So the direction element of velocity is not an issue.  The differences in speed are also typically fairly small, with the exception of on and off ramps where some drivers struggle with limiting their acceleration and deceleration to the ramps.  Typical speed differences are around 10MPH (ranging from 5 under to 5 over), which is pretty low for an accident.  If you are going the speed limit, you can expect a vast majority of traffic to be going between 3MPH and 5MPH faster than you are, which is quite low for an accident.  If you are constantly being passed though, this is enough of a difference to increase risk at least a little bit, because cars are constantly changing velocity around you.  In these circumstances, it would certainly be safer to match speed with traffic.

My experience suggests that the circumstances required for an accident are exceedingly rare, when going the speed limit.  If you typically match speed with traffic, you will likely not understand this, because you will constantly be surrounded by traffic, in what is sometimes called a "wolf pack".  A wolf pack is a group of cars traveling fairly close together, and they form quite naturally on roads of limited width where people are traveling at a variety of different speeds.  All it takes to compress a sparsely distributed group of cars into a wolf pack is a vehicle going the speed limit in front of them.  First, the group slows a little, starting at the front and slowly moving to the back.  This allows the cars at the back to catch up, compressing the group front to back.  Then, as they pass, they line up.  Slower cars will pull back into the right lane after passing and faster ones will get ahead, and for at least 5 minutes, most of the cars will stay in a fairly compressed wolf pack.  If, during those 5 minutes, there is another obstacle, the process will be applied again, keeping the wolf pack in formation for at least 5 more minutes.  In practice, wolf packs can maintain a compressed formation for hours, so long as any cars leaving the pack (either going significantly faster or slower, or leaving the highway entirely) are eventually replaced.  If you match speeds with traffic, you will spend almost all of your driving time in wolf packs.

On the other hand, if you don't match speed with traffic, instead going the speed limit, you will be the one getting passed by wolf packs.  All of this getting passed may seem dangerous, but if you have never tried this, what you might not realize is that there are often massive gaps between wolf packs, where you will be almost entirely alone!  This is my experience.  I make the drive to and then back from classes four times a week.  That is eight 30-45 minute drives every week, going at almost exactly the speed limit the entire way.  On average I am passed by 3 to 4 wolf packs per drive.  Each one takes between 1 and 3 minutes to pass, for a total of 3 to 12 minutes spent with other cars around me.  The entire rest of the time there is a gap of around a quarter of a mile behind and in front of me, which is far larger than the recommended 3 second following distance, making me the single safest vehicle on the road!
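
For anyone who wants to check that last comparison, the arithmetic is simple.  The short Python sketch below converts a time based following distance into feet so it can be compared with a quarter mile gap directly (the 70MPH speed is just an assumed example value, not a measurement from my commute):

# Rough check: how does a quarter mile gap compare to the 3 second rule?
# The speed here is an assumed example value, not a measurement.
speed_mph = 70
gap_miles = 0.25

feet_per_mile = 5280
speed_fps = speed_mph * feet_per_mile / 3600      # about 103 feet per second

three_second_gap_feet = 3 * speed_fps             # about 308 feet
actual_gap_feet = gap_miles * feet_per_mile       # 1320 feet

print(f"3 second gap at {speed_mph} MPH: {three_second_gap_feet:.0f} feet")
print(f"Quarter mile gap: {actual_gap_feet:.0f} feet ({actual_gap_feet / speed_fps:.1f} seconds)")

At highway speeds, a quarter mile works out to roughly four times the recommended 3 second gap.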

The thing people forget about the speed matching argument is that when you match speed, it guarantees that you are close to many other cars the entire time.  Being close to another car is one of the prerequisites of having an accident with another car.  If you go the speed limit, you are very likely to spend far more time without any other cars nearby, and the brief periods where wolf packs are passing you do not increase your risk by anywhere near as much as your alone times decrease it.  In short, matching speed with traffic is only safer when traffic is so heavy that it is constant along your entire path.  If there are significant gaps you could be enjoying by going the speed limit, then going the speed limit is far safer than matching speed.

If you want to try driving this safer way, there are some things you will need to get used to.  One is slowing down and traveling below the speed limit sometimes.  When passing, you want to be at the front or the back of the line, and if other cars are going faster than you, it will be hard to get in the front (back is better, by the way, because you have better visibility of hazards in front of you than behind you).  This might involve going as much as 5MPH under the speed limit for several minutes.  This is fine though, because for safety, you want the faster (less disciplined) drivers to be in front of you.

I also want to share a few other observations.  First, in my experience, the most aggressive drivers are the most likely to end up really close to me when I get to my destination.  What I mean is, the drivers that pass me at 10MPH or more over the speed limit have better than even odds of ending up right in front of me when I get to my exit or they get to theirs.  If you are exceeding the speed limit so much that you are constantly changing lanes and weaving in and out, you are almost certainly going to get yourself stuck behind some slow vehicle for long enough for me to catch up with you.  And no offense intended, but I will be laughing at you when I see that all of your speeding saved you no more than a few seconds (while costing you more in fuel).  On some occasions, I even end up passing people who have done this (I also laugh when this happens).  Second, if you have a strong aversion to slowing down, you are prone to drive more dangerously.  A constant problem I see is people pulling in front of other people with around 1 second of following distance.  The average human reaction time is 0.25 seconds.  That means, if the person in front hits the brakes, and the person behind responds as fast as possible, there will only be 0.75 seconds for the person behind to stop without hitting the person in front.  This is modified by quality and condition of brakes, quality and condition of tires, weight of the vehicle, and general road conditions.  While 0.25 seconds is the average, reaction times as high as 0.5 seconds are common.  The recommended following distance is 3 seconds.  Pulling in front of someone with only 1 second of following distance is not just careless, it is incredibly rude, because it is also endangering their life.  And pulling in front of a semi with a short following distance is practically suicidal, because a semi requires several times the distance (and thus time) to stop than lighter vehicles do.  If you need to get off at the next exit, you should probably avoid passing, even if it means you have to slow down, and if you are in the left lane, you should try to get behind the guy beside you, instead of hitting the gas and trying to pull across in front.  It might cost an extra second or two, but that's better than costing you your car, weeks in the hospital, or even your life.
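
To put numbers on that last point, here is the same reaction time arithmetic as a tiny Python sketch.  It is pure bookkeeping of the figures above and deliberately ignores brakes, tires, vehicle weight, and road conditions:

# How much time margin is left for the driver behind after reacting?
# Pure reaction time arithmetic, as described above -- it ignores brakes,
# tires, vehicle weight, and road conditions entirely.
def remaining_margin(following_distance_s, reaction_time_s):
    return following_distance_s - reaction_time_s

for follow in (3.0, 1.0):            # recommended gap vs. a rude 1 second cut-in
    for react in (0.25, 0.5):        # average and common slower reaction times
        margin = remaining_margin(follow, react)
        print(f"{follow}s gap, {react}s reaction: {margin:.2f}s of margin left")

With a 3 second gap, even a slow reaction leaves 2.5 seconds of margin.  With a 1 second cut-in, the average driver is left with 0.75 seconds, and a slower one with half a second, before braking ability even enters the picture.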

The moral of this story is that speeding is dangerous, even when everyone else is doing it.  Safety has a lot less to do with the speed itself than with driving conditions.  If a particular speed will maintain significant distance between you and other cars, that speed is probably the safest speed.  And frankly, speeding rarely saves significant time.  Also remember that manners have a big impact on safety as well.  When you give other drivers space, you get space too, and that makes everyone safer.  And lastly, matching speed is not safer than going the speed limit.  That myth is a lie made up by people to justify breaking the law, and sadly a great many people have been fooled into believing it to be true.  There may be circumstances where matching speed is safer, but they are fairly rare compared to circumstances where it is actually significantly less safe.

18 June 2018

Mickey Mouse: Nemesis of Creativity

This is not just about Mickey Mouse, but Mickey Mouse has played a major role in the limitation and downfall of creativity in the U.S.  Long ago, a collection of men wrote an incredibly important document, through a collaborative process that involved constant debate, argument, and compromise.  This document was intended to define how the new government of the American colonies would govern its people.  One section of this Constitution was intended to enumerate the powers of the Federal government, and after the first 7 powers were decided upon, the discussion turned to patent and copyright.  The British government, which played the role of abusive parent to the colonies, had a patent and copyright system, but most of the governments of the world did not.  Those in favor of such a system argued that protection of ideas would motivate people to come up with more, and that without such protection, people would not bother creating, because there would be nothing of value in it for them.  Some also argued that invention is expensive, and inventors needed some way to recover their costs.  There may have also been an idea that protection of ideas would help increase immigration from countries without such protections.  And thus Article 1, Section 8, Clause 8 of the U.S. Constitution was written, giving Congress the power “To promote the progress of science and useful arts, by securing for limited times to authors and inventors the exclusive right to their respective writings and discoveries.”

This clause was not accepted unanimously though.  Many opposed it, including Thomas Jefferson, who described any government enforced monopoly as a travesty and said the following with respect to the idea of ownership of ideas:
If nature has made any one thing less susceptible than all others of exclusive property, it is the action of the thinking power called an idea, which an individual may exclusively possess as long as he keeps it to himself; but the moment it is divulged, it forces itself into the possession of every one, and the receiver cannot dispossess himself of it.
And Jefferson was not the only founder who felt this way.  It was widely accepted that ideas cannot be literally owned and have no natural owner associated with them, for more reasons than those stated by Jefferson.  Ideas cannot be stolen, because taking an idea away does not deny the creator the use or value of the idea, like the theft of physical property does.  Ideas can be shared freely without the loss of any amount of the idea for those sharing it.  An idea, once taken, cannot be confiscated from the taker and returned to the creator.  In addition, multiple people can create the same idea, without interfering with each other.  The use of physical property is exclusive.  Only one or a limited number of people can use it at the same time.  Ideas can be used by any number of people, without any sort of crowding or interference.  As Jefferson wrote, "If nature has made any one thing less susceptible than all others of exclusive property, it is the action of the thinking power called an idea..."  And the majority agreed with him.  The reason the Federal government was given the power to create patents and copyrights is clear from the wording in the Constitution: “To promote the progress of science and useful arts...”  While some clearly argued that ideas should naturally belong to their creators, otherwise Jefferson would not have had cause to write what he did, the majority agreed that patents and copyrights should be granted for the progress of society.  Thus, the power granted to the Federal government only allowed for temporary copyrights and patents, not permanent ones.

Unfortunately, the evidence suggests that even the argument ultimately used to justify this power was wrong.  Elsewhere Jefferson pointed out that at the time, Britain was the only nation known to grant patents and copyrights, and yet Britain was no more advanced than any other developed nation.  Patents and copyrights had done nothing for Britain to "promote progress of science and the useful arts".  In fact, the evidence suggested that patents had hobbled Britain, as the exclusive ownership of ideas had prevented those ideas from being improved upon by others.  Giving inventors exclusive rights to their ideas did not and thus far has not proven useful in encouraging invention.  And nowadays, we are actually seeing substantially more innovation in the realm of open source invention, where inventors and creators deliberately give up exclusive rights to their ideas, than we have ever seen in proprietary invention.

This is not all though.  As Jefferson and others saw with Britain, we have also seen our own share of copyright and patent actually retarding the progress of science and the useful arts.  In the early '90s a technique dubbed patent trolling became a major source of profits for some companies.  These companies would hoard patents, doing their best to obscure what patents they held, and then they would wait for someone to infringe.  Infringement of a patent gives the patent owner the upper hand.  This allowed patent trolls to extract excessive sums of money from honest companies who did not realize they were infringing on a patent, because the patent trolls had deliberately made that information hard to find.  Companies like Apple and Microsoft objected to this highly dishonest practice, and they were the ones to give companies like this the title of patent trolls.  Of course, over the next 10 years, Microsoft developed into one of the worst patent trolls our society has ever seen.  Since then things have improved, as patent searches are now much easier with the internet and Microsoft has finally begun to prioritize ethics, but patent trolls still severely hinder innovation.  Many companies carefully avoid innovation, staying safely within the public domain, to avoid the threat of patent trolls.  Many small businesses and startups have been destroyed completely by patent trolls, when they could not afford legal fees to fight, and they could not afford the demanded settlement.  In addition, patents and copyright deny the ability to make derivative works without permission.  This means that new ideas cannot be built on top of patented ideas until the patents expire.  In the U.S. patents have a term of 20 years.  This means that an idea which could be iteratively improved or built upon at a rate of some major new innovation every 2 years will progress at a rate 10 times slower, because of patents.  And in many cases, innovation will never occur, because after 20 years, many ideas are no longer in the public eye and are buried under 20 years worth of newer ideas.  Ideas which could have turned into something incredible are frequently forgotten in less time than that.

It is not clear whether patents and copyrights had a significant impact on immigration, but it seems unlikely.  We currently have many times more people wanting to immigrate than we are allowing.  Even if this was a legitimate thing when this power was granted, it has not been for well over a century.  Most people immigrated to the U.S. for economic and religious freedom and still do, not because of the promise of exclusive ownership of ideas.  If patents and copyright ever did have an impact on immigration, it was hardly significant.

Then there is the Berne convention, an international treaty on copyright, which the U.S. joined in 1988.  The Berne convention treats copyright as a natural right of the creator.  It holds the U.S. to the copyright laws of whatever country an item is copyrighted under.  It also requires copyright to be enforced even without registration.  Every single point here violates the Constitutional power granted to the Federal government.  The Federal government is only authorized to treat copyright and patent as existing for the "progress of science and the useful arts".  And given the well documented opposition to treating ideas as natural property by a majority of those who drafted the document, any honest judge must interpret the Constitution as denying any form of natural right associated with copyright and patent.  The mandate to enforce the copyright laws of the country of origin is only Constitutional when there is a guarantee that those countries' copyright laws adhere to the same requirement that they exist for the exclusive purpose of "progress of science and the useful arts".  Enforcing copyright law without registration is a bit more ambiguous in its legality.  If it was clear that copyright promotes "the progress of science and the useful arts", then automatic copyright without registration would definitely be Constitutional.  This is not clear at all though.  Unlike patents, copyright has less potential for hindering progress.  At the same time though, of the vast quantities of materials that are copyrighted due to automatic copyright, only the barest fraction benefit anyone by being copyrighted.  And while only the tiniest fraction of copyrighted works could benefit anyone if derivative works were allowed, there is enormous value in being able to obtain copies of materials that are no longer considered valuable enough to continue publishing.  This includes the enormous numbers of books that publishers do not consider profitable enough to prioritize over newer books, old newspaper articles, and old magazines.  We cannot even legally learn about our own history, because copyright prohibits us from copying (even digitally) newspapers and magazines that are not substantially older than a vast majority of us are.  This is certainly not promoting "progress of science and the useful arts".  In short, the Berne Convention is directly opposed to the Constitution.  Congress should never have ratified it, and if it was brought to the Supreme Court, they would either have to rule it unconstitutional or violate their integrity (which sadly is not an uncommon occurrence in that court nowadays).

How does this have anything to do with Mickey Mouse?  It has everything to do with Mickey Mouse, because Mickey Mouse is a copyrighted character and has almost had his copyright expire at least three times.  We are rapidly approaching the fourth time.  Mickey Mouse's current copyright expires in under 5 years; under current copyright law, it expires in 2023.  Don't start getting your hopes up about making derivative works or otherwise using the character though.  Each of the previous times the copyright neared expiration, Disney lobbied Congress to extend the length of copyright to maintain control over this character, and Congress gave in.  The legality of this is questionable.

Thus far, I have focused exclusively on the purpose of copyright.  Now it is time to pay attention to the mechanics.  Specifically the part of clause 8 that says, "...by securing for limited times to authors and inventors the exclusive right to their respective writings and discoveries."  Note that it says "for limited times".  By periodically extending copyright, Congress is violating the Constitution deliberately.  But because each extension has a time limit, Congress can avoid scrutiny.  Disney is not the only company behind this, but it is very obviously the main one.  Every significant increase in copyright term has happened right before the copyright on Mickey Mouse expired.  Within the next few years, we are due for another unconstitutional increase in copyright term, and it is unlikely to matter which party is in control.  Both are more under the control of large businesses than anything else, and the music industry, the movie industry, the writing industry, and the journalism industry, all of which are massive, are going to be right there with Disney pushing for another extension so they can maintain control of what they think should be theirs forever.

At this rate, nothing will ever expire from copyright, and our history, science, and everything else will be under lock and key forever, only available to those wealthy enough to pay for licenses.  And some may eventually go out of publication and disappear from public access forever, ultimately being permanently lost as storage devices fail and no backups are made, because the "owners" don't consider the content worth the cost of saving.

Congress needs to tell Disney and the media industries "no".  The terms of 70 years after the death of the author and of 95 to 120 years for corporate creations are already excessive.  They practically guarantee that nothing created during our own lifetimes will ever be available for our own use, even if the creators make many times what they spent back in profits.  Maybe it is time for us to tell Congress, "No, Mickey Mouse is ours now.  We have already paid Disney many times what he is worth.  He is rightfully ours, because he has been bought and paid for by multiple generations.  Disney can keep using him, but he can no longer belong exclusively to Disney, because we have paid every bit what he is worth many times over."  And this applies to every other media company as well.  If they have not managed to make their money back after a lifetime of exclusive ownership, either they are not competent enough to ever do so, or their works were never valuable enough to be worth the copyright in the first place.  In either case, it is time for someone else to have a try.

Right now, Mickey Mouse is toxic to progress in society, because he is the excuse for a harmful and unconstitutional system of copyright.  Mickey Mouse is the nemesis of creativity.  The only way this can be fixed is for Mickey Mouse to be allowed to gracefully enter the public domain in 2023, when his copyright will finally expire, unless, of course, Disney colludes with Congress yet again to violate our Constitutional rights, to withhold what was rightfully ours many decades ago.

30 April 2018

Taste Testers

Many years ago, I worked at McDonald's.  One day, we got a letter from the corporation, telling us that a change had been made to procedure.  One of the sandwiches, the double cheeseburger, I think, had been changed.  Instead of putting one piece of cheese on top of the meat patties and the other on the bottom, we were to put one on top and one between.  We had to retrain everyone to make this sandwich this way.  Supposedly this seemingly minor change had been suggested by McDonald's panel of professional taste testers, who said that the sandwich tasted better with the new arrangement than with the old.

The other day, I watched a YouTube video, where a professional taste tester for some ice cream company demonstrated the process of taste testing.  He started by taking a small ice cream taste spoon, getting a scoop of ice cream on it, and then depositing the ice cream on his tongue, much like the typical customer does.  All similarities disappeared from there though.  He began to rapidly smack his tongue against the top of his mouth, sucking in air in between.  He explained that doing this aerated the ice cream, allowing him to better taste the subtle nuances of the flavor.  He then spent more than a minute doing this, periodically stopping to call out some attribute of the flavor he had discovered.  Clearly this man knew exactly what he was doing, when it came to flavor analysis of the ice cream.  Equally clearly though, the ice cream company that hired him and his colleagues knew nothing about business.

After watching this video, something struck me.  One person in the comments said, "That's amazing!  That is exactly how I eat my ice cream!  It is really nice to see someone affirm what I have been doing for years!"  I don't know if this was intended sarcastically or not, but my internal response was, "Yeah, and you are probably the only person that does it who is not a professional taste tester."

I realized something here: Food companies are hiring highly trained, professional taste testers to decide whether or not food is suitable for the general public.  We have taste testers carefully aerating the food as they taste it, to identify flavors that not one of us will ever notice, because we don't eat food like that!  At this very moment there are probably flavors being rejected that the general public would love, because after three minutes of aerating, a taste tester got the slightest hint of bitter, and there are probably flavors that are going to go onto the market that are terrible, because some taste testers liked the particular "bouquet" of flavors, of which the vast majority of us will only taste the most prominent two or three.

I teach my video game design students that the most important person is the player, and it is critical to know and design to your target audience.  This includes getting members of your target audience to test and provide feedback on your design.  This applies equally to taste testing.  Do you want the tiny percentage of your population that are taste testers to like your product, or do you want everyone else to like your product?  Hiring professional taste testers to test a product that you intend on marketing to people who are not professional taste testers is...well...kind of stupid.

This leads to another thought though.  McDonald's, Burger King, A&W, and every other burger joint are doing it wrong.  Consider this: When you attend a barbecue, how do the burgers work?  Is there a guy who cooks the meat, puts it on a bun, adds condiments, and then hands out the finished product?  Every barbecue I have been to had a guy cooking the meat, but he would just put it on a plate on a table or maybe a tailgate.  Near the plate, other ingredients would be arranged, including ketchup, mustard, pickles, onions, tomatoes, mayonnaise, lettuce, cheese, and so on.  There was not anyone assembling sandwiches to hand out.  People were expected to come make their own sandwiches.  And they didn't just throw everything on either.  Each person would pick the ingredients he or she wanted.  Everyone's sandwich was custom made.  There was no default build, where any deviations counted as a "special order".

Welcome to modern burger joints though, where anything customized is a "special order", even though in real life, there is no "non-special" order.  And this is not because it is not feasible for every order to be specially made.  No, Subway has proven that made-to-order sandwiches for every single order are not only possible but also quite competitive.  Made-to-order McDonald's would be even easier, because Subway has some 20 or 30 ingredients on their table, while McDonald's has maybe 8.  Honestly, implementing this would not even be difficult.

The typical burger joint is not arranged in a way that is friendly to made-to-order burgers.  This is fine though.  Subway needs the options to be visible, because there are so many.  McDonald's, Burger King, and so on don't.  The first step is to replace the antiquated POS system with something more user friendly, and flip the monitors around.  When the customer orders a sandwich, display a graphic of the preparation table, with small text labels for each ingredient.  The customer can then touch the ingredients to toggle them on or off.  With the small number of ingredients, this would be pretty straightforward.  There would no longer be a standard build for sandwiches, though there might be a default selection of ingredients for each, to make things easy for the occasional customer who wants to order 200 sandwiches (yes, that actually happens, and they are typically cheeseburgers).  A well designed system could even beat Subway for ease of use, and it would not require any change to the restaurant layout.  Even the software upgrade would be pretty cheap, as any programmer worth his or her salt could produce a better system than the existing one in a few months or less.
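
As a sketch of how simple the software side of this could be, here is a minimal Python model of a made-to-order sandwich: each sandwich starts from a default ingredient selection, and every touch on the screen toggles one ingredient on or off.  The sandwich names and ingredient lists here are made up for illustration, not taken from any real POS system:

# Minimal sketch of a made-to-order sandwich.  Every sandwich starts from a
# default ingredient selection, and each touch toggles one ingredient.
# All names here are hypothetical examples, not real menu data.
DEFAULTS = {
    "cheeseburger": {"ketchup", "mustard", "pickles", "onions", "cheese"},
    "hamburger": {"ketchup", "mustard", "pickles", "onions"},
}

def start_order(sandwich):
    # Copy the default build, which keeps the occasional 200 cheeseburger order easy.
    return set(DEFAULTS[sandwich])

def toggle(order, ingredient):
    # One touch on an ingredient label flips it on or off.
    if ingredient in order:
        order.remove(ingredient)
    else:
        order.add(ingredient)
    return order

burger = start_order("cheeseburger")
toggle(burger, "onions")     # customer removes onions
toggle(burger, "lettuce")    # customer adds lettuce
print(sorted(burger))        # ['cheese', 'ketchup', 'lettuce', 'mustard', 'pickles']

With so few ingredients, the whole customization model fits in a handful of lines, which is part of why the upgrade would be cheap.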

What this all comes down to is that most fast food places don't know their target audience.  They hire professional taste testers when they should be hiring taste testers randomly selected from their customer pool.  They violate traditions and assumptions associated with the kinds of food they serve.  If they knew their target audience better, they would likely be significantly more successful, because normal people would enjoy their food more and would be more comfortable with the ordering process, since many people find asking for special accommodations embarrassing.  The state of the fast food industry is pretty sad.  Maybe one of their executives will read this and apply my suggestions, and their brand will beat everyone else into the dirt.  I doubt it though.

30 March 2018

New Age Luddites

Fire may have been among the first real creations of man, but it was perhaps not the most epic!



First, what are animals?  Here is Google's definition:
a living organism that feeds on organic matter, typically having specialized sense organs and nervous system and able to respond rapidly to stimuli.
The definition of the word "living" in there is somewhat ambiguous.  In fact, biologists do not even entirely agree.  The most common technical definition is something like "a property of a chemical system that makes it self sustaining."  This makes bacteria, protozoa, plants, and animals living, but viruses are not technically living.  The important parts here are that animals are, first, living organisms and, second, have nervous systems and sense organs that allow them to respond rapidly to stimuli.  That second part is what distinguishes them from plants, at least mostly (many carnivorous plants have specialized sense organs and respond rapidly to stimuli, and most plants can sense sunlight, but I am not aware of any with nervous systems).

Second, where did animals come from?  This is a trick question.  Vegans and animal rights activists will say they came from nature, through the process of evolution, but that is only valid in the same sense that houses built by humans "came from nature".  Perhaps the original animals came from "nature" through evolution, but claiming that modern animals also did shows a distinct lack of understanding of most of human history.

There are many explanations of where animals came from.  The purely secular answer is natural evolution.  Some people claim that God did it.  Some believe both of these.  But it does not matter, because for many animals, that process stopped being the driving force tens of thousands of years ago.

Many modern animals are man made.  Yes, it is true that natural evolution or God or something other than regular humans created the DNA and life of animals, but that is not relevant here.  The matter we all rely on for pretty much everything was made in stars, but we don't say nuclear weapons were created by stars.  Trees evolved or were created by God just like early animals, but we don't say nature created all of our houses.  This argument can be made about anything.  We rely on nature for all of our raw materials, and they were all ultimately created in stars.  Man was created by nature or God.  We could then justify saying that everything we do and make is attributable to nature or God.  And within this view, modern animals are completely natural.  But this is absurd.  If we call houses, cars, buildings, computers, and anything else man made, then so are many animals.

The first domesticated animal that we are aware of was the dog, but we can't really claim that the original domesticated dogs were man made.  It was their choice.  It is believed that the domestication of dogs began with wolf packs that followed nomadic human (or pre-human) tribes.  These wolves followed humans, because humans left behind food scraps that they would eat.  Some of these wolves would approach human camps and act violently, so the humans killed them.  The more aggressive of these wolves would get killed, while the less aggressive would survive, and thus natural selection ultimately bred domesticated dogs.  Of course, since the first domesticated dog was kept as a pet, humans have used selective breeding extensively to create customized dogs, separating dogs as a species entirely from nature and turning them into a completely man made thing.

Sheep and goats came next.  Sheep were bred to be incredibly docile, and while we have ideas, we don't actually know what animal domesticated sheep were originally bred from.

After sheep and goats, we domesticated horses, cows, pigs, chickens, and fairly recently, turkeys.  None of these animals can reasonably be considered the product of nature.  They are all man made animals, and there are many more man made animals as well.

Animal husbandry throughout human history has been an endeavor of creation.  Humans did not just create new breeds of existing animals.  Using existing animals as their raw resources, they created entirely new, man made animals.

In the fringes of the vegan movement is a deeper movement that believes animals are sacred.  They believe that animals deserve the same rights that humans have.  In their zealotry to give animals complete freedom, they advocate either killing off or simply letting die out (by removing reproductive freedom, which they otherwise strongly advocate for, from these animals) all domesticated species that have been bred to rely on humans for survival.  (I would like to draw a connection here between this and human genocide.  Their argument to support this xenocide of entire domesticated species (which they argue deserve the same rights as humans) is that they are not natural animals, because they are incapable of being free.  Isn't this "not normal" the typical argument for every human genocide?  They are not like natural or real humans?  Welcome to the dark side of veganism!)  Aside from this rather horrifically violating all of their own declared ethics about animals (think about someone suggesting this for "substandard" humans, which is more or less what they believe they are doing, because they claim that animals are no less valuable than humans), this reveals their true motivation.

Vegans are Luddites.  The original Luddites were a group of people who destroyed mechanized looms in England, because they feared automation would hurt the weaving industry (and they were right, but it created enough other industries to make up for it).  "Luddite" has since become a word used to describe people who oppose technology or actively support tearing it down.  And in case the above was not obvious enough, domesticated animals are some of our earliest human technology.  Cows are not the product of nature.  Cows as a species belong to humans.  We created cows just like we create houses, cities, cell phones, and anything else.  These extreme vegans are advocating that we destroy some of our most epic creations, because they have nervous systems.  Around a decade ago, some scientists said that Pentium 4 processors had about the same computational power as guppies.  Modern computers are even more powerful.  Should we destroy all of our computers too, because they have the electronic equivalent of a nervous system and the ability to think at the same level as some animals?  They are at least as dependent on humans for survival as most domesticated animals.

The fact is, most animals are little more than biological robots.  Some argue that even humans are.  We only know of two classes of animals capable of competing with humans on intellect.  These are cephalopods (squid and octopi) and dolphins, and we don't actually know exactly how they compare.  Every other animal is at least an order of magnitude less intelligent than humans.  Yes, this even includes primates, which are probably around one order of magnitude less intelligent.  Chickens, cows, and sheep are incredibly stupid (yes, I have experience).  Pigs are a bit smarter.  Goats are a bit smarter than pigs.  Domesticated turkeys may be the stupidest animal ever to exist (yes, stupider even than the dodo bird, which at least managed to reproduce without human intervention).  None of these is particularly intelligent.  If their origin species were more intelligent (they probably were, and in the case of turkeys, which originated from wild turkeys, their origin species is quite intelligent), humans have very successfully bred that out of them.

In other words, vegans are essentially arguing for robot rights.  Yes, it might make some sense to argue that the most intelligent animals, like primates, dolphins, and cephalopods, deserve a bit more freedom than we tend to give them.  Arguing for the freedom of domesticated animals is inane.  We are talking about what is perhaps the biggest, longest running, and most successful human project ever, spanning hundreds or thousands of generations, and these vegans want to just dismantle the program, commit xenocide against everything it has produced, and pretend that all of these animals have the cognitive capacity to even understand freedom.  And these animals are not even ones that they can reasonably argue should be free, because we have literally bred so much intelligence out of them that they bear more similarity to computerized microfactories than to naturally occurring animals.  If you are going to be an anarcho-primitivist Luddite, at least own it, instead of hiding behind veganism.

20 March 2018

Marriage Age

Recently there has been a lot of focus on child weddings in the U.S., except that it is not about children getting married at all.  It is mostly about teens, and contrary to popular belief, teens are not children.  Most states have no minimum age for marriage, but they do require minors to have consent from a parent or judge to get married.  As religious diversity has increased in the U.S., there has been the occasional wedding of a 12 or 14 year old girl to a much older man, and courts allow this because there is no law against it in most states.  This has resulted in some activists suggesting that all states should have a minimum age for marriage of 18 years.  There are two states that currently have a minimum marriage age of 18, one of which is Texas.  Both allow court emancipated minors to marry, regardless of age.  Those advocating for this age limit in all states, however, want it to be an absolute minimum.  This is a terrible idea.

The justification for having an 18 year absolute minimum marriage age is that some parents will pressure their children into agreeing to a fairly young wedding.  This is a legitimate problem.  Muslim refugees have been known to do this, as the typical marriage age in Muslim countries ranges from 16 to 18 years old (with a customary minimum of 15 years old).  Combating this with an absolute minimum marriage age, however, is rather heavy handed, infringing on teens' rights just as much as the current situation does.  There should certainly be something preventing this kind of abuse, but in trying to protect these teens in this way, we would also be robbing them of their own choice.

Perhaps some historical background will help make some of the issues here more clear.  Historically, typical marriage ages have ranged from the year of a girl's first period (there are still some primitive tribes that do this) to around 16 years old.  For males, 12 is generally the youngest.  There have been occasional cultures that would wed actual children (i.e., under 12 years old), but these were fairly rare.  Most cultures never went further than child betrothals, where the actual marriage happened between 12 and 16 years old.  In the 1800s, and for some time before that, the typical marriage age was 16 years old.  In fact, sweet sixteen and cotillion balls originated as coming of age celebrations, where 16 year old girls would "come out" as ready to begin courting and ready for marriage.  Most of these 16 year old marriages were fairly good.  It is hard to compare objectively, as pressure against divorce back then had a significant impact on divorce rates; however, historical records, including journals and such, seem to indicate at least as high a rate of happiness in marriage as we have now, and often higher.

Typical marriage age in the U.S. started increasing as public secondary education (high school) gained traction.  Before that, a woman over 20 years old was seen as at risk of becoming an old maid, and single men much older than 20 were often viewed as a menace to society.  (While it is not clearly documented, LDS prophet Brigham Young is quoted as saying that the exact age at which a single man became such a menace was anywhere between 21 and 27 (most commonly quoted as 25), and he used the exact phrase "a menace to society".)  Public high school raised the typical minimum age for marriage to 18 in the U.S.  This resulted in "teen pregnancy" becoming a derogatory term (before that, pregnancies between 16 and 18 were quite common, and pregnant teens were typically married).  Now, even married teens who get pregnant are looked down on, when around 100 years ago, a girl who was not married and pregnant by her late teens was seen as damaged.

Throughout most of human history, it seems that between 14 and 16 years old was the most common marriage age.  The only reason most Americans now see marriage before 18 as bad is that 18 is the typical high school graduation age.  Note that many now preach that marriage should be delayed until after college.  But people are starting college later (it is impossible to find actual statistics, but the typical starting age looks to be between 20 and 22) and taking 6 years on average to graduate instead of the prescribed 4.  By that reasoning, we should set the minimum legal marrying age to 26 or 28 years old.

There is some psychological research that shows the age of 21 to be unique (not 18).  Around 21 years old is when human brain development tends to stabilize.  During the teen years, the brain is quite malleable, and it is in a state of development that makes it a bit more prone to irrational behavior.  Now, to be clear, environmental influences have not been entirely ruled out.  There is some evidence that children brought up in primitive tribes develop the same rationality as U.S. 21 year olds as young as 6 years old, and there is no reason to believe that this accelerated maturity is the result of genetics.  In the U.S., however, teens are known to have difficulties with rational thought up until around 21 years old.  This has commonly been used to justify denying teens "grown up" rights and responsibilities, despite the fact that some of these were common only 100 years ago, and teens back then fared just as well as, if not better than, teens today.

On the other side, having more malleable brains gives teens greater developmental capacity.  Adults in college do not learn better than teens because their brains are more developed.  In fact, it is quite the opposite: adults in college learn better than teens despite having less malleable brains (a combination of the rationality issue and poor parental discipline is the most likely cause).  Historical records indicating that people who married between 16 and 18 years old were generally happier than people who married later suggest that this malleability allows couples who marry in their teens to form stronger, more meaningful relationships.  When couples in their mid 20s or later get married, their brains have already reached a point of reduced malleability.  They are literally "more set in their ways", and they are generally less likely to consider challenges to their opinions.  This is a prime situation for stubborn arguments that result in unhappiness in marriage.  When teens get married, they have the opportunity to grow and develop together, allowing for much stronger relationships.  There may be other factors of teen development that work against this, but this is not known at this time, because no one has bothered to do the research.  It would be tragic, however, if we made laws forbidding teen marriage, when it may actually be the best age to get married.

Another important factor in teen marriage is the risk involved in pregnancy.  There are studies suggesting that teen pregnancy carries significantly higher risk than pregnancy in the early 20s.  It is hard to find age specific information though, and it is not clear what role the typical demands on teens play in this.  Specifically, pregnant teens do not tend to get prenatal care until significantly later in pregnancy than mothers in their 20s.  This, on its own, is a significant indicator of increased risk at any age.  Pregnant teens are also often under significantly more stress than older mothers, and this on its own is also a significant indicator of risk for mothers of any age.  Some have suggested that at 16 years old, a woman's body is generally at its most healthy and is thus most prepared for pregnancy, but it is difficult to confirm or reject these claims when 16 to 18 year old girls have so many demands and expectations on them that pregnancy can result in depression, anxiety, and other issues that are known to make pregnancies significantly higher risk.

It is clear, however, that between 16 and 18 years old is practically perfect when it comes to physical fitness for caring for children.  A child conceived when the mother is 16 years old and born when she is 17 will be a teen when the mother is 30 years old.  A child born when the mother is 21 years old will be a teen when the mother is 34 years old, and a child born when the mother is 28 will still be a child when the mother is 40.  (In 2014, the average age of a first time mother was 26.3 years, putting her at over 38 years old before her first child is a teen.)  The mid to late 30s is when people tend to start suffering from physical wellness issues, and the later parents wait to have a child, the more physically difficult caring for children will be.  Starting at 16 to 18 years old generally guarantees that all children will be physically independent enough that this is not a problem by the time the age of the parents would start playing a role (exceptions are when a couple has more than 5 or 6 children, leaves large gaps between children, or has a severely disabled child).  In short, while the evidence seems to suggest the best time for child bearing is the early 20s, our culture may be hiding a younger ideal age under a pile of culturally induced stress and anxiety for those who get pregnant younger.
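
For clarity, the age arithmetic above is just the mother's age at the birth plus 13, the age at which a child becomes a teen.  A quick sketch in Python:

    # Mother's age when her first child turns 13 (becomes a teen),
    # for the birth ages cited above.
    for age_at_birth in (17, 21, 26.3, 28):
        print(age_at_birth, "->", age_at_birth + 13)
    # 17 -> 30, 21 -> 34, 26.3 -> 39.3, 28 -> 41 (so at 40 that child is still 12)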

The evidence seems to suggest that the best time for marriage is between 16 and 18 years old.  There are clearly some issues with this though.  If parents have the sole responsibility of determining whether a minor is ready for marriage, there are sadly some who will abuse this responsibility for less than ethical reasons.  An absolute minimum marriage age of 16 years old does not seem unreasonable, but teens 16 to 18 years old do need some protection.  We should certainly not protect them by taking away their own choice of when to marry though.

Instead of an absolute minimum of 18 years old, an absolute minimum of 16, with additional protections against parental abuse of authority, would probably be a better solution.  There are a few other things that probably need attention here though.  Instead of discouraging older teens from getting married before they are out of high school, we would be encouraging them to be responsible.  The fact is, many teens are already having sex at that age.  (According to polls, the numbers have been decreasing, but even a small percentage is still a lot.)  It is much less embarrassing for a married couple to be buying birth control than for a single teen.  In fact, some families that are opposed to providing birth control to unmarried children might be willing to provide it for married teen children.  This may also require and cause a shift in high school cultures.  Marriage may reduce occurrences of things like the sharing of nude photos, because husbands tend to be more protective than boyfriends.  At the same time, current high school cultures that seem to hinge on the false idea that what you do in high school does not matter in the long run could result in ill conceived marriages.

The solution to parental abuse of authority and to high school culture issues could easily be the same.  We need some kind of requirement beyond just parental consent before teens marry.  This should not just be a judge signing off on it either, because in states that allow teens to get married with the consent of a judge, judges have been known to allow some pretty clearly terrible marriages (including 14 or 16 year old girls to much older men, without so much as asking the girl whether she really even wants to get married, or why).  Perhaps a marriage counselor could be assigned to the couple, to assess their relationship and either give or refuse consent.  An absolute maximum age difference for minors getting married could also be wise.  For example, a 3 year maximum age difference would prevent a family from trying to marry off a 16 year old daughter to a man 20 years old or older; he would ultimately have to wait until she was legally an adult to marry her.  This might not prevent the parental abuse entirely, but it would at least delay it, and it would give the victim some control of the situation.  At the same time, a 3 year maximum difference would still give teens the ability to choose to marry someone in their peer group, if they can show they fully understand the decision they are making.

Right now, we are allowing our live-in culture to control one of the most important institutions in our culture.  At the rate we are going, within the next century, we won't be letting anyone under 30 get married (the number of years spent in college is only going to increase, as more and more advanced specialization is required to even be useful).  If this is really about protecting teens, the last thing it should be doing is taking away their right to choose when they get married.  We have already successfully marginalized teens to the point that depression and anxiety are incredibly common.  The fact is, teens are not children, and when we treat them like they are, it harms them.  Historically, adulthood was reached at 16 years old.  When our founders set the voting age to 21 years old, they chose that age because they expected people to be ready to vote only after several years of experience as adults.  (This was changed to 18 when it was noted that we could conscript 18 year old young men to fight in wars before they were old enough to be represented in government.)  Now, we don't even treat 21 year olds as adults.  In fact, in many places, people are "young adults" until after 30 years old, and the emphasis is placed on "young", not "adults".

Average marriage age should not dictate minimum marriage age.  Societal education expectations should not dictate minimum marriage age either.  If we are to have a minimum marriage age, it should be based on empirical studies that show marrying younger to be either harmful or unethical.  Ethical violations by others should be mitigated directly, instead of by taking away the rights of those who are supposedly being protected.  And above all, teens are not children, and by treating them like children, we are harming them.

Texas has one thing right: By allowing emancipated teens to get married regardless of age, they are at least trying to preserve the rights of those the law is trying to protect in limiting marriage age.  An emancipated teen should always have the right to get married.  We should not be punishing teens that are not emancipated just because some other teen's parents might try to influence that teen to make a poor marriage choice though.  We need to do our best to protect their rights in addition to protecting them from abuse by others.  Setting the absolute minimum marriage age to 18 years old is not the way to do that.

24 February 2018

Paid Maternity Leave

Paid maternity leave is a great idea.  Most first world countries are doing it.  The U.S. is not though.  Instead, it mandates unpaid maternity leave, and consequently, at least 40% of Americans don't get any maternity leave, because they cannot afford it.  Among the poor, less than 5% can afford it.  This leaves those who cannot with little choice: have kids, use what little vacation and sick time they get, and then go back to work after around 10 days, compromising the physical, mental, and emotional health of their babies, and seriously risking their own health.  Doctors recommend that new mothers take it easy for 6 weeks.  This is not a vacation.  This is for recovery, because if they don't, they can end up with serious medical complications, including death, as well as serious postpartum depression.  Mothers need a break after giving birth, and their babies need them.  When they cannot afford to take this break, it harms them both.  In the U.S., we claim to have a health crisis.  Overall health has been shown to be significantly affected by the health of the mother both before and after giving birth, as well as by the care the baby gets during the first months of life.  A baby that is not regularly breast fed and close to the mother during this time is far more likely to suffer from serious health problems in the future, including mental health problems.  (What if the cause behind our higher rate of mass shootings than any other first world country is our lack of paid maternity leave?)

I don't want to promote paid maternity leave though.  As a country, we keep getting distracted by trivial things.  We don't need paid maternity leave.  The Republicans are right: Mandatory paid maternity leave would destroy small businesses, which provide a majority of our jobs.  Currently, there are four states that pay for maternity leave out of the public coffers.  This takes the burden off of small businesses.  It sounds like a good idea.  It is not though.  What happens when a woman who is earning a million dollars a year gives birth?  Now the state is paying some fraction of that salary.  These four states pay between 55% and 67% of the salary for six weeks.  For a woman making a million dollars a year, that is around $77,000 total at the top rate!  The public should not have to pay even a fraction of the salary of someone making that much money.  If a woman is making a million dollars a year, and she is too stupid to live a lifestyle that allows her to save a significant portion of that, there is no reason the public should be required to pay to maintain her lavish lifestyle!  That is downright wrong.  So, it is unethical to make small businesses pay for maternity leave, and it is unethical for the government to pay it.  In theory, paid maternity leave is a great idea.  In practice though, it is a terrible idea, as it will either result in economic harm, or it will rip off the middle class (the primary source of tax revenue) to maintain extravagant lifestyles.
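
For the curious, here is the arithmetic behind that $77,000 figure, a rough sketch in Python using the 67% replacement rate and six week window cited above:

    # Rough arithmetic behind the $77,000 figure: six weeks of leave paid at
    # 67% (the top of the 55%-67% range) for a $1,000,000 annual salary.
    annual_salary = 1_000_000
    weekly_salary = annual_salary / 52      # about $19,231 per week
    replacement_rate = 0.67
    weeks_of_leave = 6

    payout = weekly_salary * replacement_rate * weeks_of_leave
    print(round(payout))                    # roughly 77,308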

The fact is, maternity leave, just like a long list of other things, is a mere distraction from what we really need: Basic income.

A basic income solves the problem of paid maternity leave quite nicely.  Those who are already very well off should have enough money saved to easily handle taking up to 12 weeks of unpaid maternity leave (guaranteed by FMLA).  If they do not have the money saved, it is their own fault, and they deserve to be accountable for it.  (After all, the Right is big on personal accountability.  If the Right thinks it is wrong to bail out the poor, because they dug their own hole, this is even more true of those who actually have the means.  It turns out that our poor may be wiser about managing their finances than our middle class and rich.  If a middle class or rich family cannot afford unpaid maternity leave, maybe it is time for them to sell that second car and look for a cheaper house, so they can get rid of the expensive mortgage that they clearly cannot afford.  You can complain to me about the poor being bad at finances when that becomes an option for them!)

Anyhow, a basic income would provide easily for a reasonably frugal family, even during unpaid maternity leave.  In fact, it would allow an expecting mother to start her maternity leave sooner, if necessary, to avoid health risks to her and the baby, if her job poses such risks.  Once the baby is born, the basic income for the family will increase, further reducing the burden.  This means that for middle class families living close to the edge of their incomes, the added basic income from the baby might be enough to offset the losses, avoiding severe cost cutting measures.

The best part of all of this is that a basic income is far superior to paid maternity leave, because it applies even when there is not a new baby.  Most first world countries already have paid maternity leave of 12 weeks or more.  They also already have decent welfare systems.  If the U.S. were the first modern country to have a decent basic income, we could leap past the rest of the world, reaping the benefits of an improved economy, improved health, and greater overall happiness.  A basic income is probably the fastest and most effective way for America to become great again.  And it would also ensure that new mothers have sufficient recovery time and time to care for their new children.  In short, we don't need paid maternity leave.  We need a Universal Basic Income!

19 February 2018

How Food Stamps Hurt

The typical American on food stamps struggles to pay rent.  The amount of food stamps varies widely, depending on a large number of factors.  Some families get enough food stamp benefits to pay for 100% of their food.  Some get so little they would starve without help from others.  The fact, however, is that food is readily available to a vast majority of poor people in the U.S., even without food stamps.  Many churches have charitable food programs.  Boy Scouts and many other organizations hold food drives annually, if not more often.  Many cities in the U.S. have soup kitchens where poor people can go to get a free meal.  People feel more comfortable giving beggars and homeless people food than anything else.  Family members are more likely to provide food to poor relatives than anything else.  The fact is, poor people can get food if they need it.  Food stamps do accomplish one valuable thing: They save some dignity.  It can be humiliating going to a food drive or asking family for food.  Food stamps shift that embarrassment to the checkout stand, where the cashier and anyone standing behind you who is paying attention can see that SNAP card.  The card is more discreet than the old fashioned coupons (WIC's checks are still just as bad as those coupons though), but it is still an affront to dignity.  Food stamps are not just humiliating though.  They are actually downright harmful.

The right likes to say food stamps contribute to inflation, and that might be true.  If you vote against welfare though, the blood is on your hands.  If fairness contributes to inflation, then so be it.  Letting people starve to death is not the right answer.  Food stamps contributing to inflation, assuming the claims are actually true, does not make them harmful.

This is what makes food stamps harmful: By taking away their choice, poor people become worse at managing their finances.  Food stamp money is not real money, because real money can pay the bills.  Lawmakers sometimes argue that it is just as good though, because it frees up real money that would have been spent on food.  This is an oversimplification.  People act differently when they are spending money that is not their own, and money that cannot be spent on whatever they want is not their own money.  This is a common phenomenon, even among the rich.  People are more likely to buy extravagant things with money that does not fully belong to them than with money that does.  In other words, food stamp money is more likely to be spent frivolously than cash.

Consider this: Recently, I started working on a project where the goal is to produce meals as cheaply as possible.  I learned that the average American family spends $250 per person on food each month.  We spent closer to $150 per person.  I produced a recipe that is healthier than what the typical American eats and that costs less than $50 per person per month.  When a family gets $100 or $150 per person each month in food stamps, they spend $100 or $150 per person on food each month, because they cannot spend it on anything else.  When they get that money in cash though, they are far more likely to look for ways to conserve it, because if they spend less than that, they can spend the leftovers on other things.  That might be something frivolous.  But who cares!  The goal is for them to eat healthy, right?  Eating healthy does not have to be incredibly expensive.  As long as they are not eating significantly less healthy food, what does it matter if they spend $100 a month per person on food and spend the extra $50 per person on something else?  Keep in mind that in most cases, that "something else" won't be luxury goods.  It will be rent, electricity, car repairs, appliance repairs, or even education.

Again though, as long as they are eating healthy, who cares!  They are going to get and spend the money one way or another.  If they are forced to use it on food, food is all they will get out of it.  If they are just given cash, they are given the opportunity to exercise their free will to decide if they want to blow it all on food or if they want to be frugal with food and spend it on something else.  This gives them the opportunity to learn from their mistakes.  It allows them to learn how to manage their finances.  Yeah, some will spend the excess on drugs or alcohol, and that is truly tragic, but is it right to deny the majority the opportunity to improve, just because a few people will make stupid choices if we do?  No!  By that reasoning, we should just toss everyone in jail, because a few are going to harm others if we don't, and we cannot know who they are beforehand.
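
The numbers in that comparison, laid out plainly (the $250, $150, $100, and $50 figures are the ones cited above; the rest is simple arithmetic in Python):

    # Figures are per person per month, as cited above.
    average_american = 250   # typical American food spending
    our_family = 150         # what we spent
    cheap_recipe = 50        # the healthier, cheaper recipe
    benefit = 150            # an example food stamp allotment

    # With food stamps, the whole allotment goes to food by definition.
    food_with_stamps = benefit
    # With cash, a family spending $100 on food keeps the rest for other bills.
    food_with_cash = 100
    left_for_bills = benefit - food_with_cash

    print("average American:", average_american, " our family:", our_family,
          " cheap recipe:", cheap_recipe)
    print("stamps -> food:", food_with_stamps, " cash -> food:", food_with_cash,
          " left for bills:", left_for_bills)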

Food stamps should be replaced with a cash handout.  This would significantly decrease the cost of the program, because the enforcement systems that make sure stores are only selling food for food stamp money could be eliminated.  A default direct deposit system would eliminate the costs associated with SNAP cards for a majority of recipients, and the remainder could be given cheap pre-paid Visa cards that are credited each month.  Money on lost cards might be hard to retrieve, but replacing them would be cheap.  (Actually, I just checked.  If the food stamp administration keeps a copy of the card number and other card information, which they would need to do to credit the card each month, retrieving the money from a lost card would be trivial.)  The money saved could be used for a number of things.  It could make the food stamp program significantly cheaper for the government.  It could allow larger payouts each month for participants.  It could allow larger payouts for participants that are currently getting too little.  It could support more participants, increasing the maximum income limit for participation.  Making food stamps a cash handout would make the program much cheaper, but that is not all.
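
A sketch of the record keeping this would involve; every field and function name here is hypothetical and purely for illustration, not from any real SNAP or state system.  The point is only that if the administration already stores the deposit or card details needed to credit recipients each month, then reissuing a lost card and recovering its balance is trivial:

    # Hypothetical per-recipient disbursement records; all names are made up.
    recipients = {
        "case-1001": {"method": "direct_deposit", "account": "on file",
                      "monthly_amount": 300},
        "case-1002": {"method": "prepaid_card", "card_number": "4111-xxxx-xxxx-1111",
                      "monthly_amount": 300},
    }

    def monthly_disbursement(recipients):
        # Credit every recipient once a month using whatever method is on file.
        for case_id, rec in recipients.items():
            print(case_id, "credit", rec["monthly_amount"], "via", rec["method"])

    def reissue_lost_card(rec, new_card_number):
        # Because the old card number is on file, its remaining balance can be
        # frozen and transferred to the replacement card.
        rec["card_number"] = new_card_number

    monthly_disbursement(recipients)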

As a cash handout, food stamps would help recipients the most.  Currently, in many places, it is common for some food stamp recipients to buy food and then resell it at a fraction of the price for cash.  This is stupid, because now people who don't need food stamps are essentially skimming some of the benefits, as a sort of currency exchange.  If food stamp recipients were given cash instead, this could not happen, because no exchange would be necessary!  (Yes, this exchange is illegal.  You think it would be happening if we could stop it?  "Illegal" does not matter when the government does not have the power to enforce it.)  Making food stamps a cash handout would reduce any inflation effect, because less of the money would be spent on food.  It would allow poor people to pay their rent or electricity with the money, if conditions made those things more important than getting enough food.  It would reward them for spending money on food wisely.  It would even encourage poor people to find food from other charitable sources, to free up their food money for more important things that they cannot get at nearly any church, food bank, or family member's house.

There is something especially nefarious about the failed bills in various states intended to further limit what food stamp recipients can buy.  Poor people have been observed buying lobster, filet mignon, and other expensive delicacies on food stamps.  (In fact, I am not ashamed to admit that I bought lobster on food stamps a few months ago.  I have since been using small amounts in various meals, at an average total cost of less than $1 for each meal.  Note that $1 a meal comes out to less than $100 a month, which is well under half what the average American spends on food.)  Some lawmakers seem to think the solution is to make expensive foods generally associated with luxury illegal to buy on food stamps.  Otherwise stated, they endorse the system of Soviet Russia, where only the elite were allowed to purchase certain luxury items.  This is stupid though.  I mentioned above that if you give people $100 or $150 a month that can only be spent on food, they will spend that much on food.  The solution is not to further limit what poor people can spend food stamps on.  The solution is to allow them to spend the excess on things other than food!  Would they be buying lobster with food stamps if they could instead spend the money on fuel for the car, on rent, on utilities, or even on other luxury goods?  They are not buying lobster because they are incompetent with their finances.  They are buying lobster because they have managed their food stamp money in a way that makes it so they can afford it!  Why the heck does anyone think the appropriate response to this is to punish them?  If poor people are able to manage their money so well that they can afford to buy lobster on food stamps, that is evidence that the program is not completely broken.  When people are buying lobster on food stamps, that is evidence that they are managing the money well enough that they should be allowed more freedom with it!  That means that maybe we have underestimated them.  Maybe they can handle cash payouts, without any strings attached.  And the fact is, the research supports this conclusion.  If we don't want poor people spending food stamp money on lobster, let them spend it on other things.

Our current welfare system is pretty terrible in a lot of ways.  The worst part is that it assumes recipients are stupid.  It controls their finances for them.  If they are not given the freedom to choose, they will never learn.  It is clear that some have overcome this and learned anyway, but they are still stuck in the box.  And for that, some think we should punish them.  I think it is time to take away the box.  It is time to treat them like they are smart enough to manage their own finances.  If they fail, yeah, that is terrible.  Most won't though.  Most will make better choices than the government ever could make for them.  Making food stamps a cash handout would help the poor, perhaps more than any other welfare plan that has ever been developed.  And perhaps more importantly, it would stop hurting them.  If the Republicans really want to deregulate something, let them deregulate the use of food stamp money.