24 February 2016

The Problem with Fairness

The concept of fairness is a major problem for humans.  It causes massive amounts of inefficiency, it is economically harmful, it causes a great deal of unnecessary poverty, and it is inherently hypocritical.

Fairness causes a great deal of trouble for queuing theory.  A queue, in the simplest terms, is a group of things lining up for something.  In everyday life, it is a line of people waiting for some kind of service (a line at the bus stop or a checkout line at the store, for example).  In computer science, a queue is a container that stores things and gives them to the program on a first-come-first-served basis.  A central question in queuing theory is how to best arrange lines of people waiting for a service when there are multiple providers of that service operating at the same time.  Bank tellers and grocery store checkouts are perfect examples.  What is the most efficient queue for this kind of situation?  It turns out the answer is the kind of queue found in grocery stores.  Each service provider (a cashier, in this case) has its own queue.  When a person comes to check out, that person goes to the shortest queue, measured in work rather than head count: one person with 100 items adds more length than several people with only a few items each.  This queuing strategy minimizes the amount of time spent in the queue, and it is currently considered the most efficient kind of queue.  It is not perfect, though.
The problem is that it is not fair.  With this queuing strategy, people are not always served in the order they arrived.  Maybe one cashier has an item that is not scanning correctly, so the next customer has to wait an extra 30 seconds.  While he is waiting, the guy who arrived after him, who is in a different queue, gets served first.  Yes, it seems trivial, because it is trivial.  It also really bothers people.  It bothers people so much, in fact, that they will spend several minutes trying to figure out which line is likely to get them to the cashier first.  Some people are so bothered by this that they will pick a line and then peek around to see if another line is going faster, sometimes switching lines several times before they end up checking out.  People would prefer a single serial queue (just one line) over a parallel queue (multiple lines), even though the parallel one will almost always save everyone time, because the serial queue is never unfair.
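The unfairness of parallel queues is easy to reproduce in code.  Below is a minimal Python sketch (the join-shortest-queue rule and random service times are simplifying assumptions, and the function name is made up for this example) that counts how often a later arrival finishes before the person who arrived just ahead of them:

```python
import random

def count_unfair_finishes(num_customers=1000, num_cashiers=4, seed=1):
    """Simulate a grocery-store-style parallel queue.

    Each customer joins the cashier with the least work remaining
    (total checkout time, not head count).  Returns how many adjacent
    arrival pairs finish out of order -- the "unfairness" of the queue.
    """
    random.seed(seed)
    work = [0.0] * num_cashiers       # remaining work at each cashier
    finish = []                       # finish times, in arrival order
    for _ in range(num_customers):
        service = random.expovariate(1.0)   # random checkout duration
        shortest = min(range(num_cashiers), key=work.__getitem__)
        work[shortest] += service
        finish.append(work[shortest])
    # Count adjacent pairs served out of arrival order.
    return sum(1 for i in range(len(finish) - 1) if finish[i] > finish[i + 1])

print(count_unfair_finishes())  # a nonzero count: later arrivals routinely finish first
```

Even though every customer picks the shortest queue, luck in service times means the service order rarely matches the arrival order.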

Fairness is economically harmful for several reasons.  The first is that people tend to define it in whatever way is most beneficial to themselves.  The second is that it encourages people to think about things like ownership emotionally instead of logically.

The most shallow definition is contractual fairness.  This is the idea that anything you can get someone to agree to is fair.  The claim that something is only worth what people will pay for it is an example of contractual fairness.  A more nefarious example is the extension that something is always worth the most people will pay for it.  Contractual fairness would assert that if I were the only person in a desert with access to a well, and you had exactly $100, it would be fair for me to charge you your entire $100 for a pint of water.  It does not take into account need or ability to pay.  In a work environment, contractual fairness asserts that any wage you pay me is fair, regardless of my productivity, because I agreed to that wage.  It does not care whether I had other options, and it does not care if I only took the job because the only other option was starving.  Contractual fairness even allows for slavery and coercion, because even if someone is threatening your life, you have the choice to give it up instead of agreeing to the contract.

Implied fairness, also called natural fairness, is fairness as we observe it in nature.  "The early bird gets the worm," because it has already eaten it by the time the other birds arrive.  This is where "first come, first served" comes from.  "Possession is nine tenths of the law," because it is hard to prove ownership, but it is easy to see who currently possesses something.  The person who makes something implicitly owns it, for the same reason.  When your parents die, you get what they owned, because you know more about what and where it is than anyone else.  Unfortunately, under implied fairness, the strong also rule over the weak.  It is better to let the weak die, because they are a liability to everyone else.  It is better to kill your enemies, because it makes you and your offspring a larger relative portion of the gene pool.  Implied fairness is actually worse than contractual fairness, because it is based on nature, and nature does not care about fairness at all.

Moral fairness is about what people "deserve."  It asserts that people who work hard deserve more rewards than people who do not.  People deserve to use the "fruits of their labor" however they see fit.  If you take something I own, it still belongs to me.  "All men are created equal."  I should inherit what my parents own when they die, because I am their offspring.  At the same time, though, people who don't work should be allowed to starve and die.  If someone is in poverty, it must be their own fault.  Everyone who commits a crime, whether that crime actually harms anyone or not, should be locked away in jail.  The problem with moral fairness is that it is completely objective.  It does not distinguish between a lazy person and a crippled person.  Neither works much, and thus both deserve poverty.  Someone who steals because they are starving to death is no better than a rich person who embezzles.  Moral fairness uses social conventions to define an objective law of fairness, but it makes no effort to account for the many special circumstances.

Civilized fairness arises from deep contemplation and critical thought.  It is not based on personal feelings or emotions.  This kind of fairness asserts that everyone should be given an equal chance (instead of asserting that everyone is given an equal chance, like moral fairness does).  It cares very deeply about circumstances.  Disability is not the same as laziness.  Contracts are not inherently fair, and agreeing to a contract under duress (including the duress of threat of starvation) does not count as voluntary.  Charging the most people are willing to pay is not inherently fair, and sometimes things are worth more than the highest bidder is offering.  Acquisition does not inherently grant full rights of ownership, and owning something does not necessarily give exclusive rights of possession.  The creation of a thing does not inherently grant the creator possession or control of it.  Death is never a fitting punishment for laziness, and laziness is not always even the fault of the person who is lazy.
Unlike the other three definitions, civilized fairness also considers more than just the individual.  If first-come-first-served does not minimize average wait time, then another strategy is more fair, because it wastes less time in total.  Civilized fairness allows the late guy to get to the front of the line in an airport, so he won't miss his flight, even if it is his own fault that he is late, because it is better to minimize the number of people who miss flights than it is to save 30 people waiting to go through security one minute each just because they got there first.  Civilized fairness says it is better to have a simpler welfare system, despite the fact that it is more easily abused, because the cost of abuse is less than the cost of the extra administration required to maintain a complex welfare system.  It also says that income should be reasonably well distributed, because that will ultimately enrich everyone (including the rich) and improve the economy, while contractual and moral fairness will ultimately harm the economy by starving the majority of resources.  Civilized fairness takes a level of thinking that is beyond what most people are willing to do.  It tends to have a higher up-front cost, and it sometimes uses unintuitive strategies, but ultimately it pays off with enormous interest.

In the end, all four types of fairness have their place.  Implied fairness is intuitive, because we see it constantly in nature.  Contractual fairness is important to the proper function of economy and government.  Moral fairness is important for keeping people motivated and useful.  Without civilized fairness, though, we are little more than animals who have managed to make hive-like social constructs without hive-like anti-individualism.  Civilized fairness tempers selfishness with the need to get along.  It can allow us to thrive as a group without sacrificing our individuality.  It allows for the beneficial parts of the other types of fairness, without the parts that are ultimately harmful.  Civilized fairness is a higher way of thinking and living, and without it, I don't think any large civilization can last long term.

20 February 2016

Modern Superstitions

A long time ago, people believed some crazy things.  Acorns could bring good luck.  Amber on a necklace could protect from disease.  There were a whole bunch of ways to figure out who you were going to marry.  Kissing a certain stone in Ireland would give you a glib tongue.  There are several superstitions involving luck and black cats, depending on the culture.  Clover could protect you from evil magic.  A dried frog worn in a silk bag prevented epilepsy.  And there are hundreds more.  With the prevalence of science in our society, though, it is easy to find just as many modern superstitions, often based on misunderstandings about science.

A big enough particle accelerator could create a black hole that would swallow the Earth, or at least a large portion of it.  This superstition is based partially in truth.  Scientists have predicted that the particle accelerator at CERN is indeed powerful enough to create microscopic black holes.  If particles collide with enough energy, they can form a system with such high density that it technically forms a black hole.  Fortunately, black holes that small do not have enough mass to remain stable, and they evaporate almost instantly.  Perhaps a bigger particle accelerator could create a stable black hole?  It is incredibly unlikely, and it certainly could not happen on Earth.  Three things make it impossible for a particle accelerator on Earth to make a black hole that is stable and large enough to be a problem.  The first is lack of sufficient matter: black holes require enormous amounts of mass to be stable, and all of the mass of the Earth is still not enough.  (The Sun may have enough mass, but it currently has far too much energy to collapse into a black hole.)  The second is too much energy: even if Earth did have enough matter to make a black hole, if you tried to crush it down small enough, it would have so much energy that it would immediately explode when it was released.  Third, the particle accelerator necessary to accelerate enough mass to create a stable black hole would not fit on the Earth.  It would probably not even fit on Jupiter, the largest planet in our solar system, which is over 300 times more massive than the Earth.  This superstition has lost a lot of momentum since LHC scientists announced that the accelerator could not create dangerous black holes, but many people still believe it.
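The scale problem can be illustrated with the Schwarzschild radius, r_s = 2GM/c^2, which gives the size a given mass must be squeezed into before it becomes a black hole.  A rough back-of-the-envelope Python sketch (the constants are standard textbook values):

```python
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8     # speed of light, m/s

def schwarzschild_radius(mass_kg):
    """Radius a mass must be compressed within to form a black hole."""
    return 2 * G * mass_kg / C ** 2

EARTH_MASS = 5.972e24  # kg
SUN_MASS = 1.989e30    # kg

# The entire Earth would have to be crushed to the size of a marble...
print(f"Earth: {schwarzschild_radius(EARTH_MASS) * 1000:.1f} mm")
# ...and even the whole Sun would only make a black hole a few km across.
print(f"Sun:   {schwarzschild_radius(SUN_MASS) / 1000:.2f} km")
```

An accelerator works with individual particles, some twenty-five orders of magnitude lighter than the Earth, which is why any black hole it makes evaporates immediately.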

One of the two big ones is that "chemicals" are bad.  Some people even claim that they don't believe in chemicals.  This superstition is propagated by the media, which uses the term very narrowly to mean only chemicals that are dangerous.  What they don't mention is that pretty much everything, even water, is a chemical or is made from chemicals.  The source of this misunderstanding is schools and parents that don't teach children what the word actually means, though at this point the superstition is so entrenched that even most teachers don't know any better.  What it really comes down to is that "chemical" means any element or compound, pure or in a homogeneous mixture.  So, water is a pure compound of one oxygen atom and two hydrogen atoms per molecule.  Salt water is a homogeneous solution of salt dissolved in water.  Using a loose definition, even mud and bread dough could be considered chemicals.  In addition, most of the dangerous chemicals you hear about in the media are also extremely useful.  Contrast this with the fact that the human stomach produces copious amounts of hydrochloric acid, which is an extremely dangerous chemical.

There are several other very popular modern superstitions, including ones involving "organic" foods and "GMO" foods.  Both are based largely on a misunderstanding of science, but both are also based very loosely on true ideas.  It is possible to make edible plants harmful by exposing them to certain substances while they are growing.  The "organic" food movement is based on the idea that organic fertilizers are safer than industrial ones.  It turns out this is not true.  Most organic fertilizers come from biological processes that are poorly understood, which makes it very difficult to predict what dangerous substances might be in them.  Industrial fertilizers tend to be fairly pure chemicals that are known to contain nothing dangerous.  Now, there are some environmental concerns with using more pure fertilizers, but they don't negatively impact the quality of the food grown with them.  Overall though, this particular superstition is not likely to cause more harm than wasting your money paying more for the same product because it has a special label on it.

The GMO foods superstition is likely to be very dangerous in the future.  The superstition here is that using scientific methods to precisely alter the genetics of plants makes them dangerous to eat.  It is true that this could happen.  Certain alterations could cause a plant to produce toxins that it would not have otherwise.  Doing this deliberately would be extremely expensive though, and the mandatory FDA testing would quickly reveal the problem.  A more worrying concern is that alterations intended to improve crop properties (which is what most genetic modification is for) could accidentally have side effects that produce toxins.  This is very unlikely, however, and it would still be caught by the mandatory testing.  It turns out that, if anything, GMO foods are probably safer than non-GMO foods!  There are several reasons for this.  The first is that the way plants evolve naturally (nope, even in nature, plant DNA does not just stay exactly the same forever) is far more likely to produce a toxic strain than the very focused modification of one specific gene or set of genes.  The second is that, if a natural strain did turn toxic, we would probably not know until a lot of people were being affected, because there are no mandatory tests for non-GMO foods.  So, how could believing the GMO superstition actually be dangerous?  Simple: There are currently two popular purposes for genetically modifying food plants.  The first is improving crop properties.  This includes things like making crops more resistant to pests and diseases (as in, now we don't have to use dangerous pesticides to maintain good yields), making food plants produce more food faster, and making crops more resilient to poor growing conditions.  If society as a whole rejects GMO foods, we will have to go back to using seriously poisonous pesticides, and we may eventually have a hard time growing enough food to feed everyone.  GMO allows us to use less land to grow the same amount of food, and it allows us to use land that would otherwise have little value for growing food.  Without GMO, the Earth's population cap is dramatically smaller.  The second use of GMO, which is starting to get more popular (you probably won't see it in stores for a while, because it is still experimental), is making food plants healthier and tastier.  In the next few years, many GMO foods will start to become available that are significantly more nutritious and tasty than non-GMO foods.  (And Monsanto, which I very pointedly do not endorse, for ethical reasons, is using GMO research to design selective breeding plans that achieve the same genetic changes through natural methods, without making the food GMO.)

Humans are naturally superstitious.  Eventually old superstitions are revealed for the absurdities that they are, but at the same time as we are making fun of the superstitions of our ancestors, we are subscribing to equally absurd ones of our own.  So next time you laugh at a crazy superstition from a long time ago, give a thought to the modern superstitions that are affecting your own decisions.

12 February 2016

Tau vs Pi

Pi (π) is an amazing number that defines the ratio of the circumference of a circle to its diameter.  It has important historical significance, because many civilizations knew that it existed, but for most of history, none had been able to calculate it to any great precision.  In the modern world, we use π all over the place.  It turns out, though, that there are some places where π just does not make sense.  For example, the unit circle, used in trig and other more advanced mathematics, ends up with a lot of complicated and unintuitive fractions, and a single revolution around the circle is equal to 2π in distance.  This is confusing to new students and makes a lot of the math more complicated.  In fact, it turns out that 2π is used all over the place, and it may even be used significantly more often than π by itself.

A group of people have started advocating the use of an alternative to π.  Tau (τ) is equal to 2π.  It can be defined as the ratio of the circumference of a circle to its radius.  A unit circle using τ makes more sense, because a single revolution is equal to τ.  A quarter revolution is τ/4 (using π, it is π/2), half a revolution is τ/2 (π), and three quarters of a revolution is 3τ/4 (3π/2).  With τ, the fraction of the circle is the fraction of τ, but with π, it is twice the fraction of the circle, which makes it more difficult to understand and complicates the math.  So, this group supporting τ says we should switch from π to τ, because it simplifies the math.
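The correspondence between fractions of a revolution and fractions of τ can be checked directly; Python's standard library even includes τ as math.tau (a small illustrative sketch):

```python
import math

# For each simple fraction of a full revolution, show the angle
# as a multiple of tau and as a multiple of pi.
for fraction in (1/4, 1/2, 3/4, 1):
    angle = fraction * math.tau
    print(f"{fraction} revolution = {angle / math.tau} tau = {angle / math.pi} pi")
```

With τ the multiplier is just the fraction itself; with π it is doubled.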

It turns out that this is only part of the story though, and I am writing this, because I have not seen any evidence that anyone else fully understands the issue.  The problem is not that τ always makes more sense than π.  The problem is that we are using π wrong.  To fully understand this, we need to define both π and τ, without reference to the other.  In most debates on the subject, τ is defined as 2π, which is technically true but also misleading.   τ is not merely 2π.  Both τ and π are ratios relating the width of a circle to its circumference.  Mathematically, τ = c/r, while π = c/d, where c = circumference, r = radius, and d = diameter.  If you look closely, you might see why we end up using 2π all over the place.  Look at it this way: π = c/2r -> 2π = c/r.  It should be obvious by now.  π is a ratio of the circumference to the diameter, but we are doing all of our math using the radius.  Of course we have to multiply π by 2 all the time, because we are implicitly dividing the diameter by 2 nearly everywhere we use it!
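The two ratio definitions can be verified numerically without ever defining one constant in terms of the other (a trivial sketch; the radius value is arbitrary):

```python
import math

radius = 3.0
diameter = 2 * radius
circumference = math.tau * radius   # equivalently, math.pi * diameter

tau_ratio = circumference / radius     # tau = c / r
pi_ratio = circumference / diameter    # pi  = c / d
print(tau_ratio, pi_ratio)  # ~6.283..., ~3.141...
```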

The most obvious solution is to replace 2π with τ.  This is certainly a valid solution, but it is not the only one, and it is not necessarily the best or most sensible one either.  The other solution is to keep using π, but use d instead of r.  This would even fix the unit circle, because an angle in radians corresponds to an arc length equal to the angle multiplied by r.  π radians is not actually a distance of π; it is the distance πr, but the r is not written, because it is implied.  If we measured angles in terms of d instead of r, a full revolution would be exactly π (with the d implied this time).  The only problem is that we could no longer call them radians, because the name comes from "radius," and we would be using the diameter.

So, why do we use r instead of d?  This question stumped me when I took geometry in high school.  It did not make much sense.  Consider: When we find the area or perimeter of a square or rectangle, we use width (w) and height (h).  We don't use w/2 or h/2.  So why, with circles, are we always using d/2?  Even when we are calculating the area of a triangle, we use (w * h)/2, not w/2 * h or w * h/2.  It does not make sense to use half the width of the circle when finding perimeter (circumference) or area, when we don't use half lengths anywhere else.  I recently realized why we use r instead of d, and the answer is rather disappointing.  The definition of a circle is the following equation: r^2 = x^2 + y^2.  This is the only place I can find where it makes significantly more sense to use the half width over the whole width.  So why is it that we are using this one equation to define the normal case, instead of using all of the others and defining this one as a special case?  Honestly, I can see no reason for doing it this way, except perhaps that this is how it was done in the past.  I don't happen to subscribe to the theory that tradition trumps logic and reason.  If tradition does not make sense, it is time to trade it for something that does.
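The diameter-based convention suggested here is easy to write out and check against the usual radius-based formulas (an illustrative sketch; the function names are made up for this example):

```python
import math

def circumference_from_diameter(d):
    return math.pi * d           # instead of 2 * pi * r

def area_from_diameter(d):
    return math.pi * d ** 2 / 4  # instead of pi * r ** 2

d = 10.0
r = d / 2
assert math.isclose(circumference_from_diameter(d), 2 * math.pi * r)
assert math.isclose(area_from_diameter(d), math.pi * r ** 2)
print("diameter-based formulas agree with radius-based ones")
```

The circumference formula loses its factor of 2, though the area formula picks up a factor of 1/4, which is arguably the price of the diameter convention.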

It turns out that this entire argument is almost pointless though.  Using 2π all over the place works, and it is pretty entrenched in our mathematics.  Outside of education, math and science are not going to suddenly change because someone decides it is better to do things a bit differently.  Within education is where the "almost" comes in though.  I am not the only person who noticed that using r instead of d does not make sense in the context of all of the rest of geometry.  I am certainly not the only person who noticed that the unit circle does not make sense using π as we do.  We could just replace 2π with τ, but that would only fix the unit circle and part of the math.  It does not fix the underlying problem with using only half of the distance for circles, while we use the entire distance for everything else.  If we really want to make education easier, we should keep using π, but switch to using d instead of r.