28 January 2016

Impossible Conspiracy Theories: Faked Moon Landing

I hope this is going to be a series on conspiracy theories that opponents claim to be impossible.  I also want to make it clear that I do not actually believe any of these conspiracy theories.  The point of this series, however, is to show how they could be true.  I would like to note, though, that the problem with most conspiracy theories is the number of people that would have to be involved.  The more people who know, the higher the probability of a leak.  Read this article to learn more about the correlation between the number of people and the probability of a leak.

We are going to begin the series with the moon landing, since the theory that it was faked seems to be one of the more popular conspiracy theories.  It is also central to the Flat Earth theory, which rests on quite a lot of space exploration conspiracy theories.

I can think of only two reasons the U.S. government and/or NASA would want to fake a moon landing.  The first is that they don't have the technology to do it, but they want to show someone that they can.  Given the time period, the U.S. government might have wanted a faked moon landing to show that we could get there before Russia.  This would have involved a lot of people though, and there is a high probability the plan would have been leaked before it was even completed.  The second is that the government or NASA had discovered that going to the moon was actually impossible, regardless of technology, but they wanted people to believe that it was possible (the Flat Earth theory suggests that the government knows about an impenetrable dome around the Earth that would prevent space travel at any serious distance).  Even if it was only NASA that knew, that would be a lot of people, and it would still probably have leaked before the mission was complete.  For the sake of argument though, let's assume that only a few people knew.

The big problem with conspiracy theories is that the chance of staying secret shrinks as both the number of people who know and the amount of time passed grow.  As such, it is important to minimize the number of people who know.  In the case of the moon landing, it is estimated that over 400,000 people would have had to be in on it, but I think that we could reduce that to less than 1,000, with enough resources.
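
To put rough numbers on that intuition, here is a toy calculation (my own sketch, not the model from the article linked above, and the per-person leak rate is a made-up placeholder).  If every insider independently has some small chance of leaking each year, the odds of the secret surviving collapse exponentially as people and years multiply:

    # Toy model: each insider independently has a small chance of
    # leaking in any given year.  The default rate is a placeholder,
    # not a measured figure.
    def survival_probability(insiders, years, leak_rate=5e-5):
        """Chance that nobody leaks over the whole period."""
        return (1 - leak_rate) ** (insiders * years)

    for insiders in (1000, 400000):
        for years in (10, 50):
            odds = survival_probability(insiders, years)
            print(f"{insiders:>6} insiders, {years:>2} years: {odds:.1%}")

With almost any plausible rate, going from 400,000 insiders down to 1,000 is the difference between certain exposure and a fighting chance, which is why the rest of this post is about cutting the headcount.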

So, who needs to know to pull off a fake moon landing?  It depends on what you can fake.  It turns out that during that time period, technology was sufficiently advanced to fake quite a bit, given enough resources.  There are several groups of people involved.  First, there are the planners.  The planners are the ones who provide the information on what needs to be faked, and they will do a much better job if they think the mission is real.  Only one planner actually needs to be in on it, because someone has to collect the information on what needs to be faked.  The official plan documents might omit or marginalize information that is not important for a real moon landing but which is essential in a fake one.

The next group of people involved are those who are going to build the equipment.  The planners will have created a list of requirements (is airtight, for instance).  The people designing and constructing the spacecraft, rockets, and everything else will use those requirements.  They only need to know if the craft are going to be faked as well.  Building real spacecraft is expensive, but fake ones are much cheaper.  Involving all of the engineers dramatically reduces the chances of success though.  Minimizing the number of people who know is more important than price here, so the only engineers who will know are those building the fake stuff.  We will need a little bit of fake stuff, because we are faking a moon landing here, but without the real stuff, many more people would have to get involved.  The fake stuff we will need is a fake moon module and some support equipment to fake space travel and the environment of the moon.

The most obvious people involved are the astronauts.  Unfortunately, they are also the most conspicuous, and the public will be continuously watching them after the event.  They are the weakest links, so they absolutely cannot know.  This is going to be very difficult, because they are supposed to actually experience the event.  This is why we need the fake stuff.  We have to fool the people right at the center of the thing.

There is going to be a launch party watching the blast off.  It is going to include some high profile people who definitely should not know.  This is why we need the real stuff.  These people need to see a group of astronauts get into a rocket and blast off into space.  This is going to be the second hardest thing to fake.  We cannot let the astronauts get onto the rocket, because they are going to be boarding the fake one at the same time.  So we need fake astronauts, and they may even need to look an awful lot like the real ones.  If they are expendable, they could be launched in the rocket, where they will likely die.  Otherwise, we need some means for them to leave the rocket after boarding without being seen.  That is complicated to arrange without involving more engineers, so if we picked some lookalikes from among the engineers already in on it, we could reduce the number of people who know by launching them to their certain doom (of course, we would have to lie to them, because they would probably object if they knew).  This strategy keeps the launch party safely ignorant.

Most of mission control does not need to know.  They are just communicating with the real astronauts, who think that they are really visiting the moon.  We probably want a few people there who know, in case something goes wrong.

Somewhere else nearby, we will need lots of people who know.  The engineers who know will have built a second moon landing module that looks nearly identical to the real one.  The real astronauts will have a separate launch party, because they need to think everything is real.  They will get into the rocket, and then a bunch of things have to happen.  For launch, the capsule has to be mounted on equipment that produces the expected acceleration of liftoff.  This does not have to be perfect, but the process needs to be smooth.  Mounting the capsule on a large centrifuge would work.

Once the astronauts are in the capsule and think that they have lifted off, the big thing is keeping the pictures on the windows right, so it looks like they are going through space.  Some projectors mounted to the outside, with the right projection media on the windows, might be able to manage that one.  Simulating zero gravity is going to be hard though.  Supposedly an extremely strong magnetic field can levitate living things by exploiting the weak diamagnetism of water, though at extreme expense.  To my knowledge, the only reported experiments on this effect with living things were done on frogs, but I believe they did survive.  So, simulating zero G is possible, though extremely expensive.  Alternatively, they could have lined the space suits with ferrous materials, which would have dramatically reduced the energy requirements.
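
For a sense of the scale involved, here is a back-of-envelope calculation (my own, using standard textbook values, so treat it as a sketch) of the field gradient needed to levitate something that is mostly water:

    import math

    # Levitation condition for a diamagnetic material:
    #   (chi / mu0) * B * dB/dz = rho * g
    mu0 = 4 * math.pi * 1e-7  # vacuum permeability, T*m/A
    chi = 9.0e-6              # |volume susceptibility| of water (SI)
    rho = 1000.0              # density of water, kg/m^3
    g = 9.81                  # gravitational acceleration, m/s^2

    print(f"Required B * dB/dz: about {rho * g * mu0 / chi:.0f} T^2/m")

That works out to roughly 1,400 T²/m, which in practice has meant magnets on the order of 16 tesla.  For comparison, a strong permanent magnet is well under 1 tesla, so "extremely expensive" is an understatement.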

The landing would have been pretty simple, but the moon environment would have been a huge amount of work.  Enough space would have had to be created to allow an astronaut to walk around a bit, just in case.  It would have to be built either in a large building or underground, and the ceiling and walls would have to show images of what space would look like from the moon (something no one would have seen before, so it would have to be a guess).  Simulating low gravity in such a large area would be much harder than simulating it in a small space capsule.  Since the astronaut could be guaranteed to be wearing a space suit though, the ferrous metal lining with a large electromagnet would be more viable.  Low gravity simulated this way would feel wrong, but for someone who has never experienced the real thing, it would probably not be noticeable.  Faking the rest of the trip would be easy from there.

Everything else just comes down to faking pictures.  Eventually most faked pictures are figured out, but NASA had the funding to hire the best, and it is not like we have anything similar to compare them against.  Most of the usual means for detecting faked pictures or videos rely on existing things to compare against.

What it comes down to is that all of the technology to fake a moon landing existed.  There were two possible motives (though one is incredibly unlikely).  NASA was provided with huge amounts of funding at the time, so even cost would not have been much of a problem.  In other words, the fake moon landing theory is possible.

The problem with the theory is not whether it was possible or not.  There are so many minor problems that make it incredibly unlikely.  First, the minimum number of conspirators required to pull it off in the time given is still far too large.  According to the article mentioned in the introduction, even 1,000 people would probably not be able to keep it quiet for more than a decade.  Just the number of engineers required for such an elaborate hoax would be at least half that, and many more if you expected it to take less than a decade or two of work.  Add in probably one or two hundred people involved in construction of the larger parts, and all of the various experts on things like film manipulation, disguise (for the fake astronauts and the fake dignitaries), and other minor but essential details, and you have right around 1,000.  There would have just been too many people involved to keep it quiet for this long.  Unless, of course, most of them were killed afterwards to keep them quiet, but then we have to wonder why we have not heard anything about almost 1,000 people going missing who all happened to work for or near NASA...

The second problem is scale.  The scale and detail of the moon landing is huge.  There is far more information than would have been necessary to convince even experts that we actually sent someone to the moon.  A fake could probably have been done at half the price by eliminating most of the unimportant details, and people would have been just as convinced.  Besides, hoaxers willing to waste that much effort on unnecessary detail would almost certainly have screwed up somewhere, and yet no one has been able to produce conclusive evidence of such a mistake.

The third problem is that neither of the motivations really justifies spending so much effort and money on a hoax.  If the Earth really were flat with a giant dome, you would not need to fake sending a man to the moon to convince people otherwise, especially since they already believed otherwise.  Sending a machine to explore the moon would have been equally effective and much easier and cheaper to fake.  For only slightly more money than a simple faked landing, and a small group of additional people in on the hoax, you could fake an entire moon colony.  Likewise, if the U.S. did not have the technology and was worried that Russia would get there first, a much cheaper fake could have been done, or a much more elaborate fake (the colony thing again) could have been done for only a bit more money and involvement.

In short, while there may have been sufficient resources, motivation, and technology to fake the moon landing, the odds of it are incredibly small.  The motivation is questionable, the execution does not fit either motivation, and the chance of successfully concealing the hoax for this long is almost nothing.  So, it is not impossible, but it is improbable in the extreme.

25 January 2016

My Qualifications

Some might consider this a pet peeve, but I think it is a very serious problem in our society.  Many Americans seem to think that job titles and pieces of paper make them experts.  Some even think that a future qualification makes them experts now.  Here are a few experiences I have had, to illustrate my point.


I worked in a hardware store around 8 years ago.  I worked in the electrical department.  One of the products we sold was ceiling fans.  We also sold different lengths of rod for attaching them to the ceiling, since they generally came with something around a foot long.  There was one brand with a few fans that did not fit the rods that we stocked.  The head of the department recommended that we open the box and make sure the rod would fit right when we helped customers with this particular brand of fan.  One day, another employee was helping a customer with a fan, and I overheard the customer ask if the rod would fit the fan.  The employee said yes, without even checking.  I noticed that the brand of fan was the one we had been warned about, so I interrupted and explained what I had been told.  I then went off to do some other task.  A bit later, during my break, the employee involved decided to tell me off.  He opened with "I am a certified electrician," and he explained to me that he knew exactly what he was doing.  He complained that my comment had caused the customers to doubt his word, and he had been required to show them that the parts would fit.

Now, let me explain my thought processes.  First, his antagonistic attitude did not help the situation.  Second, when he stated his credentials, it made things worse.  Certified electricians can make much more money doing electrical work than they can working at a chain hardware store.  I have a hard time believing that a certified electrician who is worth his salt would stoop to what amounts to a retail job.  Even if he did do it entirely voluntarily, he had not been doing electrical work for many years, so it is again hard to believe that he had any personal experience with the products he was selling.  Lastly, he did not start by stating his experience.  He only stated his credentials.  Credentials without experience are worthless, and if he did not have enough experience to be worth stating, then the credentials are meaningless to me.


My second experience was on an internet forum.  These are rife with people who claim credentials they don't have.  Many people will try to settle an argument by explaining why they must be right, instead of proving it or explaining why they are right.  I had been doing a chemistry experiment.  If you put an electric current through water, you can break the molecules into oxygen and hydrogen gases.  Water is not naturally very conductive though, so this process is very slow.  To improve the situation, an electrolyte can be added to the water, and a readily available one is table salt.  So, I had a device with a pair of electrodes, some tubes for capturing the gases, and a 12 volt battery, and I had filled it with salt water.  It worked quite well, but I noticed something unexpected: I could smell chlorine coming off of the experiment.  The water was well water, so it was not chlorinated.  There was only one possible source for the gas, and that was the salt (which is made of sodium and chlorine).

I decided to do some research on this.  It turns out that this is not well documented, so I ended up looking through forums where people related their personal experiences.  Indeed, some others had experienced the same thing, though not everyone had.  An argument had broken out about whether or not it was even possible.  One group claimed that they clearly smelled chlorine coming off of their experiments.  Another claimed that they did not.  A third group was trying to convince everyone else that the voltages involved were way too low to break up the salt molecules, and thus it was completely impossible that the chlorine was coming from the salt (despite the fact that no other source existed).  Many in the third group claimed to be chemists, and some of the rest claimed to have asked chemist friends.  The fact is, however, that every single person in the third group was wrong.  I could smell chlorine coming from my experiment, and the salt was the only source.  The credentials were meaningless, because the people that had them (or claimed to...) were still wrong.
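
For what it is worth, the textbook electrochemistry backs me up here (these are standard half-reactions, not something I pulled from the forum).  At the anode, chloride oxidation competes with oxygen evolution, and on common electrode materials the oxygen reaction is sluggish enough that chlorine frequently wins; this is essentially how industrial chlorine is produced:

    Anode (competing):  2 Cl⁻ → Cl₂ + 2 e⁻           E° = +1.36 V
                        2 H₂O → O₂ + 4 H⁺ + 4 e⁻     E° = +1.23 V
    Cathode:            2 H₂O + 2 e⁻ → H₂ + 2 OH⁻

A 12 volt battery is several times the roughly 2 volts such a cell needs, so "the voltage is too low" simply does not hold up.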

The internet is full of people who lie about their credentials to convince people that they are right.  I sincerely believe, however, that at least some of those claiming to have various degrees in chemistry actually did.  The fact is, all a degree proves is that you managed, one way or another, to get grades high enough to pass your required classes.  Most modern colleges will graduate students with all Cs and Ds.  In other words, you have to know between 60% and 70% of the material for a fairly brief period of time (4 years is the typical maximum for a Bachelor's degree, but often you can forget material as soon as you complete the course without too much harm).  Stated more plainly, a degree is evidence that you have been somewhat exposed to the subject.  It does not imply either a working or a complete knowledge of the subject.  Real world experience in the specific domain of your degree that applies to the discussion is what really matters.  When someone says, "I have a degree in chemistry," what it tells me is that they are not confident enough in their knowledge or experience to cite anything meaningful, so they fall back on something that they perceive to have more value than it really does.


My last experience took place in a hardware store, but not the one I worked at.  I was shopping for a shovel, so I could turn up the soil in our garden (essentially, I was going to hand-till using a shovel).  I told the employee that I needed something suitable for digging about 2 feet deep.  I had taken (and passed with a high grade) a college class on organic gardening, and we had learned about some research showing that digging that deep had a significant beneficial effect on crops in experiments.  Some other guy shopping there overheard the conversation, and he interrupted to tell me that digging deeper than 1 foot was a waste of time.  Then, he stated his credentials: he was an agriculture major at the local college.

This situation is absurd.  The guy did not even have a degree.  The only credential he stated was the fact that he had told the college that he intended to get a degree in agriculture.  He was using a future degree as his credentials.  For all I know, he could have been a freshman taking only generals his first semester.  Even if he had been a senior though, the agricultural knowledge taught in most of academia applies very specifically to industrial farming, not to organic gardening or even run-of-the-mill home gardening (I was applying some organic gardening techniques to home gardening).  Things like how deep you dig the dirt depend on a lot of factors, including what crops you are growing, the climate you are in, how frequently you water, and what you use to fertilize.  Even a degree in agriculture would not have qualified this guy to tell me how I should garden.  The fact is, I had one class, many hours of research, and some real life experience on the subject as my credentials.  This guy had made a tentative commitment to learn about the subject.


Americans seem to place far too much value on credentials and not enough on experience.  This is not just a pet peeve.  It is a real problem with serious economic implications.  I know people with tons of experience in a field who were laid off merely because they did not have a degree.  In fact, in some fields it is becoming common practice to require older workers to go back to school for a degree that did not exist when they started working.  The employers generally pay for this, but often the employees have far more experience and knowledge in the field than the people who are supposed to be teaching them.  This is stupid and wasteful.  A degree is supposed to be evidence that a person knows a bit about a specific subject.  Experience is far better evidence of that.  In my field, this is fairly well recognized.  I have a BS in Computer Science.  Most employers don't even look at a degree if you have significant experience.  Many (but not all) will still discard a resume without looking at experience if there is no degree, but with a few years of experience, it hardly matters what the degree is in, so long as it is there.  Of course, since the mid 90s, business experts have been recommending that software companies look for employees who are self-taught, with or without a degree, because self-taught people do far superior work.

Anyhow, next time you are tempted to spout credentials, consider these lists:

It is only appropriate to share credentials if:
  • Someone asks.  It is almost never inappropriate to share your credentials if someone asks, though you might consider also mentioning experience, if it is relevant.
  • You are filling out a job application (or a college application).  Even in companies that care more about experience, credentials can make the difference between two candidates with the same work experience.
  • You have been asked to share something about yourself and either the credentials are relevant to the situation or you cannot think of anything about yourself that is more interesting.  In the second case, please consider rethinking your life.  Again, experience is better, if it is relevant.
  • You have no other experience in the field and the credentials are relevant and meaningful.  For the sake of honesty, you should admit that you have no actual experience in this case.
Here are some situations where it is not appropriate to share credentials:
  • You don't actually have them yet.  Being a "major" in a subject is not a credential.  It is a commitment to obtain credentials.  Undergoing training in a subject (maybe as an apprentice) is also not a credential.  If you know about the subject being discussed because you took a class on it, cite the class or the teacher.  Likewise, if someone who is training you told you something relevant, cite that person as a source; don't claim that participation in the training itself makes you a qualified source until the training is completed.
  • You don't have them at all.  If you claim to have credentials that you don't have, you are a liar.  Aside from being morally and ethically wrong, you will eventually ruin your reputation.  Do this too many times, and people won't believe you even when you do have the credentials you claim to have.  It is not worth lying about your qualifications to convince people that you are right, whether you are or not.  Even if you are right, let them be wrong if they want to.
  • You are not absolutely certain your claims are true.  If you make an incorrect claim and then use your credentials to back it, not only do you make yourself look stupid, you also make anyone else with similar credentials look stupid.  If one chemist makes a false claim, people will start to assume that most or all chemists don't know what they are talking about.
  • The credentials don't guarantee the knowledge you are claiming to have.  A chemistry degree does not mean that you completely understand how electrolysis works.  An electrician certification does not mean you have experience with every fixture and brand of electrical device that exists.  Even PhDs don't know everything in their field (I had a professor with a PhD who did not know some of the features in a very common programming language that he was very experienced with).  Credentials are evidence of general knowledge.  For specific knowledge, you should cite either personal experience or a reliable reference.  (PhDs tend to have more focused knowledge, but instead of citing their PhD, it is better for them to cite the research and thesis that earned them the PhD.  That is the real experience.)
  • You have actual experience.  Experience trumps credentials.  You might be able to show me math that "proves" my machine cannot produce chlorine, but I have experience proving that chlorine is actually being produced.  My experimental evidence proves your math wrong (in fact, this is largely how science advances; someone figures out a theory about something, and someone else proves the theory wrong through experimentation).  The only place where experience does not trump credentials is in job applications for companies that are too stupid to realize that experience is more valuable.
Ideally, credentials would only matter for entry level jobs and qualifying for post-graduate programs.  They should never be cited as a reason to trust someone.  When it comes to determining the value of a claim, the most important factor is experience.  Either cite your own or cite someone else who had the experience.  Credentials are just evidence that a person managed not to bomb it too badly.

20 January 2016

Crashiness

I recently came across a new metric for software stability.  A blog writer described a set of programs as "crashy."  While not well defined, this is a useful metric when comparing applications that do similar things.  That is a bit of a tangent from what I want to discuss, though.

I have found that most computer users have a poor understanding of software stability.  I suppose this is to be expected, as most computer users do not even understand the technology at a basic level, but there are some things that people should know when they are using computers. Unfortunately, these things are not really taught anywhere.  I am going to fix that in this article.

First, stability is a measure of how well a program handles various situations.  A program that has errors or crashes when given typical expected input is very unstable.  A program that has errors or crashes on unusual input is somewhat unstable.  A program that has errors or crashes in only a few very rare cases is considered fairly stable.  A program that never crashes is considered highly stable.  In industry, programs go through phases of development, and often one factor in determining when to advance to the next phase is stability.  Different companies tend to have different standards on what is stable enough to sell.  This is why some companies have a reputation for very stable software while others have reputations for poor stability.

Second, instability is typically caused by programmer error.  There are some errors in programs that cannot be avoided; however, if an error can be predicted, the program can handle it without crashing.  Stable programs do not crash when they encounter an error.  Instead they either work around it, or if that is not possible, they notify the user and give the user options for how to respond.  There are only a few extremely rare cases where an error cannot be predicted (hardware issues, like bad memory or cosmic radiation flipping bits in memory).  In short, instability exists because it was programmed in.  That is not to say it is deliberate, because it almost never is, but when a program crashes, it is the fault of a programmer somewhere for not handling the error appropriately.  In other words, when your applications crash, it is not your fault!  I have to say this because I frequently hear people say things like, "I should have known it would do that, because it always crashes when I do this thing," as if the crash was their fault.  The user should not have to memorize a bunch of cases that crash the program to avoid crashes.  The programmer should have taken care of that in the first place.
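
To make that concrete, here is a tiny sketch (illustrative Python; the config-loading scenario is made up) of the difference between a program that crashes on a predictable error and one that handles it:

    # Fragile: a mistyped filename crashes the whole program.
    def load_config_fragile(path):
        with open(path) as f:
            return f.read()

    # Stable: the same predictable error is caught, and the user
    # gets a sensible fallback instead of a crash.
    def load_config_stable(path, default=""):
        try:
            with open(path) as f:
                return f.read()
        except FileNotFoundError:
            print(f"Could not find {path}; using default settings.")
            return default

The error is exactly the same in both cases; the only difference is whether a programmer decided in advance what should happen when it occurs.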

Now, this is kind of a hard-nosed approach.  Something important to keep in mind is that it is nearly impossible to create large applications without any bugs.  Software development went way beyond the level of individual human comprehension decades ago.  When a group makes a large application, it is divided into parts, because no single human can completely understand the entire application all at once.  Each group works on a part of the application, and they are given information on how their part should work together with all of the other parts.  Sticking to best practices minimizes any clashing that could occur between parts, but since no person can fully understand all of the parts, there are always holes.  These holes, where unexpected interactions (called "side effects" in some types of programming) occur between parts of a program, can result in bugs that are extremely difficult to find.  For large software companies, what it comes down to is, "How much money are we willing to spend on debugging before we start selling the software?"  Cheaper companies will try to maximize profits by reducing debugging spending.  Higher end companies will try to maximize quality by putting a bit more into debugging.  This affects product price though, and in most cases, spending the time and money to eliminate all bugs would be so expensive that no one could afford to buy the software, and it would take so long it would be obsolete by the time it was ready anyway.  In short, unless you are paying millions or billions of dollars for software that is years or even decades behind its time, you should not expect it to be bug free.
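
Here is a classic example of the kind of unexpected interaction I mean (illustrative Python; the plugin scenario and names are hypothetical): one part of a program quietly modifies data that another part assumed it owned:

    # Part A: keeps a list of default plugins.
    DEFAULT_PLUGINS = ["spellcheck", "autosave"]

    def get_plugins():
        return DEFAULT_PLUGINS  # hands out the shared list itself

    # Part B: written by another team, "just" customizes its own copy.
    def configure_session():
        plugins = get_plugins()
        plugins.append("debug-toolbar")  # side effect: mutates A's list!
        return plugins

    configure_session()
    print(DEFAULT_PLUGINS)  # ['spellcheck', 'autosave', 'debug-toolbar']

Every later caller now silently gets the "customized" defaults, and neither team would spot the bug by reading only their own part.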

The question then is, what is reasonable to expect?  The answer is that it depends on criticality.  Start with the question, "What do I lose if the software crashes?"  The higher the loss, the higher the criticality.  For example, if your word processor crashes, you could lose hours of work.  If your word processor has a decent autosave feature, you could lose minutes or seconds of work.  The word processor without autosave has higher criticality than the one with.  If your online video game client crashes, you lose a few seconds or minutes of leisure time to restart it, which is hardly critical at all.  If your game is not online and it does not have autosave, it is more critical, but it is still just leisure time that you are losing, so it is less critical than the word processor.  On the other hand, your operating system is extremely critical.  This is because everything else depends on it.  Your word processor could be 100% bug free, but an unstable operating system can cause it to crash anyway, and if your operating system itself crashes, you lose everything that has not been saved, regardless of the stability of the applications.  On any system, the operating system is always the most critical piece of software, because everything else depends on it.
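
One way to make criticality concrete (a rough sketch; the numbers are made up) is to multiply how often a program crashes by how much work each crash destroys:

    # Expected loss per week = crash frequency * work destroyed per crash.
    def expected_loss(crashes_per_week, minutes_lost_per_crash):
        return crashes_per_week * minutes_lost_per_crash

    # Word processor without autosave: rare crashes, big losses.
    print(expected_loss(0.5, 120))  # 60.0 minutes of work per week

    # Crashier editor with aggressive autosave: frequent crashes, tiny losses.
    print(expected_loss(5, 2))      # 10 minutes per week

By this measure, a crashier program with good autosave can still cost you less than a rock-solid one without it.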

So, next time someone tells you that a piece of software is crashy, but it still does what you need, consider the cost of the crashes.  You might find that you are ultimately more productive with a less popular product with fewer features, because you lose less work less often and spend less time waiting for the program to start back up after a crash.