Richard J. Mills
I recall hearing a statement regarding the Apollo moon mission launches. It is true that a single loose wire, or a weak rubber O-ring, can cause disaster. On the other hand, the Apollo mission was so complex that if every component had been required to work correctly for the mission to succeed, only one in five million launches would have succeeded! It is misleading to emphasize the vulnerability to simple failures while failing to point out the overall robustness. This logical flaw seems common in the way many commentators have been approaching the Y2K problem. Allow me to pick on a typical example from Fallback Chapter 13.
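The arithmetic behind that "one in five million" remark is worth seeing. The sketch below uses illustrative numbers (the parts count is a figure often quoted for a Saturn V; the per-part reliability is hypothetical, not an actual Apollo statistic) to show why a purely serial system of millions of parts would essentially never work:

```python
import math

# Illustrative numbers only -- not actual Apollo engineering data.
n_parts = 5_600_000        # parts count often quoted for a Saturn V
reliability = 0.9999       # hypothetical probability each part works

# If every single part had to work for the mission to succeed,
# the success probability would be reliability ** n_parts.
# Computed via logs to avoid underflow surprises:
p_success = math.exp(n_parts * math.log(reliability))
print(p_success)   # effectively zero
```

Even with 99.99% per-part reliability, the serial success probability is astronomically small, far worse than one in five million. Real rockets succeed because they are designed with margins and redundancy, which is exactly the author's point about robustness.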
There is a US federal standard requiring a car's emissions control system to log any failure conditions of its components, for example a fuel injector staying open longer than it should because of dirt in the fuel. The logging system is called OBD2 (On-Board Diagnostics 2).
What follows is informed speculation. Suppose there is date-time logging for the failure. Suppose that the date routine in some of the software is (surprise) not Y2K compliant. Suppose the failure mode is either a lockup or a refusal to run the car (unlikely, but not impossible). I have seen statements in both directions - that some automotive engine control processors will, or will not, fail after The Day. It seems to me that those who know (because they wrote the software) are probably contractually bound to keep quiet. The rest of us are just guessing.
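To make the speculation concrete, here is the classic shape of a non-compliant date routine. This is a purely hypothetical sketch, not actual OBD2 firmware; the function name and format are invented for illustration:

```python
# Hypothetical sketch of the classic two-digit-year bug such a
# logger might contain -- invented for illustration, not real firmware.

def log_timestamp(two_digit_year: int, month: int, day: int) -> str:
    """Builds a log date, wrongly assuming the century is always 19xx."""
    year = 1900 + two_digit_year      # the Y2K flaw: year 00 -> 1900
    return f"{year:04d}-{month:02d}-{day:02d}"

print(log_timestamp(99, 12, 31))   # 1999-12-31, as intended
print(log_timestamp(0, 1, 1))      # 1900-01-01, a century off
```

Whether such a wrong timestamp merely garbles a diagnostic record or trips some sanity check that halts the processor is exactly the question outsiders cannot answer.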
The supposition is that such a failure has a serious effect on the primary function of the car. The credibility is buttressed by the suggestion that corporate types are engaging in a conspiracy of silence. In many cases (though not this one), it ends with an appeal to you to spread the word so that someone in authority might do something about it.
The first thing that should strike you about this example is that it precisely follows the recipe for creating a successful urban legend. I don't want to be pejorative, but I do want to point this out as one reason why the whole Y2K problem has trouble gaining credibility in some circles. Our anecdotal examples are very hard to distinguish from urban legends.
The logical flaw in the above example, and in many others, is that they implicitly invoke Murphy's Law to predict the behavior of complex systems. That's wrong. Briefly, Murphy's Law says: if anything can fail, it will. It was never intended as a forecast of behavior, but rather as a ground rule for design. Mr. Murphy's son recently clarified this in a public statement. He said:
I would suggest, however, that Murphy's Law actually refers to the CERTAINTY of failure. It is a call for determining the likely causes of failure in advance and acting to prevent a problem before it occurs. In the example of flipping toast, my father would not have stood by and watched the slice fall onto its buttered side. Instead he would have figured out a way to prevent the fall or at least ensure that the toast would fall butter-side up.
Murphy and his fellow engineers spent years testing new designs of devices related to aircraft pilot safety or crash survival when there was no room for failure (for example, they worked on supersonic jets and the Apollo landing craft). They were not content to rely on probabilities for their successes. Because they knew that things left to chance would definitely fail, they went to painstaking efforts to ensure success. EDWARD A. MURPHY III, Sausalito, Calif.
If Murphy's Law did apply to real complex systems, we never would have had a successful rocket, or airplanes, or computers to program in the first place. They would almost never work.
In practice it comes down to fuzzy language and imprecise logic. If a discussion mixes improbable events with probable ones, and is vague about the consequences (i.e. the difference between "anything fails" and "everything fails"), then the chances of misleading everyone (including yourself) abound. In non-technical discussions, including this one, we can't resort to the only precise description, namely fault trees and their mathematics. That would be unsuitable for general audiences.
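For readers who don't mind a little arithmetic, the core of that fault-tree distinction fits in a few lines. The failure probabilities below are made up for illustration; the point is only how different the two combining rules are:

```python
# Sketch of the fault-tree arithmetic behind "anything fails" versus
# "everything fails", using made-up component failure probabilities.

fail_probs = [0.01, 0.02, 0.005]   # hypothetical per-component failure chances

# OR gate: the system fails if ANY component fails
# (how we tend to judge human-created artifacts like cities).
p_any = 1.0
for p in fail_probs:
    p_any *= (1.0 - p)              # probability every component survives
p_any = 1.0 - p_any

# AND gate: the system fails only if EVERY component fails
# (a caricature of a robust engineered system with full redundancy).
p_all = 1.0
for p in fail_probs:
    p_all *= p

print(p_any)   # about 0.035 -- failure is quite plausible
print(p_all)   # 0.000001  -- one in a million
```

The same three components yield a failure chance of a few percent under the "anything" rule and one in a million under the "everything" rule, which is why the vague language matters so much.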
There's another important difference in the character of different kinds of fallible systems. For purposes of discussion, let me divide them into two kinds, engineered systems, and human-created artifact systems.
Systems like power plants, airplanes, and automobiles are engineered systems. They are designed to provide a primary function. What is necessary to accomplish the primary function is more a matter of physics than of human preference. Power plants make power, vehicles move things. Even if they have many secondary characteristics, their success is judged only on their ability to fulfill their primary function.
Systems like cities, IRS bookkeeping, and the stock market are human-created artifacts. They are supposed to do whatever we expect them to do. If they have a primary function at all, its importance is hard to distinguish from that of all the secondary characteristics we expect them to provide. In the case of a city, it must provide water, power, police, fire protection, tax collection, civic leadership, and many other ill-defined functions to be considered a success.
Engineered systems tend to be very robust in fulfilling their primary function. Like the Apollo rocket, they can tolerate lots of failures and still succeed. Basically, everything must fail, or be rendered moot, for the engineered system to fail. Human-created artifacts, like a city, are thought to have failed if any of the important functions fails. Basically, if anything fails, we rush to judge the system as failed.
The difference is profound. In Y2K discussions involving both kinds of systems, we must take particular care to keep this in mind. Software can be right or wrong. Whether or not a system is considered failed because of wrong software is more a function of expectations than a function of logic.
Also see Dick's list of other articles on Y2K.
Dick Mills, http://www.albany.net/~dmills/
West Charlton, NY