ABSTRACT

The classic definition of reliability is the probability that a product will perform its intended function under specified environmental conditions for a specified period of time. The field of reliability gained major importance after World War I, with impetus from the aircraft industry. During the 1940s, Robert Lusser introduced the basic definition of reliability and the formula for the reliability of a series system (Lusser 1958). The 1950s saw increasing use of terms such as failure rate, life expectancy, design adequacy, and success prediction. It was not until the 1960s, however, that new reliability techniques for both components and systems were developed at a faster rate. In 1961, H. A. Watson of the Bell Telephone Laboratories introduced the concept of fault tree analysis (FTA) (AMC Safety Digest 1971). Because of nuclear power reactor safety considerations, much emphasis was placed on FTA during the 1970s. Software reliability assessment has been of great interest since the mid-1970s, and much of the work done in the early 1980s concerned network reliability through the use of graphs. MIL-HDBK-217F (1991) and Bellcore (1990) are the most widely known standards for electronic equipment and system reliability prediction; they are used mostly during the design phase to evaluate reliability under the assumption of random failures. In the last 15 years of the twentieth century, Markov models and Monte Carlo simulation, together with their applications in reliability and availability calculations, received extensive attention (Rice and Gopalaswamy 1993). Advances in technology have resulted in better manufacturing processes, production control, and product design, among other areas, enabling engineers to design, manufacture, and build components and systems that are highly reliable.
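The abstract refers to the basic definition of reliability and to Lusser's series-system formula without stating them; as a minimal sketch in conventional notation (the symbols T, R(t), R_i, and R_s are standard usage assumed here, not taken from the abstract), the definition and the series rule can be written as

\[
R(t) = \Pr(T > t), \qquad t \ge 0,
\]

\[
R_s = \prod_{i=1}^{n} R_i,
\]

where T is the time to failure of the product, R(t) is its reliability over a mission of length t, and R_s is the reliability of a series system of n independent components whose individual reliabilities are R_1, \ldots, R_n.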