ABSTRACT

The scheme of activities that are performed from the time a product is conceived to the time the product is made and delivered to the customer can be described as a product creation cycle. Such a scheme, adapted from the “quality planning timing chart” described in the Reference Manual of Advanced Product Quality Planning and Control Plan, published jointly by the big three automakers (Chrysler LLC, Ford Motor Company, and General Motors Corporation 2008a), is presented in Figure 3.1. The modern approach to making quality products requires that quality issues be addressed and the necessary quality activities performed throughout this product creation cycle.

The product creation cycle in Figure 3.1 shows the activities divided into the following six stages:

1. Product planning
2. Product design and development
3. Process design and development
4. Product and process validation
5. Production
6. Feedback, assessment, and corrective action

These stages are not strictly sequential; the activities in them overlap. The quality activities performed in the stages of product planning, product design, process design, and product and process validation are collectively called “quality planning activities,” and those performed during the production stage are called “quality control activities.” The activities relating to quality performed during the planning stages, along with their objectives, the tools employed, and the outcomes achieved, are covered in this chapter.

The major tools employed during the quality planning stage are:

• Customer surveys: used to find the needs of the customers.
• Quality function deployment: used for translating customer needs into product features.
• Failure mode and effects analysis: used to proof the product and process designs against possible failures.
• Basic principles of reliability: needed to define, specify, measure, and achieve reliability in products.
• Design of experiments: used to select product characteristics and process parameters to obtain desired product and process performance.
• Tolerancing: used to determine the economic limits of variability for product characteristics and process parameters.

3.2 Product Planning

This is the first stage of planning for a product, when the major features of the product are determined. If the product is a car, features such as horsepower, body style, transmission type, safety standards, fuel consumption, and so on, are determined at this stage. If the product is a lawnmower, such major features as engine horsepower, deck size, and whether it will be self-propelled or self-starting will be determined. Quality and reliability goals for the product are also established at this stage.

The quality goals are chosen in terms of several aspects of quality, such as performance, safety, comfort, appearance, and so on, to meet the needs of the customer after ascertaining what their needs and expectations are in each of these areas. Reliability goals are set in terms of the length of failure-free operation of the product, based on customer preferences, cost constraints, and prevailing competition.

The quality and reliability goals selected at this stage drive the specifics of design activities during the next stage, in which product design details are worked out and process design is undertaken to proceed side by side with product design. Detailed drawings for parts and subassemblies are also prepared, along with bills of materials, in this stage. A preliminary process flow chart is prepared, including the choice of material and machinery for making the product. Preparation of the flow chart, among other things, will help to determine if existing technology will be adequate or if new technology is needed to make the product. The bills of materials will help in deciding what parts will be produced and what parts will be procured.

Quality planning activities are generally performed by a cross-functional team called the “quality planning team.” This team comprises representatives from various functional areas, such as marketing, product engineering, process engineering, material control, purchasing, production, quality, and customer service. Supplier and customer representatives are included when appropriate. Quality planning begins with finding the needs of the customer.

3.2.1 Finding Customer Needs

Finding customer needs is often referred to as “listening to the voice of the customer.” The customers’ voice can be heard in several ways:

1. Surveying of past and potential customers.
2. Listening to focus groups of customers, such as chain stores, dealers, or fleet operators.
3. Collecting information from the history of complaints and warranty services.
4. Learning from the experiences of cross-functional team members.

Which approach should be used in a given situation depends on the nature of the product, the amount of historical information already available, and the type of customer being served. For example, industrial customers may have to be approached differently from the general public, and customers for cars must be approached differently from customers for toys. The customer survey is the most commonly used method and is often combined with other methods, such as interviewing a focus group. Thus, the customer survey is an important tool in assessing the needs of the customer.

3.2.1.1 Customer Survey

A typical customer survey attempts to establish the customers’ needs and the level of importance that customers attach to the different needs. When appropriate, information is elicited on how much the customers favor a competitor’s product and why. The survey could be conducted by phone interview, personal interview, or mail; each method has its advantages and disadvantages. For example, direct contact with customers, either by phone or in person, could generate information that the design team might not have even thought of asking for (Gryna 1999). Mailed surveys will mostly produce answers to prepared questions; they may be cheaper to administer, whereas direct contact may be expensive. In either case, the quality planning team should have a prepared survey instrument, or questionnaire.

Often, the survey tools used for measuring the customers’ satisfaction with an existing product can be used for projecting what the customers would want in a new design, or a new product. Designing a survey starts with identifying the attributes that customers might look for in the product. The customers are asked to express the level of their desire for the chosen attributes, usually on a scale of one to five. Tables 3.1 and 3.2 show examples of customer surveys; one is for a tangible product and the other is for an intangible service.

Designing a customer survey is a science in itself. There are good references (Hayes 1998; Churchill 1999) that provide guidance on preparing a survey instrument. The following discussion covers some of the important fundamentals of creating a customer survey.

The list of characteristics on which the customer’s rating is requested has to be drawn up by people with a good understanding of the product being planned and the customer being served. The items in this list must be relevant, concise, and unambiguous. Each item should contain only one thought and be easy to understand. For new products or customers, the list should have room for the addition of items by the customer. Usually, a five-point (Likert-type) scale is chosen for the customer to express the strength of their requirement, as the Likert-type format is known to give more reliable results than a true-false response (Hayes 1998). The questionnaire should have a brief introduction, as shown in Tables 3.1 and 3.2, to let the customer know the objective of the questionnaire and how it is to be completed.

The survey instrument must be reliable. A survey is reliable when the survey results truly reflect the preferences of the customer. One way to measure this reliability is to give the survey to the same set of sample customers twice, with an intervening delay, and evaluate the correlation between the two sets of responses. A high correlation between the two sets of responses will indicate high reliability of the questionnaire. Another way of evaluating this reliability is to include in the questionnaire two differently worded questions for each attribute. The correlation between the responses for the same attribute is then evaluated; a high correlation would indicate high reliability of the questionnaire. The example questionnaire from the bank in Table 3.2 contains questions of this kind.

A survey result becomes more reliable with improvements in the clarity and relevance of the questions. It also becomes more dependable with an increase in the number of questions (Hayes 1998), which, of course, has to be balanced with relevance so that the whole survey instrument remains concise.

Once the survey instrument is prepared, the plan for administering the survey should be made. Because of cost and time constraints, surveying the entire population is not possible except for small populations, and statistical sampling techniques are required. Two commonly used sampling techniques are:

1. Simple random sampling
2. Stratified random sampling

In simple random sampling, the sample is chosen from the entire population such that each customer in the population has an equal chance of being included in the sample. In stratified random sampling, the population is first divided into several strata based on some rational criteria, such as sex, age, income, or education, and simple random sampling is done within each stratum. Stratified sampling provides better precision (i.e., less variability) in the estimates for a given sample size compared to simple random sampling. It also provides estimates of customer preferences for each of the strata, which may give additional useful information to some product designers.
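The two sampling schemes can be sketched as follows; the customer list and the age-group strata are made up for illustration:

```python
# Sketch: simple vs. stratified random sampling from a customer list.
# The customer records and strata below are invented for illustration.
import random

random.seed(1)
customers = [{"id": i, "age_group": "under 40" if i % 3 else "40 and over"}
             for i in range(1, 101)]

# Simple random sampling: every customer has an equal chance of selection.
simple_sample = random.sample(customers, 10)

# Stratified random sampling: divide into strata, then sample within each stratum.
strata = {}
for c in customers:
    strata.setdefault(c["age_group"], []).append(c)
stratified_sample = [c for group in strata.values()
                     for c in random.sample(group, 5)]

print(len(simple_sample), len(stratified_sample))  # 10 10
```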

The size of the sample is determined by the confidence level needed so that the error in the estimates does not exceed a chosen value. If (1 − α)100% confidence is required that the error in the results is not more than ±e, then the sample size n is given by the formula:

n = (z_{α/2} × s / e)²,

where z_{α/2} is the value that cuts off α/2 probability in the upper tail of the standard normal distribution and s is the estimate of the standard deviation of the scores, known from previous, similar surveys.
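This sample-size formula can be sketched in code as follows, using the standard normal quantile from the Python standard library; the values of s and e are illustrative:

```python
# Sketch: sample size n so that the survey estimate is within +/- e of the
# true value with (1 - alpha)*100% confidence; s is a standard-deviation
# estimate from a previous, similar survey. The numbers are illustrative.
import math
from statistics import NormalDist

def sample_size(s, e, alpha=0.05):
    z = NormalDist().inv_cdf(1 - alpha / 2)   # z-value cutting off alpha/2 in the upper tail
    return math.ceil((z * s / e) ** 2)

# e.g., s = 1.2 rating points, error bound e = 0.2, 95% confidence:
print(sample_size(1.2, 0.2))  # → 139
```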

Example 3.1

The above discussion covered the basics of conducting a customer survey to make the reader aware of the issues involved. This may even be adequate for making some simple surveys. For more complete information on sample surveys, the reader is referred to the references cited earlier.

The next important quality-related function in product planning is translating the customers’ voice into design parameters, or design features, of the product. A formal tool used in this translation is called “quality function deployment.”

3.2.2 Quality Function Deployment

Quality function deployment (QFD) is the method used to select the design features of a product so as to satisfy the expressed preferences of the customers. This method, which originated in Japan, also helps in prioritizing those features and picking the most important ones for special attention further along the design process. The major component of the QFD method is a matrix created with the customer’s preferences on the rows and the design features selected to meet those preferences on the columns (Figure 3.2). The intersecting cell between a column and a row is used to record the strength of the relationship, or how well the chosen feature will meet the corresponding customer preference. These relationships are determined by the collective judgment of the product planning team and are recorded in the matrix using notations that express the strength of those relationships. Some design features may satisfy more than one customer preference, while other design features may have a strong relationship with some customer preferences but a weak relationship with others. Three different notations are generally used to indicate weak, strong, and very strong relationships, as shown in Figure 3.2.

The purpose of making this matrix is to identify those design features that are most important from the point of view of meeting customer preferences. A design feature that has strong relationships with several customer preferences will come out ahead of a feature that has only weak relationships with a few customer preferences. This process of prioritizing the design features based on their contribution to satisfying customer preferences is done using some simple arithmetic calculations.

Numerical values are assigned to the customer preferences to signify how strongly the customer prefers one requirement relative to another. These preference numbers are the ones obtained from the customers through the customer survey, and they are indicated in the column immediately next to the customer requirements. These numbers are multiplied by the numerical equivalents of the cell entries that express the strength of the relationships between customer preferences and design features. The total of these resulting products for any design feature represents how important that feature is in terms of satisfying the needs of the customer. These totals (obtained column-wise) are entered at the bottom of each column, thus assigning a score for each design feature. Those with the highest scores are the most important features from the customer’s point of view. These features are further studied with regard to the advantage they may provide in comparison to one or more competitors. The design features that are important from the view of satisfying customer preferences, and those that provide certain advantages over competitors, are identified and given special treatment in the new design. This is the basic function of the QFD method.

The matrix of customer preference versus design feature is topped by a triangular matrix, which shows how the design features are related to one another. Knowledge of these relationships helps in alerting the designer to changes that may occur in other design features while making changes to one of them. The QFD method also provides for studying how a competitor’s product fares against the expressed preferences of the customers. This provision, which helps in evaluating competitors and thus enables a comparison of the features of the new product with those of the competitors, is known as “benchmarking.” It enables the identification of the strengths and weaknesses of the competitor’s product. It also helps in building upon their strengths or winning a competitive edge by satisfying a need the competitors have not addressed, as shown in the example below. Different users of the QFD method also add many other details to the main function in order to suit their individual products.

The matrices, when put together, look so much like the picture of a house with a roof, windows, and doors that the assembly of the matrices is called the “house of quality” (HOQ). The procedure of using the QFD method and creating a HOQ is explained in Example 3.2.

Example 3.2

3.2.2.1 Customer Requirements and Design Features

Next, the preference numbers reflecting the relative importance of the various customer preferences are determined. When a product has multiple customers, as in this case, a set of importance-weights, used to represent the importance of one customer relative to others, is first generated by the planning team. A total of 10 points is distributed among the customers based on their relative importance. In this case, a weight of five is assigned to professors, three to students, one to professionals, and one to the publisher. The numbers expressing the relative importance of the preferences within each customer are then obtained from those expressed by the customers in the customer surveys. A total of 10 points is distributed among the preferences within each customer.

For any customer preference, the product of the importance-weight of the particular preference within the customer and the importance-weight of the customer (relative to other customers) is obtained as the indicator of how important the preference is overall. The computation to arrive at the importance-weight of the customer preferences is done in the columns next to the column that lists the customer preferences.

Next, a list of design features that meet the customer preferences identified in the previous step is developed. This list is in the language of the designer. These design features must be measurable. Note that there is at least one design feature that responds to each of the customer requirements. The strengths of the relationships between the customer preferences and the design features are then estimated by the design team and are marked on the matrix. As mentioned earlier, three levels of strength are used: weak, strong, and very strong. The strength notations generally indicate that an increase in the measure of the design feature will provide increased satisfaction of the particular customer preference. Some design features may help in meeting a preference of one customer but may work against a preference of another customer. Where a design feature affects a customer requirement inversely, such an effect is indicated by a down-arrow (↓) next to the strength notation.

3.2.2.2 Prioritizing Design Features

For each cell in the relationship matrix, the product of the numerical equivalent of the strength relationship and the importance-weight of the corresponding customer preference is obtained and added column-wise, and the total is placed under each column in the row at the bottom of the relationship matrix. (The relationships with down-arrows also make a positive contribution in this step, when we are determining the importance of a design feature. Their negative significance is taken into account later, when we are determining target values for the design features of the new design.) The numbers in this row at the bottom of the central matrix represent the importance of the design features in meeting key customer requirements. For this example, the following numerical equivalents for the strength relationships have been used: weak = 1, strong = 3, and very strong = 5. This is the usual scale employed, but other scales, such as (1, 3, 9) instead of (1, 3, 5), are also sometimes used.

The numbers obtained for each of the design features are then “normalized” using the formula yj = 100(xj/Σxj), where yj is the normalized score and xj is the raw score for the j-th design feature. These normalized scores represent the relative importance of a given design feature among all the design features. These relative importance scores (called the “normalized contributions” of the design features) are used to prioritize the design features for further deployment. Usually, three or four features with top-ranking normalized scores will be chosen as the most important design features. For the example, the top-ranking features are identified with a (#) mark below the normalized scores, with the top-most feature being identified with a (##) mark.
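The arithmetic described in the last few paragraphs (customer weight times within-customer weight, column-wise raw scores, and normalization) can be sketched as follows; the weights and the relationship matrix are invented for illustration and are not the book’s house-of-quality values:

```python
# Sketch of the QFD scoring arithmetic. All numbers are illustrative.

# Overall importance of each customer preference: the weight of the customer
# (relative to other customers) times the weight of the preference within
# that customer, as described in the text.
preference_weights = [5 * 4, 5 * 3, 3 * 5, 1 * 6]   # [20, 15, 15, 6]

# Relationship matrix: rows = customer preferences, columns = design features;
# cells use the (weak=1, strong=3, very strong=5) scale, with 0 for no relationship.
relationships = [
    [5, 1, 0],
    [3, 0, 1],
    [0, 5, 3],
    [1, 0, 5],
]

# Raw score for each design feature: column-wise sum of weight x strength.
raw = [sum(w * row[j] for w, row in zip(preference_weights, relationships))
       for j in range(len(relationships[0]))]

# Normalized contribution: y_j = 100 * x_j / sum(x_j).
total = sum(raw)
normalized = [round(100 * x / total, 1) for x in raw]
print(raw, normalized)  # raw scores, then normalized contributions summing to ~100
```

The features with the largest normalized contributions would be the ones marked (#) and (##) in the house of quality.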

3.2.2.3 Choosing a Competitor as Benchmark

Also shown on the right-hand side of the HOQ is the assessment of how well competitor books fare with regard to the established customer preferences. For this example, four books (A, B, C, and D) are identified as competitors for the new design. These books are evaluated by the planning team and assigned numbers on a scale of 1 to 5 to represent their ability to meet the established customer preferences. The products of these numbers and the importance-weights of the customer requirements are added, and the total is shown at the bottom of the column for each competitor. These numbers represent how well a competitor’s book satisfies the customer preferences. The competitor with the largest of these numbers is the best-in-class, and it is chosen as the benchmark. For this example, the benchmark is Competitor A.

3.2.2.4 Targets

For the design features that have been prioritized as the most important, targets are selected for the new product based on a comparison with the benchmark. The benchmark is first evaluated by the planning team and scored on a scale of 1 to 10 to reflect how well it has handled the chosen design features. These scores are shown in a row below the row containing the normalized contributions of the design features of the new product. These numerical scores for the benchmark provide the basis for selecting the targets for the new product. The new product will then have the important design features at targets chosen based on a comparison with the benchmark.

For the book example, the design features “statistics fundamentals,” “statistical methods,” “management topics,” and “formula derivations” are identified as the most important. The targets for these features in the new book will be chosen by taking into account the numerical scores the benchmark secured for the features. From the scores assigned for the different features, we notice that this competitor is lacking in one important design feature, “management topics,” which can be taken advantage of in the new design. The targets for each design feature of the new book are chosen by keeping in mind how the design features satisfy the customer requirements and how these features interact with one another. These target values are shown in the row below the row displaying the feature scores of the benchmark, Competitor A. These target numbers are relative numbers, related to the scores of the corresponding features of Competitor A. For example, the number of statistical methods covered in the new book will be about seven-tenths of those covered in Competitor A. The target for the statistical methods is made smaller than the competitor’s in order to balance out the increase in the target for the number of management topics, so that the total size of the new book will still be comparable with that of the benchmark.

A simple example of designing a book was used above to describe the QFD methodology. The reader can imagine the level of detail needed for a product like a refrigerator or a car. The above example, however, illustrates the important principles involved in using the QFD methodology for identifying the needs of the customer and designing a product to satisfy those needs. Several good books (e.g., Cohen 1995; Akao 1990) are available for further study.

At the planning stage, the selection of major design features of a product includes the selection of quality and reliability goals. The customers are asked for their quality and reliability preferences for the particular grade level of the product. Their needs expressed in this regard through past complaints are also gathered. Suitable design features are then incorporated to respond to these needs. A basic understanding of the principles of reliability is needed for a design engineer, and for the other members of the planning team, to understand customer needs in this respect and to be able to respond to them. Knowledge regarding how reliability is defined, measured, and specified is necessary. The engineers should also know how reliability goals are chosen and achieved in products. The following discussion on the fundamentals of reliability is given to meet this objective.

3.2.3 Reliability Fundamentals

Reliability refers to the ability of a product to perform without failure over a period of time. It is related to the length of life of a product before a failure occurs. This length of life, simply referred to as “life,” is a random variable in the sense that, in a given population, although the units are all built by the same process, the life of different units will vary. Those life values are viewed as values of a random variable, usually denoted as T.

The variability in the life variable can be described by a frequency distribution, and this frequency distribution can be obtained from data collected on the lives of sample units if the product already exists. The frequency distribution of a future product can be projected based on similar past models. An example of a frequency distribution is shown in Figure 3.3. This frequency distribution can also be represented by a mathematical function, which we call the probability density function (pdf) of the random variable T. (We will assume that T, which generally represents life in hours, days, or months, is a continuous variable.) The frequency distribution of the life variable is the basic information necessary to assess the reliability of a product. The cumulative distribution function of the life variable is called its “life distribution.”

3.2.3.1 Definition of Reliability

Reliability is expressed as a function of time. The reliability of a product at time t, denoted as R(t), is defined as the probability that the product will not fail before the time t, under a stated set of conditions. In notation,

R(t) = P(T > t).

This probability can be seen in Figure 3.3 as the area under the curve above t. This probability also represents the proportion in the population that survives beyond time t. If f(t) is the pdf of T, then

R(t) = ∫_t^∞ f(x) dx.

Also, it can be seen that R(t) = 1 − F(t), where F(t) is the cumulative distribution function (CDF) of T.

3.2.3.2 Hazard Function

An important definition relating to reliability is that of the hazard function, which is also known as the “instantaneous failure rate” or “mortality rate” function. The hazard function, denoted by h(t), represents the rate at which survivors at time t will fail in the instant immediately following time t. In other words, h(t) will provide answers to questions such as: What proportion of the refrigerators that are five years old will fail in the next year? Or, what proportion of engines that have run for 100,000 miles will fail in the next mile? The h(t) is obtained as described below.

With reference to Figure 3.3b, the proportion of the population surviving beyond time t is R(t). The proportion of these survivors failing in the interval Δt immediately following t is:

[f(t) × Δt] / R(t).

The rate at which they fail in the interval Δt is:

[f(t) × Δt] / [R(t) × Δt].

We want the rate at the instant immediately following time t; that is, we want the limit as Δt → 0:

lim_{Δt→0} [f(t) × Δt] / [R(t) × Δt] = f(t)/R(t).

Therefore,

h(t) = f(t)/R(t).

The h(t) is not a probability; instead, it reflects the susceptibility of a product to failure when it has reached a certain age, worked a certain number of hours, or run a certain distance. It can be obtained once the failure distribution is known, either as an empirical function obtained from the failure data of sample units or as a mathematical function projected on the basis of the analysis of historical failure data. The study of the h(t) of a product over time can reveal quite a bit of information about the reliability of the product at different stages of its life. The h(t) for a given product can be an increasing function of t, a decreasing function of t, or a constant that is independent of t. The h(t) could also behave differently (increasing, decreasing, or remaining constant) at different periods of life, even for the same product. The following example shows how h(t) can be calculated from empirical data obtained from the failure times of sample units of a product.

Example 3.3
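An empirical hazard calculation of this kind can also be sketched in code. The grouped failure counts below are invented for illustration; they are not the data of Example 3.3:

```python
# Sketch: empirical hazard rates from grouped failure data. For each
# interval, h(t) is estimated as the number failing in the interval
# divided by (survivors at the start of the interval x interval width).
# The failure counts are invented for illustration.
n_start = 100                      # units on test at time 0
width = 100                        # interval width in hours
failures = [20, 10, 5, 5, 5, 10]   # failures observed in each interval

hazards = []
survivors = n_start
for i, f in enumerate(failures):
    h = f / (survivors * width)    # rate at which the current survivors fail
    hazards.append(h)
    print(f"{i*width:4d}-{(i+1)*width:4d} h: h(t) = {h:.5f}")
    survivors -= f
```

With these invented counts, the estimated hazard first decreases and later rises again, the qualitative pattern the bathtub curve describes in the next section.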

3.2.3.3 The Bathtub Curve

Failure rate curves have been used to study the life behavior of many different types of equipment. Figure 3.5 (a to d) shows a few different types of failure rate curves experienced by different types of equipment. There is equipment whose failure rate increases with time, decreases with time, or remains constant over time. The one failure rate curve that seems to describe the failure rate behavior of a variety of complex equipment is shown in Figure 3.5d. Because of its shape, it is called the “bathtub curve.” This curve shows how the failure rate for this type of equipment changes differently at various periods of the product’s life. Based on the nature of change in the failure rate, the life of such equipment can be divided into three major periods.

Period A

Period A represents a time of decreasing failure rate, at the end of which the rate tends to become constant. The decrease in failure rate is caused by defective units in the population failing early and being repaired or removed from the population. This region is known as the “infant mortality” region. The causes of failure in this period, which are called “early failures,” are mainly poor material or bad workmanship, such as use of a wrong clip, a missing washer, or a bolt not tightened enough. They are not caused by any design weakness. A high early failure rate or a long period of infant mortality would indicate inadequate process control during manufacture or assembly. The length of this period determines the length of burn-in required, that is, the time for which the product must be run in the factory before shipping in order to keep customers from experiencing the early failures. A design engineer would want to keep the early failure rate small and the infant mortality period short.

Period B

Period B represents a time of constant failure rate, at the end of which the rate starts to increase. Here, failures occur not because of manufacturing defects but because of “accidents” caused by chance loads exceeding the design strength. As will be seen later, a constant failure rate implies that the life distribution of units in this period follows the exponential law. This region is known as the period of “chance failures,” or “useful life.” The latter name derives from the fact that the product performs without failure during this period, except for accidents, giving its most useful performance.

Period C

Period C is a time of increasing failure rate because of parts wearing out. Failures occur in this period as a result of fatigue, aging, or embrittlement. This region is known as the “wear-out” period. Knowledge of when wear-out begins helps in planning replacements and overhauls so that wear-out can be delayed and the useful life extended.

3.2.3.4 Distribution of Product Life

The above discussion of the failure rate behavior of products and the phenomenon of the bathtub curve brings out the fact that the distribution of the life variable can change over the life of a product. At least three distinctly different periods can be identified for most electromechanical equipment, in which the life distribution follows different characteristics. We would want to know the distribution pattern of life in the different periods of a product’s life. The distribution characteristics have to be studied by gathering data on the failure times of the product and then checking which distribution fits the data best. Some of the candidates include the exponential, Weibull, log-normal, and gamma distributions. The properties of these distributions, and how their fit to a given set of data should be determined, are discussed in books on reliability, such as Ireson and Coombs (1988), Krishnamoorthi (1992), and Tobias and Trindade (1995), which are cited at the end of this chapter. Of all the distributions employed to model the life variable, however, the exponential distribution is the most common, as many parts and products are known to follow the exponential law during a major part of their lives. The exponential distribution is to the life variable what the normal distribution is to other measurable quality characteristics. A brief description of the exponential distribution follows.

3.2.3.5 The Exponential Distribution

The functional form, or the pdf, of the exponential distribution is given by:

f(t) = λe^(−λt),  t ≥ 0

The exponential random variable takes only positive values and the distribution has one parameter, λ. We write T ~ Ex(λ) to indicate that a random variable T has an exponential distribution with parameter λ. The distributions shown in Figure 3.3 do, in fact, represent the shape of an exponential distribution.

If T ~ Ex(λ), then:

CDF: F(t) = P(T ≤ t) = ∫₀ᵗ λe^(−λx) dx = 1 − e^(−λt),  t ≥ 0

R(t) = 1 − F(t) = e^(−λt),  t ≥ 0

h(t) = f(t)/R(t) = λe^(−λt)/e^(−λt) = λ,  a constant independent of t
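These relationships can be verified numerically. A minimal Python sketch, where the failure rate λ = 0.001 per hour and the age t = 500 hours are hypothetical values chosen only for illustration:

```python
import math

lam = 0.001  # failure rate λ (failures per hour); hypothetical value
t = 500.0    # age at which the product is evaluated, in hours

f = lam * math.exp(-lam * t)   # pdf:  f(t) = λe^(−λt)
F = 1 - math.exp(-lam * t)     # CDF:  F(t) = 1 − e^(−λt)
R = math.exp(-lam * t)         # reliability: R(t) = 1 − F(t) = e^(−λt)
h = f / R                      # hazard (failure) rate: h(t) = f(t)/R(t)

print(R)  # probability of surviving beyond 500 hours (about 0.61)
print(h)  # equals λ: the failure rate is constant, independent of t
```

Note that h comes out equal to λ for any t, which is the defining "constant failure rate" property of the exponential distribution.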

The mean of the distribution is:

μ_T = ∫₀^∞ t·λe^(−λt) dt = 1/λ

The variance of the distribution is:

σ_T² = ∫₀^∞ (t − μ_T)²·λe^(−λt) dt = 1/λ²

And the standard deviation is:

σ_T = 1/λ

We can see from the above that the mean and the standard deviation of an exponentially distributed random variable are equal. We can also see that a product with exponential failure times has a constant failure rate equal to the value of the parameter of the distribution, λ. This failure rate can be estimated from the failure data from sample units, using the following formula:

λ̂ = (Number of failures) / (Total number of hours of running time)
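This estimator is simple to apply. A Python sketch using hypothetical life-test data (4 failures observed over a combined 20,000 unit-hours of running time; these numbers are illustrative, not from the text):

```python
num_failures = 4        # failures observed during the life test (hypothetical)
total_hours = 20000.0   # total running time accumulated by all units on test

lam_hat = num_failures / total_hours  # λ̂ = failures / total running time
mttf_hat = 1.0 / lam_hat              # for exponential life, MTTF = 1/λ

print(lam_hat)   # estimated failure rate, in failures per hour
print(mttf_hat)  # estimated mean time to failure, in hours
```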

3.2.3.6 Mean Time to Failure

The mean of the distribution, or the average life of all units in the population, is called the "mean time to failure" (MTTF) and is equal to the reciprocal of the failure rate λ. The term MTTF is used for products that have only one life (i.e., those that are not repairable). For products that are repairable, the term "mean time between failures" (MTBF) is used to denote the average time between repairs. The MTTF (or the MTBF) has a special significance when the life distribution is exponential. Then, as shown above, it is equal to the reciprocal of the failure rate λ, the single parameter of the distribution. This means that knowledge of the MTTF alone provides information about the lives of the entire population. The evaluation and prediction of all measures relating to reliability can be made once the MTTF is known.

It should be pointed out, however, that the MTTF does not have the same significance when the life distribution is not exponential. For example, if the Weibull distribution (another popular model for life variables) is used to model the life of a product, then knowledge of the MTTF is not adequate to define the life distribution. Instead, the two parameters of the Weibull distribution must be estimated.

Example 3.4

Example 3.5

Example 3.6

From the above discussion, we see that reliability is a function of time and is quantified as the probability that a product will survive beyond a given time. Reliability can also be interpreted as the proportion of the population that will survive beyond the given time. The failure rate expressed as a function of age represents the susceptibility of a product to failure at a given age, and it provides another measure of reliability. The reliability of a product can be evaluated if its life distribution is known. The life distribution can be obtained from empirical data on the failure times of sample units. It can also be modeled using a distribution function that is chosen to "fit" the historical failure data of the product. Probability distributions (such as the exponential, Weibull, log-normal, and gamma) are commonly used to model life variables.

The most commonly used mathematical model for describing life variables, however, is the exponential distribution. It has one parameter, λ, called the "failure rate." This failure rate is a constant and is independent of age. If a product's life is exponential, then its reliability can be measured using its failure rate or its reciprocal, the MTTF. The failure rate, or MTTF, can then be used for setting reliability goals and monitoring reliability achievements. We have also seen how knowledge about the behavior of the failure rate over time, expressed as a bathtub curve, can be used to understand, and possibly enhance, a product's reliability.

3.2.3.7 Reliability Engineering

The discipline of applying reliability principles to evaluating, predicting, and achieving reliability in products is called "reliability engineering."

Reliability goals are first chosen at the overall system level. For example, if a manufacturer produces trucks, the reliability goal is first chosen for the entire truck. This reliability is then apportioned to the components so that, by working together, the component reliabilities will contribute to achieving the desired system reliability. The apportioning of the component reliability is done with an understanding of the relationship of the components to the system. First, the functional relationship of the components to the system is expressed in what is called a "reliability block diagram" (RBD). The system reliability is then calculated from component reliabilities using procedures that are documented in books on reliability (e.g., Tobias and Trindade 1995). Apportioning the system reliability to the components is done by trial and error, until the component reliabilities, assigned with the capabilities of the individual components in mind, achieve the desired system reliability. At the end of the apportioning exercise, the reliability required of the components will be known. For example, in the case of the truck, the individual reliability requirements of the engine, transmission, controls, and so on, will be known. The reliability requirements of purchased components will be written into the purchase contracts. Achieving reliability goals for manufactured components will be the responsibility of the design engineers and the quality planning team.
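How component reliabilities roll up into a system reliability depends on the RBD structure. A Python sketch of the two standard configurations, series and parallel; the component values are hypothetical and not taken from any actual apportionment:

```python
import math

def series_reliability(rs):
    """All components must survive for the system to survive:
    R_sys = R1 * R2 * ... * Rn (always below the weakest component)."""
    return math.prod(rs)

def parallel_reliability(rs):
    """The system fails only if every (redundant) component fails:
    R_sys = 1 - (1 - R1)(1 - R2)...(1 - Rn)."""
    return 1.0 - math.prod(1.0 - r for r in rs)

# Hypothetical truck subsystems in series: engine, transmission, controls
print(series_reliability([0.98, 0.97, 0.99]))  # about 0.94
# A redundant pair of 0.90 components in parallel
print(parallel_reliability([0.90, 0.90]))      # about 0.99
```

Apportionment by trial and error then amounts to adjusting the component values until the system expression built from such blocks meets the system goal.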

The reliability of current designs can be estimated either from field failure data or from laboratory test data. From these, the gap between the required and actual reliability can be obtained for each component, and from these estimates will emerge a few critical components that must have their reliability improved in order to attain the system reliability goals. Reliability improvement can be accomplished by studying the failure rate behavior and failure mechanisms of parts and subassemblies. If failures are happening in early life, process controls should be implemented or improved. Variability in process parameters must be reduced and poor workmanship must be avoided. If the failures result from wear-out, it may be because seals, belts, and hoses have increasing failure rates. Better materials, better tolerances, and improved maintenance will delay the wear-out failures. If the failure rate is high during the useful life, load-strength studies (see Chapter 4 in O'Connor 1985) will help in identifying opportunities to minimize "accidental" failures. Also, designs can be made more robust; that is, less susceptible to failure due to changes in environment over which the user has no control.

This is done by optimal choice of product parameters and their tolerances. The issues relating to the choice of product parameters and their tolerances are discussed next as part of the product design. Note that any significant improvement in reliability has to come from design changes.

Product design consists of two stages: first, the overall parameters are chosen; and second, the details of the parameters are worked out. Engineering drawings and specifications are created, and prototype testing is done for validating the design. As indicated in Figure 3.1, process design is undertaken even as the product design progresses. Trial runs are made for validating both the product and the process design. The major quality-related activities at this stage are:

1. Parameter design
2. Tolerance design
3. Failure mode and effects analysis
4. Design for manufacturability study
5. Design reviews

The objectives, procedures, and outcomes of each of these activities are explained below.

3.3.1 Parameter Design

Parameter design, in the context of product design, refers to selecting the product parameters, or those critical characteristics of the product that determine its quality and enhance the product's ability to meet its intended use and satisfy the customer.

At the end of the QFD exercise, the major design features of the product and their target values would have already been decided. For example, if the product is a lawnmower, the planning team would have chosen the performance target as: mowing an average yard of about 10,000 sq. ft. in less than one hour. They must then decide the product parameters, such as blade size, blade angle, engine horsepower, speed of rotation, deck height, chute angle, and so on, in order to accomplish the target performance. Most product designers would have initial values for these parameters based on their experience with previous models. The question to be answered is whether these initial values are good enough to meet the new target or if they need to be changed to attain the new target for the new model. Often, the answer has to be found through experimentation; that is, by trying different values for the product parameters, measuring the corresponding performances, and choosing the set of parameters that gives the desired performance. Thus, in the lawnmower example, an experiment has to be conducted with the objective of finding the best set of values for the product parameters that will enable the cutting of a yard that is 10,000 sq. ft. in size in one hour or less.

A vast body of knowledge exists on how to perform experiments efficiently so that the required information about the product performance, vis-à-vis the product parameters, is obtained with the minimum amount of experimental work. This branch of statistics, referred to as the "design of experiments" (DOE), was pioneered by Sir Ronald Fisher, the English statistician, who in the early 1920s was researching the selection of the best levels of inputs, such as fertilizer, seed variety, amount of moisture, and so on, to maximize the yield from agricultural fields. Designed experiments were subsequently used profitably in industrial environments to optimize the selection of product parameters during product design, and process parameters during process design. The Japanese engineer Dr. Genichi Taguchi propagated the philosophy that experiments must be used for selecting the product and process variables in such a way that the performance of the product or process will be "robust." By this, he meant that the selection of parameters should be such that the performance of the product will not be affected by the various noise or environmental conditions to which the product or process may be subjected.

The basics of designed experiments are discussed below. The objective here is to impress upon readers the need for experimentation when choosing product and process parameters, and alert them to the availability of different experimental designs to suit different occasions. Details on how experiments should be conducted and their results analyzed are also provided for some popular designs. By the end, it is hoped that readers will be able to appreciate the value of designed experiments in the context of product or process design, perform some basic experiments, and analyze the data from them. Readers will also be better prepared to explore more advanced designs when the need for them arises. The discussion below relates to two simple, but important, designs that are used in industrial experimentation, which are known as the 2² and 2³ factorial designs. Some additional designs in the 2^k family are included in Chapter 5.

3.3.2 Design of Experiments

An experiment is designed to study the effect of some input variables, called "factors," on a response, which may be the performance of a product or the output of a process. The factors can be set at different levels, and the product performance or the process output could change depending on the levels at which the different factors are kept. The design of the experiment involves choosing the relevant factors, selecting the appropriate levels for them, and determining the combinations of the factor levels, called the "treatment combinations," at which the trials will be conducted. The design also determines the number of times that the trials will be repeated with each treatment combination in order to obtain a measure of the random variability present in the results. In addition, the design will specify the sequence in which the trials should be run, and is usually accompanied by a procedure for analyzing the data from the trials and drawing conclusions.

Sometimes, an experimenter is concerned with only one factor and wants to determine the best level of that factor to achieve the desired level of a response. In such a case, an experiment would be conducted by running trials at different levels of that one factor. Such an experiment is known as a "one-factor experiment." More often, though, we will be dealing with situations where several factors are influencing a response, which is a quality characteristic of a product or the output of a process, and we have to find out how the different factors, individually and jointly, affect the response. With this being the more common situation, we will focus here on multifactor experiments.

There are many multifactor experimental designs to suit the varying situations in which experiments have to be run. We will discuss below one type of design called the "2^k factorial" design, in which k factors, each with two levels, are studied to learn of their effect on a response. These designs are very useful in the selection of product and process parameters and are considered to be "workhorse" designs in industrial experimentation. We will discuss the 2² and 2³ designs in this chapter, which, being simple designs, are useful in explaining the concepts and terminology of experimental design. The more general 2^k design is discussed in Chapter 5.

3.3.2.1 2² Factorial Design

A factorial experiment is an experiment in which each level of one factor is combined with each level of every other factor to obtain all possible treatment combinations at which trials are to be performed. The 2² design is a factorial experiment involving two factors, each at two levels, and is explained below using an example.

Example 3.7

The 2² factorial design shown in Figure 3.6 can also be represented in a table, as shown in Table 3.4. In this table, the lower level of each factor is designated by a (−) sign and the higher level by a (+) sign. Notice the simple pattern of the signs in the columns under each factor: the signs alternate one at a time in the column of Factor A and two at a time in the column of Factor B. This enables the design to be created easily and quickly. The design is to run the experiment at the four treatment combinations numbered one to four. For example, in Treatment Combination 1, both factors will be at their low levels; and in Treatment Combination 2, Factor A is at its higher level and Factor B is at its lower level. In addition, the design recommends that two trials be run at each treatment combination to obtain two replicates. The results from the two replicates will then be averaged, and the average will be used as the response from the treatment combination.
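The alternating sign pattern makes the table easy to generate: the column for the j-th factor alternates its sign in runs of 2^(j−1). A Python sketch of this construction (the function name is ours, not from the text):

```python
def two_level_design(k):
    """Build the 2^k factorial design table: one row per treatment
    combination, with -1 for the low level and +1 for the high level."""
    rows = []
    for i in range(2 ** k):
        # Factor j changes sign every 2**j rows: one at a time for the
        # first factor, two at a time for the second, and so on.
        rows.append([+1 if (i // (2 ** j)) % 2 else -1 for j in range(k)])
    return rows

for combo, levels in enumerate(two_level_design(2), start=1):
    print(combo, levels)
# Treatment Combination 1 is [-1, -1]: both factors at their low levels
# Treatment Combination 2 is [+1, -1]: Factor A high, Factor B low
```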

The number of replicates needed is determined based on the variability expected from the individual trials as well as the cost associated with, and time available for, repeating the trials. At least two replicates are necessary, as will be seen later, to obtain an estimate for the experimental error, or the variability in the response because of noise. However, when the number of factors is large (>3), there are ways of obtaining estimates for experimental error without multiple replications of the trials. For two- and three-factor experiments, at least two replicates must be performed.

"Noise" refers to the environmental conditions that are not deliberately changed as part of the experiment, but changes in them may influence the experimental results. For example, in the case of the lawnmower, we would like to perform the experiment with different treatment combinations on the same piece of yard so that things other than the experimental factors will remain the same for all treatment combinations. For obvious reasons, however, it is not possible to do this, and we have to repeat the experiment with different treatment combinations on different yards (called the experimental units), which may all be similar yards of "equal" size. When different yards are used for different treatment combinations, there will be some difference in the yards because of small differences in size, slopes, weeds, wet patches, and so on. These will cause variability in the results outside of the effects due to the factor levels being investigated. Replicates allow us to evaluate this variability, and the averages from the replicates, when used as the responses from the treatment combinations, will contain less variability from such noise compared to the single readings from individual trials.
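The claim that averaged replicates carry less noise than single readings can be checked by simulation. A Python sketch, assuming, purely for illustration, that the noise in a single trial is roughly normal with a standard deviation of 3 minutes:

```python
import random
import statistics

random.seed(42)
TRUE_TIME = 60.0   # 'true' mowing time for one treatment combination (minutes)
NOISE_SD = 3.0     # hypothetical trial-to-trial noise from yard differences

def one_trial():
    return TRUE_TIME + random.gauss(0.0, NOISE_SD)

singles = [one_trial() for _ in range(2000)]
pairs = [statistics.mean((one_trial(), one_trial())) for _ in range(2000)]

print(statistics.stdev(singles))  # close to 3
print(statistics.stdev(pairs))    # close to 3/sqrt(2), i.e., about 2.1
```

The spread of the two-replicate averages is smaller by a factor of about 1/√2, which is why averaging replicates sharpens the comparison between treatment combinations.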

3.3.2.2 Randomization

Just as noise or extraneous factors can cause variability in replicates that are run with the same treatment combination, noise factors can also affect outcomes between two treatment combinations, either by masking or adding to the factor effects. For example, dry grass in the afternoon may require less time to cut than moist grass in the morning. The morning/afternoon effect will add to, or subtract from, the real effect of the factors; therefore, care should be taken to avoid the morning/afternoon effect influencing the real factor effects in any systematic manner. To remove, or at least minimize, this effect of noise, the trials are randomized. In other words, the trials are not run in the order indicated in the design; instead, they are run in a randomly chosen order. Randomizing prevents the noise factors from affecting the trials systematically and minimizes the influence of noise factors on the trial outcomes.

In the example of the lawnmower, the eight trials can be randomly assigned to eight different yards (of approximately 10,000 sq. ft. in area and similar in all other aspects), and the eight trials can be run in a random sequence, using one operator. The experiment will then be considered to be a "completely randomized" experiment. On the other hand, suppose there is some restriction on the number of trials that can be run in one day; for example, only four trials are possible in a day. In such a case, it may make sense to run the four trials from one replication on one day, and the four trials from the other replication on another day. This way, if there is a "day effect," it will affect all four treatment combinations in one replication in the same way. It is possible then to separate this day effect from the rest of the noise by using a suitable analysis technique and, thus, minimize the amount of variability that is not accounted for. This is an example of blocking, in which the trials are run in two blocks, with the days being the blocks. The trials within a block are run in a randomized sequence. Such an experiment is called a "randomized complete block" design, with the term "complete" being used to indicate that each block includes one full, or complete, replication. When a full replication cannot be accommodated inside a block, we will have a "randomized incomplete block" design. This gives an idea of why so many different designs have been developed to accommodate the different circumstances of experimentation. We will discuss here only the completely randomized design. More details on the design and analysis of experiments with blocks can be found in the references (e.g., Hicks and Turner 1999) cited at the end of this chapter.

3.3.2.3 Experimental Results

Suppose the eight trials of the above experiment are run in a completely randomized manner and the results from the trials are as shown in Table 3.4. The results are presented in the graph in Figure 3.7, with the respective corners of the square representing the treatment combinations. It is easy to see that Treatment Combination 2 produces the best result, requiring the least amount of time to cut the given-size yard. Looking at the difference between the two replicates at each treatment combination (see Table 3.4), not much variability can be seen between the replicates, indicating that the experimental error, or unexplained variability, is almost nonexistent. This means that there is not much noise; therefore, the signal is clear, and it is easy to see the best treatment combination.

The results, however, do not come out this clear in many real experiments. There may be much difference in the results from replicates of the same treatment combination, and the results from the various treatment combinations may not be far enough apart to give a clear-cut choice. The results from the trials of treatment combinations may even overlap. When the differences among the results from the treatment combinations are not large enough to show obvious differences due to the factors, we would want to know whether the differences in the results are, indeed, true differences because of the factors or just experimental variability. In such a case, we would have to calculate the effect of the individual factors and the effect of interaction among the factors, and use a statistical technique to determine if these effects are significant.

3.3.2.4 Calculating the Factor Effects

The data from the experiment under discussion are rearranged in Table 3.5 to facilitate the calculation of the effects. This table has a new column, with the heading "interaction," added to those from Table 3.4. The original columns with (−) and (+) signs are named "design columns," and the new column is called the "calculation column." The treatment combinations are identified by new codes: "(1)" for the treatment combination in which both factors are at the low level, "a" for the treatment combination in which Factor A is at the high level and Factor B at the low level, "b" for the treatment combination in which Factor B is at the high level and Factor A at the low level, and "ab" for the treatment combination in which both factors are at the high level. These codes represent the average response from the respective treatment combinations in the formulas that are given below for calculating effects. The graphic representation of the design in Figure 3.7 includes these new notations to help readers follow the development of formulas for computing factor effects. (Please note that the codes we use here represent the averages from the treatment combinations. Some authors use the codes to represent the totals from the treatment combinations, so the formulas given here may look different from theirs.)

3.3.2.5 Main Effects

Factors A and B are called the main factors in order to differentiate them from the interactions that also arise as outcomes of experiments. The effect caused by a main factor, called a "main effect," is calculated by subtracting the average response at the two treatment combinations where the factor is at the lower level from the average response at the two treatment combinations where the factor is at the higher level. For example, the average of the responses of the treatment combinations where Factor A is at the higher level is:

(a + ab)/2

and the average of the responses of the treatment combinations where Factor A is at the lower level is:

((1) + b)/2

The difference between these two averages gives the effect of Factor A and is denoted as A. So,

A = [−(1) + a − b + ab]/2

Similarly,

B = [−(1) − a + b + ab]/2

So, for the example,

A = (−73 + 48 − 60 + 83)/2 = −1

B = (−73 − 48 + 60 + 83)/2 = 11

These effects can be interpreted as follows: if the blade angle (Factor A) is changed from its lower level of 12° to the higher level of 16°, then the mowing time decreases by 1 minute, and if the deck height (Factor B) is changed from 5 in. to 7 in., then the mowing time increases by 11 minutes.
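The same arithmetic can be scripted. A Python sketch using the averaged responses read off Table 3.4 ((1) = 73, a = 48, b = 60, ab = 83 minutes):

```python
# Average responses (minutes) from the four treatment combinations
r1, a, b, ab = 73, 48, 60, 83   # r1 stands for the code "(1)"

effect_A = (-r1 + a - b + ab) / 2   # main effect of Factor A (blade angle)
effect_B = (-r1 - a + b + ab) / 2   # main effect of Factor B (deck height)

print(effect_A)  # -1.0: raising the blade angle cuts 1 minute of mowing time
print(effect_B)  # 11.0: raising the deck height adds 11 minutes
```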

3.3.2.6 Interaction Effects

Interaction between two factors exists if the effect of the two factors acting together is much more, or much less, than the sum of the effects caused by the individual factors acting alone. It is necessary to detect the existence of interaction between factors, because when significant interaction exists, the main effects calculations are rendered suspect. The interaction effect between two factors also has practical meaning and helps in understanding how the factors work together.

The interaction effect between Factors A and B in the two-factor experiment is calculated as follows. Take the average of the responses from the treatment combinations where both factors are at the high level or both are at the low level; this is the average of the responses at the two ends of the leading diagonal of the square in Figure 3.7. Then take the average of the responses from the treatment combinations where one factor is at the high level and the other is at the low level; this is the average of the responses at the two ends of the other diagonal in Figure 3.7. Subtract the latter from the former; the difference is the interaction effect caused by increasing A and B simultaneously, denoted as the AB interaction.

For the example, to get the AB interaction, first find the average of the treatment combinations where both factors are at the high level or both are at the low level:

((1) + ab)/2

Then, find the average of the treatment combinations where one factor is at the high and one factor is at the low level:

(a + b)/2
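In code, the diagonal averages and their difference, which is the AB interaction, work out as follows (same Table 3.4 responses; the combined formula AB = [(1) − a − b + ab]/2 is the standard one for a 2² design):

```python
r1, a, b, ab = 73, 48, 60, 83   # average responses; r1 stands for "(1)"

diag_same = (r1 + ab) / 2    # both factors high, or both low (leading diagonal)
diag_mixed = (a + b) / 2     # one factor high, the other low (other diagonal)
AB = diag_same - diag_mixed  # interaction effect = [(1) - a - b + ab]/2

print(diag_same, diag_mixed, AB)
```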