Chapter 20 © 2015 Cengage Learning. All Rights Reserved. May not be scanned, copied or duplicated, or posted to a publicly accessible website, in whole or in part.

BUSINESS ANALYTICS: DATA ANALYSIS AND DECISION MAKING

Statistical Process Control

Introduction
One of the areas where statistics has had the largest impact in the business world is quality. The quality movement comprises much more than just statistical or quantitative methods. However, a large part of the success of the quality movement is due to the increased use of quantitative methods. One set of quantitative tools is referred to as statistical process control (SPC). Its two most important goals can be summarized as:
1. Get it right the first time—It is much better to catch mistakes early, when they are less costly to fix, than to wait for final inspection.
2. Reduce variation—Variability is the main culprit that hurts quality, so companies need to be able to measure it and give workers a way to eliminate it.

Deming's 14 Points (slide 1 of 3)
W. Edwards Deming is probably more responsible for today's emphasis on quality than any other single individual. After World War II, Deming taught Japanese industries the principles of quality management for which they are now well known. In the early 1980s, Deming and a few other quality gurus began teaching U.S. companies the statistical principles they needed to compete successfully. Deming is perhaps best remembered for his famous 14 points, a list of precepts he taught in all of his seminars.

Deming's 14 Points (slide 2 of 3)
1. Constancy of Purpose—Create constancy of purpose toward improvement of product and service, allocating resources to provide for long-range needs rather than only short-term profitability, with a plan to become competitive, stay in business, and provide jobs.
2. The New Philosophy—Adopt the new philosophy. We are in a new economic age, created in Japan. We can no longer live with commonly accepted levels of delays, mistakes, defective materials, and defective workmanship. Transformation of Western management style is necessary to halt the continued decline of industry.
3. Cease Dependence on Mass Inspection—Eliminate the need for mass inspection as a way to achieve quality by building quality into the product in the first place. Require statistical evidence of built-in quality in both manufacturing and purchasing functions.
4. End Lowest-Tender Contracts—End the practice of awarding business solely on the basis of price tag.
5. Improve Every Process—Improve constantly and forever the system of production and service, to improve quality and productivity, and thus constantly decrease costs.
6. Institute Training—Institute modern methods of training for everybody's job, including management, to make better use of every employee.

Deming's 14 Points (slide 3 of 3)
7. Institute Leadership of People—Adopt and institute leadership aimed at helping people to do a better job.
8. Drive Out Fear—Encourage effective two-way communication and other means to drive out fear throughout the organization so that everybody can work effectively and more productively for the company.
9. Break Down Barriers—Break down barriers between departments and staff areas.
10. Eliminate Exhortations—Eliminate the use of slogans, posters, and exhortations for the workforce, demanding zero defects and new levels of productivity without providing methods.
11. Eliminate Arbitrary Numerical Targets—Eliminate work standards that prescribe quotas for the workforce and numerical goals for people in management.
12. Permit Pride of Workmanship—Remove the barriers that rob hourly workers, and people in management, of their right to pride of workmanship.
13. Encourage Education—Institute a vigorous program of education, and encourage self-improvement for everyone.
14. Top Management Commitment and Action—Clearly define top management's permanent commitment to ever-improving quality and productivity, and their obligation to implement all of these principles.

Introduction to Control Charts (slide 1 of 3)
Control charts are one of the most important statistical tools available for reducing variability and improving quality. They are generally easy to use, and they provide a wealth of information about a process. There are two types of variability in a process:
If the current variability in the output of a process is due entirely to the inherent nature of the process, we say that its variability is due to common causes and that the process is in statistical control, or simply is an in-control process. Common cause variability is the inherent variation in an in-control process.
If some of the current variability of the process is due to specific assignable causes, such as bad materials or an improperly adjusted machine, we say that the process is an out-of-control process. Assignable cause variability is the extra variation observed when a process goes out of control, which could happen for any number of reasons.
Introduction to Control Charts (slide 2 of 3)
One of the main purposes of control charts is to monitor a process so that a company can see when the process goes from an in-control condition to an out-of-control condition. A process in control is not necessarily a good process, but it is at least predictable, regardless of whether it is any good. An out-of-control process, on the other hand, is unpredictable. The assignable causes that produce out-of-control behavior can often be corrected by the workers on the shop floor, without management intervention. In contrast, there is little workers can do to improve an in-control process that has unacceptable variability. Control charts allow workers to measure the amount of variability, but there is generally no way they can reduce that variability without guidance from management.

Introduction to Control Charts (slide 3 of 3)
The primary reasons that control charts have become so popular include:
1. They improve productivity and lower costs. Productivity is defined as the number of good items produced per hour. Control charts allow mistakes to be found early in the process—before they result in poor finished products.
2. They prevent unnecessary process adjustments. Control charts allow the operator to see when a process is really in need of an adjustment. This prevents unnecessary "tampering."
3. They provide diagnostic information about the process. Control charts not only signal when something is wrong, but they also provide clues as to the cause of the problem.
4. They provide information about process capability. Process capability is defined as the ability to produce outputs that meet specifications. Control charts provide this information, at least when the process is in control.
Control Charts for Variables (slide 1 of 2)
There are two basic types of control charts:
Charts for variables are relevant when there is a measurable quantity, such as a diameter or a weight, that can be monitored. The purpose of the chart is to see how this quantity varies through time.
Charts for attributes are appropriate when an item is judged either to conform to specifications or not. This type of chart tracks the proportion of conforming (or nonconforming) parts through time. It is also appropriate for tracking the number of defects through time.

Control Charts for Variables (slide 2 of 2)
Two of the most common types of variables control charts are the X chart and the R chart. To produce X and R charts, we randomly sample a small number of items and measure the characteristic of interest. The resulting sample of measurements is called a subsample.
An X chart plots the averages of small subsamples through time. Its purpose is to see how the mean of the process is changing through time.
An R chart plots the ranges (maximum minus minimum) of small subsamples through time. Its purpose is to see how the variability of the process is changing through time.
The resulting time series plots are more informative when centerlines and control limits are added to the charts. A centerline indicates the average value that the X's (or R's) vary around. Control limits place upper and lower bounds on where the X's (or R's) should be for a process in control.
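To make the centerline and control limit calculations concrete, here is a minimal sketch in Python. The function name and the sample data are our own illustrations, not part of the text; the A2, D3, and D4 values are the standard published control-chart constants for subsamples of size n = 5 (they change with n).

```python
from statistics import mean

def xbar_r_limits(subsamples, A2=0.577, D3=0.0, D4=2.114):
    """Centerlines and control limits for X-bar and R charts.
    A2/D3/D4 are the standard chart constants for subsample
    size n = 5; different subsample sizes use different constants."""
    xbars = [mean(s) for s in subsamples]            # subsample means
    ranges = [max(s) - min(s) for s in subsamples]   # subsample ranges
    xbarbar, rbar = mean(xbars), mean(ranges)        # grand mean, mean range
    return {
        "xbar_center": xbarbar,
        "xbar_lcl": xbarbar - A2 * rbar,
        "xbar_ucl": xbarbar + A2 * rbar,
        "r_center": rbar,
        "r_lcl": D3 * rbar,
        "r_ucl": D4 * rbar,
    }

# Three hypothetical subsamples of five fill volumes (ounces)
limits = xbar_r_limits([
    [12.02, 12.10, 11.97, 12.00, 12.05],
    [11.95, 12.03, 12.06, 11.98, 12.01],
    [12.00, 11.99, 12.04, 12.08, 11.96],
])
```

Note that the R chart's lower limit is zero for small subsamples (D3 = 0), which is why attention usually focuses on its upper limit.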
Example 20.1: Soda Cans.xlsx (slide 1 of 2)
Objective: To use X and R charts to check whether the process of filling soda cans is performing as it should.
Solution: The data file contains data on the number of ounces of soda in cans labeled as "12-ounce" cans. Every half hour, five cans of soda from a production process were measured for fill volume. This was done for 70 consecutive half-hour periods.
To create the charts in StatTools, designate the data range as a StatTools data set and then select X/R Charts from the Quality Control group on the StatTools ribbon. In the resulting dialog box, select the variables Obs1 through Obs5, limit the graph to observations 1 to 30, and base the control limits only on these observations.

Example 20.1: Soda Cans.xlsx (slide 2 of 2)
StatTools creates a new sheet called X-R Charts, which contains the data the control charts are based on, along with the X and R charts. These charts are shown below. No points are outside of the control limits, and there is no obvious "nonrandom" behavior, such as an upward trend through time. Therefore, this process appears to be in control.

More on X and R Charts
The X chart is a plot of the subsample averages, that is, the individual X's. The centerline for this plot is the average of all the X's, denoted X-double-bar. The lower and upper control limits, denoted LCL and UCL, are approximately three standard deviations (of a subsample average) on either side of the centerline, where the standard deviation of a subsample average is σ/√n, σ is the process standard deviation, and n is the subsample size.
The R chart measures within-subsample variation over time. Each R measures the variability in the process at a given point in time.
For the R chart, we use the average R as the centerline and again go out three standard deviations (of R) on either side to form the control limits. We typically look at the R chart first, because the control limits for the X chart make little sense unless the R's are in control. Assuming that the R chart indicates in-control behavior, we then shift our attention to the X chart. Any point beyond the control limits suggests a shift, either up or down, in the mean of the process. If we see such a point, we can begin searching for an assignable cause.

Example 20.1 (Continued): Soda Cans.xlsx (slide 1 of 2)
Objective: To continue the X and R charts to learn whether the soda can process stays in control beyond the subsamples on which the original charts were based.
Solution: Plot all of the subsamples, but base the control limits and centerlines only on subsamples 1-30. The resulting X and R charts are shown below.

Example 20.1 (Continued): Soda Cans.xlsx (slide 2 of 2)
The R chart shows that the process stayed in control for at least 10 more half-hour periods beyond subsample 30. However, beginning shortly after subsample 40, the process variability appears to have increased, and two points jumped above the upper control limit. Presumably, the operator discovered the problem and fixed it around the time of subsample 55. At about the same time, the X chart suggests a downward shift in the process mean. Many points are below the centerline, and one finally crosses the lower control limit at subsample 63. It appears that this machine needs to be readjusted to bring its mean back up to the previous level.
After this is done, both control charts should indicate an in-control process—at least until some other assignable cause forces it out of control again.

Control Charts and Hypothesis Testing (slide 1 of 2)
As we monitor processes with control charts, there are two types of errors we can make.
A type I error occurs when we react to an out-of-control indication when in fact the process is still in control. We call this a false alarm. We want the probability of a type I error to be fairly small; if it is too large, we react to too many false alarms. Therefore, we set the control limits fairly far apart so that the chance of observing a point beyond them is very small. The mean number of subsamples until an in-control process produces a point beyond the control limits is called the average run length, or ARL. If we set the control limits three standard deviations from the centerline, the probability that any single point falls beyond them is about 0.0027, so ARL = 1/0.0027 ≈ 370.

Control Charts and Hypothesis Testing (slide 2 of 2)
A type II error occurs when the process has gone out of control, but the control charts do not indicate it. It is difficult to calculate the probability of a type II error because there are many types of out-of-control conditions that could occur. To keep both type I and type II errors to a minimum, there are two strategies:
1. Sample more frequently.
2. Increase the subsample size.
Both strategies are intended to decrease the ARL when the process goes out of control.
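The ARL arithmetic can be checked directly. This is a small sketch (the function name is ours): 0.0027 is simply the two-tailed probability that a normally distributed point falls more than three standard deviations from its mean.

```python
from statistics import NormalDist

def in_control_arl(k=3.0):
    """Average run length until a false alarm when the control limits
    sit k standard deviations from the centerline (normal, in-control)."""
    p_beyond = 2 * (1 - NormalDist().cdf(k))  # P(single point outside limits)
    return 1 / p_beyond

arl = in_control_arl(3)  # roughly 370 subsamples between false alarms
```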
Other Out-of-Control Indications (slide 1 of 2)
In addition to points beyond the control limits, other possible indications of an out-of-control process include:
1. At least 8 consecutive upward (or downward) changes.
2. At least 8 consecutive points above (or below) the centerline.
3. At least 2 of 3 consecutive points beyond two standard deviations from the centerline (where both are on the same side of the centerline); usually applied only to X charts.
4. At least 4 of 5 consecutive points beyond one standard deviation from the centerline (where all 4 are on the same side of the centerline); usually applied only to X charts.

Other Out-of-Control Indications (slide 2 of 2)
For the last two conditions, it is common to divide the region between the centerline and either control limit into three "zones" of width one standard deviation each, as shown below. Then condition 3 is called the Zone A rule, and condition 4 is called the Zone B rule. The idea is that although points within zone A and zone B are within the control limits, it is unlikely that an in-control process would have this many nearby points in zone A or B.

Rational Subsamples
The small number of observations taken periodically should form rational subsamples. This means they should be taken in such a way that only common-cause variability can be attributed to the points in a particular subsample. There shouldn't be any assignable causes of variability that affect some of the points in the subsample and not others. Typically, rational subsamples are obtained by taking observations close together in time.
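The four supplementary out-of-control conditions listed above can be checked programmatically. The sketch below is our own illustration (commercial SPC software implements minor variants of these rules), scanning a sequence of charted points against a given centerline and standard deviation.

```python
def out_of_control_signals(points, center, sigma):
    """Check the four supplementary out-of-control conditions on a
    sequence of charted points. Returns the rules that are tripped."""
    signals = []
    # Condition 1: at least 8 consecutive changes in the same direction
    diffs = [b - a for a, b in zip(points, points[1:])]
    run = 1
    for prev, cur in zip(diffs, diffs[1:]):
        run = run + 1 if (prev > 0) == (cur > 0) else 1
        if run >= 8:
            signals.append("8 consecutive moves in one direction")
            break
    # Condition 2: at least 8 consecutive points on one side of the centerline
    run = 1
    for prev, cur in zip(points, points[1:]):
        run = run + 1 if (prev > center) == (cur > center) else 1
        if run >= 8:
            signals.append("8 consecutive points on one side")
            break
    # Condition 3 (Zone A rule): 2 of 3 consecutive points beyond two
    # standard deviations, on the same side of the centerline
    for i in range(len(points) - 2):
        window = points[i:i + 3]
        if sum(p > center + 2 * sigma for p in window) >= 2 or \
           sum(p < center - 2 * sigma for p in window) >= 2:
            signals.append("2 of 3 beyond two sigma")
            break
    # Condition 4 (Zone B rule): 4 of 5 consecutive points beyond one
    # standard deviation, on the same side of the centerline
    for i in range(len(points) - 4):
        window = points[i:i + 5]
        if sum(p > center + sigma for p in window) >= 4 or \
           sum(p < center - sigma for p in window) >= 4:
            signals.append("4 of 5 beyond one sigma")
            break
    return signals
```

For example, a steadily rising series trips condition 1 (and usually condition 2), while a series that hovers tightly around the centerline trips none of them.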
Example 20.2: Gaskets.xlsx (slide 1 of 2)
Objective: To see how nonrational subsamples can produce misleading information in X and R charts.
Solution: Two parallel production machines produce identical types of gaskets. Every 15 minutes, four gaskets were sampled, two from each machine, to determine their thickness. The data file contains data from this process. The charts for these data appear below.

Example 20.2: Gaskets.xlsx (slide 2 of 2)
The X chart looks too good. Each X is an average of two typical machine 1 observations and two typical machine 2 observations. With such averages, the highs tend to cancel the lows. A rational subsample philosophy would instead suggest separate control charts for each machine, as shown below for machine 2. There is one out-of-control point in the X chart, and nearly another. Machine 2 should be checked for assignable causes.

Deming's Funnel Experiment and Tampering (slide 1 of 3)
If a system is already in control, frequent small adjustments can actually make the system worse. Deming called this "tampering" and often demonstrated it in his seminars with the following funnel experiment. Deming placed a funnel above a target on the floor and dropped small balls through the funnel in an attempt to hit the target. He proposed four rules for adjusting the position of the funnel:
1. Never move the funnel.
2. After each ball is dropped, move the funnel—relative to its previous position—to compensate for any error.
3. Move the funnel—relative to its original position at (0,0)—to compensate for any error.
4. Always reposition the funnel directly over the last drop.

Deming's Funnel Experiment and Tampering (slide 2 of 3)
We can see how these rules work by running a simulation in Excel. The simulation tracks 50 consecutive drops for any of the four rules, where Px,t and Py,t are the x- and y-coordinates of the position of the funnel just before drop t; Px,1 and Py,1 are the coordinates of the initial position (both set to 0 for all of the rules); Xt and Yt are the coordinates where drop t actually falls; and Px,t+1 and Py,t+1 are the coordinates of the next funnel position. In words, rule 1 leaves the funnel position unchanged; rule 2 sets the next position to the previous position minus the last drop's error; rule 3 sets it to the negative of the last drop's position, compensating from the origin; and rule 4 sets it equal to the last drop's position.

Deming's Funnel Experiment and Tampering (slide 3 of 3)
The figure on the bottom left illustrates the simulation for rule 2. After implementing each of the four rules for 50 drops, we can use a data table, as shown on the bottom right, to replicate 100 times the distance from the 50th drop to the target for each rule. Because we want these distances to be small, rule 1 performs best, with rule 2 fairly close behind, and rules 3 and 4 perform terribly. As Deming predicted, tampering with an in-control system never helps—and it can have very negative consequences.

Control Charts in the Service Industry
Although most applications of control charts are in the manufacturing area, it is certainly possible to apply the same analysis to problems in the service industry, as shown in the next example.
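The funnel experiment above translates almost directly into code. This sketch is not the book's Excel model: it simulates the x-coordinate only (the y-coordinate behaves identically), assumes standard normal drop errors, and summarizes each rule by the standard deviation of the final drop's position rather than by a distance-to-target data table.

```python
import random

def final_drop_sd(rule, drops=50, trials=2000, seed=1):
    """Simulate Deming's funnel under one of the four rules and return
    the standard deviation of the final drop's position (x-axis only)."""
    rng = random.Random(seed)
    finals = []
    for _ in range(trials):
        pos = 0.0                          # funnel starts over the target
        for _ in range(drops):
            x = pos + rng.gauss(0.0, 1.0)  # where this ball lands
            if rule == 2:
                pos -= x                   # compensate from current position
            elif rule == 3:
                pos = -x                   # compensate from the origin
            elif rule == 4:
                pos = x                    # move funnel over the last drop
            # rule 1: never move the funnel
        finals.append(x)
    m = sum(finals) / trials
    return (sum((v - m) ** 2 for v in finals) / trials) ** 0.5

# Rule 1 stays near the inherent spread, rule 2 is somewhat worse,
# and rules 3 and 4 drift farther and farther from the target.
```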
Example 20.3: Checks.xlsx (slide 1 of 3)
Objective: To see how control charts can help Woodstock find the reasons for untimely check processing and suggest ways of decreasing check processing times.
Solution: Woodstock Company is having difficulty processing checks in a timely manner. This is making its suppliers unhappy and is keeping Woodstock from obtaining the discounts many suppliers offer for prompt payment.
To produce control charts, assume that Woodstock measured the processing times for five checks completed each day. Each processing time is defined as the time from when a supplier's shipment is received until Woodstock sends the check to the supplier. The data file contains these processing times for 60 consecutive business days.

Example 20.3: Checks.xlsx (slide 2 of 3)
The X chart, shown below, indicates out-of-control points on days 7 and 10. Woodstock learned that on day 7 the people in finance implemented a change in the check preparation process, but this change backfired and was eliminated after 5 days. Woodstock eliminated points 7-11 and formed new charts that showed statistical control, but it was still alarmed at the high average processing times and the high variability.

Example 20.3: Checks.xlsx (slide 3 of 3)
Management then discovered several unnecessary steps in the check preparation process and took steps to streamline it. The X and R charts through day 60 (with days 7-11 eliminated) appear below. The R chart indicates a lower level of variability, and the X chart indicates a decreased average time to process checks.
Control Charts for Attributes
An item that fails to conform to specifications is called a nonconforming (or defective) item. When items can be classified only as conforming or nonconforming, we typically chart the proportions that are nonconforming during consecutive periods of time. The resulting chart is called a p chart. It is one of several types of charts called attributes charts, where the term attribute indicates an "on/off" type of measurement: the item either has the attribute or it does not.
There are other types of attributes charts, called c charts and u charts, that are used to chart the number (or rate) of defects in successive items, where a defect is any flaw in an item. These charts can be formed easily with the StatTools Quality Control procedures.

The p Chart
Suppose we sample ni items during period i, and ki of these fail to conform to specifications. Let p̂i = ki/ni be the proportion of nonconforming items in sample i. A p chart is then a time series plot of the p̂i's. It plots the proportions of items that are nonconforming (defective) through time.
We place a centerline and control limits on the chart in such a way that the p̂i's for an in-control process vary randomly around the centerline and almost never cross the control limits. The centerline is the proportion of all sampled items that are nonconforming, and the lower and upper control limits are three standard deviations below and above the centerline.
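A minimal sketch of the p chart centerline and limits follows. The function is our own illustration; for simplicity it uses the average sample size when the ni differ (real packages often instead recompute the limits sample by sample), and it clips the limits to the [0, 1] range, since proportions cannot fall outside it.

```python
def p_chart_limits(nonconforming_counts, sample_sizes):
    """Centerline and three-standard-deviation limits for a p chart.
    Uses the average sample size, a common simplification when the
    sample sizes are unequal."""
    pbar = sum(nonconforming_counts) / sum(sample_sizes)  # centerline
    n = sum(sample_sizes) / len(sample_sizes)             # average sample size
    halfwidth = 3 * (pbar * (1 - pbar) / n) ** 0.5
    return {"center": pbar,
            "lcl": max(0.0, pbar - halfwidth),  # proportions can't be < 0
            "ucl": min(1.0, pbar + halfwidth)}  # ... or > 1

# Hypothetical counts of nonconforming chips out of 75 tested per hour
limits = p_chart_limits([18, 20, 15, 22, 17], [75] * 5)
```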
Example 20.4: Chips1.xlsx (slide 1 of 2)
Objective: To use p charts to see whether the chip manufacturing process at SoundTech is in control and is producing a "small" number of nonconforming chips.
Solution: SoundTech manufactures electronic chips for sound systems in personal computers. Each chip is classified as conforming or nonconforming. Each hour, 75 chips are tested for conformance. The data file lists the number of nonconforming chips (out of 75) for 25 consecutive hours.

Example 20.4: Chips1.xlsx (slide 2 of 2)
To create the p chart, select P Chart from the StatTools Quality Control group. In the dialog box, the variables Number Nonconforming and Sample Size should be selected for this example. The p chart appears below. The current process appears to be in control, but an average percentage nonconforming of about 25% is not very good. SoundTech should begin searching for improvements to its process.

The Red Bead Experiment (slide 1 of 2)
Deming often used the following red bead experiment. It illustrates that in a system subject only to common-cause variation, some workers are bound to be the "best" on some days and the "worst" on others, for no particular reason. It also illustrates how all workers can fail to live up to standards, through no fault of their own, if the system is not designed correctly.
The experiment is very simple. There is a large container of beads, 20% of which are red and 80% of which are white; red beads correspond to defectives. Several people are asked to play the role of workers, and others are asked to help out as inspectors. Each of the workers gets a "paddle" with 50 holes, where each hole can hold a single bead.
Each worker must put his or her paddle into the container and pull out exactly 50 beads, which represents one day's production quantity. Each worker's job is to produce no more than two defectives per day. Obviously, the experiment is stacked against the workers.

The Red Bead Experiment (slide 2 of 2)
We can illustrate the red bead experiment with an Excel simulation and a p chart.

Process Capability
When we analyze whether a process is able to meet set specifications, the analysis is called a process capability analysis. Process capability measures the ability of an in-control process to produce items that meet specifications. In a process capability analysis, we are typically given lower and upper specification limits, denoted LSL and USL, and we want to calculate the proportion of outputs from a given process that fall within these limits. Based on data generated from the process, we perform a probability calculation to see how capable the current process is of producing outputs within the specification limits.

Example 20.5: Rods.xlsx (slide 1 of 3)
Objective: To use control charts to check whether the manufacturing process is in control and, if it is, to use standard statistical procedures to estimate the proportion of rods that meet specifications.
Solution: A manufacturing process produces rods for a mechanical device. Engineers have determined that the diameters of the rods must be between 20.80 and 20.95 millimeters.
Diameters of six randomly selected rods were measured every half hour for several production shifts, and the measurements are collected in the data file. First, examine X and R charts to see whether the current process is in control. The X chart below shows that the current process is in control.

Example 20.5: Rods.xlsx (slide 2 of 3)
Once it is determined that the process is in control, estimate the proportion of rods that fall within the specification limits LSL = 20.80 and USL = 20.95. Count the number of observed rods with diameters within the limits. The proportion within the limits is 266/270 = 0.985.
Next, create a probability model for future rods. First, verify that rod diameters are normally distributed by creating a histogram of the diameters. The histogram indicates a reasonably bell-shaped distribution, so a normal probability model is reasonable.

Example 20.5: Rods.xlsx (slide 3 of 3)
Then use the NORMDIST function to calculate the probability below the LSL and the probability above the USL, as shown to the right. There is almost no probability of being below the LSL, but the probability of being above the USL is just below 0.02. Slightly more than 98% of rods should meet specifications if the process continues to operate as it currently does.
Finally, project the results to large numbers of items by multiplying each of the probabilities by 1,000,000. Almost 20,000 ppm (parts per million) will fall above the USL.
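The NORMDIST calculation has a direct Python analogue. The mean and standard deviation below are hypothetical stand-ins; the actual values would be estimated from the Rods.xlsx data.

```python
from statistics import NormalDist

def spec_probabilities(mu, sigma, lsl, usl):
    """Normal-model probabilities of falling below the LSL, above the
    USL, and within the specification limits, plus ppm out of spec."""
    d = NormalDist(mu, sigma)
    below = d.cdf(lsl)        # P(measurement < LSL)
    above = 1 - d.cdf(usl)    # P(measurement > USL)
    return {"below_lsl": below,
            "above_usl": above,
            "in_spec": 1 - below - above,
            "ppm_out": (below + above) * 1_000_000}

# Hypothetical process: mean 20.89 mm, standard deviation 0.03 mm
result = spec_probabilities(20.89, 0.03, lsl=20.80, usl=20.95)
```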
Process Capability Indexes (slide 1 of 5)
If the outputs from an in-control process are approximately normally distributed with mean μ and standard deviation σ, then we know that almost all of the items produced will be within three standard deviations of the mean. This interval has length 6σ, but we want the items to be within the interval from LSL to USL, an interval of length USL − LSL. One way to judge the capability of a process is to compare the lengths of these two intervals with a process capability index. The capability index Cp is defined by

Cp = (USL − LSL) / (6σ)

To understand Cp, assume that the ideal output value, called the "target," is halfway between the LSL and the USL. Also assume that the current mean μ of the process is equal to the target, and that the distance from the target to either specification limit is 3σ. Then Cp = 1, as illustrated by the figure to the right.

Process Capability Indexes (slide 2 of 5)
When the target is midway between the specification limits, the process mean is equal to the target, and the process is normally distributed, it can be shown that the probability of falling outside the specification limits is

P(outside limits) = 2P(Z > 3Cp)

where Z is normal with mean 0 and standard deviation 1. We use this equation to show the effect of Cp in the figure below. The ppm outside the specification limits decreases dramatically as Cp increases.

Process Capability Indexes (slide 3 of 5)
If there is variation in the process and the process mean is off target, we need a slightly different capability index, denoted Cpk, to measure how close the process mean is to the nearest specification limit:

Cpk = min(USL − μ, μ − LSL) / (3σ)

Cpk is illustrated by the figure below.
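The P(outside limits) = 2P(Z > 3Cp) relationship is easy to tabulate. The function name below is ours; the formula is the one just given for a centered, normally distributed process.

```python
from statistics import NormalDist

def ppm_outside(cp):
    """Parts per million outside the specification limits for a
    centered, normally distributed process with capability index Cp."""
    return 2 * (1 - NormalDist().cdf(3 * cp)) * 1_000_000

# Tabulate how quickly the out-of-spec rate falls as Cp grows
for cp in (0.5, 1.0, 1.333, 1.667, 2.0):
    print(f"Cp = {cp:5.3f}  ppm outside = {ppm_outside(cp):14.6f}")
```

At Cp = 1 this reproduces the roughly 2700 ppm figure quoted in the summary slide.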
Process Capability Indexes (slide 4 of 5)
If Cpk is unacceptably small, there are two possibilities:
* Try to "center" the process by adjusting the process mean to the target. (In this case, Cp and Cpk coincide.)
* Try to reduce the process variation, with or without a shift in the mean. By reducing σ, we automatically increase Cpk (and Cp), regardless of whether the mean is on target.
Both Cp and Cpk are simply indexes of process capability. The larger they are, the more capable the process is. An equivalent descriptive measure is the "number of sigmas" of a process. A k-sigma process is one in which the distance from the process mean to the nearest specification limit is kσ, where σ is the standard deviation of the process.

Process Capability Indexes (slide 5 of 5)
Summary:
* The Cp index is appropriate for processes in which the mean is equal to the target value (midway between the specification limits). Processes with Cp = 1 produce about 2700 out-of-specification items per million, but this number decreases dramatically as Cp increases.
* The Cpk index is appropriate for all processes, but it is especially useful when the mean is off target. Processes with Cpk = 1 produce about 1350 out-of-specification items per million on the side nearest the process mean (and fewer on the other side), and again this number decreases dramatically as Cpk increases.
* Both Cp and Cpk are only indexes of process capability. However, they imply the probability of an item being beyond specifications (and the ppm beyond specifications).
* A 3-sigma process has Cpk = 1, whereas a 6-sigma process has Cpk = 2.
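The two indexes are straightforward to compute. The sketch below uses hypothetical specification limits and process parameters (not from the textbook example) purely to illustrate how an off-target mean lowers Cpk while Cp is unchanged:

```python
def cp(lsl, usl, sigma):
    # Cp = (USL - LSL) / (6*sigma): compares spec width to process spread
    return (usl - lsl) / (6 * sigma)

def cpk(lsl, usl, mu, sigma):
    # Cpk = min(USL - mu, mu - LSL) / (3*sigma): distance from the mean
    # to the nearest spec limit, in units of 3*sigma
    return min(usl - mu, mu - lsl) / (3 * sigma)

# Hypothetical process: spec width exactly 6 sigma
lsl, usl, sigma = 20.80, 20.95, 0.025

print(f"Cp                    = {cp(lsl, usl, sigma):.3f}")
print(f"Cpk (mean on target)  = {cpk(lsl, usl, 20.875, sigma):.3f}")
print(f"Cpk (mean off target) = {cpk(lsl, usl, 20.90, sigma):.3f}")
```

When the mean sits on the target (midway between the limits), Cpk equals Cp; shifting the mean toward the USL leaves Cp unchanged but pulls Cpk down.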
In general, the distance from the process mean to the nearest specification limit in a k-sigma process is kσ. Quality improves dramatically as k increases.

More on Motorola and 6-sigma (slide 1 of 3)
Until the 1990s, most companies were content to achieve a 3-sigma process, that is, Cp = 1. Motorola questioned the wisdom of this on two counts:
* Products are made of many parts. The probability that a product is acceptable is the probability that all parts making up the product are acceptable.
* When using control charts to monitor quality, shifts of 1.5 standard deviations or less in the process mean are difficult to detect.
Given that the process mean might be as far as 1.5σ from the target and that a product is made up of many parts, a 3-sigma process might not be as good as originally thought.

More on Motorola and 6-sigma (slide 2 of 3)
The following analysis is referred to as Motorola 6-sigma analysis. Suppose a product is composed of m parts. Calculate the probability that all m parts are within specifications when the process mean is 1.5σ above the target and the distance from the target to either specification limit is kσ (that is, a k-sigma process with a process mean off center by 1.5σ). Use this equation, where Z is normal with mean 0 and standard deviation 1:

P(all m parts within specs) = [P(−k − 1.5 < Z < k − 1.5)]^m

More on Motorola and 6-sigma (slide 3 of 3)
This analysis is easy to implement in Excel. It shows that a 3-sigma process is not that great.
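The same Motorola-style calculation can be sketched in Python instead of Excel. This assumes the 1.5σ mean shift and the k-sigma spec distance described above:

```python
from statistics import NormalDist

Z = NormalDist()  # standard normal distribution

def p_good_product(k, m, shift=1.5):
    # P(one part in spec) for a k-sigma process whose mean is
    # `shift` standard deviations above the target:
    # P(-k - shift < Z < k - shift)
    p_part = Z.cdf(k - shift) - Z.cdf(-k - shift)
    # All m parts must be in spec for the product to be acceptable
    return p_part ** m

m = 1000  # parts per product
for k in (3, 4, 5, 6):
    print(f"{k}-sigma: P(all {m} parts in spec) = {p_good_product(k, m):.6f}")
```

For k = 3 the probability that a 1000-part product is fully within specifications is essentially zero, while for k = 6 it is about 0.9966, i.e., only about 0.34% of products out of specification, matching the figures in the slides.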
In contrast, a 6-sigma process is extremely capable, with only 0.34% of its 1000-part products out of specifications. No wonder Motorola's goal was to achieve "6-sigma capability in everything we do."