The quality goals of an organization include:

- the elimination of nonconformance,
- the minimization of variation around appropriate targets, and
- doing so at minimum cost.

Management of these goals requires knowledge of how well they are being realized. More specifically, knowledge must be obtained of how well processes are performing and what must be done to improve them.

For many years, process assessment has employed process capability measures, including Cp, Cpk, and, more recently, Cpm. The use of these measures requires demonstration of statistical control, followed by an assessment of the distribution against the targets and specifications. Difficulties arise when statistical control cannot be assumed and when a large number of process streams are present. Here are a few examples.

A plastic lid manufacturer has 40 molding machines, 54 tools per molder, tools are replaced every few weeks, and tool-to-tool differences are found. In addition, slight differences are found as batches of raw materials are changed. Other sources of variation include maintenance cycles, startup periods, and any possible adjustments made by the operating personnel.

A metal crown company has eight presses, each containing 22 dies. Slight differences exist die-to-die and press-to-press. In addition, the process exhibits tool wear and slight fluctuations are observed with lot-to-lot changes in steel.

An aluminum can manufacturer has two lines in one of its plants. Each line contains 16 stations. Station-to-station differences are found. This process also undergoes tool wear and perfect through-time stability (or control) is not observed in many of the stations.

These types of situations are frequently encountered. Unfortunately, traditional capability assessments are inadequate to handle them. If control cannot be demonstrated, traditional assessments of process capability cannot be properly made. In addition, the presence of a large number of process streams yields a combination of distributions that is a nightmare to model mathematically. In the meantime, quality practitioners still need a means of assessing their processes and implementing improvement strategies.

In recent years “process performance measures” have gained popularity. These measures, introduced by John T. Herman (1989), are analogous to their counterpart capability measures. A “P” representing “performance” replaces the “C” found in “capability” indices. The most popular performance measures are Pp, Ppk, and Ppm. These measures may be used to answer questions concerning how well the process is performing and may be employed before a process is documented to be in a state of statistical control.

The following table shows the capability indices and the counterpart performance measure:

| Capability Measure | Performance Measure |
|---|---|
| Cpk | Ppk |
| Cp | Pp |
| Cpm | Ppm |

The calculations of performance measures are described in this file. (See Process Performance Measures.) The formulas are similar to their capability counterparts, except that they employ sample statistics rather than estimates of population parameters.
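As a sketch only, the measures can be computed directly from the raw sample, assuming the conventional textbook definitions (Pp and Ppk from the overall sample standard deviation, and Ppm with a Cpm-style denominator that adds the squared deviation from target); the function and variable names are illustrative:

```python
import math
import statistics

def performance_measures(data, lsl, usl, target):
    """Pp, Ppk, and Ppm computed from overall sample statistics.

    Unlike capability indices, no control-chart estimate of sigma
    is used -- the sample itself supplies the statistics.
    """
    xbar = statistics.mean(data)
    s = statistics.stdev(data)  # overall sample standard deviation (n - 1)
    pp = (usl - lsl) / (6 * s)
    ppk = min(usl - xbar, xbar - lsl) / (3 * s)
    # Ppm penalizes being off target, analogous to Cpm
    ppm = (usl - lsl) / (6 * math.sqrt(s ** 2 + (xbar - target) ** 2))
    return pp, ppk, ppm
```

For a centered, on-target sample the three measures coincide; they separate as the process moves off center or off target.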

Performance measures differ from capability measures in that:

- statistical control is not required for analysis,
- both common and special causes of variation may be present,
- they do not consider distributional shape, and
- they measure past performance only, and do not predict what may occur in the future.

These measures:

- assess total observed variation of what was produced during a particular time period, and
- can be used as validation measures for process improvement.

As introduced here, Process Performance Analysis uses performance measures and graphical techniques to display performance, potential performance, and to determine the sources of quality losses. Process Performance Analysis provides a means to:

- assess processes before statistical control is achieved,
- determine sources of process loss,
- make comparisons among characteristics, products, plants, and suppliers,
- determine control and improvement priorities, and
- assess the results of process improvement efforts.

Process Performance Analysis determines the performance of the process and identifies opportunities for improvement. The analysis uses Process Performance Measures, Variance Components, and Graphical Analysis to gain understanding of processes.

The following are typical elements and structure of this type of analysis.

Upon data collection, graphical exploratory analysis should be conducted. The following may prove to be useful:

- Histograms of the combined data,
- Box-and-Whisker plots for each of the process streams,
- Control charts of the combined data, and
- Control charts within process streams.

See: Graphics Overview

The following descriptive measures should be generated:

- process mean of all combined data,
- process standard deviation of all combined data,
- quartiles (including the median), and low and high of the combined data,
- skewness and kurtosis of the combined data,
- mean calculated within each process stream,
- standard deviation calculated within each process stream,
- potential standard deviation estimates within each process stream, calculated using average or median dispersion statistics,
- total number above the upper specification limit, and
- total number below the lower specification limit.
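A minimal sketch of generating several of these measures with the Python standard library (skewness and kurtosis are omitted, since the `statistics` module does not provide them); the names are illustrative:

```python
import statistics

def describe_combined(data, lsl, usl):
    """Descriptive measures of the combined data."""
    data = sorted(data)
    q1, median, q3 = statistics.quantiles(data, n=4)
    return {
        "mean": statistics.mean(data),
        "stdev": statistics.stdev(data),
        "low": data[0], "q1": q1, "median": median, "q3": q3, "high": data[-1],
        "above_usl": sum(x > usl for x in data),  # total above upper spec
        "below_lsl": sum(x < lsl for x in data),  # total below lower spec
    }

def describe_streams(streams):
    """Mean and standard deviation within each process stream."""
    return {name: (statistics.mean(v), statistics.stdev(v))
            for name, v in streams.items()}
```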

See: Output Text

The performance measures previously discussed should be calculated:

- Ppm, or Ppk
- Pp
- Pp(process stream)
- Cp(potential)
- Nonconforming Proportion

See: Performance Measures

To determine improvement opportunity, the relative contributions of the sources of variation around target should be assessed. This is done by calculating Variance Components.

See: Variance Components
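As a rough sketch of the idea (not a formal ANOVA or REML fit), variation around the target can be split into within-stream, between-stream, and off-target components; this assumes equally sized streams and uses population variances for simplicity:

```python
import statistics

def variance_components(streams, target):
    """Split variation around target into three components."""
    combined = [x for values in streams.values() for x in values]
    grand_mean = statistics.mean(combined)
    # average variance observed inside each stream
    within = statistics.mean(
        [statistics.pvariance(v) for v in streams.values()])
    # variance among the stream means
    between = statistics.pvariance(
        [statistics.mean(v) for v in streams.values()])
    # squared distance of the overall mean from the target
    off_target = (grand_mean - target) ** 2
    return {"within": within, "between": between, "off_target": off_target}
```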

First, using various graphs, perform a graphical analysis to understand the nature of the data under study.

Ppm or Ppk should be used as the primary performance measure. These measures provide an indication of how the process has performed. The other measures provide diagnostic capability and assist in determining improvement opportunities.

See: Ppm and Ppk Measures

Next, Pp is calculated. The difference between Ppm and Pp highlights the performance loss contributed by being off target.

See: Pp
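A worked illustration, assuming the conventional definitions of Pp and Ppm: with identical spread, a shifted mean leaves Pp unchanged while Ppm drops, and the gap between the two is the off-target loss.

```python
import math
import statistics

def pp_and_ppm(data, lsl, usl, target):
    """Return (Pp, Ppm); the gap between them reflects off-target loss."""
    xbar = statistics.mean(data)
    s = statistics.stdev(data)
    pp = (usl - lsl) / (6 * s)
    ppm = (usl - lsl) / (6 * math.sqrt(s ** 2 + (xbar - target) ** 2))
    return pp, ppm

# On target: the two measures agree.
on_target = pp_and_ppm([9, 10, 11], 4, 16, target=10)
# Shifted two units high with the same spread: Pp holds, Ppm falls.
shifted = pp_and_ppm([11, 12, 13], 4, 16, target=10)
```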

The %Off-Target measure will provide further diagnostic capability for the losses associated with being off target.

See: % Off-Target

Pp(process stream) should be generated in those situations where multiple process streams are present. The difference between this measure and Pp will highlight the performance loss resulting from differences in process streams. If no process stream differences exist, Pp will equal Pp(process stream). This measure is an average of the Pp's generated within each stream. Should significant differences in variation from stream to stream be found, further study of this condition may be desired. A number of different stratifications may be performed, such as between lines, presses, tools, or stations. If multiple stratifications are present, computing Pp at multiple levels, within and across process streams, provides the required analysis.

See: Pp(process stream)
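A sketch of the comparison, assuming Pp(process stream) is the simple average of the within-stream Pp values; the names are illustrative:

```python
import statistics

def pp_overall(data, lsl, usl):
    """Overall Pp from the combined sample standard deviation."""
    return (usl - lsl) / (6 * statistics.stdev(data))

def pp_process_stream(streams, lsl, usl):
    """Average of the Pp values computed within each stream."""
    return statistics.mean(
        [pp_overall(v, lsl, usl) for v in streams.values()])

# Two streams with identical spread but different centers:
streams = {"a": [9, 10, 11], "b": [19, 20, 21]}
within = pp_process_stream(streams, 0, 30)  # unaffected by the offset
overall = pp_overall(
    [x for v in streams.values() for x in v], 0, 30)
# overall < within: the gap reflects stream-to-stream differences
```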

The % Process Stream Difference is generated as a diagnostic tool in process stream analysis. Noting the minimum and maximum process stream means may also be useful.

See: % Process Stream Difference

Cp(potential) is an estimate of the potential of the process, should through-time stability be achieved, the process brought on target, and process stream differences eliminated. This measure is of interest because if Cp(potential) suggests that the process does not have the required potential, significant change to the process may be required.

As previously discussed, Cp(potential) may be generated using average or median dispersion statistics, or generated through one or more process potential studies.

Several factors may influence the assessment of Cp(potential). These include measurement error, sampling frequency, and stratifications.

If large measurement error exists, Cp(potential) will be reduced. For example, a 30% P/T (Precision to Tolerance) ratio will yield a maximum Cp(potential) of 3.33 should no process variation exist. A 50% P/T ratio will yield a maximum Cp(potential) of 2.00.
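The arithmetic behind these figures: with zero process variation, the observed standard deviation is entirely measurement error, so the best achievable Cp(potential) is the reciprocal of the P/T ratio (taking P/T as six times the measurement standard deviation divided by the tolerance):

```python
def max_cp_potential(pt_ratio):
    """Upper bound on Cp(potential) given only measurement error.

    With no process variation, observed sigma = measurement sigma,
    so Cp(potential) = tolerance / (6 * sigma) = 1 / (P/T).
    """
    return 1.0 / pt_ratio
```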

If long periods of time exist between samples, higher variability would be observed than if consecutive items were collected.

Finally, if nonstratified process streams exist, this will inflate the assessment of the potential variation. Should a high Cp(potential) be observed, none of these conditions would be of concern. If a low Cp(potential) is observed, further investigation may be warranted.

See: Cp(potential)

Since distributional analysis is not done with performance measures, and through-time stability may not be achieved, the use of process performance measures should be accompanied by an examination of the actual observed nonconforming proportion.

The performance measures may be displayed using Stacked Bar Charts. These charts are useful for displaying performance and making comparisons among multiple processes or characteristics.

See: Stacked Bar Charts

Finally, the Variance Components should be examined to assess contributions to variability around the target. These components may be displayed in a pie chart or stacked bar chart.

See: Variance Components