Abstract
Paper aims: This paper studies the influence of process variation on the performance of the deviation from nominal control chart and proposes adjustments to the control limits so that the chart can be used with small batches.
Originality: Specific methods, such as the deviation from nominal control chart, were developed to monitor small batches, mainly because the data available are insufficient for precise parameter estimation. However, Montgomery (2014) highlights essential aspects for their use, such as the influence of process variation on their performance.
Research method: The methods used were mathematical modeling and computer simulation.
Main findings: The results confirmed that process variation has a significant influence on control chart performance. It was also demonstrated that small adjustments to the control limits can make the chart usable in lean environments.
Implications for theory and practice: The main contribution is to demonstrate the use of the deviation from nominal control chart through the definition of valid control limits regardless of sample size.
Keywords: Control charts for short production runs, Deviation from nominal control chart, Effect of parameter estimation on control chart performance.
Research Article
Analysis of deviation from nominal control chart performance on short production runs
Received: 05 August 2021
Accepted: 13 December 2021
The control charts, originally proposed by Shewhart, were intended solely to monitor the variation of processes in order to detect the occurrence of special causes as soon as possible. Later, new versions of Shewhart charts emerged, such as Cumulative Sum (CUSUM) charts, moving average charts and acceptance charts (Woodall, 1985; Baker & Brobst, 1978; Chakraborti, 2006; Jensen et al., 2006; Castagliola et al., 2009; Yu & Liu, 2011). Such charts had as fundamental assumptions that the monitored characteristic followed a normal probability distribution and that the extracted samples were homoscedastic and independent. That is, the samples collected over time must come from a population of Independent, Identically and Normally Distributed (IIND) data (Alwan, 2007; Korzenowski & Werner, 2012; Montgomery, 2014; Gu et al., 2014).
The implementation of control charts has two phases: phase I for calculating the control limits, and phase II for monitoring the process. Many publications review the methods for both phases; for example, new ways of calculating control limits were developed for phase I to minimize false alarms, and similar refinements were proposed for phase II (Chakraborti, 2006; Jensen et al., 2006; Castagliola et al., 2009; Yang et al., 2012; Castagliola & Wu, 2012; Oprime & Ganga, 2013; McCracken & Chakraborti, 2013; Oprime & Mendes, 2017). Other examples of charts developed from Shewhart charts are multivariate charts, modified charts, non-parametric methods, charts for multiple flows, charts that simultaneously control mean and standard deviation, and joint monitoring of capability and variance (Ahmad et al., 2016; Oprime et al., 2019).
The current manufacturing scenario requires flexibility and reconfigurability of the production system to meet small batch production. Thus, according to Celano & Chakraborti (2021), problems arise such as: the need for immediate monitoring without preliminary studies of the process; an unknown distribution of the current quality characteristic due to constant process reconfigurations; and difficulty in evaluating the performance of control charts because of small batch production.
Therefore, while known to be effective in large-scale production systems, Shewhart control charts may not be an appropriate option in lean production environments. This is because the lack of process data can violate their fundamental assumptions, leading users to misinterpret the results (Hillier, 1969; Mood et al., 1974; Cullen & Bothe, 1989; Crowder, 1992; Sower et al., 1994; Kim & Schniederjans, 2000; Chakraborti, 2006; Ho & Trindade, 2009; Celano et al., 2010; Gu et al., 2014; Wiederhold et al., 2016; Aykroyd et al., 2019).
The Deviation from Nominal (DNOM) control chart is one of the specific alternatives found in the literature for monitoring small-scale production such as that found in lean manufacturing environments. According to Montgomery (2014), the DNOM control chart is easy to use and is therefore the most recommended for monitoring small batches. However, its use demands that some conditions be met.
The DNOM control chart is applied to short production runs, where the process, and not the products, is controlled (Montgomery, 2014). It does not monitor the quality characteristic of a particular product, but the difference between the measured value and the nominal measure, or target value. That is, it does not monitor a measurement directly, but the difference between two values, so it can be applied to processes and machines that produce a large variety of products in small volumes.
However, according to the literature, the DNOM control chart should be applied only in cases where the process variation, measured through the standard deviation, is the same, or close to the same, for all products manufactured on the monitored equipment (Cullen & Bothe, 1989; Crowder, 1992; Sower et al., 1994; Ho & Trindade, 2009; Celano et al., 2012; Capizzi & Masarotto, 2012).
This condition restricts the use of the DNOM control chart, and the literature does not specify an acceptable range of variability for its use, which seems to be a theoretical gap to be filled. The questions still unanswered regarding the use of deviation from nominal control charts are the following:
What magnitude of difference between the standard deviations of products that go through a particular process still allows the DNOM control chart to be used?
Assuming that there is a tolerable difference between the standard deviations of the products, could the control limits be adjusted to compensate for the effects of this difference for products manufactured in the same process?
This article aims to answer these two questions by analyzing the performance of the DNOM chart under different possible scenarios for the standard deviations of different products in the same process. Performance is evaluated by the Average Run Length (ARL), which, according to Li et al. (2014), is a measure recommended and widely used in the literature, in order to verify the influence of process variation on control chart performance and thus propose adjustments to the control limits in cases where the standard deviations differ between products.
For this purpose, some steps were established: i) calculate the ARL of a control chart constructed under ideal conditions, that is, with known mean and standard deviation; ii) determine the ARL of a control chart of the deviation from the nominal with estimated parameters, without changing the standard deviation of the products; iii) determine the ARL of the same control chart with a gradual variation of 0.5, 1, 5 and 10% in the standard deviation of the products; iv) compare the ARL obtained from each chart with the ARL of the ideal control chart; v) identify the influence of process variation on the chart's ARL; vi) propose adjustments to the value of k to calculate new control limits.
The Quality Management (QM) movement, as we know it today, began in the 1920s, when Walter Shewhart of Bell Laboratories developed a system, known as Statistical Process Control (SPC), to measure variability in production systems for the purpose of diagnosing problems. Later, during World War II, the US War Department hired Dr. W. Edwards Deming, a physicist and researcher at the US Census Bureau, to teach statistical process control to the defense industry. Quality control and statistical methods were considered critical factors in a successful war effort (Woodall, 2000; Michel & Fogliatto, 2002; Montgomery, 2014). Unfortunately, most companies in the United States stopped using these statistical tools after the war. US occupation forces in Japan invited Deming to help Japan with the postwar census. He was also invited to present lectures to business leaders on statistical process and quality control.
Quality is a subjective term that is related to the characteristics of the product or service that influence its ability to satisfy the stated or implied needs of users, or simply to products or services that are free from defects and deficiencies (Juran & Gryna, 1988). From a technical point of view, quality can be seen from several dimensions (Garvin, 1993), such as: reliability, durability, ease of maintenance and availability, aesthetics and design. From the point of view of organizational management, quality is a broad approach that involves principles, methods and techniques which, when effectively used, reduce losses, increase productivity and, consequently, the effectiveness of organizations (Shewhart, 1931; Smith, 1947; Juran & Gryna, 1988; Kanji, 1994; Kim et al., 2003; Samohyl, 2009; Ryan, 2011; Montgomery, 2014; Toledo et al., 2013; Lizarelli et al., 2016).
As for quality planning, according to Garvin (1993), it is the process of developing actions linked to organizational strategy, including key requirements, performance indicators and operational procedures that ensure standardization and compliance with project requirements. In addition, Deming (1986), supported by the works of Shewhart, introduced, in addition to his 14 points that define the guidelines for implementing QM, the PDCA cycle (Plan, Do, Check, Act) and the motivation for using statistical methods to support problem solving. The PDCA is a cyclical model for planning and implementing actions aimed at solving problems and continuously improving processes.
A modern term that defines QM is Total Quality Management (TQM), which broadens the scope of quality, with an emphasis on the organization's strategic aspects; its principles are continuous improvement, involvement of everyone in quality problems, the use of methods and techniques in problem solving, and motivational programs (Garvin, 1993). Problem-solving methods stand out in TQM, especially SPC.
SPC is a broad set of quality tools known to industrial organizations, whose purpose is to improve processes. One way to do this is to verify that processes are operating in a state of control or meet design specifications. It is also used in initial studies of machine and equipment capability, or even in the choice of a new process, to ensure that it performs better than the old one (Woodall, 1985; Baker & Brobst, 1978; Graves et al., 1999; Qin, 2003; Chakraborti, 2006; Jensen et al., 2006; Elg et al., 2008; Castagliola et al., 2009; Samohyl, 2009; Yu & Liu, 2011; Castagliola et al., 2013).
It was Shewhart and Deming who developed the first statistical tools to help correct and improve quality. Later, the Japanese began using these tools, guided by Kaoru Ishikawa, head of the Japanese Union of Scientists and Engineers (JUSE), and expanded them worldwide. For more than half a century, SPC has played a key role in controlling and improving the quality of industrial processes, initially based on Shewhart control charts. The primary issue regarding control charts is to understand the variability of a particular quality characteristic, establish process control, and promote improvement actions (Sower et al., 1994; Baker & Brobst, 1978; Graves et al., 1999; Woodall, 2000; Kim & Schniederjans, 2000; Chakraborti, 2006; Duarte & Saraiva, 2008; Ho & Trindade, 2009; Celano et al., 2013; Wiederhold et al., 2016; Aykroyd et al., 2019).
According to Woodall (1985) and Woodall & Montgomery (2014), control charts are used to distinguish between two types of variation. One is common-cause variation, which is inherent in the process and cannot be changed without changing the process itself. The other is special-cause variation, which generates disruptions with significant effects on the process and must be removed.
It is generally understood that the process is considered stable, or In Control (IC), if the successively observed chart statistics are plotted within the control limits, that is, if the process is being influenced only by common causes. However, when chart statistics are plotted outside the control limits, this may be a sign that the process is Out of Control (OC) and corrective action may be required, as suggested by Jensen et al. (2006). When a special cause is detected, the normal action is to stop the process to eliminate it. This restores stability to the process, but it also consumes financial resources, time and opportunities.
One of the limitations of Shewhart charts concerns the problem of monitoring and controlling small batches. Another issue is economic: if the purpose of a process control system is to support economic decisions about the process, the consequences of two situations must be balanced, namely taking action when it is not necessary (overreaction) versus not taking action when it is required. Balancing these consequences makes it possible to make economically viable decisions about the condition of the process. According to Woodall (1985), there is a need to react only when a cause has enough impact for its removal to be practical and economical in improving quality. The literature has investigated the development of economic models for process control (Woodall & Montgomery, 2014; García-Díaz & Aparisi, 2005).
Moreover, production systems have undergone significant changes. Large scale gave way to lean and customized production, bringing smaller and diversified production plans. According to Castillo et al. (1996), this trend is strongly related to the use of Just In Time (JIT) manufacturing techniques to reduce costs related to intermediate and finished product inventories.
From there, doubts arose about how to statistically evaluate these processes with traditional methods. Strictly speaking, Shewhart control charts were proposed for monitoring high-volume production processes. In these systems, the implementation of charts is not a big problem, as process information is always available, unlike JIT or job shop production systems, for example (Hillier, 1969; Cullen & Bothe, 1989; Crowder, 1992; Sower et al., 1994; Castillo et al., 1996; Castillo & Montgomery, 1996; Khoo et al., 2005; Celano et al., 2010; Gu et al., 2011, 2014; Wiederhold et al., 2016).
Hillier (1969) was one of the first to address this issue, proposing a procedure in two stages. The first was to establish control limits to retrospectively test whether the process was under statistical control when the reference samples were collected during phase I. For each initial subgroup, the $\bar{X}$ and R values should be plotted on the charts. If values fell outside the control limits, the subgroup should be discarded and the control limits recalculated. In the second stage, new control limits should be defined to verify that the process remains in this same stability condition. The probability of type I errors should also be considered at this stage.
Later, Quesenberry (1991) proposed the use of the Q chart to deal with the observed problems. However, the chart had the same problem as Hillier's (1969): while the occurrence of type I errors was low, trend detection was not as satisfactory as desired.
Shepardson et al. (1992) studied the inefficiencies of the Q chart and proposed the use of a control chart based on the Kalman filter, specifically when the process standard deviation was known and the mean was not. Castillo & Montgomery (1996) proposed adaptations that allowed the use of the chart in the inverse situation, that is, when the mean was known and the standard deviation of the process was not.
Wasserman (1994) proposed the use of the Exponentially Weighted Moving Average (EWMA) chart based on a first order dynamic linear model with constant variation. According to Lucas & Saccucci (1990), the interpretation and implementation of the EWMA chart was reasonably easy.
The CUSUM chart was initially presented by Page, in England, with the purpose of quickly detecting small changes in the process (Page, 1954; Barnard, 1959; Kemp, 1961; Brook & Evans, 1972; Singh & Prajapati, 2013; Celano et al., 2012; Montgomery, 2014; Abbasi & Haq, 2019).
Hawkins & Olwell (1998) proposed the use of an adapted chart called the self-starting CUSUM for a small number of subgroups. The idea of this chart was to use regular process measurements for self-tuning and maintenance. Each successive observation should be standardized using the mean and standard deviation, not from a special fitting sample, but from all observations accumulated up to the time of inspection. As the process proceeds and produces new observations, the mean and standard deviation estimates approach the true values (Hawkins & Olwell, 1998).
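As a rough illustration of this self-tuning idea, the sketch below (in Python, not part of the original work) standardizes each new observation with the mean and standard deviation of all previously accumulated observations; the full self-starting CUSUM of Hawkins & Olwell (1998) applies a further transformation to these statistics before accumulating them, and the data shown are invented for the example.

```python
# Simplified sketch of self-starting standardization: each observation is scored against
# the running estimates built from all earlier observations (no separate phase I sample).
import statistics

def self_starting_scores(observations):
    scores, history = [], []
    for x in observations:
        if len(history) >= 2:                       # need at least two points to estimate sigma
            mean = statistics.mean(history)
            std = statistics.stdev(history)
            scores.append((x - mean) / std if std > 0 else 0.0)
        history.append(x)                            # estimates update as the process runs
    return scores

print(self_starting_scores([10.1, 9.8, 10.3, 10.0, 12.5]))
```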
Unlike the Shewhart and EWMA charts, the use of the CUSUM chart was considered more complex; however, its implementation became easier with the availability of numerous statistical tools and software on the market (Kemp, 1961; Brook & Evans, 1972; Hawkins & Olwell, 1998; Castagliola & Maravelakis, 2011).
Nenes & Tagaras (2007) presented charts based on Bayes' theorem for monitoring small production batches. Celano et al. (2013) proposed the use of control charts based on the Student's t distribution as an alternative that is as efficient as traditional charts with known parameters.
This work studies the proposal of Cullen & Bothe (1989). The authors presented a chart in which the difference between the measured value of the quality characteristic and its nominal measure, or a target value, is plotted on traditional $\bar{X}$ and R control charts, and called it the target value control chart.
The DNOM control chart can be used in a process that manufactures more than one product sequentially. It represents a change from traditional SPC techniques, as it provides the means to monitor and control processes that would otherwise be considered unsuitable (Cullen & Bothe, 1989; Crowder, 1992; Farnum, 1992; Sower et al., 1994; Montgomery, 2014). Montgomery (2014) presents three important conditions for the proper use of this chart (a brief illustrative sketch of the DNOM statistics follows the list):
The variation in the process must be the same for all parts, or as close as possible;
The procedure is more efficient when the size of the samples collected is the same for all products;
The charts have an intuitive appeal when the nominal measure of the characteristic is its own target value.
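To make the mechanics concrete, the sketch below is illustrative only (the product names, nominal values and measurements are invented): each measurement is replaced by its deviation from the product's nominal value, so different products can share the same subgroup chart, and the subgroup mean and range of those deviations are the quantities plotted.

```python
# Forming DNOM subgroup statistics: deviations from each product's nominal value are
# summarized per subgroup, so products A and B can be plotted on one x-bar/R chart.
measurements = [
    ("A", 50.0, [50.12, 49.95, 50.03]),   # (product, nominal value, subgroup of measurements)
    ("B", 25.0, [24.91, 25.06, 24.98]),
    ("A", 50.0, [49.88, 50.07, 50.01]),
]

for product, nominal, subgroup in measurements:
    deviations = [x - nominal for x in subgroup]
    d_bar = sum(deviations) / len(deviations)        # plotted on the DNOM (x-bar) chart
    d_range = max(deviations) - min(deviations)      # plotted on the R chart
    print(product, round(d_bar, 3), round(d_range, 3))
```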
As with other control charts, when the statistical parameters of the process are unknown, the construction of the deviation from nominal chart is carried out in two phases. In phase I, samples of the process are used to estimate the parameters and calculate the control limits used in the next phase. In phase II, also called the monitoring phase, new samples are extracted and verified. If the observations are not located within the control limits, the process is considered out of control and a probable special cause must be identified (Grant, 1965; Castillo & Montgomery, 1996; Woodall & Montgomery, 2014; Chakraborti, 2000; Jensen et al., 2006; Chakraborti, 2006; Samohyl, 2009; Ryan, 2011; Montgomery, 2014; Jones-Farmer et al., 2014).
Psarakis et al. (2014) clarify that the good performance of the control charts depends on an accurate estimate of the parameters performed during phase I of the implementation. During this phase, it is important that special causes are eliminated so that the samples used to define the control limits for the monitoring phase are collected from a stable process.
Jensen et al. (2006) highlight that the use of control charts with estimated parameters is a potential weakness. The unavailability of data and, as a consequence, the calculation of invalid control limits can cause a significant rate of false alarms in the process, in addition to reducing the detection power of the control chart. For that reason, Chakraborti et al. (2008) reinforce the importance of defining an adequate number and size of samples during phase I so that the performance of the control chart is as close as possible to the ideal.
To simulate the use of the nominal deviation control chart in any production process and analyze its performance, mathematical models were developed in the Maple 2016 software. The first model was created to calculate the ARL of the control chart built with known parameters, or KK chart, seen in Appendix 1. The chart's ARL is calculated in the same way as the mean chart, as both are based on a Normal probability distribution.
If $\bar{X}$ is calculated from a sample of size $n$, the probability of a type I error is $\alpha = P(\bar{X} \notin (LCL, UCL))$, or $\alpha = 1 - P(\bar{X} \in (LCL, UCL))$. Considering known parameters, it is possible to say that the process is in control when (Equation 1):

$LCL \leq \bar{X} \leq UCL$ (1)

Considering an out-of-control process whose mean has shifted to $\mu_1 = \mu_0 + \delta\sigma$, with $\sigma$ known and $k$ the control limit coefficient, UCL and LCL are calculated according to Equations 2 and 3:

$UCL = \mu_0 + k\sigma/\sqrt{n}$ (2)

$LCL = \mu_0 - k\sigma/\sqrt{n}$ (3)

The probability $\beta$ that a sample mean still falls within the limits after the shift can be written according to Equations 4 to 7:

$\beta = P(LCL \leq \bar{X} \leq UCL \mid \mu = \mu_1)$ (4)

$\beta = P\left(\frac{LCL - \mu_1}{\sigma/\sqrt{n}} \leq Z \leq \frac{UCL - \mu_1}{\sigma/\sqrt{n}}\right)$ (5)

$\beta = P\left(Z \leq \frac{UCL - \mu_1}{\sigma/\sqrt{n}}\right) - P\left(Z \leq \frac{LCL - \mu_1}{\sigma/\sqrt{n}}\right)$ (6)

$\beta = P(Z \leq k - \delta\sqrt{n}) - P(Z \leq -k - \delta\sqrt{n})$ (7)

If $Z$ follows a standard Normal distribution with cumulative distribution function $\Phi$, the probability is obtained from Equation 8:

$\beta = \Phi(k - \delta\sqrt{n}) - \Phi(-k - \delta\sqrt{n})$ (8)

It can be written as Equation 9:

$1 - \beta = 1 - \Phi(k - \delta\sqrt{n}) + \Phi(-k - \delta\sqrt{n})$ (9)

Since a point signals whenever $\bar{X}$ falls outside the control limits, the Run Length (RL) is geometric with probability $1 - \beta$, which equals $\alpha$ when the process is in control ($\delta = 0$); then, the ARL of the KK control chart is calculated according to Equation 10 below:

$ARL = \frac{1}{1 - \beta}$ (10)
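A minimal numerical sketch of Equation 10 is given below, written in Python with SciPy rather than taken from the authors' Maple model of Appendix 1; it reproduces the in-control ARL of 370.40 for k = 3.00 and shows how the ARL falls as δ grows.

```python
# Sketch of Equation 10: ARL of the known-parameters (KK) chart as a function of the
# mean shift delta (in sigma units), the sample size n and the limit coefficient k.
from scipy.stats import norm

def arl_known(delta, n, k=3.0):
    beta = norm.cdf(k - delta * n ** 0.5) - norm.cdf(-k - delta * n ** 0.5)  # Equation 8
    return 1.0 / (1.0 - beta)                                                # Equation 10

for n in (5, 10, 15, 20, 25):              # sample sizes considered in Table 1
    print(n,
          round(arl_known(0.0, n), 2),     # delta = 0: 370.40 for every n
          round(arl_known(0.5, n), 2))     # detection becomes faster as delta moves from 0
```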
A model was created to simulate the ARL of a deviation from nominal control chart with both parameters estimated, the UU chart, seen in Appendix 2. If $m$ subgroups of size $n$ are extracted during phase I, $\mu$ and $\sigma$ can be estimated according to Equation 11:

$\hat{\mu} = \bar{\bar{X}} = \frac{1}{m}\sum_{i=1}^{m}\bar{X}_i$ (11)

Being (Equation 12):

$\hat{\sigma} = \frac{\bar{S}}{c_4}$ (12)

Where (Equation 13):

$\bar{S} = \frac{1}{m}\sum_{i=1}^{m}S_i$ (13)

with $S_i$ the standard deviation of the $i$-th subgroup and $c_4$ the usual bias-correction constant. Considering an in-control process, the control limits estimated with these parameters are given by Equation 14:

$\widehat{UCL} = \bar{\bar{X}} + k\frac{\hat{\sigma}}{\sqrt{n}}, \quad \widehat{LCL} = \bar{\bar{X}} - k\frac{\hat{\sigma}}{\sqrt{n}}$ (14)

It is possible to say that the process is in control when (Equation 15):

$\widehat{LCL} \leq \bar{X} \leq \widehat{UCL}$ (15)

Being $U$ the conditional probability, given the phase I estimates, that a phase II subgroup mean falls within the estimated limits (Equation 16):

$U = P\left(\widehat{LCL} \leq \bar{X} \leq \widehat{UCL} \mid \bar{\bar{X}}, \hat{\sigma}\right)$ (16)

The probability function of $U$ is $f(u)$, and since the RL conditional on the estimates is geometric with probability $1 - U$, the ARL of the UU control chart is calculated according to Equation 17:

$ARL = E\left[\frac{1}{1 - U}\right] = \int \frac{1}{1 - u}\, f(u)\, du$ (17)
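The sketch below is a simplified Monte Carlo version of this simulation, written in Python rather than taken from the authors' Maple model of Appendix 2: phase I estimates the DNOM limits from m subgroups of size n drawn from products whose standard deviations follow the values used in the study, and phase II counts subgroups until a false alarm. How products are sequenced across subgroups, and the use of the $\bar{S}/c_4$ estimator, are assumptions of the sketch.

```python
# Monte Carlo sketch of the UU chart: estimate DNOM limits in phase I, then measure the
# in-control run length (delta = 0) in phase II; repeat and average to approximate the ARL.
import numpy as np

C4 = {5: 0.9400, 10: 0.9727, 15: 0.9823, 20: 0.9869, 25: 0.9896}  # c4 constants for the n used

def simulate_arl_dnom(m, n, sigmas, k=3.0, reps=500, max_run=100_000, seed=0):
    rng = np.random.default_rng(seed)
    run_lengths = []
    for _ in range(reps):
        # Phase I: m subgroups of deviations from nominal; products are cycled in order.
        phase1 = [rng.normal(0.0, sigmas[i % len(sigmas)], n) for i in range(m)]
        grand_mean = np.mean([s.mean() for s in phase1])
        sigma_hat = np.mean([s.std(ddof=1) for s in phase1]) / C4[n]
        ucl = grand_mean + k * sigma_hat / np.sqrt(n)
        lcl = grand_mean - k * sigma_hat / np.sqrt(n)
        # Phase II: draw in-control subgroups until the subgroup mean plots outside the limits.
        for t in range(1, max_run + 1):
            d_bar = rng.normal(0.0, sigmas[t % len(sigmas)], n).mean()
            if d_bar > ucl or d_bar < lcl:
                run_lengths.append(t)
                break
        else:
            run_lengths.append(max_run)
    return float(np.mean(run_lengths))

sigmas_05 = [0.584, 0.586, 0.589, 0.592]        # 0.5% variation between products (Table 3)
print(simulate_arl_dnom(m=70, n=25, sigmas=sigmas_05))
```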
To calculate the ARL of the KK chart, the mean displacement δ was considered, ranging from 0 to 2. Table 1 shows the ARL of the chart calculated for n varying over 5, 10, 15, 20 and 25 observations.
When δ = 0, the ARL of the control chart equals 370.40, and, consequently, the probability of a type I error is 0.27%. Confirming what the literature says about the low sensitivity of control charts to small displacements, it is noted that the further δ moves away from 0, the faster an out-of-control point is detected.
Then, the ARL of the UU deviation from nominal control chart without variation in the standard deviation was simulated. It was considered k = 3.00, m ranging from 10 to 85 and n ranging from 5 to 25 observations, as shown in Table 2.
Note that the mean ARL of the control chart is greater for a greater number of observations. For example, for m = 70 and n = 25, the ARL of the chart is 364.60, close to 370.40. Fewer subgroups with smaller sample sizes yield a smaller ARL compared to the ARL of the ideal control chart.
ARL simulations of the same control chart were performed with changes in the standard deviation of the products. The purpose of the simulations is to identify the influence of process variation on chart performance. For the products, the following standard deviations were considered: 0.584, 0.586, 0.589 and 0.592, respectively, which corresponds to a variation of 0.5% between products, as shown in Table 3.
Comparing the two previous tables, it can be seen that the ARL values decrease with a variation of only 0.5% in the standard deviation of the products. For the same condition, m = 70 and n = 25, the ARL of the chart, which was 364.60, becomes 357.10. Continuing the study, the standard deviation values of the products were changed to 0.584, 0.589, 0.595 and 0.601, which corresponds to a variation of 1%. Table 4 presents the average ARL.
New simulations were performed with product standard deviation values of 0.584, 0.613, 0.643 and 0.676, which corresponds to a 5% change in the previous values. Average ARL values are shown in Table 5.
Finally, the standard deviation values have now been changed to 0.584, 0.642, 0.706 and 0.777, which corresponds to a 10% variation, as shown in Table 6.
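For reference, the standard deviation sequences listed above are consistent with compounding the stated percentage from one product to the next, starting at 0.584 and truncating to three decimal places; the short check below is an observation about the reported values, not a formula from the paper.

```python
# Regenerate the product standard deviations: 0.584 compounded by the stated percentage
# per product and truncated to three decimals, matching the values quoted in the text.
from math import floor

def sigma_sequence(base, pct, n_products=4):
    # the tiny epsilon guards against floating-point representation at exact thousandths
    return [floor(base * (1 + pct) ** i * 1000 + 1e-9) / 1000 for i in range(n_products)]

for pct in (0.005, 0.01, 0.05, 0.10):
    print(f"{pct:.1%}", sigma_sequence(0.584, pct))
# 0.5% -> [0.584, 0.586, 0.589, 0.592]   10.0% -> [0.584, 0.642, 0.706, 0.777]
```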
Observing the tables, it is noted that the ARL of the chart approaches 370.40 only when m = 85 and n = 20, considering a process without variation in the standard deviation of the products. In the subsequent simulations, with a change in the standard deviation, the average ARL of the control chart is less than 300 in most cases; that is, at least 2,000 pieces would be needed for the chart to perform similarly to the ideal control chart.
The same models were used with an adjusted value of k to calculate new control limits. Increasing the distance of the control limits from the center line reduces the probability of false alarms in the process. For example, changing the value of k to 3.15, the probability of a type I error becomes 0.16%. Table 7 presents the average ARL for the KK control chart.
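As a quick check of these figures (a sketch, assuming two-sided limits on a Normally distributed statistic with known parameters), the type I error probability implied by each value of k is 2(1 − Φ(k)), and its reciprocal is the corresponding in-control ARL.

```python
# Type I error probability and in-control ARL implied by the control-limit coefficient k,
# for two-sided limits on a Normally distributed statistic with known parameters.
from scipy.stats import norm

for k in (3.00, 3.15, 3.30):
    alpha = 2 * (1 - norm.cdf(k))
    print(f"k = {k:.2f}  alpha = {alpha:.4%}  ARL0 = {1 / alpha:.1f}")
# k = 3.00 gives roughly 0.27% and 370.4; k = 3.15 roughly 0.16%; k = 3.30 roughly 0.10%.
```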
New simulations were performed, considering initially the same standard deviation for all products and then the same variations as before. Tables 8 to 12 present the mean ARL values, obtained from a run of 10 simulations, for k = 3.15.
Finally, new simulations were performed by increasing the value of k by 10%. Table 13 presents the average ARL calculated for the KK control chart with k = 3.30.
Tables 14 to 18 present the mean ARL values, obtained from a run of 50 simulations, considering the same standard deviation initially and then the same variation in the standard deviation of the products.
For m = 25, it is noted that the ARL does not approach the ideal, even with no change in the standard deviation of the products. ARL is close to ideal for situations with process variation of up to 3% for k = 3.15. For k = 3.30, ARL is higher than 370.40 even with 10% variations in the standard deviation of the products.
The results of the simulations presented in this work showed that the performance of the DNOM control chart is compromised when it is constructed with estimated parameters, as is that of any other control chart. However, the greater the number of observations, the closer the ARL of the chart is to the ARL of the KK control chart. Therefore, it was concluded that the sampling strategy chosen to build the chart during phase I has to be carefully analyzed, as the good performance of the control chart depends on an accurate estimation of parameters or a relatively high number of samples.
The results also supported the proposition of Montgomery (2014) regarding the influence of process variation on the performance of the control chart. The chart constructed without variation in the standard deviation performed better than the others, and the greater the variation, the lower the performance observed in the simulations. Variations greater than 1% in the standard deviation of the products, for k = 3.00, make it impracticable to use the chart with smaller numbers of samples.
However, with adjustments in the value of k, it is possible to use this chart in lean environments, since ARLs close to the ideal value are observed even with variations in the standard deviation of the products. For example, considering a variation of 0.5%, for m = 25, n = 5 and k = 3.15, an average ARL of 362.30 is expected. Considering a 10% variation, for m = 10, n = 10 and k = 3.30, an ARL of 373.10 is expected.
It is concluded that, in fact, the use of the chart can be viable even for groups of different products subjected to the same manufacturing process, as the standard deviation of each product is expected to be approximately, though not necessarily exactly, the same.
The proposed chart assumes that the data are normally distributed, and this should be considered a limitation of the study. Thus, future research can evaluate the effect of non-normal data distributions on the DNOM control chart, and it is also possible to investigate the use of methods based on Bayesian statistics and fuzzy logic in this case.
In addition, as further future research, the development of a new mathematical model is suggested to simulate the ARL of the DNOM control chart, also built with estimated parameters, varying the number and size of subgroups to verify the other propositions made by Montgomery (2014). It is also possible to analyze the performance of the chart for δ ≠ 0 and when the quality characteristic has a measure with one-sided variation, or even to study the application of the charts in real situations.
*pedro@dep.ufscar.br