Question Set 3 Statistics and Process Management
Note: This class is about how Process Management (the design, implementation,
control/maintenance, and improvement of processes) helps to capitalize on the
human condition. We have established how following the principles of TQ can
help develop the correct products and processes. If it is known what the “correct
product” is for all levels of the firm, then it is possible to design the correct
processes to produce each correct product. For instance, in general, the product
of the organizational/strategic level is the correct mission, vision, and strategy
(MVS) given the firm's SWOT; the product of the tactical/process level is the
design of the processes and control of the resources used to enable the
processes that accomplish the strategy; and the product of the operational/personal
level is all the different products produced by the processes designed in the
tactical/process level. Thus, processes are designed to perform SWOT analysis
and generate the mission, vision, and strategies to reach the mission and vision.
Processes are also designed to produce the processes used to produce the
output from the operational/personal level.
We have also discussed how TQ methods contribute to process control
and improvement. Employees and suppliers who feel they are part of the solution
instead of part of the problem are more willing to make suggestions on how to
control and improve processes (Toyota, for example: 1.5 million suggestions from
employees on how to improve processes per year, with a 95% adoption rate). Furthermore,
employees and suppliers who feel the firm is supporting them will be more
motivated to utilize their skills and abilities to make sure the current processes
are done correctly.
[Diagram: Know and Practice TQ → Can Determine Correct Product → Can Design Correct Process]
Now that we understand better how to motivate the correct product and
processes, our goal is to understand how statistics help employees, suppliers,
and managers maintain/control and improve processes through the use of the
scientific method and managing by fact.
Knowledge is driven by information, information is driven by data, data is
driven by measuring things. Therefore, the understanding and use of the science
of measurement and how QM affects that science is crucial to obtaining
competitive knowledge. Here are some questions we will answer.
How would you achieve process improvement?
Setting up a process improvement project: class website > bus456 > Seven
Management and Planning Tools
Process improvement: PDC(S)A, DMAIC (see the end of this discourse)
Know your process (know the mess you have, know where you are at):
Process Flow Diagram, Value-Added Diagram, Cause-and-Effect
diagram (fishbone diagram). Gemba (place), Gembutsu (inputs), and
Genjitsu (facts/data) are the 3 Gen: go to the place of production,
understand the inputs and what the facts are (seven sins of
memory: absent-mindedness, transience, blocking, misattribution,
suggestibility, bias, persistence).
Who-based leadership can be observed making excuses or making changes in
personnel, while why-based leadership can be observed exploring the reasons for
process failures, based on fact. It takes a great deal of courage and humility to
manage by fact. It is almost religious.
Read more: Lean Manufacturing Blog, Kaizen Articles and Advice | Gemba Panta Rei
Measure your process (see Statistics) (know where you are at)
Identify problems (know where you are at) 5 whys
Identify solutions (know where you want to go) brainstorming, affinity
diagrams, matrix diagram
Identify how to achieve solutions and implementations
Simulations and prototypes (a universal solution is mistake proofing
(poka-yoke))
Decide if the solutions and implementation are worth it (NPV with all cash
flows from effects on all stakeholders accounted for; see the sketch after this list)
Implement
Start over (feedback)
Different formulas (Deming's Plan-Do-Study-Act, Juran's breakthrough
sequence, creative problem solving, FADE (pp. 639 or so))
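As a minimal sketch of the NPV check mentioned in the list above (the discount rate and cash flows below are purely hypothetical; any effects on stakeholders would have to be quantified first):

# Minimal sketch: is a proposed process improvement worth it?
# All figures (discount rate, cash flows) are hypothetical examples.

def npv(rate, cash_flows):
    """Net present value; cash_flows[0] occurs at time 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Year 0: cost of the improvement project (negative).
# Years 1-5: net benefits to all stakeholders (savings, quality gains, etc.).
cash_flows = [-50_000, 12_000, 15_000, 15_000, 15_000, 15_000]
rate = 0.10  # assumed cost of capital

value = npv(rate, cash_flows)
print(f"NPV = {value:,.2f}")
print("Implement" if value > 0 else "Do not implement")  # positive NPV -> worth it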
What are statistics?
Statistics is the science of collecting, organizing, describing, analyzing,
interpreting, presenting, and using data.
Statistics works because (it is the eyes of quality management and a tool of the
management of quality):
All work occurs in a system of interconnected processes
Variation exists in all processes (including measurement process)
Understanding and reducing dysfunctional variation and increasing
strategic variation are keys to success
Types of variation
Common variation: variation during production of a product that is
inherent to the process and to the process's interaction with the environment.
Accounts for 80 to 99.999999% of dysfunctional process variance.
Special/assignable dysfunctional variation: a result not inherent to the process
Strategic variability: mass customization within and across products due
to market demand. This variability will create more dysfunctional variance
due to setup and non-standardization.
Stable System: only common-cause variation is evident; outcomes can be predicted with a
stable system (both production and demand), and output can be maximized. The greater the
variation, the harder it is to find root causes of variation or to detect an assignable
cause, because the number of causal variables (and thus the number of possible
interactions) that drive variation increases.
• When can variation in a process be predicted? When it is common cause.
• Where does most variation in a process come from? The inherent nature
due to the design and execution of that design.
• Are the numerical measures that many rewards and punishments are based on
meaningless? Yes, results are usually due to the process, not effort; thus you
are rewarding or punishing the process, not the worker. In addition, if the
worker does change the process, is the result any different statistically?
• Who is responsible for processes and, thus, the system? Those with the
resources who say 'this is what the process will be'.
• Are statistical principles and tools only for the production floor? Absolutely
not; the biggest share of a firm's processes are non-production.
• Is a stable process a good process? It depends.
• Is a process that has special-cause problems a bad process? It depends.
Two types of errors management makes: A Type I error translates into assuming
that variation is due to a special cause when it is due to a common cause, and
making adjustments or taking corrective actions; the result will be production further
from target and, depending on the business rule of what to do when a process is
out of control, the expense of finding a non-existent error and perhaps shutting
down production while the cause of the error is sought. A Type II error translates
into assuming that variation is due to a common cause when it is due to a
special cause, and not adjusting the system; the result will be production further from
target and a missed opportunity to fix the system within the bounds of the current
process.
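The cost of the Type I error can be illustrated with a small simulation in the spirit of Deming's funnel experiment; the target, spread, and adjustment rule below are assumed for illustration and are not from the text:

import random

random.seed(1)
TARGET, SIGMA, N = 10.0, 1.0, 10_000   # assumed target and common-cause spread

# Rule 1: never adjust (treat all variation as common cause).
untampered = [random.gauss(TARGET, SIGMA) for _ in range(N)]

# Rule 2: after each unit, shift the process setting by the last deviation
# from target (i.e., treat common-cause variation as a special cause).
setting, tampered = TARGET, []
for _ in range(N):
    x = random.gauss(setting, SIGMA)
    tampered.append(x)
    setting -= (x - TARGET)            # "corrective" adjustment

def sd(xs):
    m = sum(xs) / len(xs)
    return (sum((v - m) ** 2 for v in xs) / (len(xs) - 1)) ** 0.5

print("SD without tampering:", round(sd(untampered), 3))   # close to 1.0
print("SD with tampering:   ", round(sd(tampered), 3))     # close to 1.4 (worse)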
Sample space: all possible outcomes of an experiment
Frame: given subset of population
Population
Discrete random variable: only whole numbers
Continuous random variable: any value
Probability Distribution: the distribution of a random variable (continuous variables go to
infinity without reaching a boundary, constrained variables reach bounds, and discrete
variables take only whole numbers)
Binomial Distribution (discrete and constrained)
Uniform Distribution (constrained and can be discrete)
Normal Distribution (continuous)
Triangular Distribution (constrained; often used when only the min,
mode, and max are known)
Poisson Distribution (discrete)
Exponential Distribution (continuous)
Erlang Distribution (continuous)
And many more
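One quick way to get a feel for these distributions is to draw samples from a few of them and summarize each; the sketch below assumes numpy is available and uses made-up parameters:

import numpy as np

rng = np.random.default_rng(0)
n = 10_000

samples = {
    "binomial(n=20, p=0.3)": rng.binomial(20, 0.3, n),      # discrete, bounded 0..20
    "uniform(0, 10)":        rng.uniform(0, 10, n),         # bounded
    "normal(5, 2)":          rng.normal(5, 2, n),           # continuous, unbounded
    "triangular(1, 4, 9)":   rng.triangular(1, 4, 9, n),    # min, mode, max
    "poisson(lam=3)":        rng.poisson(3, n),             # discrete counts >= 0
    "exponential(scale=2)":  rng.exponential(2, n),         # continuous, >= 0
}

for name, x in samples.items():
    print(f"{name:24s} mean = {x.mean():6.2f}   sd = {x.std(ddof=1):5.2f}")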
Central limit theorem (CLT): Means of samples from a population will be normally
distributed even if the population is not normally distributed, provided the sample
size is large enough (about 30). The larger the sample size, the tighter the
distribution (SD of x-bar = population SD / square root of sample size). Thus,
x-bar approaches mu as the size of each sample grows; 30 samples of 30 is safe.
A confidence interval says that there is the chosen level of confidence that the
interval will contain the true population mean (of 100 samples, each with a
different mean and the same margin of error, 90 would contain the true mean if the
confidence level were 90%). When the SD of the population is known, the confidence
interval is xbar +/- (z of alpha/2)(population SD/sqrt n); when the SD is unknown,
the CI is xbar +/- (t of alpha/2, n-1)(s/sqrt n).
Sampling (collecting) is the basis of statistics. READ HUMPHREY'S
DISSERTATION ON SAMPLING ERROR. What does it mean to say that the
margin of error is ???? You have to know the confidence level, for one thing;
it is about 19% off if measured at a 90% CL and reported as 95%.
A good sample is the least expensive one that still tells the story.
Sample error: error inherent to the sample, where those sampled are not
representative of the population by chance; prevent it by having larger
samples and being sure that samples are truly random.
Systematic error: a problem with the sampling process (ignoring trends,
assuming causation, faulty sampling techniques, biases in those
conducting the experiment, or in some of those being sampled, e.g., putting
all 5's on a Likert scale); prevent it with the design of the experiment and
calibration of the measuring instrument (survey, observation techniques, machine).
How large a sample is needed depends on the variance within the population and
the 'narrowness' of the confidence interval associated with the confidence level
needed to make a decision.
Sample size calculation:
1) Parameter needed (proportion or mean)
2) Confidence level
3) Bound of the error of estimation (confidence interval)
n = (Z of alpha/2)^2 (SD)^2 / E^2
n = (Z of alpha/2)^2 p(1 - p) / E^2
The first is for variables data, the second is for attribute (proportion) data.
E = error: the absolute allowable difference between the point estimate and the
true parameter for a given confidence level and population variation
[xbar - mu in the equation Z = (xbar - mu)/(SD/(n^.5)) or t = (xbar -
mu)/(s/(n^.5))]; we do not want the error to be any greater, or we want
a sample size large enough that the confidence interval has a given
chance of containing the desired parameter.
Alpha = 1 - .95 = .05 (95% confidence interval); alpha/2 = .025; .5 - .025 =
.4750, which gives Z = 1.96. This indicates a two-tailed test, so .05/2 is in each tail.
If we want an error of .07 inches and a confidence level of 95% for variables
data with an SD of .9 inches (note: E and SD have to be in the same units):
n = ((1.96^2) * (.9^2))/(.07^2) ≈ 635. Note: the smaller the SD, the smaller the
sample size needed. Say the x-bar of the sample were 30 inches; then we
could say that we are 95% sure that the interval between 29.93 and 30.07
contains mu. We often do not know the standard deviation, so find the
range and divide by 4 or 5 for an approximation of the standard deviation.
If we want an error of 2 percent and a confidence level of 95% when we want to
know what proportion of the population is one way or another:
n = ((1.96^2) * (.5 * (1 - .5)))/(.02^2) = 2401. If the sample showed that 60%
were one way, we would know that the proportion of the population that is that
way would have a 95% chance of being contained within the interval between 58%
and 62%.
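The two worked examples above can be reproduced with a short sketch of the sample-size formulas (z = 1.96 and the error bounds are the ones used in the text):

Z_95 = 1.96  # z for a 95% confidence level (alpha/2 = .025)

def n_for_mean(z, sd, E):
    """Sample size for estimating a mean (variables data)."""
    return (z ** 2) * (sd ** 2) / (E ** 2)

def n_for_proportion(z, p, E):
    """Sample size for estimating a proportion (attribute data)."""
    return (z ** 2) * p * (1 - p) / (E ** 2)

print(round(n_for_mean(Z_95, sd=0.9, E=0.07)))       # about 635, as in the text
print(round(n_for_proportion(Z_95, p=0.5, E=0.02)))  # 2401, as in the text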
Organizing/presenting data to make it into information and from there,
knowledge: frequency distributions, histograms, Pareto Diagram, scatter
plots/diagrams (correlation and regression), graphs, run charts, control charts,
tables, check sheets, databases
Design of Experiments (Ch 10, pp. 506-510), ANOVA/MANOVA (pp. 510-512),
Regression & Correlation (pp. 512-513), Reliability (pp. 607-623)
Descriptive statistics (pp. 496-): range, standard deviation, variance, mean,
median, proportions (central tendencies), and mode (value that occurs
most often)
Range = max - min
Standard deviation = square root of (the sum of the squared differences between
the mean and each value, all divided by N - 1)
Variance = the sum of the squared differences divided by N - 1
Mean = the average; affected by outliers
Median = the value in the middle of the sorted set of numbers (not
affected by outliers; no more than half will be greater, no more than half
will be less)
Mode = the value that occurs most often
Proportion = the fraction of measures alike, or the fraction of items with a similar
trait
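A short sketch using Python's standard statistics module computes these measures; the data values are made up for illustration:

import statistics

data = [4.9, 5.1, 5.0, 5.2, 4.8, 5.0, 5.3, 5.0, 4.7, 6.1]  # hypothetical measurements

print("range    ", round(max(data) - min(data), 2))
print("mean     ", statistics.mean(data))            # pulled up by the 6.1 outlier
print("median   ", statistics.median(data))          # robust to the outlier
print("mode     ", statistics.mode(data))            # most frequent value (5.0)
print("stdev    ", statistics.stdev(data))           # sample SD (divides by N - 1)
print("variance ", statistics.variance(data))        # sample variance (N - 1)

# Proportion: fraction of items sharing a trait, e.g., falling within a 4.8-5.2 spec.
in_spec = sum(4.8 <= x <= 5.2 for x in data) / len(data)
print("proportion in spec", in_spec)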
Statistical inference: DOE (design of experiments), hypothesis testing, ANOVA,
MANOVA; drawing conclusions about unknown characteristics of a population
from the data collected (what is the population mean, what is the population
variation, what is the probability of a change in the population, what is the
probability the sample is not correct…)
Predictive statistics: from what we know, what will be the next value; regression,
correlation
Using:
Prediction: regression and correlation
Inference: confidence intervals that a parameter will be in a given area,
hypothesis testing, and experimental design. Design of Experiments:
comparison of two or more methods to produce an outcome, or understanding
the relationship among variables, including the outcome variable
(dependent variable). Hypothesis Testing: what is the correct story
(inference) pertaining to two contrasting propositions (hypotheses) about a
population parameter, assuming one proposition is true in the absence of
contradictory data. The test population has to be stable, not trending over the
period of time sampled (trend: analytic study; stable: enumerative study).
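As a rough illustration of hypothesis testing (assuming scipy is available; the target and measurements are invented), a one-sample t-test contrasts "the process mean equals the target" with "it does not":

from scipy import stats

target = 30.0                                     # H0: mu = 30 (hypothetical target)
sample = [30.1, 29.8, 30.4, 29.9, 30.2, 30.6, 29.7, 30.3, 30.5, 30.0,
          30.2, 29.9, 30.4, 30.1, 30.3]           # hypothetical measurements

t_stat, p_value = stats.ttest_1samp(sample, popmean=target)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")

alpha = 0.05
if p_value < alpha:
    print("Reject H0: the process mean appears to differ from the target.")
else:
    print("Fail to reject H0: no evidence the mean differs from the target.")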
There are two types of studies:
Enumerative/descriptive study: parameters of the frame stay the same across
time, so current parameters can be used to predict the parameters of the future
frame (the process is in control).
Analytic/comparative study: parameters of the frame change over time, so
parameters of the current frame cannot predict parameters of a future frame;
thus, hypothesis testing does not work.
Frame: current sample space
Population: consisting of many frames?
What are control charts? Deming saw them as analytic studies, since there is
no guarantee that everything about the process is going to stay the same.
Therefore, he did not like statements about the probability of Type I errors,
but he is seen as misguided here, as control charts do give us information
about the future and detect change in the process relative to the first
frame. Most practitioners do just fine acting as though production studies
are enumerative. However, when the population parameters (mean, SD)
change, new control chart parameters (mean, UCL, LCL) need to be
calculated.
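A minimal sketch of x-bar control limits follows; it uses 3-sigma limits computed directly from the subgroup means (textbook x-bar charts usually use range-based constants such as A2 instead), assumes numpy, and the subgroup data are simulated:

import numpy as np

rng = np.random.default_rng(0)
subgroups = rng.normal(10.0, 0.5, size=(25, 5))   # 25 subgroups of 5 (hypothetical)

xbars = subgroups.mean(axis=1)                    # subgroup means
center = xbars.mean()                             # grand mean
sigma_xbar = xbars.std(ddof=1)                    # simple estimate of the SD of x-bar

UCL = center + 3 * sigma_xbar
LCL = center - 3 * sigma_xbar
print(f"center = {center:.3f}, UCL = {UCL:.3f}, LCL = {LCL:.3f}")

# Points beyond the limits are candidate special/assignable causes.
for i, xb in enumerate(xbars):
    if not LCL <= xb <= UCL:
        print(f"subgroup {i}: x-bar = {xb:.3f} is outside the control limits")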
Regression: used to determine relationships between a dependent
variable and one or more causal/independent variables; the relationship has to
be linear (see the sketch after these definitions).
Correlation: the degree to which there is a linear relationship between
variables.
Factorial Experiment: study of main effects and interaction effects.
ANOVA (analysis of variance): do the means of different populations differ? One
can tell by comparing the variance within a group vs. across groups.
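As referenced above, here is a brief sketch of simple linear regression and correlation, assuming numpy; the x/y data are invented purely to show the mechanics:

import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])     # independent variable
y = np.array([2.1, 4.3, 5.9, 8.2, 9.8, 12.1, 14.2, 15.9])  # dependent variable

slope, intercept = np.polyfit(x, y, deg=1)                  # least-squares fit
r = np.corrcoef(x, y)[0, 1]                                 # correlation coefficient

print(f"fitted line: y = {slope:.2f}*x + {intercept:.2f}")
print(f"r = {r:.3f}, r squared = {r ** 2:.3f}")             # strength of linear relationship
print("predicted y at x = 9:", round(slope * 9 + intercept, 2))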
PDSA OR PDCA & DMAIC
PLAN
Establish the objectives and processes necessary to deliver results in accordance
with the expected output. By making the expected output the focus, it differs from
other techniques in that the completeness and accuracy of the specification is also
part of the improvement.
DO
Implement the new processes, often on a small scale if possible.
CHECK/Study
Measure the new processes and compare the results against the expected results to
ascertain any differences.
ACT
Analyze the differences to determine their cause. Each will be part of one
or more of the P-D-C-A steps. Determine where to apply changes that will include
improvement. When a pass through these four steps does not result in the need to
improve, refine the scope to which PDCA is applied until there is a plan that
involves improvement.
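As a rough illustration only (the simulated process, measurement, and stopping rule are invented), the iterative nature of the cycle can be sketched as a loop:

import random

random.seed(0)
TARGET = 10.0                            # PLAN: the expected output

def run_process(setting, n=50):
    """DO: run the (simulated) process on a small scale and collect measurements."""
    return [random.gauss(setting, 0.5) for _ in range(n)]

setting = 8.0                            # initial, off-target process setting
for cycle in range(1, 6):                # iterate the cycle; repetition is the point
    data = run_process(setting)                       # DO
    mean = sum(data) / len(data)
    gap = TARGET - mean                               # CHECK/STUDY: compare to the plan
    print(f"cycle {cycle}: mean = {mean:.2f}, gap = {gap:+.2f}")
    if abs(gap) < 0.1:                                # close enough; refine scope or stop
        break
    setting += gap                                    # ACT: change the process and repeat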
From http://en.wikipedia.org/wiki/PDCA:
PDCA was made popular by Dr. W. Edwards Deming, who is considered by many to be
the father of modern quality control; however it was always referred to by him as the
"Shewhart cycle". Later in Deming's career, he modified PDCA to "Plan, Do, Study, Act"
(PDSA) so as to better describe his recommendations.
The concept of PDCA is based on the scientific method, as developed from the work of
Francis Bacon (Novum Organum, 1620). The scientific method can be written as
"hypothesis" - "experiment" - "evaluation" or plan, do, and check. Shewhart described
manufacture under "control" - under statistical control - as a three step process of
specification, production, and inspection.[1] He also specifically related this to the
scientific method of hypothesis, experiment, and evaluation. Shewhart says that the
statistician "must help to change the demand [for goods] by showing...how to close up the
tolerance range and to improve the quality of goods".[2] Clearly, Shewhart intended the
analyst to take action based on the conclusions of the evaluation. According to Deming,
during his lectures in Japan in the early 1950s, the Japanese participants shortened the
steps to the now traditional plan, do, check, act.[3] Deming preferred plan, do, study, act
because "study" has connotations in English closer to Shewhart's intent than
"check".[citation needed]
A fundamental principle of the scientific method and PDSA is iteration - once a
hypothesis is confirmed (or negated), executing the cycle again will extend the
knowledge further. Repeating the PDSA cycle can bring us closer to the goal, usually a
perfect operation and output.
In Six Sigma programs, the PDSA cycle is called "define, measure, analyze, improve,
control" (DMAIC). The iterative nature of the cycle must be explicitly added to the
DMAIC procedure.
PDSA should be repeatedly implemented in spirals of increasing knowledge of the
system that converge on the ultimate goal, each cycle closer than the previous. One can
envision an open coil spring, with each loop being one cycle of the scientific method
(PDSA), and each complete cycle indicating an increase in our knowledge of the system
under study. This approach is based on the belief that our knowledge and skills are
limited, but improving. Especially at the start of a project, key information may not be
known; the PDSA - scientific method - provides feedback to justify our guesses
(hypotheses) and increase our knowledge. Rather than enter "analysis paralysis" to get it
perfect the first time, it is better to be approximately right than exactly wrong. With the
improved knowledge, we may choose to refine or alter the goal (ideal state). Certainly,
the PDSA approach can bring us closer to whatever goal we choose.
Rate of change, that is, rate of improvement, is a key competitive factor in today's world.
PDSA allows for major 'jumps' in performance ('breakthroughs' often desired in a
Western approach), as well as Kaizen (frequent small improvements associated with an
Eastern approach). In the United States a PDSA approach is usually associated with a
sizable project involving numerous people's time, and thus managers want to see large
'breakthrough' improvements to justify the effort expended. However, the scientific
method and PDSA apply to all sorts of projects and improvement activities.
The power of Deming's concept lies in its apparent simplicity. The concept of feedback in
the scientific method, in the abstract sense, is today firmly rooted in education. While
apparently easy to understand, it is often difficult to accomplish on an on-going basis due
to the intellectual difficulty of judging one's proposals (hypotheses) on the basis of
measured results. Many people have an emotional fear of being shown "wrong", even by
objective measurements. To avoid such comparisons, we may instead cite complacency,
distractions, loss of focus, lack of commitment, re-assigned priorities, lack of resources,
etc.