SOFTWARE PROJECT MONITORING AND CONTROL
QUALITATIVE AND QUANTITATIVE DATA
Software project managers need both qualitative and
quantitative data to make decisions and to control software
projects, so that control can be exerted whenever there are
deviations from the plan.
Control actions may include:
Extending the schedule
Adding more resources
Using superior resources
Improving software processes
Reducing scope (product requirements)
MEASURES AND SOFTWARE METRICS
Measurements enable managers to gain insight for
objective project evaluation.
If we do not measure, judgments and decision making can
be based only on our intuition and subjective evaluation.
A measure provides a quantitative indication of the extent,
amount, dimension, capacity, or size of some attribute of a
product or a process.
IEEE defines a software metric as “a quantitative measure
of the degree to which a system, component, or process
possesses a given attribute”.
MEASURES AND SOFTWARE METRICS
Engineering is a quantitative discipline, and direct measures such
as voltage, mass, velocity, or temperature are routinely taken.
But unlike other engineering disciplines, software engineering is
not grounded in the basic laws of physics.
Some members of the software community argue that software is not
measurable. There will always be qualitative assessments, but
project managers need software metrics to gain insight and control.
“Just as temperature measurement began with an index
finger...and grew to sophisticated scales, tools, and techniques, so
too is software measurement maturing”.
DIRECT AND INDIRECT MEASURES
A direct measure is obtained by applying measurement
rules directly to the phenomenon of interest.
For example, by using specified counting rules, a
software program’s “Lines of Code” can be measured
directly. http://sunset.usc.edu/research/CODECOUNT/
An indirect measure is obtained by combining direct
measures.
For example, number of “Function Points” is an indirect
measure determined by counting a system’s inputs,
outputs, queries, files, and interfaces.
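As an illustrative sketch (not from the slides), the direct counts can be combined into an unadjusted Function Point total as a weighted sum; the average-complexity weights below are the commonly used IFPUG values and are an assumption, since the slides do not give them.

```python
# Hypothetical sketch: unadjusted Function Point (FP) count as an indirect measure.
# The average-complexity weights are standard IFPUG values, assumed here because
# the slides do not specify them.
WEIGHTS = {
    "inputs": 4,      # external inputs (EI)
    "outputs": 5,     # external outputs (EO)
    "queries": 4,     # external inquiries (EQ)
    "files": 10,      # internal logical files (ILF)
    "interfaces": 7,  # external interface files (EIF)
}

def unadjusted_fp(counts: dict) -> int:
    """Combine the direct counts into the indirect Function Point measure."""
    return sum(WEIGHTS[kind] * n for kind, n in counts.items())

# Example with hypothetical counts for a small system.
print(unadjusted_fp({"inputs": 6, "outputs": 4, "queries": 3, "files": 2, "interfaces": 1}))
# -> 6*4 + 4*5 + 3*4 + 2*10 + 1*7 = 83
```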
SOFTWARE METRICS TYPES
Product metrics, also called predictor metrics, are
measures of the software product and are mainly used to
determine the quality of the product, such as
performance.
Process metrics, also called control metrics, are
measures of the software process and are mainly used to
determine the efficiency and effectiveness of the process,
such as defects discovered during unit testing. They are
used for Software Process Improvement (SPI).
Project metrics are measures of effort, cost, schedule,
and risk. They are used to assess status of a project
and track risks.
RISK TRIGGERS
As the project proceeds, risk monitoring activities
commence.
The project manager monitors factors that may provide an
indication of whether a risk is becoming more or less
likely.
For example, if a risk of “staff turnover” is identified, the
general attitude of team members, how they get along,
interpersonal relationships, and potential problems with
compensation and benefits should be monitored.
Also, the effectiveness of risk mitigation strategies should
be monitored.
If mitigation plans fail, contingency plans should be
executed.
SOFTWARE METRICS USAGE
Product, Process, Project Metrics
Project Time
Project Cost
Product Scope/Quality
Risk Management
Future Project Estimation
Software Process Improvement
Metrics Repository
PROJECT MANAGEMENT LIFE CYCLE
MONITORING AND CONTROLLING
The Monitoring and Controlling process group consists of those
processes required to track, review, and orchestrate the
progress and performance of the project.
KNOWLEDGE AREAS
SCOPE CONTROL
Monitoring project and product scope and
managing changes to the scope baseline.
Scope is controlled by using traceability
management techniques.
TRACEABILITY MANAGEMENT
• An item is traceable if we can fully figure out
– WHERE it comes from, WHY it is there
– WHAT it will be used for, HOW it will be used
• The objective of traceability is to assess the impact of proposed
changes
• For example, how does an item such as the SDD document get
affected if we change a Use Case in the SRS document?
• We have to have a traceability link between the SRS and the SDD.
TRACEABILITY MANAGEMENT
• Traceability links need to be identified, recorded, and retrievable
• Bidirectional: links should be navigable from ...
– source to target (forward traceability)
– target to source (backward traceability)
• Within the same phase (horizontal) or across phases (vertical)
[Figure: traceability links, forward/backward and horizontal/vertical, among objectives, domain concepts, requirements, and assumptions; architectural components & connectors; source code; test data; and the user manual]
TRACEABILITY MANAGEMENT
• Backward traceability
– Why is this here? (and recursively)
– Where does it come from? (and recursively)
• Forward traceability

– Where is this taken into account? (and recursively)
– What are the implications of this? (and recursively)
Localize & assess impact of changes along
horizontal/vertical links
TRACEABILITY MATRIX
• Matrix representation of single-relation traceability graph
– e.g. Dependency graph
Traceable item   T1  T2  T3  T4  T5
T1                0   0   1   0   0
T2                1   0   0   0   0
T3                0   1   0   1   0
T4                1   0   0   0   0
T5                0   1   1   1   0
[Figure: the corresponding dependency graph over nodes T1-T5]
across Ti's row: forward retrieval of elements depending on Ti
down Ti's column: backward retrieval of elements which Ti depends on
Advantages: forward and backward navigation; simple forms of analysis, e.g. the cycle T1 -> T4 -> T3 -> T1 can be detected
Drawbacks: unmanageable and error-prone for large graphs; captures a single relation only
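Since the slide walks through forward retrieval (across a row), backward retrieval (down a column), and cycle detection, here is a minimal Python sketch of those operations over the matrix above; the matrix orientation follows the reconstruction above and the function names are mine.

```python
# Minimal sketch (assumption: rows and columns are ordered T1..T5 as in the
# matrix above; a 1 in row i, column j means Tj depends on Ti).
items = ["T1", "T2", "T3", "T4", "T5"]
M = [
    [0, 0, 1, 0, 0],  # T1
    [1, 0, 0, 0, 0],  # T2
    [0, 1, 0, 1, 0],  # T3
    [1, 0, 0, 0, 0],  # T4
    [0, 1, 1, 1, 0],  # T5
]

def forward(i):
    """Across Ti's row: elements that depend on Ti."""
    return [items[j] for j in range(len(items)) if M[i][j]]

def backward(j):
    """Down Ti's column: elements that Ti depends on."""
    return [items[i] for i in range(len(items)) if M[i][j]]

def has_cycle():
    """Detect a dependency cycle with a depth-first search over 'depends on' edges."""
    WHITE, GREY, BLACK = 0, 1, 2
    color = [WHITE] * len(items)

    def dfs(j):
        color[j] = GREY
        for i in range(len(items)):
            if M[i][j]:  # Tj depends on Ti
                if color[i] == GREY or (color[i] == WHITE and dfs(i)):
                    return True
        color[j] = BLACK
        return False

    return any(color[j] == WHITE and dfs(j) for j in range(len(items)))

print(forward(0))   # elements depending on T1 -> ['T3']
print(backward(0))  # elements T1 depends on   -> ['T2', 'T4']
print(has_cycle())  # True (the cycle T1 -> T4 -> T3 -> T1)
```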
SCHEDULE AND COST CONTROL
Monitoring project activities and taking corrective
actions, such as project crashing, if there are any
deviations from the plan.
BINARY TRACKING
Binary tracking requires that progress on a
work package, change requests, and problem
reports be counted as:
 0% complete until the associated work
products pass their acceptance criteria
 100% complete when the work products
pass their acceptance criteria
BINARY TRACKING
Assume a 20,000 LOC system (estimated), with development
metrics:
270 of 300 requirements designed: 90%
750 of 1000 modules reviewed: 75%
500 of 1000 modules through CUT: 50%
200 of 1000 modules integrated: 20%
43 of 300 requirements tested: 14%
CUT: Code and Unit Test
These numbers are obtained using binary tracking of work
packages
BINARY TRACKING
Also assume our typical distribution of effort is:
• Arch. Design: 17 %
• Detailed Design: 26 %
• Code & Unit Test: 35 %
• Integration Test: 10 %
• Acceptance Test: 12 %
Percent complete is therefore:
90(.17)+75(.26)+50(.35)+20(.10)+14(.12)
= 56% complete
BINARY TRACKING
Project is 56% complete; 44% remains
Effort to date is 75 staff-months
Estimated effort to complete is therefore:
(44 / 56) * 75 = 60 staff-months
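A minimal Python sketch of the roll-up above; all figures come from the slides, while the mapping of each binary-tracking figure to a phase of the effort distribution is an assumption based on the order in which they are listed.

```python
# Minimal sketch of the binary-tracking roll-up (figures from the slides).
progress = {                       # percent complete per phase, from binary tracking
    "architectural design": 90,    # 270 of 300 requirements designed
    "detailed design": 75,         # 750 of 1000 modules reviewed
    "code & unit test": 50,        # 500 of 1000 modules through CUT
    "integration test": 20,        # 200 of 1000 modules integrated
    "acceptance test": 14,         # 43 of 300 requirements tested
}
effort_share = {                   # typical distribution of effort
    "architectural design": 0.17,
    "detailed design": 0.26,
    "code & unit test": 0.35,
    "integration test": 0.10,
    "acceptance test": 0.12,
}

percent_complete = sum(progress[p] * effort_share[p] for p in progress)
print(round(percent_complete))     # -> 56

effort_to_date = 75                # staff-months spent so far
remaining = 100 - percent_complete
print(round(remaining / percent_complete * effort_to_date))
# -> 59 staff-months, which the slides round to 60
```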
EARNED VALUE MANAGEMENT
EVM compares PLANNED work to COMPLETED
work to determine if work accomplished, cost, and
schedule are progressing as planned.
The amount of work actually completed and resources
actually consumed at a certain point in a project
TO
The amount of work planned (budgeted) to be
completed and resources planned to be consumed at
that same point in the project
EARNED VALUE MANAGEMENT
Budgeted Cost of Work Scheduled (BCWS): The cost
of the work scheduled or planned to be completed in a
certain time period per the plan. This is also called the
PLANNED VALUE.
Budgeted Cost of Work Performed (BCWP): The
budgeted cost of the work done up to a defined point in
the project. This is called the EARNED VALUE.
Actual Cost of Work Performed (ACWP): The actual
cost of work up to a defined point in the project.
EARNED VALUE MANAGEMENT
Schedule Variance:
SV = BCWP – BCWS
Schedule Performance Index:
SPI = BCWP / BCWS
Cost Variance:
CV = BCWP - ACWP
Cost Performance Index:
CPI = BCWP / ACWP
EARNED VALUE MANAGEMENT
SV, CV = 0 Project On Budget and Schedule
SV, CV < 0 Over Budget and Behind Schedule
SV, CV > 0 Under Budget and Ahead of Schedule
CPI, SPI = 1 Project On Budget and Schedule
CPI, SPI < 1 Over Budget and Behind Schedule
CPI, SPI > 1 Under Budget and Ahead of Schedule
EARNED VALUE MANAGEMENT
Project Description:
We are supposed to build 10 units of equipment
We are supposed to complete the project within 6
weeks
We estimated 600 man-hours to complete 10 units
It costs us $10/hour to build the equipment
Our Plan:
We are supposed to build 1.67 units each week
Each unit costs $600
We will spend $1,000 each week
EARNED VALUE MANAGEMENT
Project status:
Week 3
4 units of equipment completed
400 man-hours spent
How are we doing?
Are we ahead or behind schedule?
Are we under or over budget?
Results:
Accomplished Work: 4/10 = %40 complete
Schedule: 3/6 = %50 over
Budget: 400/600 = %67 spent
EARNED VALUE MANAGEMENT
BCWS=(600 man-hours*$10/hour)*(3/6 weeks) = $3000
BCWP=(600 man-hours*$10/hour)*(4/10 units) = $2400
ACWP=400 man-hours*$10/hour = $4000
The price of the job that we have done is only $2400 (4
units)
Schedule: in 3 weeks, the price of the job that we should
have done was $3000
Cost: We spent much more; we spent $4000
EARNED VALUE MANAGEMENT
SV = BCWP – BCWS = $2400 - $3000 = -$600
SV is negative; we are behind schedule
CV = BCWP – ACWP = $2400 - $4000 = -$1600
CV is negative; we are over budget
SPI = BCWP / BCWS = $2400 / $3000 = 0.8
SPI is less than 1; we are behind schedule
CPI = BCWP / ACWP = $2400 / $4000 = 0.6
CPI is less than 1; we are over budget
EARNED VALUE MANAGEMENT
Earned Value analysis results are used to predict the future
performance of the project
Budget At Completion (BAC) = The total budget (PV or
BCWS) at the end of the project. If a project has Management
Reserve (MR), it is typically added to the BAC.
Amount expended to date (AC)
Estimated cost To Complete (ETC)
ETC = (BAC – EV) / CPI
Estimated cost At Completion (EAC)
EAC = ETC + AC
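Here is a minimal Python sketch of the worked example above using the slides' formulas; the variable names are mine, and the final ETC/EAC figures are derived by applying the forecasting formulas to the same numbers (they are not stated on the slides).

```python
# Minimal sketch of the worked EVM example (inputs from the slides).
rate = 10                          # $ per man-hour
budget_hours = 600                 # estimated man-hours for 10 units
bac = budget_hours * rate          # Budget At Completion = $6000

weeks_elapsed, weeks_total = 3, 6
units_done, units_total = 4, 10
hours_spent = 400

bcws = bac * weeks_elapsed / weeks_total   # planned value = $3000
bcwp = bac * units_done / units_total      # earned value  = $2400
acwp = hours_spent * rate                  # actual cost   = $4000

sv, cv = bcwp - bcws, bcwp - acwp          # -$600 (behind schedule), -$1600 (over budget)
spi, cpi = bcwp / bcws, bcwp / acwp        # 0.8, 0.6

etc = (bac - bcwp) / cpi                   # estimated cost to complete  = $6000
eac = etc + acwp                           # estimated cost at completion = $10000

print(sv, cv, spi, cpi, etc, eac)
```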
RISK CONTROL
Implementing risk response plans, tracking
identified risks, identifying new risks, and
evaluating risk process effectiveness.
RISK CONTROL
RISK EXPOSURE
Risk exposure is the product of
PROBABILITY x POTENTIAL LOSS
A project with 30% probability of late delivery and a penalty
of $100,000 for late delivery has a risk exposure of:
0.3 x 100,000 = $30,000
RISK LEVERAGE FACTOR
Risk Leverage Factor:
RLF = (REb - REa) / RMc
where
REb is the risk exposure before risk mitigation,
REa is the risk exposure after risk mitigation and
RMc is the cost of the risk mitigating actions
RISK LEVERAGE FACTOR
Suppose we are considering spending $25,000 to reduce
the probability of a risk factor with potential impact of
$500,000 from 0.4 to 0.1
then the RLF is: (200,000 - 50,000) / 25,000 = 6.0
 Larger RLFs indicate better investment strategies
 RLFs can be used to prioritize risk and determine
mitigation strategies
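A minimal Python sketch of the risk exposure and risk leverage computations above, using the slides' figures; the function names are mine.

```python
# Minimal sketch of risk exposure (RE) and risk leverage factor (RLF).
def risk_exposure(probability: float, potential_loss: float) -> float:
    """RE = probability x potential loss."""
    return probability * potential_loss

def risk_leverage(re_before: float, re_after: float, mitigation_cost: float) -> float:
    """RLF = (REb - REa) / RMc."""
    return (re_before - re_after) / mitigation_cost

# Late-delivery risk: 30% probability, $100,000 penalty.
print(risk_exposure(0.3, 100_000))        # -> 30000.0

# Spending $25,000 to cut the probability of a $500,000 impact from 0.4 to 0.1.
re_b = risk_exposure(0.4, 500_000)        # 200000
re_a = risk_exposure(0.1, 500_000)        # 50000
print(risk_leverage(re_b, re_a, 25_000))  # -> 6.0
```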
RISK REGISTER
A risk register contains the following information for each
identified risk factor:
 Risk factor identifier
 Revision number & revision date
 Responsible party
 Risk category (schedule, resources, cost, technical,
other)
 Description
 Status (Closed, Action, Monitor)
RISK REGISTER
If closed: date of closure and disposition
(disposition: avoided, transferred, removed from
watch list, immediate action or contingent action
completed, crisis managed)
If active: action plan number or contingency plan
number & status of the action
(status: on plan; or deviating from plan and risk
factors for completing the plan)
QUALITY CONTROL
Monitoring and controlling project and product
quality
REVIEWS AND INSPECTIONS
 The review process in agile software development is
usually informal.
In Scrum, for example, there is a review meeting after
each iteration of the software has been completed (a
sprint review), where quality issues and problems
may be discussed.
 In extreme programming, pair programming ensures that
code is constantly being examined and reviewed by
another team member.
 XP relies on individuals taking the initiative to improve
and re-factor code. Agile approaches are not usually
standards-driven, so issues of standards compliance are
not usually considered.
REVIEWS AND INSPECTIONS
 These are peer reviews where engineers examine the
source of a system with the aim of discovering
anomalies and defects.
 Inspections do not require execution of a system so may
be used before implementation.
 They may be applied to any representation of the system
(requirements, design, configuration data, test data,
etc.).
 They have been shown to be an effective technique for
discovering program errors.
REVIEWS AND INSPECTIONS
 Agile processes rarely use formal inspection or peer
review processes.
 Rather, they rely on team members cooperating to
check each other’s code, and informal guidelines,
such as ‘check before check-in’, which suggest that
programmers should check their own code.
 Extreme programming practitioners argue that pair
programming is an effective substitute for inspection
as this is, in effect, a continual inspection process.
 Two people look at every line of code and check it
before it is accepted.
SOFTWARE PROCESS IMPROVEMENT
SPI encompasses a set of activities that lead to a
better software process and, as a consequence,
higher-quality software delivered in a more timely
manner.
SPI helps software engineering companies find their
process inefficiencies and improve them.
SOFTWARE PROCESS IMPROVEMENT
 Process measurement
 Attributes of the current process are measured. These
are a baseline for assessing improvements.
 Process analysis
 The current process is assessed and bottlenecks and
weaknesses are identified.
 Process change
 Changes to the process that have been identified
during the analysis are introduced. For example,
process changes can include better UML tools, improved
communications, or changing the order of activities.
SOFTWARE PROCESS IMPROVEMENT
There are 2 different approaches to SPI:
Process Maturity: It is for “Plan-Driven” development and
focuses on improving process and project management.
Agile: Focuses on iterative development and reduction of
overheads.
SPI frameworks are intended as a means to assess the
extent to which an organization’s processes follow best
practices and help to identify areas of weakness for process
improvement.
CMMI PROCESS IMPROVEMENT
FRAMEWORK
 There are several process maturity models:
 SPICE
 ISO/IEC 15504
 Bootstrap
 Personal Software Process (PSP)
 Team Software Process (TSP)
 TickIT
 SEI CMMI
CMMI PROCESS IMPROVEMENT
FRAMEWORK
 Capability Maturity Model Integration (CMMI) framework
is the current stage of work on process assessment and
improvement that started at the Software Engineering
Institute (SEI) in the 1980s.
 The SEI’s mission is to promote software technology
transfer particularly to US defense contractors.
 It has had a profound influence on process improvement.
 Capability Maturity Model introduced in the early 1990s
 Revised maturity framework (CMMI) introduced in 2001
CMMI PROCESS IMPROVEMENT
FRAMEWORK
 CMMI allows a software company’s development and
management processes to be assessed and assigned a
score.
 There are 4 process groups which include 22 process
areas.
 These process areas are relevant to software process
capability and improvement.
CMMI PROCESS IMPROVEMENT
FRAMEWORK
Category: Process management
Process areas: Organizational process definition (OPD); Organizational process focus (OPF); Organizational training (OT); Organizational process performance (OPP); Organizational innovation and deployment (OID)

Category: Project management
Process areas: Project planning (PP); Project monitoring and control (PMC); Supplier agreement management (SAM); Integrated project management (IPM); Risk management (RSKM); Quantitative project management (QPM)
CMMI PROCESS IMPROVEMENT
FRAMEWORK
Category: Engineering
Process areas: Requirements management (REQM); Requirements development (RD); Technical solution (TS); Product integration (PI); Verification (VER); Validation (VAL)

Category: Support
Process areas: Configuration management (CM); Process and product quality assurance (PPQA); Measurement and analysis (MA); Decision analysis and resolution (DAR); Causal analysis and resolution (CAR)
CMMI MODELS
 There are 2 different CMMI models:
 Staged CMMI: Assesses a software company’s maturity from 1 to 5.
Process improvement is achieved by implementing practices at
each level and moving from lower levels to higher levels. Used to
assess a company as a whole.
 Continuous CMMI: Assesses each process area separately, and
assigns a capability assessment score from 0 to 5. Normally
companies operate at different maturity levels for different process
areas. A company may be at level 5 for Configuration Management
process, but at level 2 for Risk Management process.
STAGED CMMI
 Initial: Essentially uncontrolled
 Managed: Product management procedures defined and
used
 Defined: Process management procedures and
strategies defined and used
 Quantitatively Managed: Quality management strategies
defined and used
 Optimizing: Process improvement strategies defined and
used
STAGED CMMI (LEVELS 1-2)
Level: Performed (level 1); no process areas are assigned to this level

Level: Managed (level 2)
Focus: Basic project management
Process areas: Requirements management; Project planning; Project monitoring and control; Supplier agreement management; Measurement and analysis; Process and product quality assurance; Configuration management
STAGED CMMI (LEVEL 3)
Level: Defined (level 3)
Focus: Process standardization
Process areas: Requirements development; Technical solution; Product integration; Verification; Validation; Organizational process focus; Organizational process definition; Organizational training; Integrated project management; Integrated supplier management; Risk management; Decision analysis and resolution; Organizational environment for integration; Integrated teaming
STAGED CMMI (LEVELS 4-5)
Level: Quantitatively managed (level 4)
Focus: Quantitative management
Process areas: Organizational process performance; Quantitative project management

Level: Optimizing (level 5)
Focus: Continuous process improvement
Process areas: Organizational innovation and deployment; Causal analysis and resolution
CONTINUOUS CMMI
 The maturity assessment is not a single value but is a
set of values showing the organization’s maturity in each
area.
 Examines the processes used in an organization and
assesses their maturity in each process area.
 The advantage of a continuous approach is that
organizations can pick and choose process areas to
improve according to their local needs.
PEOPLE CMM
 Used to improve the workforce.
 Defines a set of 5 organizational maturity levels that
provide an indication of the sophistication of workforce
practices and processes.
 People CMM complements any SPI framework.
PEOPLE CMM
SOFTWARE PRODUCT METRICS
Product metrics fall into two classes:
Dynamic metrics such as response time, availability, or
reliability are collected by measurements made of a program
in execution. These metrics can be collected during testing or
after the system has gone into use.
Static metrics are collected by measurements made of
representations of the system such as the design, program, or
documentation.
SOFTWARE PRODUCT METRICS
Unfortunately, direct product measurements cannot be made
of some quality attributes, such as understandability and
maintainability.
Therefore, you have to assume that there is a relationship
between an internal attribute that can be measured and the
quality attribute of interest.
Model formulation involves identifying the functional form
(linear or exponential) by analysis of collected data, identifying
parameters that are to be included in the model, and calibrating
these parameters.
SOFTWARE QUALITY ATTRIBUTES
Subjective quality of a software system is largely based
on its non-functional system attributes:
Safety
Understandability
Portability
Security
Testability
Usability
Reliability
Adaptability
Reusability
Resilience
Modularity
Efficiency
Robustness
Complexity
Learnability
SOFTWARE QUALITY ATTRIBUTES
[Figure: McCall's software quality factors. Product Revision: maintainability, flexibility, testability. Product Transition: portability, reusability, interoperability. Product Operation: correctness, usability, reliability, efficiency, integrity.]
REQUIREMENTS MODEL METRICS
Technical work in software engineering begins with
the creation of the requirements model.
These metrics examine the requirements model with the
intent of predicting the size of the resultant system.
Size is an indicator of increased coding, integration,
and testing effort.
REQUIREMENTS MODEL METRICS
Function-Based Metrics: Function Point (FP) metric is used
to measure the functionality delivered by a software system.
Specification Metrics: measure the quality of the analysis model
and its specification, such as ambiguity, completeness, correctness,
understandability, verifiability, consistency, and traceability.
Although many of these characteristics are qualitative, there
are quantitative specification metrics.
REQUIREMENTS MODEL METRICS
Number of total requirements: Nr = Nf + Nnf
Nr = number of total requirements
Nf = number of functional requirements
Nnf = number of non-functional requirements
Ambiguity: Q = Nui / Nr; where
Nui= number of requirements for which all reviewers
had identical interpretations
The closer the value of Q to 1, the lower the ambiguity
of the specification.
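A minimal Python sketch of the two specification metrics above; the counts are hypothetical.

```python
# Minimal sketch of the specification metrics (counts are hypothetical).
n_functional = 120        # Nf
n_nonfunctional = 30      # Nnf
n_total = n_functional + n_nonfunctional     # Nr = Nf + Nnf = 150

n_unambiguous = 135       # Nui: requirements all reviewers interpreted identically
q = n_unambiguous / n_total                  # Q = Nui / Nr
print(n_total, round(q, 2))                  # -> 150 0.9 (closer to 1 = less ambiguous)
```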
REQUIREMENTS MODEL METRICS
The adequacy of use cases can be measured using an
ordered triple (Low, Medium, High) to indicate:
Level of granularity (excessive detail) specified
Level in the primary and secondary scenarios
Sufficiency of the number of secondary scenarios in
specifying alternatives to the primary scenario
Semantics of analysis UML models such as sequence, state,
and class diagrams can also be measured.
OO DESIGN MODEL METRICS
Coupling: Physical connections between elements of
OO design such as number of collaborations between
classes or the number of messages passed between
objects.
Cohesion: Cohesiveness of a class is determined by
examining the degree to which the set of properties it
possesses is part of the problem or design domain.
CLASS-ORIENTED METRICS – THE CK
METRICS SUITE
Depth of the inheritance tree (DIT) is the maximum
length from the node to the root of the tree. As DIT
grows, it is likely that lower-level classes will inherit
many methods. This leads to potential difficulties when
attempting to predict the behavior of a class, but large
DIT values imply that many methods may be reused.
Coupling between object classes (CBO). The CRC
model may be used to determine the value for CBO. It
is the number of collaborations listed for a class on its
CRC index card. As CBO increases, it is likely that the
reusability of a class will decrease.
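A minimal Python sketch of the two CK metrics above using a hypothetical class hierarchy; counting the implicit root (object) is a choice made here, and the CBO value is simply the length of an assumed collaborator list taken from a CRC card.

```python
# Minimal sketch (hypothetical class names) of two CK metrics: DIT computed from
# a simple inheritance hierarchy, and CBO taken as the number of collaborations
# listed for a class on its CRC card.
class Account: ...
class SavingsAccount(Account): ...
class YouthSavingsAccount(SavingsAccount): ...

def dit(cls) -> int:
    """Depth of inheritance tree: longest path from the class up to the root."""
    if cls is object:
        return 0
    return 1 + max(dit(base) for base in cls.__bases__)

print(dit(YouthSavingsAccount))   # -> 3 (counting object as the root)

# CBO: number of classes this class collaborates with (hypothetical CRC card).
collaborators = {"YouthSavingsAccount": ["Account", "Customer", "Statement"]}
print(len(collaborators["YouthSavingsAccount"]))   # CBO -> 3
```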
USER INTERFACE DESIGN METRICS
Layout Appropriateness (LA) measures the user’s
movements from one layout entity to another, where layout
entities include icons, text, menus, and windows.
Web page metrics include the number of words, links,
graphics, colors, and fonts contained within a Web
page.
USER INTERFACE DESIGN METRICS
Does the user interface promote usability?
Are the aesthetics of the WebApp appropriate for the
application domain and pleasing to the user?
Is the content designed in a manner that imparts the most
information with the least effort?
Is navigation efficient and straightforward?
Has the WebApp architecture been designed to
accommodate the special goals and objectives of WebApp
users, the structure of content and functionality, and the flow of
navigation required to use the system effectively?
Are components designed in a manner that reduces
procedural complexity and enhances the correctness, reliability
and performance?
SOURCE CODE METRICS
Length of identifiers is the average length of identifiers
(names for variables, methods, etc.)
The longer the identifiers, the more likely that they are
meaningful, thus more maintainable.
Depth of Conditional Nesting measures the nesting of if-statements.
Deeply nested if-statements are hard to understand and
potentially error-prone.
SOURCE CODE METRICS
Cyclomatic Complexity is a measure of the number of
independent paths through the code (for example, paths created
by if-else branches) and reflects structural complexity.
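A minimal sketch with a hypothetical function: for a single-entry, single-exit routine, cyclomatic complexity can be computed as the number of decision points plus one.

```python
# Minimal sketch (hypothetical function). Decision points: the if and two elifs,
# so cyclomatic complexity V(G) = 3 + 1 = 4, i.e. 4 independent paths that basis
# path testing should cover.
def classify_grade(score: int) -> str:
    if score >= 90:
        return "A"
    elif score >= 75:
        return "B"
    elif score >= 60:
        return "C"
    else:
        return "F"

def cyclomatic_complexity(decision_points: int) -> int:
    """V(G) = number of decision points + 1 for a single-entry, single-exit routine."""
    return decision_points + 1

print(cyclomatic_complexity(3))   # -> 4
```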
TESTING METRICS
The majority of testing metrics focus on the process of testing,
NOT on the technical characteristics of the tests themselves.
Testers rely on analysis, design, and code metrics to
guide them in the design and execution of test cases.
For example, Cyclomatic Complexity, a design metric,
lies at the core of basis path testing.
Each path of an if-else statement (normal and alternate)
must be tested.