Cost Estimation Overview
LiGuo Huang
Computer Science and Engineering, Southern Methodist University

Software Cost Estimation Methods
• Software managers are responsible for controlling the software budget
• Cost estimation: prediction of both the person-effort and the elapsed time of a project
• Cost/effort estimation methods [Boehm 1981]:
  – Algorithmic
  – Expert judgement
  – Estimation by analogy
  – Parkinsonian
  – Price-to-win
  – Top-down
  – Bottom-up

Algorithmic Models (1)
• Software cost estimation as a function of a number of variables (cost drivers)
• Linear models: Effort = a0 + a1*x1 + ... + an*xn
• Multiplicative models: Effort = a0 * a1^x1 * a2^x2 * ... * an^xn
• Analytic models: Effort = f(x1, ..., xn)
• Tabular models: tables relating values of cost driver variables either to portions of software development effort or to multipliers used to adjust the effort estimate
• Composite models: a combination of linear, multiplicative, analytic, and tabular functions
  – Examples: SLIM, Price-S, COCOMO II
• (The linear and multiplicative forms are illustrated in a short code sketch after the Price-to-Win slide below.)

Algorithmic Models (2)
• Strengths:
  – objective, repeatable, analyzable formula
  – efficient; able to support a family of estimates and sensitivity analysis
  – objectively calibrated to previous experience
• Weaknesses:
  – subjective sizing and cost driver inputs
  – unable to deal with exceptional conditions (e.g., exceptional personnel, exceptional project teamwork, exceptional matches/mismatches)
  – calibrated to the past, not the future

Expert Judgment
• Consulting with experts who use their experience and understanding of the proposed project
• Strengths:
  – able to factor in the differences between past project experiences and the current project
  – able to factor in exceptional conditions and other unique project considerations
• Weaknesses:
  – may be biased by optimism, pessimism, or unfamiliarity with key aspects of the project
  – no better than the participants
  – hard to balance a quick-response expert estimate against a well-documented group-consensus estimate
• Highly complementary with algorithmic models

Group Consensus Techniques: Delphi
• Originated at the Rand Corporation in 1948
• Balances biased individual opinions and group meetings
• Standard Delphi technique:
  – experts fill out forms anonymously
  – no group discussion
• Wideband Delphi technique:
  – a coordinator calls a group meeting focused on discussing widely varied estimates
  – combines the advantages of a group meeting with the anonymous estimation of standard Delphi

Estimation by Analogy
• Reasoning by analogy with one or more completed projects
  – Relate the actual costs of completed projects to a cost estimate for a similar new project
  – Estimate at the total project level or at the subsystem level
• Strength:
  – the estimate is based on representative experience from a real project
• Weakness:
  – depends on how representative that experience is

Parkinsonian Estimation
• Parkinson's law: work expands to fill the available volume
• Often not accurate
• Tends to reinforce poor software development practices
• Not recommended!

Price-to-Win Estimating
• Used when no available software cost estimating technique is powerful enough to provide a convincingly legitimate estimate
• Previously used by many software companies to win contracts
• Often results in budget and schedule slips and a lose-lose situation
• Not recommended!
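As a concrete illustration of the linear and multiplicative algorithmic forms listed above, here is a minimal Python sketch. The cost drivers and coefficient values are invented for illustration only and are not calibrated to any data set; real models such as COCOMO define their own drivers and calibrated constants.

```python
# Minimal sketch of the linear and multiplicative algorithmic model forms.
# The cost drivers and coefficients below are illustrative placeholders,
# not values calibrated from any real project data.

def linear_effort(drivers, a0, a):
    """Effort = a0 + a1*x1 + ... + an*xn."""
    return a0 + sum(ai * xi for ai, xi in zip(a, drivers))

def multiplicative_effort(drivers, a0, a):
    """Effort = a0 * a1**x1 * a2**x2 * ... * an**xn."""
    effort = a0
    for ai, xi in zip(a, drivers):
        effort *= ai ** xi
    return effort

if __name__ == "__main__":
    # Hypothetical cost drivers: size in KSLOC, team experience rating,
    # required reliability rating.
    drivers = [50.0, 1.2, 1.1]
    print(linear_effort(drivers, a0=5.0, a=[2.5, -3.0, 4.0]))          # linear form
    print(multiplicative_effort(drivers, a0=3.0, a=[1.05, 0.9, 1.1]))  # multiplicative form
```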
Top-down Estimating
• An overall cost estimate for the project is derived from the global properties of the software product
  – A top-down estimate takes some factor external to the actual deliverables or activities, calculates an overall effort for the project, and then distributes that effort across the activities in the project
• Strengths:
  – system-level focus
  – takes into account the costs of system-level functions, e.g., integration, users' manuals, configuration management, ...
• Weaknesses:
  – may not identify low-level technical problems that escalate costs
  – may miss software components that must be developed
  – provides little detailed basis for cost justification and iteration
  – less stable than a multi-component estimate
• Examples: COCOMO, Function Points (FP/FP Lite), Use Case Worksheet (UCEW)

Bottom-up Estimating
• Aggregates estimates of the effort to create specific elements of the solution or to perform specific activities during the project
  – Each software component's cost is usually estimated by the developer responsible for it
  – Component estimates are summed to a total estimated cost for the overall product
• Strengths:
  – the estimate is based on a more detailed understanding of the job
  – the estimate is backed by the personal commitment of the individual responsible for the job
  – more stable, since estimation errors tend to balance out
• Weaknesses:
  – may overlook system-level costs, e.g., integration, configuration management, quality assurance, project management, ...
  – often underestimates
  – requires more effort than a top-down estimate

IBM SUMMIT Bottom-Up Estimation
• Estimation is based on a project WBS
• The project WBS is a task hierarchy with three levels:
  – Level 1 task: Phase (for summarization)
  – Level 2 task: Discipline (for summarization)
  – Level 3 task: RUP Workflow Detail (for effort calculation)

IBM SUMMIT Bottom-Up Estimation
• Only Level 3 tasks have estimation parameters and role assignments
• The project estimate is calculated by adding up the individual Level 3 task estimates
• Note: each task in the WBS has a unique code. Example: RIW 3.1, where R = RUP, I = Inception, W = Workflows, and 3 = Requirements discipline

IBM SUMMIT Bottom-Up Estimation
• There are three pre-prepared WBSs, called Route Maps, in SUMMIT
• These three Route Maps match the corresponding RUP configurations:
  – Classic RUP project
  – Medium RUP project
  – Small RUP project
• Users can select and modify a WBS for their project or create their own

IBM SUMMIT Bottom-Up Estimation – Level 3 Task Estimation
• Each Level 3 task has one or more Quantitative Influencing Factors (QIFs), which are a way of measuring the things to be done
  – Example: Use Cases, Development Platforms
• Each QIF in each task has an estimate range of unit effort in staff hours
  – Example: 0.5 to 2 hrs per Use Case and 2 to 8 hrs per Development Platform
• The Level 3 task estimate also needs:
  – a count for each QIF (example: 4 Use Cases and 2 Development Platforms)
  – a formula for calculating the effort associated with each QIF (a sketch of this calculation follows below)
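A minimal sketch of the SUMMIT-style bottom-up calculation just described, assuming each QIF contributes count × unit effort and that QIF contributions are summed per task and then per project; the slides state only that each QIF has a count, an estimate range, and a formula, so the exact formula used here is an assumption.

```python
# Sketch of a SUMMIT-style bottom-up estimate. Each Level 3 task carries one or
# more Quantitative Influencing Factors (QIFs); each QIF has a count and a
# low/high unit-effort range in staff hours. The formula "count * unit effort",
# summed over QIFs and then over tasks, is an assumption -- the slides only say
# each QIF has a count, an estimate range, and a formula.

def task_estimate(qifs):
    """Return a (low, high) staff-hour range for one Level 3 task."""
    low = sum(count * lo for count, lo, hi in qifs)
    high = sum(count * hi for count, lo, hi in qifs)
    return low, high

def project_estimate(tasks):
    """Add up the Level 3 task ranges to get the project-level range."""
    low = sum(task_estimate(qifs)[0] for qifs in tasks.values())
    high = sum(task_estimate(qifs)[1] for qifs in tasks.values())
    return low, high

if __name__ == "__main__":
    # Example numbers from the slides: task "RIW 3.1" with 4 Use Cases at
    # 0.5-2 hrs each and 2 Development Platforms at 2-8 hrs each.
    tasks = {
        "RIW 3.1": [(4, 0.5, 2.0), (2, 2.0, 8.0)],
    }
    print(project_estimate(tasks))  # -> (6.0, 24.0)
```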
Software Cost Estimation Methods
• None of the alternative methods is better than the others in every respect
• The Parkinsonian and price-to-win methods are unacceptable
• Strengths and weaknesses are complementary:
  – algorithmic models vs. expert judgement
  – top-down vs. bottom-up
• The best approach is a combination of methods
  – compare and iterate estimates, reconcile differences
• COCOMO is the most widely used, thoroughly documented, and calibrated cost model

Cost Estimating Techniques – New Observations
• Model-based: SLIM, Price-S, SEER, COCOMO
• Expertise-based: Delphi, rule-based
• Learning-oriented: neural networks, case-based
• Dynamic-based
• Regression-based: OLS, robust
• Composite, Bayesian-based: COCOMO II

Model-Based Techniques (1) – SLIM
• SLIM: Putnam's Software Life-cycle Model
  – applied to projects exceeding 70,000 lines of code
  – assumes software project effort is distributed similarly to a collection of Rayleigh curves
• [Figure: Rayleigh staffing profile, percentage of total effort vs. time, dy/dt = 2*K*a*t*e^(-a*t^2), shown with K = 1.0, a = 0.02, td = 0.18]

Model-Based Techniques (1) – SLIM
• Putnam model (sketched in code after the SEER-SEM slides below):
  – Software equation: Size = C * B^(1/3) * T^(4/3), so the technology constant is C = Size / (B^(1/3) * T^(4/3))
  – Total effort B (person-months): B = (1/T^4) * (Size/C)^3
  – T: required development time in years
  – Size is estimated in LOC
  – C is a parameter dependent on the development environment, determined from historical data of past projects

Model-Based Techniques (1) – SLIM
• SLIM assumptions:
  – applied to projects exceeding 70,000 lines of code
  – assumes software project effort is distributed similarly to a collection of Rayleigh curves
• SLIM tools, developed by Quantitative Software Management based on Putnam's SLIM model:
  – SLIM-Estimate: project planning tool
  – SLIM-Control: project tracking and oversight tool
  – SLIM-Metrics: software metrics repository and benchmarking tool
• Constraint: depends on the accuracy of the size estimate
• More information on the SLIM tools is available at http://www.qsm.com

Model-Based Techniques (2) – Price-S
• Price-S: a proprietary model developed by RCA, originally for internal use on government software projects
• The Price-S model has three submodels:
  – Acquisition submodel: forecasts software costs and schedules
  – Sizing submodel: SLOC, Function Points, Predictive Object Points (POPs)
  – Life-cycle cost submodel: rapid and early costing of the maintenance and support phase of the software
• More information on PRICE Systems is available at http://www.pricesystems.com

Model-Based Techniques (3) – SEER-SEM
• SEER-SEM: developed by Galorath, Inc.
  – based on the Jensen model
  – parametric approach
  – scope: all phases of the project life cycle, from early specification through design, development, delivery, and maintenance
• Model inputs and outputs:
  – [Diagram: inputs (size, personnel, environment, complexity, constraints) feed SEER-SEM, which produces outputs (effort, cost, schedule, risk, maintenance, reliability)]

Model-Based Techniques (3) – SEER-SEM
• Model features:
  – allows the probability level of estimates, staffing, and schedule constraints to be input as independent variables
  – facilitates extensive sensitivity and trade-off analyses on model input parameters
  – organizes project elements into work breakdown structures for convenient planning and control
  – allows interactive scheduling of project elements on Gantt charts
  – builds estimates upon a sizable knowledge base of existing projects
• More information on SEER-SEM is available at http://www.galorath.com/
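The Putnam relations quoted on the SLIM slides above can be turned into a small calculation. This is a minimal sketch, assuming C is first calibrated from one completed project and then reused for a new estimate; the sample project numbers are invented.

```python
# Sketch of the Putnam/SLIM relations quoted above:
#   Size = C * B^(1/3) * T^(4/3)            (software equation)
#   so    C = Size / (B^(1/3) * T^(4/3))    (technology constant)
#   and   B = (Size / C)^3 / T^4            (total effort)
# Size is in LOC, B is total effort, T is development time in years, and C is
# calibrated from past projects. The sample numbers below are illustrative only.

def technology_constant(size_loc, effort_b, time_t_years):
    """Derive C from a completed project's size, effort, and duration."""
    return size_loc / (effort_b ** (1.0 / 3.0) * time_t_years ** (4.0 / 3.0))

def total_effort(size_loc, c, time_t_years):
    """Estimate total effort B for a new project of a given size and schedule."""
    return (size_loc / c) ** 3 / time_t_years ** 4

if __name__ == "__main__":
    # Calibrate C on a hypothetical past project, then reuse it for a new one.
    c = technology_constant(size_loc=100_000, effort_b=60.0, time_t_years=1.5)
    print(round(c))                                                # technology constant
    print(total_effort(size_loc=120_000, c=c, time_t_years=1.5))  # effort estimate
```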
Model-Based Techniques (4) – COCOMO
• COCOMO: COnstructive COst MOdel
  – COCOMO 81
  – COCOMO II.2000
    • Early Design model
    • Post-Architecture model
  – Calibrated, using a Bayesian approach, to a database of 161 projects collected from commercial, aerospace, government, and non-profit organizations
• More information on COCOMO II is available at http://sunset.usc.edu/csse/research/COCOMOII/cocomo_main.html

Model-Based Techniques Summary
• Strengths:
  – good for budgeting, trade-off analysis, planning and control, and investment analysis
  – calibrated to past experience
• Weakness:
  – difficulty with unprecedented situations

Neural Networks
• Strength:
  – the estimate is based on previous project experience
• Weaknesses:
  – extremely large data sets are needed to train neural networks accurately
  – little intuitive support for sensitivity analysis
  – little intuitive support for planning and control

Dynamic-Based Techniques
• Software project effort and cost factors are dynamic rather than static
  – they change over the duration of the system development
• Strength:
  – good for planning and control
• Weakness:
  – difficult to calibrate

Regression-Based Techniques
• Used in conjunction with model-based techniques
  – Standard regression: Ordinary Least Squares (OLS): y_t = b1 + b2*x_t2 + ... + bk*x_tk + e_t (see the sketch at the end of this section)
  – Robust regression: alleviates the problem of outliers

Composite Techniques
• A combination of two or more techniques, used to formulate the most appropriate functional form for estimation
• Bayesian approach:
  – a-priori expert judgement can be combined with sampling information (data) to produce a robust a-posteriori model

QSM: Key Components of a Successful Estimation Process
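As a minimal illustration of the regression-based calibration mentioned above (not of the QSM slide, whose content is not included here), the sketch below fits the multiplicative model Effort = a * Size^b by ordinary least squares on logarithms. The project data points are invented for illustration.

```python
import math

# Minimal OLS sketch: fit log(Effort) = b1 + b2 * log(Size), i.e. the
# multiplicative model Effort = a * Size^b with a = exp(b1), b = b2.
# The (size_KSLOC, effort_person_months) data points are invented.

def ols_fit(xs, ys):
    """Closed-form simple linear regression: returns (intercept, slope)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sxx = sum((x - mean_x) ** 2 for x in xs)
    slope = sxy / sxx
    return mean_y - slope * mean_x, slope

if __name__ == "__main__":
    projects = [(10, 25), (25, 70), (50, 160), (100, 360), (200, 820)]
    log_size = [math.log(s) for s, _ in projects]
    log_effort = [math.log(e) for _, e in projects]
    b1, b2 = ols_fit(log_size, log_effort)
    a, b = math.exp(b1), b2
    print(f"Effort ~= {a:.2f} * Size^{b:.2f}")  # calibrated multiplicative model
    print(a * 80 ** b)                          # estimate for an 80 KSLOC project
```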