A Tale of Two XP Teams
"It was the best of times, it was the worst of times."
Laurie Williams, North Carolina State University

Agenda
• IBM team: a "safe subset" of XP practices
• Sabre Airline Solutions team: "mostly all" of XP
• Summary/comparison
• Extreme Programming examination

Extreme Programming Evaluation Framework, XP-EF (said "X-pef")
• XP-Context Factors (XP-cf)
• XP-Adherence Metrics (XP-am) (said "X-pam")
• XP-Outcome Measures (XP-om) (said "X-pom")
• A reusable framework for reporting:
  – the extent to which an organization has adopted XP practices; and
  – the result of this adoption.

IBM: XP-Context Factors (XP-cf)
• Small team (7-10)
• Co-located
• Web development (toolkit)
• Supplier and customer distributed (US and overseas)
• Examined one release, "old" (low XP), to the next, "new" (more XP)

IBM: XP-Adherence Metrics (XP-am)
• Subjective: Shodan Survey (http://agile.csc.ncsu.edu/survey)
  – Old: 56%
  – New: 72%
• Objective XP-am metrics:

Metric | Practice | Old | New
Automated test classes per user story | Testing | 0.11 | 0.45
Test coverage (statement) | Testing | 30% | 46%
Unit test runs per person-day | Testing | 14% | 11%
Test LOC / source LOC | Testing | 0.26 | 0.42
Acceptance test execution | Testing | Manual | Manual
Did customers run your acceptance tests? | Testing | No | No
Pairing frequency | Pair Programming | <5% | 48%
Release length | Short Releases | 10 months | 5 months
Iteration length | Short Releases | Weekly | Weekly

IBM: XP-Outcome Measures (XP-om)
(Rows with Old = 1.0 are reported relative to the "old" release.)

Result | Metric | Old | New
Internal code structure (mean values) | Methods per class | 1.0 | 0.96
 | Depth of inheritance tree | 1.0 | 0.96
 | Number of children | 1.0 | 1.55
 | Coupling | 1.0 | 1.01
 | Response for class | 1.0 | 0.99
 | Lines of code per class | 1.0 | 0.98
 | McCabe complexity | 1.0 | 0.74
Response to customer change | Ratio (user stories in + out) / total | N/A | 0.23
Internally-visible quality | Test defects / KLOEC of code | 1.0 | 0.50
Externally-visible quality | Released defects / KLOEC of code | 1.0 | 0.24
Productivity | Stories / PM | 1.0 | 1.34
 | Relative KLOEC / PM | 1.0 | 1.7
Customer satisfaction | | N/A | High
Morale | Via survey | 1.0 | 1.11

Sabre: XP-Context Factors (XP-cf)
• Small team (6-10)
• Co-located
• Scriptable GUI environment
• Customer remote, multinational, several time zones
• Examined the third release, "old" (low XP), to the ninth release, "new" (sustained XP)

Sabre: XP-Adherence Metrics (XP-am)
• Subjective: Shodan Survey
  – New: 77%
• Objective XP-am metrics:

Metric | Practice | Old | New
Automated test classes per user story | Testing | N/A | 3.22
Test coverage (statement) | Testing | N/A | 32.9%
Unit test runs per person-day | Testing | None | 1.0 (anecdotal)
Test LOC / source LOC | Testing | 0.054 | 0.296
Acceptance test execution | Testing | Manual | Manual
Did customers run your acceptance tests? | Testing | No | No
Pairing frequency | Pair Programming | 0% | 50%
Release length | Short Releases | 18 months | 3.5 months
Iteration length | Short Releases | None | 10 days

Sabre: XP-Outcome Measures (XP-om)
(Rows with Old = 1.0 are reported relative to the "old" release.)

Result | Metric | Old | New
Internal code structure (mean values) | Methods per class | 1.0 | 1.50
 | Depth of inheritance tree | 1.0 | 1.00
 | Number of children | 1.0 | 1.00
 | Coupling | 1.0 | 1.17
 | Response for class | 1.0 | 1.28
 | Lines of code per class | 1.0 | 1.15
 | McCabe complexity | 1.0 | 1.35
Response to customer change | Ratio (user stories in + out) / total | N/A | N/A
Internally-visible quality | Test defects / KLOEC of code | 1.0 | 0.25
Externally-visible quality | Released defects / KLOEC of code | 1.0 | 0.70
Productivity | Stories / PM | N/A | N/A
 | Relative KLOEC / PM | 1.0 | 1.46
Customer satisfaction | | N/A | High
Morale | Via survey | N/A | 68.1%

Summary
• Two characteristically agile teams.
• When used by teams operating within the specified context, a specified subset of XP practices leads to an improvement in . . .
Alternative hypothesis: improvement in . . . | IBM case study evidence? | Sabre case study evidence?
Internal code structure | No | No
Pre-release quality | Yes | Yes
Post-release quality | Yes | Yes
Programmer productivity | Yes | Yes
Customer satisfaction | Yes | N/A
Team morale | Yes | N/A

Conclusions
• XP "successful" for two small, co-located teams.
• "It was the best of times, it was the worst of times."
  – OK, no "worst times" detected.
  – Though some may still yearn for the structure of plan-driven methods . . . a personal, cultural thing.

Agile Practices Knowledge Base Breakout Group
• What information needs to be in a knowledge base to provide meaningful resources and information to researchers and practitioners?
• Use XP-EF as a baseline:
  – We know of shortcomings and some areas we are working on.
  – What else?
  – Something different?
• Logistics of how such a knowledge base could work:
  – OpenSeminar-style (http://openseminar.org)
  – Self-moderating, self-refereeing
• Do you have any data/experiences to contribute?
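Reading note on the XP-om tables: a minimal Python sketch of the baseline-normalized reporting, assuming the Old = 1.0 convention means each "new" value is the new release's raw measurement divided by the old release's. The counts and helper names below are hypothetical illustrations, not data from the IBM or Sabre studies.

    # Minimal sketch: baseline-normalized reporting as in the XP-om tables above.
    # All counts are hypothetical placeholders, not data from either case study.

    def defects_per_kloec(defects: int, kloec: float) -> float:
        """Defect density: defects per thousand lines of executable code (KLOEC)."""
        return defects / kloec

    def relative_to_old(old_value: float, new_value: float) -> float:
        """Normalize against the old release: old maps to 1.0, new maps to new/old."""
        return new_value / old_value

    # Hypothetical raw counts for two releases of the same product.
    old_density = defects_per_kloec(defects=40, kloec=40.0)   # 1.00 defects/KLOEC
    new_density = defects_per_kloec(defects=12, kloec=48.0)   # 0.25 defects/KLOEC

    print("old (baseline):", 1.0)
    print("new (relative):", round(relative_to_old(old_density, new_density), 2))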