Projecto em Informática e Gestão de Empresas
TESTING METHODOLOGY
Version 2.2
Type: Deliverable
Title: Testing Methodology
Author(s): Group 25: Joao Alves
Last Saved Date / Time: 22/06/2017 18:31:00
Document No. / Revision No.: DOC-TM0004 / 2.2
Draft / Published: Published
DISTRIBUTION LIST
To: Mário Romão
CC: Ana Violante, Francisco Mineiro
REVISION HISTORY
Revision Date | Revision Author(s) / Summary of Change | Additional Distribution | Revision No.
23-Dec-2005 | Joao Alves / First written | | 0.1
27-Dec-2005 | Francisco Mineiro / Revision | | 0.2
3-Jan-2006 | Ana Violante / Revision | | 0.3
10-Jan-2006 | Joao Alves / Modified | | 0.4
14-Jan-2006 | Joao Alves / Modified | | 0.5
19-Jan-2006 | Ana Violante / Revision | | 0.6
23-Jan-2006 | Joao Alves / Published | | 1.0
10-Feb-2006 | Joao Alves / Modified | | 1.1
8-Mar-2006 | Francisco Mineiro / Revision | | 1.2
10-Mar-2006 | Joao Alves / Published | | 2.0
22-Mar-2006 | Joao Alves, Ana Violante / Modified | | 2.1
22-Jun-2006 | Francisco Mineiro, Joao Alves / Revision | | 2.2
TABLE OF CONTENTS
1. INTRODUCTION ....................................................................................................................................... 7
1.1. DOCUMENT OBJECTIVE ...................................................................................................................... 7
1.2. DOCUMENT OVERVIEW ....................................................................................................................... 7
2. TESTING CONCEPTS .............................................................................................................................. 8
2.1. WHAT IS SOFTWARE TESTING? ........................................................................................................... 8
2.2. OVERVIEW ......................................................................................................................................... 8
2.3. TEST CASES, SUITES, SCRIPTS, AND SCENARIOS ................................................................................ 9
2.4. A SAMPLE TESTING CYCLE ................................................................................................................. 9
2.5. TEST TYPES..................................................................................................................................... 10
2.5.1. Functional Tests .................................................................................................................. 11
2.5.2. Regression Tests ................................................................................................................. 11
2.5.3. Performance Tests .............................................................................................................. 11
2.5.3.1. Load Tests ............................................................................................................................ 13
2.5.3.2. Stress Tests.......................................................................................................................... 13
3. MANUAL VS AUTOMATED TESTING................................................................................................... 14
3.1. AUTOMATED TESTING METHODOLOGIES ............................................................................................ 14
3.1.1. What is “Automated testing”? .............................................................................................. 14
3.1.2. Cost-Effective Automated Testing ....................................................................................... 14
3.1.3. The Record/Playback Myth ................................................................................................. 15
3.1.4. Viable Automated Testing Methodologies ........................................................................... 15
3.1.4.1. The “Functional Decomposition” Method .............................................................................. 15
3.1.4.2. The “Key-Word Driven” or “Test Plan Driven” Method .......................................................... 17
3.1.5. Preparation is the Key ......................................................................................................... 18
3.1.6. Managing Resistance to Change ........................................................................................ 19
3.1.7. Staffing Requirements ......................................................................................................... 19
3.1.8. Summary ............................................................................................................................. 20
4. TEST MANAGEMENT ............................................................................................................................ 21
4.1. WHAT IS TEST MANAGEMENT?.......................................................................................................... 21
4.2. PRINCIPLES ..................................................................................................................................... 21
4.3. IMPROVING THE PROCESS ................................................................................................................. 22
4.4. REQUIREMENTS MANAGEMENT ......................................................................................................... 23
4.5. PLANNING AND RUNNING TESTS........................................................................................................ 23
4.5.1. Planning Tests ..................................................................................................................... 23
4.5.2. Running Tests ...................................................................................................................... 24
4.6. ISSUES AND DEFECTS TRACKING ...................................................................................................... 25
5. PROCESS MODELS ............................................................................................................................... 27
5.1. THE V-MODEL.................................................................................................................................. 27
5.1.1. Development Process.......................................................................................................... 27
5.1.1.1. Requirements ....................................................................................................................... 27
5.1.1.2. System Specification ............................................................................................................ 27
5.1.1.3. System Design ..................................................................................................................... 28
5.1.1.4. Detailed Design .................................................................................................................... 28
5.1.1.5. Coding .................................................................................................................................. 28
5.1.2. Testing Process ................................................................................................................... 29
5.1.2.1. Unit Testing .......................................................................................................................... 30
5.1.2.2. Integration Testing ................................................................................................................ 30
5.1.2.3. System Testing ..................................................................................................................... 31
5.1.2.4. Acceptance Testing .............................................................................................................. 31
5.2. RATIONAL UNIFIED PROCESS ............................................................................................................ 32
5.2.1. Overview .............................................................................................................................. 32
5.2.1.1. The Inception Phase ............................................................................................................. 33
5.2.1.2. The Elaboration Phase ......................................................................................................... 33
5.2.1.3. The Construction Phase ....................................................................................................... 33
5.2.1.4. The Transition Phase............................................................................................................ 34
5.2.1.5. Milestones ............................................................................................................................ 34
5.2.1.6. Iterations ............................................................................................................................... 34
5.2.2. Best Practices of RUP ......................................................................................................... 35
5.2.2.1. Develop software iteratively .................................................................................................. 35
5.2.2.2. Manage requirements ........................................................................................................... 35
5.2.2.3. Use component-based architecture ...................................................................................... 36
5.2.2.4. Visually model software ........................................................................................................ 36
5.2.2.5. Verify software quality........................................................................................................... 36
5.2.2.6. Control changes to software ................................................................................................. 36
5.3. CONCERT ........................................................................................................................................ 37
5.3.1. Basic Principles ................................................................................................................... 38
5.3.2. Concert Phases ................................................................................................................... 39
6. TESTING PROCEDURES ....................................................................................................................... 40
6.1. TESTING PROCEDURE FOR THE V-MODEL .......................................................................................... 41
6.2. TESTING PROCEDURE FOR RATIONAL UNIFIED PROCESS (RUP) ......................................................... 43
6.3. TEST MANAGEMENT ......................................................................................................................... 44
6.3.1. Test Strategy and Plan ........................................................................................................ 44
6.3.2. Management ........................................................................................................................ 44
6.3.3. Test Team Organization ...................................................................................................... 45
6.3.4. Test Metrics ......................................................................................................................... 45
6.3.5. Risk Assessment ................................................................................................................. 45
6.3.6. Defect Tracking and Severity Levels ................................................................................... 46
6.3.7. Entry/Exit Criteria ................................................................................................................. 46
6.4. TESTING ACTIVITIES ......................................................................................................................... 47
6.4.1. Prepare Overall Test Strategy ............................................................................................. 47
6.4.1.1. Tasks .................................................................................................................................... 47
6.4.1.2. Components ......................................................................................................................... 49
6.4.2. Prepare Overall Test Plan ................................................................................................... 50
6.4.2.1. Tasks .................................................................................................................................... 50
6.4.2.2. Components ......................................................................................................................... 53
6.4.3. Prepare Unit Test Plan ........................................................................................................ 53
6.4.3.1. Tasks .................................................................................................................................... 54
6.4.3.2. Components ......................................................................................................................... 56
6.4.4. Prepare Integration Test Plan .............................................................................................. 56
6.4.4.1. Tasks .................................................................................................................................... 57
6.4.4.2. Components ......................................................................................................................... 59
6.4.5. Prepare System Test Plan ................................................................................................... 59
6.4.5.1. Tasks .................................................................................................................................... 60
6.4.5.2. Components ......................................................................................................................... 62
6.4.6. Prepare Acceptance Test Plan ............................................................................................ 62
6.4.6.1. Tasks .................................................................................................................................... 62
6.4.6.2. Components ......................................................................................................................... 64
6.4.7. Perform Unit Tests ............................................................................................................... 65
6.4.7.1. Tasks .................................................................................................................................... 65
6.4.7.2. Components ......................................................................................................................... 68
6.4.8. Perform Integration Tests .................................................................................................... 68
6.4.8.1. Tasks .................................................................................................................................... 68
6.4.8.2. Components ......................................................................................................................... 72
6.4.9. Perform System Tests ......................................................................................................... 73
6.4.9.1. Tasks .................................................................................................................................... 73
6.4.9.2. Components ......................................................................................................................... 76
6.4.10. Perform Acceptance Tests ......................................................................... 76
6.4.10.1. Tasks .................................................................................................................................... 76
6.4.10.2. Components ......................................................................................................................... 79
7. GLOSSARY ............................................................................................................................................. 80
8. REFERENCES ........................................................................................................................................ 81
APPENDIX 1 – AN EXAMPLE FOR “FUNCTIONAL DECOMPOSITION METHOD” .................................. 84
APPENDIX 2 – AN EXAMPLE FOR “KEY-WORD DRIVEN METHOD” ...................................................... 86
APPENDIX 3 – CONCERT METHODOLOGY OVERVIEW ........................................................................... 88
APPENDIX 4 – TEST TEAM ROLES AND RESPONSIBILITIES.................................................................. 99
APPENDIX 5 – DOCUMENT TEMPLATES ................................................................................................. 101
TABLE OF FIGURES
Figure 1 - The V-Model ................................................................................................................................... 29
Figure 2 - Testing Phases ............................................................................................................................... 30
Figure 3 - The Rational Unified Process ......................................................................................................... 32
Figure 4 - CONCERT System Development and Related Process ................................................................ 38
Figure 5 - Testing Process in CONCERT ....................................................................................................... 40
Figure 6 - Testing Process in the V-Model ...................................................................................................... 41
Figure 7 - Testing in RUP ................................................................................................................................ 43
Figure 8 - Test Management Areas ................................................................................................................ 44
1. INTRODUCTION
1.1. Document Objective
This document provides CGI Portugal project team members with a complete test process
within a standard software development model. It also introduces a number of software
testing concepts.
1.2. Document Overview
The document is organized into the following sections:
Section 2 – Introduces several software testing concepts.
Section 3 – Compares manual and automated testing, helps decide whether or not to
automate testing, and describes how to do it.
Section 4 – Provides an overview of an effective test management process.
Section 5 – Presents the software development processes that form the basis for the
testing procedure.
Section 6 – Describes the testing procedure to use at CGI Portugal, based on the
processes introduced in the previous section.
Section 7 – A glossary for this document.
Section 8 – The references used in writing this document.
2. TESTING CONCEPTS
2.1. What is Software Testing?
Software Testing is a process used to help identify the correctness, completeness and quality of
developed computer software. Even so, testing can never completely establish the correctness of
arbitrary computer software: in computability theory, a field of computer science, a mathematical proof
shows that it is impossible to solve the halting problem, the question of whether an arbitrary computer
program will enter an infinite loop, or halt and produce output. In practice, testing amounts to criticism
or comparison: comparing the actual value with the expected one.
There are many approaches to software testing, but effective testing of complex products is essentially a
process of investigation, not merely a matter of creating and following a rote procedure. One definition of
testing is "the process of questioning a product in order to evaluate it", where the "questions" are things
the tester tries to do with the product, and the product answers with its behaviour in reaction to the
tester's probing. Although most of the intellectual processes of testing are nearly identical to those of
review or inspection, the word testing connotes the dynamic analysis of the product: putting
the product through its paces. The quality of the application can, and normally does, vary widely from
system to system, but some of the common quality attributes include reliability, stability, portability,
maintainability and usability.
Software Testing does not guarantee that errors do not occur. There are techniques that are highly
reliable at achieving zero defects, but they are also very expensive. One of these techniques is the
Cleanroom methodology, in which correctness verification replaces unit testing and debugging, and the
software enters system testing immediately after coding is complete; all test errors are accounted for
from the first execution of the program. Since such techniques have very high costs, it is more usual to
use risk-based testing, where functionalities are prioritized by their degree of importance, thus focusing
the whole testing effort on the most important modules.
2.2. Overview
In general, software engineers distinguish between software faults and software failures. In case of a
failure, the software does not do what the user expects. A fault is a programming error that may or may
not actually manifest as a failure; it can also be described as an error in the semantics of a computer
program. A fault becomes a failure when the exact computation conditions that trigger it are met; this
can happen, for example, when the software is ported to a different hardware platform or a different
compiler, or when the software gets extended.
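As a small, hypothetical illustration: the fault below is present in every build of the function, but it only becomes a failure when a particular input condition is met:

```python
def average(values):
    # Fault: the division is performed without checking for an empty list.
    return sum(values) / len(values)

print(average([2, 4, 6]))   # works: 4.0 -- the fault does not manifest
print(average([]))          # failure: ZeroDivisionError -- the triggering condition is met
```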
Regardless of the methods used or the level of formality involved, the desired result of testing is a level
of confidence that the software has an acceptable defect rate. What constitutes an acceptable defect
rate depends on the nature of the software: an arcade video game designed to simulate flying an
airplane would presumably have a much higher tolerance for defects than software used to control an
actual airliner.
A problem with software testing is that the number of defects in a software product can be very large,
and the number of configurations of the product larger still. Bugs that occur infrequently are difficult to
find in testing. A rule of thumb is that a system expected to function without faults for a certain
length of time must already have been tested for at least that length of time. This has severe
consequences for projects that aim to write long-lived, reliable software.
A common practice is for software testing to be performed by an independent group of testers after the
software product is finished and before it is shipped to the customer. This practice often results in the
testing phase being used as a project buffer to compensate for project delays. Another practice is to
start software testing at the same moment the project starts and continue it as an ongoing process until
the project finishes.
Another common practice is for test suites to be developed during technical support escalation
procedures. Such tests are then maintained in regression testing suites to ensure that future updates to
the software don't repeat any of the known mistakes.
It is commonly believed that the earlier a defect is found the cheaper it is to fix it.
2.3. Test Cases, Suites, Scripts, and Scenarios
Black box [1] testers usually write test cases for the majority of their testing activities. A test case is
usually a single step, and its expected result, along with various additional pieces of information. It can
occasionally be a series of steps but with one expected result or expected outcome. Optional fields
include a test case ID, test step or order-of-execution number, related requirement(s), depth, test
category, author, and check boxes for whether the test is automatable and has been automated. Larger
test cases may also contain prerequisite states or steps, and descriptions. A test case should also
contain a place for the actual result. These steps can be stored in a word processor document,
spreadsheet, database or other common repository. In a database system, you may also be able to see
past test results, who generated the results, and the system configuration used to generate those
results. These past results would usually be stored in a separate table.
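As a concrete illustration of these fields, a test case record might be modelled as follows (a sketch in Python; the field names are illustrative, not prescribed by this methodology):

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class TestCase:
    """One test case: a single step (or short series of steps) with one expected result."""
    case_id: str                      # e.g. "TC-0042"
    steps: List[str]                  # actions the tester performs, in order
    expected_result: str              # the single expected outcome
    related_requirements: List[str] = field(default_factory=list)
    category: str = "functional"
    author: str = ""
    automatable: bool = False
    automated: bool = False
    prerequisites: List[str] = field(default_factory=list)
    actual_result: Optional[str] = None   # filled in during execution
```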
The most common term for a collection of test cases is a test suite. The test suite often also contains
more detailed instructions or goals for each collection of test cases. It definitely contains a section where
the tester identifies the system configuration used during testing. A group of test cases may also contain
prerequisite states or steps, and descriptions of the following tests. Collections of test cases are
sometimes incorrectly termed a test plan.
Most white box [2] testers write and use test scripts in unit, system, and regression testing. A test script
is a short program, written in a programming language, used to test part of the functionality of a
software system.
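For example, a minimal test script using Python's standard unittest framework (the add function is a stand-in for real application code):

```python
import unittest

def add(a, b):
    """Function under test (a stand-in for real application code)."""
    return a + b

class AddTests(unittest.TestCase):
    def test_adds_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_adds_negatives(self):
        self.assertEqual(add(-1, -1), -2)

if __name__ == "__main__":
    unittest.main()
```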
A test scenario is a test based on a hypothetical story used to help a person think through a complex
problem or system. They can be as simple as a diagram for a testing environment or they could be a
description written in prose. The ideal scenario test has five key characteristics: it is (a) a story that is (b)
motivating, (c) credible, (d) complex, and (e) easy to evaluate. They are usually different from test cases
in that test cases are single steps and scenarios cover a number of steps. Test suites and scenarios can
be used in concert for complete system tests.
2.4. A Sample Testing Cycle
Although testing varies between organizations, there is a cycle to testing:
[1] Refer to Glossary for Black box testing.
[2] Refer to Glossary for White box testing.
1. Requirements Analysis: Testing should begin in the requirements phase of the software
development life cycle (SDLC).
2. Design Analysis: During the design phase, testers work with developers to determine what
aspects of a design are testable and under what parameters those tests will operate.
3. Test Planning: Test Strategy, Test Plan(s), Test Bed creation.
4. Test Development: Test Procedures, Test Scenarios, Test Cases, and Test Scripts to use in
testing software.
5. Test Execution: Testers execute the software based on the plans and tests and report any
errors found to the development team.
6. Test Reporting: Once testing is completed, testers generate metrics and make final reports on
their test effort and whether or not the software tested is ready for release.
7. Retesting the Defects
Not all errors or defects reported must be fixed by a software development team. Some may be caused
by errors in configuring the test software to match the development or production environment. Some
defects can be handled by a workaround in the production environment. Others might be deferred to
future releases of the software, or the deficiency might be accepted by the business user. There are yet
other defects that may be rejected by the development team (of course, with due reason) if they deem it
inappropriate to be called a defect.
2.5. Test Types
There are several test types:
- Functional Tests
- Regression Tests
- Performance Tests, and their subsets:
  o Load (or Volume) Tests
  o Stress Tests
- Security Tests
- User Acceptance Tests
- Assembly Tests
- Operational Acceptance Tests, and their subsets:
  o Scheduling Tests
  o Recovery/Fail Over Tests
- Installation/Deployment Tests
From these types, the most relevant for this document are Functional, Regression, and Performance
Tests.
2.5.1. Functional Tests
System functional testing, or Functional Testing, is a form of software testing that attempts to
determine whether each function of the system works as specified.
2.5.2. Regression Tests
Regression testing is any type of software testing which seeks to uncover regression bugs.
Regression bugs occur whenever software functionality that previously worked as desired stops
working or no longer works in the same way that was previously planned. Typically regression bugs
occur as an unintended consequence of program changes.
Common methods of regression testing include re-running all previously run tests of all types and
checking whether accepted functionality or previously-fixed faults have re-emerged.
Experience has shown that as software is developed, this kind of re-emergence of faults is quite
common. Sometimes it occurs because a fix gets lost through poor revision control practices (or
simple human error in revision control), but just as often a fix for a problem will be "fragile" - if some
other change is made to the program, the fix no longer works. Finally, it has often been the case that
when some feature is redesigned, the same mistakes that were made in the original implementation
of the feature will be made in the redesign.
Therefore, in most software development situations it is considered good practice that when a bug is
located and fixed, a test that exposes the bug is recorded and regularly re-run after subsequent
changes to the program. Although this may be done through manual testing procedures, it is often
done using automated testing tools: typically a 'test suite', software that allows the testing environment
to execute all the regression test cases automatically. Some projects even set up automated systems
to re-run all regression tests at specified intervals and report any regressions. Common strategies are
to run such a system after every successful compile (for small projects), every night, or once a week.
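As an illustration only (the tests/ directory and the use of Python's unittest are assumptions, not part of this methodology), a small driver can re-run an entire regression suite and fail loudly on any regression; a build server or scheduler can then invoke it after every compile, nightly, or weekly:

```python
import sys
import unittest

def run_regression_suite(start_dir="tests"):
    """Discover and run every regression test below start_dir;
    return a non-zero exit code if any regression is detected."""
    suite = unittest.defaultTestLoader.discover(start_dir)
    result = unittest.TextTestRunner(verbosity=1).run(suite)
    return 0 if result.wasSuccessful() else 1

if __name__ == "__main__":
    sys.exit(run_regression_suite())
```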
2.5.3. Performance Tests
In software engineering, performance testing is a test type that is performed to determine how fast
some aspect of a system performs under a particular workload.
Performance testing can serve different purposes:
- It can demonstrate that the system meets performance criteria.
- It can compare two systems to find which performs better.
- It can measure what parts of the system or workload cause the system to perform badly.
In the diagnostic case, software engineers use tools such as profilers to measure what parts of a
device or software contribute most to the poor performance, or to establish throughput levels (and
thresholds) for maintaining acceptable response time.
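For instance, Python's built-in cProfile profiler can show which call sites consume the most time (the workload function is a stand-in for real application code):

```python
import cProfile
import pstats

def workload():
    # Stand-in for the code path whose performance is being diagnosed.
    return sum(i * i for i in range(1_000_000))

cProfile.run("workload()", "profile.out")
stats = pstats.Stats("profile.out")
stats.sort_stats("cumulative").print_stats(10)   # the ten most expensive call sites
```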
In performance testing, it is often crucial (and often difficult to arrange) for the test conditions to be
similar to the expected actual use.
Technology
For example, in CRM systems, performance testing technology employs one or more PCs to act as
injectors – each emulating the presence or numbers of users and each running an automated
sequence of interactions (recorded as a script, or as a series of scripts to emulate different types of
user interaction) with the host whose performance is being tested. Usually, a separate PC acts as a
test conductor, coordinating and gathering metrics from each of the injectors and collating
performance data for reporting purposes. The usual sequence is to ramp up the load – starting with
a small number of virtual users and increasing the number over a period to some maximum. The
test result shows how the performance varies with the load, given as number of users vs. response
time. Various tools are available to perform such tests. Tools in this category usually execute a suite
of tests which will emulate real users against the system. Sometimes the results can reveal oddities,
e.g., that while the average response time might be acceptable, there are outliers of a few key
transactions that take considerably longer to complete – something that might be caused by
inefficient database queries, etc.
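A minimal sketch of the injector idea, using only the Python standard library (the target URL, user counts, and ramp timings are all illustrative): virtual users are threads, the load is ramped up in batches, and response times are collected for later reporting:

```python
import threading
import time
import urllib.request

TARGET = "http://localhost:8080/"    # hypothetical system under test
results = []                         # (virtual_user_id, response_seconds)
lock = threading.Lock()

def virtual_user(user_id, requests_per_user=10):
    """Emulate one user running a fixed sequence of interactions."""
    for _ in range(requests_per_user):
        start = time.monotonic()
        try:
            urllib.request.urlopen(TARGET, timeout=10).read()
            elapsed = time.monotonic() - start
        except OSError:
            elapsed = float("inf")   # record the failure as an unusable response time
        with lock:
            results.append((user_id, elapsed))

def ramp_up(max_users=50, step=10, pause=5.0):
    """Start virtual users in batches of `step`, holding each load level for `pause` seconds."""
    threads = []
    while len(threads) < max_users:
        for _ in range(step):
            t = threading.Thread(target=virtual_user, args=(len(threads),))
            t.start()
            threads.append(t)
        time.sleep(pause)
    for t in threads:
        t.join()

if __name__ == "__main__":
    ramp_up()
    ok = [e for _, e in results if e != float("inf")]
    if ok:
        print(f"{len(ok)}/{len(results)} requests succeeded; mean response {sum(ok) / len(ok):.3f}s")
    else:
        print("no successful requests")
```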
Performance testing can be combined with stress testing, in order to see what happens when an
acceptable load is exceeded – does the system crash?
Performance specifications
Performance testing is frequently not performed against a specification; i.e., no one will have
expressed the maximum acceptable response time for a given population of users. However,
performance testing is frequently used as part of the process of performance profile tuning. The idea
is to identify the “weakest link” – there is inevitably a part of the system which, if it is made to
respond faster, will result in the overall system running faster. It is sometimes a difficult task to
identify which part of the system represents this critical path, and some test tools come provided
with (or can have add-ons that provide) instrumentation that runs on the server and reports
transaction times, database access times, network overhead, etc. which can be analyzed together
with the raw performance statistics. Without such instrumentation one might have to have someone
crouched, for example, over Windows Task Manager at the server to see how much CPU load the
performance tests are generating. There is an apocryphal story of a company that spent a large
amount optimizing their software without having performed a proper analysis of the problem. They
ended up rewriting the system’s ‘idle loop’, where they had found the system spent most of its time,
but even having the most efficient idle loop in the world obviously didn’t improve overall performance
one iota!
Performance testing almost invariably identifies that it is parts of the software (rather than hardware)
that contribute most to delays in processing users’ requests.
Performance testing can be performed across the web, and even in different parts of the
country, since it is known that the response times of the internet itself vary regionally. It can also be
done in-house, although routers would then need to be configured to introduce the lag that would
typically occur on public networks.
It is always helpful to have a statement of the likely peak number of users that might be expected to
use the system at peak times. If there can also be a statement of what constitutes the maximum
allowable 95th percentile response time, then an injector configuration could be used to test whether
the proposed system meets that specification.
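Checking such a criterion against collected response times is then straightforward arithmetic; a sketch (the 0.5-second target is illustrative):

```python
import math

def percentile(samples, pct):
    """Nearest-rank percentile of a non-empty list of numbers."""
    ordered = sorted(samples)
    rank = math.ceil(pct / 100 * len(ordered))
    return ordered[rank - 1]

response_times = [0.12, 0.34, 0.29, 0.41, 0.95, 0.22, 0.38, 0.31, 0.27, 0.52]
p95 = percentile(response_times, 95)
print(f"95th percentile: {p95:.2f}s -> {'PASS' if p95 <= 0.5 else 'FAIL'} against a 0.5s target")
```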
Tasks to undertake
Tasks to perform such a test would include:
- analysis of the types of interaction that should be emulated, and the production of scripts to do those emulations
- decision whether to use internal or external resources to perform the tests
- set-up of a configuration of injectors/controller
- set-up of the test configuration (ideally identical hardware to the production platform), router configuration, quiet network (we don't want results upset by other users), deployment of server instrumentation
- running the tests, probably repeatedly, in order to see whether any unaccounted-for factor might affect the results
- analyzing the results: either pass/fail, or investigation of the critical path and recommendation of corrective action
2.5.3.1. Load Tests
Load testing is the act of testing a system under load.
In software engineering it is a blanket term that is used in many different ways across the
professional software testing community.
Load testing generally refers to the practice of modelling the expected usage of a software
program by simulating multiple users or instances accessing the program's services
concurrently. As such, this testing is most relevant for multi-user systems, often built using a
client/server model, such as web servers. However, other types of software systems can be
load-tested also. For example, a word processor or graphics editor can be forced to read an
extremely large document; or a financial package can be forced to generate a report based on
several years' worth of data.
When the load placed on the system is raised beyond normal usage patterns in order to test the
system's response at unusually high or peak loads, it is known as stress testing. The load is
usually so great that error conditions are the expected result, although there is a gray area
between the two domains and no clear boundary exists for when an activity ceases to be a load
test and becomes a stress test.
There is little agreement on what the specific goals of load testing are. The term is often used
synonymously with performance testing, reliability testing, and volume testing.
2.5.3.2. Stress Tests
Stress testing is a form of testing that is used to determine the stability of a given system or
entity. It involves testing beyond normal operational capacity, often to a breaking point, in order
to observe the results. For example, a web server may be stress tested using scripts, bots, and
various denial-of-service tools to observe the performance of a web site during peak loads.
3. MANUAL VS AUTOMATED TESTING [3]
3.1. Automated Testing Methodologies
3.1.1. What is “Automated Testing”?
Simply put, what is meant by “Automated Testing” is automating the manual testing process
currently in use. This requires that a formalized “manual testing process” currently exists in the
company or organization. Minimally, such a process includes:
1. Detailed test cases, including predictable “expected results”, which have been developed
from Business Functional Specifications and Design documentation
2. A standalone Test Environment, including a Test Database that is restorable to a known
constant, such that the test cases are able to be repeated each time there are modifications
made to the application
If the current testing process does not include the above points, one will never be able to make any
effective use of an automated test tool.
The real use and purpose of automated test tools is to automate regression testing. This means that
one must have or must develop a database of detailed test cases that are repeatable, and this suite
of tests is run every time there is a change to the application to ensure that the change does not
produce unintended consequences.
An “automated test script” is a program. Automated script development, to be effective, must be
subject to the same rules and standards that are applied to software development.
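To make the "restorable to a known constant" requirement concrete, a regression run typically resets the test database to its baseline before executing the suite; a sketch using SQLite and a baseline copy (both file names are hypothetical, and baseline.db is assumed to exist):

```python
import shutil
import sqlite3
import unittest

BASELINE_DB = "baseline.db"   # the known-constant snapshot (assumed to exist)
WORKING_DB = "test_run.db"    # the copy that each test run may modify

def restore_baseline():
    """Reset the working database to the known baseline before a test run."""
    shutil.copyfile(BASELINE_DB, WORKING_DB)

class PaymentTests(unittest.TestCase):
    def setUp(self):
        restore_baseline()                      # every test starts from the same state
        self.conn = sqlite3.connect(WORKING_DB)

    def tearDown(self):
        self.conn.close()
```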
3.1.2. Cost-Effective Automated Testing
Automated testing is expensive. It does not replace the need for manual testing or enable one to
“down-size” the testing department. Automated testing is an addition to the testing process. It can
take between 3 and 10 times as long (or longer) to develop, verify, and document an automated test
case than to create and execute a manual one. This is especially true if one chooses to use the
“record/playback” feature (contained in most test tools) as the primary automated testing methodology.
Record/Playback is the least cost-effective method of automating test cases.
Automated testing can be made to be cost-effective, however, if some common sense is applied to
the process:
- Choose a test tool that best fits the testing requirements of the organization or company.
- Realize that it doesn’t make sense to automate some tests. Overly complex tests are often more trouble than they are worth to automate. Concentrate on automating the majority of your tests, which are probably fairly straightforward. Leave the overly complex tests for manual testing.
- Only automate tests that are going to be repeated. One-time tests are not worth automating.
- Avoid using “Record/Playback” as a method of automating testing. This method is fraught with problems, and is the most costly (time consuming) of all methods over the long term.
- Adopt a data-driven automated testing methodology. This allows developing automated test scripts that are more “generic”, requiring only that the input and expected results be updated.

[3] From Totally Data-Driven Automated Testing – A White Paper by Keith Zambelich.

3.1.3. The Record/Playback Myth
Every automated tool vendor will tell you that their tool is “easy to use” and that non-technical user-type
testers can easily automate all of their tests by simply recording their actions and then playing back
the recorded scripts. This claim is probably more responsible than anything else for the majority of
automated test tool software that is gathering dust on shelves in companies around the world.
Here’s why it doesn’t work:
- The scripts resulting from this method contain hard-coded values which must change if anything at all changes in the application.
- The costs associated with maintaining such scripts are astronomical, and unacceptable.
- These scripts are not reliable, even if the application has not changed, and often fail on replay (pop-up windows, messages, and other things can happen that did not happen when the test was recorded).
- If the tester makes an error entering data, etc., the test must be re-recorded.
- If the application changes, the test must be re-recorded.
- All that is tested are things that already work. Areas that have errors are encountered in the recording process (which is manual testing, after all). These bugs are reported, but a script cannot be recorded until the software is corrected. So what is being tested?
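The hard-coded-values problem is easiest to see side by side. In the sketch below (the app object stands for a hypothetical UI-driver API; none of these names come from a real tool), the recorded-style script freezes its data into the code, while the data-driven version reads inputs and expected results from an external file:

```python
import csv

# Recorded-style script: the values are frozen into the code, so any change to
# the account, the password, or the expected balance forces a re-recording.
def recorded_test(app):
    app.login("jsmith", "s3cret")
    app.open_account("12345")
    assert app.balance() == "1,000.00"

# Data-driven equivalent: the same script survives data changes, because the
# inputs and expected results live in an external, easily edited file.
def data_driven_test(app, data_file="login_cases.csv"):
    with open(data_file, newline="") as f:
        for row in csv.DictReader(f):
            app.login(row["user"], row["password"])
            app.open_account(row["account"])
            assert app.balance() == row["expected_balance"]
```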
3.1.4. Viable Automated Testing Methodologies
The following are effective methodologies for automating functional or system testing for most
business applications:
3.1.4.1. The “Functional Decomposition” Method
The main concept behind the “Functional Decomposition” script development methodology is to
reduce all test cases to their most fundamental tasks, and write User-Defined Functions,
Business Function Scripts, and “Sub-routine” or “Utility” Scripts which perform these tasks
independently of one another. In general, these fundamental areas include:
1. Navigation (e.g. “Access Payment Screen from Main Menu”)
2. Specific (Business) Function (e.g. “Post a Payment”)
3. Data Verification (e.g. “Verify Payment Updates Current Balance”)
4. Return Navigation (e.g. “Return to Main Menu”)
In order to accomplish this, it is necessary to separate Data from Function. This allows an
automated test script to be written for a Business Function, using data-files to provide both the
input and the expected-results verification. A hierarchical architecture is employed, using a
structured or modular design.
The highest level is the Driver script, which is the engine of the test. The Driver Script contains a
series of calls to one or more “Test Case” scripts. The “Test Case” scripts contain the test case
logic, calling the Business Function scripts necessary to perform the application testing. Utility
scripts and functions are called as needed by Drivers, Main, and Business Function scripts.
- Driver Scripts: perform initialization (if required), then call the Test Case Scripts in the desired order.
- Test Case Scripts: perform the application test case logic, using Business Function Scripts.
- Business Function Scripts: perform specific Business Functions within the application.
- Subroutine Scripts: perform application-specific tasks required by two or more Business scripts.
- User-Defined Functions: general, application-specific, and screen-access functions. Functions can be called from any of the above script types.
An example of this can be found in Appendix 1.
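Appendix 1 contains the worked example; as a quick orientation, the layering might look like the following sketch (the app driver API, function names, and data file are illustrative only, not taken from Appendix 1):

```python
# Business Function script: one fundamental business task, driven entirely by data.
# The `app` object stands in for the test tool's API; its methods are hypothetical.
def post_payment(app, payment):
    """Post a single payment; returns True/False instead of aborting."""
    if not app.navigate("Main Menu", "Payment Screen"):              # navigation
        return False
    app.enter({"account": payment["account"], "amount": payment["amount"]})
    ok = app.verify("current_balance", payment["expected_balance"])  # data verification
    app.navigate("Payment Screen", "Main Menu")                      # return navigation
    return ok

# Test Case script: test case logic, built from Business Function scripts.
def test_case_post_payments(app, data_rows):
    return all(post_payment(app, row) for row in data_rows)

# Driver script: performs initialization, then calls Test Case scripts in order.
def driver(app, load_rows):
    return {"post_payments": test_case_post_payments(app, load_rows("payments.dat"))}
```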
3.1.4.1.1. Advantages
- Using a modular design, and files or records to both input and verify data, reduces redundancy and duplication of effort in creating automated test scripts.
- Scripts may be developed while application development is still in progress. If functionality changes, only the specific “Business Function” script needs to be updated.
- Since scripts are written to perform and test individual Business Functions, they can easily be combined in a “higher level” test script in order to accommodate complex test scenarios.
- Data input/output and expected results are stored as easily maintainable text records. The user’s expected results are used for verification, which is a requirement for System Testing.
- Functions return “TRUE” or “FALSE” values to the calling script, rather than aborting, allowing for more effective error handling and increasing the robustness of the test scripts. This, along with a well-designed “recovery” routine, enables “unattended” execution of test scripts.
3.1.4.1.2. Disadvantages
- Requires proficiency in the scripting language used by the tool (technical personnel).
- Multiple data-files are required for each Test Case. There may be any number of data-inputs and verifications required, depending on how many different screens are accessed. This usually requires data-files to be kept in separate directories by Test Case.
- The tester must not only maintain the Detail Test Plan with specific data, but must also re-enter this data in the various required data-files.
- If a simple text editor such as Notepad is used to create and maintain the data-files, careful attention must be paid to the format required by the scripts/functions that process the files, or script-processing errors will occur due to the data-file format and/or content being incorrect.
3.1.4.2. The “Key-Word Driven” or “Test Plan Driven” Method
This method uses the actual Test Case document developed by the tester using a spreadsheet
containing special “Key-Words”. This method preserves most of the advantages of the
“Functional Decomposition” method, while eliminating most of the disadvantages. In this
method, the entire process is data-driven, including functionality. The Key Words control the
processing.
An example can be found in Appendix 2.
The architecture of the “Test Plan Driven” method appears similar to that of the “Functional
Decomposition” method, but in fact, they are substantially different:
- Driver Script
  o Performs initialization, if required;
  o Calls the Application-Specific “Controller” Script, passing to it the file-names of the Test Cases (which have been saved from the spreadsheets as tab-delimited files);
- The “Controller” Script
  o Reads and processes the file-name received from the Driver;
  o Matches on “Key Words” contained in the input file;
  o Builds a parameter-list from the records that follow;
  o Calls “Utility” scripts associated with the “Key Words”, passing the created parameter-list;
- Utility Scripts
  o Process the input parameter-list received from the “Controller” script;
  o Perform specific tasks (e.g. press a key or button, enter data, verify data, etc.), calling “User Defined Functions” if required;
  o Report any errors to a Test Report for the test case;
  o Return to the “Controller” script;
- User Defined Functions
  o General and Application-Specific functions may be called by any of the above script types in order to perform specific tasks.
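Appendix 2 contains the worked example; the dispatch idea itself can be sketched compactly as follows, assuming a tab-delimited test case file whose first column holds the key word (the key words, columns, and utility functions here are illustrative):

```python
import csv

# Utility scripts: each performs one specific task. These three key words
# ("press", "enter", "verify") are illustrative, not a standard vocabulary.
def press_button(params):
    print("pressing:", params)

def enter_data(params):
    print("entering:", params)

def verify_data(params):
    print("verifying:", params)

KEYWORDS = {"press": press_button, "enter": enter_data, "verify": verify_data}

def controller(test_case_file):
    """Read a tab-delimited test case file and dispatch each row on its key word."""
    with open(test_case_file, newline="") as f:
        for row in csv.reader(f, delimiter="\t"):
            if not row or row[0].startswith("#"):
                continue                                       # skip blank and comment rows
            keyword, params = row[0].strip().lower(), row[1:]
            action = KEYWORDS.get(keyword)
            if action is None:
                print(f"ERROR: unknown key word {keyword!r}")  # report to the test log
            else:
                action(params)

def driver(test_case_files):
    """Driver script: initialization would go here; then each saved
    spreadsheet (tab-delimited file) is run in the desired order."""
    for name in test_case_files:
        controller(name)
```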
3.1.4.2.1. Advantages
This method has all the advantages of the “Functional Decomposition” method, as well as
the following:
- The Detail Test Plan can be written in spreadsheet format containing all input and verification data. The tester therefore only needs to write this once, rather than, for example, writing it in Word and then creating input and verification files as the “Functional Decomposition” method requires.
- The Test Plan does not necessarily have to be written in MS Excel. Any format can be used from which either tab-delimited or comma-delimited files can be saved (e.g. an Access database, etc.).
- If “utility” scripts can be created by someone proficient in the automated tool’s scripting language before the Detail Test Plan is written, then the tester can use the Automated Test Tool immediately via the spreadsheet-input method, without needing to learn the scripting language.
- If Detailed Test Cases already exist in some other format, it is not difficult to translate them into the spreadsheet format.
- After a number of “generic” Utility scripts have been created for testing one application, one can usually re-use most of them to test another application.
3.1.4.2.2. Disadvantages
- Development of “customized” (Application-Specific) Functions and Utilities requires proficiency in the tool’s scripting language. Note that this is also true of the “Functional Decomposition” method, and, frankly, of any method used, including “Record/Playback”.
- If the application requires more than a few “customized” Utilities, the tester will have to learn a number of “Key Words” and special formats. This can be time-consuming, and may have an initial impact on Test Plan development.
3.1.5. Preparation is the Key
If adequate preparations have not been made, the “ramp-up” time required increases dramatically.
What, then, does an organization do to prepare itself for this effort?
1. An adequate Test Environment must exist that accurately replicates the Production
Environment.
2. The Test Environment’s database must be able to be restored to a known baseline; otherwise
tests performed against this database will not be repeatable, as the data will have been altered.
3. Part of the Test Environment includes hardware. The automated scripts must have
dedicated PCs on which to run. If one is developing scripts, then these scripts themselves
must be tested to ensure that they work properly. If the company is purchasing, say, 5
licenses for the test tool, then it would be prudent to assign at least 3 PCs to run the
automated scripts on. The other 2 can be used for script development (assuming 2
developers). Since test tool software requires a lot of processing speed and memory to run
correctly, it is imperative to get the biggest, fastest machines affordable to run this software.
4. Detailed Test Cases that are able to be converted to an automated format must exist. The
test-tool is not a thinking entity. One must tell it exactly what to do, how to do it, and when to
do it. Data to be entered and verified must be specific data, not “post a payment to an
account, and verify the results”.
5. The person or persons who are going to be developing and maintaining the automated
scripts must be recruited within the company (or in the worst case, hired) and trained.
3.1.6. Managing Resistance to Change
One of the main reasons organizations fail at implementing automated testing (apart from getting
mired in the “record/playback” quagmire) is that most testers do not welcome what they perceive as a
fundamental change to the way they will have to approach their jobs. Let’s examine some concerns
that might be expressed by testers, and some answers to them:
- The tool is going to replace the testers.
  This is not even remotely true. The automated testing tool is just another tool that will allow testers to do their jobs better. The testers will still have to perform tests manually for specific application changes.
- It will take too long to train all of the testers to use the tool.
  If the “test-plan-driven” method (described above) is used, the testers will not have to learn how to use the tool at all if they don’t want to. All they have to learn is a different method of documenting the detailed test cases, using the key-word/spreadsheet format.
- The tool will be too difficult for testers to use.
  Perhaps, but as we have already discussed, they will not have to use it. What will be required is a “Test Tool Specialist”, hired and trained to use the tool.
The “test-plan-driven” testing method will eliminate most of the testers’ concerns regarding
automated testing. They will perform their jobs exactly as they do now. They will only need to learn a
different method of documenting their test cases.
3.1.7. Staffing Requirements
One area that organizations desiring to automate testing consistently seem to miss is the staffing
issue.
A “Test Tool Specialist” or “Automated Testing Engineer” or some such position must be created
and staffed with at least one senior-level programmer. This person must be capable of designing,
developing, testing, debugging, and documenting code. This person must want to do this job: most
programmers want nothing to do with the Testing Department. This is not going to be easy, but it is
nonetheless absolutely critical. In addition to developing automated scripts and functions, this
person must be responsible for:
• Developing standards and procedures for automated script/function development
• Developing change-management procedures for automated script/function implementation
• Developing the details and infrastructure for a data-driven testing method
• Testing, implementing, and managing the test scripts (spreadsheets) written by the testers
• Running the automated tests, and providing the testers with the results
It is worth noting that no special “status” should be granted to the automation tester(s). The non-technical testers are just as important to the process, and favouritism toward one or the other is counter-productive and should be avoided.
3.1.8. Summary
• Establish clear and reasonable expectations as to what can and what cannot be accomplished with automated testing in your organization.
  o Educate yourself on the subject of automated testing.
  o Establish what percentage of your tests are good candidates for automation; eliminate overly complex or one-of-a-kind tests as candidates.
• Get a clear understanding of the requirements which must be met in order to be successful with automated testing.
  o Technical personnel are required to use the tool effectively.
  o An effective manual testing process must exist before automation is possible. You should have:
    - Detailed, repeatable test cases, which contain exact expected results.
    - A standalone test environment with a restorable database.
  o You are probably going to require short-term assistance from an outside company which specializes in setting up automated testing, or from a contractor experienced in the test tool being used.
• Adopt a viable, cost-effective methodology.
  o Record/playback is too costly to maintain and is ineffective in the long term.
  o The functional decomposition method is workable, but is not as cost-effective as a totally data-driven method.
  o The test-plan-driven method is more cost-effective.
• Select a tool that will allow you to implement automated testing in a way that conforms to your long-term testing strategy. Make sure the vendor can provide training and support.
4. TEST MANAGEMENT
The role of testing in the application lifecycle is expanding.
More and more organizations are adopting processes in which testing takes place in parallel with application development, starting as soon as the project commences.
Testing requires a methodical building-block approach that includes requirements definition, planning,
design, execution, and analysis – to ensure full coverage, consistency, and reusability of testing assets.
Based on the project objectives, test managers can build a master test plan that will communicate the testing
strategy, priorities, and objectives to the rest of the team. With the master plan in place, testers can then
define test requirements and specific testing goals. Requirements should define exactly what needs to be
tested and which objectives should be met – such as performance goals.
The aim of a well-designed test management process is to create one central point of control that is
accessible to all members of the testing team.
4.1. What is Test Management?
Test Management is a method of organizing application test assets and artefacts – such as test
requirements, test plans, test documentation, test scripts, and test results – to enable easy accessibility
and reusability. Its aim is to deliver quality applications in less time.
4.2. Principles
Most organizations don’t have a standard process for organizing, managing, and documenting their
testing efforts. Often, testing is conducted as an ad-hoc activity, which changes with every new project.
Without a standard foundation for test planning, execution, and defect tracking, testing efforts are non-repeatable, non-reusable, and difficult to measure.
Properly designed test management processes can enable an organization to:
• Conduct testing processes better, faster, and cheaper
Testing without following any planning or design standards can result in the creation of tests,
designs and plans that are not repeatable and therefore unable to be reused for future iterations
of the test.
• Perform daily builds and smoke tests
A well-defined, systematic approach to testing and a centralized repository for tests, plans, and execution results can significantly increase the accuracy of smoke tests and add value to having frequent builds. Smoke tests generally consist of a suite of tests that can be applied to a newly created build (or, even better, be performed automatically by the build system). They should reveal fundamental failures or anything severe enough for testers to reject the release (a minimal smoke-test sketch follows this list).
• Manage changing requirements
Complete requirements-based testing is the best way to ensure that the finished system meets
the user’s needs. Without a test management process that ties test plans to application
functional requirements and allows organizations to track requirements changes to test cases
(and vice versa), it is nearly impossible to design tests to verify that the system contains
specified functionality.
• Implement global testing
Cost considerations, higher availability of qualified testers, and better retention of skilled employees lead companies to turn to a distributed testing model. Without a clearly defined
testing process and an easy-to-use, intuitive collaboration method, any attempt to set up a
geographically distributed “virtual” testing team will likely bring more problems than benefits.
• Organize many tests, many projects
Effective testing requires a systematic process. When a testing team is handling multiple
projects – each with its own cycles and priorities – it is easy to lose focus. To keep track of
multiple test cases, testers need a process that will allow them to manage multiple projects and
clearly define the objectives of each one.
• Conduct more than bug tracking
There is much more to the test process than recording bugs and passing them to Research &
Development (R&D). Testing today focuses on verifying that the application’s design and
functionality meet the business constraints and user requirements. To successfully achieve
these goals, a testing process requires clearly defined system specifications and application
business rules.
• Test early in the lifecycle
Testing no longer fits between the end of development and the beginning of implementation.
Early problem detection (which is also up to 10 times cheaper than finding issues at the end of
the lifecycle) requires that testing and development begin simultaneously. However, for testing
to be implemented earlier in the lifecycle, testers need a clearly defined set of goals and
priorities to help them better understand system design, requirements, and testing objectives.
Only a well-defined, structured, and intuitive test management process can help ensure that the
testing process meets its goals and contributes to enhancing application quality.
• Provide organizational visibility into quality status
With the increase in both application complexity and importance, quality has become of interest not only to the team or teams directly involved in application delivery, but to the entire organisation. Users, managers, executives, and even external customers all want visibility into
the status of quality. These divergent groups need a single, easily accessible, easily digestible
summary of information they can use to learn about quality status, and as a guide to necessary
action.
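As a concrete illustration of the smoke tests mentioned in the list above, the following is a minimal sketch of a build-verification suite, runnable with pytest. The base URL and endpoints are invented placeholders, not part of this methodology or of any specific product.

    # Sketch of a smoke ("build verification") suite; run with pytest.
    import urllib.request

    BASE_URL = "http://localhost:8080"  # assumed address of the freshly built system

    def http_status(path):
        with urllib.request.urlopen(BASE_URL + path, timeout=5) as response:
            return response.status

    def test_application_serves_health_page():
        # A fundamental failure: the new build does not even start up.
        assert http_status("/health") == 200

    def test_login_page_available():
        # Severe enough to reject the release if it fails.
        assert http_status("/login") == 200

If any of these fail against a new build, testers can reject the release without spending further effort on it.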
4.3. Improving the process
No matter what the system does, how it is written, or what platform it is running on, the basic principles
of test management are the same. It begins with gathering and documenting the testing requirements
and continues through designing and developing tests, running the tests – both manual and automated,
functional and load – and analyzing application defects. The testing process is not linear, and naturally it
differs depending on each organization’s practices and methodologies. The underlying principles of
every testing process, however, are the same.
4.4. Requirements Management
The testing process begins with defining clear, complete requirements that can be tested. Requirements
development and management play a critical role in both the development and testing of software
applications.
All too often, requirements are neglected in the testing effort, leading to a chaotic process in which testers test what they can and accept that certain functionality will not be verified. Quality can be achieved only by testing the system against its requirements. Although unit testing of individual components may be sufficient during the development process, only testing the entire system against its requirements will ensure that the application functions as expected.
Of the many types of requirements, functional and performance requirements are the ones that most
often apply to the testing process. Functional requirements help validate that the system contains
specified functionality and should be delivered directly from the use cases that developers use to build
the system.
Performance requirements cover any performance standards and specifications identified by the project
team, such as transaction response times or maximum numbers of users. These requirements – also
referred to as service-level objectives (SLOs) – are aimed at ensuring that the system can scale to the
expected number of users and provide a positive user experience.
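For illustration, performance requirements of this kind might be captured as data so that measured results can be checked against them automatically. This is only a sketch; the transaction names, percentiles, and thresholds are invented examples, not requirements from any real project.

    # Hypothetical service-level objectives (SLOs), expressed as data.
    SLOS = {
        "login":        {"percentile": 95, "max_response_s": 2.0},
        "post_payment": {"percentile": 95, "max_response_s": 4.0},
    }

    def meets_slo(transaction, measured_response_s):
        # True if the measured percentile response time is within the SLO.
        return measured_response_s <= SLOS[transaction]["max_response_s"]

    assert meets_slo("login", 1.8)      # 1.8 s at the 95th percentile: pass
    assert not meets_slo("login", 2.3)  # 2.3 s: the requirement is not met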
Both functional and performance requirements are designed to give the testing team a clear, concise,
and functional blueprint with which to develop test cases. It is impossible to test everything.
Requirements-based testing is one way to help prioritize the testing effort.
Requirements can be grouped according to how critical they are to mission success. The ones that
affect the core functionality of the system must be extensively tested, while less critical requirements can be covered by minimal testing effort or be tested later in the lifecycle.
The key to selecting an appropriate tool is its functionality. Commercial tools designed for requirements management are a better choice for organizations that want to create a solid, flexible, requirements-based testing process.
Requirements-based testing will help keep the testing effort on track – even when priorities are shifting,
resources become tight, or time for testing runs out. Requirements-based testing is the best way to
measure quality against the end-user needs.
4.5. Planning and Running Tests
4.5.1. Planning Tests
Comprehensive planning is critical, even with short testing cycles. Planning tests and designing
applications simultaneously will ensure a complete set of tests that covers each function the system
is designed to perform. If test planning is not addressed until later in the application lifecycle, much
of the design knowledge will be lost and testers will have to return to the analysis stage in order to
try to recreate what has already been done before.
The planning phase is used to define which tests need to be performed, how these tests must be
executed, and which risks and contingencies require planning.
The following section lists some of the steps involved in the test-planning phase:
1. Set the Ground Rules
This stage can be used for setting the ground rules for keeping test logs and documentation,
assigning roles within the team, agreeing on the naming convention for tests and defects, and
defining the procedure for tracking progress against the project goals.
2. Set Up the Test Environment
A test environment needs to be set up to support all testing activities and, where possible, all testing phases.
3. Define Test Procedures
During the planning stage, testers determine if tests are going to be manual or automated. If the
test is automated, testers need to understand which tools will be used for automation and
estimate what techniques and skills will be needed to effectively use these tools.
4. Develop Test Design
Test design may be represented as a sequence of steps that need to be performed to execute a test (in the case of a manual test), or as a collection of algorithms and code to support more sophisticated, complex tests. During test planning, testers create a detailed description of each test, as well as identify what kind of data or ranges of data are required for the tests (a minimal sketch of such a test-design record follows this list).
5. Map Test Data
The testing team must understand what types of data need to be obtained to support each type
of test, and how this data could be obtained or generated.
6. Design Test Architecture
Test architecture helps plan for data dependencies, maps the workflow between tests, and
identifies common scripts that can potentially be reused for future testing.
7. Communicate the Test Plan
This way, more people in the organization will have visibility into the project and can add their
input, questions, or comments before the actual testing begins.
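As referenced in step 4, the following is a minimal sketch of how a test-design record – an ordered sequence of steps, each with its input data and expected result – might be represented. The field names and sample values are illustrative assumptions.

    # A minimal test-design record (Python 3.9+ for list[TestStep]).
    from dataclasses import dataclass, field

    @dataclass
    class TestStep:
        action: str          # what the tester (or script) does
        data: str = ""       # input data required by the step
        expected: str = ""   # exact, verifiable expected result

    @dataclass
    class TestDesign:
        test_id: str
        title: str
        steps: list[TestStep] = field(default_factory=list)

    overdraft = TestDesign(
        test_id="TC-017",
        title="Withdraw more than the available balance",
        steps=[
            TestStep(action="Open account 12345678"),
            TestStep(action="Request withdrawal", data="999999.00",
                     expected="Error: insufficient funds"),
        ],
    )

A record like this also makes explicit what data each test requires, which feeds directly into step 5, Map Test Data.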
4.5.2. Running Tests
To test the system as a whole, testers need to perform various types of testing – functional,
regression, load, unit and integration – each with its own set of requirements, schedules, and
procedures.
During the test-run phase, testers can set up groups of machines to most efficiently use their lab
resources.
Scheduling automated tests is another way to optimize the use of lab resources, as well as save
testers time by running multiple tests at the same time across multiple machines on the network.
Having an organized, documented process not only helps with automated tests; it also makes manual test runs more accurate by providing testers with a clearly defined procedure that specifies the tasks to be performed at each step of manual testing. For both manual and automated tests, testers need to keep a complete history of all test runs, creating audit trails that help trace the history of tests, test runs, and results.
Steps involved in running tests include:
1. Create Test Sets
A popular way to manage multiple tests is grouping them into test sets, according to the
business process, environment, or feature. Individual tests (both manual and automated) can be
assigned to test sets to help testers ensure thorough coverage of each functional area of the
application.
2. Set Execution Logic
In order to verify application functionality and usability, tests have to realistically emulate the end-user behaviour. To achieve this, test execution should follow predefined logic, such as running certain tests only after other tests have passed, failed, or completed. The execution logic rules should be set prior to executing the actual tests (a minimal sketch of such rules follows this list).
3. Manage Test Resources
After the test environment has been set up and configured, testers define which tests or groups
of tests should be run on which machines. Managing test lab resources by assigning tests to
individual machines helps ensure that hardware and network resources are being used most
efficiently and effectively.
4. Run Manual Tests
While some manual tests may be routine, they are an essential part of the testing procedure,
allowing the tester to verify functionality and conditions that automated tools are unable to
handle.
5. Schedule Automated Test Runs
Scheduling is another way to avoid conflicts with hardware and system resources – tests can be scheduled to run unattended, overnight, or when the system is least in demand for other tasks.
6. Analyze Test-Run Results
During the test-execution phase, testers will uncover application inconsistencies, broken
functionality, missing features, and other problems commonly referred to as “bugs” or “defects”.
The next step is to view the list of all failed tests and determine what caused each test to fail. If the tester determines that a test failed due to an application defect, this defect has to be reported in the defect-tracking system for further investigation, correction, and re-test.
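The execution logic of step 2 can be sketched as simple dependency rules: a test runs only when the tests it depends on have finished with the required status. The test names and the rule format below are invented for illustration.

    # Hypothetical execution-logic rules: (test, runs_after, required_status).
    RULES = [
        ("transfer_funds", "login",          "passed"),
        ("error_report",   "transfer_funds", "failed"),
        ("logout",         "transfer_funds", "completed"),  # passed or failed
    ]

    def ready_to_run(test, results):
        # results maps already-executed test names to "passed" or "failed".
        for name, dependency, required in RULES:
            if name != test:
                continue
            status = results.get(dependency)
            if status is None:
                return False  # the dependency has not executed yet
            if required != "completed" and status != required:
                return False  # the dependency finished with the wrong outcome
        return True

    assert ready_to_run("transfer_funds", {"login": "passed"})
    assert not ready_to_run("error_report", {"transfer_funds": "passed"})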
4.6. Issues and Defects Tracking
Managing or “tracking” issues and defects is a critical step in the testing process. A well-defined method
for defect management will benefit more than just the testing team. Developers, managers, customer
support, Quality Assurance, and even Beta customers can effectively contribute to the testing process
by having access to an open, easy-to-use, functional defect-tracking system.
The key to making a good defect-reporting and resolution process is setting up the defect workflow and
assigning permission rules. Extra time spent on documenting the defect and its history is often well
rewarded by easier analysis, shorter resolution times, and better application quality.
Analyzing defects is what essentially helps managers make the go/no-go decisions about application
deployment.
Effective management and tracking of issues should address the following steps:
1. Agree on the Naming Convention
The key to effective defect management is communication among different parties involved in
the process. Before reporting mechanisms can be put into place, the testing team needs to set
the ground rules, such as defining the severity of the bugs and agreeing on what information
must be included in the defect report.
2. Establish the Reporting Procedure
The key to making a good defect report is supplying developers with as much information as they need to reproduce and fix the problem. It is critical that a defect report not only provides information describing the bug, but also includes all data necessary to help fix the problem (a sketch of such a report follows this list).
3. Set the Permission Rules
For each step in the management process, there must be different access privileges. This is
especially true in the defect-management stage, since the defect information helps the
management make a “go/no-go” decision on the application release.
4. Establish the Process Flow
Once a defect has been reported, the next step is to have it reviewed by a developer, who determines whether the reported issue is indeed a defect and, if it is, assigns it a “new” status. Since few development organizations have the bandwidth to repair all known defects, a developer or project manager must prioritize.
5. Re-Test Repairs
Whatever fixes or changes have been made to repair a known defect, the application needs to be re-tested to verify that the changes have taken effect and that the fix did not introduce additional problems or unexpected side effects. Regression testing must also be performed.
6. Analyze Defects
Analyzing defects is the most critical part of the defect-tracking process. It allows testers to take
a snapshot of the application under test and view the number of known defects, their status,
severity, priority, age, etc. Based on defect analysis, management is then able to make an
informed decision as to whether the application is ready to be deployed.
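To illustrate step 2's reporting procedure, the sketch below lists the kind of information a defect report might carry so that a developer can reproduce and fix the problem. All field names and sample values are invented.

    # A minimal defect-report structure (illustrative field names only).
    from dataclasses import dataclass

    @dataclass
    class DefectReport:
        defect_id: str
        summary: str
        severity: int             # e.g. 1 = blocking, 3 = minor
        build: str                # exact build/version under test
        environment: str          # OS, browser, database, ...
        steps_to_reproduce: str
        expected_result: str
        actual_result: str

    report = DefectReport(
        defect_id="DEF-0042",
        summary="Posting the same payment twice corrupts the account balance",
        severity=1,
        build="2.0.1-build-0310",
        environment="Windows workstation / Oracle test database",
        steps_to_reproduce="1. Open account 12345678\n2. Post 150.00 twice",
        expected_result="Balance reduced by 300.00",
        actual_result="Balance reduced by 150.00 and account locked",
    )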
5. PROCESS MODELS
Since there isn’t any specific software development process within CGI Portugal, we shall adopt two different
approaches: one which is a non-evolutionary model, the V-Model, and a second one which is an
evolutionary model, the Rational Unified Process (RUP).
The reasons for this choice have to do primarily with the fact that these models enjoy a broad acceptance
among software developers, as well as the fact that the models have reached a fair maturity level.
Considering these two models, we will then base the testing methodology on a software development
process that already exists within CGI, but it is not used at CGI Portugal. This methodology is called
Concert, and it is described in the third part of this section.
5.1. The V-Model
As you get involved in the development of a new system, a vast number of software tests appear to be
required to prove the system. While they are consistent in all having the word “test” in them, their relative
importance to each other is not clear. At this point, an outline of the various phases of software testing will be given.
The main software testing phases are:
• Unit
• Integration
• System
• Acceptance
To put them all in context requires an outline of the development process.
5.1.1. Development Process
The development process for a system is traditionally described as a Waterfall Model, where each step follows the next. This shows how the various products produced at each step are used in the step that follows. It does not imply that any of the steps have to be completed before the next step starts, or that prior steps will not have to be revisited later in development. It is just a useful model for seeing how each step works with the others.
5.1.1.1. Requirements
The first step is to define a set of “Requirements”, which is a statement by the customer of what
the system shall achieve in order to meet the need. These involve both functional and non-functional requirements.
5.1.1.2. System Specification
“Requirements” are then passed to developers, who produce a “System Specification”. This changes the focus from what the system shall achieve to how it will achieve it, defining the system in computer terms and taking into account both functional and non-functional requirements.
5.1.1.3. System Design
Other developers produce a “System Design” from the “System Specification”. This takes the
features required and maps them to various components, and defines the relationships between
these components. The whole design should result in a detailed system design that will achieve
what is required by the “System Specification”.
5.1.1.4. Detailed Design
Each piece of coding then has a “Detailed Design”, which describes in detail exactly how it will
perform its piece of processing.
5.1.1.5. Coding
Finally, each piece of code is built and is then ready for the test process.
The levels of testing derive from the way a software system is designed and built up. Conventionally this is known as the V-Model, which maps the types of test to each stage of development.
[Figure: the V-Model – each development stage is paired with the test level that verifies it: Requirements with Acceptance Testing, System Specification with System Testing, System Design with Integration Testing, and Detailed Design with Unit Testing, the two branches meeting at Coding.]
Figure 1 - The V-Model
5.1.2. Testing Process
As you go from Unit Testing all the way up to Acceptance Testing, the level of detail of the testing decreases, and the testing becomes more business-oriented.
[Figure: the testing phases – Unit, Integration, System and Acceptance – moving from technical, itemized, module-level detail towards functional, high-level coverage of the complete business flow.]
Figure 2 - Testing Phases
5.1.2.1. Unit Testing
Starting from the bottom the first test level is “Unit Testing”. It involves checking that each
feature specified in the “Detailed Design” has been implemented in the code.
In theory an independent tester should do this, but in practice the developer usually does it, as he is the only person who understands how the code works. The problem is that each piece of code performs only a small part of the functionality of a system and relies on co-operating with other parts of the system, which may not have been built yet. To overcome this, the developer either builds stubs or uses special software to trick the code into believing it is working in a fully functional system (a small sketch follows).
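A small sketch of this idea: the unit under test calls a collaborating component through a stub that returns canned answers, so the unit can be exercised before the real component exists. The names below are illustrative assumptions, not code from any real system.

    # Unit under test: charges a fee of 1% of the account's balance.
    def compute_fee(account_id, balance_service):
        balance = balance_service.get_balance(account_id)
        return round(balance * 0.01, 2)

    class StubBalanceService:
        # Stands in for the real component, which may not have been built yet.
        def get_balance(self, account_id):
            return 2500.00  # canned answer, independent of any real system

    def test_compute_fee():
        assert compute_fee("12345678", StubBalanceService()) == 25.00

    test_compute_fee()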
5.1.2.2. Integration Testing
As the pieces of code are constructed and tested, they are linked together to check whether they work with each other. It is a fact that two pieces of code that have each passed all their tests can, when connected to each other, produce a new piece of code full of potential faults. These tests can be done by specialists, or by the developers.
Integration Testing is not focused on what the pieces of code are doing but on how they
communicate with each other, as specified in the "System Design". The "System Design"
defines relationships between pieces of code, and this involves stating:
• What a piece of code can expect from another piece of code in terms of services
• How these services will be asked for
• How they will be provided
• How to handle non-standard conditions, i.e. errors
Tests are constructed to deal with each of these aspects.
The tests are organized to check all the integrations, until all the code has been built and integrated, producing the whole system.
5.1.2.3. System Testing
Once the entire system has been built, it has to be tested against the "System Specification" to check whether it delivers the features required. In this phase begins the work of the system testers, who are responsible for the testing effort from this point on. System testers are not required to have programming knowledge.
System testing tends to be more of an investigatory testing phase, where the focus is to have
almost a destructive attitude and test not only the design, but also the behaviour and even the
believed expectations of the customer. System testing is intended to test up to and beyond the
bounds defined in the software/hardware requirements specification(s).
One could view System testing as the final destructive testing phase before Acceptance testing.
System testing can involve a number of specialist types of test to see if all the functional and
non-functional requirements have been met. In addition to functional requirements these may
include the following types of testing for the non-functional requirements:
• Performance – Are the performance criteria met?
• Volume – Can large volumes of information be handled?
• Stress – Can peak volumes of information be handled?
• Robustness – Does the system remain stable under adverse circumstances?
• Regression – Does the system still work after new changes have occurred?
There are many others, the needs for which are dictated by how the system is supposed to
perform.
5.1.2.4. Acceptance Testing
Acceptance Testing checks the system against the "Requirements". It is similar to system
testing in that the whole system is checked but the important difference is the change in focus:
• System Testing checks that the system that was specified has been delivered.
• Acceptance Testing checks that the system delivers what was requested.
The customer, and not the developer, should always do acceptance testing. The customer
knows what is required from the system to achieve value in the business and is the only person
qualified to make that judgment. To help them, prior training should be available.
5.2. Rational Unified Process
The Rational Unified Process (RUP) is an iterative software development process created by the
Rational Software Corporation, now a division of IBM. The RUP is not a single concrete prescriptive
process, but rather an adaptable process framework. As such, RUP describes how to develop software
effectively using proven techniques. While the RUP encompasses a large number of different activities,
it is also intended to be tailored, in the sense of selecting the development processes appropriate to a
particular software project or development organization. The RUP is recognized as particularly
applicable to larger software development teams working on large projects. Rational Software offers a
product – the Rational Unified Process Product – that provides tools and technology for customizing and
executing the process.
5.2.1. Overview
Figure 3 - The Rational Unified Process
Figure 3 shows a high-level view of the process. The chart identifies which disciplines are the most active during each phase of the process. For example, the red shape labelled Business Modelling shows heavy activity only in the Inception and Elaboration phases, whereas the blue shape representing Project Management shows a more graduated activity over the life of the process. The horizontal axis represents time and the dynamic aspects of the RUP: from this point of view, the process is described in cycles, phases, iterations, and milestones.
Using the RUP, software product lifecycles are broken into individual development cycles. These
cycles are further broken into their main components, called phases. In RUP, these phases are:
• Inception
• Elaboration
• Construction
• Transition
Phases are composed of iterations. Iterations are timeboxes; iterations have deadlines while phases
have objectives.
5.2.1.1. The Inception Phase
In this phase, the business case – which includes business context, success factors (expected revenue, market recognition, etc.), and financial forecast – is established. To complement the
business case, a basic use case model, project plan, initial risk assessment and project
description (the core project requirements, constraints and key features) are generated. After
these are completed, the project is checked against the following criteria:
• Stakeholder concurrence on scope definition and cost/schedule estimates.
• Requirements understanding as evidenced by the fidelity of the primary use cases.
• Credibility of the cost/schedule estimates, priorities, risks, and development process.
• Depth and breadth of any architectural prototype that was developed.
• Actual expenditures versus planned expenditures.
If the project does not pass this milestone, called the Lifecycle Objective Milestone, it can either
be cancelled or it can repeat this phase after being redesigned to better meet the criteria.
5.2.1.2. The Elaboration Phase
The Elaboration phase is where the project starts to take shape. In this phase, the problem
domain analysis is made, and the architecture of the project gets its basic form.
This phase must pass the Lifecycle Architecture Milestone by meeting the following criteria:
• A use-case model in which the use-cases and the actors have been identified and most of the use-case descriptions are developed. The use-case model should be 80% complete.
• A description of the software architecture in a software system development process.
• An architecture prototype which can be executed.
• A revised business case and risk list.
• A development plan for the overall project.
If the project cannot pass this milestone, there is still time for it to be cancelled or redesigned. After leaving this phase, the project transitions into a high-risk operation where changes are much more difficult and detrimental when made.
5.2.1.3. The Construction Phase
In this phase, the main focus goes to the development of components and other features of the
system being developed. This is the phase when the bulk of the coding takes place.
This phase produces the first external release of the software. Its conclusion is marked by the
Initial Operational Capability (IOC) milestone.
5.2.1.4. The Transition Phase
In the transition phase the product moves from the development organization to the end
user. The activities of this phase include training of the end users and maintainers and beta
testing of the system to validate it against the end users' expectations. The product is also
checked against the quality level set in the Inception phase. If it does not meet this level, or the
standards of the end users, the entire cycle in this phase begins again.
If all objectives are met, the Product Release (PR) milestone is reached and the development
cycle ends.
5.2.1.5. Milestones
In RUP, there are four major milestones that correspond to the four phases. If the milestone
criteria are not met, the project can be stopped or run in a new iteration to revisit the
bottlenecks.
This meta-model of a milestone emphasizes the links between phases, iterations and milestone completion. Reaching each milestone is therefore critical to the project’s progress.
5.2.1.6. Iterations
A typical project using the RUP will go through several iterations. Dividing the project into
iterations has advantages, such as risk mitigation, but it also needs more guidance and effort
than the traditional sequential approach. The RUP defines a Project Management Discipline that
guides the project manager through iteration management. Using iterations, a project will have
one overall phase plan, but multiple iteration plans.
In RUP all activities are organized into nine disciplines:
Engineering:
• Business Modelling
• Requirements
• Analysis & Design
• Implementation
• Test
• Deployment
Supporting:
• Configuration and Change Management
• Project Management
• Environment
5.2.2. Best Practices of RUP
1. Develop software iteratively
2. Manage requirements
3. Use component-based architecture
4. Visually model software
5. Verify software quality
6. Control changes to software
5.2.2.1. Develop software iteratively
Given the time it takes to develop large, sophisticated software systems, it is not possible to define the problem and build the solution in a single step. Requirements will often change
throughout a project's development, due to architectural constraints, customer's needs or a
greater understanding of the original problem. Iteration allows the project to be successively
refined and addresses a project's highest risk items as the highest priority task. Ideally, each
iteration ends up with an executable release – this helps reduce a project's risk profile, allows
greater customer feedback and helps developers stay focused.
The RUP uses iterative and incremental development for the following reasons:
• Integration is done step by step during the development process, limiting it to fewer elements.
• Integration is less complex, making it more cost-effective.
• Parts are separately designed and/or implemented and can be easily identified for later reuse.
• Requirement changes are noted and can be accommodated.
• Risks are attacked early in development, since each iteration gives the opportunity for more risks to be identified.
• Software architecture is improved by repeated scrutiny.
5.2.2.2. Manage requirements
Requirements Management in RUP is concerned with meeting the needs of end users by
identifying and specifying what they need and identifying when those needs change. Its benefits
include the following:
• The correct requirements generate the correct product; the customer’s needs are met.
• Necessary features will be included, reducing post-development cost.
5.2.2.3. Use component-based architecture
Component Based Architecture creates a system that is easily extensible and intuitively understandable, and it promotes software reuse. A component often relates to a set of objects in Object-oriented programming.
Software Architecture is increasing in importance as systems are becoming larger and more
complex. RUP focuses on producing the basic architecture in early iterations. This architecture
then becomes a prototype in the initial development cycle. The architecture evolves with each
iteration to become the final system architecture. RUP also asserts design rules and constraints
to capture architectural rules. By developing iteratively it is possible to gradually identify
components which can then be developed, bought or reused. These components are often
assembled within existing infrastructures such as CORBA, COM, or J2EE.
5.2.2.4. Visually model software
Abstracting your programming from its code and representing it using graphical building blocks
is an effective way to get an overall picture of a solution. Using this representation, technical
resources can determine how best to implement a given set of inter-related logics. It also builds
an intermediary between the business process and actual code through information technology.
A model in this context is a visualization and at the same time a simplification of a complex
design. RUP specifies which models are necessary and why.
The Unified Modelling Language (UML) can be used for modelling Use-Cases, Class diagrams
and other objects. RUP also discusses other ways to build models.
5.2.2.5. Verify software quality
Quality assessment is the most common failing point of all software projects, since it is often an
afterthought and sometimes even handled by a different team. RUP assists in planning quality
control and assessment by building it into the entire process and involving all members of a
team. No worker is specifically assigned to quality; RUP assumes that each member of the team
is responsible for quality. The process focuses on meeting the expected level of quality and
provides test workflows to measure this level.
5.2.2.6. Control changes to software
In all software projects, change is inevitable. RUP defines methods to control, track and monitor
changes. RUP also defines secure workspaces, guaranteeing a software engineer's system will
not be affected by changes in another system. This concept ties in heavily with component
based architectures.
With the iterative approach, the need for change management is even greater because of the sheer volume of artefacts developed. These artefacts will also need to be updated as the
iterations evolve. The Change Management workflow in RUP deals with three specific areas:
• Configuration Management
• Change Request Management
• Status and Measurement Management
Configuration Management
Configuration management is responsible for the systematic structuring of the products.
Artefacts such as documents and models need to be placed under version control and these
changes must be visible. It also keeps track of dependencies between artefacts so all related
artefacts are updated when changes are made.
Change Request Management
During the system development process many artefacts with several versions exist. CRM keeps
track of the proposals for change.
Status and Measurement Management
Change requests have states such as new, logged, approved, assigned and complete. A
change request also has attributes such as root cause, or nature (like defect and
enhancement), priority, etc. These states and attributes are stored in a database, so useful
reports about the progress of the project can be produced.
5.3. Concert
The Concert methodology is a result of the reflection and professional experience of CGI Subject Matter
Experts from all areas of system development and integration. It is also based on industry best practice
concepts (ISO-12207, IEEE) and methodologies, and is structured to best support CGI’s approaches to
systems development. Concert provides developers and managers with a life cycle model for IS/IT
solution development and delivery, and it establishes a standard approach to the execution of related activities. In addition:
• It outlines the objectives, the content and the result of each of the activities;
• It describes the framework within which the approach has to be positioned;
• It integrates and associates, in a coherent ensemble, fundamental methodology and best-practice development techniques. These include:
  o Recommendations, guides, best-practice approaches, templates and other job aids to promote and support managing requirements in an effective and compliant manner;
  o Procedures and references to enable effective interfaces with:
    - Project stakeholders;
    - Other CGI process areas;
    - Solution components and services provided by various CGI support groups.
The methodology consists of a comprehensive set of processes. A project team or organization, depending on its purpose, will select the appropriate subset as a toolkit to fulfil that purpose, project or application.
For more on Concert, check the website on CGI’s intranet:
(http://www.intranet.cgi.ca/Concert/index_en.htm?%1%2ConcertHome_en.htm).
If you prefer, you can see an overview of Concert in Appendix 3.
[Figure: the Concert system development methodology – the phases Preliminary Analysis, Analysis, Design, Construction, Integration and System Test, Client Acceptance Test, Implementation and Deployment, linked through synchronization points to the related processes Project Management, Quality Assurance, Configuration Management and Reuse Management.]
Figure 4 - CONCERT System Development and Related Process
5.3.1. Basic Principles
The life cycle model contained in Concert is based on principles that result from extensive
experience in systems development and management, as well as industry best practices. These
principles advocate:
• A phased approach to development:
Each phase is designed so that the information concerning decisions to be made is produced
at the right moment, and so as to ensure that the solution will be constructed effectively,
without losing sight of compliance requirements and of organization and financial
management issues.
• The need for a flexible, adaptable and scalable approach:
The methodology has a flexible structure, easily adaptable to any particular context, which makes it an integration tool based on a set of recognized principles while not imposing needless constraints.
• Independence from technology:
The methodology should provide a framework for the full life cycle of systems development
independent of technology, platforms, vendors, etc.
• Client participation:
Client participation is essential for the successful development of an information system. Our approach requires that everyone concerned or affected participate actively in activities at the appropriate moment in time.
• Effective use of models:
Models supply a framework integrating the use of specific techniques, tools and norms, such
as fourth-generation languages, mechanized configuration management, and prototyping
tools. They can be adapted to each organization’s unique context, and can also evolve with
the progress of technology and software engineering best practice.
• Clear identification of deliverables:
An accurate identification of deliverables facilitates the management and control of system development projects. Templates are provided to facilitate more rapid, consistent and professional production of these components.
• Effective interfaces with related processes:
In order to facilitate the work to be done in each process area, each activity in Concert
includes identification of related impacts on project management, on quality assurance, on
configuration management and reuse management.
5.3.2. Concert Phases
The method for development and implementation of a system is divided into eight phases:
Preliminary Analysis: To recommend a viable information system solution based on the client’s requirements, constraints and compliance with both applicable business and technological directions.

Analysis: To translate requirements into specifications needed for the development and implementation of an IS/IT solution, and to develop strategies to optimally deliver client requirements.

Design: To produce the detailed architecture and design for the IS/IT solution, and to develop plans in accordance with the strategies established during the Analysis Phase.

Construction: To produce executable software components that properly reflect the design, in accordance with the Construction Plan.

Integration and System Test: To construct the system by progressively adding increments and testing each resulting assembly to ensure it operates properly; on completion of the integration of required components, to complete testing of all system components to verify that they execute properly, and interface properly among themselves and with related applications.

Client Acceptance Test: To demonstrate that the system meets all client acceptance criteria.

Implementation: To make the solution available to the end users and ensure they can assume ownership.

Deployment: To manage planning and execution of Implementation Phase activities to enable rollout to multiple sites.
6. TESTING PROCEDURES
As stated in the previous section, we shall base our testing procedure on the Concert methodology. To do this, we will map the testing process in Concert to a testing process for the V-Model and to another for RUP.
In order to do this, first we have to take a look at the testing process in Concert, which is shown in the figure
below.
[Figure: the testing process in Concert – the Overall Test Strategy, the Overall Test Plan and the detailed test plans (Unit, Integration, System, Client Acceptance*) are prepared in the early phases; the Unit Test Report, Integration Test Report, System Test Report and Client Acceptance Test Report* are produced as the corresponding phases execute. (* = involves client participation)]
Figure 5 - Testing Process in CONCERT
Considering the figure, we then have the following table, which maps the various phases to their testing results.
Preliminary Analysis: None
Analysis: Overall Test Strategy
Design: Overall Test Plan; Unit Test Plan; Integration Test Plan; System Test Plan; Client Acceptance Test Plan
Construction: Unit Test Report
Integration and System Test: Integration Test Report; System Test Report
Client Acceptance Test: Client Acceptance Test Report
Implementation: None
Deployment: None
6.1. Testing procedure for the V-Model
For the V-Model, the testing process will be very similar to the testing process in Concert. Comparing Figure 5 with the figure below, we can see that, in both, the testing cycle goes side by side with the development cycle.
[Figure: the testing process in the V-Model – the development phases (Requirements, System Specification, System Design, Detailed Design) prepare the Overall Test Strategy, the Overall Test Plan and the Acceptance, System, Integration and Unit Test Plans, while the testing phases (Unit, Integration, System and Acceptance Testing) produce the corresponding test reports.]
Figure 6 - Testing Process in the V-Model
Considering the figure, each phase of the V-Model will include the following testing activities:
Requirements: Prepare Acceptance Test Plan (Specify Acceptance Test Cases)
System Specification: Prepare Overall Test Strategy; Prepare System Test Plan (Specify System Test Cases)
System Design: Prepare Overall Test Plan; Prepare Acceptance Test Plan (refine specified test cases and perform all other tasks); Prepare System Test Plan (refine specified test cases and perform all other tasks); Prepare Integration Test Plan
Detailed Design: Prepare Unit Test Plan
Coding: None
Unit Testing: Perform Unit Tests
Integration Testing: Perform Integration Tests
System Testing: Perform System Tests
Acceptance Testing: Perform Acceptance Tests
A few notes about this table should be taken into consideration. The Prepare Acceptance Test Plan and Prepare System Test Plan activities are each divided across two phases. During the Requirements phase, the first task of the Prepare Acceptance Test Plan activity – Specify Acceptance Test Cases – is performed. This then serves as input for the other tasks, and also increases the quality of the corresponding deliverable, which is only written in the System Design phase. Similarly, the Prepare System Test Plan activity is divided between the System Specification phase, where the Specify System Test Cases task is performed, and the System Design phase, where all other tasks are carried out.
6.2. Testing procedure for Rational Unified Process (RUP)
If you are using an evolutionary model, such as RUP, the testing activities will be performed along with the iterations of each of the process’s phases. Again we will base the testing procedure on Concert, and map the activities to the four phases of the RUP. Thus, considering the testing activities in Figure 5, we can extrapolate the activities for RUP.
Figure 7 - Testing in RUP
Below we can find which activities are performed in each of the RUP’s phases.
Inception: None
Elaboration: Prepare Overall Test Strategy; Prepare Overall Test Plan; Prepare Unit Test Plan; Prepare Integration Test Plan; Prepare System Test Plan; Prepare Acceptance Test Plan
Construction: Perform Unit Tests; Perform Integration Tests; Perform System Tests
Transition: Perform Acceptance Tests
6.3. Test Management
This methodology supports the premise that testing must be managed as a project within the project,
using standard project management disciplines.
Thus, test management tasks for this methodology address: preparation of test strategy and plan, risk
assessment, problem escalation procedures, organization of the test team, test metrics, defect tracking,
and verification of entry/exit criteria for the test levels and sub-levels.
[Figure: test management areas – test strategy and plan, management, risk assessment, test team organization, defect tracking and severity levels, test metrics, and entry/exit criteria.]
Figure 8 - Test Management Areas
6.3.1. Test Strategy and Plan
Before construction begins, an Overall Test Strategy and an Overall Test Plan are created. These
two documents go hand in hand. The test strategy defines the approach to be used to achieve the
test objectives. The test plan quantifies the strategy in terms of type, amount, and timing of
resources (both human and machine) required.
The Overall Test Strategy is a high level system-wide expression of major activities which achieve
the overall desired result as expressed by the testing objectives.
The Overall Test Plan addresses the overall project at a high level.
6.3.2. Management
Test Management tasks provide guidelines on overall testing, including “how to” navigate testing and reporting processes through the development activities. Management tasks also include defining how test scripts will be stored and accessed, the progress reporting process, and the severity levels of defects.
6.3.3. Test Team Organization
The roles of the resources required to support test activities in this methodology include: Project
Manager, Technical Leader, System Analyst, System Administrator, Tester, Developer and Client
Representative. The roles and the responsibilities of the test team members are described in a table
provided in Appendix 4.
6.3.4. Test Metrics
Metrics allow the project manager to know the current level of quality and how much work remains
to be done to achieve a given target level of quality. Metrics also allow the project manager to make
decisions about whether to continue testing or not. Metrics are used to assess each level of testing
(unit, integration, system and acceptance) during the project. Testing metrics referenced in this
methodology are:
• Test progress: specifies the number of test cases planned versus executed;
• Test success: specifies the number of test cases passed versus failed;
• Test sub-level metrics: specify, for each level of testing, whether the target levels of quality of its sub-levels are reached;
• Test variance impact: specifies the number and the rate of defects by status and severity reported during the execution of testing;
• Defect detection: graphically represented, it specifies the number of defects by severity reported over time during the execution of testing.
The project manager should control and maintain the testing metrics for the project, or delegate the responsibility of collecting and maintaining them.
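For illustration only, the first two metrics above could be computed from raw counts as follows; the sample numbers are invented.

    def test_progress(executed, planned):
        # Test progress: test cases executed versus planned, as a percentage.
        return 100.0 * executed / planned

    def test_success(passed, failed):
        # Test success: test cases passed versus failed, as a pass rate.
        return 100.0 * passed / (passed + failed)

    print(test_progress(executed=180, planned=240))  # 75.0 -> 75% of the plan run
    print(test_success(passed=162, failed=18))       # 90.0 -> 90% pass rate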
6.3.5. Risk Assessment
The risks associated with the Information System are assessed to:
• Identify high-risk applications so that more extensive testing will be planned and performed for these applications. Risks include new and immature technology, new and difficult-to-understand business requirements, complexity of business logic, complexity of technology, size of the project, and the skill mix on the project.
• Identify the critical components and/or focus areas for testing that are most important for the system from the client’s standpoint. Critical areas include security, usability, maintainability, reliability, and performance.
Effective testing will result in effective risk management. Managing risks follows the steps described
below:
 Identify the risks;
 Evaluate the risks and the cost of failure;
 Identify and sequence test focus areas that address or minimize the identified risks;
 Document the considerations which led to the selection and priority of test focus areas.
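As an illustration of these steps, the sketch below orders test focus areas by a simple risk score. The 1-to-5 likelihood and cost-of-failure scales, and the product-based score, are assumptions made for the example; this methodology does not prescribe a particular scoring scheme.

    # Hedged sketch of risk-based prioritization of test focus areas.
    # Likelihood and cost-of-failure scales (1-5) are assumed values.
    def prioritize_focus_areas(risks):
        # A higher likelihood x cost of failure means earlier, deeper testing.
        return sorted(risks, key=lambda r: r["likelihood"] * r["cost"], reverse=True)

    if __name__ == "__main__":
        risks = [
            {"area": "Security", "likelihood": 4, "cost": 5},
            {"area": "Usability", "likelihood": 3, "cost": 2},
            {"area": "Performance", "likelihood": 2, "cost": 4},
        ]
        for rank, r in enumerate(prioritize_focus_areas(risks), start=1):
            print(rank, r["area"], "score:", r["likelihood"] * r["cost"])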
6.3.6. Defect Tracking and Severity Levels
The detected defects should be described clearly and precisely. A defect can exist in three states:
 Open: a defect that has been discovered but has not been fixed yet
 Fixed: a defect that has been opened and fixed but not yet re-tested
 Closed: a defect that has been opened, fixed and re-tested
The resolution of defects is prioritized using the following three levels of importance or severity:
 Severity 1: a problem that must be fixed and for which no work around is available;
 Severity 2: a problem which must be resolved quickly and for which a work around is available;
 Severity 3: all other problems.
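The defect states and severity levels above can be modelled directly, as in the minimal sketch below; the class and field names are illustrative assumptions, not part of this methodology's templates.

    # Minimal sketch of the defect life cycle and severity levels.
    from dataclasses import dataclass
    from enum import Enum

    class State(Enum):
        OPEN = "Open"      # discovered but not fixed yet
        FIXED = "Fixed"    # fixed but not yet re-tested
        CLOSED = "Closed"  # fixed and re-tested

    @dataclass
    class Defect:
        description: str
        severity: int          # 1 (no work around) to 3 (all other problems)
        state: State = State.OPEN

        def fix(self):
            self.state = State.FIXED

        def close_after_retest(self):
            # A defect is only closed once the fix has been re-tested.
            if self.state is State.FIXED:
                self.state = State.CLOSED

    if __name__ == "__main__":
        d = Defect("Login rejects valid passwords", severity=1)
        d.fix()
        d.close_after_retest()
        print(d.state.value)  # Closed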
6.3.7. Entry/Exit Criteria
The movement of an application from one test to another is controlled through the use of entry and
exit criteria.
Entry criteria are used to determine when the system components are ready to be tested and
provide answers to the following:
 Has the component reached a stage where it can be tested?
 Is there enough information available to properly test it?
Exit criteria are used to determine if the application has been successfully tested for a given level
and if the component can proceed to the next test level. Exit criteria are usually different for each
test level to ensure appropriate metrics are gathered and to make an informed business decision on
the risk of proceeding to the next level of testing.
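As a sketch only, entry and exit criteria can be expressed as simple checks that gate the movement between levels. The specific conditions shown below (build available, specifications complete, all planned cases executed, no open Severity 1 defects) are example criteria, not thresholds mandated by this methodology.

    # Hedged sketch of entry/exit criteria gating movement between test levels.
    def entry_criteria_met(component):
        # Can the component be tested, and is enough information available?
        return component["build_available"] and component["specs_complete"]

    def exit_criteria_met(component):
        # Example exit conditions; real ones differ per test level.
        return (component["cases_executed"] == component["cases_planned"]
                and component["open_severity1_defects"] == 0)

    if __name__ == "__main__":
        comp = {"build_available": True, "specs_complete": True,
                "cases_planned": 20, "cases_executed": 20,
                "open_severity1_defects": 0}
        if entry_criteria_met(comp) and exit_criteria_met(comp):
            print("Component may proceed to the next test level")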
6.4. Testing Activities
This section describes all the activities that have to be performed during the testing effort. Please
note that the order in which these activities appear does not correspond to the chronological order in
which they are performed; for that sequence, please refer to sections 6.1 and 6.2.
6.4.1. Prepare Overall Test Strategy
The objective of the Overall Test Strategy is to establish the high level strategy that will govern how
the system will be tested.
Overall Test Strategy is established to reduce risk, lower costs related to test activities, and ensure
successful implementation. This activity allows the project manager to:
 Assess the risks involved in developing or modifying the application;
 Identify and sequence test focus areas of the application to address or minimize the identified
risks;
 Establish target levels of quality that need to be met for each focus area of the application.
The desired target levels of quality reflect the risk in each test focus area. The higher the
target level of quality to be reached for a given focus area, the more testing needs to be
performed in that area;
 Select the test levels to be used during the project;
 Select the sub-levels of unit, integration, system, and acceptance tests to achieve the
established target levels of quality;
 Provide an initial estimate of the testing activities duration;
 Produce a document that includes the elements listed above.
6.4.1.1. Tasks
 Prioritize Test Focus Areas: To assess the risks associated with the system, and to identify and sequence the application areas where the test should be focused to address the risks.
 Establish Target Levels of Quality: To establish target levels of quality that need to be met to ensure successful implementation of the system.
 Select Test Sub-levels: To identify the different test sub-levels that will be used during the project.
 Estimate Test Activities: To provide an initial estimate of the duration of the testing activities.
6.4.1.1.1. Prioritize Test Focus Areas
The objective for this task is to assess the risks associated with the Information System, and
to identify and sequence the application areas where the test should be focused to address
the risks.
This task is performed to ensure that all resources assigned to the testing activities are
allocated to achieve the best results. It also assesses risks to help form test focus areas.
Some risks are:
 New and immature technology;
 New and difficult to understand business requirements;
 Complexity of business logic;
 Complexity of technology;
 Size of the project;
 Skill mix on the project.
This task selects and prioritizes the areas where the testing effort should be focused, taking
the following factors into account:
 Security;
 Usability;
 Maintainability;
 Reliability;
 Performance.
Steps
Identify the risks.
Evaluate the risks and the cost of failure.
Identify and sequence test focus areas that address or minimize the identified risks.
Document the considerations that led to the selection and priority of test focus areas.
Skill Profile
Technical Leader
6.4.1.1.2. Establish Target Levels of Quality
The objective for this task is to establish target levels of quality that need to be met to
ensure a successful implementation of the system.
The target levels of quality are used to determine the amount of testing needed for each
focus area within the software metrics and quality management. The desired target level of
quality reflects the risk in each test focus area. The higher the target level of quality to be
reached for a given focus area, the more testing needs to be performed in that area.
Steps
Establish a target level of quality for each test focus area.
Skill Profile
Technical Leader
6.4.1.1.3. Select Test Sub-levels
The objective for this task is to identify the different test sub-levels that will be used during
the project.
Test sub-levels are used to structure and break down the test activities in a way that will
ensure complete testing of the system. This task selects the test sub-levels to be used for
unit, integration, system, and acceptance testing.
Unit testing includes: sub-routine testing, coverage testing, path coverage, boundary value
testing, control flow testing, statement coverage testing, branch coverage testing and
equivalence partitioning.
Integration testing includes functional and integration testing.
System testing includes: database testing, interface testing, conversion testing, operational
testing, existing function and regression testing, stress testing, usability testing, networked
system testing, security testing, operability testing, compatibility testing, configuration
testing, installation testing, procedure testing, storage testing, backup-recovery testing,
cohabitation/reliability testing.
Acceptance testing includes documentation testing and regression testing.
Steps
Identify the unit test sub-levels.
Identify the integration test sub-levels.
Identify the system test sub-levels.
Identify the acceptance test sub-levels.
Skill Profile
Technical Leader
6.4.1.1.4. Estimate Test Activities
The objective of this task is to provide an initial estimate of the duration of the testing
activities.
This task provides an initial estimate of the time required to do each test activity based on
current estimation practices, e.g. historical data and experience. This estimate will be used
to plan the test activities for the system.
Steps
Estimate the duration of each test activity.
Skill Profile
Project Manager
6.4.1.2. Components
Input (Examples): System Analysis
Output: Overall Test Strategy (Template in Appendix 5)
6.4.2. Prepare Overall Test Plan
The objective of the Overall Test Plan is to provide an overall plan for managing the testing activities
of the system.
This activity provides the project manager with a plan to manage the activities of each test level
(unit, integration, system and acceptance).
The following elements are included in the plan:
 Criteria that govern the movement of the application from one test level to another;
 Number of test cases required for each test level and sub-level to meet test objectives;
 Metrics that the project will produce;
 Problem escalation procedures to be used;
 Identification of the required resources for each sub-level within each test level;
 Identification of the personnel with the necessary skills to prepare the test site and to execute
the test activities;
 Estimates of the duration of the testing activities for all participants;
 Overall schedule of testing activities.
6.4.2.1. Tasks
 Define Test Level Entry and Exit Criteria: To describe the conditions that determine when a component is ready to be tested and whether the component can proceed to the next level.
 Estimate Required Number of Test Cases: To provide an initial estimate of the number of test cases required for each test level and sub-level, and to define the metrics that the project will produce.
 Define Problem Escalation Procedures: To define problem escalation procedures to be used during the test activities of the project.
 Identify Test Environment Requirements: To specify the necessary characteristics and properties of the test environment.
 Refine Test Team Organization: To identify the roles and responsibilities of those supporting the test activities.
 Produce Overall Test Schedule: To specify a schedule, including milestones, for testing activities by refining duration estimates.
6.4.2.1.1. Define Test Level Entry and Exit Criteria
The objective for this task is to describe the conditions that determine when a component is
ready to be tested and whether the component can proceed to the next test level.
Entry conditions are used to determine when the system components are ready to be tested
and provide answers to the following:
 Has the component reached a stage where it can be tested?
 Is there enough information available to properly test it?
Exit conditions are used to determine if the application has been successfully tested for a
given level and if the component can proceed to the next test level. Exit conditions are
usually different for each test level to ensure that appropriate metrics are gathered, and to
make an informed business decision on the risk of proceeding to the next level of testing.
Steps
Define entry conditions for unit, integration, system and acceptance testing.
Define exit conditions for unit, integration, system and acceptance testing.
Skill Profile
Technical Leader
6.4.2.1.2. Estimate Required Number of Test Cases
The objective for this task is to provide an initial estimate of the number of test cases
required for each test level and sub-level and to define the metrics that the project will
produce.
The estimate of the number of test cases required (including those for each test sub-level
for unit, integration, system and acceptance testing) is used to establish the level of effort
and activities needed to test the system.
The metrics to assess each level of testing (unit, integration, system and acceptance) during
the project consist of the following:
 Test progress - specifies the number of test cases planned and executed;
 Test success - specifies the number of test cases passed and failed;
 Test target levels of quality - specifies if the target level of quality is reached for each
test sub-level;
 Test variance impact - specifies the number and the rate of defects by status and
severity reported during the execution of testing;
 Defect detection - specifies, using a graphic illustration, the number of detected
defects versus the number of executed test cases over time.
Steps
Estimate the number of test cases required for unit testing.
Estimate the number of test cases required for each sub-level of unit testing.
Estimate the number of test cases required for integration testing.
Estimate the number of test cases required for each sub-level of integration testing.
Estimate the number of test cases required for system testing.
Estimate the number of test cases required for each sub-level of system testing.
Estimate the number of test cases required for acceptance testing.
Estimate the number of test cases required for each sub-level of acceptance testing.
Define the metrics that will be produced for each test level.
Skill Profile
Technical Leader
6.4.2.1.3. Define Problem Escalation Procedures
The objective for this task is to define problem escalation procedures to be used during the
test activities of the project.
Escalation procedures are used by those involved in test activities to determine the course
of action to follow when they discover anomalies or defects. These are normally linked with
the Quality Assurance, Configuration Management and Project Management policies and
procedures.
Steps
Define the escalation procedures to be followed when the test team encounters problems
during testing.
Skill Profile
Technical Leader
6.4.2.1.4. Identify Test Environment Requirements
The objective for this task is to specify the necessary characteristics and properties of the
test environment.
The physical characteristics of the test environments are required to plan test activities.
These include hardware, communications, system software, mode of usage (e.g. standalone), and any other software or supplies needed to support testing.
In addition, the following requirements must be identified:
 The level of security that must be provided for the test facilities, system software, and
proprietary components such as software, data and hardware;
 Special testing tools, utilities and facilities needed, and any other items related to
testing such as publications, office space, etc.;
 The source and provisioning activities needed to supply the testers with what they
need.
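One possible way to record these requirements as structured data is sketched below; every key and value is a hypothetical example, not content taken from an actual Overall Test Plan.

    # Illustrative sketch of a test environment requirements record.
    # All entries are invented placeholders.
    test_environment = {
        "hardware": ["application server", "database server", "2 client workstations"],
        "communications": "isolated test LAN",
        "system_software": ["operating system build", "database management system"],
        "mode_of_usage": "stand-alone",
        "security": "restricted access to test facilities and proprietary data",
        "tools_and_utilities": ["test management tool", "defect tracker"],
        "provisioning": "supplied before the entry criteria check",
    }

    for requirement, value in test_environment.items():
        print(f"{requirement}: {value}")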
Steps
Identify the following resources for unit, integration, system and acceptance testing:
 The required test facilities: hardware, software and network;
 The test tools and utilities that will be used.
Skill Profile
Technical Leader
6.4.2.1.5. Refine Test Team Organization
The objective for this task is to identify the roles and responsibilities of those supporting the
test activities.
In order to schedule test activities, it is necessary to identify all members (end users,
testers, developers, database support, etc.) required to support those activities.
Steps
Identify the groups responsible for preparing the test environment and facilities.
Identify the groups responsible for performing the testing.
Skill Profile
Project Manager
6.4.2.1.6. Produce Overall Test Schedule
The objective for this task is to specify a schedule, including milestones, for testing activities by
refining duration estimates.
Using the information gathered during the Prepare Overall Test Plan activity, the schedule
for test activities can be produced by:
 Refining the estimate of time required to perform each testing task;
 Defining test milestones that must be met;
 Specifying the initial schedule for each testing task and milestone;
 Specifying the periods during which each testing resource (i.e., facilities, tools, and staff)
must be available.
Steps
Refine the duration estimate of each testing activity.
Define the test milestones that must be met.
Specify the schedule for each testing activity.
Specify the schedule for each test milestone.
Specify the roles and responsibilities associated with each test activity.
Skill Profile
Project Manager
6.4.2.2. Components
Input (Examples): System Design; Overall Test Strategy (Template in Appendix 5)
Output: Overall Test Plan (Template in Appendix 5)
6.4.3. Prepare Unit Test Plan
The objective of the Unit Test Plan is to provide a detailed plan for performing unit test sub-levels.
This activity provides the project manager with a plan to manage the execution of unit tests in
accordance with the Overall Test Plan. Unit testing is the initial testing of code in a module or subroutine. Unit test sub-levels include tasks that address the following:
 Internal logic;
 Internal design;
 Test path and condition coverage;
 Test exception conditions and error handling.
Unit test sub-levels include: sub-routine testing, coverage testing, path coverage testing, control flow
testing, boundary value testing, statement coverage testing, branch coverage testing, and
equivalence partitioning. The purpose of the unit test sub-levels is to find and fix coding errors and
coding omissions in software components, such as modules or subroutines, prior to their integration
with other sub-components into the complete package.
The plan will provide the following for each component test sub-level, if applicable:
 Test cases needed;
 The number of test cases;
 The test tools and facilities needed (hardware, software, network);
 Staff resources, their roles and responsibilities required to perform test activities (includes time
for education and mentoring if required);
 A detailed test schedule, including a duration estimate for test activities;
 Unit testing metrics.
The Unit Test Plan must be aligned with all other plans.
6.4.3.1. Tasks
 Specify Unit Test Cases: To specify and sequence the test cases needed to meet the objectives of unit test sub-levels.
 Update Test Facility Requirements: To review and update the test tools and facilities required for unit test sub-levels.
 Refine Detailed Test Team Organization: To identify resources required to perform unit test sub-levels.
 Produce Detailed Test Schedule: To produce a test schedule, including duration estimates for the activities of unit test sub-levels.
6.4.3.1.1. Specify Unit Test Cases
The objective for this task is to specify and sequence the test cases needed to meet the
objectives of the unit test sub-levels.
Unit test cases are scenarios that demonstrate that a unit of code meets the client's
requirements as specified, and validate that the code fulfils its intended use.
It is recommended that test cases be designed when the code is specified, that is, prior to
coding. The advantage is that test conditions and cases are:
 Designed more objectively;
 Not influenced by coding style;
 Not overlooked.
Steps
For each unit test sub-level, design test cases following these practical steps:
 Derive test scenarios from client requirements;
 Break down test scenarios into test conditions. Each test condition has a measurable
objective and associated test input data. Each test condition normally requires the
execution of several test cases;
 Specify the expected outputs associated with the execution of a test case;
 Select the test cases that should be included in the regression test package;
 Prioritize the test cases (high, medium and low). The assigned priority is based on the
risk analysis and the test focus areas that were selected, as defined in the Overall Test
Strategy. Test cases are run by priority; the highest priority test cases are run first.
Once all the unit test case specifications are completed, they are attached to the Unit Test
Plan for the execution of unit testing.
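As a concrete illustration of a unit test case at the boundary value testing sub-level, the example below exercises a hypothetical requirement that an order quantity must lie between 1 and 100. The function under test and the chosen values are assumptions made for this example.

    # Hedged example of boundary value unit test cases for a hypothetical
    # requirement: "order quantity must be between 1 and 100".
    import unittest

    def quantity_is_valid(quantity):
        return 1 <= quantity <= 100

    class BoundaryValueTests(unittest.TestCase):
        def test_lower_boundary(self):
            self.assertFalse(quantity_is_valid(0))    # just below the limit
            self.assertTrue(quantity_is_valid(1))     # on the limit

        def test_upper_boundary(self):
            self.assertTrue(quantity_is_valid(100))   # on the limit
            self.assertFalse(quantity_is_valid(101))  # just above the limit

    if __name__ == "__main__":
        unittest.main()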
Skill Profile
Technical Leader
6.4.3.1.2. Update Test Facility Requirements
The objective of this task is to review and update the test tools and facilities required for
each test sub-level.
This task allows the technical leader to expand the detail level of the tools and facilities
requirements identified in the Overall Test Plan for the Unit, Integration, System and
Acceptance Test Plans.
Steps
For each test sub-level:
 Review the requirements for testing tools and facilities identified in the Overall Test
Plan;
 If necessary, update these requirements to reflect the current situation in the
appropriate test plans (Unit, Integration, System and Acceptance Test Plans).
Skill Profile
Technical Leader
6.4.3.1.3. Refine Detailed Test Team Organization
The objective for this task is to identify the resources required to perform each test sub-level.
This task allows the project manager to expand the detail level of the staff resource
requirements identified in the Overall Test Plan for the Unit, Integration, System and
Acceptance Test Plans.
Steps
Refine the roles and responsibilities of the team members responsible for each test sub-level and review this information with them.
Define the internal management structure of the test team to depict the lines of authority,
responsibility and communication within the test team. Graphical devices such as
hierarchical organization charts or matrix diagrams may be used to describe the
organizational structure.
Skill Profile
Project Manager
6.4.3.1.4. Produce Detailed Test Schedule
The objective for this task is to produce a test schedule, including duration estimates for the
activities of test sub-levels.
This task allows the project manager to expand the detail level of planning requirements
identified in the Overall Test Plan for Unit, Integration, System and Acceptance Test Plans.
When establishing the schedule, the following should be considered for each test sub-level:
 Dependencies between test activities;
 Key dates of test activities;
 Alignment with all other plans and schedules (initial corrective actions as required).
Steps
Produce a duration estimate for the activities of each test sub-level to determine the total
duration of the appropriate test level.
Skill Profile
Project Manager
6.4.3.2. Components
Input (Examples): Overall Test Strategy (Template in Appendix 5); Overall Test Plan (Template in Appendix 5); Detailed Design
Output: Unit Test Plan (Template in Appendix 5)
6.4.4. Prepare Integration Test Plan
The objective of the Integration Test Plan is to provide a detailed plan for performing integration test
sub-levels.
This activity provides the project manager with a plan to manage the execution of integration test in
accordance with the Overall Test Plan. Integration testing verifies the proper execution of application
components that interface with other applications. The communication among software components
(e.g. modules) within the sub-system is tested in a controlled and isolated environment within the
project. It does not require the execution of the entire application as a unit, but of individual sub-systems within the application.
Integration test sub-levels include functional and integration testing.
The plan provides the following for each applicable integration test sub-level:
 Test cases needed;
 The number of test cases estimated;
 The test tools and facilities needed (hardware, software, network);
 Staff resources, their roles and responsibilities required to perform test activities (include time
for education and mentoring if required);
 A detailed test schedule, including duration estimates for test activities;
 Integration testing metrics.
The Integration Test Plan must be aligned with all other plans.
6.4.4.1. Tasks
 Specify Integration Test Cases: To specify and sequence the test cases needed to meet the objectives of integration test sub-levels.
 Update Test Facility Requirements: To review and update the test tools and facilities required for integration test sub-levels.
 Refine Detailed Test Team Organization: To identify resources required to perform integration test sub-levels.
 Produce Detailed Test Schedule: To produce a test schedule, including duration estimates for the activities of integration test sub-levels.
6.4.4.1.1. Specify Integration Test Cases
The objective for this task is to specify and sequence the test cases needed to meet the
objectives of the integration test sub-levels.
Integration test cases are created to verify that application components interfacing with
other applications execute properly.
Steps
For each integration test sub-level, design test cases following these practical steps:
 Derive test scenarios from client requirements;
 Break down test scenarios into test conditions. Each test condition has a measurable
objective and associated test input data. Each test condition normally requires the
execution of several test cases;
 Specify the expected outputs associated with the execution of a test case;
 Select the test cases that should be included in the regression test package;
 Prioritize the test cases (high, medium and low). The assigned priority is based on the
risk analysis and the test focus areas that were selected, as defined in the Overall Test
Strategy. Test cases are run by priority; the highest priority test cases are run first.
Once all the integration test case specifications are completed, they are attached to the Integration Test Plan for the execution of integration testing.
Skill Profile
Tester
6.4.4.1.2. Update Test Facility Requirements
The objective of this task is to review and update the test tools and facilities required for
each test sub-level.
This task allows the technical leader to expand the detail level of the tools and facilities
requirements identified in the Overall Test Plan for the Unit, Integration, System and
Acceptance Test Plans.
Steps
For each test sub-level:
 Review the requirements for testing tools and facilities identified in the Overall Test
Plan;
 If necessary, update these requirements to reflect the current situation in the
appropriate test plans (Unit, Integration, System and Acceptance Test Plans).
Skill Profile
Technical Leader
6.4.4.1.3. Refine Detailed Test Team Organization
The objective for this task is to identify the resources required to perform each test sublevel.
This task allows the project manager to expand the detail level of the staff resource
requirements identified in the Overall Test Plan for the Unit, Integration, System and
Acceptance Test Plans.
Steps
Refine the roles and responsibilities of the team members responsible for each test sublevel and review this information with them.
Define the internal management structure of the test team to depict the lines of authority,
responsibility and communication within the test team. Graphical devices such as
hierarchical organization charts or matrix diagrams may be used to describe the
organizational structure.
Skill Profile
Project Manager
6.4.4.1.4. Produce Detailed Test Schedule
The objective for this task is to produce a test schedule, including duration estimates for the
activities of test sub-levels.
This task allows the project manager to expand the detail level of planning requirements
identified in the Overall Test Plan for Unit, Integration, System and Acceptance Test Plans.
When establishing the schedule, the following should be considered for each test sub-level:
 Dependencies between test activities;
 Key dates of test activities;
 Alignment with all other plans and schedules (initial corrective actions as required).
Steps
Produce a duration estimate for the activities of each test sub-level to determine the total
duration of the appropriate test level.
Skill Profile
Project Manager
6.4.4.2. Components
Input (Examples): Overall Test Strategy (Template in Appendix 5); System Design; Overall Test Plan (Template in Appendix 5)
Output: Integration Test Plan (Template in Appendix 5)
6.4.5. Prepare System Test Plan
The objective of the System Test Plan is to provide a detailed plan for performing system test sub-levels.
This activity provides the project manager with a plan to manage the execution of system tests in
accordance with the Overall Test Plan. System tests verify that all components (software, hardware
and network) of the entire application execute properly. The tests also verify the system's ability to
read interfaces from other applications and to generate interfaces to other applications.
System test sub-levels include: database testing, interface testing, conversion testing, operational
testing, existing function and regression testing, stress testing, usability testing, networked system
testing, security testing, operability testing, compatibility testing, configuration testing, installation
testing, procedure testing, storage testing, backup-recovery testing, cohabitation/reliability testing.
The plan provides the following for each applicable system test sub-level:
 Test cases needed;
 The number of test cases;
 The test tools and facilities needed (hardware, software, network);
 Staff resources, their roles and responsibilities required to perform test activities (include time
for education and mentoring if required);
 A detailed test schedule, including duration estimates for test activities;
 System testing metrics.
The System Test Plan must be aligned with all other plans.
6.4.5.1. Tasks
 Specify System Test Cases: To specify and sequence the test cases needed to meet the objectives of system test sub-levels.
 Update Test Facility Requirements: To review and update the test tools and facilities required for system test sub-levels.
 Refine Detailed Test Team Organization: To identify resources required to perform system test sub-levels.
 Produce Detailed Test Schedule: To produce a test schedule, including duration estimates for the activities of system test sub-levels.
6.4.5.1.1. Specify System Test Cases
The objective for this task is to specify and sequence the test cases needed to meet the
objectives of the system test sub-levels.
System test cases are created to verify that all components (software, hardware and
network) of the entire application execute properly.
Steps
For each system test sub-level, design test cases following these practical steps:
 Derive test scenarios from the client's requirements;
 Break down test scenarios into test conditions. Each test condition has a measurable
objective and associated test input data. Each test condition normally requires the
execution of several test cases;
 Specify the expected outputs associated with the execution of a test case;
 Select the test cases that should be included in the regression test package;
 Prioritize the test cases (high, medium and low). The assigned priority is based on the
risk analysis and the test focus areas that were selected, as defined in the Overall Test
Strategy. Test cases are run by priority; the highest priority test cases are run first.
Once all the system test case specifications are completed, they are attached to the System
Test Plan so they can be used for the execution of system testing.
Skill Profile
Tester
6.4.5.1.2. Update Test Facility Requirements
The objective of this task is to review and update the test tools and facilities required for
each test sub-level.
This task allows the technical leader to expand the detail level of the tools and facilities
requirements identified in the Overall Test Plan for the Unit, Integration, System and
Acceptance Test Plans.
Steps
For each test sub-level:
 Review the requirements for testing tools and facilities as identified in the Overall Test
Plan.
 If necessary, update these requirements to reflect the current situation in the
appropriate test plans (Unit, Integration, System and Acceptance Test Plans).
Skill Profile
Technical Leader
6.4.5.1.3. Refine Detailed Test Team Organization
The objective for this task is to identify the resources required to perform each test sub-level.
This task allows the project manager to expand the detail level of the staff resources
requirements identified in the Overall Test Plan for the Unit, Integration, System and
Acceptance Test Plans.
Steps
Refine the roles and responsibilities of the team members responsible for each test sub-level and review them with the affected people.
Define the internal management structure of the test team to depict the lines of authority,
responsibility and communication within the test team (graphical devices such as
hierarchical organization charts or matrix diagrams may be used to describe the
organizational structure).
Skill Profile
Project Manager
6.4.5.1.4. Produce Detailed Test Schedule
The objective for this task is to produce a test schedule, including duration estimates for the
activities of test sub-levels.
This task allows the project manager to expand the detail level of planning requirements
identified in the Overall Test Plan for Unit, Integration, System and Acceptance Test Plans.
When establishing the schedule, the following should be considered for each test sub-level:
 Dependencies between test activities;
 Key dates of test activities;
 Alignment with all other plans and schedules (initial corrective actions as required).
Steps
Produce a duration estimate for the activities of each test sub-level to determine the total
duration of the appropriate test level.
Skill Profile
Project Manager
6.4.5.2. Components
Input (Examples): Overall Test Strategy (Template in Appendix 5); System Analysis; Overall Test Plan (Template in Appendix 5)
Output: System Test Plan (Template in Appendix 5)
6.4.6. Prepare Acceptance Test Plan
The objective of the Acceptance Test Plan is to provide a detailed plan for performing the
acceptance test sub-levels.
This activity provides the project manager with a plan to manage the execution of acceptance
testing in accordance with the Overall Test Plan. Acceptance tests verify that the system meets
client requirements as specified, and validate that the system fulfils its intended use. Acceptance
testing takes place in, or simulates, the client’s operational environment, and includes performance
and security testing. It demonstrates that the system performs as the sponsor and client expect,
so that they may accept it.
Acceptance test sub-levels include documentation testing and regression testing.
The plan provides the following for each applicable acceptance test sub-level:
 Test cases needed;
 The number of test cases estimated;
 The test tools and facilities needed (hardware, software, network);
 Staff resources, their roles and responsibilities required to perform test activities (include time
for education and mentoring if required);
 A detailed test schedule, including duration estimates for test activities;
 Acceptance testing metrics.
The Acceptance Test Plan must be aligned with all other plans.
6.4.6.1. Tasks
 Specify Acceptance Test Cases: To specify and sequence the test cases needed to meet the acceptance test objectives.
 Update Test Facility Requirements: To review and update the test tools and facilities required for acceptance test sub-levels.
 Refine Detailed Test Team Organization: To identify the resources required to perform acceptance tests.
 Produce Detailed Test Schedule: To produce a test schedule, including duration estimates for the activities of acceptance test sub-levels.
6.4.6.1.1. Specify Acceptance Test Cases
The objective for this task is to specify and sequence the test cases needed to meet the
objectives of acceptance test sub-levels.
Acceptance test cases are created to verify that the system meets client requirements as
specified, and to validate that the system fulfils its intended use.
Steps
For each acceptance test sub-level, design test cases following these practical steps:
 Derive test scenarios from the client's requirements;
 Break down test scenarios into test conditions. Each test condition has a measurable
objective and associated test input data. Each test condition normally requires the
execution of several test cases;
 Specify the expected outputs associated with the execution of a test case;
 Select the test cases that should be included in the regression test package;
 Prioritize the test cases (high, medium and low). The assigned priority is based on the
risk analysis and the test focus areas that were selected, as defined in the Overall Test
Strategy. Test cases are run by priority; the highest priority test cases are run first.
Once all the acceptance test case specifications are completed, they are attached to the
Acceptance Test Plan for the execution of acceptance testing.
Skill Profile
Tester, Client Representative
6.4.6.1.2. Update Test Facility Requirements
The objective of this task is to review and update the test tools and facilities required for
each test sub-level.
This task allows the technical leader to expand the detail level of the tools and facilities
requirements identified in the Overall Test Plan for the Unit, Integration, System and
Acceptance Test Plans.
Steps
For each test sub-level:
 Review the requirements for testing tools and facilities as identified in the Overall Test
Plan.
 If necessary, update these requirements to reflect the current situation in the
appropriate test plans (Unit, Integration, System and Acceptance Test Plans).
Skill Profile
Technical Leader
6.4.6.1.3. Refine Detailed Test Team Organization
The objective for this task is to identify the resources required to perform each test sub-level.
This task allows the project manager to expand the detail level of the staff resources
requirements identified in the Overall Test Plan for the Unit, Integration, System and
Acceptance Test Plans.
Steps
Refine the roles and responsibilities of the team members responsible for each test sub-level and review them with the affected people.
Define the internal management structure of the test team to depict the lines of authority,
responsibility and communication within the test team (graphical devices such as
hierarchical organization charts or matrix diagrams may be used to describe the
organizational structure).
Skill Profile
Project Manager
6.4.6.1.4. Produce Detailed Test Schedule
The objective for this task is to produce a test schedule, including duration estimates for the
activities of test sub-levels.
This task allows the project manager to expand the detail level of planning requirements
identified in the Overall Test Plan for Unit, Integration, System and Acceptance Test Plans.
When establishing the schedule, the following should be considered for each test sub-level:
 Dependencies between test activities;
 Key dates of test activities;
 Alignment with all other plans and schedules (initial corrective actions as required).
Steps
Produce a duration estimate of the activities for each test sub-level to determine the total
duration of the appropriate test level.
Skill Profile
Project Manager
6.4.6.2. Components
Input (Examples): Overall Test Strategy (Template in Appendix 5); System Requirements; Overall Test Plan (Template in Appendix 5)
Output: Acceptance Test Plan (Template in Appendix 5)
6.4.7. Perform Unit Tests
The objective for this activity is to find, fix and document coding errors and omissions in program
sub-components, such as modules or routines, prior to their integration and packaging with other
sub-components.
Unit testing allows the developer to verify that the unit specification has been correctly translated to
the internal logic of the module or routine, and to ensure that the unit is ready for integration testing.
Unit testing is the initial testing of new and changed code in a module or subroutine. Unit test sub-levels include tasks to:
 Test the internal logic;
 Verify the internal design;
 Test the path and condition coverage;
 Test exception conditions and error handling.
The results of unit testing are documented in the Unit Test Report for future metrics analysis.
6.4.7.1. Tasks
 Execute Unit Test Plan: To confirm the Unit Test Plan and execute all unit test cases.
 Track Defects: To report and track the status of defects associated with the execution of each unit test sub-level.
 Analyze Test Results: To analyze and validate the results for each unit test sub-level and document the required actions.
 Repair Defects and Retest: To implement corrective actions documented during testing.
6.4.7.1.1. Execute Unit Test Plan
The objective for this task is to confirm the Unit Test Plan and execute all unit test cases.
The following describes a typical sequence of unit test activities:
1. Ensure entry conditions are met as defined in the Unit Test Plan;
2. Execute the unit test cases by priority; the highest priority test cases are run first;
3. Record the successful/unsuccessful (pass/fail) execution of unit test cases in the
Test Case Log.
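A minimal sketch of this sequence is given below; the test case record layout and the callable used to run a case are assumptions made for illustration only.

    # Hedged sketch of executing unit test cases by priority and recording
    # pass/fail results in a stand-in for the Test Case Log.
    PRIORITY_ORDER = {"high": 0, "medium": 1, "low": 2}

    def execute_unit_tests(test_cases, entry_conditions_met):
        if not entry_conditions_met:
            raise RuntimeError("Entry conditions of the Unit Test Plan not met")
        log = []  # stands in for the Test Case Log
        for tc in sorted(test_cases, key=lambda t: PRIORITY_ORDER[t["priority"]]):
            passed = tc["run"]()  # execute the test case
            log.append((tc["id"], "pass" if passed else "fail"))
        return log

    if __name__ == "__main__":
        cases = [
            {"id": "UT-2", "priority": "low", "run": lambda: True},
            {"id": "UT-1", "priority": "high", "run": lambda: False},
        ]
        print(execute_unit_tests(cases, entry_conditions_met=True))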
Steps
Review the Unit Test Plan and identify any required additional unit test cases.
Complete unit test activities in accordance with the Unit Test Plan.
Attach all the Test Case Logs to the Unit Test Report for metrics analysis as defined in the
Unit Test Plan.
Skill Profile
Developer
6.4.7.1.2. Track Defects
The objective for this task is to report and track the status of defects associated with the
execution of each test sub-level.
Describe each detected defect in clear, precise terms, so that it can still be understood by someone
unfamiliar with the specific system, or after significant time has passed.
If appropriate care is taken in documenting errors, valuable data will be available in the
future for analysis which could identify improved methods of developing or maintaining the
system. The defect can exist in three states. These are:
 Open - a defect that has been discovered but has not been fixed yet;
 Fixed - a defect that has been opened and fixed but not yet re-tested;
 Closed - a defect that has been opened, fixed and re-tested.
All the defects detected during the execution of tests are recorded and attached to the
appropriate Test Report.
Steps
Track defects by updating the Defect Log as follows:
 Describe each detected defect in clear and precise terms;
 Record the status of the defect;
 Record the test case number to which the defect is related.
Attach the Defect Log to the appropriate Test Report for metrics analysis as defined in the
appropriate Test Plan.
Skill Profile
Developer, Tester
6.4.7.1.3. Analyze Test Results
The objective for this task is to analyze and validate the results for each test sub-level, and
document the required actions.
Defects can frequently be grouped into categories, which allows future data analysis of
encountered errors. The best time to categorize defects is when they are resolved and the
information is still fresh. Possible classifications for error categorization include:
 Defect type - requirements, specification, design, code, test, etc.;
 Defect frequency - recurring, non-recurring;
 Defect severity - the resolution of defects is prioritized according to the following levels:
o Severity 1: a problem which must be fixed and for which no work-around is
available;
o Severity 2: a problem which must be resolved quickly and for which a work-around is available;
o Severity 3: all other problems.
The severity of the defect and the difficulty of correcting it should be documented to provide
a basis for determining resource allocation and scheduling defect correction priorities. Errors
are frequently detected faster than they can be resolved, and performing an initial defect
analysis can provide valuable information to project management for establishing priorities.
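For illustration, a simple aggregation such as the sketch below supports this kind of analysis by counting resolved defects per category; the category values shown are invented examples.

    # Hedged sketch of categorizing defects for later data analysis.
    from collections import Counter

    defects = [
        {"type": "code", "frequency": "recurring", "severity": 2},
        {"type": "design", "frequency": "non-recurring", "severity": 1},
        {"type": "code", "frequency": "non-recurring", "severity": 3},
    ]

    # Count defects per classification (type, frequency, severity).
    for field in ("type", "frequency", "severity"):
        print(field, dict(Counter(d[field] for d in defects)))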
Steps
Analyze the results for each test sub-level as follows:
 Compare the expected test results with the obtained test results;
 Analyze the defects;
 Prioritize the resolution of the defects using the severity levels;
 Identify the required actions. It is important to document the correction of the defect to
maintain proper configuration accounting. The description of the correction should
include:
o A narrative description of the correction;
o A list of program units affected;
o The number, revision, and sections of all documents affected;
o Any test procedures changed as a result of the correction.
Record the defect severity and the actions in the Defect Log.
Attach the Defect Log to the appropriate test level report for metrics analysis as defined in
the appropriate Test Plan.
Skill Profile
Developer, Tester
6.4.7.1.4. Repair Defects and Retest
The objective for this task is to implement corrective actions documented during testing.
The resolution of the defects depends on the severity level of the defect (level 1 is resolved
first, level 2 second, and level 3 last). Corrective actions usually involve:
 Adding or changing test cases according to the client authority;
 Changing or updating code. It is important to record when the implementation was
corrected, to identify which error was corrected in which version of software. The
description of the implementation should include:
o The software version in which the correction was incorporated;
o The authority for incorporating the correction.
Regression testing is used to help verify the changes. Re-testing the affected function is
necessary after the change is incorporated, since as many as 20 percent of all corrections
result in additional errors. In the event that latent defects were introduced by the correction,
one method of resolution would be to treat them as new errors and initiate a new error
report. The description of regression testing should include:
 A list of test paragraphs/objectives to be re-tested;
 The version of software used to perform regression test;
 Indication of successful/unsuccessful accomplishment of test.
Steps
Change test cases, code or both.
Re-test the affected components.
Document the changes.
Ensure exit conditions are met.
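These steps can be sketched as below: defects are resolved in severity order, the regression test package is re-run, and any latent defect introduced by a correction is treated as a new error report. The record layouts and the callable used to re-run a test are assumptions.

    # Hedged sketch of the repair-and-retest loop.
    def repair_and_retest(defects, regression_package):
        # Severity 1 is resolved first, then 2, then 3.
        for defect in sorted(defects, key=lambda d: d["severity"]):
            defect["state"] = "Fixed"
        new_defects = []
        for test in regression_package:
            if not test["run"]():
                # Latent defects introduced by a correction become new reports.
                new_defects.append({"case": test["id"], "state": "Open"})
        return new_defects

    if __name__ == "__main__":
        defects = [{"id": "D-2", "severity": 3, "state": "Open"},
                   {"id": "D-1", "severity": 1, "state": "Open"}]
        regression = [{"id": "RT-1", "run": lambda: True},
                      {"id": "RT-2", "run": lambda: False}]
        print(repair_and_retest(defects, regression))  # RT-2 opens a new report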
Skill Profile
Developer, Tester
6.4.7.2. Components
Input: Unit Test Plan (Template in Appendix 5)
Output: Unit Test Report (Template in Appendix 5)
6.4.8. Perform Integration Tests
The objective of Integration Tests is to integrate software components and then find, fix and
document defects related to how the modules (including sub-systems) interface, to ensure they are
ready for system testing.
As defined during a design phase, software components are integrated with each other to ensure
they interact as expected. This activity involves:
 Performing the steps in the integration plan;
 Executing related integration tests after the addition of each increment.
The integration tests allow the tester to verify the proper execution (correct and timely) of application
components that interface with other applications. The communication between modules within the
sub-system is tested in a controlled and isolated environment within the project. It does not require
the execution of the entire application as a unit, but of individual sub-systems within the application.
The results of integration testing are documented in the Integration Test Report.
6.4.8.1. Tasks
 Integrate Software Components: To assemble software components incrementally in accordance with what was defined during System Design.
 Execute Integration Test Plan: To confirm the Integration Test Plan and execute all integration test cases.
 Track Defects: To report and track the status of defects associated with the execution of each integration test sub-level.
 Analyze Test Results: To analyze and validate the results for each integration test sub-level and document the required actions.
 Repair Defects and Retest: To implement corrective actions documented during testing.
6.4.8.1.1. Integrate Software Components
The objective for this task is to assemble software components incrementally in accordance
with what was defined during a design phase.
During a design phase, a detailed technical framework for the construction, integration and
testing of the system should be provided. This framework:
 Provides details of required component development, modification and integration;
 Identifies the sources of components to be acquired or reused;
 Contains a detailed plan to integrate the contents of each release.
This activity performs the steps in the Integration Test Plan. In most cases, i.e. when there
is more than one element to be added incrementally to another, this task will be executed
repeatedly along with the Execute Integration Test Plan task, until the last increment is
added to the target release.
Each assembled set of software must be identified in accordance with naming conventions
established for the project.
Steps
For the framework:
 Review dependencies and sequences, and update if necessary.
For each planned system integration unit:
 Confirm entry criteria for each component to be integrated;
 Add the components to the previously tested set;
 Create or modify the required sub-components;
 Compile and link all software components;
 Perform applicable integration tests for the system integration unit.
For each planned version:
 Confirm entry criteria for each system integration unit set to be integrated;
 Add all tested system integration unit components to the previously tested set;
 Create or modify the required sub-components;
 Compile and link all software components;
 Perform applicable integration tests for the version.
For the assembled release:
 Confirm entry criteria for each version set to be integrated;
 Add all tested version components to the previously tested set;
 Create or modify the required sub-components;
 Compile and link all software components;
 Perform applicable integration tests for the complete release.
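The same incremental pattern repeats at each scope (system integration unit, version, release): confirm entry criteria, add the increment to the previously tested set, build, then run the applicable integration tests. The sketch below illustrates one increment with hypothetical hooks for each step.

    # Illustrative sketch of one integration increment; all hooks are assumed.
    def integrate_increment(tested_set, increment, entry_ok, build, run_tests):
        if not entry_ok(increment):
            raise RuntimeError("Entry criteria not met for " + increment)
        tested_set = tested_set + [increment]
        build(tested_set)      # compile and link all software components
        run_tests(tested_set)  # applicable integration tests for this scope
        return tested_set

    if __name__ == "__main__":
        tested = []
        for module in ["module A", "module B"]:
            tested = integrate_increment(
                tested, module,
                entry_ok=lambda m: True,
                build=lambda s: print("build:", s),
                run_tests=lambda s: print("test:", s),
            )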
Skill Profile
Developer
6.4.8.1.2. Execute Integration Test Plan
The objective for this task is to confirm the Integration Test Plan and execute all integration
test cases.
The following describes a typical sequence of integration test activities:
1. Ensure entry conditions are met as defined in the Integration Test Plan;
2. Execute the integration test cases by priority; the highest priority test cases are run
first;
3. Record the successful/unsuccessful (pass/fail) execution of integration test cases in
the Test Case Log.
Steps
Review the Integration Test Plan and identify any required additional integration test cases.
Complete integration test activities in accordance with the Integration Test Plan.
Attach the Test Case Log to the Integration Test Report for metrics analysis as defined in
the Integration Test Plan.
Skill Profile
Tester
6.4.8.1.3. Track Defects
The objective for this task is to report and track the status of defects associated with the
execution of each test sub-level.
Describe each detected defect in clear, precise terms, so that it can still be understood by someone
unfamiliar with the specific system, or after significant time has passed.
If appropriate care is taken in documenting errors, valuable data will be available in the
future for analysis which could identify improved methods of developing or maintaining the
system. The defect can exist in three states. These are:
 Open - a defect that has been discovered but has not been fixed yet;
 Fixed - a defect that has been opened and fixed but not yet re-tested;
 Closed - a defect that has been opened, fixed and re-tested.
All the defects detected during the execution of tests are recorded and attached to the
appropriate Test Report.
Steps
Track defects by updating the Defect Log as follows:
 Describe each detected defect in clear and precise terms;
 Record the status of the defect;
 Record the test case number to which the defect is related.
Attach the Defect Log to the appropriate Test Report for metrics analysis as defined in the
appropriate Test Plan.
Skill Profile
Developer, Tester
6.4.8.1.4. Analyze Test Results
The objective for this task is to analyze and validate the results for each test sub-level, and
document the required actions.
Defects can frequently be grouped into categories, which allow future data analysis of
encountered errors. The best time to categorize defects is when they are resolved and the
information is still fresh. Possible classifications for error categorization include:
 Defect type - requirements, specification, design, code, test, etc.;
 Defect frequency - recurring, non-recurring;
 Defect severity - the resolution of defects is prioritized according to the following levels:
o Severity 1: a problem which must be fixed and for which no work-around is available;
o Severity 2: a problem which must be resolved quickly and for which a work-around is available;
o Severity 3: all other problems.
The severity of the defect and the difficulty of correcting it should be documented to provide
a basis for determining resource allocation and scheduling defect correction priorities. Errors
are frequently detected faster than they can be resolved, and performing an initial defect
analysis can provide valuable information to project management for establishing priorities.
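As an illustration, severity-driven prioritization amounts to a simple ordering. The Python sketch below is illustrative only; the defect identifiers and the tuple layout are assumptions.

# Illustrative only: order defects for resolution by severity
# (Severity 1 first, then 2, then 3), as described above.
defects = [
    ("DEF-012", 3, "cosmetic: label misaligned"),
    ("DEF-007", 1, "crash on posting; no work-around available"),
    ("DEF-009", 2, "export fails; manual work-around exists"),
]
for defect_id, severity, summary in sorted(defects, key=lambda d: d[1]):
    print(f"Severity {severity}: {defect_id} - {summary}")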
Steps
Analyze the results for each test sub-level as follows:
 Compare the expected test results with the obtained test results;
 Analyze the defects;
 Prioritize the resolution of the defects using the severity levels;
 Identify the required actions. It is important to document the correction of the defect to
maintain proper configuration accounting. The description of the correction should
include:
o A narrative description of the correction;
o A list of program units affected;
o The number, revision, and sections of all documents affected;
o Any test procedures changed as a result of the correction.
Record the defect severity and the actions in the Defect Log.
Attach the Defect Log to the appropriate test level report for metrics analysis as defined in
the appropriate Test Plan.
Skill Profile
Developer, Tester
6.4.8.1.5. Repair Defects and Retest
The objective for this task is to implement corrective actions documented during testing.
The resolution of the defects depends on the severity level of the defect (level 1 is resolved
first, level 2 second, and level 3 last). Corrective actions usually involve:
 Adding or changing test cases, as authorized by the client;
 Changing or updating code. It is important to record when the implementation was
corrected, to identify which error was corrected in which version of software. The
description of the implementation should include:
o The software version in which the correction was incorporated;
o The authority for incorporating the correction.
Regression testing is used to help verify the changes. Re-testing the affected function is
necessary after the change is incorporated, since as many as 20 percent of all corrections
result in additional errors. In the event that latent defects were introduced by the correction,
one method of resolution would be to treat them as new errors and initiate a new error
report. The description of regression testing should include:
 A list of test paragraphs/objectives to be re-tested;
 The version of software used to perform regression test;
 Indication of successful/unsuccessful accomplishment of test.
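As an illustration, selecting what to re-test can be driven by a map from changed components to the test cases that exercise them. The Python sketch below is illustrative only; the map, the identifiers, and the function name are assumptions.

# Illustrative sketch: select the test cases affected by a correction
# for regression re-testing.
affected_tests = {
    "payment_module": ["TC-101", "TC-102", "TC-210"],
    "account_summary": ["TC-210", "TC-305"],
}

def regression_suite(changed_components):
    # Re-test every test case that exercises a changed component
    suite = set()
    for component in changed_components:
        suite.update(affected_tests.get(component, []))
    return sorted(suite)

print(regression_suite(["payment_module"]))  # ['TC-101', 'TC-102', 'TC-210']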
Steps
Change test cases, code or both.
Re-test the affected components.
Document the changes.
Ensure exit conditions are met.
Skill Profile
Developer, Tester
6.4.8.2. Components
Input (Examples):
 Construction Plan
 Software Components
 Database Components
 Integration Test Plan (Template in Appendix 5)
 Integration Test Environments
Output:
 Integration Test Report (Template in Appendix 5)
6.4.9. Perform System Tests
The objective of System Tests is to find, fix and document defects related to the performance of the
system and its interfaces, and to ensure that the system is ready for acceptance testing.
System tests allow the tester to verify the correct and timely execution of all components of the application, including its ability to interface (read, generate) with other applications.
The results of system testing are documented in the System Test Report for future metrics analysis.
6.4.9.1. Tasks
Execute System Test Plan – To confirm the System Test Plan and execute all system test cases.
Track Defects – To report and track the status of defects associated with the execution of each system test sub-level.
Analyze Test Results – To analyze and validate the results for each system test sub-level and document the required actions.
Repair Defects and Retest – To implement corrective actions documented during testing.
6.4.9.1.1. Execute System Test Plan
The objective for this task is to confirm the System Test Plan and execute all system test
cases.
The following describes a typical sequence of system test activities:
1. Ensure entry conditions are met as defined in the System Test Plan;
2. Execute the system test cases by priority; the highest priority test cases are run first;
3. Record the successful/unsuccessful (pass/fail) execution of system test cases in the Test Case Log.
Steps
Review the System Test Plan and identify any required additional system test cases.
Complete system test activities in accordance with the System Test Plan.
Attach the Test Case Log to the System Test Report for metrics analysis as defined in the
System Test Plan.
Skill Profile
Tester
6.4.9.1.2. Track Defects
The objective for this task is to report and track the status of defects associated with the
execution of each test sub-level.
Describe each detected defect in clear, precise terms, so that it can still be understood by people who are unfamiliar with the specific system or after significant time has passed.
If appropriate care is taken in documenting errors, valuable data will be available in the
future for analysis which could identify improved methods of developing or maintaining the
system. The defect can exist in three states. These are:
 Open - a defect that has been discovered but has not been fixed yet;
 Fixed - a defect that has been opened and fixed but not yet re-tested;
 Closed - a defect that has been opened, fixed and re-tested.
All the defects detected during the execution of tests are recorded and attached to the
appropriate Test Report.
Steps
Track defects by updating the Defect Log as follows:
 Describe each detected defect in clear and precise terms;
 Record the status of the defect;
 Record the test case number to which the defect is related.
Attach the Defect Log to the appropriate Test Report for metrics analysis as defined in the
appropriate Test Plan.
Skill Profile
Developer, Tester
6.4.9.1.3. Analyze Test Results
The objective for this task is to analyze and validate the results for each test sub-level, and
document the required actions.
Defects can frequently be grouped into categories, which allow future data analysis of
encountered errors. The best time to categorize defects is when they are resolved and the
information is still fresh. Possible classifications for error categorization include:
 Defect type - requirements, specification, design, code, test, etc.;
 Defect frequency - recurring, non-recurring;
 Defect severity - the resolution of defects is prioritized according to the following levels:
o Severity 1: a problem which must be fixed and for which no work-around is available;
o Severity 2: a problem which must be resolved quickly and for which a work-around is available;
o Severity 3: all other problems.
The severity of the defect and the difficulty of correcting it should be documented to provide
a basis for determining resource allocation and scheduling defect correction priorities. Errors
are frequently detected faster than they can be resolved, and performing an initial defect
analysis can provide valuable information to project management for establishing priorities.
Steps
Analyze the results for each test sub-level as follows:
 Compare the expected test results with the obtained test results;
 Analyze the defects;
 Prioritize the resolution of the defects using the severity levels;
 Identify the required actions. It is important to document the correction of the defect to
maintain proper configuration accounting. The description of the correction should
include:
o A narrative description of the correction;
o A list of program units affected;
o The number, revision, and sections of all documents affected;
o Any test procedures changed as a result of the correction.
Record the defect severity and the actions in the Defect Log.
Attach the Defect Log to the appropriate test level report for metrics analysis as defined in
the appropriate Test Plan.
Skill Profile
Developer, Tester
6.4.9.1.4. Repair Defects and Retest
The objective for this task is to implement corrective actions documented during testing.
The resolution of the defects depends on the severity level of the defect (level 1 is resolved
first, level 2 second, and level 3 last). Corrective actions usually involve:
 Adding or changing test cases, as authorized by the client;
 Changing or updating code. It is important to record when the implementation was
corrected, to identify which error was corrected in which version of software. The
description of the implementation should include:
o The software version in which the correction was incorporated;
o The authority for incorporating the correction.
Regression testing is used to help verify the changes. Re-testing the affected function is
necessary after the change is incorporated, since as many as 20 percent of all corrections
result in additional errors. In the event that latent defects were introduced by the correction,
one method of resolution would be to treat them as new errors and initiate a new error
report. The description of regression testing should include:
 A list of test paragraphs/objectives to be re-tested;
 The version of software used to perform regression test;
 Indication of successful/unsuccessful accomplishment of test.
Steps
Change test cases, code or both.
Re-test the affected components.
Document the changes.
Ensure exit conditions are met.
Skill Profile
Developer, Tester
6.4.9.2. Components
Input (Examples):
 System Test Plan (Template in Appendix 5)
 System Test Environments
Output:
 System Test Report (Template in Appendix 5)
6.4.10. Perform Acceptance Tests
The objective of Acceptance Tests is to demonstrate that the system meets all client acceptance
criteria and to record acceptance test results for future analysis.
Acceptance tests allow the tester to verify that the system meets the client requirements as
specified, and validate that it fulfils its intended use. The acceptance tests are conducted in the
client’s operational environment (or a simulated one), and typically include performance, security
and documentation testing. The tests will demonstrate that the system performs to client
expectations, so that it may be accepted.
The results of acceptance testing are documented in the Acceptance Test Report for future metrics
analysis.
6.4.10.1. Tasks
Execute Acceptance Test Plan – To confirm the Acceptance Test Plan and execute all acceptance test cases.
Track Defects – To report and track the status of defects associated with the execution of each acceptance test sub-level.
Analyze Test Results – To analyze and validate the results for each acceptance test sub-level and document the required actions.
Repair Defects and Retest – To implement corrective actions documented during testing.
6.4.10.1.1. Execute Acceptance Test Plan
The objective for this task is to confirm the Acceptance Test Plan and execute all
acceptance test cases.
The following describes a typical sequence of acceptance test activities:
1. Ensure entry conditions are met as defined in the Acceptance Test Plan;
2. Execute the acceptance test cases by priority; the highest priority test cases are run
first;
3. Record the successful/unsuccessful (pass/fail) execution of acceptance test cases in
the Test Case Log.
Steps
Review the Acceptance Test Plan and identify any required additional acceptance test
cases.
Complete acceptance test activities in accordance with the Acceptance Test Plan.
Attach the Test Case Log to the Acceptance Test Report for metrics analysis as defined in
the Acceptance Test Plan.
Skill Profile
Tester, Client Representative
6.4.10.1.2. Track Defects
The objective for this task is to report and track the status of defects associated with the
execution of each test sub-level.
Describe each detected defect in clear, precise terms, so that it can still be understood by people who are unfamiliar with the specific system or after significant time has passed.
If appropriate care is taken in documenting errors, valuable data will be available in the
future for analysis which could identify improved methods of developing or maintaining the
system. The defect can exist in three states. These are:
 Open - a defect that has been discovered but has not been fixed yet;
 Fixed - a defect that has been opened and fixed but not yet re-tested;
 Closed - a defect that has been opened, fixed and re-tested.
All the defects detected during the execution of tests are recorded and attached to the
appropriate Test Report.
Steps
Track defects by updating the Defect Log as follows:
 Describe each detected defect in clear and precise terms;
 Record the status of the defect;
 Record the test case number to which the defect is related.
Attach the Defect Log to the appropriate Test Report for metrics analysis as defined in the
appropriate Test Plan.
Skill Profile
Developer, Tester
6.4.10.1.3. Analyze Test Results
The objective for this task is to analyze and validate the results for each test sub-level, and
document the required actions.
Defects can frequently be grouped into categories, which allow future data analysis of
encountered errors. The best time to categorize defects is when they are resolved and the
information is still fresh. Possible classifications for error categorization include:
 Defect type - requirements, specification, design, code, test, etc.;
 Defect frequency - recurring, non-recurring;
 Defect severity - the resolution of defects is prioritized according to the following levels:
o Severity 1: a problem which must be fixed and for which no work-around is available;
o Severity 2: a problem which must be resolved quickly and for which a work-around is available;
o Severity 3: all other problems.
The severity of the defect and the difficulty of correcting it should be documented to provide
a basis for determining resource allocation and scheduling defect correction priorities. Errors
are frequently detected faster than they can be resolved, and performing an initial defect
analysis can provide valuable information to project management for establishing priorities.
Steps
Analyze the results for each test sub-level as follows:
 Compare the expected test results with the obtained test results;
 Analyze the defects;
 Prioritize the resolution of the defects using the severity levels;
 Identify the required actions. It is important to document the correction of the defect to
maintain proper configuration accounting. The description of the correction should
include:
o A narrative description of the correction;
o A list of program units affected;
o The number, revision, and sections of all documents affected;
o Any test procedures changed as a result of the correction.
Record the defect severity and the actions in the Defect Log.
Attach the Defect Log to the appropriate test level report for metrics analysis as defined in
the appropriate Test Plan.
Skill Profile
Developer, Tester, Client Representative
6.4.10.1.4. Repair Defects and Retest
The objective for this task is to implement corrective actions documented during testing.
The resolution of the defects depends on the severity level of the defect (level 1 is resolved
first, level 2 second, and level 3 last). Corrective actions usually involve:
 Adding or changing test cases, as authorized by the client;
 Changing or updating code. It is important to record when the implementation was
corrected, to identify which error was corrected in which version of software. The
description of the implementation should include:
o The software version in which the correction was incorporated;
o The authority for incorporating the correction.
Regression testing is used to help verify the changes. Re-testing the affected function is
necessary after the change is incorporated, since as many as 20 percent of all corrections
result in additional errors. In the event that latent defects were introduced by the correction,
one method of resolution would be to treat them as new errors and initiate a new error
report. The description of regression testing should include:
 A list of test paragraphs/objectives to be re-tested;
 The version of software used to perform regression test;
 Indication of successful/unsuccessful accomplishment of test.
Steps
Change test cases, code or both.
Re-test the affected components.
Document the changes.
Ensure exit conditions are met.
Skill Profile
Developer, Tester, Client Representative
6.4.10.2. Components
Input (Examples):
 Acceptance Test Plan (Template in Appendix 5)
 Acceptance Test Environments
Output:
 Acceptance Test Report (Template in Appendix 5)
7. GLOSSARY
Black box testing – Used in computer programming, software engineering and software testing to check that the outputs of a program, given certain inputs, conform to the functional specification of the program. The term black box indicates that the internal implementation of the program being executed is not examined by the tester. For this reason black box testing is not normally carried out by the programmer. In most real-world engineering firms, one group does design work while a separate group does the testing. Also referred to as concrete box or functional testing.
Initial Operational Capability Milestone – third milestone in RUP, at the end of the construction phase; at
this point one decides if the software, the sites, and the users are ready to go operational, without exposing
the project to high risks. This release is often called a "beta" release.
Lifecycle Architecture Milestone – second milestone in RUP, at the end of the elaboration phase; at this
point one examines the detailed system objectives and scope, the choice of architecture, and the resolution
of the major tasks.
Lifecycle Objectives Milestone – first milestone in RUP, at the end of the inception phase.
Product Release Milestone – fourth milestone in RUP, at the end of the transition phase; at this point, one
decides if the objectives were met, and if another development cycle should start. In some cases, this
milestone may coincide with the end of the inception phase for the next cycle.
Test bed – A test bed is a platform for experimentation for large development projects. It allows rigorous testing of scientific theories and new technologies. The term is used across many disciplines to describe a development environment that is shielded from the hazards of testing in a live or production environment.
UML – Unified Modelling Language; a non-proprietary object modelling and specification language used in software engineering. UML includes a standardized graphical notation that may be used to create an abstract model of a system: the UML model. While UML was designed to specify, visualize, construct, and document software-intensive systems, it is not restricted to modelling software. UML has its strengths at higher, more architectural levels; it has been used for modelling hardware (engineering systems) and is commonly used for business process modelling, systems engineering modelling, and representing organizational structure.
White box testing – It is used in computer programming, software engineering and software testing to
check that the outputs of a program, given certain inputs, conform to the structural specification of the
program. The term white box indicates that testing is done with a knowledge of the code used to execute
certain functionality. For this reason, a programmer is usually required to perform white box tests. Often,
multiple programmers will write tests based on certain code, so as to gain varying perspectives on possible
outcomes. Also referred to as glass box testing or structural testing.
8. REFERENCES
ZAMBELICH, Keith, Totally Data-Driven Automated Testing – A White Paper, Automated Testing
Specialists, Inc., 2000
SPILLNER, Andreas, The W-MODEL – Strengthening the Bond Between Development and Test, 2002
Testing Methodology – An Overview, SolutionNET, 2004
Implementing an Effective Test Management Process, Mercury, 2005
Rational Unified Process – Best Practices for Software Development Teams, Rational Software Corporation,
1998
Rational Unified Process Base Plug-in, Version 7.0, CD-ROM, IBM, 2005
Types of testing in the development process including the V model, Coley Consulting,
<http://www.coleyconsulting.co.uk/testtype.htm>
Cleanroom Software Engineering, The Data & Analysis Center for Software,
<http://www.dacs.dtic.mil/databases/url/key.hts?keycode=64>
Concert Methodology, CGI, 2000,
<http://www.intranet.cgi.ca/Concert/index_en.htm?%1%2ConcertHome_en.htm>
Wikipedia contributors, Acceptance test, Wikipedia, The Free Encyclopedia, 15-Dec-2005,
<http://en.wikipedia.org/w/index.php?title=Acceptance_test&oldid=31421946>
Wikipedia contributors, Black box testing, Wikipedia, The Free Encyclopedia, 29-Dec-2005,
<http://en.wikipedia.org/w/index.php?title=Black_box_testing&oldid=33156338>
Wikipedia contributors, Load test, Wikipedia, The Free Encyclopedia, 4-Nov-2005,
<http://en.wikipedia.org/w/index.php?title=Load_testing&oldid=27326338>
Wikipedia contributors, IEEE 829, Wikipedia, The Free Encyclopedia, 15-Dec-2005,
<http://en.wikipedia.org/w/index.php?title=IEEE_829&oldid=31450519>
Wikipedia contributors, Performance testing, Wikipedia, The Free Encyclopedia, 12-Dec-2005,
<http://en.wikipedia.org/w/index.php?title=Performance_testing&oldid=31058312>
Wikipedia contributors, Rational Unified Process, Wikipedia, The Free Encyclopedia, 5-Jan-2006,
<http://en.wikipedia.org/w/index.php?title=Rational_Unified_Process&oldid=33947191>
Wikipedia contributors, Regression testing, Wikipedia, The Free Encyclopedia, 21-Dec-2005,
<http://en.wikipedia.org/w/index.php?title=Regression_testing&oldid=32232135>
Wikipedia contributors, Smoke test, Wikipedia, The Free Encyclopedia, 27-Dec-2005,
<http://en.wikipedia.org/w/index.php?title=Smoke_test&oldid=32831863>
Wikipedia contributors, Software testing, Wikipedia, The Free Encyclopedia, 3-Jan-2006,
<http://en.wikipedia.org/w/index.php?title=Software_testing&oldid=33717177>
Wikipedia contributors, Stress testing, Wikipedia, The Free Encyclopedia, 30-Oct-2005,
<http://en.wikipedia.org/w/index.php?title=Stress_testing&oldid=26850644>
Wikipedia contributors, System functional testing, Wikipedia, The Free Encyclopedia, 6-Jul-2005,
<http://en.wikipedia.org/w/index.php?title=System_functional_testing&oldid=18281463>
Wikipedia contributors, System testing, Wikipedia, The Free Encyclopedia, 16-Nov-2005,
<http://en.wikipedia.org/w/index.php?title=System_testing&oldid=28467956>
Wikipedia contributors, Testbed, Wikipedia, The Free Encyclopedia, 22-Sep-2005,
<http://en.wikipedia.org/w/index.php?title=Testbed&oldid=23721460>
Wikipedia contributors, Test plan, Wikipedia, The Free Encyclopedia, 23-Dec-2005,
<http://en.wikipedia.org/w/index.php?title=Test_plan&oldid=32481848>
Wikipedia contributors, Test script, Wikipedia, The Free Encyclopedia, 11-Apr-2006,
<http://en.wikipedia.org/w/index.php?title=Test_script&oldid=47995971>
Wikipedia contributors, Unified Modeling Language, Wikipedia, The Free Encyclopedia, 12-Jan-2006,
<http://en.wikipedia.org/w/index.php?title=Unified_Modeling_Language&oldid=34883276>
Wikipedia contributors, V model, Wikipedia, The Free Encyclopedia, 18-Sep-2005,
<http://en.wikipedia.org/w/index.php?title=V_model&oldid=23439967>
Wikipedia contributors, White box testing, Wikipedia, The Free Encyclopedia, 3-Jan-2006,
<http://en.wikipedia.org/w/index.php?title=White_box_testing&oldid=33714644>
APPENDIX 1 – AN EXAMPLE FOR “FUNCTIONAL DECOMPOSITION METHOD”
The following steps could constitute a “Post a Payment” Test Case:
1. Access Payment Screen from Main Menu
2. Post a Payment
3. Verify Payment Updates Current Balance
4. Return to Main Menu
5. Access Account Summary Screen from Main Menu
6. Verify Account Summary Updates
7. Access Transaction History Screen from Account Summary
8. Verify Transaction History Updates
9. Return to Main Menu
A “Business Function” script and a “Subroutine” script could be written as follows:
Payment:
 Start at Main Menu
 Invoke a “Screen Navigation Function” to access the Payment Screen
 Read a data file containing specific data to enter for this test, and input the data
 Press the button or function-key required to Post the payment
 Read a data file containing specific expected results data
 Compare this data to the data which is currently displayed (actual results)
 Write any discrepancies to an Error Report
 Press button or key required to return to Main Menu or, if required, invoke a “Screen Navigation
Function” to do this.
Verify_Acct (Verify Account Summary & Transaction History):
 Start at Main Menu
 Invoke a “Screen Navigation Function” to access the Account Summary
 Read a data file containing specific expected results data
 Compare this data to the data which is currently displayed (actual results)
 Write any discrepancies to an Error Report
 Press button or key required to access Transaction History
 Read a data file containing specific expected results data
 Compare this data to the data which is currently displayed (actual results)
 Write any discrepancies to an Error Report
 Press button or key to return to Main Menu or invoke a “Screen Navigation Function”
The “Business Function” and “Subroutine” scripts invoke “User Defined Functions” to perform navigation.
The “Test Case” script would call these two scripts, and the Driver Script would call this “Test Case” script
some number of times as required to perform all the required Test Cases of this kind. In each case, the only
thing that changes is the data contained in the files that are read and processed by the “Business
Function” and “Subroutine” scripts.
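One possible shape for such a data-driven “Business Function” script, sketched in Python, is shown below. This is illustrative only: the file format, field names, and the UI helper functions passed as parameters are assumptions, not part of the cited method.

import csv

# Illustrative sketch of a data-driven "Business Function" script: the test
# data and expected results live in files, so the script logic is reused
# across all test cases of this kind.
def post_payment(input_file, expected_file, error_report,
                 navigate_to, enter_field, press_key, read_field):
    navigate_to("Payment Screen")
    with open(input_file, newline="") as f:
        for row in csv.DictReader(f):          # columns: field, value
            enter_field(row["field"], row["value"])
    press_key("F9")                            # post the payment
    with open(expected_file, newline="") as f, open(error_report, "a") as errors:
        for row in csv.DictReader(f):          # columns: field, expected
            actual = read_field(row["field"])
            if actual != row["expected"]:      # write discrepancies to report
                errors.write(f"{row['field']}: expected {row['expected']}, "
                             f"got {actual}\n")
    press_key("F12")                           # return to Main Menu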
Using this method, if we needed to process 50 different kinds of payments in order to verify all of the
possible conditions, then we would need only 4 scripts which are re-usable for all 50 cases:
1. The “Driver” script
2. The “Test Case” (Post a Payment & Verify Results) script
3. The “Payment” Business Function script
4. The “Verify Account Summary & Transaction History” Subroutine script
If we were using Record/Playback, we would now have 50 scripts, each containing hard-coded data, that
would have to be maintained.
This method, however, requires only that we add the data-files required for each test, and these can easily
be updated/maintained using Notepad or some such text-editor. Note that updating these files does not
require any knowledge of the automated tool, scripting, programming, etc., meaning that non-technical
testers can perform this function, while one “technical” tester can create and maintain the automated scripts.
It should be noted that the “Subroutine” script, which verifies the Account Summary and Transaction
History, can also be used by other test cases and business functions (which is why it is classified as a
“Subroutine” script rather than a “Business Function” script) – Payment reversals, for example. This means
that if we also need to perform 50 “payment reversals”, we only need to develop three additional scripts.
1. The “Driver” script
2. The “Test Case” (Reverse a Payment & Verify Results) script
3. The “Payment Reversal” Business Function script
Since we already had the original 4 scripts, we can quickly clone these three new scripts from the originals
(which takes hardly any time at all).
We can use the “Subroutine” script (Verify Account Summary & Transaction History) as-is without any
modifications at all.
If different accounts need to be used, then all we have to do is update the Data-Files, and not the actual
scripts. It ought to be obvious that this is a much more cost-effective method than the Record/Playback
method.
APPENDIX 2 – AN EXAMPLE FOR “KEY-WORD DRIVEN METHOD”
Consider the following example of our previous “Post a Payment” Test Case:
Key_Word       Field/Screen Name   Input/Verification Data   Comment                  Pass/Fail
Start_Test:    Screen              Main Menu                 Verify Starting Point
Enter:         Selection           3                         Select Payment Option
Action:        Press_Key           F4                        Access Payment Screen
Verify:        Screen              Payment Posting           Verify Screen accessed
Enter:         Payment Amount      125.87                    Enter Payment data
               Payment Method      Check
Action:        Press_Key           F9                        Process Payment
Verify:        Screen              Payment Screen            Verify screen remains
Verify_Data:   Payment Amount      $ 125.87                  Verify updated data
               Current Balance     $1,309.77
               Status Message      Payment Posted
Action:        Press_Key           F12                       Return to Main Menu
Verify:        Screen              Main Menu                 Verify return to Menu
Each of the “Key Words” in Column 1 causes a “Utility Script” to be called which processes the remaining
columns as input parameters in order to perform specific functions. Note that this could also be run as a
manual test. The test engineer must develop and document the test case anyway – why not create the
automated test case at the same time?
In the original document, the data shown in red indicates what would need to be changed if one were to copy this test case to create additional tests.
How does this work?
When a Key Word is encountered, a list is created using data from the remaining columns. This continues until a “null” (blank) column-2 is encountered. The “Controller” script then calls a Utility Script associated with the Key-Word, and passes the “list” as an input parameter. The Utility Script continues processing until “end-of-list”, and then returns to the “Controller” script, which continues processing the file until “end-of-file” is reached.
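As an illustration, a minimal controller for this dispatch loop might look like the Python sketch below; the row layout and the utility-script registry are assumptions for illustration.

# Illustrative controller sketch: reads the keyword table row by row,
# accumulates each Key Word's remaining columns into a list, and dispatches
# that list to the matching utility function.
def controller(rows, utilities):
    # rows: each row is [keyword, column2, column3, column4]; a blank
    # keyword means the row continues the current Key Word's list.
    pending, params = None, []
    for row in rows:
        keyword = row[0]
        if keyword and pending:
            utilities[pending](params)   # dispatch the completed list
            params = []
        if keyword:
            pending = keyword
        params.append(row[1:])           # remaining columns as parameters
    if pending:                          # flush at end-of-file
        utilities[pending](params)

Here, utilities would map each Key Word (e.g. “Enter:”, “Verify:”) to the corresponding Utility Script function.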
APPENDIX 3 – CONCERT METHODOLOGY OVERVIEW
1. INTRODUCTION
There is a critical need for a common framework that can be used by IS/IT practitioners to address the
creation and management of systems in a consistent and effective manner. The Concert Methodology is a
major component of the processes established by CGI to address this need. Concert provides developers
and managers with a life cycle model for IS/IT solution development and delivery, and it establishes a
standard approach to execution of related activities.
The Concert methodology facilitates synergy and effective communication among all the participants of a system development project (management, client representatives, and development team members), by giving them a common base of reference.
2. METHODOLOGY
In order for systems to meet the needs of the Client while being developed efficiently, the active
collaboration of all participants is essential. Problems must be analyzed and addressed in the correct order
so that the necessary information to make decisions concerning directions, solutions, and installations is
available when required.
The structure of this methodology is detailed in the figure below.
Figure 1 - Concert System Development and Related Processes
The method for development and implementation of a system is divided into eight phases.
Preliminary Analysis – To recommend a viable information system solution based on the client's requirements, constraints and compliance with both applicable business and technological directions.
Analysis – To translate requirements into specifications needed for the development and implementation of an IS/IT solution; to develop strategies to optimally deliver client requirements.
Design – To produce the detailed architecture and design of the IS/IT solution, and to develop plans in accordance with the strategies established during the Analysis Phase.
Construction – To produce executable software components that properly reflect the design in accordance with the Construction Plan.
Integration and System Test – To construct the system by progressively adding increments and testing each resulting assembly to ensure it operates properly; on completion of the integration of required components, to complete testing of all system components to verify that they execute properly, and interface properly among themselves and with related applications.
Client Acceptance Test – To demonstrate that the system meets all client acceptance criteria.
Implementation – To make the solution available to the end users and ensure they can assume ownership.
Deployment – To manage planning and execution of Implementation Phase activities to enable rollout to multiple sites.
Table 1 – Methodology Phases
 Synchronization Points
The development process, as well as each of the related processes, has an associated life cycle; that is, each is established during project initiation in response to specified criteria, and its required activities are executed at appropriate points as the development project progresses.
However, none of them can be executed in a completely stand-alone manner, and effective alignment of activities that are specified in physically separate frameworks can often be challenging and time consuming. Concert identifies the "appropriate points" where, in order to properly complete an activity, or to most effectively use the information provided by execution of an activity, it is necessary to execute other activities defined within one or more of the related processes.
This enables developers responsible for performing any activity defined in Concert to effectively align
related components of the Project Management (PM), Quality Assurance (QA), Configuration Management
(CM) and Reuse Management (ReM) processes.
In short, synchronization points are defined development life cycle checkpoints where corresponding
activities within one or more of the other processes should take place. At each of these checkpoints, the
methodology includes instructions referring to specific aspects of the other process that apply to that
development activity.
2.1. Preliminary Analysis
This phase is normally performed prior to investing significant effort in developing a system. It is
intended to provide decision makers with sufficient information to increase the level of confidence they
will have in determining whether to proceed further with the project.
This is done by examining the following:
 The current situation which includes reference to the context and functionality of related systems,
user needs/expectations, specific objectives to be achieved and constraints to be respected.
 Current and planned business/technological direction that impacts the methods and technology
that can be considered or used when conceiving the information system solutions.
 System requirements and constraints as they pertain to the needs and expectations of the client.
Entry:
 Client Requirements;
 Project Constraints and other compliance criteria.
Exit:
 A set of system requirements that is based on an analysis of the specific intended use of the system, and matches the customer's stated and implied needs;
 A viable (conceptual) solution to the system requirements, along with estimates of (development and operational) costs and benefits.
Table 2 – Preliminary Analysis entry and exit conditions
The objective of the Preliminary Analysis phase is to recommend a viable information system solution based on the client's requirements, constraints and compliance with both applicable business and technological directions.
The work performed in this phase will become valuable input for the subsequent phases; the level of detail needed to convey the information depends on the needs of the client's decision makers.
2.2. Analysis
This phase is performed to refine the work completed during the Preliminary Analysis and will serve to
establish the baseline by which the subsequent development phases will be defined. During this phase,
the proposed functionality and features of the system are identified and translated into technical terms.
The scope of this phase always includes the development of a System Specification (a detailed
specification of what the system will do), followed by the definition of a high level Systems Architecture
and the descriptions of the environments needed for the development and implementation activities.
Wherever feasible, prototypes are built to assist the client to:
 Clarify and refine requirements;
 Confirm selected features of the specified system;
 Provide immediate and informed feedback on system's usability (e.g. GUI design).
Entry:
 The client's general needs are clearly identified, and have been analyzed within the context of the operational environment associated with the proposed solution.
Exit:
 Requirements have been rigorously translated into specifications, and these have been validated with the client;
 The requirements allocated to all components of the system architecture, and their interfaces, have been defined to match the customer's needs;
 Analyzed, correct and testable software requirements have been identified;
 The impact of defined software and other system requirements on the operational environment is understood;
 A relevant software release strategy that defines the priority for implementing software requirements has been developed.
Table 3 – Analysis entry and exit conditions
The objective of this phase is to translate requirements into specifications needed for the design and construction of an information system solution, to analyze issues related to solution development and delivery (including operational impacts), and to develop appropriate strategies.
2.3. Design
The design process enables the development of detailed engineering specifications for all aspects of the
system. It addresses four distinct but interrelated sets of requirements: data design, architectural design,
interface design, and procedural design.
This phase:
 Completes the external design of the system, by fully and completely defining both interfaces and
processes (this includes manual and automated ones), data elements, and network components of
the system. When applicable, the work completed for the prototypes is refined into full detailed
specifications.
 Generates detailed designs for:
o components of each software item to be developed, to ensure that all software requirements are allocated;
o interfaces external to software items and between software components;
o databases.
Entry:
 System Specification (must be officially approved);
 System Architecture Specification.
Exit:
 The design is traceable to the analysis model, exhibits uniformity and integration, and is compliant with applicable standards and requirements;
 Feasibility of construction, integration and testing, and of future operation and maintenance, has been established.
Table 4 – Design entry and exit conditions
This phase’s objective is to produce the detailed architecture and design of the system by performing design activities in accordance with the strategies established during the Analysis Phase, and to produce detailed plans for the construction, integration and testing of system components.
2.4. Construction
This phase consists of building the physical components (software code, environment setup and related documentation) that will be needed to implement the system. Since unit level testing is best performed by those who developed the code, unit testing has been included as part of this phase.
Typically this phase consists of the following activities:
 Creation of database and construction environment infrastructure;
 Coding, programming and debugging;
 Creation of system control, conversion, and backup/recovery processes;
 Unit testing, defect reporting and repairing;
 Technical writing for the supporting documentation.
Entry:
 The System Design, Environment Specification, Operational Design, Construction Plan, and Unit Test Plan components are up to date and complete.
Exit: software code has been successfully evaluated for:
 Traceability and consistency with the requirements and design of the software item;
 Test coverage of units;
 Appropriateness of coding methods and standards used.
Table 5 – Construction entry and exit conditions
This phase’s objective is to produce system components that properly reflect their design, in accordance with the Construction Plan.
2.5. Integration and System Test
This phase consists of two major sets of activities.
The first set requires establishing appropriate component integration and integration test environments, followed by iterative sets of tasks that integrate components and then perform applicable Integration Tests.
After that process is completed for all components to be integrated, the next steps are to establish appropriate System Test environments, followed by performance of each specified System Test sub-level.
The Construction Plan contains the dependencies and sequences for managing integration.
The Integration Test Plan provides details for addressing required Integration Test sub-levels (including
testing of new functions).
The System Test Plan provides details for addressing required System Test sub-levels (including
operability testing).
Entry:
 Software integration plans that include test requirements, procedures, data, responsibilities and schedules;
 System integration plans that include test requirements, procedures, data, responsibilities and schedules;
 Satisfactory completion of Unit Test activities;
 Confirmation that each component (purchased, built, modified, etc.) meets its Specification and Design criteria.
Exit:
 All anomalies identified during the testing processes scheduled for this phase have been satisfactorily addressed;
 An integrated system demonstrating compliance with the system requirements (functional, non-functional, operational and maintenance).
Table 6 – Integration and System Test entry and exit conditions
The objective of the Integration and System Test phase is to construct the system by progressively adding increments and testing each resulting assembly to ensure it operates properly. The complete testing of all system components verifies that they execute properly and interface properly among themselves and with related applications, and that all functional, performance, operational and other requirements have been satisfactorily addressed.
2.6. Client Acceptance Test
Client Acceptance Tests are conducted in accordance with the Client Acceptance Test Plan (prepared
during the Design phase) and in a manner that will increase the client's confidence level sufficiently to
declare the system ready for implementation and deployment. The tests allow the client to verify the
system meets the requirements as specified and validate that it fulfils its intended use. Special attention
is given to conducting these tests in a manner that replicates the real conditions that will be encountered
when the client uses the system.
During this phase, the following is performed:
 Client test environment is established (this may include elements of the actual production
environment or simulations of the target production environment);
 Performance, security and documentation testing to ensure the system meets pass/fail criteria
(meets applicable specification, design, and operational requirements) and works as expected
(intended use) in accordance with the Client Acceptance Test Plan. Test coverage normally
includes software, hardware, training, guides, documentation, and manual procedures;
 Tracking and resolution of reported defects;
 Knowledge transfer to enable effective execution of applicable test sub-levels.
Entry:
 All anomalies identified during performance of the Integration and System Test processes have been satisfactorily addressed;
 Client Acceptance criteria have been established to the satisfaction of each stakeholder in this process, and are reflected in a Client Acceptance Test Plan.
Exit:
 All anomalies identified during the Client Acceptance Test process have been satisfactorily addressed.
Table 7 – Client Acceptance Test entry and exit conditions
This phase’s objective is to demonstrate that the system meets all client acceptance criteria.
2.7. Implementation
This phase is performed in accordance with the Delivery Plan and includes activities related to:
 Releasing the deliverables to the user community;
 Converting existing data to make it available for the new system;
 Assisting the client in tuning the new system;
 Providing training and support for using the system;
 Installing support facilities to manage system performance requirements.
Entry:
 All anomalies identified during the Client Acceptance Test have been satisfactorily addressed for the components that are being deployed;
 Customer requirements have been defined to the satisfaction of each stakeholder in this process, and are reflected in the Delivery Plan;
 Roles and Responsibilities are assigned and understood, and required facilities and resources are available.
Exit:
 Completion of all tasks required to implement the delivery plan in the designated sites;
 All anomalies identified during the Implementation Phase have been satisfactorily addressed for the components that are being deployed;
 Agreement between project stakeholders that development project responsibilities have been satisfactorily addressed.
Table 8 – Implementation entry and exit conditions
The objective of the Implementation phase is to make the solution available to the client and ensure they can assume ownership.
2.8. Deployment
The Implementation Phase made the system available to the first set of selected users. The successive
rollout to additional sites, or additional sets of users, is the focus of Deployment Phase activities.
It is important that project planning for deployment explicitly tailors each of the Implementation Phase activities that are to be executed to support the implementation of the system at each site. The deployment must reflect what has been done already and what is required for the designated sites. As a result of the lessons learned from each site, the Delivery Plan is updated when needed.
Entry:
 All anomalies identified during the Client Acceptance Test process have been satisfactorily addressed for the components that are being deployed;
 All anomalies identified during previous executions of the Implementation process (e.g. for other sites) have been satisfactorily addressed;
 Customer requirements have been defined to the satisfaction of each stakeholder in this process, and are reflected in the Deployment Plan;
 Roles and Responsibilities are assigned and understood, and required facilities and resources are available.
Exit:
 Completion of all tasks required to implement the deployment plan in the designated sites;
 Agreement between project stakeholders that related development responsibilities have been satisfactorily addressed.
Table 9 – Deployment entry and exit conditions
The objective of the Deployment phase is to manage planning and execution of Implementation Phase activities to enable rollout to multiple sites.
3. TESTING
The testing methodology in Concert is based on Full Life-cycle Testing (FLT), a collection of best testing practices being used in industry. It supports the premise that testing must be managed as a
project within a project, using standard project management disciplines. In addition to calling for more
attention to testing early in the project life cycle, the methodology advocates more use of testing automation
throughout the project. It also calls for taking measurements and tracking quality throughout the project so
that, at every step of the way, the risks of proceeding to the next phase are known. With its emphasis on
disciplined management of the testing process, the testing methodology is fully integrated with Concert's
development process activities and tasks.
The testing process is spread across the different phases of Concert as shown in the figure below.
Figure 2 – Testing Process in Concert Methodology
Various components are produced in the different phases to successfully and efficiently manage and
execute testing and to document the results.
4. SUMMARY
As stated earlier, the most critical aspect of doing things right is having an understanding of what is required
to be done. Every project team member is responsible for ensuring that his or her activities are planned and
executed within an appropriate management and technical framework. In order to establish this framework,
one must always start with an understanding of requirements. It is essential that developers, reviewers, and
testers are aware of:
 Completion criteria for the activity being performed or the component to be produced (or reviewed);
 Requirements to be addressed, i.e. what has to be done in order to complete an activity (or task);
 Component compliance requirements;
 Related roles and responsibilities for addressing various sets of requirements;
 Available tools, e.g. templates and other job aids, examples, and reference material.
It is therefore important to review and understand all relevant components of the methodology, and to use it
as a base for executing all tasks.
5. ARCHITECTURE
As thinking about a system progresses from the general to the particular, it should be guided by an
architectural perspective to help ensure that all the parts continue to work together, and that nothing required
is left out. Taking different perspectives on System Specification elements and relationships facilitates an
orderly transition from Analysis through Design.
With most non-trivial systems, it is difficult to capture all the desired architectural ideas in a single
representation. This methodology therefore recommends constructing up to four different "views" of the
architecture, to convey all the information required. We call these four views by the following names:
 Conceptual View;
 Process View;
 Development View;
 Physical View.
Figure 3 – View Model
Each of these views provides a different representation of the system, each valid and useful in its own
right. The conceptual and development views show the static system structure, while the process and
physical views show the dynamic, or runtime, system structure. The conceptual and development views,
though very close, address very different concerns, as described above. Further, there is no requirement or
implication that these structures bear any topological resemblance to each other. While the views give
different structural perspectives on the system, they are not fully independent: elements of one view will be
"connected" to elements of other views, and one needs to reason about those connections.
6. GLOSSARY
Expression   Meaning
IS/IT        Information Services / Information Technologies
PM           Project Manager
QA           Quality Assurance
CM           Configuration Management
ReM          Reuse Management
FLT          Full Life-cycle Testing
Table 10 – Glossary
APPENDIX 4 – TEST TEAM ROLES AND RESPONSIBILITIES
This appendix presents the roles and the responsibilities of the test team members.
Roles and responsibilities of test team members
Client Representative
The client representative is the spokesperson assigned to represent the
client’s interests (end users, operator, IT client, etc.) during the development
of the system. They normally interact with the development team for
validation and verification purposes.
Developer
This individual's primary responsibility is the programming of
the system. This person is involved with design, unit testing and
programming activities. The developer is called on to participate early in the
development cycle in order to better understand the user needs and the
boundaries of the project. Maintaining a disciplined, versatile and creative
approach, the developer combines functional and technical skills so that
innovative, yet robust, solutions can be programmed.
Project Manager
As defined in the Project Management Framework, the project manager:
 Plans and oversees the project;
 Acts as the CGI project team's contact with the client's internal project
team;
 Ensures that the necessary management procedures are implemented;
 Keeps the project on time and on budget;
 Supervises the project team;
 Provides information on the project's status to those outside the project
team.
Note that the project manager is also responsible for certain reuse
management activities.
System Administrator
The system administrator is responsible for providing technical support to
the development team during the development and implementation of the
system in the operating environment. Pro-activity, autonomy and good
technical skills are traits that describe good system administrators.
Technical Leader
The technical leader is primarily responsible for the technical directions
followed by the other developers on the team. Having excellent technical
skills, this person must understand the entire development life cycle and be
able to interact with non-technical people.
Tester
The tester is primarily involved in the design, construction and execution of
test cases, and in defect reporting. The following is a typical list of activities
the tester is required to perform (a brief illustrative sketch follows the list):
 Translate client requirements to test scenarios
 Identify the test conditions
 Translate the test conditions into machine-readable test cases
 Create the input, test beds and test scripts necessary to run the test cases
 Run test cases
 Report the anomalies associated with the execution of test cases
 Analyse the test results and ensure all the requirements have been
met
 Record the test results (defects and test cases)
 Retest all the components
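As a brief illustration of these activities, the sketch below shows one way a client requirement might be
translated into a machine-readable test case and executed. The requirement identifier, the function under
test and the expected values are all invented for this example:

    # Purely illustrative: the requirement "an order total is the sum of its
    # line amounts" translated into a machine-readable test case.
    import unittest

    def order_total(line_amounts):        # stand-in for the system under test
        return sum(line_amounts)

    class TestRequirementR12(unittest.TestCase):
        """Test case traced back to hypothetical requirement R-12."""

        def test_total_is_sum_of_lines(self):
            self.assertEqual(order_total([10.0, 2.5, 7.5]), 20.0)

        def test_empty_order_totals_zero(self):
            self.assertEqual(order_total([]), 0)

    if __name__ == "__main__":
        unittest.main()   # anomalies surface as failures, to be reported as defects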
APPENDIX 5 – DOCUMENT TEMPLATES