CHAPTER 11
ANALYSIS OF ALGORITHM EFFICIENCY
Copyright © Cengage Learning. All rights reserved.
SECTION 11.3
Application: Analysis of Algorithm Efficiency I
The Sequential Search Algorithm
The object of a search algorithm is to hunt through an array
of data in an attempt to find a particular item x.
In a sequential search, x is compared to the first item in the
array, then to the second, then to the third, and so on.
The search is stopped if a match is found at any stage.
On the other hand, if the entire array is processed without
finding a match, then x is not in the array.
An example of a sequential search is shown diagrammatically
in Figure 11.3.1.
Sequential Search of a[1], a[2], . . . , a[7] for x where x = a[5]
Figure 11.3.1
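Written out as code, the search just described might look like the following sketch (Python, using 0-based indexing; the function name sequential_search is a choice made here for illustration):

def sequential_search(a, x):
    """Compare x to a[0], then a[1], and so on; stop at the first match."""
    for i in range(len(a)):
        if a[i] == x:
            return i        # a match is found, so the search stops
    return None             # the entire array was processed without a match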
Example 4 – Best- and Worst-Case Orders for Sequential Search
Find best- and worst-case orders for the sequential search
algorithm from among the set of power functions.
Solution:
Suppose the sequential search algorithm is applied to an
input array a[1], a[2], . . . , a[n] to find an item x.
In the best case, the algorithm requires only one
comparison between x and the items in a[1], a[2], . . . , a[n].
This occurs when x is the first item in the array. Thus in the
best case, the sequential search algorithm is Θ(1). (Note that
1 = n⁰, which is a power function.)
In the worst case, however, the algorithm requires n
comparisons. This occurs when x = a[n] or when x does not
appear in the array at all.
Thus in the worst case, the sequential search algorithm is
Θ(n).
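As a quick illustration of the two cases (a self-contained sketch; the array values are made up for the example), the comparisons can be counted directly:

def count_comparisons(a, x):
    count = 0
    for item in a:
        count += 1                   # one comparison of x with an array item
        if item == x:
            break                    # a match stops the search
    return count

a = [7, 3, 9, 1, 5, 8, 2]            # n = 7, illustrative values
print(count_comparisons(a, 7))       # 1: x is the first item (best case)
print(count_comparisons(a, 4))       # 7: x is not in the array (worst case)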
The Insertion Sort Algorithm
Insertion sort is an algorithm for arranging the items in an
array into ascending order. Initially, the second item is
compared to the first.
If the second item is less than the first, their values are
interchanged, and as a result the first two array items are in
ascending order.
The idea of the algorithm is gradually to lengthen the
section of the array that is known to be in ascending order
by inserting each subsequent array item into its correct
position relative to the preceding ones.
When the last item has been placed, the entire array is in
ascending order.
Figure 11.3.2 illustrates the action of step k of insertion sort
on an array a[1], a[2], a[3], . . . , a[n].
Step k of Insertion Sort
Figure 11.3.2
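For instance (with illustrative values), insertion sort applied to the array 5, 3, 4, 1 proceeds as follows: after step 2 the array is 3, 5, 4, 1; after step 3 it is 3, 4, 5, 1; and after step 4 it is 1, 3, 4, 5, at which point the entire array is in ascending order.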
Two aspects of algorithm efficiency are important: the
amount of time required to execute the algorithm and the
amount of memory space needed when it is run.
Occasionally, one algorithm may make more efficient use
of time but less efficient use of memory space than
another, forcing a trade-off based on the resources
available to the user.
Time Efficiency of an Algorithm
How can the time efficiency of an algorithm be calculated?
The answer depends on several factors.
One is the size of the set of data that is input to the
algorithm; for example, it takes longer for a sort algorithm
to process 1,000,000 items than 100 items.
Consequently, the execution time of an algorithm is
generally expressed as a function of its input size.
Another factor that may affect the run time of an algorithm
is the nature of the input data.
For instance, a program that searches sequentially through
a list of length n to find a data item requires only one step if
the item is first on the list, but it uses n steps if the item
is last on the list.
Thus algorithms are frequently analyzed in terms of their
“best case,” “worst case,” and “average case”
performances for an input of size n.
Roughly speaking, the analysis of an algorithm for time
efficiency begins by trying to count the number of
elementary operations that must be performed when the
algorithm is executed with an input of size n (in the best
case, worst case, or average case).
What is classified as an “elementary operation” may vary
depending on the nature of the problem the algorithms
being compared are designed to solve.
For instance, to compare two algorithms for evaluating a
polynomial, the crucial issue is the number of additions and
multiplications that are needed, whereas to compare two
algorithms for searching a list to find a particular element,
the important distinction is the number of comparisons that
are required.
As is common, we will classify the following as elementary
operations: addition, subtraction, multiplication, division,
and comparisons that are indicated explicitly in an
if-statement using one of the relational symbols
<, , >, , =, or ≠.
When algorithms are implemented in a particular
programming language and run on a particular computer,
some operations are executed faster than others, and, of
course, there are differences in execution times from one
machine to another.
In certain practical situations these factors are taken into
account when we decide which algorithm or which machine
to use to solve a particular problem.
In other cases, however, the machine is fixed, and rough
estimates are all that we need to determine the clear
superiority of one algorithm over another.
Since each elementary operation is executed in time no
longer than the slowest, the time efficiency of an algorithm
is approximately proportional to the number of elementary
operations required to execute the algorithm.
Some of the orders most commonly used to describe
algorithm efficiencies are shown in Table 11.3.1.
Time Comparisons of Some Algorithm Orders
Table 11.3.1
As you see from the table, differences between the orders
of various types of algorithms are more than astronomical.
The time required for an algorithm of order 2ⁿ to operate on
a data set of size 100,000 is approximately 10^30,076 times
the estimated 15 billion years since the universe began
(according to one theory of cosmology).
On the other hand, an algorithm of order log₂ n needs at
most a fraction of a second to process the same data set.
The next example looks at an algorithm segment that
contains a nested loop.
Example 2 – An Order for an Algorithm with a Nested Loop
Assume n is a positive integer and consider the following
algorithm segment:
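The following sketch (written here in Python; the variable names s and n are illustrative assumptions) has the structure that the solution below assumes: an outer loop with i running from 1 to n, an inner loop with j running from 1 to i, and an assignment that performs two additions, one subtraction, and one multiplication on each pass through the inner loop.

n = 10                               # any positive integer (example value)
s = 0
for i in range(1, n + 1):            # i = 1, 2, . . . , n
    for j in range(1, i + 1):        # j = 1, 2, . . . , i
        s = s + j * (i - j + 1)      # two additions, one subtraction, one multiplication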
a. Compute the actual number of additions, subtractions,
and multiplications that must be performed when this
algorithm segment is executed.
b. Use the theorem on polynomial orders to find an order
for this algorithm segment.
Solution:
a. There are two additions, one multiplication, and one
subtraction for each iteration of the inner loop, so the
total number of additions, multiplications, and
subtractions is four times the number of iterations of the
inner loop.
Now the inner loop is iterated once for each pair of values
of i and j with 1 ≤ j ≤ i ≤ n.
You can see this easily if you construct a table that shows
the values of i and j for which the statements in the inner
loop are executed.
i    1    2    2    3    3    3    · · ·    n    n    · · ·    n
j    1    1    2    1    2    3    · · ·    1    2    · · ·    n

There is one iteration for each column in the table.
Hence the total number of iterations of the inner loop is
1 + 2 + 3 + · · · + n = n(n + 1)/2, and so the number of
additions, subtractions, and multiplications is
4 · n(n + 1)/2 = 2n² + 2n.
An alternative method for computing the number of
columns of the table:
Observe that the number of columns in the table is the
same as the number of ways to place two ×'s in
n categories, 1, 2, . . . , n, where the location of the ×'s
indicates the values of i and j with j ≤ i.
By Theorem 9.6.1, this number is (2 + n − 1 choose 2) =
(n + 1 choose 2) = n(n + 1)/2, which agrees with the total
found above.
Although, for this example, the alternative method is
more complicated than the one preceding it, it is simpler
when the number of loop nestings exceeds two.
b. By the theorem on polynomial orders, 2n² + 2n is Θ(n²),
and so this algorithm segment is Θ(n²).
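As a quick check of part (a) (a sketch that assumes the segment has the form shown earlier), counting four operations per iteration of the inner loop reproduces 2n² + 2n:

n = 10
count = 0
for i in range(1, n + 1):
    for j in range(1, i + 1):
        count += 4                   # two additions, one subtraction, one multiplication
print(count, 2 * n * n + 2 * n)      # both values are 220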
The following is a formal algorithm for insertion sort.
Algorithm 11.3.1 Insertion Sort
[The aim of this algorithm is to take an array
a[1], a[2], a[3], . . . , a[n], where n ≥ 1, and reorder it. The
output array is also denoted a[1], a[2], a[3], . . . , a[n]. It has
the same values as the input array, but they are in
ascending order. In the kth step,
a[1], a[2], a[3], . . . , a[k – 1] is in ascending order, and a[k]
is inserted into the correct position with respect to it.]
Input: n [a positive integer], a[1], a[2], a[3], . . . , a[n] [an
array of data items capable of being ordered]
Algorithm Body:
for k := 2 to n
   [Compare a[k] to previous items in the array
   a[1], a[2], a[3], . . . , a[k – 1], starting from the largest and
   moving downward. Whenever a[k] is less than a preceding
   array item, increment the index of the preceding item to
   move it one position to the right. As soon as a[k] is greater
   than or equal to an array item, insert the value of a[k] to the
   right of that item. If a[k] is greater than or equal to a[k – 1],
   then leave the value of a[k] unchanged.]
   x := a[k]
   j := k – 1
   while (j ≠ 0 and a[j] > x)
      a[j + 1] := a[j]
      j := j – 1
   end while
   a[j + 1] := x
next k
Output: a[1], a[2], a[3], . . . , a[n] [in ascending order]
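The algorithm translates directly into runnable code. The sketch below (Python, 0-based indexing; the function name insertion_sort and the comparison counter are additions made here for illustration) charges two comparisons per attempted iteration of the while loop, the counting convention used in Example 6.

def insertion_sort(a):
    """Sort the list a in place into ascending order and return the number of
    comparisons, charging two per attempted iteration of the while loop."""
    comparisons = 0
    for k in range(1, len(a)):       # plays the role of k = 2, . . . , n
        x = a[k]
        j = k - 1
        while True:
            comparisons += 2         # charge the two tests of one attempted iteration
            if j < 0 or a[j] <= x:   # the loop condition fails, so stop shifting
                break
            a[j + 1] = a[j]          # move the larger item one position to the right
            j -= 1
        a[j + 1] = x                 # insert the value of a[k] into its correct position
    return comparisons

data = [5, 4, 3, 2, 1]               # reverse order: the worst case for n = 5
print(insertion_sort(data))          # 28, which equals 5**2 + 5 - 2
print(data)                          # [1, 2, 3, 4, 5]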
Example 6 – Finding a Worst-Case Order for Insertion Sort
a. What is the maximum number of comparisons that are
performed when insertion sort is applied to the array
a[1], a[2], a[3], . . . , a[n]?
b. Use the theorem on polynomial orders to find a worst-case order for insertion sort.
Solution:
a. In each attempted iteration of the while loop, two explicit
comparisons are made: one to test whether j ≠ 0 and the
other to test whether a[ j ] > x.
During the time that a[k] is put into position relative to
a[1], a[2], . . . , a[k – 1], the maximum number of attempted
iterations of the while loop is k.
This happens when a[k] is less than every
a[1], a[2], . . . , a[k – 1]; on the kth attempted iteration, the
condition of the while loop is not satisfied because j = 0.
Thus the maximum number of comparisons for a given
value of k is 2k.
Because k goes from 2 to n, it follows that the maximum
total number of comparisons occurs when the items in the
array are in reverse order, and it equals

2 · 2 + 2 · 3 + · · · + 2 · n = 2(2 + 3 + · · · + n)
= 2(n(n + 1)/2 – 1) = n² + n – 2.
b. By the theorem on polynomial orders, n² + n – 2 is Θ(n²),
and so the insertion sort algorithm has worst-case order
Θ(n²).
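For instance, when n = 5 the maximum is 2 · 2 + 2 · 3 + 2 · 4 + 2 · 5 = 28 = 5² + 5 – 2, which matches the comparison count reported by the sketch following Algorithm 11.3.1 for a reverse-ordered array of five items.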