TIME COMPLEXITY OF
ALGORITHM
“ As soon as an Analytical
Engine exists, it will
necessarily guide the future
course of science.
Whenever any result is
sought by its aid, the
question will then arise –
By what course of
calculation can these
results be arrived at by the
machine in the shortest
time? ”
- Charles Babbage, 1864
1
Overview
 Introduction.
 Importance & efficiency of time complexity.
 Different notations.
 Different cases of time complexity.
 Derivation of the time complexity of a searching algorithm.
 Derivation of the time complexity of a sorting algorithm.
2
Definition: Algorithm
An algorithm, named for the ninth-century Persian mathematician al-Khowarizmi, is simply a set of rules for carrying out some calculations, either by hand or, more usually, on a machine.
Informally, an algorithm is any well-defined computational procedure that takes some value, or set of values, as input and produces some value, or set of values, as output.
3
Criteria of Algorithms
All algorithms must satisfy the following criteria:
1. Input: Zero or more quantities are externally supplied.
2. Output: At least one quantity is produced.
3. Definiteness: Each instruction is clear and unambiguous.
4. Finiteness: If we trace out the instructions of an algorithm, then for all cases the algorithm terminates after a finite number of steps.
5. Effectiveness: Each instruction must be very basic so that it can be carried out.
4
In the fig., S is the starting node and G is the goal node. In the graph, the numeric values represent the corresponding costs of going from one node to another. Our aim is to traverse from S to G while spending the least amount. For convenience we neglect the other factors like time, life risk and so on. In this real-life problem we have to pay a higher cost for the selected path if we don't follow an algorithm. If we choose S-A-G, S-B-A-G or S-C-B-G, we have to pay more than if we choose S-C-D-G.
5
(ii) A Sorting Problem:
 Input: A sequence of n numbers <a1, a2, …, an>.
 Output: A permutation (reordering) <a1', a2', …, an'> of the input sequence such that a1' ≤ a2' ≤ … ≤ an'.
An instance of a problem consists of the input needed to compute a solution to the problem.
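For example (a standard illustration), the instance <31, 41, 59, 26, 41, 58> of the sorting problem has output <26, 31, 41, 41, 58, 59>.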
6
(iii) In Internet technology too, algorithms are applied to manage and manipulate large amounts of information.
Public-key cryptography (network security) and digital signatures are core technologies that rely on numerical algorithms.
7
Definition: Time Complexity
The time complexity of an algorithm is the amount of time it needs to run to completion. It indicates how many time steps the computation of a problem costs. Hence, it is used as a performance measure, expressed as a function of the problem size (the dimension of the search space).
8
IMPORTANCE & EFFICIENCY OF TIME COMPLEXITY
Time complexity has great importance in the field of software development, which includes
Lines of code
Time estimation
Cost estimation
9
Complexity: a measure of the performance of an algorithm
An algorithm's performance depends on internal and external factors.
Internal: the algorithm's efficiency, in terms of
• Time required to run
• Space (memory storage) required to run
External:
• Size of the input to the algorithm
• Speed of the computer on which it is run
• Quality of the compiler
Complexity measures the internal factors (usually we are more interested in time than space).
10
Efficiency
When we look at input sizes large enough that only the order of growth of the running time is relevant, we are studying the asymptotic efficiency of algorithms.
We shall usually compare algorithms on the basis of their execution times, and when we speak of the efficiency of an algorithm we shall simply mean how fast it runs.
11
How can we analyze the efficiency of algorithms?
We can measure the
• time (number of elementary computations) and
• space (number of memory cells) that the algorithm requires.
These measures are called time complexity and space complexity, respectively.
Complexity theory is ‘the art of counting resources’.
12
Time complexity functions
 We need a more abstract / theoretical
approach to measure how fast an algorithm is.
 For this purpose , we define a function f(n)
called the time-complexity function of the
algorithm. It measures the number of
computational steps needed to solve the problem.
 The argument n of the function is a measure of
the size of the task.
13
Contd..
 Suppose that we want to sort n numbers. We can
use n as the argument of the time-complexity
function. We say we have a problem “of size n”.
 Suppose that we want to find if an integer n is the
square of another integer. We can use n as the
argument of the time complexity function.
14
Contd..
The function f(n) measures the number of computational steps. You can think of a computational step as a simple operation performed by the computer. For example, it can be an addition, a multiplication, or a comparison between two floating-point numbers.
This can be made more formal using an abstract model of the computer called the Turing machine.
Generally, f(n) increases with n.
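As a simple illustration (not from the slides): summing n numbers with a single loop needs n - 1 additions, so its time-complexity function is f(n) = n - 1.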
15
Examples
(i) Two algorithms solve the same problem. Their time complexity functions are 100n and n^2, respectively.
Which algorithm should we use if there is an application-specific constraint of 1,000 basic computations?
Which algorithm should we use if we upgrade our computer and can afford 1,000,000 basic computations?
16
Solution: The time complexity function of an algorithm is 100n + n^2.
For n = 10, the most computationally expensive part of the algorithm is the 100n term.
For n = 1,000, the most computationally expensive part of the algorithm is the n^2 term.
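To see this (a quick check of the arithmetic): for n = 10, 100n = 1,000 while n^2 = 100; for n = 1,000, 100n = 100,000 while n^2 = 1,000,000. The two terms balance at n = 100, where both equal 10,000.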
17
(ii) We run on our PC an algorithm with time complexity function 100n.
For the same problem, a supercomputer 1,000 times faster than our PC runs an algorithm with time complexity function n^2.
Who will finish first for n = 1,000,000?
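One way to reason about this (a back-of-the-envelope estimate): the PC performs 100 × 10^6 = 10^8 basic steps, while the supercomputer performs (10^6)^2 = 10^12 steps; even allowing for its 1,000-fold speed advantage, that is the equivalent of 10^9 PC steps, so the PC finishes first.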
18
For example, let us assume two algorithms A and B that solve the same class of problems.
The time complexity of A is 5,000n; the one for B is 1.1^n for an input with n elements.
19
Complexity
Comparison: time complexity of algorithms A and B

Input size n     Algorithm A (5,000n)     Algorithm B (1.1^n)
10               50,000                   3
100              500,000                  13,781
1,000            5,000,000                2.5 × 10^41
1,000,000        5 × 10^9                 4.8 × 10^41392
20
This means that algorithm B cannot be used for
large inputs, while running algorithm A is still
feasible.
So what is important is the growth of the
complexity functions.
The growth of time and space complexity with
increasing input size n is a suitable measure for
the comparison of algorithms.
21
The notations we use to describe the asymptotic running time of an algorithm are defined in terms of functions whose domains are the set of natural numbers
N = {0, 1, 2, …}.
Such notations are convenient for describing the worst-case running time function T(n), which is usually defined only on integer input sizes.
22
Different asymptotic notations are
 Big O notation
 Big Omega (Ω) notation
 Theta (Θ) notation
 Little o notation
 Little omega (ω) notation
23
Big-Oh is the formal method of expressing the upper bound of an algorithm's running time.
More formally, for non-negative functions f(n) and g(n), if there exists an integer n0 and a constant c > 0 such that for all integers n > n0,
f(n) ≤ c·g(n),
then f(n) is Big-Oh of g(n). This is denoted as
"f(n) = O(g(n))".
If graphed, g(n) serves as an upper bound to the curve you are analyzing, f(n).
24
Fig: The asymptotic bound O
25
Say that
f(n) = 2n + 8, and g(n) = n^2.
Can we find a constant n0 so that 2n + 8 ≤ n^2 for all n ≥ n0? The number 4 works here, giving us 16 ≤ 16. For any number n greater than 4, this will still work. Since we're trying to generalize this for large values of n, and small values (1, 2, 3) aren't that important, we can say that f(n) generally grows more slowly than g(n); that is, f(n) is bounded by g(n) and will always be less than it.
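A tiny Python check (an illustrative sketch, not part of the original slides) confirms the bound numerically with c = 1 and n0 = 4:

def f(n): return 2 * n + 8
def g(n): return n * n

c, n0 = 1, 4
# Verify f(n) <= c * g(n) for a large range of n starting at n0.
assert all(f(n) <= c * g(n) for n in range(n0, 10000))
print("2n + 8 <= n^2 holds for every tested n >=", n0)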
26
1: The running time of the program is constant.
Example: Find the 5th element in an array.
log N: Typically achieved by dividing the problem into
smaller segments and only looking at one input
element in each segment
Example: Binary search (see the sketch after this list)
N: Typically achieved by examining each element in the
input once
Example: Find the minimum element
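As a sketch of the log N case above, here is a minimal binary search in Python (an illustrative example, not from the original slides; it assumes the input list is already sorted):

def binary_search(a, x):
    # a must be sorted in ascending order; returns an index of x, or -1.
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2       # examine one element per halving step
        if a[mid] == x:
            return mid
        elif a[mid] < x:
            lo = mid + 1           # discard the lower half
        else:
            hi = mid - 1           # discard the upper half
    return -1

print(binary_search([2, 3, 5, 9, 10, 16], 9))   # prints 3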
27
N log N: Typically achieved by dividing the problem into
subproblems, solving the subproblems independently,
and then combining the results
Example: Merge sort
N^2: Typically achieved by examining all pairs of data
elements.
Example: Selection Sort
N^3: Often achieved by combining algorithms with a
mixture of the previous running times
Example: Matrix Multiplication
28
For non-negative functions f(n) and g(n), if there exists an integer n0 and a constant c > 0 such that for all integers n > n0,
f(n) ≥ c·g(n),
then f(n) is Omega of g(n). This is denoted as
"f(n) = Ω(g(n))".
This is almost the same definition as Big-Oh, except that here "f(n) ≥ c·g(n)"; this makes g(n) a lower bound function instead of an upper bound function. It describes the best that can happen for a given data size.
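A small worked example (not from the slides): f(n) = 3n^2 + 2n satisfies f(n) ≥ 1·n^2 for every n > 0, so f(n) = Ω(n^2) with c = 1 and n0 = 0.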
29
Fig: The asymptotic bound Ω
30
For non-negative functions f(n) and g(n), f(n) is theta of g(n) if and only if f(n) = O(g(n)) and f(n) = Ω(g(n)). This is denoted as
"f(n) = Θ(g(n))".
This is basically saying that the function f(n) is bounded both from above and below by the same function, g(n).
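Continuing the same illustrative example: since n^2 ≤ 3n^2 + 2n ≤ 5n^2 for every n > 0, we have 3n^2 + 2n = O(n^2) and 3n^2 + 2n = Ω(n^2), hence 3n^2 + 2n = Θ(n^2).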
31
DIFFERENT CASES OF TIME
COMPLEXITY
Three different cases of Time
Complexity are
1. Best Case
2. Average Case
3. Worst Case
32
Contd..
Best, worst and
average cases of a
given algorithm
express what the
resource usage is
at least, at most
and on average,
respectively.
33
Worst, best & average cases
The worst-case complexity of the algorithm is
the function defined by the maximum number of
steps taken on any instance of size n.
The best-case complexity of the algorithm is the
function defined by the minimum number of
steps taken on any instance of size n.
Finally, the average-case complexity of the
algorithm is the function defined by the average
number of steps taken on any instance of size n.
34
Best case performance
The term best-case performance is used in computer science to describe the way an algorithm behaves under optimal conditions. For example, a simple linear search on an array has a worst-case performance of O(n) and an average running time of O(n/2), but in the best case the desired element is the first element in the array and the run time is O(1).
35
Average case performance
Determining what an average input means is difficult, and often that average input has properties which make it difficult to characterise mathematically. Similarly, even when a sensible description of a particular "average case" is possible, it tends to result in equations that are more difficult to analyse.
36
Worst case performance
Worst-case analysis has similar problems: typically it is impossible to determine the exact worst-case scenario. Instead, a scenario is considered which is at least as bad as the worst case. For example, when analysing an algorithm it may be possible to find the longest possible path through the algorithm, even if it is not possible to determine the exact input that could generate this. Indeed, such an input may not exist.
37
 In computer science, linear search is a search
algorithm, also known as sequential search, that
is suitable for searching a set of data for a
particular value.
 It operates by checking every element of a list
one at a time in sequence until a match is found.
 Linear search runs in O(N).
 If the data are distributed randomly, on average
N/2 comparisons will be needed.
38
The best case is that the value is equal to
the first element tested, in which case only
1 comparison is needed.
The worst case is that the value is not in
the list (or is the last item in the list), in
which case N comparisons are needed.
39
 The simplicity of the linear search means that if
just a few elements are to be searched it is less
trouble than more complex methods that require
preparation such as sorting the list to be searched
or more complex data structures, especially when
entries may be subject to frequent revision.
 Another possibility is when certain values are
much more likely to be searched for than others
and it can be arranged that such values will be
amongst the first considered in the list.
40
The following pseudocode describes the linear search technique.
 For each item in the list:
    Check to see if the item you're looking for matches the item in the list.
    If it matches:
        Return the location where you found it (the index).
    If it does not match:
        Continue searching until you reach the end of the list.
If we get here, we know the item does not exist in the list. Return -1.
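A direct Python rendering of this pseudocode (a minimal sketch, not part of the original slides):

def linear_search(items, target):
    # Check every element in turn; return its index, or -1 if it is absent.
    for i, value in enumerate(items):
        if value == target:
            return i      # found: report the location (the index)
    return -1             # reached the end of the list: item not found

print(linear_search([1, 3, 5, 2, 9, 10], 9))   # prints 4
print(linear_search([1, 3, 5, 2, 9, 10], 7))   # prints -1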
41
 This picture uses a different
approach to visualising the
dynamic nature of the linear
search algorithm. This shows the
moment when a key is found. A
grid of coloured circles
represents the data being
searched. It is searched through a
row at a time. The fact that a
particular circle has been
checked is shown by a cross
marked through the circle. The
key being found is shown as a
bull's eye.
42
1  3  5  2  9  10

For i = 0 to 5
begin
    If ( a[i] == x )
        Print "item found"
end
43
For i = 0 to n                      cost: c1 × n
begin
    If ( a[i] == x )
        Print "item found"
End
If ( i == n + 1 )                   cost: c2
    Print "item not found"
44
Contd…..
T(n) = c1·n + c2
     ≤ c1·n + c2·n        [ for n ≥ 1 ]
     = (c1 + c2)·n
So T(n) = O(g(n)) with g(n) = n; that is, for n elements,
Time complexity = O(n)
45
 Quicksort is a well-known sorting algorithm developed by C.A.R. Hoare that, on average, makes Θ(n log n) comparisons to sort n items.
 However, in the worst case, it makes Θ(n^2) comparisons.
 Typically, quicksort is significantly faster in practice than other Θ(n log n) algorithms.
 This is because its inner loop can be efficiently implemented on most architectures, and in most real-world data it is possible to make design choices which minimize the possibility of requiring quadratic time.
46
 Quicksort sorts by employing a divide and conquer strategy to divide a list into two sub-lists.
 The steps are:
1. Pick an element, called a pivot, from the list.
2. Reorder the list so that all elements which are less than the pivot come before the pivot and so that all elements greater than the pivot come after it (equal values can go either way). After this partitioning, the pivot is in its final position. This is called the partition operation.
3. Recursively sort the sub-list of lesser elements and the sub-list of greater elements.
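A compact Python sketch of these steps (illustrative only, not the routine from the slides; it returns a new list rather than partitioning in place, and it uses the first element as the pivot, matching the worked example that follows):

def quicksort(a):
    # Base case: a list of 0 or 1 elements is already sorted.
    if len(a) <= 1:
        return a
    pivot = a[0]                                   # step 1: pick a pivot
    lesser  = [x for x in a[1:] if x <= pivot]     # step 2: partition
    greater = [x for x in a[1:] if x >  pivot]
    # Step 3: recursively sort each sub-list and glue around the pivot.
    return quicksort(lesser) + [pivot] + quicksort(greater)

print(quicksort([10, 9, 6, 12, 5, 3, 16, 2, 17]))
# prints [2, 3, 5, 6, 9, 10, 12, 16, 17]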
47
Quicksort in action on a list of random numbers.
The horizontal lines are pivot values.
48
SORTING TECHNIQUE
Consider the following numbers
10 9 6 12 5 3 16 2 17
49
QUICK SORT - Algorithm
QS ( LB, UB, A )
Begin
    if ( LB < UB )
    Begin
        flag = 1, i = LB + 1, j = UB, key = A[LB]
        while ( flag )
        begin
            while ( key > A[i] )
                i = i + 1
            while ( key < A[j] )
                j = j - 1
            if ( i < j )
                swap( A[i], A[j] )
            else
                flag = 0
        end
        if ( i >= j )
            swap( A[LB], A[j] )
        QS( LB, j - 1, A )
        QS( j + 1, UB, A )
    End
End
50
10 9 6 12 5 3 16 2 17
The array is pivoted about its first element, p = 10.
51
10 9 6 12 5 3 16 2 17
Find the first element larger than the pivot (here, 12) and the last element not larger than the pivot (here, 2).
52
10 9 6 12 5 3 16 2 17
Swap these two elements.
10 9 6 2 5 3 16 12 17
53
10 9 6 2 5 3 16 12 17
Scan again in both directions and again find the required elements.
54
10 9 6 2 5 3 16 12 17
 The pointers have crossed here. So, swap the pivot with the last
element not larger than it. Now, array becomes –
3 9 6 2 5 10 16 12 17
55
3 9 6 2 5 10 16 12 17
 Now we can observe that the left side of the pivot contains elements smaller than it and the right side of the pivot contains elements greater than it. Now, recursively sort the sub-arrays on each side of the pivot.
First sublist: 3 9 6 2 5        Second sublist: 16 12 17
56
Now, we have to sort these sub-arrays separately.
Left sublist:  3 9 6 2 5
Right sublist: 16 12 17
57
THE SORTED ARRAY
After sorting the left and right sublists, the array finally becomes sorted as:
2 3 5 6 9 10 12 16 17
58
 The running time of this algorithm is usually measured by the number f(n) of comparisons required to sort n elements.
 It has a worst-case running time of order n^2/2,
 but an average-case running time of order n log n.
59
 The worst case occurs when the list is already sorted.
 Thus the first element requires n comparisons to recognize that it remains in the first position.
 Furthermore, the first sublist will be empty, but the second sublist will have n - 1 elements.
 So, the second element will require n - 1 comparisons to recognize that it remains in the second position, and so on. Consequently, there will be a total of
f(n) = n + (n-1) + … + 2 + 1 = n(n+1)/2 = n^2/2 + O(n) = O(n^2)
60
The Worst Case for QuickSort
The tree on the right illustrates the worst case of quick-sort, which occurs when the input is already sorted!
61
The complexity f(n) = O(n log n) of the average case comes from the fact that, on average, each reduction step of the algorithm produces two sublists. Accordingly:
 Reducing the initial list places one element and produces two sublists.
 Reducing the two sublists places two elements and produces four sublists.
 Reducing the four sublists places four elements and produces eight sublists.
 Reducing the eight sublists places eight elements and produces sixteen sublists.
And so on. The reduction step at the k-th level finds the location of 2^(k-1) elements. Hence there will be log2 n levels of reduction steps, and each level uses at most n comparisons. So, f(n) = O(n log n).
In fact, mathematical analysis and empirical evidence both show that f(n) ≈ 1.4 [ n log2 n ] is the expected number of comparisons for the quicksort algorithm.
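As a quick sanity check of this figure (illustrative arithmetic, not from the slides): for n = 1,000, log2 n ≈ 10, so the expected number of comparisons is roughly 1.4 × 1,000 × 10 ≈ 14,000, far below the worst-case figure of n^2/2 = 500,000.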
62
T(n) = 2·T(n/2) + cn
T(n/2) = 2·T((n/2)/2) + c(n/2)
⇒ T(n) = 2{ 2T(n/4) + c(n/2) } + cn
        = 4T(n/4) + cn + cn
        = 8T(n/8) + 3cn
        = 2^3 T(n/2^3) + 3cn
        …
        = 2^k T(n/2^k) + k·cn        [ let n/2^k = 1, i.e. k = log2 n ]
        = n·T(1) + (log2 n)·cn       [ T(1) = 1 ]
        = cn·log2 n + n·1
        ≤ cn·log2 n + cn·log2 n      [ n > 1 ]
        ≤ 2cn·log2 n
So, O(n log2 n) is the best case.
63
Competitive sorting algorithms
 Quicksort is a space-optimized version of the binary tree sort.
Instead of inserting items sequentially into an explicit tree,
quicksort organizes them concurrently into a tree that is implied
by the recursive calls. The algorithms make exactly the same
comparisons, but in a different order.
 The most direct competitor of quicksort is heapsort. Heapsort is typically somewhat slower than quicksort, but the worst-case running time is always O(n log n).
 Quicksort is usually faster, though there remains the chance of
worst case performance except in the introsort variant. If it's
known in advance that heapsort is going to be necessary, using it
directly will be faster than waiting for introsort to switch to it.
64
Formal analysis
 It's not obvious that quicksort takes O(n log n)
time on average. It's not hard to see that the
partition operation, which simply loops over the
elements of the array once, uses Θ(n) time. In
versions that perform concatenation, this
operation is also Θ(n).
 In the best case, each time we perform a partition
we divide the list into two nearly equal pieces.
This means each recursive call processes a list of
half the size. Consequently, we can make only
log n nested calls before we reach a list of size 1.
65
 This means that the depth of the call tree is O(log n).
But no two calls at the same level of the call tree
process the same part of the original list; thus, each
level of calls needs only O(n) time all together (each
call has some constant overhead, but since there are
only O(n) calls at each level, this is subsumed in the
O(n) factor). The result is that the algorithm uses only
O(n log n) time.
 An alternate approach is to set up a recurrence
relation for T(n), the time needed to sort a list of size
n. Because a single quicksort call involves O(n) work
plus two recursive calls on lists of size n/2 in the best
case, the relation would be:
T(n) = O(n) + 2T(n/2)
66
Randomized quicksort expected complexity
 Randomized quicksort has the desirable property that
it requires only O(n log n) expected time, regardless of
the input. But what makes random pivots a good
choice?
 Suppose we sort the list and then divide it into four
parts. The two parts in the middle will contain the
best pivots; each of them is larger than at least 25% of
the elements and smaller than at least 25% of the
elements. If we could consistently choose an element
from these two middle parts, we would only have to
split the list at most 2log2 n times before reaching lists
of size 1, yielding an O(n log n) algorithm.
67
 Unfortunately, a random choice will only choose from
these middle parts half the time. The surprising fact is
that this is good enough. Imagine that you are flipping
a coin over and over until you get k heads. Although
this could take a long time, on average only 2k flips
are required, and the chance that you won't get k
heads after 100k flips is infinitesimally small. By the
same argument, quicksort's recursion will terminate
on average at a call depth of only 2(2log2 n). But if its
average call depth is O(log n), and each level of the
call tree processes at most n elements, the total
amount of work done on average is the product, O(n
log n).
68
Average complexity
 Even if we aren't able to choose pivots randomly,
quicksort still requires only O(n log n) time over
all possible permutations of its input. Because
this average is simply the sum of the times over
all permutations of the input divided by n
factorial, it's equivalent to choosing a random
permutation of the input. When we do this, the
pivot choices are essentially random, leading to
an algorithm with the same running time as
randomized quicksort.
69
 More precisely, the average number of comparisons
over all permutations of the input sequence can be
estimated accurately by solving the recurrence
relation:
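A standard form of this recurrence (reconstructed here; C(n) denotes the average number of comparisons over all input orderings) is
C(n) = (n - 1) + (1/n) · Σ [ C(i) + C(n-1-i) ],  summing over i = 0, 1, …, n-1, with C(0) = C(1) = 0,
which solves to C(n) ≈ 2n ln n ≈ 1.39 n log2 n.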
 Here, n - 1 is the number of comparisons the partition
uses. Since the pivot is equally likely to fall anywhere
in the sorted list order, the sum is averaging over all
possible splits.
 This means that, on average, quicksort performs only
about 39% worse than the ideal number of
comparisons, which is its best case. In this sense it is
closer to the best case than the worst case. This fast
average runtime is another reason for quicksort's practical dominance over other sorting algorithms.
70
 The height of the tree is N-1 not O(log(n)). This
is because the pivot is in this case the largest
element and hence does not come close to
dividing the input into two pieces each about half
the input size.
 Since the pivot does not appear in the children, at
least one element from level i does not appear in
level i+1 so at level N-1 you can have at most 1
element left. So we have the highest tree possible.
71
 Note also that level i has at least i pivots missing
so can have at most N-i elements in all the nodes.
Our tree achieves this maximum.
 So the time needed is proportional to the total number of numbers written in the diagram, which is N + (N-1) + (N-2) + … + 1, again the one summation we know: N(N+1)/2, or Θ(N^2).
72
 Perhaps the problem was in choosing the last element as the pivot. Clearly choosing the first element is no better; the same example on the right again illustrates the worst case (the tree has its empty nodes on the left this time).
 Since we are spending linear time (as opposed to constant time) on the division step, why not count how many elements are present (say k) and choose element number k/2? This would not change the complexity (it is also linear). You could do that, and now a sorted list is not the worst case. But some other list is: just put the largest element in the middle and then put the second largest element in the middle of the node on level 1. This does have the advantage that if you mistakenly run quick-sort on a sorted list, you won't hit the worst case. But the worst case is still there and it is still Θ(N^2).
73
 Why not choose the real middle element as the pivot, i.e., the median? That would work! It would cut the sizes in half as desired. But how do we find the median? We could sort, but that is the original problem. In fact there is a (difficult) algorithm for computing the median in linear time, and if this is used for the pivot, quick-sort does take O(N log N) time in the worst case. However, the difficult median algorithm is not fast in practice; that is, the constants hidden in saying it is Θ(N) are rather large.
 Instead of studying the fast, difficult median algorithm, we will consider a randomized quick-sort algorithm and show that the expected running time is Θ(N log N).
74