308-203A
Introduction to Computing II
Lecture 5: Complexity of
Algorithms
Fall Session 2000
How long does a
program take to run?
• Depends on what the input is
• May just depend on some parameter(s) of the input
Example: copying a String depends on its length:
“a”.clone( ) is less work than “abc…z”.clone( )
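A minimal Java sketch (ours, not from the lecture; the helper name copyOf is made up) of why the cost of a copy grows with the length:

    // Copying must visit every character once,
    // so the work grows linearly with the string's length.
    static char[] copyOf(String s) {
        char[] out = new char[s.length()];
        for (int i = 0; i < s.length(); i++)
            out[i] = s.charAt(i);   // one unit of work per character
        return out;
    }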
To Quantify That...
• Assume some simple operations take fixed time
e.g. a[i] = 0;  =>  1 time unit
• Complex operations depend on how many
simpler operations are performed
e.g. for i := 1 to n do a[i] = 0; => n time units
Do constants matter?
Q. What if a[i] = 0 and x = 0 don’t take the same time?
A. As it happens, this isn’t as important as loops,
recursion, etc.
Therefore, we will use an asymptotic notation, which
ignores the exact value of these constants...
The Big O( )
Definition: If there is a function f(n) and there exist
constants c and N such that, for any input of length
n > N, the running time of a program P is bounded as
Time(P) < c · f(n)
then we say that P is O( f(n) )
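For instance (a worked example of ours, not from the slides): a program whose running time is 3n + 5 is O(n), since we can take c = 4 and N = 5:

    3n + 5 < 4n   whenever   n > 5

so Time(P) < c · f(n) with f(n) = n.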
The Big O( )
What does it really mean?
[Figure: running time vs. input size n. For n > N the running
time of P stays below c·f(n); for n < N it may exceed it.]
The Big O( )
WARNING: CORRUPT NOTATION
We write…
g(n) = O( f(n) )
even though this is not a true equality: O( f(n) ) really denotes a whole class of functions.
Examples

Example                                             Growth       A.k.a.
------------------------------------------------------------------------
x = 1;                                              O(1)         constant
for j := 1 to n do A[j] = 0;                        O(n)         linear
for j := 1 to n do
    for k := 1 to n do A[j][k] = 0;                 O(n^2)       quadratic
for (int j = n; j != 0; j /= 2) A[j] = 0;           O(log n)     logarithmic
f(n): prints all strings of { a, b }* of length n   O(e^n)       exponential
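As a quick sanity check (our own snippet, not from the slides), counting iterations of the halving loop shows the logarithmic growth directly:

    // Count how often the halving loop runs for n = 1024.
    int n = 1024, count = 0;
    for (int j = n; j != 0; j /= 2)
        count++;                  // j takes the values 1024, 512, ..., 2, 1
    System.out.println(count);    // prints 11, i.e. floor(log2(1024)) + 1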
Worst Case Analysis
Big O( ) is worst-case in that the real running time
may be much less ( f(n) is an upper bound):
Example:
String s = … ;
for (int j = 0; j < s.length( ); j++)
    if (s.charAt( j ) == 'a') break;
Time = O(n)
Best-case Analysis
• We may choose to analyze the least time
the program could take to run
• This is called big-Ω notation
• If P is O( f(n) ) and Ω( f(n) ), we say:
P is Θ( f(n) )
Intuitively...
O, Θ, and Ω do for functions
what ≤, =, and ≥ do for numbers:
f(x) = O( g(x) )    ~    (i ≤ j)
f(x) = Θ( g(x) )    ~    (i = j)
f(x) = Ω( g(x) )    ~    (i ≥ j)
A little more notation
Lower-case letters act like the corresponding
strict inequalities (<, >), i.e. it is known
that f(x) ≠ Θ( g(x) ):
f(x) = o( g(x) )    “Little-oh”
f(x) = ω( g(x) )    “Little-omega”
Some Things to Note
1. O( ) is a bound, so:
• If P = O( 1 ), it is also true that P = O( n )
• If P = O( n^k ), it is also true that P = O( n^j ) for j > k
2. If P = O( f(n) + g(n) ) and f(n) = O( g(n) ), then
P = O( g(n) )
Example: P = O( x^2 + x )  =>  P = O( x^2 )
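A small Java illustration (ours) of rule 2: with two sequential phases the costs add, and the faster-growing term dominates:

    static void fill(int[] a, int[][] b, int n) {
        // Phase 1: O(n) work.
        for (int i = 0; i < n; i++)
            a[i] = 0;
        // Phase 2: O(n^2) work.
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                b[i][j] = 0;
        // Total: O(n^2 + n) = O(n^2), since n = O(n^2).
    }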
More examples:
What about adding two numbers?
1) In what parameter do we do the analysis?
2) O, Θ, and Ω?
Let d be the number of digits
in the numbers (assume both have the same length):

          a_d  a_(d-1)  a_(d-2)  ...  a_i  ...  a_3  a_2  a_1
    +     b_d  b_(d-1)  b_(d-2)  ...  b_i  ...  b_3  b_2  b_1
    ---------------------------------------------------------
  c_(d+1) c_d  c_(d-1)  c_(d-2)  ...  c_i  ...  c_3  c_2  c_1

We do exactly one (primitive) addition for each
of the d digits  =>  Θ( d )
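In Java, the grade-school method might look like this sketch (our own; it assumes the digits are stored least-significant-first and both numbers have d digits):

    // Add two d-digit base-10 numbers given as digit arrays.
    // One primitive digit addition (with carry) per position => Theta(d).
    static int[] add(int[] a, int[] b) {
        int d = a.length;               // assume a.length == b.length
        int[] c = new int[d + 1];
        int carry = 0;
        for (int i = 0; i < d; i++) {
            int s = a[i] + b[i] + carry;
            c[i] = s % 10;
            carry = s / 10;
        }
        c[d] = carry;                   // possible extra digit c_(d+1)
        return c;
    }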
The parameter is important!
Let’s say we did the analysis on the number itself
rather than how many digits it contains…
… is it still linear?
NO! If the number is n, then d = log n, so
O( d ) = O( log n )
So what is O(1) in Java
• Primitive math operations (i.e. +, -, *, / on
ints, doubles, etc.)
• Accessing simple variables (and data members)
• Accessing an array, A[i]
So what is not O(1) in Java
• Method calls usually aren’t: depends on the body
of the method
• This includes Java library calls, like those in java.lang.Math
• Loops of any kind
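For example (a sketch of ours), a call that looks like one operation can hide a loop over the whole input:

    // At the call site this looks like a single step...
    int total = sum(a);

    // ...but the body visits every element, so the call is O(n).
    static int sum(int[] a) {
        int s = 0;
        for (int i = 0; i < a.length; i++)
            s += a[i];
        return s;
    }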
Another example: Exponentiation
What is the order of growth of the following,
and can we do better?
Function exp(m, n) ::=
{
    result := 1
    while (n > 0)
        result := result * m
        n := n - 1
    return result
}
Answer #1: O( n )
Answer #2: yes…
Better Exponentiation
Observe:
We can rewrite exponentiations like
5^13 = 5 · (5^2)^2 · ((5^2)^2)^2
This takes only six multiplications
(instead of thirteen)
Better Exponentiation
Function exp(m, n) ::=
{
    result := 1
    while (n > 0)
        if (n is even)
            m := m * m
            n := n / 2
        else
            result := result * m
            n := n - 1
    return result
}
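A direct Java translation of this pseudocode (a sketch of ours, using long and ignoring overflow):

    // Fast exponentiation: O(log n) multiplications.
    static long exp(long m, long n) {
        long result = 1;
        while (n > 0) {
            if (n % 2 == 0) {    // n even: square the base, halve the exponent
                m = m * m;
                n = n / 2;
            } else {             // n odd: move one factor of m into the result
                result = result * m;
                n = n - 1;
            }
        }
        return result;
    }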
Order of Growth??
Best case: we always divide n by 2, until n = 1
=> Ω( log n ) iterations
Worst case: if we’re forced into the other branch
(n odd), n will be even next time, so:
2 log n = O( log n ) iterations
Conclusion: Θ( log n )
Any questions?