UNIT-I
What is an Algorithm?
• The word comes from the name of the ninth-century Persian mathematician Abu Ja'far Mohammed Ibn Musa al-Khowarizmi.
• An algorithm is a finite set of instructions that accomplishes a particular task,
(or)
• a sequence of unambiguous instructions for solving a problem.
Characteristics:
• input
• output
• definiteness: each instruction is clear and unambiguous
• finiteness: the algorithm terminates after a finite number of steps
• effectiveness: each instruction is basic enough to be carried out
Process for Design and Analysis of Algorithm
Understand the problem
Solution as an algorithm
Algorithm Design
technique
Prove Correctness
No
Yes
Analyse the algorithm
is it efficient
No
Yes
Code the algorithm
Fig: Process for design and analysis of algorithms
Algorithm Types
• Approximate algorithm: produces an answer that is approximate rather than exact, e.g. when the exact value is infinite and repeating.
• Probabilistic algorithm: the solution of a problem is uncertain and involves chance. Ex: tossing of a coin.
• Infinite algorithm: an algorithm which is not finite. Ex: a complete solution of a chessboard, division by zero.
• Heuristic algorithm: uses rules of thumb to get useful output from limited input; the result is good but not guaranteed optimal. Ex: many business applications.
Issues in the Study of Algorithms
1. How to devise an algorithm
2. How to express an algorithm
3. How to validate an algorithm
4. How to analyze an algorithm
5. How to test a program
   i) Debugging
   ii) Profiling (or) performance measurement
Specification of Algorithm
Using natural language
Pseudocode
Algorithm
Flow chart
Program (Using programming language)
Pseudo-Code for Expressing Algorithms
1. Comments begin with // and continue until the end of the line.
2. A block (compound statement) is enclosed in { and }, for example in if statements, while loops, functions, etc.
Example:
{
Statement 1;
Statement 2;
.........
.........
}
3. A semicolon (;) is used as the delimiter at the end of each statement.
4. An identifier begins with a letter. Example: sum, sum5, a; but not 5sum, 4a, etc.
5. Values are assigned to variables using the assignment operator := (or ←).
6. There are two Boolean values, TRUE and FALSE.
Logical operators: AND, OR, NOT.
Relational operators: <, ≤, >, ≥, =, ≠.
Arithmetic operators: +, -, *, /, %.
7. The conditional statement if-then or if-then-else is written in the following form:
If (condition) then (statement)
If (condition) then (statement-1) else (statement-2)
If the condition is true, the corresponding block of statements is executed.
Example:
if (a > b) then
{
write("a is big");
}
else
{
write("b is big");
}
8. The case statement:
case
{
:(condition-1): (statement-1)
:(condition-2): (statement-2)
..............
..............
:(condition-n): (statement-n)
else: (statement n+1);
}
9. Loops
For loop:
for variable := value-1 to value-n step increment do
{
Statement-1;
Statement-2;
.......
.......
Statement-n;
}
Example:
for i := 1 to 10 do
{
write(i); // displays the numbers from 1 to 10
}
While loop:
while <condition> do
{
<statement 1>
<statement 2>
........
........
<statement n>
}
Example:
i := 1;
while (i <= 10) do
{
write(i); // displays the numbers from 1 to 10
i := i + 1;
}
Repeat-until loop:
repeat
{
<statement 1>
<statement 2>
......
......
<statement n>
} until <condition>
Example:
i := 1;
repeat
{
write(i);
i := i + 1;
} until (i > 10);
10. Break: this statement exits from the loop.
11. Elements of an array are accessed using [ ].
Ex: if A is a one-dimensional array, then A[i]; if A is a two-dimensional array, A[i, j].
12. Procedures (functions):
Syntax:
Algorithm Name(<parameter list>)   // Name is the name of the procedure
{
body of the procedure
}
13. Compound data types can be formed with records.
Syntax:
Name = record
{
data-type-1 data-1;
data-type-2 data-2;
data-type-n data-n;
}
Example:
Employee = record
{
int no;
char name[10];
float salary;
}
Performance Analysis
Need for analysis:
• To determine resource consumption
  • CPU time
  • Memory space
• To compare different methods for solving the same problem before actually implementing them and running the programs
• To find an efficient algorithm
Complexity
• A measure of the performance of an algorithm.
• An algorithm's performance depends on internal and external factors.
External factors:
• Speed of the computer on which it is run
• Quality of the compiler
• Size of the input to the algorithm
Internal factors:
• The algorithm's efficiency, in terms of:
  • Time required to run
  • Space (memory storage) required to run
Note: complexity measures the internal factors (we are usually more interested in time than in space).
Two ways of finding complexity:
• Experimental study
• Theoretical analysis
Experimental study:
• Write a program implementing the algorithm.
• Run the program with inputs of varying size and composition.
• Get an accurate measure of the actual running time, using a method like System.currentTimeMillis().
• Plot the results.
Limitations of experiments:
• It is necessary to implement the algorithm, which may be difficult.
• Results may not be indicative of the running time on other inputs not included in the experiment.
• In order to compare two algorithms, the same hardware and software environments must be used.
• Experimental data, though important, is not sufficient.
Theoretical Analysis
• Uses a high-level description of the algorithm instead of an implementation.
• Characterizes running time as a function of the input size, n.
• Takes into account all possible inputs.
• Allows us to evaluate the speed of an algorithm independent of the hardware/software environment.
Space Complexity
• The space needed by an algorithm is the sum of a fixed part and a variable part:
  Space complexity S(P) = C + Sp(instance characteristics)
• The fixed part C includes space for:
  • Instructions
  • Simple variables
  • Fixed-size component variables
  • Constants, etc.
• The variable part Sp includes space for:
  • Component variables whose size depends on the particular problem instance being solved
  • Recursion stack space, etc.
Examples:
Algorithm NEC(float x, float y, float z)
{
return (x + y + y * z + (x + y + z)) / (x + y) + 4.0;
}
// Uses only simple variables, so Sp = 0 and S(NEC) is constant.
Algorithm ADD(float X[], int n)
{
sum := 0.0;
for i := 1 to n do
sum := sum + X[i];
return sum;
}
// Needs the n-element array X, so Sp = n and S(ADD) = C + n.
Time Complexity
• The time complexity of a problem is the number of steps that it takes to solve an instance of the problem, as a function of the size of the input (usually measured in bits), using the most efficient algorithm.
• Priori analysis: at compile time.
• Posteriori analysis: at run (execution) time.
• Time complexity T(P) = C + TP(n)
• In general it can be measured in two ways:
  • By the operation count method.
    Example: TP(n) = Ca·ADD(n) + Cs·SUB(n) + Cm·MUL(n) + Cd·DIV(n) + ...
  • By the step count method.
2. Step table method:
Ex:
Statement                     s/e   Frequency   Total steps
1. Algorithm Sum(a, n)         0        -           0
2. {                           0        -           0
3.   s := 0;                   1        1           1
4.   for i := 1 to n do        1       n+1         n+1
5.     s := s + a[i];          1        n           n
6.   return s;                 1        1           1
7. }                           0        -           0
                                       Total:     2n+3 steps
The step count method has two approaches:
1. Global variable count method:
Example:
Algorithm Sum(a, n)
{
s := 0;
for i := 1 to n do
{
s := s + a[i];
}
return s;
}
Algorithm Sum with count statements added:
count := 0;
Algorithm Sum(a, n)
{
s := 0;
count := count + 1;
for i := 1 to n do
{
count := count + 1;
s := s + a[i];
count := count + 1;
}
count := count + 1; // for the last test of the for loop
count := count + 1; // for the return statement
return s;
}
Thus the total number of steps is 2n + 3.
Time complexity cases:
• Best case: the inputs are provided in such a way that the minimum time is required to process them.
• Average case: the amount of time the algorithm takes on an average set of inputs.
• Worst case: the amount of time the algorithm takes on the worst possible set of inputs.
Example: Linear Search
[Fig: an array A of n elements scanned left to right; the best case finds the key at the first position (1 comparison), the worst case examines all n positions.]
Asymptotic Notation
• The exact number of steps depends on exactly what machine or language is being used.
• To avoid that problem, asymptotic notation is generally used.
• It describes the running time of an algorithm as a function of the input size n, for large n.
• It is expressed using only the highest-order term in the expression for the exact running time.
Big-Oh notation: O
Let f(n) and g(n) be two non-negative functions. We say that f(n) is O(g(n)) if and only if there exist positive constants c and n0 such that f(n) ≤ c·g(n) for all non-negative values of n where n ≥ n0.
Here, g(n) is an upper bound for f(n).
Ex: ---
Big-Omega notation: Ω
Let f(n) and g(n) be two non-negative functions. We say that f(n) is Ω(g(n)) if and only if there exist positive constants c and n0 such that f(n) ≥ c·g(n) for all non-negative values of n where n ≥ n0.
Here, g(n) is a lower bound for f(n).
Ex: ---
Big-Theta notation: Θ
Let f(n) and g(n) be two non-negative functions. We say that f(n) is Θ(g(n)) if and only if there exist positive constants c1, c2 and n0 such that c1·g(n) ≤ f(n) ≤ c2·g(n) for all non-negative values of n where n ≥ n0.
The above definition states that the function f(n) lies between c1 times the function g(n) and c2 times the function g(n), where c1 and c2 are positive constants.
Ex: ---
Amortized Analysis
• Considers not just one operation, but a sequence of operations on a given data structure.
• Average cost over a sequence of operations.
• Probabilistic analysis:
  • Average-case running time: average over all possible inputs for one algorithm (operation).
  • If probability is used, this is called expected running time.
• Amortized analysis:
  • No involvement of probability.
  • Average performance on a sequence of operations, even if some operation is expensive.
  • Guarantees the average performance of each operation in the sequence in the worst case.
Three Methods of Amortized Analysis
• Aggregate analysis:
  • Total cost of n operations / n.
• Accounting method:
  • Assign each type of operation a (possibly different) amortized cost,
  • overcharge some operations,
  • store the overcharge as credit on specific objects,
  • then use the credit to compensate for some later operations.
• Potential method:
  • Same as the accounting method,
  • but stores the credit as "potential energy" of the structure as a whole.
Aggregate Analysis
• A sequence of n PUSH, POP and MULTIPOP operations on an initially empty stack costs at most O(n). Why?
  Each object can be popped at most once (including by MULTIPOP) for each time it is pushed, so the number of POPs is at most the number of PUSHes, which is at most n.
• Thus the average cost of an operation is O(n)/n = O(1).
• The amortized cost in aggregate analysis is defined to be this average cost.
Amortized Analysis: Accounting Method
• Idea:
  • Assign differing charges to different operations.
  • The amount of the charge is called the amortized cost.
  • The amortized cost may be more or less than the actual cost.
  • When amortized cost > actual cost, the difference is saved in specific objects as credits.
  • The credits can be used by later operations whose amortized cost < actual cost.
• As a comparison, in aggregate analysis all operations have the same amortized cost.
Accounting Method (cont.)
• Conditions:
  • Suppose the actual cost of the i-th operation in the sequence is ci, and its amortized cost is ci′.
  • Then Σi=1..n ci′ ≥ Σi=1..n ci should hold:
    since we want to show that the average cost per operation is small using amortized costs, the total amortized cost must be an upper bound on the total actual cost,
    and this must hold for all sequences of operations.
  • The total credit is Σi=1..n ci′ − Σi=1..n ci, which should be non-negative.
  • Moreover, Σi=1..t ci′ − Σi=1..t ci ≥ 0 for any t > 0.
The Potential Method
• Same as the accounting method: something prepaid is used later.
• Different from the accounting method:
  • The prepaid work is treated not as credit, but as "potential energy", or "potential".
  • The potential is associated with the data structure as a whole rather than with specific objects within the data structure.
Distinguish between Algorithm and Pseudocode
• An algorithm is a well-defined sequence of steps that provides a solution for a given problem, while pseudocode is one of the methods that can be used to represent an algorithm.
• Algorithms can be written in natural language, whereas pseudocode is written in a format that is closely related to high-level programming language structures.
• Pseudocode does not use specific programming language syntax and therefore can be understood by programmers who are familiar with different programming languages.
• Transforming an algorithm presented in pseudocode into programming code can be much easier than converting an algorithm written in natural language.
Big-Oh vs Big-Omega
• Big-Oh notation is denoted by 'O', whereas Omega notation is denoted by 'Ω'.
• Big-Oh is used to represent the upper bound of an algorithm's running time, i.e. the largest amount of time taken by the algorithm to complete; Omega notation represents the lower bound of an algorithm's running time, i.e. the smallest amount of time taken by the algorithm to complete.
• Let f(n) and g(n) be two non-negative functions. We say that f(n) is O(g(n)) if and only if there exist positive constants c and n0 such that f(n) ≤ c·g(n) for all non-negative values of n, where n ≥ n0. By contrast, f(n) = Ω(g(n)) (read as "f of n is omega of g of n") if and only if there exist positive constants c and n0 such that f(n) ≥ c·g(n) for all n, n ≥ n0.
Matrix Multiplication (STEP COUNT)
#define N 2   /* N must be defined for the declarations to compile */

void multiply(int A[][N], int B[][N], int C[][N])
{
    for (int i = 0; i < N; i++)                /* Step 1: N+1 loop tests      */
    {
        for (int j = 0; j < N; j++)            /* Step 2: N(N+1) loop tests   */
        {
            C[i][j] = 0;                       /* Step 3: N^2 assignments     */
            for (int k = 0; k < N; k++)        /* Step 4: N^2(N+1) loop tests */
            {
                C[i][j] += A[i][k] * B[k][j];  /* Step 5: N^3 updates         */
            }
        }
    }
}