How to write an algorithm?
Algorithm:
An algorithm defines a process: how an operation works, specified as a sequence of simple steps. Each step must
be clearly implementable in a mechanical way, perhaps by a single machine instruction, a single programming
language statement, or a previously defined algorithm. An algorithm that solves a problem must be correct: it must
produce correct output when given valid input data. When the input data is invalid, we usually want the algorithm to
warn us and terminate gracefully. Often we want to know how much time an algorithm will take or how much space
(computer memory) it will use; these questions lead to the analysis of algorithms and to the field of computational complexity.
Algorithms and data structures can be specified in any adequately precise language. English and other natural
languages are satisfactory if used with care to avoid ambiguity, but more precise mathematical languages and
programming languages are generally preferred; the execution of the latter can also be automated. A program is
the expression of some algorithm(s) and data structure(s) in a particular programming language. A program can
be run on any type of computer for which a suitable compiler or interpreter exists, which makes it more valuable.
The basic features of an algorithm include appropriate input and output, definiteness, finiteness and effectiveness. An
algorithm must operate on a given set of inputs to produce the desired output. Each step must be definite, so that
execution yields a known, desired result. An algorithm must be finite, designed within appropriate boundaries.
Finally, an algorithm must be effective and efficient enough, in terms of space and time, to keep the complexity of
its implementation manageable.
Top down and Bottom up Approaches:
In the top-down model an overview of the system is formulated without going into detail for any part of it. Each part
of the system is then refined by designing it in more detail. Each new part may be refined again, in yet more detail,
until the entire specification is detailed enough to validate the model. The top-down model is often used to get an
overview of the proposed system, but it can be insufficient for understanding the elementary mechanisms.
By contrast, in bottom-up design individual parts of the system are specified in detail first. The parts are then linked
together to form larger components, which are in turn linked until a complete system is formed. Strategies based on
bottom-up information flow can be both necessary and sufficient because they are based on knowledge
of all the variables that may affect the elements of the system.
ALGORITHM COMPLEXITY
Space Complexity:
Space complexity measures the amount of memory required to run an algorithm or a program: essentially, the
number of memory cells the algorithm needs. A good algorithm keeps this number as small as possible. Space
complexity is important in its own right, apart from time complexity; the better the time complexity of an algorithm,
the faster it carries out its work in practice, but its memory use must also stay within bounds.
Instruction Space:
Instruction space is the space needed to store the compiled version of the program's instructions. (The space for
simple variables, fixed-size structure variables and constants belongs to the data space, which is counted separately.)
Time Complexity:
Sometimes it is essential to determine in advance the time an algorithm needs to execute successfully and produce
the desired result, that is, to estimate the running time of a program before it is actually executed. One good
measure of the running time is the number of comparisons executed.
Time complexity measures the amount of time required to execute an algorithm or a program, counted in "steps",
i.e. elementary computer operations (addition, subtraction, multiplication, assignment, etc.). A count of the number
of steps an algorithm takes is its "raw time complexity". A raw time complexity of 4n^2 - 5n + 41 simplifies to
O(n^2); a constant step count reduces to O(1).
The number of (machine) instructions a program executes during its run is its time complexity. This number depends
primarily on the size of the program's input (for sorting, approximately the number of strings to be sorted and their
length) and on the algorithm used.
Time Space trade-off:
A problem often involves a time-space trade-off: it cannot be solved with both little computing time and
low memory consumption. One then has to compromise, exchanging computing time for memory
consumption or vice versa, depending on which algorithm one chooses and how one parameterizes it.
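A classic illustration of this trade-off is memoization. The sketch below (the function names are my own, not from the text) computes Fibonacci numbers twice: once with no extra memory but exponential time, and once spending O(n) cache memory to bring the time down to roughly linear.

```python
from functools import lru_cache

def fib_slow(n):
    # No extra memory, but exponential time: subproblems are recomputed.
    if n < 2:
        return n
    return fib_slow(n - 1) + fib_slow(n - 2)

@lru_cache(maxsize=None)
def fib_fast(n):
    # Same recursion, but O(n) extra memory buys roughly linear time.
    if n < 2:
        return n
    return fib_fast(n - 1) + fib_fast(n - 2)

print(fib_fast(30))  # 832040, computed with 31 cached subresults
```

The same answer is produced either way; only the balance between time and memory changes.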
The Big O-notation
In computer science, time complexities are always specified in so-called O-notation. One way to describe
complexity is to say that a sorting method has running time O(n^2). The symbol O is also called Landau's
symbol. Mathematically speaking, O(n^2) stands for a set of functions: exactly those functions which, in the
long run, do not grow faster than the function n^2, that is, those functions for which n^2 is an upper
bound (apart from a constant factor). To be precise, the following holds: a function f is an element of the set
O(n^2) if there are a factor c and an integer n0 such that for all n greater than or equal to n0,
f(n) <= c * n^2.
A function f in O(n^2) may grow considerably more slowly than n^2, so that, mathematically speaking, the quotient
f/n^2 converges to 0 as n grows. An example is the function f(n) = n.
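The definition can be spot-checked numerically. This sketch (my own helper, not from the text) verifies f(n) <= c * g(n) over a finite range of n; it is an illustration of the witnesses c and n0, not a proof.

```python
def is_bounded(f, g, c, n0, n_max=1000):
    """Check f(n) <= c*g(n) for every n in [n0, n_max].
    A finite spot-check of the Big-O condition, not a proof."""
    return all(f(n) <= c * g(n) for n in range(n0, n_max + 1))

# f(n) = n lies in O(n^2): the constants c = 1, n0 = 1 witness the bound.
print(is_bounded(lambda n: n, lambda n: n * n, c=1, n0=1))   # True
# The reverse fails: n^2 is not bounded by 1 * n.
print(is_bounded(lambda n: n * n, lambda n: n, c=1, n0=1))   # False
```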
An algorithm is a step-by-step procedure that defines a set of instructions to be executed in
a certain order to get the desired output. Algorithms are generally created independently of
the underlying languages, i.e. an algorithm can be implemented in more than one
programming language.
From the data structure point of view, the following are some important categories of algorithms −
 Search − search for an item in a data structure.
 Sort − sort items in a certain order.
 Insert − insert an item into a data structure.
 Update − update an existing item in a data structure.
 Delete − delete an existing item from a data structure.
Characteristics of an Algorithm
Not all procedures can be called algorithms. An algorithm should have the following
characteristics −
 Unambiguous − an algorithm should be clear and unambiguous. Each of its steps (or phases),
and their inputs/outputs, should be clear and must lead to only one meaning.
 Input − an algorithm should have 0 or more well-defined inputs.
 Output − an algorithm should have 1 or more well-defined outputs, and they should match
the desired output.
 Finiteness − an algorithm must terminate after a finite number of steps.
 Feasibility − an algorithm should be feasible with the available resources.
 Independent − an algorithm should have step-by-step directions that are independent of
any programming code.
How to write an algorithm?
There are no well-defined standards for writing algorithms; rather, it is problem- and
resource-dependent. Algorithms are never written to support a particular programming
language. As we know, all programming languages share basic code constructs such as loops
(do, for, while) and flow control (if-else), and these common constructs can be used to write an
algorithm. We usually write algorithms in a step-by-step manner, but that is not always the case.
Algorithm writing is a process that is carried out after the problem domain is well defined;
that is, we should know the problem domain for which we are designing a solution.
We design an algorithm to obtain a solution to a given problem. A problem can
be solved in more than one way, so many solution algorithms can be
derived for a given problem. The next step is to analyze those proposed
algorithms and implement the most suitable one.
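As a small illustration of "more than one way" (an example of my own, not from the text), here are two algorithms for summing the integers 1 to n: a loop that takes O(n) time, and Gauss's closed form that takes O(1) time.

```python
def sum_loop(n):
    # First solution: O(n) time, one addition per term.
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_formula(n):
    # Second solution: O(1) time, using the closed form n(n+1)/2.
    return n * (n + 1) // 2

print(sum_loop(100), sum_formula(100))  # 5050 5050
```

Both are correct; analysis is what tells us the second is the better choice for large n.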
Algorithm Complexity
Suppose X is an algorithm and n is the size of its input data. The time and
space used by the algorithm X are the two main factors that decide the
efficiency of X.
 Time Factor − time is measured by counting the number of key
operations, such as comparisons in a sorting algorithm.
 Space Factor − space is measured by counting the maximum memory
space required by the algorithm.
The complexity of an algorithm, f(n), gives the running time and/or storage
space required by the algorithm in terms of n, the size of the input data.
Space Complexity
The space complexity of an algorithm represents the amount of memory
space the algorithm requires during its life cycle. The space required
by an algorithm is the sum of the following two components −
 A fixed part: space required to store certain data and variables that
are independent of the size of the problem, for example simple variables
and constants used, program size, etc.
 A variable part: space required by variables whose size depends on the
size of the problem, for example dynamic memory allocation, recursion
stack space, etc.
The space complexity S(P) of an algorithm P is S(P) = C + S_P(I), where C is the
fixed part and S_P(I) is the variable part, which depends on the
instance characteristic I. The following simple example illustrates
the concept −
Algorithm: SUM(A, B)
Step 1 - START
Step 2 - C ← A + B + 10
Step 3 - Stop
Here we have three variables (A, B and C) and one constant (10). Hence S(P) =
1 + 3. The actual space further depends on the data types of the given variables
and constants, and is multiplied accordingly.
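The SUM algorithm above translates directly into code; this sketch (the function name is mine) uses the same three variables and one constant, so its extra space is fixed regardless of input.

```python
def algorithm_sum(A, B):
    # Step 2 of SUM(A, B): three variables (A, B, C) and one constant (10).
    # The space used does not depend on the values of A and B.
    C = A + B + 10
    return C

print(algorithm_sum(1, 2))  # 13
```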
Time Complexity
The time complexity of an algorithm represents the amount of time the
algorithm requires to run to completion. Time requirements can be defined
as a numerical function T(n), where T(n) can be measured as the number of
steps, provided each step consumes constant time.
For example, adding two n-bit integers takes n steps. Consequently, the
total computational time is T(n) = c*n, where c is the time taken to add
two bits. Here we observe that T(n) grows linearly as the input size
increases.
Asymptotic analysis of an algorithm refers to defining mathematical bounds
on its run-time performance. Using asymptotic analysis, we can determine the
best-case, average-case and worst-case behaviour of an algorithm. Asymptotic
analysis is input-bound, i.e. if there is no input to the algorithm it is
taken to run in constant time; all factors other than the input are
considered constant.
Asymptotic analysis refers to computing the running time of an operation
in mathematical units of computation. For example, the running time of one
operation may be computed as f(n) = n and that of another as g(n) = n^2.
This means the running time of the first operation increases linearly as n
increases, while the running time of the second increases quadratically.
The running times of the two operations are nearly the same when n is small.
Usually, the time required by an algorithm falls under three types −
 Best Case − minimum time required for program execution.
 Average Case − average time required for program execution.
 Worst Case − maximum time required for program execution.
Asymptotic Notations
Big Oh Notation, Ο
Ο-notation is the formal way to express an upper bound on an algorithm's
running time. It measures the worst-case time complexity, i.e. the longest
time an algorithm can possibly take to complete.
Common Asymptotic Notations
constant − Ο(1)
logarithmic − Ο(log n)
linear − Ο(n)
n log n − Ο(n log n)
quadratic − Ο(n^2)
cubic − Ο(n^3)
polynomial − n^Ο(1)
exponential − 2^Ο(n)
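These classes form a hierarchy: each grows faster than the one before it for large enough n. The snippet below (my own illustration) evaluates the first six classes at a sample size and confirms they come out in increasing order.

```python
import math

# One value per growth class from the table, evaluated at a sample size.
n = 64
rows = [
    ("constant",    1),
    ("logarithmic", math.log2(n)),
    ("linear",      n),
    ("n log n",     n * math.log2(n)),
    ("quadratic",   n ** 2),
    ("cubic",       n ** 3),
]
values = [v for _, v in rows]
# For this n, each class already dominates the one before it.
print(values == sorted(values))  # True
```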
Ways to calculate frequency/efficiency of an algorithm:
Eg. (i):
for i = 1 to n
    for j = i to n

i | j | No. of times
1 | 1 to n | n
2 | 2 to n | n-1
3 | 3 to n | n-2
... | ... | ...
n-1 | n-1 to n | 2
n | n to n | 1

Therefore, the total number of inner-loop executions is
1 + 2 + 3 + ... + (n-1) + n = n(n+1)/2 (sum of the first n terms).
Frequency: f(n) = n(n+1)/2 = n^2/2 + n/2 = 0.5n^2 + 0.5n
For large n the n^2 term dominates and the constant factor 0.5 is negligible.
Therefore, in Big-O notation, f(n) = O(n^2).
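The tally in the table can be checked mechanically. This sketch (my own) counts how often the inner body runs when j goes from i to n, as the table assumes, and compares it with n(n+1)/2.

```python
def inner_loop_count(n):
    # Count inner-loop executions for: for i = 1..n, for j = i..n.
    count = 0
    for i in range(1, n + 1):
        for j in range(i, n + 1):
            count += 1
    return count

n = 10
print(inner_loop_count(n), n * (n + 1) // 2)  # 55 55
```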
Eg. (ii):
for i = 1 to n
    sum = sum + i
The body executes once per iteration, so in Big-O notation, f(n) = O(n).
Linear Search:
1. Best Case: the search value is found at the first position.
f(n) = 1
2. Worst Case: the value is not in the array, or is at the last position.
f(n) = n (n comparisons are made).
3. Average Case: the value is equally likely to require 1, 2, 3, ..., n comparisons,
so the expected number of comparisons is
(1 + 2 + 3 + ... + n) / n = n(n+1)/(2n) = (n+1)/2 = 0.5n + 0.5
f(n) = O(n)
Thus Big-O notation is a useful tool for measuring time complexity.
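The three cases can be observed directly by instrumenting a linear search. This sketch (my own implementation, not from the text) returns both the index found and the number of comparisons made.

```python
def linear_search(arr, target):
    """Return (index, comparisons); index is -1 when target is absent."""
    comparisons = 0
    for i, value in enumerate(arr):
        comparisons += 1
        if value == target:
            return i, comparisons
    return -1, comparisons

data = [7, 3, 9, 1, 5]
print(linear_search(data, 7))  # (0, 1): best case, 1 comparison
print(linear_search(data, 8))  # (-1, 5): worst case, n comparisons
```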