CSI CALENDAR 2010-11
December 2010
Workshop on Java, Android & Web Technologies
Date : 10-12 December 2010
Hosted by: Jaypee University of Engineering & Technology, Guna (MP)
Organised by: CSI and Jaypee University of Engineering & Technology,
Guna
For details contact: Dr. Shishir Kumar, [email protected]
National Conference on E-Governance & E-Society (NCEGOVS-2010)
Date : 11-12 December 2010
Hosted by: Allahabad Chapter
For details contact: Mr. D.K. Dwivedi, [email protected]
ICoAC 2010: 2nd International Conference on Advanced Computing
Date: 14-16, Dec. 2010 at Chennai, India
Organised by: Dept. of Information Technology, Anna University Chennai,
MIT Campus, and IEEE Madras Section, and supported by Computer
Society of India Div. IV & Chennai Chapter, IEEE Computer Society Madras
Chapter, Centre for Development of Advanced Computing (CDAC) and
University Grants Commission (UGC)
For details contact: Dr. S Thamarai Selvi, Professor, Dept. of Information
Technology, MIT Campus, Anna University Chennai, Chromepet,
Chennai 600044, India. Phone: 91-44-22516319 / 22516015.
Email: [email protected] OR Mr. H R Mohan, Chair Div. IV at
[email protected] Website: www.annauniv.edu/icoac2010
ICSIP-2010: International Conference on Signal and Image Processing
Date : 15-17, Dec. 2010 at Chennai, India
Organized by: RMD College of Engineering and University of Mysore in
association with Computer Society of India Div IV & Chennai Chapter and
IEEE Computer Society, Madras Chapter
For details contact: Prof. Dr. R. M. Suresh, Chair – Programme Committee at
[email protected] or [email protected] OR Mr. H R Mohan, Chair
Div IV at [email protected] Website: www.rmd.ac.in/icsip2010/
Role of IT in National Rural Employment Guarantee Act (NREGA)
Date : 17-18 December 2010
Hosted by: Tata Institute of Social Sciences
Organised by: CSI and Tata Institute of Social Sciences
For details contact: Prof. Bino Paul, [email protected]
Seminar on Knowledge Management
Date : 18th December 2010
Hosted by: Academic Staff College, VIT University
Organised by: CSI SIG-KM and CSI Vellore Chapter
For details contact: [email protected], [email protected]
January 2011
ConfER-2011: The 4th National Conference on Education & Research
Date : 23-24 January, 2011
Hosted by: Shambhunath Institute of Engineering & Technology, Allahabad
Organized by: CSI Division V, Region-I and Allahabad Chapter
For details contact: Prof. J P Mishra (e-mail: [email protected]) or
Mr. Zafar Aslam (e-mail: [email protected])
February 2011
NCCSE – 2011: Second National Conference on Computational Science and
Engineering
Date : 4-5 February 2011 at Kochi, India
Organized by: Department of Computer Science & CSI Student Branch,
Rajagiri College of Social Sciences, Cochin, in association with CSI Div. IV on
Communications and Cochin Chapter
For details contact: Dr. P. X. Joseph, Conference Convener, Prof. & HOD,
Department of Computer Science, Rajagiri College of Social Sciences,
Rajagiri P.O., Kalamassery, Cochin - 683104, Kerala, India. Phone: 0484-2555564,
Email: [email protected] or visit the website at: www.rajagiri.edu
CONSEG-2011: International Conference on Software Engineering
Date : 17-19 February, 2011
Organized by: CSI Div. II (Software) and Bangalore Chapter
For details contact: Dr. Anirban Basu, [email protected]
Second International Conference on Emerging Applications of Information
Technology (EAIT 2011)
Date : 18-20 February, 2011
Hosted by: Kolkata Chapter
For details contact: Mr. D P Sinha, [email protected]
March 2011
27th CSI National Student Convention
Date : 9-12, March 2011
Hosted by: ITM Gwalior
Organized by: CSI ITM Universe Student Branch and CSI Gwalior Chapter
For details contact: [email protected], [email protected],
[email protected]
April 2011
NCVESCOM-11: 4th National Conference on VLSI, Embedded Systems,
Signal Processing and Communication Technologies
Date : 8-9, Apr 2011 at Chennai
Organized by: Department of Electronics & Communications Engg., Aarupadai
Veedu Institute of Technology, Vinayaka Missions University, and supported
by CSI Div. IV (Communications), IEEE Madras Section, IEEE COMSOC, IEEE
CS, IETE, BES(I).
For details contact: D Vijendra Babu, Conference Co-Chair, NCVESCOM-11,
HOD & Associate Professor/ECE, Aarupadai Veedu Institute of Technology,
Paiyanoor-603104. Email: [email protected]
Phone: +91 9443538245 or Mr. H.R. Mohan, Chair, Div. IV
at [email protected] Website: www.avit.ac.in
July 2011
ACC-2011: International Conference on Advances in Computing and
Communications
Date : 22-24, Jul 2011 at Kochi, India
Organized by: Rajagiri School of Engineering and Technology (RSET)
in association with Computer Society of India (CSI), Div. IV & Cochin
Chapter, The Institution of Electronics and Telecommunication Engineers
(IETE), The Institution of Engineers (India) and Project Management Institute
(PMI), Trivandrum, Kerala Chapter.
For details contact: Dr. Sabu M. Thampi, Conference Chair - ACC2011,
Professor, Dept. of Computer Science & Engineering, Rajagiri School of
Engineering and Technology, Rajagiri Valley, Kakkanad, Kochi 682 039,
Kerala, INDIA. Email: [email protected]
Website: http://www.acc-rajagiri.org
M D Agrawal
Vice President & Chair, Conference Committee, CSI
EXECUTIVE COMMITTEE 2010-11/12

President: Prof. P Thrimurthy, [email protected]
Vice-President: Mr. M D Agrawal, [email protected]
Hon. Secretary: Prof. H R Vishwakarma, [email protected]
Hon. Treasurer: Mr. Saurabh H Sonawala, [email protected]
Immd. Past President: Mr. S Mahalingam, [email protected]

Nominations Committee: Prof. (Dr.) U K Singh, Dr. Shyam Sunder Agrawal,
Dr. Suresh Chandra Bhatia

Regional Vice-Presidents
Mr. M P Goel (Region I), [email protected]
Dr. D P Mukherjee (Region II), [email protected]
Prof. S G Shah (Region III), [email protected]
Mr. Sanjay Mohapatra (Region IV), [email protected]
Dr. D B V Sarma (Region V), [email protected]
Mr. C G Sahasrabuddhe (Region VI), [email protected]
Mr. S Ramanathan (Region VII), [email protected]
Mr. Jayant Krishna (Region VIII), [email protected]

Division Chairpersons
Division-I (Hardware): Dr. T V Gopal, [email protected]
Division-II (Software): Dr. Deepak Shikarpur, [email protected]
Division-III (Applications): Dr. S Subramanian, [email protected]
Division-IV (Communications): Mr. H R Mohan, [email protected]
Division-V (Edu. & Research): Prof. Swarnalatha Rao, [email protected]

Publications Committee Chairman: Prof. S. V. Raghavan, [email protected]
Chief Editor: Dr. T V Gopal, [email protected]
Resident Editor: Mrs. Jayshree Dhere, [email protected]
Director (Education): Wg. Cdr. M Murugesan (Retd.), [email protected]
Executive Secretary: Mr. Suchit Gogwekar, [email protected]

Published by Mr. Suchit Gogwekar for Computer Society of India

CONTENTS | Volume No. 34 | Issue No. 9 | December 2010

Theme Section: Nature Inspired Computing
04 Nature Inspired Machine Intelligence - Ajith Abraham
08 DPSO - Dynamic Particle Swarm Optimization - Debora Maria Rossi de Medeiros & Andre C. P. L. F. de Carvalho
12 Harmony Search Algorithm - Zong Woo Geem
14 Nature Inspired Computing in Digital Watermarking Systems - Ashraf Darwish

HR Column
17 Think Local, Act Global - Aditya Narayan Mishra

Articles
19 Neuro Fuzzy Vertical Handoff Decision Algorithm for Overlaid Heterogeneous Network - Anita Singhrova & Nupur Prakash
26 Data Mining: A Process to Discover Data Patterns and Relationships for Valid Predictions - Jasmine K S

Students Korner
30 Analysis of Techniques for Detection of Deep Web Search Interface - Dilip Kumar Sharma & A. K. Sharma

CSI2010 - Special Report
35 Technology and Society: the human touch - R Gopalakrishnan
38 CSI Annual Convention 2010 at Mumbai - A Report
44 CSI Honors @ 45th Annual National Convention 2010, Mumbai

Departments
02 Community Talk
03 President's Desk
34 ExecCom Transacts

CSI Topics
45 From CSI Chapters
CSI Calendar 2010-11 (2nd Cover)
CSI Elections 2011-2012/2013 (Back Cover)
COMMUNITY TALK
From: Sanjay Mehta
[email protected]
Sent: Saturday, November 27, 2010, 12:27 AM
Dear Mr. Agrawal,
The CSI event was very nicely organized and the content was excellent.
Joint participation by Government IT, academicians and industry IT at this
scale has never been seen in a single event in any forum.
I have been part of multiple awards as a sponsor, juror or audience member,
but the process and transparency, with participation for awards from
Government and Industry, was at the highest level of excellence, as you
noted on the dais.
Congratulations on the success of this event and the hard work of your team.
I am happy that I got invited and could be part of this event. Good luck.
Regards
Sanjay Mehta
Thank you!
“The Outline of Science” is a four volume series
which has the motto “A Plain Story Simply Told”.
J. Arthur Thompson, Regius Professor of Natural
History in the University of Aberdeen is the Editor of
this series first published in 1922. One Chapter in the
first volume is “The Dawn of Mind”. In this chapter,
Prof. Arthur Thompson succinctly observes that if we are to form a sound
judgment on the intelligence of mammals, we must not attend too much to
those that have profited by man's training, nor to those whose mental life
has been dulled by domestication.
The nature versus nurture debate concerns
the relative importance of an individual’s innate
qualities (“nature”, i.e. nativism, or innatism) versus
personal experiences (“nurture”, i.e. empiricism or
behaviorism) in determining or causing individual
differences in physical and behavioral traits. In the
long-running battle of whether our thoughts and
personalities are controlled by genes or environment,
scientists are building a convincing body of evidence
that it could be either or both.
Nobel laureate David Gross outlined 25
questions in science that he thought physics might
help answer. One of Gross's questions involved
human consciousness. The greatest brainteaser in
this field has been to explain how processes in the
brain give rise to subjective experiences. A closer
look at nature from the point of view of information
processing can and will change what we mean by
computation.
“Biology and computer science–life and
computation–are related. I am confident that at their
interface great discoveries await those who seek them.”
– Leonard Adleman, Scientific American, Aug. 1998
Nature Inspired Computing is the field of
research that investigates models and computational
techniques inspired by nature and, dually, attempts
to understand the world around us in terms of
information processing. It is a highly interdisciplinary
field that connects the natural sciences with
computing science, both at the level of information
technology and at the level of fundamental research.
“Computers are rigid, unbending, unyielding,
inflexible, and quite unwieldy. Let’s face it, they’ve
improved our lives in many a way, but they do tend to be
a pain. When interacting with them you have to be very
methodical and precise, in a manner quite contrary to
human nature. Step outside the computer’s programmed
repertoire of behavior, it will simply refuse to cooperate,
or--even worse--it will “crash” (a vivid term coined
by computing professionals to describe a computer’s
breaking down). Computers are notoriously bad at
learning new things and at dealing with new situations.
It all adds up to one thing: At their most fundamental,
computers lack the ability to adapt. Adaptation concerns
a system’s ability to undergo modifications according
to changing circumstances, thus ensuring its continued
functionality.”
- Moshe Sipper, Machine Nature, McGraw-Hill, New
York, 2002
The adaptive, bio-inspired systems mooted by Moshe Sipper have a
complexity that is more than simply being complicated objects, and more
than the difficulty of building and comprehending them. Adaptation,
Bio-inspiration and Complexity are fast becoming the new ABC of computing.
Among the oldest examples of nature inspired
models of computation are the cellular automata
conceived by Ulam and von Neumann in the 1940s.
John von Neumann, who was trained in both
mathematics and chemistry, investigated cellular
automata as a framework for the understanding of
the behavior of complex systems. In particular, he
believed that self-reproduction was a feature essential
to both biological organisms and computers.
Despite being marvels of complexity and human
ingenuity, computers are notoriously bad at learning
new things and dealing with new situations. Marvin
Minsky of MIT suggested in his Society of Mind
theory, “To explain the mind, we have to show how
minds are built from mindless stuff, from parts that
are much smaller and simpler than anything we’d
consider smart.” So, if a person wants to formulate a
problem-solving strategy based on some observation
from nature, how and where should he/she start?
Based on the general principles of “survival of the fittest” – whereby poor
performers are eliminated – and the “law of the jungle” – whereby weak
performers are eaten by stronger ones – systems have been devised to solve
some well-known constraint satisfaction problems.
Autonomy Oriented Computing (AOC) has
become a new field of computer science that
systematically explores the metaphors and models of
autonomy as offered by nature (e.g., physical, biological,
and social entities of varying complexity), as well as
their role in addressing our practical computing needs.
It studies emergent autonomy as the core behavior of
a computing system and draws on such principles as
multi-entity formulation, local interaction, nonlinear
aggregation, adaptation, and self-organization.
On behalf of the CSI Communications team,
I thank Dr. Ajith Abraham for providing insightful
articles for this issue.
Going to CSI2010 has been one of the most
memorable experiences for me. I congratulate the
CSI2010 Team.
Dr. Gopal T V
Hon. Chief Editor
[email protected]
The CSIC Team thanks the Chairpersons of all the Committees formed for CSI2010
and all the Session Chairpersons for ensuring that we work seamlessly with all the
members of their respective teams, the Dignitaries, Award Winners and the Invited
Speakers. A special report on CSI2010 is in this issue.
PRESIDENT’S DESK
From: [email protected]
Subject: President’s Desk
Date: 1st December, 2010
Dear Affectionate Members of CSI Family,
Like the armed forces that defend the country on a continuous basis, ICT
professionals need to stay alert to the following challenges, while continuing
research in heterogeneous disciplines:
1. The demand for increasing productivity in various sectors of the economy,
2. The commitment to improving the quality of life of citizens, and
3. Learning new technologies for designing and developing cost-effective
solutions.
Review & SWOT Analysis:
It is well accepted that India is among the most closely watched countries
and has been making a great impact on the use of ICT in the global market.
The most advanced sectors of ICT depend upon the brains of Indian youth.
As we enter the second decade of the 21st century, the challenges to the
Indian dream are increasingly visible. Much of the global economy depends
upon Indian software developers, yet there is always the possibility of
countries like China entering the race and sharing the ICT market with India.
Thus India has to be alert not only to external threats of competition but also
to the internal demand of shaping the country's youth into high-quality
human resources to keep the lead in the world.
After two major recessions in the last decade (one in 2000 and the other in
2008), we have learnt many lessons and have strived hard to survive. We
have been successful in overcoming these two recessions, yet we have to
apply a great deal of thought to find the venues and avenues where Indian
youth can be deployed to improve the productivity and economy of our
country in particular and of the world in general. ICT in its own domain has
grown significantly, but the application domains are still starving for growth
in their ICT-enabled form. The development of new infrastructure, the
improvement of human healthcare processes, social networking and many
other domains are expanding with the help of new versions of ICT such as
Cloud Computing, the Semantic Web, Search Engines, etc.
Thus, we concentrated on providing platforms that enable our CSians
(members of CSI) to exchange their success stories, get a feel for new
technologies, develop a competitive spirit through innovative applications
and e-Governance applications, learn about the use of new technologies
and, above all, network with one another at the CSI 2010 Annual Convention
at Mumbai during 24-28 November 2010.
iGen and CSI 2010
The CSI2010 annual convention at Mumbai took up many critical issues
related to the betterment of business, improvement in academics and
learning, enhancement of productivity in industries, more efficiency in
Government and many others in the coming decade. The theme of the
convention, “iGen: Technologies for the Next Decade”, was exciting and
appealing, and there was a great deal of knowledge exchange to arrive at
optimized, ICT-based solution plans for the focused problems that are likely
to occur in the coming decade. The 3-day convention focused on a combined
effort to provide all possible solution paradigms to diversified problems.
Covering the whole ICT domain, several interesting tracks were arranged in
parallel during the three days, including Architecture, Enterprise, Society,
Entrepreneurship, Connectivity, Solutions, Education & Research,
eGovernance and Excellence in IT. The sessions were chaired by many
learned experts and the talks were delivered by many eminent practitioners.
We are thankful to the track chairs for their efforts in organizing the
sessions: Prof. Manohar Chandwani, Prof. Umesh Bellur, Dr. Satish Babu &
Dr. Sasi Kumar, Prof. Anirudh Joshi, Mr. Manak Singh, Prof. Karandikar,
Shri Sunil Mehta, Maj. Gen. R K Bagga, Mr. Mohan Datar, Mr. Anil Srivastava
and Mr. Awantkia Varma, and to the 151 distinguished speakers, who turned
the annual conference into a festival of learning and a delightful
knowledge-sharing environment.
We are honored by the presence of the Past Presidents of CSI, which
encourages the family spirit in CSI. We are happy that Shri Deepakbhai
Parekh of HDFC and Shri R. Gopala Krishnan of M/s Tata Sons could come
to inspire the gathering at the inaugural session. It was amazing to see the
audience, on their own, give a standing ovation to Dr. F. C. Kohli as he came
to the stage to receive the CSI Lifetime Achievement Award. The National
Council of CSI and the AGM had a new look with good attendance, and it
was a great honor to present honors to the CSI chapter-level award winners
from all over the country for their voluntary contributions to CSI.
We congratulate all the winners of the CSI-Nihilent e-Governance awards,
the IT awards, this year's Fellowship awards, the chapter-level achievement
awards, the Vandana Goyal award, the Shri Satish Doshi award and the
other awards instituted by philanthropists for CSI members.
It was nice to see the Revenue Minister of Goa, Mr. Jose Philip D'Souza,
participate in the technical presentations and invite the CSI-Nihilent
e-Governance team to Goa to conduct the Knowledge Sharing Summit.
Salaam Pyara Mumbai CSians.
The Mumbai Chapter of CSI has made all of us proud by creating an
atmosphere of knowledge-sharing celebrations. While Dr. Vishnu Kanhere
and Shri R C Goyal with their teams ran around providing hospitality,
Dr. Rajiv Gerela, Shri Ravi Eppaturi, Shri Manish Shah, Shri Ravikiran
Mankikar, Shri M R Datar, Shri V. L. Mehta, Shri Rohinton Dhumasia,
Dr. T J Mathews, Dr. Sasi Kumar and the managing committee members of
the Mumbai Chapter managed the sequence of events for CSI 2010 in
professional style. The CSI HQ staff led by Shri Suchit have been excellent.
Convention Ambassador Mr. M D Agarwal, Conference Chair Shri S
Mahalingam and Prof. D B Pathak monitored the logistics and were a great
support in interfacing with the industry, while PC Chairs Dr. Atanu Rakshit
(who contributed the theme of the conference) and Dr. Manohar Chandwani,
with the track chairs, organized the resource persons to provide excellent
technical presentations. We are grateful to these stalwarts.
Past Presidents Dr. F C Kohli, Shri Hemant Sonawala, Prof. PVS Rao, Prof.
S Ramani, Dr. M. L. Goyal, Brig. SVS Chowdhry, Dr. Rattan Dutta, Prof.
H N Mahabala and Prof. K K Aggarwal spared their time to review the event
periodically and encourage it. The Mumbai CSI Chapter deserves my sincere
salute for its loving care in hosting CSI 2010. Salaam to all those CSI
members who made all of us proud in the country.
Alarm bell ringing to improve quality of research and to control plagiarism:
About 80% of the papers received (against the call for papers for CSI-2010)
were rejected by our referees. There were casual papers without quality.
More alarming is that several cases of blatant plagiarism were reported. The
paper committee and the referees adopted 'zero tolerance for plagiarism'. I
appeal to all budding professionals to shun plagiarism. India has many
stories on ethics. A crow cannot become a peacock by putting on the
peacock's feathers. The Indian national bird is our icon, and we are to
continue as originals.
Prof. P Thrimurthy
President, Computer Society of India
GUEST EDITORIAL
Nature Inspired Machine Intelligence
Ajith Abraham
Director - Machine Intelligence Research Labs (MIR Labs), Scientific Network for Innovation and Research
Excellence (SNIRE), P.O. Box 2259, Auburn, Washington 98071, USA
Email: [email protected], [email protected]
WWW: http://www.softcomputing.net, http://www.mirlabs.org
Nature inspired computation is a general term referring to computing inspired by
nature. It is an emerging interdisciplinary area in the information technology field. So
far, a range of techniques and methods have been studied for dealing with large,
complex, and dynamic problems. The idea is to mimic the concepts, principles and
mechanisms of the complex phenomena occurring in nature as computational
processes, in order to enhance the way computation is performed, mainly from a
problem-solving point of view. The general area of machine intelligence is currently
undergoing an important transformation by trying to incorporate computational ideas
borrowed from the nature all around us. Some of the key paradigms falling under this
umbrella are detailed below.
Artificial Neural Networks
Artificial neural networks have been developed as
generalizations of mathematical models of biological
nervous systems. In a simplified mathematical
model of the neuron, the effects of the synapses
are represented by weights that modulate the effect
of the associated input signals, and the nonlinear
characteristic exhibited by neurons is represented
by a transfer function, usually a sigmoid or Gaussian
function. The neuron impulse is then computed as
the weighted sum of the input signals, transformed
by the transfer function. The learning capability of an
artificial neuron is achieved by adjusting the weights
in accordance with the chosen learning algorithm [7].
Evolutionary Algorithms
Evolutionary algorithms (EA) are adaptive
methods, which may be used to solve search
and optimization problems, based on the genetic
processes of biological organisms [8]. Over many
generations, natural populations evolve according
to the principles of natural selection and ‘survival of
the fittest’. By mimicking this process, evolutionary
algorithms are able to ‘evolve’ solutions to real
world problems, if they have been suitably encoded.
Usually grouped under the term evolutionary
algorithms or evolutionary computation, we find the
domains of genetic algorithms, evolution strategies,
evolutionary programming, genetic programming
and learning classifier systems. They all share a
common conceptual base of simulating the evolution
of individual structures via processes of selection,
mutation, and reproduction.
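As a sketch of the selection-mutation-reproduction loop described above, the following minimal genetic algorithm evolves bit strings; all parameter values are illustrative assumptions:

```python
import random

def evolve(fitness, n_genes=8, pop_size=20, generations=50, p_mut=0.05):
    """Minimal genetic algorithm: rank selection, one-point crossover,
    bit-flip mutation."""
    pop = [[random.randint(0, 1) for _ in range(n_genes)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)     # 'survival of the fittest'
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_genes)  # one-point crossover
            child = a[:cut] + b[cut:]
            child = [g ^ 1 if random.random() < p_mut else g for g in child]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

# Example: evolve a bit string that maximizes the number of ones.
print(evolve(fitness=sum))
```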
Cultural Algorithms are computational models
of cultural evolution. They consist of two basic
components, a population space (using evolutionary
algorithms), and a belief space. The two components
interact by means of a vote-inherit-promote protocol
[14]. The knowledge acquired by the problem-solving
activities of the population can be stored in the belief
space in the form of production rules etc.
Cultural algorithms represent a general framework for
producing hybrid evolutionary systems that integrate
evolutionary search and domain knowledge.
Swarm Intelligence
Swarm intelligence is aimed at the collective
behaviour of intelligent agents in decentralized
systems. Most of the basic ideas are derived from real
swarms in nature, which include ant colonies, bird
flocking, honeybees, bacteria, microorganisms etc.
Swarm models are population-based: the population
is initialised with a set of potential solutions [9].
These individuals are then manipulated (optimised)
over many iterations using heuristics inspired by the
social behaviour of insects, in an effort to find the
optimal solution. Ant Colony Optimization (ACO)
algorithms are inspired by the behavior of natural ant
colonies, in the sense that they solve their problems
by multi-agent cooperation using indirect
communication through modifications in the
environment. Ants release a certain amount of
pheromone (hormone) while walking, and each ant
prefers (probabilistically) to follow a direction that is
rich in pheromone. This simple behavior explains why
ants are able to adjust to changes in the environment,
such as finding the shortest path to a food source or a
nest. In ACO, ants use information collected during
past simulations to direct their search, and this
information is made available and modified through
the environment.
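A minimal sketch of the pheromone mechanism just described: simulated ants choose probabilistically between two routes to a food source, pheromone evaporates, and the shorter route receives more deposit per trip. The route names and constants are illustrative assumptions:

```python
import random

lengths = {"short": 1.0, "long": 2.0}     # two candidate routes to food
pheromone = {"short": 1.0, "long": 1.0}
EVAPORATION = 0.1

for _ in range(200):
    # Each ant prefers (probabilistically) the pheromone-rich direction.
    route = random.choices(list(pheromone), weights=list(pheromone.values()))[0]
    for r in pheromone:
        pheromone[r] *= (1.0 - EVAPORATION)   # pheromone evaporates over time
    pheromone[route] += 1.0 / lengths[route]  # shorter route: more deposit per trip

print(pheromone)  # most pheromone accumulates on the short route
```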
Bacterial Foraging Optimization
The Bacterial Foraging Optimization algorithm is a
swarm intelligence based meta-heuristic mimicking
the Escherichia coli (E. coli) bacterium, whose
locomotion comes from a set of up to six rigid,
100–200 rps spinning flagella, each driven as a
biological motor. The selection behavior of bacteria
tends to eliminate animals with poor foraging
strategies and favor the propagation of the genes of
those animals that have successful foraging
strategies. After many generations, a foraging animal
takes actions to maximize the energy obtained per
unit time spent foraging [10]. That is, poor foraging
strategies are either eliminated or shaped into good
ones.
Harmony Search
Harmony search is inspired by the music
improvisation process, where musicians improvise
their instruments' pitches searching for a perfect
state of harmony. Although the estimation of a
harmony is aesthetic and subjective, several theorists
have provided standards for
harmony estimation: Greek philosopher and
mathematician Pythagoras (582–497BC)
worked out the frequency ratios (or string
length ratios with equal tension) and found
that they had a particular mathematical
relationship, after researching what notes
sounded pleasant together. The octave
was found to be a 1:2 ratio and what we
today call a fifth to be a 2:3 ratio; French
composer Leonin (1135–1201) is the first
known significant composer of polyphonic
‘‘organum’’, which involved a simple
doubling of the chant at an interval of a
fifth or fourth above or below; and French
composer Jean-Philippe Rameau (1683–
1764) established the classical harmony
theories in the book ‘‘Treatise on Harmony’’,
which still form the basis of the modern
study of tonal harmony [16].
In engineering optimization, the estimation of a
solution is carried out by putting the values of the
decision variables into an objective or fitness function
and evaluating the function value with respect to
several aspects, such as cost, efficiency and/or error.
Just as music improvisation seeks a best state
(fantastic harmony) determined by an aesthetic
standard, the optimization process seeks a best state
(global optimum) determined by objective function
evaluation; the pitch of each musical instrument
determines the aesthetic quality, just as the objective
function value is determined by the set of values
assigned to each decision variable; and just as
aesthetic sound quality can be improved practice by
practice, the objective function value can be improved
iteration by iteration. The HS metaheuristic algorithm
was derived from the natural musical performance
processes that occur when a musician searches for a
perfect state of harmony, such as during jazz
improvisation.
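A minimal harmony-search sketch along the lines described above: a new harmony is improvised via memory consideration, pitch adjustment and random selection, and the worst harmony in memory is replaced when a better one is found. All parameter values here are illustrative assumptions:

```python
import random

def harmony_search(f, bounds, hms=10, hmcr=0.9, par=0.3, bw=0.1, iters=2000):
    """Minimize f over a box; bounds is a list of (low, high) per variable."""
    dim = len(bounds)
    memory = [[random.uniform(*bounds[d]) for d in range(dim)] for _ in range(hms)]
    for _ in range(iters):
        new = []
        for d in range(dim):
            if random.random() < hmcr:               # memory consideration
                pitch = random.choice(memory)[d]
                if random.random() < par:            # pitch adjustment
                    lo, hi = bounds[d]
                    pitch = min(max(pitch + random.uniform(-bw, bw), lo), hi)
            else:                                    # random improvisation
                pitch = random.uniform(*bounds[d])
            new.append(pitch)
        worst = max(range(hms), key=lambda i: f(memory[i]))
        if f(new) < f(memory[worst]):                # replace the worst harmony
            memory[worst] = new
    return min(memory, key=f)

# Example: minimize a simple quadratic.
print(harmony_search(lambda x: (x[0] - 1) ** 2 + (x[1] + 2) ** 2, [(-5, 5), (-5, 5)]))
```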
Simulated Annealing
Simulated annealing is based on the manner in which
liquids freeze or metals recrystallize in the process of
annealing [3]. In an annealing process, molten metal,
initially at high temperature, is slowly cooled so that
the system at any time is approximately in
thermodynamic equilibrium. If the initial temperature
of the system is too low, or cooling is done
insufficiently slowly, the system may become brittle
or unstable, forming defects. The initial state of a
thermodynamic system is set at energy E and
temperature T. Holding T constant, the initial
configuration is perturbed and the change in energy
dE is computed. If the change in energy is negative,
the new configuration is accepted. If the change in
energy is positive, it is accepted with a probability
given by the Boltzmann factor exp(-dE/T). This
process is then repeated for a few iterations to give
good sampling statistics for the current temperature;
the temperature is then decremented and the entire
process repeated until a frozen state is achieved at
T = 0.
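The annealing schedule just described maps directly to code; here is a minimal sketch, where the energy function, neighbor move and cooling constants are illustrative assumptions:

```python
import math
import random

def anneal(energy, state, neighbor, t0=10.0, t_min=1e-3, cooling=0.95, steps=50):
    """Simulated annealing with the Boltzmann acceptance rule exp(-dE/T)."""
    T = t0
    while T > t_min:                      # T -> 0 approximates the frozen state
        for _ in range(steps):            # sampling at the current temperature
            candidate = neighbor(state)
            dE = energy(candidate) - energy(state)
            if dE < 0 or random.random() < math.exp(-dE / T):
                state = candidate         # accept downhill always, uphill sometimes
        T *= cooling                      # decrement the temperature
    return state

# Example: minimize a 1-D double-well energy landscape.
print(anneal(energy=lambda x: (x * x - 1) ** 2 + 0.3 * x,
             state=2.0,
             neighbor=lambda x: x + random.uniform(-0.5, 0.5)))
```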
Membrane Computing
Membrane computing (P systems) is a framework
which abstracts from the way live cells process
chemical compounds in their compartmental
structure [4]. In a membrane system, multisets of
objects are placed in the compartments defined by
the membrane structure, and the objects evolve by
means of reaction rules that are associated with the
compartments and applied in a maximally parallel,
nondeterministic manner. The objects can be
described by symbols or by strings of symbols. For
symbol-objects, a set of numbers is computed; in the
case of string-objects, a set of strings is computed,
which is more like a language. The objects are able to
pass through membranes, and the membranes can
dissolve, divide and change their permeability. These
features are used in defining transitions between
configurations of the system, and sequences of
transitions are used to define computations. A
sequence of transitions is a computation.
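As a toy illustration of the multiset rewriting just described, here is a sketch of one maximally parallel, nondeterministic step in a single membrane; the rule encoding is an assumption made for illustration, and real P systems also have membrane hierarchy and communication:

```python
import random
from collections import Counter

def mp_step(multiset, rules):
    """One maximally parallel, nondeterministic step: rule instances are
    assigned (consuming objects) until no rule applies to the leftover;
    all right-hand sides are then produced at once. Rules are (lhs, rhs)
    multisets with nonempty lhs."""
    left, produced = Counter(multiset), Counter()
    while True:
        ready = [(lhs, rhs) for lhs, rhs in rules
                 if all(left[s] >= n for s, n in lhs.items())]
        if not ready:
            break
        lhs, rhs = random.choice(ready)   # nondeterministic assignment of rules
        left -= Counter(lhs)
        produced += Counter(rhs)
    return left + produced

# Example: rules a -> bb and ab -> c compete for the a's.
state = Counter({"a": 3, "b": 1})
rules = [({"a": 1}, {"b": 2}), ({"a": 1, "b": 1}, {"c": 1})]
for _ in range(3):                        # a sequence of transitions: a computation
    state = mp_step(state, rules)
print(state)
```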
Artificial Immune System (AIS)
Artificial immune systems, like other biologically
inspired techniques, try to extract ideas from a
natural system, in particular the vertebrate immune
system, in order to develop computational tools for
solving engineering problems. The basic idea of AIS
is to exploit the immune system's characteristics of
learning and memory to solve a problem. AIS can be
broadly categorized into three subgroups: those
using the clonal selection theory, those using
negative selection and those using the immune
network theory as their main inspiration [11].
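A minimal clonal-selection sketch in the spirit of the first subgroup mentioned above: high-affinity antibodies are cloned and mutated, and fresh random cells maintain diversity. All names and parameter values are illustrative assumptions:

```python
import random

def clonal_selection(affinity, dim=2, pop=20, n_best=5, n_clones=4, generations=50):
    """Keep the best antibodies, clone them, and mutate clones more strongly
    the lower their parent's rank (affinity is maximized)."""
    cells = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop)]
    for _ in range(generations):
        cells.sort(key=affinity, reverse=True)
        new = cells[:n_best]                        # memory cells survive
        for rank, ab in enumerate(cells[:n_best]):
            scale = 0.1 * (rank + 1)                # better rank -> smaller mutation
            for _ in range(n_clones):
                new.append([x + random.gauss(0, scale) for x in ab])
        while len(new) < pop:                       # receptor editing: new random cells
            new.append([random.uniform(-5, 5) for _ in range(dim)])
        cells = new[:pop]
    return max(cells, key=affinity)

# Example: the 'antigen' sits at (1, 2); affinity is closeness to it.
print(clonal_selection(lambda x: -((x[0] - 1) ** 2 + (x[1] - 2) ** 2)))
```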
DNA Computation
DNA computing is a form of computing which uses
DNA (deoxyribonucleic acid) and molecular biology,
instead of the traditional silicon-based
microprocessors. Just like a string of
binary data is encoded with ones and
zeros, a strand of DNA is encoded with
four bases, represented by the letters A,
T, C and G (nucleotides) and the data
density is very impressive. An important
property of DNA is its double stranded
nature with every DNA sequence having a
natural complement. This complementarity
feature makes DNA a unique data structure
for computation and can be exploited in
many ways. In the cell, DNA is modified
biochemically (on the molecular level) by a
variety of enzymes, which are tiny protein
machines. This mechanism along with some
synthetic chemistry is responsible for the
various DNA computation operators [12].
Computing with Words
Computing with words is a methodology
in which the objects of computation are
words and propositions drawn from a natural
language. Computing with words is inspired
by the brain’s crucial ability to manipulate
perceptions without any measurements
or computations. Computing with words
provides a foundation for a computational
theory of perceptions. A basic difference
between perceptions and measurements
is that, in general, measurements are crisp,
whereas perceptions are fuzzy [13].
Artificial Life
Artificial life (Alife) attempts to set up systems with
life-like properties which all biological organisms
possess, such as reproduction, homeostasis,
adaptability etc. Alife is often described as attempting
to understand high-level behavior from low-level
rules; for example, how the simple rules of Darwinian
evolution lead to high-level structure, or the way in
which the simple interactions between ants and their
environment lead to complex trail-following behavior
[15]. Understanding this relationship in particular
systems promises to provide novel solutions to
complex real-world problems, such as disease
prevention, stock-market prediction, and data mining
on the Internet.
Quantum Computation
In conventional silicon computers,
the amount of data is measured by bits;
in a quantum computer, it is measured by
qubits (quantum-bit). A qubit can be a 1 or
a 0, or it can exist in a superposition that is
simultaneously both 1 and 0 or somewhere
in between. The basic principle of quantum
computation is that the quantum properties
of particles can be used to represent and
structure data, and that devised quantum
mechanisms can be used to perform
operations with this data [5].
Hybrid Approaches
Several adaptive hybrid intelligent
systems have in recent years been
developed and many of these approaches
use the combination of different knowledge
representation schemes, decision making
models and learning strategies to solve a
computational task [6]. This integration
aims at overcoming limitations of individual
techniques through hybridization or fusion
of various techniques. It is well known
that the intelligent systems, which can
provide human like expertise such as
domain knowledge, uncertain reasoning,
and adaptation to a noisy and time varying
environment, are important in tackling
practical computing problems. In contrast with
conventional artificial intelligence techniques, which
deal only with precision and certainty, the guiding
principle of hybrid approaches is to exploit the
tolerance for imprecision and uncertainty, to achieve
robustness and to provide near-optimal solutions.
This special issue is a collection of
articles reflecting some of the current
technological innovations in the Nature
inspired computation field and its real world
applications.
In the first article, de Medeiros and
de Carvalho illustrate the performance of
Particle Swarm Optimization (PSO) and
also introduce two new modified versions
of PSO, where two parameters from the
original PSO version are adjusted on-the-fly.
Experimental results show that these new
versions are able to provide better results
than traditional versions of PSO in clustering
tasks.
Real-life optimization problems are often NP-hard
and CPU time and/or memory consuming. In the
second article, Talbi discusses the use of parallel
bio-inspired algorithms to significantly reduce the
computational complexity of the search process.
In the third article, Liu et al. illustrate the need for a
biologically inspired computational model of
language cognition. Functional Magnetic Resonance
Imaging (fMRI) provides a high-resolution volumetric
mapping of the haemodynamic response of the brain,
which can be correlated with neural activity, thereby
allowing the spatially localized characteristics of
brain activity to be observed. The authors illustrate
brain activation during the cognition of Chinese
characters and Arabic numerals.
Computational grids are expected to leverage
unprecedented computing capacities by virtually
joining together geographically distributed resources
at large scale. To achieve this objective, the design
of efficient Grid schedulers that map and
allocate tasks and applications onto Grid
resources is a key issue. Xhafa and Abraham
in the fourth article illustrate how various
nature inspired heuristic and meta-heuristic
methods can be used to design efficient
schedulers in computational grids.
In the fifth article, Geem provides a gentle
introduction to Harmony search.
Digital watermarking is used to protect the
copyrights of multimedia. A significant merit of
digital watermarking over traditional protection
methods (cryptography) is that it provides a
seamless interface, so that users are still able to
utilize protected multimedia transparently, by
embedding an invisible digital signature (watermark)
into multimedia data (audio, images, video). In the
final article, Darwish presents some nature inspired
computing methods that have been proposed to
solve digital watermarking problems.
We would like to take this opportunity to thank all
the contributors to this special issue. We hope this
special issue inspires researchers to extend the
current nature inspired technologies to build
advanced applications. Finally, we would like to
thank Dr. Gopal T.V. (Honorary Editor-in-Chief,
CSIC) for the timely advice, effort and painstaking
editorial work during the preparation of this special
issue.
Ajith Abraham
References
[1] A. Abraham, "Neuro-Fuzzy Systems: State-of-the-Art Modeling Techniques", in Jose Mira and Alberto Prieto, eds., Connectionist Models of Neurons, Learning Processes, and Artificial Intelligence, Springer Verlag, Germany, 2001, pp. 269-276.
[2] W. Banzhaf, P. Nordin, R.E. Keller, and F.D. Francone, Genetic Programming: An Introduction on the Automatic Evolution of Computer Programs and its Applications, Morgan Kaufmann Publishers, Inc., 1998.
[3] S. Kirkpatrick, C.D. Gelatt Jr., and M.P. Vecchi, "Optimization by Simulated Annealing", Science, 220 (4598), 671-680, 1983.
[4] G. Paun, "Computing with membranes", Journal of Computer and System Sciences, 61 (1), 108-143, 2000.
[5] D. Deutsch, "Quantum Theory, the Church-Turing Principle, and the Universal Quantum Computer", Proc. Roy. Soc. Lond., A400, 97-117, 1985.
[6] A. Abraham, "Intelligent Systems: Architectures and Perspectives", in Recent Advances in Intelligent Paradigms and Applications, A. Abraham, L. Jain and J. Kacprzyk (Eds.), Studies in Fuzziness and Soft Computing, Springer Verlag, Germany, ISBN 3790815381, Chapter 1, pp. 1-35, 2002.
[7] C.M. Bishop, Neural Networks for Pattern Recognition, Oxford University Press, Oxford, UK, 1995.
[8] D.B. Fogel, Evolutionary Computation: Toward a New Philosophy of Machine Intelligence, IEEE Press, Piscataway, NJ, second edition, 1999.
[9] J. Kennedy and R. Eberhart, Swarm Intelligence, Morgan Kaufmann Publishers, Inc., San Francisco, CA, 2001.
[10] K.M. Passino, "Biomimicry of Bacterial Foraging for Distributed Optimization and Control", IEEE Control Systems Magazine, pp. 52-67, June 2002.
[11] L.N. de Castro and J.I. Timmis, Artificial Immune Systems: A New Computational Intelligence Approach, Springer-Verlag, London, 2002.
[12] M. Amos, Theoretical and Experimental DNA Computation, Springer, ISBN 3-540-65773-8, 2005.
[13] L.A. Zadeh and J. Kacprzyk (Eds.), Computing with Words in Information/Intelligent Systems: Foundations, Studies in Fuzziness and Soft Computing, Springer Verlag, Germany, ISBN 379081217X, 1999.
[14] R.G. Reynolds, Z. Michalewicz, and M.J. Cavaretta, "Using Cultural Algorithms for Constraint Handling in GENOCOP", Proceedings of the Fourth Annual Conference on Evolutionary Programming, MIT Press, Cambridge, pp. 289-305, 1995.
[15] C. Adami, Introduction to Artificial Life, Springer-Verlag, New York, 1998.
[16] Z.W. Geem, J.H. Kim, and G.V. Loganathan, "A new heuristic optimization algorithm: harmony search", Simulation, 76 (2), 60-68, 2001.
About the Guest Editor
Ajith Abraham received the M.S. degree from Nanyang Technological University, Singapore, and the Ph.D. degree
in computer science from Monash University, Melbourne, Australia. He is currently the Director of Machine
Intelligence Research Labs (MIR Labs), Scientific Network for Innovation and Research Excellence, USA, which
has members from more than 75 countries. He has a worldwide academic experience with formal appointments
in Monash University; Oklahoma State University, Stillwater, OK; Chung-Ang University, Seoul, Korea; Jinan
University, Jinan, China; Rovira i Virgili University, Tarragona, Spain; Dalian Maritime University, Dalian, China;
Yonsei University, Seoul, Korea; the Open University of Catalonia, Barcelona, Spain; the National Institute of
Applied Sciences (INSA-Lyon), Lyon, France; and the Norwegian University of Science and Technology (NTNU),
Trondheim, Norway. He serves/has served on the editorial boards of over 50 international journals and has also
guest edited 40 special issues on various topics. He has published more than 700 publications, and some of the
works have also won best paper awards at international conferences. His research and development experience
includes more than 20 years in the industry and academia. He works in a multidisciplinary environment involving
machine intelligence, terrorism informatics, network security, sensor networks, e-commerce, Web intelligence,
Web services, computational grids, data mining, and their applications to various real-world problems. He has
given more than 54 plenary lectures and conference tutorials in these areas.
Dr. Abraham is a Senior Member of the IEEE Systems Man and Cybernetics Society, the IEEE Computer
Society, the Institution of Engineering and Technology (U.K.), the Institution of Engineers Australia
(Australia), etc. He is a chair of the IEEE Systems Man and Cybernetics Society Technical Committee on
Soft Computing. He is actively involved in the Hybrid Intelligent Systems (HIS); Intelligent Systems Design and
Applications (ISDA); Information Assurance and Security (IAS); and Next Generation Web Services Practices
(NWeSP) series of international conferences, in addition to other conferences.
More information at: http://www.softcomputing.net
Branches of Biology
[Excerpted from the Wikipedia: http://en.wikipedia.org/wiki/Biology]
These are the main branches of biology:
• Aerobiology – the study of airborne organic particles
• Agriculture – the study of producing crops from the land, with an
emphasis on practical applications
• Anatomy – the study of form and function, in plants, animals, and other
organisms, or specifically in humans
• Astrobiology – the study of evolution, distribution, and future of life in the
universe. Also known as exobiology, exopaleontology, and bioastronomy
• Biochemistry – the study of the chemical reactions required for life to
exist and function, usually a focus on the cellular level
• Bioengineering – the study of biology through the means of engineering
with an emphasis on applied knowledge and especially related to
biotechnology
• Bioinformatics – the use of information technology for the study,
collection, and storage of genomic and other biological data
• Biomathematics or Mathematical Biology – the quantitative or
mathematical study of biological processes, with an emphasis on
modeling
• Biomechanics – often considered a branch of medicine, the study of the
mechanics of living beings, with an emphasis on applied use through
prosthetics or orthotics
• Biomedical research – the study of the human body in health and disease
• Biophysics – the study of biological processes through physics, by
applying the theories and methods traditionally used in the physical
sciences
• Biotechnology – a new and sometimes controversial branch of biology
that studies the manipulation of living matter, including genetic
modification and synthetic biology
• Building biology – the study of the indoor living environment
• Botany – the study of plants
• Cell biology – the study of the cell as a complete unit, and the molecular
and chemical interactions that occur within a living cell
• Conservation Biology – the study of the preservation, protection, or
restoration of the natural environment, natural ecosystems, vegetation,
and wildlife
• Cryobiology – the study of the effects of lower than normally preferred
temperatures on living beings.
• Developmental biology – the study of the processes through which an
organism forms, from zygote to full structure
• Ecology – the study of the interactions of living organisms with one
another and with the non-living elements of their environment
• Embryology – the study of the development of embryo (from fecundation
to birth). See also topobiology.
• Entomology – the study of insects
• Environmental Biology – the study of the natural world, as a whole or in a
particular area, especially as affected by human activity
• Epidemiology – a major component of public health research, studying
factors affecting the health of populations
• Ethology – the study of animal behavior
• Evolutionary Biology – the study of the origin and descent of species over
time
• Genetics – the study of genes and heredity
• Herpetology – the study of reptiles and amphibians
• Histology – the study of cells and tissues, a microscopic branch of
anatomy
• Ichthyology – the study of fish
• Integrative biology – the study of whole organisms
• Limnology – the study of inland waters
• Mammalogy – the study of mammals
• Marine Biology – the study of ocean ecosystems, plants, animals, and
other living beings
• Microbiology – the study of microscopic organisms (microorganisms)
and their interactions with other living things
• Molecular Biology – the study of biology and biological functions at the
molecular level, some cross over with biochemistry
• Mycology – the study of fungi
• Neurobiology – the study of the nervous system, including anatomy,
physiology and pathology
• Oceanography – the study of the ocean, including ocean life, environment,
geography, weather, and other aspects influencing the ocean
THEME ARTICLE
DPSO - Dynamic Particle Swarm Optimization
Debora Maria Rossi de Medeiros* & Andre C. P. L. F. de Carvalho**
Instituto de Ciências Matemáticas e de Computação (ICMC), University of São Paulo at São Carlos - USP
Av. do Trabalhador Saocarlense, 400, 13566-590, São Carlos, Brazil
*Email: [email protected] **Email: [email protected]
Particle Swarm Optimization (PSO) comprises population-based techniques that perform
optimization tasks by parallel searches among candidate solutions. These techniques
have become popular for their simplicity and ability to avoid local optima. This paper
presents two new modified versions of PSO, where two parameters from the original PSO
version are adjusted on-the-fly. Experimental results show that these new versions are
able to provide better results than traditional versions of PSO in clustering tasks.
1. Introduction
Population based techniques are optimization
algorithms that use a population of candidate solutions
at each point of the optimization process. These
techniques, among them Particle Swarm Optimization
(PSO), are based on metaheuristics, which make
them less likely to get stuck in local optima. Such
techniques have been successfully applied to several
tasks, including data clustering (Hruschka et al., 2009;
Egan et al., 1998; Yi et al., 2006; Chen and Zhao,
2009; Cui et al., 2005).
Differently from Evolutionary Algorithms
(Goldberg, 1989), which encode solutions as
chromosomes and employ genetic operators and
selection mechanisms, in PSO, each candidate
solution is a particle that adjusts its position in the
search space based on its own experience and the
other particles experience.
In this paper, we propose two modified versions
of PSO and evaluate them in the clustering context.
For such, the paper is organized as follows. Section
2 briefly surveys the PSO techniques we use as
benchmark and some examples of the PSO application
in clustering tasks. Section 3 describes the proposed
approaches. Section 4 contains some experimental
results obtained by the proposed approaches.
2. Related work
In this section, we briefly explain the main
mechanisms of traditional PSO introduced by
Kennedy and Eberhart (1995) and the modified
version proposed by Shi and Eberhart (1998). Both
PSO versions are widely used and are employed
here as references to evaluate the performance of
the proposed approaches. We also mention some
research works that employ PSO in clustering tasks.
In PSO, each particle, ei, represents a candidate
solution in an M-dimensional search space and is
denoted by its coordinates. During the optimization
process, each particle adjusts its position in the search
space based on its own experience and the experience
of other particles. For such, each particle keeps track
of its best position so far, pi, and the best position
found so far by any particle in the population, the
global best, g. These parameters, together with a
velocity parameter, fi, define the new position and
velocity for each particle, according to Equations 1
and 2, respectively:

ei = ei + fi    (1)

fi = w fi + c1 r1 (pi - ei) + c2 r2 (g - ei)    (2)

where c1 and c2 are confidence coefficients, r1 and r2
are random values in the range [0, 1] and w controls
the balance between local search and global search.
The initial particles have their velocity and position
randomly defined.
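A minimal sketch of the update step defined by Equations 1 and 2 follows; the function shape and default parameter values are illustrative assumptions, not the authors' implementation:

```python
import random

def pso_step(particles, velocities, pbest, gbest, w=0.7, c1=2.0, c2=2.0):
    """One PSO iteration: update each particle's velocity from its personal
    best (pbest) and the global best (gbest), then move the particle."""
    for e, f, p in zip(particles, velocities, pbest):
        for d in range(len(e)):
            r1, r2 = random.random(), random.random()
            # Equation (2): new velocity
            f[d] = w * f[d] + c1 * r1 * (p[d] - e[d]) + c2 * r2 * (gbest[d] - e[d])
            # Equation (1): new position
            e[d] += f[d]
    # pbest and gbest are then refreshed by evaluating the fitness function.
```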
Another version of PSO was introduced by Shi and
Eberhart (1998), where the parameter w is defined as
a decreasing function of time, as shown by Equation
3. This time-decreasing weight leads to a finer local
search.
(3)
PSO has been already investigated in the context
of clustering problems. In Yi et al. (2006), the authors
proposed a PSO-based image clustering. In this
approach, they considered each particle to represent
a prototype of the cluster. The fitness function is
the objective function used by the Fuzzy c-means
CSI COMMUNICATIONS | DECEMBER 2010
8
to the K-means algorithm.
3
DPSO algorithm
in In
twothis
phases:
is used
optimizeversions
the cluster
then, the
resulting
prototypes1are
as initial seeds
paper,first,
we PSO
propose
twotomodified
of prototypes,
PSO algorithm,
named
Dynamic-PSO
andused
2 (DPSO-1
and
to the K-means
algorithm.
DPSO-2).
They are
called “dynamic” because the parameters c and c are updated during the convergence process.
Table 1: Main characteristics of
1 the 2synthetic datasets.
In both algorithms, DPSO-1 and DPSO-2, there is an initial value for c1 and c2 : co1 and co2 . These values are updated
#featuresof the population
#objects
per
cluster
Prototypes
according
to thealgorithm
rate of improvement
best
fitness
value along the interactions. In these
algorithms, if
3 #clusters
DPSO
T
the
best
fitness
value
remains
the
same
during
t
interactions,
c
and
c
decrease
by
s
×
10%
their
original
values,
is, 8:97; 1:14]T,
1
2
Synthetic-1
3
3
50
v
=
[8:54;
7:53;
4:91]
, v2 =that
[2:86;
1
in−two
phases:
first,
used
to optimize
the
cluster
resulting
prototypes
are used as initial seeds
c1 = In
we propose
two
modified
versions
PSO
algorithm,
named
Dynamic-PSO
1 then,
and updated.
2the
(DPSO-1
and
0.1s),
where
s −PSO
1ofisis
the
number
of times
that
c1 e cprototypes,
There
co1 this
(1 −paper,
0.1s) and
c2 = co2 (1
2 were already
T
v3 = [4:37; 4:76; 8:37]
the because
K-means
DPSO-2).
arefor
called
“dynamic”
parameters
are
minimumThey
values
c1 and
are updated
during where
the convergence
process.
(1 −c20.1ρ),
respectively,
ρ is a positive
integer
c2to
, defined
as co1 the
(1algorithm.
−
0.1ρ) andcc1o2and
T
T
In both
initial value for c1 andv c2=: [2:0;
constant.
co1 and6:0]
co2 . These
updated
Synthetic-2
6 algorithms, DPSO-12and DPSO-2, there is an50
, v2 =values
[3:5;are
2:0]
, v3 = [5:0; 5:0]T,
1
Tfitness
T if
T
In
DPSO-2,
besides
c
according
to
the
rate
of
improvement
of
the
population
best
fitness
value
along
the
interactions.
In
these
algorithms,
and
c
decreasing
mechanism,
there
is
also
an
increasing
rule:
if
the
best
value
improves
1
2
v
=
[5:0;
8:0]
,
v
=
[6:5;
2:0]
v6 = [8:0;are
6:0]
in two phases: first, PSO is used to optimize the cluster prototypes,
then, the
resulting
are usedthe
as 4cluster
initial
seeds
phases:
first,
PSO isprototypes
used to optimize
prototypes, then,
the resulting, prototypes
used as initial seeds
3in twoDPSO
algorithm
5
o
the best
Several PSO-based clustering approaches are related to the classical fuzzy c-means algorithm (Bezdek, 1981). In Chen and Zhao (2009), the authors proposed a fuzzy clustering algorithm based on the maximum entropy principle (MEP) and the use of PSO: the particles encode candidate cluster centroids, and the membership grades of the partition matrix are constructed with the use of the MEP. Cui et al. (2005) present a hybrid clustering strategy that works in two phases: first, PSO is used to optimize the cluster prototypes; then, the resulting prototypes are used as initial seeds to the K-means algorithm. Other modified versions of PSO for clustering let the particles also encode feature weights that represent the importance of the features, or augment the fitness function with an additional term to maximize the distance between the prototypes.

3. DPSO algorithm

In this paper, we propose two modified versions of the PSO algorithm, named Dynamic-PSO 1 and 2 (DPSO-1 and DPSO-2). They are called "dynamic" because the parameters c1 and c2 are updated during the convergence process.

In both algorithms, DPSO-1 and DPSO-2, there is an initial value for c1 and c2: $c_1^o$ and $c_2^o$. These values are updated according to the rate of improvement of the population best fitness value along the iterations. In these algorithms, if the best fitness value remains the same during t iterations, c1 and c2 decrease by s × 10% of their original values, that is, $c_1 = c_1^o(1 - 0.1s)$ and $c_2 = c_2^o(1 - 0.1s)$, where s − 1 is the number of times that c1 and c2 were already updated. There are minimum values for c1 and c2, defined as $c_1^o(1 - 0.1\rho)$ and $c_2^o(1 - 0.1\rho)$, respectively, where ρ is a positive integer constant.

In DPSO-2, besides the c1 and c2 decreasing mechanism, there is also an increasing rule: if the best fitness value improves during t iterations, c1 and c2 increase by s × 10% of their original values, $c_1 = c_1^o(1 + 0.1s)$ and $c_2 = c_2^o(1 + 0.1s)$. The maximum values for c1 and c2 are defined as $c_1^o(1 + 0.1\rho)$ and $c_2^o(1 + 0.1\rho)$, respectively.

DPSO-1 and DPSO-2 had their performance evaluated in data clustering problems. In these problems, DPSO-1 and DPSO-2 evolve populations of clustering solutions. As seen in Figure 1, each particle is formed by two vectors, representing cluster centroids and feature weights. The left portion, the cluster centers, holds C × N real numbers specifying the centroid positions of the C clusters. The right part, the feature weights, defines the importance of each input feature for the clustering. Thus, both cluster centroids and feature weights are optimized simultaneously by the PSO.

Figure 1: Candidate solution representation, where N is the number of items in the dataset and C is the number of clusters.
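As an illustration of the update schedule just described, here is a minimal Python sketch that computes the current coefficient values (the function and variable names are ours, not from the original paper):

```python
def dpso_coefficient(c0, s, rho, increasing=False):
    """Current value of an acceleration coefficient (c1 or c2) under the
    DPSO schedule: c = c0*(1 - 0.1*s) after s stagnation updates
    (DPSO-1 and DPSO-2), or c = c0*(1 + 0.1*s) after s improvement
    updates (DPSO-2 only), bounded by c0*(1 -/+ 0.1*rho)."""
    step = min(s, rho)                      # rho caps how far c may drift from c0
    delta = 0.1 * step if increasing else -0.1 * step
    return c0 * (1.0 + delta)
```

With the settings used in the experiments below (c0 = 2, rho = 5), three consecutive stagnation updates give dpso_coefficient(2.0, 3, 5) = 1.4, and the coefficients can never leave the interval [1.0, 3.0].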
4. Experiments

In order to evaluate the candidate solutions from different perspectives, three different fitness functions were employed:

• Xie-Beni (Xie and Beni, 1991): $XB = \frac{\sum_{i=1}^{C}\sum_{k=1}^{N} u_{ik}^2 \|v_i - x_k\|^2}{N \min_{i,j} \|v_i - v_j\|^2}$, where lower XB values are better.

• Dataset Reconstruction (Pedrycz and Oliveira, 2008) error: $\|X - \tilde{X}\|^2$, where $X$ is the original dataset and $\tilde{X}$ is the dataset rebuilt from the resulting membership matrix $U = [u_{ij}]$ and prototypes $v_i$, according to $\tilde{x}_k = \frac{\sum_{i=1}^{C} u_{ik}^2 v_i}{\sum_{i=1}^{C} u_{ik}^2}$. Lower values of this index are better.

• Fuzzy Davies-Bouldin (Davies and Bouldin, 1979): $DB = \frac{1}{C}\sum_{i=1}^{C} \max_{l \neq i} \frac{\sigma_i + \sigma_l}{\|v_i - v_l\|}$, where $\sigma_i = \frac{\sum_{k=1}^{N} u_{ik} \|x_k - v_i\|}{N}$, and lower values are better.
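For concreteness, the three indexes above can be vectorized as follows. This is our own NumPy rendering, assuming float arrays: a fuzzy membership matrix U of shape C x N, prototypes V of shape C x d, and data X of shape N x d:

```python
import numpy as np

def xie_beni(U, V, X):
    # XB = sum_{i,k} u_ik^2 ||v_i - x_k||^2 / (N * min_{i!=j} ||v_i - v_j||^2)
    d2 = ((V[:, None, :] - X[None, :, :]) ** 2).sum(-1)    # C x N squared distances
    num = (U ** 2 * d2).sum()
    sep = ((V[:, None, :] - V[None, :, :]) ** 2).sum(-1)   # C x C prototype separations
    sep[np.diag_indices_from(sep)] = np.inf                # exclude i == j from the min
    return num / (X.shape[0] * sep.min())

def reconstruction_error(U, V, X):
    # x~_k = sum_i u_ik^2 v_i / sum_i u_ik^2 ; error = ||X - X~||^2
    W = U ** 2
    X_rec = (W.T @ V) / W.sum(axis=0)[:, None]
    return ((X - X_rec) ** 2).sum()

def fuzzy_davies_bouldin(U, V, X):
    # DB = (1/C) sum_i max_{l != i} (sigma_i + sigma_l) / ||v_i - v_l||
    d = np.sqrt(((V[:, None, :] - X[None, :, :]) ** 2).sum(-1))   # C x N distances
    sigma = (U * d).sum(axis=1) / X.shape[0]
    sep = np.sqrt(((V[:, None, :] - V[None, :, :]) ** 2).sum(-1))
    ratio = (sigma[:, None] + sigma[None, :]) / np.where(sep > 0, sep, np.inf)
    np.fill_diagonal(ratio, -np.inf)                       # exclude l == i from the max
    return ratio.max(axis=1).mean()
```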
Two synthetic datasets, produced in order to follow the multivariate Gaussian distribution with spherical clusters according to the settings in Table 1, were used. Both datasets are illustrated in Figure 2.

Table 1: Main characteristics of the synthetic datasets.

Dataset      #clusters  #features  #objects per cluster  Prototypes
Synthetic-1  3          3          50                    v1 = [8.54, 7.53, 4.91]^T, v2 = [2.86, 8.97, 1.14]^T, v3 = [4.37, 4.76, 8.37]^T
Synthetic-2  6          2          50                    v1 = [2.0, 6.0]^T, v2 = [3.5, 2.0]^T, v3 = [5.0, 5.0]^T, v4 = [5.0, 8.0]^T, v5 = [6.5, 2.0]^T, v6 = [8.0, 6.0]^T

Figure 2: Plots of the synthetic datasets. (a) Synthetic-1 dataset. (b) Synthetic-2 dataset.

The initial population of particles was initialized as follows: for each particle, C data items were randomly selected from the dataset and used as prototypes to calculate a partition matrix, according to $u_{ik} = 1 / \sum_{j=1}^{C} (d_{ik}/d_{jk})^2$. Then, the centroids corresponding to each partition matrix are computed and used as the initial population of particles. This initialization scheme was found to speed up the convergence and slightly improve the results.

The performance of DPSO-1 and DPSO-2 was compared with the traditional PSO and with the modified PSO introduced by Shi and Eberhart (1998). The parameters employed by each PSO version were:

• Traditional PSO: itermax = 200, c1 = c2 = 2, w = 0.9 and population size 100.
• PSO with decreasing w: itermax = 200, c1 = c2 = 2, wmin = 0.2, wmax = 0.9 and population size 100.
• DPSO-1 and DPSO-2: $c_1^o = c_2^o = 2$, ρ = 5, t = 8 and population size 100.

The number of clusters varied from 3 to 16. The experimental results were measured according to the difference between the proximity matrix obtained from the original structure of classes or clusters associated with the dataset and the one obtained from the resulting clusters, referred to here as the Proximity error. This error is calculated as $\sum_{k_1=1}^{N}\sum_{k_2=1}^{N} (p_{k_1 k_2} - \hat{p}_{k_1 k_2})^2$, where $P = [p_{k_1 k_2}]$ is the proximity matrix and $p_{k_1 k_2} = \sum_{i=1}^{C} \min(u_{ik_1}, u_{ik_2})$.
deviation
runs for the three objective
2 over three
fitness.
criterion
of each PSO version: the last
• DPSO-1 and DPSO-2: co1 = co2 =
solutions. As seen in Figure 4, the particle
functions
by the
fourand
versions
of (c)
PSO:
traditional,
withand
decreasing
(a) Synthetic-1
dataset(performance
and Re- (b)indexes),
Synthetic-1
dataset
FDB fitSynthetic-1
dataset
Xie-Beniw, DPSO-1 and DPSO-2 for
Figure 4 presents the results for each dataset, with mean and standard deviation over three runs for the
construction fitness.
ness.
fitness.
datasets Synthetic-1 and
Synthetic-2.
functions (performance indexes), by the four versions of PSO: traditional, with decreasing w, DPSO-1 a
2 datasets Synthetic-1 and Synthetic-2.
2
CSI COMMUNICATIONS | DECEMBER 2010 9
1
1
1
2
1
o
2
2
o
1
2
o
1
o
2
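A small sketch of the Proximity error computation described above (our own NumPy rendering; U_true and U_found are C x N membership matrices for the reference classes and for the clustering found):

```python
import numpy as np

def proximity_matrix(U):
    # p[k1, k2] = sum_i min(u[i, k1], u[i, k2]) for a C x N membership matrix U
    return np.minimum(U[:, :, None], U[:, None, :]).sum(axis=0)   # N x N

def proximity_error(U_true, U_found):
    # squared difference between the reference and the obtained proximity matrices
    P, P_hat = proximity_matrix(U_true), proximity_matrix(U_found)
    return ((P - P_hat) ** 2).sum()
```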
Fig. 3: Plot of Proximity error for the Synthetic-1 and Synthetic-2 datasets, as a function of the number of clusters, for traditional PSO, PSO with decreasing w, DPSO-1 and DPSO-2 with the 3 fitness functions. Panels: (a) Synthetic-1 dataset and Reconstruction fitness; (b) Synthetic-1 dataset and FDB fitness; (c) Synthetic-1 dataset and Xie-Beni fitness; (d) Synthetic-2 dataset and Reconstruction fitness; (e) Synthetic-2 dataset and FDB fitness; (f) Synthetic-2 dataset and Xie-Beni fitness.
Table 2: Win/tie/loss table of PSO versions

                       Traditional PSO  PSO with decreasing w  DPSO-1   DPSO-2
Traditional PSO        —                22/0/14                14/0/22  20/0/16
PSO with decreasing w  14/0/22          —                      10/0/26  13/0/23
DPSO-1                 22/0/14          26/0/10                —        20/0/16
DPSO-2                 16/0/20          23/0/13                16/0/20  —
Figure 4 presents the results for each
dataset, with mean and standard deviation
over three runs for the three objective
functions (performance indexes), by the
four versions of PSO: traditional, with
decreasing w, DPSO-1 and DPSO-2 for
datasets Synthetic-1 and Synthetic-2.
In general, the four PSO versions tested
performed similarly for the Synthetic-1
dataset. DPSO-1 and DPSO-2 provided
better results than the other PSO versions
in several situations, mainly when using the Xie-Beni function as the fitness criterion. The results
obtained with Synthetic-2 dataset highlight
the improvement provided by DPSO-1 and
DPSO-2.
To summarize the comparison of
the four versions of PSO tested, Table
2 contains the number of wins, ties and
losses of the techniques listed on the first
column, comparing to the techniques listed
on the first row. The different PSO versions
were tested on 36 situations: 2 datasets × 6 numbers of clusters × 3 fitness functions.
For instance, the entry 26/0/10 on the
third row and second column means that
DPSO-1 won 26 times and lost 10 against
PSO with decreasing w. It is also interesting
to highlight that DPSO-1 provided better
results than traditional PSO in 22 cases
and lost in only 14. Another interesting
result regards the comparison between
DPSO-2 and PSO with decreasing w, where
the first won 23 times and lost 13 times.
These results suggest that the proposed
techniques, DPSO-1 and DPSO-2, can
improve the performance of PSO in data
clustering.
5.Conclusion
In this paper, we proposed two
modified versions of the optimization
technique PSO. Both approaches are based
on adjusting two PSO parameters during the
algorithm convergence, allowing a finer local
search and reducing the chances of getting trapped in local minima. The techniques were evaluated in
clustering problems and the experimental
results showed that the proposed PSO
versions were able to reach better results
than traditional versions of PSO in several
situations.
There are still some future investigations toward the validation of the proposed approaches. To mention a few,
it would be interesting to evaluate their
performance when applied to real datasets
and, also, continue to explore different ways
of automatically adjusting the parameters.
The proposed approaches could also be
compared to other bioinspired approaches
for data clustering, such as Genetic
Algorithms (Goldberg, 1989).
6.Acknowledgments
The authors would like to thank
CAPES, CNPq and FAPESP for the financial
support.
References
• Bezdek, J. C. (1981). Pattern recognition
with fuzzy objective function algorithms.
Plenum Press.
• Chen, D. and Zhao, C. (2009).
Data-driven fuzzy clustering based
on maximum entropy principle and
PSO. Expert Systems with Applications,
36(1):625 – 633.
• Cui, X., Potok, T., and Palathingal, P. (2005). Document clustering using particle swarm optimization. In Swarm Intelligence Symposium, 2005. SIS 2005. Proceedings 2005 IEEE, pages 185–191.
• Davies, D. L. and Bouldin, D. W. (1979). A cluster separation measure. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 1(2):224–227.
• Egan, M., Krishnamoorthy, M., and Rajan, K. (1998). Comparative study of a genetic fuzzy c-means algorithm and a validity guided fuzzy c-means algorithm for locating clusters in noisy data. In Evolutionary Computation Proceedings, 1998. IEEE World Congress on Computational Intelligence., The 1998 IEEE International Conference on, pages 440–445.
• Goldberg, D. E. (1989). Genetic Algorithms in Search, Optimization and Machine Learning. Addison-Wesley Longman Publishing Co., Inc., Boston, MA, USA, 1st edition.
• Hruschka, E., Campello, R., Freitas, A., and de Carvalho, A. (2009). A survey of evolutionary algorithms for clustering. Systems, Man, and Cybernetics, Part C: Applications and Reviews, IEEE Transactions on, 39(2):133–155.
• Kennedy, J. and Eberhart, R. (1995). Particle swarm optimization. In Neural Networks, 1995. Proceedings., IEEE International Conference on, volume 4, pages 1942–1948.
• Pedrycz, W. and Oliveira, J. V. (2008). A development of fuzzy encoding and decoding through fuzzy clustering. IEEE Transactions on Instrumentation and Measurement, 57(4):829–837.
• Shi, Y. and Eberhart, R. (1998). A modified particle swarm optimizer. In Evolutionary Computation Proceedings, 1998. IEEE World Congress on Computational Intelligence., The 1998 IEEE International Conference on, pages 69–73.
• Xie, X. and Beni, G. (1991). A validity measure for fuzzy clustering. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 13(8):841–847.
• Yi, W., Yao, M., and Jiang, Z. (2006). Fuzzy particle swarm optimization clustering and its application to image clustering. In Zhuang, Y., Yang, S., Rui, Y., and He, Q., editors, Advances in Multimedia Information Processing - PCM 2006, volume 4261 of Lecture Notes in Computer Science, pages 459–467. Springer Berlin / Heidelberg. 10.1007/11922162_53.
ANNOUNCEMENT
Computer Society of India
CSI National Headquarters,
Education Directorate, Chennai
Call for Panel of Evaluators – Minor Research Projects for R&D
Education Directorate invites volunteers for evaluation of Minor Research Project reports received from the Researchers.
Projects are in the following indicative thrust areas and other specializations.
Technology: OS, Programming Languages, DBMS, Computer & Communication Networks, Software Engineering, Multimedia
& Internet Technologies, Hardware & Embedded Systems
Process & Tools: Requirements Engineering, Estimation & Project Planning, Prototyping, Architecture & Design, Development,
Testing & Debugging, Verification & Validation, Maintenance & Enhancement, Change Management, Configuration
Management, Project Management, Software Quality Assurance & Process Improvement
Vertical Applications: Scientific Applications, Enterprise Systems, Governance, Judiciary & Law Enforcement, Manufacturing,
Healthcare, Education, Infrastructure, Transport, Energy, Defence, Aerospace, Automotive, Telecom, Agriculture & Forest
Management
Inter-disciplinary Applications: CAD/CAM/CAE, ERP/SCM, EDA, Geo-informatics, Bioinformatics, Industrial Automation,
CTI and Convergence.
We request all members of CSI from Academia and Industry to offer their valuable services to CSI by agreeing to be
empanelled as research evaluators. A token honorarium of ₹2000/- will be paid for each completed evaluation.
Your co-operation is solicited to improve the quality of R&D work by CSI associates.
Volunteers are requested to send their registration by e-mail in the prescribed format to [email protected]. The format can be downloaded from the CSI website using the link http://www.csi-india.org/c/document_library/get_file?uuid=2826ed1a-6be6-4f91-a7d3-dc17c511b9fc&groupId=10616 Hard copy registrations can be sent to Director-Education at the below-mentioned address:
Address for Correspondence
Director (Education)
CSI Education Directorate
National Headquarters, C.I.T. Campus, Taramani, Chennai – 600 113.
Phone : +91-44-2254 1102/1103/2874 • Fax: +91-44-2254 1143
THEME ARTICLE
Harmony Search Algorithm
Zong Woo Geem
Environmental Planning and Management Program, Johns Hopkins University
729 Fallsgrove Drive #6133, Rockville, Maryland 20850, USA. Email: [email protected]
http://sites.google.com/a/hydroteq.com/www/
Harmony search (HS) is a music-inspired
algorithm (Geem et al., 2001) and has been
applied to various optimization problems including
music composition, Sudoku puzzle, magic square,
timetabling, tour planning, logistics, web page
clustering, text summarization, Internet routing, visual
tracking, robotics, energy system dispatch, power
system design, cell phone networking, structural
design, water network design, dam scheduling, flood
model calibration, groundwater management, soil
stability analysis, ecological conservation, vehicle
routing, heat exchanger design, satellite heat pipe
design, offshore structure mooring, RNA structure
prediction, medical imaging, medical physics, etc.
(Geem, 2009; 2010a). Recently, HS was also applied
to astronomical data analysis, which was published in
Nature (Deeg et al., 2010).
Each musician in music performance plays a
musical note at a time, and those musical notes
together make a harmony. Likewise, each variable in
optimization has a value at a time, and those values
together make a solution vector. Just like the music
group improves their harmonies practice by practice,
the algorithm improves its solution vectors iteration
by iteration.
The HS algorithm basically has three operations: memory consideration, pitch adjustment,
and random selection. Using memory consideration
operation, HS chooses a value from harmony memory
(HM); using pitch adjustment operation, HS chooses
a value which is slightly modified from HM; and
using random selection operation, HS chooses a
value randomly from entire value range. These basic
operations constitute a novel stochastic derivative
(Geem, 2008), instead of traditional calculus-based
derivative, in order to search for the right direction to
the optimal solution.
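As a minimal sketch of these three operations, consider the following Python function. Parameter names such as hmcr, par and bandwidth follow common HS usage; the code is illustrative, not the author's reference implementation:

```python
import random

def improvise(memory, lower, upper, hmcr=0.9, par=0.3, bandwidth=0.1):
    """Build one new solution vector from the harmony memory (HM).

    memory       : list of solution vectors currently stored in HM
    lower, upper : per-variable bounds of the search space
    hmcr         : probability of memory consideration
    par          : probability of pitch adjustment after memory consideration
    """
    new = []
    for j in range(len(lower)):
        if random.random() < hmcr:
            x = random.choice(memory)[j]               # memory consideration
            if random.random() < par:                  # pitch adjustment
                x += random.uniform(-bandwidth, bandwidth)
                x = min(max(x, lower[j]), upper[j])    # keep within bounds
        else:
            x = random.uniform(lower[j], upper[j])     # random selection
        new.append(x)
    return new
```

In a full HS run, the improvised vector replaces the worst harmony in HM whenever it obtains a better fitness, and the process repeats until a stopping criterion is met.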
For more advanced issues in HS, researchers
have researched exploratory power (Das et al., 2010),
multi-modal solution space (Gao et al., 2009), multiobjective optimization (Geem, 2010b), distributed
memory (Pan et al., 2010), hybridization (Fesanghary
et al., 2008), and adaptive theory (Geem and Sim,
2010). In addition, HS has a unique derivative which
considers the relationship among variables (Geem,
2011).
References
Das, S., Mukhopadhyay, A., Roy, A., Abraham,
A., & Panigrahi, B. K. (2010) Exploratory Power
of the Harmony Search Algorithm: Analysis and
Improvements for Global Numerical Optimization.
IEEE Transactions on Systems, Man, and Cybernetics,
Part B: Cybernetics, http://dx.doi.org/10.1109/
TSMCB.2010.2046035
Deeg, H. J., Moutou, C., & Erikson, A. et al. (2010). A
transiting giant planet with a temperature between
250 K and 430 K. Nature, 464, 384-387.
Fesanghary, M., Mahdavi, M., Minary-Jolandan,
M., Alizadeh, Y. (2008). Hybridizing harmony search
algorithm with sequential quadratic programming for
engineering optimization problems. Computer Methods
in Applied Mechanics and Engineering, 197(33-40),
3080-3091.
Gao, X. Z., Wang, X., & Ovaska S. J. (2009) Unimodal and Multi-modal Optimization Using Modified
Harmony Search Methods. International Journal of
Innovative Computing, Information and Control, 5(10A),
2985-2996.
Geem, Z. W. (2008). Novel Derivative of
Harmony Search Algorithm for Discrete Design
Variables. Applied Mathematics and Computation,
199(1), 223-230.
Geem, Z. W. (2009). Music-Inspired Harmony
Search Algorithms: Theory and Applications. Berlin:
Springer.
Geem, Z. W. (2010a). Recent Advances in
Harmony Search Algorithm. Berlin: Springer.
Geem, Z. W. (2010b). Multiobjective
Optimization of Time-Cost Trade-Off Using Harmony
Search. ASCE Journal of Construction Engineering and
Management, 136(6), 711-716.
Geem, Z. W. (2011). Stochastic Co-Derivative
of Harmony Search Algorithm. International Journal of
Mathematical Modelling and Numerical Optimisation,
2(1), 1-12.
Geem, Z. W., Kim, J. H., & Loganathan,
G. V. (2001). A New Heuristic Optimization
Algorithm: Harmony Search. Simulation,
76(2), 60-68.
Geem, Z.W., Sim, K.-B. (2010).
Parameter-Setting-Free Harmony Search
Algorithm. Applied Mathematics and
Computation, http://dx.doi.org/10.1016/j.
amc.2010.09.049
Pan, Q.-K., Suganthan, P.N., Liang, J.
J., Tasgetiren, M.F. (2010). A local-best
harmony search algorithm with dynamic
subpopulations. Engineering Optimization,
42(2), 101 - 117.
ooo
Harmony search
[Excerpted from http://en.wikipedia.org/wiki/Harmony_search]
In computer science and operations research, harmony search (HS) is a phenomenon-mimicking algorithm (also known as
metaheuristic algorithm, soft computing algorithm or evolutionary algorithm) inspired by the improvisation process of musicians. In the
HS algorithm, each musician (= decision variable) plays (= generates) a note (= a value) for finding a best harmony (= global optimum)
all together. The Harmony Search algorithm has the following merits:
• HS does not require differential gradients, thus it can consider discontinuous functions as well as continuous functions.
• HS can handle discrete variables as well as continuous variables.
• HS does not require initial value setting for the variables.
• HS is free from divergence.
• HS may escape local optima.
• HS may overcome the drawback of GA’s building block theory which works well only if the relationship among variables in a chromosome is carefully considered. If neighbor variables in a chromosome have weaker relationship than remote variables, building block theory may not work well because of crossover operation. However, HS explicitly considers the relationship using ensemble operation.
• HS has a novel stochastic derivative applied to discrete variables, which uses musician’s experiences as a searching direction.
• Certain HS variants do not require algorithm parameters such as HMCR and PAR, thus novice users can easily use the algorithm.
The Outline of Science
“A Plain Story Simply Told” [Four Volume Series]
J. Arthur Thompson
Introduction – Volume 1
There is abundant evidence of a widened and deepened interest in
modern science. How could it be otherwise when we think of the
magnitude and the eventfulness of recent advances?
reduces to order the disorder of disease. Science is always setting
forth on Columbus voyages, discovering new worlds and conquering
them by understanding.
But the interest of the general public would be even greater than it is
if the makers of new knowledge were more willing to expound their
discoveries in ways that could be “understanded of the people.” No
one objects very much to technicalities in a game or on board a
yacht, and they are clearly necessary for terse and precise scientific
description.
For Knowledge means Foresight and Foresight means Power.
It is certain, however, that they can be reduced to a minimum
without sacrificing accuracy, when the object in view is to explain
“the gist of the matter.” So this OUTLINE OF SCIENCE is meant for
the general reader, who lacks both time and opportunity for special
study, and yet would take an intelligent interest in the progress of
science which is making the world always new.
The story of the triumphs of modern science is one of which Man
may well be proud. Science reads the secret of the distant star and
anatomises the atom; foretells the date of the comet’s return and
predicts the kinds of chickens that will hatch from a dozen eggs;
discovers the laws of the wind that bloweth where it listeth and
The idea of Evolution has influenced all the sciences, forcing us
to think of _everything_ as with a history behind it, for we have
travelled far since Darwin’s day. The solar system, the earth, the
mountain ranges, and the great deeps, the rocks and crystals, the
plants and animals, man himself and his social institutions--all must
be seen as the outcome of a long process of Becoming. There are
some eighty-odd chemical elements on the earth to-day, and it is
now much more than a suggestion that these are the outcome of
an inorganic evolution, element giving rise to element, going back
and back to some primeval stuff, from which they were all originally
derived, infinitely long ago. No idea has been so powerful a tool in
the fashioning of New Knowledge as this simple but profound idea
of Evolution, that the present is the child of the past and the parent
of the future. And with the picture of a continuity of evolution from
nebula to social systems comes a promise of an increasing control--a promise that Man will become not only a more accurate student,
but a more complete master of his world.
THEME ARTICLE
Nature Inspired Computing in
Digital Watermarking Systems
Ashraf Darwish
Computer Science Department, Helwan University, Cairo, Egypt
E-mail: [email protected], [email protected]
Abstract—Digital Watermarking using Nature Inspired Computing (NIC) methodologies
is currently attracting considerable interest from the research community. Characteristics
of nature inspired computational frameworks are adaptation, fault tolerance, high
computational speed and error resilience in noisy information environment and all these
fit the requirements of building a good watermarking model. This article provides a quick
overview of the research progress in applying nature inspired methods to the problem
of digital watermarking (DW). The scope of this review will encompass core methods
including artificial neural networks, evolutionary computation, swarm intelligence, and
the hybrid of these systems. The findings of this review should provide useful insights
into the current digital watermarking literature and be a good source for anyone who
is interested in the application of nature inspired approaches to DW systems or related
fields.
1.Introduction
The aim of this article is twofold: first, to present a comprehensive survey of research contributions that investigate the utilization of NIC methods in building digital watermarking models; second, to define existing research challenges and to highlight promising new research directions.
The scope of the survey is the core methods of NIC,
which encompass artificial neural networks, genetic
algorithms, swarm intelligence and some hybrid
approaches.
2. Digital Watermarking Algorithms
Recently, a very important and popular technique,
digital watermarking, was effectively applied to protect
the copyrights for multimedia and gained prominent
results [6]. A significant merit of digital watermarking
over traditional protection methods (cryptography) is
to provide a seamless interface so that users are still
able to utilize protected multimedia transparently by
embedding an invisible digital signature (watermark)
into multi-media data (audio, images, video). We
present the core NIC approaches that have been
proposed to solve digital watermarking problems.
2.1 Neural Networks in Digital Watermarking
An artificial neural network (ANN) consists of a
collection of processing units called neurons that are
highly interconnected in a given topology. ANNs have
the ability of learning-by-example and generalizing
from limited, noisy, and incomplete data; they
have, hence, been successfully employed in a broad
spectrum of data intensive applications.
Watermarking techniques integrate both color
image processing and cryptography, to achieve content
protection and authentication for color images. These
watermarking techniques are mainly based on neural
networks to further improve the performance of
Kutter’s technique for color images [7]. Because neural networks possess the capability of learning from given (training) patterns, the method can memorize the relations between a watermark and the corresponding watermarked image. This approach
can pave the way for developing the watermarking
techniques for multimedia data since color images
are ubiquitous in the contemporaneous multimedia
systems and also are the primary components of
MPEG video.
2.2 Genetic Algorithms in Digital Watermarking
In recent decades with the rapid development of
biomedical engineering, digital medical images have
been becoming increasingly important in hospitals
and clinical environments. Concomitantly, transferring medical images between hospitals involves complicated network protocol, image compression and security problems. Many techniques have been developed to
resolve these problems. For example, HIS (hospital
information system) and PACS (picture archiving and
communication system) are currently the
two primary data communication systems
used in hospitals. Although HIS may be
slightly different between hospitals, data
can be exchanged based on the standard—
HL7 (health level seven). Similarly, PACS
transmits medical images using the
standard—DICOM (digital imaging and
communications in medicine). Furthermore,
IEEE 1073 was published in order to set a
standard for measured data and signals
from different medical instruments.
In GA, variables of a problem are
represented as genes in a chromosome,
and the chromosomes are evaluated
according to their fitness using some
measures of profit or utility that we want
to optimize. Recombination typically
involves two genetic operators: crossover
and mutation. The genetic operators
alter the composition of genes to create
new chromosomes called offspring. The
selection operator is an artificial version
of natural selection, a Darwinian survival
of the fittest among populations, to create
populations from generation to generation,
and chromosomes with better fitness have
higher probabilities of being selected in
the next generation [8]. In recent works,
two conflicting requirements for typical
watermarking systems are selected,
namely, the watermarked image quality
and the robustness of the watermarking
algorithm. Simulation results also show
both the robustness under attacks and the
improvement in watermarked image quality
by using a genetic algorithm.
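A compact sketch of the GA loop just described, for bit-string chromosomes, follows. It is illustrative only, not taken from [8], and it assumes positive fitness values and an even population size:

```python
import random

def mutate(chrom, p):
    # flip each bit with probability p
    return ''.join(('1' if g == '0' else '0') if random.random() < p else g
                   for g in chrom)

def evolve(pop, fitness, generations=100, p_cross=0.9, p_mut=0.01):
    """One realization of the GA loop described above for bit-string
    chromosomes; assumes positive fitness values and an even population."""
    for _ in range(generations):
        scores = [fitness(c) for c in pop]               # evaluate all chromosomes
        nxt = []
        for _ in range(len(pop) // 2):
            # roulette-wheel selection: better fitness, higher probability
            a, b = random.choices(pop, weights=scores, k=2)
            if random.random() < p_cross:                # single-point crossover
                cut = random.randrange(1, len(a))
                a, b = a[:cut] + b[cut:], b[:cut] + a[cut:]
            nxt += [mutate(a, p_mut), mutate(b, p_mut)]  # mutation of offspring
        pop = nxt
    return max(pop, key=fitness)
```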
2.3 Swarm Intelligence in Digital Watermarking
Swarm intelligence (SI) approaches intend to solve complicated problems by
multiple simple agents without centralized
control or the provision of a global model.
Ant Colony Optimization has been
proposed and tested for document image
watermarking system. Generally speaking,
SI models are population-based. Individuals
in the population are potential solutions.
These individuals collaboratively search
for the optimum through iterative steps.
Individuals change their positions in the
search space, however, via direct or indirect
communications, rather than the crossover
or mutation operators in evolutionary
computation. There are two popular
swarm inspired methods in computational
intelligence areas: Ant colony optimization
(ACO) [9] and particle swarm optimization
(PSO) [10]. ACO simulates the behavior
of ants, and has been successfully applied
to discrete optimization problems; PSO
simulates a simplified social system of
a flock of birds or a school of fish, and is
suitable for solving nonlinear optimization
problems with constraints.
All security systems based on
encryption and watermarking are bound to
be broken in time given sufficient resources.
Hence, a number of important factors
need to be taken into consideration in
designing systems for protecting content
in consumer electronics devices. These
include robustness, renewability and
cost. Robustness: Refers to how strong
the system is against conceivable attacks.
Every successful design should produce a
security system that is sufficiently robust for
the application it is used for. Renewability:
When a protection system is hacked, there
must be a way to replace it with a new, more
robust system. This general concept can be
implemented in two fundamental ways. (1)
Replacement of renewable security device:
all the security functionality is assigned
to a renewable device such as a smartcard.
When its secrets are disclosed, it is simply
replaced by a new card. (2) Revocation of
consumer electronics device: the secrets are
embedded in the CE device, and cannot be
removed. If the device is understood to be
a pirate device, it is not allowed to receive
copy-protected content. Cost: The consumer
electronics industry is in a constant effort
to minimize the cost of manufacturing so
that the end product is affordable for the
consumer. Any additional cost needs to be
justified from the consumer's viewpoint.
3. Hybrid Approaches in Digital Watermarking
We review some of the hybrid approaches for digital watermarking, for example, ANN-fuzzy systems and genetic-swarm systems.
A watermarking technique aims to prevent
digital images that belong to rightful owners
from being illegally commercialized or used,
and it can verify the intellectual property
right. The embedded watermark should
be robust and transparent, but the ways of
pursuing transparency and robustness are in conflict. For instance, if we would like to
concentrate on the transparency issue, it is
natural to embed the smallest modulation
into images whenever possible. However,
due to such small values in the embedded
watermark, attacks can easily destroy the
watermark. Thus, it is an important issue
to find a fair balance between transparency
and robustness. The watermark is doubly
embedded in two opposite directions to
resist various attacks. However, the major
drawback of cocktail watermarking is that
it requires a suitable heuristic-tuning weight
for various images. Cocktail watermarking
has been extended to a blind multipurpose
watermarking system with the capability
of detecting malicious modification if the
watermark is known. The trade-off is that
the blind watermark algorithms are usually
less robust and have relatively higher false
alarm than those algorithms requiring
original images.
The fusion of neural networks and
fuzzy logic benefits both sides: neural
networks perfectly facilitate the process of
automatically developing a fuzzy system
by their learning and adaptation ability.
This combination is called neuro-fuzzy
systems; fuzzy systems make ANNs robust
and adaptive by translating a crisp output
to a fuzzy one. This combination is called
fuzzy neural networks (FNN). For example,
Zhang et al. [11] employed FNNs to detect
anomalous system call sequences to
decide whether a sequence is ‘‘normal’’ or
‘‘abnormal’’.
Recently, intelligent
algorithms such as genetic algorithm (GA)
and particle swarm optimization (PSO) have
shown good performances in optimization
problems. Intelligent algorithms based
watermark techniques can simultaneously
improve security, robustness, and image
quality of the watermarked images.
In [12], a hybrid watermarking
technique based on GA and PSO is
proposed. In the proposed technique, the
parameters of PLR obtained from JND values
and wavelet coefficients are first derived.
Thereafter, GA and PSO are simultaneously
performed to search the optimal values of
PLR. The proposed hybrid watermarking
technique uses GA and PSO to search the
optimal values for the derived parameters
of PLR. The proposed hybrid watermarking
technique in [12] is based on GA and PSO.
The general idea of the proposed technique
is to combine the advantages of PSO and
GA, the ability to cooperatively explore
the search space and to avoid premature
convergence.
Genetic algorithm (GA) has been
successfully applied to solve many
combinatorial optimization problems. The
application of GA to the evolution of fuzzy
rules can be found in [13, 14] for intrusion
detection. In [14], a simple GA is applied to
CSI COMMUNICATIONS | DECEMBER 2010
15
generate and evolve the fuzzy classifiers that
use complete expression tree and triangular
membership function for the formulation
of chromosome. To evaluate the fitness of
individual solutions, the weighted sum of
fitness values of multiple objective functions
is proposed in [14] where the proposed
weights are user-defined and cannot be
optimized dynamically for different cases.
4.Conclusion
Over the past decade, digital watermarking based upon NIC approaches has been a widely studied topic, being able to satisfy the growing demand for reliable and intelligent digital watermarking systems.
In our view, these approaches
contribute to digital watermarking in
different ways. Digital watermarking
based upon NIC is currently attracting
considerable interest from the research
community. In this article, we attempted
to present the challenges faced by digital
watermarking technology. While much progress has been made in recent years in this direction, especially in the robustness of embedded watermarks, attackers can still present many challenges in this area.
REFERENCES
[1] D. Poole, A. Mackworth, R. Goebel,
Computational Intelligence—A Logical
Approach, Oxford University Press,
Oxford, UK, 1998, ISBN-10:195102703.
[2] J.C. Bezdek, What is Computational
Intelligence? Computational
Intelligence Imitating Life, IEEE Press,
New York, 1994, pp. 1–12.
[3] B. Craenen, A. Eiben, Computational
intelligence. Encyclopedia of Life
Support Sciences, in: EOLSS, EOLSS
Co. Ltd., 2002.
[4] W. Duch, What is computational
intelligence and where is it going?
in:W. Duch, J. Man ´ dziuk (Eds.),
Challenges for Computational
Intelligence, volume 63 of Studies in
Computational Intelligence, Springer,
Berlin/Heidelberg, 2007, pp. 1–13.
[5] Pao-Ta Yu, Hung-Hsu Tsai, and Jyh-Shyan Lin, Digital watermarking based
on neural networks for color images,
Elsevier, Signal Processing 81 663-671,
(2001).
[6] Ahmet M. Eskicioglu and Edward J. Delp,
An overview of multimedia content
protection in consumer electronics
devices, Signal Processing: Image
Communication 16, 681–699, Elsevier,
(2001).
[7] Pao-Ta Yu, Hung-Hsu Tsai, and Jyh-Shyan Lin, Digital watermarking based
on neural networks for color images,
Elsevier, Signal Processing 81 663-671,
(2001).
[8] J.H. Holland, Adaptation in Natural and
Artificial Systems, The University of
Michigan Press, Ann Arbor, MI, 1975.
[9]S. Olariu, A.Y. Zomaya (Eds.),
Handbook of Bioinspired Algorithms
and Applications, Chapman & Hall/
CRC, 2006, ISBN-10: 1584884754.
[10]J. Kennedy, R. Eberhart, Particle swarm
optimization, in: Proceedings of IEEE
International Conference on Neural
Networks, vol. 4, November/December,
IEEE Press, 1995, pp. 1942–1948.
[11]B. Zhang, Internet intrusion detection
by autoassociative neural network,
in: Proceedings of International
Symposium on Information &
Communications Technologies,
Malaysia, December 2005, 2005.
[12] Zne-Jung Lee, Shih-Wei Lin, Shun-Feng Su, and Chun-Yen Lin, A hybrid
watermarking technique applied to
digital images, Elsevier, Applied Soft
Computing 8 798–808, (2008).
[13]J.E. Dickerson, J. Juslin, O. Koukousoula,
J.A. Dickerson, Fuzzy intrusion
detection, in: Proceedings of IFSA
World Congress and 20th North
American Fuzzy Information Processing
Society Conference, NAFIPS 2001,
Vancouver, British Columbia, July 2001,
pp. 1506–1510.
[14]J. Gomez, D. Dasgupta, Evolving fuzzy
classifiers for intrusion detection, in:
Proceedings of IEEE Workshop on
Information Assurance, United States
Military Academy, West Point, New
York, June 2001, pp. 68–75.
The Human Use of Human Beings
[Excerpted from Wikipedia: http://en.wikipedia.org/wiki/The_Human_Use_of_Human_Beings]
The Human Use of Human Beings is a book by Norbert Wiener. It was
first published in 1950 and revised in 1954.
Wiener was the founding thinker of cybernetics theory and an
influential advocate of automation. Human Use argues for the
benefits of automation to society. It analyzes the meaning of
productive communication and discusses ways for humans and
machines to cooperate, with the potential to amplify human power
and release people from the repetitive drudgery of manual labor,
in favor of more creative pursuits in knowledge work and the
arts. He explores how such changes might harm society through
dehumanization or subordination of our species, and offers
suggestions on how to avoid such risks.
The word cybernetics refers to the theory of message transmission
among people and machines. The book’s thesis:
“It is the thesis of this book that society can only be understood
through a study of the messages and the communication facilities
which belong to it; and that in the future development of these
messages and communication facilities, messages between man
and machines, between machines and man, and between machine
and machine, are destined to play an ever-increasing part.”
Increasingly better sensory mechanics will allow machines to
react to changes in stimuli, and adapt more efficiently to their
surroundings. This type of machine will be most useful in factory
assembly lines, giving humans the freedom to supervise and use
their creative abilities constructively. Medicine can benefit from
robotic advances in the design of prostheses for the handicapped.
Wiener mentions the Vocorder, a device from Bell Telephone
Company that creates visual speech. He discusses the possibility
of creating an automated prosthesis that inputs speech directly into
the brain for processing, effectively giving deaf individuals the ability
to “hear” speech again.
Machines, in Wiener’s opinion, are meant to interact harmoniously
with humanity and provide respite from the industrial trap we have
made for ourselves. Wiener describes the automaton as inherently
necessary to humanity’s societal evolution. People could be free to
expand their minds, pursue artistic careers, while automatons take
over assembly line production to create necessary commodities.
These machines must be “used for the benefit of man, for increasing
his leisure and enriching his spiritual life, rather than merely for
profits and the worship of the machine as a new brazen calf”
HR COLUMN
Think Local, Act Global
Aditya Narayan Mishra
Director – Marketing, Ma Foi Randstad, No.49, Cathedral Road, Chennai - 600 086, India.
E-mail: [email protected]
Through globalization people are becoming
increasingly interconnected in all aspects: cultural,
economic, political, technological, and environmental.
Flow of information, finance and goods through
multinational corporations is certainly one of the
major contributors to globalization. The wisdom of
these corporations is contained in the practice of
customizing products and services for consumption
in accordance with the respective currency, culture,
regulatory policies and even to the extent of
local language in product manuals. The resultant
customized IT solutions and operational procedures
further encouraged companies to preserve their own
corporate culture while prevailing as players in the
global market.
It is fairly easy to retain the local culture and
environment of an organization if the business merely
entails trading around the globe or even operating
through overseas offices. This does not define the
organization as a global unit. True globalization lies in
aligning business functions and management policies
& practices of offices in various countries to make the
entire unit one global organization.
So how do we sustain the uniqueness of our local
culture while working in a global environment? The
global offices are consistently sensitive to the local
culture and the legal implications while managing
the 24 hour global marketplace. The little pockets
of local presence, while retaining their individualistic
culture, exist seamlessly in the massive canvas of
global enterprise. Globalization impacts all types of
businesses; a small product developer might not serve
the global market, but can find alternate products in
other countries. Larger corporates, of course, benefit
completely from spreading their respective businesses
around the globe. Access to resources is no longer
limited. Capital, raw material and information flow
across continents and technology is available for
an affordable price to those who cannot develop it.
Digital highways, more than airways, have bridged the
boundaries between countries.
Initially globalization was perceived to be a
challenge with regard to technology and logistics.
However, the biggest issue, if not impediment to
this phenomenon, is culture. Money, machines
and materials that build an organization can be
managed even better with overseas resources
available at competitive rates. It is the people, who
run organizations and they need to be motivated to
effectively perform at global standards. With advanced
systems and resources provided to match international
standards, management of human resources becomes
critical. It is futile to transform organizational practices
and processes to emerge as an international player, if
the company does not take serious efforts to change
the mindset of its people. While the industry boasts
of entering the international marketplace, the path
to success will remain untraveled if HR practices are
not correspondingly enhanced. Organizations must
seek benchmarks in international practices of human
resource management and quickly catch up with them
to become a truly international organization.
The international players have been endeavouring to instil the `Think Global, Act Local’
approach in the business intent as well as corporate
culture. Reasonable independence to operate within
its territory is given to overseas units. Therefore
though business objectives are aligned to the parent
organization, the heads of the branch and subsidiary
offices are treated as `domain’ experts and endowed
with the freedom to take decisions regarding their
respective markets. The country heads, well aware
and accustomed to the local laws and customs enjoy
the empowerment of operating their units in a manner
they deem fit to the nature of their regions. These
operating functions include business aspects, logistics,
purchase and vendor selection and management.
There is also a certain privacy with regard to IT policies
and deployment that is given to the overseas offices.
This movement of global perception in a local
marketplace is itself a major change for managers.
The current method of allowing control over local
operations is therefore vital. It helps in creating the
comfort and confidence that is so essential to work
performance. Hence transforming an international
organization to a wholly global model would be
a drastic change which would well be met with
immense resistance. Also, working in a global market
and yet retaining the local control works well for
the business. The regional experts are important to
help business grow in the respective locations and
this would take priority over creating a one culture
organization. The culture would be one with respect to
performance standards and work ethics. Beyond that,
HR policies, salary structure with taxation,
legal policies, work timings and so on would
most essentially pertain to the respective
country’s practice and custom.
Having said this, the fact remains
that an international organization needs
uniform and synchronized business
practices and processes across offices to
function in harmony. The dilemma here
lies in evolving or reengineering policies,
practices and processes globally relevant
as well as efficient, but locally practical and
accountable.
A multinational company (MNC)
has to do an in-depth study of the related
countries’ governing bodies in addition to
understanding and respecting their business
philosophy, culture and customs. Long
term successful and profitable relationships
are built based on mutual respect and
awareness. However simple a practice is,
it is vital to pay heed to it. One instance
comes to my mind in this context: A
business partner from the Western world
was visiting and during an outstation
trip with the team, enquired of a woman
colleague: `Is it okay if I offer to carry your
baggage?’ He did not want to demonstrate
his chivalrous trait without checking if it
would be misconstrued as an intrusion of
privacy or even hint that the colleague is not
capable of handling her baggage.
The shared services function caters to
regional need for flexibility and adaptability
in terms of legal compliance. Cross
cultural training sessions and country
specific orientation programs are part
of corporate induction in multi national
organizations. When globalization set
in, organizations were only focussing on
logistic and communication issues; little
or no importance was given to cultural
challenges. Since English is the universal
language, non English speaking countries
are now focussing on language learning as
part of vocational skill development. Many
an organization insists on communication
skills with English language proficiency as a
must have competency for hiring. Countries
like China have structured programs to build
language skills across levels and functions.
Fine-tuning communications, aligning
business functions and management, in
particular HR practices, so that they are
consistent throughout the organization is
critical for the success of a multinational
business. All this is sustained on the one
hand, while on the other, the regional
offices preserve and adhere to cultural and
compliance issues as per the locational
requirements. The balance of global function
in local setup is the crux of thinking local and
acting global.
Driving this two fold approach is
not simple. Operational efficiencies and
standardization of several critical processes,
professional management and more
important, sensitivity to diversity ensures
that an organization thrives in the global
market.
About the Author
ADITYA NARAYAN Mishra heads the Marketing function in addition to the operational excellence division
which handles business planning and change management. He has been with Ma Foi since 1999. In his
current role he is responsible for marketing and communication for Ma Foi Randstad, in addition to handling
knowledge management, learning and development activities.
Mishra holds a Bachelor’s degree in Engineering (Electronics and Telecommunication) from Sambalpur
University, and a Masters in Business Administration from Jadavpur University. He is a Certified Six Sigma
Green Belt and a Certified Assessor on Business Excellence on EFQM model.
Humor

Know Your Customers:
A disappointed salesman of Coca Cola returns from his Middle East assignment.
A friend asked, “Why weren’t you successful with the Arabs?”
The salesman explained: “When I got posted in the Middle East, I was very confident that I would make a good sales pitch as Cola is virtually unknown there. But I had a problem: I didn’t know how to speak Arabic. So, I planned to convey the message through three posters...
First poster: A man lying in the hot desert sand... totally exhausted and fainting.
Second poster: The man is drinking our Cola.
Third poster: Our man is now totally refreshed.
And then these posters were pasted all over the place.”
“Then that should have worked!” said the friend.
“The hell it should have!” said the salesman. “I didn’t realize that Arabs read from right to left.”

Software Rules:
When software bugs are reported, the standard operating procedure is:
• Generate detailed reports showing customers are happy.
• Prove bugs are user errors.
• Label bugs as requests for enhancements.
• Prove that the customer does not need a bug fixed.
• Keep asking for more information until the customer gives up.
• Pass a bug around until it goes away.
• Have customers prioritize a list of bugs. With luck, customers will make the mistake of marking some of the bugs as anything but critical.
• When all else fails, attempt to fix a bug within 2-3 revs.
ARTICLE
Neuro Fuzzy Vertical Handoff Decision
Algorithm for overlaid Heterogeneous
Network
Anita Singhrova* & Nupur Prakash**
* Computer Science and Engineering Department, DCR University of Sc & Tech., Murthal, Sonepat, India
Email: [email protected]
**University School of IT and Principal, Indira Gandhi Institute of Tech.,GGS IPU, Delhi, India
Email: [email protected]
This paper has been selected as
“The Best Paper” at the National
Conference on Mobile and Ad
Hoc Networks (NCMAN) held
at Dr. Mahalingam College of
Engineering and Technology,
Pollachi between 29 and 30
October, 2010.
CSI Coimbatore Chapter organized the Conference and was supported by CSI Divisions 3, 4 and Region 7 & IEEE Computer Society Madras section.
Due to the extremely high demand for cell phones, laptops, etc. among people, who have
become increasingly mobile over the years, the demand for seamless mobility is on the rise. With
the increasing availability of different wireless (mobile) devices, it has become essential
that the mobile terminal should be able to move across disparate wireless networks. An
efficient vertical handoff algorithm needs to be implemented, to envision this kind of
mobility in heterogeneous networks.
This paper proposes a vertical handoff decision algorithm. The proposed algorithm uses
six parameters namely received signal strength, velocity of mobile terminal, number
of users, battery level, bandwidth and coverage area for decision-making. Because of
the uncertainty in the input data and the supervised learning technique, the paper proposes
to use a neuro-fuzzy approach for making the vertical handoff decision. Finally, we analyze
and compare our algorithm with the classic approach. The results show that the reduced
number of vertical handoffs in the proposed algorithm results in reduced ping-pong and
increased throughput. The quality indicator is also directly dependent upon ping-pong
effect. Thus, the reduced ping-pong effect results in improved quality of service.
Keywords: seamless mobility; neuro-fuzzy; quality indicator; vertical handoff decision; ping-pong effect; throughput.
I. Introduction
The phenomenal growth in wireless communication services such as wireless web browsing, real time mobile multimedia streaming and interactive applications motivates the rapid development of the next generation mobile wireless networks. In India, the mobile user base is expected to increase to 737 million by 2012 as against the present strength of 500 million mobile subscribers [1]. Keeping in view the fast growing mobile market in India, the Telecom Regulatory Authority of India (TRAI) intends to leapfrog to 4G directly, so as to provide comprehensive and secure all-IP based solutions with high speed and low cost of data transfer.
The next generation wireless architecture is based on
the integration of heterogeneous wireless networks and
inter-networking. Table I
describes the complexities of heterogeneous networks
over and above the homogeneous environment [2].
Table I: Homogeneous vs. Heterogeneous Networks

Homogeneous Networks | Heterogeneous Networks
Detection of access points belonging to the same system. | Detection of access points belonging to multiple systems.
Mobile host needs to decide among access points of the same technology. | Mobile host needs to decide among access points of multiple technologies.
Handoff initiation triggered mainly by signal strength fading. | Handoff initiation triggered by multiple events.
The execution method can be applied in every situation. | The execution methods depend on context and not all methods can be applied in every scenario.
Adaptation process is not as important because the mobile host roams between similar conditions (same technology). | Adaptation is essential; the mobile host roams between disparate technologies and conditions change drastically.
A significant challenge of 3G and 4G
wireless networks is to coordinate among
the different wireless technologies used in
different networks. Thus, there is a significant
need for a single unified approach that
integrates disparate wireless technologies
(mentioned in Table II) enabling MT to
seamlessly roam between access networks.
A. Handoff

Handoff is an event in which the resources utilized by a mobile terminal (MT) are transferred as the MT changes its point of attachment and moves from one cell to another [3]. The different handoff strategies are horizontal handoff and vertical handoff.

In case of horizontal handoff, as the mobile user crosses the cell boundary, handoff takes place between two network access points (AP) or base stations (BS) that use the same wireless access network technology, for example an IEEE 802.11b base station to a geographically neighbouring IEEE 802.11b base station, i.e. homogeneous networks. In case of vertical handoff, handoff is between two network access points or base stations that use different network technologies, such as an IEEE 802.11b base station to an overlaid cellular network or IEEE 802.16 (WiMax), and vice-versa.

In order to support seamless mobility in the current scenario of heterogeneous networks, comprising legacy networks (2G, 2.5G, 3G), 4G and hot spot areas covered by WLAN etc., a single unified vertical handoff algorithm is required. The vertical handoff decision algorithm (VHDA) is required over and above the horizontal handoff in case of heterogeneous networks.

The three main steps involved in the vertical handoff process are system discovery, handoff decision, and handoff execution. This paper focuses on the vertical handoff decision based upon several parameters.

Table II: Diversity in existing and emerging wireless technologies [3]

Network | Coverage | Data Rates | Mobility | Cost
Satellite | World | Max. 144 Kbps | High | High
GSM/GPRS | 35 Km | 9.6 Kbps - 144 Kbps | High | High
IEEE 802.16a | 30 Km | Max. 70 Mbps | Low/Medium | Medium
IEEE 802.20 | 20 Km | 1-9 Mbps | High | Very High
UMTS | 20 Km | Up to 2 Mbps | High | High
HIPERLAN 2 | 70 to 300 m | 25 Mbps | Medium/High | Low
IEEE 802.11a | 50 to 300 m | 54 Mbps | Medium/High | Low
IEEE 802.11b | 50 to 300 m | 11 Mbps | Medium/High | Low
Bluetooth | 10 m | Max. 700 Kbps | Very low | Low

B. Overlay Structure of Heterogeneous Networks

One of the overlaid heterogeneous systems discussed in this paper is the integration of the IEEE 802.11 WLAN and cellular systems. These systems coexist, as many cellular devices support dual Radio Frequency (RF) interfaces for WLAN and cellular access. Moreover, these are complementary technologies: WLAN covers hotspot areas and provides greater bandwidth but low mobility at low cost, whereas the cellular network provides comparatively low bandwidth but high mobility at a higher cost. The overlay network of the two different interfaces - WLAN and cellular network - shown in Fig. 1 provides a combination of high bandwidth and high mobility. The MT remains connected to at least one base station or access point within the service area.

Fig. 1: Overlay structure of WLAN and cellular network, showing (1) vertical handoff from WLAN to cellular coverage, (2) vertical handoff from cellular coverage to WLAN, and (3) horizontal handoff from WLAN to WLAN.

II. Literature Review

The various existing strategies for vertical handoff decision in next generation networks are reviewed in [4]. Paper [5] discusses the vertical handoff between two network access points that use different access network technologies, IEEE 802.11b and CDMA for the cellular network. Paper [6] proposes the optimization of policy based handoff decision along with the cost function to select the target network. In paper [7], the performance of four VHDAs, namely MEW (Multiplicative Exponent Weighting), SAW (Simple Additive Weighting), TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) and GRA (Grey Relational Analysis), is compared. For next generation networks, paper [8] suggests a network selection mechanism to guarantee mobile users being always best connected (ABC) using the Analytic Hierarchy Process (AHP) and Grey Relational Analysis (GRA). Paper [9] discusses a VHDA based on fuzzy control theory; the handoff algorithm considers the factors of power level, cost and bandwidth. A vertical handoff algorithm in a multi tier (overlay) network proposed in [10] uses pattern recognition to estimate the user's position using the global positioning system and to make the handoff decision. Paper [11] describes the concept of accumulated user mobility for prediction of handoff attempts and analyses the effect of the number of users on Quality of Service (QoS). Paper [12] proposes a VHDA to enable a wireless access network to balance the overall load among all attachment points and to maximize the collective battery lifetime of Mobile Nodes (MN). An implementation of vertical handoff for a mobile node roaming between IPv4 and IPv6 and continuously communicating with a correspondent node in an IPv6 network in heterogeneous networks is given in paper [13]. The performance of an adaptive hysteresis vertical handoff scheme for overlaid WCDMA and WLAN networks is given in paper [14]. VHDAs based on an advanced filtering mechanism (in 3G/WLAN) and on traffic types (real time or non-real time services) are presented in papers [15] and [16] respectively. To assist the handoff decision, the MT utilizes the downloaded base station radio propagation parameters and coverage radius information to estimate the distance and terminal velocity from the surrounding base stations [17]. Paper [18] compares the performance of hysteresis based vertical handoff algorithms, where a two state non-homogeneous Markov chain model models mobile terminal movement between two networks. In addition, a mobility management scheme for IP based networks using the transport layer protocol SCTP and the application layer protocol SIP is suggested in paper [19].

All the papers discussed above consider one or the other parameter, at the most three, for deciding vertical handoff. In view of the above, the proposed algorithm considers a neuro-fuzzy multi parameter based vertical handoff decision. Since there is an element of uncertainty, the fuzzy logic based membership function is used, and for adjusting the weight matrix we propose to use a neural network. To reduce the processing time we use a neural network that inherently incorporates parallel processing, and the latter can be implemented in hardware, which further reduces time.
The rest of the paper is organized as
follows: Section II provides the literature
review. For making the vertical handoff
decision, the classical approach is discussed
in Section III, whereas the neuro-fuzzy
approach is proposed in Section IV. The
analysis of results is carried out in Section
V, followed by the conclusions and
future work in Sections VI and VII respectively.
III. Classical Approach
In classical approach, VHDA is based
on Received Signal Strength (RSS) of the
Access Point (AP) of WLAN. The RSS in a
mobile radio channel shows that the average
RSS at any point declines as a power law
of the distance between a transmitter and
receiver [20]. The average received power
(Pr) at a distance (d) from the transmitting
antenna is given as
Pr = P0 (d/d0)^(-n)    (1)
where n is the path loss exponent, Pr is
the average received power at a distance d
from the transmitting antenna, and P0 is the
transmitted power, or the power received at a
close-in reference point in the far field region
of the antenna at a small distance d0. In
decibel form:
Pr = P0 - [10 * n * log(d/d0)]    (2)
The value of n depends upon the specific
propagation environment [20].
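As a small illustration, the following is a minimal Python sketch of the log-distance path loss model in equations (1) and (2). The transmit power, reference distance and path loss exponent used here are illustrative assumptions, not values from the paper.

import math

def received_power_dbm(p0_dbm: float, d: float, d0: float = 1.0, n: float = 3.0) -> float:
    """Average received power at distance d, per Pr = P0 - 10*n*log10(d/d0)."""
    return p0_dbm - 10.0 * n * math.log10(d / d0)

# Example: reference power of -30 dBm and path loss exponent 3 (illustrative)
for d in (10, 50, 100):
    print(d, "m ->", round(received_power_dbm(-30.0, d), 1), "dBm")

The output shows RSS falling steadily with log-distance, which is exactly the signal the classical VHDA compares against a handoff threshold.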
IV. Multi Parameter Based Neuro Fuzzy
VHDA
The next generation networks require
implementation of an adaptive and proactive
approach to vertical handoff decision. The
algorithm proposed here uses six input
parameters (shown in Table III) for arriving
at a handoff decision.
Since some of the input parameters,
VMT and BL, are dependent on the MT, whereas
the other input parameters are related to
network conditions, a hybrid approach
called mobile assisted handoff (MAHO) is
used for handoff execution.
Neuro-fuzzy approach has been
chosen for evaluation of the proposed
vertical handoff decision, mainly because of
its ability to handle imprecise and uncertain
data. The fuzzy logic also allows the
induction of rules in natural language. Based
on these rules, input is conveniently mapped
to output, and by using the adaptive neuro fuzzy
inference system (ANFIS), the self-learning
process is inherently incorporated. The
details of fuzzification, the inference engine,
defuzzification, etc., shown in Fig. 2, are
described in subsections A to D.
Table III: Input parameters for the vertical handoff decision

Parameter | Range | Remarks
Received Signal Strength (RSS) | -78 dBm to -66 dBm | It is the strength of the signal received by the MT from the AP in WLAN. The MT measures RSS continuously from the present and neighboring cells to initiate handoff.
Velocity of Mobile Terminal (VMT) | 0 to 54 m/sec (0 to 200 Km/hr) | It is the velocity with which the mobile terminal (MT) is moving. For a high speed MT, cellular is preferred because of its greater coverage area. This range includes the speed of two-wheelers, four-wheelers and travel by fast train.
No. of Users (UN) | 0 to 15 users | This shows the number of users. The QoS of WLAN is UN sensitive. As the number of users increases, the available bandwidth is reduced and the number of collisions results in network congestion.
Bandwidth available (BAV) | 0 to 56 Mbps | It is the amount of unused bandwidth of the candidate base station (BS) or access point (AP). This parameter takes care of congestion in the network.
Battery life (BL) | 0 to 1 Watt | This represents the energy level in a mobile host. Energy is used for every packet transmitted or received by the mobile host. The attachment to the closest AP or BS is known to consume the least power for individual mobile devices at a given instant.
Coverage Area (C) | 0 to 300 m | It is the area covered by the network. If the speed of the MT is high, then the cellular network with wider coverage area is preferred over WLAN.
Fig. 2: Fuzzy Inference System. The input parameters (BAV, VMT, UN, BL, RSS, Coverage Area) feed a standalone fuzzy engine; a Sugeno fuzzy inference system produces the handoff output, and the fuzzy inference engine is trained.
A. Fuzzification
In the fuzzification stage, a data point
is assigned membership in each set. A
membership function is a curve that
defines how each point in the input space is
mapped to a membership value between 0
and 1. The Fuzzy Logic Toolbox includes 11 built-in
membership function types [22]. For
simplicity, the piecewise linear membership
function, namely the triangular membership
function, is chosen. For each of the
input parameters mentioned above, the
membership function is drawn and, based on
that membership function, the membership
value is obtained.
The values of all input variables RSS, VMT, UN, BAV, BL and C are normalized on a scale of 0 to 1 by using equation (3):

Normalized_Value = (Current_value - Lowest_Range) / (Highest_Range - Lowest_Range)    (3)

where Current_value is generated in a given range (Highest - Lowest) using random number generators and Normalized_Value is the value obtained in the range 0 to 1.

The membership function for normalized RSS is shown in Fig. 3(a) and for VMT, UN, BAV, BL and C in Fig. 3(b).

Fig. 3(a): Membership curves for normalized RSS (curve labels recovered from the figure: Below Threshold, Above Threshold).

Fig. 3(b): Membership curves for the normalized input parameters VMT, UN, BAV, BL and C (curves labelled Slow, Medium and High).

The binary input matrix P used in the neural network training example of Fig. 4 is:

Input (P) =
[ 1 0 0 0
  0 1 0 0
  0 1 0 0
  0 0 1 0
  0 0 0 1
  1 0 0 0
  0 1 0 0 ]
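As a small illustration of how a raw reading becomes a fuzzy membership value, here is a minimal Python sketch of equation (3) together with a triangular membership function of the kind plotted in Fig. 3. The triangle breakpoints (0.1, 0.5, 0.9) are illustrative assumptions, not the paper's exact curves; the VMT range follows Table III.

def normalize(value: float, lo: float, hi: float) -> float:
    """Equation (3): map a raw reading onto the 0..1 scale."""
    return (value - lo) / (hi - lo)

def triangular(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership function rising a->b and falling b->c.
    Breakpoints a, b, c are illustrative, not the paper's."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Example: a mobile terminal moving at 30 m/s (VMT range 0..54 m/s, Table III)
v = normalize(30.0, 0.0, 54.0)
print("membership in 'Medium':", round(triangular(v, 0.1, 0.5, 0.9), 2))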
B. Fuzzy inference engine

The fuzzy inference engine depicts if-then-else rules. Vertical handoff between WLAN and cellular network is not reversible, i.e. the motive to handoff from WLAN to cellular network and vice versa is quite different. The time taken to handoff from WLAN to the cellular network is critical, as the delay leads to disconnection, whereas handoff from the cellular network to WLAN can wait a little longer due to the overlay structure (shown in Fig. 1), as the base station signal of the cellular network continues even in WLAN.

If the antecedent of a given rule has more than one part, the fuzzy operator is applied to obtain one number that represents the result of the antecedent of that rule. Every rule has a weight, which is applied to the number given by the antecedent, thereafter applying the implication method. The fuzzy inference process used is the Sugeno type fuzzy inference method, because it is a compact and computationally more efficient representation than the other fuzzy inference method, namely the Mamdani system. The Sugeno system lends itself to the use of adaptive techniques for constructing fuzzy models [22]. The adaptive neuro fuzzy inference system (ANFIS) editor of MATLAB is used for implementation of the adaptive fuzzy inference engine (FIS).

C. Defuzzification

The input to a defuzzification process is fuzzy and the output is a single crisp value. The most popular defuzzification method is the centroid calculation, which returns the centre of the curve. Different rules cannot share the same output membership function, i.e. no rule sharing. The output variable of the fuzzy inference handoff decision is given as {Fit, Not Fit} = {F, NF}.

D. Training of neural network

The neural network is used for training of the weight matrix due to its inherent parallel processing and learning capabilities. The three steps involved are initialization, iteration and termination. The toolbox function ANFIS constructs a fuzzy inference system (FIS) by establishing a relationship between an input and an output data set, as shown in Fig. 4. In each pass, the function training proceeds through the evaluation of the membership function for specified inputs, calculating the output by using the centroid method based on fuzzy rules, evaluation of the error by using the least square method, and the network weight adjustment for each input vector by using the back-propagation algorithm.

The above-mentioned training steps are repeated until the network output (actual output) becomes equal to the target vector (desired output) and the error reduces to zero, or at least approaches zero. Error is defined as the sum of squared differences between the actual (network output) and the desired output (target vector). This adjustment allows fuzzy systems to learn from the data they are modeling. Once trained, the weights are frozen and the neural network recognizes even unseen input.

Fig. 4: Procedure showing the vertical handoff decision. The rule base of the fuzzy inference engine feeds a neural network with initial weights IW = [0 0 0 0 0 0 0] and initial bias b = [0]. The network computes n = W*P + bias and applies hardlim(n) = {0 if n < 0, 1 otherwise}, giving actual output a = hardlim(W*P + bias) = [1 1 1 1] against target vector T = [1 0 0 0]. The error e = T - a = [0 -1 -1 -1] drives the adjustment: if e = 1, dW = e*P^T = P^T; if e = -1, dW = e*P^T = -P^T; if e = 0, dW = 0 and db = 0. Here dW = [0 -1 -1 -1 -1 0 -1] and db = (t - a) = [-3], so the adjusted weights are Wnew = Wold + dW = [0 -1 -1 -1 -1 0 -1] and bnew = bold + db = [-3]; the trained weights are W' = W, b' = b.
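A minimal NumPy sketch of the Fig. 4 hard-limit training pass follows. The matrix P, target T and zero initial weights are the values shown in the figure; the loop bound of 20 is an arbitrary safety limit added for illustration.

import numpy as np

P = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [1, 0, 0, 0],
              [0, 1, 0, 0]])          # 7 inputs x 4 training vectors
T = np.array([1, 0, 0, 0])            # target vector
W = np.zeros(7)                       # initial weights IW
b = 0.0                               # initial bias

hardlim = lambda n: (n >= 0).astype(int)   # 0 if n < 0, 1 otherwise

for _ in range(20):                   # iterate until the error vanishes
    a = hardlim(W @ P + b)            # actual output
    e = T - a                         # error; [0 -1 -1 -1] on the first pass
    if not e.any():                   # e = 0 -> dW = 0, db = 0: trained
        break
    W = W + e @ P.T                   # dW = e * P^T
    b = b + e.sum()                   # db accumulates the error

print("trained W:", W, "b:", b)

The first pass reproduces the figure's numbers exactly: dW = [0 -1 -1 -1 -1 0 -1] and db = -3; further passes continue the same rule until the output matches the target.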
V. Results and Analysis

The vertical handoff decision of the proposed algorithm and the classical
approach is passed to the simulator. The
simulator evaluates the number of vertical
handoffs for both the algorithms. The
graph in Fig. 5 shows the reduced number
of vertical handoffs in the case of the proposed
algorithm compared with the classical approach.
The reduced number of handoffs also
affects the ping-pong effect and throughput.
The parameters are described below:
1. Ping-pong Effect: The rate of ping-pong handoff (Pping-pongHO) is defined
as the number of ping-pong handoffs (Nping-pongHO) per total handoff executions (NHO):
Pping-pongHO = Nping-pongHO / NHO    (4)
The time tag is used to indicate the
time-period that has passed from the
last successful handoff. As time tag
increases, the ping-pong effect reduces.
2. Throughput: It is a positive indicator
and is dependent upon the effective
data rate and delay caused due to
handoffs. Lesser number of handoffs
and higher effective data rate increases
throughput. To reduce the number of
handoffs the ping-pong effect shall be
reduced. Throughput is given as:
Throughput = Rcellular_nw (Tcellular_nw - TN/2) + RWLAN (TWLAN - TN/2)    (5)
where TN = N Δ / T.
Rcellular_nw and RWLAN are the effective
data rate available in cellular network &
WLAN.
Tcellular-nw and TWLAN are the total
contiguous stretch of time when power is
less than threshold or above threshold in
cellular and WLAN respectively.
T is total time-period, which is sum of
Tcellular-nw and TWLAN
N is the number of handoffs.
Δ is the handoff completion time.
Both these analysis parameters are
based on number of correct handoffs.
3. Computational complexity: The
number of operations defines the
computational complexity of an
algorithm.
In the classic approach, the evaluation of
the handoff decision is based on RSS only. The
approach is very basic and simple, with
minimum complexity. However, the limited
input for deciding upon handoff causes a high
ping-pong effect and low throughput.
In the proposed approach, the handoff
depends upon multiple parameters. The
correct handoff results in a low ping-pong
effect and high throughput.
The reduced ping-pong effect results
in a high Quality Indicator (QIh), which is
directly dependent upon the ping-pong handoff
rate:
QIh ∝ [NHO - Nping-pongHO] / NHO    (6)
QIh = K [NHO - Nping-pongHO] / NHO    (7)
or
QIh = K [1 - Pping-pongHO]    (8)
where K is the constant of proportionality,
which depends upon other factors like the
number of calls blocked and the number of calls
dropped.
The reduced ping-pong effect means a lesser
number of handoffs, and this in turn makes
the negative term of equation (8)
negligibly small, thus increasing throughput.
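The following small Python sketch ties equations (4), (7) and (8) together. The handoff counts and the constant K below are illustrative inputs, not results from the paper's simulator.

def ping_pong_rate(n_ping_pong: int, n_ho: int) -> float:
    """Equation (4): Pping-pongHO = Nping-pongHO / NHO."""
    return n_ping_pong / n_ho

def quality_indicator(n_ping_pong: int, n_ho: int, k: float = 1.0) -> float:
    """Equations (7)/(8): QIh = K*(NHO - Nping-pongHO)/NHO = K*(1 - P)."""
    return k * (1.0 - ping_pong_rate(n_ping_pong, n_ho))

# Fewer ping-pong handoffs -> higher quality indicator (illustrative counts)
print(quality_indicator(n_ping_pong=12, n_ho=40))   # 0.70
print(quality_indicator(n_ping_pong=3, n_ho=25))    # 0.88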
There is a tradeoff between
computational complexity and
unnecessary handoffs. Computational
complexity is high in the proposed approach,
and to take care of this aspect, the proposed
multi parameter based vertical handoff
algorithm is implemented using ANFIS. This
neural network using the ANFIS approach not
only invokes the self-learning process but also
provides blueprints for implementation at the
hardware level.
VI. Conclusion
While traditional handoff is based on
RSS comparisons, the proposed VHDA
evaluates additional factors such as
monetary cost, offered services, network
conditions and user preferences.
For the vertical handoff decision between
WLAN and cellular networks, multiple
parameters are used: available bandwidth,
speed of the mobile terminal, number of users,
received signal strength, battery level and
coverage area. The multi parameter based
proposed adaptive vertical handoff decision
helps determine which network it should
handoff to, as an incorrect handoff decision
will not only result in poor QoS but at times
may even break off current communication
resulting in increased call dropping.
The results show that the number of
vertical handoffs in the proposed algorithm
is lesser than in the classical approach. This
reduced number of vertical handoffs results
in a reduced ping-pong effect and increased
throughput. The quality indicator is also
directly dependent upon ping-pong effect.
Thus, the reduced ping-pong effect
results in improved quality of service.
VII. Future Work
The future work involves the
comparison of the performance of this
proposed multi parameter based VHDA
with a conventional VHDA on parameters
like complexity, dropped packets,
retransmitted packets and WLAN delay.
References
1] http://www.trai.gov.in/NGN.asp
2] Pablo Vidales, Javier Baliosian, Joan
Serrat, “Autonomic system for mobility
support in 4G networks”, IEEE Journal
on selected areas in communications,
vol. 23, no. 12, Dec 2005, pp. 2288-2303.
3] Pablo Vidales, Glenford Mapp, Frank
Stanjano, Jon Crowcroft, A Practical
approach for 4G systems: Deployment
of overlay networks, Computer
Laboratory, University of Cambridge,
UK.
4] Anita Singhrova and Nupur Prakash,
“Review of various strategies for vertical
handoff decision in next generation
networks”, in Asia Pacific Advanced
Network Research Workshop (APAN
’08), Queenstown, New Zealand, 4th-8th August 2008.
5] Hyosoon Park, Sunghoon Yoon,
Taehyoun Kim, Jungshin Park, Misun
Do, and Jaiyong Lee, “Vertical handoff
procedure and algorithm between
IEEE802.11 WLAN and CDMA cellular
network”, Lecture Notes in Computer
Science (LNCS) Springer Berlin/
Heidelberg, Mobile Communications,
vol. 2524, pp. 103-112, January 2003.
6] Fang Zhu and Janise McNair,
“Optimizations for vertical handoff
decision algorithms”, in IEEE Wireless
Communications and Networking
Conference (WCNC ’04), vol. 2, no.
21-25, pp. 867-872, Atlanta, Ga, USA,
March 2004.
7] Enrique Stevens Navarro and Vincent
W.S Wong, “Comparison between
vertical handoff decision algorithms
for heterogeneous wireless networks” ,
in the 63rd IEEE Vehicular Technology
Conference (VTC ’06), vol. 2, pp. 947-951, Melbourne, Vic., Australia, May
2006.
8] Qingyang Song and Abbas Jamalipour,
“A network selection mechanism
for next generation networks”, in
IEEE International Conference on
Communications (ICC ’05), vol. 2, pp.
1418-1422,Seoul, Korea, May 2005.
9] Hongwei Liao, Ling Tie and Zhao Du,
“A vertical handover decision algorithm
based on fuzzy control theory”, in the
1st International Multi-Symposiums on
Computer and Computational Sciences
(IMSCCS ’06), IEEE Computer Society,
vol. 2, pp. 309-313, Hangzhou, China,
June 2006.
10] A. Mehbodniya, J. Chitizadeh, “An
intelligent vertical handoff algorithm
for next generation wireless networks”,
in the 2nd IEEE/IFIP International
Conference on Wireless and Optical
Communications Network (WOCN
’05), pp. 244-249, Dubai, UAE, March
2005.
11] Liang Tee Lee and Chen Feng Wu,
“An Adaptive Handoff Algorithm
with Accumulated attempts of user
mobility for supporting QoS in wireless
cellular networks”, IJCNS Intl. journal of
computer Science and network security,
Vol. 6, No. 9B, September 2006, pp.
56-62.
12] Su Kyoung Lee, Kotikalapudi Sriram, et
al., “Vertical handoff decision algorithm
providing optimized performance in
heterogeneous wireless networks”,
in IEEE Global Telecommunications
Conference (GLOBECOM ’07),
pp.5164-5169, Washington DC, USA,
November 2007.
13] Qinxue Sun, Bo Hu, Shanzhi Chen and Jingjing Zhang, “An
Implementation of Vertical Handoff in
Heterogeneous Networks”, in the 5th
International Conference on Wireless
Communications, Networking and
Mobile Computing (WiCom ‘09), pp. 1-4, Beijing, China, September 2009.
14] Yung-Fa Huang, Chin-Wei Hsu, Fu-
Bin Gao and Hsing-Chung Chen,
“Performance of adaptive vertical
handoff in heterogeneous networks
of WLAN and WCDMA systems”, in
the 5th International Joint Conference
on INC, IMS and IDC (NCM ‘09), pp.
2103-2107, Seoul, Korea , August 2009.
15] R. Khan, S. Aissa and C. Despins,
“Seamless vertical handoff algorithm
for heterogeneous wireless networks - an advanced filtering approach”, in
IEEE Symposium on Computers and
Communications (ISCC 2009), pp.255
– 260, Sousse, Tunisia, July 2009.
16] Ying-Hong Wang, Chih-Peng Hsu,
Kuo-Feng Huang and Wei-Chia
Huang, “Handoff decision scheme
with guaranteed QoS in heterogeneous
network”, in the 1st IEEE International
Conference on Ubi-Media Computing,
pp. 138-143, Lanzhou, China, July-
August 2008.
17] Feng He, Furong Wang and Duan Hu,
“Distance and velocity assisted handoff
decision algorithm in heterogeneous
networks”, in the 2nd International
Conference on Future Generation
Communication and Networking
(FGCN ‘08), vol.1, pp. 349–353, Hainan
Island, China, December 2008.
18] A. H. Zahran and Ben Liang,
“Performance evaluation framework
for vertical handoff algorithms in
heterogeneous networks”, in
IEEE International Conference on
Communications (ICC ‘05), vol.1,
pp.173 – 178, Seoul, Korea, May 2005.
19] Yaw Nkansah-Gyekye, and Johnson
I Agbinya, “Vertical handoff between
WWAN and WLAN”, in International
Conference on Networking,
International Conference on Systems
and International Conference on
Mobile Communications and Learning
Technologies (ICN/ICONS/MCL ‘06),
pp. 132 – 137, Mauritius, April 2006.
20]Theodore S. Rappaport, Wireless
Communications: Principles and
Practice, 2nd Edition, Prentice Hall of
India, New Delhi, 2007.
21] George J. Klir and Bo Yuan, Fuzzy
Sets and Fuzzy Logic: Theory and
Applications, Prentice Hall of India,
New Delhi, 1995.
22] Anita Singhrova and Nupur
Prakash, “Multi parameter based vertical
handoff decision in next generation
networks,” Communications of the
Systemics and Informatics World
Network (SIWN ’08), vol. 4, pp. 68-71,
2008.
February 2011
ITC 2011: First M. P. State IT Convention on
Challenges Before Indian IT Industry in Current Scenario
Date:
26 - 27 February, 2011
Organised By : Computer Society of India Bhopal Chapter
Hosted By
: MANIT Bhopal
For Detail Contact:
Dr. (Mrs) Poonam Sinha [email protected]
Prof. H B Khurasia
[email protected]
Dr. Rajeev Shrivastava
[email protected]
Challenges Before Indian IT Industry in Current Scenario
Organised By Computer Society of India Bhopal Chapter at MANIT Bhopal
February 26 - 27,2011
Topic includes:
Cloud Computing, Green Computing, Mobile Computing, Wireless Networking, Pervasive Computing, Parallel Computing in High-end &
Embedded Application, 3G, Device Convergence, Indian language Computing, ERP and other relevant topics.
Important Dates:
Paper Submission
: 24 January 2011
Acceptance Notification : 31 January 2011
Early Bird Registration
: 05 February 2011
Final Paper Submission : 12 February 2011
Registration Fees:
CSI Members : Rs. 1,500/-
CSI Non-Members : Rs. 2,000/-
CSI Student Members : Rs. 100/-
CSI Student Non-Members : Rs. 200/-
Contact Details:
Address for Correspondence:
Dr. (Mrs) Poonam Sinha, Hon. Secretary Bhopal Chapter, Head IT & MCA Deptt., Barkatullah University Bhopal, Bhopal (M.P.) 462026, [email protected]
Prof. H.B. Khurasia (Vice-Chairman Bhopal Chapter), [email protected]
Dr. Rajeev Shrivastava, [email protected]
CALL FOR PROPOSALS
PARTICIPATION
Computer Society of India
NATIONAL HEADQUARTERS
Education Directorate, Chennai
Invites Project Proposals from Faculty Members and Students
Under the Scheme of R&D Funding for the year 2010-2011
As India’s largest and one of the world’s earliest IT professional
organizations, the Computer Society of India has always aimed
at promoting education and research activities, especially in
advanced technological domains and emerging research areas. It
is also committed to taking the benefits of technological progress to
the masses across India, in particular to unrepresented territories. In
order to promote research and innovation meeting the grass-root
level ICT needs and emphasize the importance of joint research by
faculty-students, the CSI has been providing R&D funding for last
several years.
The CSI Student Branches are requested to motivate the
young faculty members and students (including undergraduate
and postgraduate) to benefit from this scheme. Proposals for
2010-11 meeting the following aim/objectives, expected outcome,
indicative thrust areas for research funding may be submitted
to: The Director (Education), Computer Society of India, Education
Directorate, CIT Campus, IV Cross Road, Taramani, Chennai 600113.
Last date for Receipt of Proposals: 31st January 2011
Aim and Objectives
• To provide financial support for research by faculty members, especially for developing innovative techniques and systems to improve teaching-learning and learning management processes.
• To provide financial support to students for developing new systems catering to the needs of socially relevant sectors and/or involving proof of concepts related to emerging technologies.
• To facilitate interaction/collaboration among academicians, practitioners and students.
• To develop confidence and core competence among faculty/students through research projects.
• To foster an ambience of ‘Learning by Doing’ and explore opportunities of industry funding and mentoring for inculcating professionalism and best practices among students and faculty.
• To recognize innovation and present excellence awards for path-breaking projects through CSI YITP awards and industry associations, Govt. Agencies and professional societies.
Expected Outcome
• Identification of thrust areas, capability assessment, gap analysis, recommendations and future education and research directions
• Integration of research methodologies into the university teaching-learning process and evolving a quality control mechanism for academic programmes and curricula
• Strengthening of industry-institute interaction through commercialization of technologies and products developed by students and faculty
• Publication of research studies (ICT penetration, technological innovation, diffusion & adaptation), state-of-the-art reports and case studies of education/research initiatives
• Identification of potential new and innovative projects of young faculty, researchers and students for possible business incubation
Indicative Thrust Areas for Research funding
The financial assistance up to Rs 50,000/- for hardware
projects and up to Rs 30,000/- for software projects would be
provided to cover items like equipment, books/journals, field
work, questionnaire, computation work and report writing. The
indicative thrust areas for funding include (but are not limited to):
Technology- OS, Programming Languages, DBMS, Computer &
Communication Networks, Software Engineering, Multimedia
& Internet Technologies, Hardware & Embedded Systems
Process & Tools- Requirements Engineering, Estimation & Project
Planning, Prototyping, Architecture & Design, Development,
Testing & Debugging, Verification & Validation, Maintenance &
Enhancement, Change Management, Configuration Management,
Project Management, Software Quality Assurance & Process
Improvement, Vertical Applications - Scientific Applications,
Enterprise Systems, Governance, Judiciary & Law Enforcement,
Manufacturing, Healthcare, Education, Infrastructure, Transport,
Energy, Defence, Aerospace, Automotive, Telecom, Agriculture
& Forest Management, Inter-disciplinary Applications - CAD/
CAM/CAE, ERP/SCM, EDA, Geo-informatics, Bioinformatics,
Industrial Automation, CTI and Convergence.
Last date for Receipt of Proposals: 31st January 2011
For further details, please visit the CSI knowledge portal at
www.csi-india.org. The application form can also be downloaded
from the portal.
For more details, please visit www.siet.in or contact
Mr. Shrikant Karode
CSI National Student Coordinator
Email: [email protected]
Prof. Swarnalatha Rao
CSI Division V Chairperson
Email: [email protected]
Wg Cdr M Murugesan (Retd.)
Director (Education), CSI Education Directorate,
CIT Campus, IV Cross Road, Taramani, Chennai-600113. E-mail: [email protected]
ARTICLE
Data Mining : A Process to
Discover Data Patterns and
Relationships for Valid Predictions
Jasmine K S
Assistant Professor, Department of MCA, R V College of Engineering, Mysore Road, R V Vidyaniketan Post,
Bangalore-560059, India. Email: [email protected]
“Data mining involves the use of sophisticated data analysis tools to discover previously
unknown, valid patterns and relationships in large data sets.”
[Two Crows Corporation, 1999]
“Knowledge discovery in databases is the non-trivial process of identifying valid, novel,
potentially useful, and ultimately understandable patterns in data.”
[Fayyad, Piatetsky-Shapiro and Smyth, 1996]
Introduction
Data mining is a powerful new technology with
great potential to help researchers/organizations
focus on the most important information in their large
databases. Data mining tools predict future trends
and behaviors, allowing businesses to make proactive,
knowledge-driven decisions. The automated,
prospective analysis offered by data mining moves
beyond the analysis of past events provided by
retrospective tools typical of decision support systems.
Data mining tools can answer business questions that
traditionally were too time consuming to resolve.
Most organizations, which have massive
quantities of data, have started implementing data
mining techniques rapidly on their existing software
and hardware platforms to enhance the value of
existing information resources, and also to know how
integration is possible with new products and systems
as they are brought on-line.
high performance client/server or parallel processing
computers, data mining tools can analyze massive
databases to predict the promising clients.
1. What is Data Mining?
Data mining consists of more than collecting and
managing data; it also includes analysis and prediction.
Simply stated, data mining refers to extracting or
“mining” knowledge from large amounts of data [1].
Data mining can be performed on data represented in
quantitative, textual, or multimedia forms. Data mining
applications can use a variety of parameters to examine
the data. They include association, sequence or path
analysis, classification, clustering and forecasting [2].
Some observers consider data mining to be just
one step in a larger process known as knowledge
discovery in databases (KDD). Other steps in the KDD
process, in progressive order, include data cleaning,
data integration, data selection, data transformation,
(data mining), pattern evaluation, and knowledge
presentation.
Fig.1: Steps in Data mining [19]
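To make the KDD sequence above concrete, here is a toy Python sketch of the steps around the mining stage: cleaning, selection, transformation, mining, and the start of pattern evaluation. It uses k-means clustering as the mining step; the data, the choice of two clusters, and the use of scikit-learn are all illustrative assumptions, not part of the article.

import numpy as np
from sklearn.cluster import KMeans

raw = np.array([[1.0, 2.0], [1.2, 1.9], [np.nan, 2.1],   # cleaning drops the NaN row
                [8.0, 8.2], [7.9, 8.4], [8.3, 7.9]])

cleaned = raw[~np.isnan(raw).any(axis=1)]                      # data cleaning
selected = cleaned[:, :2]                                      # data selection
transformed = (selected - selected.mean(0)) / selected.std(0)  # transformation

model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(transformed)  # mining
print("labels:", model.labels_)                                # pattern evaluation begins here
print("inertia:", round(model.inertia_, 3))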
2. History of data mining
Data mining is a fairly new concept which
emerged in the late 1980s. But it soon attracted huge
interests for research works and flourished with many
new and remarkable techniques being discovered
throughout the 1990s. Data mining, in many ways,
is fundamentally the adaptation of machine learning
techniques to business applications. Data mining is
best described as the union of historical and recent
developments in statistics, AI, and machine learning.
These techniques are then used together to study data
and find previously-hidden trends or patterns within.
The evolution of Database technology is as
follows [17]:
• 1950s: First computers, use of computers for census
• 1960s: Data collection, database creation (hierarchical and network models)
• 1970s: Relational data model
• 1980s: Ubiquitous RDBMS, advanced data models and application oriented DBMS
• 1990s: Data mining and data warehousing, massive media digitization, multimedia databases and web technology

3. Scope of Data mining
Given databases of sufficient size
and quality, data mining technology can
generate new business opportunities by
providing these capabilities:
• Automated prediction of trends and
behaviors. Data mining automates
the process of finding predictive
information in large databases.
Questions that traditionally required
extensive hands-on analysis can now
be answered directly from the data
very quickly.
• Automated discovery of previously
unknown patterns. Data mining tools
sweep through databases and identify
previously hidden patterns in one step.
Data mining techniques can yield
the benefits of automation on existing
software and hardware platforms, and can
be implemented on new systems as existing
platforms are upgraded and new products
developed. When data mining tools are
implemented on high performance parallel
processing systems, they can analyze
massive databases in minutes, i.e., users
can automatically experiment with more
models to understand complex data, which
makes it practical to analyze huge
quantities of data for improved predictions.
4. Limitations of Data Mining
While data mining products can be
very powerful tools, they are not self-sufficient applications. To be successful,
data mining requires skilled technical and
analytical specialists who can structure
the analysis and interpret the output that
is created. Consequently, the limitations of
data mining are primarily data or personnel
related, rather than technology-related.
Although data mining can help reveal
patterns and relationships, it does not tell
the user the value or significance of these
patterns. These types of determinations
must be made by the user. Similarly,
the validity of the patterns discovered
is dependent on how they compare to
“real world” circumstances. For example, to assess the validity of a data mining application designed to identify potential variables from a large pool of candidate variables, the user may test the model against data containing cases that are already known. However, while such a test may re-affirm a particular profile, it does not guarantee that the application will identify a suspected variable whose behavior deviates significantly from the original model.
Another limitation of data mining
is that while it can identify connections
between behaviors and/or variables, it does
not necessarily identify a causal relationship.
5. Data mining and data warehousing
Data warehousing is a process of
organizing the storage of large, multivariate
data sets in a way that facilitates the
retrieval of information for analytic
purposes. Frequently, the data to be mined
is first extracted from an enterprise data
warehouse into a data mining database or
data mart (Figure 2). There is some real
benefit if the data is already part of a data
warehouse. The problems of cleansing data
for a data warehouse and for data mining
are very similar. If the data has already been
cleansed for a data warehouse, then it most
likely will not need further cleaning in order
to be mined. Furthermore, many of the problems of data consolidation will already have been addressed and maintenance procedures put in place. The data mining database may be a logical rather than a physical subset of the data warehouse, provided that the data warehouse DBMS can support the additional resource demands of data mining.
Fig 2: Data mining data mart extracted from a data warehouse [3] (data sources feed the warehouse, which in turn feeds geographic, analysis and data mining data marts).
A data warehouse is not a requirement
for data mining. Setting up a large data
warehouse that consolidates data from
multiple sources, resolves data integrity
problems, and loads the data into a query
database can be an enormous task,
sometimes taking years and costing millions
of dollars. However, one can mine data from
one or more operational or transactional
databases by simply extracting it into a
read-only database (Figure 3). This new
database functions as a type of data mart.
Fig 3: Data mining data mart extracted from operational databases (data sources feed the data mining data mart directly).
Many data mining tools currently
operate outside of the warehouse, requiring
extra steps for extracting, importing, and
analyzing the data [8]. Furthermore,
when new insights require operational
implementation, integration with the
warehouse simplifies the application of
results from data mining. The resulting
analytic data warehouse can be applied to
improve business processes throughout the
organization.
6. Data mining and OLAP
Data mining and OLAP (On-Line
Analytical Processing) are very different
tools that can complement each other.
OLAP software allows for the real-time
analysis of data stored in a database [1].
The OLAP server is normally a separate
component that contains specialized
algorithms and indexing tools to efficiently
process data mining tasks with minimal
impact on database performance.
OLAP is part of the spectrum of
decision support tools [10]. Traditional
query and report tools describe what is in
a database. OLAP goes further; it’s used to
answer why certain things are true. The user
forms a hypothesis about a relationship and
verifies it with a series of queries against
the data. In other words, the OLAP analyst
generates a series of hypothetical patterns
and relationships and uses queries against
the database to verify them or disprove
them. OLAP analysis is essentially a
deductive process. But it becomes much
more difficult and time-consuming to find a
good hypothesis and analyze the database
with OLAP to verify or disprove it when the
number of variables being analyzed is very
large.
Data mining is different from OLAP
because rather than verifying hypothetical
patterns, it uses the data itself to uncover
such patterns. It is essentially an inductive
process. For example, suppose an analyst wants to identify the factors behind student failure in an exam and plans to use a data mining tool. The tool might discover that students with low scores are candidates for failure, but it might go further and also discover a pattern the analyst did not think to try, such as irregularity also being a determinant of failure. Here is where data mining and OLAP can complement each other. Before acting on a pattern, the analyst needs to know the reasons behind it. The
OLAP tool can allow the analyst to answer
those kinds of questions. Furthermore,
OLAP is also complementary in the early
stages of the knowledge discovery process
because it can help to explore the data, for
instance by focusing attention on important variables, identifying exceptions, or finding interactions. This is important because the better one understands the data, the more effective the knowledge discovery process will be.
7. Data mining applications
Data mining is becoming an
increasingly important tool to transform this
data into information. It is commonly used in
a wide range of applications such as:
1. Telecommunication: to handle transactional data (calls on mobile and fixed phones), other customer data such as billing and personal information, and additional data such as network load and faults.
2. Health: to handle different aspects of the health system such as personal health records, hospital data and billing information.
3. Astronomy: to process terabytes of image and other data received from telescopes and satellites.
4. Economics and commerce: analysis and prediction of stock markets.
5. Governments: for statistics, census and taxation, and also to prevent fraud.
6. Bioinformatics: to predict diseases based on genome sequences.
7. Credit card and insurance companies: for example, segmenting customers for targeted marketing.
8. Terror, crime and fraud detection: to find and predict unusual events.
8. How data mining works
Modeling is the basic technique that is
used in data mining to perform its intended
task. Modeling is simply the act of building a model in a situation where one knows the answer and then applying it to another situation where one doesn't. For example, to predict the marketing prospects of a particular product, one can use the business experience stored in the database to build a model and predict the results much better than random guessing.
9. Models in Data mining
In the data mining literature, various
models are used to serve as blueprints for
how to organize the process of gathering
data, analyzing data, disseminating results,
implementing results, and monitoring
improvements.
Data Mining is an analytic process
designed to explore data (usually in large
amounts) in search of consistent patterns
and/or systematic relationships between
variables, and then to validate the findings
by applying the detected patterns to new
subsets of data. The process of data mining
consists of three stages: (1) the initial
exploration, (2) model building or pattern
identification with validation/verification,
and (3) deployment.
Stage 1: Exploration. This stage involves cleaning the data, transforming it, and selecting subsets of records, with preliminary feature selection operations applied to bring the number of variables into a manageable range.
Stage 2: Model building and validation.
This stage involves considering various
models and choosing the best one based on
their predictive performance.
Stage 3: Deployment. This is the
final stage which involves using the model
selected as best in the previous stage and
applying it to new data in order to generate
predictions or estimates of the expected
outcome.
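These three stages can be sketched in a few lines of code. The snippet below is a minimal illustration only, assuming scikit-learn as the toolkit and synthetic data; the text itself prescribes no particular tool.

# A minimal sketch of the three stages, assuming scikit-learn and
# synthetic data (neither is prescribed by the article).
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# Stage 1: exploration -- reduce the variables to a manageable range.
selector = SelectKBest(f_classif, k=5).fit(X, y)
X_train, X_valid, y_train, y_valid = train_test_split(
    selector.transform(X), y, test_size=0.3, random_state=0)

# Stage 2: build several candidate models; keep the best performer
# on the validation data.
candidates = [LogisticRegression(max_iter=1000),
              DecisionTreeClassifier(random_state=0)]
best = max(candidates,
           key=lambda m: m.fit(X_train, y_train).score(X_valid, y_valid))

# Stage 3: deployment -- apply the chosen model to new data.
X_new, _ = make_classification(n_samples=5, n_features=20, random_state=1)
print(best.predict(selector.transform(X_new)))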
Data mining models can be generally
classified as follows:
1. Descriptive Models: Models which
describe all the data or a process
for generating the data. Data
randomly generated from a “good”
descriptive model will have the same
characteristics as the real data.
2. Predictive Models: A model is created
or chosen to try to best predict the
probability of an outcome. Predictive
models exploit patterns found in
historical and transactional data to
identify risks and opportunities.
A predictive model is made up of a
number of predictors, which are variable
factors that are likely to influence future
behavior or results. In marketing, for
example, a customer’s gender, age, and
purchase history might predict the likelihood
of a future sale.
I. Data Description for Data Mining
a. Clustering
Clustering is a data mining technique
used to place data elements into related
groups without advance knowledge of
the group definitions. Popular clustering
techniques include k-means clustering and
expectation maximization (EM) clustering
[13].
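As a brief illustration of the two techniques just named, the sketch below clusters synthetic data; the choice of scikit-learn and the data are assumptions, not part of the cited work.

# A short sketch of k-means and EM clustering on synthetic data,
# assuming scikit-learn (the text names the techniques, not a tool).
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.mixture import GaussianMixture

X, _ = make_blobs(n_samples=300, centers=3, random_state=42)

# k-means: each record is assigned to the nearest of k centroids.
kmeans_labels = KMeans(n_clusters=3, n_init=10, random_state=42).fit_predict(X)

# EM clustering: fits a mixture of Gaussians, giving soft memberships.
gmm = GaussianMixture(n_components=3, random_state=42).fit(X)
em_labels = gmm.predict(X)        # hard assignment per record
em_probs = gmm.predict_proba(X)   # probability of each cluster per record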
Clustering divides a database into
different groups. The goal of clustering is to
find groups that are very different from each
other, and whose members are very similar
to each other. Unlike classification, one doesn't know what the clusters will be when the procedure starts, or by which attributes
the data will be clustered. Consequently,
someone who is knowledgeable in the
business must interpret the clusters. Often
it is necessary to modify the clustering
by excluding variables that have been
employed to group instances, because upon
examination the user identifies them as
irrelevant or not meaningful. After you have
found clusters that reasonably segment
your database, these clusters may then be
used to classify new data.
Clustering is different from segmentation. Segmentation refers to the general problem of identifying groups that have common characteristics. Clustering is a way to segment data into groups that are not previously defined.
Data modeling puts clustering in a
historical perspective rooted in mathematics,
statistics, and numerical analysis. From
a machine learning perspective clusters
correspond to hidden patterns, the search
for clusters is unsupervised learning, and
the resulting system represents a data
concept. From a practical perspective
clustering plays an outstanding role in
data mining applications such as scientific
data exploration, information retrieval and
text mining, spatial database applications,
Web analysis, CRM, marketing, medical
diagnostics, computational biology, and
many others.
b. Link analysis
Link analysis is a descriptive approach
to exploring data that can help identify
relationships among values in a database.
The two most common approaches to
link analysis are association discovery and
sequence discovery [1]. Association discovery
finds rules about items that appear together
in an event such as a purchase transaction.
Sequence discovery is very similar, in that a
sequence is an association related over time.
Associations are written as A → B,
where A is called the antecedent or left-hand
side (LHS), and B is called the consequent or
right-hand side (RHS). For example, in the
association rule “If people buy a hammer
then they buy nails,” the antecedent is “buy
a hammer” and the consequent is “buy
nails.”
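The hammer-and-nails rule can be quantified by its support (how often antecedent and consequent occur together) and its confidence (how often the consequent follows the antecedent). The toy computation below is purely illustrative; the transactions are invented.

# Support and confidence for the rule "hammer -> nails" over a toy
# (invented) list of purchase transactions.
transactions = [
    {"hammer", "nails", "wood"},
    {"hammer", "nails"},
    {"nails", "glue"},
    {"hammer", "saw"},
    {"bread", "milk"},
]

def support(itemset):
    """Fraction of transactions that contain every item in the itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

antecedent, consequent = {"hammer"}, {"nails"}
rule_support = support(antecedent | consequent)   # 2/5 = 0.40
confidence = rule_support / support(antecedent)   # 0.40 / 0.60 ~ 0.67
print(f"support={rule_support:.2f}, confidence={confidence:.2f}")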
Graphical methods may also be very
useful in seeing the structure of links. In
Figure 4, each of the circles represents a
value or an event. The lines connecting
them show a link. The thicker lines represent
stronger or more frequent linkages, thus
emphasizing potentially more important
relationships such as associations.
Fig 4: Linkage diagram
II. Predictive Data Mining
The ultimate goal of data mining is prediction, and predictive data mining is the most common type of data mining, the one with the most direct business applications. The term predictive data mining is usually applied to data mining projects whose goal is to identify a statistical or neural network model, or set of models, that can be used to predict some response of interest.
a. Neural Networks. Neural networks are analytic techniques modeled after the (hypothesized) processes of learning in the cognitive system and the neurological functions of the brain. They are capable of predicting new observations (on specific variables) from other observations (on the same or other variables) after executing a process of so-called learning from existing data [1].
Fig 5: Neural network representation
The first step is to design a specific
network architecture that includes a
specific number of “layers” each consisting
of a certain number of “neurons”. The new
network is then subjected to the process of
“training.” In that phase, neurons apply an
iterative process to the number of inputs to
adjust the weights of the network in order
to optimally predict the sample data on
which the “training” is performed. After the
phase of learning from an existing data set,
the new network is ready and it can then be
used to generate predictions.
One of the major advantages of neural
networks is that, theoretically, they are
capable of approximating any continuous
function, and thus the researcher does not
need to have any hypotheses about the
underlying model, or even to some extent,
which variables matter. An important
disadvantage, however, is that the final
solution depends on the initial conditions of
the network.
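The design-train-predict cycle described above can be sketched as follows; the multi-layer perceptron from scikit-learn, the layer sizes and the synthetic data are all illustrative assumptions.

# Sketch of the design/train/predict cycle for a neural network,
# assuming scikit-learn's MLPClassifier and synthetic data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=400, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Design: two hidden "layers" of 16 and 8 "neurons"; training then
# iteratively adjusts the connection weights to fit the sample data.
net = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=1000, random_state=0)
net.fit(X_train, y_train)

# After learning, the network generates predictions for new observations.
print("held-out accuracy:", net.score(X_test, y_test))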
b. Classification
Classification is a data mining technique
used to predict group membership for
data instances [12]. Popular classification
techniques include decision trees and neural
networks.
Classification problems aim to identify
the characteristics that indicate the group to
which each case belongs. This pattern can
be used both to understand the existing data
and to predict how new instances will behave.
Data mining creates classification
models by examining already classified data
(cases) and inductively finding a predictive
pattern. These existing cases may come
from an historical database. They may come
from an experiment in which a sample of the
entire database is tested in the real world
and the results used to create a classifier.
Sometimes an expert classifies a sample
of the database, and this classification is
then used to create the model which will be
applied to the entire database.
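A minimal sketch of this examine-then-predict workflow follows, assuming a scikit-learn decision tree and the library's bundled iris data as the "already classified" cases.

# Build a classifier from already-classified cases, then apply it to
# new instances (scikit-learn decision tree; data is the bundled iris set).
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)            # pre-classified "historical" cases
X_train, X_new, y_train, y_new = train_test_split(X, y, random_state=0)

tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(X_train, y_train)                   # inductively find the pattern

print(tree.predict(X_new[:5]))               # predict groups for new instances
print("accuracy on held-out cases:", tree.score(X_new, y_new))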
c. Regression
Regression is a data mining technique used to fit an equation to a dataset. The simplest form of regression, linear regression, uses the formula of a straight line (y = mx + b) and determines the appropriate values for m and b to predict the value of y based upon a given value of x. Advanced techniques, such as multiple regression, allow the use of more than one input variable and allow for the fitting of more complex models, such as a quadratic equation [1]. Regression uses existing values to forecast what other values will be [12]. Unfortunately, many real-world problems are not simply linear projections of previous values. In such cases, more complex techniques such as logistic regression, decision trees, or neural nets are necessary to forecast future values.
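The straight-line fit can be demonstrated in a couple of lines; the data points below are invented for illustration.

# Least-squares fit of y = mx + b, sketched with NumPy on invented data.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 4.0, 6.2, 7.9, 10.1])

m, b = np.polyfit(x, y, deg=1)     # slope and intercept (highest degree first)
print(f"y ~ {m:.2f}x + {b:.2f}")
print("predicted y at x=6:", m * 6 + b)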
d. Time series
Time series forecasting predicts unknown future values based on a time-varying series of predictors. Like regression, it uses known results to guide its predictions [15]. Models must take into account the distinctive properties of time, especially the hierarchy of periods (including such varied definitions as the five- or seven-day work week, the twelve-“month” year, etc.), seasonality, calendar effects such as holidays, date arithmetic, and special considerations such as how much of the past is relevant.
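A deliberately simple forecasting sketch follows: exponential smoothing over an invented monthly series. A production model would also have to handle the seasonality and calendar effects noted above.

# Exponential smoothing over an invented monthly series; the last
# smoothed value serves as a naive one-step-ahead forecast.
def exponential_smoothing(series, alpha=0.5):
    smoothed = [series[0]]
    for value in series[1:]:
        smoothed.append(alpha * value + (1 - alpha) * smoothed[-1])
    return smoothed

monthly_sales = [112, 118, 132, 129, 121, 135, 148, 148, 136, 119]
forecast = exponential_smoothing(monthly_sales)[-1]
print(f"next-period forecast: {forecast:.1f}")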
Conclusion
Wide-ranging data warehouses that
integrate operational data with customer,
supplier, and market information have
resulted in an explosion of information.
Competition requires timely and sophisticated analysis on an integrated view of the data. In this context, a new technological leap is needed to structure and prioritize information for specific end-user problems. Data mining tools can make this leap. Quantifiable business benefits have been proven through the integration of data mining with current information systems, and new products are on the horizon that will bring this integration to an even wider audience of users [19][20].
References
1] Jiawei Han and Micheline Kamber, Data Mining: Concepts and Techniques (New York: Morgan Kaufmann Publishers, 2001).
2] Pieter Adriaans and Dolf Zantinge, Data Mining (New York: Addison Wesley, 1996), pp. 5-6.
3] Dr. Osmar R. Zaïane, Principles of Knowledge Discovery in Databases, University of Alberta, 1999.
4] Kantardzic, Mehmed, Data Mining: Concepts, Models, Methods, and Algorithms. John Wiley & Sons, 2003.
5] Alex Guazzelli, Wen-Ching Lin, Tridivesh Jena. PMML in Action: Unleashing the Power of Open Standards for Data Mining and Predictive Analytics. CreateSpace, 2010.
6] Alex Guazzelli, Michael Zeller, Wen-Ching Lin, Graham Williams. PMML: An Open Standard for Sharing Models. The R Journal, Vol. 1/1, May 2009.
7] Y. Peng, G. Kou, Y. Shi, Z. Chen (2008). “A Descriptive Framework for the Field of Data Mining and Knowledge Discovery”. International Journal of Information Technology and Decision Making, Volume 7, Issue 4, pp. 639-682.
8] Fayyad, Usama; Gregory Piatetsky-Shapiro, and Padhraic Smyth (1996). “From Data Mining to Knowledge Discovery in Databases”. http://www.kdnuggets.com/gpspubs/aimag-kdd-overview-1996-Fayyad.pdf. Retrieved 2008-12-17.
9] Ellen Monk, Bret Wagner (2006). Concepts in Enterprise Resource Planning, Second Edition. Thomson Course Technology, Boston, MA. ISBN 0-619-21663-8. OCLC 224465825.
10] Tony Fountain, Thomas Dietterich & Bill Sudyka (2000). Mining IC Test Data to Optimize VLSI Testing. Proceedings of the Sixth ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 18-25. ACM Press.
11] Xingquan Zhu, Ian Davidson, Knowledge Discovery and Data Mining: Challenges and Realities. Hershey, New York, p. 18, 2007.
12] Breiman, Friedman, Olshen, and Stone (1984). Classification and Regression Trees. Wadsworth.
13] Dorian Pyle (1999). Data Preparation for Data Mining. Morgan Kaufmann.
14] Norén GN, Bate A, Hopstadius J, Star K, Edwards IR. Temporal Pattern Discovery for Trends and Transient Effects: Its Application to Patient Records. Proceedings of the Fourteenth International Conference on Knowledge Discovery and Data Mining (SIGKDD 2008), pp. 963-971. Las Vegas, NV, 2008.
15] James Kobielus (1 July 2008). The Forrester Wave™: Predictive Analytics and Data Mining Solutions, Q1 2010. Forrester Research.
16] Dominique Haughton, Joel Deichmann, Abdolreza Eshghi, Selin Sayek, Nicholas Teebagy, & Heikki Topi (2003). A Review of Software Packages for Data Mining. The American Statistician, Vol. 57, No. 4, pp. 290-309.
17] U. Fayyad, Piatetsky-Shapiro, Smyth, and Uthurusamy (1996). Advances in Knowledge Discovery and Data Mining. MIT Press.
18] http://dataminingwarehousing.blogspot.com
19] Gartner Group Advanced Technologies and Applications Research Note, 2/1/95.
20] META Group Application Development Strategies: “Data Mining for Data Warehouses: Uncovering Hidden Patterns.”, 7/13/95.
STUDENTS KORNER
Analysis of Techniques for Detection
of Deep Web Search Interface
Dilip Kumar Sharma1, A. K. Sharma2
1 GLA University, Mathura, UP, India. Email: [email protected]
2 YMCA University of Science and Technology, Faridabad, Haryana, India
The volume of information on the web is increasing day by day. The information on the web can be broadly categorized into two types, i.e. the surface web and the deep web. Surface web pages can be easily indexed through conventional techniques, but the deep web, whose size is assumed to be a thousand times larger than the surface web, cannot be indexed through conventional search techniques. The first stage of the extraction of deep web information is the detection of the deep web search interface. A search interface generally consists of HTML forms. Conventionally, deep web information is searched by filling in the HTML forms on the search interface manually, but recent research addresses automatic access to, and understanding of, HTML forms. Being the first stage of the deep web extraction process, the detection of the deep web search interface is one of the most important modules of deep web information retrieval. In this paper, a technical analysis of some of the important deep web search interface detection techniques is carried out to find their relative strengths and limitations with reference to current developments in the field of deep web information retrieval technology.
Keywords: Deep web, hidden web, search interface detection, crawler, random forest.
Introduction
The whole process of extraction of information
from deep web can be broadly categorized into four
steps i.e. query interface analysis, values allotment,
response analysis & navigation and relevance ranking.
Query interface analysis is the first and most important
step for deep web information retrieval. In query
interface analysis, a request of fetching a web page from
a web server is made by a crawler. After completion
of the fetching process, an internal representation of
the web page is produced after parsing and processing
of HTML forms based on the developed model. The query interface analysis can be further broken into modules: detection of the hidden web search interface, search form schema matching, and domain ontology identification. Among these, the detection of the hidden web search interface is the first and foremost step towards deep web information retrieval. A human user can easily identify a deep web search interface, but understanding one through an automatic technique, without human intervention, is a challenging task [1][2][3][4][5]. Figure 1 depicts the different types of search interfaces.
Fig. 1 : Different types of search interface
Related Work
One of the prominent approaches to detection of deep web search interfaces is based on the random forest algorithm of Leo Breiman (2001) [6]. A random forest algorithm detects the deep web search interface by using a model based on decision tree classification. A random forest model can be defined as a collection of decision trees, each generated by bootstrap sampling of the training data. To classify a new object from its input vector, the sample vector is passed to every tree in the forest; each tree gives a classification decision, and the most-voted classification across all individual trees is chosen. The advantages of the random forest algorithm are that it exhibits a substantial performance improvement over single tree classifiers, and that injecting the right kind of randomness yields accurate classifiers and regressors. The disadvantage of this algorithm is that its random feature selection may pick unimportant or noisy features in the training data, leading to poor classification results.
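The bootstrap-and-vote scheme just described can be sketched briefly. The snippet below uses scikit-learn's RandomForestClassifier on synthetic stand-in features; both the library and the data are assumptions and not part of Breiman's paper.

# Sketch of the bootstrap-and-vote scheme, assuming scikit-learn and
# synthetic stand-in features (1 = searchable interface, 0 = not).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=300, n_features=12, random_state=0)

forest = RandomForestClassifier(n_estimators=100, bootstrap=True, random_state=0)
forest.fit(X, y)

new_form = X[:1]                 # one new input vector to classify
votes = [t.predict(new_form)[0] for t in forest.estimators_]
print("majority vote:", max(set(votes), key=votes.count))
print("forest prediction:", forest.predict(new_form)[0])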
A deep web crawler architecture was proposed by Sriram Raghavan and Hector Garcia-Molina (2001) [7]. In this paper, a task-specific, human-assisted approach is used to crawl the hidden web. There are two basic problems related to deep web search: firstly, the volume of the hidden web is very large, and secondly, there is a need for crawlers that can efficiently handle search interfaces, which are designed mainly for humans. The paper designs a model of a task-specific, human-assisted web crawler and realizes it in HiWE (Hidden Web Exposer), a prototype built at Stanford to crawl dynamic pages. HiWE is designed to automatically process, analyze, and submit forms, using an internal model of forms and form submissions, and it uses a layout-based information extraction (LITE) technique to process and extract useful information. The advantages of the HiWE architecture are that its application/task-specific approach allows the crawler to concentrate on relevant pages only, and that, with the human-assisted approach, automatic form filling can be done. Limitations of this architecture are that it is not precise with partially filled forms and that it cannot identify and respond to simple dependencies between form elements.
A technique for collecting hidden web pages for data extraction is proposed by Juliano Palmieri Lage et al. (2002) [8]. The authors propose the concept of web wrappers. A web wrapper is a program that extracts data from web pages, taking as input a set of target pages from the web source. These target pages are generated automatically by an approach called “spiders”, which traverse the web for web pages. Hidden web agents assist the wrappers in dealing with the data available on the hidden web. The advantage of this technique is that it can access a large number of web sites from diverse domains; its limitation is that it can access only web sites that follow common navigation patterns. The technique could be further modified to cover navigation patterns based on other mechanisms.
A technique for automated discovery of search interfaces from a set of HTML forms is proposed by Jared Cope, Nick Craswell and David Hawking (2003) [9]. This paper defines a novel technique to automatically detect search interfaces within a group of HTML forms. A decision tree was developed with the C4.5 learning algorithm, using automatically generated features from HTML markup, and gives a classification accuracy of about 85% for general web interfaces. The advantage of this technique is that it can automatically discover the search interface. Its limitations are that it is based on a single-tree classification method and that the number of generated features is limited by the use of a limited data set. As future work, it is suggested that a search engine could be developed using existing methods for the other stages along with the proposed one, with a technique to eliminate false positives.
A technique for understanding web query interfaces through best-effort parsing with hidden syntax is proposed by Zhen Zhang et al. (2004) [10]. This paper addresses the problem of understanding web search interfaces by presenting a best-effort parsing framework: a form extractor based on a 2P grammar together with a best-effort parser within a language parsing framework. It identifies the search interface by continuously producing fresh instances, applying productions until a fixed point is attained, when no fresh instance can be produced. The best-effort parser minimizes wrong interpretations as far as possible, works very quickly, and understands the interface to a large extent. The advantages of this technique are that it is very simple and consistent, has no priority among preferences, and can handle missing elements in a form; its limitation is that establishing a single global grammar that can be communicated to the machine globally is a critical issue.
A technique named “siphoning hidden web data through keyword-based interfaces”, which retrieves information from hidden web databases by generating a small set of representative keywords and building queries, is proposed by Luciano Barbosa and Juliana Freire (2004) [11]. This technique is designed to enhance the coverage of the deep web. Its advantage is that it is a simple and completely automated strategy that can be quite effective in practice, leading to very high coverage of the deep web. Its limitation is that it cannot achieve such coverage for collections whose search interface fixes the number of results. The authors suggest that the algorithm could be modified to characterize search interface techniques in a better way, so that different notions and levels of security can be achieved.
An improved version of the random forest algorithm is proposed by Deng et al. (2008) [12]. In this improved technique, a weighted feature selection algorithm is used to generate the decision trees. The advantage of this improved algorithm is that it reduces the difficulty of classifying high-dimensional, sparse search interfaces by using an ensemble of decision trees. Its disadvantage is that it is highly sensitive to changes in the training data set.
A further improvement to the random forest algorithm is made by Yunming Ye et al. (2009) [13], who use a feature weighting random forest algorithm for detection of hidden web search interfaces. This paper presents a feature weighting selection process rather than a random selection process. The advantage of this technique is that the weighted feature selection reduces the chances of selecting noisy features; its limitation is that only features available in the search forms were used. The suggested future modification to the random forest algorithm is to investigate more feature weighting methods for the construction of random forests.
An algorithm named “the naive Bayesian web text classification algorithm” is proposed by Ping Bai and Junqing Li (2009) [14] for automatic and effective classification of web pages against a given machine learning model. In conventional techniques, category abstracts are produced through inspection by domain experts, either by a semi-automatic method or a manual method, and a conventional naive Bayesian classifier gives all terms equal importance; the improved naive Bayesian web text classification algorithm instead gives the terms appearing in each title higher importance than the others. The strength of this technique is that its text classification results are very accurate; the suggested further work is to make the classification process automatic in an efficient way.
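A small sketch of naive Bayesian text classification follows; the documents, the labels, and the device of repeating title words to give them extra weight are illustrative assumptions, not the authors' implementation.

# Naive Bayesian text classification, assuming scikit-learn; repeating
# the title words crudely approximates giving them higher importance.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

docs = [
    "flight booking flight booking cheap airline tickets online",  # title repeated
    "laptop review laptop review battery screen performance",
    "hotel deals hotel deals rooms resort holiday",
    "graphics card graphics card benchmark gaming frames",
]
labels = ["travel", "tech", "travel", "tech"]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(docs)

model = MultinomialNB().fit(X, labels)
print(model.predict(vectorizer.transform(["cheap hotel rooms online"])))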
An approach for automatic detection and unification of web search query interfaces using domain ontology is proposed by Anuradha and A. K. Sharma (2010) [15]. The proposed technique works by concentrating the crawler on a given topic using the domain ontology, and it returns the pages that contain the domain-specific search form. The strengths of this technique are that results are produced from multiple sources, human effort is reduced, and results are very accurate with less execution time. Its limitation is that it is domain specific.
Summary of various techniques for
detection of deep web search interface
Going through the literature survey of deep web search interface detection techniques, it is concluded that each technique for detection of the deep web search interface has some relative strengths and limitations. A tabular summary is given below in Table 1, which summarizes the techniques, strengths and limitations of some of the important detection techniques for deep web search interfaces.
Table 1 : Summary of various techniques for detection of deep web search interface

Leo Breiman (2001). Technique: Forest of regression trees as classifiers. Strengths: A substantial improvement in performance over single tree classifiers. Limitations: May include unimportant or noisy features.

Sriram Raghavan et al. (2001). Technique: Hidden Web Exposer. Strengths: An application-specific approach to hidden web crawling. Limitations: Imprecise in filling the forms.

Palmieri Lage et al. (2002). Technique: Hidden web agents. Strengths: Wide coverage of distinct domains. Limitations: Restricted to web sites that follow common navigation patterns.

Jared Cope et al. (2003). Technique: Single tree classifiers. Strengths: Automatic discovery of search interfaces; performs well when rules are generated on the same domain. Limitations: Long rules; large feature space in training samples; overfitting; classification precision is not very satisfying.

Zhen Zhang et al. (2004). Technique: 2P grammar and best-effort parser. Strengths: Very simple and consistent; no priority among preferences; handles missing elements in forms. Limitations: Critical to establish a single global grammar that can be communicated to the machine globally.

Luciano Barbosa et al. (2004). Technique: Automatic query generation based on a small set of keywords. Strengths: A simple and completely automated strategy that can be quite effective in practice. Limitations: A large domain of keywords has to be generated.

Deng, X. B. et al. (2008). Technique: Weighted feature selection algorithm. Strengths: Reduces the difficulty of classifying high-dimensional, sparse search interfaces using an ensemble of decision trees. Limitations: Highly sensitive to changes in the training data set.

Ye, Li, Deng et al. (2009). Technique: Feature-weighted selection process. Strengths: Minimizes the chances of selecting noisy features. Limitations: No use of contextual information associated with forms.

Ping Bai et al. (2009). Technique: Naïve Bayesian algorithm. Strengths: Text classification results are very accurate. Limitations: Classification algorithm is not automatic.

Anuradha et al. (2010). Technique: Based on domain ontology. Strengths: Results are produced from multiple sources; reduces human effort; less execution time; high accuracy. Limitations: It is domain specific.
Conclusion
Deep web search interfaces are the entry points for searching deep web information. A deep web crawler should understand and detect the deep web search interface efficiently to facilitate the subsequent stages of deep web information retrieval. Efficient detection of the deep web search interface can lead to significantly better retrieval of deep web information, so the first and foremost step of deep web information retrieval is the efficient understanding and detection of the deep web search interface. In this paper, a technical analysis of some of the techniques for detection of the deep web search interface was carried out, and it is concluded that each of them has some relative strengths and limitations. To explore deep web information efficiently, a technique for detection of the deep web search interface should be designed that combines the individual strengths, particularly in terms of wide coverage of different domains, an automatic procedure, resistance to noisy and unwanted features, the ability to weight features by their importance, an application-specific approach where required, and a user-friendly approach. Finally, the technique for detection of the deep web search interface should be compatible with current web technology.
References
1. Bergman, M.K. (2001). The Deep Web: Surfacing Hidden Value. In The Journal of Electronic Publishing, Vol. 7, No. 1.
2. Peisu, X., Ke, T. and Qinzhen, H. (2008). A Framework of Deep Web Crawler. In Proceedings of the 27th Chinese Control Conference, Kunming, Yunnan, China.
3. Sharma, D. K., and Sharma, A.K. (2010). Deep Web Information Retrieval Process: A Technical Survey. In International Journal of Information Technology & Web Engineering, USA, Vol. 5, No. 1.
4. Khare, R., An, Y., and Song, Y. (2010). Understanding Deep Web Search Interfaces: A Survey. In ACM SIGMOD Record, Volume 39, Issue 1, PP: 33-40.
5. Sharma, D. K., and Sharma, A.K. (2009). Query Intensive Interface Information Extraction Protocol for Deep Web. In Proceedings of IEEE International Conference on Intelligent Agent & Multi-Agent Systems, PP: 1-5, IEEE Explorer.
6. Breiman, L. (2001). Random Forests. In Machine Learning, Vol. 45, No. 1, PP: 5-32, Kluwer Academic Publishers.
7. Raghavan, S. and Garcia-Molina, H. (2001). Crawling the Hidden Web. In Proceedings of the 27th International Conference on Very Large Data Bases, Roma, Italy.
8. Lage, P. et al. (2002). Collecting Hidden Web Pages for Data Extraction. In Proceedings of the 4th International Workshop on Web Information and Data Management, PP: 69-75.
9. Cope, J., Craswell, N., and Hawking, D. (2003). Automated Discovery of Search Interfaces on the Web. In Proceedings of the Fourteenth Australasian Database Conference (ADC2003), Adelaide, Australia.
10. Zhang, Z., He, B., and Chang, K. (2004). Understanding Web Query Interfaces: Best-Effort Parsing with Hidden Syntax. In Proceedings of ACM International Conference on Management of Data, PP: 107-118.
11. Barbosa, L., and Freire, J. (2004). Siphoning Hidden-Web Data through Keyword-Based Interfaces. In Proceedings of SBBD.
12. Deng, X. B., Ye, Y. M., Li, H. B., & Huang, J. Z. (2008). An Improved Random Forest Approach For Detection Of Hidden Web Search Interfaces. In Proceedings of the Seventh International Conference on Machine Learning and Cybernetics, Kunming, China. IEEE.
13. Ye, Y., et al. (2009). Feature Weighting Random Forest for Detection of Hidden Web Search Interfaces. In Computational Linguistics and Chinese Language Processing, Vol. 13, No. 4, PP: 387-404.
14. Bai, P., and Li, J. (2009). The Improved Naive Bayesian WEB Text Classification Algorithm. In International Symposium on Computer Network and Multimedia Technology, IEEE Explorer.
15. Anuradha, and Sharma, A.K. (2010). A Novel Approach For Automatic Detection and Unification of Web Search Query Interfaces Using Domain Ontology. In International Journal of Information Technology and Knowledge Management, July-December, Vol. 2, No. 2, PP: 196-199.
About the Authors
Dilip Kumar Sharma is B.Sc, B.E. (CSE), M.Tech. (IT), M.Tech. (CSE) and is pursuing a Ph.D in Computer Engineering. He is a life member of CSI, IETE, ISTE, ISCA, SSI and a member of CSTA, USA. He has attended 21 short term courses/workshops/seminars organized by various esteemed organizations. He has published 21 research papers in international journals/conferences of repute and participated in 18 international/national conferences. Presently he is working as Reader in the Department of Computer Science, IET at GLA University, Mathura, U.P., where he has been since March 2003, and he is also CSI Student Branch Coordinator. His research interests are deep web information retrieval, digital watermarking and software engineering. He has guided various projects and seminars undertaken by undergraduate/postgraduate students.
Prof. A. K. Sharma received his M.Tech. (CST) with Hons. from University of Roorkee (presently I.I.T. Roorkee) and Ph.D (Fuzzy Expert Systems) from JMI, New Delhi, and he obtained his second Ph.D. in Information Technology from IIITM, Gwalior in 2004. Presently he is working as Dean, Faculty of Engineering and Technology & Chairman, Dept. of Computer Engineering at YMCA University of Science and Technology, Faridabad. His research interests include fuzzy systems, OOPS, knowledge representation and internet technologies. He has guided 9 Ph.D theses and 8 more are in progress, with about 175 research publications in international and national journals and conferences. The author of 7 books, he is actively engaged in research related to fuzzy logic, knowledge based systems, MANETs, and the design of crawlers. Besides being a member of many BOS and academic councils, he has been Visiting Professor at JMI, IIIT&M, and I.I.T. Roorkee.
Computer Society of India
National Headquarters, Education Directorate
CIT Campus, 4th cross Road, Taramani, Chennai 600 113
CSI Education Directorate requests Chennai based MBA fresh graduates to apply for the post of Marketing
Executive. The incumbent will work out of Chennai. He / She should have very good communication skills and
a flair for marketing.
Responsibilities include:
•  Enrolment of faculty / students for CSI Training programs and certifications
•  Liaise and co-ordinate with resource persons
•  Explore and get sponsorship for CSI programs
•  Promote CSI Membership and student branches
Prospective candidates may apply along with a Resume and expected remuneration to above address or by
e-mail to [email protected] on or before December 30, 2010. For further details, please contact
Mr. S Natarjan at Ph: 044-2254 1102
Wg. Cdr. M Murugesan
Director, Education
ExecCom Transacts
1. CSI-2010 Convention: The CSI-2010 was successfully hosted
setting a new record of quality programmes and an unprecedented
participation by the CSI Chapters, Student Branches and the
members at large. The convention witnessed an overwhelming
support from the industry in terms of workshop/tutorials and
an exceptional financial sponsorship. The meetings of the CSI
Presidents’ Council, Think Tank, the National Council and the
Annual General body were well attended and highly stimulating.
2. Membership Development: The membership committee deliberated at length on the strategies and mechanisms with regard to membership services and new membership development. The CSI HQ and Chapters will have to collaborate and cooperate, given the vast geographical areas and membership potential. While the HQ will primarily be responsible for membership services, the chapters and student branches across India will drive the new membership development programmes. It is very heartening to note the voluntary services of several of our senior and life members supporting the
above collaborative and cooperative model. With the proposed
substantial increase in the membership fee for all categories of the
membership (wef 1st April 2011), the members are requested to
renew their membership and enroll new members on a large scale
during the grace period till 31st March 2011.
3. Strengthening the SIGs: The SIG Coordination Committee along
with special invitees brainstormed on “CSI’s Special Interest
Groups (SIGs): Need for Re-alignment with the CSI Ecosystem”.
It was decided to make the necessary changes in the CSI
membership application forms – facilitating the members to join
the CSI SIGs of their choice. The current members will be sent an
official communication from the CSI HQ inviting them to join the
CSI SIGs. The CSI SIG– announcements and event reports will be
published in the CSI Communications and on the CSI Website.
The SIG Coordination Committee shall meet at least twice a year. Further, it was resolved to constitute a core committee (Chairman: Prof. D.B. Phatak, Convener: Mr. Satish Babu, Member:
(Chairman: Prof. D.B. Phatak, Convener: Mr. Satish Babu, Member:
Dr. R.K. Bagga) to finalize a comprehensive proposal and the set
of guidelines for the perusal/approval of the ExecCom. It was also
resolved that Mr. Satish Babu would be a special invitee to the
ExecCom meeting to represent the SIG Coordination Committee.
4. Research Guidance and Supervision: It is a matter of pride that
a large section of our members possess Ph.D. or equivalent
qualifications. On the other hand, we have several of our members
aspiring to acquire Ph.D. qualification. As such, there is a proposal
to formalize the Research Mentors Network immediately. It is
an earnest appeal to the members with the Ph.D. or equivalent
qualifications to indicate their availability and willingness to join
the Research Mentors Network.
5. Operational Guidelines and Standing Committees: There is a need to formulate operational guidelines and to constitute standing committees at the National, Regional and Chapter levels for Conferences, Membership Development, Publications, Business-Industry-Government Liaison, Collaborative Academic-Research-Consultancy, Student Branch Coordination, Entrepreneurship
Development, Career Counselling, Inter-society coordination,
Website /KM portal, Media & Public Relationship, Exhibitions,
Finance & Sponsorships. The members are requested to send their
suggestions for the above as well as their consent to volunteer
their services.
6. Recognition/Appreciation for OC/PC Chairs: There are
suggestions to recognize/ appreciate the efforts of members
across India who have contributed significantly in organizing
national/international events of CSI. There is a proposal to honour
such members-contributors during the CSI Annual Conventions
to begin with CSI-2011. The specific guidelines will be formulated
through wider consultation.
7. Publication of CSI Research Digests: The proposals for publications
of the CSI Research Digests incorporating highlights and important
outcomes of the CSI conventions, workshops and SIG events have
been received from a few leading publishers. The organizers of
various CSI conventions and programmes are requested to send
the content (on CDs) to the CSI-HQ for publication purposes.
Subsequently, the CSI Website will have provision for uploading
the content online. All such publications shall cite (prominently
and visibly) “This research study and publication is supported by
Computer Society of India”.
8. Recognition for Sponsors of CSI Events and Programmes: There
are several suggestions to recognize the support/sponsorship
from the organizations and individuals for the CSI events and
programmes – such as conventions, workshops and publications.
It is suggested to the CSI Chapters, Student Branches and other
host organizations to appropriately acknowledge such sponsorship
and support services. This may include – honouring in valedictory
session, issuing letters of appreciation/commendation, listing the
regular sponsors on the Website, complimentary CSI membership
up to one year, invitation to upcoming CSI events etc.
9. Strengthening CSI Education Directorate: There is a proposal to
augment the human resources at the CSI Education Directorate
to offer continuing education and professional development
programmes. The senior members from the academia, R&D and
industry are requested to indicate their availability and willingness
to offer their voluntary expert services. The services of such
volunteers will be suitably rewarded.
10. Call for Technology Appreciation Seminars: The mission of the
CSI is to facilitate research, knowledge sharing, learning and
career enhancement for its stakeholders, while simultaneously
inspiring and nurturing new entrants into the industry and helping
them to integrate into the IT community. The CSI is also working
closely with other industry associations, government bodies and
academia to ensure that the benefits of IT advancement ultimately
percolate down to every single citizen of India. In pursuance of the
above mission and objectives of the Computer Society of India, it
is proposed to host a series of Technology Appreciation Seminars
for building/strengthening Knowledge-communities (TASK) for
the year 2011-2012. The proposals from the Chapters, Student
Branches and Institutional Members may please be sent to vp@
csi-india.org with cc to [email protected].
11. CSI Signs an MoU with JUET: CSI has executed an MoU with
Jaypee University of Engineering and Technology, Raghogarh,
Guna, MP (JUET) for the setting up of a Digital Forensics
Research Centre (DFRC) at JUET. The center will undertake R&D
activities in the areas of Cyber Security and Forensics, specially
Image Forensics, Video Forensics, e-mail Forensics and Network
Forensics. The deliverables would include awareness building
programs, workshops, conferences and short term courses, in
addition to execution of R&D projects. CSI will play the role of
advisor and provide various types of support for the joint activities
of the centre.
Prof. H.R. Vishwakarma
Hon. Secretary, Computer Society of India
CSI2010 - SPECIAL REPORT
Technology and Society:
the human touch
R Gopalakrishnan
Executive Director, Tata Sons, Bombay House, 24, Homi Mody Street, Mumbai 400 001. India
E-mail: [email protected]
It is a great pleasure to address such distinguished
members of a young profession, which has played a
key role in repositioning India into global prominence.
Your profession will continue to play an even more
important role going forward. It is my privilege to
speak at this inaugural session of the 45th Annual
Convention of Computer Society of India (CSI) with
the very relevant theme of “Technologies for the Next
Decade”.
Over forty years ago, I began my career at
Hindustan lever as a computer trainee. Over the
next five years after joining, I resisted my seniors’
suggestion to move into the mainstream marketing
of consumer goods. I was quite infatuated with the
mysteries of mainframes, Cobol and Fortran languages,
and the wonders that computers could accomplish
for my company, mankind and society. But the slow
acceptance of mainframe computers, the national
shortage of foreign exchange to import machines and
the stiff resistance from labour unions took their toll.
I took leave of computers as a profession in 1972 and
succumbed to the seduction of FMCG marketing.
Thirty years later, I addressed students at IIT
Bombay on the subject of “Leadership and Foresight.”
I said all the predictable things about foresight,
hopefully in an inspiring way, until one curious
student asked, “What you say about foresight is so
true, but I have a question. Since you must have been
endowed with foresight, did you not see that to leave
the computer field would be a mistake?” I had great
difficulty in explaining the difference between talking
about foresight and exercising foresight!
Technology has played a central role in human
history for centuries: first, by inventing useful things,
and later, by diffusing the benefits of technology
to transform society. Technology will play an even
more significant role in the years to come. As a
former ‘techie’ myself, I found that when one is close
to a subject, one just might miss the larger social
consequences of the technology one practices. I
wish to quote two examples of social transformation
from history as that will enable us to think about the
transformational effects of ICT.
First, before Gutenberg invented the printing
press, the church was the only interpreter of the Bible.
People did not even possess a copy of the Bible to read
it for themselves. As Gutenberg unknowingly gave the
world the gift of being able to possess a personal copy
of the Bible, people felt empowered and curious. This
impacted society intimately over a few hundred years
and led to the European Age of Enlightenment and
Reason.
Second, our ancestors were all farmers. That is
why the GDP of nations was approximately in the
proportion of their population. Until 1800, as Angus Maddison has demonstrated, China was about 30%
of world GDP and India was roughly 20%. China’s
economy was always bigger than that of India’s. I
am puzzled to note the flood of books and papers on
whether or not India will catch up with China. How
does it matter when it has not been bigger for 5000
years?
The industrial revolution changed the relationship
between farming and wealth. I am sure that I need not
dwell on explaining it. But a less recognized effect of
the industrial revolution was the way it changed a very
hierarchical European society: no more did common
people have to be beholden to the Lords, Dukes and
Barons. Money earned through industrial methods
could be earned by anybody, and that ‘anybody’ could
sit at the same table with a ‘somebody’.
Globalization, socialization and adaptation are
three circles and technology sits in the space that is
common among these. What do these words mean in
simple terms? Think of the idea of foreigner or pardesi,
for example.
•  Phagun (1958): ‘Phagun’ is the tale of a forbidden love between a Zamindar’s son Bijan (Bharat Bhushan) and a gypsy chieftain’s daughter Banani (Madhubala), who defy all social conventions. Pardesi = someone from another village.
•  Raja Hindustani (1996): Aarti (Karishma Kapoor) goes for a vacation to a small hill station named Palankhet. Once there, she meets Raja Hindustani (Aamir Khan), and after a short period of time, both fall in love with each other. Pardesi = someone from another city.
•  Dev D (2009): London-educated Dev (Abhay Deol) is in love with Chanda (Kalki Koechlin), who is also part foreigner. Also seen in the clip is Dev’s
first girlfriend from Punjab. Pardesi =
someone from another country.
Through the depiction of Pardesi, the
changing perception of globalization can be
sensed.
Socialization too has evolved: a new
urban Indian has emerged and so have
attitudes to native place, marriage, language
and food!
•  It is difficult to find youngsters today
with clear answer to ‘where are you
from?’ They may be native to one
place, may move around when they are
growing up, may take a job somewhere
else and decide to settle down at
yet another place. A new Indian has
emerged who perhaps denotes a little
bit of all the places and people that he
has touched along the way.
•  During my growing years, it was thought to be inappropriate for a middle class person to get married outside his or her caste. Today it has almost become routine to have international sons-in-law and daughters-in-law!
•  Children of parents from different linguistic backgrounds speak a common language, which is often English! So we have a South
Indian father and a Punjabi mother
communicating with each other and
their kids in English! View this against
the curious fact that Punjabi is set to
become the fourth most widely-spoken
language in Canada by 2011, after
English, French and Chinese! Strange
are the ways of globalization and
socialization!
•  The same is true of food. Tandoori
Chicken and Mutter Paneer came to
define the Indian cuisine in overseas
countries. Nowadays, Indian food in
India is influenced by overseas cuisines.
This slide exemplifies my point of how
globalization has truly arrived if we
look at internationalization of various
cuisines.
People and technology adapt to each
other with consummate ease. Think of the
numerous ways in which this fact is visible
from the black phone to sleek cell phones,
from box cameras to digital cameras and
pictures, from face to face meetings to
virtual meetings.
Technology Diffusion in the Future
In general technology has been an
urban and transformational tool to enhance
productivity, convenience and performance.
Products are now so ingrained that the advent of videogames, playlists and movies is blamed for declining attention spans and the death of the reading habit. Last week’s examples:
•  Madhuri Sule was an IIT Kanpur
student, who was found hanging in
her room on 17th November this year,
because her academic performance
had declined. Authorities traced the
reason to excessive use of internet
and less sleep. The attempt to restrict
internet access to students has caused
unrest on the Kanpur campus. (Indian
Express, 22nd Nov, 2010)
•  In the USA, researchers state that
computers, cell phones and the
constant stream of stimuli that they
provide pose a profound challenge
to focus and learning. Michael Rich, a
professor at Harvard Medical School,
says, “Their brains are rewarded not
for staying on the task but for jumping
to the next thing…the worry is that we
are raising a generation of kids whose
brains are wired differently….downtime
is to the brain what sleep is to the
body….but the new generation kids
are in a constant state of stimulation.”
(International Herald Tribune, 22nd Nov,
2010)
•  There is a raging debate between
techno-holics and techno-critics on
whether or not attention spans are
being shrunk because of MTV, the
Internet and the Web.
Of the 6.5 billion people on this planet, over 4 billion are somewhat less influenced by
technology developments; they seem to be
concerned with three endemic difficulties:
1. Poverty Alleviation (through education,
health and employment)
2. Information Asymmetries (through
inadequate data access)
3. Bad Governance (through corruption
and leakages)
For long, Bharat and India have been
divided by education and prosperity. This
divide can be, and is about to be, bridged in
the next 15 years through technology, much
as the Industrial Revolution bridged the social
divides in Europe. What cell phones did through mobile communication in the fifteen years between 1995 and 2010, wireless broadband is
about to do in India in an even more dramatic
and inclusive way.
To this audience, I need hardly offer reasons for this assertion. A report by the global advisory and consulting firm Ovum says countries like India and China, with a huge population of mobile users, will play the most aggressive role in the growth of mobile broadband in the world. The firm notes that the advent of 3G in markets such as China and India, the sheer number of mobile users and poor fixed-line penetration in these markets mean that broadband access for a very large number of people will be based on mobile access, including handsets. However, I enumerate five reasons for the imminent and dramatic impact of broadband:
I. For sure, there is a correlation between the adoption rate of a technology and Gross Domestic Product (GDP). As per this report, a 10 percentage-point increase in broadband penetration accounted for the highest percentage increase in per capita GDP growth in developing economies, as you can see from the right-most bar.
II. Low broadband penetration in India, as per the red line (under 1% compared with tele-density of 52%), represents a huge scope and opportunity for both the Government and private players to serve people who would be ready to pay for the immense value that broadband can create in their lives.
III. Technologies like 3G/BWA become
more accessible because they do
not depend on laborious physical
infrastructure.
IV. Success of broadband depends on
three critical factors: connectivity,
content and customer premises
equipment (CPE). The cell phone is
the most economical CPE and would
ride on the wave of 3G to provide value-added information and services.
V. Broadband now finds itself in a sweet spot similar to telecom's. Similar focus by both the Government and private players will ensure that wireless broadband undergoes explosive growth.
Two questions arise with respect to the
oncoming broadband revolution:
1. How will broadband manifest itself?
2. What does it take to happen?
How will broadband manifest itself?
Modern Agriculture:
An energizing future can only be created if we empower the most important stakeholder in the entire agri-chain, the 'Indian Farmer'. There is a need to help farmers reach out; they must feel empowered through a broader understanding of their attitude, mindset, requirements and needs. Farmers must earn more and become self-reliant by adopting the latest agronomic practices and technologies.
Various models and initiatives have been rolled out by the Government to reach out to farmers, but they have met with limited success because of certain inefficiencies:
• They did not use the local language for delivery of personalized services.
• These models have not been holistic and have instead focused on specific parts of the value chain.
• Most of these initiatives could not scale up and impact millions of other farmers.
Broadband technologies can deliver
personalized and integrated services to
millions of farmers.
Companies like TCS have come up with proprietary models of using the mobile as a tool to provide critical information to farmers. Such tools integrate technologies such as wireless sensors, camera phones and script technology to deliver advisory services through a mobile phone. They ensure business benefits to the stakeholders by connecting them to farmers directly.
Recent developments would suggest that
connecting millions of farmers to give them
personalized and integrated services is no pipe dream. India has emerged as the
outsourcing and off-shoring destination
of choice for western companies. Our call
centers and software companies have been
giving outstanding services to customers
outside India. Surely we have the capability
and now the technology (through wireless
broadband) to repeat the same feat, for our
own farmers this time!
Better Governance:
Broadband will play an important role in automating governance processes and reducing administrative delays. E-Governance is increasingly being viewed as the route for governments to strengthen good governance, for it not only improves the efficiency, accountability and transparency of government processes, but can also be a tool to empower citizens by enabling them to participate in the decision-making processes of governments.
Broadband will provide last-mile connectivity for services delivered through the various e-Government initiatives, assisting governments in reaching the yet 'unreached' and thereby contributing to poverty reduction in rural and remote areas by increasing access to critical information and opportunities. At the same time, this process also enables the involvement and empowerment of marginalized groups through their participation in the government process.
The government also intends to further strengthen e-governance initiatives through broadband technology. For instance, the Government of India has approved the scheme of establishing Common Service Centres (CSCs) across the country. The CSC scheme envisages the establishment of 100,000 broadband Internet-enabled kiosks in rural areas, which would deliver government and private services at the doorstep of citizens.
We have enough instances of how technology has empowered people and given them greater access to information: availability of land records online, applying for and tracking passports online, getting railway tickets booked from the comfort of your house, filing IT returns online without the need to go through an agent; the list is endless. Greater automation of processes and access to broadband will further spread the benefits and empower people.
Reduced Corruption
Broadband connectivity will ensure that anybody, even in a remote part of the country, can raise a query to ensure that his work gets done. It will ensure that the fundamental rights of citizens are not compromised and will give all citizens a voice that cannot be ignored. It will actually redistribute power and ensure that it goes where it ideally belongs: to the citizens of India. Let me share with you an instance in my personal knowledge.
What does it take to happen?
Technologists can do a lot, and government policies can help a lot. But we also need some barefoot professionals who can connect socially with the target audience. This is where my initial techie insights merge with my subsequent marketing background. We need to enlist the help of anthropologists and storytellers, intuitive and experiential professionals, right-brained people rather than only left-brained people.
I was interested to read about a 40-year-old, British-born, Shanghai-based researcher called Jan Chipchase (FORTUNE Asia Pacific, 6th Dec 2010).
Chipchase works for Frog Design and travels the world, trying to understand why the planet's poor would want to use the technologist's products and devices. He is part anthropologist, part designer, part explorer and part entrepreneur, according to reports. Bill Maurer, Professor of Anthropology at UC Irvine, says that Jan Chipchase was the first to write about the use of airtime as a form of currency. His employers and clients pay to get his insights into how to reach those billions of difficult-to-reach customers for technology.
Can CSI and NASSCOM conspire to
strengthen this capability? It is worth the
effort.
About the Author
R GOPALAKRISHNAN (called Gopal) worked for his first 31 years in India’s most Indian multinational, Hindustan
Unilever. Since then, he has worked for India’s most multinational Indian company – Tata.
Currently, he is the executive director of Tata Sons. He is also the chairman of Tata AutoComp Systems, Rallis India
and Advinus Therapeutics, vice chairman of Tata Chemicals, and a director of Tata Power and Tata Technologies.
He also serves as an independent director on the boards of the Indian subsidiaries of Akzo Nobel and BP Castrol.
Gopal studied physics at Calcutta University and engineering at IIT. From 1967, he served Hindustan Unilever
for over three decades in various capacities. The appointments held by Gopal from 1990 onwards were: chairman
of Unilever Arabia (based in Jeddah), followed by managing director of Brooke Bond Lipton India (based in
Bangalore), followed by vice chairman of Hindustan Lever.
He joined Tata Sons in September 1998 as executive director.
Gopal is involved with education through his board memberships of a school and two management institutes.
He is a past president of All India Management Association. He has delivered guest lectures in India and abroad.
His articles have been published in management journals and financial newspapers.
In 2007, he authored his first book, The Case of the Bonsai Manager, published by Penguin India. In 2010, his second book, When the Penny Drops: Learning What Is Not Taught, was published by Penguin India.
A REPORT
CSI Annual Convention 2010 at Mumbai
iGen Technologies for the next decade...
Report Prepared by: Jayshree Dhere, Resident Editor, CSI Communications and the CSIC Team
The People Behind CSI2010:
Convention Ambassador: M D Agrawal, Vice President, CSI
Programme Committee Chairs: Prof. Atanu Rakshit, Prof.
Manohar Chandwani
Organizing Committee: Rajiv Gerela (Chairman), Vishnu
Kanhere, Ravi Eppaturi, Rohinton Dumasia, Sandip Chintawar,
Mable Monthero, Vinay Thaly of the CSI Mumbai Chapter.
The CSI Annual Convention for the year 2010 was held in Mumbai during 25-27 November 2010. This was the 45th national convention, which was very well attended and concluded with professional grace.
At the inaugural function on 25th November, 2010, the keynote address on "Technology and Society: the human touch" was delivered by Mr. R Gopalakrishnan, Executive Director of Tata Sons, who was the chief guest for the convention. The second keynote address was delivered by the guest of honour, Mr. Deepak Parekh, Chairman, HDFC Ltd.
Mr. R Gopalakrishnan, Prof. P Thrimurthy, President, CSI, and Mr. Deepak Parekh lighting the lamp at the inaugural function.
Various technical sessions were organized in a number of parallel tracks during the convention. Brief abstracts of these tracks are given below.
Friday 26 November 2010
Track: iArchitecture
Track Chair: Prof. Umesh Bellur, IIT, Bombay
While setting the tone for the track, Prof. Umesh observed that the last few years have witnessed a paradigm shift in the way software is deployed and consumed by enterprises. Economies of scale have driven "cloud computing" to the forefront, where computing and software are served as a utility, much like power. The emergence of this model has seen a spate of new technologies, virtualization being at the forefront. Besides virtualization, Cloud Computing also requires a layer of management software that can make the mechanics of sharing computer and software resources transparent to the user.
This track focused on the notion of Cloud Computing as a whole. It brought different perspectives - that of a public Infrastructure as a Service (IaaS) cloud provider, that of a Cloud software technology provider, that of a Software as a Service (SaaS) provider and, finally, that of traditional IT services providers and strategy consultants, who help customers make the tough decisions to move to this new paradigm.
Dr. Harrick Vin, Vice President and Chief Scientist, TRDDC, spoke about "Enterprise Cloud Computing: Opportunities and Challenges" and Mr. Sumit Mukhija, National Sales Manager, CISCO, talked on "Aligning IT to Business: The Competitive Advantage of Cloud Computing".
While Mr. Vikram Bhatia of Microsoft gave a talk on "Building Interoperable Cloud Applications Using PaaS", Mr. Phil Barlow from VMware spoke on "Journey to Private Cloud - Building seamless computing environments". He elaborated on the concept of enabling cloud-like benefits, such as encapsulation and pooling, for existing applications. He also spoke about extended virtualization for the private cloud.
Mr. Ravindra Ranade of Redhat gave an interesting perspective on "Excitements in and Barriers to Cloud Computing" and emphasized the ease with which it is possible to enhance capacity by incorporating several servers in one go using virtualization in the Linux OS. Mr. Simone Brunozzi, a technology enthusiast from Amazon.com, spoke about "Enterprise Cloud Computing" and explained the key concepts of the cloud environment, such as no capex, pay-as-you-use pricing, elastic capacity, faster time to market and less time spent on infrastructure management. He spoke about success stories of Amazon Web Services customers such as Hungama.com, Netflix and SAP.com, who are heavy users of the services.
Mr. Anish Malhotra of SalesForce.com spoke on "Cloud Computing: The 21st Century Business Platform". He explained building user-friendly business applications using Salesforce.com cloud services and dwelt on the multi-tenancy concept of the cloud.
Mr. Kavindra Sharma, VP Consulting at L&T Infotech, deliberated on "Navigating the XaaS World - Making Choices in Cloud Computing", while explaining the term 'cloud' in a lighter vein. He said that the concept of a 'Private Cloud' is actually an oxymoron and gave advice on which applications can easily be taken
to the cloud. Development, test and QA environments can easily be migrated to the cloud, as can pre-production applications such as those used for training. Enterprise applications that have elastic needs - both functional (e.g. seasonal websites) and capacity-related (e.g. batch processing) - can be moved to the cloud, as can outer-ring applications such as CRM, HRM, SRM, etc. In addition, even messaging and collaboration applications (satellite applications) can be candidates for moving to the cloud.
Mr. Sanjay Mehta, CEO and founder of MAIA Intelligence, talked on various aspects of "Strategy for implementing a BI solution in cloud". He first addressed the concerns usually raised by users before adopting the cloud, such as privacy and security of data, business continuity and disaster recovery. He explained that these are actually better addressed in many cloud environments and should not deter users from moving their BI solutions onto the cloud. His company has developed a solution called 1KEY, which requires data to be converted into metadata before moving it onto the cloud.
Thus, the whole track witnessed experts from leading companies in the cloud computing business speaking about software architectures, challenges, solutions and open problems across all aspects of Cloud Computing. They presented experiences and case studies highlighting what works well and what does not, in the hope that others can improve upon these best practices and avoid the pitfalls already discovered.
Plenary session being delivered by Dr. S Seshadri, CEO Boltell Infomedia Pvt. Ltd.,
Ex. CTO Yahoo India.
Track: iEnterprise
Track Chair: Dr. Satish Babu, InApp and Dr. Sasi Kumar, C-DAC
While introducing the topic, Mr. Satish Babu pointed out that Enterprise Computing spans a wide variety of definitions, from the Enterprise 2.0 definition, which was about Web 2.0 collaborative tools such as blogs, wikis, RSS feeds, and online chat, to high-performance computing, which includes grid, cluster and, ultimately, cloud computing. New trends in Enterprise computing also included the penetration of Free and Open Source Software (FOSS). Green Computing was also noted as a recent concern.
Dr. Keshav Nori, Honorary Advisor to TCS, spoke on the topic “A
Future of Alignment between IT and Business Enterprises”. He
explained the results of a recent research study, which examined the
value proposition of IT in the Enterprise, and the factors that need to be aligned for deriving significant value from IT.
Mr. Raj Kalady, MD, PMI India, while covering the topic "Enterprise Project Management", talked about the emerging developments in Enterprise Project Management. His major point was that timely completion of software projects as planned is necessary, but not sufficient; the project team is also required to understand the business use and value of the software delivered.
Mr. C. Achuthan, Former Presiding Officer, The Securities Appellate Tribunal, stressed the need for corporate governance as we move from proprietary organizations to partnerships, to private limited companies, and to public limited companies. Independent Directors and audit committees can oversee governance, provided they have the right attitude and mindset. In its absence, there could be other disasters similar to the Satyam debacle.
Mr. Prasad Modali from Intel spoke on “Enterprise Computing:
Multicore Architecture Landscape”. Prasad spoke about the
roadmap of the multicore computing chips from Intel. He pointed
out that the multicore architecture side-stepped the limitations
of Moore’s law. While the multicore architecture had immense
potential for parallelization, one of the challenges was to convert the
single-threaded application to a parallel model. In order to render new
applications in a parallelized mode, Intel has provided frameworks
that run on top of platforms such as Visual Studio.
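The conversion Prasad described, from a single-threaded loop to the same work mapped across cores, can be pictured with a minimal sketch. This is not Intel's framework (Intel's Visual Studio tooling is only mentioned above, not shown); the function names and workload below are hypothetical, purely for illustration.

```python
# Minimal sketch: turning a single-threaded computation into a parallel one.
# Hypothetical example; not Intel's framework or API.
from concurrent.futures import ProcessPoolExecutor

def score(record: int) -> int:
    """CPU-bound work on one item (a stand-in for real per-record work)."""
    return sum(i * i for i in range(record % 1000))

def run_serial(records):
    # The original single-threaded model: items processed one after another.
    return [score(r) for r in records]

def run_parallel(records, workers=4):
    # The parallel model: the same per-item function mapped across cores.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(score, records))

if __name__ == "__main__":
    data = list(range(10_000))
    assert run_serial(data) == run_parallel(data)  # same results, more cores used
```

The design point, as in the talk, is that the per-item function is untouched; only the orchestration around it changes.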
Ms. Bishakha Dutta, Board Member, Wikimedia Foundation, spoke about Wikipedia, the world's fifth largest web site (after Google,
Yahoo, Facebook and Youtube), and the largest .org website in the
world. Although Wikipedia was used by over 400 million users per
month, it was run by a staff of just 40. This was because there were
over 100,000 volunteer contributors, who created content in their
spare time. Wikipedia is now focusing on Indian languages, and
hopes to substantially improve its presence in India.
Mr. Dhruv Singal, Senior Director and Head of Solution Consulting, Oracle India, spoke on "Enterprise Integration for Business Agility". Mr. Dhruv contrasted two models of enterprise application interfacing: the first was point-to-point interfacing and the second, Service-Oriented Architecture, which is a loosely coupled, web-service-based strategy. While the former has low start-up costs, it has progressively increasing maintenance costs, whereas the SOA approach, while requiring more effort in the beginning, proves to be much cheaper down the line, especially during the maintenance phase.
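One way to see why the maintenance-cost curves diverge is to count interfaces. A minimal sketch, under the simplifying assumption that point-to-point integration needs one bespoke adapter per pair of systems while an SOA estate needs one adapter per system onto a shared service contract:

```python
# Illustrative interface-count comparison; a simplification, not Oracle's model.
def adapters_point_to_point(n: int) -> int:
    return n * (n - 1) // 2   # one bespoke interface per pair of systems

def adapters_soa(n: int) -> int:
    return n                  # one adapter per system, onto the shared contract

for n in (5, 10, 20):
    print(f"{n} systems: {adapters_point_to_point(n)} pairwise "
          f"vs {adapters_soa(n)} SOA adapters")
# 20 systems: 190 pairwise interfaces vs 20 service adapters
```

The quadratic growth of pairwise interfaces is what makes the point-to-point model progressively more expensive to maintain as the application estate grows.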
Dr. G. Nagarjuna, Professor at TIFR, Mumbai, spoke on “Free and
Open Source Software in Enterprises of the Future”. Dr. Nagarjuna
traced the evolution of FOSS and the principle of ‘Copyleft’. He
pointed out that FOSS is a robust and cost-effective alternative to
proprietary software. He emphasized that the peer-to-peer social
networks provided a superior way to construct software compared
to the traditional Enterprise mode, as has been repeatedly shown by
FOSS projects such as Gnome, Apache, and the Linux Kernel.
Mr. Navin Mehra from Fortinet spoke about "Emerging Security Threats". Navin outlined current security threats such as spamming, carding, phishing, viruses and trojans. Navin pointed out that the frequency of such threats is increasing, and the monetary losses are massive, although under-reported. He shared the findings of a study conducted by Fortinet, which enumerated different threats and their relative risk profiles.
Track: eGovernance
Track Chair: Mr. Mohan Datar and Dr. R K Bagga
Out of the three sessions of this track, the first was chaired by Mr. Piyush Gupta of NISG and had the following presentations:
1. State Planning & Policies Implementation Gujarat by Neeta
Shah, Director, e-Gov, Gujarat
2. Directorate of Settlement & Land Records by Mihir Vardhan,
IPS, IG Prisons & Director, Goa
3. District SBS Nagar by Shruti Singh, IAS, Dy Commissioner, SBS
Nagar, Nawanshahr, Punjab
4. Sales Tax Administration by Ravindra Patil, Joint Commissioner
Sales Tax, Maharashtra
5. Gujarat Pollution Control Board by Hardik Shah, Member Secretary, GPCB, Gujarat
The second session consisted of two panel discussions. The first panel
discussion was on the topic of “UID: Challenges and Opportunities”.
In his opening remarks, the session chair, Mr. Mohan Datar, said that
UID is definitely India’s single largest e-governance project and
could also be the largest in the world. Hence, there are tremendous
challenges in its implementation. But the bigger challenges will be
in making use of UID in other e-Governance systems as well as corporate and BFSI systems, so that its purpose will be fulfilled. Hence this project
presents tremendous challenges and opportunities to Governments,
Corporates and IT industry alike. He also said that there are many
myths, perceptions and misconceptions about UID and the panel
would clear at least some of them.
The panel was moderated by Mr. Sumanesh Joshi, Additional Director General, UIDAI, who also delivered the keynote address. He provided complete details of the UID project from its start to its current status. He informed that UIDAI has named this identification number 'AADHAAR', and the Government has undertaken a brand-building exercise for it. He also elaborated on its objectives, technologies and future roadmap. He informed that UIDAI aims at issuing Aadhaar ids to 600 million persons within the next 4 years. Two novel features of this UID number are, first, that the number is going to be purely random and will not have any intelligence in it, so nobody can infer any personal attributes by looking at the number; and, second, that it will also be issued at birth. However, at that stage it will not capture any biometric data. The biometric data will be captured later at the age of 5 and again at the age of 12 to safeguard against changes during the growing years.
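The "no intelligence" property described here is easy to picture in code. Below is a toy sketch: a purely random identifier derived from nothing personal, finished with a simple checksum digit so that typos can be caught. The checksum and the 12-digit length are deliberately simple assumptions for illustration, not UIDAI's actual scheme.

```python
# Toy sketch of an identifier with "no intelligence": random digits only,
# plus a simple mod-10 check digit. Not UIDAI's actual algorithm.
import secrets

def new_id(length: int = 12) -> str:
    body = [secrets.randbelow(10) for _ in range(length - 1)]
    check = (10 - sum(body) % 10) % 10   # illustrative checksum only
    return "".join(map(str, body + [check]))

def looks_valid(candidate: str) -> bool:
    # A mistyped digit usually breaks the checksum, so gross errors are caught.
    return sum(int(c) for c in candidate) % 10 == 0

uid = new_id()
assert looks_valid(uid)
print(uid)  # e.g. 483920175846 -- reveals nothing about its holder
```

Because every digit comes from a random source, no name, date of birth or region can be read back out of the number, which is exactly the property the panel highlighted.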
Mr. Ravi Saxena, Additional Chief Secretary of Gujarat, compared the Aadhaar number to the naming ceremony of a child and advised that one should not expect anything more from this number; all further uses will come from other e-Governance systems. Prof. R. Ramakumar of TISS, Mumbai, expressed concerns about infringement of privacy, the rolling out of the project before the passing of a law by parliament, and the lack of data on cost-benefit analysis. Mr. Rajiv Agarwal, Food Commissioner, UP, expressed the desire that UID must issue a card and not just a number. Prof. Guruprasad Murthy, Ex-Director of JBIMS and renowned management guru, opined that if it is going to be one additional card, then it would be counterproductive; if it is able to replace many other cards, then it is welcome. Mr. Venkatesh Hariharan, Corporate Affairs Director (Asia-Pacific) for Redhat, expressed that UIDAI should ensure that it follows open standards.
The second panel discussion was on “e-Governance Implementation
and Technology Issues”. The panel was moderated by Dr. Ashok
Agarwal, Past Chairman, SIGeGOV. He posed three questions before
the panel members, viz. a) Has e-governance helped in improving the quality of life? b) Has it reduced corruption in government, and can we live without touts? and c) What are the areas for improvement? Mr. Anurag Jain, Secretary IT, Government of Madhya Pradesh, answered in the affirmative to all three questions. He, however, said that touts can be eliminated where direct interaction with Government systems is possible; they cannot be fully removed where manual processes/interventions are necessary. Mr. Niraj Prakash, Director, Public Sector Marketing, Microsoft, said that lack of comprehensiveness, lack of process reforms and lack of replication were the three main areas for improvement. He also stressed that in future e-governance systems should ensure digital inclusion and governments should upgrade their quality of adaptations to remain on top of continuous change. Mr. Keshav Dhakad of BSA stressed the need for software asset management to ensure the highest level of security. Mr. Ravi Teja,
tomorrow’s problems if they are not designed comprehensively. Mr.
Sanjay Bhatia, Commissioner of Sales Tax, Maharashtra, bemoaned
the current constraints of procurement mechanisms and suggested
outsourcing as an option for Government agencies. He also suggested
adopting an incremental approach rather than a big-bang approach
for ensuring successful implementation.
The third session of the track witnessed the distribution of the CSI-Nihilent e-Governance awards to the 16 winners; the awards were distributed by Dr. Vijay Bhatkar, Chairman, e-Governance Committee, Government of Maharashtra. While congratulating the winners, he informed that India now ranks as the 4th largest economy in the world in terms of PPP. He felt that India has the potential to become number one if it focuses on good education from the primary level onwards and has good governance.
Lastly, the CSI Award of Excellence at the State Level was presented to Gujarat, along with the release of the book 'Enablers of Change: Selected eGovernance Initiatives in India', published by ICFAI University Press, the 5th in the series released by CSI SIGeGOV.
‘Enablers of Change: Selected eGovernance Initiatives in India’ being released by
Mr. Philip Jose D’Souza, Hon’ble Revenue Minister of Goa in presence of Piyush
Gupta, R K Bagga and Ayaluri Sridevi (Editors) and CSI Office bearers.
Track: iSociety
Track Chair: Prof. Anirudha Joshi, IIT, Bombay
Dr. Vijay Bhatkar gave the keynote address and set a philosophical tone for the iSociety track. He started from the big bang, spoke through the evolution of mankind and covered a wide range of topics, including the development of society and climate change. He contended that we cannot easily predict the outcomes of technological developments and that tectonic shifts may be happening today, in front of our eyes, that we cannot easily perceive.
Mr. Kaushal Sarda gave an interesting talk weaving together two
very popular themes - gaming and social networking. He showed
many examples ranging from disaster preparedness to motivating
healthy lifestyles, where social networks get strengthened because
of game play, and games were used for better social development.
He identified the elements that make games engaging and social
interactions fruitful.
Dr. Amit Nanavati presented three pilot case studies, in which he showed how the advantages of the world wide web can be brought to low-literate, less tech-savvy users with the help of the voice web that his lab has developed. With minimal interventions, users in rural Andhra Pradesh and Gujarat and in the slums of Delhi could discover and use applications such as on-line advertising, greeting cards, matrimony, agricultural expertise, and radio on demand merely with the help of a "dumb phone" and smart technology. If the current challenges are met, the mobile phone-based voice web can do for the bottom 83% of the world's population what the computer-based world wide web did for the top 17%.
Saturday 27 November 2010
Plenary session being delivered by Dr. Sorel Reisman, President-Elect, IEEE Computer Society.
Track: iResearch
Track Chair: Dr. Rattan K. Datta, Hon. Research Director CSI
The track on research was organized with many eminent speakers, including a keynote address by Prof. Sorel Reisman, President-Elect, IEEE Computer Society. Prof. Reisman presented a review process for evaluating the performance of a faculty member for promotion, etc., based on three factors: i) content preparation, ii) delivery system, and iii) research activities.
Mr. Ashish Sonal & Neha of Orkash Services gave a talk on "An integrated approach to mimic human into collation and create intelligence in an automated manner". The thrust of the presentation was the use of AI techniques and parallel processing.
Dr. Jaya Panvelkar of NVIDIA, Pune, presented her lecture on "HPC & cloud computing with GPU: A Paradigm Shift". She surveyed the progress in high-performance computing (HPC) and supercomputing, with their current performance vis-à-vis power utilization. She also highlighted the application of the GPU as a new paradigm for parallel processing.
Prof. K.S. Rajan of IIIT Hyderabad presented his talk on "Pushing the frontier areas of computer science". The focus of the talk was the use of spatial information for agricultural production using high-performance computing.
Dr. Raghuram Krishnapuram of IBM Research India gave his talk on "Text Analytics Tools for Customer Insight". For e-governance and other support to the public, there is a need to carry out analysis of documents. He presented various tools developed by IBM for this purpose.
Dr. Hemant Darbari of CDAC presented his talk on "Direction and scope of research in multilingual implementation in software packages and solutions". It was noted that the excellent work done by CDAC on the multilingual approach needs projection for universal application across the country.
The interaction and detailed discussion led to the following recommendations by the track group:
A) There is a need to develop a group in parallel computing to promote research on this forward-looking discipline. It was decided to propose to the CSI Execom to approve a SIG on Parallel Computing with Dr. Jaya Panvelkar as founder chairperson. She was requested to form a proposal in this regard and send it to CSI.
B) C-DAC may use CSI Communications and other platforms of CSI to popularise its work on multilingual applications.
Track: iSolution
Track Chair: Mr. Sunil Mehta and Prof. Pradeep Pendse
Mr. Pradeep Waychal, Sr. VP, Patni, delivered the keynote address for the track. He shared some of his research related to IT-enabled business transformation. He proposed a maturity framework for IT-enabled business transformation, indicating the key process areas at each stage. The highest stage is where there is continuous innovation led by IT, which he refers to as the optimized level on the maturity framework.
Mr. Atul Bhandari, VP, SAP, highlighted the fact that many enterprise-wide IT projects lack a consciously defined business case. He emphasized the need for a proper business case, aligned to the company's business objectives. He also emphasized that enterprise solutions should be viewed as business projects and not IT projects. He also outlined some of the strategies, such as On Device and On Demand, and the importance of aligning these to overarching business goals.
Mr. Shailendra Lande from TIBCO explained the challenge of working with heterogeneous technology environments and the crucial role TIBCO plays in ensuring seamless integration across multiple platforms. With the help of schematic diagrams, he showed how the architecture for enterprise-level applications using TIBCO can be transformed. This transformation makes applications scalable as well as seamless across platforms, helping construct more durable, scalable and reliable infrastructure.
Mr. Srikant Palkar, Chief Architect, UST Global, a leading US-based software company, spoke on the architecture for software process improvement to a nearly packed room. His talk was interactive, informative and thought-provoking. He used the 4+1 views framework, usually used for software architecture, to discuss the various views of software improvement, leading to an appropriate strategy for software process improvement. He used three real-life case examples to illustrate his views, one of them a large retail company with a USD 95 million budget for process improvement. He discussed the pros and cons of a big-bang versus an incremental approach to process changes. He shared detailed experiences for each of the views on process improvement. He finally proposed a way forward.
The final talk of this track was delivered by Dr. Dharam Singh, faculty in Computer Science and chairman of the CSI SIG on Wireless Networking. He explained his research, which could optimize bandwidth and deliver several other benefits on wireless networks, enabling delivery of higher-quality video at lower bandwidth. His talk was seen as an infrastructure solution for many of the solution ideas discussed in the previous sessions.
Track: iEntrepreneur
Track Chair: Mr. Manak Singh, TIE, Mumbai
Mr. Manak Singh of TiE, Mumbai, put together this track, a day full of inspiration and insights to encourage as many of the delegates as possible to either:
• convert their business idea from white paper into a business plan,
• scale up their existing enterprise,
• look around and collaborate with friends and peers to form teams, or
• support growing enterprises.
The track provided ‘Thought for Enterprising India’.
The future of Enterprising India is full of such powerful ideas, which can potentially Change the Nation and Lead the World. India is genetically an enterprising nation with a deep-rooted and home-grown "Jugaad" culture.
The challenge and opportunity is to shift the Indian entrepreneurial mindset from "Survival" to "Choice": the choice to innovate, to collaborate, to scale, to create social impact, to consciously create jobs, to create and distribute wealth for one and many around.
With an existing force of millions of SMEs (and fast growing), emerging generations of new-age entrepreneurs and a large unfolding consumption story, it is time to scale up Indian "Jugaad" to organised growth.
Key thoughts are:
• Purpose, Ideas, Execution
• Leveraging the growing domestic support system: Mentoring, Investors, Networking
• "Value for Money" enterprises: India's leadership opportunity amidst Global Turmoil
Another dimension of entrepreneurship is the infectious art of storytelling... so the i-Entrepreneur track primarily featured exciting stories of some very inspiring and enterprising Indians like Mr. VSS Mani of Justdial, Mr. Deepak Ghaisas of Genvocal Strategic Services, Mr. L.C. Singh of Nihilent, Mr. Phanindra Sharma of RedBus.in, Mr. Ajit Nagral of SciFormix, Mr. Mohit Dube of Carwale, Mr. Prashant Bhaskar of Plug HR, and Mr. Rajesh Solanki of EverFocus & Energos.
In India we believe very strongly in the institution of marriage. This
mindset tends to be even stronger in IT, given the sheer fantasy of
possibilities that technology has to offer. But to sustain this marriage it
is imperative that our IT services / products are not just fantasy but are
driven by demand, solving real pain points.
It’s this mindset that would help us to build sustainable enterprises, to
build solutions and not just a service or product.
There were various panel discussions among the participating entrepreneurs. While the first two panels gave the above key message, the third panel focused on 'Teamwork'.
The key message of this panel was – “Building and sustaining growth
of your enterprise can be best achieved through collective experience,
passion and effort.”
This panel discussed the role of effective collaboration with the
ecosystem, building teams, and forging partnerships that help propel enterprises into a high-growth trajectory.
The fourth panel concentrated on diving into the minds of Enterprising Indians: "Entrepreneurship is the art of infectious story-telling!" This panel featured some diverse and maverick examples of enterprising Indians. The stories of Venky (Goli Vada Pav) and Faisal (mouthshut.com) stirred and inspired the audience. The foot-tapping rhythm of their enterprising journeys gave insights into the true essence of jugaad: the Indian art of the start, evolved into sustained value delivery.
The fifth panel was on “Meet the enablers - Mentors & investors” and
panellists were Mr. Chetan Shah of Indian Angel Network, Mr. Anand
Lunia of SeedFund and Mr. Manik Arora of IDG India Ventures.
As entrepreneurs, you can't just be "operational": you have to constantly plan, validate your plan and replan; raise funds; build an organisation (teams, infrastructure); sell your product/service to customers, partners, etc.; strategise for growth; and make corrections to the current state. One has to wear many hats... in short, you have to be a Super Human!
It is imperative, hence, for entrepreneurs to surround themselves with the right set of human intelligence that helps them validate plans, challenge assumptions, execute efficiently, and scale fast and smart. This panel discussed the role of mentors and investors in playing the acceleration role for entrepreneurs.
The panel also oriented the audience on various forms of equity funding and the dos & don'ts of approaching investors, as well as doing some crystal-ball gazing to share emerging trends and opportunity areas.
Track: iConnect
Track Chair: Prof. Abhay Karandikar, IIT, Bombay
iConnect track was chaired by Prof Abhay Karandikar of IIT Bombay.
It had 5 interesting talks by industry veterans focussing on mobile
broadband and convergence applications.
The first session was a talk by Dr. Vinod Vasudevan, Group CEO of Flytxt India. While speaking on the topic of "Mobile Broadband - Convergence and C2C", he presented a different paradigm of mobile broadband applications for telecom operators. According to Dr. Vinod, the major differentiator for an operator will be to deploy innovative applications on mobiles and not treat a mobile device as just yet another broadband device. A typical mobile handset has many contexts, including location, which need to be exploited.
The second talk was delivered by Mr. Akhil Bahl of Cisco. Akhil emphasized the need for "Collaboration as a Service". He presented an architecture for collaboration services and gave an insight into Cisco's vision.
Mr. C.S. Rao, President, Reliance Communications, spoke on the "Next Wave of Mobile Broadband". He stressed the need for many innovations in next-generation architectures and applications. Operators have invested significantly in spectrum, and they need innovative models for their return on investment.
Mr. Sreedhar of Avaya gave an interesting presentation on enterprise
video communications applications. He also addressed the capabilities
of various solutions.
The track concluded with a talk by Mr. Vishal Gupta, CEO, Seclore, who spoke on innovative information security architecture. The topic of the talk was "Seclore Collaboration and Security: A conflict of goals .. Is there a balance?" In this paradigm of innovative information security architecture, meta-data associated with information ensures the security of the data.
All the speakers gave their perspectives on the innovations required in the next-generation converged scenario.
Track: Education & Research
Track Chair: Dr. Manohar Chandwani IET DAW, Indore
The track on Education & Research was targeted towards apprising the participants and delegates of the future requirements of the ICT education system in India. The emphasis was on educational technologies for the next decade, research at the UG level, Industry-Institute interfacing, India's needs in 2040 and improving quality in technical education using ICT.
Prof. Deepak Phatak of IIT, Mumbai, spoke about "Scaling New Heights in Quality Education using ICT". The Indian education system is now growing at a faster rate than before. The emergence of newer and private-sector institutes has led to a broader spectrum of education but, at the same time, quality has not been considered carefully by the education system's developers. Despite the fact that ICT has progressed with a great impact on industry, it has not been able to influence education for better quality in a wider perspective. This talk addressed the issues related to scaling new heights in the field of education for bettering quality, especially in technical institutions that are struggling to overcome the faculty shortage. The talk also addressed how ICT can be used to impart training to the teachers of engineering colleges using virtual and distance education principles. There can be interactive teaching programs to train teachers at various centres that are remote. A specific example of the 2-week ISTE-sponsored winter/summer schools run at engineering colleges from IIT, Mumbai, was illustrated.
Prof. Rajeev Sangal, Director, IIIT, Hyderabad, spoke on the topic of "Research Led Education: A New Model for Research University in India". In this talk, he introduced a new model of education, based on linking education with research. There are two essential elements of this model: (1) linking UG programmes with research, and (2) undertaking research that links up with industry and society. The first element implies introducing flexibility in the curriculum, more projects, and, most importantly, allowing depth to be pursued without waiting for the breadth to be covered. The second implies working on research problems that can be linked with industrial applications. Some examples were given from IIIT, Hyderabad, where this model has been successfully developed. It has led to the setting up of strong research groups in several areas within a short period.
Prof. Dheeraj Sanghi, IIT, Kanpur, gave a speech on "Improving ICT Education: Role of Various Stakeholders". It is well known that the quality of ICT education is very poor in India, except perhaps in 50-odd institutions. This leads not just to poor employability, but also to insufficient people getting trained to do research and opt for academic careers, which implies that even in the near future the quality of education will continue to remain poor. To change the situation, all the stakeholders will have to take steps to improve quality. In this talk, two major stakeholders were considered, namely Industries and Universities, and what they can do to improve quality was discussed.
Mr. Anant Krishnan, Chief Technology Officer, TCS, Chennai, spoke about "Accelerating Education & Research with Better Quality & Greater Collaboration". He said that India's ICT educational infrastructure should take much credit for the country's growth as a knowledge economy. However, we must swiftly move to the next level to keep our competitive advantage. One area of concern is that we currently produce a very low number of Ph.D.s; to do path-breaking research as well as to offer high-value R&D services, we need to give this area some focus. As we move ahead in the 21st century, we need to leverage our ICT education to close gaps in several sectors, ranging from education, public health and agriculture to bio-technology, apart from ICT itself. Both academic institutions and industry are clear that an Institute-Industry interface is vital; it is time to scale this up by bringing in more institutes and industrial outfits in a clear, process-oriented way. R&D has to be a part of a company's growth strategy, and Indian ICT companies today have the capacity to invest in this and create a virtuous cycle of growth. Collaborative research, where academia understands real-world challenges and industry learns to look at a problem from a scientific point of view and work on a solution with scientific rigor, can bring benefits not just to the teams involved, but to customers, the universities and the larger society as well. Orchestrating collaborative research requires transparent processes as well as management experience and expertise.
Mr. Subrahmanya S V, Vice President, Education & Research, Infosys, Bangalore, deliberated on "Collaborative Research between IT Industry and Institutes of Higher Learning" in his speech. He recognized that the Indian IT industry offers a number of challenges that require continuous innovation. Indian universities and institutes of higher learning, such as the Indian Institutes of Science and Technology, are increasingly focusing on solutions to problems faced by the real world, including the IT industry. He informed that at Infosys, people are guided by the ideal of building a healthy eco-system of Industry and Institute cooperation. The participation of academia brings formalism to IT solutions. The talk presented Infosys' experience of collaboration between the two. The talk also indicated possibilities of pursuing Ph.D. work at Infosys on real problems to provide IT solutions.
Mr. Shankar Iyer of Microsoft India spoke about "21st Century Learning Skills & Technology". His talk addressed new thinking and developments in the area of learning skills that have emerged lately by means of technology. New learning skills depend upon students' diversity and multi-sensory aspects that can be implemented in the form of educational technology. He spoke about a new system called teacher.tv, which is used for easing the learning process for learners ranging from school students to higher-level pupils. 21st-century learning is student-centered instead of teacher-centered and uses multimedia technology that fosters critical thinking, collaborative work, informed decision-making and a "pull instead of push" paradigm of learning and information exchange. By way of technology, the issues of global awareness; financial, economic, business and entrepreneurship literacy; civic literacy; and health & wellness awareness are tackled from a learner's perspective. A complete Microsoft roadmap covering 21st-century learning skills and technologies was discussed.
CSI2010 - Awards for Excellence in IT
- Anil Srivastava, Convenor
BFSI Sector
Winner - HDFC Bank Ltd
Runners Up - Asset Reconstruction Company (India ) Ltd.
Product Manufacturing Sector
Winner - Mahindra Group
Runners Up - Bajaj Electricals Limited
Service Industries Sector
Winner - The Tata Power Company Limited
Runners Up - Reliance Infrastructure
Non Profit Organization Sector
Certificate of Appreciation
- Small & Medium Business Development Chamber of India
- Institute of Cybernetics Systems and Information Technology
Quality Assurance Sector
– Merged with Product Manufacturing
Life-Time & Fellowship Awards
Dr. F C Kohli receiving the Lifetime Achievement Award from Prof. P Thrimurthy, President, CSI
(L to R) Mr. Anil Srivastava, Mr. V L Mehta and Prof. Kesav Nori receiving Fellowship Awards from Prof. P Thrimurthy, President CSI.
CSI Honors
@ 45th Annual National Convention 2010, Mumbai
Dr. Faqir Chand Kohli
A visionary and pioneer by nature, Dr. Kohli is acknowledged as the 'Father of the Indian Software Industry'.
TCS, under his leadership and propelled by his vision, pioneered India's IT Revolution and helped the country to build the IT Industry. Be it the propagation of computerisation in India at a time when no one realized its potential, or bringing the benefits of IT to India's rural masses through a computer-based Adult Literacy programme, Dr. Kohli saw IT as an instrument of national development. He has been working on advancing engineering education at the undergraduate level to world standards to create a large pool of students for undertaking graduate studies and research.
Dr. F. C. Kohli was the President of the Computer Society of India in 1974-76. He has given memorable addresses at CSI Conventions. He led a group to draft the Constitution of CSI. He has truly been a mentor to succeeding Presidents of CSI and has been guiding them on the tasks ahead.
He has received many awards, including the prestigious Dadabhai Naoroji Memorial Award in 2001, and was conferred the Padma Bhushan in the year 2002.
In grateful recognition of his immense contribution to the Nation, to the IT Industry and to the Computer Society of India, the CSI has decided to confer on him the Lifetime Achievement Award. The Society takes pride and pleasure in presenting him with the citation on the occasion of the Annual Convention of CSI held at Mumbai in November 2010.
Mr. Anil Srivastava
A graduate of IIT, Kanpur, and a postgraduate of IIM, Bangalore, Anil has worked for 25 years with the Government as a member of the Indian Administrative Service (IAS). He joined the Maxwell School at Syracuse University in fall 2002 for a Masters in Public Administration and Information Technology Policy Management. Pursuing a Ph.D. in Social Science, his area of research is e-governance and the application of information technology in government leading to an increase in transparency, and the issues related to that.
As an IAS officer of the MP cadre, he served in various capacities, including the Departments of Revenue, Food Processing Industries and Horticulture as Principal Secretary; the Finance Department and the Panchayat & Rural Development Department as Secretary; Commissioner, Jabalpur division; Commissioner, Treasuries & Accounts; Commissioner, Industries and Secretary, Department of Commerce & Industries; Managing Director, M.P. State Electronics Development Corporation and Optel Telecommunications Ltd, Bhopal; Managing Director, M.P. State Small Scale Industries Development Corporation Ltd., Bhopal; and Collector and District Magistrate, Hoshangabad and Shajapur. Mr. Srivastava's present assignment is that of Principal Secretary, Department of Revenue, Govt. of MP.
Anil has been a member of CSI for the last 30 years and has actively participated in the activities of the Society. He has represented CSI in various committees and forums to pursue the interests of the Society. He has served the society in various elected positions at the Chapter level.
In grateful recognition of his services to the Computer Society of India and his outstanding accomplishments as an IT professional, the CSI has decided to name him FELLOW of the Society. The Society takes pride and pleasure in presenting him with the Citation on the occasion of its 45th Annual Convention held at Mumbai.
Mr. V. L. Mehta
Mr. V. L. Mehta is a visionary, and he has successfully converted his vision into reality through innovative ideas and methodologies. During a distinguished career spanning over 39 years, he has significantly contributed towards high-end IT education.
His vision of the post-Y2K period compelled him to establish an information security company in 1998. His was a pioneering effort in creating awareness and evangelizing the need for information security in our country during 1999-2001.
Mr. V L Mehta has been a member of the CSI since 1976 and has actively participated in the endeavours of the Society. He has served as an Office Bearer for the last twelve years continuously: at the Mumbai Chapter as Chairman, as Regional VP, and on the Nominations Committee. He revived the activities of the Mumbai Chapter while he was the Vice-Chairman and led the Chapter to the level of the Best Chapter Award for two consecutive years. He is fully committed to CSI and has always pursued the interests of the Society.
In grateful recognition of his association and services to the Computer Society of India, for driving the performance of the Mumbai Chapter to its peak during his leadership, and for his outstanding accomplishments as an IT professional, the CSI has decided to name him FELLOW of the Society. The Society takes pride and pleasure in presenting him with the Citation on the occasion of its 45th Annual Convention held at Mumbai.
Prof. Kesav V. Nori
Prof. Kesav V. Nori has a BTech (EE) from IIT Bombay (1967) and an MTech (EE) from IIT Kanpur (1970). He was a two-time recipient of the UNDP Fellowship, in Europe and the USA respectively, during 1974-1975.
Kesav Nori's research contributions and interests include System Design of Computers; Definition, Design and Implementation of Programming Languages; Automation in Assembly, Production and Manufacture of Software; Software Process and Product Engineering; Software Quality; A Systems View of Business and its Information Systems; Systems Design Methodology; and Systemic Understanding of Diffusion of Innovation in Organizations.
He is a Life Member of CSI and a member of IEEE (Computer Society) and ACM. He has readily lectured at various Annual Conventions of CSI since 1971 and was the Program Chairman of its Annual Convention in 1995 in Hyderabad. He was Program Chairman for the Indian FST&TCS Conferences during 1984-1989, and of the IEEE Conference on Smart Appliances held in Hyderabad in 2004.
In grateful recognition of his services to the Computer Society of India and his outstanding accomplishments as an IT professional, the CSI has decided to name him FELLOW of the Society. The Society takes pride and pleasure in presenting him with the Citation on the occasion of its 45th Annual Convention held at Mumbai.
Chapter News
Please check detailed news at:
http://www.csi-india.org/web/csi/chapternews-december2010
SPEAKER(S)
TOPIC AND GIST
AURANGABAD
Dr. T. V. Gopal, Manjula Dharmalingam and Ajay Deshkar
9 October 2010: "Software 2.0 - Emerging Competencies for Enterprise Solutions and Knowledge Based Engineering (KBE)"
Software professionals and IT students got a taste of the advancements in Cybernetics through this seminar. After a long time, professionals at Aurangabad could get exposure to the most modern developments in the IT industry. The key topic discussed during this programme was as follows:
Inauguration of the Seminar on “Software 2.0”
"Software 1.0" has been happening for nearly five decades, beginning with the questions "what should it do?" and "what can it do?" The success rate of software projects has been alarmingly low considering the number of professionals working in this area over the past 50 years. Several challenges in the areas of Human Resources, Technology, Management and Innovation remain. "Software 2.0" is to integrate Cybernetics and Systems Theory with the best practices in "Software 1.0".
BANGALORE
Mr. Kupendra Shivapuram, Sr. Manager, VMware
30 October 2010: "Workshop on Cloud Computing"
Editor's Choice: "From an IT perspective, the way we have reduced our carbon footprint is through virtualization. We instantly saw its power and began to virtualize everything we could. ... Now instead of having 28 servers at 10 percent utilization, we have three machines at 80 percent utilization." - Brad Sukut, Midwest Family Mutual
The VMware team gave exhaustive coverage of Cloud Computing, including an introduction to Cloud Computing, its benefits, VMware Virtualization, VMware Enterprise Data Centre products, VMware Enterprise Solutions, Cloud Solutions and case studies. In the afternoon, a hands-on session on creating a virtual data center, catalogs, allocating resources to a virtual data center/organization, and granting rights to the application was conducted.
BHOPAL
Dr. Sanjay Sharma, Professor at MANIT; Prof. Aasif Hasan; Prof. Patheja, BIST; Prof. Prakash Saxena; Dr. R. P. Singh, Director, MANIT; Dr. Sanjay Jain, Dean (P&D), BGI
20 October 2010: "Cloud Computing and Network Technologies"
Prof. Sharma explained the objectives of 'Cloud computing' to the participants and described how proper strategy and planning are essential for achieving the scale of excellence in the areas of information technology. Prof. Aasif Hasan emphasized the necessity of soft computing and described its utility. Dr. R. P. Singh, Director, MANIT, highlighted the great potential of 'Cloud Computing & Network Technologies', which can be exploited in the years to come. Dr. Sanjay Jain, Dean (P&D), BGI, mentioned that 'Cloud computing' resources are shared. This helps businesses to save time and money by placing their information all in one location, which is easy for their workers to look up and access.
Workshop on Cloud Computing and Network Technologies
CHANDIGARH
Dr. Raghav, Institute of Microbiology, Chandigarh
16th Oct. 2010 : “Use of Information Technology in Medical Research”
The talk covered concepts of Biometrics, the molecular nature of the human body, Genomes and the use of IT in Medical Research, its benefits and other critical issues. The talk was highly informative and the session was interactive.
Editor's Choice: [Diagram: ICT in Medicine, linking Patient Records, Communications, Expert Systems, Internet, Equipment and Research]
Mr. V P Girdhar greeted speaker Dr. Raghav
CHENNAI
Prof. D K Subrahmanian, President FAER, Dr. Anirudha
Joshi, IIT, Mumbai, Mr. Kevin Devasia, Apollo Hospitals,
Mr. A P Guruswamy, DGM, Indian Bank, Mr. Sanjay
Sharma, Polaris, Mr. Deepak from TCS, Mr. Ganesh
Kumar, Chief General Manager, IDRBT, Mr. Kiruba
Shankar, CEO, Business Blogging Pvt Ltd., Mr. Suman
Kuman, ESPN, Mr. Srinivasu Chakravarthula, Yahoo!,
India.
11-13 Nov., 2010 : “Good Design for Better Living”
The deliberations during the two-day seminar focused on integrating the
quantitative, qualitative, formal, semi-formal and informal practices into the
software development life-cycle to engineer user-centric products and services.
In the rapidly dynamic and increasingly self-structured environments, it is clear
that traditional user experience research methods will be of limited use, and even
then only in the most structured areas of the user experience continuum. As
more users move into the more self-structured environments, a new paradigm
for user experience research is required to fulfill the promise of richer, usable user
experiences.
“You can never accurately measure the usability of a software product. When
you drag people into a usability lab to watch their behavior, the very act of
watching their behavior makes them behave differently.”
The new design paradigms must include research methods that allow organizations to capture the experience whenever the user is interacting with a given artifact and the interaction is unmoderated and asynchronous with the researcher's schedule - even in the middle of the night, from any time zone, and independent of the delivery channel - cell phone, PDA, laptop, kiosk and any operating system.
Prof. D K Subrahmanian, President, FAER and Event General Chair - UMO2010, delivering the Keynote address on "World Usability Day 2010"
COIMBATORE
Mr. H R Mohan, Chairman, Div IV and Associate Vice President, The Hindu
29 - 30 October 2010: “Mobile and Ad Hoc Networks”
In his inaugural address, Mr. Mohan traced the developments in the field of
communications and how the country has benefited from the advances in this
cutting-edge and pervasive technology. He also pointed out that any leading
technology like mobile communication is a double-edged sword and has had its
fair share of problems and misuse in the form of cyber crimes. He added that
recent advancements such as Bluetooth have introduced a new type of wireless
system known as the mobile ad-hoc network. These networks operate in the
absence of fixed infrastructure, offer quick and easy deployment in situations
where conventional networks are not possible, and are typically used in
military scenarios, sensor networks, rescue operations, student networking on
campus, free Internet connection sharing and conferences. A mobile ad-hoc
network is an autonomous system of mobile nodes connected by wireless links;
each node operates both as an end system and as a router for all other nodes
in the network. With the advent of 3G, these ad-hoc networks are bound to
become much more popular.
Release of Conference Proceedings at the two-day National Conference NCMAN
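To make the ad-hoc routing idea concrete, here is a minimal sketch in Java of
route discovery by flooding; the four-node topology and all names are invented
for illustration and are not from the talk. Every node both consumes packets
and forwards a request it has not seen before, so a route emerges without any
fixed infrastructure.

import java.util.*;

// Illustrative-only sketch: route discovery by flooding in a tiny ad-hoc
// network. Each node forwards an unseen request once, acting as a router
// as well as an end system.
public class ManetFlooding {
    static Map<String, List<String>> links = new HashMap<>();

    public static void main(String[] args) {
        // Hypothetical topology with no fixed infrastructure: A-B-C-D.
        links.put("A", List.of("B"));
        links.put("B", List.of("A", "C"));
        links.put("C", List.of("B", "D"));
        links.put("D", List.of("C"));
        System.out.println("Route A->D: " + discover("A", "D"));
    }

    // Breadth-first flood: each node is visited once and the path the
    // request travelled is kept, mimicking reverse-path route setup.
    static List<String> discover(String src, String dst) {
        Queue<List<String>> frontier = new ArrayDeque<>();
        Set<String> seen = new HashSet<>(List.of(src));
        frontier.add(List.of(src));
        while (!frontier.isEmpty()) {
            List<String> path = frontier.poll();
            String last = path.get(path.size() - 1);
            if (last.equals(dst)) return path;          // route found
            for (String next : links.getOrDefault(last, List.of()))
                if (seen.add(next)) {                   // forward only once
                    List<String> extended = new ArrayList<>(path);
                    extended.add(next);
                    frontier.add(extended);
                }
        }
        return null;                                    // no route exists
    }
}

Running it prints Route A->D: [A, B, C, D]; real protocols such as AODV layer
sequence numbers and route maintenance on top of this basic flood.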
COCHIN
Mr. James Joseph, Director Executive Engagement, Microsoft India
14 October 2010 : “Unified Communications – Transcending the national boundaries”
The session gave an overview of unified communications as the convergence
of email, unified messaging, web conferencing, instant messaging (IM), audio/
video collaboration, speech recognition and integrated presence. The session
also addressed the challenges, industry trends and opportunities of unified
communications.
Editor’s Choice: Unified communications (UC) is the integration of real-time
communication services such as instant messaging (chat), presence information,
telephony (including IP telephony), video conferencing, call control and speech
recognition with non-real-time communication services such as unified messaging
(integrated voicemail, e-mail, SMS and fax). UC is not a single product, but a set of
products that provides a consistent unified user interface and user experience across
multiple devices and media types.
- Excerpted from http://en.wikipedia.org/wiki/Unified_communications
Prof. M V Rajesh, Head of the Dept., Dept. of Electronics Engineering,
College of Engineering, Cherthala
11 November 2010: “Introduction to Fuzzy Logic and its Applications”
The speaker explained the concept of fuzzy logic in a simple way using examples.
Prof. M V Rajesh delivering a talk
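For readers new to fuzzy logic, the sketch below shows its core mechanics in
Java; the temperature thresholds and the fan-control rule are invented for
illustration and are not from the talk. A membership function maps a crisp
input to a degree of truth between 0 and 1, and a rule scales its output by
that degree instead of forcing a hard yes/no.

// Illustrative-only sketch: one fuzzy set ("hot") and one rule
// ("IF temperature is hot THEN fan speed is high").
public class FuzzyDemo {
    // Degree of membership in a triangular fuzzy set with corners a, b, c.
    static double triangle(double x, double a, double b, double c) {
        if (x <= a || x >= c) return 0.0;
        return x < b ? (x - a) / (b - a) : (c - x) / (c - b);
    }

    public static void main(String[] args) {
        double temp = 32.0;                        // crisp input, deg C
        double hot = triangle(temp, 25, 35, 45);   // fuzzification
        double fanSpeed = hot * 100.0;             // trivial defuzzification
        System.out.printf("hot=%.2f -> fan speed=%.0f%%%n", hot, fanSpeed);
    }
}

For temp = 32 the input is "hot" to degree 0.70, so the fan runs at 70% - a
graded answer where classical two-valued logic would only permit on or off.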
TIRUCHIRAPALLI
Ms B Smitha Evelin Zoraida, Professor & Head, Dept. of Computer Applications,
MAM College of Engineering, Sirugunoor
19 October 2010: “DNA Computing”
DNA computing is a form of computing which uses DNA strands, biochemistry and
molecular biology instead of the traditional silicon-based computer technologies.
It is well established that DNA encodes the genetic information of cellular
organisms, and various biological operations can be performed on DNA strands,
viz. denaturing, annealing, ligation, polymerase chain reaction (PCR), gel
electrophoresis and cloning.
It is estimated that a mix of 10¹⁸ DNA strands could operate 10⁴ times faster
than today's most advanced supercomputers, and that the inherent parallelism
of DNA computers makes 10²⁰ operations per second realistic, whereas modern
supercomputers perform about 10¹² operations per second. The energy
consumption of a DNA computer is also lower than that of a silicon-based
digital computer. Solving the multiple travelling salesperson problem using
DNA strands was presented to bring out the employability of DNA strands in
problem solving.
Ms. B. Smitha Evelin Zoraida, Professor at MAM College of Engineering,
delivering a lecture on “DNA Computing”
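The generate-and-filter spirit of Adleman-style DNA computing can be mimicked
in conventional code. The Java sketch below uses a toy four-vertex graph and
entirely illustrative names, not the speaker's material: it "ligates" every
path starting from vertex 0 and then "filters" for the paths visiting all
vertices, the step that wet-lab operations such as PCR and gel electrophoresis
perform chemically.

import java.util.*;

// Illustrative-only sketch of the generate-and-filter idea behind DNA
// computing: "strands" are candidate paths; the final filter keeps only
// strands that encode a path through every vertex.
public class DnaStyleSearch {
    static int[][] edges = {{0,1},{1,2},{2,3},{0,2},{1,3}}; // toy graph
    static int n = 4;                                       // vertex count

    public static void main(String[] args) {
        List<List<Integer>> pool = new ArrayList<>();
        grow(List.of(0), pool);                  // "ligation": form all paths
        for (List<Integer> strand : pool)        // "filter": keep full tours
            if (strand.size() == n)
                System.out.println("Solution: " + strand);
    }

    // Extend the current path along every outgoing edge to an unvisited
    // vertex, collecting all intermediate paths in the pool.
    static void grow(List<Integer> path, List<List<Integer>> pool) {
        pool.add(path);
        for (int[] e : edges)
            if (e[0] == path.get(path.size() - 1) && !path.contains(e[1])) {
                List<Integer> extended = new ArrayList<>(path);
                extended.add(e[1]);
                grow(extended, pool);
            }
    }
}

The crucial difference is that this silicon version enumerates the pool one
path at a time, whereas in a test tube all strands form simultaneously - which
is where the parallelism figures quoted above come from.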
TRIVANDRUM
Prof. K C Raveendranathan, Government Engg. College
27 October 2010: “Wireless Home Automation Networks”
Mr. P. Abraham Paul, Ex-Vice President (TS) SPCNL, SIEMENS ICN/GM & SMT TBG
BPL Mobile, TES (I) DOT
3 November 2010: “Money through Mobile (MTM) for Financial Inclusion of Lower Strata”
Dr. Venugopal Reddy MD, MRCP, Physician & Life Skills Expert, New York, USA
10 November 2010: “The Art & Craft of Public Speaking”
Mr. N T Nair, Chief Editor, Executive Knowledge Lines monthly
17 November 2010: “Information Technology (IT) - Energy Scenario”
Student Branches
Please check detailed news at:
http://www.csi-india.org/web/csi/chapternews-november2010
SPEAKER(S) | TOPIC AND GIST
BDCOE, Sevagram, Vardha
31 August 2010 : “Inauguration of Student Chapter”
The inauguration of the student branch was followed by a debate competition.
Mr. Gaurav Namjoshi, Director, Endeavor, Nagpur
1 September 2010: “Career Avenues after Engineering and Orientation for Aptitude Test”
Mr. Avinash Moharil, Director, Techior Solutions Pvt. Ltd.
8 September 2010: “Software Development Skills”
Prof. M. A. Gaikwad, Dean (R & D)
28 - 31 October 2010: “Essentials of Visual Modeling & Rational Software
Architecture (RSA)”
This was a four-day workshop organized in association with ZenSOFT Services
Pvt. Ltd., Pune, under the IBM Academic Initiative.
Four-day workshop on RSA in progress
Mr. Ajay Chaudhari, Sr. Manager (R & D), VMware Software Pvt. Ltd., Bangalore
2 November 2010: “Cloud Computing & You”
Editor’s Choice: Visual modeling is the activity of representing objects and
systems of interest using graphical languages. As with other modeling languages,
visual modeling languages may be classified as: general-purpose/domain-specific;
executable/non-executable; and open/proprietary. Some examples are UML, SysML,
BPMN and BPEL.
Reference - Visual Modeling Forum.
Chandigarh Engineering College, Landran, Mohali
Dr. G S Singh, Vice Chairman, CSI Chandigarh Chapter, Mr. Amit Deogar,
Associate Prof., NITTTR Chandigarh, and Mr. Vipin, Incharge, Computer Centre
21-22 October 2010 : “Transdisciplinary Computer Applications and Design
Challenges”
This was a two-day regional student convention. The chief guest, Dr. Singh,
highlighted the significance of computer applications in various fields of our
daily life. He emphasized the new challenges in designing computer programs to
the exact requirements of consumers.
On the second day, sessions were conducted on “Emerging Technologies in
Open Source Web Platform”. During these sessions, the speakers explained the
deployment of Open Source Software (OSS) onto personal business servers and
application arenas.
Editor’s Choice: “Open-source software (OSS) is software available in source code
form for which the source code and certain other rights normally reserved for
copyright holders are provided under a software license that permits users to
study, change, and improve the software. A report by the Standish Group states
that adoption of open-source software models has resulted in savings of about
$60 billion per year to consumers.”
- Wikipedia.
Guests (Right to Left): Prof. (Col.) H S Sarin, Wg. Cdr. D N Mishra (Hony Secy),
Dr. G. S. Singh (Vice President), Dr. S. V. Rama Gopal (Scientist & MC Member),
Dr. G. D. Bansal (Director-General, CGC Landran, Mohali), Dr. Harmesh Kansal
(Prof. at UIET, Chandigarh).
Muthayammal Engineering College, Rasipuram
Mr. Jude Xavier, Assistant Vice President HR, Polaris Software Lab Ltd., Chennai
8 October 2010 : “Inauguration of Student Branch”
Mr. Xavier inaugurated the function and shared his experiences in his speech.
Dr. N. Kasthuri, Professor/ECE, Kongu Engineering
College, Erode
14 October 2010 : “Research Issues in Image Processing”
In her speech, she shared her experience and elaborated on the concepts behind
image processing, giving a detailed lecture on restoration and compression.
Editor’s Choice: “Digital has obviously changed things a lot, but not all for the
better as far as I’m concerned. Of course it’s much more convenient and you’re
getting instant results, but to me it just lacks the finesse of a roll of film
and it has a slightly superimposed feel.”
- Graeme Le Saux
Dr. N. Kasthuri giving a speech on Image Processing
Mr. K. Vengatesan, Lecturer, CSE, Muthayammal
Engineering College
20 October 2010: "Dot Net Programming"
Prof. R. Bhaskaran and S. Karal Marx, CSE, Muthayammal
Engineering College
28 October 2010: “Multi Core Programming”
Prof. R. Bhaskaran and S. Karal Marx delivered a special lecture on multi-core
programming. In their speech, they explained multi-core programming, parallel
processing, pipeline processing and open research issues in the area.
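As a concrete taste of the topic, the Java sketch below - an invented example,
not the lecturers' code - splits a large summation across one worker thread
per available core and combines the partial results: the basic divide-and-join
pattern behind most multi-core speedups.

import java.util.*;
import java.util.concurrent.*;
import java.util.stream.LongStream;

// Illustrative-only sketch: data-parallel summation of [0, n) using a
// fixed thread pool with one worker per available core.
public class MultiCoreSum {
    public static void main(String[] args) throws Exception {
        int cores = Runtime.getRuntime().availableProcessors();
        long n = 100_000_000L, chunk = n / cores;
        ExecutorService pool = Executors.newFixedThreadPool(cores);
        List<Future<Long>> parts = new ArrayList<>();
        for (int i = 0; i < cores; i++) {
            long lo = i * chunk, hi = (i == cores - 1) ? n : lo + chunk;
            parts.add(pool.submit(() -> LongStream.range(lo, hi).sum()));
        }
        long total = 0;
        for (Future<Long> f : parts) total += f.get(); // join partial sums
        pool.shutdown();
        System.out.println("sum = " + total);          // equals n*(n-1)/2
    }
}

Pipeline processing, also covered in the lecture, instead runs successive
stages of a computation concurrently, each stage handing its output to the next.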
Mr. M. Sayee Kumar and Mr. M. RamKumar, CSE,
Muthayammal Engineering College
10 November 2010: "Java Programming"
In their speech, the speakers explained OOP concepts and the features of Java
programming.
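For reference, here is a minimal Java illustration of the OOP ideas such a
session typically covers; the shapes example is our own invention, not the
speakers' material. It shows encapsulation (private fields), inheritance
(subclasses extend a base class) and polymorphism (the overriding area() is
selected at run time).

// Illustrative-only sketch of encapsulation, inheritance and polymorphism.
abstract class Shape {
    private final String name;                  // encapsulated state
    Shape(String name) { this.name = name; }
    abstract double area();                     // contract for subclasses
    public String toString() { return name + ": " + area(); }
}

class Circle extends Shape {
    private final double r;
    Circle(double r) { super("circle"); this.r = r; }
    double area() { return Math.PI * r * r; }   // overriding method
}

class Square extends Shape {
    private final double s;
    Square(double s) { super("square"); this.s = s; }
    double area() { return s * s; }
}

public class OopDemo {
    public static void main(String[] args) {
        Shape[] shapes = { new Circle(1.0), new Square(2.0) };
        for (Shape sh : shapes)
            System.out.println(sh);             // dynamic dispatch
    }
}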
Shri S. Baskar, Chief Executive Officer, Linuxpert System,
Chennai
12 November 2010: "Open Source System"
Shri Baskar shared his experience with open source software, the freedom it
offers, and the advantages of using it.
Sinhgad Institute of Technology, Lonavala
Mr. Avinash Nazare, HR Head, TCS Pune, Mr. Mayur Tannu, Sr. Executive (Sales),
Sakaal Times Pune, Mr. Vishwesh Deshmukh, Manager, Career Forum, Mr. Narayan
Maheshwari, AEGIS Technology, Mr. Subhashish Sen Gupta, Regional Manager, IMS
Learning Resources, Pune, Mr. Nitin Kulkarni, Director, Router InfoTech, Pune
28 - 29 September 2010: National level technical festival ‘XENOS’
The event this time included new adventurous activities such as rinkism, rock
climbing, obstacle games and trekking. The Tech Fest was initiated by the
Microsoft Student’s Club session and Devcon (Developer’s Conference). The main
attractions among the events were Coding, E-Burst, Virtual Recruitment, Gaming
and, the newest of all, The Flying Eagles. Other events such as Technical Paper
Presentation, Photoshop and Treasure Hunt lived up to the mark, as they usually
test individuals beyond their expectations. The robotics event generated huge
excitement this time, as the track was specially designed with various levels
of difficulty and interest to test the participants’ inventions (robots) in
all possible ways.
Editor’s Choice: “Computer games don’t affect kids, I mean if Pac Man affected
us as kids, we’d all be running around in darkened rooms, munching pills and
listening to repetitive music”
- Gareth Owen
Technical festival ‘XENOS’ in progress.
Terna Engineering College, Nerul, Navi Mumbai
Dr. Vishnu Kanhere, Chairman of CSI Mumbai Chapter, Mr. K G Chari
14 October 2010: “Inauguration of CSI student branch”
Dr. Kanhere delivered an inspirational and motivating speech and made
students aware of the current status of computer technology in the world.
Mr. Chari shared his experience in business development, implementation and
training in various software engineering methods. He also guided the students
and appealed to them to utilize the opportunity of being a CSI member for
their benefit.
Editor’s Choice: “So many dreams at first seem impossible. And then they seem
improbable. And then, when we summon the will, they soon become inevitable.”
- Christopher Reeve
Dr. Vishnu Kanhere speaking during inauguration
CSI Elections 2011-2012/2013
Following is the final slate by the Nominations Committee (2010-2011) for the various offices of the Computer Society
of India for 2011-2012/2013.
For the Term 2011-2012 (April 1, 2011 – March 31, 2012)
Vice President cum President Elect
Nomination Committee
• Mr. Satish Babu
• Prof. (Dr.) A K Nayak
• Mr. Satish Kumar Syal
• Mr. P R Rangaswami
• Mr. Sanjay K Mohanty
• Mr. Satish Kumar Khosla
For the Term 2011-2013 (April 1, 2011 – March 31, 2013)
Hon. Treasurer
• Mr. Ajit Kumar Sahoo
• Mr. M. P. Goel
• Mr. V. L. Mehta
Regional Vice President (Reg. I)
• Mr. Piyush Kumar Goyal
• Mr. R K Vyas
Regional Vice President (Reg. III)
• Mr. Anil Srivastava
• Mr. G F Vohra
Regional Vice President (Reg. V)
• Prof. D B V Sarma
• Mr. Iqbal Ahmed
Regional Vice President (Reg. VII)
• Mr. S Ramasamy
Divisional Chair Person - Div. I
• Dr. C R Chakravarthy
• Prof. S G Shah
Divisional Chair Person - Div. III
• Mr. Devesh Kumar Dwivedi
• Prof. Pradeep Pendse
• Mr. S P Soman
• Dr. S. Subramanian
Divisional Chair Person - Div. V
• Dr. Manohar Chandwani
Note: All election-related notices and Bio-Data will be published on the CSI website www.csi-india.org
Nominations Committee 2010-2011
Dr. S S Agrawal (Chairman)
Dr. S C Bhatia (Member)
Dr. (Prof.) U K Singh (Member)
Published by Suchit Gogwekar for Computer Society of India at 122, TV Indl. Estate, S K Ahire Marg, Worli, Mumbai-400 030 • Tel.: 022-249 34776
• Website: www.csi-india.org • Email: [email protected] • Printed by him at GP Offset Pvt. Ltd., Mumbai 400 059.