ISSN (ONLINE): 2279-0071
ISSN (PRINT): 2279-0063
Special Issue No. 1, Volume 1
May 2015
International Journal of Software
and Web Sciences
International Association of Scientific Innovation and Research (IASIR)
(An Association Unifying the Sciences, Engineering, and Applied Research)
STEM International Scientific Online Media and Publishing House
Head Office: 148, Summit Drive, Byron, Georgia-31008, United States.
Offices Overseas: Germany, Australia, India, Netherlands, Canada.
Website: www.iasir.net, E-mail (s): [email protected], [email protected], [email protected]
Guest Editors
Mr. Vijai Singh
Department of Computer Science and Engineering,
IMS Engineering College,
Ghaziabad, 201009, Uttar Pradesh,
INDIA
&
Dr. Pankaj Agarwal
Department of Computer Science and Engineering,
IMS Engineering College,
Ghaziabad, 201009, Uttar Pradesh,
INDIA
PREFACE
We are delighted to welcome you to the special issue of the International Journal of Software
and Web Sciences (IJSWS). In recent years, advances in science, technology, engineering,
and mathematics have radically expanded the data available to researchers and
professionals in a wide variety of domains. This unique combination of theory and data has the
potential for broad impact on educational research and practice. IJSWS publishes high-quality,
peer-reviewed papers covering a number of topics in the areas of Software architectures for
scientific computing, Mobile robots, Artificial intelligence systems and architectures,
Microcontrollers & microprocessor applications, Natural language processing and expert systems,
Fuzzy logic and soft computing, Semantic Web, Web retrieval systems, Software and multimedia Web,
Advanced database systems, Information retrieval systems, Computer architecture & VLSI,
Distributed and parallel processing, Software testing, verification and validation methods,
Web mining and data mining, UML/MDA and AADL, Object oriented technology, Software and Web
metrics, Software maintenance and evolution, Component based software engineering, middleware,
and tools, Service oriented software architecture, Hypermedia design applications, Ontology
creation, evolution, reconciliation, and mediation, Web authoring tools, Web application
architectures and frameworks, Testing and evaluation of Web applications, Empirical Web
engineering, Deep and hidden Web, and other relevant fields of software and Web sciences.
The editorial board of IJSWS is composed of members of the teaching and research community
who have expertise in a variety of disciplines, including software process models, software and
technology deployments, ICT solutions, and other related disciplines of software and Web based
applications. In order to best serve our community, this Journal is
available online as well as in hard-copy form. Because of the rapid advances in underlying
technologies and the interdisciplinary nature of the field, we believe it is important to provide
quality research articles promptly and to the widest possible audience.
We are happy that this Journal has continued to grow and develop. We have made every effort to
evaluate and process submissions and reviews, and to promptly address queries from authors and
the general public. The Journal strives to reflect the most recent and finest research in the
field of emerging technologies, especially those related to Software and Web sciences. This
Journal is completely refereed and indexed in major databases, including IndexCopernicus,
Computer Science Directory, GetCITED, DOAJ, SSRN, TGDScholar, WorldWideScience, CiteSeerX,
CRCnetBASE, Google Scholar, Microsoft Academic Search, INSPEC, ProQuest, ArnetMiner, Base,
ChemXSeer, citebase, OpenJ-Gate, eLibrary, SafetyLit, VADLO, OpenGrey, EBSCO, UlrichWeb, ISSUU,
SPIE Digital Library, arXiv, ERIC, EasyBib, Infotopia, WorldCat, .docstoc, JURN, Mendeley,
ResearchGate, cogprints, OCLC, iSEEK, Scribd, LOCKSS, CASSI, E-PrintNetwork, intute,
and some other databases.
We are grateful to all of the individuals and agencies whose work and support made the
Journal's success possible. We want to thank the executive board and core committee
members of the IJSWS for entrusting us with the important job. We are thankful to the
members of the IJSWS editorial board who have contributed energy and time to the Journal
with their steadfast support, constructive advice, as well as reviews of submissions. We are
deeply indebted to Mr. Vijai Singh and Dr. Pankaj Agarwal (IMS Engineering College,
Ghaziabad, 201009, Uttar Pradesh, INDIA) who are the guest editors for the special issue of
IJSWS, and to the numerous anonymous reviewers who have contributed expert evaluations of the
submissions to help maintain the quality of the Journal. We have the highest respect for all the
authors who have submitted articles to the Journal, for their intellectual energy and creativity,
and for their dedication to the field of software and Web sciences.
This special issue of the IJSWS has attracted a number of authors and researchers, particularly
from the Department of Computer Science and Engineering, IMS Engineering College, Ghaziabad,
201009, Uttar Pradesh, INDIA. This issue should provide an effective platform for intellectuals
from different streams to put forth suggestions and ideas that may accelerate the development of
emerging technologies in Software and Web sciences and open new areas for research and
development. We hope you will enjoy this special issue of the IJSWS, and we look forward to
hearing your feedback and receiving your contributions.
(Administrative Chief)
(Managing Director)
(Editorial Head)
---------------------------------------------------------------------------------------------------------------------------
Published papers in the International Journal of Software and Web Sciences (IJSWS), ISSN (Online): 2279-0071, ISSN (Print): 2279-0063 (May 2015, Special Issue No. 1, Volume 1).
---------------------------------------------------------------------------------------------------------------------------
BOARD MEMBERS
EDITOR IN CHIEF
Prof. (Dr.) Waressara Weerawat, Director of Logistics Innovation Center, Department of
Industrial Engineering, Faculty of Engineering, Mahidol University, Thailand.
Prof. (Dr.) Yen-Chun Lin, Professor and Chair, Dept. of Computer Science and Information
Engineering, Chang Jung Christian University, Kway Jen, Tainan, Taiwan.
Divya Sethi, GM Conferencing & VSAT Solutions, Enterprise Services, Bharti Airtel, Gurgaon,
India.
CHIEF EDITOR (TECHNICAL)
Prof. (Dr.) Atul K. Raturi, Head School of Engineering and Physics, Faculty of Science, Technology
and Environment, The University of the South Pacific, Laucala campus, Suva, Fiji Islands.
Prof. (Dr.) Hadi Suwastio, College of Applied Science, Department of Information Technology,
The Sultanate of Oman and Director of IETI-Research Institute-Bandung, Indonesia.
Dr. Nitin Jindal, Vice President, Max Coreth, North America Gas & Power Trading, New York,
United States.
CHIEF EDITOR (GENERAL)
Prof. (Dr.) Thanakorn Naenna, Department of Industrial Engineering, Faculty of Engineering,
Mahidol University, Thailand.
Prof. (Dr.) Jose Francisco Vicent Frances, Department of Science of the Computation and Artificial
Intelligence, Universidad de Alicante, Alicante, Spain.
Prof. (Dr.) Huiyun Liu, Department of Electronic & Electrical Engineering, University College
London, Torrington Place, London.
ADVISORY BOARD
Prof. (Dr.) Kimberly A. Freeman, Professor & Director of Undergraduate Programs, Stetson
School of Business and Economics, Mercer University, Macon, Georgia, United States.
Prof. (Dr.) Klaus G. Troitzsch, Professor, Institute for IS Research, University of Koblenz-Landau,
Germany.
Prof. (Dr.) T. Anthony Choi, Professor, Department of Electrical & Computer Engineering, Mercer
University, Macon, Georgia, United States.
Prof. (Dr.) Fabrizio Gerli, Department of Management, Ca' Foscari University of Venice, Italy.
Prof. (Dr.) Jen-Wei Hsieh, Department of Computer Science and Information Engineering,
National Taiwan University of Science and Technology, Taiwan.
Prof. (Dr.) Jose C. Martinez, Dept. Physical Chemistry, Faculty of Sciences, University of
Granada, Spain.
Prof. (Dr.) Panayiotis Vafeas, Department of Engineering Sciences, University of Patras, Greece.
Prof. (Dr.) Soib Taib, School of Electrical & Electronics Engineering, University Science Malaysia,
Malaysia.
Prof. (Dr.) Vit Vozenilek, Department of Geoinformatics, Palacky University, Olomouc, Czech
Republic.
Prof. (Dr.) Sim Kwan Hua, School of Engineering, Computing and Science, Swinburne University
of Technology, Sarawak, Malaysia.
Prof. (Dr.) Jose Francisco Vicent Frances, Department of Science of the Computation and Artificial
Intelligence, Universidad de Alicante, Alicante, Spain.
Prof. (Dr.) Rafael Ignacio Alvarez Sanchez, Department of Science of the Computation and
Artificial Intelligence, Universidad de Alicante, Alicante, Spain.
Prof. (Dr.) Praneel Chand, Ph.D., M.IEEE, C/O School of Engineering & Physics, Faculty of Science &
Technology, The University of the South Pacific (USP), Laucala Campus, Private Mail Bag, Suva,
Fiji.
Prof. (Dr.) Francisco Miguel Martinez, Department of Science of the Computation and Artificial
Intelligence, Universidad de Alicante, Alicante, Spain.
Prof. (Dr.) Antonio Zamora Gomez, Department of Science of the Computation and Artificial
Intelligence, Universidad de Alicante, Alicante, Spain.
Prof. (Dr.) Leandro Tortosa, Department of Science of the Computation and Artificial Intelligence,
Universidad de Alicante, Alicante, Spain.
Prof. (Dr.) Samir Ananou, Department of Microbiology, Universidad de Granada, Granada, Spain.
Dr. Miguel Angel Bautista, Department de Matematica Aplicada y Analisis, Facultad de
Matematicas, Universidad de Barcelona, Spain.
Prof. (Dr.) Adam Baharum, School of Mathematical Sciences, Universiti Sains Malaysia, Malaysia.
Dr. Cathryn J. Peoples, Faculty of Computing and Engineering, School of Computing and
Information Engineering, University of Ulster, Coleraine, Northern Ireland, United Kingdom.
Prof. (Dr.) Pavel Lafata, Department of Telecommunication Engineering, Faculty of Electrical
Engineering, Czech Technical University in Prague, Prague, 166 27, Czech Republic.
Prof. (Dr.) P. Bhanu Prasad, Vision Specialist, Matrix Vision GmbH, Germany; Consultant, TIFAC-CORE
for Machine Vision; Advisor, Kelenn Technology, France; Advisor, Shubham Automation &
Services, Ahmedabad; and Professor of C.S.E., Rajalakshmi Engineering College, India.
Prof. (Dr.) Anis Zarrad, Department of Computer Science and Information System, Prince Sultan
University, Riyadh, Saudi Arabia.
Prof. (Dr.) Mohammed Ali Hussain, Professor, Dept. of Electronics and Computer Engineering, KL
University, Green Fields, Vaddeswaram, Andhra Pradesh, India.
Dr. Cristiano De Magalhaes Barros, Governo do Estado de Minas Gerais, Brazil.
Prof. (Dr.) Md. Rizwan Beg, Professor & Head, Dean, Faculty of Computer Applications, Deptt. of
Computer Sc. & Engg. & Information Technology, Integral University Kursi Road, Dasauli,
Lucknow, India.
Prof. (Dr.) Vishnu Narayan Mishra, Assistant Professor of Mathematics, Sardar Vallabhbhai
National Institute of Technology, Ichchhanath Mahadev Road, Surat, Surat-395007, Gujarat,
India.
Dr. Jia Hu, Member Research Staff, Philips Research North America, New York Area, NY.
Prof. Shashikant Shantilal Patil SVKM, MPSTME Shirpur Campus, NMIMS University Vile Parle
Mumbai, India.
Prof. (Dr.) Bindhya Chal Yadav, Assistant Professor in Botany, Govt. Post Graduate College,
Fatehabad, Agra, Uttar Pradesh, India.
REVIEW BOARD
Prof. (Dr.) Kimberly A. Freeman, Professor & Director of Undergraduate Programs, Stetson
School of Business and Economics, Mercer University, Macon, Georgia, United States.
Prof. (Dr.) Klaus G. Troitzsch, Professor, Institute for IS Research, University of Koblenz-Landau,
Germany.
Prof. (Dr.) T. Anthony Choi, Professor, Department of Electrical & Computer Engineering, Mercer
University, Macon, Georgia, United States.
Prof. (Dr.) Yen-Chun Lin, Professor and Chair, Dept. of Computer Science and Information
Engineering, Chang Jung Christian University, Kway Jen, Tainan, Taiwan.
Prof. (Dr.) Jen-Wei Hsieh, Department of Computer Science and Information Engineering,
National Taiwan University of Science and Technology, Taiwan.
Prof. (Dr.) Jose C. Martinez, Dept. Physical Chemistry, Faculty of Sciences, University of
Granada, Spain.
Prof. (Dr.) Joel Saltz, Emory University, Atlanta, Georgia, United States.
Prof. (Dr.) Panayiotis Vafeas, Department of Engineering Sciences, University of Patras, Greece.
Prof. (Dr.) Soib Taib, School of Electrical & Electronics Engineering, University Science Malaysia,
Malaysia.
Prof. (Dr.) Sim Kwan Hua, School of Engineering, Computing and Science, Swinburne University
of Technology, Sarawak, Malaysia.
Prof. (Dr.) Jose Francisco Vicent Frances, Department of Science of the Computation and Artificial
Intelligence, Universidad de Alicante, Alicante, Spain.
Prof. (Dr.) Rafael Ignacio Alvarez Sanchez, Department of Science of the Computation and
Artificial Intelligence, Universidad de Alicante, Alicante, Spain.
Prof. (Dr.) Francisco Miguel Martinez, Department of Science of the Computation and Artificial
Intelligence, Universidad de Alicante, Alicante, Spain.
Prof. (Dr.) Antonio Zamora Gomez, Department of Science of the Computation and Artificial
Intelligence, Universidad de Alicante, Alicante, Spain.
Prof. (Dr.) Leandro Tortosa, Department of Science of the Computation and Artificial Intelligence,
Universidad de Alicante, Alicante, Spain.
Prof. (Dr.) Samir Ananou, Department of Microbiology, Universidad de Granada, Granada, Spain.
Dr. Miguel Angel Bautista, Department de Matematica Aplicada y Analisis, Facultad de
Matematicas, Universidad de Barcelona, Spain.
Prof. (Dr.) Adam Baharum, School of Mathematical Sciences, Universiti Sains Malaysia, Malaysia.
Prof. (Dr.) Huiyun Liu, Department of Electronic & Electrical Engineering, University College
London, Torrington Place, London.
Dr. Cristiano De Magalhaes Barros, Governo do Estado de Minas Gerais, Brazil.
Prof. (Dr.) Pravin G. Ingole, Senior Researcher, Greenhouse Gas Research Center, Korea
Institute of Energy Research (KIER), 152 Gajeong-ro, Yuseong-gu, Daejeon 305-343, KOREA
Prof. (Dr.) Dilum Bandara, Dept. Computer Science & Engineering, University of Moratuwa, Sri
Lanka.
Prof. (Dr.) Faudziah Ahmad, School of Computing, UUM College of Arts and Sciences, University
Utara Malaysia, 06010 UUM Sintok, Kedah Darulaman
Prof. (Dr.) G. Manoj Someswar, Principal, Dept. of CSE at Anwar-ul-uloom College of Engineering
& Technology, Yennepally, Vikarabad, RR District., A.P., India.
Prof. (Dr.) Abdelghni Lakehal, Applied Mathematics, Rue 10 no 6, Cite des Fonctionnaires, Dokkarat,
30010 Fes, Morocco.
Dr. Kamal Kulshreshtha, Associate Professor & Head, Deptt. of Computer Sc. & Applications, Modi
Institute of Management & Technology, Kota-324 009, Rajasthan, India.
Prof. (Dr.) Anukrati Sharma, Associate Professor, Faculty of Commerce and Management,
University of Kota, Kota, Rajasthan, India.
Prof. (Dr.) S. Natarajan, Department of Electronics and Communication Engineering, SSM College
of Engineering, NH 47, Salem Main Road, Komarapalayam, Namakkal District, Tamilnadu
638183, India.
Prof. (Dr.) J. Sadhik Basha, Department of Mechanical Engineering, King Khalid University, Abha,
Kingdom of Saudi Arabia
Prof. (Dr.) G. SAVITHRI, Department of Sericulture, S.P. Mahila Visvavidyalayam, Tirupati-517502, Andhra Pradesh, India.
Prof. (Dr.) Shweta jain, Tolani College of Commerce, Andheri, Mumbai. 400001, India
Prof. (Dr.) Abdullah M. Abdul-Jabbar, Department of Mathematics, College of Science, University
of Salahaddin-Erbil, Kurdistan Region, Iraq.
Prof. (Dr.) P. Sujathamma, Department of Sericulture, S.P. Mahila Visvavidyalayam, Tirupati-517502, India.
Prof. (Dr.) Bimla Dhanda, Professor & Head, Department of Human Development and Family
Studies, College of Home Science, CCS, Haryana Agricultural University, Hisar- 125001
(Haryana) India.
Prof. (Dr.) Manjulatha, Dept of Biochemistry,School of Life Sciences,University of
Hyderabad,Gachibowli, Hyderabad, India.
Prof. (Dr.) Upasani Dhananjay Eknath Advisor & Chief Coordinator, ALUMNI Association, Sinhgad
Institute of Technology & Science, Narhe, Pune- 411 041, India.
Prof. (Dr.) Sudhindra Bhat, Professor & Finance Area Chair, School of Business, Alliance
University Bangalore-562106.
Prof. Prasenjit Chatterjee , Dept. of Mechanical Engineering, MCKV Institute of Engineering West
Bengal, India.
Prof. Rajesh Murukesan, Deptt. of Automobile Engineering, Rajalakshmi Engineering college,
Chennai, India.
Prof. (Dr.) Parmil Kumar, Department of Statistics, University of Jammu, Jammu, India
Prof. (Dr.) M.N. Shesha Prakash, Vice Principal, Professor & Head of Civil Engineering, Vidya
Vikas Institute of Engineering and Technology, Alanahally, Mysore-570 028
Prof. (Dr.) Piyush Singhal, Mechanical Engineering Deptt., GLA University, India.
Prof. M. Mahbubur Rahman, School of Engineering & Information Technology, Murdoch
University, Perth Western Australia 6150, Australia.
Prof. Nawaraj Chaulagain, Department of Religion, Illinois Wesleyan University, Bloomington, IL.
Prof. Hassan Jafari, Faculty of Maritime Economics & Management, Khoramshahr University of
Marine Science and Technology, khoramshahr, Khuzestan province, Iran
Prof. (Dr.) Kantipudi MVV Prasad, Dept. of EC, School of Engg., R.K. University, Kasturbhadham,
Tramba, Rajkot-360020, India.
Prof. (Mrs.) P.Sujathamma, Department of Sericulture, S.P.Mahila Visvavidyalayam, ( Women's
University), Tirupati-517502, India.
Prof. (Dr.) M A Rizvi, Dept. of Computer Engineering and Applications, National Institute of
Technical Teachers' Training and Research, Bhopal M.P. India
Prof. (Dr.) Mohsen Shafiei Nikabadi, Faculty of Economics and Management, Industrial
Management Department, Semnan University, Semnan, Iran.
Prof. P.R.SivaSankar, Head, Dept. of Commerce, Vikrama Simhapuri University Post Graduate
Centre, KAVALI - 524201, A.P. India.
Prof. (Dr.) Bhawna Dubey, Institute of Environmental Science( AIES), Amity University, Noida,
India.
Prof. Manoj Chouhan, Deptt. of Information Technology, SVITS Indore, India.
Prof. Yupal S Shukla, V M Patel College of Management Studies, Ganpat University, Kherva-Mehsana, India.
Prof. (Dr.) Amit Kohli, Head of the Department, Department of Mechanical Engineering,
D.A.V.Institute of Engg. and Technology, Kabir Nagar, Jalandhar, Punjab(India)
Prof. (Dr.) Kumar Irayya Maddani, and Head of the Department of Physics in SDM College of
Engineering and Technology, Dhavalagiri, Dharwad, State: Karnataka (INDIA).
Prof. (Dr.) Shafi Phaniband, SDM College of Engineering and Technology, Dharwad, INDIA.
Prof. M H Annaiah, Head, Department of Automobile Engineering, Acharya Institute of
Technology, Soladevana Halli, Bangalore -560107, India.
Prof. (Dr.) Shriram K V, Faculty Computer Science and Engineering, Amrita Vishwa
Vidhyapeetham University, Coimbatore, India.
Prof. (Dr.) Sohail Ayub, Department of Civil Engineering, Z.H College of Engineering &
Technology, Aligarh Muslim University, Aligarh. 202002 UP-India
Prof. (Dr.) Santosh Kumar Behera, Department of Education, Sidho-Kanho-Birsha University,
Purulia, West Bengal, India.
Prof. (Dr.) Urmila Shrawankar, Department of Computer Science & Engineering, G H Raisoni
College of Engineering, Nagpur (MS), India.
Prof. Anbu Kumar. S, Deptt. of Civil Engg., Delhi Technological University (Formerly Delhi College
of Engineering) Delhi, India.
Prof. (Dr.) Meenakshi Sood, Vegetable Science, College of Horticulture, Mysore, University of
Horticultural Sciences, Bagalkot, Karnataka (India)
Prof. (Dr.) R. R. Patil, Director, School of Earth Science, Solapur University, Solapur, India.
Prof. (Dr.) Manoj Khandelwal, Dept. of Mining Engg, College of Technology & Engineering,
Maharana Pratap University of Agriculture & Technology, Udaipur-313 001 (Rajasthan), India
Prof. (Dr.) Kishor Chandra Satpathy, Librarian, National Institute of Technology, Silchar-788010,
Assam, India.
Prof. (Dr.) Juhana Jaafar, Gas Engineering Department, Faculty of Petroleum and Renewable
Energy Engineering (FPREE), Universiti Teknologi Malaysia, 81310 UTM Johor Bahru, Johor.
Prof. (Dr.) Rita Khare, Assistant Professor in Chemistry, Govt. Women's College, Gardanibagh,
Patna, Bihar, India.
Prof. (Dr.) Raviraj Kusanur, Dept of Chemistry, R V College of Engineering, Bangalore-59, India.
Prof. (Dr.) Hameem Shanavas .I, M.V.J College of Engineering, Bangalore, India.
Prof. (Dr.) Sandhya Mehrotra, Department of Biological Sciences, Birla Institute of Technology
and Sciences, Pilani, Rajasthan, India.
Prof. (Dr.) Ravindra Jilte, Head of the Department, Department of Mechanical Engineering, VCET,
Thane-401202, India.
Prof. (Dr.) Sanjay Kumar, JKL University, Ajmer Road, Jaipur
Prof. (Dr.) Pushp Lata, Faculty of English and Communication, Department of Humanities and
Languages, Nucleus Member, Publications and Media Relations Unit, Editor, BITScan, BITS, Pilani, India.
Prof. Arun Agarwal, Faculty of ECE Dept., ITER College, Siksha 'O' Anusandhan University
Bhubaneswar, Odisha, India
Prof. (Dr.) Pratima Tripathi, Department of Biosciences, SSSIHL, Anantapur Campus, Anantapur-515001 (A.P.), India.
Prof. (Dr.) Sudip Das, Department of Biotechnology, Haldia Institute of Technology, I.C.A.R.E.
Complex, H.I.T. Campus, P.O. Hit, Haldia; Dist: Puba Medinipur, West Bengal, India.
Prof. (Dr.) ABHIJIT MITRA , Associate Professor and former Head, Department of Marine Science,
University of Calcutta , India.
Prof. (Dr.) N. Ramu, Associate Professor, Department of Commerce, Annamalai University,
Annamalai Nagar-608 002, Chidambaram, Tamil Nadu, India.
Prof. (Dr.) Saber Mohamed Abd-Allah, Assistant Professor of Theriogenology , Faculty of
Veterinary Medicine , Beni-Suef University , Egypt.
Prof. (Dr.) Ramel D. Tomaquin, Dean, College of Arts and Sciences Surigao Del Sur State
University (SDSSU), Tandag City Surigao Del Sur, Philippines.
Prof. (Dr.) Bimla Dhanda, Professor & Head, Department of Human Development and Family
Studies College of Home Science, CCS, Haryana Agricultural University, Hisar- 125001 (Haryana)
India.
Prof. (Dr.) R.K.Tiwari, Professor, S.O.S. in Physics, Jiwaji University, Gwalior, M.P.-474011,
India.
Prof. (Dr.) Sandeep Gupta, Department of Computer Science & Engineering, Noida Institute of
Engineering and Technology, Gr.Noida, India.
Prof. (Dr.) Mohammad Akram, Jazan University, Kingdom of Saudi Arabia.
Prof. (Dr.) Sanjay Sharma, Dept. of Mathematics, BIT, Durg(C.G.), India.
Prof. (Dr.) Manas R. Panigrahi, Department of Physics, School of Applied Sciences, KIIT
University, Bhubaneswar, India.
Prof. (Dr.) P.Kiran Sree, Dept of CSE, Jawaharlal Nehru Technological University, India
Prof. (Dr.) Suvroma Gupta, Department of Biotechnology in Haldia Institute of Technology,
Haldia, West Bengal, India.
Prof. (Dr.) SREEKANTH. K. J., Department of Mechanical Engineering at Mar Baselios College of
Engineering & Technology, University of Kerala, Trivandrum, Kerala, India
Prof. Bhubneshwar Sharma, Department of Electronics and Communication Engineering, Eternal
University (H.P), India.
Prof. Love Kumar, Electronics and Communication Engineering, DAV Institute of Engineering and
Technology, Jalandhar (Punjab), India.
Prof. S.KANNAN, Department of History, Annamalai University, Annamalainagar- 608002, Tamil
Nadu, India.
Prof. (Dr.) Hasrinah Hasbullah, Faculty of Petroleum & Renewable Energy Engineering, Universiti
Teknologi Malaysia, 81310 UTM Johor Bahru, Johor, Malaysia.
Prof. Rajesh Duvvuru, Dept. of Computer Sc. & Engg., N.I.T. Jamshedpur, Jharkhand, India.
Prof. (Dr.) Bhargavi H. Goswami, Department of MCA, Sunshine Group of Institutes, Nr. Rangoli
Park, Kalawad Road, Rajkot, Gujarat, India.
Prof. (Dr.) Essam H. Houssein, Computer Science Department, Faculty of Computers &
Informatics, Benha University, Benha 13518, Qalyubia Governorate, Egypt.
Arash Shaghaghi, University College London, University of London, Great Britain.
Prof. (Dr.) Anand Kumar, Head, Department of MCA, M.S. Engineering College, Navarathna
Agrahara, Sadahalli Post, Bangalore, PIN 562110, Karnataka, INDIA.
Prof. (Dr.) Venkata Raghavendra Miriampally, Electrical and Computer Engineering Dept, Adama
Science & Technology University, Adama, Ethiopia.
Prof. (Dr.) Jatinderkumar R. Saini, Director (I.T.), GTU's Ankleshwar-Bharuch Innovation Sankul
& Director I/C & Associate Professor, Narmada College of Computer Application, Zadeshwar,
Bharuch, Gujarat, India.
Prof. Jaswinder Singh, Mechanical Engineering Department, University Institute Of Engineering &
Technology, Panjab University SSG Regional Centre, Hoshiarpur, Punjab, India- 146001.
Prof. (Dr.) S.Kadhiravan, Head i/c, Department of Psychology, Periyar University, Salem- 636
011,Tamil Nadu, India.
Prof. (Dr.) Mohammad Israr, Principal, Balaji Engineering College,Junagadh, Gujarat-362014,
India.
Prof. (Dr.) VENKATESWARLU B., Director of MCA in Sreenivasa Institute of Technology and
Management Studies (SITAMS), Chittoor.
Prof. (Dr.) Deepak Paliwal, Faculty of Sociology, Uttarakhand Open University, Haldwani-Nainital
Prof. (Dr.) Anil K Dwivedi, Faculty of Pollution & Environmental Assay Research Laboratory
(PEARL), Department of Botany, DDU Gorakhpur University, Gorakhpur-273009, India.
Prof. R. Ravikumar, Department of Agricultural and Rural Management, TamilNadu Agricultural
University, Coimbatore-641003,Tamil Nadu, India.
Prof. (Dr.) R. Raman, Professor of Agronomy, Faculty of Agriculture, Annamalai University,
Annamalai Nagar-608 002, Tamil Nadu, India.
Prof. (Dr.) Ahmed Khalafallah, Coordinator of the CM Degree Program, Department of
Architectural and Manufacturing Sciences, Ogden College of Sciences and Engineering Western
Kentucky University 1906 College Heights Blvd Bowling Green, KY 42103-1066
Prof. (Dr.) Asmita Das , Delhi Technological University (Formerly Delhi College of Engineering),
Shahbad, Daulatpur, Delhi 110042, India.
Prof. (Dr.)Aniruddha Bhattacharjya, Assistant Professor (Senior Grade), CSE Department, Amrita
School of Engineering , Amrita Vishwa VidyaPeetham (University), Kasavanahalli, Carmelaram
P.O., Bangalore 560035, Karnataka, India
Prof. (Dr.) S. Rama Krishna Pisipaty, Prof & Geoarchaeologist, Head of the Department of
Sanskrit & Indian Culture, SCSVMV University, Enathur, Kanchipuram 631561, India
Prof. (Dr.) Shubhasheesh Bhattacharya, Professor & HOD(HR), Symbiosis Institute of
International Business (SIIB), Hinjewadi, Phase-I, Pune- 411 057
Prof. (Dr.) Vijay Kothari, Institute of Science, Nirma University, S-G Highway, Ahmedabad
382481, India.
Prof. (Dr.) Raja Sekhar Mamillapalli, Department of Civil Engineering at Sir Padampat Singhania
University, Udaipur, India.
Prof. (Dr.)B. M. Kunar, Department of Mining Engineering, Indian School of Mines, Dhanbad
826004, Jharkhand, India.
Prof. (Dr.) Prabir Sarkar, Assistant Professor, School of Mechanical, Materials and Energy
Engineering, Room 307, Academic Block, Indian Institute of Technology, Ropar, Nangal Road,
Rupnagar 140001, Punjab, India.
Prof. (Dr.) K.Srinivasmoorthy, Associate Professor, Department of Earth Sciences, School of
Physical,Chemical and Applied Sciences, Pondicherry university, R.Venkataraman Nagar, Kalapet,
Puducherry 605014, India.
Prof. (Dr.) Bhawna Dubey, Institute of Environmental Science (AIES), Amity University, Noida,
India.
Prof. (Dr.) P. Bhanu Prasad, Vision Specialist, Matrix Vision GmbH, Germany; Consultant, TIFAC-CORE
for Machine Vision; Advisor, Kelenn Technology, France; Advisor, Shubham Automation &
Services, Ahmedabad; and Professor of C.S.E., Rajalakshmi Engineering College, India.
Prof. (Dr.) P. Raviraj, Professor & Head, Dept. of CSE, Kalaignar Karunanidhi Institute of
Technology, Coimbatore-641402, Tamil Nadu, India.
Prof. (Dr.) Damodar Reddy Edla, Department of Computer Science & Engineering, Indian School
of Mines, Dhanbad, Jharkhand 826004, India.
Prof. (Dr.) T.C. Manjunath, Principal in HKBK College of Engg., Bangalore, Karnataka, India.
Prof. (Dr.) Pankaj Bhambri, I.T. Deptt., Guru Nanak Dev Engineering College, Ludhiana 141006,
Punjab, India .
Prof. Shashikant Shantilal Patil SVKM, MPSTME Shirpur Campus, NMIMS University Vile Parle
Mumbai, India.
Prof. (Dr.) Shambhu Nath Choudhary, Department of Physics, T.M. Bhagalpur University,
Bhagalpur 81200, Bihar, India.
Prof. (Dr.) Venkateshwarlu Sonnati, Professor & Head of EEED, Department of EEE, Sreenidhi
Institute of Science & Technology, Ghatkesar, Hyderabad, Andhra Pradesh, India.
Prof. (Dr.) Saurabh Dalela, Department of Pure & Applied Physics, University of Kota, KOTA
324010, Rajasthan, India.
Prof. S. Arman Hashemi Monfared, Department of Civil Eng, University of Sistan & Baluchestan,
Daneshgah St.,Zahedan, IRAN, P.C. 98155-987
Prof. (Dr.) R.S.Chanda, Dept. of Jute & Fibre Tech., University of Calcutta, Kolkata 700019, West
Bengal, India.
Prof. V.S.VAKULA, Department of Electrical and Electronics Engineering, JNTUK, University
College of Engg., Vizianagaram-535003, Andhra Pradesh, India.
Prof. (Dr.) Nehal Gitesh Chitaliya, Sardar Vallabhbhai Patel Institute of Technology, Vasad 388
306, Gujarat, India.
Prof. (Dr.) D.R. Prajapati, Department of Mechanical Engineering, PEC University of
Technology,Chandigarh 160012, India.
Dr. A. SENTHIL KUMAR, Postdoctoral Researcher, Centre for Energy and Electrical Power,
Electrical Engineering Department, Faculty of Engineering and the Built Environment, Tshwane
University of Technology, Pretoria 0001, South Africa.
Prof. (Dr.)Vijay Harishchandra Mankar, Department of Electronics & Telecommunication
Engineering, Govt. Polytechnic, Mangalwari Bazar, Besa Road, Nagpur- 440027, India.
Prof. Varun.G.Menon, Department Of C.S.E, S.C.M.S School of Engineering, Karukutty,
Ernakulam, Kerala 683544, India.
Prof. (Dr.) U C Srivastava, Department of Physics, Amity Institute of Applied Sciences, Amity
University, Noida, U.P-203301.India.
Prof. (Dr.) Surendra Yadav, Professor and Head (Computer Science & Engineering Department),
Maharashi Arvind College of Engineering and Research Centre (MACERC), Jaipur, Rajasthan,
India.
Prof. (Dr.) Sunil Kumar, H.O.D. Applied Sciences & Humanities Dehradun Institute of Technology,
(D.I.T. School of Engineering), 48 A K.P-3 Gr. Noida (U.P.) 201308
Prof. Naveen Jain, Dept. of Electrical Engineering, College of Technology and Engineering,
Udaipur-313 001, India.
Prof. Veera Jyothi.B, CBIT ,Hyderabad, Andhra Pradesh, India.
Prof. Aritra Ghosh, Global Institute of Management and Technology, Krishnagar, Nadia, W.B.
India
Prof. Anuj K. Gupta, Head, Dept. of Computer Science & Engineering, RIMT Group of Institutions,
Sirhind Mandi Gobindgarh, Punajb, India.
Prof. (Dr.) Varala Ravi, Head, Department of Chemistry, IIIT Basar Campus, Rajiv Gandhi
University of Knowledge Technologies, Mudhole, Adilabad, Andhra Pradesh- 504 107, India
Prof. (Dr.) Ravikumar C Baratakke, faculty of Biology,Govt. College, Saundatti - 591 126, India.
Prof. (Dr.) NALIN BHARTI, School of Humanities and Social Science, Indian Institute of
Technology Patna, India.
Prof. (Dr.) Shivanand S. Gornale, Head, Department of Studies in Computer Science, Government
College (Autonomous), Mandya-571 401, Karnataka, India.
Prof. (Dr.) Naveen.P.Badiger, Dept.Of Chemistry, S.D.M.College of Engg. & Technology,
Dharwad-580002, Karnataka State, India.
Prof. (Dr.) Bimla Dhanda, Professor & Head, Department of Human Development and Family
Studies, College of Home Science, CCS, Haryana Agricultural University, Hisar- 125001
(Haryana) India.
Prof. (Dr.) Tauqeer Ahmad Usmani, Faculty of IT, Salalah College of Technology, Salalah,
Sultanate of Oman,
Prof. (Dr.) Naresh Kr. Vats, Chairman, Department of Law, BGC Trust University Bangladesh
Prof. (Dr.) Papita Das (Saha), Department of Environmental Science, University of Calcutta,
Kolkata, India
Prof. (Dr.) Rekha Govindan , Dept of Biotechnology, Aarupadai Veedu Institute of technology ,
Vinayaka Missions University , Paiyanoor , Kanchipuram Dt, Tamilnadu , India
Prof. (Dr.) Lawrence Abraham Gojeh, Department of Information Science, Jimma University,
P.o.Box 378, Jimma, Ethiopia
Prof. (Dr.) M.N. Kalasad, Department of Physics, SDM College of Engineering & Technology,
Dharwad, Karnataka, India
Prof. Rab Nawaz Lodhi, Department of Management Sciences, COMSATS Institute of Information
Technology Sahiwal
Prof. (Dr.) Masoud Hajarian, Department of Mathematics, Faculty of Mathematical Sciences,
Shahid Beheshti University, General Campus, Evin, Tehran 19839,Iran
Prof. (Dr.) Chandra Kala Singh, Associate professor, Department of Human Development and
Family Studies, College of Home Science, CCS, Haryana Agricultural University, Hisar- 125001
(Haryana) India
Prof. (Dr.) J.Babu, Professor & Dean of research, St.Joseph's College of Engineering &
Technology, Choondacherry, Palai,Kerala.
Prof. (Dr.) Pradip Kumar Roy, Department of Applied Mechanics, Birla Institute of Technology
(BIT) Mesra, Ranchi-835215, Jharkhand, India.
Prof. (Dr.) P. Sanjeevi kumar, School of Electrical Engineering (SELECT), Vandalur Kelambakkam
Road, VIT University, Chennai, India.
Prof. (Dr.) Debasis Patnaik, BITS-Pilani, Goa Campus, India.
Prof. (Dr.) SANDEEP BANSAL, Associate Professor, Department of Commerce, I.G.N. College,
Haryana, India.
Dr. Radhakrishnan S V S, Department of Pharmacognosy, Faser Hall, The University of
Mississippi Oxford, MS-38655, USA
Prof. (Dr.) Megha Mittal, Faculty of Chemistry, Manav Rachna College of Engineering, Faridabad
(HR), 121001, India.
Prof. (Dr.) Mihaela Simionescu (BRATU), BUCHAREST, District no. 6, Romania, member of the
Romanian Society of Econometrics, Romanian Regional Science Association and General
Association of Economists from Romania
Prof. (Dr.) Atmani Hassan, Director Regional of Organization Entraide Nationale
Prof. (Dr.) Deepshikha Gupta, Dept. of Chemistry, Amity Institute of Applied Sciences,Amity
University, Sec.125, Noida, India
Prof. (Dr.) Muhammad Kamruzzaman, Department of Infectious Diseases, The University of
Sydney, Westmead Hospital, Westmead, NSW-2145.
Prof. (Dr.) Meghshyam K. Patil , Assistant Professor & Head, Department of Chemistry,Dr.
Babasaheb Ambedkar Marathwada University,Sub-Campus, Osmanabad- 413 501, Maharashtra,
INDIA
Prof. (Dr.) Ashok Kr. Dargar, Department of Mechanical Engineering, School of Engineering, Sir
Padampat Singhania University, Udaipur (Raj.)
Prof. (Dr.) Sudarson Jena, Dept. of Information Technology, GITAM University, Hyderabad, India
Prof. (Dr.) Jai Prakash Jaiswal, Department of Mathematics, Maulana Azad National Institute of
Technology Bhopal-India
Prof. (Dr.) S. Amutha, Dept. of Educational Technology, Bharathidasan University, Tiruchirappalli-620 023, Tamil Nadu, India.
Prof. (Dr.) R. HEMA KRISHNA, Environmental chemistry, University of Toronto, Canada.
Prof. (Dr.) B.Swaminathan, Dept. of Agrl.Economics, Tamil Nadu Agricultural University, India.
Prof. (Dr.) Meghshyam K. Patil, Assistant Professor & Head, Department of Chemistry, Dr.
Babasaheb Ambedkar Marathwada University, Sub-Campus, Osmanabad- 413 501, Maharashtra,
INDIA
Prof. (Dr.) K. Ramesh, Department of Chemistry, C .B . I. T, Gandipet, Hyderabad-500075
Prof. (Dr.) Sunil Kumar, H.O.D., Applied Sciences & Humanities, JIMS Technical Campus (I.P.
University, New Delhi), 48/4, K.P.-3, Gr. Noida (U.P.).
Prof. (Dr.) G.V.S.R.Anjaneyulu, CHAIRMAN - P.G. BOS in Statistics & Deputy Coordinator UGC
DRS-I Project, Executive Member ISPS-2013, Department of Statistics, Acharya Nagarjuna
University, Nagarjuna Nagar-522510, Guntur, Andhra Pradesh, India
Prof. (Dr.) Sribas Goswami, Department of Sociology, Serampore College, Serampore 712201,
West Bengal, India.
Prof. (Dr.) Sunanda Sharma, Department of Veterinary Obstetrics & Gynecology, College of
Veterinary & Animal Science, Rajasthan University of Veterinary & Animal Sciences, Bikaner-334001, India.
Prof. (Dr.) S.K. Tiwari, Department of Zoology, D.D.U. Gorakhpur University, Gorakhpur-273009
U.P., India.
Prof. (Dr.) Praveena Kuruva, Materials Research Centre, Indian Institute of Science, Bangalore-560012, INDIA
Prof. (Dr.) Rajesh Kumar, Department Of Applied Physics , Bhilai Institute Of Technology, Durg
(C.G.) 491001
Prof. (Dr.) Y.P.Singh, (Director), Somany (PG) Institute of Technology and Management, Garhi
Bolni Road, Delhi-Jaipur Highway No. 8, Beside 3 km from City Rewari, Rewari-123401, India.
Prof. (Dr.) MIR IQBAL FAHEEM, VICE PRINCIPAL &HEAD- Department of Civil Engineering &
Professor of Civil Engineering, Deccan College of Engineering & Technology, Dar-us-Salam,
Aghapura, Hyderabad (AP) 500 036.
Prof. (Dr.) Jitendra Gupta, Regional Head, Co-ordinator(U.P. State Representative)& Asstt. Prof.,
(Pharmaceutics), Institute of Pharmaceutical Research, GLA University, Mathura.
Prof. (Dr.) N. Sakthivel, Scientist - C,Research Extension Center,Central Silk Board, Government
of India, Inam Karisal Kulam (Post), Srivilliputtur - 626 125,Tamil Nadu, India.
Prof. (Dr.) Omprakash Srivastav, Centre of Advanced Study, Department of History, Aligarh
Muslim University, Aligarh-202 001, INDIA.
Prof. (Dr.) K.V.L.N.Acharyulu, Associate Professor, Department of Mathematics, Bapatla
Engineering college, Bapatla-522101, INDIA.
Prof. (Dr.) Fateh Mebarek-Oudina, Assoc. Prof., Sciences Faculty,20 aout 1955-Skikda University,
B.P 26 Route El-Hadaiek, 21000,Skikda, Algeria.
NagaLaxmi M. Raman, Project Support Officer, Amity International Centre for Postharvest,
Technology & Cold Chain Management, Amity University Campus, Sector-125, Expressway, Noida
Prof. (Dr.) V.SIVASANKAR, Associate Professor, Department Of Chemistry, Thiagarajar College Of
Engineering (Autonomous), Madurai 625015, Tamil Nadu, India
(Dr.) Ramkrishna Singh Solanki, School of Studies in Statistics, Vikram University, Ujjain, India
Prof. (Dr.) M.A.Rabbani, Professor/Computer Applications, School of Computer, Information and
Mathematical Sciences, B.S.Abdur Rahman University, Chennai, India
Prof. (Dr.) P.P.Satya Paul Kumar, Associate Professor, Physical Education & Sports Sciences,
University College of Physical Education & Sports, Sciences, Acharya Nagarjuna University,
Guntur.
Prof. (Dr.) Fazal Shirazi, PostDoctoral Fellow, Infectious Disease, MD Anderson Cancer Center,
Houston, Texas, USA
Prof. (Dr.) Omprakash Srivastav, Department of Museology, Aligarh Muslim University, Aligarh-202 001, INDIA.
Prof. (Dr.) Mandeep Singh walia, A.P. E.C.E., Panjab University SSG Regional Centre Hoshiarpur,
Una Road, V.P.O. Allahabad, Bajwara, Hoshiarpur
Prof. (Dr.) Ho Soon Min, Senior Lecturer, Faculty of Applied Sciences, INTI International
University, Persiaran Perdana BBN, Putra Nilai, 71800 Nilai, Negeri Sembilan, Malaysia
Prof. (Dr.) L.Ganesamoorthy, Assistant Professor in Commerce, Annamalai University, Annamalai
Nagar-608002, Chidambaram, Tamilnadu, India.
Prof. (Dr.) Vuda Sreenivasarao, Professor, School of Computing and Electrical Engineering, Bahir
Dar University, Bahirdar,Ethiopia
Prof. (Dr.) Umesh Sharma, Professor & HOD Applied Sciences & Humanities, Eshan college of
Engineering, Mathura, India.
Prof. (Dr.) K. John Singh, School of Information Technology and Engineering, VIT University,
Vellore, Tamil Nadu, India.
Prof. (Dr.) Sita Ram Pal (Asst.Prof.), Dept. of Special Education, Dr.BAOU, Ahmedabad, India.
Prof. Vishal S.Rana, H.O.D, Department of Business Administration, S.S.B.T'S College of
Engineering & Technology, Bambhori,Jalgaon (M.S), India.
Prof. (Dr.) Chandrakant Badgaiyan, Department of Mechatronics and Engineering, Chhattisgarh.
Dr. (Mrs.) Shubhrata Gupta, Prof. (Electrical), NIT Raipur, India.
Prof. (Dr.) Usha Rani. Nelakuditi, Assoc. Prof., ECE Deptt., Vignan’s Engineering College, Vignan
University, India.
Prof. (Dr.) S. Swathi, Asst. Professor, Department of Information Technology, Vardhaman college
of Engineering(Autonomous) , Shamshabad, R.R District, India.
Prof. (Dr.) Raja Chakraverty, M Pharm (Pharmacology), BCPSR, Durgapur, West Bengal, India
Prof. (Dr.) P. Sanjeevi Kumar, Electrical & Electronics Engineering, National Institute of
Technology (NIT-Puducherry), An Institute of National Importance under MHRD (Govt. of India),
Karaikal- 609 605, India.
Prof. (Dr.) Amitava Ghosh, Professor & Principal, Bengal College of Pharmaceutical Sciences and
Research, B.R.B. Sarani, Bidhannagar, Durgapur, West Bengal- 713212.
Prof. (Dr.) Om Kumar Harsh, Group Director, Amritsar College of Engineering and Technology,
Amritsar 143001 (Punjab), India.
Prof. (Dr.) Mansoor Maitah, Department of International Relations, Faculty of Economics and
Management, Czech University of Life Sciences Prague, 165 21 Praha 6 Suchdol, Czech Republic.
Prof. (Dr.) Zahid Mahmood, Department of Management Sciences (Graduate Studies), Bahria
University, Naval Complex, Sector, E-9, Islamabad, Pakistan.
Prof. (Dr.) N. Sandeep, Faculty Division of Fluid Dynamics, VIT University, Vellore-632 014.
Mr. Jiban Shrestha, Scientist (Plant Breeding and Genetics), Nepal Agricultural Research Council,
National Maize Research Program, Rampur, Chitwan, Nepal.
Prof. (Dr.) Rakhi Garg, Banaras Hindu University, Varanasi, Uttar Pradesh, India.
Prof. (Dr.) Ramakant Pandey. Dept. of Biochemistry. Patna University Patna (Bihar)-India.
Prof. (Dr.) Nalah Augustine Bala, Behavioural Health Unit, Psychology Department, Nasarawa
State University, Keffi, P.M.B. 1022 Keffi, Nasarawa State, Nigeria.
Prof. (Dr.) Mehdi Babaei, Department of Engineering, Faculty of Civil Engineering, University of
Zanjan, Iran.
Prof. (Dr.) A. SENTHIL KUMAR., Professor/EEE, VELAMMAL ENGINEERING COLLEGE, CHENNAI
Prof. (Dr.) Gudikandhula Narasimha Rao, Dept. of Computer Sc. & Engg., KKR & KSR Inst Of
Tech & Sciences, Guntur, Andhra Pradesh, India.
Prof. (Dr.) Dhanesh singh, Department of Chemistry, K.G. Arts & Science College, Raigarh (C.G.)
India.
Prof. (Dr.) Syed Umar , Dept. of Electronics and Computer Engineering, KL University, Guntur,
A.P., India.
Prof. (Dr.) Rachna Goswami, Faculty in Bio-Science Department, IIIT Nuzvid (RGUKT), District Krishna, Andhra Pradesh - 521201
Prof. (Dr.) Ahsas Goyal, FSRHCP, Founder & Vice president of Society of Researchers and Health
Care Professionals
Prof. (Dr.) Gagan Singh, School of Management Studies and Commerce, Department of
Commerce, Uttarakhand Open University, Haldwani-Nainital, Uttarakhand (UK)-263139 (India)
Prof. (Dr.) Solomon A. O. Iyekekpolor, Mathematics and Statistics, Federal University, Wukari, Nigeria.
Prof. (Dr.) S. Saiganesh, Faculty of Marketing, Dayananda Sagar Business School, Bangalore,
India.
Dr. K.C.Sivabalan, Field Enumerator and Data Analyst, Asian Vegetable Research Centre, The
World Vegetable Centre, Taiwan
Prof. (Dr.) Amit Kumar Mishra, Department of Environmntal Science and Energy Research,
Weizmann Institute of Science, Rehovot, Israel
Prof. (Dr.) Manisha N. Paliwal, Sinhgad Institute of Management, Vadgaon (Bk), Pune, India
Prof. (Dr.) M. S. HIREMATH, Principal, K.L.E. SOCIETY'S SCHOOL, ATHANI, India
Prof. Manoj Dhawan, Department of Information Technology, Shri Vaishnav Institute of
Technology & Science, Indore, (M. P.), India
Prof. (Dr.) V.R. Naik, Professor & Head of Department, Mechanical Engineering, Textile &
Engineering Institute, Ichalkaranji (Dist. Kolhapur), Maharashtra, India
Prof. (Dr.) Jyotindra C. Prajapati,Head, Department of Mathematical Sciences, Faculty of Applied
Sciences, Charotar University of Science and Technology, Changa Anand -388421, Gujarat, India
Prof. (Dr.) Sarbjit Singh, Head, Department of Industrial & Production Engineering, Dr BR
Ambedkar National Institute of Technology, Jalandhar, Punjab,India
Prof. (Dr.) Braja Gopal Bag, Department of Chemistry and Chemical Technology,
Vidyasagar University, West Midnapore
Prof. (Dr.) Ashok Kumar Chandra, Department of Management, Bhilai Institute of Technology,
Bhilai House, Durg (C.G.)
Prof. (Dr.) Amit Kumar, Assistant Professor, School of Chemistry, Shoolini University, Solan,
Himachal Pradesh, India
Prof. (Dr.) L. Suresh Kumar, Mechanical Department, Chaitanya Bharathi Institute of Technology,
Hyderabad, India.
Scientist Sheeraz Saleem Bhat, Lac Production Division, Indian Institute of Natural Resins and
Gums, Namkum, Ranchi, Jharkhand
Prof. C.Divya , Centre for Information Technology and Engineering, Manonmaniam Sundaranar
University, Tirunelveli - 627012, Tamilnadu , India
Prof. T.D.Subash, Infant Jesus College Of Engineering and Technology, Thoothukudi Tamilnadu,
India
Prof. (Dr.) Vinay Nassa, Prof. E.C.E Deptt., Dronacharya.Engg. College, Gurgaon India.
Prof. Sunny Narayan, university of Roma Tre, Italy.
Prof. (Dr.) Sanjoy Deb, Dept. of ECE, BIT Sathy, Sathyamangalam, Tamilnadu-638401, India.
Prof. (Dr.) Reena Gupta, Institute of Pharmaceutical Research, GLA University, Mathura-India
Prof. (Dr.) P.R.SivaSankar, Head Dept. of Commerce, Vikrama Simhapuri University Post
Graduate Centre, KAVALI - 524201, A.P., India
Prof. (Dr.) Mohsen Shafiei Nikabadi, Faculty of Economics and Management, Industrial
Management Department, Semnan University, Semnan, Iran.
Prof. (Dr.) Praveen Kumar Rai, Department of Geography, Faculty of Science, Banaras Hindu
University, Varanasi-221005, U.P. India
Prof. (Dr.) Christine Jeyaseelan, Dept of Chemistry, Amity Institute of Applied Sciences, Amity
University, Noida, India
Prof. (Dr.) M A Rizvi, Dept. of Computer Engineering and Applications , National Institute of
Technical Teachers' Training and Research, Bhopal M.P. India
Prof. (Dr.) K.V.N.R.Sai Krishna, H O D in Computer Science, S.V.R.M.College,(Autonomous),
Nagaram, Guntur(DT), Andhra Pradesh, India.
Prof. (Dr.) Ashok Kr. Dargar, Department of Mechanical Engineering, School of Engineering, Sir
Padampat Singhania University, Udaipur (Raj.)
Prof. (Dr.) Asim Kumar Sen, Principal , ST.Francis Institute of Technology (Engineering College)
under University of Mumbai , MT. Poinsur, S.V.P Road, Borivali (W), Mumbai, 400103, India,
Prof. (Dr.) Rahmathulla Noufal.E, Civil Engineering Department, Govt.Engg.College-Kozhikode
Prof. (Dr.) N.Rajesh, Department of Agronomy, TamilNadu Agricultural University -Coimbatore,
TamilNadu, India
Prof. (Dr.) Har Mohan Rai, Professor, Electronics and Communication Engineering, N.I.T.
Kurukshetra 136131,India
Prof. (Dr.) Eng. Sutasn Thipprakmas, King Mongkut's University of Technology Thonburi,
Thailand
Prof. (Dr.) Kantipudi MVV Prasad, EC Department, RK University, Rajkot.
Prof. (Dr.) Jitendra Gupta,Faculty of Pharmaceutics, Institute of Pharmaceutical Research, GLA
University, Mathura.
Prof. (Dr.) Swapnali Borah, HOD, Dept of Family Resource Management, College of Home
Science, Central Agricultural University, Tura, Meghalaya, India
Prof. (Dr.) N.Nazar Khan, Professor in Chemistry, BTK Institute of Technology, Dwarahat-263653
(Almora), Uttarakhand-India
Prof. (Dr.) Rajiv Sharma, Department of Ocean Engineering, Indian Institute of Technology
Madras, Chennai (TN) - 600 036, India.
Prof. (Dr.) Aparna Sarkar, Ph.D. Physiology, AIPT, Amity University, F 1 Block, LGF, Sector-125, Noida-201303, UP, India.
Prof. (Dr.) Manpreet Singh, Professor and Head, Department of Computer Engineering, Maharishi
Markandeshwar University, Mullana, Haryana, India.
Prof. (Dr.) Sukumar Senthilkumar, Senior Researcher, Advanced Education Center of Jeonbuk for
Electronics and Information Technology, Chon Buk National University, Chon Buk, 561-756,
SOUTH KOREA.
Prof. (Dr.) Hari Singh Dhillon, Assistant Professor, Department of Electronics and Communication
Engineering, DAV Institute of Engineering and Technology, Jalandhar (Punjab), INDIA.
Prof. (Dr.) Poonkuzhali, G., Department of Computer Science and Engineering, Rajalakshmi
Engineering College, Chennai, INDIA.
Prof. (Dr.) Bharath K N, Assistant Professor, Dept. of Mechanical Engineering, GM Institute of
Technology, PB Road, Davangere 577006, Karnataka, India.
Prof. (Dr.) F.Alipanahi, Assistant Professor, Islamic Azad University, Zanjan Branch, Atemadeyeh,
Moalem Street, Zanjan IRAN.
Prof. Yogesh Rathore, Assistant Professor, Dept. of Computer Science & Engineering, RITEE,
Raipur, India
Prof. (Dr.) Ratneshwer, Department of Computer Science (MMV),Banaras Hindu University
Varanasi-221005, India.
Prof. Pramod Kumar Pandey, Assistant Professor, Department Electronics & Instrumentation
Engineering, ITM University, Gwalior, M.P., India.
Prof. (Dr.)Sudarson Jena, Associate Professor, Dept.of IT, GITAM University, Hyderabad, India
Prof. (Dr.) Binod Kumar, PhD(CS), M.Phil(CS), MIEEE,MIAENG, Dean & Professor( MCA),
Jayawant Technical Campus(JSPM's), Pune, India.
Prof. (Dr.) Mohan Singh Mehata, (JSPS fellow), Assistant Professor, Department of Applied
Physics, Delhi Technological University, Delhi
Prof. Ajay Kumar Agarwal, Asstt. Prof., Deptt. of Mech. Engg., Royal Institute of Management &
Technology, Sonipat (Haryana), India.
Prof. (Dr.) Siddharth Sharma, University School of Management, Kurukshetra University,
Kurukshetra, India.
Prof. (Dr.) Satish Chandra Dixit, Department of Chemistry, D.B.S. College, Govind Nagar, Kanpur-208006, India.
Prof. (Dr.) Ajay Solkhe, Department of Management, Kurukshetra University, Kurukshetra, India.
Prof. (Dr.) Neeraj Sharma, Asst. Prof. Dept. of Chemistry, GLA University, Mathura, India.
Prof. (Dr.) Basant Lal, Department of Chemistry, G.L.A. University, Mathura, India.
Prof. (Dr.) T Venkat Narayana Rao, C.S.E, Guru Nanak Engineering College, Hyderabad, Andhra
Pradesh, India.
Prof. (Dr.) Rajanarender Reddy Pingili, S.R. International Institute of Technology, Hyderabad,
Andhra Pradesh, India.
Prof. (Dr.) V.S.Vairale, Department of Computer Engineering, All India Shri Shivaji Memorial
Society College of Engineering, Kennedy Road, Pune-411 001, Maharashtra, India.
Prof. (Dr.) Vasavi Bande, Department of Computer Science & Engineering, Netaji Institute of
Engineering and Technology, Hyderabad, Andhra Pradesh, India
Prof. (Dr.) Hardeep Anand, Department of Chemistry, Kurukshetra University Kurukshetra,
Haryana, India.
Prof. Aasheesh shukla, Asst Professor, Dept. of EC, GLA University, Mathura, India.
Prof. S.P.Anandaraj., CSE Dept, SREC, Warangal, India.
Prof. (Dr.) Chitranjan Agrawal, Department of Mechanical Engineering, College of Technology &
Engineering, Maharana Pratap University of Agriculture & Technology, Udaipur- 313001,
Rajasthan, India.
Prof. (Dr.) Rangnath Aher, Principal, New Arts, Commerce and Science College, Parner, Dist. Ahmednagar, M.S., India.
Prof. (Dr.) Chandan Kumar Panda, Department of Agricultural Extension, College of Agriculture,
Tripura, Lembucherra-799210
Prof. (Dr.) Latika Kharb, IP Faculty (MCA Deptt), Jagan Institute of Management Studies (JIMS),
Sector-5, Rohini, Delhi, India.
Raj Mohan Raja Muthiah, Harvard Medical School, Massachusetts General Hospital, Boston,
Massachusetts.
Prof. (Dr.) Chhanda Chatterjee, Dept of Philosophy, Balurghat College, West Bengal, India.
Prof. (Dr.) Mihir Kumar Shome , H.O.D of Mathematics, Management and Humanities, National
Institute of Technology, Arunachal Pradesh, India
Prof. (Dr.) Muthukumar .Subramanyam, Registrar (I/C), Faculty, Computer Science and
Engineering, National Institute of Technology, Puducherry, India.
Prof. (Dr.) Vinay Saxena, Department of Mathematics, Kisan Postgraduate College, Bahraich –
271801 UP, India.
Satya Rishi Takyar, Senior ISO Consultant, New Delhi, India.
Prof. Anuj K. Gupta, Head, Dept. of Computer Science & Engineering, RIMT Group of Institutions,
Mandi Gobindgarh (PB)
Prof. (Dr.) Harish Kumar, Department of Sports Science, Punjabi University, Patiala, Punjab,
India.
Prof. (Dr.) Mohammed Ali Hussain, Professor, Dept. of Electronics and Computer Engineering, KL
University, Green Fields, Vaddeswaram, Andhra Pradesh, India.
Prof. (Dr.) Manish Gupta, Department of Mechanical Engineering, GJU, Haryana, India.
Prof. Mridul Chawla, Department of Elect. and Comm. Engineering, Deenbandhu Chhotu Ram
University of Science & Technology, Murthal, Haryana, India.
Prof. Seema Chawla, Department of Bio-medical Engineering, Deenbandhu Chhotu Ram
University of Science & Technology, Murthal, Haryana, India.
Prof. (Dr.) Atul M. Gosai, Department of Computer Science, Saurashtra University, Rajkot,
Gujarat, India.
Prof. (Dr.) Ajit Kr. Bansal, Department of Management, Shoolini University, H.P., India.
Prof. (Dr.) Sunil Vasistha, Mody Institute of Technology and Science, Sikar, Rajasthan, India.
Prof. Vivekta Singh, GNIT Girls Institute of Technology, Greater Noida, India.
Prof. Ajay Loura, Assistant Professor at Thapar University, Patiala, India.
Prof. Sushil Sharma, Department of Computer Science and Applications, Govt. P. G. College,
Ambala Cantt., Haryana, India.
Prof. Sube Singh, Assistant Professor, Department of Computer Engineering, Govt. Polytechnic,
Narnaul, Haryana, India.
Prof. Himanshu Arora, Delhi Institute of Technology and Management, New Delhi, India.
Dr. Sabina Amporful, Bibb Family Practice Association, Macon, Georgia, USA.
Dr. Pawan K. Monga, Jindal Institute of Medical Sciences, Hisar, Haryana, India.
Dr. Sam Ampoful, Bibb Family Practice Association, Macon, Georgia, USA.
Dr. Nagender Sangra, Director of Sangra Technologies, Chandigarh, India.
Vipin Gujral, CPA, New Jersey, USA.
Sarfo Baffour, University of Ghana, Ghana.
Monique Vincon, Hype Softwaretechnik GmbH, Bonn, Germany.
Natasha Sigmund, Atlanta, USA.
Marta Trochimowicz, Rhein-Zeitung, Koblenz, Germany.
Kamalesh Desai, Atlanta, USA.
Vijay Attri, Software Developer Google, San Jose, California, USA.
Neeraj Khillan, Wipro Technologies, Boston, USA.
Ruchir Sachdeva, Software Engineer at Infosys, Pune, Maharashtra, India.
Anadi Charan, Senior Software Consultant at Capgemini, Mumbai, Maharashtra.
Pawan Monga, Senior Product Manager, LG Electronics India Pvt. Ltd., New Delhi, India.
Sunil Kumar, Senior Information Developer, Honeywell Technology Solutions, Inc., Bangalore,
India.
Bharat Gambhir, Technical Architect, Tata Consultancy Services (TCS), Noida, India.
Vinay Chopra, Team Leader, Access Infotech Pvt Ltd. Chandigarh, India.
Sumit Sharma, Team Lead, American Express, New Delhi, India.
Vivek Gautam, Senior Software Engineer, Wipro, Noida, India.
Anirudh Trehan, Nagarro Software Gurgaon, Haryana, India.
Manjot Singh, Senior Software Engineer, HCL Technologies Delhi, India.
Rajat Adlakha, Senior Software Engineer, Tech Mahindra Ltd, Mumbai, Maharashtra, India.
Mohit Bhayana, Senior Software Engineer, Nagarro Software Pvt. Gurgaon, Haryana, India.
Dheeraj Sardana, Tech. Head, Nagarro Software, Gurgaon, Haryana, India.
Naresh Setia, Senior Software Engineer, Infogain, Noida, India.
Raj Agarwal Megh, Idhasoft Limited, Pune, Maharashtra, India.
Shrikant Bhardwaj, Senior Software Engineer, Mphasis an HP Company, Pune, Maharashtra,
India.
Vikas Chawla, Technical Lead, Xavient Software Solutions, Noida, India.
Kapoor Singh, Sr. Executive at IBM, Gurgaon, Haryana, India.
Ashwani Rohilla, Senior SAP Consultant at TCS, Mumbai, India.
Anuj Chhabra, Sr. Software Engineer, McKinsey & Company, Faridabad, Haryana, India.
Jaspreet Singh, Business Analyst at HCL Technologies, Gurgaon, Haryana, India.
TOPICS OF INTEREST
Topics of interest include, but are not limited to, the following:
Software architectures for scientific computing
Computer architecture & VLSI
Mobile robots
Artificial intelligence systems and architectures
Distributed and parallel processing
Microcontrollers & microprocessor applications
Natural language processing and expert systems
Fuzzy logic and soft computing
Semantic Web
e-Learning design and methodologies
Knowledge and information management techniques
Enterprise Applications for software and Web engineering
Open-source e-Learning platforms
Internet payment systems
Advanced Web service technologies including security, process management and QoS
Web retrieval systems
Software and multimedia Web
Advanced database systems
Software testing, verifications and validation methods
UML/MDA and AADL
e-Commerce applications using Web services
Semantic Web for e-Business and e-Learning
Object oriented technology
Software and Web metrics
Techniques for B2B e-Commerce
e-Business models and architectures
Service-oriented e-Commerce
Enterprise-wide client-server architectures
Software maintenance and evolution
Component based software engineering
Multimedia and hypermedia software engineering
Enterprise software, middleware, and tools Service oriented software architecture
Model based software engineering
Information systems analysis and specification
Aspect-oriented programming
Web-based learning, wikis and blogs
Social networks and intelligence
Social science simulation
TABLE OF CONTENTS
International Journal of Software and Web Sciences (IJSWS)
ISSN (Print): 2279-0063, ISSN (Online): 2279-0071
(May-2015, Special Issue No. 1, Volume 1)
Special Issue No. 1, Volume 1
Paper Code | Paper Title | Authors | Page No.
IJSWS 15-301 | GREEN COMPUTING: AN APPROACH TO GO ECO-FRIENDLY | Saumya Agrawal, Ayushi Sharma | 01-05
IJSWS 15-302 | SEMANTIC BASED SEARCH ENGINE | Shreya Chauhan, Tanya Arora | 06-09
IJSWS 15-303 | SIMULTANEOUS LOCALIZATION AND MAPPING | Pawan Srivastava, Paawan Mishra | 10-13
IJSWS 15-304 | LI-FI: NEXT TO WI-FI, OPTICAL NETWORKING | Surjeet Singh, Shivam Jain | 14-17
IJSWS 15-305 | 3D INTERNET | Shubham Kumar Sinha, Sunit Tiwari | 18-22
IJSWS 15-306 | APPLICATION OF CLUSTERING TECHNIQUES FOR IMAGE SEGMENTATION | Nidhi Maheshwari, Shivangi Pathak | 23-28
IJSWS 15-308 | BIG DATA: INFORMATION SECURITY AND PRIVACY | Shubham Mittal, Shubham Varshney | 29-32
IJSWS 15-309 | REVOLUTIONIZING WIRELESS NETWORKS: FEMTOCELLS | Somendra Singh, Raj K Verma | 33-35
IJSWS 15-310 | CROSS-PLATFORM MOBILE WEB APPLICATIONS USING HTML5 | Rohit Chaudhary, Shashwat Singh | 36-40
IJSWS 15-311 | CLOUD COMPUTING: AN ANALYSIS OF THREATS AND SECURITIES | Sakshi Sharma, Akshay Singh | 41-49
IJSWS 15-312 | REAL TIME TRAFFIC LIGHT CONTROLLER USING IMAGE PROCESSING | Yash Gupta, Shivani Sharma | 50-54
IJSWS 15-314 | COMPUTATIONAL AND ARTIFICIAL INTELLIGENCE IN GAMING | Shubham Dixit, Nikhilendra Kishore Pandey | 55-60
IJSWS 15-316 | SOLVING TRAVELLING SALESMAN PROBLEM BY USING ANT COLONY OPTIMIZATION ALGORITHM | Priyansha Mishra, Shraddha Srivastava | 61-64
IJSWS 15-317 | THE FUTURE OF AUGMENTED REALITY IN OUR DAILY LIVES | Nipun Gupta, Utkarsh Rawat | 65-69
IJSWS 15-318 | DIGITAL WATERMARKING FOR RIGHTFUL OWNERSHIP AND COPYRIGHT PROTECTION | Pooja Kumari, Vivek Kumat Giri | 70-77
IJSWS 15-319 | MOBILE CROWDSENSING - CURRENT STATE AND FUTURE CHALLENGES | Pulkit Chaurasia, Prabhat Kumar | 78-81
IJSWS 15-321 | STEGANOGRAPHY IN AUDIO FILES | Nalin Gupta, Sarvanand Pandey | 82-86
IJSWS 15-322 | DIGITAL IMAGE PROCESSING | Sandeep Singh, Mayank Saxena | 87-93
IJSWS 15-323 | SECURITY THREATS USING CLOUD COMPUTING | Satyam Rai, Siddhi Saxena | 94-96
IJSWS 15-325 | REVIEW OF DATA MINING TECHNIQUES | Sandeep Panghal, Priyanka Yadav | 97-102
IJSWS 15-326 | BUDGET BASED SEARCH ADVERTISEMENT | Vijay Kumar, Rahul Kumar Gupta | 103-109
IJSWS 15-327 | MOBILE SOFTWARE AGENTS FOR WIRELESS NETWORK MAPPING AND DYNAMIC ROUTING | Shivangi Saraswat, Soniya Chauhan | 110-113
IJSWS 15-328 | BRAIN COMPUTING INTERFACE | Shivam Sinha, Sachin Kumar | 114-119
IJSWS 15-329 | MOUSE CONTROL USING A WEB CAMERA BASED ON COLOUR DETECTION | Vinay Kumar Pasi, Saurabh Singh | 120-125
IJSWS 15-330 | SECURITY THREATS TO HOME AUTOMATION SYSTEM | Priyaranjan Yadav, Vishesh Saxena | 126-129
International Association of Scientific Innovation and Research (IASIR)
(An Association Unifying the Sciences, Engineering, and Applied Research)
ISSN (Print): 2279-0063
ISSN (Online): 2279-0071
International Journal of Software and Web Sciences (IJSWS)
www.iasir.net
GREEN COMPUTING: AN APPROACH TO GO ECO-FRIENDLY
Saumya Agrawal1, Ayushi Sharma2
Student of Computer Science & Engineering,
Department of Computer Science and Engineering,
IMS Engineering College, Ghaziabad, 201009, Uttar Pradesh, INDIA
_________________________________________________________________________________
Abstract: Global warming has become a serious issue throughout the world, and green computing is one way to cope with it; going green is now in everyone's best interest. The term green computing describes the study and practice of efficient and eco-friendly computing. In this paper, we report on awareness of green computing, present several initiatives to go green that are currently under way in the computer industry together with the issues that have been raised regarding these initiatives, and present a study of the future of green computing.
__________________________________________________________________________________________
I. INTRODUCTION
Computers have become an essential part of our lives, in business as well as at home. As technology advances, newer and faster computers are introduced every year as companies rush to gain market share and improve profit margins [7]. Computers have made our lives easier, but the other side of this coin brings many problems with it. A solution to those problems is green computing: the study and practice of efficient and eco-friendly computing resources [1] to protect the environment and to save energy. Green computing not only helps in protecting the environment but also helps in reducing cost. Green computing simply means using resources efficiently [3]. Its goals are to maximize energy efficiency during a product's lifetime, reduce the use of hazardous materials, and promote the recyclability or biodegradability of defunct products and factory waste. Such approaches include the use of energy-conserving central processing units (CPUs), servers and peripherals, as well as reduced resource consumption and proper disposal of electronic waste (e-waste) [6]. One of the earliest initiatives toward green computing was the voluntary labeling program known as "Energy Star", which was introduced to promote energy efficiency in hardware of all types [2]. The IT industry has begun to address energy consumption in the data center through a variety of approaches, including more efficient cooling systems, blade servers, virtualization, and storage area networks. Nowadays, companies and even consumers are demanding eco-friendly products [4], and IT vendors and manufacturers are designing devices that are energy efficient and eco-friendly. The five core green computing technologies advocated by the GCI (Green Computing Initiative) are Green Data Center, Virtualization, Cloud Computing, Power Optimization and Grid Computing [1].
II. LITERATURE REVIEW
Green computing is a recent trend towards building, designing and operating computer systems to be energy efficient and eco-friendly. When it comes to PC disposal, users need to understand what green computing involves. The green movement gained momentum quite a few years back, when the realization that the environment is not a renewable resource hit home and people started to accept that they had to do something to protect it. Energy Star reduces the amount of energy consumed by a product by automatically switching it into sleep mode when not in use, or by reducing the power used in standby mode. At its core, green computing is the efficient use of computers; the idea is to make the computer a "green product". A good example of green computing is the mobile phone, which can do almost everything that a computer does [1].
III. WHY GREEN COMPUTING
Green computing is an effective approach to protecting our environment from hazardous materials and the effects that come from computers and related devices. It is the study of environmentally sound manufacturing, use, disposal and recycling of computers and other electronic devices. Its aims are to reduce the use of hazardous materials, maximize energy efficiency during a product's lifetime, and promote the recyclability or biodegradability of defunct products and factory waste [5].
IV. ACTIVITIES OF GREEN COMPUTING
1. Energy-intensive manufacturing of computer parts can be minimized by making the manufacturing process more energy efficient.
2. Lead can be replaced by silver and copper to reduce the use of toxic materials.
3. Avoiding discards saves the energy that would otherwise go into manufacturing a whole new computer.
4. Power-hungry displays can be replaced with green displays made of OLEDs (organic light-emitting diodes).
5. Landfill waste can be controlled by making the best use of a device, upgrading and repairing it in time, with a need to make such processes (i.e., upgrading and repairing) easier and cheaper [2].
V. APPLICATION OF GREEN COMPUTING
1. Blackle:
Blackle is a search-engine site based on the fact that displaying different colours consumes different amounts of energy on computer monitors. It is powered by Google Search and was a good early implementation of green computing.
2. Sun Ray thin client:
The Sun Ray, a thin desktop client from Sun Microsystems, attracted considerable customer interest. Thin clients like the Sun Ray consume far less electricity than conventional desktops: most of the heavy computation is performed by a server, so a Sun Ray on a desktop consumes only 4 to 8 watts of power.
3. Fit-PC:
The Fit-PC is the size of a paperback, absolutely silent, and capable of running Windows XP or Linux while drawing only 5 W. It is designed to fit where a standard PC is too bulky, noisy and power hungry, making it a compact, quiet and green choice.
4. Other ultra-portables:
The "ultra-portable" class of personal computers is characterized by small size, a fairly low-power CPU, a compact screen and low cost. These factors combine to let them run more efficiently and use less power [2], which furthers green computing.
VI. APPROACHES TO GREEN COMPUTING
A. POWER OPTIMISATION [8]
This is a new approach to reducing energy utilization in data centers. It relies on consolidating services dynamically onto a subset of the available servers and temporarily shutting down the remaining servers in order to conserve energy. The authors present initial work on a probabilistic service dispatch algorithm that aims at minimizing the number of running servers. Given the estimated energy consumption and projected growth of data centers, the proposed effort has the potential to positively impact energy consumption.
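The dispatch algorithm itself is not reproduced here; purely as an illustration of the consolidation idea, the following Python sketch greedily packs service loads onto as few servers as possible and reports which servers could be powered down. The service names, loads and capacity figures are invented for the example and are not taken from the cited work.

```python
# Illustrative sketch (not the cited algorithm): greedily consolidate service
# loads onto a minimal set of servers so the remaining servers can be shut down.
# Capacities and loads are made-up example values.

def consolidate(services, server_capacity, num_servers):
    """Return a mapping server_index -> list of (service, load) placements."""
    placement = {i: [] for i in range(num_servers)}
    used = [0.0] * num_servers
    # Place the largest services first (first-fit decreasing heuristic).
    for name, load in sorted(services.items(), key=lambda kv: -kv[1]):
        for i in range(num_servers):
            if used[i] + load <= server_capacity:
                placement[i].append((name, load))
                used[i] += load
                break
        else:
            raise RuntimeError(f"no server can host {name}")
    return placement

if __name__ == "__main__":
    services = {"web": 0.35, "db": 0.50, "cache": 0.20, "batch": 0.15, "mail": 0.10}
    placement = consolidate(services, server_capacity=1.0, num_servers=4)
    idle = [i for i, svc in placement.items() if not svc]
    print("servers that can be shut down to save energy:", idle)
```

In this toy setting the five services fit on two of the four servers, so the other two can be switched off until demand grows.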
B. CLOUD LOAD BALANCING [9]
The goal of load balancing is to minimize resource consumption, which in turn reduces the energy consumption and carbon emission rate that cloud computing needs to address. The work further compares various techniques on parameters such as performance, scalability and associated overhead. Cloud computing is emerging as a new paradigm of large-scale distributed computing; it has moved computing and data away from desktop and portable PCs into large data centers.
C. DATA CENTERS
This work considers the role of the communication fabric in data center energy consumption and presents a scheduling approach, named DENS, that combines energy efficiency and network awareness. The DENS methodology balances the energy consumption of a data center, individual job performance, and traffic demands. The proposed approach optimizes the tradeoff between job consolidation (to minimize the number of computing servers) and distribution of traffic patterns (to avoid hotspots in the data center network).
VII. WAYS TO GO GREEN WHILE COMPUTING [11]
We do not need to stop using computers or electricity, but we do have to make some effort to keep the environment healthy. The following actions should be taken:
A. Use Energy Star labelled products: Energy Star labelled products are manufactured with green computing in mind and are designed for low power consumption; they power down to a low-power state on their own when not in use. So we should use "Energy Star" labelled desktops, monitors, laptops, printers and other computing devices.
B. Turn off your computer: PCs and other devices consume considerable power, and the result is a high amount of CO2 emission. So never hesitate to turn off your personal computer when it is not in use.
C. Sleep mode: Sleep mode puts the computer into a low-power state while saving the session, so that Windows can be resumed quickly. Keep your PC in sleep mode when not in use; it saves 60-70 percent of the electricity.
D. Hibernate your computer: When we are not going to use the computer for a while, we can hibernate it. Hibernation shuts everything down and saves electricity while the computer is not in use.
E. Set a power plan: Set an effective power plan to save electricity. The more electricity a computer consumes, the more harmful its impact on our environment.
F. Avoid using screen savers: A screen saver is a graphic, text or image shown on the screen after the computer has been unused for a pre-set time. Screen savers keep consuming electricity even though the computer is not being used.
G. Turn down monitor brightness: Using a PC at high brightness consumes more electricity than using it at normal brightness, so we should turn down the brightness to save electricity.
H. Stop informal disposal: Computers and their components contain toxic chemicals from manufacturing, and informal disposal releases their harmful impacts into our environment.
I. Use LCD rather than CRT monitors: An LCD (Liquid Crystal Display) consumes less power than a CRT (Cathode Ray Tube), so using LCDs in place of CRTs saves energy.
J. Recycle old hardware using formal techniques: Recycling of computer hardware means manufacturing new hardware devices from old ones. It is done in special facilities and costs a considerable amount of money, but the main benefit of formal recycling is that it saves our environment from pollution. It is already followed by various companies.
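To make the sleep-mode advice above concrete, the short calculation below estimates the annual energy and CO2 savings for a single desktop. The wattages, idle hours and grid emission factor are illustrative assumptions chosen only for the example; they are not figures from this paper.

```python
# Back-of-the-envelope estimate of the savings from sleeping an idle PC.
# All numbers below are illustrative assumptions, not figures from the paper.

ACTIVE_WATTS = 150.0        # assumed draw of an idle-but-awake desktop + monitor
SLEEP_WATTS = 5.0           # assumed draw in sleep mode
IDLE_HOURS_PER_DAY = 8.0    # hours per day the machine sits unused but powered
GRID_KG_CO2_PER_KWH = 0.7   # assumed grid emission factor

saved_kwh_per_year = (ACTIVE_WATTS - SLEEP_WATTS) / 1000.0 * IDLE_HOURS_PER_DAY * 365
saved_co2_kg = saved_kwh_per_year * GRID_KG_CO2_PER_KWH

print(f"Energy saved: {saved_kwh_per_year:.0f} kWh per year")
print(f"CO2 avoided:  {saved_co2_kg:.0f} kg per year")
```

Under these assumptions, one machine saves roughly 420 kWh and close to 300 kg of CO2 per year simply by sleeping instead of idling.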
VIII. CONCLUSION
So far, consumers have not cared much about the ecological impact when buying computers; they have cared only about speed and price. But consumers are becoming pickier about being green. Devices use less and less power while renewable energy becomes more portable and effective. New green materials are developed every year, and many toxic ones are already being replaced by them. The greenest computer will not miraculously fall from the sky one day; it will be the product of years of improvement. The features of the green computer of tomorrow will include efficiency, better manufacturing and materials, recyclability, a service model, self-powering, and other such trends.
IX. FUTURE OF GREEN COMPUTING
The plan towards green IT should include new electronic products and services with optimum efficiency and all possible options for energy savings. Possible future work to promote green IT includes: (a) more emphasis on green disposal, and (b) use of AMOLED TVs instead of LCDs and LEDs.
REFERENCES
[1] Gaurav Jindal, Manisha Gupta, "Green Computing: Future of Computers", International Journal of Emerging Research in Management & Technology, ISSN: 2278-9359.
[2] Sk. Fayaz Ahamad, P. V. Ravikanth, "Green Computing Future of Liveliness", International Journal of Computational Engineering Research (IJCER), ISSN: 2250-3005.
[3] Parichay Chakraborty, Debnath Bhattacharyya, Sattarova Nargiza Y., and Sovan Bedajna, "Green Computing: Practice of Efficient and Eco Friendly Computing Resources".
[4] Shalabh Agarwal, Arnab Datta, Asoke Nath, "Impact of Green Computing in IT Industry to Make Eco Friendly Environment".
[5] Sharmila Shinde, Simantini Nalawade, Ajay Nalawade, "Green Computing: Go Green and Save Energy".
[6] N. P. Jadhav, R. S. Kamble, S. V. Kamble, "Green Computing: New Approaches of Energy Conservation and E-Waste Minimization", IOSR Journal of Computer Engineering, ISSN: 2278-0661, ISBN: 2278-8727, pp. 25-29.
[7] Victoria Seitz, Fitri Yanti, Yasha Karant, "Attitudes Toward Green Computing in the US: Can They Change?".
[8] Walter Binder, Niranjan Suri, "Green Computing: Energy Consumption Optimized Service Hosting".
[9] Nidhi Jain Kansal, Inderveer Chana, "Cloud Load Balancing Techniques: A Step Towards Green Computing".
[10] Dzmitry Kliazovich, Pascal Bouvry, Samee Ullah Khan, "DENS: Data Center Energy-Efficient Network-Aware Scheduling".
[11] Pardeep Mittal, Navdeep Kaur, "Green Computing - Need and Implementation", International Journal of Advanced Research in Computer Engineering & Technology (IJARCET), Volume 2, Issue 3, March 2013.
International Association of Scientific Innovation and Research (IASIR)
(An Association Unifying the Sciences, Engineering, and Applied Research)
ISSN (Print): 2279-0063
ISSN (Online): 2279-0071
International Journal of Software and Web Sciences (IJSWS)
www.iasir.net
Semantic Based Search Engine
Shreya Chauhan1, Tanya Arora2
Student of Computer Science & Engineering,
Department of Computer Science and Engineering,
IMS Engineering College, Ghaziabad, 201009, Uttar Pradesh, INDIA
_______________________________________________________________________________________
Abstract: The Internet, or World Wide Web (WWW), allows people to share information from large repositories across the world. The amount of information available grows each day, and with it billions of databases; to find meaningful information we therefore need specialized search tools. This search for meaningful information is known as semantic search, as it promises to produce precise answers to user queries. In this paper, we review various research efforts related to semantic search. We found that earlier tools designed to enhance search for the naive user have several drawbacks, an important one being the need for back-end knowledge such as the underlying ontologies and SQL-like query languages. We mainly focus on one semantic search engine, "SemSearch", which hides the underlying ontologies from the end user, making it user friendly, and which is capable of handling complex queries. Hence, it is considered the best among the various search engines reviewed.
Keywords: Semantic Web, Ontology, RDF, keyword based SemSearch.
_________________________________________________________________________________________
I. Introduction
One important goal a user has in mind while searching on any topic is the retrieval of useful and specific information, and thereby more effective access to the knowledge contained in different information systems. Semantic search plays an important role in realizing this goal. The Semantic Web, an extension of the current Web [3], allows the meaning of information to be well defined in terms that are useful and accessible to both humans and computers. For example, in Fig. 1, when we search for "news about MLA", a traditional search returns only the blogs and web pages containing the word "MLA". This may not be the information the user is looking for, because results carrying the names of MLAs, or more specific information, may not be shown. Semantic web search, however, uses related data and relations to answer the query and hence displays more precise information [1].
Fig. 1. Showing Semantic Search
Traditional search engines are very useful in finding data, and they have been getting better over the years at exploiting HTML pages, but they still fail under some conditions. In particular, they cannot describe the meaning of the data they index. Various semantic web tools are available; although they improve on the performance of traditional search engines, many of them (e.g. ontology based, SQL-query based) are complicated because they require deep knowledge to use [1].
The semantic search engine SemSearch provides various ways in which this issue can be resolved:
1) SemSearch provides a user-friendly query system much like a Google-style query interface. With this kind of interface, the user does not need any back-end knowledge.
2) SemSearch also provides a better way to address complex queries: it offers an extensive means of making sense of user queries and helps translate them into formal queries.
Thus, SemSearch makes it easier for a naive user to exploit the benefits of semantic search without knowledge of the underlying data.
II. Background studies
In this section, we look at how current semantic search approaches address user support. According to the user interface provided, semantic search engines can be classified into four main categories:
1) Form-based search engines, which provide complicated web forms in which the user specifies queries by selecting the type of ontology, classes, properties and values.
2) RDF-based query language search engines, which use complex query languages, similar to SQL, for search.
3) Semantics-based keyword search engines, which improve on traditional search engines by using semantic data.
4) Question answering tools, which use semantic data to answer questions asked in natural language.
Earlier, semantic web search depended heavily on the use of ontologies. An ontology is "a formal, explicit specification of a conceptualization" [6]. J. Heflin and J. Hendler proposed SHOE [7], the first form-based search engine. It was unique in allowing the user to select the area of interest to search in. It provided a GUI through which the user could easily enter a request by selecting the type of ontology and filling in forms with the required attributes and constraints. It returns results in tabular form, in which each tuple contains links to the documents from which the original information was extracted [5].
The main disadvantage of this approach was that the user has to learn the user interface; in other words, one needs back-end knowledge. Moreover, it only searches web pages annotated by SHOE [7]. Users also face difficulties in formulating, on their own, queries about the information they wish to find [1].
RDF is one of the most important concepts recommended by the W3C for representing relationships, i.e. ontologies, between keywords. In semantic search using RDF [8], the words in a sentence are encoded as sets of triples. These triples are treated like the subject, verb and object of an elementary sentence, which together express some meaning. In RDF, millions of web pages are linked in ways that machines can use to process data.
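As a concrete illustration of the triple idea (not taken from the surveyed systems), the snippet below builds two RDF statements with the widely used rdflib library and runs a small SPARQL query over them. The ex: vocabulary (ex:MLA, ex:represents) and the example resource are invented purely for this sketch.

```python
# Minimal RDF example using rdflib (assumed installed via `pip install rdflib`).
# The ex: vocabulary and the example data are made up for illustration only.
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/")
g = Graph()

# Two "subject - predicate - object" triples, the elementary sentences of RDF.
g.add((EX.JaneDoe, RDF.type, EX.MLA))
g.add((EX.JaneDoe, EX.represents, Literal("Ghaziabad")))

# A semantic query: find every resource that is an MLA and what it represents.
results = g.query("""
    PREFIX ex: <http://example.org/>
    SELECT ?who ?constituency WHERE {
        ?who a ex:MLA .
        ?who ex:represents ?constituency .
    }
""")
for who, constituency in results:
    print(who, "represents", constituency)
```

Because the query matches the relation ex:represents rather than the literal word "MLA", it returns the resource itself, which is exactly the relation-centric behaviour the keyword engines discussed below lack.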
The search engine proposed by O. Corby, R. Dieng-Kuntz, and C. Faron-Zucker is Corese [3]. Its representation language is built upon RDF(S) and works on the relationship between subject, verb and object.
Keyword-based search engines work with the keywords present in different web pages [10]. Whenever a search is performed, the search engine returns the web pages containing keywords similar to those in the query entered by the user. An example is the search engine proposed by R. Guha, R. McCool, and E. Miller.
Traditional keyword-based search engines provide information, but much of it is irrelevant, as a large number of web pages contain keywords designed specifically to attract users and act as little more than advertisements. Another major drawback is that they do not know the meaning of the keywords or the relations between the expressions provided by the user or contained in the web pages, i.e. no relation-centric search can be done.
Homophones (words that are spelled alike but have different meanings) used by the user also act as a barrier to the successful use of keyword-based search engines.
Another approach used for semantic search is ontology-based question answering engines. Here [9], natural language processing technologies are used to transform natural language queries into different kinds of data: one form is ontology triples (e.g. in AquaLog) and another is complicated query languages (e.g. ORAKEL). Although this approach is useful for end users, its performance depends heavily on the results of the natural language processing techniques used.
All the tools discussed above use various techniques to improve query search by exploiting the available semantic data and their underlying ontologies. These tools are, however, not suitable for naive users. There are mainly two problems associated with them [1]. One is the need for prior knowledge: the user must have deep knowledge of the back-end ontologies or of specific SQL-like query languages in order to formulate search queries and to understand the results. The other is the lack of support for handling and answering complex queries.
III. SemSearch
The main focus of this work is to make search less complex and easier for the end user by hiding the background semantic details, thereby increasing the effectiveness of search. In order to achieve this, the following requirements need to be fulfilled:
1) Low barrier to access for ordinary end users. The semantic search engine should be uncomplicated to work with; the need for knowledge of the ontology's structure and vocabulary, and for expertise in a query language, should be eliminated.
2) Handling complex queries. Existing search engines can only process simple queries. The semantic search engine should be able to take complex queries from the user and should provide proper ways of handling and processing them.
3) Accurate and self-explanatory results. Besides handling complex queries, the semantic search engine should return results that are accurate and self-explanatory. Moreover, the results should be presented in a form the user can understand without referring to the background details of the ontologies.
4) Quick response. The search engine should respond quickly to queries, which helps the user exploit the advantages of the semantic web. To achieve this, the mechanism should be simple.
To meet the above requirements, a keyword-based search technique was used instead of natural language question answering technology, because linguistic processing is expensive in terms of search. The limitation of current keyword-based search engines is removed by providing a Google-like query interface which supports complex queries and produces results by finding relations between multiple keywords.
The SemSearch [1] architecture separates the user from the ontology's structure and other related data by several layers, discussed below:
1) The Google-like user interface layer: here, the user specifies the query in terms of keywords. The Google-like query interface extends the traditional keyword-based search engine by handling complex queries and producing results that relate multiple keywords.
2) The text search layer: it finds the unambiguous semantic meaning of the user's keywords and uses it to understand user queries. There are two main components in this layer:
a) A semantic entity index engine, which indexes documents and their associated semantic entities such as classes, properties and attributes.
b) A semantic entity search engine, which analyzes the user's keywords and searches for their proper semantic entities.
3) The semantic query layer: it generates search results for user queries by translating them into formal queries. This layer has three components:
a) A formal query construction engine, which translates user queries into formal queries.
b) A query engine, which searches the related metadata repository using the generated formal queries.
c) A ranking engine, which ranks the search results according to their closeness to the user query.
4) The formal query language layer: here, the semantic relations in the underlying semantic data layer are accessed using a special formal query language.
5) The semantic data layer: it consists of semantic metadata collected from various data sources and stored in the form of different ontologies. (A toy sketch of this keyword-to-formal-query translation is given after the list.)
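The layering can be pictured with a small sketch: keywords are first matched to candidate semantic entities, and the matches are then stitched into a formal query. The entity index contents, the matching rule and the SPARQL-like output below are simplifications invented for illustration; they are not the actual SemSearch implementation.

```python
# Toy sketch of the keyword -> semantic entity -> formal query translation.
# The index contents and the generated query are illustrative only.

ENTITY_INDEX = {
    # keyword  -> (kind, ontology term); both columns are made up
    "news": ("class", "ex:NewsItem"),
    "mla":  ("class", "ex:MLA"),
}

def match_entities(keywords):
    """Map each user keyword to a candidate semantic entity (text search layer)."""
    return {kw: ENTITY_INDEX[kw.lower()] for kw in keywords if kw.lower() in ENTITY_INDEX}

def build_formal_query(matches):
    """Combine the matched entities into one formal query (semantic query layer)."""
    lines = ["SELECT ?item WHERE {"]
    for i, (kind, term) in enumerate(matches.values()):
        if kind == "class":
            lines.append(f"  ?item ?p{i} ?x{i} . ?x{i} a {term} .")
    lines.append("}")
    return "\n".join(lines)

if __name__ == "__main__":
    matches = match_entities(["news", "MLA"])
    print(build_formal_query(matches))
```

The point of the sketch is only that the end user types plain keywords, while the formal query, with its classes and variables, is generated behind the scenes by the lower layers.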
IV. Conclusion and future work
In this paper, we have made a brief study of the existing literature on semantic search engines and reviewed the characteristics of various search engines. We conclude that many semantic search engines share the following issues:
 They do not provide an interactive user interface.
 They give less precise results.
 They rely on static knowledge structures.
 Relationships between the keywords in user queries are difficult to find.
 One needs knowledge of back-end ontologies, the underlying semantic data, or an SQL-like query language to use them.
We mainly focused on SemSearch, which removes the drawbacks mentioned above. Although SemSearch is better than the other search engines, there is still scope for improvement. Future work will mainly focus on searching the data more intelligently instead of merely looking for keywords, and on improving optimization so that the best results are always on top.
References
[1] D. Pratiba, Dr. G. Shobha, "Semantic Web Search Engine", International Journal of Advanced Research in Computer Science and Software Engineering, 2013.
[2] G. Madhu et al., "Intelligent Semantic Web Search Engines: A Brief Survey", International Journal of Web & Semantic Technology (IJWesT), Vol. 2, No. 1, January 2011.
[3] O. Corby, R. Dieng-Kuntz, and C. Faron-Zucker, "Querying the Semantic Web with Corese Search Engine", in Proceedings of the 15th ECAI/PAIS, Valencia (ES), 2004.
[4] Danushka Bollegala et al., "A Web Search Engine-Based Approach to Measure Semantic Similarity between Words", IEEE Transactions on Knowledge and Data Engineering, Vol. 23, No. 7, July 2011.
[5] Mark Vickers, "Ontology-Based Free-Form Query Processing for the Semantic Web", Department of Computer Science, Brigham Young University, 2005.
[6] T. R. Gruber, "A Translation Approach to Portable Ontologies", Knowledge Acquisition, 5(2):199-220, 1999.
[7] J. Heflin and J. Hendler, "Searching the Web with SHOE", in Artificial Intelligence for Web Search, Papers from the AAAI Workshop, pages 35-40, Menlo Park, California, 2000.
[8] Anthony Narzary et al., "A Study on Semantic Web Languages and Technologies", International Journal of Engineering and Computer Science, 2014.
[9] V. Lopez, M. Pasin, and E. Motta, "AquaLog: An Ontology-portable Question Answering System for the Semantic Web", in Proceedings of the European Semantic Web Conference (ESWC 2005), 2005.
[10] Jagendra Singh et al., "A Comparative Study between Keyword and Semantic Based Search Engines", in Proceedings of the International Conference on Cloud, Big Data and Trust 2013, Nov 13-15, RGPV.
International Association of Scientific Innovation and Research (IASIR)
(An Association Unifying the Sciences, Engineering, and Applied Research)
ISSN (Print): 2279-0063
ISSN (Online): 2279-0071
International Journal of Software and Web Sciences (IJSWS)
www.iasir.net
Simultaneous Localization and Mapping
Pawan Srivastava1, Paawan Mishra2
Student of Computer Science & Engineering,
Department of Computer Science and Engineering,
IMS Engineering College, Ghaziabad, 201009, Uttar Pradesh, INDIA
_______________________________________________________________________________________
Abstract: The solution of the SLAM problem was achieved about a decade ago, but some difficulties remain in realizing more general solutions; making and using perceptually rich maps will help to generalize it. The purpose of this paper is to give a broad overview of this rapidly growing field. The paper begins with a brief background of early developments in SLAM, followed by three sections: a formulation section, a solution section and an application section. The formulation section presents the structure of the SLAM problem in the now-standard Bayesian form and explains the evolution of this formulation. The solution section describes the two key computational solutions to the SLAM problem: the extended Kalman filter (EKF-SLAM), one of the popular approximate solution methods, and Rao-Blackwellized particle filters (FastSLAM). The application section describes different real-life applications of SLAM, including implementations where the sensor data and software are freely downloadable for other researchers to study. The major issues in data association, computation, and convergence in SLAM are also described.
________________________________________________________________________________________
I. INTRODUCTION
SLAM is the process by which a robot can build a map of its surroundings and at the same time use this map to
find its location. A solution to the SLAM problem has been seen as a “holy grail” for the mobile robotics
community as it would provide the means to make a robot truly self-controlled. The great majority of work has
focused on improving computational efficiency while ensuring consistent and accurate estimates for the map
and vehicle positions [1]. However, there has also been much research on issues such as nonlinearity, data
association, and landmark characterization, all of which are vital in achieving a practical and robust SLAM
implementation.
SLAM has been formulated and solved as a theoretical problem in many forms. It has also been used in many domains, from indoor robots to outdoor, underwater and airborne systems. At a theoretical and conceptual level, SLAM can now be considered a solved problem. However, some issues remain in practically realizing more general solutions to the SLAM problem [1]. The difficulty in SLAM arises from the fact that sensor measurements are never exact but are always corrupted by noise. The SLAM problem has received much attention in recent years and has been described as the answer to the question "where am I?". It is hard because the same sensor data must be used for both mapping and localisation. We can distinguish two major uncertainties in solving it:
(i) the discrete uncertainty in the identification of environmental features (data association), and
(ii) the continuous uncertainty in the positions of the robot.
There exists an optimal solution for SLAM under some assumptions:
 The measurement noise is assumed to be independent and drawn from a Gaussian distribution with known covariance.
 The optimal map can be computed by solving the linear equation system obtained from the measurements.
 A standard least squares algorithm could be applied; however, this approach does not scale well to large environments, since for n landmarks and p robot poses the computational cost is O((n + p)^3).
In general, three criteria are important for the performance of a SLAM algorithm:
 map quality,
 storage space, and
 computation time.
The following requirements for an ideal SLAM algorithm were identified by Frese and Hirzinger:
1. Bounded uncertainty: the uncertainty of the map must not be much larger than the minimal uncertainty attainable from the measurements.
2. Linear storage space: the storage required for a map of a large area should grow at most linearly with the size of the area.
3. Linear update cost: integrating a measurement into a map of a large area should have computational cost at most linear in the size of the area.
In summary, requirement 1 says that the map should represent nearly all the information contained in the measurements, thus binding the map to reality, whereas requirements 2 and 3 concern efficiency, namely linear space and time consumption [2].
II. LITERATURE REVIEW
A consistent full solution to the localization and mapping problem requires a joint state composed of:
 the vehicle pose, and
 every landmark position,
which needs to be updated following each landmark observation. Early work showed that as a mobile robot moves through an unknown environment taking relative observations of landmarks, the estimates of these landmarks are all necessarily correlated with each other because of the common error in the estimated vehicle location. The genesis of the probabilistic SLAM problem occurred at the 1986 IEEE Robotics and Automation Conference held in San Francisco, California. This was a time when probabilistic methods were only just beginning to be introduced into both robotics and artificial intelligence [3]. Many researchers had been looking at applying estimation-theoretic methods to mapping and localization problems, among them Peter Cheeseman, Jim Crowley, and Hugh Durrant-Whyte. Several long discussions about consistent mapping took place, and the result was a recognition that consistent probabilistic mapping was a fundamental problem in robotics, with major computational issues that needed to be resolved. Over the next few years a number of key papers were produced. A key element of this work was to show that there must be a high degree of correlation between the locations of different landmarks in a map and that, indeed, these correlations grow with successive observations. At the same time, Ayache and Faugeras were undertaking early work in visual navigation, while Crowley, and Chatila and Laumond, were working on sonar-based navigation of mobile robots using Kalman filter-type algorithms.
III. FORMULATION AND STRUCTURE OF THE SLAM PROBLEM
A. Preliminaries
Let us consider a mobile robot moving through an environment, taking relative observations of a number of unknown landmarks using a sensor mounted on it. At any time instant k, the following quantities are defined:
* x_k: the state vector describing the location and orientation of the vehicle.
* u_k: the control vector, applied at time k − 1 to drive the vehicle to the state x_k at time k.
* m_i: a vector describing the location of the ith landmark, whose true location is assumed time invariant.
* z_{ik}: an observation, taken from the vehicle, of the location of the ith landmark at time k. When there are multiple landmark observations at any one time, or when the specific landmark is not relevant to the discussion, the observation is written simply as z_k.
In addition, the following sets are defined:
* X_{0:k} = {x_0, x_1, ..., x_k} = {X_{0:k−1}, x_k}: the history of vehicle locations.
* U_{0:k} = {u_1, u_2, ..., u_k} = {U_{0:k−1}, u_k}: the history of control inputs.
* m = {m_1, m_2, ..., m_n}: the set of all landmarks.
* Z_{0:k} = {z_1, z_2, ..., z_k} = {Z_{0:k−1}, z_k}: the set of all landmark observations.
B. Solutions to the SLAM Problem
Solutions to the probabilistic SLAM problem involve finding an appropriate representation for both the observation model, represented as P(z_k | x_k, m), and the motion model, represented as P(x_k | x_{k−1}, u_k), that allows efficient and consistent computation of the prior and posterior distributions. By far the most common representation is a state-space model with additive Gaussian noise, leading to the use of the extended Kalman filter to solve the SLAM problem. One important alternative is to describe the vehicle motion model as a set of samples from a more general non-Gaussian probability distribution, which leads to the use of the Rao-Blackwellized particle filter, or FastSLAM algorithm, to solve the SLAM problem. While the extended Kalman filter and FastSLAM are the two most important solution methods, newer alternatives that offer much potential have been proposed, including the use of the information-state form.
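For reference, the recursive Bayes-filter form of SLAM, as commonly written in the SLAM tutorial literature (stated here for orientation, not quoted from this paper), alternates a prediction (time-update) and a correction (measurement-update) over the joint posterior on the vehicle state and the map, using the notation defined above:

```latex
% Standard recursive Bayes formulation of SLAM (general form from the SLAM
% tutorial literature, given here for reference).
\begin{align*}
\text{Time-update: }\;
 & P(\mathbf{x}_k, \mathbf{m} \mid \mathbf{Z}_{0:k-1}, \mathbf{U}_{0:k}, \mathbf{x}_0) \\
 &\; = \int P(\mathbf{x}_k \mid \mathbf{x}_{k-1}, \mathbf{u}_k)\,
        P(\mathbf{x}_{k-1}, \mathbf{m} \mid \mathbf{Z}_{0:k-1}, \mathbf{U}_{0:k-1}, \mathbf{x}_0)\, d\mathbf{x}_{k-1} \\[4pt]
\text{Measurement-update: }\;
 & P(\mathbf{x}_k, \mathbf{m} \mid \mathbf{Z}_{0:k}, \mathbf{U}_{0:k}, \mathbf{x}_0)
   = \frac{P(\mathbf{z}_k \mid \mathbf{x}_k, \mathbf{m})\,
           P(\mathbf{x}_k, \mathbf{m} \mid \mathbf{Z}_{0:k-1}, \mathbf{U}_{0:k}, \mathbf{x}_0)}
          {P(\mathbf{z}_k \mid \mathbf{Z}_{0:k-1}, \mathbf{U}_{0:k})}
\end{align*}
```

EKF-SLAM and FastSLAM below are two different ways of making this recursion computationally tractable.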
IV. EKF-SLAM
The EKF-SLAM solution proceeds in two steps, a time-update (prediction) and an observation-update (correction). The update equations themselves did not survive extraction from the source; in them, ∇h denotes the Jacobian of the observation model h evaluated at x_{k|k−1} and m_{k−1}.
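For completeness, the following is the standard EKF-SLAM time-update and observation-update as usually written in the EKF-SLAM literature; it is reconstructed from that standard form, not recovered from the original manuscript, and is included only so that the surrounding references to ∇h and x̂_{k|k−1} have equations to point at.

```latex
% Standard EKF-SLAM recursions (reconstructed from the standard literature,
% not recovered from the original manuscript).
\begin{align*}
\text{Time-update: }\;
 & \hat{\mathbf{x}}_{k|k-1} = \mathbf{f}\!\left(\hat{\mathbf{x}}_{k-1|k-1}, \mathbf{u}_k\right), \qquad
   \mathbf{P}_{k|k-1} = \nabla\mathbf{f}\, \mathbf{P}_{k-1|k-1}\, \nabla\mathbf{f}^{\top} + \mathbf{Q}_k \\[4pt]
\text{Observation-update: }\;
 & \hat{\mathbf{x}}_{k|k} = \hat{\mathbf{x}}_{k|k-1}
    + \mathbf{W}_k\!\left[\mathbf{z}_k - \mathbf{h}\!\left(\hat{\mathbf{x}}_{k|k-1}\right)\right], \qquad
   \mathbf{P}_{k|k} = \mathbf{P}_{k|k-1} - \mathbf{W}_k \mathbf{S}_k \mathbf{W}_k^{\top} \\
 & \text{with } \mathbf{S}_k = \nabla\mathbf{h}\, \mathbf{P}_{k|k-1}\, \nabla\mathbf{h}^{\top} + \mathbf{R}_k, \qquad
   \mathbf{W}_k = \mathbf{P}_{k|k-1}\, \nabla\mathbf{h}^{\top} \mathbf{S}_k^{-1}
\end{align*}
```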
This EKF-SLAM solution is very well known and inherits many of the same benefits and problems
as the standard EKF solutions to navigation or tracking problems [4].
A. Rao-Blackwellized Filter
The FastSLAM algorithm was introduced by Montemerlo et al. and marked a fundamental conceptual shift in the design of recursive probabilistic SLAM. Previous efforts focused on improving the performance of EKF-SLAM while retaining its essential linear Gaussian assumptions. FastSLAM, with its basis in recursive Monte Carlo sampling, or particle filtering, was the first to directly represent the nonlinear process model and non-Gaussian pose distribution. (FastSLAM still linearizes the observation model, but this is typically a reasonable approximation for range-bearing measurements when the vehicle pose is known.) This approach was influenced by the earlier probabilistic mapping experiments of Murphy and Thrun [4].
B. Particle Filter Path Estimation
FastSLAM employs a particle filter for estimating the path posterior $p(s^t \mid z^t, u^t, n^t)$, using a filter that is similar (but not identical) to the Monte Carlo localization (MCL) algorithm. MCL is an application of the particle filter to the problem of robot pose estimation (localization). At each point in time, both algorithms maintain a set of particles representing the posterior $p(s^t \mid z^t, u^t, n^t)$, denoted $S_t$. Each particle $s^{t,[m]} \in S_t$ represents a "guess" of the robot's path: $S_t = \{s^{t,[m]}\}_m = \{s_1^{[m]}, s_2^{[m]}, \ldots, s_t^{[m]}\}_m$, where the superscript $[m]$ refers to the $m$-th particle in the set. The particle set $S_t$ is calculated incrementally from the set $S_{t-1}$ at time $t-1$, a robot control $u_t$, and a measurement $z_t$. First, each particle $s^{t,[m]}$ in $S_{t-1}$ is used to generate a probabilistic guess of the robot's pose at time $t$, $s_t^{[m]} \sim p(s_t \mid u_t, s_{t-1}^{[m]})$, obtained by sampling from the probabilistic motion model. This estimate is then added to a temporary set of particles, along with the path $s^{t-1,[m]}$. Under the assumption that the set of particles in $S_{t-1}$ is distributed according to $p(s^{t-1} \mid z^{t-1}, u^{t-1}, n^{t-1})$ (which is an asymptotically correct approximation), the new particle is distributed according to $p(s^t \mid z^{t-1}, u^t, n^{t-1})$. This distribution is commonly referred to as the proposal distribution of particle filtering. After generating $M$ particles in this way, the new set $S_t$ is obtained by sampling from the temporary particle set. Each particle $s^{t,[m]}$ is drawn (with replacement) with a probability proportional to a so-called importance factor $w_t^{[m]}$, which is calculated as follows:
$$w_t^{[m]} \;=\; \frac{\text{target distribution}}{\text{proposal distribution}} \;=\; \frac{p(s^{t,[m]} \mid z^t, u^t, n^t)}{p(s^{t,[m]} \mid z^{t-1}, u^t, n^{t-1})}$$
The exact calculation is discussed further below. The resulting sample set $S_t$ is distributed according to an approximation of the desired pose posterior $p(s^t \mid z^t, u^t, n^t)$, an approximation which becomes correct as the number of particles $M$ goes to infinity. Note also that only the most recent robot pose estimate $s_{t-1}^{[m]}$ is used when generating the particle set $S_t$. This allows us to silently "forget" all other pose estimates, rendering the size of each particle independent of the time index $t$ [5].
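The propagate / weight / resample cycle just described is generic to particle filters. As a rough illustration (not the FastSLAM implementation), the Python step below applies it to a one-dimensional pose, with made-up motion and measurement noise values.

```python
# Generic particle-filter step (propagate, weight, resample) for a 1-D pose.
# Noise levels and the measurement model are illustrative assumptions only.
import math
import random

def pf_step(particles, control, measurement, motion_noise=0.1, meas_noise=0.2):
    # 1. Propagate each particle through the probabilistic motion model.
    proposed = [p + control + random.gauss(0.0, motion_noise) for p in particles]

    # 2. Weight each particle by the measurement likelihood (importance factor).
    weights = [math.exp(-0.5 * ((measurement - p) / meas_noise) ** 2) for p in proposed]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]

    # 3. Resample with replacement, with probability proportional to the weights.
    return random.choices(proposed, weights=weights, k=len(particles))

if __name__ == "__main__":
    particles = [random.uniform(-1.0, 1.0) for _ in range(200)]
    for true_pose in (0.5, 1.0, 1.5):              # robot advances 0.5 per step
        particles = pf_step(particles, control=0.5, measurement=true_pose)
    print("estimated pose:", sum(particles) / len(particles))
```

In FastSLAM the particle additionally carries one small EKF per landmark, which is what keeps the per-particle size independent of the path length.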
V. FUTURE ASPECTS
In future work, we are going to try several approaches to increase the efficiency and reduce the complexity of the algorithms. Globally consistent range scan matching methods will be used to accomplish global localization and mapping. We hope to apply this technique to applications such as driving assistance systems, 3D city-sized mapping, and research on dynamic social activities. Some issues remain open and correspond to future directions that can improve the current work. Concerning the context in which the work is inserted (project CHOPIN), the SLAM technique should be extended to deal with environments filled with smoke. Optimization of map exchanges using a compression method would also be important, to reduce the communication load and the time needed for information exchange between robots. Before data compression, it would be important to optimize the information exchange at the time of map merging: this optimisation should report which cells in the occupancy grid map hold new information since the last exchange, with respect to all robots in the team. Another interesting feature would be a module to align maps that does not depend on functions from OpenCV, in order to make it faster, lighter and, more importantly, applicable to P robots instead of being limited to pairs of robots. Finally, in terms of navigation and coordination of robots, it would be interesting to have an autonomous exploration technique so that robots can explore the environment without depending on human teleoperation.
VI. CONCLUSIONS
This paper has explained the SLAM problem and the essential methods for solving it, and has summarized key implementations of the method. While many practical issues remain to be resolved (particularly in more complex outdoor environments), the general SLAM method is now a well-understood part of robotics. The paper also summarizes more recent work on some of the remaining issues in SLAM, including computation, feature representation, and data association.
A. Experimental Results
The FastSLAM algorithm was tested extensively under various conditions. Real-world experiments were complemented by systematic simulation experiments to investigate the scaling behaviour of the approach. Overall, the results indicate favorable scaling to large numbers of landmarks with small particle sets. A fixed number of particles (e.g., M = 100) appears to work well across a large number of situations. The physical robot testbed consists of a small arena set up under NASA funding for Mars Rover research. A Pioneer robot equipped with a SICK laser range finder was driven along an approximately straight line, generating the raw data. In the resulting map, generated with M = 10 samples, manually determined landmark locations are marked by circles and the robot's estimates are indicated by x's, illustrating the high accuracy of the resulting maps. FastSLAM produced an average residual map error of 8.3 centimeters when compared to the manually generated map. Unfortunately, the physical testbed does not allow for systematic experiments regarding the scaling properties of the approach. In extensive simulations, the number of landmarks was increased up to a total of 50,000, which FastSLAM successfully mapped with as few as 100 particles. Here, the number of parameters in FastSLAM is approximately 0.3% of that in the conventional EKF. Maps with 50,000 landmarks are out of range for conventional SLAM techniques because of their enormous computational complexity. Example maps with smaller numbers of landmarks were produced for different maximum sensor ranges, with ellipses visualizing the residual uncertainty integrated over all particles and Gaussians. In a set of experiments specifically aimed at elucidating the scaling properties of the approach, we evaluated the map and robot pose errors as a function of the number of landmarks K and the number of particles M, respectively. An increase in the number of landmarks K mildly reduces the error in the map and the robot pose, because the larger the number of landmarks, the smaller the robot pose error at any point in time. Increasing the number of particles M also has a positive effect on the map and pose errors. In both diagrams, the bars correspond to 95% confidence intervals [6].
REFERENCES
[1] M. Montemerlo, S. Thrun, D. Koller, and B. Wegbreit, "FastSLAM: A factored solution to the simultaneous localization and mapping problem," in Proc. AAAI Nat. Conf. Artif. Intell., 2002.
[2] Daphne Koller and Ben Wegbreit, Computer Science Department, Stanford University, "FastSLAM: A Factored Solution to the Simultaneous Localization and Mapping Problem."
[3] A. J. Davison, Y. G. Cid, and N. Kita, "Real-time 3D SLAM with wide-angle vision," in Proc. IFAC/EURON Symp. Intell. Auton. Vehicles, 2004.
[4] M. Csorba, "Simultaneous Localisation and Map Building," Ph.D. dissertation, Univ. Oxford, 1997.
[5] Grisetti, G., Stachniss, C., and Burgard, W., "Improving grid-based SLAM with Rao-Blackwellized particle filters by adaptive proposals and selective resampling," IEEE International Conference on Robotics and Automation (ICRA-05), 2005.
[6] S. Thrun, D. Fox, and W. Burgard, "A probabilistic approach to concurrent mapping and localization for mobile robots," Mach. Learning, 1998.
International Association of Scientific Innovation and Research (IASIR)
(An Association Unifying the Sciences, Engineering, and Applied Research)
ISSN (Print): 2279-0063
ISSN (Online): 2279-0071
International Journal of Software and Web Sciences (IJSWS)
www.iasir.net
Li-Fi: Next to Wi-Fi, Optical Networking
Surjeet Singh1, Shivam Jain2
Student of Computer Science & Engineering,
Department of Computer Science and Engineering,
IMS Engineering College, Ghaziabad, 201009, Uttar Pradesh, INDIA
[email protected]
_________________________________________________________________________________________
Abstract: Nowadays many people use the internet to accomplish their tasks through wired or wireless networks. As the number of users in a wireless network increases, the speed decreases proportionally. Though Wi-Fi gives us speeds up to 150 Mbps as per IEEE 802.11n, it falls short in bandwidth, efficiency, availability and security. To overcome these limitations of Wi-Fi, the concept of Li-Fi was introduced by the German physicist Harald Haas: "data through illumination", taking the fiber out of fiber optics by sending data through an LED light bulb that varies in intensity faster than the human eye can follow. The use of cheap LEDs and existing lighting units makes it economical to exploit this medium for public internet access. Li-Fi is ideal for high-density wireless data coverage in confined areas and for relieving radio interference issues. Haas says his invention, which he calls D-LIGHT, can produce data rates faster than 10 megabits per second, which is speedier than the average broadband connection. Thus Li-Fi is better than Wi-Fi in bandwidth, efficiency, availability and security, and has already achieved very high speeds in the lab.
Keywords: Wireless-Fidelity (Wi-Fi), Light-Fidelity (Li-Fi), Light Emitting Diode (LED), Line of Sight (LOS), Visible Light Communication (VLC).
__________________________________________________________________________________________
I. INTRODUCTION
Li-Fi (Light Fidelity) is a label for a wireless communication system that applies visible light communication technology to high-speed wireless communication. It acquired the name from its similarity to Wi-Fi, using visible light instead of radio waves. "D-Light", as termed by Harald Haas, transmits data by varying light intensity faster than the human eye can perceive. The term was first used by Haas in his 2011 TED Global talk on visible light communication [7]. Li-Fi can play a major role in relieving the heavy loads that current wireless systems face, since it adds a new and unutilized band, visible light, to the radio waves currently available for data transfer. It thus offers a much larger frequency band (300 THz) compared with that available in RF communications (300 GHz). Also, moving more data through the visible spectrum could help alleviate concerns that the electromagnetic waves that come with Wi-Fi adversely affect our health.
Li-Fi can be the technology of the future, where data for laptops, smart phones, and tablets is transmitted through the light in a room. Security would not be an issue, because if you cannot see the light, you cannot access the data. As a result, it can be used in high-security military areas where RF communication is prone to eavesdropping [1], [3].
II. PRINCIPLE OF LI-FI TECHNOLOGY
The heart of Li-Fi technology is high-brightness LEDs. Light emitting diodes can be switched on and off faster than the human eye can detect, since the operating speed of an LED is less than 1 μs, causing the light source to appear continuous. This invisible on-off activity enables a kind of data transmission using binary codes: switching an LED on is a logical '1', switching it off is a logical '0'. It is possible to encode data in the light by
varying the rate at which the LED flickers on and off to give different strings of 1s and 0s. The modulation is so fast that the human eye does not notice it [5]. A light-sensitive device (photodetector) receives the signal and converts it back into the original data. This method of using rapid pulses of light to transmit information wirelessly is technically referred to as Visible Light Communication (VLC), though its potential to compete with conventional Wi-Fi has inspired the popular label Li-Fi [6].
A. Visible Light Communication
Li-Fi is a fast and cheap optical version of Wi-Fi, based on visible light communication (VLC). Visible light communication uses visible light between 400 THz (780 nm) and 800 THz (375 nm) as an optical carrier for data transmission and illumination [3]. Visible light is not harmful to vision. Typical examples of visible light communication are given in Fig. 1.
Fig. 1
B. Devices used in visible light communication
The devices used for transmission in VLC are LED light bulbs and fluorescent lamps; LED light intensity is modulated by controlling its current. The technology uses fluorescent lamps to transmit signals at around 10 kbit/s, or LEDs for up to 500 Mbit/s. The devices used for reception in visible light communication are the PIN photodiode (high-speed reception up to 1 Gbps), the avalanche photodiode (very sensitive reception) and the image sensor (simultaneous image acquisition and data reception), as shown in Fig. 2.
Fig. 2(a) photo-diode
Fig. 2(b) Smart Sensor
III. CONSTRUCTION AND WORKING OF LI-FI TECHNOLOGY
Li-Fi is implemented using white LED light bulbs at the downlink transmitter. Such bulbs are normally used for illumination only, by applying a constant current; by fast and subtle variation of that current, the optical output can be made to vary at extremely high speeds. This variation is used to carry high-speed data: an overhead lamp fitted with an LED and signal processing technology streams data, embedded in its beam, at ultra-high speed to the photodiodes. A receiver dongle then converts the tiny changes in amplitude into an electrical signal, which is converted back into a data stream and transmitted to a computer or mobile device.
Fig. 3
IV. COMPARISON WITH WI-FI
[The comparison table from the original could not be recovered.]
V. ADVANTAGES
 A free band that does not need a license
 Low maintenance cost
 Extremely energy efficient
 Cheaper than Wi-Fi
 No more monthly broadband bills
 Secure: light does not penetrate through walls




VI. LIMITATIONS AND CHALLENGES
Cannot use in rural areas.
If the receiver is blocked the signals cut off.
Reliability and network coverage
Interference from external light sources
VII. APPLICATIONS
• Education systems: Li-Fi is the latest technology that can provide the fastest-speed internet access.
• Medical applications: Operation theatres (OTs) do not allow Wi-Fi due to radiation concerns, so visible-light communication can be used instead.
• Cheaper internet in aircraft: Passengers travelling in aircraft can get access to high-speed internet at a very low rate.
• Underwater applications: Li-Fi can even work underwater, where Wi-Fi fails completely, opening endless opportunities for military operations [9].
• Applications in sensitive areas: Power plants need fast, interconnected data systems so that demand, grid integrity and core temperature (in the case of nuclear power plants) can be monitored.
• Traffic management: Li-Fi can be used in traffic signals to communicate with the LED lights of cars, which can help manage traffic better and reduce accidents [9].
• Replacement for other technologies: Li-Fi does not work on radio waves, so it can easily be used in places where Bluetooth, infrared, Wi-Fi, etc. are banned.
VIII. FUTURE WORK
Further enhancements can be made to this method, such as using an array of LEDs for parallel data transmission, or using a mixture of red, green and blue LEDs to alter the light's frequency, with each frequency encoding a different data channel. Such advancements promise a theoretical speed of 10 Gbps, meaning one could download a full high-definition film in just 30 seconds [4].
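A quick back-of-the-envelope check of that claim (an illustration added here, not part of the original paper):

# 10 Gbps sustained for 30 seconds moves 300 gigabits, i.e. 37.5 gigabytes,
# which is roughly the size of a full high-definition (Blu-ray quality) film.
link_rate_gbps = 10
seconds = 30
gigabytes = link_rate_gbps * seconds / 8
print(gigabytes)  # 37.5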
IX. CONCLUSION
The possibilities are numerous and can be explored further. If this technology can be put into practical use, every bulb could be used as something like a Wi-Fi hotspot to transmit wireless data, and we would move toward a cleaner, greener, safer and brighter future. The concept of Li-Fi is currently attracting a great deal of interest because it offers a genuine and very efficient alternative to radio-based wireless. As a growing number of people and their devices access wireless internet, the airwaves are becoming increasingly clogged, making it more and more difficult to get a reliable, high-speed signal. Li-Fi may solve issues such as the shortage of radio-frequency bandwidth and also allow internet access where traditional radio-based wireless is not allowed, such as in aircraft or hospitals. One of its shortcomings, however, is that it only works in direct line of sight.
REFERENCES
[1] en.wikipedia.org/wiki/Li-Fi
[2] www.lificonsortium.org
[3] en.wikipedia.org/wiki/visible_light_communication
[4] Gordon Povey, 'Li-Fi Consortium', dated 19 October 2011.
[5] Ian Lim, 'Li-Fi – Internet at the Speed of Light', The Gadgeteer, 29 August 2011.
[6] http://heightech.blogspot.in/2012/10/lifi-latest-technology-in-wireless.html, October 2012.
[7] Haas, Harald (July 2011). "Wireless data from every light bulb". TED Global, Edinburgh, Scotland.
[8] An IEEE Standard for Visible Light Communications, visiblelightcomm.com, dated April 2011; Tsonev, Sinsanovic, S.; Haas, Harald (15 September 2013). "Complete Modeling of Nonlinear Distortion in OFDM Based Optical Wireless Communication". IEEE Journal of Lightwave Technology 31 (18): 3064–3076. doi:10.1109/JLT.2013.2278675.
[9] Jyoti Rani, Prerna Chauhan, Ritika Tripathi, "Li-Fi (Light Fidelity) – The Future Technology in Wireless Communication", International Journal of Applied Engineering Research, ISSN 0973-4562, Vol. 7, No. 11 (2012).
3D Internet
Shubham Kumar Sinha1, Sunit Tiwari2
1,2 Student of Computer Science & Engineering,
Department of Computer Science and Engineering,
IMS Engineering College, Ghaziabad, 201009, Uttar Pradesh, INDIA
____________________________________________________________________________________
Abstract: The 3D web is also known as the virtual world. The 3D Internet is a powerful new way to reach business customers, consumers, co-workers, students and partners. It combines the immediacy of television, the diverse content of the Web, and the relationship-building strengths of social networking sites like Twitter and Facebook. Yet unlike the passive experience of television and smart TV, the 3D Internet is inherently interactive and engaging. Virtual worlds provide immersive 3D experiences that replicate real life. People who take part in virtual worlds stay online longer, with a keen level of interest and a heightened level of engagement. To take advantage of this, diverse businesses and organizations have claimed an early stake in this fast-growing market.
• The World Wide Web (WWW), which started as a document bank, is rapidly transforming into a full-fledged virtual environment that facilitates interaction, services and communication.
• Under this view, the Semantic Web, Web 2.0 and the newly introduced Web 3.0 movements can be seen as intermediate steps of a natural evolution towards a new milestone, the 3D Internet.
• Here we present how to implement the 3D Internet against 2D technologies and present 3D methodologies.
_____________________________________________________________________________________
I. INTRODUCTION
The achievements of 3D mapping applications and communities, combined with the falling cost of producing 3D environments, are leading some experts to suggest that a dramatic shift is taking place in the way people see and navigate the Web. The appeal of the 3D world to vendors, users and consumers lies in the level of immersion that the programs provide.
The experience of involvement with another character in a 3D environment, as opposed to a flat image or a screen name, adds new appeal to the act of socializing on the web. Microsoft Virtual Earth 3D, for example, is an extension for Internet Explorer that provides a virtual view of the most important cities of the United States: Detroit, Philadelphia, Las Vegas, Los Angeles, San Francisco and many more. Advertisements in Microsoft's Virtual Earth 3D mapping application are placed as billboards and signs on top of buildings, mixed with the application's urban landscapes.
The Internet is evolving to become the dominant presence in cyberspace (the notional environment in which communication over computer networks takes place), a virtual environment facilitating business, communications and entertainment on a global scale. On the other hand, metaverses, that is, collective virtual spaces or virtual worlds such as Second Life (SL) or World of Warcraft (WoW), are much younger than other Web technologies. Today, the momentum and success of virtual worlds are undeniable. The market for MMOGs (Massively Multiplayer Online Games) is estimated to be worth more than one billion US dollars, and such metaverses are fast becoming significant platforms in the converged media world according to some experts. Virtual worlds are increasingly seen as more than games and are interpreted within a business context rather than entertainment. The view that metaverses will play a significant role in the coming years is shared by many researchers, analysts and professionals in the field. Among them are the participants of the metaverse blueprint, who aim to explore multiple pathways to the 3D-enhanced web, the Croquet Consortium, as well as the VRML (Virtual Reality Modeling Language) and X3D languages.
We imagine a 3D Internet which will be to the 2D graphical user interface (GUI) and Web of today what the 2D GUI and World Wide Web (WWW) were to the command line interface (CLI) two decades ago. While the concept may seem incremental in the sense that it merely adds 3D graphics to the current Web, it is in fact revolutionary, for it provides a complete virtual environment that facilitates communication, services and interaction. From this perspective, the 3D Internet can be seen as an evolutionary end point of ongoing efforts such as the Semantic Web and Web 2.0. We define the 3D Internet concept and discuss why it is a goal worth pursuing, what it involves, and how one can interact with it. Along with its vast potential, the 3D Internet also opens many research challenges that must be addressed for it to become a reality. Metaverses have recently caught the attention of the advertising, gaming, 3D design and performing arts communities, among others. However, it is probably difficult to claim that the same level of interest has been raised in the areas of machine learning, distributed computing and networking. Without overcoming these engineering challenges and making a business case to stakeholders, the 3D Internet is destined to remain an exercise in science fiction, a doom experienced by many initially promising concepts such as artificial intelligence or virtual reality.
Our daily activities are increasingly attached to digital social behaviour, and investors in industry are well aware of this passion, allocating more financial means to develop personal products that live up to this level. Our aim is to reduce the complication: the more complicated a system is to use, the less it can be used in everyday life, since we are not willing to learn everything about it. Users therefore demand to combine their needs with their personal informatics in a user-friendly, easy and fully understandable way. We live in a 3D world. All our knowledge is in 3D, and we use 3D in our non-verbal and para-verbal communication, i.e. communication without words. It is no wonder that customers naturally want to communicate with their personal informatics in the very same way, in 3D. That is why, among other reasons, the appearance of all allocated and collective knowledge should also be in 3D on the internet.
II. LITERATURE REVIEW
What is the 3D Internet? [1, 4]
The 3D Internet is an interactive virtual environment for communication, interaction, entertainment and services. The Internet is evolving to become a virtual world or virtual environment that facilitates education, communication, business, sports and entertainment on a large scale. A simple 2D website is an extremely abstract entity and consists of nothing but a bunch of text, documents, videos and pictures. Within the website, at each level of interaction, the developer needs to provide the user with immediate navigational support; otherwise, the user would get lost. The 3D Internet is actually an interactive, alternative way of collecting and organizing data which everybody knows and uses. The 3D Internet is a combination of two powerful entities: the INTERNET and 3D GRAPHICS. As a result, the 3D Internet is interactive and helpful: 3D graphics delivered over the web become easily accessible. The 3D Internet uses an efficient architecture and many protocols to provide the 3D experience.
Why the 3D Internet now? [1, 4, 5]
We live all our lives in a 3D environment, moving between places and organizing objects in spatial systems. Sometimes we need search engines to find what we are looking for, but our brains are naturally adept at remembering spatial relations. Let us consider the following scenario on the 3D Internet. Instead of a flat 2D desktop, I can put my documents on my desk or table at home or in the office, where the documents, desk and home are virtual objects that are 3D representations of real-world counterparts with spatial relationships. Later, when I need those documents, there is a high probability that I can easily remember their location without involving additional processes such as search engines or a recent documents folder. Obviously, it is very hard to realize such a scenario on the current Internet. There, we are like 2D creatures living on flat documents and files, not knowing where we are or where we are going next. We travel constantly from one flat surface to another, each time getting lost, each time asking what to do next. In contrast, the ease of use and intuitiveness of 3D GUIs are an immediate result of the way our brains work, a result of an evolutionary process assuring adaptation to our world. Although the 3D Internet does not provide a solution to all problems, it provides an HCI (human–computer interaction) framework that can decrease mental stress and open the way to rich, innovative and interactive interface designs through spatial relationships.
Another important notion is the web place metaphor of the 3D Internet, which makes interaction among people natural. In this sense, the 3D Internet seems to be a natural successor of Web 2.0. Metaverses such as SL can be considered a starting point of the 3D Internet. Yet they already point towards its significant business opportunities and purposes. Not only would existing online businesses benefit from the inherent interactive nature and spatial HCI paradigms of the 3D Internet, but a whole range of businesses such as fashion, tourism, real estate and education could finally start using the Internet with impact. We expect that the possibility of providing attractive 3D representations of products and services will have an extraordinary effect on online business and business-to-customer commercial activity. From virtual try-before-you-buy to interactive shopping, the commercial potential of the 3D Internet is enormous.
• The 3D Internet is preferable to a document repository for providing an efficient virtual environment for services, interaction, knowledge and communication.
• Availability of low-cost hardware: GPUs, graphics cards, etc.
• Emerging output devices: video eyewear.
• Emerging 3D input devices: 3Dconnexion's Space Navigator.
• Advances in 3D graphics technologies: DirectX, OpenGL.
• 3D support on traditional desktops: Compiz, Vista 3D Flip.
• Distance learning would be beyond joy.
We now discuss a 3D Internet structure as an illustrative example. It shares the main principles of the time-tested underlying architecture of the present Internet as well as many web concepts. The operational principles the 3D Internet shares with its predecessor include a flexible and open architecture, open protocols, simple implementation at the network core, intelligence at the edges, and distributed implementation in the systems. We adopt here the terms universe, world, and web place as the 3D counterparts of the WWW (World Wide Web), websites, and subdomains. We describe each component's functionality briefly below.
World servers: provide user- or server-side created, dynamic and static content making up the specific web place (3D environment), including visuals, physics engine, avatar data and media, to client-side programs. A world server has the necessary and important task of coordinating the co-existence of connected users, initiating communication among them, and assuring in-world consistency in real time. World servers may also provide various services such as e-mail and instant messaging.
Avatar/ID servers: virtual identity management systems containing avatar information and identity as well as the inventory (not only in-world graphics but also documents, text, e-mails, pictures, etc.) of authenticated and registered users, providing these to individual world servers and relevant client programs while assuring the security and privacy of stored information. Avatar/ID servers may be part of world servers.
Universe location servers: virtual location management systems, similar to and including the current DNS, providing connection to the Internet via methods similar to SLurl as well as virtual geographical information. They may act as a distributed directory of world servers, avatar servers and users.
Clients: browser-like viewer programs running on users' computers with extensive networking, 3D rendering and caching capabilities.
Additional elements of the 3D Internet include web places (replacing websites), 3D object creation/editing software, i.e. easy-to-use 3D modeling and design programs such as SketchUp, and communication protocols and standardized mark-up languages. The emergence of new software and tools in addition to the ones mentioned should naturally be expected.
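As a rough illustration of the component roles just described (my own sketch, not from the paper; all class and field names are hypothetical), the division of responsibilities could be modelled like this:

from dataclasses import dataclass, field

@dataclass
class Avatar:
    """Virtual identity held by an Avatar/ID server: appearance, inventory, credentials."""
    user_id: str
    appearance: dict = field(default_factory=dict)   # e.g. clothing, height
    inventory: list = field(default_factory=list)    # documents, pictures, in-world objects

@dataclass
class WorldServer:
    """Hosts one web place: serves content and keeps connected users consistent."""
    place_name: str
    connected: dict = field(default_factory=dict)    # user_id -> Avatar

    def admit(self, avatar: Avatar) -> None:
        self.connected[avatar.user_id] = avatar      # in-world consistency is the server's job

@dataclass
class UniverseLocationServer:
    """DNS-like directory mapping web-place names to world servers."""
    directory: dict = field(default_factory=dict)    # place_name -> WorldServer

    def resolve(self, place_name: str) -> WorldServer:
        return self.directory[place_name]

# A client would resolve a place name via the location server, fetch its avatar from
# the Avatar/ID server, and then connect to the returned WorldServer.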
III. EVOLUTION OF 3D INTERNET [3, 10]
Web 1.0:
In Web 1.0, a small number of coders created Web pages for a large number of users. As a result, people got information by going directly to the source: Microsoft.com for Windows issues, CNN.com for news, and Adobe.com for graphic design issues.
Web 2.0:
In Web 2.0, people publish content that other people can read and use, and companies build platforms that let people publish content for other people (e.g. YouTube, Blogger, AdSense, Wikipedia, Flickr, MySpace, RSS). Web 2.0 sites often feature a user-friendly, rich interface based on OpenLaszlo, Flex 3, Ajax or similar rich-media technologies. Web 2.0 has become popular because of its rich look and its use of the best GUIs.
Web 3.0:
With the newly introduced Web 3.0 applications, we will see data being integrated and applied in innovative ways that were never possible before. Imagine taking things from Amazon, merging them with data from Google, and then building a site that would tailor your shopping experience based on a combination of Google Trends and new products. Another major leap in Web 3.0 is the introduction of the 3D Internet into the web, which would replace existing Web pages.
IV. HOW IT WORKS [1, 5, 7, 8]
Conventional web caching approaches will not satisfy the needs of a 3D Internet environment consisting of 3D virtual worlds hosted on different servers. One challenge stems from the fact that an avatar (an icon or a figure representing a particular person) contains significantly more information about the user who is visiting a 3D world than cookies do about a 2D web site visitor. For instance, an avatar contains information about appearance (e.g. clothing, height, weight) and behavior (e.g. conversation, visibility). As avatars move between worlds, caching will be needed in server-to-server interactions to enable fast and responsive transitions between worlds. This is intensified by avatars carrying objects (e.g. a car) or virtual companions (e.g. a virtual cat or other animal) with them, which requires the transfer of large volumes of information in a short time when changing worlds. Another challenge is related to the fact that some virtual objects or companions are essentially not static documents but running programs. They have code that defines how they react to certain inputs, and they have partly autonomous behavior. Thus, when an avatar and its companions move to a world, the world server (or servers) needs to execute their corresponding code. This raises a number of interesting research problems: how can we safely run potentially untrusted code (for instance, when the virtual companions are user-generated and custom built)? How will the economics of such transactions be handled? How can we move running code between different world servers without fatally disrupting its execution? Platforms will be needed that allow the dynamic deployment of potentially untrusted computation at globally dispersed servers in a secure, fast and accountable manner.
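To make the server-to-server transfer concrete, here is a small hypothetical sketch (not from the paper): the avatar state, including carried objects and companions, is serialized once and cached under a content hash, so that a destination world server only fetches the payload when it has not already seen that hash.

import hashlib, json

class TransferCache:
    """Server-side cache keyed by content hash, so repeated avatar transfers can skip the payload."""
    def __init__(self):
        self.store = {}

    def put(self, payload: dict) -> str:
        blob = json.dumps(payload, sort_keys=True).encode()
        digest = hashlib.sha256(blob).hexdigest()
        self.store[digest] = blob
        return digest                    # only the digest travels if the peer already has the blob

    def has(self, digest: str) -> bool:
        return digest in self.store

# Example: the source world server serializes avatar state (appearance, inventory,
# companions) and sends the digest first; the destination requests the full blob
# only on a cache miss, keeping world-to-world transitions fast.
avatar_state = {"user_id": "u42", "appearance": {"height": 1.8}, "companions": ["cat"]}
cache = TransferCache()
print(cache.has(cache.put(avatar_state)))  # True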
A. Latency Minimization
As the 3D Internet will increase the reliance on interactivity and graphics, it will be vital that the latency clients observe when interacting with servers is reduced. It is known from existing implementations such as SL (Second Life) that high latency causes low responsiveness and reduced user satisfaction. Therefore, the network has to be designed intelligently to overcome these challenges. We propose a hybrid peer-to-peer (P2P) approach to reduce server load and ensure scalability of the 3D Internet infrastructure. It consists of three types of communication, i.e. client-to-server (C2S), server-to-server (S2S) and client-to-client (C2C), each with different latency and bandwidth requirements.
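One simple way to picture the hybrid approach (an illustrative sketch only; the message kinds and latency budgets below are invented for the example) is to tag each message with its communication type and pick the path accordingly:

from enum import Enum

class Channel(Enum):
    C2S = "client-to-server"   # e.g. login, asset downloads
    S2S = "server-to-server"   # e.g. avatar hand-over between worlds
    C2C = "client-to-client"   # e.g. avatar position updates to nearby peers

# Hypothetical latency budgets (milliseconds) for each channel type.
LATENCY_BUDGET_MS = {Channel.C2S: 200, Channel.S2S: 100, Channel.C2C: 50}

def route(message_kind: str) -> Channel:
    """Send frequent, small updates peer-to-peer; keep authoritative state on servers."""
    if message_kind == "position_update":
        return Channel.C2C
    if message_kind == "world_handover":
        return Channel.S2S
    return Channel.C2S

print(route("position_update"), LATENCY_BUDGET_MS[route("position_update")])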
B. Security and Trust
There is an array of alternatives for enabling the transparent and seamless authentication of avatars, users and other objects in the 3D Internet world. The Single Sign-On concept envisages users logging in only once, for example on the web page of an online service, and visiting further services or web-based applications without the need to log in again. The user thus experiences unhindered, seamless usage of services.
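A minimal sketch of the Single Sign-On idea (hypothetical names and a simple HMAC-signed token; real deployments would use a standard protocol such as SAML or OpenID Connect):

import hmac, hashlib, time

SECRET = b"shared-idp-secret"          # held by the identity provider and trusted services

def issue_token(user_id: str, ttl_s: int = 3600) -> str:
    """Identity provider signs 'user_id|expiry' once, at first login."""
    expiry = str(int(time.time()) + ttl_s)
    sig = hmac.new(SECRET, f"{user_id}|{expiry}".encode(), hashlib.sha256).hexdigest()
    return f"{user_id}|{expiry}|{sig}"

def verify_token(token: str) -> bool:
    """Any participating service can verify the token without asking the user to log in again."""
    user_id, expiry, sig = token.split("|")
    expected = hmac.new(SECRET, f"{user_id}|{expiry}".encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected) and time.time() < int(expiry)

print(verify_token(issue_token("avatar-42")))  # True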
V. FUTURE ASPECTS
In the future the 3D Internet can be used in many applications, such as:
• E-commerce: 3D product visualization on the web, 3D virtual shops, interactive demonstrations, 3D banners for ads.
• Training: web-based training using interactive 3D for education.
• Games and entertainment: multi-player 3D games; streaming 3D animation (lower bandwidth than video, less traffic, can run full screen with better interactivity).
• Social interaction: 3D chat.
• Education: virtual field trips, interactive distance education as well as on-campus teaching, virtual experiments for the physical sciences on the 3D web, and historical recreations for the social sciences and arts.
A. 3D information visualization for various fields
A1. 3D MOUSE
3Dconnexion manufactures a line of human interface devices for manipulating and navigating computer-generated 3D imagery. These devices are often referred to as 3D motion controllers, 3D navigation devices, 6DOF devices (six degrees of freedom) or 3D mice. Commonly used in CAD applications, animation, 3D modeling and product visualization, they let users manipulate the controller's pressure-sensitive handle (historically referred to as a ball, cap, knob or mouse) to fly through 3D environments or manipulate 3D models within the system. The appeal of these devices over a mouse and keyboard is the ability to rotate, pan and zoom 3D imagery at the same instant, without stopping to change the software interface or switch directions using keyboard shortcuts.
A2. 3D SHOPPING
3D shopping is a highly effective way of shopping online. 3DInternet spent years of development and research to build the world's first fully functional, interactive, innovative and collaborative shopping mall, where online users can use 3DInternet's Hyper-Reality technology to navigate and immerse themselves in a virtual shopping atmosphere. In real life, we get tired running around a mall looking for that perfect gift for someone; but using the 3D internet, you won't have to worry about your kids getting lost in the crowd, and you can finally say goodbye to long checkout lines.
A3. HANDS ON: EXIT REALITY
The idea behind ExitReality is that when browsing the web in the old 2D version we are undoubtedly using now, we can hit a button to transform the 2D site into a 3D environment that we can walk around in and virtually interact with other users who are visiting the same site. This shares many of the same goals as Google's Lively, though ExitReality is admittedly attempting a few other tricks. Installation is performed through an executable file which places ExitReality shortcuts in Quick Launch and on the desktop, but somehow forgets to add the necessary ExitReality button to Firefox's toolbar. After adding the button manually and repeatedly being told our current version was out of date, we were ready to 3D-ify some websites and see just how much of reality we could leave in two-dimensional dust. ExitReality is designed to offer different kinds of 3D environments built around spacious rooms that users can explore and customize, but it can also turn some sites, such as Amazon, into virtual museums, hanging photos on virtual walls and halls. Strangely, it treated a technical site as an image gallery and presented it as a malformed 3D gallery.
VI. CONCLUSION
The 3D internet will lead to new research trends for the Future Media Internet, with emphasis on media but without neglecting the importance of the network. The goal is to determine the current trends and future perspectives of research on the Future Media Internet.
• At this point in time we face a unique opportunity for the evolution of the internet towards a much more versatile, interactive and usable version, i.e. the 3D INTERNET.
• There are still many research challenges on the way.
• We can use the existing hype as the driver of research and realization of the 3D INTERNET.
• The 3D internet is a step toward a future in which it will serve not only as a metaverse but will change the way we perceive the internet of today.
• It also involves research topics such as:
 Scalable multimedia compression, transmission, concealment;
 Network coding and streaming;
 Content & context fusion for improved multimedia access;
IJSWS 15-305; © 2015, IJSWS All Rights Reserved
Page 21
S. K. Sinha and S. Tiwari, International Journal of Software and Web Sciences, Special Issue-1, May 2015, pp. 18-22
 3D content generation leveraging emerging acquisition channels.
 Immersive multimedia experiences;
 Multimedia, multimodal & deformable objects search;
 Content with memory and behavior.
Based on the above challenges the experts have identified potential applications and impact that these challenges
should have. This white paper is expected to be the basis for a Future Media Internet Book which is the next big
challenge of the Task Force.
REFERENCES
[1] T. Alpcan, C. Bauckhage, "Towards 3D Internet: Why, what, and how?", Cyberworlds, 2007, ieeexplore.ieee.org.
[2] P. Kapahnke, P. Liedtke, S. Nesbigall, S. Warwa, "ISReal: an open platform for semantic-based 3D simulations in the 3D internet", The Web Semantic, 2010, Springer.
[3] Tamás P., "3D Measuring of the Human Body by Robots", 5th International Conference on Innovation and Modelling of Clothing Engineering Processes – IMCEP 2007, Slovenia, October 2007, University of Maribor, Maribor (2007).
[4] H. Prendinger, S. Ullrich, A. Nakasone, "MPML3D: scripting agents for the 3D internet", IEEE Transactions, 2011, ieeexplore.ieee.org.
[5] M. Macedonia, "Generation 3D: Living in virtual worlds", Computer, 2007, ieeexplore.ieee.org.
[6] Klara Wenzel, Akos Antal, Jozsef Molnar, Bertalan Toth, Peter Tamas, "New Optical Equipment in 3D Surface Measuring", Journal of Automation, Mobile Robotics & Intelligent Systems, Vol. 3, No. 4, pp. 29-32, 2009.
[7] P. Daras, F. Alvarez, "A Future Perspective on the 3D Media Internet", Future Internet Assembly, 2009, books.google.com.
[8] Gabor Ziebig, "Achieving Total Immersion: Technology Trends behind Augmented Reality – A Survey", in Proc. of Simulation, Modelling and Optimization, 2009.
[9] M. R. Macedonia, D. P. Brutzman, M. J. Zyda, "NPSNET: a multi-player 3D virtual environment over the Internet", Interactive 3D, 1995, dl.acm.org.
[10] P. Baranyi, B. Solvang, H. Hashimoto, "3D internet for cognitive info-communication", Proc. of 10th, 2009, researchgate.net.
Application of Clustering Techniques for Image Segmentation
Nidhi Maheshwari1, Shivangi Pathak2
1,2 Student of Computer Science & Engineering,
Department of Computer Science and Engineering,
IMS Engineering College, Ghaziabad, 201009, Uttar Pradesh, INDIA
______________________________________________________________________________________
Abstract: In the field of computer science, by segmentation we mean the process of splitting a digital image into different parts, i.e. sets of pixels. This paper surveys the various clustering methods used to perform segmentation more efficiently. For many years, clustering has been a significant and strongly recommended method for effective image segmentation. By definition, clustering is the process of assembling objects of similar kind in a given sample space. Clustering is primarily performed by taking into account attributes such as shape, size, color and texture. The principal purpose of clustering is to extract the relevant data from an extensive database such as an image and to support the effective use of the image.
Keywords: Segmentation, Clustering, k-means clustering algorithm, fuzzy clustering, fuzzy c-means, ISODATA
technique, Hierarchical clustering, Content-based image retrieval (CBIR), Log-based Clustering.
______________________________________________________________________________________
I. INTRODUCTION
Nowadays, images are considered one of the most important tools for fetching information. The biggest challenge is to understand the image exactly and extract the information it contains so that it can be put to use. To begin with, we must partition the image and recognize the different objects in it. Image segmentation means the process of splitting a given image into similar regions on the basis of certain attributes such as size, color, shape and texture. Segmentation plays a very important role in extracting information from the image: it forms similar regions by classifying pixels on some basis and grouping them into regions of similarity [1]. Clustering of an image is one notable approach to image segmentation. After extraction of the attributes, they are treated as vectors and are amalgamated into separate clusters based on each class of image. Clustering is classified into two types, supervised clustering and unsupervised clustering [1]. Supervised clustering involves humans in the decision-making of clustering rules or criteria, whereas unsupervised clustering involves no such interface and takes the decisions itself [1].
II. IMAGE SEGMENTATION
Image segmentation is the process of partitioning a given image into multiple segments (sets of pixels) on the basis of common attributes. These attributes can be represented by vectors of color, texture, shape and many others, each of which contributes to finding the similarity of the picture elements in that region. The objective of segmentation is to transform the image into a simpler format and change its representation into something more amenable to scrutiny. It is basically used to determine the objects and their borders (curves, lines, etc.) in the provided image. The result of image segmentation is a set of regions that collectively represent the entire image. In each region, pixels share common features or attributes such as color, shape and size. In other words, image segmentation can be defined as the process of evaluating and analyzing the image. It helps in the recognition of objects, image compression, editing, etc. [1].
III. CLUSTERING
Clustering may be defined as the process of grouping objects based on some attributes, so that objects with similar attributes lie in the same cluster [2]. Clustering is an important data mining technique used in pattern recognition, image processing, data analysis, etc. An image can be considered a collection grouped under keywords. Keyword-based clustering is built upon the concept of metadata and involves a keyword, which is a form of attribute used to depict different aspects of an image; images sharing a common factor are classified into the same cluster by giving them some value. In content-based clustering, an actual reference is drawn to the color, texture or any other component or information that can be derived from the image, while in hierarchical clustering a hierarchy of clusters is maintained for each element. Various clustering-based techniques are available for performing image segmentation effectively and efficiently. The clustering methods covered in this paper are the k-means clustering algorithm, fuzzy c-means, the ISODATA technique, hierarchical clustering, content-based image retrieval and log-based clustering [1].
IV. CLUSTERING TECHNIQUES
Images contain more than one object, so partitioning an image according to the available features in order to extract meaningful results has become a tough task. Therefore, one of the simplest and most suitable methods for segmentation is clustering. Clustering algorithms can be classified into two categories: supervised (further sub-divided into semi-supervised) and unsupervised techniques. The former, as mentioned above, requires human interaction for better results. The latter includes density-based algorithms, e.g. fuzzy c-means [1]. Having explained clustering and mentioned its techniques, the following is a brief attempt to throw some light on the various means of clustering.
A. K-Means Clustering
K-means is an unsupervised clustering technique in which the data vectors obtained are amassed into predefined clusters [3]. Initially, the centroids of the predefined clusters are initialized randomly. There is at least one object in every cluster, and the clusters must not overlap with each other. The dimensions of the centroids and the data vectors are the same. The Euclidean norm measures closeness, which forms the basis on which every pixel is assigned to a cluster [4]. K-means is an iterative technique that is used to partition an image into k clusters. The algorithm is as follows:
• Choose k cluster centers, either randomly or heuristically.
• Assign each pixel of the image to the cluster that minimizes the distance between the picture element and the cluster center.
• Calculate the mean of all the pixels in each cluster and recompute the cluster center.
• Repeat steps 2 and 3 until no significant changes are encountered [1][5].
The following flow summarizes the algorithm written above [5]:
1. Start with the original image.
2. Convert the RGB color space to the L*a*b color space.
3. Classify the colors using k-means clustering.
4. With the help of the obtained results, label the pixels.
5. The outputs obtained are images that segment the original image by color.
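For illustration, here is a minimal NumPy sketch of k-means colour segmentation along these lines (my own example, not code from the paper; for brevity it clusters RGB values directly rather than converting to L*a*b first):

import numpy as np

def kmeans_segment(image, k=3, iters=20, seed=0):
    """Cluster the pixel colours of an (H, W, 3) image into k groups and return a label map."""
    rng = np.random.default_rng(seed)
    pixels = image.reshape(-1, 3).astype(float)
    centers = pixels[rng.choice(len(pixels), size=k, replace=False)]   # random initial centroids
    for _ in range(iters):
        dists = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)                                  # assign each pixel to nearest centre
        for j in range(k):                                             # recompute each centre as the cluster mean
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    return labels.reshape(image.shape[:2])

# Example on a random "image"; with a real photo, each label corresponds to a colour region.
demo = np.random.randint(0, 256, size=(64, 64, 3))
print(np.unique(kmeans_segment(demo, k=3)))   # e.g. [0 1 2]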
A.1 Application: The k-means clustering algorithm has been used for the prediction of students' academic performance. It serves as a good standard for monitoring the progress of students' performance in higher institutions, and it improves the decision-making of academic planners who monitor candidates' performance semester by semester so as to improve future academic results [6]. Other real-world applications of k-means clustering are in data mining, machine learning, etc.
B. Fuzzy C-Means Clustering
The fuzzy clustering technique is implemented with the help of graphs. A group of images forms a cluster based on similarity checks using fuzzy clustering, and each image represents a node of a graph. One of the most useful fuzzy clustering algorithms is the Fuzzy C-Means (FCM) algorithm [12]. FCM is an unsupervised clustering algorithm, and its applications lie in agricultural engineering, astronomy, chemistry, image analysis, medical diagnosis, and so on. FCM performs its analysis on the basis of the distances between the input data points. The distances decide the cluster formation and hence the cluster centers for each cluster [8].
FCM is also called a data clustering technique in which a data set is grouped into n clusters, where every data point is related to every cluster: a point close to a cluster center has a high degree of membership in that cluster, whereas a data point lying far away from the cluster center has a low degree of membership [8].
The algorithm is as follows:
• Calculate the c center vectors {Vxy}.
• Calculate the distance matrix D[c,n].
• Update the partition matrix U(r) for the r-th step.
• If ||U(k+1) − U(k)|| < δ then stop; otherwise return to step 2, updating the cluster centers repeatedly [8].
B.1 Implementation: FCM can be implemented with the help of the MATLAB function fcm. The function takes as inputs a data set and a desired number of clusters, and returns the optimum cluster centers and membership grades for each point. Initially, the cluster centers are guessed, and these guesses are most likely incorrect. Next, fcm assigns every data point a membership grade for each cluster. These steps are performed iteratively, each time updating the cluster centers and membership grades, to finally move the centers to the right location within the data set [8].
The time complexity of the FCM algorithm is O(ndc²i). It requires more computation time due to the measure calculations involved [8].
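A compact NumPy sketch of the FCM update rules (an illustration added here under the standard formulation, with fuzziness coefficient m; this is not the MATLAB fcm code referred to above):

import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=100, seed=0):
    """Return (centers, U) where U[i, k] is the membership of point k in cluster i."""
    rng = np.random.default_rng(seed)
    U = rng.random((c, len(X)))
    U /= U.sum(axis=0)                               # memberships of each point sum to 1
    for _ in range(iters):
        Um = U ** m
        centers = (Um @ X) / Um.sum(axis=1, keepdims=True)           # membership-weighted means
        dist = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-12
        # u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
        U = 1.0 / ((dist[:, None, :] / dist[None, :, :]) ** (2.0 / (m - 1))).sum(axis=1)
    return centers, U

# Two well-separated blobs: each point ends up with a high membership in one cluster.
X = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 5])
centers, U = fuzzy_c_means(X, c=2)
print(centers.round(1))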
B.2 Application: One application of FCM clustering is clustering metabolomics data. FCM is performed directly on the data matrix in order to generate a membership matrix that represents the degree of association the samples have with each cluster. The parameters used are the number of clusters (C) and the fuzziness coefficient (m), which represents the extent of fuzziness in the algorithm. Based on these parameters, FCM was able to unveil the main phenotype changes and individual characteristics of three genotypes of E. coli. It therefore provides a powerful research tool for metabolomics, with enhanced visualization, explicit classification and outlier estimation [9]. FCM clustering is thus one of the oldest components of soft computing; it is well suited to handling issues concerning the ambiguity of patterns, noisy data and human interaction, and it can provide approximate solutions quickly [8].
C. Hierarchical Clustering
Hierarchical clustering is one of the primary clustering techniques for information retrieval. The process involves the amalgamation of several images and the formation of clusters in the form of a tree (a dendrogram). It involves two main methods. The first is the agglomerative approach, also called the bottom-up approach, where we start from the bottom, with each object in its own cluster, and move up by merging objects; this process continues until all objects are combined into a single cluster. The second is the divisive method, also called the top-down approach, wherein all objects are initially considered a single group and are then split recursively until every group contains only one object [1].
Agglomerative Clustering Algorithm
• Start with each sample in its own singleton cluster.
• At each step, greedily merge the two most similar clusters.
• Stop when there is only one cluster containing all samples; otherwise go to step 2 [1].
The general-case complexity of this approach is O(n³); as a result, it is too slow for large data sets [7].
Divisive Clustering Algorithm
• Begin with all the samples in the same cluster.
• At each step, remove the outliers from the least cohesive cluster.
• Stop when every sample is in its own singleton cluster; otherwise go to step 2.
The complexity of divisive clustering is O(2ⁿ), which is worse than the complexity of agglomerative clustering [7].
Cluster Dissimilarity: a measure of difference
To decide which clusters should be combined or divided, a measure of dissimilarity between sets of observations is necessary. This is usually achieved by the use of an appropriate metric and a linkage criterion that specifies the dissimilarity of sets as a function of the pairwise distances of observations [7].
Common metrics for hierarchical clustering include:
• Euclidean distance
• Squared Euclidean distance
• Manhattan distance
• Maximum distance
• Mahalanobis distance [7]
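As a small illustration (my own sketch using SciPy, which provides the agglomerative variant together with the linkage criteria and metrics listed above):

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Toy data: two separated groups of 2D points.
X = np.vstack([np.random.randn(10, 2), np.random.randn(10, 2) + 8])

# Agglomerative clustering: 'average' linkage with the Euclidean metric builds the dendrogram.
Z = linkage(X, method="average", metric="euclidean")

# Cut the dendrogram so that at most 2 clusters remain.
labels = fcluster(Z, t=2, criterion="maxclust")
print(labels)   # e.g. [1 1 ... 2 2]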
C.1 Fact: Hierarchical clustering does not require the number of clusters to be specified in advance. It is sometimes slow, and it gives different partitionings depending on the level of resolution one is looking at.
C.2 Application: Hierarchical clustering is useful for analyzing proteomic interaction data. The logic involves using the information for all the interactions between the elements of a set to study the strength of the interaction of each element pair. To demonstrate the usefulness of this approach, an analysis of a real case involving 137 Saccharomyces cerevisiae proteins has been reported. The method has a broad range of applications and can hence be accepted as a benchmark analysis of proteomic data.
D. ISODATA Technique
The ISODATA method was developed by Ball, Hall and others; it augments the k-means method with the division (splitting) and fusion (merging) of clusters. The granularity of the clustering can be controlled by performing division and fusion on the clusters generated by the k-means method: a cluster whose members are too spread out is divided, and clusters whose centers are too close together are merged. The parameters governing division and fusion are set beforehand. The procedure of the ISODATA method is as follows [11]:
depicted below [11]:
1.
Variables, such as the number of the last clusters, a convergence condition of rearrangement, judgment
conditions of a minute cluster, branch condition of division and fusion, and end conditions, are
determined.
2.
The initial cluster center of gravity is selected.
3.
Based on the convergence condition of rearrangement, an individual is rearranged in the way of the Kmeans method.
4.
It considers with a minute cluster if it is below threshold with the number of individuals of a cluster,
and accepts from future clustering.
5.
When it exceeds the threshold that exists within fixed limits which the number of clusters centers on
the number of the last clusters, and has the least of the distance between the cluster center of gravity
and is less than the threshold with the maximum of distribution in a cluster, clustering regards it as
convergence and ends processing. When not converging, it progresses to the following step.
6.
In case the number of clusters exceeds the fixed range, when large, a cluster is divided, and when
small, it will unite. It divides in case of number of times of a repetition being odd when the number of
clusters within fixed limits, and if the number is even, it unites. If division and fusion finish, it will
return to step 3 and processing will start again.
Summarizing the division and fusion steps:
(a) Division of a cluster: if the variance of a cluster is above the threshold, split the cluster in two along its first principal component and compute the new cluster centers of gravity. The variance of each cluster is recalculated, and division is continued until it falls below the threshold.
(b) Fusion of clusters: if the minimum distance between cluster centers of gravity is below the threshold, unite the cluster pair and compute the new cluster center of gravity. The distances between the cluster centers of gravity are recalculated, and fusion is continued until the minimum exceeds the threshold [11].
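The split/merge idea can be sketched very roughly as follows (a deliberately simplified, hypothetical illustration on top of plain k-means assignments; the thresholds are invented, and the principal-component split is replaced by a simple perturbation of the mean):

import numpy as np

def isodata_step(X, centers, max_var=2.0, min_dist=1.0):
    """One simplified ISODATA pass: assign points, split spread-out clusters, merge close ones."""
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    labels = d.argmin(axis=1)
    new_centers = []
    for j in range(len(centers)):
        pts = X[labels == j]
        if len(pts) == 0:
            continue
        c = pts.mean(axis=0)
        if pts.var(axis=0).sum() > max_var:          # division: cluster too spread out
            offset = pts.std(axis=0)
            new_centers += [c + offset, c - offset]
        else:
            new_centers.append(c)
    merged, used = [], set()                         # fusion: merge centers closer than min_dist
    for i, ci in enumerate(new_centers):
        if i in used:
            continue
        group = [ci]
        for k in range(i + 1, len(new_centers)):
            if k not in used and np.linalg.norm(ci - new_centers[k]) < min_dist:
                group.append(new_centers[k]); used.add(k)
        merged.append(np.mean(group, axis=0))
    return np.array(merged)

X = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 6])
centers = X[np.random.choice(len(X), 2, replace=False)]
print(isodata_step(X, centers).shape)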
D.1 ISODATA Segmentation: The ISODATA technique processes objects that are the pixels of the input image. The observation matrix in this case is formed by two columns representing the attributes associated with each pixel of the image: the columns correspond to the MGL and the DMGL. The square window used should have an odd side length (3 × 3, 5 × 5, ...) [11]. Each pixel is attributed to a specific class in the process. The resulting image is segmented into C different and distinct regions, where each region is associated with a class [11].
E. Relevance Feedback
Relevance feedback is one of the earliest methods introduced in the information retrieval field, but it has proved to be even more essential in the CBIR field. Under this approach, the user is allowed to interact with the image retrieval technique by providing feedback on which elementary feature information is pertinent to the query. The approach proved very efficient in keyword-based image retrieval. Keyword-based image retrieval, or CBIR, involves retrieving images based on the keywords provided by the user and the corresponding images in the database [1]. However, this technique faces the challenge that some images may not have apt keywords to describe them, which makes the search quite complex. For these reasons, the relevance feedback algorithm is used for segmentation: users' feedback helps to minimize plausible faults and redundancy. The method uses an underlying technique called the Bayesian classifier to deal with positive as well as negative feedback [1]. The primary limitation of this algorithm is its static nature, i.e. its inability to handle changes in a user's requirements, such as the addition of a new topic. This is where the log-based clustering algorithm comes into view [1].
F. Log-Based Clustering
Logs can also be used to achieve image segmentation (using clustering). Logs are maintained by the information retrieval process, for example web servers' access logs. The drawback of this algorithm is that it delivers incomplete information, because log-based clusters are based only on those files that have been accessed by some users. The solution to this problem is the maintenance of a log-based vector for each session vector on the basis of the log-based documents. A single vector represents a given document in a "hybrid matrix form", and the documents belonging to the log-based documents are created in this form. The advantage of this solution is that a document that has never been accessed can also have its own vector. A hybrid matrix is used for the representation of the documents, which are grouped by a content-based clustering algorithm [1]. In comparison with the CBIR algorithm it is quite accurate, though not very efficient in use [1].
V. COMPARATIVE ANALYSIS
The techniques discussed above can be compared as follows:
K-means: time complexity O(ncdi); predefined clusters required; unsupervised learning; structure not defined; application: biometrics.
Fuzzy C-means: time complexity O(ndc²i); predefined clusters not required; unsupervised learning; graph structure; partitional clustering; applications: data clustering, pattern recognition.
Hierarchical: time complexity O(n³) (agglomerative) and O(2ⁿ) (divisive); predefined clusters not required; unsupervised learning; dendrogram (tree-like) structure; sequential clustering; application: audio event detectors.
ISODATA (Iterative Self-Organizing Data Analysis Technique): time complexity O(nkl); predefined clusters not required; unsupervised learning; structure not defined; application: multispectral remote sensing systems.
VI. APPLICATION OF IMAGE SEGMENTATION IN MEDICAL FIELD
Medical diagnosis pursues information from various sources to reach appropriate conclusions about diseases. The sources can be the results of clinical tests, patient history, histological reviews, and imaging techniques. Imaging techniques have made a huge contribution to the development of medical diagnosis. One safe and easily available technique is ultrasound imaging. This methodology has certain disadvantages: the images are not very clear, and a specialist is needed to intervene and segment out the organs from the available image. The process can also be time-consuming, since it requires the expert to obtain the image and to recognize the specific parts he or she wants for the examination; furthermore, the process causes discomfort to the patient. The major issues are delay in diagnosis and lack of clarity, and a very experienced specialist is needed to reach conclusions using this imaging technique [13]. Therefore, we further explain here that k-means clustering is a better option, since it gives better results for kidney image segmentation owing to the low intensity variation in ultrasound kidney images; the results are compared with other segmentation methods [13].
A. Need for Segmentation
The main objective of segmentation is to unravel and/or modify the representation of an image into something that is more meaningful, concrete and simpler to study. As mentioned earlier, image segmentation is typically used to locate objects and boundaries (lines, curves, etc.) in images, in order to carve out a particular portion of an image for better diagnosis. More specifically, image segmentation is the process of allotting a label to every picture element in an image such that pixels with the same label share certain visual characteristics [13].
B. Segmentation of Medical Images
• Locating tumors and other pathologies
• Measuring tissue volumes
• Computer-guided surgery
• Diagnosis
• Treatment planning
• Study of anatomical structure [13]
C. Contribution of Clustering Techniques in Medical Images
Clustering is said to be an unsupervised learning task in which we need to identify a finite set of categories, called clusters, in order to classify pixels. Clustering uses no training stage; rather, it is self-trained using the available data. Clustering is mainly used when the classes are not known in advance. A similarity criterion is defined between pixels, and similar pixels are then brought together into groups to form clusters. The principle behind grouping the pixels is to maximize the intra-class (within-cluster) similarity and minimize the inter-class similarity. The quality of a clustering result depends on both the similarity measure used by the method and its implementation. Clustering algorithms are classified as hard clustering, k-means clustering, fuzzy clustering, etc. [13]. Among intensity-based clustering methods, k-means clustering in particular gives the most optimized segmentation, because the intensity variation in an ultrasound kidney image is comparatively small; the clustering process therefore becomes very easy, classification is straightforward, and the method is easy to implement [13].
VII. REFERENCES
[1] M. Agarwal, G. Dubey, "Application of clustering techniques for image segmentation", International Journal of Advanced Research in Computer Science and Software Engineering, vol. 3, pp. 01, Apr. 2013.
[2] Giacinto G., Roli F., "Bayesian relevance feedback for content-based image retrieval", Pattern Recognition, vol. 7, pp. 1499-1508, 2004.
[3] P. S. Sandhu, H. Singh, "A Neuro-Fuzzy Based Software Reusability Evaluation System With Optimized Rule Selection", IEEE 2nd International Conference on Emerging Technologies (ICET 2006), pp. 664-669, Nov. 2006.
[4] Irani, A.A.Z., Belaton, B., "A k-means Based Generic Segmentation System", Dept. of Computer Sci., Univ. Sains Malaysia, Nibong Tebal, Malaysia, Print ISBN: 978-0-7695-3789-4, pp. 300-307, 2009.
[5] A. K. Bhogal, N. Singla, M. Kaur, "Color Image Segmentation Using k-means Clustering Algorithm", International Journal of Engineering and Technology, vol. 1, pp. 18-20, 2010.
[6] Oyelade, O.J., Oladipupo, O.O., Obagbuwa, I.C., "Application of k-Means Clustering Algorithm for Prediction of Students' Academic Performance", International Journal of Computer Science and Information Security, vol. 7, 2010.
[7] Rokach, Lior, O. Maimon, "Clustering methods", Internet: http://en.wikipedia.org/wiki/Hierarchical_clustering, Mar. 28, 2015 [Apr. 3, 2015].
[8] S. Ghosh, S. K. Dubey, "Comparative Analysis of K-Means and Fuzzy C-Means Algorithms", International Journal of Advanced Computer Science and Applications, vol. 4, 2013.
[9] Li X, Lu X, Tian J, Gao P, Kong H, Xu G, "Application of Fuzzy C-Means Clustering in Data Analysis of Metabolomics", National Center for Biotechnology Information, vol. 81, pp. 4468-4475, 2009.
[10] M. Merzougui, M. Nasri, B. Bouali, "Image Segmentation using Isodata Clustering with Parameters Estimated by Evolutionary Approach: Application to Quality Control", International Journal of Computer Applications, vol. 16, 2013.
[11] M. Merzougui, A. EL Allaoui, M. Nasri, M. EL Hitmy and H. Ouariachi, "Unsupervised classification using evolutionary strategies approach and the Xie and Beni criterion", International Journal of Advanced Science and Technology, vol. 19, pp. 43-58, Jun. 2010.
[12] Bezdek, James C., "Pattern Recognition with Fuzzy Objective Function Algorithms", Internet: http://en.wikipedia.org/wiki/Fuzzy_clustering, Feb. 26, 2015 [Mar. 25, 2015].
[13] Ahmed, Mohamed N., Yamany, Sameh M., Nevin, Farag, Aly A., Moriarty and Thomas, "A Modified Fuzzy C-Means Algorithm For Bias Field Estimation and Segmentation of MRI Data", Internet: http://en.wikipedia.org/wiki/Fuzzy_clustering#Fuzzy_c-means_clustering, Feb. 26, 2015 [Mar. 26, 2015].
[14] V. Jeyakumar, M. Kathirarasi Hasmi, "Quantative Analysis Of Segmentation Methods On Ultrasound Kidney Image", International Journal of Advanced Research in Computer and Communication Engineering, vol. 2, 2013.
BIG DATA: INFORMATION SECURITY AND PRIVACY
Shubham Mittal1, Shubham Varshney2
1,2 Student of Computer Science & Engineering,
Department of Computer Science and Engineering,
IMS Engineering College, Ghaziabad, 201009, Uttar Pradesh, INDIA
__________________________________________________________________________________________
Abstract: Big data is an evolving term that describes any voluminous amount of structured, semi-structured and unstructured data that has the potential to be mined for information. Encryption alone is not a perfect solution for securing big data; other methods and mechanisms are needed to secure the data. This paper discusses various methodologies that can be used for a project whose basic requirement is big data. The different technologies and software used are also discussed.
Keywords: 3V’s, Data Mining, Insecure Computation, Granular Access Control, Hadoop
__________________________________________________________________________________________
I. INTRODUCTION
A. Big Data
Big data refers to high-volume, high-velocity, and/or high-variety information assets that require new forms of processing to enable enhanced decision making, insight discovery and process optimization. Due to its high volume and complexity, it becomes difficult to process big data using on-hand database management tools. It usually includes data sets with sizes beyond the ability of commonly used software tools to capture, curate, manage, and process within a tolerable elapsed time. Big data "size" is a constantly moving target, as of 2012 ranging from a few dozen terabytes to many petabytes of data. Big data is a set of techniques and technologies that require new forms of integration to uncover large hidden values from large datasets that are diverse, complex, and of a massive scale. It generates value from very large data sets that cannot be analyzed with traditional computing techniques.
B. Big Data Explosion
The quantity of data on planet Earth is growing rapidly. The following are the main areas or fields of big data:
1. Retailer databases
2. Logistics, financial and health care data
3. Social media
4. Vision recognition
5. Internet of things
6. New forms of scientific data
When hosting big data in the cloud, data security becomes a major concern, as cloud servers cannot be fully trusted by data owners. Attribute-Based Encryption (ABE) has emerged as a promising technique to ensure end-to-end data security in cloud storage systems.
The grand challenge, then, is to guarantee the following requirements:
1) Correctness
2) Completeness
3) Security
C. Big Data Characteristics
Big data is mainly characterised by the 3 V's: "Volume, Velocity & Variety".
1. Volume: big data could help many organizations understand people better and allocate resources more effectively. However, traditional computing techniques are not scalable to handle this magnitude of data.
2. Velocity: the rate at which data flows into many organizations now exceeds the capacity of their existing systems. In addition, users increasingly expect data to be streamed to them in real time, and delivering this is quite a challenge.
3. Variety: the variety of data types to be processed is becoming increasingly diverse. Today big data has to deal with many kinds of data sets, such as:
• Photographs
• Audio and video
• 3D models
• Simulations
• Location data
In comparison, traditional data mostly consisted of easily handled data such as:
• Documents
• Finances
• Stock records
• Personal files
II. LITERATURE REVIEW
The complex nature of big data is primarily driven by the unstructured nature of much of the data that is
generated by modern technologies, such as that from web logs, radio frequency identification (RFID), sensors embedded in
devices, machinery, vehicles, Internet searches, social networks such as Facebook, portable computers, smart
phones and other cell phones, GPS devices, and call center records. In most cases, in order to effectively utilize
big data, it must be combined with structured data (typically from a relational database) from a more
conventional business application, such as Enterprise Resource Planning (ERP) or Customer Relationship
Management (CRM). Similar to the complexity, or variability, aspect of big data, its rate of growth, or velocity
aspect, is largely due to the ubiquitous nature of modern on-line, real-time data capture devices, systems, and
networks. It is expected that the rate of growth of big data will continue to increase for the foreseeable future.
With great amounts of data comes an even greater responsibility to guarantee its security. The organization and its technology provider together are expected to be the trusted guardians of this data, and in many jurisdictions there is a legal obligation to safeguard it. Achieving this typically requires industry-leading IT infrastructure, including multiple layers of replication in data centers for redundancy and failover reliability, and backup facilities in separate locations for disaster recovery assurance. In short, big data must be protected while remaining readily accessible, and organizations should be prepared to access and analyze even greater volumes of data as big data continues to grow; that day is not far away, so the best time to start is now.
III. HOW BIG DATA IS BETTER THAN TRADITIONAL DATA
The following are some of the ways in which big data improves on traditional data analysis for security.
1. Existing analytical techniques do not work well at large scale and typically produce so many false positives that their efficacy is undermined. The problem becomes worse as enterprises move to cloud architectures and collect much more data.
2. Differentiating between traditional data analysis and big data analytics for security is, however, not straightforward. After all, the information security community has been leveraging the analysis of network traffic, system logs, and other information sources to identify threats and detect malicious activities for more than a decade, and it is not immediately clear how these conventional approaches differ from big data analytics.
3. The report "Big Data Analytics for Security Intelligence" by the Cloud Security Alliance (CSA) details how the security analytics landscape is changing with the introduction and widespread use of new tools that leverage large quantities of structured and unstructured data.
4. Analyzing logs, network flows, and system events for forensics and intrusion detection has been a concern in the information security community for decades.
5. However, new big data applications are starting to become part of security management software because they can help clean, prepare, and query data in heterogeneous, incomplete, and noisy formats efficiently. Finally, the management of large data warehouses has traditionally been expensive, and their deployment usually requires strong business cases. The Hadoop framework and other big data tools are now commoditizing the deployment of large-scale, reliable clusters and are therefore enabling new opportunities to process and analyze data.
6. In particular, new big data technologies such as the Hadoop ecosystem (including Pig, Hive, Mahout, and RHadoop), stream mining, complex-event processing, and NoSQL databases are enabling the analysis of large-scale, heterogeneous datasets at unprecedented scales and speeds.
7. Security vendors developed SIEMs (security information and event management systems), which aim to aggregate and correlate alarms and other network statistics and present this information to security analysts through a dashboard. Now big data tools are improving the information available to security analysts by correlating, consolidating, and contextualizing ever more diverse data sources over longer periods of time.
8. In traditional SIEM systems it could take between 20 minutes and an hour to search through a month's worth of data; a Hadoop system running the same queries with Hive can return results in approximately one minute. This incorporation of unstructured data and multiple disparate datasets into a single analysis framework is one of big data's most promising features.
9. Big data tools are also particularly suited to advanced persistent threat (APT) detection and forensics. APTs operate in a low-and-slow mode (that is, with a low profile and long-term execution); as such, they can unfold over an extended period of time while the victim remains oblivious to the intrusion. To detect these attacks, we need to collect and correlate large quantities of diverse data (including internal data sources and external shared intelligence data) and perform long-term historical correlation to incorporate a posteriori information about an attack into the network's history.
10. The CSA report focuses on the use of big data analytics for security, but the other side of the coin is the use of security to protect big data.
11. New tools, such as Apache Accumulo, are being introduced to deal with the unique security problems of big data management.
IV. TOWARDS SMART ORGANISATION
Due to the challenges of the 3 V's, many organizations have little choice but to ignore or rapidly discard large quantities of potentially valuable information. Indeed, if we think of organizations as creatures that process data, then most are primitive forms of life: their senses act as gatekeepers whose job is to filter the big data ocean down to the small amount of valuable information. As a result, a large proportion of the data surrounding an organization is ignored. For example, retailer databases are often not well maintained, and in organizations such as hospitals much of the information is deleted within weeks.
V. BIG DATA TECHNOLOGIES
The leading big data technology today is 'Hadoop', an open source software library for reliable, scalable, distributed computing that provides the first truly viable platform for big data analytics. Hadoop is already used by most big data pioneers; for example, LinkedIn uses it to generate over 100 billion personalized recommendations every week. Hadoop distributes the storage and processing of large data sets across groups, or clusters, of server computers, whereas traditional approaches relied on large quantities of expensive hardware and were not as efficient.
Technically, Hadoop consists of two key components:
1. The Hadoop Distributed File System (HDFS), which permits high-bandwidth, cluster-based storage.
2. A data processing framework called MapReduce.
Based on Google's search technology, MapReduce distributes large data sets across multiple servers. Each server then creates a summary of the data it has been allocated, and all of this summary information is aggregated in what is termed the Reduce stage. MapReduce thus allows extremely large raw data sets to be rapidly distilled before more traditional data analysis tools are applied. For organizations that cannot afford an internal big data infrastructure, cloud-based big data solutions are already available; where public big data sets need to be utilized, running everything in the cloud also makes a lot of sense, as the data does not have to be downloaded.
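To make the Map and Reduce stages described above concrete, the following is a minimal, in-memory TypeScript sketch of a word-count job. It only illustrates the programming model; it is not actual Hadoop code, where the map and reduce functions would run on separate cluster nodes against data held in HDFS.

```typescript
// Minimal in-memory sketch of the MapReduce programming model (word count).
// In a real Hadoop job, map() runs on many nodes against HDFS blocks and the
// framework shuffles intermediate pairs to reducers; here everything is local.

type Pair = [string, number];

// Map stage: each input record (a line of text) is turned into (word, 1) pairs.
function map(line: string): Pair[] {
  return line
    .toLowerCase()
    .split(/\W+/)
    .filter((w) => w.length > 0)
    .map((w) => [w, 1] as Pair);
}

// Shuffle stage: group intermediate pairs by key.
function shuffle(pairs: Pair[]): Map<string, number[]> {
  const groups = new Map<string, number[]>();
  for (const [key, value] of pairs) {
    const bucket = groups.get(key) ?? [];
    bucket.push(value);
    groups.set(key, bucket);
  }
  return groups;
}

// Reduce stage: aggregate the values for each key into a summary.
function reduce(key: string, values: number[]): Pair {
  return [key, values.reduce((sum, v) => sum + v, 0)];
}

const input = ["big data needs new tools", "big data tools scale out"];
const intermediate = input.flatMap(map);
const results = [...shuffle(intermediate)].map(([k, v]) => reduce(k, v));
console.log(results); // e.g. [["big", 2], ["data", 2], ["needs", 1], ...]
```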
Looking further ahead, quantum computing may greatly enhance big data processing. Quantum computers store and process data using quantum-mechanical states and could one day vastly exceed today's capability for massively parallel processing of unstructured data.
Problems in Dealing with Big Data
Although Hadoop is already suitable for many big data problems, it is less well suited to:
• Real-time analytics
• Graph computation
• Low-latency queries
VI. SECURITY CONCERNS IN BIG DATA
A. Insecure Computation
An attacker can submit an untrusted computation program to your big data solution in order to extract or leak sensitive information from your data sources; such a computation is insecure. Apart from leaking information, an insecure computation can also corrupt your data.
B. Input Validation and Filtering
• Because a big data system collects input from a variety of sources, it is mandatory to validate that input, which involves deciding which kinds of data and which data sources can be trusted.
• It also needs to filter out rogue or malicious data.
• These challenges also arise in traditional databases; however, in big data the continuous inflow of gigabytes and terabytes of data makes input validation and data filtering extremely difficult.
Again, signature-based data filtering has its limitations, as it cannot filter data whose maliciousness only shows in its behaviour.
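As a concrete illustration of the kind of validation and filtering discussed above, the sketch below shows a simple TypeScript ingestion filter that whitelists trusted sources and rejects obviously malformed records before they reach the cluster. The record shape and the trusted-source list are hypothetical examples, not part of any specific big data product.

```typescript
// Hypothetical ingestion filter: accept records only from trusted sources and
// only when they pass basic structural validation. Signature-style checks like
// these catch malformed input but not behaviour-based attacks.

interface IngestRecord {
  source: string;      // identifier of the system that produced the record
  timestamp: number;   // Unix epoch milliseconds
  payload: string;     // raw event data
}

const TRUSTED_SOURCES = new Set(["web-logs", "pos-terminals", "sensor-grid"]);

function isValid(record: IngestRecord): boolean {
  if (!TRUSTED_SOURCES.has(record.source)) return false;           // untrusted source
  if (!Number.isFinite(record.timestamp) || record.timestamp <= 0) return false;
  if (record.payload.length === 0 || record.payload.length > 65536) return false;
  return true;
}

// Filter a batch before it is written to the cluster.
function filterBatch(batch: IngestRecord[]): IngestRecord[] {
  return batch.filter(isValid);
}
```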
C. Granular Access Control
• Big data platforms were mainly designed for performance, with little thought given to security.
• The management techniques of traditional databases, such as table-, row- or cell-level access control, are largely missing in big data systems.
• Ad hoc queries pose additional challenges.
• Access control is disabled by default, i.e. you have to depend on access control provided by a third party.
D. Insecure Data Storage
• As the data is stored across thousands of nodes, authorization, authentication and encryption are challenging.
• Encryption of real-time data can have performance impacts.
• Secure communication among nodes, middleware and end users is disabled by default.
E. Privacy Concerns in Data Mining and Analytics
Data mining is an analytic process designed to explore data, usually large amounts of it.
• Monetization of big data, that is, converting or adapting it into something that can be traded for money, generally involves data mining.
• Sharing those monetized results involves multiple challenges, e.g. invasion of privacy, invasive marketing and unintentional disclosure of information.
VII. BEST STEPS TO INTRODUCE SECURITY IN BIG DATA
The following points should be kept in mind while designing a big data solution.
A. Secure your computation code
• Implement access control, code signing and dynamic analysis of computation code to verify that it is not malicious and does not leak information.
• Have a strategy to protect data in case untrusted code is executed.
B. Implement comprehensive input validation and filtering
• Consider all internal and external sources.
• Evaluate the input validation and filtering of your big data solution.
C. Implement granular access control
• Review and configure the role and privilege matrix. A user may be an administrator, developer, end user or knowledge worker, and the permissions granted must correspond to the user's role.
• Review permission to execute ad hoc queries.
• Enable access control explicitly.
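One way to realise the role-and-privilege matrix mentioned above is sketched below in TypeScript. The roles and permissions shown are illustrative assumptions, not those of any particular big data platform.

```typescript
// Illustrative role/privilege matrix with a deny-by-default check.
type Role = "administrator" | "developer" | "endUser" | "knowledgeWorker";
type Permission = "readData" | "writeData" | "runAdhocQuery" | "manageCluster";

const roleMatrix: Record<Role, Permission[]> = {
  administrator:   ["readData", "writeData", "runAdhocQuery", "manageCluster"],
  developer:       ["readData", "writeData", "runAdhocQuery"],
  knowledgeWorker: ["readData", "runAdhocQuery"],
  endUser:         ["readData"],
};

// Access control must be enabled explicitly; anything not granted is denied.
function isAllowed(role: Role, permission: Permission): boolean {
  return roleMatrix[role].includes(permission);
}

console.log(isAllowed("endUser", "runAdhocQuery"));  // false
console.log(isAllowed("developer", "writeData"));    // true
```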
D. Secure your data storage and computation
• Sensitive data should be segregated.
• Enable data encryption for sensitive data.
• Audit administrator access on data nodes.
• Verify the configuration of API security.
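As an illustration of encrypting sensitive fields before they are written to data nodes, the following TypeScript sketch uses Node.js's built-in crypto module with AES-256-GCM. Key management is deliberately simplified (a throw-away key generated inline) and would need a proper key store in practice; the field value is a made-up example.

```typescript
// Encrypt sensitive values before storing them on data nodes (AES-256-GCM).
// Key management is intentionally simplified: a real deployment would fetch
// the key from a key-management service, not generate it inline.
import { randomBytes, createCipheriv, createDecipheriv } from "crypto";

const key = randomBytes(32); // 256-bit data-encryption key (placeholder)

function encrypt(plaintext: string): { iv: string; tag: string; data: string } {
  const iv = randomBytes(12);
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const data = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  return {
    iv: iv.toString("base64"),
    tag: cipher.getAuthTag().toString("base64"),
    data: data.toString("base64"),
  };
}

function decrypt(box: { iv: string; tag: string; data: string }): string {
  const decipher = createDecipheriv("aes-256-gcm", key, Buffer.from(box.iv, "base64"));
  decipher.setAuthTag(Buffer.from(box.tag, "base64"));
  const plain = Buffer.concat([
    decipher.update(Buffer.from(box.data, "base64")),
    decipher.final(),
  ]);
  return plain.toString("utf8");
}

const stored = encrypt("patient-id: 4711"); // hypothetical sensitive field
console.log(decrypt(stored));               // "patient-id: 4711"
```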
E. Review and implement privacy-preserving data mining and analytics
• Analytics output should not disclose sensitive information; you need to verify that your analytical algorithms do not disclose such information.
• Get your big data implementation penetration tested.
VIII. CONCLUSION
In this report, we have investigated the policy updating problem in big data access control systems and
formulated some challenging requirements of this problem. We have developed an efficient method to outsource
the policy updating to the cloud server, which can satisfy all the requirements.
REFERENCES
[1]. Wikipedia- https://en.wikipedia.org/wiki/Big_data
[2]. Explaining Computers- http://explainingcomputers.com/big_data.html
[3]. WhatIs- http://searchcloudcomputing.techtarget.com/definition/big-data-Big-Data
[4]. Privacy and Big Data Report by Brian M. Gaff, Heather Egan Sussman, and Jennifer Geetter, McDermott Will & Emery, LLP.
[5]. Big Data Analytics for Security by Alvaro A. Cárdenas, Pratyusa K. Manadhata, Sreeranga P. Rajan.
[6]. Big Data Security Challenges by Dave Edwards.
[7]. Enhancing Big Data Security with Collaborative Intrusion Detection by Upasana T. Nagar, Xiangjian He, and Priyadarsi Nanda, University of Technology, Sydney.
International Association of Scientific Innovation and Research (IASIR)
(An Association Unifying the Sciences, Engineering, and Applied Research)
ISSN (Print): 2279-0063
ISSN (Online): 2279-0071
International Journal of Software and Web Sciences (IJSWS)
www.iasir.net
Revolutionizing Wireless Networks: Femtocells
Somendra Singh1, Raj K Verma2
Student of Computer Science & Engineering,
Department of Computer Science and Engineering,
IMS Engineering College, Ghaziabad, 201009, Uttar Pradesh, INDIA
___________________________________________________________________________
Abstract: The growing need for wireless networks with strong signals everywhere has strained the cellular market and given rise to a new technology called femtocells. Femtocells, also called Home Base Stations (HBS), are small data access points installed in indoor environments such as homes or business premises and are designed to provide better voice-call and data coverage. This has led to better network quality and a higher performance ratio. Femtocells use high-speed 3G technology, offering download speeds of around 7 Mbps. In this article we outline the history of femtocells, their major aspects, their role in the networking field, the problems and challenges they face, and a few preliminary ideas for overcoming them in the future.
Keywords: 3G, Communication Network, Femtocells, Interference Management, Wi-Fi.
___________________________________________________________________________
I. Introduction
The major objective of cellular operators is to increase data transmission rates and capacity over a wide cell coverage area, driven by the demand for wireless network services. The demand for higher data rates in wireless networks is unrelenting, and has triggered the design and development of new data-minded cellular standards such as WiMAX (802.16e), 3GPP's High Speed Packet Access (HSPA) and LTE standards, and 3GPP2's EVDO and UMB standards [1]. Femtocells are an alternative approach to fixed-mobile convergence. The device, which resembles a wireless router, essentially acts as a repeater: it communicates with the mobile phone and converts voice calls into Voice over IP (VoIP) packets, which are then transmitted over a broadband connection to the mobile operator's servers. The resulting cellular data service is comparable to that offered by Wi-Fi networks. The relentless growth in wireless capacity is illustrated by the observation from Martin Cooper of Arraycomm: "The wireless capacity has doubled every 30 months over the last 104 years". At the end of 2010, one fifth of the more than 5 billion mobile subscriptions globally had access to mobile broadband [2]. Femtocells are also secure; their security comprises different aspects such as access control mechanisms, data integrity of devices, and securing of the software update process. The major threats identified by firms are: denial-of-service attacks against femtocells; tunnelling of traffic between femtocells; masquerading as other users; eavesdropping on user data; and booting the device with modified firmware. Although these threats have been discussed abstractly, their practical impact on cellular communication is somewhat unclear and ambiguous. When subscribers at home connect to the wireless operator's mobile network over their existing broadband infrastructure, mobile communications become truly pervasive, creating long-term bonds with subscribers that reduce churn and opening new economic opportunities from bundled mobile and broadband service packages. The various types of femtocells are 2G femtocells, 3G femtocells and OFDM-based femtocells. 2G femtocells are based on Global System for Mobile Communication (GSM) air interfaces. 3G femtocells are based on the air interface of the Universal Mobile Telecommunication System, called UMTS Terrestrial Radio Access (UTRA). The OFDM-based femtocells are WiMAX and Long Term Evolution (LTE) femtocells. LTE femtocells are being considered as the future technology for home as well as business environments.
II. Necessity of Femtocells
• Minimization of in-home call charge rates
• Very high quality coverage
• Minimized churn
• Lower-cost voice and data services
• Excellent indoor network coverage
• Minimized network and data cost
• Low transmission power
A femtocell is a very small, low-cost home base station that transmits at very low power. These devices are integrated into small plastic frames and wall-mounted cases and are powered from the customer's electricity sockets. The femtocell's backhaul connection is via the customer's internet connection, which can be DSL, cable modem or any other broadband link [3]. Network coverage has become a major issue in rural areas due to the long distance between base stations, and in indoor and underground locations because of wall attenuation. Hence, various factors such as capacity, coverage and security issues illustrate the need for a device like the femtocell, which can provide a solution to these types of problems; femtocells are therefore very much in demand in this growing technology landscape. This innovative technology is not confined to the indoor environment but has extended beyond the home: various operators are now launching dual femtocell access points for enterprise environments as well as consumers.
III. Concept of Femtocells
Femtocell technology addresses how to improve the network coverage and capacity of mobile networks. The femtocell generates a personal mobile phone signal in the home and connects to the operator using a standard broadband DSL or cable service. Before femtocells, many other types of cells, antennas and microcells were used. Femtocells operate at very low power levels, like phones and Wi-Fi, and are connected directly to the internet, so there is no need for BSC/MSC infrastructure. The femtocell network architecture is based on different interfaces:
• Iu-b over IP: Existing RNCs connect to femtocells through the standard Iu-CS (circuit-switched) and Iu-PS (packet-switched) interfaces present in macrocell networks. The advantage is that the capex is comparatively low, in so far as the operator can purchase existing RNCs. The shortcomings are the lack of scalability and the fact that the interface is not yet standardized.
• IMS/SIP: The IP Multimedia Subsystem/Session Initiation Protocol interface provides a core network residing between the femtocell and the operator. The IMS interface converts subscriber traffic into IP packets, employs Voice over IP (VoIP) using the SIP protocol, and coexists with the macrocell network. The main advantages are scalability and rapid standardization.
The femtocell network architecture supports the following key requirements:
• Service Parity: Femtocells support the same voice and broadband data services that mobile users currently receive on the macrocell network. This includes circuit-switched services such as text messaging and various voice features, such as call forwarding, caller ID, voicemail and emergency calling.
• Call Continuity: Femtocell networks are well integrated with the macrocell network so that calls originating on either macrocell or femtocell networks can continue when the user moves into or out of femtocell coverage. The femtocell network architecture needs to include the necessary connectivity between the femtocell and macrocell networks to support such call continuity.
• Security: Femtocells use the same over-the-air security mechanisms that are used in macrocell radio networks, but additional security capabilities need to be supported to protect against threats that originate from the Internet or through tampering with the femtocell itself. The femtocell network architecture provides network access security, and includes subscriber and femtocell authentication and authorization procedures to protect against fraud.
IV. Advantages offered by Femtocells [6, 7]
(a) They provide better performance indoors.
(b) Frequency reuse is also possible from one place or building to another.
(c) They minimize network cost, i.e. they reduce CAPEX and OPEX.
(d) Coverage is likely to remain consistent wherever you are located in the office/home, due to the femtocell.
(e) Femtocells have the capacity to limit how many people are permitted to log on.
(f) Using femtocells, a mobile phone can be used as the main phone(s).
(g) They minimize latency, and therefore create a proper user experience for mobile data services and will enhance wireless data rates.
(h) They overcome the problem of prioritizing voice traffic over data traffic.
V. Challenges faced by Femtocells
This article overviews the main challenges faced by femtocell networks. They are:
a. Voice femtocells: interference management in femtocells, allowing access to femtocells, handoffs, providing Emergency-911 services and mobility.
b. Network infrastructure: securely bridging the femtocell with the operator network over IP.
c. Broadband femtocells: resource allocation, timing or synchronization, and backhaul.
To minimize the cost of femtocells, installation and set-up must require very little effort. The devices should be auto-configured for ease of access so that customers do not face any further difficulty while using them; all the user has to do is plug in the cables for the internet connection and electricity, and the rest should be configured automatically. In addition, the exact location of a femtocell cannot be known in advance, and femtocells may struggle to cope with the nearby radio environment, since it changes continuously. If femtocells are deployed on a large scale, the signal strength must also be good enough to handle the load, and even the walls and windows of buildings attenuate the signal, so it is quite challenging to deliver a good-quality network signal indoors. Femtocells will eventually number in the millions, which will lead to the problem of interference. Nevertheless, the configuration of the femtocell is so easy that a normal user can install and access it without assistance.
VI. Application of Femtocells
A. DSL Modem
This approach combines the femtocell into an existing DSL broadband modem design, so it does not require any additional external connections [4]. Power and data connectivity are already present in the modem, and there is usually a catalogue of other reference designs available as well. The femtocell component is hardwired into the modem, and voice calls can be given priority to ensure better quality of service [4]. The overall cost of the combined unit is much less than that of two separate boxes, and it eases set-up and remote management, which makes this option advantageous. Several mobile operators have begun offering DSL broadband as an added service, mainly in Europe. If the extra cost of a combined modem/femtocell is acceptable [5], then it could be shipped to customers as part of a bundle.
B. Cable Modem
Most households in America get their broadband internet service from their cable TV supplier rather than from the phone company, and they prefer it that way. The modem can be a separate unit or combined with a TV set-top box. In the US, the very large cable TV companies, such as Comcast, previously had agreements to resell mobile services on the Sprint network. This type of arrangement seems to have been discontinued in its earlier form. However, cable TV companies do own some spectrum (via the SpectrumCo venture), and so could lawfully launch and operate a less conventional mobile phone service [4].
VII. Future Scope
The concept of the femtocell, and the notable research and development that has culminated in early trials of femtocell technology, give some very exciting glimpses of one version of the cellular operator's future network. The femtocell market suffered during the first half of 2012 and then recovered in the second half of the year, with shipments reaching slightly above 2 million units according to ABI Research's latest femtocell update for 2012. Shipments in the second half of the year were double those of the first half, making up for some of the lost momentum [8]. Femtocells work as a mini network in your home. Day by day, trends in the mobile Internet, along with the capabilities promised by 4G technology, mark the beginning of an in-building coverage revolution. This revolution is so profound that each building will become an individual "network area" where RF engineers will be required to paint unique individual blocks of coverage. In this world, the quality of in-building coverage (once an afterthought) could become even more important than macro coverage. This technology provides a better speed of service.
VIII. Conclusion
The LTE femtocell works as a key weapon in the mobile operator's arsenal. It provides a better network for mobile broadband performance and quality of experience. Beyond the market for LTE femtocell devices themselves, the movement to small cells creates enormous opportunities for silicon vendors, SON specialists, backhaul optimization and software providers. Femtocells have the greatest advantage where macro networks are weak. Femtocells and Wi-Fi will both exist in the future, and customers and operators can benefit from femtocell technology if it is combined with Wi-Fi networks. Femtocells may even succeed in making landlines redundant. More importantly, there are no known health effects from radio waves below the limits applicable to wireless communication systems. Femtocells represent this revival with their organic plug-and-play deployment, low cost, and the possible disorder they introduce to the network. This article, and this special issue, argue though that fears about femtocells' negative effects are overblown. Whether or not they live up to the hype and help move the data avalanche to being a backhaul problem is as yet unclear; but it seems to the authors that there is nothing fundamental preventing very dense femtocell deployments, and that the economic and capacity benefits femtocells provide appear to justify the optimistic sales forecasts.
References
[1] Vikram Chandrasekhar, Jeffrey G. Andrews (The University of Texas at Austin) and Alan Gatherer (Texas Instruments), "Femtocell networks: A Survey", June 28, 2008.
[2] ABI Research, "One Billion Mobile Broadband Subscriptions in 2011: A Rosy Picture Ahead for Mobile Network Operators", February 2011.
[3] Tanvir Singh, Amit Kumar, Dr. Sawtantar Singh Khurmi, "Scarce Frequency Spectrum and Multiple Access Techniques in Mobile Communication Networks", IJECT Vol. 2 Issue 2, June 2011.
[4] Cisco, "Cisco visual networking index: Global mobile data traffic forecast update, 2010-2015", February 2011.
[5] "Analysis of Femtocell - Opportunities and Challenges", School of Integrated Technology, Yonsei University, June 2011.
[6] Darrell Davies (2010), "Femtocell in Home", Motorola, [Online] Available.
[7] "Femtocell is edging towards the enterprise." [Online] Available: www.setyobudianto.com.
[8] H.P. Narkhede, "Femtocells: A Review", Volume 2, Issue 9, September 2011.
International Association of Scientific Innovation and Research (IASIR)
(An Association Unifying the Sciences, Engineering, and Applied Research)
ISSN (Print): 2279-0063
ISSN (Online): 2279-0071
International Journal of Software and Web Sciences (IJSWS)
www.iasir.net
Cross-Platform Mobile Web Applications Using HTML5
Rohit Chaudhary1, Shashwat Singh2
Student of Computer Science & Engineering,
Department of Computer Science and Engineering,
IMS Engineering College, Ghaziabad, 201009, Uttar Pradesh, INDIA
________________________________________________________________________
Abstract: In recent years the use of smart mobile devices has become an integral part of everyday life, leading to an expansion of application development for the various mobile platforms, each requiring a separate software development process, which increases the corresponding development effort. The smartphone market is highly varied and there is no single dominant mobile platform or OS, and developing a particular application separately for each OS is time consuming. With the emergence of HTML5 these issues can be addressed efficiently, since it allows application development in a cross-platform manner. Apps developed using HTML5 are also reliable, efficient and portable. HTML5 also allows the development of webOS, which can be used in TVs, smart watches, etc.
Keywords: Cordova API’s, HTML5, webOS, Feature Detection, Cross-platform
___________________________________________________________________________
I. Introduction
What is a cross-platform web application? It is an application that is available to customers on whatever device they use, wherever they are, whether on a Windows PC, an Apple iPad or an Android phone. Why should we use HTML5? Because we can take advantage of the cross-platform nature of the HTML5 language, APIs and tools. As the code of the app is written in HTML5, the app works on approximately all new devices as they come onto the market.
Currently, demand for web applications is very high. The number of smartphone users is very large and the number of mobile platforms is also increasing, so developing mobile applications becomes very difficult for developers because they need to develop the same application for each mobile platform or operating system. Native apps are generally more difficult to develop and require a high level of technical knowledge and experience.
The primary goal of cross-platform mobile app development is to achieve native app performance while running on any mobile platform. Today's cross-platform app development presents both opportunities and challenges. Since these apps can run on any platform, they need to adapt to various screen sizes, resolutions, aspect ratios and orientations. Mobile devices today also provide facilities such as an accelerometer and GPS, and the app should be compatible with these services. Cross-platform apps should be developed in such a way that they can take advantage of these capabilities in an appropriate and portable manner and provide a good service and experience to users across a wide range of mobile phones.
HTML5 apps are not limited to web pages opened and displayed in a web browser. HTML5 code can be packaged and deployed as a locally installed (on any user device) hybrid web application. This enables users to use the same distribution and monetization channels as native apps, with the same installation procedure and the same launch experience.
II. Literature Review
A mobile app is a computer program designed to run on smartphones, tablet computers and other mobile devices. We usually download apps through an application distribution platform; these platforms started appearing in the year 2008 and are generally managed by the owner of the mobile operating system, such as the App Store from Apple, Google Play from Google, Windows Phone Store from Microsoft, and BlackBerry App World from BlackBerry. The term "app" is a short form of "application software". The word became very popular, and in the year 2010 it was declared "Word of the Year" by the American Dialect Society. [1]
A. Need of the Mobile Apps
In the modern era, the growth of mobile devices and the Internet has taken a large step forward. With the rapid development and increasing use of mobile devices and the Internet, many companies are moving their business to M-Commerce (Mobile Commerce), and more and more users make payments through their mobile devices. Meanwhile, the diverse range of mobile devices and the need for secure transactions are the main factors restricting the advancement of M-Commerce. There is a possibility that an app designed for some platforms may not work on other platforms. Here arises the need for a cross-platform app which can run on all platforms rather than only on specific ones. Cross-platform issues are solved by using Web technologies such as HTML5, CSS and JavaScript. For providing secure transactions, such that no one can track your IDs and passwords, "CUPMobile", the mobile payment standard of China, can be applied. Currently, this solution has been successfully used for payments through mobile apps in China. [2]
New mobile network companies and new electronic devices like smartphones and tablets are rapidly changing the opportunities for public sector departments to deliver smart, easy and fast mobile e-services to their citizens, and they want to provide these services with more ease day by day. The HTML5 language standard enables cross-device and cross-browser support, making the development and deployment of these services much easier than before, and at lower cost. The paper analyzes the important features of the HTML5 web language and its applications in developing cross-platform web apps, including a web OS from Firefox. [3]
Developing an interactive TV commercial (iTVC) for Internet-connected TVs is complicated because of the number of different platforms, each of which has its own operating system and application programming interface (API). To achieve cross-platform compatibility for the ads, standard web technologies should be used instead of the native APIs of each individual device. By using standard web languages like HTML5, JavaScript and CSS, only one iTVC needs to be developed, containing the commonly used features of this kind of advertisement. The iTVC was developed on a desktop computer and was then tested on three different smart TV platforms to verify feature compatibility. After compatibility was achieved, a user study with 36 participants evaluated how platform-related differences affected the user experience (UX) and the effectiveness of the interactive ad. The measured user experience, effectiveness and usability were consistent and satisfactory. This experiment shows the power and potential of web technologies to provide a uniform, interactive ad across a variety of heterogeneous devices. [4]
B. Smart Phone market: A glimpse
In the PC market there is a single dominant platform, namely Microsoft [5], which provides operating systems, document editors such as MS Office and many other products to users. The smartphone market, however, is heterogeneous, fragmented and distributed among various OS-providing companies and organizations. The smartphone app market is divided among Android (from Google), Symbian (from Nokia), iOS (from Apple Inc.), the BlackBerry operating system (from Research In Motion, or RIM) and now Windows (from Microsoft). According to the data released in 2012 (Table 1), Android leads the mobile app market with the majority of market share at 78.9%, Windows has 3.9%, BlackBerry 1.0%, Apple's iOS 14.9% and others a 1.3% share of the total app market.
Table 1: Android Leading in Mobile apps Market [6]
[Bar chart comparing smartphone OS market share for the years 2012, 2013 and 2014 across Android, iOS, Windows, Blackberry and Others; the vertical axis shows market share in percent (0-90).]
C. What is HTML5
HTML5 is a core markup-language technology of the Internet, used for structuring and presenting content on the World Wide Web or, simply put, for web design. As of October 2014, HTML5 is the final and complete fifth revision of the HTML standard of the World Wide Web Consortium (W3C). [7] HTML 4, the predecessor of HTML5, was last revised and standardised in the year 1997. HTML5 improves and supports better markup for documents, and introduces application programming interfaces (APIs) for complex web applications. Hence, HTML5 has great potential for cross-platform mobile application development. Many of the features of HTML5 are designed so that it can run on low-powered devices such as tablets and smartphones. Analysing the scope of HTML5, the research firm Strategy Analytics forecast in December 2011 that sales of HTML5-compatible phones would exceed 1 billion in 2013 [8]. A very interesting new element in HTML5 is <canvas>, which provides an area of the screen that can be drawn upon programmatically [9]. It has widespread support and is available in the most recent versions of Chrome, Firefox, Internet Explorer, Opera and Safari, and also in Mobile Safari.
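A minimal TypeScript sketch of programmatic drawing on the <canvas> element mentioned above is shown below; the element id and dimensions are assumptions for illustration only.

```typescript
// Minimal example of drawing on an HTML5 <canvas> programmatically.
// Assumes the page contains <canvas id="demo" width="200" height="100"></canvas>.
const canvas = document.getElementById("demo") as HTMLCanvasElement | null;

if (canvas) {
  const ctx = canvas.getContext("2d");
  if (ctx) {
    ctx.fillStyle = "#3366cc";
    ctx.fillRect(10, 10, 120, 60);          // filled rectangle
    ctx.strokeStyle = "#222222";
    ctx.strokeRect(10, 10, 120, 60);        // outline around it
    ctx.font = "16px sans-serif";
    ctx.fillStyle = "#000000";
    ctx.fillText("HTML5 canvas", 20, 90);   // text drawn on the canvas
  }
}
```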
D. Features of HTML5
• Improved design rules accommodating screen size and potential interface limitations.
• Improved support of digital media, such as video and voice, reducing the need for extensions or plug-ins.
• Improved support of common hardware accessories, such as GPS.
• Improved interaction with hardware for better response time.
• Improved support of caching for simpler application usage while offline.
• Improved support of native graphics (SVG and Canvas).
• Support for the open-source SQLite database and independent threaded processes ("web workers") to enable more sophisticated applications and better offline capabilities.
• Greater use of markup language in place of scripting.
a) New APIs
HTML5 specifies scripting application programming interfaces (APIs) that can be used with JavaScript. Existing Document Object Model (DOM) interfaces are extended. The new APIs are as follows:
• The canvas element for immediate mode 2D drawing
• Timed media playback
• Offline Web applications
• Document editing
• Drag-and-drop
• Cross-document messaging
• Browser history management
• MIME type and protocol handler registration
• Microdata
• Geolocation
• Web SQL Database, a local SQL database (no longer maintained)
b) Popularity
The popularity of HTML5 is evident from a report released on 30 September 2011, according to which 34 of the world's top 100 websites were using HTML5, led by search engines and social networks [10]. In August 2013, a report showed that 153 of the Fortune 500 U.S. companies had implemented HTML5 on their corporate websites. [11] HTML5 is at least partially supported by most of the popular layout engines.
E. Using Feature Detection technology to build Cross-Platform HTML5 Packaged Web Apps
HTML5 enables cross-platform apps, which means you can write one app that runs on multiple platforms from a single source code base [12]. HTML5 web apps are mostly cross-platform, but sometimes require conditional code to deal with minor or major platform-specific differences. Not all HTML5 web app platforms are Apache Cordova platforms that use the Cordova APIs. By using feature detection in the app, it is possible to build a cross-platform web app that runs on Cordova platforms and also on platforms that support other JavaScript APIs rather than the Cordova APIs.
a) Building Packaged Web Apps
HTML5 web apps are packaged in a single bundle so that they can be downloaded from the internet, installed and executed on a portable or mobile platform. Some examples of platforms that support packaged web apps are:
• Chrome OS
• Firefox OS
• Ubuntu Mobile
• Cordova on Android, iOS and Windows 8 platforms
An HTML5 packaged web app is a ZIP file containing a platform-specific manifest file that describes the application's name, icon, system requirements, permissions and any other attributes related to the platform. Importantly, this ZIP file also contains the HTML5 source files, such as JavaScript, CSS and HTML, and assets, such as images and fonts, that make up the web application.
b) Platform Detection and Feature Detection in the app
There are two methods to detect the platform API differences that might affect the execution and functionality of a cross-platform web app:
1. Feature detection
2. Platform detection
Feature detection is very useful for identifying at runtime the absence of an HTML5 feature, usually a JavaScript API. This technique can be used to conditionally implement fallback features, or to disable some optional app features, when the necessary HTML5 APIs are not available. Platform or browser detection is an easier way to deal with Cordova API differences.
c) Using Feature Detection and Platform Detection with Cordova API’s
Apache Cordova is an open-source mobile development framework which allows the developers to use standard
web technologies such as HTML5, CSS3, and JavaScript for cross-platform development of mobile apps,
avoiding each mobile platform's native development language [13].
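The following TypeScript sketch illustrates the feature-detection approach described above, falling back to the standard HTML5 Geolocation API when a Cordova-style plugin object is not present. The `cordova` global and the plugin shape used here are assumptions for illustration only, not a prescribed Cordova API.

```typescript
// Feature detection: test at runtime whether an API object exists, instead of
// branching on the platform or user-agent string.
declare const cordova: any; // provided by the Cordova runtime when packaged (assumption)

function getPosition(onResult: (lat: number, lon: number) => void): void {
  // Prefer a Cordova-provided geolocation plugin if the runtime exposes one
  // (hypothetical plugin shape, used only to show the detection pattern).
  if (typeof cordova !== "undefined" && cordova.plugins?.geolocation) {
    cordova.plugins.geolocation.getCurrentPosition((pos: any) =>
      onResult(pos.coords.latitude, pos.coords.longitude)
    );
  } else if ("geolocation" in navigator) {
    // Fall back to the standard HTML5 Geolocation API in a plain browser.
    navigator.geolocation.getCurrentPosition((pos) =>
      onResult(pos.coords.latitude, pos.coords.longitude)
    );
  } else {
    // Feature missing everywhere: disable the optional feature gracefully.
    console.warn("Geolocation is not available on this platform");
  }
}
```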
Table 2: Decision Drivers [14]

| Drivers | Native Apps | Mobile Web/HTML5 | Cross-platform Tool |
| --- | --- | --- | --- |
| Quality of user experience | Excellent | Very Good | Excellent |
| Application Sophistication | High | Moderate | High |
| Addressable Audience | Limited to Smartphones | Large, supported by Smartphones and featured Phones | Large |
| Cost per User | Typically Medium to High | Typically Low | Low to Medium Development, Medium to High Licensing |
| Agility | Medium to Low | High | Medium to High |
| Technical Risk | High | Medium | High |
| OS/Platform vendor Risk | High | Medium to Low | High |
| Operational Issues | Operationally More Flexible | Requires Network Connectivity but with HTML5 Can Operate Offline to Some Degree | Operationally More Flexible |
| Security | More Flexible | Inflexible, Expected to Improve | More Flexible |
| Supportability | Complex | Simple | Medium to Complex |
III. Conclusion and Future Work
The major goal of this paper is to throw light on the potential of HTML5. We can conclude that HTML5 has helped developers a great deal, as it reduces the cost, effort and time of developing a web app. The cross-platform nature of such apps allows universal deployment on any mobile platform. HTML5 features such as canvas, audio and video support and improved design tools demonstrate its usability at a larger scale. HTML5 has also helped in developing webOS, which can be used to operate TVs and smartwatches. Apps should offer functionality, usability, efficiency, maintainability, portability and reliability, and all of these qualities are available in web apps developed using HTML5. Functionality is present because of the different features available in HTML5 applications. Usability and efficiency can be evaluated on the basis of user feedback on HTML5-based smartphone apps. These apps also support maintainability, as editing the code is easier. Finally, portability and reliability follow because these apps are cross-platform. There is still a great deal of work to be done using HTML5: webOS for personal computers is a promising direction, and more mobile apps should be developed that are cross-platform rather than native.
References
[1] http://www.americandialect.org/app-voted-2010-word-of-the-year-by-the-american-dialect-society-updated
[2] Zhijie Qiu, Lei Luo, Jianchao Luo, "A Cross-Platform Mobile Payment Solution Based on Web Technology", Sch. of Comput. Sci. & Eng., Univ. of Electron. Sci. & Tech. of China, Chengdu, China, 2012.
[3] Andersson K, Johansson D, "Mobile e-services using HTML5", Dept. of Computer Science, Electr. & Space Eng., Lulea Univ. of Technol., Skelleftea, Sweden, 2012.
[4] Perakakis E, Ghinea G, "HTML5 Technologies for Effective Cross-Platform Interactive/Smart TV Advertising", Department of Computer Science, College of Engineering Design and Physical Sciences, Brunel University, UK, 2015.
[5] Yousuf Hasan, Mustafa Zaidi, Najmi Haider, W.U. Hasan and I. Amin, "Smart Phones Application development using HTML5 and related technologies: A tradeoff between cost and quality", Computer Science, SZABIST Karachi, Sindh, Karachi, Pakistan, 2012.
[6] http://www.statista.com/chart/1961/smartphone-market-share-2014/
[7] http://arstechnica.com/information-technology/2014/10/html5-specification-finalized-squabbling-over-who-writes-the-specscontinues/
[8] http://www.cnet.com/news/html5-enabled-phones-to-hit-1-billion-in-sales-in-2013/
[9] Keaton Mowery, Hovav Shacham, "Pixel Perfect: Fingerprinting Canvas in HTML5", Department of Computer Science and Engineering, University of California, San Diego, La Jolla, California, USA.
[10] http://www.binvisions.com/articles/how-many-percentage-web-sites-using-html5/
[11] http://www.incore.com/Fortune500HTML5/#infographic
[12] https://software.intel.com/en-us/xdk/articles/using-feature-detection-to-write-cross-platform-html5-cordova-web-apps
[13] http://cordova.apache.org/docs/en/4.0.0/guide_overview_index.md.html#Overview
[14] http://www.accenture.com : HTML5: The path to cross platform Mobile Development
Acknowledgements
It is our proud privilege and duty to acknowledge the kind help and guidance received from several people in the preparation of this report. It would not have been possible to prepare this report in this form without their valuable help, cooperation and guidance. First and foremost, we wish to record our sincere gratitude to the management of this college. Our sincere thanks to Dr. Pankaj Agarwal, Head, Department of Computer Science and Engineering, IMS Engineering College, Ghaziabad. We express our sincere gratitude to our mentor, Mr. Vijai Singh, Asst. Professor, Department of Computer Science and Engineering, IMSEC, Ghaziabad, for guiding us in the investigations for this seminar and for his constant support and encouragement. Our discussions with him were extremely helpful, and we hold him in esteem for the guidance, encouragement and inspiration we received from him. The seminar on "CROSS PLATFORM MOBILE WEB APPLICATIONS USING HTML5" was very helpful to us in providing the necessary background information and inspiration for choosing this topic. Last but not least, we wish to thank our parents for financing our studies in this college as well as for constantly encouraging us to learn engineering. Their personal sacrifice in providing this opportunity is gratefully acknowledged.
International Association of Scientific Innovation and Research (IASIR)
(An Association Unifying the Sciences, Engineering, and Applied Research)
ISSN (Print): 2279-0063
ISSN (Online): 2279-0071
International Journal of Software and Web Sciences (IJSWS)
www.iasir.net
An Analysis of Securities and Threats in Cloud Computing
Sakshi Sharma1, Akshay Singh2
Student of Computer Science & Engineering,
Department of Computer Science and Engineering,
IMS Engineering College, Ghaziabad, 201009, Uttar Pradesh, INDIA
__________________________________________________________________________________________
Abstract: Cloud computing is a model for delivering information technology services in which resources are retrieved from the internet rather than through a direct connection to a server. Cloud computing, as its name suggests, stores the accessed information in the "cloud" and does not require a user to be in a particular place to gain access to it, so it saves management cost and time. Cloud computing enables organizations to consume compute resources as a utility, just like electricity, rather than having to build and maintain computing infrastructure in-house. Many industries, such as banking, healthcare and education, are moving towards the cloud due to the effectiveness of services provided on a pay-per-use basis according to the resources consumed, such as processing power used, bandwidth consumed, data transferred or storage space occupied. IT organizations have expressed their anxiety about security issues in cloud computing; security is the most important challenge in cloud computing. This research paper outlines what cloud computing is, the various models of cloud, and the important security risks and issues that are currently present within the cloud computing industry. The paper also analyzes the key challenges that cloud computing is facing.
Keywords: Cloud Computing, Service models, Deployment models, Technical components, Security Issues,
Countermeasures and solutions.
__________________________________________________________________________________________
I. Introduction
Cloud computing is a very important revolution in the IT sector. Every company and individual has big data (information), and it is hard to carry it along with them; that is why cloud computing came to light. Cloud computing is essentially internet computing (the internet is a hub of data, which is why we call it the cloud). It provides a safe, easy and fast way to store data. Gartner (Jay Heiser, 2009) defines cloud computing (Stanojevi et al., 2008; Vaquero et al., 2009; Weiss, 2007; Whyman, 2008; Boss et al., 2009) as ''a style of computing where massively scalable IT enabled capabilities are delivered 'as a service' to external customers using Internet technologies''. You can now save your data on the internet and easily access it from anywhere at any time, and nowadays many companies provide cloud storage for your data, applications or anything else at a very small price. The convenience and low cost of cloud computing services have changed our daily lives. Cloud computing provides many benefits to users, but the question is whether it is safe for a company or individual to store data in the cloud, because the data resides on someone else's premises and the user fetches it remotely. The data is on the internet, and if someone manages to crack the server of the storage provider company then they can fetch the data. That is why security is the biggest problem in cloud computing; as cloud technology emerges rapidly, it also leads to serious security concerns. In this paper we discuss investigation and research into the security of cloud computing. DataLossDB has reported that there were 1,047 data breach incidents in 2012, compared to 1,041 incidents in 2011 [4]. Two notable data breach victims were Epsilon and Stratfor. In its data leakage incident, Epsilon leaked information from millions of customer databases, including names and email addresses. Stratfor also lost 75,000 credit card numbers and 860,000 user names and passwords [4]. Hackers could also take advantage of the massive computing power of clouds to launch attacks on users in the same or different networks. For instance, hackers rented a server through Amazon's EC2 service and carried out an attack on Sony's PlayStation Network [4]. Therefore, it is very necessary to understand cloud security threats in order to provide more secure and efficient services to cloud users.
II. Literature Review
Farhan Bashir Shaikh et al. [6] presented a review paper on "Security threats in Cloud Computing" and discussed the most significant security threats in cloud computing, making it possible for both users and service providers to know the key security threats associated with cloud computing; their paper enables researchers and security professionals to understand the different models and security issues in depth. [1] Kangchan Lee presented a review paper on "Security Threats in Cloud Computing Environments" and discussed the technical components of cloud computing and its various threats, such as responsibility ambiguity, loss of governance, service provider lock-in, license risks and supplier lock-in. [2] S. Subhashini et al. gave a review, "A survey on security issues in service delivery models of cloud computing", with detailed information about the different security issues that have emerged due to the nature of the service delivery models of a cloud computing system, and the key security elements that should be carefully considered as part of SaaS application development: (i) Data Security, (ii) Network Security, (iii) Data Integrity, (iv) Data Segregation, (v) Data Access, (vi) Data Breaches, and (vii) Virtualization Vulnerability. Keiko Hashizume et al. [3] presented a review paper on "Analysis on security issues for Cloud Computing", covering the SPI model, vulnerabilities and countermeasures for cloud computing. Te-Shun Chou [4] gave a review paper on "Security threats on cloud computing vulnerabilities", described the differences between the three cloud service models, and further discussed real-world cloud attacks and the techniques that hackers have used against cloud computing systems; in addition, countermeasures to cloud security breaches are presented. [5] Monjur Ahmed et al. presented a review paper on "Cloud computing and security issues in the cloud", gave an idea of the typical architecture of the cloud, and concluded that when dealing with cloud computing and its security issues, technical as well as epistemological factors are equally important to take into consideration. [7] F. A. Alvi et al. gave a review paper on "A review on cloud computing security issues & challenges", described the survey reports of IDC, which clearly show the motivation for the adoption of cloud computing, and analysed the SACS (Security Access Control Services) model through the Hadoop MapReduce framework. [8] Osama Harfoushi et al. presented a review paper on "Data Security Issues and Challenges in Cloud Computing: A Conceptual Analysis and Review" and suggested that an automated SLA with a trusted third party would be an interesting study area to cover the security issues related to cloud computing. [9] The CSA (Cloud Security Alliance), in its review "Top Threats to Cloud Computing V1.0", identified the following cloud threats: (a) Abuse and Nefarious Use of Cloud Computing, (b) Insecure Application Programming Interfaces, (c) Malicious Insiders, (d) Shared Technology Vulnerabilities, (e) Data Loss/Leakage, and (f) Account, Service & Traffic Hijacking. Vahid Ashktorab et al. [10] gave a review paper on "Security Threats and Countermeasures in Cloud Computing" and dealt with countermeasures for challenges inherited from the network concept and countermeasures for the CSA-proposed threats. Kuyoro S. O. et al. [11] presented a review paper on "Cloud Computing Security Issues and Challenges" and described the features and flexibility of cloud computing. [12] Rajani Sharma et al. presented a review paper on "Cloud Computing - Security Issues, Solution and Technologies" and discussed the cost and time effectiveness of cloud computing. [13] Anitha Y et al. gave a review paper on "Security Issues in Cloud Computing - A Review", discussed the security issues of cloud computing and the components that affect the security of the cloud, and then explored the cloud security issues and problems faced by cloud service providers along with some solutions. [14] Farzad Sabahi gave a review paper on "Cloud Computing Security Threats and Responses" and discussed three important factors in the cloud, namely reliability, availability and security issues for cloud computing (RAS issues), and proposed possible and available solutions for some of them.
III. Cloud Computing
A. Understanding Cloud Computing
When we connect to the cloud, we see the cloud as a single application, device, or document. Everything inside the cloud, including the hardware and the operating system that manages the hardware connections, is invisible. To start with, there is a user interface seen by individual users. Through it, users send their requests, which are passed to the system management layer; this searches for the required resources and then calls the system's provisioning services. Data storage is the main use of cloud computing. Multiple third-party servers are used to store the data. The user sees only a virtual server, which gives the impression that the data is stored in a single place with a unique name, but in reality the data may be spread across one or many computers that together form the cloud; the virtual server simply serves as a reference to this virtual space.
B. Cloud Computing Models
Cloud computing involves providing computing resources (e.g. servers, storage, and applications) as services to end users through cloud computing service providers. Web browsers are used to access these on-demand cloud services. Cloud computing service providers offer specific cloud services and ensure that the quality of the services is maintained. Basically, cloud computing consists of three layers: the system layer, the platform layer, and the application layer. The topmost layer is the application layer, also known as Software-as-a-Service (SaaS). The bottom layer is the system layer, which includes computational resources such as server infrastructure, network devices, memory, and storage; it is known as Infrastructure-as-a-Service (IaaS). The middle layer is the platform layer, known as Platform-as-a-Service (PaaS) [4].
C. Cloud Computing Service Delivery Models
(a) Software as a Service (SaaS):
This model enables end users to use the services that are hosted on the cloud server [6]. Software as a Service primarily consists of software running on the provider's cloud infrastructure, delivered to (multiple) clients on demand via a thin client (e.g. a browser) over the Internet. Typical examples are Google Docs and Salesforce.com [13].
(b) Platform as a Service (PaaS):
Clients are provided access to platforms, which makes it possible for them to deploy their own customized software and other applications on the cloud [6]. This model gives a developer the ease to develop applications on the
provider's platform, which is an entirely virtualized platform that includes one or more servers, operating systems, and specific applications. The main services that the model provides include storage, database, and scalability. Typical examples are Google App Engine, Mosso, and AWS S3 [13].
(c) Infrastructure as a Service (IaaS):
It enables end users to manage the operating systems, applications, storage, and network connectivity [6]. The service provider owns the equipment and is responsible for housing, running, and maintaining it. The client typically pays on a pay-per-usage basis. IaaS offers users elastic, on-demand access to resources (networking, servers, and storage), which can be accessed via a service API. Typical examples are Flexiscale and AWS EC2 (Amazon Web Services) [13].
Figure 1: Cloud computing service delivery models [11]
D. Cloud Deployment Models
In the cloud deployment model, networking, platform, storage, and software infrastructure are provided as services that scale up or down depending on demand, as depicted in Figure 2 [11]. The three main deployment models available in cloud computing are as follows:
(a) Private cloud
In the private cloud, scalable resources and virtual applications provided by the cloud vendor are pooled together for cloud users to share and use. It differs from the public cloud in that all the cloud resources, services, and applications are managed by the organization itself, similar to Intranet functionality. The private cloud is much more secure than the public cloud because of its restricted internal exposure: only the organization and designated stakeholders may access and operate on a specific private cloud, in contrast to the public cloud, where resources are dynamically available over the Internet [11]. The public cloud, by contrast, describes cloud computing in the conventional sense; it is usually owned by a large organization (e.g. Amazon, Google's App Engine, or Microsoft Azure) and is the most cost-effective model, but it leaves users with privacy and security issues, since the physical location of the provider's infrastructure usually traverses numerous national boundaries.
Figure 2: Cloud deployment model [11]
(b) Public cloud
In the public cloud, resources are dynamically provided on a self-service basis over the Internet, through web applications or web services, from an off-site third-party provider who charges the user on the basis of usage. It is based on a pay-per-use model, i.e. the user pays only for the amount of service used. The public cloud model offers less security than the other cloud models because it places an additional burden of ensuring that all applications and data accessed on the public cloud are not subjected to malicious attacks [11].
(c) Hybrid cloud
A combination of any two (or all) of the three models discussed above leads to another model called the 'hybrid cloud'. Standardization of APIs has led to easier distribution of applications across different cloud models [13]. This cloud provides virtual IT solutions by combining both public and private clouds. A hybrid cloud is a private cloud linked to one or more external cloud services, centrally managed, provisioned as a single unit, and circumscribed by a secure network. The hybrid cloud provides more security for data and applications and allows various parties to access information over the Internet. It also has an open architecture that allows interfaces with other management systems. A hybrid cloud can also describe configurations combining a local device, such as a plug computer, with cloud services [11].
IV. Technical Components of Cloud
As shown in Figure 3, a cloud management system consists mainly of four layers: the Resources & Network Layer, the Services Layer, the Access Layer, and the User Layer. Each layer has pre-defined functions. The Services Layer, as its name suggests, provides services to users; these services are categorized as NaaS, IaaS, SaaS, and PaaS, and this layer also performs the operational functions. The Resources & Network Layer is assigned the task of managing the physical and virtual resources. The Access Layer includes the API termination function and the inter-cloud peering and federation function. The User Layer is responsible for handling users' requests, i.e. it performs user-related as well as administrator-related functions. The cross layer spans all the layers and performs tasks associated with management, security & privacy, and so on. An important point about this architecture is that layers can be selected at the vendor's convenience: a cloud provider who is interested in using the architecture can select layers as per its requirements and then implement them. However, keeping in mind the security to be provided in the cloud, this principle of separation of layers requires each layer to take charge of certain responsibilities; security controls are passed from layer to layer, and other security-related functions can be implemented either in the cross layer or in the other layers.
Figure 3: The Cloud Computing Components [1]
V. Survey Conducted on Cloud Computing by IDC
This section presents the outcomes of a survey conducted by the International Data Corporation (IDC). It shows the increasing adoption of the cloud in the corporate world. The survey results relate to: (i) growth of the cloud, (ii) increased usage of the cloud, and (iii) top ten technology priorities.
A. Growth of cloud
Table 2: Cloud Growth [7]

Year     Cloud IT Spending   Total IT Spending   Total - Cloud Spend   Cloud % of Total Spend
2008     $16 B               $383 B              $367 B                4.00%
2012     $42 B               $494 B              $452 B                9.00%
Growth   27.00%              7.00%               4.00%
B. Increased usage of cloud
Table 3: Increased popularity of cloud [7]

                                2010    2011    % Growth
Number of Apps                  2.3     6.5     82%
Number of Devices               2       4       100%
Connecting Apps to the Cloud    64%     87%     38%

C. Top Ten Technology priorities
Figure 4: Top ten Technology priorities [7]
VI. Real life example of cloud-computing
We use web-based email systems (e.g. Yahoo and Google) to exchange messages with others; social networking sites (e.g. Facebook, LinkedIn, MySpace, and Twitter) to share information and stay in contact with friends; on-demand subscription services (e.g. Netflix and Hulu) to watch TV shows and movies; cloud storage (e.g. Humyo, ZumoDrive, and Dropbox) to store music, videos, photos, and documents online; collaboration tools (e.g. Google Docs) to work with people on the same document in real time; and online backup tools (e.g. JungleDisk, Carbonite, and Mozy) to automatically back up our data to cloud servers. Cloud computing has also been adopted by businesses: companies rent services from cloud computing service providers to reduce operational costs and improve cash flow. For example, the social news website reddit rents Amazon Elastic Compute Cloud (EC2) for its digital bulletin board service; the digital photo sharing website SmugMug rents Amazon S3 (Simple Storage Service) for its photo hosting service; the automaker Mazda USA rents Rackspace for its marketing advertisements; and the software company HRLocker rents Windows Azure for its human resources software service [4].
VII. Security issues in cloud-computing
A. Layered Framework For Cloud Security
A layered framework helps assure security in a cloud computing environment. There are four layers, as shown in Figure 4 [13].
Figure 4: Layered Framework of Cloud Security
The virtual machine layer is the first layer that secures the cloud. The cloud storage layer is the second layer of the framework and provides a storage infrastructure that combines resources from various cloud service providers to build a massive virtual storage system. The virtual network monitor layer is the fourth layer; to
handle problems, this layer combines both software and hardware solutions in virtual machines. The Cloud Security Alliance (CSA) is developing standards and security guidance for clouds. The Cloud Standards website is collecting and coordinating information about cloud-related standards under development by various groups. The CSA gathers solution providers, non-profits, and individuals to discuss current and future best practices for information assurance in the cloud; the Open Web Application Security Project (OWASP) is one of them.
B. Cloud Security Attacks
B.1 Malware Injection Attack
Web-based applications provide dynamic web pages so that users can access application servers through a web browser. The applications can be simple or complicated, and studies have shown that such servers are vulnerable to web-based attacks. According to a report by Symantec, the number of web attacks in 2011 increased by 36%, with over 4,500 new attacks each day. The attacks included cross-site scripting, injection flaws, information leakage and improper error handling, broken authentication and session management, failure to restrict URL access, improper data validation, insecure communications, and malicious file execution. A malware injection attack is a web-based attack in which hackers embed malicious code to exploit vulnerabilities of a web application. Cloud systems are also susceptible to malware injection attacks: hackers design a malicious program, application, or virtual machine and inject it into the target cloud service model (SaaS, PaaS, or IaaS, respectively). Once the injection is complete, the malicious module is executed and the hacker can do whatever he or she desires, such as data manipulation, eavesdropping, or data theft. SQL injection attacks and cross-site scripting attacks are the two most common forms of malware injection attack. SQL injection attacks increased 69% in Q2 2012 compared with Q1, according to a report by the secure cloud host provider FireHost, which said that between April and June it blocked nearly half a million SQLi attacks. Sony's PlayStation website was a victim of an SQL injection attack that was successfully used to plant unauthorized code on 209 pages promoting the PlayStation games "SingStar Pop" and "God of War". SQL injection attacks can also be launched using a botnet. The Asprox botnet used a thousand bots equipped with an SQL injection kit to fire an SQL injection attack: the bots first sent encoded SQL queries containing the exploit payload to Google to search for web servers running ASP.net, and then started an SQL injection attack against the web sites returned by those queries. Overall, approximately 6 million URLs belonging to 153,000 different web sites fell victim to SQL injection attacks by the Asprox botnet. A scenario demonstrating SQL injection attacks on cloud systems has also been illustrated in the literature.
Cross-site scripting (XSS) attacks are considered one of the most dangerous and malicious attack types by FireHost; cross-site scripting accounted for 27% of the web attacks that FireHost successfully blocked from harming its clients' web applications and databases during Q2 2012. Hackers inject malicious scripts, such as JavaScript, VBScript, ActiveX, HTML, or Flash, into a vulnerable dynamic web page so that the scripts execute in the victim's web browser. The attacker can then conduct illegal activities (e.g. executing malicious code on the victim's machine and stealing the session cookie used for authorization) to access the victim's account or trick the victim into clicking a malicious link. Researchers in Germany have successfully demonstrated an XSS attack against the Amazon AWS cloud computing platform. A vulnerability in Amazon's store allowed the team to hijack an AWS session and access all customer data, including authentication data, tokens, and even plain-text passwords [4].
B.2 Wrapping Attack
When a user uses a web browser to request a service from a web server, Simple Object Access Protocol (SOAP) messages in XML format are exchanged over HTTP to provide the service. WS-Security (Web Services Security) is applied to ensure the confidentiality and integrity of SOAP messages in transit between clients and servers: a digital signature is used to sign the message, and encryption is used to protect its content. Wrapping attacks use XML rewriting or XML signature wrapping to exploit a weakness in the way web servers validate signed requests. The attack is carried out during the translation of SOAP messages between a legitimate user and the web server. The hacker embeds a bogus element into the message structure, moves the original message body under this wrapper (duplicating the user's account and password captured during the login period), and sends the message to the server after replacing the content of the message body with malicious code. Since the original body is still valid, the server is tricked into authorizing the message that has actually been altered, and the hacker gains unauthorized access to protected resources and can then carry out various operations [4].
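One mitigation discussed in the literature is to check, before processing a request, that the element the application will act on is exactly the element covered by the XML signature. As a minimal illustrative sketch (not the authors' method), assuming SOAP 1.1 namespaces and a wsu:Id attribute on the Body, a server-side consistency check could look like the following Python code using lxml; the cryptographic signature itself would still have to be verified by a proper WS-Security library.

    from lxml import etree

    NS = {
        "soap": "http://schemas.xmlsoap.org/soap/envelope/",
        "ds": "http://www.w3.org/2000/09/xmldsig#",
        "wsu": ("http://docs.oasis-open.org/wss/2004/01/"
                "oasis-200401-wss-wssecurity-utility-1.0.xsd"),
    }

    def body_is_the_signed_element(envelope_bytes):
        """Return True only if there is exactly one SOAP Body and a
        ds:Reference in the signature points at that Body's wsu:Id."""
        root = etree.fromstring(envelope_bytes)
        bodies = root.findall("soap:Body", NS)
        if len(bodies) != 1:          # a wrapped/duplicated Body is suspicious
            return False
        body_id = bodies[0].get("{%s}Id" % NS["wsu"])
        refs = root.findall(".//ds:Signature/ds:SignedInfo/ds:Reference", NS)
        referenced_ids = {r.get("URI", "").lstrip("#") for r in refs}
        return body_id is not None and body_id in referenced_ids

This check only guards against the signed reference and the processed Body diverging; it is one ingredient of a defence, not a complete one.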
C. Problems for Cloud Service Users
C.1 Responsibility ambiguity
Cloud service users consume delivered resources through service models. Users and providers of cloud services may run into conceptual conflicts because of a lack of mutual understanding of responsibilities. The problem of which entity is the data processor and which one is the data provider remains open at an international scale [1].
C.2 Loss of Governance
Cloud service providers can access a part of an enterprise or industry that is implemented on the cloud. The cloud service models are responsible for this loss of governance [1].
C.3 Loss of Trust
It is sometimes difficult for a cloud service user to assess the trust level of the provider because of the black-box nature of the cloud service. There is no formalized way to ensure the provider's security level, and this lack of shared security-level information becomes a serious security problem for users of cloud services [1].
C.4 Unsecure Cloud Service User Access
Since most resources are delivered through remote connections and non-protected APIs, attack methods such as phishing, fraud, and exploitation of software vulnerabilities take place. Credentials and passwords are often reused, which increases the likelihood of these attacks. Cloud solutions add a new threat to the landscape: if an attacker gains access to your credentials, they can manipulate your data, return falsified information, redirect your clients to illegitimate sites, and much more, and may use your account for any purpose [1].
C.5 Lack of Information/Asset Management
Cloud service users have serious concerns about the lack of information/asset management by cloud service providers, for example the location of sensitive assets/information, the lack of physical control over data storage, the reliability of data backup, and countermeasures for business continuity planning (BCP) and disaster recovery. Cloud service users also have important concerns about exposure of data to foreign governments and about compliance with privacy laws such as the EU data protection directive [1].
C.6 Data loss and leakage
If the cloud service user loses an encryption key or privileged access code, serious problems follow. Accordingly, poor management of cryptographic information such as encryption keys, authentication codes, and access privileges can lead to serious damage through data loss and unexpected leakage to the outside [1].
VIII. Countermeasures
A cloud computing infrastructure includes a cloud service provider, which provides computing resources to cloud end users who consume those resources. In order to assure the best quality of service, providers are responsible for ensuring that the cloud environment is secure [4]. This security can be provided in any of the following ways:
A. Access Management
The data stored in the cloud belonging to various users is sensitive and private, and access control mechanisms should be applied to ensure that only authorized users can access their data. Not only do the physical computing systems (where data is stored) have to be continuously monitored, but traffic access to the data should also be restricted by security techniques that ensure no unauthorized users gain access. Firewalls and intrusion detection systems are common tools used to restrict access from untrusted sources and to monitor malicious activities. In addition, authentication and authorization standards such as the Security Assertion Markup Language (SAML) and the eXtensible Access Control Markup Language (XACML) can be used to control access to cloud applications and data [4]. The following control statements should be considered to ensure proper access control management and hence the security of the cloud [14] (a minimal authorization-check sketch follows this list):
1. Authorized access to information
2. Accounting user access right
3. Restricted access to network services.
4. Restrained access to operating systems.
5. Authorized access to applications and systems.
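As a toy illustration of the first two controls (authorization plus accounting of access), and not part of the original paper, the sketch below shows a role-based check with an audit log; the roles, resources, and function names are hypothetical assumptions made only for the example.

    import logging
    from datetime import datetime, timezone

    logging.basicConfig(level=logging.INFO)
    audit_log = logging.getLogger("cloud.audit")

    # Hypothetical role-to-permission mapping (an assumption, not from the paper).
    PERMISSIONS = {
        "admin":   {"read", "write", "delete"},
        "analyst": {"read"},
    }

    def authorize(user, role, action, resource):
        """Allow the action only if the role grants it, and record the decision."""
        allowed = action in PERMISSIONS.get(role, set())
        audit_log.info("%s user=%s role=%s action=%s resource=%s allowed=%s",
                       datetime.now(timezone.utc).isoformat(),
                       user, role, action, resource, allowed)
        return allowed

    # Example: an analyst may read customer data but not delete it.
    assert authorize("alice", "analyst", "read", "customers.db") is True
    assert authorize("alice", "analyst", "delete", "customers.db") is False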
B. Data Protection
Data breaches caused by insiders can be either accidental or intentional. Since it is difficult to identify insiders' behavior, it is better to apply proper security tools to deal with insider threats. The tools mainly used include data loss prevention systems, anomalous behavior pattern detection tools, format-preserving encryption tools, user behavior profiling, decoy technology, and authentication and authorization technologies. These tools provide functions such as real-time detection through traffic monitoring, audit-trail recording for future forensics, and trapping malicious activity in decoy documents [4].
C. Security Techniques Implementation
The malware injection attack has become a major security concern in cloud computing systems. It can be prevented by using the File Allocation Table (FAT) system architecture: with the help of the File Allocation Table it is easy to predict in advance which code or application the user is about to run, and by comparing the instance with previous ones that have already been executed from the customer's machine, the genuineness and originality of the new instance can be determined. Another way to prevent malware injection attacks is to store a hash value of the original service instance's image file; by performing an integrity check between the original and new service instance images, malicious instances can be identified [4].
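The hash-based integrity check described above can be sketched in a few lines. This is only an illustration of the general idea, assuming the reference hash of the original image has already been stored somewhere trusted; in practice that stored hash would itself need protection.

    import hashlib

    def sha256_of_file(path, chunk_size=1 << 20):
        """Compute the SHA-256 digest of a (possibly large) image file."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def instance_image_is_genuine(new_image_path, trusted_hash):
        """Accept the new service instance only if its image hash matches
        the stored hash of the original instance image."""
        return sha256_of_file(new_image_path) == trusted_hash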
IX. Countermeasures for challenges inherited from network concept
A. SQL injection attacks
Filtering techniques that sanitize user input are used to check SQL injection attacks. A proxy-based architecture can also be used to prevent SQL injection attacks by dynamically detecting and extracting users' inputs for suspected SQL control sequences [10].
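Beyond input filtering, a standard complementary defence is to keep user input out of the query text altogether by using parameterized queries. The following sketch, not taken from the cited papers, shows the difference using Python's built-in sqlite3 module; the table and column names are hypothetical.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

    user_supplied = "alice' OR '1'='1"   # a typical injection attempt

    # Vulnerable: user input is concatenated into the SQL text.
    unsafe_query = "SELECT * FROM users WHERE name = '%s'" % user_supplied

    # Safer: the driver binds the value, so it is treated as data, not SQL.
    safe_rows = conn.execute(
        "SELECT * FROM users WHERE name = ?", (user_supplied,)
    ).fetchall()

    print(len(conn.execute(unsafe_query).fetchall()))  # 1 row leaks (injection works)
    print(len(safe_rows))                              # 0 rows (no user of that name)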
B. Cross Site Scripting (XSS) attacks
Techniques commonly used to prevent XSS attacks include data leakage prevention technology, content filtering, and web application vulnerability detection technology [10]. These technologies not only adopt various methodologies to detect security flaws but also fix them.
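A further application-level defence against reflected XSS is to escape untrusted data before rendering it into HTML. The minimal sketch below is an illustration using Python's standard library, not a technique taken from the cited papers; the page fragment and payload are made up.

    import html

    def render_greeting(user_supplied_name):
        """Escape user input so injected markup is displayed as text,
        not executed by the victim's browser."""
        return "<p>Hello, %s!</p>" % html.escape(user_supplied_name)

    payload = '<script>document.location="http://evil.example/?c="+document.cookie</script>'
    print(render_greeting(payload))
    # The <script> tag is rendered harmlessly as &lt;script&gt;... text.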
• Man in the Middle attacks (MITM): Measures such as evaluating software-as-a-service security, separating endpoint and server security processes, and evaluating virtualization at the endpoint have been proposed to tackle this type of attack in cloud computing. In most cases, the security practices implemented in the organization's private network apply to the private cloud too; however, in the case of a public cloud implementation, the network topology might need to be changed in order to implement the security features [10][15].
• Sniffer Attacks: These raise security concerns with the hypervisor. If a hacker is able to get control over the hypervisor, he can make changes to any of the guest operating systems and gain control over all the data passing through the hypervisor. Based on an understanding of how the various components in the hypervisor architecture behave, an advanced cloud protection system can be developed by monitoring the activities of the guest machines and the communication among the different components [10][15].
• Denial of Service Attacks: Using an Intrusion Detection System (IDS) is the most popular method of securing the cloud against denial of service attacks. Each cloud is provided with a separate IDS; the IDSs communicate with each other, and in case of an attack on a particular cloud the cooperating IDSs inform the whole system. A decision on the trustworthiness of a cloud is taken by voting, and the overall system performance is not hampered [10][15].
• Cookie Poisoning: This can be avoided either by performing regular cookie cleanup or by implementing an encryption scheme for the cookie data. This method can help considerably in confronting cookie poisoning attacks [10][15].
• Distributed Denial of Service Attacks: The use of an IDS in the virtual machine can prove helpful to some extent in protecting the cloud from DDoS attacks. A SNORT-like intrusion detection mechanism is loaded onto the virtual machine to sniff all traffic, either incoming or outgoing, and intrusion detection systems can be applied on all machines to safeguard against DDoS attacks [10][15]. (A minimal rate-based detection sketch follows this list.)
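As a highly simplified illustration of the kind of traffic-rate anomaly an IDS might flag for a (D)DoS attack, and not a description of SNORT or of the cited papers' systems, the sketch below counts requests per source over a sliding window and reports sources that exceed a threshold; the window length and threshold are arbitrary assumptions.

    import time
    from collections import defaultdict, deque

    WINDOW_SECONDS = 10     # assumed sliding window
    MAX_REQUESTS = 100      # assumed per-source threshold

    recent = defaultdict(deque)   # source IP -> timestamps of recent requests

    def observe_request(source_ip, now=None):
        """Record a request and return True if the source looks like it is flooding."""
        now = time.time() if now is None else now
        q = recent[source_ip]
        q.append(now)
        while q and now - q[0] > WINDOW_SECONDS:   # drop events outside the window
            q.popleft()
        return len(q) > MAX_REQUESTS

    # Example: 150 requests from one IP within a second trips the detector.
    alerts = [observe_request("203.0.113.7", now=1000.0 + i * 0.005) for i in range(150)]
    print(any(alerts))   # True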
Table 3: Solutions [7] [16]

Solution                            Description
Data Handling Mechanism             Classify the confidential data. Define the geographical region of data. Define policies for data destruction.
Data Security Mitigation            Encrypting personal data. Avoid putting sensitive data in the cloud.
Design for Policy Standardization   Fair information principles are applicable. The CSP should follow standardization in data tracking and handling.
Accountability                      For businesses, data loss, leakage or privacy violation is catastrophic. Accountability is needed in legal and technical terms. Audit is needed at every step to increase trust.
Mechanism for raising trust         All CSPs make contractual agreements. Social and technological methods to raise trust. Joining individual personal rights, preferences and conditions straightforwardly to the uniqueness of data. Connected devices should be under the control of the CSP. Use intelligent software.
X. Conclusion
Every company and individual has large amounts of data, and it is hard to carry it all along with them; this is why cloud computing came into the light. It provides an easy and fast way to store data. Cloud service providers supply computing resources (e.g. servers, storage, and applications) to users and control access to those services, so it becomes the responsibility of the service providers to protect their networks from unauthorized access. But since the data sits on the Internet, anyone who manages to crack the storage provider's servers can fetch it; this is why security is the biggest problem in cloud computing. In this paper we present possible threats to cloud computing, in terms of both users and vendors, together with countermeasures and
solutions to these attacks.
XI. Future Work
Cloud computing is not yet fully developed and still needs to be explored much further, especially when it comes to the security of users' data. After studying cloud computing extensively, it has been observed that security is the most important threat to both the users and the vendors of cloud computing. Since it is a very recent technology, many issues remain to be considered. Some of them are technical, including scalability, elasticity, data handling mechanisms, reliability, software licensing, ownership, performance, and system development and management, while others are non-technical, such as legal and economic aspects [7]. Vendors, researchers, and IT security professionals are working on the security issues associated with cloud computing. Although different models and tools have been proposed, nothing fully satisfactory has yet been found. While doing research on the security issues of cloud computing, we found that there are no security standards available for secure cloud computing. In our future work we will work on security standards for secure cloud computing and will try to make the cloud more secure for both users and vendors [6].
References
[1] Kangchan Lee, "Security Threats in Cloud Computing Environments", International Journal of Security and Its Applications, Vol. 6, No. 4, October 2012.
[2] S. Subhashini, V. Kavitha, "A survey on security issues in service delivery models of cloud computing", Journal of Network and Computer Applications 34 (2011) 1–114.
[3] Keiko Hashizume, David G. Rosado, Eduardo Fernandez-Medina and Eduardo B. Fernandez, "An analysis of security issues for cloud computing", Journal of Internet Services and Applications 2013, 4:5, http://www.jisajournal.com/content/4/1/5.
[4] Te-Shun Chou, "Security Threats on Cloud Computing Vulnerabilities", International Journal of Computer Science & Information Technology (IJCSIT), Vol. 5, No. 3, June 2013.
[5] Monjur Ahmed and Mohammad Ashraf Hossain, "Cloud Computing and Security Issues in the Cloud", International Journal of Network Security & Its Applications (IJNSA), Vol. 6, No. 1, January 2014.
[6] Farhan Bashir Shaikh, Sajjad Haider, "Security Threats in Cloud Computing", 6th International Conference on Internet Technology and Secured Transactions, 11-14 December 2011, Abu Dhabi, United Arab Emirates.
[7] F. A. Alvi, B. S. Choudary, N. Jaferry, E. Pathan, "A review on cloud computing security issues & challenges".
[8] Osama Harfoushi, Bader Alfawwaz, Nazeeh A. Ghatasheh, Ruba Obiedat, Mua'ad M. Abu-Faraj, Hossam Faris, "Data Security Issues and Challenges in Cloud Computing: A Conceptual Analysis and Review", http://www.scirp.org/journal/cn, http://dx.doi.org/10.4236/cn.2014.61003.
[9] Cloud Security Alliance, "Top Threats to Cloud Computing V1.0".
[10] Vahid Ashktorab, Seyed Reza Taghizadeh, "Security Threats and Countermeasures in Cloud Computing", International Journal of Application or Innovation in Engineering & Management (IJAIEM), www.ijaiem.org, Volume 1, Issue 2, October 2012.
[11] Kuyoro S. O., Ibikunle F., Awodele O., "Cloud Computing Security Issues and Challenges", International Journal of Computer Networks (IJCN), Volume (3), Issue (5), 2011.
[12] Rajani Sharma, Rajender Kumar Trivedi, "Literature review: Cloud Computing – Security Issues, Solution and Technologies", International Journal of Engineering Research, Volume No. 3, Issue No. 4, pp. 221-225, 1 April 2014.
[13] Anitha Y, "Security Issues in Cloud Computing - A Review", International Journal of Thesis Projects and Dissertations (IJTPD), Vol. 1, Issue 1, pp. 1-6, October-December 2013, www.researchpublish.com.
[14] Farzad Sabahi, "Cloud Computing Security Threats and Responses", IEEE, 2011.
[15] http://www.researchgate.net/publication/51940172_A_Survey_on_Security_Issues_in_Cloud_Computing
[16] http://www.slideshare.net/piyushguptahmh/a-study-of-the-issues-and-security-of-cloud-computing-41759372
REAL TIME TRAFFIC LIGHT CONTROLLER USING IMAGE
PROCESSING
Yash Gupta1, Shivani Sharma2
Student of Computer Science & Engineering,
Department of Computer Science and Engineering,
IMS Engineering College, Ghaziabad, 201009, Uttar Pradesh, INDIA
__________________________________________________________________________________________
Abstract: In this era, vehicular traffic problems are increasing day by day, particularly in urban areas; because of the fixed time duration of green, orange, and red signals, waiting times are longer and vehicles use more fuel. Traffic congestion occurs because infrastructure growth is slow compared to the increase in the number of vehicles. Growing vehicular problems push us to adopt new and advanced technology to improve the present traffic light system. This is a very cumbersome problem, and there is a strong incentive to use optimization and simulation methods to tackle it better. This paper proposes the design of a new real-time emergency vehicle detection system. We present techniques by which the traffic problem can be addressed using an intelligent traffic light control system. The traffic system can be made more efficient by using image processing techniques such as edge detection to estimate the actual road traffic.
Keywords: Real traffic control, image processing, camera, road traffic, intelligent control system
__________________________________________________________________________________________
I. Introduction
Nowadays, as traffic increases rapidly and day-to-day life becomes more hectic, traffic problems are also growing. Road engineers and researchers in this area face a very large variety of problems, generally related to the movement of vehicles and increasing traffic. The traffic system is at the core of the modern world, and developments in many aspects of life depend on it. An excessive number of vehicles on the roads and improper controls create traffic jams, which hamper daily schedules, business, and commerce. An automated traffic detection system is therefore required to run the system smoothly and safely, leading towards traffic analysis, proper adjustment of control management, and distribution of controlling signals [12]. The main problems with the traffic system are mentioned below:
HEAVY TRAFFIC JAMS - Sometimes there is a huge increase in the number of vehicles, which causes large traffic congestion. The problem generally occurs in the morning before office hours and in the evening after office hours.
NO TRAFFIC BUT STILL NEED TO WAIT - Many times there is no traffic but people still need to wait because the traffic light stays red at that time. This leads to a waste of time and fuel, and if people do not follow the traffic rules they have to pay a fine.
EMERGENCY VEHICLE STUCK IN TRAFFIC JAM - In many cases emergency vehicles such as ambulances, fire brigades, and police get trapped in traffic jams while waiting for the traffic light to turn green. This is a very critical situation for cases involving risk to life.
II. Literature Review
In order to estimate traffic parameters, intelligent traffic systems use various techniques, such as inductive loop detectors for counting and presence detection. Sensors placed on the pavement (magnetometers, road tubes) can be damaged by snow removal equipment or street sweepers. To overcome these deficiencies, many researchers have applied computer vision and image processing techniques to make the traffic light system fully adaptive and dynamic [8]. A traffic control system based on image processing tries to reduce the traffic jams caused by fixed traffic lights. Some researchers use image processing to measure traffic density and regulate the traffic light, implemented on a Broadcom BCM2835 SoC. Their system contains both a hardware and a software module: the hardware module is a Broadcom BCM2835 SoC running a Linux-based operating system with a camera connected to the board, and Python 2.7 is used as the platform for the image processing algorithm [9].
Traffic management is becoming a very cumbersome problem in day-to-day life. There is a need to deal with traffic problems in an efficient way that reduces people's waiting time and also saves fuel and money. An intelligent traffic light controller using fuzzy logic concepts and active neural network approaches is presented in [10].
S. Aishwarya describes the implementation of a real-time emergency vehicle detection system. The system can be made more efficient by adding new techniques such as simulated computer vision, using image processing to compute the time for each request for every road before permitting any signal. GSM techniques can be used to handle emergency situations, and the system also provides a provision to reduce the traffic level by sending an alert SMS to the nearest traffic control station [11].
The traffic control system designed with both software and hardware modules aims to achieve several goals: distinguishing the presence and absence of vehicles in road images, signalling the traffic light to go red if the road is empty, and signalling the traffic light to go red if the maximum time for the green light has elapsed even if vehicles are still present on the road. Using electronic sensors was the existing method; the new system detects vehicles through images instead of using electronic sensors embedded in the pavement. A camera is installed along with the traffic light and captures image sequences, which are then analyzed using digital image processing, and the traffic light is controlled according to the traffic conditions on the road, as presented in [5].
Existing methods for traffic jam detection include Magnetic Loop Detectors (MLD), which count the number of vehicles on the road using magnetic properties. Current traffic control techniques based on roadside infra-red and radar sensors provide only a limited amount of traffic information. Inductive loop detectors provide a cost-effective solution, but they suffer a high failure rate when installed in poor road surfaces and also obstruct traffic during maintenance and repair. Light beams (IR, laser, etc.) are also used, with electronic devices recording the events and detecting traffic jams. Infrared sensors are affected by fog to a greater degree than video cameras and cannot be used for effective surveillance, as presented in [13].
The increasing demand on traffic management systems pushes us to provide new technology for controlling traffic. An automated traffic detection system is therefore required to run the system in a smooth and safe manner. Contemporary approaches such as image processing and computer vision are highly recommended. In these approaches, the involvement of computers provides online features, facilitates centralized traffic control, and enables a compact platform; information fetched through telephone, web, or other networks can easily be gathered. Moreover, the traffic flow of a whole city can be observed from a single centre and statistics can be compiled [6].
Researchers in this field have used several types of traffic management control systems. A manual control system requires only manpower to control the traffic: depending on the area, traffic police use tools such as sign boards, sign lights, and whistles. Automatic traffic lights are controlled by timers and electrical sensors; in each phase of the traffic light a constant value is loaded into the timer, and the lights turn ON and OFF automatically as the timer value changes. A vision-based system observes the actual situation, so it functions much better than systems that rely on detecting the vehicles' metal content. In the edge detection technique, operators such as the Sobel operator, Robert's cross operator, and Prewitt's operator are used in both the hardware and software modules; MATLAB version 7.8 has been used as the image processing software that performs the specific tasks in [14]. Existing methods also use the technique of optimizing the traffic light controller in a city using a microcontroller. Such a system tries to reduce the possibility of traffic jams caused by traffic lights to a great extent. The system is based on the 89V51RD2 microcontroller, which belongs to the MCS-51 family, and also contains an IR transmitter and IR receiver mounted on either side of the road. The IR system is activated whenever a vehicle passes on the road; the microcontroller controls the IR system, counts the number of vehicles passing, stores the counts, and updates the traffic light delays accordingly. The traffic light is situated at a certain distance from the IR system; thus, based on the vehicle count, the traffic system is controlled [15].
Captured image of traffic jam
III. Objectives
In the past, control measures aimed to minimize delay for all vehicles using the road system. The following objective functions must be put forward:
• minimize overall delay to vehicles
• minimize delays to public transport
• minimize delays to emergency services
• minimize delays to pedestrians
• maximize reliability and accuracy
• minimize accident potential for all users
• minimize environmental impact of vehicular traffic
IV. Results
A. Image acquisition
In this step, the image is taken as input via a camera.
B. Image enhancement
C. Determining the threshold for the image
D. Filling the hole and removing the noise
E. Boundaries for the object detected
F. Final Output
The number of objects found is 19. According to the object count we further calculate the traffic density, and using this algorithm we can dynamically calculate the time for which the traffic light signal should stay open.
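The processing chain in steps A–F (enhance, threshold, fill holes, remove noise, find object boundaries, count objects, derive a green-light duration) can be sketched as follows. This is only an illustrative reconstruction, not the authors' code; it assumes OpenCV 4.x (the cv2 package), and the morphology kernel size, minimum object area, and the linear mapping from vehicle count to green time are all assumed values.

    import cv2

    def green_time_from_frame(image_path, min_area=500, base_s=10, per_vehicle_s=2, max_s=60):
        """Estimate a green-signal duration from one camera frame."""
        gray = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2GRAY)
        gray = cv2.equalizeHist(gray)                                   # B. enhancement
        _, mask = cv2.threshold(gray, 0, 255,
                                cv2.THRESH_BINARY + cv2.THRESH_OTSU)    # C. threshold
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
        mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)          # D. fill holes
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)           # D. remove noise
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)         # E. boundaries
        vehicles = sum(1 for c in contours if cv2.contourArea(c) >= min_area)
        return vehicles, min(base_s + per_vehicle_s * vehicles, max_s)  # F. green time

    # Example usage with a hypothetical frame:
    # count, seconds = green_time_from_frame("junction_frame.jpg")
    # print(count, seconds)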
V. Conclusion and Future Work
The improvement of town traffic conditions depends largely on modern ways of managing traffic, and real-time traffic controllers contribute to the improvement of the traffic system. The fuzzy logic approach presented delivers more reliable results by drawing on expert knowledge, and enhancing traffic control systems with fuzzy logic greatly improves their reliability. The idea of traffic jam detection can be extended further: depending on the location of the camera at road level, the scenes can also be used for number plate recognition. Software should be developed to identify the volume and density of road vehicles. The speed of vehicles can be detected as well, which can eventually help the traffic management system.
References
[1]. Fani Bhushan Sharma et al., "Fuzzy logic applications for traffic control: an optimum and adaptive controlling application", International Journal on Emerging Technologies, 10 Jan. 2010.
[2]. Madhavi Arora et al., "Real Time Traffic Light Control System Using Morphological Edge Detection and Fuzzy Logic", 2nd International Conference on Electrical, Electronics and Civil Engineering (ICEECE'2012), April 28-29, 2012.
[3]. U. F. Eze et al., "Fuzzy Logic Model for Traffic Congestion", IOSR Journal of Mobile Computing & Application (IOSR-JMCA), May-June 2014.
[4]. Sandeep Mehan, "Introduction of Traffic Light Controller with Fuzzy Control System", IJECT, Vol. 2, Sep. 2011, ISSN: 2230-9543.
[5]. Pallavi Choudekar et al., "Real time traffic light control using image processing", International Journal of Computer Science and Engineering (IJCSE).
[6]. Khan Muhammad Nafee Mostafa, Qudrat-E-Alahy Ratul, "Traffic Jam Detection System", pp. 1-4.
[7] Prof. Uma Nagaraj et al., "Traffic Jam Detection Using Image Processing", International Journal of Engineering Research and Applications (IJERA), ISSN: 2248-9622, March-April 2013.
[8] David Beymer et al., "A real-time computer vision system for measuring traffic parameters", IEEE Conf. on Computer Vision and Pattern Recognition, 1997.
[9] Payal Gupta et al., "Real Time Traffic Light Control System (Hardware and Software Implementation)", International Journal of Electronic and Electrical Engineering, ISSN 0974-2174, Volume 7, Number 5, 2014.
[10] S. Rajeswari, "Design of Sophisticated Traffic Light Control System", Middle-East Journal of Scientific Research 19, IDOSI Publication, 2014.
[11] S. Aishwarya et al., "Real Time Traffic Light Control System Using Image Processing", IOSR Journal of Electronics and Communication Engineering (IOSR-JECE).
[12] Khan Muhammad Nafee Mostafa, Qudrat-E-Alahy Ratul, "Traffic Jam Detection System", pp. 1-4.
[13] Vikramaditya Dangi et al., "Image Processing based Intelligent Traffic Controller", Undergraduate Academic Research Journal (UARJ), Volume-1, Issue-1, 2012, pp. 1-17.
[14] G. Lloyd Singh et al., "Embedded based Implementation: Controlling of Real Time Traffic Light using Image Processing", National Conference on Advances in Computer Science and Applications with International Journal of Computer Applications (NCACSA 2012).
[15] Ms Promila Sinhmar, "Intelligent traffic light and density control using IR sensors and microcontroller", International Journal of Advanced Technology & Engineering Research (IJATER).
Computational and Artificial Intelligence in Gaming
Shubham Dixit1, Nikhilendra Kishore Pandey2
Student of Computer Science & Engineering,
Department of Computer Science and Engineering,
IMS Engineering College, Ghaziabad, 201009, Uttar Pradesh, INDIA
__________________________________________________________________________________________
Abstract: Artificial intelligence (AI) is the intelligence exhibited by digital computers or computer-controlled automation performing tasks that reflect numerous skills, such as the ability to reason, discover meaning, generalize, or learn from past experience. Computational intelligence (CI) is a methodology that addresses complex real-world problems for which traditional methodologies such as first principles and black-box approaches are ineffective; it primarily includes fuzzy logic systems, neural networks, and evolutionary computation. Games have long been a popular area of AI research, and they have become a fast-growing software industry since the 1990s. Video and symbolic games have become part of our daily lives and a major form of entertainment. Despite this fast growth, research on intelligent games with learning ability remains in its initial phase. A game that learns and evolves through interactive play becomes more challenging and more attractive, and demands less labour-intensive script design and planning during development. Recent research shows that computational intelligence techniques, characterized by numerical learning, optimization, and organization, can provide a strong toolset to address the difficulties in game learning. The objective of this project is to survey the literature on AI games with various learning functions and to develop a simple learning scheme for AI games. In the first phase, the main areas of AI games are investigated, together with the major academic conferences that concentrate on AI games and published books. In the second phase, the task is to investigate the main methodologies used for enhancing game intelligence, including traditional AI methods and newly developed computational intelligence methods; first-person shooting games and real-time strategy games are considered, and this phase also details the role of artificial intelligence in various games. In the third phase, the investigation focuses on learning ability and optimization technology in AI games; the practicality of Q-learning, Bayesian learning, statistical learning, and system state-space based iterative learning is explored.
Keywords: Game artificial intelligence, player experience modeling, procedural content generation, game data
mining, game AI flagships
_________________________________________________________________________________________
I. Introduction
Games have long been a popular area of artificial intelligence research, and for a good reason. They are challenging yet easy to formalize, making it possible to develop new AI methods, test how well they work, and demonstrate that machines are capable of impressive behavior generally thought to require intelligence, without putting human lives or property at risk. Most of the research to date has targeted games that can be described in a compact symbolic form, such as board and card games. Artificial intelligence has been used to build systems such as DEEP BLUE, which defeated the world chess champion in 1997. Since the 1990s, the gaming field has changed enormously. Cheap yet powerful computer hardware has made it possible to simulate complex physical environments, leading to an explosion of the video game industry. From modest beginnings in the 1960s (Baer 2005), entertainment software sales grew to $25.4 billion worldwide in 2004, according to Crandall and Sidak (2006). Video games have become a facet of many people's lives, and the market continues to expand. Curiously, this expansion has involved very little AI research. Many games use no AI techniques, and those that do are typically supported by relatively standard, effort-intensive authoring and scripting methods. The main reason is that video games are very different from symbolic games. There are usually several agents involved, embedded in a simulated physical environment where they interact through sensors and effectors that take on numerical rather than symbolic values. To be effective, the agents need to integrate noisy inputs from several sensors, and they need to react quickly and change their behavior during the game. The techniques that were developed for and with symbolic games are not well suited to video games. In contrast, soft computational intelligence (CI) techniques such as neural networks, evolutionary computation, and fuzzy systems are well suited to games. They excel in exactly the kinds of fast, noisy, statistical, numerical, and changing domains offered by today's video games.
S. Dixit and N. K. Pandey, International Journal of Software and Web Sciences, Special Issue-1, May 2015, pp. 55-60
chance just like that of the symbolic game for GOFAI in Eighties and Nineties: an opportunity to develop and
take a look at CI techniques, and a chance to transfer the technology to business.
Much analysis is already being through with CI in video games, as explained by e.g. the CIG and AIIDE
symposia, also as recent gaming special sessions and problems in conferences and journals (Fogel et al. 2005;
Kendall and Lucas 2005; Laird and van Lent 2006; Louis and Kendall 2006; Young and Laird 2005; see Lucas
and Kendall 2006 for a review). The objective of this paper is to review the attainments and future possibilities
of a specific approach, that of evolving neural networks, or neuro-evolution (NE). Although NE ways were
originally developed keeping in mind the field of Robotics, and were initially applied to symbolic games, the
technique is especially similar temperament for video games. It is also useful for modifications of existing
games as well as it can be used as a basis for new game genres wherever machine learning plays a central role.
The most challenges are in guiding the evolution by making use of human data, and in achieving behavior that's
not solely triple-crown, however visibly intelligent to a human spectator. After an assessment of AI in gaming
and ongoing neuroevolution technology, these options and challenges are mentioned in this paper, making use
of many implemented systems as examples.
II. Phase 1
Video Games: History
The history of artificial intelligence in video games can be traced from the mid-sixties. The earliest real artificial intelligence in gaming was the computer opponent in "Pong" and its many variations. The incorporation of microprocessors at that time would have allowed better AI, but this did not happen until much later. Game AI agents for sports games such as soccer and basketball were primarily goal-oriented towards scoring points and were governed by simple rules that controlled when to pass, shoot, or move. There was a far greater improvement in the development of game AI with the appearance of fighting games such as "Kung Fu" for Nintendo and "Mortal Kombat". The moves of the computer opponents were determined by what each player was currently doing and where they were standing. In the most basic games, there was simply a lookup table mapping what was currently happening to the appropriate best action, and opponent movement was based mostly on stored patterns. In the most complex cases, the computer would perform a short minimax search of the potential state space and return the best action. The minimax search had to be of short depth and not too time-consuming, since the game was taking place in real time. The emergence of new game genres in the 1990s prompted the use of formal AI tools such as finite state machines. Games in all genres began to exhibit far better AI once they started to use non-deterministic AI methods. Currently, driving games such as "NASCAR 2002" have system-controlled drivers with their own personalities and driving styles [1] [2] [12].
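To make the minimax idea mentioned above concrete, the sketch below shows a generic depth-limited minimax over an abstract game interface. It is only an illustration of the technique, not code from any of the games discussed; the game interface (legal_moves, apply, evaluate, game_over) is a hypothetical assumption.

    def minimax(state, game, depth, maximizing):
        """Depth-limited minimax: returns (score, best_move) for the current player."""
        if depth == 0 or game.game_over(state):
            return game.evaluate(state), None      # heuristic value of the position
        best_move = None
        best_score = float("-inf") if maximizing else float("inf")
        for move in game.legal_moves(state):
            score, _ = minimax(game.apply(state, move), game, depth - 1, not maximizing)
            if (maximizing and score > best_score) or (not maximizing and score < best_score):
                best_score, best_move = score, move
        return best_score, best_move

    # Usage (with a hypothetical `game` object implementing the four methods above):
    # score, move = minimax(current_state, game, depth=3, maximizing=True)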
Main Areas Where AI Applies To Games:
• Non-Player Character AI
• Decision Making and Control
• Machine Learning
• Interactive Storytelling
• Cooperative Behaviors
• Player Control
• Content Creation [14]
Conferences:
Listed below are some of the world's major conferences that discuss and focus on the growth and development of game AI.
• GDC – Game Developers Conference
• The Annual International Conference on Computer Games
• IJCAI – the International Joint Conference on Artificial Intelligence
• AAAI Conference on AI
• The ACM international conference [15] [16] [19]
III. Phase 2
Traditional strategies
The traditional technique for implementing game AI relied mainly on finite state machines, and the complexity of the AI that could be implemented was limited by them. Traditional AI implementations also predominantly followed a deterministic approach: the behavior or performance of the NPCs, and of the game in general, could be specified beforehand and was therefore predictable. There was a lack of uncertainty, which detracted from the entertainment value of the games. Deterministic behavior is best explained by taking the example of a simple chasing rule [20].
Current methods
For the implementation of game AI, the current methods in use range from neural networks, Bayesian
techniques and genetic algorithms to finite state machines and path finding. All these methods are feasible,
but not applicable in every given situation. Game AI is a field where research is still ongoing,
and developers are still perfecting the art of implementing these methods in any given situation.
Path finding and Steering
Path finding addresses the problem of finding a suitable path from a starting point to a goal, avoiding
obstacles and enemies, and thus improving the efficiency of the game. Movement involves choosing a path
and moving along it. At one end of the spectrum, a sophisticated pathfinder coupled with a trivial movement algorithm
would find a path when the object begins to move, and the object would then follow that path, oblivious to everything
else. At the other end, a movement-only system would not look ahead to find a path (instead, the initial
"path" would be a straight line), but would take one step at a time, considering the local environment at
each point. The gaming field has observed optimum results by using both path finding and
movement algorithms together [1] [4].
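As an illustration of the "sophisticated pathfinder" end of this spectrum, the following is a minimal sketch of A* search on a 4-connected grid; the grid representation (0 = free, 1 = obstacle) and the Manhattan heuristic are assumptions made for the example, not part of any particular engine.

import heapq

def astar(grid, start, goal):
    """A* on a 4-connected grid of 0 (free) / 1 (blocked) cells.
    Returns the path as a list of (row, col) cells, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    h = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])   # Manhattan heuristic
    open_set = [(h(start), 0, start)]
    came_from, g = {}, {start: 0}
    while open_set:
        _, cost, cur = heapq.heappop(open_set)
        if cur == goal:                        # reconstruct the path back to start
            path = [cur]
            while cur in came_from:
                cur = came_from[cur]
                path.append(cur)
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                new_g = cost + 1
                if new_g < g.get(nxt, float("inf")):
                    g[nxt] = new_g
                    came_from[nxt] = cur
                    heapq.heappush(open_set, (new_g + h(nxt), new_g, nxt))
    return None

# Example: path = astar([[0, 0, 0], [1, 1, 0], [0, 0, 0]], (0, 0), (2, 0))

A trivial movement rule would then simply step the unit along the returned cells, while a steering layer could handle local avoidance between cells.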
Finite State Machines and Decision Trees
Finite State Machines (FSMs) describe which condition or event causes the current state to be replaced by
another.
Game AI is usually implemented with finite state machines (FSMs), or layers of finite state machines, which are
troublesome for game designers to edit. Looking at typical AI FSMs, there are design patterns that occur many
times. One can use these patterns to create a custom scripting language that is both powerful and accessible.
The technique may be further extended into a "stack machine" so that characters have improved memory of
previous behaviors. [1] [7]
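A minimal sketch of a guard NPC expressed as a finite state machine is given below; the states and trigger conditions (distance thresholds, health) are invented for illustration rather than taken from any shipped game.

# A tiny FSM for a hypothetical guard NPC: each state has a simple transition rule.
class GuardFSM:
    def __init__(self):
        self.state = "patrol"

    def update(self, dist_to_player, health):
        if self.state == "patrol" and dist_to_player < 10:
            self.state = "chase"           # player spotted
        elif self.state == "chase":
            if dist_to_player < 2:
                self.state = "attack"      # close enough to strike
            elif dist_to_player > 15:
                self.state = "patrol"      # lost sight of the player
        elif self.state == "attack" and dist_to_player >= 2:
            self.state = "chase"
        if health < 20:
            self.state = "flee"            # low health overrides everything else
        return self.state

# guard = GuardFSM(); guard.update(dist_to_player=8, health=100)  -> "chase"

The inflexibility discussed later in this paper is visible even here: any situation not anticipated in the transition rules leaves the character stuck in an inappropriate state.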
Neural Networks
Neural networks are used for evolving the game AI as the player progresses through the game. A neural network maps
parameters from an input space to an output space. The mapping is highly nonlinear, and the
structure of the neural network also affects this mapping. The most prominent feature of neural networks is that
they will continually evolve to fit the player, so even if the player changes his techniques, the network will soon
adapt to them. Using a neural network may allow game developers to avoid writing complicated
state machines or rules-based system code by delegating key decision-making processes to one or
more trained neural networks, and neural networks offer the potential for the game's AI to adapt while
the game is being played. The most important drawback of programming with neural networks is that, for a given
problem, there are no formal definitions, so producing a network that perfectly suits your needs may take
a lot of trial and error. [10] [13] [20].
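The sketch below shows the kind of nonlinear input-to-output mapping described above: a tiny fixed-weight feed-forward network scoring two possible NPC actions from a handful of game-state features. The features, layer sizes and weights are all hypothetical; in practice the weights would be learned or evolved rather than drawn at random.

import numpy as np

def npc_policy(features, w1, b1, w2, b2):
    """Map game-state features (e.g. distance to player, own health, ammo)
    to a score per action; the highest score wins."""
    hidden = np.tanh(features @ w1 + b1)      # nonlinear hidden layer
    return hidden @ w2 + b2                   # one score per action

rng = np.random.default_rng(0)
w1, b1 = rng.normal(size=(3, 5)), np.zeros(5)     # 3 features -> 5 hidden units
w2, b2 = rng.normal(size=(5, 2)), np.zeros(2)     # 5 hidden units -> 2 actions
features = np.array([0.8, 0.5, 0.1])              # hypothetical normalised inputs
action = int(np.argmax(npc_policy(features, w1, b1, w2, b2)))  # 0 = attack, 1 = retreat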
Genetic Algorithms
Genetic algorithms provide a way to solve problems that are troublesome for traditional game AI techniques.
We can use a genetic algorithm to find the best combination of structures to beat the player. The player
would play through a small level and, at the end, the program would select the monsters that fared the best against
the player and use those in the next generation. Slowly, after a lot of play, some reasonable
characteristics would evolve. Genetic algorithms (GAs) are one of a group of stochastic
search techniques. These techniques attempt to solve problems by searching the solution space using some form
of guided randomness; simulated annealing is another technique used for this purpose. Better solutions can be
provided by larger populations and more generations. One potential way of applying GAs is to do all of the genetic
algorithm work in-house and then ship an AI already tuned by a GA. Alternatively, this can be achieved to a certain
extent by having a GA engine work on the user's system while the game is not being played [3]
[20] [21].
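A minimal sketch of the monster-evolution idea described above is given below. The genome (a vector of monster attributes) and the fitness function (how well the monster fared against the player, replaced here by a placeholder) are assumptions made purely for illustration.

import random

def fitness(genome):
    # Placeholder: in a real game this would measure how well the monster
    # fared against the player (e.g. damage dealt before being defeated).
    return sum(genome)

def evolve(pop_size=20, genes=5, generations=50, mutation_rate=0.1):
    pop = [[random.random() for _ in range(genes)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]                 # keep the best half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, genes)          # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mutation_rate:       # occasional mutation
                child[random.randrange(genes)] = random.random()
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best_monster = evolve()

Running the evolution offline ("in-house") and shipping only the tuned attributes corresponds to the first option mentioned above; running it on the player's machine between sessions corresponds to the second.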
IV.
Phase 3
Need for a learning ability
Learning AI would permit the game to surprise the player and maintain the suspension of disbelief, as long as the
systems remain invisible. Several games companies are currently looking at the possibility of creating games
that match the player's ability by modifying tactics and strategy, rather than by raising the power of
opponents. There are few games on the market at present that can uncover a player's techniques
and adapt to them. Even on the toughest difficulty settings of most games, players tend to develop a routine
which, if they find it successful, they will continue to use so that they win more often than
not. What would make it fascinating at this point is if the AI could work out their favourite spots, or
uncover their winning techniques and adapt to them. This is a very important feature, because it would
prolong game life significantly. Central to the process of learning is the adaptation of behavior in order to
improve performance. Fundamentally, this can be achieved either directly, by changing the behavior or by testing
modifications, or indirectly, by making alterations to certain aspects of behavior based on observations. Here are
some of the issues that modern-day learning AI commonly encounters when being constructed:
• Mimicking Stupidity
• Overfitting
• Local Optimality
• Fixed Behavior [6] [17] [20]
Listed below are a few of the learning techniques. We must keep in mind, however, that not all problems can be solved by
these learning techniques.
Q-learning
Q-learning is a type of reinforcement learning technique that works on the principle of learning an action-value function, giving the expected utility of performing a specified action in a given state and
following a fixed policy thereafter. One of the main advantages of Q-learning is that it is able to judge
the expected utility of the available actions without requiring a model of the environment. In Q-learning, tables are used to
store this information, which very quickly loses viability as the complexity of the system being monitored or
controlled increases; with increasing complexity the technique loses feasibility for the systems on which it is
implemented. Therefore tabular Q-learning is not a widely used implementation for machine learning in games. One answer to this problem is the use of an
adapted artificial neural network to perform the function approximation [18].
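The tabular update at the heart of Q-learning is the standard rule Q(s,a) <- Q(s,a) + alpha*[r + gamma*max_a' Q(s',a') - Q(s,a)]; the sketch below applies it to a hypothetical, already-discretised game state with two actions, which is exactly the table-based form whose memory cost grows with the complexity of the state space.

from collections import defaultdict
import random

Q = defaultdict(lambda: [0.0, 0.0])     # state -> estimated value per action (2 actions here)
alpha, gamma, epsilon = 0.1, 0.9, 0.1   # learning rate, discount factor, exploration rate

def choose_action(state):
    # Epsilon-greedy: mostly exploit the current estimates, sometimes explore.
    if random.random() < epsilon:
        return random.randrange(2)
    return max(range(2), key=lambda a: Q[state][a])

def learn(state, action, reward, next_state):
    # One-step Q-learning update towards the bootstrapped target.
    target = reward + gamma * max(Q[next_state])
    Q[state][action] += alpha * (target - Q[state][action])

Replacing the table Q with a small neural network that approximates Q(s,a) is the function-approximation remedy mentioned above.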
Bayesian learning
Bayesian learning is a technique through which a player, through observation, tries to predict the future moves of its
opponent. Consider the long-run behaviors of players in evolutionary coordination games. In any
period of time, signals corresponding to the players' underlying actions, rather than the actions themselves, are
monitored and recorded.
The boundedly rational quasi-Bayesian learning method is then used to analyze the data extracted from the
observed signals. It turns out that players' long-run behaviors depend not only on the correlations between actions
and signals, but also on the initial probabilities of the risk-dominant and non-risk-dominant equilibria being
chosen. There are conditions under which the risk-dominant equilibrium, the non-risk-dominant equilibrium,
or the coexistence of both equilibria emerges in the long run. In some situations, the number of
limiting distributions grows unboundedly as the population size grows. Bayesian learning is so far
the most used learning technique and has been found very popular with AI developers. [22].
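In its simplest game-oriented form, Bayesian learning of an opponent amounts to maintaining a posterior over their next move and updating it after each observation; the sketch below uses simple counts with a uniform prior over a hypothetical set of moves, which is an illustration only and not the quasi-Bayesian model analysed in [22].

class OpponentModel:
    """Predict the opponent's next move from observed frequencies (uniform prior)."""
    def __init__(self, moves=("rush", "turtle", "flank")):
        self.counts = {m: 1 for m in moves}   # prior pseudo-count of 1 per move

    def observe(self, move):
        self.counts[move] += 1                # update the counts after each observed move

    def predict(self):
        total = sum(self.counts.values())
        return {m: c / total for m, c in self.counts.items()}

model = OpponentModel()
model.observe("rush"); model.observe("rush")
print(model.predict())   # "rush" now carries the highest posterior probability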
Other learning techniques
Apart from the often used Bayesian and Q-learning techniques, there are statistical and iterative learning
techniques which might also be used. However, their implementation in games is still some way off, as it has
not been completely researched. There is a place where AI, statistics and epistemology/methodology converge,
and this is where statistical learning comes into play.
Under this AI label we can build a machine that will discover and learn the regularities in a
data set, and then use them to eventually improve the AI and the game. Based on the outcome of each
action, that action may be selected or avoided in the future [5][9].
The role of AI in different game genres
Before embarking on a discussion of the various game genres on the market today, an apparent contradiction
must be clarified. A large, and continually growing, body of research into computer implementations of classic
games, like chess, Go and Othello, already exists. When we talk about computer games here, we are not concerned
with games like these. Rather, we refer to what may be more familiarly termed video games – games created specifically to be played on computers. Further, little of the research into classic games can be
applied to the games considered by this report. The major reason for this is that the number of degrees of
freedom in modern video games is far beyond that of classic games. What follows is a description
of some of the more important genres of computer games on the market today, along with pointers to some of the
interesting roles for the application of AI in those games. This discussion loosely follows a similar
discussion given elsewhere in the literature.
Action Games
Action games are the most popular game genre on the market today. The premise of the games can vary
from subduing an alien horde single-handedly with just your trusty pistol,
to Mad Max style, post-apocalyptic vehicle-based mayhem. The gameplay, however, remains much the
same – high adrenaline action where the aim of the game is to shoot everything that moves. Today's typical
action game takes place in a fully rendered 3D environment viewed from a first person
perspective, and is inhabited by innumerable kinds of enemies upon which to vent your spleen through a vast
arsenal of exotic weapons.
It is in creating more intelligent opponents that the most obvious possibilities for the incorporation of
novel AI arise. At present, the tendency is to use script-based finite state machines (FSMs) to
determine the behavior of the player's adversaries. Though this has been achieved to very good effect (1999's
Game of the Year, Half-Life (www.valve.com), amazed game players with squad-based tactics
and enemies with remarkably authentic sensing models), finite state machines are by their nature very
inflexible and perform terribly when confronted by situations not even considered by their designer.
Many games have also made impressive use of teammates and helper characters that serve the
player throughout the game. Building upon this concept, some recent games have cast the player as a member
of a force or team [27]. Significant examples include Tom Clancy's Rainbow Six: Rogue Spear
(www.redstorm.com/rogue_spear) and Star Trek Voyager: Elite Force (www.ravensoft.com/eliteforce). There is
a real opportunity for the further application of sophisticated AI within this space.
Adventure Games
Visually, the adventure game has changed dramatically since "Adventure" was designed by Will Crowther and
Don Woods in the early seventies. The premise of the genre, however, has remained much the same.
Gameplay involves the player roaming a restricted venue, solving puzzles and interacting with characters
in an effort to further a story line.
While the initial examples of this genre were text based (commands given by the player typing simple text
commands – "eat the peach", "enter building", "open door" etc.), nowadays they are
graphically gorgeous and input is given in a variety of novel ways – the most common
being the use of the mouse to direct the player's character (from which came the name "point and click
adventure"). Classic examples of this genre include the Monkey Island (www.lucasarts.com) and the Gabriel
Knight (www.sierrastudios.com) series. The most significant applications of artificial intelligence in this
genre are the creation of more realistic and engaging NPCs, along with the maintenance of consistency
in dynamic storylines.
Role Playing Games
Often seen as an extension of the adventure game style, role playing games (RPGs) stem
from the popular Dungeons & Dragons (www.playdnd.com) paper-based games that originated
in the 1970s. Over the past twenty years the computer versions of these games have metamorphosed from being
mostly text based to the beautifully rendered, vastly involved games available today. Baldur's
Gate (www.interplay.com/bgate) was a turning point for the category. The amount of detail in the
Baldur's Gate world involves complexity far beyond anything seen before, with completion of the
game involving over one hundred hours of gameplay. RPGs see the player taking on the role of an
adventurer in an exotic, mythical world, where gameplay consists of questing across the land,
partaking in a mixture of puzzle solving and combat. Interactions with NPCs and
a convoluted plot are important within the genre. The differences between RPGs and adventure games arise
from the scope involved. RPGs take place in far larger worlds and the player has far more freedom to
explore the environment at their own pace. Along with this, underlying RPGs is a rule set stemming from the
original, and quite complicated, Dungeons & Dragons rules. The RPG format offers the same kind of
challenges to the AI developer as the adventure game. However, further complication is introduced owing to
the amount of freedom given to the player. Preserving story consistency is a much bigger issue, and the
level of sophistication required in an RPG's NPCs is beyond that required in adventure
games.
Strategy Games
Strategy games cast the player in charge of a variety of military units, controlled from a "god's-eye view",
which must be sent into battle against one or more rivals. Usually resources (such as gold, wood and
stone) must be harvested in order to create units or construct buildings. This management of the
production of units is the key to strategy gameplay, as different units perform to varying degrees
against one another, and come at varying costs. More recently, diplomacy has also featured
strongly in strategy gameplay. Nowadays the strategy games in the industry are an excellent mix of
mythical and fantasy conflicts and recreations of historical battles. The strategy genre has spawned two
different game categories. Turn-based strategy (TBS) games involve each player taking their
turn to move units, order production, mount attacks and so on, one after another. One example of this type of
game is the Civilization series. Real-time strategy (RTS) games, as the title suggests, take place in real time,
with players moving units, ordering production and so on simultaneously. The Command & Conquer series and Age of Empires, along
with Total Annihilation, stand as significant examples of this genre. A further sub-genre spawned by these
types of games is the God game. These cast the player in the role of a protective god. The main thing
distinguishing God games from strategy games is the manner in which the player can act on
the environment. The player has the ability to manipulate the environment – for example to raise or flatten
mountains to make the terrain more hospitable, or to unleash a hurricane or earthquake – and units are
controlled less directly than in strategy games. Classic examples of this genre include SimCity
(www.simcity.com), the Populous series (www.populous.net) and the recently released Black and White
(www.lionhead.co.uk). AI in strategy games must be applied both at the level of strategic opponents and at
the level of individual units. AI at the strategic level involves the creation of computer opponents capable of
mounting ordered, cohesive, tactically sound and innovative campaigns against the human player. This is really very
difficult, as players quickly identify any rigid strategies and learn to exploit them. At the unit level, AI is needed in order to
allow a player's units to carry out the player's instructions accurately. Problems at unit level include
correct path finding and allowing units a degree of autonomy so that they are able to behave sensibly without the
player's direct control.
V.
Conclusion and Future Work
Looking further into the future, the general perception is that AI will be focused not only on
optimizing an NPC's behavior, but also on the player's fun and experience in general. This reaches far
beyond the guidance of single NPCs, into learning what fun is for the player and shaping or changing the game's
experience accordingly – for example, the creation of whole cities and civilizations in a believable
way. We may also have deep NPC characters, storytelling with dynamic tension, and emotional planning
for the player. It looks to be a tremendous prospect for AI in games. However, one cannot restrict game
AI growth only to these fields and close the window to new directions where it may be applied. Games
feature many great technology directions, and AI is only one of them. But the advances in AI are those that
will fundamentally change the way games are designed. Learning AI and interactive storytelling
AI are two of the most promising areas where AI growth in the future will lead to a new
generation of games [8] [11].
References
[1] Nareyek, A.; "AI in Computer Games", Volume 1, Issue 10, Game Development, ACM, New York, February 2004.
[2] Wexler, J.; "Artificial Intelligence in Games: A look at the smarts behind Lionhead Studio's 'Black and White' and where it can and will go in the future", University of Rochester, Rochester, NY, 2002.
[3] Hussain, T.S.; Vidaver, G.; "Flexible and Purposeful NPC Behaviors using Real-Time Genetic Control", Evolutionary Computation, CEC 2006, 785-792, 2006.
[4] Hui, Y.C.; Prakash, E.C.; Chaudhari, N.S.; "Game AI: Artificial Intelligence for 3D Path Finding", TENCON 2004, 2004 IEEE Region 10 Conference, Volume B, 306-309, 2004.
[5] Nareyek, A.; "Game AI is dead. Long live Game AI", IEEE Intelligent Systems, vol. 22, no. 1, 9-11, 2007.
[6] Ponsen, M.; Spronck, P.; "Improving adaptive game AI with evolutionary learning", Faculty of Media & Knowledge Engineering, Delft University of Technology, 2005.
[7] Cass, S.; "Mind Games", Spectrum, IEEE, Volume 39, Issue 12, 40-44, 2002.
[8] Hendler, J.; "Introducing the Future of AI", Intelligent Systems, IEEE, Volume 21, Issue 3, 2-4, 2006.
[9] Hong, J.H.; Cho, S.B.; "Evolution of emergent behaviors for shooting game characters in Robocode", Evolutionary Computation, CEC 2004, Volume 1, 634-638, 2004.
[10] Weaver, L.; Bossomaier, T.; "Evolution of Neural Networks to Play the Game of Dots-and-Boxes", Alife V: Poster Presentations, 43-50, 1996.
[11] Handcircus. (2006). The Future of Game AI @ Imperial College. Retrieved 2008, from http://www.handcircus.com/2006/10/06/the-future-of-game-ai-imperial-college
[12] Wikipedia. (2008). Game artificial intelligence. Retrieved September 29, 2007, from http://en.wikipedia.org/wiki/Game_artificial_intelligence
[13] Wikipedia. (2008). Artificial Neural Networks. Retrieved September 29, 2007, from http://en.wikipedia.org/wiki/Artificial_neural_network
[14] AI Game Dev. (2008). Research Opportunities in Game AI. Retrieved January 2, 2008, from http://aigamedev.com/questions/research-opportunities
[15] Game Developers Conference. (2008). Proceedings of GDC 2008. Retrieved November 9, 2007, from http://www.gdconf.com/conference/proceedings.htm
[16] CG Games 2008. (2008). 12th International Conference on Computer Games: AI, Animation, Mobile, Interactive Multimedia & Serious Games. Retrieved November 9, 2007, from http://www.cgamesusa.com/
[17] AI Depot. (2008). Machine Learning in Games Development. Retrieved 2008, from http://ai-depot.com/GameAI/Learning.html
[18] Wikipedia. (2008). Q-Learning. Retrieved 2008, from http://en.wikipedia.org/wiki/Q-learning
[19] 2002 Game Developers Conference AI. (2002). AI Conference Moderators Report. Retrieved October 2, 2007, from http://www.gameai.com/cgdc02notes.html
[20] Seemann, G.; Bourg, D.M.; "AI for Game Developers", O'Reilly, 2004. Retrieved December 10, 2007, from http://books.google.com.sg
[21] Gignews. (2008). Using Genetic Algorithms for Game AI. Retrieved March 19, 2008, from http://www.gignews.com/gregjames1.htm
[22] Chen, H.S.; Chow, Y.; Hsieh, J.; "Boundedly rational quasi-Bayesian learning in coordination games with imperfect monitoring", J. Appl. Probab., Volume 43, Number 2, 335-350, 2006.
Acknowledgements
We would like to thank Ms. Hema Kashyap, Mr. Vijai Singh and Mr. Pankaj Agarwal for guiding and motivating us throughout the
complete project in every possible way. We would also like to thank them for providing timely help and advice on the
various issues that arose from time to time.
Solving Travelling Salesman problem by Using Ant Colony Optimization
Algorithm
Priyansha Mishra1, Shraddha Srivastava2
Student of Computer Science & Engineering,
Department of Computer Science and Engineering,
IMS Engineering College, Ghaziabad, 201009, Uttar Pradesh, INDIA
___________________________________________________________________________________
Abstract: This paper reviews the various research papers proposed for solving the TSP through the ACO algorithm.
The TSP is the travelling salesman problem, in which a salesman starts from a single source,
traverses all the cities exactly once and returns to the source. The objective of the travelling salesman is to
cover the tour in the minimum distance. This optimization problem can be solved by the Ant Colony Optimization
(ACO) algorithm, as the artificial agents (ants) find the minimum path between the source (nest) and the
destination (food). The ACO algorithm is a heuristic approach that has been combined with several other improvements
over the past years, and this paper reviews that research.
Keywords: NP-complete; ant colony optimization; travelling salesman problem; pheromone
______________________________________________________________________________________
I.
INTRODUCTION
Among the most important requirements of the present era is the optimization of the utilization of
resources, and a great deal of work has been devoted to this matter. The optimization algorithms
developed so far include Ant Colony Optimization (ACO), the Genetic Algorithm (GA), Differential Evolution (DE),
and Artificial Bee Colony Optimization (ABC). The Ant Colony Optimization algorithm [1] was first proposed by
M. Dorigo in 1991 and was designed to mimic the foraging behavior of real ant colonies. ACO has been used to solve
various combinatorial problems such as the Travelling Salesman Problem, the Job Scheduling Problem, Network Routing,
the Vehicle Routing Problem, etc. Various strategies proposed to speed up the algorithm and improve the quality of the final
output include merging with local search, partitioning the artificial ants into common ants and scout ants [2], and
new pheromone updating rules [4]. The behaviour of a real ant colony is illustrated below in Figures 1 (a), (b), (c) and (d).
Figure 1 (a): Real ants travelling with equal probability of the paths from their nest to food source. [2]
Figure 1 (b): Obstacle in between the path of the food source and their nest.[2]
Figure 1 (c): Pheromones deposited more quickly on the shorter path. [2]
Figure 1 (d): After sufficient time has elapsed, the real ants choose the shorter path. [2]
II.
Literature Review
Gao Shang, Zhang Lei, Zhuang Fengting and Zhang Chunxian (2007) used a new method to solve the TSP
which is more efficient than earlier methods. They assisted ant colony optimization
with association rules, which gave a better outcome. They integrated AR with ACO and tested the new
algorithm on different test cases, which resulted in better solution quality than the earlier ACO without
association rules [5]. Fanggeng Zhao, Jinyan Dong, Sujian Li and Jiangsheng Sun (2008) proposed a new and
improved ant colony optimization algorithm with an embedded genetic algorithm to solve the travelling salesman
problem. In this new algorithm, they employed a greedy approach for solution construction and introduced an
improved crossover operator for the embedded genetic algorithm. Various test cases showed
that this algorithm was able to find better solutions in fewer iterations than previously proposed ACO algorithms [6].
Weimin Liu, Sujian Li, Fanggeng Zhao and Aiyun Zheng (2009) proposed an algorithm to find a solution to the
multiple travelling salesmen problem (MTSP). The MTSP is a generalized form of the famous travelling salesman
problem (TSP). They proposed an ant colony optimization (ACO) algorithm for the MTSP to minimize the
maximum tour length over all the salesmen, and compared the results of the algorithm with a genetic algorithm (GA)
on some benchmark instances from the literature. Computational results showed that their algorithm performed
extremely well [7]. ZHU Ju-fang and LI Qing-yuan (2009) studied the basic principle and realization of the ant
colony algorithm. Their algorithm was realized under the Visual C++ compiler environment and used
to solve the travelling salesman problem. The simulated example shows the ant colony algorithm's good
optimization ability. In addition, they also found that the path length quickly drops to the shortest in the
first twenty steps, which shows that the ant algorithm has the disadvantage of stagnation; how to solve this problem
is worth studying in the future [8]. Ramlakhan Singh Jadon and Unmukh Datta (2013) proposed an efficient,
modified ant colony optimization (ACO) algorithm with uniform mutation using a self-adaptive approach for the
travelling salesman problem (TSP). This algorithm finds the final optimum solution by combining the most
effective sub-solutions. Experimental results showed that this algorithm gives better results than the existing
ant colony optimization algorithm. For future research, one can apply the proposed algorithm to other
combinatorial problems and check its efficiency by comparing this method with other proposed methods [2].
Shigang Cui and Shaolong Han (2013) pointed out the basic principle, model, advantages and disadvantages of
the ant colony algorithm and the TSP in their paper. They tried to remove the slow convergence and easy
stagnation problems of earlier ant colony optimization algorithms. Parallel computation and a positive
feedback mechanism were adopted in their ACO. They found that ACO has many advantages, such as fast
convergence speed, high precision of convergence and robustness, but that these exist only when the number of ants is small. They
concluded that further research is needed to combine ACO with other algorithms to improve its performance [3].
Overall, ACO is a meta-heuristic algorithm inspired by the foraging behavior of real ant colonies that has been
widely used for different combinatorial optimization problems; it has strong robustness and is easy to combine with
other optimization methods. In the modified algorithm with uniform mutation [2], the mutation operator is used to
enhance the algorithm's ability to escape from local optima, and the algorithm converges to the final optimal solution
by accumulating the most effective sub-solutions.
III.
Advantages and Disadvantages of the Ant Colony System [3]
Advantages
1. It does not rely solely on mathematical expressions.
2. It has the capability of finding the global optimal solution.
3. It is strongly robust, universal and global, applies methods of parallel computing and is easier to combine with other enhanced algorithms.
Disadvantages
1. The local search technique of ACO is weak and it has a slower convergence rate.
2. It requires a long computation time, and the stagnation phenomenon occurs easily.
3. It cannot solve continuous combinatorial problems.
IV.
Description of the Travelling Salesman Problem
The Travelling Salesman Problem was suggested by Sir William Rowan Hamilton and the British mathematician
Thomas Penyngton Kirkman in the 19th century. It consists of a salesman whose objective is to visit n cities exactly
once and return to the starting city in the minimum possible distance. The distances between the cities are given.
In graph-theoretic terms the travelling salesman problem is represented by G(V,E), where V is the set of cities
and E is the set of paths between those cities; d(i,j) is the distance between the ith and jth city. [3]
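As a concrete illustration of how the ACO reviewed here is typically applied to this graph formulation, the following is a minimal sketch of the basic loop: each ant builds a tour using the usual transition rule p(i,j) proportional to tau(i,j)^alpha * (1/d(i,j))^beta, after which the pheromone trails evaporate and the best tour found so far is reinforced. The parameter values are illustrative defaults, not those used in the reviewed papers.

import random

def aco_tsp(d, n_ants=20, n_iter=100, alpha=1.0, beta=3.0, rho=0.5, q=1.0):
    """d is an n x n symmetric distance matrix; returns (best_tour, best_length)."""
    n = len(d)
    tau = [[1.0] * n for _ in range(n)]                 # initial pheromone on every edge
    best_tour, best_len = None, float("inf")
    for _ in range(n_iter):
        for _ in range(n_ants):
            tour = [random.randrange(n)]
            unvisited = set(range(n)) - {tour[0]}
            while unvisited:
                i = tour[-1]
                cand = list(unvisited)
                weights = [tau[i][j] ** alpha * (1.0 / d[i][j]) ** beta for j in cand]
                j = random.choices(cand, weights=weights)[0]   # probabilistic city choice
                tour.append(j)
                unvisited.remove(j)
            length = sum(d[tour[k]][tour[(k + 1) % n]] for k in range(n))
            if length < best_len:
                best_tour, best_len = tour, length
        # Evaporation followed by pheromone deposit along the best tour found so far.
        tau = [[(1 - rho) * t for t in row] for row in tau]
        for k in range(n):
            i, j = best_tour[k], best_tour[(k + 1) % n]
            tau[i][j] += q / best_len
            tau[j][i] += q / best_len
    return best_tour, best_len

The improvements reviewed above (local search, embedded genetic operators, uniform mutation) would be inserted either into the tour-construction step or between the evaporation and deposit steps.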
Figure 2: Initialised cities of eil51 [4]
Figure 3: Solution for eil51 [4]
V.
Comparative Study of Research Papers
Research paper | Year | Cities  | Result (avg case)  | Special feature
[4]            | 2007 | 51      | -                  | Evolved values after 20 runs
[5]            | 2007 | 30      | 431.49             | Program iterations = 1000
[6]            | 2008 | Gen (n) | 426.2 (51 cities)  | -
[7]            | 2009 | Gen (n) | 478 (51 cities)    | Use of multiple salesmen (3, 5, 10, 20)
[8]            | 2009 | 51      | 481                | C++ program implementation
[2]            | 2013 | 51      | 425.23             | Use of mutation by self-adaptive approach
[3]            | 2013 | -       | -                  | Matlab implementation
VI.
Conclusion and Future Work
This paper is a literature review of the various solutions available for the Travelling Salesman Problem using Ant Colony
Optimization. The proposed approaches of the memetic algorithm, the self-adaptive approach, parallel computing with a
positive feedback mechanism, association rules, etc. all work in the direction of improving the slow
convergence rate of the ant colony algorithm. It is observed that, with respect to 51 cities, the best proposed
algorithm is the ACO with an embedded genetic algorithm. Future work on ACO is focused on removing the
stagnation problem and achieving better convergence rates on various combinatorial problems.
References
[1] A. Colorni, M. Dorigo, V. Maniezzo, "Distributed optimization by ant colonies", Proceedings of the European Conference on Artificial Life, Paris, France, pp. 134-142, 1991.
[2] Ramlakhan Singh Jadon, Unmukh Datta, "Modified Ant Colony Algorithm with Uniform Mutation using Self-Adaptive Approach for Travelling Salesman Problem", 4th ICCCNT, IEEE, 2013.
[3] Shigang Cui, Shaolong Han, "Ant Colony Algorithm and its Application in Solving the Travelling Salesman Problem", Third International Conference on Instrumentation, Measurement, Computer, Communication and Control, 2013.
[4] Haibin Duan, Xiufen Yu, "Hybrid Ant Colony Optimization Algorithm using Memetic Algorithm for the Travelling Salesman Problem", IEEE, 2007.
[5] Gao Shang, Zhang Lei, Zhuang Fengting, Zhang Chunxian, "Solving Traveling Salesman Problem by Ant Colony Optimization Algorithm with Association Rule", ICNC, 2007.
[6] Fanggeng Zhao, Jinyan Dong, Sujian Li and Jiangsheng Sun, "An improved ant colony optimization algorithm with embedded genetic algorithm for the traveling salesman problem", Proceedings of the 7th World Congress on Intelligent Control and Automation, 2008.
[7] Weimin Liu, Sujian Li, Fanggeng Zhao, Aiyun Zheng, "An Ant Colony Optimization Algorithm for the Multiple Travelling Salesmen Problem", ICIEA, 2009.
[8] ZHU Ju-fang, LI Qing-yuan, "Solving the Travelling Salesman Problem by the Program of Ant Colony Algorithm", IEEE, 2009.
The Future of Augmented Reality in Our Daily Lives
Nipun Gupta1, Utkarsh Rawat2
Student of Computer Science & Engineering,
Department of Computer Science and Engineering,
IMS Engineering College, Ghaziabad, 201009, Uttar Pradesh, INDIA
__________________________________________________________________________________________
Abstract: In recent years, the use of augmented reality has become one of the important aspects of our
technological lives. It has found its applications in various fields such as military, industrial, medical,
commercial and entertainment areas. It depicts an advanced level of technological development and assistance
in the favour of human being’s life. Although invented forty years ago, it can now be used by almost anyone.
Augmented Reality holds significant potential for use and a potential market, considering the current scenario.
Most people who experience Augmented Reality today do so through a location based application, gaming,
the internet, or on a mobile phone. In this paper we briefly describe current AR experiences available to the general
public, and then discuss how current research might transfer into everyday AR experiences in the future.
Keywords: Augmented Reality; Archaeology; Pervasive computing; Spatial Augmented Reality; Head up
Displays
________________________________________________________________________________________
I.
INTRODUCTION
Augmented Reality (AR) is a live direct or indirect view of a physical, real-world environment whose elements
are augmented (or supplemented) by computer-generated sensory input such as sound, video, graphics
or GPS data. Under the concept of Augmented Reality, a view of reality is modified by a computer system.
By contrast, virtual reality replaces the real world with a simulated one. Augmentation happens in real time and
in context with environmental elements, such as scores on a TV screen during a match. With the help of advanced
Augmented Reality technology the information about the surrounding real world of the user
becomes interactive and can be digitally manipulated. Augmented Reality is an area of research that aims to
enhance the real world by overlaying computer-generated data on top of it. Azuma [1] identifies three key
characteristics of AR systems:
(1) Blending virtually generated images with the real world
(2) Three-dimensional registration of digital data
(3) Interactivity in real time
3-D Registration refers to correct configuration of real and virtual objects. Without accurate registration, the
illusion that the virtual objects exist in the real environment is severely compromised. Registration is a difficult
problem and a topic of continuing research. In addition, research is still going on in university and industry
laboratories to explore the future of Augmented Reality. It can change our lives in various manners like, urban
exploration, museum, shopping, travel and history, customer service, safety and rescue operations and moving
and decorating your home.
In the near future, AR means that the moment we lay our eyes on something, or indeed on someone, we will
instantly gain access to huge amounts of information on them. This facility will be used to gather information
about our surroundings. Some applications already used for military purposes could soon be integrated into our
everyday lives. Video games, medicine, education, socialization and marketing are just some of the areas which
have already begun to be influenced by AR.
For example, AR applications have already been successfully used in surgical operations, and educational
materials used in museums and schools have been enriched with vivid 3D recreations using these technologies.
AR will also be increasingly used for marketing purposes. We could walk in front of a retail outlet and see
ourselves in a jeans advert, to check out how good we look in them. This form of street marketing is already
within reach, allowing a mixing of contextual information related to both the client and the product. There are
already applications used to fit glasses using virtual try-on and product visualization.
II.
LITERATURE REVIEW
Augmented Reality is a newer notion in computer vision where virtual objects are superimposed on a
frame captured from a camera in order to give the real camera frame a synthetic look. Augmented Reality
depends upon drawing objects on a camera frame without compromising the frame rate. Due to the memory
and processing requirements, such techniques are mainly limited to single-system processing, as camera
frames arrive at 30 fps and can be compromised down to 15 fps. [2]
This paper surveys the field of Augmented Reality, in which 3-D virtual objects are integrated into a 3-D real
environment in real time. Registration and sensing errors are two of the biggest problems in building effective
Augmented Reality systems, so this paper summarizes current efforts to overcome these problems. [3]
Digital technology innovations have led to significant changes in everyday life, made possible by the
widespread use of computers and continuous developments in information technology (IT). Based on the
utilization of systems applying 3D technology, as well as virtual and augmented reality techniques, IT has
become the basis for a new fashion industry model, featuring consumer-centred service and production methods.
[4]
Pervasive computing is beginning to offer the potential to re-think and re-define how technology can support
human memory augmentation. For example, the emergence of widespread pervasive sensing, personal recording
technologies and systems for the quantified self are creating an environment in which it is possible to capture fine-grained traces of many aspects of human activity. Contemporary psychology theories suggest that these traces
can then be used to manipulate our ability to recall. [5]
Buildings require regular maintenance, and augmented reality (AR) could advantageously be used to facilitate
the process. However, such AR systems would require accurate tracking to meet the needs of engineers, and
work accurately in entire buildings. Popular tracking systems based on visual features cannot easily be applied
in such situations, because of the limited number of visual features indoor, and of the high degree of similarity
between rooms. In this project, we propose a hybrid system combining low accuracy radio-based tracking, and
high accuracy tracking using depth images. Results show tracking accuracy that would be compatible with AR
applications. [6]
Augmented Reality (AR) is technology that allows virtual imagery to be overlaid on the real world. Although
invented forty years ago, it can now be used by almost anyone. We review the state of the art and describe how
AR could be part of everyday life in the future. [7]
III.
TECHNOLOGY USED [8]
1. Hardware: It includes the processor, display, sensors and input devices. In modern mobile computing devices these elements often include a camera and MEMS sensors such as an accelerometer, GPS, and solid state compass.
2. Display: It includes optical projection systems, monitors, hand-held devices, and display systems worn by one person.
3. Head-Mounted Display: A display device paired to a headset such as a helmet. HMDs place images of both the physical world and virtual objects over the user's field of view, and can provide users with immersive, mobile and collaborative AR experiences.
4. Eyeglasses: AR displays can work on devices resembling eyeglasses. Versions include eyewear that uses cameras to intercept the real world view and re-display its augmented view through the eyepieces, and devices in which the AR imagery is projected through the surfaces of the eyewear lens pieces.
5. Head-up Displays: HUDs augment only part of one's field of view; devices like Google Glass are intended for this kind of AR experience.
6. Contact Lenses: This technology is under development. These bionic contact lenses might contain the elements for display embedded into the lens, including integrated circuitry, LEDs and an antenna for wireless communication.
7. Virtual Retinal Display: A personal display device under development at the University of Washington's Human Interface Technology Laboratory. With this technology, a display is scanned directly onto the retina of a viewer's eye. The viewer sees what appears to be a conventional display floating in space in front of them.
8. Spatial Augmented Reality: It augments real world objects and scenes without the use of special displays. SAR makes use of digital projectors to display graphical information onto physical objects. Since the displays are not associated with each user, SAR scales naturally up to groups of users, allowing for collocated collaboration between users.
9. Tracking: Modern mobile augmented reality systems use one or more of the following tracking technologies: digital cameras and/or other optical sensors, accelerometers, GPS, gyroscopes, solid state compasses, RFID and wireless sensors.
10. Input Devices: Techniques include speech recognition systems that translate a user's spoken words into computer instructions, and gesture recognition systems that can interpret a user's body movements by visual detection or from sensors embedded in a peripheral device.
11. Computer: It analyses the sensed visual and other data to synthesize and position the augmentations.
IV.
SOFTWARE AND ALGORITHM [8]
The software must derive real world coordinates, independent of the camera images. The procedure
is called image registration, and it uses different techniques of computer vision, mostly related to video
tracking. Usually these methods consist of two stages; first, interest points are detected.
Stage 1 - Feature detection methods such as blob detection, corner detection, thresholding or edge
detection, and/or other image processing methods, are used.
Stage 2 - Here a real world coordinate system is restored from the information acquired in the first stage.
Mathematical approaches used in the second stage include projective geometry, geometric algebra, rotation
representation with the exponential map, particle filters, nonlinear optimization, and robust statistics.
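As an illustration of these two stages, the sketch below uses OpenCV to detect ORB interest points (stage 1) and then estimate a planar homography with RANSAC (stage 2) between a known marker image and a camera frame. The file names are placeholders, and a full AR pipeline would go on to derive the camera pose from this homography before rendering virtual content.

import cv2
import numpy as np

marker = cv2.imread("marker.png", cv2.IMREAD_GRAYSCALE)   # reference image (placeholder path)
frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)     # camera frame (placeholder path)

# Stage 1: interest point detection and description (ORB corners + binary descriptors).
orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(marker, None)
kp2, des2 = orb.detectAndCompute(frame, None)

# Match descriptors between the marker and the frame, keeping the strongest matches.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:100]

# Stage 2: recover the planar mapping (homography) with RANSAC for robustness to outliers.
src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
print("Estimated homography:\n", H)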
V.
APPLICATIONS IN OUR LIVES [8]
Archaeology
AR can be used to overlay archaeological features onto the modern landscape, enabling archaeologists to formulate
conclusions about site placement and configuration. Another application of AR in this field is the
possibility for users to rebuild ruins, buildings, or even landscapes as they formerly existed.
Architecture
AR can aid in visualizing building projects. Computer-generated images of a structure can be superimposed into
a real life local view of a property before the physical building is constructed there. AR can also be employed
within an architect's work space, rendering into their view animated 3D visualizations of their 2D drawings.
Architecture sight-seeing can be enhanced with AR applications allowing users viewing a building's exterior to
virtually see through its walls, viewing its interior objects and layout.
Art
AR technology has helped disabled individuals create art by using eye tracking to translate a user's eye
movements into drawings on a screen.
Commerce
AR can enhance product previews such as allowing a customer to view what's inside a product's packaging
without opening it. AR can also be used as an aid in selecting products from a catalogue or through a kiosk.
Scanned images of products can activate views of additional content such as customization options and
additional images of the product in its use. AR is used to integrate print and video marketing.
Education
Augmented reality applications can complement a standard curriculum. Text, graphics, video and audio can be
superimposed into a student’s real time environment.
Emergency Management/Search and Rescue
Augmented reality systems are used in public safety situations - from super storms to suspects at large.
Everyday
Some applications already used for military purposes could soon be integrated into our everyday lives. Video
games, medicine, education, socialization and marketing are just some of the areas which have already begun to
be influenced by AR.
Gaming
Augmented reality allows gamers to experience digital game play in a real world environment.
Industrial Design
AR can help industrial designers experience a product's design and operation before completion.
Medical
Augmented Reality can provide the surgeon with information which is otherwise hidden, such as showing the
heartbeat rate, the blood pressure, the state of the patient's organs, etc. AR can be used to let a doctor look inside
a patient by combining one source of images such as an X-ray with another such as video.
Military
In combat, AR can serve as a networked communication system that renders useful battlefield data onto a
soldier's goggles in real time.
Navigation
AR can augment the effectiveness of navigation devices. Information can be displayed on an automobile's
windshield indicating destination directions and meter, weather, terrain, road conditions and traffic information
as well as alerts to potential hazards in their path.
Office Workplace
AR can help facilitate collaboration among distributed team members in a work force via conferences with real
and virtual participants. AR tasks can include brainstorming and discussion meetings utilizing common
visualization via touch screen tables, interactive digital whiteboards, shared design spaces, and distributed
control rooms.
Sports and Entertainment
AR has become common in sports telecasting. Sports and entertainment venues are provided with see-through
and overlay augmentation through tracked camera feeds for enhanced viewing by the audience.
Television
Weather visualizations were the first application of Augmented Reality to television. It has now become
common in weathercasting to display full motion video of images captured in real-time from multiple cameras
and other imaging devices.
Tourism and Sightseeing
Augmented reality applications can enhance a user's experience when traveling by providing real time
informational displays regarding a location and its features, including comments made by previous visitors of
the site. AR applications allow tourists to experience simulations of historical events, places and objects by
rendering them into their current view of a landscape.
VI.
AR IN OUR DAILY LIFE [9]
Today when it comes to experiencing Augmented Reality we do so with the help of a location based application,
gaming, the internet or quite possibly on a mobile phone. Augmented Reality has vast implications in our daily
lives today, mainly in the entertainment sector. In the near future, AR means that the moment we lay our eyes on
something or on someone, we will promptly gain access to huge amounts of information on them.
Location Based AR
Location based AR mainly relies on the Global Positioning System (GPS), which is a commonly available feature
on mobile phone platforms like Android. With the help of the phone's camera, the application senses the
device in its environment and determines its orientation. The latitude, longitude and altitude are
determined from GPS; by matching the coordinates of the current position, 3D virtual objects from a database can
be projected into the image and displayed on the mobile screen.
AR Gaming
Amongst the most widespread areas for stand-alone AR applications is in gaming. Modern gaming consoles
such as the X-Box and PlayStation have camera accessories that can be used to support Augmented Reality. In
2007, Sony released the ‘Eye of Judgment’ game, the first AR console game. This uses a camera to track special
playing cards and to overlay computer-generated characters on the cards that would combat with each other
when placed next to each other.
Web Based AR
It is now possible to provide AR experiences to anyone with a Flash-enabled web browser and a web camera.
With over 1.5 billion people online and hundreds of thousands of Flash developers, it is easy to develop web
based AR applications that people can use. Most of these AR applications have concentrated on marketing
and advertising. It is now possible to use both marker-based and markerless tracking to deliver a web-based AR
experience, and there are hundreds of websites that deliver this kind of experience.
Mobile AR
More than 450 million camera-enabled smart phones were sold in 2012, so mobile AR is a noteworthy
rising market. One of the most prevalent ways to experience AR on a mobile phone is through an AR browser.
This is software that lets a person connect to different information channels and use their location to overlay
computer-generated images on a live video view of the real world. The phone's GPS and compass sensors are
used to determine the user's position. Apart from AR browser software, there are more than a thousand other AR
applications available. Most use the phone's GPS and compass sensors, although image based tracking is
becoming more common as the developer tools become available.
AR Tracking
Current AR systems typically use computer vision methods to track from printed images, or GPS and
compass sensors for wide area outdoor tracking. Computer vision methods are accurate, but only work over short distances
while the tracked features are in view, whereas GPS and compass based tracking works over large distances but is
inaccurate. Using computer vision alone, markerless image based tracking is now becoming available in widely used AR
systems. These algorithms allow almost any visually distinct printed image to be used for AR tracking and to have
virtual content overlaid on it. The overall goal is to automatically and dynamically fuse data from
different types of tracking sensors and transparently provide it to AR applications.
AR Interaction
Most web based and mobile AR applications have very limited interaction. The user can do little more than
look at the content from different viewpoints or select it to find out more information about it. However,
researchers around the world are exploring a wide range of different interaction techniques for AR applications.
The intimate linking between real and virtual objects has led to the development of the Tangible AR interaction
metaphor, where the intuitiveness of physical input devices can be combined with the enhanced display
possibilities provided by virtual image overlays. Unlike most computer interfaces, AR applications do not just
involve virtual interface design, but also the design of the physical objects that are used to manipulate the virtual
content. These have been shown to be very intuitive, as gesture input is an easy way to directly interact with
virtual content. In the future this means that users will be able to manipulate virtual content as naturally as real
objects, and also do things they can’t do in the real world, such as use their voice to change object properties.
AR Displays
Computer and mobile phone displays are the most extensively used means for viewing AR experiences;
however, in the future there will be a variety of different display options that users will be able to choose from.
Sutherland's original AR interface used a head mounted display (HMD) to allow users to effortlessly view the
computer-generated content from any viewpoint. In the years since, HMDs have not delivered a satisfactory
viewing experience, or have not been inexpensive enough, or small enough to be socially acceptable, and so
have not been widely used outside the research laboratories.
However, companies are now making inexpensive HMDs that are similar to sunglasses. These can be connected
to computers or mobile devices and so can easily be used to view AR content. As HMDs continue to become less
expensive and more socially acceptable, they should become more widely used. In the future, HMDs and other types of AR
displays will be more common, providing a better AR experience. These types of AR displays can be constantly
worn and do not need to be handheld to see the virtual imagery, and so will be less burdensome.
VII.
ADVANTAGES
• Anyone can use it.
• When used in the medical field for training, it can save lives.
• It can be used to expose military personnel to realistic situations without exposing them to real-life danger.
• It can save millions of dollars by testing situations (like new buildings) to confirm their success.
• Knowledge and information increments are possible.
• Experiences are shared between people in real time.
• Video games provide an even more "real" experience.
• It is a form of escapism.
• The mobile user experience will be revolutionized by AR technology, as it was by gesture and touch in mobile phones.
VIII. DISADVANTAGES
• Spam and security.
• Social and real-time vs. solitary and cached.
• User experience: socially, using AR may be inappropriate in some situations.
• Content may obscure and/or narrow a user's interests or tastes.
• Privacy control will become a big issue. Walking up to a stranger or a group of people might reveal status, Tweets, and information that may cause breaches of privacy.
IX.
CONCLUSION
Augmented Reality technology is becoming more widespread, especially through web-based and mobile AR. In
this paper we have provided examples of existing AR applications, as well as discussed major areas of research
that will change how everyday people will experience Augmented Reality in the future. In addition to the three
areas of Tracking, Interaction and Display covered here, there are many other AR research topics that will
augment the AR experience and make it more accessible. It is clear from current technology trends that AR is
going to become progressively more widespread and easier to use. One day soon, Sutherland's vision of the ultimate
AR interface may finally be achieved.
REFERENCES
[1] Ronald T. Azuma, HRL Laboratories, LLC, "Recent Advances in Augmented Reality", Computers & Graphics, November 2001.
[2] Udaykumar N. Tippa and Sujata Terdal, "Augmented Reality Application for Live Transform Over Cloud", IJRIT International Journal of Research in Information Technology, Volume 3, Issue 1, January 2014, pp. 83-91.
[3] Ronald T. Azuma, HRL Laboratories, "A Survey of Augmented Reality", Presence: Teleoperators and Virtual Environments 6, 4 (August 1997), 355-385.
[4] Miri Kim and Kim Cheeyong, "Augmented Reality Fashion Apparel Simulation using a Magic Mirror", International Journal of Smart Home, Vol. 9, No. 2 (2015), pp. 169-178.
[5] Nigel Davies, Adrian Friday, "Security and Privacy Implications of Pervasive Memory Augmentation", IEEE Pervasive Computing 13, Jan-Mar 2015.
[6] Stéphane Côté, François Rheault, "A Building-Wide Indoor Tracking System for Augmented Reality".
[7] Mark Billinghurst, "The Future of Augmented Reality in Our Everyday Life", The HIT Lab NZ, The University of Canterbury.
[8] Wikipedia: http://en.wikipedia.org/wiki/Augmented_reality
[9] Mark Billinghurst, "The Future of Augmented Reality in Our Everyday Life", The HIT Lab NZ, The University of Canterbury, Ilam Road, Christchurch 8041, New Zealand.
ACKNOWLEDGEMENTS
We are grateful to our Department of Computer Science & Technology for their support and for providing us an opportunity to review such an interesting topic. While reading and searching about this topic we learnt various important and interesting facts.
International Association of Scientific Innovation and Research (IASIR)
(An Association Unifying the Sciences, Engineering, and Applied Research)
ISSN (Print): 2279-0063
ISSN (Online): 2279-0071
International Journal of Software and Web Sciences (IJSWS)
www.iasir.net
Digital Watermarking for Rightful Ownership and Copyright Protection
Pooja Kumari1, Vivek Kumat Giri2
Student of Computer Science & Engineering,
Department of Computer Science and Engineering,
IMS Engineering College, Ghaziabad, 201009, Uttar Pradesh, INDIA
_________________________________________________________________________________________
Abstract: Digital watermarking is the act of hiding a message related to a digital signal in different forms like
an image, song, video within the signal itself. Copyright protection and proof of ownership are two of the main
important applications of the digital image watermarking. The paper introduces the digital watermarking
technology i.e data hiding technique that embeds a message into a multimedia work such as an image or other
digital objects. This paper introduces the overview of digital watermarking .The paper mention a new paradigm
which is a 3d objects watermarking in digital watermarking. Digital watermarking is used to hide the
information inside a signal or image, which cannot be easily extracted by the third party. Its widely used
application is copyright protection of digital information. Several digital watermarking techniques are based on
discrete cosine transform (DCT), discrete wavelets transform (DWT) and discrete Fourier transforms (DFT).
The digital watermarking suffers from different types of attacks. The recovery from these attacks requires strong
detection techniques. The digital watermark agent provides a professional solution for these attacks. Digital
watermarking technology is a frontier research field and it serves an important role in information security.
Keywords: Digital watermarking, copyright protection, encryption, DCT, DWT, DFT, watermark embedding,
watermark detection, watermark extraction, Attacks, information security.
__________________________________________________________________________________________
I. Introduction
Digital watermarking is a process of embedding information (or a signature) directly into media data by making small modifications to it. By extracting the signature from the watermarked media data, it has been claimed that digital watermarks can be used to identify the rightful owner, the intended recipients, as well as the authenticity of the media data [1, 2]. Traditionally, if the owner wants to protect his/her image, the owner has to register the image with the copyright office by sending a copy to them. The copyright office archives the image, together with information about the rightful owner. When a dispute occurs, the real owner contacts the copyright office to obtain proof that he is the rightful owner. If he did not register the image, then he should at least be able to show the film negative. However, with the rapid acceptance of digital photography, there might never have been a negative. Theoretically, it is possible for the owner to use a watermark embedded in the image to prove that he/she owns it [4]. Watermarking is the practice of embedding identification information in an image, audio, video or other digital media element to provide privacy protection from attackers [1-3]. The identification information is called the "watermark pattern" and the original digital image that contains the watermark pattern is named the "marked image". The embedding takes place by manipulating the contents of the digital image [5]. Also, a secret key is used to embed the "watermark pattern" and to retrieve it as well. The easiest (simplest) way to watermark an image or video is to directly change the values of the pixels in the spatial domain. A more advanced way is to insert the watermark in the frequency domain, using one of the well-known transforms: FFT, DCT or DWT.
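As an illustration of the spatial-domain approach described above, the following is a minimal sketch, in Python with NumPy, of least-significant-bit (LSB) watermark embedding and extraction; the function and variable names are illustrative and not taken from any specific library or from the paper.

```python
import numpy as np

def embed_lsb(cover: np.ndarray, watermark: np.ndarray) -> np.ndarray:
    """Embed a binary watermark in the least significant bit of an 8-bit grayscale cover image."""
    wm_bits = (watermark > 0).astype(np.uint8)     # binary watermark, same shape as the cover
    return (cover & 0xFE) | wm_bits                # clear the LSB, then write the watermark bit

def extract_lsb(marked: np.ndarray) -> np.ndarray:
    """Recover the binary watermark from the least-significant-bit plane."""
    return (marked & 1).astype(np.uint8)

# usage with random data standing in for the cover image and the logo
cover = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
logo = np.random.randint(0, 2, (256, 256), dtype=np.uint8)
assert np.array_equal(extract_lsb(embed_lsb(cover, logo)), logo)
```

Such spatial-domain embedding is simple and imperceptible, but, as discussed later in the paper, it is far less robust to compression and filtering than frequency-domain methods.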
There are two important issues [1]:
(i) Watermarking schemes are required to provide trustworthy evidence for protecting rightful ownership.
(ii) Good watermarking schemes satisfy the requirement of robustness and resist distortions due to common image manipulations (such as filtering, compression, etc.).
Digital documents, i.e. documents created in digital media, have certain advantages:
• Efficient data storage, duplication, manipulation and transmission.
• Copying without loss.
However, some limitations of digital documents make them problematic to use. These limitations are as follows:
• Illegal copying.
• Falsification (i.e. duplication).
• No copyright protection.
• No ownership identification.
There are various techniques for the prevention of illegal copying, such as:
• Encryption methods, which use public and private keys to encode the data so that the image can be decoded only with the required key.
• Site security methods, which include the use of firewalls to control access to the data.
• Using thumbnail images.
• Digital watermarking, which includes robust labeling of an image with information that is to be protected from illegal copying, and also the use of image checksums or other techniques to detect manipulation of image data. Digital watermarking is a technology that opens a new door for authors, publishers and service providers for the protection of their rights and interests in multimedia documents. The large use of networked multimedia systems has created the need for "copyright protection" for different digital media such as pictures, audio clips and videos. The term "copyright protection" involves the authentication of ownership and the identification of illegal copies of digital media [1].
II. Need of Digital Watermarking
The purpose of digital watermarks is to provide copyright protection for intellectual property that is in digital format. The first important application is copyright protection of digital media. In addition to copyright protection, digital watermarking plays an important role in many fields of application such as broadcast monitoring, owner identification, transaction tracking, proof of ownership, fingerprinting, content authentication, copy control and device control. Digital watermarks also serve the purposes of identifying quality and assuring authenticity. A graphic or audio file bearing a digital watermark can inform the viewer or listener who owns the item. The rapid evolution of digital objects and software programs, and the simple communication and access techniques for these products, create a strong demand for digital copyright protection; digital watermarking systems form a practical means to protect digital content [6]. In digital watermarking systems the embedded digital information, also called the payload, does not affect the carrying object; it resists changes or manipulation of the carrier [7]. Many digital watermarking algorithms that use the carrier object's properties to embed the desired watermark have appeared during the last 20 years [8]. The purpose of this paper is to investigate digital watermarking techniques, their requirements, types, applications and advantages.
III. Objectives of Digital Watermarking
Digital watermarking is applied to protect the copyright of digital media which, unlike analog media, can be stored, duplicated, and distributed without loss of fidelity. Unauthorized copying of digital documents has been a subject of concern for many years, especially with respect to authorship claims. Digital watermarking, by hiding certain information in the original data, provides a solution. Digital watermarking technology can effectively compensate for the deficiencies of traditional information security technology in security and protection applications. Digital watermarking technically prevents illegal duplication, interpolation and distribution of digital content.
Classification of digital watermarking [2, 3]
1. According to its characteristics:
• Robust watermarking is mainly used to sign copyright information of digital works. The embedded watermark can resist common edit processing, image processing and lossy compression; the watermark is not destroyed after such attacks and can still be detected to provide certification.
• Fragile watermarking is mainly used for integrity protection and must be very sensitive to changes in the signal. We can determine whether the data has been tampered with according to the state of the fragile watermark.
2. Based on the attached media:
• Image watermarking refers to adding a watermark to a still image.
• Video watermarking adds a digital watermark to the video stream to control video applications.
• Text watermarking means adding a watermark to PDF, DOC and other text files to prevent changes to the text.
• Graphic watermarking embeds a watermark in two-dimensional or three-dimensional computer-generated graphics to indicate the copyright.
3. According to the detection process:
• Visual watermarking needs the original data in the testing phase; it has stronger robustness, but its application is limited.
• Blind watermarking does not need the original data, which gives it a wide application field, but it requires more advanced watermarking technology.
4. Based on its purpose:
• Copyright protection watermarking: if the owners want others to see the mark, the watermark is visible after it is added to the image, and the watermark still exists even if it is attacked.
• Tampering tip watermarking protects the integrity of the image content, labels modified content and resists the usual lossy compression formats.
• Note watermarking is added during the production process of paper notes and can be detected after printing, scanning, and other processes.
• Anonymous mark watermarking can hide important annotations of confidential data and restrict illegal users from obtaining the confidential data.
Figure: (a) the original Lena image, (b) the logo to be watermarked, (c) visible watermarked image, (d) invisible watermarked image.
IV. Basic Characteristics of Digital Watermarking
• Robustness:
Robustness means that the watermark embedded in the data has the ability to survive a variety of processing operations and attacks. The watermark must therefore be robust against general signal processing operations, geometric transformations and malicious attacks. A watermark for copyright protection needs the strongest robustness and must resist malicious attacks, while fragile watermarking and annotation watermarking do not need to resist malicious attacks.
• Non-perceptibility:
The watermark cannot be seen by the human eye or heard by the human ear; it can only be detected through special processing or dedicated circuits.
• Verifiability:
The watermark should be able to provide full and reliable evidence for the ownership of copyright-protected information products. It can be used to determine whether the object is to be protected, monitor the spread of the data being protected, identify authenticity, and control illegal copying.
• Security:
The watermark information carries a unique identifying sign; only authorized users can legally detect, extract and even modify the watermark, and thus achieve the purpose of copyright protection.
• Capacity:
Image watermarking capacity is an evaluation of how much information can be hidden within a digital image. It is determined by the statistical model used for the host image, by the distortion constraints on the data hider and the attacker, and by the information available to the data hider, to the attacker, and to the decoder.
V. System Overview
The structure of a digital watermarking system consists of three main parts: watermark insertion, watermark extraction and watermark detection. The process of the digital watermarking technique therefore includes three processes, i.e. the watermark insertion process, the watermark extraction process and the watermark detection process. The watermark insertion unit provides a generic approach to watermarking any digital medium [4]. The detection process indicates whether the object contains a mark that is "close to" the original watermark; the meaning of "close" depends on the type of alterations that a marked object might undergo in the course of normal use. The watermark insertion unit integrates the input image and the watermark to form the output watermarked image. Watermark extraction uncovers the watermark in watermarked images, a technique usually applicable to verification watermarks. Watermark detection detects the presence of an ID, e.g. in robust watermarks the presence of a specified ID (watermark) can be detected using a predefined threshold, i.e. the answer is either true or false. The figure shows the actual process carried out during embedding and extraction: the original data and the watermark, on being submitted to the embedding algorithm, give the watermarked data; during extraction, this watermarked data is given to the extraction algorithm, which produces the extracted watermark.
Figure: Working phases of digital watermarking (embedding and extraction)
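The threshold-based detection decision described above can be illustrated with a short sketch. This is an assumed example using normalized correlation in Python with NumPy; the threshold value and function names are illustrative and not taken from the paper.

```python
import numpy as np

def detect_watermark(extracted: np.ndarray, reference: np.ndarray, threshold: float = 0.7) -> bool:
    """Return True if the normalized correlation between the extracted pattern and the
    reference watermark exceeds a predefined threshold (a true/false detection answer)."""
    x = extracted.astype(np.float64).ravel()
    w = reference.astype(np.float64).ravel()
    x -= x.mean()
    w -= w.mean()
    denom = np.linalg.norm(x) * np.linalg.norm(w)
    if denom == 0:
        return False
    return float(np.dot(x, w) / denom) >= threshold
```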
VI. Applications of Digital Watermarking
1. Broadcast monitoring:
This application identifies when and where works are broadcast by recognizing the watermarks embedded in those works. There are a variety of technologies for monitoring the playback of sound recordings on broadcast; digital watermarking is an alternative to these technologies because of its reliable automated detection. A single PC-based monitoring station can continuously monitor up to 16 channels over 24 hours with no human interaction, with the monitoring results assembled at a central server. The system can distinguish between identical versions of songs that are watermarked for different distribution channels. Such a system requires a monitoring infrastructure and watermarks to be present in the content. Watermarking of video or music is planned by all major entertainment companies possessing closed networks.
2. Encoding
According to the major music companies and major video studios, encoding happens at the mastering level of a sound recording; downstream, transactional watermarks are also considered. Each song is assigned a unique ID from the identifier database. After completion of all mastering processes, the ID is encoded into the sound recording. To support the encoding of audio or video recordings requiring special processing, a human-assisted watermark key is available.
3. Copy and playback control:
The data carried by a watermark may contain information about copy and display permissions. A secure module can be added to copy or playback equipment to automatically extract the permission information and block further processing if required. This approach is being taken in the Digital Video Disc (DVD).
4. Content authentication:
Content authentication is the embedding of signature information in the content. This signature can then be checked to verify that the content has not been altered. Using watermarks, digital signatures can be embedded into a work and any modification to the work can be detected.
5. Copyright protection:
Digital watermarking indicates the copyright owner, identifies the buyer or provides additional information about the digital content, and embeds this information into digital images, digital audio and video sequences. This paper introduces an application of digital watermarks in image copyright protection. A system can use a DCT algorithm to embed a chaotic sequence or a meaningful watermark into the protected image, and provide an effective technical means for identifying the image copyright.
VII. Attacks
Attacks involve adding false watermarks or altering or removing existing ones. Attacks on watermarks may be accidental or intentional. Accidental attacks may be caused by standard image processing or compression procedures. Intentional attacks include cryptanalysis, steganalysis, image processing techniques or other attempts to overwrite or remove existing watermarks. The attack methods vary according to robustness and perceptibility.
Mosaic attack: The mosaic attack is a method in which pictures are displayed so as to confuse a watermark-searching program, known as a "web crawler". A mosaic can be created by subdividing the original image into randomly sized small images and displaying the resulting images on a webpage.
The aim is to confuse the web crawler into thinking that there is no watermark within the picture because it has been subdivided into smaller separate pictures. This form of attack has become largely obsolete due to new, improved methods of watermarking and more intelligent web crawlers.
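To make the mechanics of the mosaic attack concrete, here is a minimal Python/NumPy sketch that splits an image into tiles for separate display; for simplicity it uses fixed-size tiles rather than the randomly sized pieces described above, and the names are illustrative assumptions.

```python
import numpy as np

def mosaic_tiles(image: np.ndarray, tile: int = 64) -> list:
    """Split an image into small tiles; a web page can then display the tiles side by side,
    so no single piece contains enough of the image for a crawler to find the watermark."""
    h, w = image.shape[:2]
    return [image[r:r + tile, c:c + tile]
            for r in range(0, h, tile)
            for c in range(0, w, tile)]
```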
Types of attacks:
1) Subtractive attack:
The attacker locates the area of the watermark, even if it is imperceptible, and then removes the mark by cropping or digital editing.
2) Distortive attack:
The attacker attempts to make uniform distortive changes to the image such that the mark becomes unrecognizable. These two watermark attacks are usually performed on robust watermarks.
3) Stirmark attack:
Stirmark is a generic tool developed for simple robustness testing of image marking algorithms and steganographic techniques. In its simplest version, Stirmark simulates a resampling process: it introduces into an image the same kind of errors as printing it on a high-quality printer and scanning it again with a high-quality scanner, including minor geometric distortion. This testing tool is effective at removing fairly robust watermarks from images and has become a form of attack in its own right.
4) Forgery attack:
The forgery attack is also known as an 'additive attack' in some cases. The attacker adds his or her own watermark overlaying the original image, marking the content as their own.
5) Inversion attack:
An inversion attack renders the watermark information ambiguous. The idea behind the inversion attack is that an attacker who receives watermarked data can claim that the data also contains his watermark by declaring part of the data as his watermark. The attacker can then easily generate "original" data by subtracting the claimed watermark.
Types of attacks based on estimation:
In these types of attacks, estimates of the watermark information and the original image can be obtained using stochastic techniques. Some such attacks are [7]:
1. Removal and interference attacks:
Removal attacks intend to remove the watermark information from the watermarked image. The watermark is mostly an additive noise signal present in the host signal, and this is exploited by removal and interference attacks. Interference attacks additionally add extra noise to the watermarked image.
2. Cryptographic attacks:
The above attacks do not break the security of the digital watermarking algorithm, but cryptographic attacks deal with breaching that security. One instance is a cryptographic attack that searches for the secret watermarking key using an exhaustive brute-force technique. The oracle attack is a second instance of a cryptographic attack.
3. Protocol attacks:
Protocol attacks exploit loopholes in the digital watermarking scheme. The IBM attack is an instance of a protocol attack; it is also known as the deadlock attack, dummy original attack or inversion attack. Protocol attacks embed one or more additional watermarks in such a way that it becomes ambiguous which was the actual owner's watermark.
4. Active attacks:
In active attacks the attacker deliberately attempts to remove the watermark or simply to make it undetectable. In copyright protection and fingerprinting applications, active attacks are a very big problem.
5. Passive attacks:
Here the attacker is not attempting to remove the watermark but simply trying to determine whether a mark is present. As the reader should note, security against passive attacks is of particular importance in covert communications, in which mere knowledge of the presence of a watermark is often already too much to concede.
VIII. Classification of Image Watermarking Techniques
There are basically two main techniques:
1. Spatial watermarking technique
Spatial watermarking can also be applied using color separation. In this way, the watermark appears in only one of the color bands. This renders the watermark visibly subtle, such that it is difficult to detect under regular viewing. However, the mark appears immediately when the colors are separated for printing. This renders the document useless for the printer unless the watermark can be removed from the color band.
2. Frequency domain watermarking technique
Compared to spatial-domain methods, frequency-domain methods are more widely applied. The aim is to embed the watermarks in the spectral coefficients of the image. The most commonly used transforms are the Discrete Cosine Transform (DCT), the Discrete Fourier Transform (DFT) and the Discrete Wavelet Transform (DWT). The reason for watermarking in the frequency domain is that the characteristics of the human visual system (HVS) are better captured by the spectral coefficients.
Discrete Fourier Transform (DFT)
The Fourier Transform (FT) is a process which transforms a continuous function into its frequency components. The corresponding transform for discrete-valued functions is the Discrete Fourier Transform (DFT). In digital image processing, even non-periodic functions can be expressed as the integral of sines and/or cosines multiplied by a weighting function.
Discrete Cosine Transform (DCT)
The DCT is fast and can be implemented in O(n log n) operations. In this transformation, the image is decomposed into different frequency bands, and the focus is mainly on the middle-frequency band, into which the watermark information is easily embedded. The middle-frequency bands are chosen to avoid the most visually important parts of the image, which are of low frequency, without exposing the watermark to elimination through compression and noise attacks. The DCT is an important method for video processing; it also gives accurate results in video watermarking and resists various attacks. Another advantage of the DCT is that it breaks a video frame into different frequency bands, which makes it easy to embed watermarking information into the middle-frequency bands of a video frame [6]. It not only improves the peak signal-to-noise ratio but is also more robust against various attacks such as frame dropping and frame averaging.
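The following is a minimal sketch of the mid-band DCT embedding idea described above, written in Python with NumPy and SciPy; the choice of 8x8 blocks, the coefficient position (4, 3) and the embedding strength are illustrative assumptions, not values taken from the paper.

```python
import numpy as np
from scipy.fft import dctn, idctn

def embed_dct_midband(cover: np.ndarray, bits: np.ndarray, strength: float = 10.0) -> np.ndarray:
    """Embed one watermark bit per 8x8 block by nudging a mid-frequency DCT coefficient."""
    marked = cover.astype(np.float64).copy()
    h, w = cover.shape
    idx = 0
    for r in range(0, h - 7, 8):
        for c in range(0, w - 7, 8):
            if idx >= bits.size:
                break
            block = dctn(marked[r:r + 8, c:c + 8], norm='ortho')
            # coefficient (4, 3) lies in the middle-frequency band of the 8x8 spectrum
            block[4, 3] += strength if bits[idx] else -strength
            marked[r:r + 8, c:c + 8] = idctn(block, norm='ortho')
            idx += 1
    return np.clip(marked, 0, 255).astype(np.uint8)

def extract_dct_midband(marked: np.ndarray, original: np.ndarray, n_bits: int) -> np.ndarray:
    """Non-blind extraction: compare each block's mid-band coefficient with the original image."""
    out = []
    h, w = marked.shape
    for r in range(0, h - 7, 8):
        for c in range(0, w - 7, 8):
            if len(out) >= n_bits:
                return np.array(out, dtype=np.uint8)
            d = dctn(marked[r:r + 8, c:c + 8].astype(np.float64), norm='ortho')[4, 3] \
                - dctn(original[r:r + 8, c:c + 8].astype(np.float64), norm='ortho')[4, 3]
            out.append(1 if d > 0 else 0)
    return np.array(out, dtype=np.uint8)
```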
Discrete Wavelet Transform (DWT)
The discrete wavelet transform (DWT) is based on small waves, called wavelets. It is a mathematical tool for hierarchically decomposing an image, and it can process non-stationary signals. The wavelet transform provides both a frequency and a spatial description of an image. Unlike the conventional Fourier transform, temporal information is retained in this transformation process. The wavelets are created from translations and dilations of a fixed function called the mother wavelet [6].
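As a companion sketch to the DCT example, the following shows an additive DWT-domain embedding in Python using the PyWavelets package; the choice of the Haar wavelet, the horizontal-detail sub-band and the strength alpha are illustrative assumptions.

```python
import numpy as np
import pywt  # PyWavelets

def embed_dwt(cover: np.ndarray, bits: np.ndarray, alpha: float = 8.0) -> np.ndarray:
    """Additively embed watermark bits into the horizontal-detail sub-band of a
    one-level Haar DWT, then reconstruct the watermarked image."""
    cA, (cH, cV, cD) = pywt.dwt2(cover.astype(np.float64), 'haar')
    coeffs = cH.ravel()
    signs = np.where(bits > 0, 1.0, -1.0)
    n = min(signs.size, coeffs.size)
    coeffs[:n] += alpha * signs[:n]            # strengthen or weaken coefficients per bit
    marked = pywt.idwt2((cA, (coeffs.reshape(cH.shape), cV, cD)), 'haar')
    return np.clip(marked, 0, 255).astype(np.uint8)
```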
IX. Watermarks in 3D Objects [7]
The increasing use of virtual reality applications makes it necessary to have protection mechanisms for these applications, and digital watermarking has proved its reliability as a protection technique. Recently many algorithms have been proposed for hiding watermarks in the 3D mesh of a 3D object. The main goal of these algorithms is that the watermark must not be perceptible by the human eye in the 3D mesh of the object, and if other senses are used to touch the object virtually through special devices, any hidden data in the object must also remain undetectable. Such algorithms represent a revolution in the digital watermarking technology world.
X. Literature Review
The various efforts of authors in this field can be summarized in the following table, which makes them easy to understand [1, 4].
Table: literature review of digital image watermarking techniques (year, authors, title, methodology, performance).
• 2010, Naderahmadian Y. and Hosseini-Khayat S., "Fast Watermarking Based on QR Decomposition in Wavelet Domain". Methodology: QR code image by means of wavelet transform. Performance: robustness to attacks; visible copyright protection and authentication tool.
• 2011, Nan Lin, Jianjing Shen, Xiaofeng Guo and Jun Zhou, "A Robust Image Watermarking Based on DWT-QR Decomposition". Methodology: DWT and QR decomposition. Performance: robustness, invisibility and higher embedding capacity.
• 2011, Qing Liu and Jun Ying, "Grayscale Image Digital Watermarking Technology Based on Wavelet Analysis". Methodology: DWT, spread spectrum technology. Performance: increased complexity of the method; extraction is not good enough.
• 2012, Zhaoshan Wang, Shanxiang Lv and Yan Shna, "A Digital Image Watermarking Algorithm Based on Chaos and Fresnel Transform". Methodology: Fresnel diffraction plane, chaotic scrambling. Performance: provides both good robustness and security.
• 2013, Jithin V M and K K Gupta, "Robust Invisible QR Code Image Watermarking in DWT Domain". Methodology: QR codes, which can be scanned easily using a QR code scanner. Performance: more robust than previous techniques.
• 2014, Jaishri Guru, Hemant and Brajesh, "Fusion of DWT and SVD Digital Watermarking Techniques for Robustness". Methodology: a hybrid watermarking scheme using SVD and DWT. Performance: improved imperceptibility and robustness under attacks; preserves copyright.
XI. Conclusion and Future Scope
With the popularity of the network, the secure communication of digital products has become an important and urgent research topic. Digital watermarking technology can provide a new way to protect the copyright of multimedia information and to ensure its safe use. Digital watermarking is a more intelligible and easier method for data hiding, and it is also more robust and more capable, because of its efficiency, than other hiding techniques. It is very important to prepare the industry for the use of digital watermarks and to convince them of the added value their products can gain if they employ digital watermarking technologies. The future is promising: many recently proposed algorithms explore the use of digital watermarking for 3D objects, and further work on integrating human visual system characteristics is in progress.
References
[1] Vineeta Gupta and Atul Barve, "A Review on Image Watermarking and its Techniques", IJARCSSE, vol. 4, issue 1, January 2014.
[2] Shraddha S. Katariya (Patni), "Digital Watermarking: Review", IJEIT, vol. 1, issue 2, February 2012.
[3] Robert, L., and T. Shanmugapriya, "A Study on Digital Watermarking Techniques", International Journal of Recent Trends in Engineering, vol. 1, no. 2, pp. 223-225, 2009.
[4] Ingemar J. Cox and J. P. Linnartz, "Some General Methods for Tampering with Watermarks", IEEE Journal on Selected Areas in Communications, 1998, 16(4): 587-593.
[5] Hebah H. O. Nasereddin, "Digital Watermarking: a Technology Overview", IJRRAS, 6(1), January 2011.
[6] Ekta Miglani and Sachin Gupta, "Digital Watermarking Methodologies: a Survey", IJARCSSE, vol. 4, issue 5, May 2014.
[7] A. Formaglio, "Perceptibility of Digital Watermarking in Haptically Enabled 3D Meshes", University of Siena, Italy.
[8] Dong Zheng, "RST Invariance of Image Watermarking Algorithms and the Framework of Mathematical Analysis", Library and Archives Canada, ISBN 978-0-494-50758-2.
[9] Yazeed Alrashed, "Digital Watermarking for Compressed Images", ProQuest LLC, UMI 1459036.
International Association of Scientific Innovation and Research (IASIR)
(An Association Unifying the Sciences, Engineering, and Applied Research)
ISSN (Print): 2279-0063
ISSN (Online): 2279-0071
International Journal of Software and Web Sciences (IJSWS)
www.iasir.net
Mobile Crowdsensing -Current State and Future Challenges
Pulkit Chaurasia1, Prabhat Kumar2
Student of Computer Science & Engineering,
Department of Computer Science and Engineering,
IMS Engineering College, Ghaziabad, 201009, Uttar Pradesh, INDIA
______________________________________________________________________________________
Abstract: Mobile Crowd Sensing (MCS) presents a new sensing paradigm which takes advantage of mobile
devices (wearable gadgets, smart phones, smart vehicles etc.) to effortlessly and conveniently collect data that
provides a base for developing various applications in terms of location, personal and surrounding context,
noise level, traffic conditions etc. In this paper, we describe how human involvement adds to the mobility that is
beneficial for both sensing coverage and transmitting data at low cost by using the already available sensors.
We present a brief overview of existing mobile crowd sensing applications, explain their unique characteristics,
illustrate various research challenges and discuss possible solutions. We believe that this concept has the
potential to become the most efficient and effective method for sensing data from the physical world once we
overcome the reliability issues related to the sensed data.
Keywords: Mobile crowd sensing; Internet of Things; Smartphone; Sensor; Participatory sensing;
Opportunistic Sensing; GPS.
____________________________________________________________________________________________________
I. INTRODUCTION
With the recent advancements in mobile computing, we are entering the era of the Internet of Things (IoT), which aims at creating a network of "things", i.e. physical objects embedded with electronics, software, sensors and connectivity that enable them to achieve greater value and service by exchanging data with the manufacturer, operator and/or other connected devices. Consumer-centric mobile sensing and devices connected to the Internet will therefore drive the evolution of the IoT. According to a forecast of global smartphone shipments from 2010 to 2018, more than 1.28 billion smartphones were expected to be shipped worldwide in 2014, and by 2018 the number is expected to climb to over 1.87 billion [1]. Smartphones already have several sensors, such as a gyroscope, accelerometer, ambient light sensor, GPS sensor, digital compass, proximity sensor, pressure sensor, heart rate monitor, fingerprint sensor, and harmful radiation sensor [2][3]. In comparison to static sensor networks, smartphones and vehicular systems (such as GPS and OBD-II) support more complex computations. Thus, sensors in smartphones and vehicular systems represent a new type of geographically distributed sensing infrastructure that enables consumer-centric mobile sensing, which is a scalable and cost-effective alternative to existing static wireless sensor networks over large areas. This novel sensing paradigm is called Mobile Crowd Sensing (MCS). It has a wide range of applications such as urban environment monitoring and street parking availability statistics.
Table I lists the sensors available on various mobile devices (accelerometer and gyroscope, camera, proximity, compass, barometer, heart rate, SpO2) for the Samsung Galaxy S6, iPhone 6, Sony Xperia Z2 and HTC One M9.
Lane et al. [4] classified the sensing paradigms used in MCS as:
1. Participatory sensing: It requires active involvement of the participants to contribute sensor data (e.g.
taking a picture, alerting about traffic status) and consciously decide when, where, what, and how to
sense.
2. Opportunistic sensing: Unlike participatory sensing, opportunistic sensing is completely unconscious
i.e. the applications may run in the background and collect data without any active involvement of the
participant.
Transmission paradigms used in MCS are classified as:
1. Infrastructure-based transmission: It enables participants to transmit sensory data through the Internet
by mobile networks such as 2G, 3G or 4G network.
2. Opportunistic transmission: It involves opportunistic data transmission among users through short-range radio communications such as Bluetooth or Wi-Fi.
Of the two transmission paradigms mentioned above, infrastructure-based transmission is used extensively in MCS applications despite a few limitations: this paradigm cannot be applied in remote areas with low network coverage or where network access is expensive [5].
II. LITERATURE REVIEW
A. MCS Characteristics and Functioning
The unique characteristics of MCS applications differentiate them from the traditional mote-class sensor
networks. One of the major characteristics of MCS is human involvement. Human mobility offers new
opportunities for both sensing and data transmission. For example, humans can easily identify available parking
spots and forward this information via pictures or text messages. The mobile devices at present have more
computing, communication and storage resources with multi-modality sensing capabilities than mote-class
sensors. Also, millions of these devices are already out in the world as people carry these devices with
themselves most of the time. Thus, by utilizing the sensors on these devices, we can easily develop numerous
large scale sensing applications. For instance, instead of installing road-side cameras to determine traffic data
and congestion level in order to avoid traffic jams, we can collect this data using the smartphones carried by
drivers on the road. Therefore, implementing such methods can result in cost minimization by eliminating the
need to create specialized sensing infrastructure. Another characteristic of MCS is that the same sensor data can
be used for different purposes in existing MCS applications whereas the conventional sensor network is
intended for a single application only. For instance, the accelerometer readings gathered through MCS can be
used in transportation mode identification, pothole detection, and human activity pattern extraction [6].
B. Human Involvement in MCS
The involvement of citizens in the sensing loop is the chief feature of MCS. In traditional remote sensor networks, humans are only the end consumers of the sensory data. One of the most important features of MCS is the deeper involvement of humans in the whole data-to-decision process, including sensing, transmission, data analysis and decision making. The proportion of human involvement depends on the application requirements and device capabilities. On the positive side, it is easier to deploy the network at lower cost, because millions of mobile devices and vehicles already exist in many cities around the world, and it is easier to maintain the network, because mobile nodes often have a better power supply, stronger computation, and larger storage and communication capacity. On the negative side, human involvement also brings many new challenges [7]. Having human participation in the loop raises issues regarding the privacy and security of data (e.g., sensitive information such as location may be revealed) and the quality of the contributed data. Mobile users may not want to share their sensory data, which may contain or reveal private and sensitive information; thus human involvement amplifies privacy concerns. Moreover, the number of mobile users, the availability of sensors, and the data quality all make it more difficult to guarantee reliable sensing quality in terms of coverage, latency, and confidence.
C. Mobile Crowd Sensing Applications
MCS applications are classified into different categories based on the type of phenomenon being measured [8][9]:
1) Smart Cities: Smart cities, with high population density and a very large number of interconnected issues, make effective city management a challenging task. Government and industrial research efforts are therefore in progress to exploit the full potential of sensing data by initiating smart city systems that improve city efficiency through smarter grids, water management systems [10] and social progress [11]. For example, a sensor network could be used to monitor traffic flows and predict the effects of extreme weather conditions on water supplies, resulting in the delivery of near real-time information to citizens through citywide displays and mobile applications. The government of South Korea is building the Songdo Business District, a new smart city built from scratch on 1,500 acres, which aims at becoming the first full-scale realization of a smart city [12]. But despite their advantages, such systems are turning out to be very costly. Crowd sensing can reduce the costs associated with large-scale sensing and provide additional human-related data.
2) Environment: In MCS environmental applications, the main focus is on environment-related issues such as measuring pollution levels in a city or water levels in water sources (rivers, seas, etc.). Such applications enable the mapping of various large-scale environmental phenomena through human involvement. An example prototype deployment for pollution monitoring is Common Sense [13]. It uses specialized handheld air quality sensing devices that communicate with mobile phones (using Bluetooth) to measure various air pollutants (e.g. CO2, NOx). When these devices are utilized across a large population, they collectively measure the air quality of a community or a large area. Similarly, one can utilize the microphones on mobile phones to monitor noise levels in large areas.
3) Healthcare and Wellbeing: Wireless sensors worn by people for heart rate monitoring [14] and blood pressure monitoring [15] can communicate their readings to the owners' smartphones. Mobile sensing can leverage these existing data for large-scale healthcare studies that seamlessly collect data from various groups of people, selected based on location, age, etc. A general example involves collecting data from people who regularly eat fast food. The phones can perform activity recognition and determine the level of physical exercise done by people, which has been proven to directly influence people's health.
4) Road Transportation: Departments of transportation can gather large-scale data about traffic patterns in a country or state using the location and speed data provided by GPS sensors embedded in vehicles. These data can then be used for traffic engineering, construction of new roads, etc. Drivers can receive real-time traffic information based on the same type of data collected from smartphones [16]. With the use of ultrasonic sensors, drivers can also benefit from real-time parking information, and with GPS and accelerometer sensors, transportation agencies or municipalities can efficiently and quickly repair roads [17][18]. Twitter [19] has been used as a publish/subscribe medium to build a crowd-sourced weather radar and a participatory noise-mapping application. MCrowd [20] is a system that enables mobile crowdsourcing by providing a marketplace for micro-sensing tasks that can be performed from a mobile phone; it offers several features including location-based tasks and REST APIs for easy integration with third-party sites. PEIR [21] is an application that exploits mobile phones to evaluate whether users have been exposed to airborne pollution, enables data sharing to encourage community participation, and estimates the impact of individual user/community behaviours on the surrounding environment. None of these existing applications and platforms has addressed the reliability of the sensed data.
D. MCS Challenges
One of the complex problems of MCS is to identify only that set of devices from the physical world that can help achieve the main objective, and then to instruct these devices to sense in a way that ensures results of the desired quality are obtained every time. The success of MCS depends to a large extent on users' cooperation. A user may not want to share sensor data containing private or sensitive information such as the current location. Moreover, the user may incur energy and monetary costs, or even have to make efforts themselves, for sensing and transmitting the desired data. So, unless there are strong incentives, device users may not be willing to participate. This calls for appropriate incentive mechanisms to involve human participants. Hence, MCS has the following main challenges:
(a) Motivating the participants (by use of incentives).
(b) Reliability of the sensed data.
(c) Privacy, security and data integrity.
(d) Increasing network bandwidth demand caused by the growing usage of data-rich multimedia sensors.
Other secondary challenges include battery consumption. The validation of sensed data is important in a mobile crowd sensing system to provide confidence to the clients who use the sensed data. However, it is challenging to validate each and every sensed data point of each participant in a scalable and cost-effective way, because sensing measurements are highly dependent on context. One approach to handle this issue is to validate the location associated with the sensed data point in order to achieve a certain degree of reliability of the sensed data [22][23].
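As a rough illustration of this location-based validation, the following Python sketch filters out readings whose claimed location lies too far from the sensing task area; the Reading structure, the distance threshold and the haversine helper are illustrative assumptions rather than a method defined in the paper.

```python
import math
from dataclasses import dataclass

@dataclass
class Reading:
    value: float   # sensed value (e.g. noise level in dB)
    lat: float     # latitude claimed by the participant
    lon: float     # longitude claimed by the participant

def haversine_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance between two points, in kilometres."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * 6371.0 * math.asin(math.sqrt(a))

def validate_by_location(readings, task_lat, task_lon, max_km=0.5):
    """Keep only readings whose claimed location lies within max_km of the task area."""
    return [rd for rd in readings
            if haversine_km(rd.lat, rd.lon, task_lat, task_lon) <= max_km]
```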
III. CONCLUSIONS AND FUTURE RESEARCH
With the recent boom of sensor-rich smartphones in the market, mobile crowd sensing has become a popular research and application field, and it is expected that the scope and depth of MCS research will further expand in the years to come. The study of MCS is still in its early stages and there are numerous challenges and research opportunities ahead. In this review paper, we described human involvement in MCS, the unique characteristics of MCS and its applications, presented several research challenges of MCS and briefly discussed their solutions. We also presented some approaches to exploiting the opportunities that human mobility offers for sensing and transmission efficiency and effectiveness.
Future work in this domain can take some of the following directions.
A. A Generic Framework for Data Collection
In MCS, mobile sensors from a collection of sensing nodes can potentially provide coverage where no static sensing infrastructure is available. Due to the large population of mobile nodes, a sensing task must identify which node(s) may accept the task.
B. Varied Human Grouping
Interaction among the volunteers is necessary, or should at least be an option, but is not present in most current MCS systems. Grouping users and facilitating the interaction among them is a major challenge for MCS. Key techniques to address this include community creation approaches, dynamic group formation metrics, social networking methods, and so on.
C. Merging Data from Heterogeneous Communities to Develop New Social Apps
Data from different spaces often characterizes only one facet of a situation, so the fusion of data sources often draws a better picture of the situation. For example, by integrating the themes mined from user posts with the location information revealed by GPS-equipped mobile phones, Twitter has been exploited to support near real-time reporting of earthquakes in Japan [24]. Other future work will address open issues such as battery consumption of smartphones, reliability issues, and security and integrity issues in MCS. We conclude that mobile crowd sensing will become a widespread method for collecting sensing data from the physical world once the data reliability issues are properly addressed [25].
REFERENCES
[1] Global smartphone shipments forecast. [Online]. Available: http://www.statista.com/statistics/263441/global-smartphone-shipments-forecast/
[2] http://www.techulator.com/resources/9421-Overview-sensors-used-smartphones-tablets.aspx
[3] http://www.phonearena.com/news/Did-you-know-how-many-different-kinds-of-sensors-go-inside-a-smartphone_id57885
[4] N. Lane et al., "Urban Sensing Systems: Opportunistic or Participatory?", Proc. HotMobile, 2008, pp. 11-16.
[5] Huadong Ma, Dong Zhao, and Peiyan Yuan, "Opportunities in Mobile Crowd Sensing".
[6] Raghu K. Ganti, Fan Ye, and Hui Lei, "Mobile Crowdsensing: Current State and Future Challenges", IBM T. J. Watson Research Center, Hawthorne, NY.
[7] Huadong Ma, Dong Zhao, and Peiyan Yuan, "Opportunities in Mobile Crowd Sensing".
[8] Raghu K. Ganti, Fan Ye, and Hui Lei, "Mobile Crowdsensing: Current State and Future Challenges", IBM T. J. Watson Research Center, Hawthorne, NY.
[9] Manoop Talasila, Reza Curtmola, and Cristian Borcea, "Mobile Crowd Sensing", Department of Computer Science, New Jersey Institute of Technology, Newark, NJ, USA.
[10] London's water supply monitoring. [Online]. Available: http://www.ucl.ac.uk/news/news-articles/May2012/240512-
[11] IBM Smarter Planet. [Online]. Available: http://www.ibm.com/smarterplanet/us/en/overview/ideas/
[12] Songdo smart city. [Online]. Available: http://www.songdo.com
[13] P. Dutta et al., "Demo abstract: Common Sense: Participatory urban sensing using a network of handheld air quality monitors," in Proc. of ACM SenSys, 2009, pp. 349-350.
[14] Garmin Edge 305. [Online]. Available: www.garmin.com/products/edge305/
[15] MIT news. [Online]. Available: http://web.mit.edu/newsoffice/2009/blood-pressure-tt0408.html
[16] Mobile Millennium project. [Online]. Available: http://traffic.berkeley.edu/
[17] S. Mathur, T. Jin, N. Kasturirangan, J. Chandrasekaran, W. Xue, M. Gruteser, and W. Trappe, "Parknet: drive-by sensing of road-side parking statistics," in Proceedings of the 8th international conference on Mobile systems, applications and services. ACM, 2010, pp. 123-136.
[18] J. Eriksson, L. Girod, B. Hull, R. Newton, S. Madden, and H. Balakrishnan, "The pothole patrol: using a mobile sensor network for road surface monitoring," in Proceedings of the 6th international conference on Mobile systems, applications and services. ACM, 2008, pp. 29-39.
[19] Twitter. [Online]. Available: http://twitter.com/
[20] M. Demirbas, M. A. Bayir, C. G. Akcora, Y. S. Yilmaz, and H. Ferhatosmanoglu, "Crowd-sourced sensing and collaboration using Twitter," in World of Wireless Mobile and Multimedia Networks (WoWMoM), 2010 IEEE International Symposium on. IEEE, 2010, pp. 1-9.
[21] M. Mun, S. Reddy, K. Shilton, N. Yau, J. Burke, D. Estrin, M. Hansen, E. Howard, R. West, and P. Boda, "PEIR, the personal environmental impact report, as a platform for participatory sensing systems research," in Proceedings of the 7th international conference on Mobile systems, applications, and services. ACM, 2009, pp. 55-68.
[22] Raghu K. Ganti, Fan Ye, and Hui Lei, "Mobile Crowdsensing: Current State and Future Challenges", IBM T. J. Watson Research Center, Hawthorne, NY.
[23] Manoop Talasila, Reza Curtmola, and Cristian Borcea, "Mobile Crowd Sensing", Department of Computer Science, New Jersey Institute of Technology, Newark, NJ, USA.
[24] T. Sakaki, M. Okazaki, and Y. Matsuo, "Earthquake Shakes Twitter Users: Real-time Event Detection by Social Sensors," Proc. of WWW'10 Conf., 2010, pp. 851-860.
[25] Bin Guo, Zhiwen Yu, and Xingshe Zhou, "From Participatory Sensing to Mobile Crowd Sensing", School of Computer Science, Northwestern Polytechnical University, Xi'an, P. R. China.
ACKNOWLEDGEMENTS
We are grateful to our Department of Computer Science & Technology for their support and for providing us an opportunity to review such an interesting topic. While reading and searching about this topic we learnt various important and interesting facts.
International Association of Scientific Innovation and Research (IASIR)
(An Association Unifying the Sciences, Engineering, and Applied Research)
ISSN (Print): 2279-0063
ISSN (Online): 2279-0071
International Journal of Software and Web Sciences (IJSWS)
www.iasir.net
Steganography in Audio Files
Nalin Gupta1, Sarvanand Pandey2
Student of Computer Science & Engineering,
Department of Computer Science and Engineering,
IMS Engineering College, Ghaziabad, 201009, Uttar Pradesh, INDIA
_______________________________________________________________________________________
Abstract: In Today’s large demand of Internet, it is important to transmit data in secure manner. Data
transmission in public communication is not secure due to interceptions and manipulation by eaves dropper. So
the solution for this problem is Steganography , which is an art and science of writing hidden messages which
does not attracts any ones attention , and no one except the sender and the intend recipient knows about the
hidden message . All digital files like audio, video, image and text files can be utilized for hiding secret
information. In Audio steganography we send hidden messages by concealing it with AUDIO FILES. Audio
files are a great medium of sending hidden data because of its high DATA Transmission Rate and high degree
of Redundancy. Audio files have many formats, but MP3 format is the most famous among them, and also
widely used in data hiding.
Keywords: Steganography, MP3, Information Hiding, Secret message, LSB (Least Significant Bit).
_________________________________________________________________________________________
I. INTRODUCTION
The increase in Internet usage stems from the growing availability of global communication technology, leading to electronic information gathering and distribution. However, it presents an enormous information security challenge. Every user wants fast and secure transmission of communication and information across the transmission link, but the interception, tampering and alteration of communication is common. Information confidentiality is a matter taken seriously by governments and, if abused, attracts penalties. Nevertheless, it is still a problem to transfer information securely over the global network. The need to secure information within the global network is of utmost importance, so that the information is preserved, undisclosed, until it gets to the intended recipient.
A lot of sensitive information passes through the Internet on a daily basis. This information could be military, governmental or personal. Such critical documents require protection from hacking and infiltration. Therefore, creating a framework that protects the information as well as the identity of the sender/receiver is of prime importance. Cryptography and steganography are the two best known approaches to information confidentiality. Cryptography is a technique that has been used widely for a long time; it works with a set of rules that converts the information into an unrecognizable, meaningless and unintelligible format. These rules rely on keys, which also serve authentication purposes, because only one who knows the keys can decrypt the encrypted information. The era of steganography arose from the increasing demand for a technique that gives more security to information by not revealing its content, or information about the sender/receiver, within the communication link. Cryptography and steganography both use a data encryption approach, but cryptography leaves the existence of its content wide open, making it vulnerable to attack. A steganography technique hides both the information content and the identity of the sender, and the hidden information can additionally be concealed with cryptography.
Steganography works by embedding a secret message, which might be an audio recording, a covert communication, a copyright mark, or a serial number, in a cover such as a video film or computer code, in such a way that it cannot be accessed by an unauthorised person during data exchange. A stego-object is a cover containing secret data. It is advisable for both sender and receiver to destroy the stego-object in order to prevent accidental reuse. The basic model of a steganography system is given below.
The model contains two processes and two inputs; both inputs can be audio, video, image and so on. One input is the cover medium and the other is the secret message. The two processes are the embedding and extracting processes.
The embedding process provides the framework and hides the secret message inside the cover; the embedding process may be protected by a secret key or a public key, or used without a key. When a key is used, only the one who possesses it can recover the message. Until recently, the use of image files in steganography was very frequent; however, the focus has shifted to audio steganography. It is the sophisticated features of audio files, which help in information hiding, that have gained the attention of encrypters. We can use various signal processing techniques to hide information in audio files in ways that cannot be perceived. A particular challenge is the sensitivity of audio files to delay. Imperceptibility, robustness and capacity are the three fundamental properties of steganography. Further properties, such as computational time, should also be taken into account, because real-time processing cannot tolerate long delays.
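To make the embedding/extraction model concrete, here is a minimal sketch of LSB hiding in uncompressed 16-bit PCM samples (e.g. decoded from a WAVE file), written in Python with NumPy. The length-prefix convention and the function names are illustrative assumptions, and this is a simpler scheme than the MP3-domain techniques discussed in the next section.

```python
import numpy as np

def embed_message(samples: np.ndarray, message: bytes) -> np.ndarray:
    """Hide a message in the least significant bit of 16-bit PCM audio samples.
    A 32-bit length prefix is embedded first so the extractor knows where to stop."""
    payload = len(message).to_bytes(4, 'big') + message
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    if bits.size > samples.size:
        raise ValueError("cover audio is too short for this message")
    stego = samples.copy()
    stego[:bits.size] = (stego[:bits.size] & ~1) | bits
    return stego

def extract_message(stego: np.ndarray) -> bytes:
    """Recover the hidden message from the LSB plane of the stego samples."""
    n = int.from_bytes(np.packbits((stego[:32] & 1).astype(np.uint8)).tobytes(), 'big')
    bits = (stego[32:32 + 8 * n] & 1).astype(np.uint8)
    return np.packbits(bits).tobytes()

# usage with synthetic audio standing in for a decoded WAVE file
cover = (np.random.randn(44100) * 1000).astype(np.int16)
assert extract_message(embed_message(cover, b"secret")) == b"secret"
```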
II. AUDIO STEGANOGRAPHY
In the earlier phase, steganography techniques were developed and used only for images. In the later phase, the use of these techniques in audio files was also taken into consideration, and it was during this time that the well-known algorithms for steganography in audio files were founded. Because only a few methods existed for steganography in audio files, audio steganography can provide a better secured environment. The term cover was also introduced to describe the many types of carrier files, such as WAVE (Waveform Audio File Format) or MPEG files. In the same manner, the embedded secret messages can be of different types, such as speech or text. For digital audio, the most popular compressed format is MP3. In steganography that uses the MP3 format as cover, the secret message can be embedded during or after compression. The next section describes the following three terms:
1. MP3 file structure
2. MP3 encoding
3. MP3 frame headers
A. MP3 file structure
The encoding type used determines the contents of the MP3 file. There are three main components of an MP3 file: the tag, the padding bytes, and the frames, as shown in the following illustration.
There are basically two types of tags:
1. The ID3v1
2. The ID3v2
The ID3v1 tag post-pends the required information by using the end section of the file. The ID3v1 tag has a length of 128 bytes, separated into seven fields holding headings such as the name of the artist, the genre, and the title of the song. One of its disadvantages is its static size; another demerit is the lesser flexibility of implementation. In addition, the ID3v1 tagging system is not accepted by all MP3 files. The ID3v2 tagging system is more compatible and widely accepted because of its flexibility. ID3v2 tags have frames of their own that are capable of storing various items of data. The advantages of these tags are that the pre-pending system places no size limit on the capacity of the information, and that the decoder receives useful hints prior to transmission.
Padding bytes allow additional data to be embedded; the provided data is added to the frame. The principle is that, during encoding, the frame is filled evenly with this additional data. The padding byte is found in CBR (constant bit rate) streams, where it guarantees a constant frame size.
B. MP3 encoding
Encoding determines the quality of both the compressed sound and the compression ratio. There are mainly three types of encoding bit rates:
1. CBR (constant bit rate)
2. VBR (variable bit rate)
3. ABR (average bit rate)
CBR is the standard encoding mechanism used by basic encoders. In this technique, each frame of audio data uses the same bit rate, so the whole MP3 file has a fixed bit rate; the quality of the MP3 file, however, is variable. CBR files are useful when the size of the encoded file must be predictable: the size can be calculated as the product of the length of the song and the chosen bit rate. VBR is the encoding mechanism that keeps the quality of the audio constant during encoding; the size of the resulting file is unpredictable, but the sound quality can be specified. ABR is a technique in which the encoder adds bits by choosing a higher bit rate for the more demanding parts of the music. The result is better quality than CBR, together with a predictable average file size.
C. MP3 frame headers
A frame header consists of bits set to 0 or 1, and in principle it can start with either; in practice, however, the leading bits of a frame header are always set to 1. The following figure illustrates the header. The series of set bits at the start of a header is called the sync; it consists of 12 bits set to one. To keep a longer data block compact, it is not necessary for every frame to have a unique header, but a set of conditions must hold for a long byte data block to be recognised. Determining the size of a frame is not straightforward; some implementations locate the beginning and end of the frames, but this only works if the frames' headers are structurally identical.
There is a defined and recognized equation for calculating the size of a specified frame:
Frame Size = (144 × Bit Rate / Sample Rate) + Padding
Bit rate is measured in bits per second, sample rate is the sampling rate of the original data, and padding refers to the data that is added to the frame during encoding so that the frame is filled evenly.
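As a quick illustration of this equation, the following sketch (Python, with assumed example values of 128 kbps and 44.1 kHz; not taken from the paper) computes the size of an MPEG-1 Layer III frame:

import math

def mp3_frame_size(bitrate_bps, sample_rate_hz, padding):
    # Frame Size = (144 * Bit Rate / Sample Rate) + Padding, truncated to whole bytes
    return math.floor(144 * bitrate_bps / sample_rate_hz) + padding

# Example: 128 kbps stream sampled at 44.1 kHz
print(mp3_frame_size(128_000, 44_100, 0))   # -> 417 bytes (no padding byte)
print(mp3_frame_size(128_000, 44_100, 1))   # -> 418 bytes (padding byte present)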
III. LITERATURE REVIEW
1. Muhammad Asad, Adnan Khalid and Junaid Gilani suggested a three-layered model of audio steganography that uses LSB replacement. The secret message passes through two layers before it reaches the third layer, where it is embedded in the cover message. The file is then transmitted to the receiver over the network, and the reverse process is applied to recover the secret message. The confidentiality of the secret message is the main objective of the paper. They also discussed the difficulties in implementing the three-layered model, such as capacity, robustness and transparency. Experiments with the three-layered model showed a signal-to-noise ratio of 54.78 dB, whereas the conventional LSB method gives 51.12 dB.
2. Lovey Rana and Saikat Banerjee claimed that their model of audio steganography provides improved security for the secret message. They used a dual-layer randomization approach. In the first layer, the byte numbers (samples) are selected randomly. An additional layer of security is added by randomly selecting the bit position at which embedding is done within the selected samples. This algorithm increases the transparency and robustness of the steganography technique.
3. Kirti Gandhi and Gaurav Garg give a variant of the well-known LSB method. The main drawbacks of the LSB method are low robustness and high vulnerability, so two bits (the 2nd and 3rd) are used for hiding the secret message, which increases the data hiding capacity. A filter then minimizes the changes that occur in the stego file. The stego file and the filtered file are used to make a unique key for encryption, and the filtered file along with the key is transmitted to the receiver. The key is needed at the time of decryption at the receiver end.
4. Katariya Vrushabh, Bankar Priyanka and Patil Komal used a genetic algorithm. Robustness is increased by embedding the message bits into higher and multiple LSB layer values, in order to withstand both intentional attacks that try to recover the secret message and unintentional attacks such as noise.
5. Ashwini Mane, Amutha Jeyakumar and Gajanan Galshetwar presented the LSB method, in which the LSB of each consecutive selected sample of the cover audio is replaced by a secret message bit. The LSB method is easy to implement, but at a greater cost to security, since it has low robustness.
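To make the conventional LSB replacement concrete, here is a minimal sketch assuming the cover audio is available as 16-bit PCM samples in a NumPy array; the function names and the zero-valued cover are illustrative only, not taken from the cited work:

import numpy as np

def embed_lsb(samples: np.ndarray, message: bytes) -> np.ndarray:
    # Replace the least significant bit of each sample with one message bit.
    bits = np.unpackbits(np.frombuffer(message, dtype=np.uint8))
    if bits.size > samples.size:
        raise ValueError("cover audio too small for the message")
    stego = samples.copy()
    stego[:bits.size] = (stego[:bits.size] & ~1) | bits
    return stego

def extract_lsb(stego: np.ndarray, n_bytes: int) -> bytes:
    # Read back n_bytes worth of LSBs from the stego samples.
    bits = (stego[:n_bytes * 8] & 1).astype(np.uint8)
    return np.packbits(bits).tobytes()

cover = np.zeros(1024, dtype=np.int16)      # stand-in for a real cover signal
stego = embed_lsb(cover, b"hi")
assert extract_lsb(stego, 2) == b"hi"

Because only the lowest bit of each sample changes, the audible distortion is small, but, as the surveyed papers note, the scheme is fragile against noise and deliberate attack.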
6. S.S. Divya and M. Ram Mohan Reddy presented a method that uses both multiple-LSB steganography and cryptography to enhance the security of the secret message. The most efficient variant alters 4 out of 16 bits per sample of the audio sequence, as this preserves the host audio signal. These novel approaches enhance the capacity of the cover audio for additional data embedding; methods using up to 7 LSBs for data embedding improve the data hiding capacity of the cover audio by 35% to 70% in comparison with the conventional LSB method, which uses 4 LSBs.
7. Gunjan Nehru and Puja Dhar studied audio steganography using the LSB technique and a genetic algorithm approach in detail. Their paper combines the genetic algorithm and LSB approaches to attain better security. It applies the art and science of writing a message in such a way that no one except the sender and the intended recipient suspects its existence.
8. Ajay B. Gadicha presented a new method that uses the 4th LSB layer of the audio. This method reduces the embedding distortion of the host audio. Embedding the message in the 4th layer of LSBs increases robustness against noise addition. Hearing tests showed that the perceptual quality of the audio is much better than with the conventional LSB method.
9. Mazdak Zamani showed the problem of low robustness against attack in the substitution technique, and proposed a solution to it. There are two types of attack: the first tries to reveal the secret message, and the other tries to destroy it. In the conventional LSB method the secret message is embedded in the LSB, which is more vulnerable to attack, so embedding the message in another bit position rather than the LSB makes it more secure. By embedding the message in deeper bits, more robustness can be achieved; the problem is that altering bits closer to the MSB alters the host audio signal. This can be reduced by changing other bits to lower the error, which makes it possible to use multiple deeper bits for embedding and in turn enhances capacity and robustness.
10. Dr. A. Damodaram, R. Sridevi and Dr. S.V.L. Narasimham proposed an efficient method of audio steganography by modifying the conventional LSB method and using a strong encryption key when embedding the secret message. EAS (Enhanced Audio Steganography) is a combination of audio steganography and cryptography. EAS works in two steps: in the first step it secures the data using a powerful encryption algorithm, and in the second step it uses a modified LSB algorithm to embed the secret message into the cover audio file.
IV. APPLICATIONS
Some of the applications of audio steganography are discussed below:
1. Secret communication: Secret messages, confidential data, military data, etc. require secure transmission over the Internet. When sent as they are, such data are vulnerable to eavesdropping and can easily be revealed. To secure the data, we use an audio steganography technique and hide the data inside an audio file, keeping the secret information hidden from the outside world so that no one suspects the secret message.
2. Data storage: Audio steganography can be used in subtitled movies, where the speech of the actor or the music can be used to embed the text.
V. CONCLUSION
Communicating secret messages and information securely has been of prime importance in many fields for a long time. In this paper we presented some of the many audio steganography techniques that may help in secure data transmission. The main challenge in audio steganography is achieving a high-capacity and robust steganography system. The drive to design steganographic systems that ensure high security of the embedded data has led to significant achievements in the field of audio steganography techniques.
REFERENCES
1. Muhammad Asad, Adnan Khalid, Junaid Gilani, "Three Layered Model for Audio Steganography", 2012 International Conference on Emerging Technologies (ICET).
2. Lovey Rana, Saikat Banerjee, "Dual Layer Randomization in Audio Steganography Using Random Byte Position Encoding", International Journal of Engineering and Innovative Technology, Vol. 2, Issue 8, February 2013.
3. Kirti Gandhi, Gaurav Garg, "Modified LSB Audio Steganography Approach", International Journal of Emerging Technology and Advanced Engineering, Vol. 3, Issue 6, June 2012, pp. 158-161.
4. Katariya Vrushabh R., Bankar Priyanka R., Patil Komal K., "Audio Steganography using LSB", International Journal of Electronics, Communication and Soft Computing Science and Engineering, March 2012, pp. 90-92.
5. Ashwini Mane, Amutha Jeyakumar, Gajanan Galshetwar, "Data Hiding Technique: Audio Steganography using LSB Technique", International Journal of Engineering Research and Applications, Vol. 2, No. 4, May-June 2012, pp. 1123-1125.
6. S.S. Divya, M. Ram Mohan Reddy, "Hiding Text In Audio Using Multiple LSB Steganography And Provide Security Using Cryptography", International Journal of Scientific & Technology Research, Vol. 1, pp. 68-70, July 2012.
7. Gunjan Nehru, Puja Dhar, "A Detailed Look Of Audio Steganography Techniques Using LSB And Genetic Algorithm Approach", International Journal of Computer Science (IJCSI), Vol. 9, pp. 402-406, Jan. 2012.
8. Ajay B. Gadicha, "Audio Wave Steganography", International Journal of Soft Computing and Engineering (IJSCE), Vol. 1, pp. 174-177, Nov. 2011.
9. Mazdak Zamani et al., "A Secure Audio Steganography Approach", International Conference for Internet Technology and Secured Transactions, 2009.
10. Dr. A. Damodaram, R. Sridevi, Dr. S.V.L. Narasimham, "Efficient Method of Audio Steganography by Modified LSB Algorithm and Strong Encryption Key With Enhanced Security", Journal of Theoretical and Applied Information Technology, pp. 771-778, 2009.
International Association of Scientific Innovation and Research (IASIR)
(An Association Unifying the Sciences, Engineering, and Applied Research)
ISSN (Print): 2279-0063
ISSN (Online): 2279-0071
International Journal of Software and Web Sciences (IJSWS)
www.iasir.net
Digital Image Processing
Sandeep Singh1, Mayank Saxena2
Student of Computer Science & Engineering,
Department of Computer Science and Engineering,
IMS Engineering College, Ghaziabad, 201009, Uttar Pradesh, INDIA
______________________________________________________________________________________
Abstract: Digital Image Processing is a rapidly evolving field with growing applications in Science and
Engineering. Modern digital technology has made it possible to manipulate multi-dimensional signals. Digital
Image Processing has a broad spectrum of applications. They include remote sensing data via satellite, medical
image processing, radar, sonar and acoustic image processing and robotics. In this paper we will discuss
various components of digital image processing.
______________________________________________________________________________________
I. INTRODUCTION
Image processing is a method of converting an image into digital form and performing operations on it, in order to obtain an enhanced image or to extract useful information from it. It is a type of signal processing in which the input is an image, such as a video frame or photograph, and the output may be an image or characteristics associated with that image. An image processing system usually treats images as two-dimensional signals and applies established signal processing methods to them. Image processing is among the most rapidly growing technologies today, with applications in various aspects of business, and it forms a core research area within the engineering and computer science disciplines. There are two types of methods used for image processing, namely analogue and digital image processing. Analogue image processing can be used for hard copies such as printouts and photographs. In digital photography, the image is stored as a computer file, and this file is translated using photographic software to generate an actual image. The colours, shading, and nuances are all captured at the time the photograph is taken, and the software translates this information into an image. The major benefits of digital image processing are a consistently high image quality, a low cost of processing, and the ability to manipulate all aspects of the process. As long as computer processing speed continues to increase while the cost of storage memory continues to drop, the field is likely to grow. Nowadays image processing is becoming an important assisting tool in many branches of science, such as computer science, electrical and electronic engineering, robotics, physics, chemistry, environmental science, biology, and psychology.
II. LITERATURE REVIEW
An image is a 2-D light intensity function f(x, y), where (x, y) denotes spatial coordinates and the value of f at any point (x, y) is proportional to the brightness or grey level of the image at that point. A digital image is an image f(x, y) that has been discretized in both spatial coordinates and brightness; the elements of such a digital array are called image elements or pixels. Image processing is a technique for enhancing raw images received from cameras or sensors placed on satellites, space probes and aircraft, or pictures taken in normal day-to-day life, for various applications [1]. Image processing systems are becoming popular due to the easy availability of powerful personal computers, large memory devices, graphics software, etc.
A. Need of Digital Image Processing
The digital image can be optimized for the application by enhancing or altering its structure, based for example on the body part imaged, the diagnostic task, or the desired viewing appearance. Digital image processing consists of a collection of techniques that seek to improve the visual appearance of an image or to convert the image to a form better suited for analysis by a human or a machine. It allows one to enhance image features of interest while attenuating detail irrelevant to a given application, and then to extract useful information about the scene from the enhanced image.
B. Components of Digital Image Processing
B1. Image acquisition
Most image acquisition systems currently on the market are based on DSPs. Such systems suffer from high cost, high power consumption and bulky size, limitations that make DSP processors unsuitable for some simple applications. With the development of image processing technology, image acquisition systems based on ARM have become more and more popular. The most commonly used image compression technique is JPEG, so the implementation of a JPEG decoder is particularly useful: such a decoder can decode JPEG images of any compression ratio and display the output on a TFT LCD. The work cited in [1] develops such a JPEG decoding algorithm on an ARM processor, which is portable and efficient as well.
Fig. 1: Image acquisition
B2. Image enhancement
Image enhancement is the task of applying certain alterations to an input image so as to obtain a more visually pleasing image. It is among the simplest and most appealing areas of digital image processing. There are various techniques of image enhancement.
C. Spatial Domain Techniques
Spatial domain techniques directly deal with the image pixels. The pixel values are manipulated to achieve
desired enhancement. Spatial domain techniques like the logarithmic transforms, power law transforms,
histogram equalization, are based on the direct manipulation of the pixels in the image. Spatial techniques are
particularly useful for directly altering the grey level values of individual pixels and hence the overall contrast of
the entire image. But they usually enhance the whole image in a uniform manner which in many cases produces
undesirable results [2]. It is not possible to selectively enhance edges or other required information effectively.
Techniques like histogram equalization are effective in many images.
C1. Point Operation
Point operations are image processing operations applied to individual pixels only. A point operation is represented by
g(m, n) = T[f(m, n)]
where f(m, n) is the input image, g(m, n) is the processed image, and T is the operator defining the modification process, which operates on one pixel at a time.
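For example, a small sketch of one possible point operation T, the image negative, applied pixel by pixel (the sample values are assumed purely for illustration):

import numpy as np

def point_negative(f: np.ndarray, levels: int = 256) -> np.ndarray:
    # g(m, n) = T[f(m, n)] with T(v) = (L - 1) - v, i.e. the image negative
    return (levels - 1) - f

f = np.array([[0, 64], [128, 255]], dtype=np.uint8)
g = point_negative(f)
print(g)   # [[255 191] [127   0]]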
C2. Mask Operation
In mask operation, each pixel is modified according to values in a small neighbourhood.
C3. Global Operation
In global operation, all pixel values in the image are taken into consideration when performing the operation.
D. Frequency Domain Techniques
Frequency domain techniques are based on the manipulation of the orthogonal transform of the image rather
than the image itself. Frequency domain techniques are suited for processing the image according to the
frequency content [3]. The principle behind the frequency domain methods of image enhancement consists of
computing a 2-D discrete unitary transform of the image, for instance the 2-D DFT, manipulating the transform
coefficients by an operator M, and then performing the inverse transform. The orthogonal transform of the
image has two components magnitude and phase. The magnitude consists of the frequency content of the image.
The phase is used to restore the image back to the spatial domain [2]. The usual orthogonal transforms are
discrete cosine transform, discrete Fourier transform, Hartley Transform etc. The transform domain enables
operation on the frequency content of the image, and therefore high frequency content such as edges and other
subtle information can easily be enhanced.
Fig. 2: Image enhancement
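A hedged sketch of this frequency-domain procedure, using the 2-D DFT and an illustrative operator M that boosts high frequencies; the filter shape and test image are assumptions made for demonstration, not the paper's method:

import numpy as np

def high_boost_frequency(image: np.ndarray, k: float = 0.5) -> np.ndarray:
    # Forward 2-D DFT with the low frequencies shifted to the centre
    F = np.fft.fftshift(np.fft.fft2(image))
    rows, cols = image.shape
    y, x = np.ogrid[:rows, :cols]
    dist = np.hypot(y - rows / 2, x - cols / 2)      # distance from the DC component
    M = 1.0 + k * (dist / dist.max())                # operator M: gain grows with frequency
    G = F * M                                        # manipulate the transform coefficients
    g = np.real(np.fft.ifft2(np.fft.ifftshift(G)))   # inverse transform back to spatial domain
    return np.clip(g, 0, 255)

img = np.random.default_rng(0).integers(0, 256, (64, 64)).astype(float)
enhanced = high_boost_frequency(img)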
D1. Image restoration
Image restoration deals with improving the appearance of an image. The image is corrected using different correction methods, such as median filtering, linear filtering and adaptive filtering, in order to restore it to its original form. The image degradation process can be modelled by the following equation [4]:
g(x, y) = H(x, y) * f(x, y) + n(x, y)    (1.1)
where the degradation function H(x, y) represents a convolution kernel that models the blurring introduced by many imaging systems; camera defocus, motion blur and imperfections of the lenses can all be modelled by H. The terms g(x, y), f(x, y) and n(x, y) represent the observed or degraded image, the original or input image, and the additive noise, respectively.
E. Image Restoration Techniques
E1. Median Filtering
As the name suggests, the median filter is a statistical method. For each pixel we find the median of the grey levels in its neighbourhood and replace the pixel by that median [5]:
f^(x, y) = median{g(s, t)}
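A brief sketch of median filtering on a small grayscale array, using SciPy's median_filter as one possible implementation; the noisy test image is assumed for illustration:

import numpy as np
from scipy.ndimage import median_filter

g = np.array([[10, 10, 10],
              [10, 255, 10],     # impulsive (salt) noise at the centre pixel
              [10, 10, 10]], dtype=np.uint8)

f_hat = median_filter(g, size=3)   # f^(x, y) = median of the 3x3 neighbourhood
print(f_hat[1, 1])                 # 10 -> the noisy pixel is replaced by the median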
E2. Adaptive Filtering
An adaptive filter uses the grey and colour space for the removal of impulsive noise in images; all processing is based on the grey and colour space. This can provide the best noise suppression results, better preserve thin lines, edges and image details, and yield better image quality compared to other filters.
E3. Wiener Filtering
The Wiener filter [4] incorporates both the degradation function and the statistical characteristics of noise into the restoration process. The method considers images and noise as random processes, and the objective is to find an estimate f^ of the uncorrupted image f such that the mean square error between them is minimized. This error measure is given by
e^2 = E{(f - f^)^2}
where E{.} is the expected value of the argument.
E4. Histogram Equalization
Histogram equalization is implemented using probability. During histogram equalization the pixel values of the image are listed together with their frequencies of occurrence. Once they are listed, the value of any given point in the output image is calculated using the cumulative probability distribution. In MATLAB this technique is available through the histeq function [5].
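Outside MATLAB, a roughly equivalent sketch of histogram equalization can be written with NumPy using the cumulative distribution directly; the 8-bit grayscale input and level count of 256 are assumptions made for this example:

import numpy as np

def histogram_equalize(image: np.ndarray) -> np.ndarray:
    # Map grey levels through the cumulative distribution of the input histogram.
    hist = np.bincount(image.ravel(), minlength=256)
    cdf = hist.cumsum() / image.size                  # cumulative probability of each level
    lut = np.round(255 * cdf).astype(np.uint8)        # equalizing look-up table
    return lut[image]

img = np.random.default_rng(1).integers(100, 156, (32, 32)).astype(np.uint8)
eq = histogram_equalize(img)
print(img.min(), img.max(), "->", eq.min(), eq.max())  # dynamic range is stretched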
Fig.3: Histogram of median filter
Fig.4: Median filter
F. Morphological processing
Morphology is a technique of image processing based on the shape and form of objects. Morphological methods apply a structuring element to an input image, creating an output image of the same size. The value of each pixel in the output image is based on a comparison of the corresponding pixel in the input image with its neighbours. By choosing the size and shape of the neighbourhood, one can construct a morphological operation that is sensitive to specific shapes in the input image. The morphological operations can first be defined on grayscale images where the source image is planar (single-channel); the definition can then be expanded to full-colour images.
F1. Morphological Operations
The basic morphological operations are erosion, dilation, opening and closing; combinations of these operations are often used to perform morphological image analysis [6].
F1.1 Dilation
Dilation is a transformation that produces an image of the same shape as the original but of a different size. Dilation fills in valleys and enlarges the width of maximum regions, so it can remove negative impulsive noise but does little against positive impulses.
F1.2 Erosion
Erosion is used to shrink objects in the image. It reduces the peaks and enlarges the widths of minimum regions, so it can remove positive impulsive noise but has little effect on negative impulses.
F1.3 Opening Operation
The opening of A by B is obtained by the erosion of A by B, followed by dilation of the resulting image by B:
A ∘ B = (A ⊖ B) ⊕ B
In the case of a square of side 10, with a disc of radius 2 as the structuring element, the opening is a square of side 10 with rounded corners, where the corner radius is 2: the sharp corners start to disappear. In short, the opening of an image is erosion followed by dilation with the same structuring element.
F1.4 Closing Operation
Closing is the reverse of the opening operation. The closing of A by B is obtained by the dilation of A by B, followed by erosion of the resulting structure by B:
A • B = (A ⊕ B) ⊖ B
A related approach is block analysis, in which the entire image is split into a number of blocks and each block is enhanced individually.
Fig.5: Morphological operations
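A small sketch of these four operators on a binary test image, using SciPy's morphology routines; the square object and the 3x3 structuring element are assumed purely for illustration:

import numpy as np
from scipy.ndimage import binary_erosion, binary_dilation

A = np.zeros((7, 7), dtype=bool)
A[2:5, 2:5] = True                      # a 3x3 square object
B = np.ones((3, 3), dtype=bool)         # structuring element

eroded  = binary_erosion(A, structure=B)
dilated = binary_dilation(A, structure=B)
opened  = binary_dilation(binary_erosion(A, structure=B), structure=B)   # A o B = (A erode B) dilate B
closed  = binary_erosion(binary_dilation(A, structure=B), structure=B)   # A . B = (A dilate B) erode B

print(opened.sum(), closed.sum())       # for this simple square both return 9 pixels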
G. Segmentation
Segmentation is the partitioning of an image into its constituent parts or objects. It is an initial step for pattern recognition and image analysis, and one of the most critical judging and analysing functions in image processing. Image segmentation refers to the partition of an image into different regions that are homogeneous or similar in some characteristics and inhomogeneous with respect to the rest. Image segmentation approaches [8] are currently divided into the following categories, based on two properties of an image.
G1. Detecting Discontinuities
Segmentation algorithms based on detecting discontinuities partition an image where its intensity changes abruptly; edge detection is the main example. Edge detection is segmentation by finding the pixels on region boundaries [9]; an edge can be described as the boundary between adjacent parts of an image [9].
G2. Detecting Similarities
Detecting similarities means partitioning an image into regions that are similar according to a set of predefined criteria [10]; this includes image segmentation algorithms such as thresholding, region growing, and region splitting and merging. Thresholding [10] is a very common approach used for region-based segmentation, in which an image is represented as groups of pixels with values greater than or equal to a threshold and pixels with values below the threshold. Thresholding can be used when the user wants to concentrate on the essential and remove unnecessary detail or parts from an image [11]. Clustering is another approach to region segmentation, in which an image is partitioned into sets or clusters of pixels having similarity in feature space. The image types typically considered are grey scale, hyperspectral and medical images.
Fig. 6: Image segmentation
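A short sketch of threshold-based segmentation as described above; using the global mean as the threshold is an assumption made only for this example:

import numpy as np

def threshold_segment(image, threshold=None):
    # Label pixels >= threshold as foreground (1) and the rest as background (0).
    if threshold is None:
        threshold = image.mean()          # simple global threshold
    return (image >= threshold).astype(np.uint8)

img = np.array([[ 20,  30, 200],
                [ 25, 210, 220],
                [ 15,  35, 205]], dtype=np.uint8)
mask = threshold_segment(img)
print(mask)   # bright pixels form the foreground region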
H. Image representation and description
In order to find an image, the image has to be described or represented by certain features. Shape is an important
visual feature of an image. Searching for images using shape features has attracted much attention. There are
many shape representation and description techniques.
H1. Contour-based shape representation and description techniques
Contour shape techniques exploit only shape boundary information [12]. There are generally two very different approaches to contour shape modelling: the continuous (global) approach and the discrete (structural) approach. Continuous approaches do not divide the shape into sub-parts; usually a feature vector derived from the integral boundary is used to describe the shape, and the measure of shape similarity is a metric distance between the acquired feature vectors. Discrete approaches break the shape boundary into segments, called primitives, using a particular criterion; the final representation is usually a string or a graph (or tree), and similarity is measured by string matching or graph matching.
H2. Region-based shape representation and description techniques
In region-based techniques, all the pixels within a shape region are taken into account to obtain the shape representation, rather than only the boundary information used in contour-based methods. Common region-based methods use moment descriptors to describe shapes; other region-based methods include the grid method, shape matrix, convex hull and medial axis. Like contour-based methods, region-based shape methods can also be divided into global and structural methods, depending on whether or not they separate shapes into sub-parts.
Fig.7: Image representation and description
I. Image compression
Image compression [13] is an application of data compression that encodes the original image with fewer bits. The objective of image compression is to reduce the redundancy of the image and to store or transmit data in an efficient form.
I1. Lossy and lossless image compression
Image compression may be lossy or lossless. Lossless compression is preferred for archival purposes and often
for medical imaging, technical drawings, clip art, or comics. Lossy compression methods, especially when used
at low bit rates, introduce compression artifacts. Lossy methods are especially suitable for natural images such
as photographs in applications where minor (sometimes imperceptible) loss of fidelity is acceptable to achieve a
substantial reduction in bit rate. The lossy compression that produces imperceptible differences may be called
visually lossless.
Fig. 8: Image compression
III. Future work
Image processing is already making an impact with medical robots like the da Vinci system, which allow doctors to remotely perform delicate diagnoses and surgeries by "seeing" extremely high-quality 3-D images of what they could not have seen otherwise. "Social X-ray" glasses are being developed to help those suffering from autism decipher body language: built-in grain-sized cameras capture images of faces, and software analyses and compares the various facial expressions (such as confusion, anger, or agreement) with the known expressions in a database. "Digital cameras in many cell phones today look at the same wavelengths of light as we ourselves see. But we will see many new kinds of other sensing modalities as well, which we are not so familiar with today," says Baraniuk. One instance is infra-red imaging, which is already very important in applications like inspecting packages, night-time security, and seeing through mist and heavy fog. Given the infinite applications of image processing, and the various industries it is used in, it is understandable why there are no statistics on it. (Ten years back, there were no statistics on Internet search either.) But given what Google has done, industry experts estimate the market to be at least 30% of the current traditional search market over the next three years. "With the proliferation of mobile devices and apps, it will accelerate at an even faster rate than traditional text search."
IV. Summary
Digital Image Processing (DIP) involves the modification of digital data to improve image qualities with the aid of a computer. The processing helps maximise the clarity, sharpness and detail of features of interest, towards information extraction and further analysis. This form of remote sensing actually began in the 1960s with a limited number of researchers analysing airborne multispectral scanner data and digitised aerial photographs. However, it was not until the launch of Landsat-1, in 1972, that digital image data became widely available for land remote sensing applications. At that time not only were the theory and practice of digital image processing in their infancy, but the cost of digital computers was very high and their computational efficiency was far below present standards. Today, access to low-cost and efficient computer hardware and software is commonplace, and the sources of digital image data are many and varied, ranging from commercial earth resources satellites, airborne scanners, airborne solid-state cameras and scanning microdensitometers to high-resolution video cameras.
References
[1]. P. Naga Vardini, T. Giri Prasad, International Journal of Engineering Research and Applications (IJERA), ISSN: 2248-962, Vol. 2, Issue 4, July-August 2012, pp. 1019-1024.
[2]. Arun R, Madhu S. Nair, R. Vrinthavani and Rao Tatavarti, "An Alpha Rooting Based Hybrid Technique for Image Enhancement", online publication in IAENG, 24 August 2011.
[3]. Raman Maini and Himanshu Aggarwal, "A Comprehensive Review of Image Enhancement Techniques", Journal of Computing, Vol. 2, Issue 3, March 2010, ISSN 2151-9617.
[4]. Gonzalez, R.C., and Woods, R.E., "Digital Image Processing", Prentice Hall of India, 2002.
[5]. Poobal Sumathi, Ravindran G., "The Performance of Fractal Image Compression on Different Imaging Modalities Using Objective Quality Measures", Jan 2011.
[6]. K. Sreedhar and B. Panlal, International Journal of Computer Science & Information Technology (IJCSIT), Vol. 4, No. 1, Feb. 2012.
[7]. R. C. Gonzalez and R. E. Woods, Digital Image Processing, Englewood Cliffs, NJ: Prentice Hall, 2011.
[8]. Panagiotis Sidiropoulos, Vasileios Mezaris, Ioannis (Yiannis) Kompatsiaris and Josef Kittler, "Differential Edit Distance: A Metric for Scene Segmentation Evaluation", IEEE Transactions on Circuits and Systems for Video Technology, Vol. 22, No. 6, June 2012.
[9]. Jesmin F. Khan, Sharif M. A. Bhuiyan, and Reza R. Adhami, "Image Segmentation and Shape Analysis for Road-Sign Detection", IEEE Transactions on Intelligent Transportation Systems, Vol. 12, No. 1, March 2011.
[10]. H. G. Kaganami, Z. Beij, "Region Based Detection versus Edge Detection", IEEE Transactions on Intelligent Information Hiding and Multimedia Signal Processing, pp. 1217-1221, 2009.
[11]. H. P. Narkhede, International Journal of Science and Modern Engineering (IJISME), ISSN: 2319-6386, Volume 1, Issue 8, July 2013.
[12]. D. Chetverikov, Y. Khenokh, "Matching for Shape Defect Detection", Lecture Notes in Computer Science, Vol. 1689, Springer, Berlin, 1999, pp. 367-374.
[13]. Dengsheng Zhang, Guojun Lu, www.elsevier.com/locate/patcog.
[14]. E. Kannan, G. Murugan, International Journal of Advanced Research in Computer Science and Software Engineering, Volume 2, Issue 2, February 2012.
International Association of Scientific Innovation and Research (IASIR)
(An Association Unifying the Sciences, Engineering, and Applied Research)
ISSN (Print): 2279-0063
ISSN (Online): 2279-0071
International Journal of Software and Web Sciences (IJSWS)
www.iasir.net
Security Threats Using Cloud Computing
Satyam Rai1, Siddhi Saxena2
Student of Computer Science & Engineering,
Department of Computer Science and Engineering,
IMS Engineering College, Ghaziabad, 201009, Uttar Pradesh, INDIA
__________________________________________________________________________________________
Abstract: Cloud computing is the set of resources and services offered through the Internet. These services are delivered from data centers located throughout the world. Consumers are facilitated by being provided virtual resources via the Internet; examples of cloud services are Google Apps, provided by Google, and Microsoft SharePoint. The wide acceptance of the WWW has raised security risks along with its uncountable benefits, and the same is true of cloud computing. The boom in cloud computing has brought many security challenges for consumers and service providers. How can the end users of cloud computing know that their information does not have any availability or security issues? Everyone asks: is their information secure? This study aims to identify the most vulnerable security threats in cloud computing, which will enable both end users and vendors to know about the key security threats associated with cloud computing. Our work will enable researchers and security professionals to learn about user and vendor concerns and to critically analyse the different security models and tools proposed.
_______________________________________________________________________________________
I. Introduction
Threat: An Overview
A threat is an act of coercion wherein an act is proposed to elicit a negative response. It is a communicated
intent to inflict harm or loss on another person. It can be a crime in many jurisdictions. Threat (intimidation) is
widely seen in animals, particularly in a ritualized form, chiefly in order to avoid the unnecessary physical
violence that can lead to physical damage or death of both conflicting parties.
Defining Cloud Computing
The internet is often depicted as a collection of clouds, so "internet computing" effectively means "cloud computing"; thus cloud computing can be defined as utilizing the internet to provide technology-enabled services to people and organizations. Consumers can use cloud computing to access resources online through the internet, from anywhere at any time, without worrying about the technical or physical management and maintenance of the original resources. Cloud computing is cheaper than other computing models; essentially zero maintenance cost is involved, since the service provider is responsible for the availability of services and clients are free from the maintenance and management of the resource machines.
Scalability is a key attribute of cloud computing: the ability of a system, network, or process to handle a growing amount of work in a capable manner, or its ability to be enlarged to accommodate that growth. Once a cloud has been created, the deployment of cloud computing differs according to the requirements and the purpose for which it will be used. The principal service models being deployed are:
Software as a Service (SaaS): Software is provided as a service to consumers according to their requirements, enabling them to use services that are hosted on the cloud server.
Platform as a Service (PaaS): Clients are given platform access, which enables them to put their own customized software and other applications on the cloud.
Infrastructure as a Service (IaaS): Processing, storage, network capacity, and other basic computing resources are rented out, enabling consumers to manage the operating systems, applications, storage, and network connectivity.
II. Review
Cloud computing is a well-known new concept that presents a large number of benefits for its users; however, some security problems also exist which may slow down its adoption. Understanding what vulnerabilities exist in cloud computing will help organizations make the shift towards the cloud. Since cloud computing leverages many technologies, it also inherits their security issues. Traditional web applications, data hosting, and virtualization have been examined, but some of the solutions offered are immature or non-existent. We presented security issues for the cloud models SaaS, PaaS, and IaaS earlier; these vary depending on the model. As described, storage, virtualization, and networks are the biggest security concerns in cloud computing. Virtualization, which allows multiple users to share a physical server, is one of the major concerns for cloud users. Another challenge is that there are different types of virtualization technologies, and each type may
approach security mechanisms in different ways. Virtual networks are also a target for some attacks, especially when communicating with remote virtual machines. Some surveys have discussed security issues of clouds without making any distinction between vulnerabilities and threats; we have focused on this distinction, which we consider important for understanding these issues. Cloud computing represents one of the most significant shifts in the IT sector that many of us are likely to see in our lifetimes. Reaching the point where computing functions as a
utility has great potential, promising innovations we cannot yet imagine. Customers are both excited and
nervous at the prospects of Cloud Computing. They are excited by the opportunities to reduce capital costs.
They are excited for a chance to divest themselves of infrastructure management, and focus on core
competencies. Most of all, they are excited by the agility offered by the on-demand provisioning of computing
and the ability to align information technology with business strategies and needs more readily. However,
customers are also very concerned about the risks of Cloud Computing if not properly secured, and the loss of
direct control over systems for which they are nonetheless accountable. To aid both cloud customers and cloud
providers, CSA developed “Security Guidance for Critical Areas in Cloud Computing”. This guidance has
quickly become the industry standard catalogue of best practices to secure Cloud Computing, consistently
lauded for its comprehensive approach to the problem, across 13 domains of concern. Numerous organizations
around the world are incorporating the guidance to manage their cloud strategies. The great breadth of
recommendations provided by CSA guidance creates an implied responsibility for the reader. Not all
recommendations are applicable to all uses of Cloud Computing. Some cloud services host customer
information of very low sensitivity, while others represent mission critical business functions. Some cloud
applications contain regulated personal information, while others instead provide cloud-based protection against
external threats. It is incumbent upon the cloud customer to understand the organizational value of the system
they seek to move into the cloud. Ultimately, CSA guidance must be applied within the context of the business
mission, risks, rewards, and cloud threat environment — using sound risk management practices.
The purpose of this document, “Top Threats to Cloud Computing”, is to provide needed context to assist
organizations in making educated risk management decisions regarding their cloud adoption strategies. In
essence, this threat research document should be seen as a companion to “Security Guidance for Critical Areas
in Cloud Computing”. As the first deliverable in the CSA’s Cloud Threat Initiative, the “Top Threats” document
will be updated regularly to reflect expert consensus on the probable threats which customers should be
concerned about.
III. Architecture of Cloud Computing
The key functions of a cloud management system are divided into four layers: the Resources & Network Layer, the Services Layer, the Access Layer, and the User Layer. Each layer includes a set of functions:
• The Resources & Network Layer manages the physical and virtual resources.
• The Services Layer includes the main categories of cloud services (NaaS, IaaS, PaaS, SaaS/CaaS), the service orchestration function and the cloud operational function.
• The Access Layer includes the API termination function, and the inter-cloud peering and federation function.
• The User Layer includes the end-user function, the partner function and the administration function.
The three cloud service models (SaaS, PaaS and IaaS) not only provide different types of services to end users but also disclose information security issues and risks of cloud computing systems. First, hackers might abuse the powerful computing capability provided by clouds to conduct illegal activities. IaaS is located in the bottom layer and directly provides the most powerful functionality of an entire cloud. It maximizes extensibility for users to customize a "realistic" environment that includes virtual machines running different operating systems. Hackers could rent the virtual machines, analyze their configurations, find their vulnerabilities, and attack other customers' virtual machines within the same cloud. IaaS also enables hackers to perform attacks that need high computing power, e.g. brute-force cracking. Since IaaS supports multiple virtual machines, it provides an ideal platform for hackers to launch attacks that require a large number of attacking instances, such as distributed denial of service (DDoS) attacks.
Second, data loss is an important security risk of cloud models. In SaaS cloud models, companies use applications to process business data and store customers' data in the data centers. In PaaS cloud models, developers use data to test software integrity during the system development life cycle (SDLC). In IaaS cloud models, users create new drives on virtual machines and store data on those drives. However, data in all three cloud models can be accessed by unauthorized internal employees, as well as external hackers. Internal employees are able to access data intentionally or accidentally. External hackers gain access to databases in cloud environments using a range of hacking techniques such as session hijacking and network channel eavesdropping.
Third, traditional network attack strategies can be applied to attack the three layers of cloud systems. For example, web browser attacks are used to exploit the authentication, authorization, and accounting vulnerabilities of cloud systems, and malicious programs (e.g. viruses and Trojans) can be uploaded to cloud systems and cause damage. Fourth, malicious operations (e.g. metadata spoofing attacks) can be embedded in a normal command, passed to clouds, and executed as valid instances. Fifth, in IaaS, the hypervisor (e.g. VMware vSphere or Xen) conducting administrative operations on virtual instances can be compromised by a zero-day attack.
IV. Identity Management
Identities for accessing a cloud service are generated by the cloud service provider, and each user uses his identity to access the service. Unauthorized access to cloud resources and applications is a major issue: a malicious entity can impersonate a legitimate user and access a cloud service, and when many such malicious entities acquire cloud resources the service becomes unavailable to actual users. It may also happen that a user crosses his boundary at the time of service usage in the cloud environment, for example by accessing a protected area in memory. Globally, 47% of those currently using a cloud computing service reported that they have experienced a data security lapse or issue with the cloud service their company is using within the last 12 months. The incidence of data security lapses or issues increased from 43% in 2011 to 46% in 2012 (excluding Brazil, which was not surveyed in 2011). India had the biggest increase, of 12%, followed by Japan (7% increase) and Canada (6% increase). The figure below illustrates these statistics.
Data security lapse statistics in different countries around the world
V. Conclusion
Security concerns are an active area of research and experimentation. Much research is ongoing to address issues such as network security, data protection, virtualization and isolation of resources. Addressing these issues requires winning user confidence in cloud applications and services. User confidence can be obtained by creating trust in cloud resources and applications, which is a crucial issue in cloud computing, and trust management is therefore attracting much attention. Providing secure access to the cloud through trusted cloud computing and through service level agreements made between the cloud provider and the user requires considerable trust and reputation management. We will be focusing on the analysis of such solutions in the cloud computing environment, and much of our survey is based on the field of trust and trust management. In this article we gave an overview of the security threats of cloud computing, provided the reader with some effective countermeasures, and introduced the main elements of security in cloud computing.
VI. References
[1] NIST (Authors: P. Mell and T. Grance), "The NIST Definition of Cloud Computing (ver. 15)," National Institute of Standards and Technology, Information Technology Laboratory, October 7, 2009.
[2] J. McDermott, "Security Requirements for Virtualization in Cloud Computing," presented at the ACSAC Cloud Security Workshop, Honolulu, Hawaii, USA, 2009.
[3] J. Camp, "Trust and Risk in Internet Commerce," MIT Press, 2001.
[4] T. Ristenpart et al., "Hey You Get Off My Cloud," Proceedings of the 16th ACM Conference on Computer and Communications Security, Chicago, Illinois, USA, 2009.
[5] M. Armbrust et al., "Above the Clouds: A Berkeley View of Cloud Computing," UC Berkeley Reliable Adaptive Distributed Systems Laboratory, February 10, 2009.
[6] Cloud Security Alliance, "Security Guidance for Critical Areas of Focus in Cloud Computing, ver. 2.1," 2009.
[7] M. Jensen et al., "On Technical Security Issues in Cloud Computing," presented at the 2009 IEEE International Conference on Cloud Computing, Bangalore, India, 2009.
[8] P. Mell and T. Grance, "Effectively and Securely Using the Cloud Computing Paradigm," National Institute of Standards and Technology, Information Technology Laboratory, 2009.
International Association of Scientific Innovation and Research (IASIR)
(An Association Unifying the Sciences, Engineering, and Applied Research)
ISSN (Print): 2279-0063
ISSN (Online): 2279-0071
International Journal of Software and Web Sciences (IJSWS)
www.iasir.net
Review of Data Mining Techniques
Sandeep Panghal1, Priyanka Yadav2
Student of Computer Science & Engineering,
Department of Computer Science and Engineering,
IMS Engineering College, Ghaziabad, 201009, Uttar Pradesh, INDIA
Abstract: The evolution of information technology has produced a large number of databases holding huge amounts of data [5]. Many techniques are used to store and handle this data so as to support decision making. Data mining is the process of searching for meaningful information and interesting patterns in huge data; it is also known as knowledge discovery in databases [6]. This literature review provides an overview of the most common data mining techniques. It discusses some of the most common and important techniques of data mining, namely association, clustering, classification, prediction and sequential patterns. Data mining is widely used in business environments. Data mining tasks are distinct in nature due to the large number of patterns found in databases.
Keywords: Data mining, Data mining techniques, Association, Classification, Clustering, Prediction, Sequential Pattern, Decision trees
I. Introduction
There is an exponential growth in the data being generated and stored. Along with tremendous opportunities for those who have the capability to unlock the information embedded within this data, this growth introduces new challenges. The topics covered are: what data mining is, what challenges it faces, what types of problems it can address, and why it developed [5]. Data mining can be considered a search for valuable information in large volumes of data: the non-trivial extraction of implicit, previously unknown and potentially valuable information, such as knowledge rules, constraints and regularities, from data stored in repositories, using pattern recognition as well as mathematical and statistical technologies. Data mining is recognized by many companies as an important technique that will have an impact on their performance. It is an active research area, and research is ongoing to bring artificial intelligence (AI) techniques and statistical analysis together to address its issues [15]. Data mining is an analytical process designed to scrutinize or explore an opulence of data, typically market- or business-related (also known as big data). It is about searching for consistent patterns or systematic relationships between variables, so that findings can be validated by applying the detected patterns. The major elements of data mining are:
• Extract, transform, and load transaction data onto the data warehouse system.
• Provide data access to business analysts and information technology professionals.
• Present the data in a useful format, such as a graph or table.
• Analyze the data with application software.
• Store and manage the data in a multidimensional database system [15].
II. Literature Review
Data mining is an important area from a research point of view, and artificial neural networks solve many problems of data mining. An artificial neural network is a biologically inspired system that helps in making predictions and in pattern detection. Neural networks and genetic algorithms are common data mining methods; with the help of a genetic algorithm we can build a better solution by combining the good parts of other solutions, and many scientific applications depend on these methodologies [1]. Data mining makes use of visualization techniques, machine learning and various statistical techniques to discover patterns in databases that can be easily understood by humans. There are various data mining tools such as WEKA, Tanagra, RapidMiner and Orange. WEKA stands for Waikato Environment for Knowledge Analysis; it was developed to identify information in raw data collected from agricultural domains. The TANAGRA tool is used for research purposes; it is an open-source application and helps researchers add their own data mining methods. One can use data mining tools to answer "what-if" questions and to help demonstrate real effects [2]. Data
mining and knowledge discovery in database (KDD) are related to each other. KDD is used for making sense of
data. Today there is a great need of tools and computational theories which can help humans in the extraction of
useful and meaningful information from the volumes of data present today. Data mining is a step in KDD
process in which data analysis is done to uncover hidden value in databases [3]. Many problems in data mining research have been identified: the lack of a unifying theory for individual problems in data mining, handling high-dimensional data, mining time series and sequence data, extracting complex knowledge from complex databases, and applying data mining methods to environmental and biological problems are some of the main ones [4]. The data mining process has various steps: data cleaning, data integration, data selection, data preprocessing, data transformation, the data mining method itself, and interpretation or knowledge discovery. Today data is growing at a tremendous pace; this data must be stored cost-effectively, and it must also be analysed so as to extract meaningful information from it. Various data mining tasks are employed for this purpose, such as classification, regression, cluster analysis, text analysis and link analysis [5]. Data mining can be used in business environments, weather forecasting, product design, load prediction, etc. It can be viewed as a decision support system rather than just the use of traditional query languages. Data mining has become one of the most important areas in database and information technology [6]. Data mining can be used for
decision making in the pharmaceutical industry. Data in an organization can be used for extracting hidden information rather than just for administrative purposes. A user interface designed to accept all kinds of information from the user can be scanned, and a list of warnings can be issued on the basis of the information entered. The neural network technique of data mining can be used to clinically test drugs on patients, and new drugs can be generated through clustering, classification and neural networks [7]. Sequential pattern mining is used to discover interesting sequential patterns in large databases. It can be classified into two groups,
namely Apriori based and Pattern growth based. Pattern growth based algorithms are more efficient than Apriori
based algorithms in terms of space utilization, running time complexity and scalability [8]. Association rule
mining is used to uncover interesting relationships among data. Many companies want to increase their profits
by mining association rules from the large datasets. Association rule is generally used in market based analysis
to analyze customer buying habits. Association rule is simple and can be implemented easily so as to increase
the profits [9]. Data mining applications are limited to educational contexts. Data mining approach can be
applied to educational data sets. Educational data mining (EDM) has emerged to solve educational related
problems. It can be used to understand student retention and attrition so as to make personal recommendation to
every individual student. EDM is also applied to admissions and enrollment [10].
III. Techniques of Data Mining
There are different kinds of methods and techniques for finding meaningful patterns in databases. There are two kinds of data mining models: predictive and descriptive. A predictive model helps to predict the value of a variable based on known data, while a descriptive model helps to study the characteristics of a data set [1]. The techniques used in data mining projects are association, classification, clustering, decision trees, sequential patterns and prediction.
A. Association
Association rule mining is a descriptive data mining technique. Association rule analysis is used to discover interesting relationships hidden in large data sets [5]; in other words, it reveals associative relationships among objects that would otherwise remain uncovered. The association technique is generally used in market basket analysis. Other application domains are bioinformatics, medical diagnosis, scientific analysis and web mining. Association rules also help in targeted marketing and advertising, floor planning, inventory control, churn management and homeland security. An association rule describes a relationship between two disjoint itemsets X and Y, written X → Y, and expresses the pattern that when X occurs, Y also occurs.
1) Example: Market Basket Analysis
Consider the example of a store [9] which sells daily products such as bread, butter, milk, ice cream, beer, Pepsi and cold coffee. The store owner might want to know which of these items customers are likely to buy together; that is, they are interested in analyzing the data to learn about the purchasing behavior of their customers. The following example illustrates market basket analysis. Consider a database consisting of 5 transactions.
Transaction      Items
Customer 1       {Bread, Milk, Beer}
Customer 2       {Bread, Milk, Pepsi}
Customer 3       {Pepsi, Cold Coffee}
Customer 4       {Eggs, Bread, Butter, Milk, Pepsi}
Customer 5       {Pepsi, Beer}
The rule {Bread} → {Milk} can be extracted from this data set. It suggests that many customers who buy bread also buy milk. Retailers can use such rules to increase their sales.
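For illustration, the following short Python sketch (not taken from the paper) computes the support and confidence of the rule {Bread} → {Milk} over the five example transactions above; the variable and function names are our own.

# Minimal sketch: support and confidence of {Bread} -> {Milk} over the example data.
transactions = [
    {"Bread", "Milk", "Beer"},
    {"Bread", "Milk", "Pepsi"},
    {"Pepsi", "Cold Coffee"},
    {"Eggs", "Bread", "Butter", "Milk", "Pepsi"},
    {"Pepsi", "Beer"},
]

def support(itemset, transactions):
    """Fraction of transactions containing every item of the itemset."""
    hits = sum(1 for t in transactions if itemset <= t)
    return hits / len(transactions)

def confidence(antecedent, consequent, transactions):
    """Estimated P(consequent | antecedent) from the transactions."""
    return support(antecedent | consequent, transactions) / support(antecedent, transactions)

print(support({"Bread", "Milk"}, transactions))        # 0.6
print(confidence({"Bread"}, {"Milk"}, transactions))   # 1.0

In this toy data the rule holds with support 0.6 and confidence 1.0, which is why a retailer would consider it interesting.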
B. Clustering
Clustering is a descriptive task in which data is divided into groups of similar objects; each group is called a cluster [5]. Dissimilar objects are placed in different clusters. Many data objects can thus be represented by a few clusters, which achieves simplification but may lose some finer details. Clustering algorithms can be applied to web analysis, computational biology, CRM, marketing and medical diagnosis.
IV. Types of Clustering Technique
The two main types of clustering technique are hierarchical and non-hierarchical clustering. In hierarchical clustering techniques a cluster hierarchy is created from small to big: the hierarchy of clusters can be viewed as a tree in which the smallest clusters are merged together to create the next larger level of clusters in the hierarchy.
The cluster hierarchy enables the user to determine the number of clusters that best summarizes the information available in the dataset.
1). Hierarchical clustering: these algorithms create a hierarchy of clusters. The two main types of hierarchical clustering algorithm are:
- Agglomerative clustering: a bottom-up technique which starts with n clusters, each containing one record. The clusters which are most similar or nearest to each other are merged into one cluster, and this process is repeated until a single cluster containing all the data remains (a small code sketch follows at the end of this subsection).
- Divisive hierarchical clustering: a top-down method that works in the opposite direction to agglomerative clustering. It starts with one cluster containing all the records and then splits that cluster into smaller clusters until only clusters of single records remain.
2). Non-hierarchical clustering: in non-hierarchical clustering techniques the relationship between the clusters is undetermined. These techniques sometimes create clusters by going through the database only once, adding each record to an existing cluster or creating a new one. The user has to decide how many clusters are required. Non-hierarchical techniques are faster than hierarchical techniques. The two main non-hierarchical techniques are single-pass methods and reallocation methods: in single-pass methods the database is scanned only once to create the clusters, whereas reallocation methods can go through the database multiple times so as to create better clusters.
Hierarchical techniques have an advantage over non-hierarchical techniques in that the clusters are defined only by the data or the records, whereas in non-hierarchical clustering the user predetermines the number of clusters. The number of clusters can also be increased or decreased by moving up or down the hierarchy, and the hierarchy can yield smaller clusters which may be helpful for discovery.
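As a hedged illustration of bottom-up (agglomerative) clustering, the following Python sketch (not from the paper) uses SciPy's hierarchical clustering utilities on toy two-dimensional points; the data and parameter choices are invented for the example.

# Minimal sketch of agglomerative clustering: merge nearest clusters, then cut the tree.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

points = np.array([[1.0, 1.0], [1.2, 0.9], [5.0, 5.1], [5.2, 4.9], [9.0, 1.0]])

# Start with one record per cluster and repeatedly merge the closest pair.
merge_tree = linkage(points, method="single")   # encodes the full cluster hierarchy

# Cutting the hierarchy at a chosen level yields a flat clustering; here 3 clusters.
labels = fcluster(merge_tree, t=3, criterion="maxclust")
print(labels)   # the two left points, the two middle points, and the outlier

Moving the cut up or down the tree corresponds to the observation above that the number of clusters can be increased or decreased along the hierarchy.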
A. Sequential Pattern
Sequential pattern mining is a technique for discovering relevant sequential patterns in large databases with a user-specified minimum support [13]. It finds frequent sub-sequences in a given set of data sequences. The sequential pattern problem was first introduced in 1995 by Agrawal and Srikant [13] and was defined as:
"Given a database of sequences, where each sequence consists of a list of transactions ordered by transaction time and each transaction is a set of items, sequential pattern mining is to discover all sequential patterns with a user-specified minimum support, where the support of a pattern is the number of data-sequences that contain the pattern."
1). Some basic concepts [8] of sequential pattern mining are:
1. An itemset is a non-empty subset of items. Let I = {i1, i2, …, iN} be the set of items, where each item is associated with some attribute. An itemset with k items is called a k-itemset.
2. A sequence α = {A1, A2, …, Al} is an ordered list of itemsets. An itemset Ai (1 ≤ i ≤ l) in a sequence is called a transaction.
3. The number of transactions in a sequence is called the length of the sequence; a sequence of length l is called an l-sequence.
4. Given two sequences α = {A1, A2, …, An} and β = {B1, B2, …, Bm} with n ≤ m, α is called a subsequence of β, denoted α ⊆ β, if there exist integers 1 ≤ i1 < i2 < … < in ≤ m such that A1 ⊆ Bi1, A2 ⊆ Bi2, …, An ⊆ Bin. A small code sketch of this subsequence test is given below.
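The following Python sketch (our own illustration, not from the paper) implements the subsequence test of concept 4: each itemset of α must be contained, in order, in some itemset of β.

# Minimal sketch: test whether sequence alpha is a subsequence of sequence beta.
def is_subsequence(alpha, beta):
    """alpha, beta: lists of sets (itemsets). True if alpha is a subsequence of beta."""
    j = 0
    for itemset in alpha:
        # advance through beta until an itemset containing `itemset` is found
        while j < len(beta) and not itemset <= beta[j]:
            j += 1
        if j == len(beta):
            return False
        j += 1          # the next itemset of alpha must match strictly later in beta
    return True

print(is_subsequence([{"a"}, {"b", "c"}], [{"a", "d"}, {"e"}, {"b", "c", "f"}]))  # True
print(is_subsequence([{"a"}, {"b"}], [{"b"}, {"a"}]))                              # False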
2). Sequential Pattern Mining Algorithms:
Sequential pattern mining algorithms can be divided into two categories: Apriori-based and pattern-growth-based.
V. Apriori-Based Algorithms
These algorithms depend on the Apriori property, which states that "if a sequence S is not frequent then none of the super-sequences of S will be frequent". Key features of Apriori-based algorithms are breadth-first search, generate-and-test, and multiple scans of the database. Representative algorithms are GSP, SPADE and SPAM.
1) GSP (Generalized Sequential Pattern): This algorithm makes multiple passes over the data. The steps involved in GSP are candidate generation and candidate pruning. An outline of the method [8] is:
- Initially, every single item in the database is a candidate sequence of length 1.
- For each level (sequences of length k):
  - scan the database to collect the support count for each candidate sequence;
  - use Apriori-style joining to generate candidate length-(k+1) sequences from the frequent length-k sequences.
- Repeat until no frequent sequence or no candidate can be found. A simplified code sketch of this level-wise loop follows.
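The Python sketch below is a deliberately simplified, illustrative version of the level-wise loop above (not the GSP algorithm as published and not the authors' code): it restricts itemsets to single items when extending candidates and counts support with a plain subsequence scan.

# Simplified GSP-style level-wise loop: generate candidates, count support, prune.
def gsp(sequences, min_support):
    """sequences: list of lists of sets; min_support: absolute count."""
    def contains(seq, cand):
        j = 0
        for itemset in cand:
            while j < len(seq) and not itemset <= seq[j]:
                j += 1
            if j == len(seq):
                return False
            j += 1
        return True

    def count(cand):
        return sum(1 for s in sequences if contains(s, cand))

    items = sorted({i for s in sequences for itemset in s for i in itemset})
    freq_items = [i for i in items if count([{i}]) >= min_support]
    frequent = [[{i}] for i in freq_items]           # frequent length-1 sequences
    result = list(frequent)
    while frequent:
        # candidate generation: extend each frequent k-sequence by one frequent item
        candidates = [seq + [{i}] for seq in frequent for i in freq_items]
        # candidate pruning: keep only candidates meeting the minimum support
        frequent = [c for c in candidates if count(c) >= min_support]
        result.extend(frequent)
    return result

db = [[{"a"}, {"b"}, {"c"}], [{"a"}, {"c"}], [{"b"}, {"c"}]]
print(gsp(db, min_support=2))   # includes [{a},{c}] and [{b},{c}]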
2.) SPADE (Sequential Pattern Discovery using Equivalence classes): This is an Apriori-based, vertical-format sequential pattern algorithm [12]. The sequences in the database are stored in vertical order rather than horizontal order: each item is associated with an id-list of (sequence-id, timestamp) pairs, where the first value identifies the customer sequence and the second the transaction within it. The algorithm can use both breadth-first and depth-first search. A lattice-theoretic approach is used to divide the original search space into smaller sub-lattices which can be processed independently in main memory. SPADE reduces input-output costs as well as computational costs.
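To make the vertical format concrete, the following small Python sketch (our illustration, with invented data) converts a horizontal sequence database into the id-list representation described above.

# Illustrative sketch of SPADE's vertical format: item -> list of (sequence-id, timestamp).
horizontal = {
    1: [(10, {"a", "b"}), (20, {"c"})],     # sequence 1: transactions at t=10 and t=20
    2: [(15, {"a"}), (25, {"b", "c"})],     # sequence 2
}

vertical = {}
for sid, transactions in horizontal.items():
    for timestamp, itemset in transactions:
        for item in itemset:
            vertical.setdefault(item, []).append((sid, timestamp))

print(vertical["a"])   # [(1, 10), (2, 15)]
print(vertical["c"])   # [(1, 20), (2, 25)]

Frequent sequences can then be grown by joining such id-lists instead of rescanning the whole database, which is where the input-output savings come from.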
3.) SPAM (Sequential Pattern Mining): SPAM [14] uses a vertical bitmap representation of the database. It integrates concepts from the GSP, SPADE and FreeSpan algorithms. The sequence tree is traversed in a depth-first manner, which improves performance. Merging cost is reduced, but SPAM takes more space than SPADE, so there is a space-time trade-off.
VI. Pattern Growth Algorithms
In Apriori-based algorithms the number of generated candidates can grow exponentially, so when databases were large the candidate generation and candidate pruning steps suffered greatly. This led to pattern-growth algorithms, in which candidate generation and candidate pruning are avoided. Pattern-growth algorithms are more complex to develop, test and maintain, but they are faster when the database is large. The key features of pattern-growth algorithms are partitioning of the search space, tree projection and depth-first traversal. Representative algorithms are FreeSpan and PrefixSpan.
1) FREESPAN (Frequent pattern-projected Sequential Pattern Mining): This algorithm was developed to reduce the candidate generation and testing of Apriori. Frequent items are used to recursively project the sequence database into projected databases, and these projected databases are used to confine the search and the growth of subsequence fragments [8][14]. The size of each projected database shrinks with each recursion.
2) PREFIXSPAN (Prefix-projected Sequential pattern mining): This algorithm [11] finds the frequent items by going through the sequence database only once. It is generally the fastest of these algorithms. It uses a divide-and-conquer technique based on prefix projection, which reduces the effort of candidate subsequence generation and shrinks the size of the projected databases, although memory cost may be high due to the creation of a large number of projected sub-databases. It is a depth-first-search-based approach and one of the most efficient pattern-growth methods, performing better than GSP and FreeSpan. PrefixSpan makes the search space smaller; longer sequential patterns are grown from shorter frequent ones.
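The following Python sketch (our own simplification, not the PrefixSpan implementation from [11]) illustrates the key idea of prefix projection; for brevity, sequences are lists of single items rather than itemsets.

# Minimal sketch of PrefixSpan: project the database on a prefix, then grow patterns recursively.
def project(database, item):
    """Suffixes of each sequence that follow the first occurrence of `item`."""
    projected = []
    for seq in database:
        if item in seq:
            projected.append(seq[seq.index(item) + 1:])
    return projected

def prefixspan(database, min_support, prefix=()):
    patterns = []
    items = {i for seq in database for i in seq}
    for item in sorted(items):
        support = sum(1 for seq in database if item in seq)
        if support >= min_support:
            new_prefix = prefix + (item,)
            patterns.append((new_prefix, support))
            # recurse only into the (smaller) projected database
            patterns.extend(prefixspan(project(database, item), min_support, new_prefix))
    return patterns

db = [["a", "b", "c"], ["a", "c"], ["b", "c"]]
print(prefixspan(db, min_support=2))
# [(('a',), 2), (('a', 'c'), 2), (('b',), 2), (('b', 'c'), 2), (('c',), 3)]

Each recursive call works on a projected database that is smaller than its parent, which is why longer patterns are grown cheaply from shorter frequent ones.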
A. Classification and Regression
Classification and regression methods are predictive in nature and involve building a model to predict a target variable from a set of explanatory (independent) variables. In classification the target variable takes a small number of discrete values, whereas in regression the target variable is continuous. An example classification task is detecting fraudulent credit card transactions (Fawcett and Provost 1997); an example regression task is predicting future prices of a stock (Enke and Thawornwong 2005) [5].
B. Predictive Data Mining Algorithms
A basic knowledge of data mining algorithms is essential in order to know when each algorithm is applicable, to understand the advantages and disadvantages of each algorithm, and to use the algorithms to solve real-world problems.
1.) Decision Trees: One of the popular classes of learning algorithms for classification tasks is the decision tree. Each internal node of the tree represents an attribute, and each terminal (leaf) node is labeled with a class value. To classify an example, one follows the branches that match the example's attribute values until a leaf node is reached; the class value assigned to that leaf is the predicted value for the example. Consider an example in which a person will default on their automobile loan if their credit rating is "poor", or if their rating is not "poor" (i.e., "fair" or "excellent") but the person is "middle aged" and their income level is "low".
Figure 1: Decision tree model for automobile loan data (root node: credit rating, with further splits on age and income level; leaves labeled Default = yes / Default = no).
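As a hedged illustration of how such a tree is followed at prediction time, the Python sketch below encodes the rule described in the text above (default if credit rating is "poor", or if the person is middle-aged with low income); the attribute values are illustrative strings, not the paper's data.

# Minimal sketch: follow the branches of the loan-default decision tree to a leaf.
def predict_default(credit_rating, age_group, income_level):
    if credit_rating == "poor":
        return "yes"
    # credit rating is "fair" or "excellent": check age and income
    if age_group == "middle aged" and income_level == "low":
        return "yes"
    return "no"

print(predict_default("poor", "youth", "high"))          # yes
print(predict_default("fair", "middle aged", "low"))     # yes
print(predict_default("excellent", "youth", "medium"))   # no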
2.) Rule-Based Classifiers: Rule-based classifiers generate rule sets. In some classifiers the first rule to fire determines the classification; in others all rules are evaluated and the final classification is made by a voting scheme. Rule-based classifiers are similar to decision tree learners, with comparable comprehensibility, computation time and expressive power. The connection between the two classification methods is even more direct, since any decision tree can trivially be converted into a set of mutually exclusive rules by creating one rule for the path from the root of the tree to each leaf. Some rule-based learners, such as C4.5Rules (Quinlan 1993), operate this way; others, such as RIPPER (Cohen 1995), generate rules directly.
3.) Artificial Neural Networks: Artificial Neural Networks (ANNs) were originally inspired by attempts to simulate some of the functions of the brain, and can be used for both classification and regression tasks (Gurney 1997). An ANN is an interconnected set of nodes comprising an input layer, zero or more hidden layers and an output layer.
The ANN computes the output value from the input values as follows. First, the input values are taken from the attributes of the training example and fed into the network. These values are weighted and passed to the next set of nodes, the hidden nodes; a non-linear activation function is applied to each weighted sum, and the resulting value is passed to the next layer, where the process is repeated until the final output value(s) are produced. The ANN learns by incrementally modifying its weights so that, during the training phase, the predicted output value moves closer to the observed value. The most popular algorithm for modifying the weights is the back-propagation algorithm (Rumelhart, Hinton, and Williams 1986). Because of the nature of ANN learning, the entire training set is applied repeatedly; each application is referred to as an epoch.
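The following Python sketch (a minimal illustration with invented sizes and data, not the paper's network) shows one hidden layer, a forward pass, and a back-propagation weight update repeated over epochs.

# Minimal sketch: forward pass plus back-propagation updates for one training example.
import numpy as np

rng = np.random.default_rng(0)
x = np.array([0.5, 0.1, 0.4])        # input attribute values of one example
y = np.array([1.0])                  # observed target value

W1 = rng.normal(size=(3, 2)); b1 = np.zeros(2)   # input -> hidden layer (two hidden nodes)
W2 = rng.normal(size=(2, 1)); b2 = np.zeros(1)   # hidden -> output layer
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 0.5

for epoch in range(100):             # each pass over the training data is one epoch
    # forward pass: weighted sums followed by the non-linear activation
    h = sigmoid(x @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # backward pass: gradient of the squared error, propagated layer by layer
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * np.outer(h, d_out); b2 -= lr * d_out
    W1 -= lr * np.outer(x, d_h);   b1 -= lr * d_h

print(float(out))   # the prediction moves toward the observed value 1.0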
VII. Future Work
The survey's definition of data mining techniques (DMT) is not complete, as other methodologies, such as social science methodologies, were not included. Research techniques often used in social studies include qualitative questionnaires and statistical methods. For example, psychology and cognitive science implement different methods for investigating specific human problems and behavior; other social science methodologies may therefore incorporate DMT in future studies. Future development of DMT must be integrated with different methodologies, since data mining is an interdisciplinary research topic, and the integration of methodologies and cross-disciplinary research may offer new insights into the problems associated with DMT. For social and technical reasons, change can either enable or inhibit ES methodologies and the development of applications: inertia stemming from routine problem-solving procedures, stagnant knowledge sources and reliance on past experience may impede change with respect to learning and innovation for organizations and individuals.
VIII. Conclusion
The crucial requirement for making the right decision is having the right information. The problem of collecting data, which used to be a major concern for most organizations, is almost resolved; in the new millennium, organizations will compete in generating information from data rather than in collecting data. Industry surveys have indicated that over 80 percent of Fortune 500 companies believed that data mining would be a critical factor for business success by the year 2000 (Baker and Baker, 1998). In the coming years data mining will surely be one of the main competitive focuses of organizations. Many issues remain to be resolved and much research remains to be done, though progress continues to be made in the data mining field [15].
References
[1] Nikita Jain, Vishal Srivastava, "Data Mining Techniques: A Survey Paper", IJRET: International Journal of Research in Engineering and Technology, Vol. 02, Issue 11, Nov. 2013.
[2] Y. Ramamohan, K. Vasantharao, C. Kalyana Chakravarti, A. S. K. Ratnam, "A Study of Data Mining Tools in Knowledge Discovery Process", International Journal of Soft Computing and Engineering (IJSCE), ISSN: 2231-2307, Vol. 2, Issue 3, July 2012.
[3] Usama Fayyad, Gregory Piatetsky-Shapiro, and Padhraic Smyth, "From Data Mining to Knowledge Discovery in Databases", AI Magazine, 1997.
[4] Qiang Yang, Xindong Wu, "10 Challenging Problems in Data Mining Research", International Journal of Information Technology & Decision Making, Vol. 5, No. 4 (2006), pp. 597-604.
[5] G. Weiss and B. Davison, "Data Mining", in Handbook of Technology Management, John Wiley and Sons, expected 2010.
[6] Kalyani M. Raval, "Data Mining Techniques", International Journal of Advanced Research in Computer Science and Software Engineering, Vol. 2, Issue 10, October 2012.
[7] Jayanthi Ranjan, "Applications of Data Mining Techniques in Pharmaceutical Industry", Journal of Theoretical and Applied Information Technology, 2005-2007 JATIT.
[8] Chetna Chand, Amit Thakkar, Amit Ganatra, "Sequential Pattern Mining: Survey and Current Research Challenges", International Journal of Soft Computing and Engineering (IJSCE), ISSN: 2231-2307, Vol. 2, Issue 1, March 2012.
[9] Irina Tudor, "Association Rule Mining as a Data Mining Technique", Vol. LX, No. 1/2008.
[10] Richard A. Huebner, "A survey of educational data-mining research", Research in Higher Education Journal.
[11] V. Uma, M. Kalaivany, Aghila, "Survey of Sequential Pattern Mining Algorithms and an Extension to Time Interval Based Mining Algorithm", International Journal of Advanced Research in Computer Science and Software Engineering.
[12] M. Zaki, "SPADE: An efficient algorithm for mining frequent sequences", Machine Learning, 2001.
[13] R. Agrawal and R. Srikant, "Mining Sequential Patterns", in Proc. of the 11th Int'l Conference on Data Engineering, Taipei, Taiwan, March 1995.
[14] J. Han, G. Dong, B. Mortazavi-Asl, Q. Chen, U. Dayal and M.-C. Hsu, "FreeSpan: Frequent pattern-projected sequential pattern mining", Proc. 2000 International Conference on Knowledge Discovery and Data Mining (KDD'00), pp. 355-359, 2000.
[15] Sang Jun Lee and Keng Siau, "A review of data mining techniques", Industrial Management & Data Systems, 101/1 (2001), pp. 41-46.
Acknowledgments
We are grateful to our Department of Computer Science & Technology for their support and for providing us an opportunity to review such an interesting topic. While reading and searching about this topic we learnt various important and interesting facts.
Budget Based Search Advertisement
Vijay Kumar1, Rahul Kumar Gupta2
Student of Computer Science & Engineering,
Department of Computer Science and Engineering,
IMS Engineering College, Ghaziabad, 201009, Uttar Pradesh, INDIA
_________________________________________________________________________________________
Abstract: In this paper, we model and formulate the search-based advertising auction problem with multiple options, choice behaviors of advertisers, and the popular generalized second price based mechanism. A Lagrangian-based method is then proposed for tackling this problem. We present an extension to the subgradient algorithm based on Lagrangian relaxation, coupled with the column generation method, in order to improve the dual multipliers and accelerate convergence. Simulation results show that the proposed algorithm is efficient and shows significant improvement compared to a greedy algorithm. Our main results are algorithmic and complexity results for both of these problems under our three stochastic models. In particular, our algorithmic results show that simple prefix strategies that bid on all cheap keywords up to some level are either optimal or good approximations in many cases; we show other cases to be NP-hard.
In search auctions, when the total budget for an advertising campaign during a certain promotion period is determined, advertisers have to distribute their budget over a series of sequential temporal slots (e.g., daily budgets). However, due to the uncertainties that exist in search markets, advertisers can only obtain the value range of the budget demand for each temporal slot from promotion logs. In this paper, we present a stochastic model for budget distribution over a series of sequential temporal slots during a promotion period, treating the budget demand for each temporal slot as a random variable. We study some properties of our budget model and present feasible solution algorithms for the cases in which the budget demand is characterized either by a uniform random variable or by a normal random variable. We also conduct experiments to evaluate our model on empirical data. Two questions are addressed:
- (Evaluation Problem) Given a bid solution, can we evaluate the expected value of the objective function under different stochastic models?
- (Optimization Problem) Can we determine a bid solution that maximizes the objective function in expectation under different stochastic models?
_________________________________________________________________________________________
I. INTRODUCTION
In search auctions, many brand managers from small companies face budget constraints due to their limited finances (Chakrabarty et al., 2007). Moreover, there are plenty of uncertainties in the mapping from the budget to the advertising performance in search auctions (Yang et al., 2013). Classical budget allocation methods usually seek maximum profit or minimum cost with known parameters. However, it is difficult for advertisers to predict necessary factors such as cost per click (CPC) and click-through rate (CTR) in search auctions. Each advertiser also specifies a daily budget,
which is an upper bound on the amount of money they are prepared to spend each day. While many advertisers
use bids as the primary knob to control their spend and never hit their budget, there exists a significant fraction
of advertisers who would spend more than their budget if they participated in every auction that their keywords
match. Search engines often provide an option to automatically scale the advertiser's bids [1, 2], but a substantial
fraction of budget constrained advertisers do not opt into these programs. For these advertisers, the search
engine has to determine the subset of auctions the budget constrained advertiser should participate in. This
creates dependence between auctions on different queries, and leads to essentially a matching or an assignment
problem of advertisers to auctions. In this paper, we consider the problem of optimized budget allocation:
allocating advertisers to queries such that budget constraints are satisfied, while simultaneously optimizing a
specified objective. Prior work in this area has often chosen revenue as the objective to optimize. However, the
long term revenue of a search engine depends on providing good value to users and to advertisers. If users see
low quality ads, then this can result in ad-blindness and a drop in revenue. If advertisers see low return on
investment (ROI), then they will reduce their bids and budgets, again resulting in a drop in revenue.
A. OBJECTIVES OF THE PAPER
In this paper, we focus on the budget distribution problem at the campaign level of the BOF framework. Due to the uncertainty that exists in search markets, the budget demand of each temporal slot cannot be known in advance, and advertisers can only obtain its value range from promotion logs. Moreover, the allocated budget of each temporal slot is subject to an interval constraint, due to the limitations of search auction systems (e.g., the lower bound)
and the advertiser's financial constraint (e.g., the upper bound). Treating the budget demand for each temporal slot as a random variable, we use stochastic programming to deal with the budget distribution problem. First, we take the budget demand of each temporal slot as a random variable, because it reflects, to some degree, the environmental randomness of budget-related decisions at the campaign level; the probability distribution of the budget demand can be extracted from the promotion logs of historical campaigns. Second, we present a stochastic model for budget distribution over a series of temporal slots (e.g., days), given the total budget in a search market. Third, we discuss some properties and possible solutions of our model, taking the budget demand for each temporal slot as either a uniform random variable or a normal random variable. Furthermore, we conduct experiments to evaluate our model, and the experimental results show that the strategy driven by the normal distribution outperforms the other two in terms of total effective clicks, followed by the uniform distribution strategy and then the baseline strategy commonly used in practice. This can be explained by the fact that the budget demand for each temporal slot is more likely to be normally distributed than uniformly distributed.
B. INTRODUCTION TO BUDGET PROBLEMS
As introduced above, we consider the problem of optimized budget allocation: allocating advertisers to queries such that budget constraints are satisfied while simultaneously optimizing a specified objective. Since optimizing revenue alone can hurt users (ad-blindness) and advertisers (low ROI), and therefore long-term revenue, we explore two other objectives in this paper: improving quality, and advertiser ROI.
C. INTERNET SEARCH ENGINE PROBLEMS
Advertising is the main source of revenue for search engines such as Google, Yahoo and Bing. For every search query, the search engine runs an auction using the advertisers' keyword bids to determine which ads are shown alongside the results. The search engine thus provides a service in which sponsored links are displayed on the results page, in addition to the organic search results, after a user searches for a specific term. These sponsored links typically show offers related to the product the user searched for, and they are a kind of search-based advertisement (SA). Industry reports estimated that total revenues from SA would reach U.S. $17.7 billion in the U.S. by 2014 (see [4]).
II. PROBLEM DEFINITION
Let A be the set of advertisers and Q the set of queries. Each advertiser a ∈ A comes with a daily budget Ba. Let G(A, Q, E) be a bipartite graph such that, for a ∈ A and q ∈ Q, edge (a, q) ∈ E means that an ad of a is eligible for the auction for query q (a's keywords match q). Let ctr(a, q) be the probability of a click on a's ad for q, and bid(a, q) the amount a is willing to pay per click. (We take ctr(a, q) to be the probability of a click at some chosen fixed position; in other words, ctr(a, q) does not depend on the position of the ad.) When a query q arrives, the eligible ads a for q are ranked by bid(a, q)·ctr(a, q) and shown in that order. Denoting the j-th ad in this order by aj, the cost per click of aj is set as
cpc(aj) = bid(aj+1, q)·ctr(aj+1, q) / ctr(aj, q).
This is known as the generalized second price (GSP) auction (see, e.g., [25, 14, 6]).
Let Ta denote the spend of advertiser a if a participates in all the auctions for which a is eligible via the keyword match (ignoring a's budget). If Ta > Ba, the advertiser is budget constrained, and the search engine has to limit (or throttle) the set of auctions in which the advertiser participates.
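The Python sketch below is an illustrative implementation of the ranking and pricing rule defined above (not the paper's code); the reserve price for the last ad is an assumption made only for the example.

# Minimal sketch: rank eligible ads by bid*ctr and charge GSP prices.
def gsp_auction(eligible_ads):
    """eligible_ads: list of (advertiser, bid, ctr) tuples for one query q."""
    ranked = sorted(eligible_ads, key=lambda ad: ad[1] * ad[2], reverse=True)
    results = []
    for j, (adv, bid, ctr) in enumerate(ranked):
        if j + 1 < len(ranked):
            next_bid, next_ctr = ranked[j + 1][1], ranked[j + 1][2]
            cpc = next_bid * next_ctr / ctr      # cpc(aj) = bid(aj+1)*ctr(aj+1)/ctr(aj)
        else:
            cpc = 0.0                            # assumption: last ad pays a zero reserve
        results.append((adv, cpc))
    return results

print(gsp_auction([("a1", 2.0, 0.10), ("a2", 1.5, 0.12), ("a3", 1.0, 0.05)]))
# a1 pays 1.5*0.12/0.10 = 1.80, a2 pays 1.0*0.05/0.12 ~= 0.42, a3 pays the reserve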
III. RELATED WORK
There have been two broad approaches to optimizing budget-constrained spend: allocation and bid modification. Allocation treats bids as fixed and only allows decisions about whether the advertiser should participate in an auction. This is our setting, where we are constrained not to change bids but only to optimize allocations. The second approach, bid modification, operates in a setting where bids can be changed. That body of work typically considers the problem from the advertiser's perspective and assumes full knowledge of the information that advertisers typically have (such as the value of clicks or conversions). However, this work can also be adapted to a search engine's perspective for advertisers who have opted in and allow the search engine to change their bids.
Allocation. The paper by Abrams et al. [5] is the closest to our work. They solve the allocation problem (with complete knowledge of future query frequency) for head queries (the most popular search queries) in the GSP auction setting using a linear program (LP), to optimize search engine revenue or advertiser clicks per dollar. Their approach thus yields an optimal solution to the click maximization problem in the general GSP setting. However, there are two reasons why this work is not the last word on the allocation approach. First, the LP can only run over head queries due to resource constraints, which raises an interesting question: can a non-optimal algorithm that runs over the entire query stream beat an optimal algorithm that is restricted to the head? Second, the LP formulation can yield solutions that are clearly unfair, in that the allocation for some advertisers is very different from what the advertiser would choose for themselves for that objective (see Section 5). Hence it is unclear whether LP solutions can be deployed by search engines.
IV. MOTIVATING DIAGRAM
The motivating scenario is a typical shopping flow: a user selects a product and is shown additional offers on that product; selecting a particular brand shows the offers and the price for that specific product. Each product ID is unique even though product names, colours, designs and offers may repeat. The user then orders a product, chooses a payment option, and pays for the purchase through the website.
Figure 1
V. EXPERIMENTAL VALIDATION
In this section, we conduct experiments to evaluate the effectiveness of the budget strategies derived from our model, Strategy-uniform and Strategy-normal, using data from the period Sep. 1, 2009 to Sep. 30, 2009. For comparison, we implement a baseline strategy, called BASE-Average, which allocates the budget evenly across the series of temporal slots.
Figure 2
Figure 3
The lower bound of the daily budget imposed by search engines is 50, and the upper bound of the daily budget given by the advertiser is 150. The total budget during this period (30 days) is B = 3000, and the value range of the budget demand is U(80, 120). Figure 1 depicts clicks per unit cost and the effective CTR. For the Strategy-uniform and Strategy-normal strategies, the random budget demand for each day during this period follows U(80, 120) and N(100, 20/3), respectively.
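As a hedged sketch of our reading of this setup (not the authors' model or code), the following Python snippet allocates the total budget over the 30 days either evenly (BASE-Average) or in proportion to sampled daily demand, clipped to the daily bounds [50, 150]; interpreting 20/3 as the standard deviation of the normal demand is an assumption.

# Minimal sketch: three candidate daily budget allocations for B = 3000 over 30 days.
import numpy as np

rng = np.random.default_rng(1)
B, days, lo, hi = 3000.0, 30, 50.0, 150.0

def allocate(demand):
    """Clip demand to the daily interval, then rescale so the total equals B."""
    d = np.clip(demand, lo, hi)
    return d * (B / d.sum())

base_average  = np.full(days, B / days)                        # 100 per day
strategy_unif = allocate(rng.uniform(80, 120, size=days))      # demand ~ U(80, 120)
strategy_norm = allocate(rng.normal(100, 20 / 3, size=days))   # demand ~ N(100, 20/3), assumed std

for name, plan in [("BASE-Average", base_average),
                   ("Strategy-uniform", strategy_unif),
                   ("Strategy-normal", strategy_norm)]:
    print(name, round(plan.sum(), 1), round(plan.min(), 1), round(plan.max(), 1))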
Figure 4
Figure 5
(1) The Strategy-normal and Strategy-uniform strategies obtain 187.43 and 183.51 effective clicks, respectively, while BASE-Average obtains 180.17 effective clicks. Strategy-normal and Strategy-uniform thus outperform the BASE-Average strategy by about 4.03% and 1.85%, respectively, in terms of cumulative effective clicks.
(2) The Strategy-normal strategy outperforms the Strategy-uniform strategy (by about 2.14%) in terms of cumulative effective clicks, which implies that the budget demand is more likely to be normally distributed than uniformly distributed.
(3) Most of the daily budgets for the Strategy-normal and Strategy-uniform strategies fall in (80, 120).
VI. LITERATURE REVIEW
The underlying business problem is low product sales and low profit. Offering more promotions and advertising those offers helps: when customers see the product offers they purchase more products, which benefits the company and makes it more profitable, even with a low advertising budget. In the last few years, a number of papers have addressed the problem of revenue maximization or bidding optimization in sponsored search auctions [1, 4, 2, 3, 7]. None of the previous work proposed a solution that employs distributional information about prices and solves the bidding problem with multiple ad positions, keywords, and time periods. Zhou et al. [8].
VII. RESULTS
We show the changes in various objectives relative to the baseline of Vanilla Probabilistic Throttling (VPT). It
is important to note that while we expect the overall conclusions to carry over to an online setting where the
query distribution changes over time, the exact numbers will change. In general, the gains from optimized
budget allocation or bid scaling will be significantly lower in live experiments due to changes in query c. For
this reason, as well as data confidentiality, we omit the scale from our graphs below.
A. Comparison with LP
Figure 6 shows the change in clicks per dollar for budget constrained advertisers for each of the algorithms. The first set of numbers, "head", shows the results when we artificially restrict all the algorithms to operate over the same set of head queries as LP-Clicks, with VPT on the tail. Since LP-Clicks is not just optimal but can also generate solutions that are not fair (unlike the other algorithms), it is not surprising that LP-Clicks outperforms the alternatives there. However, when we allow the algorithms to optimize over the entire dataset (the "all" numbers), the algorithms that can use the full data dramatically outperform LP-Clicks. In fact, even OT-CTR, which optimizes CTR and not CPC, yields a larger drop in CPC (or equivalently, more clicks per dollar) than LP-Clicks. The reason for the poor performance of LP-Clicks is that the LP can be run only on the head; even though the head queries account for a substantial portion of revenue, they are relatively homogeneous, and the potential gains from optimization are larger in the tail than in the head. We found that this held for the other metrics as well, i.e., the substantial majority of the gains from optimization came from the tail queries.
B. Comparison with Bid Scaling
The other interesting comparison in Figure 6 is between OT-Clicks and Bid Scaling. Bid Scaling performs slightly better than OT-Clicks when restricted to head queries, as many advertisers may appear for a relatively small number of queries in the head; OT-Clicks, which does not have the flexibility to scale bids, therefore has a bit less room to maneuver. Over all queries, OT-Clicks has much more scope to differentiate between queries and hence does slightly better than Bid Scaling. However, OT-Clicks may be getting these gains by dropping high-bid, high-CPC clicks which might still yield more profit for the advertiser than low-bid, low-CPC clicks. Figure 7 shows how the algorithms do on estimated profit-per-dollar: the sum of the bids minus the total cost, divided by the total cost, over all budget constrained campaigns.
C. Multiple Objectives
We now present results with metrics that blend the CTR and profit objectives. In Section 4 we conjectured that blended metrics might yield better results than individual metrics, since different advertisers may have better scope for optimization along different dimensions. Figures 9 and 10 show the impact of two blended metrics: ctr·(bidi − cpci)/cpc, and ctr²·(bidi − cpci)/cpc.
Notice that OT-CTR-Profit, which uses the former as its metric, almost matches OT-Profit on profit-per-dollar while yielding significantly higher gains in CTR than OT-Profit. OT-CTR2-Profit further increases the CTR gains, for a bit more drop in profit-per-dollar. In addition to validating our conjecture that blended metrics may yield better results, such blended metrics let the search engine pick an arbitrary point on a curve that trades gains in user quality against gains in advertiser value.
Figure 6: Impact on clicks-per-dollar, over budget constrained campaigns. The baseline is VPT.
Figure 7: Impact on profit-per-dollar, over budget constrained campaigns. The baseline is VPT.
Figure 8: Multiple objectives: impact on Profit-per-dollar. The baseline is VPT.
Figure 9: Impact on CTR (including all campaigns). The baseline is VPT.
Figure 10: Multiple objectives: impact on CTR. The baseline is VPT.
VIII. CONCLUSION
We studied the problem of allocating budget constrained spend in order to maximize objectives such as quality
for users, or ROI for advertisers. We introduced the concept of fair allocations (analogous to Nash equilibria), and constrained the space of algorithms to those that yielded fair allocations. We were also
constrained (in our setting) to not modify bids. We proposed a family of Optimized Throttling algorithms that
work within these constraints, and can be used to optimize different objectives. In fact, they can be tuned to pick
an arbitrary point in the tradeoff curve between multiple objectives. The Optimized Throttling algorithms are
designed for implementation in a high throughput production system. The computation overhead at serving time
is negligible: just a few comparisons. The algorithms also have a minimal memory footprint, as little as 8 bytes
(plus hash table overhead) per advertiser. Finally, they are robust with respect to errors in estimating future
traffic, since they only need the total volume of traffic and the distribution of the chosen metric, not the number
of occurrences of each query. We validated our system design by implementing our algorithms in the Google
ads serving system, and running experiments on live traffic. The experiments showed significant improvements
in both advertiser ROI (conversions per dollar) and user experience.
References
[1] Y. Qu and G. Cheng, "Falcons concept search: A practical search engine for web ontologies," IEEE Trans. Syst., Man, Cybern. A, Syst., Humans, vol. 41, no. 4, pp. 810-816, Jul. 2011.
[2] W. An et al., "Hidden Markov model and auction-based formulations of sensor coordination mechanisms in dynamic task environments," IEEE Trans. Syst., Man, Cybern. A, Syst., Humans, vol. 41, no. 6, pp. 1092-1106, Nov. 2011.
[3] S. Phelps, P. McBurney, and S. Parsons, "A novel method for strategy acquisition and its application to a double-auction market game," IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 40, no. 3, pp. 668-674, Jun. 2010.
[4] R. Jain and J. Walrand, "An efficient nash-implementation mechanism for network resource allocation," Automatica, vol. 46, no. 8, pp. 1276-1283, 2010.
[5] R. Jain and J. Walrand, "An efficient nash-implementation mechanism for network resource allocation," Automatica, vol. 46, no. 8, pp. 1276-1283, 2010.
[6] H. R. Varian, "Position auctions," Int. J. Ind. Org., vol. 25, pp. 1163-1178, Oct. 2006.
Mobile Software Agents for Wireless Network Mapping and Dynamic
Routing
Shivangi Saraswat1, Soniya Chauhan2
Student of Computer Science & Engineering,
Department of Computer Science and Engineering,
IMS Engineering College, Ghaziabad, 201009, Uttar Pradesh, INDIA
__________________________________________________________________________________________
Abstract: In this paper we review how software agents can wander cooperatively in an unknown ad hoc network to report the topology of the network. Mapping and dynamic routing are the basic operations used for the interactions between nodes in such a network. The dynamic nature of the topology of ad hoc networks is due to the mobility of some nodes: wireless links are broken and re-formed frequently. We review a dynamic, wireless, peer-to-peer network in which routing tasks are performed in a decentralized and distributed fashion by mobile software agents that cooperate to accumulate and distribute network connectivity information. The different types of agents that can be used for routing are also discussed in this paper.
__________________________________________________________________________________________
I. Introduction
Computer networks are heterogeneous in nature: many different types of devices can connect to the same network, and these devices communicate over many different channels. Wireless networks have become an essential part of today's networking world. Ad hoc wireless networks can contain a large variety of nodes, such as portable computers, sensors and PDAs, and only a small subset of these nodes is connected to the outside world. In this paper we consider two scenarios: 1) network mapping for wireless networks, and 2) dynamic network routing for wireless networks. We use an ad hoc network for mapping, discuss the experimental model and the simulation results, and finally summarize the conclusions so that they can be used in future work. Computer networks continue to grow in scale, diversity, scope and complexity. Wireless networks, in particular, have become an essential part of the networking world. A vast variety of nodes can participate in ad hoc wireless networks: portable computers, PDAs, sensors, cellular phones and pagers, to name just a few. Some of these nodes are mobile, some require multi-hop communication, some suffer from power source limitations, and others may have computational or communication constraints; finally, a small subset of them is connected to the outside world, such as the Internet or a LAN. In current systems, routing maps are usually generated in a centralized, and often human-mediated, manner.
II. Wireless Network Mapping
A. Network Descriptions, Environment
Here we have a set of wireless nodes distributed in a physical domain. Every node has a radio range, and there is a link between two nodes if they can see one another, that is, if each is located within the radio range of the other. All nodes are fixed in location, so the topology of the network is static during its lifetime. In [1], links are bidirectional due to the assumption that all nodes have the same radio range, and the authors also assume that the radio range stays fixed during the experiments. We eliminate these unrealistic assumptions in our environment. First, the radio range of nodes is not always the same, so there might exist a link from node A to node B but not vice versa. We also consider that a percentage of radio links will degrade because some nodes rely on battery power. Such changes result in a directed graph for the network topology and quite dynamic behavior during the network's lifetime, so the topology knowledge of the network becomes invalid after a while and the agents must be fired up again to capture the changes in the network since the last topology measurement.
B. Mobile Agents
A mobile agent is a process that can transport its state from one environment to another. Mobile agents are capable of sharing information, including their state, with each other. A mobile agent is a specific form of mobile code within the field of code mobility. Mobile software agents can choose to migrate between computers at any time during their execution, which makes them a powerful implementation tool.
Nelson Minar et al. examined three different types of agents which roam the network differently. As a baseline they examined random agents, which simply move to a random adjacent node at every update. Conscientious agents are more sophisticated, choosing at each time step to move to an adjacent node that they have never visited or have visited least recently; this algorithm is similar to a depth-first search of the network. Conscientious agents base their movement decisions only on first-hand knowledge, ignoring information learned from other agents when determining where to go. In contrast, the third type of agent, the super-conscientious agent, also moves preferentially to nodes that have not been explored, but uses both its own experience and data learned from its peers in deciding which nodes to move to [1]. Agents can visit each other when they land on the same node; such a visit can be considered direct communication, or direct learning from others. They obtain information about the network topology from others and keep it separately as second-hand information, whereas first-hand information is that obtained by the agent itself. Random agents use no information for wandering, conscientious agents use only first-hand information, and super-conscientious agents use both first- and second-hand information for moving around the network.
In paper [3], Manal Abdullah et al. used a type of agent similar to our conscientious agent, but their agent has about 5 times more overhead than ours, and they did not specify the network characteristics used for the simulation. We also employ another kind of communication which adds almost no extra cost to the agent's computational complexity. This form of implicit communication is referred to as stigmergy, a term used in biology to describe the influence that previous changes to the environment can have on an individual's current actions [4]. Stigmergy is a common control method in lower animals, especially social insects [5]. The main contribution of stigmergy has been in the foraging and sorting domains: a given agent intentionally puts marks or clues in the environment, recognizable by its teammates, in order to help them perform their actions with less hardship or with higher reliability in light of the agents' mission. Stigmergy was first observed in nature: ants communicate with one another by laying down pheromones along their trails, so where ants go within and around their colony is a stigmergic system. Parunak, Sauter, et al. employ ant-inspired stigmergy in a software environment that uses synthetic pheromone to coordinate unmanned aircraft that use potential-field-based navigation [6], [7].
A. Wurr and J. Anderson [8] used real physical markings to let their agents avoid bottlenecks and local maxima. As a result, their agents more frequently discovered an unknown goal and did not get stuck in a limited area of the environment. At the same time, through stigmergic trail-making, their agents were able to greatly increase the frequency and ease with which they subsequently located a previously discovered goal. In this paper we use a kind of footprint concept: every agent leaves behind its footprint on the current node by imprinting its next target node there. Subsequent agents then avoid following the previous agent; in fact, agents intentionally try not to chase one another. The intent is not to be followed by others, as opposed to encouraging others to follow, as in ant societies. Such footprints help the agents spread out across the network and explore the unvisited parts of the network rapidly, no matter which algorithm they use for wandering.
Single agents: Here we use just one agent and measure the finishing time for the two algorithms separately. There is no chance of cooperation with a single agent, but there is still a chance that the stigmergic capability will be beneficial, since in this context an agent avoids following itself: using the footprints left in the network, the agent can steer toward unexplored paths.
C. Wireless Network Mapping
There is a set of wireless nodes distributed over the physical domain, and these nodes are capable of sharing data with each other. Every node has a radio range; if two nodes can see each other, a link is established between them through which they can share information. All nodes are fixed in location, so the topology of the network is fixed during its overall life cycle. The radio range of the nodes is not always the same, so for two nodes A and B there might be a link from node A to node B but not vice versa. There will also be some degradation in a percentage of the radio links because some nodes rely on battery power.
D. Dynamic network routing
Dynamic routing is a networking technique that provides optimal data routing. Unlike static routing, dynamic routing enables a router to select paths according to real-time network conditions. In dynamic routing, the routing protocols are responsible for the creation, updating and maintenance of the dynamic routing table, and multiple protocols and algorithms may be used. The router delivers and receives routing messages on its interfaces.
E. Experimental Model
In another experiment, Hamzeh Khazaei et al. present a simulation of mobile agents living in a network of interconnected nodes that work cooperatively to build a map of the network. In the simulation we assume that nodes are distributed randomly in a two-dimensional environment, and that nodes located within each other's radio ranges establish wireless links between them. An average packet needs to multi-hop through a sequence of nodes in order to reach its destination. The goal of the agents is to map the network they are living in, to learn about the connectivity of the nodes, and to build a model of the network topology. Their model is implemented with a
simple discrete event, time-step based simulation engine. With each step of simulated time, an agent does four
things: First, the agent learns about all of the edges off the node it is on. This information, which the agent itself
has just experienced, is added to the agent’s store of first-hand knowledge. Second, the agent learns everything
it can from all of the other agents currently on the current node. This information, learned from other agents is
stored separately as second-hand knowledge. Third, the agent chooses a new node for its next move. Finally if
stigmergic capability is used the agent leaves its footprint on the current node. They first implement Nelson
Minar et al, experiments to confirm all their results and discussions on the new environment. They then compare
their agents with our modified agents in terms of performance, complexity and overhead. Their simulation
system consists of 2000 lines of Java code in 25 classes implementing a discrete event scheduler, a graphical
view and plots, a data-collection system, and the simulated objects themselves: network nodes, network
monitoring entity, wireless links, and mobile agents. In order to compare results across population sizes and
algorithms, they chose a single connected network consisting of 300 nodes with 2164 edges for all experiments.
They define finishing time as the simulation time step at which all agents have perfect knowledge of the network topology. Such a definition of finishing time reflects the efficiency of a team of agents rather than that of an individual agent.
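As a hedged illustration of the four per-step actions described above (not the authors' Java simulation), the following Python sketch shows a conscientious-style agent that learns edges, exchanges knowledge with co-located agents, chooses its next node, and leaves a footprint; the class and variable names are our own.

# Minimal sketch: one mapping agent's actions at each simulated time step.
class Agent:
    def __init__(self, start_node):
        self.node = start_node
        self.first_hand = {}      # node -> set of neighbours observed personally
        self.second_hand = {}     # knowledge copied from other agents
        self.last_visit = {}      # node -> time step of most recent visit

    def step(self, network, agents_here, footprints, t):
        """network: dict node -> iterable of neighbours (assumed connected)."""
        # 1) learn the edges of the current node (first-hand knowledge)
        self.first_hand[self.node] = set(network[self.node])
        # 2) learn everything the co-located agents know (second-hand knowledge)
        for other in agents_here:
            self.second_hand.update(other.first_hand)
            self.second_hand.update(other.second_hand)
        # 3) conscientious move: prefer never-visited or least-recently-visited
        #    neighbours, avoiding the node another agent just marked as its target
        neighbours = list(network[self.node])
        candidates = [n for n in neighbours if n != footprints.get(self.node)] or neighbours
        nxt = min(candidates, key=lambda n: self.last_visit.get(n, -1))
        # 4) leave a footprint (the chosen target) on the current node
        footprints[self.node] = nxt
        self.last_visit[self.node] = t
        self.node = nxt

In a full simulation, a scheduler would call step() for every agent at each time step and stop when the union of first- and second-hand knowledge covers the whole topology, mirroring the finishing-time definition above.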
F. Effects of Cooperation
According to Hamzeh Khazaei et al., the single-agent result is a good basis for measuring the effect of agent cooperation. In the case of multiple agents, there is a chance for agents to meet and communicate directly. Such communication lets agents learn from each other, so agents may obtain information about the network topology of nodes that they have never visited. With cooperation, their agents did much better than the N. Minar agents: fifteen cooperating conscientious agents perform the mapping in 140 steps, showing diminishing marginal returns thus far. Their own agents perform the mapping in a shorter time, 125 steps, due to the fact, we assume, that they not only use the opportunity of getting second-hand information from others but also try to avoid chasing each other.
III. Review
According to [3], there is still potential to minimize the payload of agents (code plus data) for sparsely populated and highly dynamic areas. In their future work they plan to consider different migration strategies that also take the agent size into account. A high-performance computing cluster will be needed to simulate all the facts; even then, a simulation will consume a critical amount of time at the MCI level (e.g. 18k hours for MCI level 1 for a run of 1000s rescue operations). Still, the presented results, seen as work in progress, already emphasize the potential of agent-based routing as a valid alternative for routing within a set of mobile data platforms in a rescue scenario. In addition, the agent approach profits from the possibility of taking application-level information, e.g. the role of a message's sender or receiver, into account for routing activities.
In paper [7], the evolutionary algorithms explored in different experiments demonstrated the ability to automatically tune the parameters of a pheromone-based path-planning system so that it can function successfully in a number of test scenarios. These evolved solutions consistently outperformed the best hand-tuned parameters, which took skilled programmers over a month to develop.
Nikolaos Nikou et al. briefly present dynamic and adaptive routing. According to the paper, dynamic routing is the dominant alternative to static routing. Dynamic routing can support mobile networks, which by definition change dynamically, but also networks that, because of the parameters implied by Quality of Service, need to direct user traffic to alternative routes other than those implied by the static route definitions.
IV. Conclusion
In this paper we discussed two essential operations in wireless ad hoc networks, network mapping and dynamic network routing, using a software multi-agent solution. We also discussed direct communication in dynamic routing, where the agents were permitted to exchange their best known routes during a meeting session. As discussed above, direct communication has a positive effect on connectivity in the case of random agents but, on the other hand, has a negative effect on connectivity in the case of oldest-node agents. Future work for this review is to employ indirect communication, stigmergy, in the dynamic routing problem as well, since stigmergy can improve the agents' performance effectively.
References
[1] N. Minar, K. H. Kramer, and P. Maes, "Cooperative mobile agents for mapping networks," First Hungarian National Conference on Agent Based Computing, May 24, 1998.
[2] K. H. Kramer, N. Minar, and P. Maes, "Cooperative mobile agents for dynamic network routing," ACM SIGMOBILE Mobile Computing and Communications Review, vol. 3, pp. 12–16, 1999.
[3] M. Abdullah and H. Bakhsh, "Agent-based dynamic routing system for MANETs," ICGST Computer Networks and Internet Research Journal, vol. 9, 2009.
[4] O. Holland and C. Melhuish, "Stigmergy, self-organization, and sorting in collective robotics," Artificial Life, Springer, vol. 5, pp. 173–202, 1999.
[5] A. Perez-Uribe and B. Hirsbrunner, "Learning and foraging in robotbees," SAB2000 Proceedings Supplement Book, Honolulu, vol. 5, pp. 185–194, 2000.
[6] H. V. D. Parunak, S. Brueckner, J. Sauter, and J. Posdamer, "Mechanisms and military applications for synthetic pheromones," Workshop on Autonomy Oriented Computation, 2001.
[7] J. A. Sauter, R. Matthews, H. V. D. Parunak, and S. Brueckner, "Evolving adaptive pheromone path planning mechanisms," Proceedings of the First International Joint Conference on Autonomous Agents and Multi-Agent Systems, ACM Press, pp. 434–440, 2002.
[8] A. Wurr and J. Anderson, "Multi-agent trail making for stigmergic navigation," Lecture Notes in Computer Science, vol. 31, pp. 422–428, 2004.
[9] G. Di Caro, F. Ducatelle, and L. M. Gambardella, "AntHocNet: An ant-based hybrid routing algorithm for mobile ad hoc networks," Lecture Notes in Computer Science, Springer, vol. 3242, pp. 461–470, 2004.
[10] R. R. Choudhury, K. Paul, and S. Bandyopadhyay, "MARP: A multi-agent routing protocol for mobile wireless ad hoc networks," Autonomous Agents and Multi-Agent Systems, Springer, vol. 8, pp. 47–68, 2004.
[11] K. A. Amin and A. R. Mikler, "Agent-based distance vector routing: a resource efficient and scalable approach to routing in large communication networks," Journal of Systems and Software, vol. 71, pp. 215–227, 2004.
Brain Computing Interface
Shivam Sinha1, Sachin Kumar2
1,2Student of Computer Science & Engineering,
Department of Computer Science and Engineering,
IMS Engineering College, Ghaziabad, 201009, Uttar Pradesh, INDIA
__________________________________________________________________________________________
Abstract: A Brain Computer Interface allows users to communicate with each other by using only brain activity, without using any peripheral nerves or muscles of the human body. In BCI research the electroencephalogram (EEG) is used to record the electrical activity along the scalp. EEG measures the voltage fluctuations resulting from ionic current flows within the neurons of the brain. Hans Berger, a German neuroscientist, discovered the electrical activity of the human brain using EEG in 1924; he was the first to record an alpha wave from a human brain. In the 1970s, the Defense Advanced Research Projects Agency of the USA initiated a program to explore brain communication using EEG. The papers published after this research also mark the first appearance of the expression brain–computer interface in the scientific literature. The field of BCI research and development has since focused primarily on neuroprosthetics applications that aim at restoring damaged hearing, sight and movement. Nowadays BCI research is in full swing using non-invasive neuroimaging techniques, mostly EEG. Future research on BCI will depend mostly on nanotechnology. Research on BCI has increased radically over the last decade. A decade ago the maximum information transfer rate of BCIs was 5-25 bits/min, but at present the maximum data transfer rate is 84.7 bits/min.
Keywords: Brain Computer Interface (BCI), Neuron, Electroencephalography (EEG), Electrocortiocogram
(ECoG), Magnetoencephalogram (MEG), Functional Magnetic Resonance Imaging (fMRI).
__________________________________________________________________________________________
I. Introduction
Brain–computer interface (BCI) is a direct communication between computer(s) and the human brain. It is a
communication system that facilitates the external device control by using signals measured from the brain.
The brain and spinal cord are the main components of the central nervous system, while the peripheral ganglia belong to the peripheral nervous system. The central nervous system is composed of more than 100 billion neurons [1]. A neuron is an electrically excitable cell that processes and transmits information by electrical and chemical signaling. Chemical signaling occurs via synapses, specialized connections with other neurons. Neurons maintain voltage gradients across their membranes by means of metabolically driven ion pumps, which combine with ion channels embedded in the membrane to generate intracellular-versus-extracellular concentration differences of ions such as Na+, K+, Cl- and Ca2+ [2]. When many of these ions are pushed out of many neurons at the same time, they can push their
neighbors, who push their neighbors, and so on, in a wave. When the wave of ions reaches the EEG electrodes on the scalp, they can push or pull electrons on the metal of the electrodes. Since metal conducts the push and pull of electrons easily, the difference in push, or voltage, between any two electrodes is measured by a voltmeter. Recording these voltages over time gives us the electroencephalogram (EEG) [3]. The EEG, or electroencephalogram, is a tool used to record the electrical activity of the brain while it is performing a cognitive task. EEG measures voltage fluctuations resulting from ionic current flows within the neurons of the brain [4]. It was the first non-invasive neuroimaging technique. Due to its ease of use, low cost and high temporal resolution, this method is the most widely used one in BCIs nowadays. Besides that, Magnetoencephalography
(MEG) and functional Magnetic Resonance Imaging (fMRI) have both been used successfully as non-invasive
BCIs [5]. Magnetoencephalography is a technique for mapping brain activity by recording magnetic fields
produced by electrical currents occurring naturally in the brain, using arrays of SQUIDs (superconducting
quantum interference devices). Functional MRI (fMRI) is a type of specialized MRI scan used to measure the
hemodynamic response (change in blood flow and blood oxygenation) related to neural activity in the brain [6].
On the other hand the invasive neuroimaging technique Electrocorticography (ECoG) is the practice of using
electrodes placed directly on the exposed surface of the brain to record electrical activity from the cerebral
cortex. A surgical incision into the skull is required to implant the electrode grid in this neuroimaging technique.
Many research teams have been involved in BCI research for several purposes, from the 1970s to the present, using these neuroimaging techniques. Research on BCI has increased radically over the last decade. A decade ago the maximum information transfer rate of BCIs was 5-25 bits/min [7], but at present the maximum data transfer rate is 84.7 bits/min [8]. Nowadays BCI research is in full swing using non-invasive neuroimaging techniques, mostly EEG. Future research on BCI will depend mostly on
nanotechnology.
II. Brain Computer Interface
Because a Brain Computer Interface facilitates direct communication between the brain and a computer or another device, it is nowadays widely used to enhance the possibility of communication for people with severe neuromuscular disorders, such as Amyotrophic Lateral Sclerosis (ALS) or spinal cord injury. Apart from medical applications, BCI is also used for multimedia applications, such as gaming instruments, which become possible by decoding information directly from the user's brain, as reflected in electroencephalographic signals recorded non-invasively from the user's scalp. To understand the mechanism of recording electroencephalographic signals from the brain of humans or animals (such as apes, bulls, rats, monkeys, cats, etc.) using an electroencephalogram, we need to consider two basic things: the neuron, and the neuronal signal recording technique, namely invasive, partially invasive and non-invasive BCI.
III. Types of BCI
Invasive: a brain signal reading approach in which the sensors are placed inside the grey matter of the brain.
Partially invasive: a brain signal reading approach in which the sensors are placed inside the skull but outside the grey matter. Electrocorticography (ECoG) is an example of partially invasive BCI.
Non-invasive: the most widely used neural signal imaging approach, applied outside the skull, directly on the scalp. Electroencephalography (EEG) has been the most studied technique over the last decade, and most recent research is based on EEG. Besides EEG, there are other non-invasive neural signal imaging or reading techniques, such as Magnetoencephalography (MEG), Magnetic Resonance Imaging (MRI) and functional Magnetic Resonance Imaging (fMRI).
IV. EEG Signal Recording Method & Conventional Electrode Positioning
In the first successful EEG recordings by Hans Berger, the electrodes were placed on the front and back of the head. Berger continued with that method for a number of years, but others discovered that EEG activity varies across different locations on the head (Adrian and Matthews, 1934; Adrian and Yamagiwa, 1935). This created the need for a standardized positioning of electrodes over the scalp. From that need, a committee of the International Federation of Societies for Electroencephalography and Clinical Neurophysiology recommended a specific system of electrode placement for use in all laboratories under standard conditions. Their recommendation is the system now known as the International 10-20 system. The standard placement recommended by the American EEG Society for use with the International 10-20 system comprises 21 electrodes. The International 10-20 system avoids eyeball placement and uses specific anatomical landmarks from which the measurements are made, taking 10% or 20% of the specified distance as the electrode interval. Often the earlobe electrodes, called A1 and A2, connected respectively to the left and right earlobes, are used as the reference electrodes.
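As a small illustration of the 10%/20% spacing rule, the sketch below computes where the midline electrodes would sit for a given nasion-inion distance. This is an illustrative Python sketch added for this review, not part of the cited committee recommendation; the electrode fractions assume the conventional midline positions of the 10-20 system.

    def midline_positions(nasion_inion_cm):
        # Fractions of the nasion-inion distance at which the midline electrodes sit:
        # 10% to Fpz, then steps of 20% to Fz, Cz, Pz and Oz (the final 10% reaches the inion).
        fractions = {"Fpz": 0.10, "Fz": 0.30, "Cz": 0.50, "Pz": 0.70, "Oz": 0.90}
        return {name: round(f * nasion_inion_cm, 1) for name, f in fractions.items()}

    # For a 36 cm nasion-inion distance:
    # {'Fpz': 3.6, 'Fz': 10.8, 'Cz': 18.0, 'Pz': 25.2, 'Oz': 32.4}
    print(midline_positions(36.0))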
V. Magnetoencephalography (MEG)
Magnetoencephalography (MEG) is another non-invasive neuroimaging technique used for mapping brain activity by recording the magnetic fields produced by electrical currents occurring naturally in the brain; arrays of superconducting quantum interference devices (SQUIDs) are used for this purpose. David Cohen, a physicist at the University of Illinois, first measured the MEG signal in 1968. He used a copper induction coil as the detector before the availability of the SQUID. MEG was measured in a magnetically shielded room in order to reduce the magnetic background noise. Cohen later built a better shielded room at MIT and used one of the first SQUID detectors, developed by James E. Zimmerman, a researcher at Ford Motor Company. At first, a single SQUID detector was used to successively measure the magnetic field at a number of points around the subject's head. Present-day MEG arrays are helmet-shaped and typically contain around 300 sensors, covering most of the head.
VI. Functional Magnetic Resonance Imaging (fMRI)
Functional magnetic resonance imaging or functional MRI (fMRI) is a type of specialized MRI scan used to
measure the hemodynamic response (change in blood flow) related to neural activity in the brain or spinal cord
of humans or other animals. It is one of the most recently developed forms of neuroimaging. Since the early
1990s, fMRI has come to dominate the brain mapping field due to its relatively low invasiveness, absence of
radiation exposure, and relatively wide availability.
VII. Electrocorticography (ECoG)
ECoG is the most popular Invasive neuroimaging technique in BCI. In this technique electrodes are placed
directly on the exposed surface of the brain to record electrical activity from the cerebral cortex. ECoG may be
performed either in the operating room during surgery (intraoperative ECoG) or outside of surgery
(extraoperative ECoG). Because a craniotomy (a surgical incision into the skull) is required to implant the
electrode grid, ECoG is an invasive procedure. ECoG is currently considered to be the “gold standard” for
defining epileptogenic zones in clinical practice.
VIII. Brain-Computer Interface: Past, Present
1929: First Record of EEG
1950s: First Wet-Brain Implants: In the 1950s, Jose Delgado became infamous for implanting electrodes into the brains of live animals (and later humans) and stimulating them using a stimoceiver, a radio receiver planted underneath the skull. Using signals sent through the electrodes in a technique called ESB (electronic stimulation of the brain), Delgado was able to produce basic behavioral effects in human and animal subjects, such as calming or producing aggressive behavior. This was critical in proving that we can actually "signal" to the brain [9].
1970s: DARPA super BCI Research
1972: Birth of Bionic Ear: In 1972, Robin Michelson, M.D. convinced the scientific community that
meaningful sound could be conveyed to the brain by electrical stimulation of the auditory nerve. As of April
2009, approximately 188,000 people worldwide had received cochlear implants; in the United States, about
30,000 adults and over 30,000 children are recipients.
1976: First Evidence that BCI can be used for communication: Jacques J. Vidal, the professor who coined the term BCI, from UCLA's Brain Computer Interface Laboratory, provided evidence that single-trial visual evoked potentials could be used as a communication channel effective enough to control a cursor through a two-dimensional maze. This presented the first official proof that we can use the brain to signal to and interface with outside devices.
1978: First BCI to Aid the Blind: Dr William Dobelle's first prototype, from the Dobelle Institute, was implanted into "Jerry", a man blinded in adulthood. Jerry was able to walk around safely and read large letters. The artificial vision system works by taking an image from a miniature camera and distance information from an ultrasound sensor, each of which is mounted on one lens of a pair of sunglasses. These signals are processed by a 5 kg portable computer and then a new signal is sent to 68 platinum electrodes implanted in the person's brain. The electrodes sit on the surface of the brain's visual cortex and stimulate the person to visualize phosphenes, specks of light that show the edges of objects.
1980s: Recorded EEG in Macaque Monkey.
1998: First Implant in a Human Brain that Produces High-Quality Signals.
1999: Decode Cat’s Brain Signals [10]
BCI Used to Aid Quadriplegic: Researchers at Case Western Reserve University, led by Hunter Peckham, used a 64-electrode EEG skullcap to return limited hand movements to quadriplegic Jim Jatich. Jatich has been a
human experiment for scientists striving to conquer paralysis. He has endured almost 20 years of torture in
which long electrodes and needles have been placed into his hands, arms and shoulders to see where muscles
can be shocked so they can move. He now uses his computer to do his old job, engineering drafting.
2000: BCI Experiments with Owl Monkey.
2002: Monkeys Trained to Control Computer Cursor: In 2002, implanted monkeys were trained to move a
cursor on a computer screen by researchers at Brown University, led by John Donoghue. Around 100 micro-electrodes were used to tap up to 30 neurons, but because the electrodes targeted neurons that controlled
movement, only three minutes of data were needed to create a model that could interpret the brain signals as
specific movements. The monkeys were trained to play a pinball game where they were rewarded by quickly
and accurately moving the cursor to meet a red target dot.
2003: First BCI Game Exposed to the Public
BrainGate Developed: BrainGate, a brain implant system, was developed by the bio-tech company
Cyberkinetics in conjunction with the Department of Neuroscience at Brown University.
2005: First Tetraplegic BrainGate BCI Implementation:
Tetraplegic Matt Nagle became the first person to control an artificial hand using a BCI in 2005 as part of the
first nine-month human trial of Cyberkinetics Neurotechnology’s BrainGate chip-implant. Implanted in Nagle’s
right precentral gyrus (area of the motor cortex for arm movement), the 96-electrode BrainGate implant allowed
Nagle to control a robotic arm by thinking about moving his hand as well as a computer cursor, lights and TV.
Experiment: Monkey Brain Controlled Robotic Arm: A work presented at the annual meeting of American
Association of the Advancement of Science (AAAS) showed a monkey feeding itself using a robotic arm
electronically linked to its brain. The monkey’s real arms are restrained in plastic tubes. To control the robotic
arm, 96 electrodes – each thinner than a human hair – are attached to the monkey’s motor cortex, a region of the
brain responsible for voluntary movement.
IBM Blue Brain Project Launched: The Blue Brain Project was launched with IBM and the EPFL in 2005. It is an attempt to reverse-engineer the brain. The researchers of the Swiss-based Blue Brain Project have created a
virtual pack of neurons that acts just like the real thing, and hope to get an e-brain up and running.
2008: First Consumer off-the-shelf, Mass Market Game Input Device. High Accuracy BCI Wheelchair
Developed in Japan.
Numenta Founded to Replicate Human Neocortex Ability.
Reconstructing Images from Brain to Computer: Research developed at the Advanced Telecommunications
Research (ATR) Computational Neuroscience Laboratories in Kyoto, Japan allowed the scientists to reconstruct
images directly from the brain and display them on a computer.
Voiceless Phone Calls – The Audeo: Ambient, at a TI developers conference in early 2008, demoed a product they have in development called The Audeo. The Audeo is being developed to create a human–computer interface
for communication without the need of physical motor control or speech production. Using signal processing,
unpronounced speech representing the thought of the mind can be translated from intercepted neurological
signals.
2009: Wireless BCI Developed: A Spanish Company, Starlab, developed a wireless 4-channel system called
ENOBIO. Designed for research purposes, the system provides a platform for application development.
First Research Paper on Wireless BCI: An article by Lisa Zyga on December 21 describes a system that turns brain waves into FM radio signals and decodes them as sound, the first totally wireless brain-computer interface: "A Wireless Brain-Machine Interface for Real-Time Speech Synthesis."
BCI Communication between 2 people over the Internet:
Brain-Computer Interface Allows Person-to-person Communication through the Power of Thought. Dr. Chris James's experiment had one person using BCI to transmit thoughts, translated as a series of binary digits, over the
internet to another person whose computer receives the digits and transmits them to the second user’s brain
through flashing an LED lamp.
Device that lets the Blind "See" with their Tongues:
BrainPort, the device being developed by neuroscientists at Middleton, Wisc.-based Wicab, Inc. (a company co-founded by the late Paul Bach-y-Rita), allows users to see with their tongues. Visual data are collected through a small
digital video camera about 1.5 centimeters in diameter that sits in the center of a pair of sunglasses worn by the
user.
BCI allows Person to Tweet by Thought: University of Wisconsin Department of Biomedical Engineering
created a system that allows a person to tweet with only a thought.
Honda Asimo Robot Controlled by Thought: Honda enabled its famous Asimo robot to be controlled by the
thoughts of a nearby human.
Pentagon Funds "Silent Talk" Brain-Wave Research: DARPA typically backs offbeat or high-risk research that sounds ripped straight out of science fiction, and the latest example is no exception: $4 million for a program called Silent Talk that would allow soldiers to communicate via brain waves. The project has three major goals, according to DARPA. First, try to map a person's EEG patterns to his or her individual words. Then, see if those patterns are generalizable, that is, whether everyone has similar patterns. Last, "construct a fieldable pre-prototype that would decode the signal and transmit over a limited range."
2010: BCI X-Prize Development Started: The X PRIZE is a $10 million+ award given to the first team to achieve a specific goal, set by the X PRIZE Foundation, which has the potential to benefit humanity. In 2004, SpaceShipOne won the $10M Ansari X PRIZE for spaceflight. Virgin Galactic and Scaled Composites recently rolled out SpaceShipTwo. The latest X PRIZE, announced in 2010, is about "inner space". The Brain-Computer Interface (BCI) X PRIZE will reward nothing less than a team that provides vision to the blind, new bodies to disabled people, and perhaps even a geographical "sixth sense" akin to a GPS iPhone app in the brain.
2011: January 02: The first thought-controlled social media network, using NeuroSky technology, is launched.
February 01: Ray Lee, technical director of the Princeton Neuroscience Institute, has developed the world’s
first dual-headed fMRI scanner. The new design allows an MRI machine to scan two brains at once, potentially
paving the way to future research on how different brains respond to stimuli and to each other. [11]
February 09: The U.S. Food and Drug Administration (FDA) proposed the Innovation Pathway, a priority
review program for new, breakthrough medical devices and announced the first submission: a brain-controlled,
upper- extremity prosthetic that will serve as a pilot for the program. A brand new wireless brain-reading
headset debuts at the Medical Design and Manufacturing conference and exhibition in Anaheim, California.
March 23: Paralyzed people compose music with brainwaves. Indian scientists are working on a brain-controlled robot to help disabled people.
April 10: fMRI Feedback Improves ability to control thoughts.
April 13: Rabbi says Driving Thought-controlled Car is allowed on the Sabbath.
April 28: Nick Johnston, a Grade 10 Semiahmoo Secondary student will soon present the project he created to
explore the communication of word and letter combinations using brainwaves, essentially allowing people to
communicate without speaking.
June 26: A thought-controlled wheelchair system from the University of Technology, Sydney (UTS), has been
awarded third place in the Australian Innovation Award Anthill SMART 100 Index.
June 28: Minimally Invasive BioBolt Brain Implant converts thoughts into movement.
July 26: MyndPlay showcases mind-controlled movies at the 7th annual Topanga Film Festival in California. This
is the world’s first mind controlled video and movie platform, about which you could read on Neurogadget.com
first in March 2011. MyndPlay allows the viewer to control movies using nothing but their emotions and will.
Android OS gets Mobile Brainwave measurement system.
August 25: UK teen Chloe Holmes is the youngest European with a bionic hand.
September 22-24: 5th International Brain Computer Interface Conference 2011 took place.
September 26: China-based Haier Electronics presented at this month's IFA Expo "the world's first brain-computer interface technology Smart TV." The TV is manufactured by Haier and powered by NeuroSky
ThinkGear technology. The device is called Haier Cloud Smart TV and features the ability to let users interact
with apps using NeuroSky’s biofeedback headset. [12]
October 30: Dr. Bin He of the University of Minnesota and his team report a promising experiment achieving 85% accuracy in steering a virtual helicopter with thoughts.
November 16: The world's first video of the female brain during orgasm is captured by fMRI. [13]
December 16: A small American company, Advancer Technologies, has developed a plug-and-play USB videogame controller that harnesses the power of electromyography (EMG), a technique for evaluating and recording the electrical activity produced by skeletal muscles, to allow players to directly control computer games with their muscles. [14]
2012: January 09: In December 2011 IBM unveiled its fifth annual “Next Five in Five” – a list of innovations
that have the potential to change the way people work. According to IBM in the next five years, technology
innovations will dramatically change people’s lives. The Next 5 in 5 of 2011 enlists a couple of optimistic
predictions like the elimination of passwords, and batteries that breathe air to power our devices, and most
importantly for us, mind-reading applications.
IBM scientists are among those researching how to link your brain to your devices, such as a computer or a
smartphone. If you just need to think about calling someone, it happens. Or you can control the cursor on a
computer screen just by thinking about where you want to move it. [15]
IX. Indicators and Predictions for the Future of BCI
2015-2018: By 2018, most of us will be controlling our computers or gaming devices with some form of natural input such as speech, touch, sight, brain waves, etc. This first wave of natural input devices will include brain signal control, a major first step that raises social awareness of BCI and encourages development in BCI.
2020-2025: The advancement of nanotechnology may help us to create smaller and far superior chips. Around 2020 to 2025, we will start seeing the first researchers using computers to simulate human brains. If quantum computing arrives, we may see this happening even faster.
2025-2030: Physically disabled people are already getting help from BCI technology, and this trend is increasing very rapidly. Bionic ear implants are already very popular. Bionic eyes have been experimented with for a few years, and their resolution is getting better each year. Nowadays users can control artificial arms and legs using their thoughts. By 2030, scientists may be able to transplant a human brain into a robot.
2045: By around 2045, we can hope to unlock the complexity of our brain, to fully understand how the brain works and to decode our thoughts.
2060: Human dreams may be easily visualized as video on a computer monitor.
2070: By 2070, we can expect that humans will be able to communicate wirelessly through thoughts with the devices around them. We can call it e-meditation.
2080: By 2080, the human brain's processing power and a computer processor's power may together process several mathematical or other problems simultaneously.
2090: By 2090, a dead human brain's thinking capability or thinking pattern may be transferred to a computer, and the computer may process and continue this thinking pattern and give results as a human would. This could give the human brain immortality.
X. Conclusion
In the end, our aim is to explain what BCI is and to outline its recent history. We also describe the present state of BCI research and offer some of our own predictions about BCI. Nowadays BCI is one of the most popular research fields in the world, and it is becoming the best solution for rehabilitating motor-disabled people. BCI may also become a valuable tool in the treatment of mental illness and in the development of intelligent robots with human brain capability. So we can say that research in this field is very important for humankind.
References
[1] Arthur C. Guyton (1991), "Textbook of Medical Physiology," Eighth Edition, Philadelphia, W. B. Saunders Company.
[2] http://en.wikipedia.org/wiki/Neuron [1 August 2011].
[3] Tatum, W. O., Husain, A. M., Benbadis, S. R. (2008), "Handbook of EEG Interpretation," USA, Demos Medical Publishing.
[4] Niedermeyer, E. and da Silva, F. L. (2004), "Electroencephalography: Basic Principles, Clinical Applications, and Related Fields," Lippincott Williams & Wilkins, ISBN 0781751268.
[5] http://en.wikipedia.org/wiki/Magnetoencephalography [13 August 2011].
[7] Jonathan R. Wolpaw, Niels Birbaumer, William J. Heetderks, Dennis J. McFarland, P. Hunter Peckham, Gerwin Schalk, Emanuel Donchin, Louis A. Quatrano, Charles J. Robinson, and Theresa M. Vaughan (2000), "Brain-Computer Interface Technology: A Review of the First International Meeting," IEEE Transactions on Rehabilitation Engineering, vol. 8, no. 2, pp. 164-173.
[8] Peter Meinicke, Matthias Kaper, Florian Hoppe, Manfred Heumann and Helge Ritter (2003), "Improving Transfer Rates in Brain Computer Interfacing: A Case Study," p. 1 (Abstract).
[9] http://www.wireheading.com/delgado/brainchips.pdf [1 December 2011].
[10] http://news.bbc.co.uk/2/hi/science/nature/471786.stm [1 December 2011].
[11] http://neurogadget.com/2011/02/09/brain-controlled-prosthetic-arm-first-to-pass-speedy-approval-process/891 [2 December 2011].
[12] http://neurogadget.com/2011/09/26/brain-controlled-tv-available-in-china-this-october/2649 [3 December 2011].
[13] http://neurogadget.com/2011/11/16/female-brain-during-orgasm-captured-by-fmri/3287 [3 December 2011].
[14] http://neurogadget.com/2011/12/16/video-play-super-mario-via-the-signals-generated-by-your-muscles/3408 [17 December 2011].
[15] http://neurogadget.com/2012/01/09/ibms-next-5-in-5-predicts-smaller-mind-reading-devices-by-2017/3416 [10 January 2012].
Mouse Control using a Web Camera based on Colour Detection
1Vinay Kumar Pasi, 2Saurabh Singh
1,2Student of Computer Science & Engineering,
Department of Computer Science and Engineering,
IMS Engineering College, Ghaziabad, 201009, Uttar Pradesh, INDIA
Abstract: In this paper, we present a novel approach for Human Computer Interaction (HCI), in which we have tried to control the mouse cursor movement and the click events of the mouse using hand gestures. Hand gestures are acquired using a camera and a colour detection technique. We control cursor movement using real-time image processing and colour detection. The aim of this method is the use of a Web camera to develop a virtual human computer interaction device in a cost-effective manner.
Keywords: Hand gesture, Human Computer Interaction, Colour Detection, Webcam, Real Time Image
processing
I. Introduction
Computer technology is developing rapidly, yet computers still require physical interaction with the user, and the significance of human computer interaction is growing rapidly. Mobile devices use touch screen technology, but this technology is still not cheap enough to be used on portable computers and desktop systems. Creating a virtual human computer interaction system makes mouse, touchpad and touch screen technology optional for the user. The inspiration is to make an application that interacts with the computer with ease, and to build a virtual human computer interaction system. A virtual mouse is software that permits the user to give mouse inputs to a system without using a physical mouse. A virtual mouse can usually be operated with webcam input. A virtual mouse which uses a webcam works with the assistance of different image processing techniques. A colour pointer is used for object recognition and tracking. The left and right click events of the mouse are achieved by detecting the number of pointers in the images. The hand movements of a user are mapped into mouse inputs. A web camera is set to take images continuously. The user must hold an object of a particular colour in his hand so that, when the web camera takes an image, that colour is visible in the acquired image. This colour is detected from the image pixels, and the pixel position is mapped into mouse coordinates. The Graphical User Interface (GUI) on PCs is quite mature and well characterised, and gives an effective interface for a user to interact with the computer and access the different applications easily with the help of mice, trackpads, and so on. In the present-day situation, the vast majority of mobile phones use touch screen technology to interface with the user. At the same time, this technology is still not cheap enough to be utilised in desktops and portable laptops. The goal of this work is to make a virtual mouse system that uses a web cam to interact with the PC in a more user-friendly way and that can be an alternative approach to the touch screen.
II. Digital Image Processing
Digital images are snapshots of a scene or are scanned from documents, such as photographs, manuscripts, printed
texts, and artwork. The digital image is sampled and mapped as a grid of dots or picture elements (pixels). Each
pixel is assigned a tonal value (black, white, shades of gray or color), which is represented in binary code (zeros
and ones). The binary digits ("bits") for each pixel are stored in a sequence by a computer and often reduced to a
mathematical representation (compressed). The bits are then interpreted and read by the computer to produce an
analog version for display or printing.
A. Image Analysis
Image analysis is the extraction of meaningful information from images, mainly from digital images by means of digital image processing techniques. Image analysis tasks can be as simple as reading bar-coded tags or as sophisticated as identifying a person from their face. In this project we use various image analysis techniques. The main task is colour detection. First we receive an image from the web cam. Then each pixel is retrieved from the image and its red, green and blue (RGB) values are extracted. We can then easily detect a particular colour, since all colours are combinations of RGB values. Here we only try to detect the red, green and blue colours. This is done by traversing the image, retrieving each pixel, extracting its RGB values and then comparing them against reference colour values. Digital image processing is the use of computer algorithms to perform image processing on digital images. As a subcategory or field of digital signal processing, digital image processing has many advantages over analog image processing. It allows a much wider range of algorithms to be applied to the input data and can avoid problems such as the build-up of noise and signal
distortion during processing. Since images are defined over two dimensions, digital image processing may be modeled in the form of a multidimensional system.
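The per-pixel colour test described above can be sketched as follows. This is an illustrative Python/NumPy sketch rather than the authors' MATLAB code; the function name, the channel-difference rule and the threshold value of 60 are assumptions made purely for the example.

    import numpy as np

    def count_colour_pixels(rgb_image, threshold=60):
        # rgb_image: numpy array of shape (height, width, 3), dtype uint8, channel order R, G, B.
        # A pixel counts as "red" when its red value exceeds both other channels by `threshold`,
        # and similarly for green and blue.
        img = rgb_image.astype(np.int16)        # avoid uint8 wrap-around when subtracting
        r, g, b = img[..., 0], img[..., 1], img[..., 2]
        red = (r - np.maximum(g, b)) > threshold
        green = (g - np.maximum(r, b)) > threshold
        blue = (b - np.maximum(r, g)) > threshold
        return {"red": int(red.sum()), "green": int(green.sum()), "blue": int(blue.sum())}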
B. Systems Analysis
System analysis, or study, is an important phase of any system development process. The system is studied in minute detail and analysed. The system analyst examines the working of the present system in depth. The system is viewed as a whole and its inputs are identified. During the analysis phase, several alternative solutions were evaluated for each identified problem, and the most feasible one was selected. A feasibility analysis was performed to evaluate the possible solutions and to recommend the most feasible one.
C. Purpose
The aim of developing this system is to create a virtual mouse that works with the assistance of a webcam. In this system a camera continuously takes pictures of the hand movements of the user, which are then mapped into mouse inputs. This means that we can give inputs to the computer without having any physical contact with it and without the need for any additional input device like a mouse or touchpad.
D. Methodology
The implementation has been divided into various steps and each step has been explained below. The system
flow explains the overview of the steps involved in the implementation of virtual mouse.
E. System Flow
F. System Approach
1. Capturing real-time video using a webcam.
2. Processing the individual image frames.
3. Flipping each image frame.
4. Converting each frame to a grayscale image.
5. Colour detection and extraction of the different colours (RGB) from the flipped grayscale image.
6. Converting the detected image into a binary image.
7. Finding the region of the image and calculating its centroid.
8. Tracking the mouse pointer using the coordinates obtained from the centroid.
9. Simulating the left click and right click events of the mouse by assigning different colour pointers.
G. Capturing the Real-Time Video
For the system to work we need a sensor to detect the hand movements of the user. The webcam of the
computer is used as a sensor. The webcam captures the real time video at a fixed frame rate and resolution
which is determined by the hardware of the camera. The frame rate and resolution can be changed in the system
if required.
1. Computer Webcam is used to capture the Real Time Video.
2. Video is divided into Image frames based on the FPS (Frames per second) of the camera.[1]
Figure 1: Capturing the Video
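As a rough illustration of this capture step, the sketch below grabs frames from the default webcam using OpenCV in Python. The original system was built in MATLAB, so the library, the 640x480 resolution and the key press used to stop the loop are assumptions made only for this example.

    import cv2

    cap = cv2.VideoCapture(0)                      # 0 selects the default webcam
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)         # resolution can be lowered for speed
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)

    while True:
        ok, frame = cap.read()                     # one image frame per iteration (BGR order)
        if not ok:
            break
        cv2.imshow("webcam", frame)                # display the captured frame
        if cv2.waitKey(1) & 0xFF == ord("q"):      # press 'q' to stop capturing
            break

    cap.release()
    cv2.destroyAllWindows()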
H. Flipping of Images
When the camera captures an image, it is inverted. This means that if we move the colour pointer towards the left, the image of the pointer moves towards the right and vice versa. It is similar to the image obtained when we stand in front of a mirror (left is detected as right and right is detected as left). To avoid this problem we need to flip the image about its vertical axis. The captured image is an RGB image and flipping actions cannot be performed on it directly. So the individual colour channels of the image are separated and then flipped individually. After flipping the red, blue and green channels individually, they are concatenated and a flipped RGB image is obtained [1].
Figure 2: Flipped Images
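The channel-wise flipping described above can be sketched as follows. This is an illustrative Python/OpenCV sketch, not the authors' MATLAB implementation; splitting, flipping and merging each channel gives the same result as mirroring the whole frame about its vertical axis.

    import cv2

    def flip_like_a_mirror(frame):
        # Separate the colour channels (OpenCV frames are ordered B, G, R).
        b, g, r = cv2.split(frame)
        # Flip each channel individually about the vertical axis (flipCode=1).
        b, g, r = cv2.flip(b, 1), cv2.flip(g, 1), cv2.flip(r, 1)
        # Concatenate the flipped channels back into a single mirrored image.
        return cv2.merge((b, g, r))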
I. Conversion of the Flipped Image into a Grayscale Image
Compared to a coloured image, computational complexity is reduced in a grayscale image. Thus the flipped image is converted into a grayscale image. All the necessary operations are performed after converting the image into grayscale [1].
J. Colour Detection
Figure 3. Color Detection
This is the most important step in the whole process. The red, green and blue color object is detected by
subtracting the flipped color suppressed channel from the flipped Gray-Scale Image. This creates an image
which contains the detected object as a patch of grey surrounded by black space [2].
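A hedged sketch of this detection step is given below in Python/OpenCV. The paper performs the subtraction in MATLAB; one common formulation, assumed here, subtracts the grey-scale image from the colour channel of interest so that only strongly coloured pixels stay bright.

    import cv2

    def red_difference_image(frame_bgr):
        # Grey-scale version of the flipped frame.
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        # Red channel (OpenCV stores channels in B, G, R order).
        red = frame_bgr[:, :, 2]
        # Saturating subtraction: the red object appears as a grey patch on black.
        return cv2.subtract(red, gray)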
K. Conversion of the Grayscale Image into a Binary Image
The grey region of the image obtained after subtraction needs to be converted to a binary image in order to find the region of the detected object. A grayscale image consists of a matrix containing the value of each pixel. The pixel values lie in the range 0 to 255, where 0 represents pure black and 255 represents pure white. We use a threshold value to convert the image to a binary image: all pixel values lying below the threshold are converted to pure black (0) and the rest are converted to white (255). The resulting image is thus a monochromatic image consisting of only black and white. The conversion to binary is required because MATLAB can only find the properties of a monochromatic image.
Figure 4: Detected region
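The thresholding step can be sketched as below, again in Python/OpenCV rather than MATLAB; the threshold value of 40 is an illustrative choice that would normally be tuned for the lighting conditions.

    import cv2

    def to_binary(diff_image, threshold=40):
        # Pixels below the threshold become 0 (pure black); the rest become 255 (white),
        # giving the monochromatic image whose properties are measured in the next step.
        _, mask = cv2.threshold(diff_image, threshold, 255, cv2.THRESH_BINARY)
        return mask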
L. Finding the Centroid of an Object and Plotting a Bounding Box
For the user to control the mouse pointer it is necessary to determine a point whose coordinates can be sent to the cursor. With these coordinates, the system can control the cursor movement. An inbuilt function in MATLAB is used to find the centroid of the detected region. The output of the function is a matrix consisting of the X (horizontal) and Y (vertical) coordinates of the centroid. These coordinates change with time as the object moves across the screen.
1. The centroid of the image is detected and a bounding box is drawn around it.
2. Its coordinates are located and stored in a variable.
Figure 5. Bounding box drawn for the detected color pointers.
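A sketch of this step is shown below. The authors use MATLAB's built-in region functions; the Python/OpenCV equivalent assumed here finds the largest white region, its centroid from image moments, and its bounding box.

    import cv2

    def centroid_and_box(mask):
        # Find the outer contours of all white regions in the binary mask.
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return None                               # nothing detected in this frame
        largest = max(contours, key=cv2.contourArea)  # keep only the biggest region
        m = cv2.moments(largest)
        if m["m00"] == 0:
            return None
        cx, cy = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])   # centroid (x, y)
        x, y, w, h = cv2.boundingRect(largest)                        # bounding box
        return (cx, cy), (x, y, w, h)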
M. Tracking the Mouse Pointer
Once the coordinates have been determined, the mouse driver is accessed and the coordinates are sent to the cursor. With these coordinates, the cursor places itself in the required position. It is assumed that the object moves continuously; each time a new centroid is determined, the cursor obtains a new position for each frame, thus creating an effect of tracking. So as the user moves his hand across the field of view of the camera, the mouse moves proportionally across the screen. There is no inbuilt function in MATLAB which can directly access the mouse drivers of the computer, but MATLAB code supports integration with other languages like C, C++ and Java. Since Java is a machine-independent language, it is preferred over the others. A Java object is created and linked with the mouse drivers. Based on the detection of other colours along with red, the system performs the clicking events of the mouse. These colour codes can be customised based on the requirements. [3]
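The proportional mapping from camera coordinates to screen coordinates can be sketched as follows. The authors access the mouse driver through a Java object; the sketch below instead assumes the third-party pyautogui package purely for illustration.

    import pyautogui    # assumption: stands in for the Java mouse object used in the paper

    def move_cursor(centroid, frame_size):
        # centroid   : (cx, cy) pixel coordinates of the pointer in the video frame
        # frame_size : (frame_width, frame_height) of the captured video
        cx, cy = centroid
        fw, fh = frame_size
        sw, sh = pyautogui.size()                    # current screen resolution
        sx = int(cx * sw / fw)                       # proportional horizontal mapping
        sy = int(cy * sh / fh)                       # proportional vertical mapping
        pyautogui.moveTo(sx, sy)                     # place the cursor at the mapped point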
N. Performing Clicking Actions
The control actions of the mouse are performed by controlling the flags associated with the mouse buttons.
JAVA is used to access these flags. The user has to perform hand gestures in order to create the control actions.
Due to the use of color pointers, the computation time required is reduced. Furthermore the system becomes
resistant to background noise and low illumination conditions. The detection of green and blue colors follows
the same procedure discussed above.
Clicking action is based on simultaneous detection of two colors.
1. If Red along with single Blue color is detected, Left clicking action performed.
2. If Red along with double Blue color is detected, Right clicking action is performed.
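The two clicking rules above can be expressed directly in code. As before, this is an illustrative Python sketch using the assumed pyautogui package instead of the Java mouse flags described in the paper.

    import pyautogui    # assumption: stands in for the Java mouse flags used in the paper

    def perform_click(red_detected, blue_pointer_count):
        # Rule 1: red plus exactly one blue pointer -> left click.
        if red_detected and blue_pointer_count == 1:
            pyautogui.click(button="left")
        # Rule 2: red plus exactly two blue pointers -> right click.
        elif red_detected and blue_pointer_count == 2:
            pyautogui.click(button="right")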
III. Problems and Drawbacks
Since the system is based on image capture through a webcam, it is dependent on illumination to a certain extent. Furthermore, the presence of other coloured objects in the background might cause the system to give an erroneous response. Although this problem can be reduced by configuring the threshold values and other parameters of the system, it is still advised that the operating background be light and that no brightly coloured objects be present. The system might run slower on certain computers with low computational capabilities, because it involves a lot of complex calculations in a very small amount of time; however, a standard PC or laptop has the computational power required for optimum performance of the system. Another issue is that if the resolution of the camera is too high, the system might run slowly. However, this problem can be solved by reducing the resolution of the image through changes in the system.
IV. Literature Review
(a) Erdem et al. [4] used a camera and computer vision technology, such as image segmentation and gesture recognition, to control mouse tasks.
(b) Hojoon Park [5] used computer vision technology and a web camera to control mouse movements. However, he used fingertips to control the mouse cursor, and the angle between the thumb and index finger was used to perform clicking actions.
(c) Chu-Feng Lien [6] used an intuitive method to detect hand motion by its Motion History Images (MHI). In this approach only the fingertip was used to control both the cursor and the mouse click. In his approach the user needs to hold the mouse cursor on the desired spot for a specific period of time for the clicking operation.
(d) Kamran Niyazi et al. [7] used a webcam to detect colour tapes for cursor movement. The clicking actions were performed by calculating the distance between two coloured tapes on the fingers.
(e) K. N. Shah et al. [8] have presented some of the innovative methods of finger tracking used to interact with a computer system using computer vision. They divided the approaches used in Human Computer Interaction (HCI) into two categories: (i) HCI without using an interface and (ii) HCI using an interface. Moreover, they mentioned some useful applications using finger tracking through computer vision.
V. Conclusion
With the help of this technology, an object-tracking based virtual mouse application has been created and developed using a webcam. The system has been implemented in the MATLAB environment with the help of the MATLAB image processing tools. This technology has wide applications in the fields of augmented reality, computer gaming, computer graphics, prosthetics, and biomedical instrumentation. Furthermore, a similar technology can be applied to create applications like a digital canvas, which is gaining popularity among artists. This technology can be used to help patients who do not have control of their limbs. In the case of computer graphics and gaming, this technology has been applied in modern gaming consoles to create interactive games where a person's motions are tracked and interpreted as commands. The majority of such applications require extra equipment, which is often very costly. Our motive was to create this technology in the cheapest possible way and also to create it under a standardised operating system. Various application programs can be written exclusively for this technology to create a wide range of applications with the minimum requirement of resources.
References
[1] Rafael C. Gonzalez and Richard E. Woods, Digital Image Processing, 2nd edition, Prentice Hall, Upper Saddle River, New Jersey, 07458.
[2] Shahzad Malik, "Real-time Hand Tracking and Finger Tracking for Interaction," CSC2503F Project Report, December 18, 2003.
[3] The MATLAB website. [Online]. Available: http://www.mathworks.com/matlabcentral/fileexchange/28757-tracking-red-colorobjects-using-matlab
[4] A. Erdem, E. Yardimci, Y. Atalay, V. Cetin, A. E., "Computer vision based mouse," Acoustics, Speech, and Signal Processing, Proceedings (ICASSP), IEEE International Conference, 2002.
[5] Hojoon Park, "A Method for Controlling the Mouse Movement using a Real Time Camera," Brown University, Providence, RI, USA, Department of Computer Science, 2008.
[6] Chu-Feng Lien, "Portable Vision-Based HCI – A Realtime Hand Mouse System on Handheld Devices," National Taiwan University, Computer Science and Information Engineering Department.
[7] Kamran Niyazi, Vikram Kumar, Swapnil Mahe, Swapnil Vyawahare, "Mouse Simulation Using Two Coloured Tapes," Department of Computer Science, University of Pune, India, International Journal of Information Sciences and Techniques (IJIST), Vol. 2, No. 2, March 2012.
[8] K. N. Shah, K. R. Rathod and S. J. Agravat, "A survey on Human Computer Interaction Mechanism Using Finger Tracking," International Journal of Computer Trends and Technology, 7(3), 2014, 174-177.
Acknowledgments
We are grateful to our Department of Computer Science & Technology for their support and for providing us with an opportunity to review such an interesting topic. While reading and searching about this topic, we learnt various important and interesting facts.
Security Threats to Home Automation System
Priyaranjan Yadav1, Vishesh Saxena2
1,2Student of Computer Science & Engineering,
Department of Computer Science and Engineering,
IMS Engineering College, Ghaziabad, 201009, Uttar Pradesh, INDIA
_____________________________________________________________________________________
Abstract: Home automation technologies can transform a home into a smart home. Lighting, heating and other
appliances are remotely controlled by the tenants via smartphone and Internet connection or automatically by
sophisticated control mechanisms and trained behaviour. Comfort, energy efficiency, health care, and security
are provided by a smart home to its users.
In spite of miscellaneous benefits, there are also new threats for the home and its tenants. An attacker can
manipulate the system by replay or injection attacks, for example, to take control over the appliances or to
cause a denial of service. In the worst case, the health and life of tenants can be endangered by an attack, which
suppresses an alarm. Successful attacks can also violate the privacy by observing life style habits of the tenants
like sleeping times, motion profiles or preferred radio programs. Furthermore, increasing adoption of home
automation technologies also increases the attractiveness of attacks on home automation systems. The communication standards used are often proprietary and kept secret, which hinders independent security analyses.
Nevertheless, independent assessment of the security is necessary in order to allow customers to choose a
secure system and help manufacturers to increase the security.
Keywords: Home Automation System, Security Issues.
__________________________________________________________________________________________
I. Introduction
Home automation is "the Internet of Things": the way that all of our devices and appliances will be networked together to provide us with seamless control over all aspects of our home and more. Home automation has been around for many decades in the form of lighting and simple appliance control, and only recently has technology caught up with the idea of the interconnected world, allowing full control of your home from anywhere to become a reality. With home automation, you dictate how a device should react, when it
should react, and why it should react. Home automation is the use of one or more computers to control basic
home functions and features automatically and sometimes remotely. An automated home is sometimes called a
smart home. Home automation may include centralized control of lighting, HVAC (heating, ventilation and air
conditioning), appliances, security locks of gates and doors and other systems, to provide improved
convenience, comfort, energy efficiency and security. Home automation refers to the use of computer and
information technology to control home appliances and features (such as windows or lighting). Systems can
range from simple remote control of lighting through to complex computer/micro-controller based networks
with varying degrees of intelligence and automation. Home automation is adopted for reasons of ease, security
and energy efficiency.
In addition to the miscellaneous benefits, there are also new threats for the home and its tenants. An attacker can
manipulate the system by replay or injection attacks, for example, to take control over the appliances or to cause
a denial of service. In the worst case, the health and life of tenants can be endangered by an attack.
Below are three possible threat scenarios that may occur once attackers take advantage of these well-known home automation protocols:
● X10: Because X10 devices use 4-bit ID numbers, they are vulnerable to brute-force attacks (a short sketch of the search space follows this list). Furthermore, because a device can be turned off with just one command, a thief can turn off an X10-based alarm and infiltrate a victim's house.
● ZigBee: Though ZigBee-based devices have more secure communication, problems still exist in the gateway between the WLAN and an IP network. An attacker can bypass ZigBee authentication due to a user's weak password or misconfiguration, allowing him to access devices like security cameras. With this, an attacker can monitor the user's daily activities and change the gateway configuration to connect to a fake Domain Name System (DNS) or proxy server, which may lead to data theft.
● Z-Wave: By using tools readily available on the Internet, an attacker can sniff all traffic that flows in the WPAN. With this information, an attacker can monitor a user's day-to-day activities and gain information on the kind of devices used at home and how these are controlled. More tech-savvy thieves can even execute random commands via the WPAN.
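As a small illustration of why the X10 ID space invites brute force, the Python sketch below simply enumerates it, assuming the common X10 scheme of 16 house codes (A-P) and 16 unit codes (1-16); no real X10 transmission API is used or implied.

    # 16 house codes times 16 unit codes: only 256 addresses to try exhaustively.
    house_codes = "ABCDEFGHIJKLMNOP"
    unit_codes = range(1, 17)

    all_addresses = [(house, unit) for house in house_codes for unit in unit_codes]
    print(len(all_addresses))    # 256 -- small enough to search in a fraction of a second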
II. Components
Home automation usually comprises four main parts:
1. Main controller
2. Interfaces
3. Sensors
4. Control methods
Main Automation Controller:
The main controller is usually a computer of some sort, although many times it does not look like a computer at all. Some systems do use an actual home computer, but these are usually not as reliable. The advantage of having a separate controller is that its focus is only on the home automation tasks.
Interfaces:
An interface is the way you interact with the home automation controller. There are many types of interfaces.
Remote Control:
Most of us are familiar with remote controls. In this case, the visual side of the interface is displayed on your TV
screen and you use the remote control to interact with it. Some remote controls also have a small LCD screen, so
you can use the remote without the TV to turn lights and other devices on and off.
Touch Panels:
These can range from hardwired 4" screens to larger 10" touch screens that you can carry around the house and
set in a docking station to recharge when you are done using them.
Mobile Devices:
The most popular of these is, of course, the iPod. You can even use an iPod's touch screen to operate your home
automation system.
Internet:
Your controller is most likely connected to your home network and to the Internet, which means you can access
your home automation system while you are away: turn lights off, turn the heat up before you come home, and
view webcams in your house.
Sensors:
Sensors report the current state of something. A contact sensor on a door or window, for example, can tell the
controller whether that door is open or closed. Another type is the motion sensor, which detects if and when
motion occurs. This could be used, for example, to tell the controller to turn the lights on in a room when motion
is detected, so the lights come on for you without your having to switch them on yourself.
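A minimal sketch of the rule described above, assuming a hypothetical polling controller and simplified sensor and light objects (none of these classes correspond to a real product's API):

```python
# Minimal sketch of a controller rule: turn a light on when motion is detected.
# MotionSensor and Light are illustrative placeholders, not a real API.
import time

class MotionSensor:
    def __init__(self):
        self.motion = False        # would be updated by real hardware

class Light:
    def __init__(self):
        self.on = False
    def turn_on(self):
        self.on = True
        print("Light on")

def controller_loop(sensor: MotionSensor, light: Light) -> None:
    # Poll the sensor and react; a real controller would be event-driven.
    while True:
        if sensor.motion and not light.on:
            light.turn_on()
        time.sleep(0.5)
```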
III. Control Methods
Controllers can communicate with and control the many different parts of a home automation system in a variety
of ways. Some of these are IP (Internet Protocol), WiFi, ZigBee, IR, serial data, and relays (for motorization).
IP/TCP:
This is obviously used when you interact with your controller over the Internet, but it is also used for
communication between your controller and wired touch panels, contacts, security systems, thermostats, and so
on. It is a standards-based way of communicating that uses very common cabling, which is convenient because
many new houses are already being wired with it, and it is affordable to install if you are building a new home or
remodeling.
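As a rough illustration of command traffic over IP, the sketch below sends a text command to a controller over TCP; the controller address, port, and the "LIGHT KITCHEN OFF" syntax are assumptions made for the example, since every controller defines its own protocol.

```python
# Sketch: send a command to a home-automation controller over TCP/IP.
# The controller address, port, and command syntax are assumptions for
# illustration; real controllers define their own protocols.
import socket

CONTROLLER_ADDR = ("192.168.1.50", 5000)  # hypothetical controller endpoint

def send_command(command: str) -> str:
    with socket.create_connection(CONTROLLER_ADDR, timeout=5) as sock:
        sock.sendall(command.encode("ascii") + b"\n")
        reply = sock.recv(1024)           # wait for the controller's response
    return reply.decode("ascii").strip()

print(send_command("LIGHT KITCHEN OFF"))
```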
WiFi:
If you use a laptop, you know what WiFi is. WiFi is a great option when you cannot get Ethernet wiring to a
location. It is a fairly good medium for streaming music to different parts of the house and allows large amounts
of data to be passed back and forth without wires. It is always best if you can run a wire to the location you are
trying to control, but sometimes this is not possible or would be cost prohibitive.
ZigBee:
ZigBee is a newer form of wireless communication. It is also a standards-based protocol, which helps make it
reliable. ZigBee allows two-way communication between devices but can only transmit very small amounts of
information, so the controller can say "light off" and the light can respond, "OK, the light is off." This is valuable
because it is reliable: when you tell a light to turn off, you know it has turned off.
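The send-and-acknowledge pattern described above can be sketched as a small retry loop; the transmit function is a hypothetical stand-in for a radio interface, not a real ZigBee binding.

```python
# Sketch of a send-and-acknowledge exchange over a low-bandwidth radio link.
# transmit() is a hypothetical stand-in for the radio interface.
import time

def transmit(command):
    """Placeholder: send a short command and return the device's reply."""
    return "OK"                    # simulated acknowledgement

def send_with_ack(command, retries=3):
    for _ in range(retries):
        if transmit(command) == "OK":
            return True            # device confirmed the state change
        time.sleep(0.2)            # brief pause before retrying
    return False                   # command was never confirmed

if send_with_ack("LIGHT OFF"):
    print("Light confirmed off")
```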
IR (infrared):
If you have used a handheld remote before, you know what infrared is. It is a small beam of light used to send
commands one way. Usually, though, controllers use an IR router with small wires that carry the signal to a little
light emitter that sits on top of the device's sensor (such as a stereo).
Serial Data:
This is a connection most people are not familiar with; it usually looks like a port on a computer. The advantage
of this type of communication over IR is that it can be two-way, and it does not rely on light transmission; it is a
pure transmission of digital commands. This type of port is also starting to appear on some higher-end home
electronics. It is only used to send commands back and forth and does not carry as much information as, say, IP
or WiFi.
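A hedged sketch of such a two-way serial exchange, using the third-party pyserial package; the port name and the command bytes are assumptions for illustration only.

```python
# Sketch: two-way command exchange over a serial (RS-232 style) port,
# using the third-party pyserial package. The port name and command
# bytes are illustrative assumptions, not a specific device's protocol.
import serial  # pip install pyserial

with serial.Serial("/dev/ttyUSB0", baudrate=9600, timeout=2) as port:
    port.write(b"POWER OFF\r\n")      # send a short command
    response = port.readline()        # read the device's reply, if any
    print(response.decode(errors="replace").strip())
```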
IV. Embedded Security Threats
Algorithm and protocol weaknesses – A security algorithm or protocol might contain conceptual design errors or
logical weaknesses that can be easily exploited by an attacker to gain control over the security system.
Unauthorized software upgrades – Device firmware is upgraded to a different version without authentication, for
example, to exploit known security vulnerabilities or to circumvent license restrictions.
Eavesdropping – Messages between two legitimate devices of the building automation system can be
intercepted by an attacker without authorization to gain insider knowledge and to steal private data.
Replay attack – Messages from legitimate devices of the building automation system are recorded and resent
later by an attacker without being noticed.
Man-in-the-middle attack – An attacker hooks into the communication between two or more parties and
intercepts and relays messages, pretending to the legitimate parties that they are communicating directly.
Tampering – Messages between two legitimate devices of the building automation system can be manipulated
by an attacker by adding, replacing, or removing data without being noticed.
Identity spoofing or theft – An attacker spoofs or steals the identity of a legitimate device of the building
automation system to gain access to restricted data, functions or services.
Denial-of-service – Functions provided or communications between legitimate devices of the building
automation system are deactivated or prevented, for instance, by overloading the communication channel with
myriads of malicious requests.
V. Embedded Security Solutions
Security evaluation and certification assures that the applied security measures meet the required protection
goals or corresponding security standards. Hence, trustworthy, well-established external parties such as officially
certified security evaluation laboratories perform comprehensive theoretical and practical security analyses of
the respective security solution in order to identify potential security gaps (if any) or, if everything is as required,
to issue a corresponding security certificate.
Firewalls, intrusion detection, and security gateways are installed at network boundaries to separate networks
with different access rights and different security levels. They prevent, for instance, remote attackers located in
external networks from accessing internal networks by analyzing incoming and outgoing network traffic and
dropping unauthorized or potentially malicious data at the network boundary before the malicious inputs reach
the internal network.
Secure hardware such as smart cards, security controllers, or secure memory assures authenticity and
confidentiality of stored data against software attacks and many physical attacks. Compared with regular
hardware, secure hardware implements passive or active countermeasures that increase the effort for an attacker
up to a level where it is economically unfeasible to perform a certain attack.
Secure implementation assures that security issues are not caused by software security vulnerabilities such as
buffer overflows. Secure implementation is achieved, for instance, by applying secure coding standards together
with code security reviews, code security testing, runtime security testing, and dedicated penetration testing.
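One such practice is validating untrusted input before it reaches the code that drives devices. The sketch below shows a minimal input check; the command grammar and size limit are assumptions for illustration.

```python
# Minimal sketch of input validation before a command is acted on.
# The allowed command grammar here is an assumption for illustration.
import re

# Only short, strictly formatted commands are accepted; anything else is
# rejected before it reaches the code that actually drives devices.
COMMAND_PATTERN = re.compile(r"^(LIGHT|LOCK) [A-Z_]{1,16} (ON|OFF)$")

def handle_request(raw: bytes) -> str:
    if len(raw) > 64:                      # bound input size up front
        return "ERROR: request too long"
    try:
        text = raw.decode("ascii")
    except UnicodeDecodeError:
        return "ERROR: invalid encoding"
    if not COMMAND_PATTERN.match(text):
        return "ERROR: malformed command"
    return f"OK: {text}"                   # safe to pass on to the device layer

print(handle_request(b"LIGHT KITCHEN OFF"))
```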
Secure communication is applied to ensure multiple communication security goals at once. It provides entity
authentication, which ensures that a communication partner is indeed who it claims to be. Further, secure
communication guarantees data authenticity, meaning receivers are able to verify that the data they receive is
the same data that was originally sent by the communication partner.
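A minimal sketch of how data authenticity and replay protection might be combined, using Python's standard hmac module; the pre-shared key and message layout are assumptions, not the scheme of any particular product.

```python
# Sketch: message authenticity plus simple replay protection.
# The pre-shared key and message layout are assumptions for illustration.
import hmac, hashlib

KEY = b"pre-shared-secret"          # would be provisioned per device in practice

def protect(counter: int, payload: bytes) -> bytes:
    msg = counter.to_bytes(8, "big") + payload
    tag = hmac.new(KEY, msg, hashlib.sha256).digest()
    return msg + tag

def verify(packet: bytes, last_counter: int):
    msg, tag = packet[:-32], packet[-32:]
    if not hmac.compare_digest(tag, hmac.new(KEY, msg, hashlib.sha256).digest()):
        raise ValueError("authentication failed")        # tampered or forged
    counter = int.from_bytes(msg[:8], "big")
    if counter <= last_counter:
        raise ValueError("replayed message")             # freshness check
    return counter, msg[8:]

packet = protect(1, b"LIGHT OFF")
print(verify(packet, last_counter=0))
```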
Secure boot assures authenticity and integrity of the device software during bootstrap. This is achieved by
keeping a cryptographic reference signature or hash of all relevant software, stored in a way that makes it
accessible only to the boot loader. At boot time, the boot loader computes the cryptographic signature of the
current system, which is then verified against the stored reference signature. Only if this verification is
successful is the software executed; otherwise the system executes a predefined emergency function (e.g.,
error logging or halt).
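A minimal sketch of that verification step, using a stored reference hash; a real boot loader would verify a cryptographic signature held in protected storage rather than reading a plain file as shown here.

```python
# Sketch of the boot-time check described above: compare the firmware image's
# hash against a stored reference before executing it. A real boot loader would
# verify a signature kept in protected storage, not a plain file as shown here.
import hashlib

def firmware_hash(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def secure_boot(image_path: str, reference_hex: str) -> bool:
    if firmware_hash(image_path) != reference_hex:
        print("verification failed: entering emergency mode")   # log or halt
        return False
    print("verification passed: handing control to firmware")
    return True
```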
VI. Literature Review
A variety of network-controlled home automation devices lack basic security controls, making it possible for
attackers to access their sensitive functions, often from the Internet, according to researchers from security firm
Trustwave. Some of these devices are used to control door locks, surveillance cameras, alarm systems, lights
and other sensitive systems. [1]
To be able to integrate security-critical services, the implemented control functions, i.e., functions that control
the building automation services, have to be protected against unauthorized access and malicious interference
(security attack). A typical example of such a security attack is the manipulation of an access control system that
opens and closes an entrance door. To perform security attacks, the malicious entity (adversary) has to identify
vulnerabilities of a system that can be utilized to gain unauthorized access to the control functions. The
existence of vulnerabilities leads to a security threat which can be regarded as the potential for violation of
security that may or may not be utilized. [2]
The automation systems let users control a multitude of devices, such as lights, electronic locks, heating and air
conditioning systems, and security alarms and cameras. The systems operate on Ethernet networks that
communicate over the existing power lines in a house or office building, sending signals back and forth to
control devices. The problem is that all of these signals are sent unencrypted, and the systems don’t require
devices connected to them to be authenticated. This means that someone can connect a sniffer device to the
broadband power network through an electrical outlet and sniff the signals to gather intelligence about what’s
going on in a building where the systems are installed – such as monitoring the movements of people in houses
where security systems with motion sensors are enabled. [3]
With the products available for professional or DIY home automation, the security dangers start with the
protocols used for wired and wireless connectivity. Even with devices that use WPA2 encryption when they
connect to your home wireless network, a breach could only be a matter of time and interest. An attacker could
eventually crack your authentication due to the lack of a proper passphrase, or use social engineering techniques
to discover the passphrase and other information about your home automation implementation. [4]
VII. Conclusion
Power lines do not provide any authentication, and the security of power lines and of electronic devices should be
considered equally important. Home automation security therefore needs more development on the software side
than on the hardware side; high-security designs with good authentication are needed. The vulnerability to attacks
and hacks remains high. X10 automation is less secure than Z-Wave automation, so Z-Wave security must be
strong enough to overcome the shortcomings of X10.
References
[1] http://www.computerworld.com/article/2484542/application-security/home-automation-systems-rife-with-holes--securityexperts-say.html
[2] http://www.auto.tuwien.ac.at/~wgranzer/sebas_tie.pdf
[3] http://www.wired.com/2011/08/hacking-home-automation/
[4] http://www.solutionary.com/resource-center/blog/2013/08/wireless-home-technologies-create-security-risks
Acknowledgements
We are grateful to the Computer Science & I.T. Department of our college for providing the resources and guidance needed for this work.
Our teachers helped us greatly in exploring new technologies and security issues in home automation.