
International Conference on Emerging Trends in Science and Cutting Edge Technology
Dr. R. Prathiba Devi
Department of Apparel and Fashion Design, PSG College of Technology, India
Sound-absorbing materials absorb most of the sound energy striking them and reflect very little, and have therefore been found very useful for the control of noise. They are used in a variety of locations: close to the source of noise, along transmission paths, and sometimes close to the receiver. Porous sound-absorbing materials have evolved into advanced materials over the years. Non-wovens, being web structures, can absorb more sound waves than other fabric structures. Non-wovens were produced from hollow polyester fibers blended with solid polyester fibers and tested for their sound reduction property. In this study, the effects of physical parameters on the sound reduction properties of nonwoven fabrics were investigated. The samples containing 50% solid polyester and 50% hollow polyester gave the best sound reduction in the mid-to-high frequency range, and an increase in the amount of fiber per unit area resulted in an increase in the sound reduction of the fabric.
Keywords: Hollow Polyester, Non Woven, Polyester, Sound Reduction Property.
1. INTRODUCTION
Today, much importance is given to the acoustical environment, and noise control and its principles play an important role in creating an acoustically pleasing environment. This is achieved when the intensity of sound is brought down to a level that is not harmful to human ears. [1] Noise is a major cause of industrial fatigue, irritation, reduced productivity and occupational accidents; continuous exposure to 90 dB or above is dangerous to hearing. Installation of noise-absorbent barriers (made from wood and textiles) between the source and the subjects is one of the main methods of noise control. [2] Measurement techniques used to characterize the sound absorptive properties of a material are the reverberant field method, the impedance tube method and the steady state method. Noise-absorbent textile materials, especially nonwoven structures or recycled materials, have low production costs and low specific gravity and are aesthetically appealing. Acoustic insulation and absorption properties of nonwoven fabrics depend on fiber geometry and fiber arrangement within the fabric structure. [3]
Materials that reduce the acoustic energy of a sound wave as the wave passes through them by the phenomenon of absorption are called sound-absorptive materials. They are commonly used to soften the acoustic environment of a closed volume by reducing the amplitude of the reflected waves. Absorptive materials are generally resistive in nature: fibrous, porous or, in rather special cases, reactive resonators. [4] Classic examples of resistive materials are non-wovens, fibrous glass, mineral wools, felts and foams. Porous materials used for noise control are generally categorized as fibrous media or porous foams. Fibrous media usually consist of glass, rock wool or polyester fibers, and hollow polyesters have high acoustic absorption. Sometimes fire-resistant fibers are also used in making acoustical products. [5] An absorber, when backed by a barrier, reduces the energy in a sound
1334 | P a g e
Venue: YMCA, Connaught Place, New Delhi (India)
wave by converting the mechanical motion of the air particles into low-grade heat. This action prevents a buildup of sound in enclosed spaces and reduces the strength of reflected noise. [4]
In this study the non-woven fabrics were produced with spun-lace technology; in future they could also be produced with thermal bonding or melt-blown technology. Hollow polyester/polyester blends have shown good sound absorption properties, so they can be used for applications such as draperies, ear muffs, etc., optionally combined with a sound-absorption chemical treatment. [6]
2. MATERIALS AND METHODS
2.1 Materials
Solid polyester fibers and hollow polyester fibers were used for the preparation of spun-lace non-woven fabrics. A study has shown that fine-denier fibers, ranging from 1.5 to 6 denier per filament (dpf), perform better acoustically than coarse-denier fibers. [7] Polyester fibers were blended with hollow polyester in 2:2 and 3:1 ratios, along with 100% polyester, at different GSM. Physical properties of the fibers are shown in Table 1.
Table 1. Physical properties of the fibers

Fiber            | Denier   | Staple length
-----------------|----------|--------------
Solid polyester  | 6 denier | 64 mm
Hollow polyester | 6 denier | 64 mm
2.2 Methods
The sourced and procured solid polyester and hollow polyester fibers were blended in three different ratios to produce non-woven fabric by spun-lace technology, suitable for acoustic end use. Basic parameters such as GSM, thickness and air permeability were analyzed for the non-woven fabrics, and the sound reduction properties of the non-woven fabrics were analyzed using a sound reduction tester.
2.2.1 Fabric weight (GSM)
A fabric sample of 5 cm × 5 cm was taken and weighed (w). The weight w, in grams, is used to calculate the GSM (the sample area of 25 cm² equals 25/10000 m²) using the formula
GSM = (w × 10000) / 25
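The conversion above can be sketched as follows; the function name and the sample masses are illustrative, not from the paper.

```python
# GSM (grams per square metre) from the mass of a square fabric swatch.
# A 5 cm x 5 cm sample covers 25 cm^2 = 25/10000 m^2, so GSM = w * 10000 / 25.

def gsm(sample_mass_g: float, side_cm: float = 5.0) -> float:
    """Mass per unit area (g/m^2) of a square fabric swatch."""
    area_m2 = (side_cm / 100.0) ** 2
    return sample_mass_g / area_m2

# Hypothetical swatch masses that would give the paper's two GSM classes:
print(round(gsm(0.205)))  # -> 82
print(round(gsm(0.130)))  # -> 52
```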
2.2.2 Fabric Thickness
The thickness of the fabric is measured according to the standard TS 7128 EN ISO 5084. A piece of fabric is placed on the reference plate of the instrument, ensuring that there are no creases; while being placed, the fabric should not be subjected to any stretch. The pressure foot is gradually brought down, and after allowing it to rest on the fabric for 30 s, the gauge reading is taken.
2.2.3 Air permeability
The fabric transport property most sensitive to fabric structure is air permeability, defined as the volume flow rate per unit area of fabric for a specified pressure differential across the two faces of the fabric. Air permeability of the samples was measured according to the standard TS 391 EN ISO 9237, using an FX 3300 air permeability tester. The measurements were performed at a constant pressure drop of 100 Pa (20 cm² test area). All tests were performed under standard atmospheric conditions (20 °C, 65% RH).
2.2.4 Thermal Conductivity
There are a number of ways to measure thermal conductivity. The most commonly used are Searle's method and Lee's disc method, for good and bad conductors of heat respectively, each suitable for a limited range of materials depending on the thermal properties and the medium temperature. In this study Lee's disc method is used to determine the thermal conductivity of a bad conductor, e.g. glass. The formula used for the calculation (in its standard form) is

k = m s (dθ/dt) d / [π r² (θ₁ − θ₂)]

where
m – mass of the Lee's disc = 870 × 10⁻³ kg
s – specific heat capacity = 370 J kg⁻¹ K⁻¹
dθ/dt – rate of cooling of the disc at θ₂ (K/s)
d – thickness of the sample (m)
r – radius of the Lee's disc (m)
θ₁ – steam chamber temperature (°C)
θ₂ – metal chamber temperature (°C)
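A minimal sketch of the Lee's disc calculation follows. The disc mass and specific heat are the paper's values; the cooling rate, sample thickness, disc radius and temperatures are hypothetical, chosen only to illustrate the arithmetic.

```python
import math

# Lee's disc estimate of thermal conductivity for a poor conductor,
# using the standard form k = m*s*(dT/dt)*d / (pi * r^2 * (theta1 - theta2)).
# m: disc mass (kg), s: specific heat (J/kg/K), cooling_rate: dT/dt at
# theta2 (K/s), d: sample thickness (m), r: disc radius (m),
# theta1, theta2: steam-chamber and disc temperatures (deg C).

def lees_disc_k(m, s, cooling_rate, d, r, theta1, theta2):
    return (m * s * cooling_rate * d) / (math.pi * r**2 * (theta1 - theta2))

# Disc constants from the paper; the remaining values are illustrative only.
k = lees_disc_k(m=870e-3, s=370, cooling_rate=0.01, d=2e-3, r=0.05,
                theta1=98, theta2=80)
print(f"{k:.4f} W/m/K")  # -> 0.0455 W/m/K
```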
2.2.5 Evaluation of sound insulation
Measurement techniques used to characterize the sound absorptive properties of a material are: (a) reverberant field methods and (b) impedance tube methods. [8] A simple testing apparatus was set up to measure the transmission of sound through the spun-laced non-woven fabric. It consists of a sound-insulating box made of thick cardboard sheet with a removable top lid. On one vertical wall inside the box, a sound source and a decibel meter are fixed. On the opposite wall (positioned so that the distance between source and receiver can be adjusted), a second decibel meter is fixed facing the sound generator to measure the transmitted sound intensity. Between the two meters, a sliding arrangement holds the sample vertically.
A sound of a particular decibel level is generated from the control panel. The source and receiver levels are measured by the two decibel meters S and R respectively, both without and with the fabric sample. The reduction attributable to the fabric, which expresses its sound insulation, is calculated as:
dBF = (decibel reduction with sample) − (decibel reduction without sample)
dBF = (dBS − dBR)WS − (dBS − dBR)WOS
where dBF is the sound reduction attributable to the fabric; dBS the sound intensity at the source; dBR the sound intensity at the receiver; WS with sample; WOS without sample.
The samples were tested by applying the above procedure and the results were obtained.
Fig. 1 Schematic view of the experimental set-up
3. RESULTS AND DISCUSSION
3.1 GSM of the non-woven fabrics
Table 2. GSM of the non-woven fabrics

Fiber Proportion                   | 82 GSM   | 52 GSM
-----------------------------------|----------|----------
100% Polyester                     | Sample A | Sample A1
25/75 (Hollow Polyester/Polyester) | Sample B | Sample B1
50/50 (Hollow Polyester/Polyester) | Sample C | Sample C1
Based on the GSM, the samples with 82 GSM were named as A, B, C for 100% polyester, 25/75 (Hollow
Polyester/Polyester), 50/50 (Hollow Polyester/Polyester) respectively and the samples with 52 GSM were
named as A1, B1, C1 for 100% polyester, 25/75 (Hollow Polyester/Polyester), 50/50 (Hollow
Polyester/Polyester) respectively.
3.2 Fabric Thickness
Table 3. Fabric Thickness of the non woven fabrics
Sample Name
Thickness (mm)
The thickness of the non-woven samples, measured using a thickness gauge, is given in Table 3. Numerous studies dealing with sound absorption in porous materials have concluded that low-frequency sound absorption has a direct relationship with thickness. [9] It is observed that the samples with higher GSM (A, B, C) showed greater thickness than samples A1, B1 and C1; the thicker the material, the better the sound absorption.
3.3 Air Permeability & Thermal Conductivity
Table 4. Air Permeability & Thermal Conductivity of the non woven fabrics
Sample Name | Air Permeability (cm³/cm²/s) | Thermal Conductivity (%)
Table 4 shows the air permeability and thermal conductivity of the non-woven samples. Samples A1, B1 and C1 show a higher air permeability rate than samples A, B and C. The interlocking fibers in a non-woven are frictional elements that resist acoustic wave motion. [10] The lower air permeability of the higher-GSM samples is also due to their higher fiber density, which reduces the air transfer rate. The thermal conductivity coefficients of samples A, B, C, A1, B1 and C1 are 67%, 70%, 50%, 33%, 30% and 50% respectively; 100% polyester fabrics and fabrics with higher GSM show higher thermal conductivity. Samples C and C1, which have equal percentages of polyester and hollow polyester fibers, showed the same thermal conductivity.
3.4 Evaluation of sound insulation
3.4.1. Sound insulation of Sample A
Fig. 2 Sound absorption of sample A at 25 cm distance
Fig. 3 Sound absorption of sample A at 50 cm distance
Fig. 4 Sound absorption of sample A at 75 cm distance
Figs. 2, 3 and 4 show the sound absorption of the sample A (100% polyester, 82 gsm) non-woven fabric, evaluated over 400 to 4000 Hz at distances of 25 cm, 50 cm and 75 cm respectively. The fabric was evaluated with up to 6 layers; it is inferred that the sound reduction increased with the number of fabric layers, with the maximum at 4000 Hz.
3.4.2. Sound insulation of Sample A1
Fig. 5 Sound absorption of sample A1 at 25 cm distance
Fig. 6 Sound absorption of sample A1 at 50 cm distance
Fig. 7 Sound absorption of sample A1 at 75 cm distance
Figs. 5, 6 and 7 show the sound absorption of the sample A1 (100% polyester, 52 gsm) non-woven fabric, evaluated over 400 to 4000 Hz at distances of 25 cm, 50 cm and 75 cm respectively. The fabric was evaluated with up to 6 layers; it is inferred that the sound reduction increased with the number of fabric layers, with the maximum at 4000 Hz.
3.4.3. Sound insulation of Sample B
Fig. 8 Sound absorption of sample B at 25 cm distance
Fig. 9 Sound absorption of sample B at 50 cm distance
Fig. 10 Sound absorption of sample B at 75 cm distance
Figs. 8, 9 and 10 show the sound absorption of the sample B (25/75 Hollow Polyester/Polyester, 82 gsm) non-woven fabric, evaluated over 400 to 4000 Hz at distances of 25 cm, 50 cm and 75 cm respectively. The fabric was evaluated with up to 6 layers; it is inferred that the sound reduction increased with the number of fabric layers, with the maximum at 4000 Hz.
3.4.4. Sound insulation of Sample B1
Fig. 11 Sound absorption of sample B1 at 25 cm distance
Fig. 12 Sound absorption of sample B1 at 50 cm distance
Fig. 13 Sound absorption of sample B1 at 75 cm distance
Figs. 11, 12 and 13 show the sound absorption of the sample B1 (25/75 Hollow Polyester/Polyester, 52 gsm) non-woven fabric, evaluated over 400 to 4000 Hz at distances of 25 cm, 50 cm and 75 cm respectively. The fabric was evaluated with up to 6 layers; it is inferred that the sound reduction increased with the number of fabric layers, with the maximum at 4000 Hz.
3.4.5. Sound insulation of Sample C
Fig. 14 Sound absorption of sample C at 25 cm distance
Fig. 15 Sound absorption of sample C at 50 cm distance
Fig. 16 Sound absorption of sample C at 75 cm distance
Figs. 14, 15 and 16 show the sound absorption of the sample C (50/50 Hollow Polyester/Polyester, 82 gsm) non-woven fabric, evaluated over 400 to 4000 Hz at distances of 25 cm, 50 cm and 75 cm respectively. The fabric was evaluated with up to 6 layers; it is inferred that the sound reduction increased with the number of fabric layers, with the maximum at 4000 Hz.
3.4.6. Sound insulation of Sample C1
Fig. 17 Sound absorption of sample C1 at 25 cm distance
Fig. 18 Sound absorption of sample C1 at 50 cm distance
Fig. 19 Sound absorption of sample C1 at 75 cm distance
Figs. 17, 18 and 19 show the sound absorption of the sample C1 (50/50 Hollow Polyester/Polyester, 52 gsm) non-woven fabric, evaluated over 400 to 4000 Hz at distances of 25 cm, 50 cm and 75 cm respectively. The fabric was evaluated with up to 6 layers; it is inferred that the sound reduction increased with the number of fabric layers, with the maximum at 4000 Hz.
3.4.7. Sound reduction based on GSM of the samples
Fig. 20 Sound reduction of samples (A, B, C) at 75 cm distance
Fig. 21 Sound reduction of samples (A1, B1, C1) at 75 cm distance
From Fig. 20 and Fig. 21, it is clear that samples C and C1 (50/50 hollow polyester/solid polyester) with 6 layers show the maximum sound absorption, 13 dB and 11.5 dB at 4000 Hz respectively, when compared with the other samples.
3.4.8. Sound reduction of the pleated sample C
Fig. 22 Sound reduction of the pleated sample C at 75 cm distance
From Figs. 20 and 21 it was seen that samples C and C1 showed greater sound reduction than the rest of the samples; of the two, C had the higher sound absorption. Hence sample C was pleated and tested at three different distances. It is concluded from Fig. 22 that the pleated sample C (50/50 hollow polyester/solid polyester, 82 GSM) non-woven fabric has a maximum sound absorption of 9.74 dB at 4000 Hz at 75 cm distance.
4. CONCLUSION
The non-woven fabric samples, made of 6-denier polyester and hollow polyester fibers in three different proportions at 82 gsm and 52 gsm each and produced by the spun-lace technique, were tested for thickness, air permeability, thermal conductivity and sound insulation. Samples with higher gsm showed greater thickness, which leads to lower air permeability and higher thermal conductivity, while the samples produced at 52 gsm showed a good air permeability rate but lower thermal conductivity.
The sound evaluation test was conducted for all the samples at three different distances and with up to 6 layers. As the fabric layers and the distance increased, the samples showed more sound absorption, i.e., a greater reduction in
the transfer of sound outside. Based on gsm, sample C, with equal proportions of polyester and hollow polyester fibers, recorded the highest rate of sound absorption, and the samples produced with 100% polyester fibers recorded the lowest. A direct relationship was also found between weight per square meter and sound reduction. Similarly, the pleated sample with the higher percentage of hollow fibers recorded higher sound absorption than 100% polyester fibers.
ACKNOWLEDGEMENT
The author is thankful to S. Gokila, D. Karthiga and P. Saranya, students of Fashion Technology, PSG College of Technology, for their support during this study.
REFERENCES
[1] Hoda S. Seddeq, "Factors Influencing Acoustic Performance of Sound Absorptive Materials", Australian Journal of Basic and Applied Sciences, 3(4): 2009, 4610-4617.
[2] L. L. Beranek, "Noise Reduction: Prepared for a Special Summer Program at MIT", McGraw-Hill, New York, 1960.
[3] Bruce Fader, "Industrial Noise Control", Interscience Publication, John Wiley and Sons, 1981.
[4] Lewis H. Bell, "Industrial Noise Control: Fundamentals and Applications", 2nd Edition, M. Dekker, New York, 1994.
[5] Claudio Braccesi and Andrea Bracciali, "Least Squares Estimation of Main Properties of Sound Absorbing Materials Through Acoustical Measurements", Applied Acoustics, 54(1): 1998, 59-70.
[6] Youn Eung Lee and Chang Whan Joo, "Sound Absorption Properties of Thermally Bonded Nonwovens Based on Composing Fibers and Production Parameters", Journal of Applied Polymer Science, 92: 2004, 2295-2302.
[7] T. Koikumi, N. Tsujiuchi and A. Adachi, "The Development of Sound Absorbing Materials Using Natural Bamboo Fibers", High Performance, WIT Press, 2002.
[8] Y. Takahashi, T. Otsuru and R. Tomiku, "In Situ Measurements of Surface Impedance and Absorption Coefficients of Porous Materials Using Two Microphones and Ambient Noise", Applied Acoustics, 66: 2005, 845-865.
[9] Michael Coates and Marek Kierzkowski, "Acoustic Textiles – Lighter, Thinner and More Absorbent", Technical Textiles International, 2002.
[10] Mingzhang Ren and Finn Jacobsen, "A Method of Measuring the Dynamic Flow Resistance and Reactance of Porous Materials", Applied Acoustics, 39(4): 1993, 265-276.
Smita B. Joshi¹, A. R. Jani²
¹EC Department, G. H. Patel College of Engineering and Technology, Gujarat Technological University (India)
²Department of Physics, Sardar Patel University (India)
The solar spectrum contains all wavelengths in the range from 250 nm to 2500 nm. The solar spectrum was studied using a spectral response meter: for different wavelengths, current and power were measured, and the graph of quantum efficiency vs. wavelength was plotted. The quantum efficiency obtained was in the range of 40% to 99%. The overall reduction of quantum efficiency (QE) is due to recombination, transmission and low diffusion length.
Keywords -- Band gap, diffusion length, quantum efficiency, solar spectrum, spectral response
The key measurement for designing and developing a solar cell from new materials, and for improving its performance and efficiency, is the "spectral response" of the cell. Spectral response is essentially the sensitivity of the cell to light of different wavelengths; it can be defined as the short circuit current per unit light power (A/W) [1, 2].
Fig. 1: Solar Radiation Spectrum
Solar radiation is the radiant energy emitted by the Sun; Fig. 1 shows the solar radiation spectrum. The radiation relevant to applications in the solar power industry lies within the ultraviolet (200 to 390 nm), visible (390 to 780 nm), near-infrared (780 to 4000 nm) and infrared (4000 to 100000 nm) ranges [3]. The electromagnetic radiation emitted by the sun covers a very large range of wavelengths, from radio waves through the infrared, visible and ultraviolet to X-rays and gamma rays. However, 99 percent of the energy of solar radiation is contained in the wavelength band from 0.15 to 4 µm, comprising the near-ultraviolet, visible and near-infrared
regions of the solar spectrum, with a maximum at about 0.5 µm.
The variations actually observed in association with solar phenomena such as sunspots, prominences and solar flares are mainly confined to the extreme-ultraviolet end of the solar spectrum and to radio waves. Their contribution to the total energy emitted is extremely small and can be neglected.
Different methods were described by Parker [4] for fluorescence emission and excitation spectra, and correction curves were presented. Solar cells based on a new conjugated donor polymer with C60 and C70 PCBM acceptors afford high quantum efficiencies over a broad spectral range extending into the near-infrared; these cells provide power conversion efficiencies of up to 4% under simulated AM1.5G solar light conditions [5]. Martijn et al. [5, 6] tested the spectral response of efficient methano[70]fullerene/MDMO-PPV bulk heterojunction photovoltaic cells. Neufeld [7] obtained external quantum efficiencies as high as 63% for III-nitride photovoltaic cells grown by metal-organic chemical vapor deposition on (0001) sapphire.
In this research work the authors used a spectral response meter to measure the short circuit current of the solar cell at selected wavelengths over a broad range; for a crystalline silicon cell, the relevant range is 350 nm to 1100 nm. The standard equipment uses a broadband source with filters (or a monochromator) to produce nearly monochromatic light, and a device that records the short circuit current gives the spectral response of the cell. The parameters which most strongly affect the efficiency of a solar cell are the open circuit voltage and the short circuit current. Highly efficient solar cells can be used to utilize solar energy even at night or under inadequate solar radiation: the power generated can be stored in a battery backup and used for cooking, drying or desalination of water. By using a sun tracker the efficiency can be increased further [8-11].
Fig. 2: Spectral Response Meter
Fig. 3: Standard Arrangement of Spectral Response Meter
Figure 3 shows the standard arrangement of a spectral response meter. At the top there is a broadband source, either a halogen or a xenon lamp, followed by a monochromator to select light of the desired wavelength. In the meter used here, a synthetic light source made up of 20 LEDs covering the wavelength range from 360 nm to 1060 nm is used to measure the spectral response of a crystalline silicon solar cell.
Equations (1) and (2) can be used to calculate the spectral response (SR) and external quantum efficiency (η):
Spectral Response (SR) = ISC(λ) / P(λ)   (A/W)                    (1)
External Quantum Efficiency (η) = { ISC(λ) / e } / { P(λ) / ħω }   (2)
Spectral response can be converted to EQE by equation (3):
EQE(λ) = 1.238 [SR(λ) / λ]   where λ is in microns                 (3)
Similarly, EQE can be converted to SR by equation (4):
SR(λ) = 0.808 λ [EQE(λ)]                                           (4)
Multiplying EQE(λ) by the solar flux at each λ over the interval λ to λ+Δλ, and summing over the wavelength range that excites electrons in the semiconductor, gives the short circuit current density. One additional characteristic of the solar cell can be derived from the spectral response: the internal quantum efficiency (IQE), provided the losses of light incident on the cell through reflection R(λ) and transmission T(λ) are determined. Equation (5) gives the relation between IQE and EQE:
IQE = EQE / [1 − R(λ) − T(λ)]                                      (5)
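The SR/EQE/IQE relations above can be sketched directly, using the paper's own conversion constants (1.238 ≈ hc/e in eV·µm, and 0.808 ≈ its reciprocal). Function names and the numeric example are illustrative.

```python
# Conversions between spectral response (A/W) and external quantum
# efficiency, with wavelength expressed in microns, plus the IQE correction.

def eqe_from_sr(sr_a_per_w, wavelength_um):
    # EQE = 1.238 * SR / lambda  (lambda in microns)
    return 1.238 * sr_a_per_w / wavelength_um

def sr_from_eqe(eqe, wavelength_um):
    # SR = 0.808 * lambda * EQE (inverse of the relation above)
    return 0.808 * wavelength_um * eqe

def iqe(eqe, reflectance, transmittance):
    # Internal QE corrects EQE for optical losses: IQE = EQE / (1 - R - T)
    return eqe / (1.0 - reflectance - transmittance)

# Round trip at 0.6 um: an SR of 0.4 A/W converts to EQE and back.
e = eqe_from_sr(0.4, 0.6)              # about 0.825
print(round(sr_from_eqe(e, 0.6), 2))   # -> 0.4
```

Since 0.808 is the rounded reciprocal of 1.238, the round trip recovers the starting value to within rounding error.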
The current is normalized with respect to the light power at each wavelength to obtain the spectral response of the cell. In this spectral response meter, the combination of broadband light source and monochromator is replaced by a synthetic source composed of light emitting diodes (LEDs) covering the wavelength range from 360 nm to 1060 nm. Twenty LEDs are mounted at the top of the spectral response meter on a holder which also accommodates the solar cell. Each LED emits light over a wavelength range of about 20-30 nm, and the emission peak wavelengths of different LEDs are separated by about 40-60 nm. An LED of the required wavelength can be selected with pushbuttons, thereby varying the wavelength. A 20 mA constant current source was used to drive the selected LED, and the power it emitted was measured with a calibrated reference cell. The photocurrent was measured by exciting the corresponding diode: a 10 Ω resistor connected across the cell developed a voltage which was then amplified, digitized by an analog to digital converter and stored.
TABLE 1: Equipment required for the experiment

Equipment                                                      | Description
---------------------------------------------------------------|-------------------------------------------
Solar cell                                                     | 4 × 4 cm² c-Si cell
Light source (20 LEDs)                                         | 360 nm to 1060 nm
Constant current source                                        | 20 mA through the selected LED
Measurement of power emitted from each LED                     | Power emitted by each LED at 20 mA
Controller for selecting LED and recording short circuit current | LED wavelength, power and response current display
Table 1 lists the equipment required for studying the spectral response of a solar cell, along with descriptions. A standard crystalline solar cell was illuminated by light emitting diodes driven at 20 mA, and a controller was used to select the LED and record the short circuit current generated by the solar cell.
TABLE 2: Quantum efficiency at different wavelengths

Wavelength (nm) | Power (mW) | Current (mA) | Energy of photon = 1.24/λ (×10⁻¹⁷) | Quantum efficiency
Table 2 shows the quantum efficiency at different wavelengths. Current and power were measured with the spectral response meter, and from these values and the wavelengths the photon energy and quantum efficiency were calculated.
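The quantum efficiency calculation behind Table 2 can be sketched as below. The 435 nm wavelength and 20.94 mW power come from the paper; the current value is hypothetical, since the table's data did not survive extraction.

```python
# Quantum efficiency from one measured current/power pair:
# QE = (electrons out per second) / (photons in per second)
#    = (I / e) / (P / E_photon),  with E_photon = h*c/lambda.

H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s
E = 1.602e-19   # elementary charge, C

def quantum_efficiency(current_a, power_w, wavelength_nm):
    e_photon = H * C / (wavelength_nm * 1e-9)   # photon energy, J
    electrons = current_a / E                    # electrons per second
    photons = power_w / e_photon                 # photons per second
    return electrons / photons

# Hypothetical 5 mA photocurrent at the paper's peak-power LED (435 nm, 20.94 mW):
qe = quantum_efficiency(current_a=5e-3, power_w=20.94e-3, wavelength_nm=435)
print(f"{qe:.0%}")  # -> 68%
```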
Figure 4 shows the power vs. wavelength plot. The minimum power was 2.28 mW at 365 nm, while the maximum was 20.94 mW at 435 nm.
Fig. 4: Power vs. wavelength
Fig. 5: Quantum efficiency vs. wavelength
Figure 5 shows quantum efficiency vs. wavelength. Crystalline Si starts to respond at about 1100 nm (its band gap) and continues to about 400 nm, where the response declines sharply. It is evident from Figure 5 that the spectral response is directly related to the quantum efficiency: the EQE rises from nearly zero at 1100 nm, remains nearly constant, and then falls below 400 nm.
Quantum efficiency (QE) is reduced at short wavelengths by surface recombination, and at long wavelengths by rear-surface recombination, reduced absorption and low diffusion length; the overall reduction of QE is thus due to recombination, transmission and low diffusion length, and QE is zero for wavelengths longer than the band gap. The quantum efficiency obtained here was in the range of 40% to 99%. If the cell responds poorly at some wavelength, there may be a problem in the structural design, the material properties or the fabrication process. The performance of a solar cell can therefore be improved by studying its spectral response characteristics and quantum efficiency. Such highly efficient solar cells can be used to generate power for applications such as cooking, drying and desalination.
The authors wish to thank the National Centre for Photovoltaic Research and Education (NCPRE), established at IIT Bombay by the Ministry of New and Renewable Energy (MNRE), for providing a solar photovoltaic kit to the college.
[1] C. S. Solanki, Solar Photovoltaic Technologies, Cambridge University Press India Private Ltd., 2013.
[2] S. Ashok and K. Pande, "Photovoltaic Measurements", Solar Cells, 14, 61 (1985).
[3] Ishan Purohit and Indira Karakoti, "The Status of Solar Radiation in India", Solar Quarterly, 4 (2012) 10.
[4] Parker, "Correction of Fluorescence Spectra and Measurement of Fluorescence Quantum Efficiency", Analyst, 1960, 85, 587-600, DOI: 10.1039/AN9608500587.
[5] Martijn et al., "Narrow-Band Gap Diketo-Pyrrolo-Pyrrole Polymer Solar Cells: The Effect of Processing on the Performance", first published online 26 May 2008, DOI: 10.1002/adma.200800456.
[6] Martijn M. et al., "Efficient Methano[70]fullerene/MDMO-PPV Bulk Heterojunction Photovoltaic Cells", first published online 24 July 2003, DOI: 10.1002/ange.200351647.
[7] Carl J. Neufeld, "High Quantum Efficiency InGaN/GaN Solar Cells with 2.95 eV Band Gap", Japanese Journal of Applied Physics, 52, 2013, 08JH05.
[8] Smita B. Joshi and A. R. Jani, "Photovoltaic and Thermal Hybridized Solar Cooker", ISRN
[9] Smita B. Joshi and A. R. Jani, "Certain Analysis of a Solar Cooker with Dual Axis Sun Tracker", DOI: 10.1109/NUiCONE.2013.6780150, ISBN 978-1-4799-0726-7, IEEE.
[10] Smita B. Joshi and A. R. Jani, "Development of Heat Storage System for Solar Cooker", International Journal of Latest Technology in Engineering, Management & Applied Science, Volume III, Issue IV, April 2014, ISSN 2278-2540.
[11] Smita B. Joshi, Hemant R. Thakkar and A. R. Jani, "A Novel Design Approach of Small Scale Conical Solar Dryer", International Journal of Latest Technology in Engineering, Management & Applied Science, Volume III, Issue IV, April 2014, ISSN 2278-2540.
Biographical Notes
Ms. Smita B. Joshi is presently pursuing a Ph.D. in the Department of Physics, Sardar Patel University, India.
Dr. A. R. Jani is working as Director of the UGC Academic Staff College, Sardar Patel University, India.
Milan Patel¹, Srushti Karvekar², Zeal Mehta³
¹,²,³ Institute of Technology, Nirma University (India)
Customer behavior models identify common behaviors among particular groups of customers in order to predict how a similar customer will behave under similar circumstances. This report addresses the problem of customer relationship management (CRM) and how data mining tools are used to support decision making. We describe the methods used to predict customer behavior, such as collection and preparation of data, segmentation and profiling modeling. The report also discusses web mining, which can be treated as a separate topic owing to the current popularity of e-commerce. Data mining technologies extract hidden information and knowledge from large volumes of data stored in databases or data warehouses, thereby supporting the decision making process of firms. Data mining helps marketing professionals improve their understanding of customer behavior; in turn, this better understanding allows them to target marketing campaigns more accurately and to align campaigns more closely with the needs, wants and attitudes of customers and prospects.
Keywords: Customers, CRM, Data Mining, E-Commerce, Profiling
Traditional methods of conducting business and industrial operations have undergone a sea change due to the globalization of business, the extensive use of the internet and telecommunication networks, and the use of information technology. The economics of customer relationships are changing in fundamental ways, and companies are facing the need to implement new solutions and strategies that address these changes. Firms today are concerned with increasing customer value through analysis of the customer lifecycle; they study the customer's psychological mindset and ask whether there is a technical framework by which buying behavior can be analyzed. Decision makers are required to react quickly to mission-critical needs in rapidly changing, volatile, and competitive markets. The tools and technologies of data warehousing, data mining, and other customer relationship management (CRM) techniques afford new opportunities for businesses to act on the concepts of relationship marketing. The old model of "design-build-sell" (a product-oriented view) is being replaced by "sell-build-redesign" (a customer-oriented view). Data mining has quickly emerged as a highly desirable tool for using current reporting capabilities to uncover and understand hidden patterns in vast databases. These patterns are then used in models that predict individual behavior with high accuracy. As a result, data mining supports decision making in Customer Relationship Management (CRM). The advent of the Internet has undoubtedly contributed to the shift of marketing focus. As on-line information becomes more accessible and abundant, it keeps consumers informed. Collecting customer demographics and behavior data helps in
targeting customers easily. This kind of targeting also helps when devising an effective promotion plan to meet tough competition or when identifying prospective customers for new products. How customer relationship management and data mining help in decision making is discussed in the following sections. Section 1 contains the introduction of the paper. Information about data mining is given in Section 2. Customer relationship management is discussed in Section 3. How data mining and CRM relate to each other is discussed in Section 4. Section 5 covers various data mining techniques. Web mining, which is a part of data mining, is discussed in Section 6. Finally, the conclusion of this paper is given in Section 7.
Data mining is defined as a sophisticated data search capability that uses statistical algorithms to discover patterns and correlations in data. Data mining discovers patterns and relationships hidden in data, and is actually part of a larger process called "knowledge discovery", which describes the steps that must be taken to ensure meaningful results. In simple terms, data mining is another way to find meaning in data. Data mining does not find patterns and knowledge that can be trusted automatically without verification: it helps business analysts generate hypotheses, but it does not validate them. For example, an automobile manufacturer may be surprised to learn that men with children tend to buy sports cars more often than men without children; hence, this pattern is valuable. Data mining is primarily used today by organizations with a strong consumer focus: retail, financial, communication, and marketing organizations. Data mining techniques are the result of a long research and product development process. The origin of data mining lies with the first storage of data on computers; it continued with improvements in data access, and today's technology allows users to navigate through data in real time.
3.1 Definition
Customer relationship management (CRM) is a process that manages the interactions between a company and its customers. The primary users of CRM software applications aim to automate the process of interacting with customers. CRM comprises a set of processes and enabling systems supporting a business strategy to build long-term, profitable relationships with specific customers. CRM is defined by four elements of a simple framework: Know, Target, Sell, and Service. CRM requires the firm to know and understand its markets and customers. In selling, firms use campaign management to increase the marketing department's effectiveness. Finally, CRM seeks to retain its customers through services such as call centers and help desks.
CRM is essentially a two-stage concept. The task of the first stage is to master the basics of building customer focus. This means moving from a product orientation to a customer orientation and defining market strategy from the outside-in rather than the inside-out. Companies in the second stage push their development of customer orientation by integrating CRM across the entire customer experience chain, by leveraging technology to achieve real-time customer management, and by constantly innovating their value proposition to customers.
3.2 Components of Customer Relationship Management
Customer relationship management is a combination of several components. Before the process can begin, the
firm must first possess customer information. There are several sources of internal data:
 summary tables that describe customers (e.g., billing records)
 customer surveys of a subset of customers who answer detailed questions
 behavioral data contained in transaction systems (web logs, credit card records, etc.)
In addition, most firms have massive databases that contain marketing, HR, and financial information.
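As a small illustration of such internal data, raw transaction records can be rolled up into a per-customer summary table. The customer IDs and amounts below are hypothetical; this is only a sketch of the aggregation step.

```python
from collections import defaultdict

# Hypothetical transaction records: (customer_id, amount) pairs,
# e.g. drawn from a billing or point-of-sale system.
transactions = [
    ("C1", 120.0), ("C2", 35.5), ("C1", 80.0),
    ("C3", 210.0), ("C2", 64.5), ("C1", 15.0),
]

def summarize(records):
    """Aggregate raw transactions into a per-customer summary table."""
    summary = defaultdict(lambda: {"visits": 0, "total": 0.0})
    for customer, amount in records:
        summary[customer]["visits"] += 1
        summary[customer]["total"] += amount
    return dict(summary)

table = summarize(transactions)
print(table["C1"])  # {'visits': 3, 'total': 215.0}
```

A real summary table would carry many more attributes (recency, product mix, channel), but the shape is the same: one row per customer, derived from many transaction rows.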
Next, the CRM system must analyze the data using statistical tools, OLAP, and data mining. The firm should employ data mining analysts who extract information with these tools and who also make sure the firm does not misuse the data retrieved; having the right people, properly trained, is therefore important. The end result is a segmentation of the market, from which individual decisions are made regarding which segments are attractive. The last component of a CRM system is campaign execution and tracking: the processes and systems that allow the user to develop and deliver targeted messages in a test-and-learn environment. There are software programs that help marketing departments handle the feedback procedure. Campaign management software manages and monitors customer communications across multiple touch points, such as direct mail, telemarketing, customer service, point-of-sale, e-mail, and the Web.
The application of data mining tools in CRM is an emerging trend in the global economy. Analyzing and understanding customer behavior and characteristics serves as the foundation for developing a competitive CRM strategy that helps to acquire and retain potential customers and maximize customer value. Appropriate data mining tools, which are good at extracting and identifying useful information and knowledge from enormous customer databases, serve as the best supporting tools for CRM. As such, the application of data mining techniques in CRM is worth pursuing in a customer-centric economy.
For a customer-centric economy, we need a framework for understanding customer behavior.
In general, there are four key stages in the customer lifecycle:
1. Prospects—people who are not yet customers but are in the target market
2. Responders—prospects who show an interest in a product or service
3. Active Customers—people who are currently using the product or service
4. Former Customers—may be “bad” customers who did not pay their bills or who incurred high costs; those
who are not appropriate customers because they are no longer part of the target market; or those who may have
shifted their purchases to competing products.
The customer lifecycle provides a good framework for applying data mining to CRM.
On the “input” side of data mining, the customer lifecycle tells what information is available. On the “output”
side, the customer lifecycle tells what is likely to be interesting.
Data mining can predict the profitability of prospects as they become active customers, how long they will remain active, and how likely they are to leave. It helps the organization identify patterns in its customer data that are predictive. For example, a company can concentrate its efforts on prospects that are predicted to have a high likelihood of responding to an offer rather than contacting random prospects. Data clustering can also be used to automatically discover the segments or groups within a customer data set. Rather than one model to predict which customers will churn, a business could build a separate model for each region and customer type. Then, instead of sending an offer to all people who are likely to churn, it may only want to send offers to customers who are likely to take the offer. Finally, it may also want to determine which customers are going to be profitable over a window of time and only send offers to those. Businesses employing data mining may see a return on investment, but they also recognize that the number of predictive models can quickly become very large.
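The segment-then-score idea above can be sketched very simply: estimate a churn rate per regional segment from historical observations, then flag customers in high-churn segments. The regions, records, and 0.5 threshold are all hypothetical.

```python
from collections import defaultdict

# Hypothetical history: (region, churned?) observations.
history = [
    ("north", True), ("north", True), ("north", False),
    ("south", False), ("south", False), ("south", True), ("south", False),
]

def churn_rates(observations):
    """Estimate the churn rate of each regional segment."""
    counts = defaultdict(lambda: [0, 0])  # region -> [churned, total]
    for region, churned in observations:
        counts[region][0] += int(churned)
        counts[region][1] += 1
    return {region: c / t for region, (c, t) in counts.items()}

def likely_to_churn(region, rates, threshold=0.5):
    """Flag a customer whose segment's churn rate exceeds the threshold."""
    return rates.get(region, 0.0) > threshold

rates = churn_rates(history)
print(likely_to_churn("north", rates), likely_to_churn("south", rates))  # True False
```

A production system would of course fit a proper predictive model per segment (one per region and customer type, as the text suggests); the point here is only the shape of the segment-then-score workflow.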
CRM consists of four dimensions:
(1) Customer Identification
(2) Customer Attraction
(3) Customer Retention
(4) Customer Development
CUSTOMER IDENTIFICATION: Elements of customer identification include target customer analysis, which seeks the profitable segments of customers through analysis of customers' underlying characteristics, and customer segmentation, which involves the subdivision of an entire customer base into smaller customer groups or segments.
CUSTOMER ATTRACTION: After identifying the segments of potential customers, organizations can direct effort and resources into attracting the target customer segments. This can be done by direct marketing that prompts customers to place orders through various channels (e.g., direct mail or coupon distribution).
CUSTOMER RETENTION: The central concern of CRM is customer satisfaction, which refers to the comparison of a customer's expectations with his or her perception of being satisfied; it is the essential condition for retaining customers.
CUSTOMER DEVELOPMENT: Customer development includes customer lifetime value analysis, up/cross selling, and market basket analysis. Customer lifetime value analysis means the prediction of the total net income a company can expect from a customer. Up/cross selling refers to promotion activities that aim at augmenting the number of associated or closely related services that a customer uses within a firm. Market basket analysis aims at maximizing the customer transaction intensity and value by revealing regularities in the purchase behavior of customers.
These four dimensions can be seen as a closed cycle of a customer management system. They share the common goal of creating a deeper understanding of customers to maximize customer value to the organization in the long term. Data mining techniques can help to accomplish such a goal by extracting hidden customer characteristics and behavior from large databases.
The generative aspect of data mining consists of the building of a model from data. Each data mining technique
can perform one or more of the following types of data modelling:
(1) Association;
(2) Classification;
(3) Clustering;
(4) Forecasting;
(5) Regression;
(6) Sequence discovery;
(7) Visualization.
The choice of data mining technique should be based on the data characteristics and business requirements.
Association: Association aims at establishing relationships between items which exist together in a given database. It is intended to identify strong rules discovered in databases using different measures of interestingness.
Classification: Classification is one of the most common learning models in data mining. It aims at building a model to predict future customer behavior by classifying database records into a number of predefined classes based on certain criteria. Creating models to predict class membership represents the largest part of the problems to which data mining is applied today.
Clustering: Clustering is the task of grouping a set of objects in such a way that objects in the same group (cluster) are more similar to each other than to those in other clusters.
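As a toy illustration, here is one-dimensional k-means over hypothetical customer spend values, with two clusters initialised at the extremes of the range (the data and the two-cluster assumption are for illustration only):

```python
def kmeans_1d(values, iters=10):
    """Naive 1-D k-means with two clusters; assumes two distinct values exist."""
    centroids = [min(values), max(values)]  # initialise at the extremes
    for _ in range(iters):
        # Assignment step: each value joins its nearest centroid's cluster.
        clusters = [[] for _ in centroids]
        for v in values:
            nearest = min(range(len(centroids)),
                          key=lambda i: abs(v - centroids[i]))
            clusters[nearest].append(v)
        # Update step: move each centroid to the mean of its cluster.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids

spend = [10, 12, 11, 95, 102, 99]
centroids = kmeans_1d(spend)
print(centroids)  # two spend clusters, centroids around 11 and 98.7
```

Real customer clustering would work in many dimensions and use a library implementation, but the alternate assign-and-update loop is the same.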
Forecasting: Forecasting estimates a future value based on a record's patterns. It deals with continuously valued outcomes and relates to modelling the logical relationships of the model at some time in the future.
Regression: Regression is a statistical estimation technique used to map each data object to a real-valued prediction variable. Regression functions determine the relationship between the dependent variable and one or more independent variables. The regression process succeeds when a significant relationship between the independent and dependent variables is established.
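For a single independent variable, this reduces to ordinary least squares. A short sketch with hypothetical data (advertising spend vs. sales is an invented example):

```python
def linear_fit(xs, ys):
    """Ordinary least-squares fit of y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # Slope: covariance of x and y divided by variance of x.
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx  # intercept
    return a, b

# Hypothetical data: advertising spend vs. resulting sales.
a, b = linear_fit([1, 2, 3, 4], [3, 5, 7, 9])
print(a, b)  # 1.0 2.0
```

With the fitted coefficients, a prediction for a new x is simply a + b*x; testing whether b differs significantly from zero is the "significant relationship" check the text refers to.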
Visualization: Visualization refers to the presentation of data so that users can view complex patterns. It is used in conjunction with other data mining models to provide a clearer understanding of the discovered patterns or relationships.
Segmentation: Segmentation is the process of identifying finite sets of data clusters. For example, customers can be clustered using criteria such as buying behavior, value of purchase, preference for high-value items, or preference for discount/bargain purchases.
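A deliberately simple sketch of the value-of-purchase criterion: splitting customers into a high-value segment and a discount/bargain segment by a hypothetical spend threshold (real segmentation would cluster over several criteria at once):

```python
def segment_customers(totals, cut=100.0):
    """Assign each customer to a segment by total spend against a threshold."""
    segments = {"high_value": [], "bargain": []}
    for customer, total in totals.items():
        key = "high_value" if total >= cut else "bargain"
        segments[key].append(customer)
    return segments

# Hypothetical per-customer spend totals.
totals = {"C1": 215.0, "C2": 48.0, "C3": 130.0}
segments = segment_customers(totals)
print(segments)  # {'high_value': ['C1', 'C3'], 'bargain': ['C2']}
```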
Link analysis: Link analysis is the process of finding the links between two sets of variables. The link relationship may be of the following types:
1. Lag and lead: the sale of umbrellas lags the rainfall.
2. Moving together: bread and butter.
3. Configured link: drinks, chips, and soda.
The Internet plays a major role in today's business and offers huge business opportunities. This leads us to Web mining: the process of discovering information from the WWW and analysing that information for business purposes. Web mining consists of two parts: Web Content mining and Web Usage mining. Web Content mining includes discovering and organizing Web-based information. Web Usage mining includes analysing the behavioural patterns from data gathered about internet users, so it is more closely related to customer profiling. For this reason, when we refer to web mining here, we mean web usage mining.
6.1 Internet Marketing
For e-commerce traders, Web mining techniques are needed to maintain customer relationship management strategies. These techniques discover relationships, patterns, and rules within Web data for three marketing actions:
1) Finding association rules for customer attraction
2) Finding patterns for customer retention
3) Finding classification rules and data clusters for cross-selling
6.1.1 Customer attraction
The two important parts of attraction are the selection of new customers and the acquisition of the selected customers. One strategy for this task is to find common characteristics in the information and behavior of already existing profitable and non-profitable visitors. Customers are then given labels like 'no customer', 'one-time visitor', or 'regular visitor' on the basis of their visiting behavior. Depending on the label, a dynamically created web page is displayed whose contents depend on found associations between offered products and browser information.
6.1.2 Customer retention
Customer retention is the task of trying to keep the online buyer as loyal as possible; this is a difficult task in e-commerce. Here a web page is displayed with offers chosen by considering associations across time, which are known as sequential patterns.
6.1.3 Cross sales
The purpose of cross-sales is to horizontally and/or vertically extend selling activities to a current customer base. For discovering potential customers, characteristic rules of current cross-sellers are discovered by the application of attribute-oriented induction. In attribute-oriented induction, a simple web page is replaced by its corresponding general page based on the page hierarchy. The entire set of these rules can then be used as the model to be applied to incoming requests from current customers.
6.2. Web Data Collection
The Internet provides a variety of tools to gather information about Internet users and actual customers. Data that can be collected include:
 HTTP request headers, which contain information about the user's OS, browser, and browser version.
 Cookies, which contain information about the Internet user. A cookie is a tiny file recording what a user does on a web site; it is thus the most efficient way to identify returning Internet users.
 Queries to a web server, e.g. online searches for products.
 The number of hits, i.e. how often web site elements are requested from the server.
 Page views, i.e. the number of requests for a whole web page.
6.3. Web Data Processing
Before any important discovery takes place, the collected Web data goes through a pre-processing phase that filters out irrelevant or redundant entries. The data is then organized appropriately for the application (association rules and sequential patterns require the input data to be in different forms).
6.4. Discovering Association Rules
In Web mining, discovering association rules turns into finding the correlations among accesses to various files available on the server by a given client. Since transaction-based databases usually contain very large amounts of data, current association rule discovery techniques try to prune the search space according to the support for the items under consideration. In Web mining, the hierarchical organization of the files can also be used for pruning.
6.5. Discovering Sequential Patterns and Time Sequences
Generally, transaction-based databases collect data over some period of time, and the time-stamp for every transaction is explicitly available. Given such a database of transactions, the problem of discovering sequential patterns is to find inter-transaction patterns such that the presence of a set of items is followed by another item in the time-stamp-ordered transaction set. Usually, analyses are made using data taken within a certain time gap. In Web server transaction logs, every visit by a client is stored over a period of time. The time-stamp attached to a transaction in this case is a time interval, determined and attached to the transaction during the data cleansing process. The techniques used in sequential pattern discovery are similar to those used in association rule discovery or classification, except that the discovered rules have to be further classified based on the time-stamp information. For better decision making, non-typical patterns have to be discarded. To do so, less frequent sequences are removed based on a minimum support threshold: a sequence is said to be frequent if its support is higher than the threshold. In order to find frequent sequences, one needs to find all data sequences satisfying the minimum support.
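The minimum-support filter described above can be sketched as follows. The sessions and candidate patterns are hypothetical, and a pattern counts as present when its pages occur in the session in order (gaps allowed):

```python
def contains_subsequence(session, pattern):
    """True if pattern occurs in session in order (not necessarily adjacent)."""
    it = iter(session)
    return all(page in it for page in pattern)

def frequent_sequences(sessions, candidates, min_support=0.5):
    """Keep candidate sequences whose support meets the threshold."""
    n = len(sessions)
    result = {}
    for pattern in candidates:
        support = sum(contains_subsequence(s, pattern) for s in sessions) / n
        if support >= min_support:
            result[pattern] = support
    return result

# Hypothetical time-ordered page visits, one session per client.
sessions = [
    ["home", "catalog", "cart", "checkout"],
    ["home", "search", "catalog", "cart"],
    ["home", "about"],
]
found = frequent_sequences(sessions, [("catalog", "cart"), ("cart", "checkout")])
print(found)  # only ('catalog', 'cart') survives the 0.5 support threshold
```

Real sequential-pattern algorithms generate the candidate sequences themselves rather than taking them as input; this sketch shows only the support-counting and filtering step.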
6.6. Classification and Clustering
After the discovery of hidden common patterns among data items, classification is used to develop a profile for items belonging to a particular group according to their common attributes. This profile can then be used to classify new data items that are added to the database. In Web mining, a profile is built for clients who access particular server files, based on demographic information about those clients. In some cases, valuable information about the customers can be collected automatically by the server from the client's browser, including information available on the client side in cookie files, history files, etc. For clustering, other algorithms are used.
The main contribution of this paper lies in focusing on important issues in improving decision making to optimize customer relationships in highly customer-based businesses. A data mining system is useful for a business house to find out the association of customers with different products and how customers shift from one brand to another to satisfy their needs. It is being used increasingly in business applications for understanding and then predicting valuable data, such as customer buying actions and buying tendencies, customer profiles, industry analysis, etc. The application of data mining techniques in CRM is an emerging trend in the industry. This paper aims to give a research summary on the application of data mining in the CRM domain and the techniques which are most often used. Data mining represents the link between the data stored over many years through various interactions with customers in diverse situations and the knowledge necessary to be successful in relationship marketing. As customers and businesses interact more frequently, businesses will have to leverage CRM and related technologies to capture and analyze massive amounts of customer information. Businesses that use customer data and personal information resources effectively will have an advantage in becoming successful. In this paper, we have shown that data mining can be integrated into customer relationship management and can enhance the CRM process. We conclude that CRM together with data mining is very useful for decision making.
[1] M.-S. Chen, J. Han and P. S. Yu, "Data Mining: An Overview from Database Perspective", IEEE Trans. on Knowledge and Data Engineering, vol. 8, pp. 866-883, 1996.
[2] H. P. Crowder, J. Dinkelacker and M. Hsu, "Predictive Customer Relationship Management: Gaining Insights About Customers in the Electronic Economy", DM Review, February 2001.
[3] B. Mobasher, N. Jain, E. Han and J. Srivastava, "Web mining: Pattern discovery from world wide web transactions", Technical Report TR-96050, Dept. of Computer Science, University of Minnesota, Minneapolis, 1996.
[4] Masseglia, P. Poncelet and M. Teisseire, "Web Usage Mining: How to Efficiently Manage New Transactions and New Customers", research report of LIRMM, Feb. 2000; short version in Proceedings of the 4th European Conference on Principles of Data Mining and Knowledge Discovery (PKDD'00), Lyon, France, September 2000.
[5] P. Holmes, "Customer Profiling and Modeling", DMG Direct, Direct Marketing Association,
S. Padmaja, Dr. Ananthi Sheshasaayee
Research Scholar, Research and Development Centre, Bharathiar University, Coimbatore (India)
Associate Professor and Head, Department of Computer Science,
Quaid-e-Millath Government College for Women, Chennai (India)
Web usage mining performs mining on Web usage data, or Web logs. A Web log is a listing of page reference data; it is sometimes referred to as clickstream data because each entry corresponds to a mouse click. These logs can be examined from either a client perspective or a server perspective. In order to provide better service and enhance the quality of websites, it has become very important for website owners to better understand their customers. This is done by mining web access log files to extract interesting patterns. Web usage mining deals with understanding user behavior, which can be analyzed with the help of the Web access logs generated on the server while the user is accessing the website. A Web access log contains various entries like the name of the user, his IP address, the number of bytes transferred, the timestamp, the URL, etc. Different types of log analyzer tools exist which help in analyzing things like users' navigational patterns and the parts of the website the users are most interested in. The present paper analyses the use of one such log analyzer tool, called WebLog Expert, for ascertaining the behavior of users with the help of sample data.
Keywords: Log Files, User Behavior, Pattern Recognition, Log Analyzer Tools
Prediction is the data mining technique that is used to predict missing or unavailable data. In a way, classification, which is used to predict class labels, can be treated as prediction when numerical data are predicted. Prediction differs from classification in that it is used only for numerical data, as opposed to classification, which predicts class labels. The goal of data mining is to produce new knowledge that the user can act upon. It does this by building a model of the real world based on data collected from a variety of sources, which may include corporate transactions, customer histories and demographic information, process control data, and relevant external databases such as credit bureau information or weather data. The result of the model building is a description of patterns and relationships in the data that can be confidently used for prediction.
Pattern matching or pattern recognition finds occurrences of a predefined pattern in data. Pattern matching is
used in many diverse applications. A text editor uses pattern matching to find occurrences of a string in the text
being edited. Information retrieval and Web search engines may use pattern matching to find documents
containing a predefined pattern (perhaps a keyword). Time series analysis examines the patterns of behaviour in
data obtained from two different time series to determine similarity. Pattern matching can be viewed as a type of
classification where the predefined patterns are the classes under consideration. The data are then placed in the
correct class based on a similarity between the data and the classes.
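Viewed as classification, the predefined patterns act as the classes; a minimal regular-expression sketch (the category labels and keywords below are hypothetical):

```python
import re

# Predefined patterns act as the "classes"; a document is assigned to
# every class whose pattern it matches.
patterns = {
    "sports": re.compile(r"\b(football|cricket)\b"),
    "finance": re.compile(r"\b(stock|market)\b"),
}

def classify(document):
    """Return all class labels whose pattern occurs in the document."""
    return [label for label, rx in patterns.items() if rx.search(document)]

print(classify("the stock market rallied after the cricket match"))
# ['sports', 'finance']
```

A search engine or text editor works the same way at its core: the match is by exact or approximate similarity between the data and the predefined pattern.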
A log file is produced by a Web server to record activities on the server. It usually has the following features:
 The log file is a text file, and its records are identical in format.
 Each record in the log file represents a single HTTP request.
 A log file record contains important information about a request: the client-side host name or IP address, the date and time of the request, the requested file name, the HTTP response status and size, the referring URL, and the browser information.
 A browser may fire multiple HTTP requests to the Web server to display a single Web page. This is because a Web page not only needs the main HTML document; it may also need additional files, like images and JavaScript files. The main HTML document and the additional files all require HTTP requests.
 Each Web server has its own log file format.
 If your Web site is hosted by an ISP (Internet Service Provider), they may not keep the log files for you, because log files can be very large if the site is busy. Instead, they only give you statistics reports generated from the log files.
Nearly all of the major Web servers use a common format for their log files. These log files contain information such as the IP address of the remote host, the document that was requested, and a timestamp. The syntax for each line of a log file is:
site logName fullName [date:time GMToffset] "req file proto" status length
Because that line of syntax is fairly abstract, here is a line from a real log file:
- - [03/Jul/1996:06:56:12 -0800] "GET /PowerBuilder/Compny3.htm HTTP/1.0" 200 5593
Even though the line is split in two here, remember that inside the log file it really is only one line. Each of the eleven items listed in the above syntax and example is described in the following list.
 Site: either an IP address or the symbolic name of the site making the HTTP request. In the example line the remote host is
 LogName: the login name of the user who owns the account that is making the HTTP request. Most remote sites don't give out this information for security reasons; if this field is disabled by the host, you see a dash (-) instead of the login name.
 FullName: the full name of the user who owns the account that is making the HTTP request. Most remote sites don't give out this information for security reasons; if this field is disabled by the host, you see a dash (-) instead of the full name. If your server requires a user id in order to fulfil an HTTP request, the user id will be placed in this field.
 Date: the date of the HTTP request. In the example line the date is 03/Jul/1996.
 Time: the time of the HTTP request, presented in 24-hour format. In the example line the time is 06:56:12.
 GMToffset: the signed offset from Greenwich Mean Time (GMT is the international time reference). In the example line the offset is -0800, eight hours earlier than GMT.
 Req: the HTTP command. For WWW page requests, this field will always start with the GET command. In the example line the request is GET.
 File: the path and file name requested. In the example line the file is /PowerBuilder/Compny3.htm. There are three types of path/filename combinations.
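These fields can be pulled out of a log line with a short script. This is a sketch of a Common Log Format-style parser; the host name example.com is hypothetical (a real line begins with the requesting site):

```python
import re

# Common-log-format style pattern: site, logName, fullName, timestamp,
# request, status, and length.
LOG_RE = re.compile(
    r'(?P<site>\S+) (?P<logname>\S+) (?P<fullname>\S+) '
    r'\[(?P<timestamp>[^\]]+)\] "(?P<request>[^"]+)" '
    r'(?P<status>\d{3}) (?P<length>\d+)'
)

# Hypothetical host prepended to the example line from the text.
line = ('example.com - - [03/Jul/1996:06:56:12 -0800] '
        '"GET /PowerBuilder/Compny3.htm HTTP/1.0" 200 5593')

m = LOG_RE.match(line)
print(m.group("status"), m.group("length"))   # 200 5593
print(m.group("request").split()[1])          # /PowerBuilder/Compny3.htm
```

Log analyzer tools such as those in the next section apply essentially this kind of parsing to millions of lines, then aggregate the extracted fields into reports.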
There are a lot of Web log analysis tools out there, and many are free. This is a list of some of the best free log
analysis and Web analytics tools available.
4.1 WebLog Expert
WebLog Expert is a fast and powerful access log analyzer. It will give you information about your site's visitors: activity statistics, accessed files, paths through the site, information about referring pages, search engines, browsers, operating systems, and more. The program produces easy-to-read reports that include both text information (tables) and charts. Viewing a WebLog Expert sample report gives a general idea of the variety of information about your site's usage it can provide. The log analyzer can create reports in HTML, PDF and CSV formats. It also includes a web server that supports dynamic HTML reports.
WebLog Expert can analyze logs of Apache and IIS web servers. It can even read GZ- and ZIP-compressed log files, so you won't need to unpack them manually. The program features an intuitive interface; built-in wizards will help you quickly and easily create a profile for your site and analyze it.
4.2 Deep Log Analyzer
Deep Log Analyzer is the best free Web analytics software I've found. It is a local log analysis tool that works
on your site logs without requiring any code or bugs on your site. It's not as fancy as Google Analytics, but it
does offer a few extra features, and if you need more, there is a paid version you can upgrade to. It is an
advanced and affordable web analytics solution for small and medium-size websites: it analyzes website
visitors' behavior and produces complete website usage statistics in several easy steps, showing exactly where
your customers came from, how they moved through your site and where they left it. This comprehensive
knowledge will help you to attract more visitors to your site and convert them into satisfied customers.
4.3 Google Analytics
Google Analytics is one of the best free Web log analysis tools available. There are a few reports that are not
included, but the graphs and well-defined reports make it very nice. Some people don't like giving a large
corporation like Google such direct access to their site metrics. And other people don't like needing a bug placed
on the Web pages in order to track them.
4.4 AWStats
AWStats is a full-featured tool that generates advanced web, streaming, FTP or mail server statistics
graphically. This log analyzer works as a CGI or from the command line and shows you all the information
your log contains in a few graphical web pages. It uses a partial information file to be able to process large log
files, often and quickly. It can analyze log files from all major server tools like Apache log files (NCSA
combined/XLF/ELF log format or common/CLF log format), WebStar, IIS (W3C log format) and a lot of other
web, proxy, WAP, streaming, mail and some FTP servers. AWStats is free software distributed under the GNU
General Public License. You can have a look at this license chart to know what you can and can't do. As
AWStats works from the command line but also as a CGI, it can work with all web hosting providers which
allow Perl, CGI and log access.
4.5 W3Perl
W3Perl is a CGI based free Web analytics tool. It offers the ability to use a page bug to track page data without
looking at log files or the ability to read the log files and report across them.
4.6 Power Phlogger
Power Phlogger is a free Web analytics tool that you can offer to other users on your site. It uses PHP to track
information, but it can be slow. Power Phlogger is a complete counter hosting tool: it lets you offer a counter
service to others from your site. It's built on PHP and requires a MySQL server. Your members don't need any
PHP support on their webserver; they just pass the required data through JavaScript to PPhlogger, which is
hosted on the server.
4.7 BBClone
BBClone is a PHP based Web analytics tool or Web counter for your Web page. It provides information about
the last visitors to your site tracking things like: IP address, OS, browser, referring URL and more.
4.8 Visitors
Visitors is a free command-line log analysis tool. It can generate both HTML and text reports by simply running
the tool over your log file. One interesting feature is the real-time streaming data you can set up. Visitors is a
very fast web log analyzer for Linux, Windows, and other Unix-like operating systems. It takes a web server log
file as input and outputs statistics in the form of different reports. Its design principles are very different from
other software of the same type.
4.9 Webalizer
Webalizer is a nice little free Web log analysis tool that is easily ported to many different systems. It comes with
several different languages for reports and a bunch of stats to report on. The Webalizer is a fast, free web server
log file analysis program. It produces highly detailed, easily configurable usage reports in HTML format, for
viewing with a standard web browser.
4.10 Analog
Analog is a widely used free Web log analysis tool. It works on any Web server and is fairly easy to install and
run if you understand how your server is administered. It has a lot of good reports and, with another tool, can be
made even prettier.
4.11 RealTracker
RealTracker uses a code that is placed on your Web pages to track your pages, similar to Google Analytics.
It offers a bunch of different reports, but the real benefit of this tool is that it's easy to add to your pages and
easy to read the results. And if you need more features, you can upgrade to the professional or enterprise versions.
4.12 Webtrax
Webtrax is a free Web analytics tool that is very customizable, but not as well programmed as it could be. The
author admits that there are some issues, and it doesn't appear to be under active support at this time. But it does
support a number of reports and provides good information from your log files. Webtrax is a log file analysis
program for NCSA web server logs. It works best on logs that include the "referrer" and "browser" info, such as
the "NCSA Combined Format." Webtrax reads a web server's log file and produces up to twenty different
graphical and tabular reports of hits and the activities of individual site visitors, including what pages they
viewed and for how long. Webtrax's output is extremely customizable.
4.13 Dailystats
Dailystats is a free Web analysis program that is not intended to be your complete analytics package. Instead,
Dailystats wants to give you a small sub-set of statistics that are useful for reviewing on a regular basis - such as
daily. It provides information on entry pages, pageviews of each page, and referrer log analysis.
4.14 Relax
Relax is a free Web analytics tool that tells you just who is referring people to your site. It looks at search
engines and search key words as well as specific referral URLs to give you precise information on who is
sending customers to your site. It's not a complete analytics package, but it works well for referral information.
4.15 Piwik
Piwik is an open source alternative to Google Analytics. It is very flashy with an Ajax or Web 2.0 feel to it. One
of the nicest features is that you can build your own widgets to track whatever data you want to track. It runs on
your PHP Web server, and requires that you have PHP PDO installed already. But if you have that it's fairly
easy to install and get up and running.
4.16 StatCounter
StatCounter is a Web analytics tool that uses a small script that you place on each page. It can also work as a
counter and display the count right on your page. The free version only counts the last 100 visitors, then it resets
and starts the count over again. But within that limitation, it provides a lot of stats and reports.
4.17 SiteMeter
The free version of SiteMeter offers a lot of good stats and reports for your site. It only provides information on
the first 100 visitors, and then after that it resets and starts over. But if you need more information than that, you
can upgrade to the paid version of SiteMeter. Like other non-hosted analytics tools, SiteMeter works by
inserting a script on every page of your site. This gives you real-time traffic but some people are concerned
about privacy implications.
4.18 MyBlogLog
MyBlogLog is a tool with many different features. The analytics are not very robust, but they are not intended to
be. In fact, the goal of the MyBlogLog analytics is to provide you with information about where your visitors
are going when they leave your site. This can help you to improve your site so they don't leave as quickly. I
wouldn't recommend MyBlogLog as your only analytics tool, but it does a good job on the stats it provides.
4.19 WebLog Expert Lite
WebLog Expert Lite is a free Apache and IIS log analyzer, a light-weight version of WebLog Expert. It allows
you to quickly and easily analyze your log files and get information about your site's visitors: activity statistics,
what files visitors accessed, information about referring pages, search engines, browsers, operating systems,
and more.
Traditionally there are four types of server logs: Transfer Log, Agent Log, Error Log and Referrer Log [6]. The Transfer and
the Agent Log are said to be standard whereas the error and referrer log are considered optional as they may not be turned
on. Every log entry records the traversal from one page to another, storing user IP number and all the related information .
There are three types of log files:
5.1 Shared Log Files
For SQL Server databases, the defaults are session log files created in tempdb. Each user owns two tables:
SDE_logfiles and SDE_logfile_data. Shared log files are created the first time a user's selection exceeds the
required threshold (100 features in ArcGIS). If you use shared log files, remember:
Users require CREATE TABLE permission.
If permissions are insufficient, selections cannot be made.
Log files are not checked on connection.
5.2. Session log files
Session log files are dedicated to a single connection, not a database user. You can arrange for a pool of session
log files to be shared among users, and gsrvrs can also create their own session log files on the fly. As
mentioned in the previous section, session log files are the default for SQL Server databases.
Using session log files dramatically reduces contention for log files, since a single connection is using the log
file. Similarly, since only one connection is using the log file, the table will not grow as large as it would when
dozens or hundreds of connections are using the same log file.
And finally, some delete optimizations can be made in special circumstances. If only one log file is open, the log
file table can be truncated rather than deleted. Truncating a table is orders of magnitude faster than deleting.
5.3 Stand-alone log files
Stand-alone log files are useful in several situations. The advantage of this log file configuration is that it can
always be truncated and will never grow beyond the size of a single selection set. Of course, there has to be a
user configurable limit on how many of these stand-alone log files can be created by a connection. For example,
if 100 users are creating selection sets on 50-layer maps, an unrestricted growth of stand-alone log files would
result in 5000 log file tables being created. To avoid this, set the MAXSTANDALONELOGFILE parameter.
Log Types Based on Network Traffic
Traffic - The traffic log records all traffic to and through the FortiGate interface. Different categories
monitor different kinds of traffic, whether it be external, internal, or multicast.
Event - The event log records management and activity events within the device in particular areas:
System, Router, VPN, User, WAN, and WiFi. For example, when an administrator logs in or
logs out of the web-based manager, it is logged both in System and in User events.
Antivirus - The antivirus log records virus incidents in Web, FTP, and email traffic.
Web Filter - The web filter log records HTTP FortiGate log rating errors, including web content
blocking actions that the FortiGate unit performs.
Intrusion - The intrusion log records attacks that are detected and prevented by the FortiGate unit.
Email Filter - The email filter log records blocking of email address patterns and content in SMTP, IMAP,
and POP3 traffic.
Vulnerability Scan - The Vulnerability Scan (Netscan) log records vulnerabilities found during scanning of the
network.
Data Leak Prevention - The Data Leak Prevention log records log data that is considered sensitive and that
should not be made public. This log also records data that a company does not want entering its network.
VoIP - The VoIP log records VoIP traffic and messages. It only appears if VoIP is enabled on the
Administrator Settings page.
In this study, we have analyzed the sample log files of a Web server with the help of the WebLog Expert
analyzer tool. The sample log files consist of the data for a month. In this duration the log files have stored
200 MB of data, and we have got 26.4 MB of data after preprocessing. We have determined different types of
errors that occurred in web surfing. Statistics about hits, page views, visitors and bandwidth are shown below.
Log analyzer tools are required as they help extensively in analyzing the information about visitors and top
errors, which can be utilized by system administrators and web designers to increase the effectiveness of the web site.
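The preprocessing step that shrank the raw logs from 200 MB to 26.4 MB is not spelled out in the text. A typical web-usage-mining cleaning pass drops failed requests, embedded resources and robot traffic; the sketch below uses hypothetical field names and filtering rules, not the study's actual ones.

```python
# Typical web-usage-mining preprocessing: keep only successful page
# requests, dropping embedded resources and known robots. These rules
# are common defaults, not the ones used in the study.
RESOURCE_SUFFIXES = (".gif", ".jpg", ".png", ".css", ".js", ".ico")
ROBOT_HINTS = ("bot", "crawler", "spider")

def keep_record(record):
    """record: dict with 'path', 'status' and 'agent' keys, as parsed
    from an access-log line."""
    if not record["status"].startswith("2"):
        return False                      # failed or redirected request
    if record["path"].lower().endswith(RESOURCE_SUFFIXES):
        return False                      # embedded resource, not a page view
    agent = record.get("agent", "").lower()
    return not any(h in agent for h in ROBOT_HINTS)

raw = [
    {"path": "/index.htm", "status": "200", "agent": "Mozilla/4.0"},
    {"path": "/logo.gif",  "status": "200", "agent": "Mozilla/4.0"},
    {"path": "/index.htm", "status": "404", "agent": "Mozilla/4.0"},
    {"path": "/index.htm", "status": "200", "agent": "Googlebot/2.1"},
]
cleaned = [r for r in raw if keep_record(r)]  # only the first record survives
```

Applied to a real log, a filter like this is what produces the large reduction in data volume reported above.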
General Statistics: In this section we get general information pertaining to the website, like how many times the
website was hit, the average hits in a day, page views, bandwidth, etc. It lists all the general information one
should know about a website.
Summary of Results
Total Hits
Visitor Hits
Spider Hits
Average Hits per Day
Average Hits per Visitor
Cached Requests
Failed Requests
Total Page Views
Average Page Views per Day
Average Page Views per Visitor
Total Visitors
Average Visitors per Day
Total Bandwidth
Visitor Bandwidth
Spider Bandwidth
Average Bandwidth per Day: 1.29GB
Average Bandwidth per Hit: 19.98GB
Average Bandwidth per Visitor: 232.18GB
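The summary metrics listed above are all simple aggregates over the parsed log records. A minimal sketch of how they are derived follows; real analyzers sessionize visitors with a timeout, while here a visitor is approximated by a distinct IP address, and the three records are hypothetical.

```python
# Derive the summary metrics from parsed log records. A "visitor" is
# approximated here by a distinct client IP; commercial analyzers use
# IP + user agent + an inactivity timeout instead.
records = [
    {"host": "10.0.0.1", "size": 2048},
    {"host": "10.0.0.1", "size": 1024},
    {"host": "10.0.0.2", "size": 4096},
]
days = 1  # span of the log being analyzed

total_hits = len(records)
total_visitors = len({r["host"] for r in records})
total_bandwidth = sum(r["size"] for r in records)   # bytes transferred

avg_hits_per_day = total_hits / days
avg_hits_per_visitor = total_hits / total_visitors
avg_bandwidth_per_hit = total_bandwidth / total_hits
```

The per-day, per-hit and per-visitor averages reported by WebLog Expert are exactly these ratios computed over the full month of records.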
Activity Statistics
Activity by Hour of Day
Activity by Day of Week
Access Statistics
Most Popular Pages
Most Downloaded Files
Most Requested Images
Most Requested Directories
Top Entry Pages
Top Exit Pages
Most Requested File Types
Top-Level Domains
Most Active Countries
Top Referring Sites
Top Referring URLs
Most Used Browsers
Device Types
Most Used Operating Systems
Error Types
An important research area in Web mining is Web usage mining, which focuses on the discovery of interesting
patterns in the browsing and navigation data of Web users. In order to make a website popular among its
visitors, system administrators and web designers should try to increase its effectiveness, because web pages are
one of the most important advertisement tools in the international market for business. The results obtained from
the study can be used by system administrators or web designers to tune their systems by determining system
errors and corrupted or broken links. In this study, web server log files were analyzed with the Web Log
Expert Log Analyzer tool. One important use of patterns is to summarize data, since the pattern collections
together with the quality values of the patterns can be considered as summaries of the data. This paper presents
an overview of log files, the content of log files and the variety of log files. Web log analyzer tools are a part
of web analytics software. They take a log file as an input, analyze it and generate results. Web Log Expert was
chosen to analyze the web logs of the website as it provides extensive features even in the free edition. The
results were examined and are being incorporated into the website of the user. Such log analyzer tools
should be widely used as they help analysts a lot in understanding customer behavior.
REFERENCES
[1] Soumen Chakrabarti, "Mining the Web: Discovering Knowledge from Hypertext Data", Morgan Kaufmann.
[3] K. R. Suneetha and R. Krishnamoorthi, "Identifying User Behavior by Analyzing Web Server Access Log
File", International Journal of Computer Science and Network Security, Vol. 9, No. 4, April 2009.
[4] Naga Lakshmi, Raja Sekhara Rao and Sai Satyanarayana Reddy, "An Overview of Preprocessing on Web
Log Data for Web Usage Analysis", International Journal of Innovative Technology and Exploring
Engineering, ISSN: 2278-3075, Vol. 2, Issue 4, March 2013, pp. 274-279.
[5] Ankita Kusmakar and Sadhna Mishra, "Web Usage Mining: A Survey on Pattern Extraction from Web
Logs", International Journal of Advanced Research in Computer Science and Software Engineering, Vol. 3,
Issue 9, September 2013, ISSN: 2277 128X, pp. 834-838.
[6] G. K. Gupta, "Introduction to Data Mining with Case Studies: Web Data Mining", PHI Learning Private
Limited, pp. 231-233, 2011.
[7] Kanwal Garg, Rakesh Kumar and Vinod Kumar, "Extraction of Frequent Patterns from Web Logs using
Web Log Mining Techniques", International Journal of Computer Applications (0975-8887), Vol. 59,
No. 10, December 2012, pp. 19-26.
[9] Neha Goel and C. K. Jha, "Analyzing Users Behavior from Web Access Logs using Automated Log
Analyzer Tool", International Journal of Computer Applications (0975-8887), Vol. 62, No. 2, January.
[10] Bing Liu, "Web Data Mining: Exploring Hyperlinks, Contents, and Usage Data", Springer.
[11] Punit Goyal, "Identification of Human Behavior using Analysis of Web Log Data Mining", IPASJ
International Journal of Information Technology, Vol. 1, Issue 1, June 2013, ISSN 2321-5976, pp. 1-7.
Mary Roshni Roy
Department of Structural (Civil) Engineering, Amal Jyothi College of Engineering,
Kottayam, Kerala, (India)
Smart structures technology is one of the most rapidly growing technologies and its applications extend to
various fields. Piezoelectric effect is the most common smart effect studied due to its wide range of applications
in the engineering field and also due to its advantages over other smart effects. Smart structures are built with
piezoelectric patches surface-bonded or as embedded sheet actuators on laminated composite beams or plates.
A strain actuation is induced in the plate and this induced strain controls its bending, extension and twisting.
This work presents piezo-laminated plates with induced strain actuation. The formulation for the analysis is
based on Kirchhoff's hypothesis. The work theoretically validates the implementation of a multilayered
three-dimensional model based on the analogy between thermal strains and piezoelectric strains. To assess the
piezoelectric-thermal analogy for different loading conditions, the numerical results obtained from this model
are compared to the results obtained from a finite element reference model based on a three-dimensional
piezoelectric formulation. The static deflection for various loading conditions and free vibration frequencies are
obtained and verified to be in good agreement with those obtained from the Finite Element Method.
Keywords: Free Vibration, Piezoelectric Effect, Smart Structure, Static Deflection, Thermal
Smart materials possess adaptive capabilities in response to external stimuli, such as loads or environment, with
inherent 'intelligence'. The stimuli could be pressure, temperature, electric and magnetic fields, chemicals or
nuclear radiation. The changeable physical properties could be shape, stiffness, viscosity or damping. The kind
of smart material is generally programmed by material composition, special processing, and introduction of
defects or by modifying the micro-structure, so as to adapt to the various levels of stimuli in a controlled
fashion. Optical fibres, piezo-electric polymers and ceramics, electro-rheological fluids, magneto-strictive
materials and shape memory alloys are some of the smart materials. [1]
Smart materials can be either active or passive. Active smart materials possess the capacity to modify their
geometric or material properties under the application of electric, thermal or magnetic fields, thereby acquiring
an inherent capacity to transduce energy. Smart materials that lack the inherent capacity to transduce energy are
called passive smart materials. Fibre optic material is a passive smart material whereas piezoelectric materials,
shape memory alloys(SMAs), electro-rheological(ER) fluids are all active smart materials. Piezoelectric
materials are the most common type of smart materials and have a wide range of applications. They are used in
accelerometers, strain sensors, emitters and receptors of stress waves, vibration dampers, underwater acoustic
absorption, robotics, smart skins for submarines etc. The most recent advancements in the field of smart
materials and structures are:
Materials which can restrain the propagation of cracks by automatically producing compressive
stresses around them (Damage arrest).
Materials which can discriminate whether the loading is static or shock and can generate a large force
against shock stresses (Shock absorbers).
Materials possessing self-repairing capabilities, which can heal damage in due course of time (Self-healing materials).
Materials which are usable up to ultra-high temperatures by suitably changing composition through
transformation (Thermal mitigation). [1]
Fig 1: A laminated plate with surface bonded piezoelectric actuator and sensor
A smart system/structure can hence be defined as "a system or material which has built-in or intrinsic sensors,
actuators or control mechanisms whereby it is capable of sensing a stimulus, responding to it in a predetermined
manner and extent, in a short appropriate time, and reverting to its original state as soon as the stimulus is
removed." [2] Based on the level of sophistication, smart structures are classified as:
Sensory Structures
Adaptive Structures
Controlled Structures
Active Structures
Intelligent Structures
Structural vibrations are controlled by modifying the mass, stiffness and damping of the structure. This increases
the overall mass of the structure and is found to be unsuitable for controlling low frequency vibrations. This
method does not suit applications where weight restrictions are present and low frequency vibrations are
encountered. For such applications, smart structures are being developed which are light weight and attenuate
low frequency vibrations. A structure in which an external source of energy is used to control structural vibrations
is called a 'smart structure' and the technique is called active vibration control (AVC). A smart structure
essentially consists of sensors to capture the dynamics of the structure, a processor to manipulate the sensor
signal, actuators to obey the order of processor and a source of energy to actuate the actuators. [3]
Fig 2: Schematic of a smart structure
Many materials such as piezoelectric materials, shape memory alloys, electro-strictive materials,
magneto-strictive materials, and electro- and magneto-rheological fluids can be used for suppressing vibrations.
Piezoelectric sensors/actuators are more widely used because of their excellent electromechanical properties, fast
response, easy fabrication, design flexibility, low weight, low cost, large operating bandwidth, low power
consumption, generation of no magnetic field while converting electrical energy into mechanical energy etc.
Piezoelectric materials generate strains when an electric signal is applied on them and vice versa. So they are
used as sensors and actuators for structural vibrations. They are used in the form of distributed layers, surface
bonded patches, embedded patches, cylindrical stacks, active fiber composite patches etc. Surface mounted or
embedded piezoelectric patches can control a structure better than a distributed one because the influence of
each patch on the structural response can be individually controlled.
The development of durable and cost effective high performance construction materials is important for the
economic well-being of a country. The use of smart materials is encouraged for the optimal performance and
safe design of buildings and other infrastructures particularly those under the threat of earthquake and other
natural hazards. There is a wide range of structural applications for smart materials. SMAs can repeatedly
absorb large amounts of strain energy under loading without permanent deformation. They have an
extraordinary fatigue resistance and great durability and reliability. SMAs can be used for the active structural
vibration control. When smart materials are used in composites, they can be monitored externally throughout the
life of the structure to relate the internal material condition by measuring stress, moisture, voids, cracks,
discontinuities via a remote sensor.[1]
SMAs can be used as self-stressing fibres and thus can be applied for retrofitting. Self-stressing fibres are the
ones in which reinforcement is placed into the composite in a non-stressed state. A pre-stressing force is
introduced into the system without the use of large mechanical actuators, by providing SMAs. Self-healing
behavior of smart structures is one of its major structural applications. Use of piezo-transducers, surface bonded
to the structure or embedded in the walls of the structure can be used for structural health monitoring and local
damage detection.
Smart materials offer several advantages over conventional materials. They include:
Improved strength
Better stiffness
Fatigues and impact resistance
Corrosion and wear resistance
Longer life
Cost advantage
Weight reduction
Smart materials also possess certain drawbacks when compared to conventional materials which include:
Complexity in the structure
Difficulty in repairing
Reuse and disposal is difficult
The design process of smart structures involves three main phases, namely, structural design, optimal size and
location of actuators/sensors and controller design. Performance of AVC depends on the proper placement of
piezoelectric sensors and actuators. The positioning of piezoelectric patches should be in such a way that the
structure should not be unstable. Optimization techniques are used in AVC to find the optimal sensor/actuator
locations in smart structures. There are five optimization criteria on the basis of which the locations are fixed;
they are briefly explained below:
A. Maximizing Modal Forces/Moments Applied by Piezoelectric Actuators
Piezoelectric actuators are desired to strain the host structure in a direction opposite to the strains developing in
the host structure. So, the piezoelectric actuators should be placed in the regions of high average strains and
away from the areas of zero strain. If an electric field is applied across piezoelectric actuators in the same
direction, the host structure will be deformed in extension mode. If the field is applied across piezoelectric
actuators in the opposite direction, the host structure will be deformed in bending mode.
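The extension/bending behaviour described above can be made explicit for a symmetric pair of identical actuator layers. The resultants below are a standard laminate-theory sketch; the symbols Q_a, t_a and z_a are generic, not taken from the paper.

```latex
% Two identical piezo layers of stiffness Q_a and thickness t_a, bonded at
% distances +z_a and -z_a from the midplane, with actuation strains
% \Lambda_{top} and \Lambda_{bot}:
N^{\Lambda} = Q_a t_a \left(\Lambda_{\mathrm{top}} + \Lambda_{\mathrm{bot}}\right),
\qquad
M^{\Lambda} = Q_a t_a z_a \left(\Lambda_{\mathrm{top}} - \Lambda_{\mathrm{bot}}\right)
% Equal actuation strains (field in the same direction) leave only the
% in-plane force N (extension); opposite strains leave only the moment M
% (bending).
```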
B. Maximizing deflection of the Host Structure
When an external voltage is applied on the surface bonded piezoelectric actuator, it produces transverse
deflections in the host structure. Transverse deflection of the host structure is a function of actuator placement.
So, transverse deflection of the host structure can be used as a criterion for optimal placement of actuators.
C. Maximizing Degree of Controllability
The smart structure should be controllable for effective active vibration control. Controllability is a function of
both the system dynamics and location and number of actuators. Matrix B of the system is determined by
actuator locations on the smart structure. A standard check for the controllability of a system is a rank test of the
global matrix.
D. Maximizing Degree of Observability
A closed loop system is completely observable if, examination of the system output determines information
about each of the state variables. If one state variable cannot be observed in this way, the system is said to be
unobservable. Observability is a function of both system dynamics and location and number of sensors. The
output influence matrix C is determined by the position of sensors on the smart structure.
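Both rank tests can be sketched numerically. The two-state system below is an illustrative single-mode example; the matrices A, B and C are hypothetical, with B and C standing in for an actuator and a sensor placement.

```python
import numpy as np

# Rank tests for controllability and observability of x' = A x + B u,
# y = C x. The system is controllable/observable when the corresponding
# matrix has full rank n.
def controllability_matrix(A, B):
    n = A.shape[0]
    return np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])

def observability_matrix(A, C):
    n = A.shape[0]
    return np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])

A = np.array([[0.0, 1.0], [-4.0, 0.0]])  # one vibration mode, omega^2 = 4
B = np.array([[0.0], [1.0]])             # actuator enters the velocity equation
C = np.array([[1.0, 0.0]])               # sensor reads displacement

controllable = np.linalg.matrix_rank(controllability_matrix(A, B)) == A.shape[0]
observable = np.linalg.matrix_rank(observability_matrix(A, C)) == A.shape[0]
```

Moving a patch changes the entries of B (or C); a placement that drops the rank below n corresponds to an actuator (or sensor) sitting at a nodal point of some mode, which the optimization criteria above are meant to avoid.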
E. Minimizing Spillover Effects
A smart flexible structure is discretized into finite number of elements for vibration analysis and control. Only
first few low frequency modes of interest are considered which results in a reduced model. A reduced model
may excite residual modes. These residual modes appear in sensor output but are not included in the control
design. This closed-loop interaction with low damping of residual modes, results in spillover effects, which
should be minimized. [3]
These optimization criteria should be followed for optimal placement of actuators and sensors in a smart
In this work, numerical results are obtained using Whitney's formulations for static deflection and free vibration,
derived using Classical Laminated Plate Theory with a thermal analogy for the equivalent piezoelectric
strain. The results are compared with Finite Element Method results obtained in the FEAST software.
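The thermal analogy used here replaces the voltage-induced actuation strain with an equivalent thermal strain. A common form of this equivalence, assuming a d31-type actuator (the symbols are generic, not the paper's), is:

```latex
% Free strain of a piezo layer of thickness t_p under applied voltage V:
\Lambda = d_{31}\, E_3 = d_{31}\, \frac{V}{t_p}
% Identifying this with a thermal strain \alpha \, \Delta T lets a standard
% thermo-elastic solver drive the actuator, e.g. with
\alpha_{\mathrm{eq}} = \frac{d_{31}}{t_p}, \qquad \Delta T = V
```

With this identification, every piezo layer is modelled as a ply with expansion coefficient alpha_eq subjected to a fictitious temperature rise equal to the applied voltage.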
FEAST (Finite Element Analysis of STructures) is the Indian Space Research Organisation (ISRO) structural
analysis solver software based on Finite Element Method (FEM) realized by Structures group of Vikram
Sarabhai Space Centre (VSSC). Development was initiated in 1977, and the first version, for the linear analysis
of metallic structures, was released in 1981.
Based on the properties of the Piezoelectric material Lead Zirconate Titanate (PZT) the detection of damages,
the assessment of their severity level in non-accessible RC members and the monitoring of the possible damage
evolution with time are possible. A PZT sensor can produce electrical charges when subjected to a strain field
and conversely mechanical strain when an electrical field is applied.
The Structural Health Monitoring (SHM) and damage detection techniques have been developed based on the
coupling properties of the piezoelectric materials. The impedance-based SHM approach utilizes the
electromechanical impedance of these materials that is directly related with the mechanical impedance of the
host structural members, a property that is directly affected by the presence of any structural damage. Thus the
extracted impedance and its inverse, the admittance, constitute the properties on which the PZT approach is based
for the SHM of reinforced concrete structures. The produced effects by the structural damages on the PZT
admittance signals are vertical enlargement or/and lateral shifting of the baseline signals of the initially healthy
structure. These effects are the main damage indicators for damage detection.
To experimentally show the working of a smart structure, a RC model with piezoelectric patches is created in
Feast software. The frequency response of healthy condition is first recorded, followed by a frequency response
of a damaged condition which is applied to the model in the form of a hole and delamination in the patches. The
increase in admittance indicates damage and can be rectified by using suitable dampers in damage detected
areas or by increasing the stiffness. In actual conditions, the admittance versus frequency signals are displayed
by the sensors which gives signals once the actuators on the piezoelectric patches get actuated.
Fig 3: Frequency Response of healthy and damaged cases
1376 | P a g e
Venue: YMCA, Connaught Place, New Delhi (India)
International Conference on Emerging Trends in Science and Cutting Edge Technology
Smart structures are constructed of thin composite laminates with embedded or surface-bonded piezoelectric layers acting as induced-strain actuators. In this work, the strain given to the piezoelectric layer is in the form of thermal strain. Consider a piezo-laminated plate as shown in Fig. 4. The dimensions of the plate are a and b, the plate length in the x-direction and the plate width in the y-direction respectively, and h is the total thickness of the plate in the z-direction. Two piezoelectric layers are bonded on the top and bottom surfaces of the plate, and four graphite-epoxy layers are stacked between the piezoelectric layers.
Laminated plates undergo large amplitude vibrations and deflections when they are subjected to large dynamic
loading conditions.
Fig 4 Simply supported piezo-laminated plate
The piezo-laminated plate considered consists of two distinctly different materials exhibiting two different behaviours. The composite layers contribute the stiffness characteristic, while the piezo layers act as active elements responding to external excitation owing to their piezoelectric characteristics. The heterogeneity in a composite material arises not only from the bi-phase composition but also from the laminations, which leads to a distinctly different stress-strain behaviour in laminates.
A. System Equations
Using the assumptions of Classical Laminated Plate Theory, the strain-displacement relations and equations of motion are derived. From these standard equations, the static deflection and free vibration formulations are developed.
The constitutive relation for any ply of a generic coupled piezo-laminated beam with a surface-bonded or embedded induced-strain actuator is
σ = QT (ε − Λ)
where QT is the transformed reduced stiffness matrix, ε the strain and Λ the actuation strain.
The total strain is expressed using Kirchhoff's hypothesis as
ε = ε° + z κ
where σ, ε°, κ and Λ denote the components of stress, mid-plane strain, curvature change and actuation strain respectively, together with the equivalent stress due to actuation.
The induced strain actuation is modeled by considering the constitutive equations for piezoelectric materials, which are poling-direction dependent. The planar isotropy of the poled ceramics is expressed by their piezoelectric constants, such that the piezoelectric charge constants satisfy d31 = d32. The applied static electric field within the piezoelectric actuator is assumed to be constant, as the thickness of the layer is relatively small.
In order to include the effects of expansional strains, the following Duhamel-Neumann form of Hooke's law is used:
ε_i = S_ij σ_j + e_i   (i = 1, 2, …, 6)
where S_ij are the anisotropic compliances and e_i are the generalized expansional strains.
For thermal expansion, e_i = α_i ΔT.
The inverted form of equation B.1 for plane stress is given by the relationship
σ_i = Q_ij (ε_j − e_j)   (i, j = 1, 2, 6)   (6)
where Q_ij are the reduced stiffnesses for plane stress. Using contracted notation, the transformation relations for expansional strains under in-plane rotation are obtained in the reduced form,
where the expansional strains parallel and transverse to the principal material axis appear. Thus an off-axis orientation leads to an anisotropic shear-coupling expansional strain relative to the x-y plane.
The laminate constitutive relation is hence obtained, and the expansional force and moment resultants are defined as follows:
For bending of symmetric laminated plates, these are the only non-vanishing elements of the coupling stiffness matrix Bij.
For a uniform load q = q0 = constant, the Fourier load coefficients are
qmn = 16 q0 / (π² m n)   (m, n odd)
qmn = 0   (otherwise)
For free vibration of unsymmetric laminated plates, when coupling is neglected (Bij = 0), we obtain the equation
which is the frequency equation for the flexural vibration of a simply supported homogeneous orthotropic plate. Coupling reduces the fundamental vibration frequency. The fundamental frequency of orthotropic laminates always occurs for m = n = 1.
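For reference, this frequency equation has the standard Navier form for a specially orthotropic, simply supported plate (reconstructed from classical laminated plate theory, since the equation image is not reproduced in the text; Dij are the bending stiffnesses and ρh the mass per unit area, notation assumed):

```latex
\omega_{mn}^{2} = \frac{\pi^{4}}{\rho h}
\left[ D_{11}\left(\frac{m}{a}\right)^{4}
     + 2\left(D_{12} + 2D_{66}\right)\left(\frac{m}{a}\right)^{2}\left(\frac{n}{b}\right)^{2}
     + D_{22}\left(\frac{n}{b}\right)^{4} \right]
```

The fundamental mode corresponds to m = n = 1.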
The thermal analogy followed in this work is an indirect effect. In order to find an equivalent piezoelectric strain with the thermal analogy, the two strains are equated.
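Equating the free piezoelectric strain to a thermal expansion strain yields the analogy below (d31 the piezoelectric charge constant, E3 the applied field, α the thermal expansion coefficient; notation assumed, not reproduced from the paper):

```latex
\Lambda = d_{31}\,E_{3} = \alpha\,\Delta T
\quad\Longrightarrow\quad
\Delta T = \frac{d_{31}\,E_{3}}{\alpha}
```

The equivalent temperature change ΔT is then applied to the piezo layers in the finite element model to represent the actuation strain.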
Breakdown Voltage = 75.9 MPa
Rated Stress = 20.7 MPa
Table 1: Properties of materials used in the present analysis
Table 2: Comparison of Static deflection of piezo-laminated plates with and without smart effects
Table 3: Comparison of Free Vibration values of present analysis and FEM of piezo-laminated plate
The results predicted by the present formulation and the finite element solutions using the FEAST software for static deflection and free vibration show differences because transverse shear deformation is not considered in the formulation, along with the assumed displacement field, the Galerkin approximation of the solution and the high modulus ratio. The results are in good agreement and hence the piezoelectric thermal analogy adopted is a valid relation. It is clear that bending is comparatively less in the case of smart structures, and damage in the structure can also be rectified by various methods. Hence smart technology should be put to use in various fields, as it is much more advantageous compared to conventional methods.
[1] D. Y. Gao, D. L. Russell, An Extended Beam Theory for Smart Materials Applications, Applied Mathematics and Optimization, 38:69-94, 1998.
[2] F. L. Matthews, D. Hitchings, C. Santisi, Finite Element Modeling of Composite Materials and Structures, Woodhead Publishing Limited.
[3] Vivek Gupta, Manu Sharma, Nagesh Thakur, Optimization Criteria for Optimal Placement of Piezoelectric Actuators/Sensors on a Smart Structure, Journal of Intelligent Material Systems and Structures, 21:1227.
[4] Jayakumar K., Deepak P., Anil Kumar P. V., Release Document FEASTSMT Version 9.5, October 2013, Structural Modelling and Software Development Division (SMSD), VSSC.
[5] Inderjit Chopra, Review of State of Art of Smart Structures and Integrated Systems, AIAA Journal, Vol. 40, No. 11, 2002.
[6] F. Cote, P. Masson, N. Mrad, V. Cotoni, Dynamic and static modeling of piezoelectric composites, 2013.
Mrs. Mukta Sharma1, Dr. R. B. Garg2
1Research Scholar, Department of Computer Science, TMU, Moradabad (India)
2Ex-Professor, Department of Computer Science, Delhi University, Delhi (India)
Online banking services are used extensively across the globe. Implementing security features for these networks is critical, as communication takes place over an insecure channel, the Internet. There is thus a growing need to secure the data transmitted over different networks using different services. Different encryption methods are used to provide security for the network and the data. Encryption is the process of changing plain readable text into unreadable cipher text. Cryptographic algorithms play a vital role in the field of network security. There are two basic types of cryptosystems: symmetric and asymmetric. Symmetric cryptosystems are characterized by the fact that the same key is used in the encryption and decryption transformations. Asymmetric cryptosystems use complementary pairs of keys for the encryption and decryption transformations: one key, the private key, is kept secret like the secret key in a symmetric cryptosystem, while the other key, the public key, does not need to be kept secret [1]. This paper focuses on designing an encryption algorithm to secure online transactions. As a continually growing financial service of electronic commerce with many users, Internet banking requires the development and implementation of a sound security algorithm.
Keywords: Cryptography, Symmetric & Asymmetric Cryptography, Plain Text, Cipher Text
For the first few decades, the internet was primarily used by the military and universities. Today, millions of users use the internet for a large variety of commercial and non-commercial purposes. Therefore, it is essential to secure the internet from various threats: spyware, malware, hackers, phishers etc. Internet security is not about protecting hardware or the physical environment; it is about protecting information [1]. Ensuring security is a serious business on which much research is ongoing. One way to secure transmission is to use cryptography.
Cryptography is derived from two Greek words, kryptos (secret) and graphein (writing), which together mean "secret writing". Cryptography allows secure transmission of private information over insecure channels. It is the art or science encompassing the principles and methods of transforming an intelligible message into one that is unintelligible and then retransforming that message back to its original form. It is the mathematical "scrambling" of data so that only someone with the necessary key can "unscramble" it.
1.1. Characteristics of Cryptography
AUTHENTICITY: Is the sender (either client or server) of a message who they claim to be?
PRIVACY: Are the contents of a message secret and only known to the sender and receiver?
INTEGRITY: Have the contents of a message been modified during transmission?
NON-REPUDIATION: Can the sender of a message deny that they actually sent the message? It
is the ability to limit parties from refuting that a legitimate transaction took place, usually by
means of a signature.
1.2. Basic Terminology
Plain text - the original message
Cipher text - the coded message
Cipher - algorithm for transforming plaintext to cipher text
Key - info used in cipher known only to sender/receiver
Encipher (encrypt) - converting plaintext to cipher text
Decipher (decrypt) - recovering plaintext from cipher text
Cryptography - study of encryption principles/methods
Cryptanalysis (code breaking) - the study of principles/methods of deciphering cipher text without knowing the key
Cryptology - the field of both cryptography and Cryptanalysis
Private Key/ Symmetric key
Public Key/ Asymmetric key
Fig. 1: Type of Keys
Secret key or Symmetric key - Here the sender and receiver possess the same single key. Symmetric ciphers can be divided into stream ciphers and block ciphers: a stream cipher encrypts a single bit of plain text at a time, whereas a block cipher encrypts a number of bits as a single unit.
Public key or Asymmetric key - Involves two related keys called a key pair: one public key known to anyone and one private key that only the owner knows.
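As a toy illustration of the symmetric property, where the same key both encrypts and decrypts, here is a repeating-key XOR cipher in Python (purely illustrative and not secure, and not part of the paper's algorithm):

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Repeating-key XOR: applying it twice with the same key restores the input."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

cipher = xor_cipher(b"User", b"k3y")
plain = xor_cipher(cipher, b"k3y")   # the same key decrypts
print(plain)   # b'User'
```

An asymmetric system, by contrast, would use one key to encrypt and a mathematically related but different key to decrypt.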
The initial idea behind the algorithm was to make use of ASCII values and prime numbers. Considering a character as the plain text and n as the ASCII value of the character, the cipher text can be the nth prime number. But this had a big flaw: no key was used, making it easy to decrypt. Certainly some key had to be used, and different approaches were tried before finalizing the key for the process.
Consider a complete string coming in as the plain text.
Considering a complete String coming as a Plain Text.
Approach - I - Minimum Value
Take the ASCII values of the characters in the string.
Find the minimum of them.
Key = Minimum Value.
Flaws:
In a large text, zero has the highest probability of coming out as the minimum value.
The same key will generate the same cipher text every time.
Space complexity of an array.
Approach - II - Mid-Value
Take the ASCII values of the characters in the string.
Sort the array.
Find the mid value of them.
Key = Mid-Value.
Flaws:
Two different users interacting with the system will have the same key for the same string being entered.
User A -> Plain Text = "User". Cipher Text = 1234.
User B -> Plain Text = "User". Cipher Text = 1234.
However, to be more secure, the two should have different cipher texts.
Approach - III - Random Number
Generate a random number.
Key = Absolute Value of the Random Number.
This approach resolved the issues discussed in the previous two approaches.
For any analysis purpose, it was necessary to observe the values of the variables with respect to reference variables and, hence, the variables were correlated using the repeating-variable method (Edward, 2005). As mentioned above, the total number of variables considered for the present investigation is 6. Three of the 6 variables were considered fundamental variables and a functional relationship was established as Φ(V, W, D, F, B, G) = 0. The derived groups were V/(D^2 F), B/D and G/(D^2 F). The relationship obtained using the Buckingham Pi theorem is G/(D^2 F) = f(B/D, V/(D^2 F)). Crack growth rates for specimens at fixing lengths of 400 mm, 350 mm and 300 mm were calculated at frequencies of 60 Hz, 80 Hz, 100 Hz and 120 Hz. The calculated values of G were then plotted for useful analysis.
The algorithm has two main steps:
Step One: Generate a Key.
Generate a random number (16 bits).
Get the absolute value of the number generated:
Key = Abs(RandomNo.)
So the algorithm generates a 32-bit key for a 16-bit plain text.
Step Two: Generate the cipher text using the Key, a prime number and the plain text.
Initialize nthPrime as the ASCII value of the plain-text character and the cipher text as 0.
The objective is to find the nth prime number after the Key.
For example: the key generated is 4
and the character entered is "A".
The ASCII value is 65,
i.e. we need to find the 65th prime number after 4.
Obtained value: 331, which is the 65th prime number after 4.
Add a constant to the obtained value to give the cipher text. The constant can be the last two digits of the Key.
Pictorial Representation of Encryption Algorithm
Fig. 2: Conversion of Plain Text to Cipher Text
Note: As we go higher in numbers, the prime numbers turn sparse. The notion of adding a constant to the obtained prime number is to cover a larger set of natural numbers.
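The two steps above can be sketched in Python as follows (the helper names are mine, and the constant is taken as the last two digits of the key, as the text suggests):

```python
def is_prime(n: int) -> bool:
    """Trial-division primality test."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    f = 3
    while f * f <= n:
        if n % f == 0:
            return False
        f += 2
    return True

def nth_prime_after(key: int, n: int) -> int:
    """Return the n-th prime strictly greater than key."""
    count, p = 0, key
    while count < n:
        p += 1
        if is_prime(p):
            count += 1
    return p

def encrypt_char(ch: str, key: int) -> int:
    constant = key % 100                       # last two digits of the key
    return nth_prime_after(key, ord(ch)) + constant

# Worked example from the text: key = 4, character 'A' (ASCII 65)
print(nth_prime_after(4, 65))   # 331
print(encrypt_char('A', 4))     # 331 + 4 = 335
```

The worked example reproduces the text's value: 331 is the 65th prime after 4, and adding the constant gives the cipher value.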
Table 1: Encryption Table (columns: Key (Random No.), ASCII Value, Nth Prime post Key, Add Constant (e.g. 2))
Pseudo Code for Step Two:
SET Key = AbsoluteValue(RandomNumber)
INIT Ciphertext = 0
READ plaintext
FOR i = 1 to sizeof(plaintext)
    SET nthVal = ascii(plaintext[i])
    SET P = Key, count = 0
    WHILE count != nthVal
        SET P = P + 1
        SET status = 1
        IF P == 2
            status = 1
        ELSE IF P % 2 == 0
            status = 0
        ELSE
            FOR j = 3 to Math.sqrt(P) STEP 2
                IF P % j == 0
                    status = 0
        IF status == 1
            count = count + 1
    SET nthprime = P
    SET Ciphertext[i] = nthprime + Constant
Obtain the Key.
Read the Cipher Text.
Subtract the CONSTANT from the Cipher Text to obtain the value, which is the nth prime number post Key.
The decryption algorithm executes until it generates the same nth prime number as obtained in step (iii), keeping a counter of the primes seen.
The counter gives the ASCII value.
Get the character from the obtained ASCII value:
PlainText = Character.toString((char) counter(CipherTxt - CONSTANT))
Fig. 3: Conversion of Cipher Text to Plain Text
Table 2: Decryption Table (columns: Key (Random No.), Nth Prime post Key (constant 2 subtracted), Plain Text)
Pseudo Code:
READ cipher
SET target = cipher - CONSTANT
SET i = Key, count = 0, nthprime = 0
WHILE nthprime != target
    SET i = i + 1
    SET status = 1
    IF i == 2
        status = 1
    ELSE IF i % 2 == 0
        status = 0
    ELSE
        FOR j = 3 to Math.sqrt(i) STEP 2
            IF i % j == 0
                status = 0
    IF status == 1
        count = count + 1
        SET nthprime = i
SET plaintext = character(count)
RETURN plaintext
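The decryption loop can be sketched in Python as follows (self-contained, with the same assumed helper names and constant rule as the encryption sketch):

```python
def is_prime(n: int) -> bool:
    """Trial-division primality test."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    f = 3
    while f * f <= n:
        if n % f == 0:
            return False
        f += 2
    return True

def decrypt_char(cipher: int, key: int) -> str:
    """Invert the encryption: count primes after the key until the target
    prime is reached; the count is the ASCII value of the character."""
    constant = key % 100           # same constant rule as encryption
    target = cipher - constant     # the prime produced during encryption
    count, p = 0, key
    while p != target:
        p += 1
        if is_prime(p):
            count += 1
    return chr(count)

# Inverts the worked encryption example (cipher 335, key 4)
print(decrypt_char(335, 4))   # 'A'
```

The counter reaches 65 exactly when the loop hits 331, the 65th prime after 4, recovering 'A'.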
Both algorithms are implemented in the Eclipse IDE with Java 1.6 on a Dell laptop with the configuration: Intel Core i5 @ 2.60 GHz, 4 GB RAM and 64-bit Windows 7 OS.
Fig 4: Screen Shot of the Encryption Implementation Code
Fig. 5: Screen Shot of the Decryption Implementation Code
The proposed algorithm implements a good strategy of making the most of the advantages of prime numbers and ASCII values. Space complexity has also been dealt with as an essential objective of this algorithm.
In cryptography, key size or key length is the size measured in bits of the key used in a cryptographic algorithm
(such as a cipher). An algorithm's key length is distinct from its cryptographic security, which is a logarithmic
measure of the fastest known computational attack on the algorithm, also measured in bits. The security of an
algorithm cannot exceed its key length (since any algorithm can be cracked by brute force), but it can be smaller.
Most symmetric-key algorithms in common use are designed to have security equal to their key length[3]. The proposed algorithm is based on 32-bit key generation for a 16-bit plain text, hence meeting the minimum requirement of the symmetric-key algorithm key-generation factor.
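For context, a key of k bits admits 2^k candidate keys under brute force, which is easy to tabulate (a back-of-the-envelope illustration, not an analysis from the paper):

```python
# Brute-force keyspace implied by a given key length
for bits in (16, 32, 128, 256):
    print(f"{bits}-bit key -> {2 ** bits:,} possible keys")
```

A 32-bit keyspace of about 4.3 billion keys is small by modern standards, which is why production symmetric ciphers use 128-bit or larger keys.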
Future work related to the proposed algorithm will focus on encrypting and decrypting data with the least execution time. The concept of block-wise parallel encryption using multithreading can enhance the speed of the encryption system.
[1] Bhati,S., Bhati,A.,Sharma K, S. (2012), A New Approach towards Encryption Schemes: Byte – Rotation
Encryption Algorithm, Proceedings of the World Congress on Engineering and Computer Science 2012 Vol II
[2] Gupta, V., Singh, G., Gupta, R. (2012), Advance cryptography algorithm for improving data security, International Journal of Advanced Research in Computer Science and Software Engineering, Volume 2, Issue 1, January 2012
[9] Kannan Muthu, P., Asthana, A. (2012), Secured Encryption Algorithm for Two Factor Biometric Keys,
International Journal of Latest Research in Science and Technology , Vol.1,Issue 2 :Page No.102-105 ,July
[10] Kumar, A., Jakhar, S., Makkar, S. (2012), Comparative Analysis between DES and RSA Algorithms,
International Journal of Advanced Research in Computer Science and Software Engineering, Vol. 2, Issue 7
[11] Mathur, A.(2012), A Research paper: An ASCII value based data encryption algorithm and its comparison
with other symmetric data encryption algorithms, International Journal on Computer Science and
Engineering (IJCSE), Vol 4, No.9
[12] Verma,S., Choubey,R., Soni, R. (2012), Design and Implementation of New Encryption algorithm Based on
Block Cipher Substitution Technique (Effect on Image file), International Journal of Computer Technology
and Electronics Communication.
[13] Chinchalkar, S., “Determination of Crack Location in Beams using Natural Frequencies”, Journal of Sound
Vibration Volume 247 (3), 2001, 417-429.
[14] Batabyal, A. K., Sankar, P., and Paul, T. K., "Crack Detection in Cantilever Beam using Vibration Response", Vibration Problems ICOVP-2007, Springer Netherlands, 2008, Pages 27-33.
[15] Srinivasarao, D. Rao, K. M. and Raju, G.V., Crack identification on a beam by vibration measurement and
wavelet analysis, International Journal of Engineering Science and Technology 2(5), 2010, 907-912 .
[16] Zhong, S. and Oyadiji, S.O., Detection of cracks in simply-supported beams by continuous wavelet
transform of reconstructed modal data, Computers and Structures 89, 2011, 127-148.
[17] Jiang, X., John Ma, Z. and Ren, W. X., Crack detection from the slope of the mode shape using complex continuous wavelet transform, Computer-Aided Civil and Infrastructure Engineering 27, 2012, 187-201.
[18] Ghadami, A., Maghsoodi, A., Mirdamadi, H. R., "A new adaptable multiple-crack detection algorithm in beam-like structures", Arch. Mech., 65(6), Warszawa, 2013, 1-15.
Raza A. Khan1, Dr. Kavita D. Rao2
1Research Scholar, 2Professor, School of Planning & Architecture,
Jawaharlal Nehru Architecture and Fine Arts University, Hyderabad, India
Heating, Ventilation and Air-Conditioning (HVAC) systems are meant to provide thermal comfort and clean, healthy air to building occupants. Cities like Hyderabad in India have seen remarkable economic growth in recent times. This has resulted in a large number of air-tight, sophisticated buildings that employ HVAC systems throughout the year; hence, these systems are the only means of air supply to the building. A properly designed, commissioned and maintained HVAC system is thus crucial to good Indoor Air Quality (IAQ) inside these buildings. This paper presents an insight into various aspects of HVAC systems and their potential to improve the air quality within the building environment, as IAQ is vital to occupants' health, performance and productivity.
Keywords: Building Environment, Filtration, HVAC Systems, Indoor Air Quality (IAQ), Ventilation
Today, many people spend most of their time inside modern buildings where the indoor climate is artificially
controlled to achieve thermal, visual, acoustical comfort in addition to the acceptable indoor air quality (IAQ)
conditions. IAQ has become an area of great interest because of its profound impact on the occupant’s health
and productivity. Occupants of buildings with air quality problems suffer from symptoms like eye, nose and
throat irritation, dry skin and mucous membranes, fatigue, headache, wheezing, nausea and dizziness resulting
in discomfort[1]. This leads to increased absenteeism, reduced performance and lower productivity. Poor IAQ in
buildings is primarily related to new building technology, building materials and energy management strategies.
Acceptable IAQ is defined by American Society of Heating, Refrigerating and Air-Conditioning Engineers
(ASHRAE) standard 62 as “air in which there are no known contaminants at harmful concentrations as
determined by cognizant authorities and with which a substantial majority (80 percent or more) of the people
exposed do not express dissatisfaction”[2]. This ensures that health considerations must be made along with human comfort. Eventually, buildings were classified as healthy and sick. A building is known as sick when more than 20% of its occupants exhibit any of the varied symptoms for more than two weeks and the symptoms disappear after leaving the building. The air supplied through the air-conditioning system is increasingly becoming the only means to dilute indoor pollutants, as the building envelope is becoming tighter to meet the requirements of energy-efficient buildings. This has resulted in much less air leakage to naturally dilute indoor contaminants and more reliance on the Heating, Ventilating, and Air-Conditioning (HVAC) systems. The indoor building environment in general and IAQ issues in particular have not been a prime concern for research in India.
In response to the global awareness for improved productivity and healthier occupants, it is high time to
investigate the air quality in our indoor spaces.
1.1 Statement of the Problem
HVAC systems are meant to control the comfort conditions in buildings with regard to temperature, humidity,
odour, air distribution and ventilation. These systems serve as the lungs to the building and the occupant’s health
depends on the effectiveness of these systems to a large extent. Cities like Hyderabad in India have, in recent times, seen a remarkable transformation, with many multi-national and software companies opening offices and facilities in the city. This has resulted in an increase in the number of air-tight, modern, sophisticated buildings using innovative building materials and state-of-the-art environmental-control mechanical equipment. These buildings employ HVAC systems almost throughout the year. All the air that building occupants breathe has to pass through the HVAC system; hence it has the potential to either improve or deteriorate the air quality.
In recent years, much importance has been given to energy conservation measures which have resulted in
reduction of the outdoor air intake into the buildings and more re-circulated air is being used. With the reduction
in the ventilation rate, the air quality is naturally compromised.
To enhance the comfort and well-being of the building occupants, indoor environments have been controlled with extensive and often complicated HVAC systems. The primary function of these systems in a building is to regulate the dry-bulb temperature (22-26 °C), relative humidity (30%-60%) and air quality by adding or removing heat energy. Hence, these systems are responsible for providing thermal comfort and contaminant-free clean air to the building occupants. These systems employ filtration, water, air currents and mechanical and electrical devices which may accumulate organic dusts or microorganisms that become a source of bioaerosols. Fig. 1 indicates the ASHRAE summer and winter comfort zones on the psychrometric chart.
Fig. 1: Psychrometric Chart and ASHRAE Summer and Winter Comfort Zones
It has been indicated that HVAC-related inadequacies are the primary cause of most IAQ problems. These problems could arise because of deficiencies in HVAC system design, maintenance, operation, controls, air balancing, and occupancy-related issues[3]. Various studies have established that the HVAC system is responsible for 50-60 percent of building-generated IAQ problems, and that it is capable of resolving up to 80 percent of these problems.
2.1 Types of HVAC Systems
A variety of HVAC systems are found in commercial and office buildings that differ from each other according
to the building size, occupant activities, building age, geographic location and climatic conditions. ASHRAE
has categorized the air handling unit systems as all-air system, all-water system, air-and-water system or
packaged unitary system.
There are various types of HVAC systems with different mechanical design and operational strategies. Some of
the most common types include the single zone system, variable air volume (VAV) systems, air handling units
(AHU), fan coil units (FCU) (as shown in Fig.2) and individual packaged units.
Fig. 2: Air Handling Unit (AHU) and Fan Coil Unit (FCU) with Exhaust and Outside Air Supply
2.2 HVAC Systems Design
A well-designed HVAC system is an essential component of healthy buildings. Poor design is frequently cited
as the primary cause of IAQ problems. The Honeywell IAQ diagnostic team found most of the design
difficulties in (a) ventilation and distribution (b) inadequate filtration and (c) maintenance accessibility [4].
The emphasis on energy conservation measures has resulted in the neglect of IAQ issues by HVAC designers. A balance has to be struck between energy efficiency and air quality, and HVAC design professionals must stay involved beyond the design stage. Many research works in the West have identified the source of IAQ problems to be malfunctioning, poorly maintained, or inadequately designed HVAC systems.
2.3 HVAC Systems Commissioning
The ASHRAE guidelines define commissioning as “the process of achieving, verifying, and documenting a concept through design, construction, and a minimum of one year of operation”[5]. They establish procedures for the HVAC commissioning process for each phase of the project: program phase, design phase, construction phase, acceptance phase, and post-acceptance phase.
IAQ concerns should be addressed at each phase of the process to avoid sick building syndrome problems. It is
estimated that the commissioning process could eliminate as much as half of all IAQ related complaints. This
process requires that the components and systems are inspected and tested under actual installed conditions.
2.4 HVAC Systems Operation and Maintenance
The HVAC systems have been reported to be the cause of over 50 percent of all IAQ problems and complaints; therefore their maintenance is essential to the operation of healthy buildings[6]. The lack of trained maintenance personnel or an unsound operations and maintenance policy can be detrimental to the HVAC system’s performance and can increase the risk of creating sources of contamination within the HVAC system. It has been recommended to train the personnel of O&M departments, and to periodically test, adjust and rebalance HVAC systems.
Poorly maintained ducts can be a major problem for IAQ. Moisture in the ductwork encourages microbial growth, which results in building-related illness[17]. The diffusers, metal registers, and perforated grillwork also need regular maintenance. Volume dampers have to work properly for the adequate distribution of air[4].
2.5 Ventilation
Ventilation is the process of supplying and removing air to and from the conditioned space by natural or mechanical means. Ventilation standards have evolved over the years and are prescribed as minimum ventilation rates in breathing zones for different occupancy-category areas in ASHRAE Standard 62.1-2010, “Ventilation for Acceptable IAQ”. This standard suggests the following procedures for ventilation design:
(a) IAQ Procedure: This is a performance-based design procedure in which acceptable air quality is
achieved within the space by controlling known and specifiable contaminants
(b) Ventilation Rate Procedure: This is a prescriptive design procedure in which acceptable air quality
is achieved by providing ventilation air to the space based on space type/application, occupancy
level, and floor area
(c) Natural Ventilation Procedure: This is also a prescriptive design procedure in which outdoor air is
provided through the openings to the outdoors, in conjunction with the mechanical ventilation
In many situations, occupant-generated CO2 can serve as a suitable surrogate measure for IAQ. The CO2 content is a good predictor of the amount of outdoor air required, as the ASHRAE guidelines aim to hold CO2 levels below 1000 ppm[8]. Generally, outdoor-air CO2 levels are below 350 ppm.
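A steady-state CO2 mass balance gives the outdoor airflow per person as Q = G / (C_in − C_out), where G is the per-person CO2 generation rate. The sketch below assumes an office generation rate of about 0.005 L/s, a typical sedentary value rather than a figure from this paper:

```python
def outdoor_airflow_per_person(c_in_ppm, c_out_ppm, gen_lps=0.005):
    """Steady-state outdoor air supply per person in L/s.
    gen_lps: assumed CO2 generation per sedentary adult (~0.005 L/s)."""
    delta = (c_in_ppm - c_out_ppm) * 1e-6   # ppm -> volume fraction
    return gen_lps / delta

# At the 1000 ppm guideline with 350 ppm outdoors:
print(round(outdoor_airflow_per_person(1000, 350), 2))   # 7.69 L/s per person
```

Holding indoor CO2 at the guideline level thus implies roughly 7-8 L/s of outdoor air per occupant under these assumptions.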
2.6 Filtration
The outdoor air that is required to replenish oxygen and dilute pollutants needs to be filtered to free it from outdoor pollutants and particulate matter. The re-circulated air also needs to be filtered to clean it of indoor-generated contaminants. Providing an efficient air-cleaning system is often the crucial step in assuring that the HVAC system will provide a healthy and clean indoor environment.
Fig. 3: Efficiency of a Variety of HVAC Air Filters
Air contaminants can be eliminated by absorption, physical adsorption, chemisorption, catalysis or combustion, depending on the size and shape of the suspended particles in outdoor air. The influencing factor for filter design and selection is the degree of air cleanliness desired.
Air cleaning can result in valuable and cost effective tactics to achieve and maintain an acceptable environment.
There are a variety of filters available with different efficiencies, airflow resistance and dust holding capacities
as shown in Fig.3. Deciding on the right filter efficiency is crucial for achieving acceptable indoor particulate
matter concentration and low energy use[18].
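The efficiency trade-off can be illustrated with a back-of-the-envelope calculation: for filters placed in series, each stage passes a fraction (1 − η) of the incoming particles, so the overall penetration is the product of the stage penetrations. The efficiency values below are illustrative single-pass values, not ratings read from Fig. 3:

```python
# Filters in series: each stage passes a fraction (1 - eta) of the
# incoming particles, so overall penetration is the product of the
# stage penetrations, and overall efficiency is 1 minus that product.

def overall_efficiency(efficiencies):
    """Combined collection efficiency of filters placed in series."""
    penetration = 1.0
    for eta in efficiencies:
        penetration *= (1.0 - eta)
    return 1.0 - penetration

# A 30% prefilter protecting an 85% final filter:
print(round(overall_efficiency([0.30, 0.85]), 3))  # 0.895
```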
IAQ is of growing concern in modern airtight built environments. The World Health Organization (WHO) has estimated that 30% of newly built or renovated buildings have IAQ problems or Sick Building Syndrome (SBS). Many causes have been attributed to this problem, including energy conservation measures such as reduced ventilation, the use of synthetic materials in construction, and increasing levels of outdoor air pollution [9].
ASHRAE standard 62.1 also specifies basic equipment requirements that include sloped condensate drain pans
on cooling coils, cleanable surfaces, and accessibility to all areas of the air conveyance systems for inspection
and maintenance. It also includes HVAC system installation, commissioning and maintenance issues.
Many IAQ issues in buildings result from one of three sources: (a) outside fresh-air dampers have either failed or been closed to save energy, (b) the indoor generation rate of air pollutants exceeds what the ventilation rate can dilute, or (c) the outside air itself is polluted or contaminated [10].
3.1 Health Issues
The WHO defines health as a state of complete physical, mental, and social well-being, rather than merely the absence of disease or infirmity. Hence, an adverse health effect is one that compromises health. The relationship between health and IAQ is an area of concern to many researchers. Selected buildings are being investigated for health issues through building occupants' questionnaire surveys and parametric measurements of temperature, relative humidity and carbon dioxide.
ASHRAE Standard 62 has expressed concern for health in all its versions. ASHRAE Standard 62-1973 recommended ventilation levels to suffice “for the preservation of the occupants' health, safety, and well-being”. Its revision, ASHRAE Standard 62-1981, stated “to specify IAQ and minimum ventilation rates which will be acceptable to human occupants and will not impair health”. ASHRAE Standard 62-1989 stated “to specify minimum ventilation rates and IAQ that will be acceptable to human occupants and are intended to avoid adverse health effects”. ASHRAE Standard 62-2010 states “to specify minimum ventilation rates and other measures intended to provide IAQ that is acceptable to human occupants and that minimizes adverse health effects”.
3.2 Sick Building Syndrome & Building Related Illness
Many factors contribute to the contamination of spaces, including human occupancy, building materials, furnishings, space function, and impure outdoor air. When these contaminants rise beyond accepted specified levels, the building is said to be a sick building. The American Thoracic Society recognizes the SBS symptoms as eye irritation, headache, throat irritation, recurrent fatigue, chest burning, cough, wheezing, concentration or short-term memory problems, and nasal congestion [11]. The Commission of European Communities and the WHO add skin irritation to this list.
SBS has been recognized as a human health problem causing billions in lost annual productivity. Its symptoms appear during working hours and diminish when the occupants leave the building for weekends or holidays [12]. Many sources of SBS have been indicated, including inadequate ventilation or thermal control, deficient building design or maintenance, macromolecular organic dust, molecules of biological origin, airborne endotoxins, and other physical, chemical, biological or psychosocial factors [11].
3.3 Contaminants
The major contaminants in building environment include carbon dioxide, carbon monoxide, volatile organic
compounds, environmental tobacco smoke, radioactive materials, microorganisms, viruses, allergens, and
suspended particulate matter. These pollutants vary considerably in terms of their classes, levels and sources.
Measurement of environmental parameters has been suggested and adopted for IAQ assessment by researchers because of their adverse effects on human health [16]. The health effects of exposure to different types of contaminants are shown in Fig. 4.
[Fig. 4 groups contaminants into three danger levels — Level 1: dust and pollen, mold and fungi, tobacco smoke, wood smoke, vehicle exhaust, dust mite feces, pet allergens, and insect debris; Level 2: bacterial infections, viral infections, and cold viruses; Level 3: carbon monoxide, methylene chloride, nitrogen dioxide, toluene and benzene, tobacco smoke, and toxic mold — with health effects ranging from nose and throat irritation, runny nose, cough and wheezing, and asthma flares through upper respiratory, throat and ear infections, to memory lapse, mild depression, lung dysfunction, and blurred vision.]
Fig. 4: Health Effects of Different Contaminants (Courtesy: Center for Disease Control)
3.3.1 Carbon Dioxide
Carbon dioxide is an exhaled byproduct of human metabolism, and for this reason CO2 levels are normally higher in occupied spaces than in outdoor air. The Environmental Protection Agency (EPA) recommends a maximum level of 1000 ppm for continuous CO2 exposure. CO2 measurements are used in a number of investigations related to adequate outdoor-air supply and distribution within spaces, making CO2 a powerful IAQ diagnostic tool [13].
3.3.2 Carbon Monoxide
Carbon monoxide (CO) is a chemical asphyxiant gas. Its affinity for hemoglobin in red blood cells is 200-250 times that of oxygen, which significantly reduces the blood's oxygen-carrying capacity. Tobacco smoking and incomplete combustion of hydrocarbon fuels are the two main sources of CO. Its concentration is high in buildings with internal or nearby parking garages. CO is a toxic gas, and levels near 15 ppm can markedly affect body chemistry.
3.3.3 Formaldehyde
Formaldehyde gas is one of the most common volatile organic compounds (VOCs). Its health effects include mucous membrane irritation, asthma, neuropsychological effects and malignant disease. It is used in the production of cosmetics, shampoos, carpets, pressed boards, insulation, textiles, paper products, and phenolic plastics, which continue to emit formaldehyde for long durations. The acceptable limit is 1 ppm as a time-weighted 8-hour average [8]. Exposure above 50-100 ppm can cause serious injury such as inflammation of the lungs.
3.3.4 Environmental Tobacco Smoke
Environmental Tobacco Smoke (ETS) is released into the air when tobacco products burn or when smokers exhale. ETS contains a mixture of irritating gases and carcinogenic tar particles. It gives off other contaminants such as sulfur dioxide, ammonia, nitrogen oxides, vinyl chloride, hydrogen cyanide, formaldehyde, radionuclides,
benzene, and arsenic. It is a known cause of lung cancer and respiratory symptoms, and has been linked to heart disease.
3.3.5 Bioaerosols
Bioaerosols are biogenic agents that are airborne. Many bacterial and viral diseases spread by direct contact between individuals, or indirectly through droplets in the air produced by talking, sneezing and coughing; they can also be transmitted through the HVAC system. The main biological agents causing building-related sickness are mould, fungi, bacteria, viruses, protozoa, pollens, house dust mites, insect pests, algae, pigeons and rodents. These pollutants may cause symptoms such as a stuffy nose, dry throat, chest tightness, lethargy, loss of concentration, a blocked, runny or itchy nose, dry skin, watering or itchy eyes, or headaches in sensitive people [15].
Legionnaires' disease is an infectious disease that generally occurs as pneumonia. Exposure to contaminated cooling towers, drinking water, cooling systems, and humidifiers are the reasons that have been attributed to its outbreak.
4. Conclusion
It is evident from this study that in most cases, IAQ problems are either directly associated with the HVAC
systems or these systems could be a remedy to the problem. The major problems associated with the HVAC
systems have been identified as inadequate ventilation, inside contamination, microbiological contamination,
polluted outside air supply, and inadequate filtration. The importance of good design, commissioning, operation
and maintenance strategies for the HVAC systems is well accepted for good IAQ in the building environment.
Further, IAQ investigations can be carried out in selected buildings in India, which may include building inspections, occupant questionnaire surveys, parametric measurements, and interviews with operation and maintenance personnel.
References
[1]. E. Sterling, C. Collett, S. Turner, and C. Downing, Commissioning to Avoid Indoor Air Quality Problems, ASHRAE Transactions, 99(1), 1993, 867-870.
[2]. ASHRAE Standard 62.1-2010, “Ventilation for Acceptable Indoor Air Quality”, American Society of Heating, Refrigerating and Air-Conditioning Engineers, Inc., Atlanta, Georgia.
[3]. B.T. Tamblyn, Commissioning: An Operation and Maintenance Perspective, ASHRAE Journal, 34(10), 1992, 22-26.
[4]. S.J. Hansen, Managing Indoor Air Quality, The Fairmont Press, Inc., Lilburn, 2004.
[5]. ASHRAE Guideline 1.1-2007, “The HVAC Commissioning Process”, American Society of Heating, Refrigerating and Air-Conditioning Engineers, Inc., Atlanta, Georgia.
[6]. S.M. Hays, R.V. Gobbell, and N.R. Ganick, Indoor Air Quality: Solutions and Strategies, McGraw-Hill, Inc., New York, 1995.
[7]. H.E. Burroughs, IAQ: An Environmental Factor in the Indoor Habitat, HPAC Journal, 69(2), 1997, 57-60.
[8]. F.C. McQuiston and J.D. Parker, Heating, Ventilating and Air-Conditioning: Analysis and Design, John Wiley & Sons, Inc., New York, 2005.
[9]. C.W. Collett, J.A. Ross, and E.M. Sterling, Strategies for the Investigation of Indoor Air Quality Problems and Findings from their Implementation, ASHRAE Transactions, 99(2), 1993, 1104-1110.
[10]. D. Int-Hout, Total Environmental Quality, ASHRAE Transactions, 99(1), 1993, 960-967.
[11]. P.A. Ohman and L.E. Eberly, Relating Sick Building Syndrome to Environmental Conditions and
Worker Characteristics, Indoor Air, 8(2), 1998, 172-179.
[12]. M. Lahtinen, P. Huuhtanen and K. Reijula, Sick Building Syndrome and Psychosocial Factors – A
Literature Review, Indoor Air, 8(4), 1998, 71-80.
[13]. R.T. Stonier, CO2: Powerful IAQ Diagnostic Tool, HPAC Journal, 67(3), 1995, 88-90.
[14]. J. Namiesnik, T. Gorecki, B.K. Zabiegala and J. Lukasiak, Indoor Air Quality, Pollutants, their Sources and Concentration Levels, Building and Environment, 27(3), 1992, 339-356.
[15]. J. Singh, Biological Contamination in the Built Environment and their Health Implications, Building
Research and Information, 21(4), 1993, 216-224.
[16]. Pui-Shan Hui, Kwok-Wai Mui and Ling-Tim Wong, Influence of Indoor Air Quality Objectives on Air-Conditioned Offices in Hong Kong, 144, 2008, 315-322.
[17]. M.S. Zuraimi, R. Magee, G. Nilsson, Development and application of a protocol to evaluate impact of
duct cleaning on IAQ of Office Buildings, Building and Environment, 56, 2012, 86-94.
[18]. M. Zaatari, A. Novoselac and J. Siegel, The Relationship Between Filter Pressure Drop, Indoor Air Quality, and Energy Consumption in Rooftop HVAC Units, Building and Environment, 73, 2014, 151-161.
Nitesh Kumar Singh1, Dr. Manoj Aggrawal2
1, 2 Computer Science, Sunrise University, (India)
Most university networks are built to accommodate the needs of a single organization or group. Analysing a network with real equipment is very costly, so simulation tools are used instead to model the network mathematically and generate accurate results. The aim here is to understand the total data-transfer time from user to server and back under network congestion. The M/M/1 model of queuing theory is used to characterize the network congestion and the buffering time needed to produce the output of a query. Network simulation makes it possible to examine problems with much less work, and at much larger scope, than experiments on real hardware allow. An invaluable tool in this case is the OPNET network simulator, which offers tools for modeling, design, simulation, data mining and analysis. OPNET can simulate a wide variety of different networks linked to each other.
As technology improves, the demands of end users and their applications also increase. A wide variety of new applications are being invented daily, and these applications place different demands on the underlying network protocol suite. High-bandwidth Internet connectivity has become a basic requirement for the success of almost all of these areas. Network simulation with queuing theory involves several related technologies; this paper explains the simulation of networks, the OPNET technology, and how queuing theory is used.
OPNET Modeler is a leading network simulation and analysis package, first introduced in 1986 by an MIT graduate. OPNET allows you to design and study communication networks, devices, protocols, and applications. Modeler is used by prestigious technology organizations to accelerate the R&D process; its customers include the Pentagon, MIT, UIC, and many more.
The virtual network environment represents a network and can contain a slew of components in every salient category. OPNET defines a topology as a “collection of links, nodes and their configuration”. By “nodes” OPNET means networking hardware of all kinds (routers, workstations, switches, hubs, etc.). By “links” it means the underlying connectivity technology (Ethernet, ATM, etc.) and its relevant characteristics (latency, bandwidth). “Configuration” includes things like routing protocols and addressing. [3]
Figure 1 DFD Of Simulation Process
In the OPNET Common Settings tab of the Simulation dialog, we configured the following features:
Figure 2 Common Attributes Data Filling In Thesis_Nitesh-Scenario1
1. Clients send “Short” requests to Control Servers. (node4, 5, 6, 7, 8, 9)
2. Control Servers compose and send “Long Request” to Data Servers. (Node 1)
3. Data Servers reply by sending the “Large Files”. (Node 3)
4. Control Servers broadcast the “Large Files” to all clients. (Node 1)
The links between the different devices were verified for proper data connectivity.
The network simulation was run with the following attributes:
Duration: 2 hours (7200 sec)
Seed: 128 (random-number generation seed)
Values per statistic: 100 (collected for each)
Update interval: 100,000 (simulation events between simulation-performance updates sent back to the GUI)
The network congestion rate changes all the time [2], so the instantaneous congestion rate is used to analyze the traffic in the network monitor. The instantaneous rate p_{C+1}(t) is the congestion rate at moment t; it can be obtained by solving for the probability distribution p_i(t) of the queue length. From the properties of the Markov process, the p_i(t) (i = 0, ..., C+1) satisfy the birth-death equations
dp_0(t)/dt = -λ p_0(t) + μ p_1(t)
dp_i(t)/dt = λ p_{i-1}(t) - (λ + μ) p_i(t) + μ p_{i+1}(t), 1 ≤ i ≤ C
dp_{C+1}(t)/dt = λ p_C(t) - μ p_{C+1}(t) .... (1)
and solving this system yields the network congestion rate p_{C+1}(t).
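Setting the time derivatives in equation (1) to zero gives the steady-state balance, from which p_i = ρ^i p_0 with ρ = λ/μ. A minimal numeric sketch follows; the buffer size C = 10 and the rates are illustrative assumptions, not values from this simulation:

```python
# Steady-state solution of the finite-buffer queue in Eq. (1): setting
# the time derivatives to zero gives p_i = rho**i * p_0 with
# rho = lam/mu, normalized over the states i = 0 .. C+1.  The congestion
# (blocking) rate is p_{C+1}, the probability that the buffer is full.

def mm1_finite_congestion(lam, mu, C):
    """Blocking probability p_{C+1} of an M/M/1 queue with C+1 places."""
    rho = lam / mu
    weights = [rho ** i for i in range(C + 2)]   # states 0 .. C+1
    total = sum(weights)
    return weights[-1] / total

# Illustrative rates and buffer size:
print(round(mm1_finite_congestion(lam=0.9, mu=1.0, C=10), 4))  # ~0.0437
```

Because the state space is finite, the distribution remains well defined even when the arrival rate exceeds the service rate, as in the measured rates below.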
The results observed during the simulation are:
Simulated time: 2 hrs
Total elapsed time for simulation: 3 sec
Events: 442037 in 2 hrs
Average speed: 200197 events/sec
Current speed: 230971 events/sec
In the given results, the 442037 events correspond to roughly 2 seconds of elapsed time, so the arrival rate per second is λ = 442037/2 = 221019, and the average speed gives the service rate per second, μ = 200197.
The M/M/1 model assumes:
Exponential inter-arrival times with mean 1/λ .... (2)
Exponential service times with mean 1/μ .... (3)
From equations (2) and (3), the mean inter-arrival time is 1/λ = 1/221019 sec and the mean service time is 1/μ = 1/200197 sec.
The probability density function of the inter-arrival time, with mean 1/λ, is
f(t) = λ e^(-λt) for t ≥ 0, and f(t) = 0 for t < 0. .... (4)
Similarly, the probability density function of the service time is
g(t) = μ e^(-μt) for t ≥ 0. .... (5)
Using μ and λ, we evaluate the congestion for t = 1 sec, the first congestion interval. From tables, e^(-0.000095) = 0.999905005.
From the above calculation, a congestion-rate increase of 0.5263 percent per hour is predicted for the network. If more switches are added to the network, congestion is reduced and the performance of the network increases.
Conclusion
The OPNET simulator environment was used to experiment with the different network traffic data available in the toolkit. The simulation was performed using the M/M/1 model of queuing theory. The congestion rate observed during the simulation is 0.5263 percent per hour without adding a single node to the network. Increasing the number of switches in a network reduces the congestion and improves network performance. With this process model, network performance is increased and the service remains available for longer periods. The traffic received by the switch matches the traffic sent, but this observation was not made over a long period (in the case of packet transfer). It is also observed that the loss of network simulation data, i.e., the difference between received and sent data, is 14701 - 14602 = 99 bytes for the 1-hour duration of the given data. All this shows that congestion is always possible in a network environment, but it can be reduced by applying the queuing-theory approach in modeling, or by adding extra switches, which reduce the congestion on a particular server, increase its speed, and raise the overall speed of the network.
References
[1] Wang Jian-Ping and Huang Yong, “The Monitoring of the Network Traffic Based on Queuing Theory”, The 7th International Symposium on Operations Research and Its Applications (ISORA’08), Lijiang, China, October 31–November 3, 2008.
[2] Rajive Bagrodia, M. Gerla, Leonard Kleinrock, Joel Short, and T-C. Tsai, “Short language tutorial: A
Hierarchical Simulation Environment for Mobile Wireless Networks”, Proceedings of the 1995 WSC ns
network simulator, Available at:
[3] V. S. Frost and B. Melamed, “Traffic Modeling for Telecommunications Networks”, IEEE Communications Magazine, March 1994, Vol. 32, Issue 3.
[4] Vikram Dham, “Link Establishment in Ad Hoc Networks Using Smart Antennas”, Master of Science thesis in Electrical Engineering, Alexandria, Virginia, January 15, 2003.
[5] Harry Perros,”Computer Simulation Techniques: The definitive introduction!”, 2009
[6] Garrett R. Yaun , David Bauer , Harshad L. Bhutada ,Christopher D. Carothers , Murat Yuksel and
Shivkumar Kalyanaraman,” Large-Scale Network Simulation Techniques: Examples of TCP and OSPF
Models”, vol. 9, no. 3, pp. 224–253, July 1999.
Sukhwinder Singh
Department of Electronics & Communication Engineering
PEC University of Technology, Chandigarh, (India)
In today’s world everything moves fast, and everyone tries to save as much time as possible, so various technical gadgets have come to market to maximise output and minimise time consumption. In today’s transportation we encounter a toll plaza at very short distances, and passing through these tolls without technology is hugely time consuming. We therefore take up the concept of a smart toll plaza, built on the LabVIEW software platform. Three sensors mounted vertically at equal distances determine whether the vehicle approaching the toll is a light-weight or a heavy-weight vehicle. According to the size of the vehicle, a receipt for the fare is printed automatically after asking whether the ride is one way or two way. The project can be extended by adding a web camera at the front to record the registration number of the car, which can then be used for security purposes. It reduces the manpower needed at toll plazas to some extent and provides fast vehicle data to the administration for any further legal action.
Keywords: IR sensors, LabVIEW , Microcontroller, Radiations, Toll plaza.
Today there is heavy traffic on G.T. roads and other main roads. With increasing travel between cities, transportation has improved, which demands better care of the roads, so more companies are taking contracts to build roads, bridges and flyovers. Because of this heavy road traffic, a huge number of vehicles pass through each toll. Our project aims to reduce the human work at the toll by applying various techniques, such as sensing whether a passing vehicle is heavy or light and selecting a one-way or two-way toll. This is done through sensors and other peripherals interfaced with the LabVIEW software, which also asks on screen whether the ride is one way or two way. The remaining components of the project are explained in the following sections.
1.1 Software Used
The input and output are controlled by LabVIEW software over serial communication. We used VISA instructions to interface the microcontroller with LabVIEW, with the baud rate set to 9600 bps. The Rx and Tx pins of the microcontroller port share data with the LabVIEW software in the form of strings, sending 8 bits at a time.
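The PC side of such a serial link can be sketched outside LabVIEW as well. The snippet below uses Python's pyserial package as a stand-in for the VISA calls; the port name ("COM3") and the one-character-per-sensor frame format are assumptions for illustration, not part of the original design:

```python
# PC-side sketch of the 9600 bps serial link described above.  The
# LabVIEW VISA calls are replaced by the pyserial package; the port
# name ("COM3") and the sensor frame format are assumptions.

def decode_sensor_frame(frame: bytes) -> str:
    """Interpret a frame of '0'/'1' characters sent by the microcontroller."""
    bits = frame.decode("ascii").strip()
    return f"sensors={bits}"

def read_once(port="COM3", baud=9600):
    import serial  # pyserial; imported lazily so the decoder is testable offline
    with serial.Serial(port, baud, timeout=1) as link:
        return decode_sensor_frame(link.readline())

print(decode_sensor_frame(b"110\r\n"))  # sensors=110
```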
1.2 Sensor Mechanism
An IR sensor is basically a device consisting of a pair of an IR LED and a photodiode, collectively called a photo-coupler or opto-coupler. The IR LED emits IR radiation, which is detected by the photodiode and dictates the output of the sensor. There are many ways by which the radiation may or may not reach the photodiode; a few are elaborated below:
(a) Direct Incidence
We may hold the IR LED directly in front of the photodiode, such that almost all the radiation emitted, reaches the
photo-diode. This creates an invisible line of IR radiation between the IR LED and the photodiode. Now, if an
opaque object is placed obstructing this line, the radiation will not reach the photodiode and will get either reflected
or absorbed by the obstructing object. This mechanism is used in object counters and burglar alarms.
(b) Indirect Incidence
High school physics taught us that black color absorbs all radiation, and the color white reflects all radiation. We
have been used this very basic knowledge to build our IR sensor mechanism of this project. If we place the IR LED
and the photodiode side by side, close together, the radiation from the IR LED will get emitted straight in the
direction to which the IR LED is pointing towards and so is the photodiode, and hence there will be no incidence of
the radiation on the photodiode. If we place an opaque object in front the two, two cases occur:
(1) Reflecting Surface
If the object is reflective (white or some other light color), most of the radiation is reflected by it and becomes incident on the photodiode.
(2) Non- Reflecting Surface
If the object is non-reflective (black or some other dark color), most of the radiation is absorbed by it and does not become incident on the photodiode. For the sensor this is the same as having no surface (object) at all, since in both cases it receives no radiation. We have used reflective indirect incidence to make proximity sensors: the radiation emitted by the IR LED is reflected back onto the photodiode by an object, and the closer the object, the higher the intensity of the incident radiation on the photodiode. A circuit converts this intensity into an analogous voltage, which is then used to determine the distance.
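The intensity-to-distance step can be sketched numerically. The inverse-square voltage model and the constant k below are assumptions for illustration; a real sensor would use a measured calibration curve:

```python
# Sketch of the intensity-to-distance step: reflected IR intensity is
# read as a voltage V, and distance is inferred from a calibration
# model.  The inverse-square form V = k / d**2 and the constant k are
# assumptions for illustration, not measured calibration data.

def distance_from_voltage(v, k=10.0, v_min=0.1):
    """Estimate distance as d = sqrt(k / V); clamp tiny readings."""
    v = max(v, v_min)
    return (k / v) ** 0.5

print(distance_from_voltage(2.5))  # 2.0
```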
Fig.1: Proximity sensors
Proximity sensors are used in touch-screen phones, among many other devices. In a touch-screen phone, the touch screen needs to be disabled when the phone is held near the ear during a call, so that contact with the cheek has no effect.
Fig. 2: One Set of Sensor Mechanism used in Designed Model
In the proposed model, three IR sensors are connected vertically in series to take their input near the bonnet. If the sensors were mounted at full vehicle height, the topmost sensor could be triggered by a car and the model would wrongly report a heavy vehicle; nor are they mounted near the centre of the car, because luggage on the roof could trigger the third sensor and again produce a false heavy-vehicle reading. To avoid these errors, the sensors are placed at the most suitable position, near the bonnet. Their input is fed to a microcontroller, which communicates serially with the LabVIEW software. If the lower two sensor bits are high, the vehicle is light weight; if all three are high, it is a heavy vehicle. After the type is detected, the display at the toll plaza asks whether the fare is one way or two way; the toll is collected accordingly and the gate is opened.
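The decision logic described above can be sketched in a few lines; the fare values here are illustrative placeholders, not an actual toll schedule:

```python
# Decision logic described above: three stacked IR sensors report 1
# when their beam is interrupted (s1 lowest, s3 highest).  Lower two
# high -> light vehicle; all three high -> heavy vehicle.  The fare
# values are illustrative placeholders, not an actual toll schedule.

FARES = {("light", "one-way"): 50, ("light", "two-way"): 90,
         ("heavy", "one-way"): 100, ("heavy", "two-way"): 180}

def classify(s1, s2, s3):
    """Return the vehicle class for one snapshot of the sensor bits."""
    if s1 and s2 and s3:
        return "heavy"
    if s1 and s2:
        return "light"
    return "none"

def fare(s1, s2, s3, trip="one-way"):
    """Look up the fare for the detected vehicle class, or None."""
    return FARES.get((classify(s1, s2, s3), trip))

print(classify(1, 1, 0), fare(1, 1, 0))          # light 50
print(classify(1, 1, 1), fare(1, 1, 1, "two-way"))  # heavy 180
```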
Conclusion
The designed model is efficient and detects the vehicle size and estimates the toll-tax value quite accurately. It nevertheless has limitations, such as reduced accuracy due to the small number of sensors. This can be improved by combining an automatic visual system with the sensor mechanism. By adding a number-plate recognition system, it can serve as a check post for security purposes. It can also give the administration an approximate count of vehicles entering and leaving a particular city, so that steps can be taken to minimize traffic-jam problems.
Sukhwinder Singh
Department of Electronics & Communication Engineering
PEC University of Technology, Chandigarh, India
Home automation refers to controlling appliances and their parameters, such as on/off state, speed, volume and dimming, in a home. ICT (Information and Communication Technology) has become increasingly visible in our surroundings over the past few years. Home automation is judged on the basis of simplicity, protection and power-consumption effectiveness. Pattern-password-based protection is implemented so that only authorized users can control the appliances. With the availability of mobile-device-integrated products and cloud networking rapidly increasing, many users see how new technology can impact their everyday lives. The proposed system makes use of wireless communication techniques to minimize the intrusion of new devices.
Keywords: Automation and control systems, Home automation technologies, Smart-home, Voice
Recognition, Mobile Phone.
Life is constantly changing; today's lifestyle is completely different from that of 10 years ago, mainly because of the introduction of new technologies into our lives. Technology provides us a more comfortable life. Since ancient times, technology has influenced our lifestyles: achievements such as the discovery of fire, hunting techniques and language have made our lives easier and more comfortable. Nowadays, the most common achievements result from the aggregation of different techniques with technology, yielding cars, phones, airplanes, fridges, microwaves, clothes and so on.
Typically, it is easier to more fully outfit a house during construction due to the accessibility of the walls, outlets,
and storage rooms, and the ability to make design changes specifically to accommodate certain technologies.
Wireless systems are commonly installed when outfitting a pre-existing house, as they obviate the need to make
major structural changes. These communicate via radio or infrared signals with a central controller [1].
There is already a lot of electronic equipment in private homes with features that can help manage and reduce energy consumption and improve comfort. Unfortunately, only a few people find ways to apply it in everyday life. This means there is a large untapped potential for energy savings and for improving energy-using behaviour and habits in the home without compromising user comfort. To exploit this potential, a number of challenges must be solved. First, many different notions of communication are used between home automation equipment, in terms of both standards and proprietary protocols, which significantly limits interoperability between devices. Second, configuring the devices so that energy savings are actually achieved is challenging for average users; as a consequence, installation and reconfiguration costs can be prohibitive. The high cost of the majority of home
automation devices is also a limitation at the moment, but experience from the electronics industry shows that once product volumes go up, prices will decline substantially [2].
Fig. 1: General Home Automation Schema
In this paper we discuss a proposed home automation system that provides a perfect example of integrating smart mobile phones, cloud networking, power-line communication, and wireless communication to equip a home with remote-controlled appliances, garages, lights, air conditioners and similar devices. The project covers a mobile-phone application, a handheld wireless remote, and a PC program that provide the user interface for the home automation. The system differs from other systems by allowing the user to operate it through the in-home wireless remote, without depending on a mobile carrier or Internet connection [3]. The system is cheap, is expandable to control a variety of devices, and can be mass-produced on a larger scale.
Home automation is the domestic form of the automation of corporate buildings. It may include centralized control of lighting brightness and on/off state, heating, ventilation and air conditioning, appliances, alarm systems, and other systems, to provide more convenience, comfort, energy
efficiency, and security. A home automation system integrates the various utility devices in a house with one another through wired or wireless electronic links. The techniques employed in home automation include those used in building automation as well as the control of domestic appliances such as the TV, fans, lights, refrigerator, and washing machine.

Fig. 2: Block Diagram of Proposed System

The system
allows the user to keep track of appliances and lights in their home from anywhere in the world through an
internet connection on mobile devices. It also allows the user to control the automated system within their home
through a remote. The wireless remote has primary control over the system; therefore, while the remote is active, no mobile device is able to control the units in the home. This design prevents the Android device, the PC, and the wireless remote from all trying to control the system at the same time. The system refreshes on the Smartphone and
PC every time the user chooses an option to control or monitor a specific unit. The in-home remote's LCD display is updated every time the system receives a command. The project did run into a memory problem: we found that the flash memory of the ATmega32 (Duemilanove) board does not operate well with the ATmega32 Ethernet Shield connected. Therefore, all incoming data had to be saved in the ATmega32's EEPROM. This posed a problem because the EEPROM tolerates only a limited number of write cycles per location. An external flash memory for the ATmega32 is being considered for future work [3].
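The EEPROM write-cycle limitation described above can be mitigated in software by writing a cell only when its stored value actually changes (this is the idea behind the Arduino EEPROM library's `update()` function). The Python sketch below simulates that strategy; the `WearAwareEEPROM` class is a hypothetical illustration, not the implementation used in the project.

```python
# Sketch of reducing EEPROM wear by skipping redundant writes.
# The EEPROM here is simulated in memory; a per-cell write counter
# stands in for the physical wear a real device would accumulate.

class WearAwareEEPROM:
    def __init__(self, size=1024):
        self.cells = [0] * size    # simulated EEPROM contents
        self.writes = [0] * size   # write cycles consumed per cell

    def update(self, addr, value):
        """Write a cell only if the stored value differs."""
        if self.cells[addr] != value:
            self.cells[addr] = value
            self.writes[addr] += 1

rom = WearAwareEEPROM()
for _ in range(100):
    rom.update(0, 7)   # 99 of these 100 writes are skipped as redundant
print(rom.writes[0])   # → 1
```

Since the incoming control data is often the same value repeated (e.g. a light polled as "ON" every refresh), skipping unchanged writes can extend the EEPROM's useful life considerably.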
The project as described in this paper was completed and working. It allows the user to control appliances and lights from a smart phone or a PC anywhere in the world over an Internet connection, and to control appliances and lighting within the home from a remote control. The wireless remote has primary control over the system; therefore, while the remote is active, neither of the other devices is able to control the units of the home. This design prevents the signals from the remote, smart phone, and PC from intermixing while controlling the units. The project was tested on appliances such as a radio, a fan, a coffee maker, and a television, and on changing the brightness of various light fixtures. The application refreshes on the Smartphone and PC every time an option is chosen from the smart phone, PC, or remote.
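The priority rule described above, in which the in-home wireless remote overrides commands arriving from the smart phone or PC, can be sketched as a small arbitration routine. The `Controller` class and source names below are assumptions for illustration, not the project's actual code.

```python
# Minimal sketch of the arbitration rule: while the in-home remote is
# active, commands from the smartphone or PC are rejected, so the three
# sources never control a unit simultaneously.

REMOTE, PHONE, PC = "remote", "phone", "pc"

class Controller:
    def __init__(self):
        self.remote_active = False
        self.state = {}   # device name -> last applied action

    def handle(self, source, device, action):
        """Apply a command unless a lower-priority source is locked out."""
        if source == REMOTE:
            self.remote_active = True
        elif self.remote_active:
            return False   # phone/PC rejected while the remote holds control
        self.state[device] = action
        return True

ctl = Controller()
ctl.handle(PHONE, "fan", "ON")     # accepted: remote not yet active
ctl.handle(REMOTE, "light", "ON")  # remote takes primary control
ctl.handle(PC, "fan", "OFF")       # rejected: remote is active
```

A real implementation would also need a way to mark the remote inactive again (e.g. a timeout or an explicit release), which the paper does not detail.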
In this paper, we have discussed the design and implementation of a smart-phone-based home automation system. The system can easily be produced on a large scale for mass adoption thanks to its simplicity and innovative design. A major advantage of this project is that the application software is Android based, and Android today has the largest smart phone user base. Inexpensive smart phones (costing as little as INR 3400) can serve as the controller, making the total production cost affordable for mass adoption. Further improvements, such as environment-sensitive control of the appliances, can be added to the system.
[1] Quing-Wen He Chen and Honggang Zhang, "Modular Home Automation Systems for Senior Citizens," 2010.
[2] Sune Wolff, Peter Gorm Larsen, Kenneth Lausdahl, Augusto Ribeiro, and Thomas Skjødeberg Toftegaard, "Facilitating Home Automation Through Wireless Protocol Interoperability," Aarhus School of Engineering, Aarhus University.
[3] Prof. M. B. Salunke, Darshan Sonar, Nilesh Dengle, Sachin Kangude, and Dattadraya Gawade, "Home Automation Using Cloud Computing and Mobile Devices," IOSRJEN, vol. 3, issue 2, pp. 35-37, Feb.