International Journal of Multidisciplinary Research and Development
Online ISSN: 2349-4182 Print ISSN: 2349-5979
www.allsubjectjournal.com
Volume 3; Issue 3; March 2016; Page No. 174-175; (Special Issue)
Performance Issues in Computer Networks Speed
M. Mahalakshmi, Dr. N. Pasupathi, P. Manimegalai
Research Scholar, Department of Electronics, Erode Arts & Science College, Erode
Associate Professor in Electronics, Erode Arts & Science College, Erode
Abstract
The data path between any two computers involves dozens, sometimes thousands, of hardware and software devices. Any one
of these may have a substantial impact on performance.
At any given time there is likely to be one factor which most directly limits the maximum speed of a data flow. Identifying
the limiting factor for each data flow is vital to improving performance.
The limiting factor may change with time or in response to changes in other factors. Changing network conditions may
change the relative performance of MTP and TCP. Test under a variety of conditions and learn all you can about your
network environment.
Keywords: Computer Networks
Introduction
Performance issues are very important in computer
networks. When hundreds of thousands of computers are
connected together, complex interactions with unforeseen
consequences are common. Frequently, this complexity
leads to poor performance.
Five Aspects of Network Performance
Performance problems
Measuring network performance
System design for better performance
Fast TPDU processing
Protocols for future high-performance networks
Some performance problems, such as congestion, are caused
by temporary resource overloads. If more traffic suddenly
arrives at a router than the router can handle, congestion will
build up and performance will suffer. Performance also
degrades when there is a structural resource imbalance. For
example, if a gigabit communication line is attached to a
low-end PC, the slow CPU will not be able to process the
incoming packets fast enough, and some will be lost. These
packets will eventually be retransmitted, adding delay, wasting
bandwidth, and generally reducing performance.
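To make the overload effect concrete, the following Python fragment is a minimal sketch of our own (not from the original text; all rates and buffer sizes are illustrative assumptions). It simulates a router whose arrival rate temporarily exceeds its service rate, showing how the queue fills and packets are dropped:

    # Sketch: a router receiving bursts faster than it can forward.
    # All constants are illustrative assumptions.
    BUFFER_SIZE = 100     # packets the router can queue
    SERVICE_RATE = 10     # packets forwarded per tick

    def simulate(arrivals_per_tick, ticks):
        queue = forwarded = dropped = 0
        for _ in range(ticks):
            # Accept arrivals up to the free buffer space; drop the rest.
            accepted = min(arrivals_per_tick, BUFFER_SIZE - queue)
            dropped += arrivals_per_tick - accepted
            queue += accepted
            # Forward at most SERVICE_RATE packets this tick.
            sent = min(queue, SERVICE_RATE)
            queue -= sent
            forwarded += sent
        return forwarded, dropped

    # A burst of 25 packets/tick against a 10 packets/tick router: once
    # the queue fills, later packets are lost and must be retransmitted.
    print(simulate(arrivals_per_tick=25, ticks=50))

Once the buffer is full, every excess packet in the burst is lost, and the retransmissions add exactly the delay and wasted bandwidth described above.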
Analyzing Network Performance
Tests should be conducted in the same real-world
environments where you plan to deploy the software. If that
is not practical, great care must be taken to ensure that the
test environment matches the production environment in
equipment, network devices, and traffic patterns. The sections
below provide a checklist of factors that may affect performance.
Avoid Emulators
Network emulators use statistical models that are designed
around TCP and TCP-like network traffic. While emulators
can be useful, they must be carefully configured using
statistics appropriate to the traffic being tested. DEI strongly
recommends testing MTP in real-world environments
whenever possible.
Firewalls, NAT gateways, and other such devices are
designed to interfere with network traffic. They may
selectively block or degrade MTP/IP applications. They are
often configured to interfere with some traffic more than
others. For testing purposes, you must disable, or at least be
aware of, any "bandwidth management", "packet shaping",
"traffic shaping", "stateful security, "dynamic security", or
other features which selectively slow down traffic.
Disable TCP Acceleration Devices
You may already be using a hardware or software device to
"accelerate" your network. Such systems may be configurable
to work with MTP, but for test purposes, disable third-party
acceleration devices. You can re-enable them later once you
have established a baseline of performance.
Measure Time Independently
Many FTP and HTTP clients (especially Internet Explorer)
are very "optimistic" when reporting transfer speeds because
they only report during bursts of data arrival and don't count
the times when TCP is stalled. For the most accurate results,
measure the time it takes to transfer data using a stopwatch
or a command line utility such as "time".
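As a minimal sketch of this advice (the URL is a hypothetical placeholder, not from the paper), the whole transfer can be timed in Python so that stalled periods are included in the average:

    # Sketch: time a transfer yourself instead of trusting the client's
    # reported rate. The URL is a hypothetical placeholder.
    import time
    import urllib.request

    url = "http://example.com/testfile.bin"

    start = time.monotonic()
    with urllib.request.urlopen(url) as response:
        data = response.read()
    elapsed = time.monotonic() - start

    # Averaging over the whole transfer counts stalls too, unlike
    # clients that report speed only during bursts of data arrival.
    mbps = len(data) * 8 / elapsed / 1e6
    print(f"{len(data)} bytes in {elapsed:.2f} s = {mbps:.2f} Mbit/s")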
Test Different Times and Different Paths
TCP is very sensitive to the type of network being used and
the patterns of traffic present. Its performance will vary
between different sites and throughout the day and week.
For the most complete picture of how MTP and TCP
compare, try as wide a variety of test sites and times as
possible.
Maximum Path Speed
MTP cannot move data faster than the underlying network
hardware. When hardware is the limiting factor, MTP will
not provide a throughput improvement, but will still provide
reliability and transaction speed improvements. Note that
the maximum path speed is the actual maximum throughput
of the slowest link between two computers. Very often,
hardware such as DSL lines and WiFi, or services such as
Virtual Private Networks, are rated at speeds much higher
than they actually deliver. Determine your actual path speed
before evaluating any performance options.
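One way to approximate the actual path speed (a sketch under assumed host and port values; the paper does not prescribe a particular tool) is to time a raw TCP transfer of known size between the two endpoints:

    # Sketch: estimate actual path speed by timing a TCP transfer of
    # known size. Host and port are assumptions; run receiver() on one
    # endpoint, then sender("receiver-host") on the other.
    import socket
    import time

    PORT = 5001                    # arbitrary test port
    PAYLOAD = b"x" * (10 * 2**20)  # 10 MiB of test data

    def receiver():
        with socket.create_server(("", PORT)) as server:
            conn, _ = server.accept()
            with conn:
                total, start = 0, time.monotonic()
                while chunk := conn.recv(65536):
                    total += len(chunk)
                elapsed = time.monotonic() - start
        print(f"{total * 8 / elapsed / 1e6:.1f} Mbit/s over this path")

    def sender(host):
        with socket.create_connection((host, PORT)) as sock:
            sock.sendall(PAYLOAD)

The measured figure, rather than the rated speed of any single link, is the realistic ceiling against which any transport protocol should be evaluated.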
Configure Firewalls, NAT, and VPN Devices
As described above, firewalls, emulators, routers, NATs,
acceleration appliances, and other devices may selectively
limit the performance of some traffic flows and not others.
TCP Relative Speed
TCP's performance may vary substantially due to many
factors, including the time of day. If TCP is operating close
to the Maximum Path Speed, then MTP may not offer faster
throughput than TCP, because no transport protocol can be
faster than the network itself.
Conclusion
Overloads can also be synchronously triggered. For example,
if a TPDU contains a bad parameter, in many cases the
receiver will thoughtfully send back an error notification. If a
bad TPDU is broadcast to 10,000 machines, each one might
send back an error message; the resulting broadcast storm
could cripple the network.
Even though "speed" or "throughput" is the most commonly
discussed network metric, it does not actually exist as a
well-defined characteristic. In any packet-switched network,
datagrams arrive at fixed points in time. Only the physical and
link layers deal with the signaling of individual bits; at the
network layer and above, the "speed" at any single moment in
time is either zero or infinity. To calculate a meaningful speed
statistic, discrete datagram arrival events must be averaged
over time. For example, if two 1400-byte datagrams arrive 10
milliseconds apart, you could say that data is arriving at a rate
of 2800 bytes per 10 milliseconds, or 2.24 megabits per
second. But if a third 1400-byte datagram arrives 10
milliseconds after that, the same calculation (4200 bytes in 20
milliseconds) yields a speed of only 1.68 megabits per second.
Even though data is arriving at a constant rate, the same
calculation gives radically different results.
The speed of data movement therefore depends heavily on
how, when, and where you measure it. The interaction of
device buffers, signaling rates, and competing traffic flows
means that speed varies not just over time but also with the
measurement point. For example, data may appear to move
very quickly as it leaves the source, yet slow down as it
crosses the network and experiences latency and loss. The
same idea applies at the receiving end: data may arrive
quickly at the receiving network, yet be delayed in I/O
processing such as writing to disk.
Speed calculations are further complicated by the distinction
between the application-level data being transported and all
the overhead: packet headers, link frames, error correction,
lost datagrams, and duplicate datagrams. Thus the bit-rate of a
link is only an upper bound on speed; the actual rate of
transport of usable data, sometimes called "goodput", will be
significantly lower.
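The arithmetic in this example can be reproduced directly. The short sketch below uses the arrival times from the example above; the helper function is our own illustration, not part of the paper:

    # Reproduce the example: the measured "speed" depends on the
    # window over which discrete datagram arrivals are averaged.
    arrivals = [(0.000, 1400), (0.010, 1400), (0.020, 1400)]  # (s, bytes)

    def avg_mbps(events):
        # Average rate over the span from the first to the last arrival.
        span = events[-1][0] - events[0][0]
        payload = sum(size for _, size in events)
        return payload * 8 / span / 1e6

    print(avg_mbps(arrivals[:2]))  # 2800 bytes / 10 ms -> 2.24 Mbit/s
    print(avg_mbps(arrivals))      # 4200 bytes / 20 ms -> 1.68 Mbit/s
    # A goodput figure would further subtract headers, retransmitted
    # datagrams, and duplicates from the payload count.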